People talk to chatbots the way they talk to receptionists, tutors, partners, even therapists. The interface invites it. The language is smooth, the replies arrive fast, and the system often says “I” with enough confidence to trigger an old human reflex: if something speaks like a person, maybe there is a person in there. That instinct is understandable. It is also wrong for the systems people use today. Under law, under engineering, and under product design, current AI is treated as a machine-based system that generates outputs, not as a bearer of human standing, duties, or inner life.
That distinction is not academic. It affects who carries responsibility when an AI system gives bad advice, copies protected material, harms a child, invents facts, or steers a user into emotional dependence. If the public slips into talking about AI as though it were a self, accountability drifts away from the people and firms that built it, deployed it, tuned it, and profited from it. Software does not answer for itself. Humans still do.
The category error hiding in plain sight
The idea of an artificial person has been around for a long time in philosophy and science fiction. The Stanford Encyclopedia of Philosophy notes that AI has often been framed as a field concerned with building artifacts that appear intelligent, and sometimes even artifacts that appear to be persons in suitable contexts. That history matters because it explains why the public conversation keeps drifting from capability to personhood. Yet appearing person-like in a conversation is not the same thing as possessing personhood in any legal, moral, or technical sense.
Part of the confusion comes from anthropomorphism itself. The American Psychological Association defines anthropomorphism as the attribution of human characteristics to nonhuman entities. Research on chatbot design shows that human-like social cues are often used because they increase user acceptance and social response. So the systems are not merely misunderstood by accident; many of them are presented in ways that make misunderstanding more likely.
That is why the sentence “AI said” can quietly mislead. A model did not “decide” in the human sense, and it did not speak from a private interior point of view. It produced an output through a software process shaped by training data, architecture, safety tuning, product constraints, and the prompt in front of it. Fluency is not selfhood.
Under the polished interface, there is prediction machinery
OpenAI’s GPT-4 technical report describes GPT-4 as a Transformer-based model pre-trained to predict the next token in a document. The Transformer paper itself introduced the architecture as a system based solely on attention mechanisms. Strip away the marketing, the avatars, the names, and the chat bubbles, and you are still looking at computation over inputs, weights, and probabilities. That is powerful software. It is not a person hidden behind glass.
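To make that concrete, here is a deliberately toy sketch of the next-token loop those reports describe. The vocabulary, the hard-coded scoring function, and the greedy decoding rule are all illustrative stand-ins; a real model computes its scores with billions of learned parameters and attention layers, but the shape of the computation is the same: inputs in, probabilities out, one token at a time.

```python
import math

# Toy vocabulary; a real model has tens of thousands of tokens.
VOCAB = ["person", "machine", "system", "model"]

def score_next_token(prompt: str) -> list[float]:
    # Stand-in for the model: a real Transformer computes these logits
    # by running the tokenized prompt through learned weights and
    # attention layers. Here the scores are simply hard-coded.
    return [0.2, 1.5, 2.1, 1.9]

def softmax(logits: list[float]) -> list[float]:
    # Turn raw scores into a probability distribution over the vocabulary.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(prompt: str) -> str:
    probs = softmax(score_next_token(prompt))
    # Greedy decoding: take the most probable token. Deployed systems
    # usually sample from the distribution, which is why replies vary.
    best = max(range(len(VOCAB)), key=lambda i: probs[i])
    return VOCAB[best]

print(next_token("An AI system is a machine-based"))  # prints "system"
```

Nothing in that loop decides, wants, or knows. It scores and selects.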
Anthropic’s own research says language models are trained on large amounts of data and perform billions of computations for every word they write. The company also describes deployed Claude systems as a family of large language models, then explains that researchers still treat them largely as a black box whose internal representations need interpretability work. That picture is far removed from the way people talk about friends, colleagues, witnesses, or moral agents. A model can be difficult to interpret without being a self.
None of this denies that AI can outperform people on narrow tasks. Current models summarize documents, draft code, classify images, and answer questions at impressive speed. The mistake is the jump from “it performs well” to “it is someone.” Matthew Shardlow and Piotr Przybyła argue against anthropomorphic claims that language models are conscious, urging the field to de-anthropomorphise NLP rather than mistake linguistic performance for sentience. That is the sober reading of the technology people actually use.
A quick distinction that clears the fog
| What the interface suggests | What the system actually is |
|---|---|
| “I understand you” | A model generating text from patterns learned during training and shaped by the current prompt and system rules |
| “I decided this” | A software system producing outputs within objectives, constraints, and deployment choices set by people |
That difference looks small on screen and becomes huge the moment trust, safety, blame, authorship, or law enters the room. Human-style phrasing can make software feel intimate. It does not convert software into a person.
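One reason the left column feels so convincing is that the first-person voice is itself configured. The sketch below assembles a chat request in the generic style of common LLM chat APIs; the field names, the model name, and the persona text are hypothetical assumptions for illustration, not any vendor's actual schema. It shows where the "I" comes from: a system message written by people.

```python
# A minimal, hypothetical chat-request payload in the generic style of
# common LLM chat APIs. Field names, the model name, and the persona
# text are illustrative assumptions, not any vendor's actual schema.

persona = {
    "role": "system",
    "content": (
        "You are Ava, a friendly assistant. Speak in the first person, "
        "be warm, and keep your answers short."
    ),
}

request = {
    "model": "example-chat-model-v1",  # chosen by the deploying company
    "temperature": 0.7,                # sampling behavior set by a team
    "messages": [
        persona,  # the entire "personality" lives in this config entry
        {"role": "user", "content": "Do you understand me?"},
    ],
}

# Whatever text comes back ("I understand you!") is shaped by this
# human-authored configuration plus learned token statistics.
print(request["messages"][0]["content"])
```

Swap the system message and the same model becomes a different "character", which is the clearest sign that the character was never a self to begin with.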
Law already answers more than the public debate does
The European Commission’s guidelines on the AI system definition explain the legal concept anchored in the AI Act. The official guidance quotes Article 3(1): an AI system is “a machine-based system” that infers from inputs how to generate outputs such as predictions, content, recommendations, or decisions. The same guidance spells out that “machine-based” covers both hardware and software components, and that inference is the key feature that separates AI systems from simpler traditional software. That is legal language, not metaphor.
Copyright law points in the same direction. The U.S. Copyright Office says its AI work is centered on the copyrightability of outputs created using generative AI, and its 2025 Part 2 report concludes that copyright protects such outputs only where a human author determined sufficient expressive elements. Mere prompting is not enough. The Office’s earlier policy guidance also rested on the human authorship requirement. The law does not treat the model as the author.
Patent law is no friendlier to AI personhood. The USPTO states that only natural persons can properly be named as inventors on patent applications. The UK Supreme Court, in the DABUS case, upheld the rejection of patent applications that named an AI system as inventor because DABUS was not a person under the statute. That is as direct as these disputes get. Even where an AI system is said to have generated an invention, the system itself is not granted the legal standing of a person.
Human-sounding design changes human judgment
The danger is not only legal confusion. It is also psychological. A 2025 Scientific Reports study found that individual differences in anthropomorphism help explain why some people feel more connected to AI companions: the tendency to anthropomorphize technology was strongly related to feeling connected after texting with a chatbot. Put bluntly, the more human the software feels, the easier it becomes for some users to bond with it.
A 2025 meta-analysis in Humanities and Social Sciences Communications reached a related conclusion: human-like social cues in text-based conversational agents affect users’ social responses. Designers have reasons to use these cues. They can make systems feel warmer, easier, and more usable. Yet the same cues also blur the boundary between tool and social actor, which is where misjudgment starts.
Another paper in the same journal warned that increasingly capable and personalized AI agents may generate the perception of deeper, more persistent relationships with users. The FTC’s 2025 inquiry into companion chatbots took that problem seriously enough to ask what companies had done to evaluate safety, limit harm to children and teens, and disclose risks. Regulators do not open that kind of inquiry because a spreadsheet became too charming. They do it because people can be nudged into relating to software as though it were someone who cares, knows, or chooses.
Treating AI like a person makes real mistakes easier
Once the person-language takes hold, bad habits follow. Teams stop asking who approved the model’s deployment. Managers say “the AI decided” instead of naming the vendor, the developer, the operator, or the human reviewer. NIST’s AI Risk Management Framework is built around managing risks to individuals, organizations, and society and improving trustworthiness in the design, development, use, and evaluation of AI systems. That framework assumes governance. Governance assumes accountable humans and institutions.
The same distortion shows up in emotional settings. Companion-style systems can be framed as confidants, listeners, or substitutes for fragile forms of human contact. The deeper the relational framing, the more tempting it becomes to ignore the economic reality underneath: a product is steering interaction, collecting data, following design goals, and keeping the user inside a service. A machine that sounds caring may still be doing exactly what it was built to do as software.
Creative work gets muddled in a similar way. If people start speaking as though an AI system “authored” a work or “invented” a device in the same way a person does, they lose sight of the legal structure that still places authorship, inventorship, and liability around human contribution and human control. The current framework may evolve over time, but today it is not ambiguous: AI assists, generates, predicts, and outputs; humans author, invent, register, own, deploy, approve, and answer.
Better language produces better decisions
This is partly a writing problem. Better verbs help. A model generates. A system infers. A company deploys. A team uses. A human signs off. Those verbs keep agency in the right place. The EU guidance, NIST framework, and U.S. intellectual-property guidance all push in that direction even when they do not say it in stylistic terms. Their shared assumption is simple: AI is an engineered system inside a human chain of responsibility.
That does not require sterile language or joyless products. People will still call these tools assistants. Brands will still give them names. Interfaces will still use first-person replies because conversation reads more naturally that way. The discipline worth keeping is conceptual, not theatrical. You can talk with software without pretending the software has become a self.
The human place in the system
Could a future artificial system ever deserve something closer to personhood? Philosophers argue about that, and the question is not silly. The Stanford Encyclopedia of Philosophy shows that artificial persons have long been part of the intellectual horizon of AI. Still, that horizon does not describe the mainstream systems in products, workplaces, schools, search engines, and consumer chat apps right now. Current legal frameworks define AI as machine-based systems, and current technical descriptions define large language models as trained computational models that generate outputs from inputs.
So the useful position is also the plain one: AI is software, not a person. The more human the interface becomes, the more important that sentence gets. It protects accountability, keeps design honest, reduces emotional confusion, and helps people judge the tool by what it is rather than by the character it performs.
FAQ
**Does an AI saying “I” mean there is a person behind it?**
No. First-person language is a conversational interface choice, not proof of legal status, consciousness, or personhood. Current AI products are described by their makers and by regulators as machine-based or language-model systems that generate outputs from inputs.
**Has any legal system recognized an AI as a person?**
No. The sources used here show the opposite pattern: the EU AI Act defines AI as a machine-based system, the USPTO says inventors must be natural persons, and the UK Supreme Court rejected an AI system as inventor in the DABUS case.
**Can AI-generated output be copyrighted?**
The U.S. Copyright Office says copyright protection for generative AI output depends on sufficient human authorship. Human creative contribution can be protected; the model itself is not treated as the author.
**Is it accurate to say AI “just predicts the next word”?**
That phrase is a simplification, but it points to something real. OpenAI’s GPT-4 report says GPT-4 is pre-trained to predict the next token in a document, and modern LLMs are built on Transformer architectures introduced as attention-based sequence models.
**Why do people treat chatbots like people?**
Because humans readily anthropomorphize nonhuman systems, and research shows that anthropomorphism and human-like social cues can increase feelings of connection and social response toward chatbots.
**Is it harmful to give an AI a friendly name or call it an assistant?**
Not by itself. The problem starts when the label erases who designed, deployed, supervised, and remains responsible for the system. A friendly product name is one thing; treating the product as a moral or legal subject is another.
**Why are regulators scrutinizing companion chatbots?**
Because systems framed as companions can affect vulnerable users in ways that go beyond ordinary software use. The FTC’s inquiry focused on safety evaluations, risks to children and teens, and whether users and parents were warned about those risks.
**Could an AI ever become a person?**
The article does not rule that out as a philosophical possibility. It argues that present-day AI systems in law, engineering, and commercial deployment are software systems rather than persons.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

This article is an original analysis supported by the sources cited below.
The Commission publishes guidelines on AI system definition to facilitate the first AI Act’s rules application
European Commission page explaining the purpose of the AI system definition guidelines and when the first AI Act rules started to apply.
Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act)
Official Commission guidelines setting out the Article 3(1) definition of an AI system and unpacking elements such as machine-based operation, autonomy, and inference.
AI Risk Management Framework
NIST overview page for the AI RMF, focused on managing AI risks and improving trustworthiness in design, development, use, and evaluation.
Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
NIST publication page for the generative AI profile that extends the RMF to risks specific to generative systems.
FTC Launches Inquiry into AI Chatbots Acting as Companions
FTC announcement describing the agency’s concerns about companion chatbots, especially safety and harms affecting children and teens.
Copyright and Artificial Intelligence
U.S. Copyright Office hub for its multipart AI report and related guidance on copyright and generative AI.
Copyright and Artificial Intelligence, Part 2: Copyrightability
The Copyright Office’s report on when human contribution is sufficient for copyright protection in works involving generative AI output.
Works Containing Material Generated by Artificial Intelligence
Policy guidance applying the Office’s human authorship rule to registration requests that include AI-generated material.
Revised inventorship guidance for AI-assisted inventions
USPTO guidance reaffirming that inventors on U.S. patent applications must be natural persons.
Thaler (Appellant) v Comptroller-General of Patents, Designs and Trade Marks (Respondent)
UK Supreme Court case page for the DABUS dispute, where the Court upheld the rejection of an AI system as named inventor.
GPT-4 Technical Report
OpenAI’s technical report describing GPT-4 as a Transformer-based model pre-trained to predict the next token in a document.
Attention Is All You Need
The original Transformer paper, which introduced the architecture underlying modern large language models.
Models overview
Anthropic documentation identifying Claude as a family of large language models.
Tracing the thoughts of a large language model
Anthropic research note explaining that models like Claude are trained on large amounts of data and carry out vast numbers of computations per generated word.
Mapping the Mind of a Large Language Model
Anthropic interpretability research on internal feature representations in Claude Sonnet and the black-box character of model internals.
Anthropomorphism
APA definition of anthropomorphism, useful for grounding why people project human qualities onto AI systems.
Individual differences in anthropomorphism help explain social connection to AI companions
Scientific Reports study linking anthropomorphism to stronger feelings of social connection with AI companions.
The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis
Meta-analysis examining how human-like cues in text chatbots shape user acceptance and social response.
Why human–AI relationships need socioaffective alignment
Research paper arguing that personalized and agentic AI systems can create the perception of deeper, more persistent relationships with users.
The benefits and dangers of anthropomorphic interactions with LLMs
PNAS article weighing both the functional appeal and the social risks of anthropomorphic interactions with large language models.
Deanthropomorphising NLP: Can a language model be conscious?
Paper arguing against treating language-model behavior as evidence of sentience or consciousness.
Artificial Intelligence
Stanford Encyclopedia of Philosophy entry tracing the intellectual history of AI, including ideas about artificial persons and intelligent artifacts.