Note -- this is a working draft that is changing as you read this.
"First, LLMs do have robust internal representations. Second, there is an open question to answer about whether LLMs have robust action dispositions. Third, existing skeptical challenges to LLM representation do not survive philosophical scrutiny." -- Simon Goldstein & Benjamin Anders Levinstein, Does ChatGPT Have a Mind? - PhilPapers
The 4 Degrees of Anthropomorphism of Generative AI
"We explore the lives of people who are in love with their AI chatbots. Replika is a chatbot designed to adapt to the emotional needs of its users. It is a good enough surrogate for human interaction that many people have decided that it can fulfill their romantic needs." -- S6, Episode 3: Love in Time of Replika (April 25th, 2023) – Hi-Phi Nation
"... our concerns go beyond bias; we want to caution against the anthropomorphizing of AI. AI is not human, and we should not be using human-related terms to refer to these systems and tools because that can lead to misconceptions that cause harm not just to our students but to our communities as well." -- Anthropomorphism of AI in Learning Environments: Risks of Humanizing the Machine | EdSurge News
"Chatbots, and the large language models (LLMs) on which they are built, are showing us the dangers of dishonest anthropomorphism. Built with humanlike features that present themselves as having cognitive and emotional abilities that they do not actually possess, their design can dupe us into overtrusting them, overestimating their capabilities, and wrongly treating them with a degree of autonomy that can cause serious moral confusion. Chatbots programmed to express feelings or that provide responses as if typing in real-time raise significant questions about ethical anthropomorphism on the part of generative AI developers." -- The Danger of Dishonest Anthropomorphism in Chatbot Design | Psychology Today Canada
"By elevating machines to human capabilities, we diminish the specialness of people. I’m eager to preserve the distinction and clarify responsibility. So I do not think machines should use first-person pronouns, but should describe who is responsible for the system or simply respond in a machine-like way." -- On AI Anthropomorphism. by Ben Shneiderman (University of… | by Chenhao Tan | Human-Centered AI | Medium
"... worries are – at least as far as large language models are concerned – groundless. ChatGPT and similar technologies are sophisticated sentence completion applications – nothing more, nothing less. Their uncanny responses are a function of how predictable humans are if one has enough data about the ways in which we communicate." -- AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it
"When talking to an AI chatbot, users may feel comfortable sharing more information than they ordinarily would if the chatbot sounds human-like and uses first- or second-person language. It may feel like the information provided to the chatbot is being shared with a friendly person rather than an enterprise that may use those data for a variety of purposes. For example, people may talk to a chatbot for a while and eventually reveal sensitive information (e.g., health issues they are struggling with). Most chatbots will not warn users when they are providing sensitive information." -- The Privacy Pros and Cons of Anthropomorphized AI
"The tendency to humanize AI and the degree to which people trust it highlights serious ethical and legal concerns. AI-powered ‘humanizer’ tools claim to transform AI-generated content into “natural” and “human-like” narratives. Others have created “digital humans” for use in marketing and advertising. Chances are, the next ad you see featuring a person isn’t a person at all but a form of synthetic media. Actually, let’s stick to calling it exactly what it is — a deepfake."
"It’s not just an ethical problem; it’s also a security problem since anything designed to persuade can make us more susceptible to manipulation. In the context of cybersecurity, this presents a whole new level of threat from social engineering scammers."
"People form relationships with other people, not with machines. But when it becomes almost impossible to tell the difference, we’re more likely to trust AI when making sensitive decisions. We become more vulnerable; more willing to share our personal thoughts and, in the case of business, our trade secrets and intellectual property."
"This presents serious ramifications for information security and privacy. Most large language models (LLMs) keep a record of every interaction, potentially using it for training future models."
"Do we really want our virtual assistants to reveal our private information to future users? Do business leaders want their intellectual property to resurface in later responses? Do we want our secrets to become part of a massive corpus of text, audio and visual content to train the next iteration of AI?"
"If we start thinking of machines as substitutes for real human interaction, then all these things are much likelier to happen." -- The dangers of anthropomorphizing AI: An infosec perspective
"With more human-like interactions, people develop a sense of trust and adapt to using AI technology quicker given its more innate to our human nature. Furthermore with the advent of Generative AI, humans are embedding natural language interactions with machines (as an evolution to Conversational AI) in the form of prompt engineering." -- (10) The Pros and Cons of Anthropomorphizing Artificial Intelligence | LinkedIn
"Anthropomorphizing isn’t always detrimental. For instance, it may improve your well-being by creating a sense of comfort and connectedness." -- https://psychcentral.com/health/why-do-we-anthropomorphize#recap
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.” -- A.I. expert Professor Emily Bender in The Washington Post, as quoted in Chatbots Are Not People: Designed-In Dangers of Human-Like A.I. Systems - Public Citizen
"He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says" -- 'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says
"anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust." -- Anthropomorphism in AI: hype and fallacy | AI and Ethics