Common terms used to describe human thought processes can inadvertently misrepresent artificial intelligence (AI), making it seem more human-like than it truly is. Jo Mackiewicz, an English professor at Iowa State University, highlights the tendency to use mental verbs when discussing machines. "We naturally use these terms in our daily conversations, which helps us relate to technology," she explains. However, this practice can blur the distinctions between human cognition and machine operations.
Mackiewicz, along with Jeanine Aune, a teaching professor and director of the advanced communication program at Iowa State, conducted a study examining the anthropomorphism of AI language. Their research, titled "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," was published in Technical Communication Quarterly. They collaborated with Matthew J. Baker from Brigham Young University and Jordan Smith from the University of Northern Colorado, both of whom have ties to Iowa State.
The Risks of Misleading Language
The researchers found that employing mental verbs like "think," "know," and "understand" to describe AI can create misconceptions about its capabilities. Such terms imply that AI possesses thoughts or awareness, which is misleading. In reality, AI operates by recognizing patterns in data rather than forming independent ideas.
Furthermore, phrases like "AI decided" can exaggerate the technology's autonomy, potentially leading to unrealistic expectations regarding its reliability and intelligence. Aune emphasizes that this anthropomorphic language can overshadow the human developers behind these systems, who are responsible for their design and implementation.
Analyzing AI Language in News
To assess the frequency of anthropomorphic language in journalism, the research team analyzed the News on the Web (NOW) corpus, a collection of more than 20 billion words from English-language news articles across 20 countries. Contrary to their expectations, they found that mental verbs appear with AI relatively infrequently.
While such language is prevalent in everyday conversation, it appears far less often in news articles. For instance, "needs" was the verb most often linked to AI, appearing 661 times, while "knows" was used only 32 times in connection with ChatGPT. The researchers suggest that editorial guidelines, such as Associated Press guidance discouraging writers from attributing human emotions to AI, may shape journalistic practice.
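To make the idea of such frequency counts concrete, here is a minimal Python sketch of how one might tally mental verbs that directly follow a subject like "AI" in a set of sentences. It is an illustration under simple assumptions, not the study's actual method: the NOW corpus is queried through its own interface, and the verb list and matching rule here are hypothetical.

```python
import re
from collections import Counter

# Hypothetical list of mental verbs to track (the study's own list may differ).
MENTAL_VERBS = {"think", "thinks", "know", "knows", "understand",
                "understands", "need", "needs", "decide", "decides"}

def count_verb_collocations(sentences, subject="AI"):
    """Count how often each mental verb directly follows the given subject
    (e.g. "AI needs", "AI knows") in a list of sentences. A real corpus
    study would use part-of-speech tagging and wider collocation windows;
    this toy version only matches the literal pattern "<subject> <verb>"."""
    pattern = re.compile(rf"\b{re.escape(subject)}\s+(\w+)", re.IGNORECASE)
    counts = Counter()
    for sentence in sentences:
        for word in pattern.findall(sentence):
            if word.lower() in MENTAL_VERBS:
                counts[word.lower()] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "AI needs large amounts of data to perform well.",
        "Critics argue that AI knows nothing in the human sense.",
        "ChatGPT needs clear prompts to be useful.",
    ]
    print(count_verb_collocations(sample, subject="AI"))
    # Counter({'needs': 1, 'knows': 1})
```

The point of the sketch is only the shape of the computation; the published study's counts come from querying the full NOW corpus with its own tools.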
Understanding Context
Even when mental verbs are employed, they do not always suggest anthropomorphism. For example, "AI needs large amounts of data" describes an operational requirement rather than implying human-like qualities. Such phrasing often shifts the focus back to human responsibility in the technology's use.
The Spectrum of Anthropomorphism
The study reveals that the use of mental verbs varies in how strongly it implies human-like traits. Some phrases, like "AI needs to understand the real world," hint at expectations tied to human reasoning and ethics. This indicates that anthropomorphism exists on a continuum rather than as a binary.
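As a toy illustration of that continuum, the following Python sketch applies a hypothetical rule: "needs" followed by an infinitive mental verb ("needs to understand") is flagged as leaning anthropomorphic, while "needs" followed by a resource ("needs large amounts of data") is flagged as operational. The rule and labels are assumptions for illustration, not the study's coding scheme.

```python
import re

# Hypothetical heuristic: an infinitive mental-verb complement reads as
# more anthropomorphic than a resource-noun complement.
MENTAL_COMPLEMENTS = {"understand", "know", "think", "learn", "decide"}

def rate_needs_usage(sentence):
    """Return a rough label for a sentence containing 'AI needs ...'.
    This is an illustrative heuristic, not the researchers' method."""
    match = re.search(r"\bAI needs to (\w+)", sentence, re.IGNORECASE)
    if match and match.group(1).lower() in MENTAL_COMPLEMENTS:
        return "leans anthropomorphic"  # e.g. "AI needs to understand"
    if re.search(r"\bAI needs\b", sentence, re.IGNORECASE):
        return "leans operational"      # e.g. "AI needs large amounts of data"
    return "no match"

print(rate_needs_usage("AI needs to understand the real world."))  # leans anthropomorphic
print(rate_needs_usage("AI needs large amounts of data."))         # leans operational
```

In practice, such judgments depend on context that simple pattern matching cannot capture, which is precisely why the researchers describe anthropomorphism as a continuum.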
The Importance of Language Choices
Ultimately, the research underscores the nuanced relationship between language and perception. As Mackiewicz notes, the language chosen by writers significantly influences how AI systems are understood. The study encourages professionals to reflect on their descriptions of AI, emphasizing that as AI evolves, so too must our language surrounding it.
Looking forward, the researchers propose further studies to investigate how different word choices affect public understanding and whether even infrequent anthropomorphic language can shape perceptions of AI.