As artificial intelligence (AI) becomes more pervasive in our society, understanding how we receive and experience AI has become increasingly important. According to new research from Professor Gil Appel of the GW School of Business, those with lower AI literacy are more receptive to AI, but not for the reasons you might think.
Prof. Appel, in collaboration with Stephanie Tully from the University of Southern California and Chiara Longoni from Bocconi University, set out to understand the relationship between AI literacy and AI receptivity.
The existing literature on consumer receptivity toward AI has focused largely on situational factors. Prof. Appel and his coauthors instead investigated individual-level factors that predict AI receptivity and explain systematic differences across consumers.
Prof. Appel has long been interested in how consumers respond to new technological innovations, and AI's rapid adoption piqued his interest.
“Whenever a new innovation is introduced, I want to understand its impact on the world, particularly on consumers,” Appel explained. “When AI entered the scene, I was immediately hooked by how accessible it was. This shift in accessibility made me curious about how it affects consumer behavior and adoption decisions.”
Prof. Appel and his coauthors first aimed to demonstrate the existence of the lower literacy-higher receptivity link. To do so, they started by analyzing secondary data to examine the link on a global scale. Next, they looked at AI usage data from six studies to understand AI literacy and receptivity at the individual level.
Appel and his colleagues took a two-pronged approach to measuring AI literacy, using two scales: a 25-item measure they developed themselves and a 17-item measure generated with the help of AI.
To develop the 25-item measure, the authors reviewed academic literature, public resources, and practitioner-focused materials on AI, algorithms, and digital literacy to create a set of 25 questions. The number of correct answers determined each participant’s AI literacy score.
The authors developed the 17-item measure based on a framework pioneered by Duri Long and Brian Magerko that defines AI literacy along 17 competencies. They asked Claude.AI and ChatGPT-4 to generate multiple-choice questions for each competency and selected the most representative ones. As with the first scale, the number of correct answers determined each participant's AI literacy score.
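To make the scoring logic concrete, here is a minimal illustrative sketch of how a knowledge-based quiz score of this kind can be computed as the number of correct answers. The item labels, answer key, and function name below are hypothetical placeholders, not the authors' actual instruments.

```python
def ai_literacy_score(responses: dict[str, str], answer_key: dict[str, str]) -> int:
    """Return the number of items answered correctly (hypothetical scoring sketch)."""
    return sum(1 for item, correct in answer_key.items()
               if responses.get(item) == correct)

# Hypothetical example with three multiple-choice items:
answer_key = {"q1": "B", "q2": "D", "q3": "A"}
responses = {"q1": "B", "q2": "C", "q3": "A"}
print(ai_literacy_score(responses, answer_key))  # prints 2
```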
The authors assessed AI receptivity more broadly, examining consumers’ willingness to have AI complete various tasks. Their measures ranged from general attitudes toward AI’s role in the future to specific preferences for having AI perform tasks such as answering historical questions.
The team’s findings were striking – lower artificial intelligence literacy predicts greater AI receptivity, but not for the reasons one might suspect, like differences in perceptions of AI’s capability, ethicality, or feared impact on humanity.
When considering tasks typically associated with inherently human traits—such as demonstrating empathy, humor, or creative insight—individuals with lower AI literacy tend to misattribute these qualities to AI systems.
Their limited understanding of the underlying mechanisms leads them to view AI’s performance not merely as technological, but as embodying a kind of magical, human-like essence. This misperception evokes strong feelings of awe, ultimately heightening their receptivity toward AI innovations.
The awe behind the lower literacy-higher receptivity link works much like the awe around a magic trick, the paper argues. A trick feels magical precisely because you don’t know how it works; once you learn the secret, it seems less magical and can dampen your enthusiasm for magic generally.
Once the authors established that lower literacy predicts greater receptivity, they turned to understanding why this link exists.
Specifically, they explored why some people perceive AI as magical, why lower-literacy individuals experience awe in response to AI, and why higher-literacy individuals do not. The team conducted a series of studies to dissect the psychological mechanisms at play and to ensure their observations were not driven by alternative explanations.
Prof. Appel says his role as a business professor gives him a much more applied perspective on AI.
“While understanding theoretical constructs is essential, it’s not enough on its own. I constantly think about how to ensure research is managerially relevant – how can this knowledge be applied in real-world business contexts, and what practical value can it bring?” he said.
For managers and policymakers, Appel argues, the findings underscore the importance of targeting the right audience when marketing AI innovations. They also point toward a more nuanced approach to AI literacy efforts.
Appel is enthusiastic about how this research will inform AI research, practice, and policy.
“This research presents an exciting opportunity to examine the adoption of AI up close and understand how consumers perceive and integrate this innovation into their lives,” he concluded.