GW TAI Professor Alexa Alice Joubin Explores the Role of AI in the Humanities


June 2, 2025

Professor Joubin speaking at the 2025 GW Leadership Forum.

All the world’s a stage, according to William Shakespeare’s “As You Like It.” GW Professor and Shakespearean expert Alexa Alice Joubin believes this dynamic translates to human interactions with artificial intelligence (AI).

“AI is very performative,” Joubin explained. “When we interact with AI, it’s putting on a somewhat improvised performance for us. AI’s interfaces are modeled on a chat experience.” She links the artistic creation of virtual worlds to the virtuality enabled by AI.

As a Professor of English and Women’s, Gender, and Sexuality Studies, Joubin is focused on literary, performative, and digital forms of expression in early modern and postmodern cultures.

Much of her work involves analyzing words and their impact, which led her to the world of generative AI research and her involvement in GW’s Trustworthy AI Initiative. She researches performance theory and trust, which she finds intimately linked with the human-AI loop.

“When you go to the theater, do you trust what the characters are saying? Maybe not all the time. That’s the pleasure of following a story: they all have their biases. That dynamic parallels AI’s operation.” Joubin finds AI use in humanities teaching to be productive but not without its challenges.

Joubin posits that outsourcing trust in higher education can foster mistrust among students and faculty. Her recent publication, “Enhancing the Trustworthiness of Generative Artificial Intelligence in Responsive Pedagogy in the Context of Humanities Higher Education,” explores how we can create trust-engendering digital infrastructure that fosters trustworthy activities. To Joubin, trust is not a property of an object but rather a type of relation and social action.

The study investigates what conversational AI tools can accomplish in the humanities and the challenges of their use in a behavioral environment with AI-mediated communication. Joubin identified two main challenges: that students may interpret the fluent prose of AI as the singular, ultimate answer and that students tend to mistake AI synthesis for critical thinking.

She applies two media and performance theories in her study: remediation and interface theories. Generative AI represents data from one medium, such as datasets that are difficult for humans to process meaningfully, in another medium: succinct, human-like conversation. In this way, AI “remediates” human narratives and performs various versions of the collective consciousness of the publics.

AI uses a conversational interface as a form of social robotics designed to interact with humans. Although the interface may appear to be a neutral vehicle for transmitting information, it accrues external value as it conveys that information.

Applying these theories led Joubin to develop two actionable strategies to help mitigate the challenges of AI use in the humanities in higher education. To address the challenge of false singularity, Joubin teaches students an iterative process of using AI to fine-tune their research questions. Students have AI ask them questions related to or oppositional to their initial draft question.
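
For readers who want a concrete picture of this exercise, a minimal sketch might look like the following. It assumes, purely for illustration, an OpenAI-style chat-completions client; the article does not specify which model or tools Joubin’s students actually use, and the prompt wording here is hypothetical.

```python
# Hypothetical sketch: ask a model to interrogate a draft research question
# rather than answer it. Assumes the openai package and an API key in the
# environment; the model name and prompts are placeholders, not Joubin's own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft_question = "How does Shakespeare portray power in Macbeth?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a research tutor. Do not answer the student's "
                "question. Instead, reply with three related questions and "
                "three oppositional questions that challenge its assumptions."
            ),
        },
        {"role": "user", "content": draft_question},
    ],
)

print(response.choices[0].message.content)
```

The student then revises the draft question in light of the model’s counter-questions and repeats the loop, which is what makes the process iterative rather than a one-shot query.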

To help students avoid mistaking synthesis for domain knowledge, Joubin helps students develop meta-cognitive skills. Students build their draft thesis before using AI to simulate historical pro and con arguments about the topic. They do not aim to extract information from AI but instead use AI to enhance critical distance and to create new pathways through course materials. Joubin encourages her students to take a metacritical stance to their research questions.
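
The second strategy could be scripted in a similar spirit. The sketch below is a hypothetical helper that turns a student’s draft thesis into a pair of debate prompts, one arguing for the thesis and one against it, which the student can then hand to whatever chat tool the course uses; the persona wording is illustrative, not Joubin’s actual prompt.

```python
# Hypothetical helper: turn a draft thesis into pro and con debate prompts so
# the model argues both sides instead of handing back a single synthesis.
# The wording is illustrative; the article does not reproduce Joubin's prompts.

def debate_prompts(draft_thesis: str) -> dict[str, str]:
    base = (
        "Stay in character, point to the kinds of evidence a scholar of the "
        "period would cite, and end with one question for the student."
    )
    return {
        "pro": (
            f"Argue in favor of this thesis as a sympathetic historical "
            f"commentator: '{draft_thesis}'. {base}"
        ),
        "con": (
            f"Argue against this thesis as a skeptical contemporary critic: "
            f"'{draft_thesis}'. {base}"
        ),
    }


if __name__ == "__main__":
    thesis = "Cross-gender casting on the early modern stage unsettled fixed gender roles."
    for side, prompt in debate_prompts(thesis).items():
        print(f"--- {side.upper()} ---\n{prompt}\n")
```

Because the student drafts the thesis first and only then stages the simulated debate, the AI output becomes material to evaluate critically rather than a substitute for the student’s own synthesis.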

Joubin’s desire to empower students to use AI more effectively has led her to develop her own AI chatbot. Working with GW computer science master’s student Akhilesh Rangani, she built an open-source, open-access, course-resident AI chatbot to accompany her open-access online textbooks on Critical Theory and Screening Shakespeare. The chatbot is programmed to frame its outputs as questions and to reveal its reasoning processes as part of each output.
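
The article does not publish the chatbot’s code, but its two stated design constraints, framing outputs as questions and exposing the reasoning behind them, could be approximated with a system prompt along the following lines. Everything below is a hypothetical sketch of those goals, not the implementation built by Joubin and Rangani.

```python
# Hypothetical configuration sketch for a course-resident chatbot that
# (a) frames its outputs as questions and (b) surfaces its reasoning in every
# reply. This illustrates the stated design goals, not the actual codebase.

SYSTEM_PROMPT = """\
You are a course companion for an open-access humanities textbook.
For every reply:
1. Begin with a short "Reasoning:" section explaining how you arrived at the
   response and which course concepts it draws on.
2. End with two or three open questions for the student instead of a final answer.
Never present a single answer as definitive.
"""


def build_messages(student_turn: str, history: list[dict] | None = None) -> list[dict]:
    """Assemble the message list sent to a chat model for one student turn."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": student_turn})
    return messages


print(build_messages("What does 'remediation' mean in media theory?"))
```

Keeping the questioning behavior in a single, editable system prompt rather than in model fine-tuning is one way such a constraint could travel with an open-source chatbot to other instructors’ course materials.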

Professor Joubin’s AI chatbot in action.

In building a trust-engendering infrastructure and ecosystem, her goal is to make accessing the textbook and corresponding chatbot as user-friendly as possible so that any professor from any discipline can model her approach with their own course materials.

“AI can produce a lot of uncertainties, but AI also compels us to think deeper about our assumptions and actions,” she said. In the next phase of scaling up her project, she will open up the instructor dashboard to enable other professors to build their own custom multilingual chatbots with adaptive responses.

According to Joubin, it’s not about asking AI for a definitive, verifiable answer but asking AI for various possible answers to critically evaluate and apply as appropriate.

Joubin’s commitment to enhancing the use of digital tools in the humanities extends far beyond her AI-related research. In 2013, Joubin co-founded the GW Digital Humanities Institute (DHI) to increase STEM students’ engagement in the humanities and to increase humanities students’ digital and visual literacy.

The DHI was founded on the belief that digital cultures actively transform the arts and humanities. The Institute supports collaborative scholarship endeavors and multimodal venues of teaching and learning through grants, workshops, symposia, and exhibitions. The Digital Humanities Institute is a partner program of GW TAI, of which Joubin is an affiliate researcher.

As GW’s inaugural Public Interest Technology Scholar, Joubin continues to push for a culture of openness in technology. She addressed this in her TED-style talk at the 2025 GW Leadership Forum.

She believes that an open culture supports ethical human-AI collaboration in higher education. Her open-access AI exemplifies the principles of civic science: a partnership among public stakeholders, end beneficiaries, and scientists-as-citizens to co-create technologies for the public good.

For Joubin, it’s not just about researching how we can best use AI in the humanities; it’s about putting that research into practice as a humanities educator herself.

“I am an educator trying to do this for other educators,” Joubin concluded.