Image: A new institute, based at Brown and supported by a $20 million National Science Foundation grant, will convene researchers to guide development of a new generation of AI assistants for use in mental and behavioral health. Ellie Pavlick, an associate professor of computer science at Brown, will lead the effort.
Credit: Nick Dentamaro/Brown University
PROVIDENCE, R.I. [Brown University] — With a $20 million grant from the U.S. National Science Foundation, Brown University researchers will lead an artificial intelligence research institute aimed at developing a new generation of AI assistants capable of trustworthy, sensitive and context-aware interactions with people. Work to develop the advanced assistants is specifically motivated by the potential for use in mental and behavioral health, where trust and safety are of the utmost importance.
The AI Research Institute on Interaction for AI Assistants (ARIA) will combine research on human and machine cognition, with the goal of creating AI systems that are able to interpret a person’s unique behavioral needs and provide helpful feedback in real time. To understand what form such systems should take and how they could be safely and responsibly deployed, the institute will bring together experts from across the nation spanning computer science and machine learning, cognitive and behavioral science, law, philosophy and education.
Creating AI systems that can operate safely in a sensitive area like mental health care will require capabilities that extend well beyond those of even today’s most advanced chatbots and language models, according to Ellie Pavlick, an associate professor of computer science at Brown who will lead the ARIA collaboration.
“Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” Pavlick said. “At the same time, the system needs to be transparent about why it makes the recommendations that it does in order to build trust with the user. Mental health is a high-stakes setting that embodies all the hardest problems facing AI today. That’s why we’re excited to tackle this and figure out what it takes to get these things absolutely right.”
That work will require deep collaboration across institutions, expertise and academic disciplines, Pavlick said. She and her colleagues have carefully assembled a nationwide collaboration to address these critical challenges in AI development.
“AI systems — particularly those brought to bear in sensitive areas of human health — require thoughtful development that combines technological advancement with a deep understanding of their societal implications,” said Brown University Provost Francis J. Doyle III. “Brown is well-positioned to lead this collaborative research, and I’m confident the work of ARIA’s scholars will produce scientific breakthroughs that will have a positive impact on the lives of countless people.”
ARIA is one of five national AI institutes that will share a total of $100 million in funding, the National Science Foundation announced on Tuesday, July 29, in partnership with Capital One and Intel. The public-private investment aligns with the White House AI Action Plan, a national initiative to sustain and enhance America’s global AI leadership, the NSF noted.
“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness,” said Brian Stone, who is performing the duties of the NSF director. “Through the National AI Research Institutes, we are turning cutting-edge ideas and research into real-world solutions and preparing Americans to lead in the technologies and jobs of the future.”
ARIA’s research team includes experts from leading research institutions nationwide including Colby College; Dartmouth College; New York University; Carnegie Mellon University; the University of California, Berkeley; the University of California, San Diego; the University of New Mexico; the Santa Fe Institute; and Data and Society, a civil society organization in New York. The institute will draw on specialized expertise from Brown’s Data Science Institute and Carney Institute for Brain Science, Dartmouth’s Center for Technology and Behavioral Health, and Colby’s Davis Institute for AI.
Additional collaborators include SureStart, Google, the National Institutes of Health, Addiction Policy Forum, Community College of Rhode Island, and Clemson University. As part of its partnership with NSF, Capital One is contributing $1 million over five years to support ARIA’s research efforts.
“ARIA, in its very conception, incorporates some of the most important ideals of doing people- and community-centered research,” said Suresh Venkatasubramanian, a professor of computer science at Brown, director of Brown’s Center for Technological Responsibility, Reimagination and Redesign, and co-director of ARIA. “Our team has scholars who span multiple disciplines, deep engagement with stakeholders in the mental and behavioral health community, and cutting-edge expertise in doing sociotechnical research.”
ARIA’s work will also include a robust education and workforce development program spanning K-12 students through working professionals. The ARIA team will work with the Bootstrap program, a computer science curriculum developed at Brown, to support evidence-based practices for building new AI curricula and training for K-12 teachers. An initiative called the Building Bridges Summer Program will bring college and high school students from across the country to ARIA campuses to work on cutting-edge AI research.
New technologies for tomorrow, new insights for today
According to the National Institute of Mental Health, more than one in five Americans lives with a mood, anxiety or substance use disorder. Effective treatments exist for these conditions, but high cost, lack of insurance, limited access to transportation and social stigma can all create barriers to care. AI has the potential to break through these barriers in a variety of ways, Pavlick says.
“There are still a lot of open questions about what a ‘good’ AI system for mental health support looks like,” Pavlick said. “We can imagine people wearing smartwatches or other devices that collect behavioral and biometric information, and having an AI system that uses that data to provide nudges or goal-oriented feedback. But there are obviously a lot of considerations about privacy, accuracy, personalization, safety and when to have a therapist in the loop. Part of the work of the institute will be to understand what forms this technology could take, which types of systems could work and which shouldn’t exist.”
The need for this work is urgent, according to Pavlick. New startups and existing companies are already developing AI apps and chatbots for mental health support, and evidence suggests that people often turn to ChatGPT and other chatbots for relationship advice and other information tied to mental well-being.
“The work we’ll be doing on trust, safety and responsible AI will hopefully address immediate safety concerns with these systems — for example, developing safeguards against responses that reinforce delusions or unempathetic responses that could increase someone’s distress,” Pavlick said. “We need short-term solutions to avoid harms from systems already in wide use, paired with long-term research to fix these problems where they originate.”
New and smarter AI systems will be needed to deliver the kind of trustworthy, context-aware feedback required for safe and effective mental health interventions. Today’s large language models generate text through statistical inference — predicting which words to use next based on prior words or user inputs. Unlike humans, they don’t have a mental model of the world around them, they don’t understand chains of cause and effect, and they have little intuition about the internal states of the people with whom they interact.
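To make that description concrete, the sketch below illustrates next-word prediction in its simplest form: given the words so far, a model assigns a score to each candidate continuation, converts the scores into probabilities, and selects the most likely word. Everything here is invented for illustration (the tiny vocabulary, the hand-set scores and the softmax helper); it is a schematic of the general technique, not code from ARIA or from any particular model.

    import math

    def softmax(logits):
        # Turn raw scores into a probability distribution over candidate words.
        exps = [math.exp(x - max(logits)) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    prior_words = ["I", "feel", "really"]            # the words so far
    candidates = ["anxious", "happy", "table"]       # hypothetical vocabulary
    logits = [2.1, 1.4, -3.0]                        # hand-set scores, for illustration only

    probs = softmax(logits)
    next_word = candidates[probs.index(max(probs))]  # pick the most probable continuation
    print(" ".join(prior_words), "->", next_word)    # prints: I feel really -> anxious

A real language model computes those scores with billions of learned parameters and repeats this step one word at a time, which is why its output reflects statistical patterns in text rather than an explicit model of cause and effect or of the person it is talking to.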
“There’s a lot of work in cognitive science and neuroscience trying to understand how humans develop this kind of causal understanding of the world and of their own activities,” Pavlick said. “We’ll be adding to that work and thinking about how to endow AI systems with analogous abilities so that they can interact naturally and effectively with people.”
At the same time, the team will engage legal scholars, philosophers, education experts and others to better understand how such systems would fit into existing social and cultural infrastructure.
“You don’t just want to take for granted that any system that you can build should exist, because not all of them will have a net benefit,” Pavlick said. “So we’ll be addressing questions about what systems should even be built and which should not.”
Ultimately, Pavlick says, developing smarter, more responsible AI will pay off not only in the mental health sphere but in AI development more broadly.
“We’re addressing this critical alignment question of how to build technology that is ultimately good for society,” she said. “These are extremely hard problems in AI in general that happen to have a particularly pointed use case in mental health. By working toward answers to these questions, we’ll work toward making AI that’s beneficial to all.”