Feature Story | 27-Feb-2026

Dartmouth, where AI began, celebrates a 70-year milestone and defines what's next

Researchers, scholars guide the future of AI while building on the legacy of the 1956 Dartmouth Summer Research Project on Artificial Intelligence.

Dartmouth College

Seventy years ago this summer, a small group of mathematicians and scientists gathered in Hanover, NH, for the Dartmouth Summer Research Project on Artificial Intelligence, a two-month workshop that coined the term “artificial intelligence” and launched a field now reshaping science, technology, and daily life.

Dartmouth will mark the 70th anniversary of AI with a yearlong series of events, convenings, and public conversations designed to reflect on that legacy and help shape what comes next.

The year’s flagship event, “The Dartmouth Conference, Revisited,” will connect AI’s founding vision to a future-facing mandate for responsible innovation. Drawing inspiration from the 1956 summer workshop, the conference will gather researchers, developers, creators, and institutional leaders at Dartmouth from Oct. 29-30 to deliberate on how artificial intelligence can responsibly augment human judgment and creativity, and to clarify higher education’s role in preserving and cultivating distinctly human capabilities.

If the original conference asked whether machines could think, today’s questions are more complex, and more human. As AI systems generate text, images, code, and predictions with remarkable efficacy, the central challenge has shifted from “Can machines think?” to “How do humans think, create, make ethical decisions, and lead alongside machines?”

“Our legacy carries a responsibility to anchor innovation in human judgment, to ask hard questions about the role of new tools in teaching and research, and to lead higher education in defining how this technology advances knowledge with integrity and purpose,” says President Sian Leah Beilock, a cognitive scientist.

That responsibility starts in the classroom. Faculty across disciplines are using AI not to bypass student thinking but to sharpen it, making reasoning visible, testing assumptions, and pushing ideas into new territory. The anniversary year will also surface Dartmouth’s emerging norms for responsible AI use developed through faculty leadership and ongoing campus dialogue, creating a model for institutions navigating similar questions.

“Students learn to develop and defend interpretations, challenge assumptions, and push ideas into uncharted territory—while discerning when automation should give way to human judgment and imagination,” says Provost Santiago Schnell.

The measure of excellence, he says, is less what can be produced and more the rigor of reasoning, the originality of the questions posed, and the ability to navigate complexity with clarity and care. The aim is to prepare students to perform thoughtfully, critically, and ethically wherever AI is present—not just as engineers or users, but as citizens, creators, and decision-makers.

"Our legacy carries a responsibility to anchor innovation in human judgment, to ask hard questions about the role of new tools in teaching and research."

Dartmouth President Sian Leah Beilock

Dartmouth’s strength as a liberal arts research university—where computational science converges with philosophy, ethics, media studies, the arts, and the social sciences—means these questions can be addressed from every angle. Researchers are advancing AI’s capabilities while interrogating its risks, from hallucinations and bias in training data to the broader challenge of ensuring that powerful tools serve, rather than replace, human judgment. That integration of depth and breadth is producing scholarship that informs institutions, policymakers, and the public.

Dartmouth’s graduate and professional schools have also rolled out new offerings related to AI. The Thayer School of Engineering has a new AI track as an option within its master of engineering program, and undergraduates pursuing a bachelor of engineering can now choose a concentration in AI.

Meanwhile, AI now surfaces across the entire Tuck School of Business curriculum—appearing in every course through teaching, research, and hands-on learning—with at least 10 electives this academic year devoted specifically to AI.

Setting the Agenda for AI’s Next Chapter

The anniversary commemoration opens Feb. 26-27 with the presentation of the McGuire Prize for Societal Impact to Hany Farid, a recognized expert in digital forensics whose technologies for detecting deepfakes and exploitative imagery are widely used today. A UC Berkeley professor who spent 20 years as a Dartmouth faculty member, Farid developed innovations, including PhotoDNA, a technology deployed globally to identify and remove images of child exploitation, that are now essential tools for law enforcement, human rights advocates, and major tech companies.

Launching the 70th anniversary with Farid’s recognition underlines a central theme of the year—rigorous technical innovation must be paired with ethical responsibility and accountability. The original conference asked how machines could be made to solve human problems. As the field has matured and expanded, the questions now are more nuanced: “How do we strengthen human judgment and creativity in the age of AI?” and “What is higher education’s role in expanding these capacities?”

“These questions go to the heart of higher education’s responsibility in an AI era,” says Peter Chin, professor of engineering and co-chair of Dartmouth’s Faculty Leadership Group on AI. “Framing this dialogue through the committee’s role—to discern where AI can accelerate our mission and where thoughtful restraint is needed—ensures that critical thinking and ethical inquiry remain central to crafting an evidence-based approach that can guide others in academia.”

The conference aims to produce actionable guidance—published frameworks for centering human judgment, creativity, and ethical responsibility as AI capabilities expand.

A Yearlong Conversation

A series of events across the anniversary year will extend Dartmouth’s leadership, exploring how AI reshapes not just research and decision-making but imaginative work across disciplines—and what educational institutions must do in response. 

Among them:

This spring, students will compete for the DALI TechniGala Student Prize, Dartmouth’s first-ever AI innovation prize. The competition celebrates outstanding student work at the intersection of design, technology, and human-centered innovation—a showcase of how the next generation is already building with AI while prioritizing creativity, responsibility, and real-world impact.

In September, the Magnuson Center’s Dartmouth Entrepreneurs Forum in San Francisco will bring the university’s perspective to the innovation and venture capital community, making the case that entrepreneurship in the age of AI still depends on human judgment, creative vision, and ethical consideration.

And a “Future of Work” event in New York City in spring 2027 will examine how organizations can structure work to preserve human judgment, creative capacity, and ethical accountability as AI capabilities expand, and how colleges and universities can prepare the leaders who will navigate these challenges.

More events and details will be announced on the AI at Dartmouth website in coming months. As the institution that first launched the field, Dartmouth now seeks to convene a broader conversation about what must remain distinctly human and how educational institutions can ensure those capacities endure.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.