Can artificial intelligence have morality? Philosophy weighs in
Texas A&M University
As the influence of artificial intelligence grows, so do the ethical questions that surround it.
Dr. Martin Peterson, a philosophy professor at Texas A&M University, says that while AI can mimic human decision-making, it cannot truly make moral choices.
AI cannot, by itself, be a “moral agent” that understands the difference between right and wrong and can be held accountable for its actions, he said.
“AI can produce the same decisions and recommendations that humans would produce,” he said, “but the causal history of those decisions differs in important ways.” Unlike humans, AI lacks free will and cannot be held morally responsible. If an AI system causes harm, the blame lies with its developers or users, not the technology itself.
Instead, Peterson asserts, AI is a tool that can be aligned with human values such as fairness, safety and transparency.
But achieving that alignment is far from simple.
“We cannot get AI to do what we want unless we can be very clear about how we should define value terms such as ‘bias,’ ‘fairness’ and ‘safety,’” he said, noting that even with improved training data, ambiguity in defining these concepts can lead to questionable outcomes.
Peterson, who studies the history and ethics of professional engineering, is working on a way to measure value alignment across AI platforms. The idea is to create a kind of “scorecard” that can help determine whether one AI system is better aligned with moral values than another — a necessary step, he argues, if society wants to make informed choices about which technologies to adopt.
Despite the challenges, Peterson sees promise in AI’s potential to revolutionize health care, particularly in diagnostics and personalized treatment. But he also warns of the dangers, especially in military applications. “AI drones are likely to become incredibly sophisticated killer machines in the near future,” he said. “The people who control the best military AI drones will win the next war.”
Dr. Glen Miller, director of undergraduate studies in the Department of Philosophy at Texas A&M, shares many of Peterson’s concerns. Miller’s research focuses on engineering, cyberethics and the ethics of politics.
Miller describes AI as part of a “sociotechnical system” in which ethical responsibility is distributed among developers, users, corporations and regulators. He cautions against overreliance on AI in education and mental health, noting that while AI can assist in these areas, it lacks the human capacity for practical judgment — what philosophers call “phronesis.”
“We shouldn’t be fooled into thinking that AI ‘understands’ us,” Miller said. “AI therapy and companionship may supplement human engagement, but it can also lead people toward disastrous ends. We need to make sure appropriate oversight is put in place.”
Both agree that the ethical implications of AI are not just academic; they’re widespread and urgent.
As Miller puts it, “AI is actively reshaping what we do and what we think, and each person needs to consider the short- and long-term effects of using or not using AI in their personal, work, social and public lives.”
By Lesley Henton, Texas A&M University Division of Marketing and Communications
###