News Release

Why people cooperate with fair AI — but not with “nice” AI

A study of 1,152 people finds that humans cooperate with fair AI as readily as with other humans, but not with AI that is unconditionally nice or strategically selfish

Peer-Reviewed Publication

Science China Press

Imagine an AI assistant helping to coordinate a team project, negotiate a deal or make decisions with people. It may be fluent, helpful and fast. But there is still a basic social problem: people often do not cooperate with machines as readily as they do with other humans.

Researchers call this the “machine penalty.” A new study suggests that overcoming it may require something unexpected. AI does not need to be endlessly nice. It needs to feel fair.

In a pre-registered experiment involving 1,152 participants, researchers tested whether large language model agents could encourage people to cooperate in repeated social dilemma games. Participants interacted either with another human or with one of three AI agents: a cooperative agent, a selfish agent or a fair agent. Participants were explicitly told whether their partner was human or machine.
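For readers unfamiliar with the setup, a social dilemma pits individual temptation against collective benefit. The release does not specify the exact game or stakes used in the experiment, but a standard prisoner's dilemma captures the structure; the Python sketch below uses hypothetical payoff values purely for illustration.

# Hypothetical payoffs for one round of a two-player social dilemma
# (a standard prisoner's dilemma; the release does not specify the
# exact game or payoff values used in the experiment).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation pays both well
    ("C", "D"): (0, 5),  # a lone cooperator is exploited
    ("D", "C"): (5, 0),  # a lone defector gains the most
    ("D", "D"): (1, 1),  # mutual defection pays both poorly
}

def play_round(move_a, move_b):
    """Return (payoff_a, payoff_b) for one round."""
    return PAYOFFS[(move_a, move_b)]

# The dilemma: defecting is individually tempting in any single round,
# but mutual cooperation beats mutual defection over repeated rounds.
print(play_round("C", "C"))  # -> (3, 3)
print(play_round("D", "C"))  # -> (5, 0)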

The main result was clear. Only the fair AI agent brought human cooperation up to the level seen in human-human interaction. The cooperative agent did not. The selfish agent did not. In other words, being helpful was not enough, and being strategically self-interested clearly did not work either. Fairness was the key.

That finding is striking because it goes against a common intuition in AI design. One might expect that the most cooperative agent would be the best partner. But the study found that people did not respond most strongly to unconditional niceness. They responded to an agent that behaved more like a real social partner.

The fair agent did not simply cooperate all the time. It sometimes broke pre-game cooperation promises, though much less often than the selfish agent. That imperfection turned out to matter. The researchers found that occasional promise-breaking was associated with the highest human cooperation, while frequent promise-breaking pushed cooperation down.

Why would that happen? The study suggests that fair agents may have succeeded because they reflected a more human form of reciprocity. People are often willing to cooperate, but not blindly. Human cooperation is usually tied to fairness, expectations and a readiness to pull back when cooperation is not returned. The fair agent matched that logic better than the other two agents. As a result, it appeared more socially credible and more capable of establishing cooperation as the norm.
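To make that reciprocity logic concrete, here is a minimal, hypothetical sketch in Python. It is not the study's implementation, since the agents in the experiment were large language models rather than hand-coded strategies, but it illustrates the tit-for-tat-style conditional cooperation the release describes: cooperate by default, pull back when cooperation is not returned, and resume when it is.

# Illustrative sketch only: a toy conditional-reciprocity rule
# mirroring the fairness logic described above. The name
# fair_agent_move is hypothetical, not taken from the study.
def fair_agent_move(history):
    """Choose 'C' (cooperate) or 'D' (defect) in a repeated social dilemma.

    history: list of (my_move, partner_move) tuples from earlier rounds.
    """
    if not history:
        return "C"  # open cooperatively, as a committed partner would
    _, partner_last = history[-1]
    # Reciprocate the partner's previous move, rather than being
    # unconditionally nice (the cooperative agent) or exploitative
    # (the selfish agent).
    return "C" if partner_last == "C" else "D"

# Example: the partner defects in round 2, so the agent withdraws
# cooperation in round 3 instead of cooperating unconditionally.
rounds = [("C", "C"), ("C", "D")]
print(fair_agent_move(rounds))  # -> "D"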

Post-experiment surveys supported that interpretation. Participants who interacted with fair agents gave the highest estimates of how cooperative others would be, suggesting that fair agents helped create stronger cooperative expectations. Fair agents were also seen as intelligent and agentic, and were rated as more trustworthy, likable, cooperative and fair than human partners on several measures.

The broader message is that successful AI cooperation may depend less on surface human-likeness or constant helpfulness than on social intelligence. An AI agent may speak smoothly and still fail as a partner if its behavior does not fit the social rules people recognize. This study suggests that humans respond best to AI that can navigate fairness, reciprocity and social expectations in a believable way.

That insight carries an important lesson for the future design of AI agents. If AI is going to work with people in negotiation, teamwork, education, healthcare or digital assistance, developers may need to move beyond the idea of AI as either a pure optimizer or a perfectly obedient helper. Instead, AI agents may need to be designed as socially aware partners — capable not only of reasoning and communicating, but also of acting in ways people perceive as fair.

 

