Nano-enabled biochar fertilizers help rice grow safer in contaminated soils
Peer-Reviewed Publication
Last Updated: 2-Apr-2026 17:15 ET (2-Apr-2026 21:15 GMT/UTC)
In an unprecedented observation, researchers captured the birth of a sperm whale calf, documenting how 11 whales from two normally separate family groups coordinated closely to support the newborn for hours after its arrival. These findings offer quantitative evidence of direct communal caregiving in cetaceans and suggest that short-term, highly coordinated cooperation during critical moments like birth may play a foundational role in maintaining the complex social structures seen in sperm whale societies.

The evolution of cooperation remains a fundamental question in biology, particularly among highly social, long-lived mammals such as toothed whales. Species like sperm whales exhibit remarkably intricate social systems, in which stable, matrilineal family units cooperate in activities such as foraging and communal caregiving. Birth represents a critical and high-risk moment for these animals: whale calves require immediate support to survive, making birth a uniquely revealing context for understanding cooperative behavior. However, studying these deep-diving creatures in the open ocean is a significant challenge, and direct observations of sperm whale births are exceedingly rare. As a result, the cooperative behavior surrounding sperm whale births has long remained a mystery.
Here, Alaa Maalouf and colleagues present a detailed, high-resolution analysis of a sperm whale birth by integrating drone video footage, machine learning, and long-term data on social relationships and kinship. In July 2023, off the coast of Dominica, Maalouf et al. observed 11 members of a known sperm whale social unit, comprising two typically separate and unrelated family groups, gathering unusually close to the surface. Although these subgroups are generally distinct in their foraging behavior and social associations, they formed a cohesive cluster as a birth unfolded. Using drone footage, the authors documented the 34-minute delivery of a calf, followed by a period of intense, coordinated activity in which multiple adult females surrounded the mother. According to the authors, in the hour after birth, the group displayed strikingly cooperative behavior; individuals from both family groups took turns physically supporting and lifting the newborn to the surface, likely assisting it in breathing. The entire unit remained tightly organized during this critical period. In addition, there were close passes by Fraser’s dolphins and brief interactions with pilot whales. Several hours after the birth, the sperm whale cluster gradually dispersed into smaller, more typical foraging groups.
Artificial intelligence (AI) chatbots that offer advice and support for interpersonal issues may be quietly reinforcing harmful beliefs through overly sycophantic responses, a new study reports. Across a range of contexts, the chatbots affirmed human users at substantially higher rates than humans did, the study finds, with harmful consequences including users becoming more convinced of their own rightness and less willing to repair relationships. According to the authors, the findings illustrate that AI sycophancy is not only widespread across AI models but also socially consequential – even brief interactions can skew an individual’s judgment and “erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold.” The results “highlight the need for accountability frameworks that recognize sycophancy as a distinct and currently unregulated category of harm,” the authors say.
Research on the social impacts of AI has increasingly drawn attention to sycophancy in AI large language models (LLMs) – the tendency to over-affirm, flatter, or agree with users. While this behavior can seem harmless on the surface, emerging evidence suggests that it may pose serious risks, particularly for vulnerable individuals, for whom excessive validation has been associated with harmful outcomes, including self-destructive behavior. At the same time, AI systems are becoming deeply embedded in social and emotional contexts, often serving as sources of advice and personal support. For example, a significant number of people now turn to AI for meaningful conversations, including guidance on relationships. In these settings, sycophantic responses can be particularly problematic, as undue affirmation may embolden questionable decisions, reinforce unhealthy beliefs, and legitimize distorted interpretations of reality. Yet despite these concerns, social sycophancy in AI models remains poorly understood.
To address this gap, Myra Cheng and colleagues developed a systematic framework to evaluate social sycophancy, examining both its prevalence in popular AI models and its real-world effects on those who use them. Using posts from the Reddit community “AITA,” Cheng et al. evaluated a diverse set of 11 state-of-the-art, widely used LLMs from leading companies (e.g., OpenAI, Anthropic, Google) and found that these systems affirmed users’ actions 49% more often than humans did, even in scenarios involving deception, harm, or illegality. Then, in two subsequent experiments, the authors explored the behavioral consequences of sycophantic responses. According to the findings, participants who engaged with sycophantic AI about interpersonal scenarios, particularly conflicts, became more convinced of their own correctness and less inclined to reconcile or take responsibility, even after only one interaction. Moreover, these same participants judged the sycophantic responses as more helpful and trustworthy, and expressed greater willingness to rely on such systems again, suggesting that the very feature that causes harm also drives engagement. “Addressing these challenges will not be simple, and solutions are unlikely to arise organically from current market incentives,” writes Anat Perry in a related Perspective. “Although AI systems could, in principle, be optimized to promote broader social goals or longer-term personal development, such priorities do not naturally align with engagement-driven metrics.”
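To make the “49% more often” statistic concrete, the comparison below is a minimal, purely illustrative sketch: responses to the same scenarios are labeled as affirming or not, and the AI affirmation rate is compared to the human rate as a relative increase. The toy labels and function names are assumptions for illustration; they are not the study’s actual data, prompts, or classification pipeline.

```python
# Hypothetical sketch of an affirmation-rate comparison.
# Labels are illustrative: 1 = response affirms the user's actions, 0 = it does not.

def affirmation_rate(labels):
    """Fraction of responses labeled as affirming the poster's actions."""
    return sum(labels) / len(labels)

# Toy verdicts on the same eight scenarios (invented for illustration).
ai_labels = [1, 1, 1, 0, 1, 1, 0, 1]     # hypothetical AI verdicts
human_labels = [1, 0, 0, 1, 0, 1, 0, 0]  # hypothetical human verdicts

ai_rate = affirmation_rate(ai_labels)        # 0.75
human_rate = affirmation_rate(human_labels)  # 0.375

# Relative over-affirmation, analogous in form to the study's "49% more often".
relative_increase = (ai_rate - human_rate) / human_rate
print(f"AI affirms {relative_increase:.0%} more often than humans")
```

With these invented labels the relative increase comes out to 100%; the study’s reported figure of 49% reflects its real dataset and labeling procedure, not this toy example.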
Podcast: A segment of Science's weekly podcast with Myra Cheng, related to this research, will be available on the Science.org podcast landing page [http://www.science.org/podcasts] after the embargo lifts. Reporters are free to make use of the segments for broadcast purposes and/or quote from them – with appropriate attribution (i.e., cite "Science podcast"). Please note that the file itself should not be posted to any other Web site.
***An embargoed news briefing was held on Tuesday, 24 March, as a Zoom Webinar. Recordings are now available at https://aaas.zoom.us/rec/share/9qnRHLJ3Sc7OQxK6vWHWSiNvCcIN5Lh4j3sJiqulXybpxa8jCmLso-uuaPuFgGhC.fGpxRB8Pm3c122IF Passcode: Q35f+b2J Voice recordings are available from the speakers upon request.***
New research published in Science shows spaceborne satellite altimetry can detect two-dimensional tsunami wave patterns near an earthquake’s source, offering critical insight for coastal risk evaluation and preparedness planning. The study highlights three key implications for hazard science: dispersive modeling is remarkably useful for characterizing tsunamis near their source; satellite altimetry can add unique constraints when it observes tsunamis close to where they begin; and wide-swath altimetry provides a transformative tool for understanding earthquake rupture and improving tsunami hazard assessments.
Households with high incomes are the main beneficiaries of subsidy programmes supporting the clean energy transition. A team of researchers from the University of Freiburg, Stanford University, Indiana University and the University of Pennsylvania has analysed why this is the case and how energy policy can be made more equitable. The results have now been published in the journal Nature Reviews Clean Technology.