Image: James Doss-Gollin, assistant professor of civil and environmental engineering at Rice University. Credit: Rice University.
When families decide where to buy a home, when cities approve new development, or when governments decide where to invest billions in resilience, they increasingly turn to climate-risk scores for guidance.
But how trustworthy are those projections, and how should different users interpret them?
A new paper published in Proceedings of the National Academy of Sciences argues that trustworthiness depends not just on the sophistication of the models used to produce the scores but also on whether the science behind them is open, reusable and transparent enough for others to examine, test and improve.
Led by Dartmouth Engineering, the paper brings together leading researchers from more than a dozen institutions to issue a clear call to action: Climate-risk science must make its assumptions, data and code easier to inspect if it is to reliably inform high-stakes decisions.
“Climate-risk projections are being used in infrastructure design, housing markets and public policy,” said co-author James Doss-Gollin, assistant professor of civil and environmental engineering at Rice University. “But the devil really is in the details. Small differences that seem reasonable in isolation can have a big influence on final scores. Being able to look into those details — to understand what assumptions are driving results and how alternative assumptions might lead to different results — is essential if these tools are going to support high-stakes decisions.”
The urgency of the problem became visible in recent years as commercial climate-risk scores entered the mainstream. Millions of U.S. homebuyers relied on property-level flood-risk estimates embedded in real-estate platforms until some of those platforms quietly pulled the data back.
Zillow, for example, stopped displaying climate-risk scores on listings after users complained that the numbers felt arbitrary and opaque.
“That reaction wasn’t surprising,” Doss-Gollin said. “Climate-risk projections often combine multiple layers of modeling — from estimating future weather probabilities to translating weather into hazards like floods to estimating damages — each with its own uncertainties and limits on validation.”
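A minimal sketch, with hypothetical numbers, can make that layering concrete. The snippet below chains the three layers Doss-Gollin describes (event probabilities, flood depths, dollar damages) and shows how two assumptions that each look reasonable in isolation shift the final expected-annual-damage score. Nothing here is taken from the paper; the curve and values are illustrative only.

```python
# Illustrative only: three modeling layers -> one risk score.
import numpy as np

# Layer 1: annual exceedance probabilities (2-yr through 100-yr events).
probs = np.array([0.5, 0.2, 0.1, 0.04, 0.02, 0.01])

# Layer 2: modeled flood depth in meters at one property for each event.
depths = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.4])

def damage_fraction(depth_m, k):
    """Layer 3: a stylized depth-damage curve; k is one tunable assumption."""
    return 1.0 - np.exp(-k * depth_m)

home_value = 300_000.0

for k in (0.8, 1.0):  # two assumptions that each seem plausible on their own
    losses = home_value * damage_fraction(depths, k)
    # Expected annual damage: trapezoidal area under the loss-probability curve.
    widths = -np.diff(probs)  # probabilities are listed in decreasing order
    ead = np.sum(0.5 * (losses[:-1] + losses[1:]) * widths)
    print(f"k = {k}: expected annual damage ~ ${ead:,.0f}")
```

Swapping the single curve parameter changes the bottom-line score by a meaningful margin, which is exactly the kind of buried sensitivity the authors argue users need to be able to inspect.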
In one striking example cited in the paper, two widely used flood-hazard models for Los Angeles agreed on only 24% of the properties they placed within the current 100-year flood plain (the area with a 1% chance of flooding in any given year). Limited historical flood and damage data made it impossible to determine definitively which model performed better.
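That 24% figure can be read as an overlap statistic. The sketch below computes a Jaccard-style agreement measure over synthetic property flags; the study's exact metric and data are not reproduced here, so this is an assumed reading that only shows how such a number is calculated.

```python
# Synthetic illustration: agreement between two floodplain maps.
import numpy as np

rng = np.random.default_rng(seed=0)
n_properties = 10_000

# Each model labels every property as inside (True) or outside the plain.
model_a = rng.random(n_properties) < 0.08
model_b = rng.random(n_properties) < 0.08

flagged_by_both = model_a & model_b
flagged_by_either = model_a | model_b

# Jaccard-style agreement: overlap divided by the union of flagged parcels.
agreement = flagged_by_both.sum() / flagged_by_either.sum()
print(f"Models agree on {agreement:.0%} of flagged properties")
```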
“None of this means the science is wrong,” said corresponding author Adam Pollack, who led the research at Dartmouth and is now an assistant professor at the University of Iowa. “But when data and code are difficult or impossible to access, it slows down climate-risk science, especially in application areas that require integrating many methods and tools.”
To assess how open the field actually is, the authors conducted a meta-analysis of highly cited climate-risk studies published in 2021 and 2022. The result was stark: Only 4% of those studies shared both their data and code — a widely accepted minimum standard for transparency that many journals and funders already claim to require.
The finding doesn’t challenge the broad conclusions of climate science or climate-risk research, the authors emphasize. Instead, it highlights a structural barrier to cumulative progress at a moment when timely, evidence-based risk information is urgently needed.
“Transparency lets others scrutinize assumptions, and reusability lets researchers build on one another’s work,” Pollack said. “The major successes in climate-risk science show how critical those practices are for understanding why different models produce different answers.”
The paper points to landmark open-science efforts, such as global climate model intercomparison projects and openly shared economic models, as proof that transparent foundations can accelerate scientific progress and directly inform policy.
Beyond diagnosing the problem, the study outlines concrete steps researchers, journals and institutions can take now:
- Journals should enforce their own transparency standards or revise them to standards they are willing to uphold.
- Researchers should share data and code whenever possible, with thoughtful exceptions for privacy or legal constraints.
- Open datasets and software should be formally cited, strengthening incentives to share.
- Clear data and code availability statements should explain what is and isn’t accessible and why.
The authors also stress that simply uploading files isn’t enough. True reusability requires documentation, simple tests and clear instructions.
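What a "simple test" might look like in practice: the minimal, hypothetical sketch below pins down basic properties of a shared damage function and runs with pytest. The function and its behavior are illustrative stand-ins, not code from the paper's materials.

```python
# Hypothetical example of the "simple tests" the authors call for.
# The damage function below is a stand-in, not code from the paper.

def damage_fraction(depth_m: float) -> float:
    """Toy depth-damage curve: zero damage at or below zero depth, capped at 1."""
    if depth_m <= 0:
        return 0.0
    return min(1.0, 0.3 * depth_m)

# Run with `pytest`; each test documents one property reusers can rely on.
def test_no_flood_means_no_damage():
    assert damage_fraction(0.0) == 0.0

def test_damage_fraction_stays_bounded():
    assert 0.0 <= damage_fraction(10.0) <= 1.0

def test_deeper_water_does_not_reduce_damage():
    assert damage_fraction(2.0) >= damage_fraction(1.0)
```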
“These are not abstract ideals,” Doss-Gollin said. “They’re practical tools for revealing assumptions, identifying errors and helping users understand the limits of risk projections.”
Looking ahead, the paper argues that transformational progress will require larger investments from funders and institutions, including support for research software engineers, training pipelines and shared benchmarking platforms. The authors are also careful to note that transparency in noncommercial research does not undermine private-sector innovation. Instead, they argue, open foundations make it easier to benchmark products, validate methods and communicate uncertainty clearly.
“If these projections are guiding decisions that affect people’s lives and livelihoods, we owe it to the public to make the science behind them as open and understandable as possible,” Doss-Gollin said.
Journal
Proceedings of the National Academy of Sciences
Article Title
Unlocking the benefits of transparent and reusable science for climate risk management
Article Publication Date
14-Jan-2026