Systematic over-crediting of forest carbon offsets in California

By Grayson Badgley, Jeremy Freeman, Joseph Hamman, Barbara Haya, Anna Trugman, William R L Anderegg, & Danny Cullenward [Excerpts]

Carbon offsets are widely used by individuals, corporations, and governments to mitigate their greenhouse gas emissions. Because offsets effectively allow pollution to continue, however, they must reflect real climate benefits.

To better understand whether these climate claims hold up in practice, we performed a comprehensive evaluation of California’s forest carbon offsets program — the largest such program in existence, worth more than $2 billion. Our analysis of crediting errors demonstrates that a large fraction of the credits in the program do not reflect real climate benefits. The scale of the problem is enormous: 29% of the offsets we analyzed are over-credited, totaling 30 million tCO₂e worth approximately $410 million.


Over-crediting percent: 29.4% (90% CI: 20.1–37.8%)

Over-crediting value: $410M (90% CI: $280–528M)

Analyzed credits: 102M tCO₂e

Over-crediting: 30M tCO₂e (90% CI: 20–39M)


This article provides an overview of how we identified crediting errors in California’s offsets program. For a deeper dive on our methods and analysis, you can read our preprint. To better understand its implications, you can read a story by Lisa Song (ProPublica) and James Temple (MIT Technology Review) that covers and contextualizes our findings. Finally, you can browse an interactive online map of the projects we analyzed, or download the open-source data and code that underlie our analysis.



Carbon offset programs issue credits to projects that purport to avoid greenhouse gas emissions or remove carbon dioxide from the atmosphere. For example, an oil refinery that is subject to an emissions limit might purchase an offset credit issued to a forest owner who agrees to reduce or delay a timber harvest. The refinery can then pollute more, claiming the avoided forest emissions as compensation.

California’s offsets program plays a central role in the state’s prominent cap-and-trade program. While it is open to many kinds of offset projects, most credits come from forest projects, which can take place anywhere in the continental US and southern Alaska. You might think these projects involve growing new forests, but the vast majority instead involve a practice called “improved forest management” (IFM). An IFM project claims to increase forest carbon storage through changes in existing forest management practices, such as increasing the length of timber harvest rotations.








We analyzed 65 IFM projects for which sufficient public data were available, totaling 102 million upfront IFM credits. These represent about two-thirds of all forest offsets, and about half of California’s entire offsets program.

The critical aspect of California’s forest offsets program is that it awards large volumes of credits at the start of a project when carbon stocks exceed regional averages. These “upfront” credits to IFM projects are responsible for more than half of the total carbon offsets program, and more than two-thirds of all forest credits.

How are these credits awarded? Projects provide “baseline” scenarios that are meant to represent the average amount of carbon that would remain under a typical harvest scenario. The difference between the initial carbon and this baseline determines the credits awarded. To prevent unrealistic baseline scenarios, the protocol requires that the average carbon stored in a baseline scenario stays above a value called “common practice,” which is defined by the average regional carbon stocks from putatively similar forest types.
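The crediting rule described above can be sketched in a few lines. This is an illustrative simplification of the protocol, not its actual implementation, and all numbers are invented placeholders rather than values from any real project:

```python
# Hypothetical sketch of the upfront IFM crediting rule described above.
# All numbers are illustrative placeholders, not from any real project.

def upfront_credits(initial_carbon, baseline_avg, common_practice, acres):
    """Upfront credits: initial carbon minus the baseline average,
    where the baseline average may not fall below common practice.
    Carbon values are in tCO2e per acre."""
    effective_baseline = max(baseline_avg, common_practice)
    return (initial_carbon - effective_baseline) * acres

# A project holding 120 tCO2e/acre against a common-practice floor of
# 60 tCO2e/acre earns credits on the 60 tCO2e/acre difference, even if
# its proposed baseline was lower.
print(upfront_credits(initial_carbon=120, baseline_avg=50,
                      common_practice=60, acres=10_000))  # 600000
```

The `max()` floor is the key mechanism: because most baselines sit at or near the common-practice value, a common-practice number that is set too low directly inflates the credited difference.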


Erroneously low estimates of common practice can therefore lead to over-crediting. As it happens, about 90% of projects report baseline averages that are equal to, or within just 5% of, the minimum allowed common practice value. Crediting is thus determined almost entirely by the value of common practice: if it is set too low, projects receive excess credits that do not reflect real climate benefits.


Analysis of crediting errors

To investigate potential crediting errors, we developed a novel dataset by digitizing public offset project records, most of which currently exist only as PDFs and, to our knowledge, have never been comprehensively analyzed. These records contain each forest project’s tree species composition.

To test the integrity of California’s program, we asked how well each project’s common practice number represents typical carbon stocks for similar forests. As explained further below, the California program uses broad, regional averages that fail to distinguish between species. In contrast, our new database allowed us to estimate typical carbon outcomes across only those forests with similar species. If common practice is set too low, that implies over-crediting; and if it is too high, that implies under-crediting.

We found evidence that the vast majority of projects were over-credited: for these projects, common practice values are systematically low because they reflect averages based on dissimilar species types. As a result, projects received more credits than they would have under a more ecologically accurate and robust definition of common practice.

For example, in the “Southern Cascades” region of California, the common practice numbers used in the program average together temperate, carbon-dense forest types like Douglas-fir and tanoak with less carbon-dense forest types that occupy more arid niches, like ponderosa pine.

Comparing project carbon against this average causes projects like ACR189, which is located in Northern California and is composed primarily of Douglas-fir (26% of basal area) and tanoak (49% of basal area), to receive substantial credits simply due to a mismatch between the species in the project and the species included in the regional average. If we instead compare ACR189 against Douglas-fir and tanoak, a more ecologically robust comparison, we find the project was over-credited by 50.1% of its total credits.
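A species-specific comparison like the one above can be sketched as a basal-area-weighted average of each species’ typical carbon stock. The carbon densities below are invented placeholders for illustration, not FIA values, and the function is a simplified stand-in for the estimation described in our preprint:

```python
# Illustrative sketch of a species-weighted common practice estimate.
# Carbon densities here are invented placeholders, not FIA values.

def species_weighted_cp(basal_area_share, carbon_by_species):
    """Weight each species' typical carbon stock (tCO2e/acre) by its
    share of the project's basal area (shares assumed to sum to 1)."""
    return sum(share * carbon_by_species[sp]
               for sp, share in basal_area_share.items())

# ACR189-style composition: Douglas-fir 26%, tanoak 49%, other 25%.
shares = {"douglas_fir": 0.26, "tanoak": 0.49, "other": 0.25}
carbon = {"douglas_fir": 110.0, "tanoak": 95.0, "other": 60.0}  # hypothetical

print(species_weighted_cp(shares, carbon))
```

Weighting by the project’s own species mix, rather than a broad regional average, is what prevents carbon-dense projects from being compared against arid, low-carbon forest types.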

But ACR189 wasn’t an exception. We found this same pattern over and over again. To quantify these errors systematically, we replaced projects’ common practice numbers with an independent, species-specific estimate. We then used the protocol rules to recalculate how many credits each project should have received using this more ecologically robust approach. (We also checked to make sure we could accurately reproduce the most recent common practice numbers and the number of credits projects actually received, in order to be confident in our ability to estimate any over- or under-crediting.)

Our analysis relied on the digitized project records described above, as well as public data from the US Forest Service Forest Inventory and Analysis (FIA) program and the open-source rFIA package. Our methods are described in detail in our preprint, and all of the code and additional data underlying our analysis are open source and fully reproducible.

Across the program as a whole, we estimate net over-crediting of 30 million tCO₂e total (90% CI: 20.5 to 38.6 million tCO₂e) or 29.4% of the credits we analyzed (90% CI: 20.1 to 37.8%). At recent market prices of $13.67 per offset credit, these excess credits are worth $410 million (90% CI: $280 to $528 million) — and likely more, as market prices would rise if market regulators took steps to correct for over-crediting.
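The headline dollar figure follows directly from the volume and price above; a quick arithmetic check:

```python
# Check of the headline value figure: over-credited volume times price.
over_credited_tco2e = 30_000_000   # central estimate from the analysis
price_per_credit = 13.67           # recent market price, USD per credit

value = over_credited_tco2e * price_per_credit
print(f"${value / 1e6:.0f} million")  # ~$410 million
```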

A key feature of our study is that it does not depend on counterfactual analysis. In general, offsets must reflect “additional” climate benefits above and beyond what is expected under business-as-usual conditions. Claims about the additionality of entire projects are important to consider but difficult to evaluate quantitatively because counterfactual scenarios cannot be observed directly. In contrast, our analysis uses revealed program outcomes to directly estimate crediting errors.


The problem with averages

The fundamental challenge with awarding upfront offset credits lies in defining an ecologically robust point of comparison. The California program aggregates forest data across both species and geographic regions, creating ecologically inappropriate points of comparison for awarding most of the credits in the program. Our results suggest that these protocol incentives have led to widespread “adverse selection”, with projects preferentially located in forests where carbon stocks naturally exceed those coarse, regional averages.

It’s important to note that while some of the problematic outcomes we document likely reflect intentional strategies to take advantage of poorly designed program rules, our results don’t assume or depend on market participants’ intentions. It doesn’t matter whether landowners or project developers intended to take advantage of these rules, or simply benefited from them without awareness.

While our analysis is critical and the results are disappointing, we believe forward progress only begins by understanding our mistakes — so that we can do better in the future.


Why open science

Our approach to both conducting and releasing this work fully embraces the growing trend toward open science, which differs in important ways from traditional academic publishing.

We are sharing our work now, as a preprint, rather than waiting months or years for publication in a peer-reviewed journal. We are taking this approach both to address an urgent set of policy-relevant concerns and so that we can discuss these issues in the open, rather than behind closed academic doors.

In order to bring this discussion into the open right away, we are making all of our materials fully public and reproducible: the digitized project database, all additional data used throughout our analysis, code to generate figures from those data, and the complete code base used for our analysis. At any point now or in the future, the entire community is welcome to review our work on its merits. We look forward to further improving our analysis based on the criticisms and collaborations that come from open science. If you have feedback, please open an issue on our GitHub repository.

We are also committed to subjecting our work to critical review from our peers. We have incorporated review in several ways prior to public release and are taking additional steps moving forward. First, we implemented our own round of independent review with domain experts. Second, we shared early versions of our work with two journalists who, in the process of developing their own story, asked independent researchers for comment and asked us questions raised by feedback from affected parties — including the California Air Resources Board, as well as nonprofit organizations, project developers, and individual landowners that participate in the program. Although an adversarial review process involving program beneficiaries and conducted with journalists as intermediaries is different from a traditional journal publication process, we believe its quality control was just as good if not better. Finally, we have also submitted our work to an academic journal and are committed to seeing it subjected to that form of peer review.

It is precisely on issues of such critical importance to the public where we believe this open, transparent approach to science and government accountability matters most.



Grayson, Danny, Jeremy, and Joe designed the research; Grayson digitized the project report data; Grayson, Danny, Jeremy, Joe, and Barbara performed the research and analyzed the data; all authors contributed to interpreting the results and writing the paper. Jeremy developed the interactive graphics with input from Jonny Black of Ordinary Things.


A version of this work is currently under peer review and is available now via the following preprint:


G Badgley, J Freeman, J Hamman, B Haya, A T Trugman, W R L Anderegg, D Cullenward (2021) “Systematic over-crediting in California’s forest carbon offsets program” bioRxiv doi: 10.1101/2021.04.28.441870


Please cite this web article as: G Badgley, J Freeman, J Hamman, B Haya, A T Trugman, W R L Anderegg, D Cullenward (2021) “Systematic over-crediting of forest offsets” CarbonPlan



CarbonPlan received a grant from Microsoft AI for Earth to support the portion of work that involved digitization of offset project records. All other research design and data analysis was performed using CarbonPlan’s unrestricted funding. No one except the authors of the paper exercised control over the research process or products. The authors are solely responsible for the content of this write-up, which does not reflect the views of any other individuals or organizations.

Grayson Badgley is a Postdoctoral Scientist at Black Rock Forest and Columbia University, Barbara Haya is a Research Fellow and Director of the Berkeley Carbon Trading Project at UC Berkeley, Anna Trugman is a professor at UC Santa Barbara, and William R.L. Anderegg is a professor at the University of Utah.

