2024-05-17 12:11:23

When it comes to carbon, the key to understanding opportunities and risks is access to consistent, comprehensive and credible data, writes Kevin Birn
Carbon intensity has emerged as a new metric of competitiveness. The interest is driven by the belief that, as the energy transition unfolds, more carbon-intensive assets, companies or commodities could be more exposed to faster-than-expected changes in demand, or face new costs from incremental climate policy.
Companies and their investors are keenly interested in how they can benefit from this new potential competitive framework, as well as understanding the potential risks. Key to understanding the opportunities and risks is access to consistent, comprehensive and credible data.
Companies have been responding to greater interest in these data from stakeholders and regulators with increasing levels of emissions disclosure. But inconsistencies in disclosures, and the carbon accounting frameworks that underpin them, continue to limit the comparability of these data, even when comparing within the same industry.
Common sources of inconsistency go beyond scope 1 and scope 2 definitions. They also include how certain emissions are estimated and which emission factors are used (if any), how co-products are treated, and the units used to present the information, with different units not always readily convertible and therefore not directly comparable.
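To illustrate the unit problem: converting an intensity reported in gCO2e/MJ into kgCO2e per barrel requires an assumed energy content for the barrel, which varies by crude. The sketch below is illustrative only; the energy-content figures are placeholder assumptions, not official values.

```python
# Why converting between disclosed units requires assumptions: the
# gCO2e/MJ -> kgCO2e/bbl conversion depends on an assumed energy content.
MJ_PER_BARREL = 6_100  # assumed energy content of one barrel of crude (illustrative)

def g_per_mj_to_kg_per_bbl(intensity_g_per_mj: float,
                           mj_per_barrel: float = MJ_PER_BARREL) -> float:
    """Convert gCO2e/MJ to kgCO2e/bbl under an assumed energy content."""
    return intensity_g_per_mj * mj_per_barrel / 1_000.0

# Two disclosures reporting the "same" 10 gCO2e/MJ imply different
# per-barrel figures if they assume different energy contents.
print(g_per_mj_to_kg_per_bbl(10.0))         # 61.0 kg/bbl
print(g_per_mj_to_kg_per_bbl(10.0, 5_800))  # 58.0 kg/bbl
```

The point is not the arithmetic but the hidden assumption: without knowing the energy content each reporter used, the two numbers cannot be reliably compared.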
The absence of a consistent framework continues to limit the utility of disclosure, leading to an increasing call for greater information. The irony is that more information may only exacerbate the confusion.
This challenge is widely recognized, from financial companies through to large industrial emitters.

At CERAWeek 2024, one of the themes that emerged around decarbonization and carbon markets was the need for a harmonized emissions accounting framework to support greater comparability across estimates. This is an active area within S&P Global Commodity Insights, which has been sharing the detailed methods behind its emissions work to ensure market participants can understand and compare against our growing array of large, standardized datasets.
Data quality is another source of inconsistencies between estimates that warrants more attention. For the market to be able to incorporate carbon into business decisions, the information must be believed to be accurate or representative.
A greenhouse gas (GHG) emissions estimate or claim is often based on a spectrum of information, from observed or metered fuel or energy use to complex models and formulas based on expected or historical performance. Given the array of information often required, some assumptions are almost inevitable. For users of these data, however, it is rarely clear what level of assumption underlies each estimate, or how those assumptions affect the reliability of the result.
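The spectrum from metered data to modeled assumptions can be made concrete with a simple sketch. The two functions below estimate the same combustion emission two ways; the emission factor and load factor are placeholder assumptions for illustration, not official values.

```python
# Illustrative only: two ways the same combustion emission might be estimated.
DIESEL_KG_CO2_PER_LITRE = 2.68  # assumed emission factor (illustrative)

def from_metered_fuel(litres_diesel: float) -> float:
    """Bottom-up: metered fuel use times an emission factor (kg CO2)."""
    return litres_diesel * DIESEL_KG_CO2_PER_LITRE

def from_modeled_runtime(rated_litres_per_hour: float, hours: float,
                         load_factor: float = 0.75) -> float:
    """Top-down proxy: fuel use inferred from equipment runtime.

    The load factor is an assumption; a reader of the resulting estimate
    has no way to see how much of the number rests on it.
    """
    return rated_litres_per_hour * hours * load_factor * DIESEL_KG_CO2_PER_LITRE

metered = from_metered_fuel(10_000)                               # 26,800 kg
modeled = from_modeled_runtime(rated_litres_per_hour=50, hours=300)  # 30,150 kg
```

Both numbers would typically be disclosed as a single emissions figure, with nothing to signal that one rests on a meter reading and the other on an assumed load factor.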
Given sufficiently high-quality data, highly reliable estimates can be made. Today, however, emissions estimates exhibit a wide range of rigor, and users of these data have limited ability to distinguish the differences. This erodes confidence in these numbers, and their utility.

Consider the scatter plot of data above, contrasting S&P Global Commodity Insights' assessment of the quality of a number of our own upstream GHG estimates against the corresponding carbon intensity estimates. Most of the discussion today about carbon intensity has been one-dimensional: better versus worse, high versus low.
The introduction of data quality adds another dimension and introduces trade-offs. Highly reliable estimates of lower-carbon commodities or assets are desirable in all cases (bottom left in the figure). But if a carbon intensity estimate is low yet unreliable, is it still more desirable? Contrast this dilemma with the market today, where there is limited ability to differentiate.
If the market is to act upon GHG emissions data, those data must be believed to be accurate or sufficiently representative. This requires the ability to systematically communicate and track quality. The inability to distinguish quality limits users' ability to assign value to the relative effort behind different estimates.
For companies reporting their GHG emissions, the ability to distinguish rigor is critical to support investments in estimation improvement, such as projects to measure and monitor methane. This is a particularly important point when the outcome of investments is uncertain and could result in upward revision of emissions.
The principles of data and estimate quality are well documented throughout the GHG estimation and life-cycle analysis literature. Most of that work has focused on self-assessment to identify areas for improvement. However, the need today is to be able to compare across estimates.
To this end, S&P Global worked with the US Department of Energy National Energy Technology Laboratory to develop a means to communicate data quality. The result was the Data Quality Metric (DQM).
The initial DQM was developed with the oil and gas sector in mind. However, the concept of the DQM has broad applicability to GHG estimation and reporting with the appropriate sectoral guidance. The Data Quality Metric was first proposed in a 2022 special report by S&P Global Commodity Insights titled “The Right Measure: A guidebook for life-cycle GHG estimation of crude oil.”
The DQM assesses quality along two variables, reliability and representativeness, using a five-by-five pedigree matrix. Reliability is the degree to which an estimate can be depended on to be accurate (for example, the comprehensiveness of the underlying data), while representativeness is the degree to which an estimate can be expected to reflect reality, or how well the data represent the asset or assets in question.
The system can be scaled from an individual flow, through an asset, and even across a value chain. Throughout, the DQM delivers a consistent two-letter grade assessment. In this way, the communication is consistent across companies, sectors and even value chains, making it easily understood.
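A two-axis pedigree score rolling up to a two-letter grade can be sketched as follows. This is a hypothetical illustration only: the actual DQM criteria and aggregation rules are defined in S&P Global's methodology, and the 1-5 scoring, letter mapping and worst-score roll-up below are assumptions made for the example.

```python
# Hypothetical sketch of a DQM-style two-letter grade (not the official method).
from dataclasses import dataclass

LETTERS = "ABCDE"  # one letter per axis of the 5x5 pedigree matrix, A = best

@dataclass
class PedigreeScore:
    reliability: int          # 1 (best) .. 5 (worst)
    representativeness: int   # 1 (best) .. 5 (worst)

    def grade(self) -> str:
        """Return a two-letter grade such as 'AB'."""
        for s in (self.reliability, self.representativeness):
            if not 1 <= s <= 5:
                raise ValueError("pedigree scores must be 1-5")
        return LETTERS[self.reliability - 1] + LETTERS[self.representativeness - 1]

def roll_up(flows: list[PedigreeScore]) -> PedigreeScore:
    """Aggregate flow-level scores to an asset-level score.

    A conservative choice (assumed here): the asset inherits the worst
    score on each axis among its constituent flows.
    """
    return PedigreeScore(
        reliability=max(f.reliability for f in flows),
        representativeness=max(f.representativeness for f in flows),
    )

# Example: metered fuel use (strong) combined with modeled venting (weaker).
metered = PedigreeScore(reliability=1, representativeness=2)
modeled = PedigreeScore(reliability=4, representativeness=3)
print(roll_up([metered, modeled]).grade())  # -> "DC"
```

The same grade format applies whether the score describes a single flow or a whole value chain, which is what makes it easy to communicate.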
The DQM is far from a perfect solution; more rigorous means of assessing quality exist. It does, however, strike a balance between granularity and ease of communication. Over time, the DQM will evolve as the rigor of estimates improves, and so will our estimation methods. S&P Global is currently deploying the DQM at scale against its own estimates.
The aim is to provide a data quality assessment with each carbon intensity estimate. We are also seeing some traction in the market with this approach. Frameworks similar to the DQM can be found as part of the Open-Hydrogen Initiative, and the advancing US Department of Energy Greenhouse Gas Supply Chain Emissions Measurement, Monitoring, Reporting, Verification Framework.
If the market is going to transact on carbon, then it needs to have a consistent framework to compare, contrast and ultimately make choices. Quality needs to be part of that framework.
© IHS Global, Inc.