Climate Model Benchmarking for CMIP7 – A CMIP Task Team
The goal of the Coupled Model Intercomparison Project (CMIP) is to better understand past, present, and future climate changes in a multi-model context. The CMIP panel is seeking to identify ways to increase the project's scientific and societal relevance, improve accessibility, and widen participation. An important prerequisite for providing reliable climate information from climate and Earth system models is to understand their capabilities and limitations. Thus, systematically and comprehensively evaluating the models against the best available observations and reanalysis data is essential. For CMIP7, new evaluation challenges stemming from models with higher resolution and enhanced complexity need to be rigorously addressed. These challenges are both technical (e.g., memory limits, increasingly unstructured and regional grids) and scientific. In particular, innovative diagnostics, including support for machine-learning-based analysis of CMIP simulations, must be developed.

The CMIP Climate Model Benchmarking Task Team aims to provide a vision and concrete guidance for establishing a systematic, open, and rapid performance assessment of the expected large number of models participating in CMIP7, including a variety of informative diagnostics and performance metrics. The goal is to fully integrate evaluation tools into the CMIP publication workflow and to publish their diagnostic outputs alongside the model output on the Earth System Grid Federation (ESGF), ideally displayed through an easily accessible website. To accomplish this goal, we have designed a Rapid Evaluation Framework (REF) that would employ existing model benchmarking capabilities from open-source software packages. Once developed, the REF will evaluate and benchmark climate model output as it is published to the ESGF, and it will readily incorporate updates, including new observations and additional diagnostics and metrics, as they become available from the research community.
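As a minimal illustration of the kind of performance metric such a framework could compute automatically for each newly published dataset, the Python sketch below calculates an area-weighted root-mean-square error of a model field against a reference dataset using xarray. The file names, the variable name "tas", and the dimension names are hypothetical placeholders for this example, not part of the REF specification.

    # Sketch of one benchmarking metric: area-weighted RMSE of a model
    # field against an observational reference on a common lat-lon grid.
    # File names, variable name, and dimension names are illustrative
    # assumptions, not the REF's actual interface.
    import numpy as np
    import xarray as xr

    def area_weighted_rmse(model: xr.DataArray, reference: xr.DataArray) -> float:
        """Latitude-weighted RMSE between two fields on the same grid."""
        # cos(latitude) approximates relative grid-cell area on a regular grid
        weights = np.cos(np.deg2rad(model["lat"]))
        squared_error = (model - reference) ** 2
        return float(np.sqrt(squared_error.weighted(weights).mean(("lat", "lon"))))

    if __name__ == "__main__":
        # Hypothetical inputs: model output and observations, time-averaged
        # and regridded to the same grid beforehand.
        model_tas = xr.open_dataset("model_tas.nc")["tas"].mean("time")
        obs_tas = xr.open_dataset("obs_tas.nc")["tas"].mean("time")
        print(f"Area-weighted RMSE: {area_weighted_rmse(model_tas, obs_tas):.2f} K")

In practice, existing evaluation packages already provide such metrics together with the required preprocessing (regridding, masking, observational uncertainty), which is why the REF is designed to reuse them rather than reimplement the calculations.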