Overfitting and Uncertainty in the Presence of Model Structural Error

Project Type
University Grant

The current standard method for evaluating a new physical parameterization in a global atmospheric model reuses much of the same observational data both to train the parameterization and to test it. Reusing training data for testing favors more complex parameterizations and is prone to overfitting.

To avoid reusing training data for testing, this project will use cross-validation, a resampling method that repeatedly draws a subset of sample points for tuning and sets the rest aside for testing. Because each resampling step requires re-tuning the model, cross-validation demands fast re-tuning; this will be accomplished with an approximate tuner (“QuadTune”) developed by the PIs. A minimal sketch of the procedure appears below.
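
The sketch below illustrates cross-validated tuning under stated assumptions: the quadratic emulator, the synthetic metrics, and the names `emulate` and `tune` are illustrative stand-ins invented here, since the page describes QuadTune only as a fast approximate tuner.

```python
# Sketch of k-fold cross-validation for parameterization tuning.
# Hypothetical setup: a quadratic emulator stands in for QuadTune.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic setup: 40 regional observational metrics, 3 tunable parameters.
n_metrics, n_params = 40, 3
A = rng.normal(size=(n_metrics, n_params))         # linear sensitivities
B = 0.1 * rng.normal(size=(n_metrics, n_params))   # quadratic sensitivities

def emulate(p):
    """Quadratic emulator of the model's regional metrics (stand-in for QuadTune)."""
    return A @ p + B @ p**2

# "Observations": the emulator at a hidden parameter set, plus structural
# error that no parameter choice can remove.
p_true = np.array([0.5, -1.0, 0.2])
obs = emulate(p_true) + 0.3 * rng.normal(size=n_metrics)

def tune(train_idx):
    """Re-tune the parameters against the training metrics only."""
    residual = lambda p: emulate(p)[train_idx] - obs[train_idx]
    return least_squares(residual, x0=np.zeros(n_params)).x

# k-fold cross-validation: tune on k-1 folds, score on the held-out fold.
k = 5
folds = np.array_split(rng.permutation(n_metrics), k)
in_sq, out_sq = [], []
for i in range(k):
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    p_hat = tune(train_idx)
    in_sq.extend((emulate(p_hat)[train_idx] - obs[train_idx])**2)
    out_sq.extend((emulate(p_hat)[folds[i]] - obs[folds[i]])**2)

# An out-of-sample RMSE that exceeds the in-sample RMSE indicates overfitting.
print("in-sample RMSE     :", np.sqrt(np.mean(in_sq)))
print("out-of-sample RMSE :", np.sqrt(np.mean(out_sq)))
```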

Objectives

  1. Calculate (out-of-sample) present-day prediction error and use it to assess overfitting.
  2. Develop an improved method to evaluate new parameterizations.
  3. Explore the relationship between present-day prediction error and cloud feedback strength. 

Methods

  1. Cross-validation is used to estimate the out-of-sample prediction error of a global atmospheric model.
  2. Monte Carlo integration over the parameter space is used to assess the relationship between prediction error and cloud feedback strength (see the sketch after this list).
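
As an illustration of the second method, the sketch below performs Monte Carlo integration over a toy parameter space. The `prediction_error` and `cloud_feedback` functions are hypothetical surrogates invented for this example; the page specifies only the integration method, not these forms.

```python
# Sketch of Monte Carlo integration over parameter space, relating
# present-day prediction error to cloud feedback strength.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_params = 10_000, 3

# Draw parameter vectors from a uniform prior over a plausible range.
theta = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))

def prediction_error(theta):
    """Hypothetical stand-in for present-day (out-of-sample) prediction error."""
    return np.sum((theta - 0.3)**2, axis=1) + 0.05 * rng.normal(size=len(theta))

def cloud_feedback(theta):
    """Hypothetical stand-in for cloud feedback strength (W m^-2 K^-1)."""
    return 0.4 + 0.6 * theta[:, 0] - 0.2 * theta[:, 1] + 0.05 * rng.normal(size=len(theta))

err = prediction_error(theta)
fb = cloud_feedback(theta)

# Monte Carlo estimates: spread of feedback over all samples, versus its
# spread among parameter sets whose present-day error is small.
good = err < np.quantile(err, 0.1)
print("feedback spread, all samples      :", fb.std())
print("feedback spread, low-error samples:", fb[good].std())
print("correlation(error, feedback)      :", np.corrcoef(err, fb)[0, 1])
```

If constraining present-day error narrows the spread of feedback across sampled parameter sets, prediction error carries information about cloud feedback strength; if not, structural error dominates.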

Impact and Benefits

The project is expected to improve understanding of how model structural error and overfitting affect uncertainty in cloud feedback strength, and to yield an improved method for evaluating atmospheric parameterizations.