Max-Planck Institute for Meteorology, Hamburg, Germany
23-24 February 2017

Workshop on Connecting Climate Model Evaluation to Assessing Fitness for Purpose

Funding/Sponsorship

This meeting was sponsored by the World Climate Research Programme (WCRP), which supported the travel of developing-country scientists; the IPCC Working Group I Technical Support Unit, which provided organizational assistance; and the Max-Planck Institute for Meteorology, which provided the meeting space, coffee/tea breaks, and the group dinner.


Purpose of the meeting

This meeting was organized to bring together a small but varied group of scientists to discuss how model evaluation informs us about a model’s fitness for a particular application (for example, detection and attribution of climate change, seasonal-to-decadal climate prediction, long-term climate projection, or informing climate impact studies). The fundamental issue is that evaluation of climate models against historical and paleoclimate observations does not directly tell us about the quality, skill, or trustworthiness of a particular model in the applications listed above. This issue arose very clearly in the preparation of the Working Group I (WG-I) contribution to the IPCC Fifth Assessment Report (AR5) and was reiterated in the WCRP/IPCC ‘Lessons Learnt’ workshop [https://www.wcrp-climate.org/ipcc-wcrp-about]. The objective of the meeting was to discuss the issue in depth, to assess ways in which model evaluation can be better linked to model fitness, and to provide guidance for near-term research.

Conduct of the meeting

After a welcome from the host (J. Marotzke), there were three introductory talks: an overview of the meeting objectives (G. Flato); a presentation on the Coupled Model Intercomparison Project (V. Eyring); and a presentation on philosophical issues around ‘fitness-for-purpose’ (W. Parker). The remainder of the meeting was organized around five sessions, each of which began with a series of short introductory talks (roughly 5-10 minutes each) followed by extensive discussion. The sessions focused on the following topics:

  • Limitations in assessing model reliability
  • Different evaluation approaches and the insights they provide
  • Emergent constraints: what they are and what power they offer
  • Bias correction, model weighting, and issues of model independence
  • Model tuning and how it might undermine aspects of model evaluation

The meeting concluded with a discussion of how model evaluation and model performance metrics are connected to ‘fitness-for-purpose’, avenues and opportunities for new research, and knowledge gaps that might be filled.

Meeting participants agreed to prepare a paper summarizing the outcomes of the meeting with the intent of submitting it to a peer-reviewed journal.

