A framework of benchmarking land models

Author(s): Y. Q. Luo | J. Randerson | G. Abramowitz | C. Bacour | E. Blyth | N. Carvalhais | P. Ciais | D. Dalmonech | J. Fisher | R. Fisher | P. Friedlingstein | K. Hibbard | F. Hoffman | D. Huntzinger | C. D. Jones | C. Koven | D. Lawrence | D. J. Li | M. Mahecha | S. L. Niu | R. Norby | S. L. Piao | X. Qi | P. Peylin | I. C. Prentice | W. Riley | M. Reichstein | C. Schwalm | Y. P. Wang | J. Y. Xia | S. Zaehle | X. H. Zhou

Journal: Biogeosciences Discussions
ISSN: 1810-6277
Volume: 9
Issue: 2
Start page: 1899
Date: 2012

ABSTRACT
Land models, developed by the modeling community over the past two decades to predict future states of ecosystems and climate, must be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure for measuring and evaluating model performance against a set of defined standards. This paper proposes a benchmarking framework for the evaluation of land models. The framework includes (1) targeted aspects of model performance to be evaluated; (2) a set of benchmarks as defined references against which to test model performance; (3) metrics to measure and compare skill among models so as to identify model strengths and deficiencies; and (4) model improvement. Component 4 may or may not be part of a benchmark analysis but is an ultimate goal of modeling research in general. Land models are required to simulate the exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics across timescales in response to both weather and climate change. Benchmarks used to evaluate models generally consist of direct observations, data-model products, and data-derived patterns and relationships. Metrics for measuring mismatches between models and benchmarks may include (1) a priori thresholds of acceptable model performance and (2) a scoring system that combines data-model mismatches for various processes at different temporal and spatial scales. Benchmark analyses should identify clues to weak model performance to guide future improvement. Iteration between model evaluation and improvement via benchmarking should demonstrate progress in land modeling and help establish confidence in land models' predictions of future states of ecosystems and climate.
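
As an illustration of component (3), the Python sketch below shows one way a scoring system might work: normalize a data-model mismatch per variable, check it against an a priori threshold, and combine per-variable scores into a single skill score. The variable names, weights, thresholds, and synthetic data are hypothetical assumptions for illustration only; this is a minimal example of the general idea, not the metric defined in the paper.

import numpy as np

def mismatch_score(model, obs, threshold):
    """Normalized RMSE between model output and a benchmark (e.g., observations),
    mapped to a 0-1 skill score; also flags whether it meets an a priori threshold."""
    rmse = np.sqrt(np.mean((np.asarray(model) - np.asarray(obs)) ** 2))
    nrmse = rmse / (np.std(obs) + 1e-12)   # normalize by observed variability
    score = np.exp(-nrmse)                 # 1 = perfect match, approaches 0 as mismatch grows
    return score, nrmse <= threshold

def combined_score(per_variable_scores, weights):
    """Weighted aggregate of per-variable scores (e.g., GPP, latent heat)
    into a single model skill score."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(per_variable_scores, dtype=float)
    return float(np.sum(w * s) / np.sum(w))

# Example with synthetic data for two hypothetical variables
obs_gpp = np.random.normal(5.0, 1.0, 120)          # stand-in for a monthly GPP benchmark
mod_gpp = obs_gpp + np.random.normal(0.0, 0.5, 120)
obs_le = np.random.normal(80.0, 15.0, 120)          # stand-in for a monthly latent heat benchmark
mod_le = obs_le + np.random.normal(5.0, 10.0, 120)

s_gpp, ok_gpp = mismatch_score(mod_gpp, obs_gpp, threshold=1.0)
s_le, ok_le = mismatch_score(mod_le, obs_le, threshold=1.0)
overall = combined_score([s_gpp, s_le], weights=[0.6, 0.4])
print(f"GPP score {s_gpp:.2f} (within threshold: {ok_gpp}), "
      f"LE score {s_le:.2f} (within threshold: {ok_le}), overall {overall:.2f}")

In practice, the choice of normalization, weights, and aggregation across temporal and spatial scales is itself part of the framework design and would need to be justified for the processes being benchmarked.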