Photos in this story: Shutterstock
Computer models play an increasingly important role in the search for solutions to the challenges facing society. That means one error or misunderstanding about a model can have big implications. A protocol with 22 requirements has been drawn up to help in the development, optimisation and maintenance of Wageningen’s computer models, in assessing their applicability and in getting a better picture of what investments are needed to keep the models up to date.
The news that an incorrect adjustment factor was used in Aerius, the model the National Institute for Public Health and the Environment (RIVM) uses to calculate nitrogen deposition and help decide on permits for the building industry and agriculture, caused a huge commotion in October 2022. Discrepancies ranging from 30 to 100 per cent had been found between Aerius’s calculations and physical measurements. RIVM director Wijnker admitted its quality assurance was not up to scratch. RIVM’s computer model was not built by Wageningen, but could the same thing happen with our models?
What is more, models that produce incorrect output damage trust in all computer models. At the same time, computer models are playing an increasingly significant role in the search for solutions to the challenges facing society, such as the nitrogen problems. It is therefore important for the quality of a computer model to be undisputed.
As the manager and auditor of the Quality Assurance of Models project, Geerten Hengeveld aims to make sure that problems of the kind seen with Aerius can’t happen with Wageningen’s computer models. That is quite a job, as models are by definition incomplete. Hengeveld: “A model is a simplified version of the real world. In other words, a model is an abstraction of reality that we try to describe systematically. Reality is more complex. And given that we can’t know everything, you could say a model is always wrong. But I think that is too simplistic. A model can be incredibly useful as long as you know what you can use it for and what not.”
‘A computer model is incomplete by definition because the real world is more complex’
Photo: Guy Ackermans
What makes a computer model so useful?
“A model is scientific know-how turned into mathematical formulas or rules, embedded in a shell of software. The rules and formulas are used to make an idea explicit. The model is merciless: if I start calculating and one variable increases slightly, the model will tell me how much impact that has.
“A nice example is the Ministry of Agriculture’s nitrogen map. For a long time, the debate was about general guidelines: a reduction in nitrogen of fifty per cent that applied to the entire country. The map showed very explicitly what these general figures meant for individual places. That creates clarity, but at the same time you need to know how much value should be attached to such a map — what assumptions it is based on and how much influence specific assumptions or policy choices have. The model gives pointers for the discussion about what is feasible and what not.”
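The “merciless” what-if calculation Hengeveld describes can be sketched in a few lines. The deposition function below is a hypothetical toy, not Aerius or any real WUR model; it only illustrates how a model makes the impact of one changed input explicit.

```python
# Toy illustration of a model as "formulas that make an idea explicit".
# The deposition function is invented for this sketch: proportional to
# emission, falling off with distance from the source.

def deposition(emission_kg: float, distance_m: float) -> float:
    """Hypothetical deposition estimate at a given distance from a source."""
    return emission_kg / (1.0 + 0.01 * distance_m)

base = deposition(100.0, 500.0)
perturbed = deposition(101.0, 500.0)  # one variable increases slightly
impact = perturbed - base             # the model states the impact exactly

print(f"base: {base:.3f}, perturbed: {perturbed:.3f}, impact: {impact:.3f}")
```

Whether that computed impact means anything in the real world is exactly the question the assumptions behind the formula have to answer.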
So a model is incomplete by definition. Given that, is it ever possible to build a good model?
“A good model has to satisfy a lot of criteria. To start with, you need to know what the purpose of the model is, what it is supposed to do. Then you need a clear-cut description of the assumptions underlying the model: what is important and what not. If that is documented, other people can assess whether the model is relevant for the application they have in mind. A model that is ‘good’ at calculating the influence of climate change on the rate of growth of forests worldwide might be of no use to you if you’re aiming to determine the amount of timber available from the Veluwe woodlands for new houses. Your model should also be consistent with the available data and field measurements.
“There are more criteria we could cite, each with its own reason as to why it is important. You also need to be aware that ‘a good model’ means different things to different groups of people. A model with scientifically impressive documentation is not necessarily applicable in a specific project.”
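The consistency-with-measurements criterion can be sketched as a simple comparison between model output and field observations, flagging sites where they diverge; this echoes the 30 to 100 per cent discrepancies found for Aerius. The values and the 30 per cent threshold below are made up for illustration.

```python
# Sketch of one consistency check: compare model outputs with field
# measurements and flag large relative discrepancies. All numbers are
# invented for this example.

def relative_discrepancy(modelled: float, measured: float) -> float:
    """Relative difference between a model output and a field measurement."""
    return abs(modelled - measured) / measured

# (model output, field measurement) at three hypothetical sites
pairs = [(12.0, 10.0), (8.0, 8.2), (25.0, 14.0)]
discrepancies = [relative_discrepancy(m, obs) for m, obs in pairs]

# Flag sites where model and measurement diverge by more than 30 per cent
flagged = [d for d in discrepancies if d > 0.30]
print(f"{len(flagged)} of {len(pairs)} sites exceed the 30% threshold")
```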
What is Wageningen doing to guarantee the quality of our models?
“Of course, we start from the premise that we want to deliver high-quality work. That’s why we have drawn up a protocol, a checklist of 22 requirements that every new computer model must satisfy. This checklist helps Wageningen’s model builders develop their models, assess the applicability of the models and get a better picture of what investments are needed to keep them up to date.
“The basis of the protocol is the Scientific Guide for the Good Use of Models from 1999, which was compiled by the Foundation for Applied Water Research (STOWA). This guide sets out how you can build water models that are sound in terms of their quality. At the time, Wageningen researchers drew up their own protocol on the basis of this report. I then used that to produce this new list.
“The idea behind our protocol is we want to show everyone this is a reliable model; that we know when you can use the model and when you can’t, and that you can read this for yourself. This protocol should give us a better picture of which models we need to invest in to improve and maintain the quality. Using the protocol makes a model builder aware of deficiencies or a lack of clarity in the descriptions and analyses. We also want to increase awareness among researchers and improve the way they communicate about the models with managers and clients, for example. Managers and clients aren’t interested in the high-level maths and abstract discussions.
“You could see it as a kind of exam the researchers need to take. If they pass the exam, the model gets a stamp of approval. As the auditors of the Quality Assurance of Models project, we read all the model documentation and descriptions meticulously and we check them against the requirements that a WUR model must satisfy. So I’m essentially the man with the stamp.”
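The “exam” idea amounts to a checklist audit: the stamp of approval only comes if every requirement is met. The sketch below uses requirement names paraphrased from this interview; the actual protocol has 22 items, which are not listed here.

```python
# Sketch of the audit as a pass/fail checklist. The five requirement names
# are paraphrases from the interview, standing in for the real 22-item list.

requirements = {
    "purpose and scope described": True,
    "assumptions documented": True,
    "sensitivity analysis performed": False,
    "uncertainty sources listed": True,
    "roles and responsibilities recorded": True,
}

def audit(checklist: dict) -> bool:
    """Stamp of approval only if every requirement is satisfied."""
    return all(checklist.values())

missing = [name for name, met in checklist.items()] if False else \
          [name for name, met in requirements.items() if not met]
print("Approved" if audit(requirements) else f"Not approved, missing: {missing}")
```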
‘The idea behind our protocol is to show everyone this is a reliable model’
It would be going too far to name all 22 requirements here. Can you explain in a nutshell what they are about?
“It’s essential to have a detailed description of the model. What is the purpose of the model, what can it be used for and what not? We also want to see an analysis of the model. How sensitive is it to variation in the input data, what sources of uncertainty are there in the model and underlying data? Another important aspect is a methodical approach. You should know and document who does what and who has what responsibilities.
“Communication with the model’s end users — public authorities or companies — is also very important. The end users need to know what the model can and can’t do and the limits within which the model is reliable. A good example from recent months is the request that Minister Rob Jetten made to the Authority for Consumers & Markets (ACM) to calculate what would be a reasonable margin for energy companies with respect to the price cap for electricity and gas. ACM refused the assignment, saying its computer model is not set up to answer this specific question posed by the minister.
“Documentation is also important because knowledge about the model shouldn’t depend on one person. That would mean everything coming to an end if that person were to suddenly leave the organisation.”
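The sensitivity-to-input-variation requirement mentioned above can be sketched as a Monte Carlo run: perturb the inputs of a model many times and look at the spread in the output. The toy model and the ±10 per cent input variation below are illustrative assumptions, not part of the WUR protocol.

```python
# Sketch of a sensitivity analysis: run a toy model with randomly perturbed
# inputs and report the relative spread in the output. The model form and
# the +/-10% input range are assumptions made for this example.
import random
import statistics

def toy_model(x: float) -> float:
    """Hypothetical model: output grows nonlinearly with the input."""
    return 2.0 * x ** 1.5

random.seed(42)  # fixed seed so the run is reproducible
nominal = 10.0
outputs = [toy_model(nominal * random.uniform(0.9, 1.1)) for _ in range(1000)]

spread = statistics.stdev(outputs) / statistics.mean(outputs)
print(f"+/-10% input variation gives ~{spread:.1%} relative output spread")
```

A nonlinear model amplifies input variation, which is precisely why the protocol asks for this analysis to be documented rather than assumed.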
The protocol says nothing about data security. Shouldn’t that also be an aspect of a good model? Isn’t it essential to make sure the data the model is based on can’t end up inadvertently in the public domain?
“That’s true enough. When this list was drawn up in 1999, data security was not yet a prominent topic, but now it is a major issue. Fortunately, the list is not set in stone; it is still very much evolving. Data security will undoubtedly be one of the requirements in any new version. It’s not a perfect list, but it does offer pointers and a basis for discussions. Ultimately, I hope every model builder will walk out of the door with the stamp of approval for their model.”
Project: Task 3 Quality management
Team: Cheng Liu, Janien van der Greft, George van Voorn, Rogier Pouwels, Ab Veldhuizen, Sabine Schnabel, Joao Paulo, Peter Hobbelen, Peter Verweij, André Bannink, Co Daatselaar, Vincent Hin, Chen Chun