
Predicting and shaping the future: models - looking back and planning ahead

J.W. Bowden

WA Department of Agriculture, Baron-Hay Court, South Perth WA 6151

Introduction

Agronomists should now, more than ever before, use modelling as a necessary tool of their trade, in much the way that biometry has been used in the past. This paper will focus on issues important to agronomists rather than dwell on the problems facing full-time modellers today. Why have agronomists largely ignored this obviously valuable tool, and why are funding bodies often loath to support modelling today? But first, how is the term 'modelling' used in this paper?

What is modelling?

Modellers sometimes try to gain sympathy from the lay public by making a statement such as 'everyone models at some level or other', which is true if the definition of modelling is broadened to include almost any analytical thought process. This definition can be tightened by requiring the formal development of a conceptual framework for any work being considered. For this talk the definition will be restricted even further to require some quantitative thinking; that is, we are dealing with quantitative mathematical modelling. The subject is still very broad as it includes static and dynamic modelling. It encompasses modelling with end uses for research workers, extension officers, policy makers and practical agronomists, namely farmers. It ranges from complex crop growth simulation models such as CERES and whole farm optimization models such as MIDAS, through to simple calculations such as can be carried out in spreadsheets and ready reckoners.

Before the advent of mechanistic simulation modelling in the mid sixties, agronomists had only empirical or regression modelling as a tool to predict crop production across a range of biophysical environments and management systems. This was far too restrictive in terms of its ability to integrate across disciplinary and knowledge boundaries, and too demanding of the resources required to obtain the empirical data that are necessary for building comprehensive regression surfaces. Modelling in agriculture currently ranges between the extremes of mechanism and empiricism, depending on the prediction level required and the ability of potential model users to obtain input parameter values.

Why model?

After 25 years of exposure to the modeller's propaganda it should not be necessary to re-state why agronomists should be using modelling routinely. The reasons should be self-evident. However, in view of some sarcastic one-liners from my agronomic peers and feedback currently coming from research committees and farmers, it is apparent that a selling job is still needed. Times have changed since the initial enthusiasts pushed the case for modelling. Funding for agricultural research is diminishing and funding bodies increasingly want to know, rightly or wrongly, how the findings of any proposed piece of research are going to be put into practice in a wide range of agricultural environments. Agronomists need to re-examine this question in terms of the alternative methods that are available to them to see that their work is used. Added to this, modelling technology has improved to the level where agronomists with no computing or programming skills can readily address the quantitative questions they had, in the past, always assigned to the too-hard basket. This is not to say that agronomists should become experts in everything. Rather, they should know enough about modelling and what it has to offer them to apply the elements when possible and call in the experts when necessary. They should not ignore modelling.

Some reasons why agronomists should become involved in modelling are listed below.

Quantitative outputs

Increasingly, quantitative outputs are required of our research if it is to be of any use in practical situations. It is not good enough to state that 'If you get leaching rains after seeding then you will have to use more nitrogen'. How much rain, and on which soil types, is 'leaching' rain? How long 'after' seeding? How much is 'more' nitrogen?

In 1977 at the Cereal Agronomy Conference in Perth, I was sitting with some Victorians who were very scathing about the 'simple-minded' equation used by Norman Halse when he stated:

yield = potential - detriments.

The point they missed was the implication, for research and extension, of making the above relationship quantitative for the range of circumstances of agricultural significance in WA. From 1977 the WA Department of Agriculture threw a lot of resources into determining the grain yield potential of wheat in the field and modelling it, so that results would be transferable to other sites and seasons. At the same time, other resources were poured into obtaining quantitative estimates of the impact of weeds, diseases and nutrition on grain yield. Decision support systems were, and are still being, developed for WA farmers based on this work.
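To illustrate what making such a relationship quantitative can look like, the sketch below computes a water-limited potential yield from growing-season rainfall and then applies fractional 'detriments' for weeds, disease and nitrogen deficiency. It is a minimal illustration only; the conversion factor, the evaporation allowance and the loss fractions are placeholder values, not the Department's actual calibrations.

```python
# A minimal illustration of Halse's relationship made quantitative. The
# conversion factor, evaporation allowance and loss fractions are placeholder
# values, not the WA Department of Agriculture's actual calibrations.

def potential_yield(growing_season_rain_mm, evap_loss_mm=110.0, kg_per_ha_per_mm=20.0):
    """Water-limited potential grain yield (kg/ha) from growing-season rainfall."""
    return max(growing_season_rain_mm - evap_loss_mm, 0.0) * kg_per_ha_per_mm

def estimated_yield(potential, weed_loss=0.0, disease_loss=0.0, n_deficiency_loss=0.0):
    """Apply fractional 'detriments' (each 0-1) multiplicatively to the potential."""
    return potential * (1 - weed_loss) * (1 - disease_loss) * (1 - n_deficiency_loss)

pot = potential_yield(300)  # a season with 300 mm of growing-season rainfall
act = estimated_yield(pot, weed_loss=0.15, disease_loss=0.05, n_deficiency_loss=0.10)
print(f"potential {pot:.0f} kg/ha, estimated yield {act:.0f} kg/ha")
```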

Research findings must be transferable

To be of any use to other research agronomists, extension workers and general agriculturalists, research findings must be transferable to other situations. They must be freed from the site, season and management specificity which dogs most agronomic research. In many instances, modelling is the most resource-efficient way to provide this transferability particularly when it is linked to a well defined diagnostic system.

In 1980, at the First Australian Agronomy Conference at Gatton, John Leslie voiced his frustration at the sort of papers and discussion which surface at every conference of this kind. People report how treatment 'a' had this effect in their experiment and that this is a new finding because research workers 'x', 'y' and 'z' all found something different. The question which was rarely addressed was: what are the differences in environment and management which caused these disparate findings? It is only by understanding the interactions of environment and management with treatment that a generic answer can be found. John illustrated his point by describing a very complete time-of-sowing trial where the crop (sunflower, I think) was sown each month for three successive years. The direct result (for the duration of the trial) showed that the best time to sow 'on average' was in February. The local farmers disagreed; they reckoned to sow in September. The information from this and ancillary experiments was used to construct a simulation model of the crop. When the model was run on the climatic information for the three years of the trial it was no surprise to find that February was the optimum sowing time. When the model was run on real climatic data (4) for the last 50 years it was found that a case could be made for sowing in September. The message was simple: even with data from three years of pedantic research it was not possible to produce a generic result. By understanding the interactions with season and building a simulation model, it was possible to at least examine a range of situations, in this case seasons, which was not covered by the experiment. This example illustrates the need for a modelling framework to make results transferable. The alternative agronomic approach to that problem would be to run the three-year time-of-seeding trial over even more years!
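The procedure Leslie described generalises: run the model for each candidate sowing date over as many historical seasons as the records allow and compare the whole distribution of outcomes, not the average of a few trial years. A minimal sketch of that loop follows; the 'crop model', the sowing-month responses and the rainfall figures in it are placeholders, not the sunflower model in question.

```python
# Sketch of the sowing-date analysis: run a crop model for each candidate
# sowing month over many historical seasons and compare the distribution of
# outcomes, not the average of three trial years. The crop model, the month
# responses and the rainfall figures are placeholders, not Leslie's model.

import random

random.seed(1)

def simulate_yield(sowing_month, rain_mm):
    """Placeholder crop model (kg/ha): early sowing pays only in wetter seasons."""
    if sowing_month == 2:                       # February sowing
        return max(rain_mm - 350.0, 0.0) * 12.0
    return max(rain_mm - 200.0, 0.0) * 8.0      # September sowing, more reliable

# Fifty hypothetical seasons stand in for 50 years of real climatic records.
seasons = [random.uniform(250.0, 650.0) for _ in range(50)]

for month, name in ((2, "February"), (9, "September")):
    yields = sorted(simulate_yield(month, rain) for rain in seasons)
    mean = sum(yields) / len(yields)
    print(f"{name}: mean {mean:.0f} kg/ha, worst {yields[0]:.0f}, best {yields[-1]:.0f}")
```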

With modelling in the agronomists' tool box, there is no need to do as much repetitive trial work. Rather, new areas of research and measurement are implied. The need for transferability, and the use of modelling to achieve it, demand an understanding of how treatments interact with the environment to affect crop growth. Rather than determining the response to a treatment in innumerable situations, it is better to understand why you got the observed result in a few well characterised situations and why you might have obtained a different result if conditions were slightly, or even grossly, different. If a simple empirical diagnosis or a calculational interpretation of the situation can be derived, well and good. If not, then more basic work will be needed to allow transferability.

Agronomists using regression techniques are rightly warned about the dangers of extrapolating beyond their data. Mechanistic modellers are obviously subject to the same dangers, but these can be reduced to far less significance by applying an experienced agronomist's knowledge of functional forms and their behaviour under limiting conditions. It is usually far better to use mechanistic models to transfer information from one situation to another than to extrapolate regression surfaces.

Integrating multi-disciplinary research findings

Most agronomic research is disciplinary in nature. To apply the results, the findings usually have to be placed into context in a farming system. Such systems are usually multi-disciplinary, and so some method has to be used to integrate the output of research in one discipline with that of other disciplines in a way which gives an appropriate weighting to the discipline in question. Modelling at one level or another provides a means for integrating multi-disciplinary research findings. Examples of this integrative approach abound in the literature, but none is more obvious than when an economic analysis of agronomic output is required. Most practical decision support systems integrate biological output with an economic analysis.

In the building of MUDAS, a WA model to determine optimum rotations for different soil types on a whole farm basis, with due account taken of seasonal variation, this integrative role of modelling was taken to its limit. MUDAS, an optimizing linear programming model, was already too large to allow much simulation. The economists therefore demanded quantitative data from a range of disciplines and then integrated it into the whole farm context. Much of the data could not be obtained from direct observations but had to be generated using other models. Curves of body weight through time had to be provided for different classes of sheep facing different amounts of forage, which in turn depended on soil type, season, month and phase of rotation. Nitrogen response curves for wheat for seven soils, nine seasons and several rotations were also needed. The only other options available to answer the demands made by MUDAS would have been to use experimental results directly or to use results from surveys. Unfortunately, experiments and surveys do not cover enough environments. Also, surveys are retrospective; they cannot answer speculative questions about what might be or might have been.
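The structure of such a whole farm optimization can be illustrated with a toy linear program: choose the area of each rotation on each soil type to maximize total gross margin, subject to the land available and a minimum pasture requirement. The sketch below uses scipy's linprog with invented gross margins and areas; it shows only the shape of the problem, not the actual MUDAS matrix, which is vastly larger and includes livestock, labour, machinery and seasonal states.

```python
# A toy whole farm linear program in the spirit of MUDAS: choose the area (ha)
# of each rotation on each soil type to maximize total gross margin, subject
# to the land available and a minimum pasture area for sheep feed. All gross
# margins and areas are invented; the real model is vastly larger.

from scipy.optimize import linprog

# Decision variables: ha of [wheat-lupin, continuous wheat, pasture-wheat]
# on soil A, followed by the same three rotations on soil B.
gross_margin = [180, 120, 150, 140, 100, 160]   # $/ha, illustrative only
c = [-g for g in gross_margin]                  # linprog minimizes, so negate

A_ub = [
    [1, 1, 1, 0, 0, 0],       # area used on soil A <= area of soil A
    [0, 0, 0, 1, 1, 1],       # area used on soil B <= area of soil B
    [0, 0, -1, 0, 0, 0],      # at least 360 ha of pasture rotation on soil A
    [0, 0, 0, 0, 0, -1],      # at least 240 ha of pasture rotation on soil B
]
b_ub = [1200, 800, -360, -240]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6, method="highs")
print("rotation areas (ha):", [round(x) for x in result.x])
print("whole farm gross margin ($):", round(-result.fun))
```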

Investigating the experimentally impossible

As well as allowing extrapolation into situations which can not be explored by conventional means, modelling allows researchers to address problems which otherwise could not be addressed. Mike Perry's cloud seeding example illustrates this point. When asked if cloud seeding could be worthwhile in drought years, Mike addressed the question with his (at that time, rudimentary) crop growth model. He ran the model using 20 years of historical rainfall data. He then simulated cloud seeding by adding 5 mm of rain to the first rainfall event in September of each of those 20 years and ran the model again. In the driest years he obtained as little as 2 kg/mm of additional rain. In the wetter years, the return was up to 50 kg/mm of additional rain. The finding was obvious and rational once it was obtained, but there was no way that it could have been obtained experimentally. Further, even though quite coarse, the model could be used to get a first approximation of how much rain would have to be generated in any stated season to pay for the cloud seeding.
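The calculation itself is simple to sketch: perturb each historical season by adding 5 mm to its first September rainfall event, re-run the crop model, and express the yield difference per mm of added rain. In the sketch below both the rainfall records and the crop model are placeholders (the response function is simply made convex so that the return varies between seasons); Perry's model and the 20 years of actual data are not reproduced.

```python
# Sketch of the cloud-seeding calculation: add 5 mm to the first September
# rainfall event of each season, re-run the crop model, and express the yield
# gain per mm of added rain. The rainfall records and the crop model are
# placeholders; the response is made convex so the return varies with season.

import random

random.seed(2)

def crop_yield(daily_rain):
    """Placeholder crop model: convex response of yield (kg/ha) to total rain (mm)."""
    total = sum(daily_rain.values())
    return 0.05 * max(total - 100.0, 0.0) ** 2

def seed_clouds(daily_rain, extra_mm=5.0):
    """Add extra_mm to the first rainfall event on or after 1 September."""
    rain = dict(daily_rain)
    for day in sorted(rain):                    # days keyed as (month, day)
        if day >= (9, 1) and rain[day] > 0.0:
            rain[day] += extra_mm
            break
    return rain

# Twenty hypothetical seasons of daily rainfall from May to October.
seasons = [{(m, d): (random.expovariate(0.2) if random.random() < 0.3 else 0.0)
            for m in range(5, 11) for d in range(1, 29)} for _ in range(20)]

for year, season in enumerate(seasons, start=1):
    gain = crop_yield(seed_clouds(season)) - crop_yield(season)
    print(f"year {year}: {gain / 5.0:.1f} kg of extra yield per mm of added rain")
```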

Educational value

The construction of a model has great value in educating the builder. Attempting to model a research area will give new insights to even the most competent and experienced agronomists. This aspect of modelling is often stated but much undervalued. Many agronomists believe that they already think sufficiently analytically about the problems they confront. Even if they are correct in their self assessment, are they thinking sufficiently synthetically; that is, are they thinking enough about how their information can be incorporated into a practical agricultural system?

Models are also useful for educating students and lay people. Educational models do not necessarily have to be quantitatively correct; they need only reflect the principles and interactions which are being extended. Because they force otherwise ambiguous qualitative statements to be written mathematically, models can help to overcome communication difficulties between research workers, even across disciplinary boundaries.

Knowledge gaps and research priorities

Modelling can pinpoint areas where knowledge and data are lacking. Even a simple conceptual or first-approximation model can be used to determine areas which have priority for research and so can reduce the amount of ad hoc research which abounds in agronomy. It is interesting that research priority-setting workshops dominated by agriculturalists seem to demand a modelling approach, whereas no one dares use the word 'modelling' if laymen predominate at the meeting.

Why the bad name?

With all the above good reasons for using it, why does modelling have such a bad name amongst research agronomists, extension workers and funding bodies? My feeling is that the funding bodies are advised by, and have fallen for the propaganda of, non-modelling agronomists. So what is it about modelling which has so disenchanted agronomists?

Ignorance

Most agronomists were educated before modelling was ever taught in undergraduate courses, and many agricultural faculties still don't offer modelling courses. Not having been exposed to modelling, except at the end of a sales pitch, most agronomists have no first-hand experience of its virtues or foibles. In contrast, biometrics is, and has been, taught in most agricultural courses. As such, biometrics is subject to far less critical scrutiny and criticism by agronomists.

New ideas

Even trained scientists have a natural reserve about anything new, particularly when it claims to be a new way of thinking. It is easy to ignore or reject something which demands retraining, or at least rethinking, when your time is already well occupied. It is an insult to be told that there may be better ways of doing what you have planned and executed to the best of your ability. It is easier to uncritically pick up catch-cries about the inadequacies of the new system, and so reject it, rather than learn more about it.

The nature of biologists

As a rule, agricultural undergraduates deliberately take biological courses to avoid the rigours of the mathematical and physical sciences. Any demands for some facility in these disciplines meet a natural inertia. 'We did good work without computers and modelling, so why can't you?' is becoming less of a cry now, but certainly carried considerable weight in the past. Unfortunately, agronomists with non-mathematical backgrounds are still too influential when it comes to assessing research methods and priorities for funding.

Unrealized dreams

The 'dreams' of the early modelling enthusiasts, who propagandised their new-found toy, often did not match the reality. Many simulation modellers still seem to believe that they will be able to construct a model which will be sufficiently mechanistic to be applied in any situation. The belief in the universally applicable model is akin to megalomania. Hard-headed agronomists can rapidly see through such claims for a proposed model because it is obvious that the complexity of real biological systems will always defeat such an idealist. The sooner modellers become less ambitious, adopt a 'horses-for-courses' philosophy and, at least in public, become less starry-eyed about what they can achieve, the better.

Models simplify reality

To capture a real system in mathematical terms, modellers necessarily have to use simplifying assumptions. This often annoys 'pure' scientists who are grappling with the real complexity of a given situation. 'You can't just assume away the complexity with which we have to work!' they cry. Often, the problem here is one of confused objectives. The 'pure' scientist may be trying to understand one situation alone while the modeller may be attempting to use the information so obtained to generalize to many different situations and have a practical outcome in mind.

Transparency

By their nature, models are far more transparent than most other ways of presenting research information and as such are far easier to analyse critically. Critics of modelling are not necessarily as critical of the alternatives.

Examples of failure

If you want to knock an area of endeavour, it is easy to find examples of failure. This then makes it easy to throw away the good with the bad. Modelling has its failures but it also has a lot of successes. The current demand for increased accountability in research, which carries with it an horrendous waste of resources associated with the administration and repeated justification of projects, has some of its roots in a few examples of waste. It ignores the tremendous gains which were, and still could be, made under far less bureaucratic systems of research management.

Redundancy of effort

A major fear of funding bodies is to waste resources on redundant research efforts. This seems to be particularly so when it comes to modelling, probably because 'building a model of wheat growth' implies that the same thing is needed and is going to be produced whether the proposal for development comes from Wisconsin or Wagga Wagga. The word 'model' carries unwarranted connotations of universal application. Most models, despite the claims of their champions, require local adjustments and regression-derived calibrations. They are more transferable than other technology but certainly not universally transferable.

Models will only be used at a local level if they have local champions. Such people rarely come with off-the-shelf models; they are usually the product of a lot of local development and calibration work. We in WA will only run with the Canberra-developed GRAZGRO if we can convince ourselves that it can produce local pasture growth curves. In this particular case we will want to see the seed set, seed dynamics, germination and establishment routines become far better than they are at present. Similarly with CERES, we require equations that account for the phenology of local wheat varieties, better runoff and drainage routines, and possibly a better root growth model to allow us to handle row spacing and the positional availability of nutrients of varying soil mobility. If you ignore the parochial nature of output requirements and also the educational aspects of model development, then perhaps there could be redundancy in modelling efforts. However, with the current severe limitations on funds such redundancy is most unlikely to occur.

The opinions of experts

The cautions of modelling experts and critics can be misquoted and over-stressed. Passioura (2), in a much quoted and well-balanced paper, details some of the problems of meeting the claims for 'comprehensive simulation models'. He offers alternatives to simulation modelling for generating and testing hypotheses, for mobilizing knowledge and for providing a biological context within which one can hang one's research programme. His criticisms have validity in some areas of more fundamental research and for some levels of modelling. They should not serve as the ultimate reason for less informed reviewers or critics to reject modelling projects. Indeed, Passioura quotes de Wit when describing the misuse of simulation model calibration ('...the most cumbersome and subjective technique of curve fitting that can be imagined'). Even so, de Wit is a very strong advocate of using quantitative mathematical analysis in agriculture and has his name on many simulation models. Another well informed critic and user of modelling, Seligman (3), concludes a discussion of the inadequacies of pasture growth and utilization models with the words: 'It seems therefore, that grasslands models today can be justified not on performance but on promise. A slim justification indeed, but the need is so obvious, and the alternatives so few and so demanding, that the promise will have to be proved vain before grassland modelling is given up'.

The future?

First, what does modelling have to say for agronomists in the future? At the Third Australian Agronomy Conference in 1985, Professor Loomis was given a similar brief to mine today. He pointed out that agronomic research has changed markedly over the past fifty years, from a qualitative to a quantitative understanding of systems. Modelling was obviously one way to formalize this understanding. He looked at the question of whether it was better to apply Occam's razor to modelling, resulting in a range of simple though possibly repetitive and inconsistent models, or to build large and highly detailed models for each crop.

To overcome the problems (2) of building such 'grand' models, Loomis suggested a modular approach with the use of tables rather than mathematical functions for input of information. Professor Loomis's take-home message was quite clear (and little different from mine today), namely, 'agronomists must develop sharply improved skills for dissective and integrative analyses of agricultural systems'. At the same time he called for 'student training in a range of the more mathematical and physical disciplines' so that agronomists of the future would be better equipped to handle the quantitative analyses which are now appropriate for understanding and managing agricultural systems.

While accepting that this latter point is important in the training of agronomists of the future, the trend in modern modelling software makes formal computer programming training less essential, particularly for those agronomists who are already in the work force. As long as they are aware of the many good reasons to get more analytical and quantitative about their agronomy, modelling software with user-friendly interfaces is available for them to improve the value of their work. Packages such as STELLA allow agronomists with little or no mathematical training to investigate the dynamics and stochastic variation of their systems (1).
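STELLA and its relatives are graphical, but the kind of stock-and-flow calculation they support can be sketched in a few lines of code. The example below is an analogue only, with purely illustrative parameter values: it runs a single soil-water store with stochastic daily rainfall many times over to show the spread of seasonal outcomes, which is exactly the sort of dynamic and stochastic behaviour such packages let an agronomist explore.

```python
# An analogue, in code, of the kind of stock-and-flow model a package such as
# STELLA lets you build graphically: a single soil-water store with stochastic
# daily rainfall in and transpiration out, run repeatedly to show the spread
# of seasonal outcomes. All parameter values are illustrative only.

import random

random.seed(3)

def run_season(days=120, capacity_mm=80.0):
    """Simulate one growing season; return total water transpired (mm)."""
    store, transpired = 20.0, 0.0               # initial soil water store (mm)
    for _ in range(days):
        rain = random.expovariate(0.25) if random.random() < 0.3 else 0.0
        store = min(store + rain, capacity_mm)  # overflow is lost as drainage
        use = 0.06 * store                      # outflow proportional to the stock
        store -= use
        transpired += use
    return transpired

results = sorted(run_season() for _ in range(200))
print(f"median seasonal water use {results[100]:.0f} mm, "
      f"10th-90th percentile range {results[20]:.0f}-{results[180]:.0f} mm")
```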

At the same time as advocating the use of such tools, it is worth noting the dangers associated with the ease of their use. It is very easy to produce an (uncalibrated and unvalidated) model which represents your ideas and reproduces observed qualitative data, to the extent that you start believing you have discovered some basic truth. It is another thing again to demonstrate that truth to any discriminating audience. It is easy to delude yourself about why systems behave in a certain way when you design a model to reproduce your prejudices. As with biometrics, there is no doubt that modelling at even a rudimentary level will have to be a tool of the trade for agronomists of the future.

And what of the future for full-time modellers? In August 1987, at La Trobe, a special workshop on modelling followed the Fourth Australian Agronomy Conference. That workshop was used as a platform to extend the findings and recommendations from a workshop on 'Computer-assisted Management of Agricultural Production Systems', held in Melbourne in May 1987. In essence, the delegates were responding to calls from all and sundry to clean up the bad name modelling was getting among the funding bodies and the industry in general. Rather than rebutting or analyzing the criticisms in the context of what was being attempted and achieved by the modellers in Australian agriculture, there was a knee-jerk reaction, reflected in 'motherhood' recommendations for more co-ordination and co-operation between workers to reduce the alleged duplication of effort. There was a call to 'standardize terminology' and units 'without stifling innovation'. The desire to develop a 'modular' style of modelling was again expressed in the name of the efficient use of modelling resources. A supposed need was seen for the development of 'standard, minimum data sets' for testing and comparison of simulation models. Methods and infrastructure for putting the recommendations into place were being planned.

Not addressed at the workshops was the problem of corporate ownership of supposedly potentially valuable software and how this was already stultifying the exchange of information and source code. The cooperation necessary for the adaptation of someone else's model to your environment can be difficult if you cannot fiddle with the source code!

While the above points must be addressed by the professional modellers, such prescriptions are likely to turn the ordinary agronomist away from modelling at any level. They will also tend to 'stifle innovation'. I prefer a more laissez-faire approach, so that new ideas and innovations can arise and be tried and tested in a host of different situations. You cannot legislate for innovation. New ideas and approaches will evolve only if as many people as possible are encouraged to try their ideas and are not constrained by standards or rejected by pedantic referees.

Conclusions

Rudimentary modelling skills should be part of every agronomist's tool box. They should be used to make experimental results transferable to other situations and to provide a more analytical and quantitative understanding of the complicated systems with which we deal. Modelling is becoming increasingly important in providing policy guidelines for administrators and has an important role to play in determining gaps in knowledge and priorities for research. Computer modelling for decision support is more likely to be used by consultants and advisers to farmers than by farmers themselves, mainly because of the complexity of the technology being used. Although this argument cuts little ice with funding bodies and corporate managers, the education of farmers and scientists to the complexities of the systems with which they deal will probably be the main contribution of modelling to the agricultural industries in the long term.

References

1. Davies, S. and Diggle, A.J. 1991. Technote No. 4/91, WADA.

2. Passioura, J.B. 1973. J. Aust. Inst. Agric. Sci. 39, 181-183.

3. Seligman, N.G. 1976. In: Critical Appraisal of Systems Analysis in Ecosystems Research and Management. (Eds G.W. Arnold and C.T. de Wit).

4. Smith, R.C.G., English, S.D. and Harris, H.C. 1978. Field Crops Res. 1, 229-242.
