
A framework for planning and self-evaluating dairy extension programs

Peter Wegener

University of Queensland, School of Natural and Rural Systems Management
St Lucia Campus, Qld 4072, Australia

Abstract

This paper describes a planning and self-evaluation framework that has been developed in conjunction with Target 10, a dairy extension program in South-east Queensland. The program has a diverse range of activities and a variety of stakeholders ranging from farmers to extension officers and program managers. A formative evaluation process must cater to the needs of these various stakeholders as well as effectively evaluate the different program activities.

The planning and self-evaluation framework is based around the process model of:

Inputs → Process → Outputs → Short Term Goals → Long Term Goals.

It includes a number of phases that start with planning, then monitoring, evaluating, reviewing and finally reflecting, before moving back to a planning phase. Each phase highlights a number of key questions that will assist a group to plan and evaluate their program or activity.

Within this framework, a large number of models and tools can be employed that best suit the people doing the evaluation and the activity being evaluated. An evaluation can be as simple or as complex as required. To assist groups in selecting a suitable evaluation process, we have developed a booklet that describes a number of practical planning and evaluation methods suitable for a range of activities.

Results from these evaluations can then be entered into an Access database along with other information relating to the dairy extension program. The database allows for easy entry and retrieval of information and can be used to produce reports for farmers, extension officers and program managers. It also produces tables comparing plans and results of various activities as well as comparing the results of different activities. These can be used by all stakeholders to monitor and improve the dairy extension program.

Introduction

Target 10 is a dairy extension program in South-east Queensland jointly supported by the Queensland Department of Primary Industries and 'Dairy Farmers', a milk processing company. The program aims to provide useful and timely information to dairy farmers to help them make more informed decisions about their farming operations. Information is provided in a range of formats such as:

  • On-going groups (e.g. farmer discussion groups).
  • Short term or one off activities (e.g. workshops, seminars and field days).
  • Individual farmer contact (e.g. individual farm visits and telephone calls).
  • Publications (e.g. newsletters and fact sheets).

There is a need to effectively plan, evaluate and integrate these various activities. Managers of extension programs need to ensure that the program as a whole is moving in the right direction, that the various components are in line with overall goals, and that activities are effectively coordinated and implemented. Those who implement a program (extension officers) need to ensure that their activities are relevant, effective and efficient, and that they can collect and respond to feedback from their clients. Program users (dairy farmers) would benefit from assessing whether activities are meeting their needs and from having an input into the direction of future activities.

However, the diverse nature of an extension program and the different requirements of stakeholders make it difficult to design a planning and evaluation process that is useful to everyone without consuming too much time or too many resources. This paper describes a process that allows various groups within an extension program to plan and evaluate their progress in a way that is relevant and useful to them. At the same time, each of these self-evaluations can be conducted within a common framework so that results can be combined and compared for overall program improvement.

Background to developing a framework

Owen (1993 p.3) described evaluation as "the process of providing information designed to assist with decision-making about the object being evaluated". That object may be a whole extension program or a single activity within the program. The difficulty lies in the manner in which the information is collected and who uses that information for decision-making.

Evaluation can be classified into two general types, namely formative and summative. In general terms, formative evaluation deals with program improvement, whereas summative evaluation aims to make a judgement on the worth of a program (Scriven, 1991). Summative evaluations are most commonly performed by “external” evaluators. They can provide an independent assessment of a program or activity and can also detect problems that may not be obvious to people immediately involved with a program. This sort of evaluation is most useful for program planners and managers. It can be used to indicate the success or otherwise of a program and to make suggestions for overall changes in the direction of that program.

Formative evaluations are most effective when they involve people within a program (i.e. involve “internal” evaluators). These evaluations are usually more meaningful and immediately useful to program participants and can be used by all stakeholders to improve various aspects of a program. The greater the involvement of program participants in designing and conducting a formative evaluation, the more likely the results of that evaluation will be used and acted upon.

The framework described in this paper is designed to be used for formative evaluations. It builds on a process developed by Frank and Claridge (1998) for evaluating Property Management Planning programs. This process was based around the idea that people are most interested in the effects of their own activities within a local context. Such a process needed to be:

  • seen to be useful so that people want to continue it
  • simple enough for all parties to understand and use
  • flexible for use by different groups
  • practical, so it can be applied to any situation and
  • effective for different types of stakeholders.

However, a useful evaluation process should do more than provide a means of conducting a wide range of small evaluations. Those evaluations need to be conducted within a common framework so that information can be compiled and compared for use by other stakeholders in their own evaluation processes and for overall program improvement.

An extension program comprises many groups and individuals including farmers, extension officers, support staff and organisations, program managers, and program financiers. All of these groups, and even individuals within the same group, have different roles in the program, have different goals and desires, and come from different backgrounds. They will thus have different evaluation needs and will probably require different processes to perform evaluations that they find useful. How can these different needs and processes be reconciled with a common framework for comparison across evaluations?

Frank and Claridge (1998) compared a number of evaluation models and highlighted the similarities between these models. They included Bennett's Hierarchy (Bennett & Rockwell, 1995), the Snyder evaluation process (Dick, 1992) and Stufflebeam's CIPP evaluation model (Stufflebeam et al., 1971). Logical Frameworks (Project Management Solutions (Aust) Pty Ltd., 1998) could also be included in this list.

Each of these models encourages participants to ask insightful questions at relevant stages of an evaluation. They also have a similar approach to evaluation in that they are based around key aspects of a general process model. This process model can be summarised as inputs being used in a process that leads to outputs which are needed to achieve short term goals in the pursuit of longer term goals.

These different models could therefore be used or adapted to suit the particular evaluation needs of each stakeholder group. However, their similar structure would allow results from each evaluation to be combined and compared.

The planning and self-evaluation framework

The planning and self-evaluation framework (Fig.1) is based around the general process model of inputs being used in a process to produce outputs. The driving force that gives direction to this process is short and long term goals. This model is applicable to a whole extension program or a single activity within that program.

The framework also comprises five phases: planning, monitoring, evaluating, reviewing and reflecting. Each of these phases highlights a number of key questions that will assist in planning and evaluating a program or activity (Table 1).

The framework uses the process model first to plan a program or activity. An essential part of effective planning is to have a clear idea of the short and long term goals, so this is where the planning process should begin. Many programs or activities do not devote enough time to clearly defining goals. This can result in a “rudderless” process that lacks a clear direction and can also cause difficulties in evaluating and improving the process. Once goals have been established, the planning phase moves backwards, using the goals to determine the desired outputs, the outputs to design the best processes, and the processes to determine the necessary inputs.
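
To make this goal-first sequence concrete, the sketch below (in Python; all names and example content are hypothetical illustrations, not part of the framework itself) represents a plan whose fields are filled in backwards from goals to inputs.

    from dataclasses import dataclass, field

    @dataclass
    class ActivityPlan:
        # A plan is built goal-first: goals determine outputs, outputs
        # shape the process, and the process determines the inputs.
        long_term_goals: list
        short_term_goals: list
        outputs: list = field(default_factory=list)
        process: list = field(default_factory=list)
        inputs: list = field(default_factory=list)

    # Begin with goals, then work backwards through the process model.
    plan = ActivityPlan(
        long_term_goals=["More profitable and sustainable dairy farms"],
        short_term_goals=["Farmers adopt improved pasture management"],
    )
    plan.outputs = ["30 farmers attend a pasture workshop"]            # what must be achieved
    plan.process = ["Half-day workshop with an on-farm demonstration"] # best way of getting there
    plan.inputs = ["Extension officer", "Host farm", "Workshop notes"] # who and what are needed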

The second phase is to devise monitoring indicators that can be used to measure progress during the implementation of the plan. If possible, indicators should be valid, measurable, verifiable, cost effective, timely, simple, relevant, sensitive and punctual (Abbot & Guijt, 1999).
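
As a simple illustration of checking a proposed indicator against these criteria, the sketch below (Python; the indicator and its details are invented for illustration) flags which of the Abbot and Guijt (1999) criteria a group has not yet judged the indicator to meet.

    # Criteria for good indicators, as listed by Abbot and Guijt (1999).
    CRITERIA = ["valid", "measurable", "verifiable", "cost effective",
                "timely", "simple", "relevant", "sensitive", "punctual"]

    def unmet_criteria(indicator_name, meets):
        # Return the criteria the group has not yet judged this indicator to meet.
        missing = [c for c in CRITERIA if c not in meets]
        print(indicator_name, "- still to consider:", ", ".join(missing) or "none")
        return missing

    # A hypothetical output indicator for a workshop.
    unmet_criteria("Workshop attendance (sign-on sheet)",
                   {"valid", "measurable", "verifiable", "simple", "relevant"})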

The third phase, evaluation, begins at the inputs stage and works its way up to assessing whether long term goals are being achieved. The indicators can be used to measure progress towards these goals.

The fourth phase, reviewing, compares planned processes and results with actual activities and outcomes. It is used to suggest reasons for any inconsistencies between plans and results and to determine what should have been done differently. This helps participants to refine future plans.

Finally, participants should reflect on the whole program or activity to determine if the overall direction needs to be reassessed or if any other aspects of the program need closer scrutiny. Phases three and four deal mainly with assessing and improving the planned program or activity. Phase five encourages participants to reflect on the overall aims and direction of the program or activity. This then leads back to a new planning phase.

Figure 1 summarises the framework, while Table 1 highlights the key questions that should be asked at each phase. Note, however, that this is a general planning and evaluation guide. Not all steps need to be covered in all cases. The group or person doing the evaluation must decide what is most appropriate for their circumstances. Thus, an evaluation of all extension officer activities would more than likely follow all the phases outlined above, use a reasonable amount of time and resources, and follow a fairly structured approach. However, an evaluation of a single field day may only use the key questions as a rough guide in designing a brief informal evaluation.

Figure 1. The planning and self-evaluation framework

Table 1. Key questions that the framework seeks to ask at each phase in the process

(a) Planning

- Where do we want to go? (short and long term goals)
- What do we need to achieve to get there? (outputs)
- What is the best way of getting there? (process)
- Who and what should we include and where and when should we do it? (inputs)

(b) Monitoring

- What do we need to measure at each step to see if we are achieving what we want to achieve? (progress indicators)

(c) Evaluating

- Who and what did we actually use? (inputs)
- What did we do and how did we do it? (process)
- What were the immediate results (opinions, reactions, participation)? (outputs)
- What changed as a result (changes in knowledge, attitudes, skills and aspirations; changes in practices; changes in social, economic and environmental conditions)? (short and long term effects)

(d) Reviewing

- Were planned and actual results the same at each stage?
- If not, why not?
- What should we have done differently?

(e) Reflecting

- Have any internal and/or external circumstances changed?
- Should we change our overall goal in the light of current circumstances?
- Are the programs and activities we are doing the best use of resources and the best way of achieving our goals?
- Do these programs and activities support and complement each other?
- Should we consult other sources of information?
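
The key questions above lend themselves to being stored as a reusable checklist from which a group selects only the phases it needs, such as the brief field-day evaluation mentioned earlier. A minimal sketch in Python follows; the structure is illustrative, not part of the published framework.

    # Key questions from Table 1, keyed by phase.
    KEY_QUESTIONS = {
        "planning": [
            "Where do we want to go? (short and long term goals)",
            "What do we need to achieve to get there? (outputs)",
            "What is the best way of getting there? (process)",
            "Who and what should we include, and where and when? (inputs)",
        ],
        "monitoring": [
            "What do we need to measure at each step? (progress indicators)",
        ],
        "evaluating": [
            "Who and what did we actually use? (inputs)",
            "What did we do and how did we do it? (process)",
            "What were the immediate results? (outputs)",
            "What changed as a result? (short and long term effects)",
        ],
        "reviewing": [
            "Were planned and actual results the same at each stage?",
            "If not, why not?",
            "What should we have done differently?",
        ],
        "reflecting": [
            "Have any internal and/or external circumstances changed?",
            "Should we change our overall goal in the light of current circumstances?",
            "Are our programs and activities the best use of resources?",
            "Do these programs and activities support and complement each other?",
            "Should we consult other sources of information?",
        ],
    }

    def questions_for(phases):
        # Collect the key questions for the phases a group chooses to cover.
        return [q for p in phases for q in KEY_QUESTIONS[p]]

    # A brief field-day evaluation might cover only two phases:
    for q in questions_for(["evaluating", "reviewing"]):
        print("-", q)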

Applying the framework with different stakeholder groups

Depending on the group doing the self-evaluation and their particular requirements, any number of models and tools can be employed that fall within the scope of the above general framework. Thus, management may employ Logical Frameworks to plan and design the overall program, extension officers may use Bennett's Hierarchy to monitor changes in knowledge, attitudes and practices of farmers, while farmer groups may use a simplified form of the Snyder process to plan and monitor their progress.

The models described above may not include processes for answering all of the key questions. In such cases additional processes and tools can be used to help answer questions in a meaningful and useful way. Tools and techniques such as those described by Carman and Keith (1994), Clark and Timms (1999) or McIntosh et al. (1995) can be applied as necessary to any steps within the framework.

As mentioned above, it is not necessary to perform all of the steps in the process for every evaluation. For any individual evaluation the most important consideration should be:

“What is best for our particular situation so that we can improve what we are doing?”

The general framework can then be used as a guide to help design a suitable process.

However, not all groups may be aware of the best process to use from the large range of possible options. The author has been working in conjunction with the Target 10 extension program to develop useful and practical planning and evaluation processes. These processes fall within the guidelines of the general framework and have been produced in booklet form to assist stakeholders in designing suitable evaluations. They include processes for:

  • Regular reviews of farmer discussion groups
  • Annual planning and reviews of farmer discussion groups
  • Planning and evaluating workshops, field days or seminars (either a series or a one-off event)
  • Planning and evaluating monthly programs and activities for extension officers
  • Planning and evaluating annual programs and activities for extension officers
  • Planning and evaluating the overall direction of the extension program

Evaluations at higher levels of the program will normally require information from farmers and extension officers. Wherever possible, however, those conducting these evaluations are encouraged to use information already collected through the self-evaluations of farmers and extension officers. This limits the amount of data that needs to be collected from extension staff and farmers for purposes not directly beneficial to those particular stakeholders.

A database to compile and compare results

The final stage in developing a useful planning and evaluation process is to have a means of compiling the information gathered from each individual evaluation. This requires a database that is easy to use in both entering and retrieving data. It should consume a minimum amount of time and be of practical use for those who have to spend time inputting the data (usually extension officers).

The Target 10 program is currently refining a Microsoft Access database that we have developed for use by extension officers and program managers. It allows for easy entry of information relating to the above planning and evaluation processes. It also includes forms for entering information relating to weekly and monthly extension officer activities as well as farmer feedback and issues of concern to farmers.
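
The Access database itself is not reproduced here. The sketch below uses Python's built-in sqlite3 module as a runnable stand-in; the table and column names are assumptions for illustration, not the actual Target 10 schema.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE activity (
        id      INTEGER PRIMARY KEY,
        kind    TEXT,   -- e.g. 'workshop', 'field day', 'discussion group'
        name    TEXT,
        held_on TEXT
    );
    CREATE TABLE plan_vs_result (
        activity_id INTEGER REFERENCES activity(id),
        stage       TEXT,   -- 'inputs', 'process', 'outputs' or 'goals'
        planned     TEXT,
        actual      TEXT
    );
    """)

    # Hypothetical entries of the kind an extension officer might record.
    conn.execute("INSERT INTO activity VALUES (1, 'workshop', 'Pasture management', '1999-08-12')")
    conn.executemany(
        "INSERT INTO plan_vs_result VALUES (1, ?, ?, ?)",
        [("outputs", "30 farmers attend", "24 farmers attended"),
         ("process", "Half-day workshop with demonstration", "Ran as planned")])

    # A simple report: planned versus actual results for one activity.
    for stage, planned, actual in conn.execute(
            "SELECT stage, planned, actual FROM plan_vs_result WHERE activity_id = 1"):
        print(f"{stage:8}  planned: {planned:38}  actual: {actual}")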

The database can then be used to easily produce reports that summarise:

  • monthly farmer discussion group meetings and reviews
  • farmer discussion group annual plans and reviews for all groups or for any individual group
  • plans, results and feedback from seminars, workshops and field days
  • monthly programs and activities for extension officers
  • plans and evaluations of annual programs and activities for extension officers
  • plans and evaluations of the overall extension program

The ease of producing these reports makes the database a useful tool for extension staff and management, and encourages them to record all of their activities and evaluation results in the database.

Once information has been entered into the database, it can be used not only to produce reports but also to compile and compare the various individual plans and evaluations. Tables can be produced that compare plans with results for any individual activity, or that compare the plans and results of different activities (such as comparisons between farmer discussion groups, or between the results of different workshops). This information can be used by farmers, extension officers and program managers to monitor and improve individual aspects of the program as well as the program overall.
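
As an illustration of the kind of comparison table described above, the short sketch below builds one from plain records (Python; the activities and figures are invented for illustration, not actual program results).

    # Hypothetical plan-versus-result records for two different activities.
    records = [
        {"activity": "Pasture workshop",   "planned": "30 farmers attend",
         "actual": "24 farmers attended"},
        {"activity": "Mastitis field day", "planned": "20 farmers attend",
         "actual": "27 farmers attended"},
    ]

    # Print a simple comparison table across activities.
    width = max(len(r["activity"]) for r in records)
    print(f"{'Activity':<{width}}  {'Planned':<20}  Actual")
    for r in records:
        print(f"{r['activity']:<{width}}  {r['planned']:<20}  {r['actual']}")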

Conclusions

Formative evaluations are an effective way of monitoring and improving extension activities and programs. The greater the involvement of program participants in designing and conducting a formative evaluation, the more likely the results will be used and acted upon.

The framework described in this paper encourages groups or individuals within an extension program to plan and evaluate activities that are relevant to them. Rather than collecting information for unknown reasons or unseen people, this process helps stakeholders think about and improve activities that are of direct interest to them. They are not required to follow an evaluation model that is seen as too difficult or of no use, but can instead develop a process that best suits their needs. At the same time, a common basic format allows data from each of these evaluations to be collected and compared to assist other stakeholders in their own evaluations and to help with overall program design.

References

  1. Abbot, J., & Guijt, I. (1999). Changing Views on Change: Participatory Approaches to Monitoring the Environment (SARL Working Paper Series, draft document). London: International Institute for Environment and Development (IIED).
  2. Bennett, C., & Rockwell, K. (1995). Targeting Outcomes of Programs (TOP): An Integrated Approach to Planning and Evaluation (draft paper). USDA-CSREES-PAPPP.
  3. Carman, K., & Keith, K. J. (1994). Community consultation techniques: purposes, processes and pitfalls. A guide for planners and facilitators. Brisbane: Dept. of Primary Industries.
  4. Clark, R., & Timms, J. (Eds.). (1999). Enabling Continuous Improvement and Innovation - The Better Practices Process - Focussed Action for Impact on Performance. Gatton: The Rural Extension Centre, Queensland.
  5. Dick, B. (1992). Qualitative Evaluation for Program Improvement. Paper presented at the IIR Conference on Evaluation, Brisbane, September 1992.
  6. Frank, B., & Claridge, C. (1998). Developing a Self-Evaluating Framework for Property Management Planning. Paper presented at the Australasian Evaluation Society 1998 International Conference, Melbourne, Australia, 7-9 October 1998.
  7. McIntosh, F., Chamala, S., Frank, B., & Norcott, J. (1995). Working Towards a Self-Reliant Group: A handbook to help dairy industry groups achieve self-reliance through action learning using group facilitation techniques. Melbourne: Dairy Research and Development Corporation of Australia.
  8. Owen, J. M. (1993). Program evaluation: forms and approaches. St Leonards, N.S.W.: Allen & Unwin.
  9. Project Management Solutions (Aust) Pty Ltd. (1998). Logical Framework - Maximising Project Success by Design. LogFRAME Workshop Manual. Brisbane: Project Management Solutions (Aust) Pty Ltd.
  10. Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, Calif.: Sage.
  11. Stufflebeam, D. L., Foley, W., Gephart, W., Hammond, R., Merriman, H., & Provus, M. (Phi Delta Kappa National Study Committee on Evaluation). (1971). Educational evaluation & decision making. Itasca, Ill.: Peacock.
