
A case study of how a small-scale evaluation led to large-scale change

Penny Floyd1, Rabi Maskey2 and Jason Trompf3

1Department of Natural Resources and Environment, PO Box 124, Benalla 3675
2Department of Natural Resources and Environment, Private Bag 1, Ferguson Road, Tatura 3616
3Department of Agricultural Sciences, La Trobe University, Melbourne, 3086


This paper focuses on the methodology used to maximise the utilisation of findings from the evaluation of the Graduate Program. This program aims to provide the Victorian Department of Natural Resources and Environment (NRE) with an ongoing source of highly skilled extension staff, by recruiting well-qualified and highly motivated people and giving them specialist training. The process used in the evaluation related to the key premises of utilisation-focused evaluation. The evaluation focused on assessing the impacts of the program and outlined potential improvements that could be made to the program. The evaluation was conducted using a series of workshops (group interviews) and interviews, which addressed key evaluation questions of the stakeholders in the program. The methodology provided an effective way of gaining participants' ownership of, and assistance in, the implementation of the evaluation findings. The reporting styles enabled the findings to be communicated to a wide audience. The evaluation has resulted in improvements being made to the program, expansion of the program, and a review of the management of general staff intake into NRE.


Early evaluation texts defined evaluation narrowly as the application of social sciences to measure goal attainment. More recent definitions of evaluation emphasize providing useful information for program improvement and decision-making (Patton 1997). The nature of the evaluation conducted on the NRE Graduate Program encompasses both of these definitions in that it involves the assessment of the impacts of the program since its inception, identifies any potential improvements that could be made to the program and examines the prospect of expanding the program. In order to achieve all of these objectives a detailed evaluation of the program and its stakeholders was undertaken. However, conducting such an evaluation was only part of the challenge. The biggest obstacle as far as we (the evaluators) were concerned was ensuring that the evaluation findings were utilised. This was consistent with a comment made at an NRE evaluation seminar in Victoria in 2000 that “the greatest challenge in evaluation is utilisation” (pers. comm. CF Bennett). Similarly Patton (1997) posed the question as to how we can avoid producing reports that gather dust on bookshelves, remaining unread and unused. This is not a recent phenomenon, given that House (1972) commented that producing data is one thing, but getting it used is quite another.

The evaluation in this study was conducted on the Graduate Program, which was initiated in 1994 to help address a skills gap existing within NRE. Throughout this paper ‘graduates’ refers to NRE staff participating in the Graduate Program. The program has been developed and managed by Catchment and Agriculture Services (CAS), the extension provider for the Agriculture Division, and the Catchment and Water Division. Several issues suggested a future trend towards a shortage of skilled personnel in the organisation. First, it is estimated that in the next 10 years up to fifty percent of staff employed within CAS may retire (see Figure 1). Secondly, when recruiting staff CAS has had difficulty attracting a reasonable pool of applicants, especially for higher level positions (Anon. 1999). The Graduate Program offers five positions to recent university graduates each year. In the last financial year the Graduate Program accounted for five of the twenty-six entry level (VPS2) staff employed externally (outside NRE) by CAS.

Graduates in the program are placed in NRE project teams but undertake specific training and development activities in addition to their normal NRE project work. Graduates on average shift to three different locations over the two years in the program. Graduates are allocated a supervisor at each location and a coordinating supervisor to oversee their two-year program.

This paper presents a case study of a small-scale evaluation that led to large-scale impact. The main elements of 'utilisation-focused evaluation' (Patton 1997) were embedded to achieve the original aim of contributing to long-term program effectiveness and improved decision making.


This section outlines the process followed from the initiation of the evaluation, through to conducting the evaluation, reporting and finally utilisation of the evaluation. Each stage of the process is related to fundamental premises of utilisation-focused evaluation as identified by Patton (1997) and others.

Initiation of the evaluation

The evaluation was initiated following a conversation between one of the Executive Directors of NRE and the Graduate Program Manager about the apparent success of the program and its possible expansion. The Graduate Program Manager was particularly committed to the evaluation and to improving the program. This directly relates to the fundamental premise highlighted by Patton (1997) that commitment to intended use by intended users should be the driving force in an evaluation. In addition, from this point of initiation Patton (1997) believes the evaluator should ask intended users how the purpose, focus, design, methods and reporting will affect the use of the evaluation. Such steps were undertaken in this evaluation.

This initiation process also addresses another key element of utilisation-focused evaluation according to Patton (1997): the personal nature of the conversation held between the Executive Director and the Graduate Program Manager illustrates their interest in, and commitment to, the evaluation.

Background of the evaluation

A group comprising the Program Manager, a representative from the NRE Evaluation Support Team and a past participant of the Graduate Program who had undertaken training in evaluation was set up to start the evaluation. Discussion amongst the group led to the development of five key evaluation questions based on the information needs of the program’s key stakeholders.

The key evaluation questions were:

  1. To what extent does the Graduate Program achieve its objectives?
  2. What impact does the Graduate Program have on the individual graduate, NRE, and industry and rural communities?
  3. What are the strengths and weaknesses of the Graduate Program?
  4. How can the Graduate Program be improved?
  5. What is the economic benefit of the Graduate Program to NRE?

The group decided to conduct the evaluation internally (within the organisation) for a range of reasons: the internal evaluator already had an understanding of the program and NRE, the knowledge gained from the evaluation would remain in NRE, and the process would foster the development of evaluation skills within NRE. In addition, conducting the evaluation internally minimised costs and enhanced the ability to facilitate the implementation of the recommendations. The only perceived drawbacks of an internal evaluation were the objectivity of the evaluation and the credibility of the outcome. A series of actions was undertaken to address these issues. First, assistance was sought from the Evaluation Support Team to oversee the evaluation and to provide technical input. Second, a consultant was brought in to conduct the three workshops (group interviews) where data were collected. Finally, a steering committee was set up to oversee the evaluation. The steering committee comprised the key stakeholders in the program: the Program Manager, an Evaluation Support Team member, a Human Resources member, an Economics Branch member, a Corporate Communications member and representatives of the two Executive Directors who purchase the program.

The establishment of the steering committee was a key step in this evaluation adhering to fundamental premises of utilisation-focused evaluation. According to Patton (1997) effective evaluations adapt to changed conditions and are active-reactive-adaptive in working with primary intended users, such as the members of the steering committee. Furthermore, Wargo (1995) analysed three successful federal evaluations in search of the ‘characteristics of successful program evaluations’ and found that active involvement of key stakeholders was critical at every stage: planning the evaluation, conducting the evaluation, and disseminating the findings. Such involvement occurred with the steering committee of this evaluation.

The aim of establishing the steering committee for this evaluation was to gain input and ownership from all of the major stakeholders in the program. Patton (1997) highlighted that people who participate in creating something tend to feel more ownership of what they have created and make more use of it. Active participants in evaluation, such as those involved in the steering committee and the evaluation participants themselves, are more likely to feel ownership not only of their evaluation findings, but also of the evaluation process itself (Patton 1997).

A major consideration when establishing an evaluation of this nature is to allow enough time in the design of the evaluation to communicate with the stakeholders. Considerable time is absorbed in consultation with stakeholders in designing and conducting the evaluation, let alone in assisting to implement the recommendations. It must be remembered that for any particular evaluation there are likely to be multiple levels of stakeholders and therefore there needs to be multiple levels of stakeholder involvement (Patton 1997).

Conducting of the evaluation

The evaluation focused on assessing the impacts of the program and outlining potential improvements that could be made to the program. The evaluation was conducted using a series of workshops (group interviews) and interviews, which addressed the key evaluation questions of the stakeholders in the program. The key stakeholders included the graduates, past graduates, supervisors, program management, industry program coordinators and key project leaders for the two divisions that purchase the programs. The Economics Branch in NRE also conducted an economic analysis of the Graduate Program.

The workshop method used in this evaluation is discussed in detail below as an example of a utilisation-focused evaluation method. The workshop method (group interview) was used to collect feedback from the current graduates about what they considered to be the strengths and weaknesses of the program. Discussion and analysis of these weaknesses during the workshop then enabled the graduates to develop improvements in conjunction with program management. The current graduates drew on their experience in the program, in consultation with the others involved in the group interview, to develop the suggested improvements. The improvements were then ranked according to their importance and urgency, and graduates volunteered to work on improvement areas that interested them. The program management focused on addressing suggested improvements that related to program design. This process gave the graduates and program management ownership of the results and the ability to improve their program. The workshops were effective in identifying improvements to the program due to the positive attitude of the program management toward the evaluation. This created a safe environment in the workshops that led to open and frank discussion between graduates and program management. The workshop method enabled decisions on improvements to be made immediately, as program management personnel were present. Subsequently this enabled the results of the evaluation to be implemented straight away by people who had an understanding of the issues.

The workshop process outlined above addressed many of the key elements of utilisation-focused evaluation. According to Patton (1997) high-quality involvement of intended users in an evaluation results in high-quality, useful evaluations. This type of evaluation is referred to as ‘empowerment evaluation’ and is most appropriate where the goals of the program include helping participants to become more self-sufficient and personally effective. Such evaluations are also intervention-oriented, in that the evaluation is designed and implemented to support and enhance the program's desired outcomes. In order to do so, collection of evaluation data and their use must be integrated into program delivery and management, rather than being separated from or independent of the program processes (Patton 1997). These documented attributes of utilisation-focused evaluation were integral parts of the evaluation of the Graduate Program.

The processes used in conducting this evaluation also addressed key principles of ‘participatory evaluation’ (Patton 1997). These include the participants working together as a group, with support from the evaluation facilitator, to assist group cohesion and collective inquiry, and the evaluation facilitator recognising and valuing participants’ perspectives and helping participants to recognise and value their own and each other’s expertise. As a result, differences in status between the evaluation facilitator and participants are minimised. Utilisation-focused evaluation is inherently participatory and collaborative in actively involving primary intended users in all aspects of the evaluation (Patton 1997). This evaluation provides direct evidence of the effectiveness of this strategy in increasing use of findings. A potential subsequent benefit of this participatory and collaborative evaluation is an ongoing, longer-term commitment to using evaluation logic and building a culture of learning (Patton 1997) in both the Graduate Program and NRE.

Reporting on the evaluation

Reporting on the evaluation to a range of key stakeholders was an ongoing process throughout the evaluation. An initial briefing was given to all stakeholders about the program, reminding them of its objectives, providing background on how it was run and details on what the evaluation was trying to achieve. Monthly reports were prepared for the steering committee meetings that outlined the evaluation findings, issues, plans and communication needs. Reports on the workshops conducted with past and current graduates were sent to key stakeholders. Presentations were made at the Graduate Program’s quarterly meeting on the progress of the evaluation and updates given on new findings.

A preliminary report on the evaluation was compiled to keep all stakeholders informed of progress. This was followed by a final report that was again issued to all the stakeholders. Particular effort was made to keep the report brief (5 pages), reporting only on the significant findings of the evaluation. The report used graphs and diagrams to help communicate findings. Both the preliminary and final reports were presented to, and approved by, the steering committee prior to release. The recommendations in the final report had input from the Program Manager to ensure that they were presented in a form that would assist implementation. The primary aim of the brief reports was to make them user-friendly. This is consistent with the fundamental premise of utilisation-focused evaluation highlighted by Patton (1997) that the style, format, content, and process of reporting should all be geared towards intended use by the intended users. In general, policymakers and funders are more likely to read concise executive summaries than full reports (Patton 1997).

Utilisation of the evaluation

The recommendations in the final report addressed key stakeholders in the program. The list of recommended improvements to the program that were generated at the workshop was implemented over a period of time by the graduates and program management. The workshop method enabled graduates and management to address issues while they were still fresh in their minds, which was important in having the improvements implemented.

A number of briefings to the Executive Directors of the two divisions that funded the program, and a draft proposal, were required to achieve implementation of the recommended expansion of the program. The information from the final report was put into Bennett's Hierarchy (Bennett and Rockwell 1995) to communicate key messages to this particular audience. The combination of the economic analysis, the staff retention rate data (quantitative data) and the impact information highlighting the program's achievements towards addressing key NRE objectives (qualitative data) was very important in getting the expansion approved. Working through issues or misconceptions that the program management had with the current program was also important in getting the expansion approved. There was a real need to provide a clear overview of the program in these meetings and to answer questions relating to the expansion. Overall, considerable work went into making sure that the recommendations were implemented.

Utilisation-focused evaluation is concerned with its use from the beginning, and the final written report is only one of many mechanisms for facilitating use (Patton 1997). According to Hendricks (1984) formal oral briefings, presented with careful preparation and skill, can have immediate and dramatic impact. This author also believes that good charts and graphics are key to capturing attention and communicating quickly. Experience in this evaluation concurs with these findings.


The evaluation found that the Graduate Program is providing NRE with additional, unexpected benefits. These include improved staff retention rates, succession planning for programs, and staff who have extensive networks across the state, extensive knowledge of the organisation and strong leadership qualities, who value corporate contribution and are more comfortable with change. Finally, the program provides NRE with highly skilled extension staff who have a real ownership of NRE. However, the focus of this report is not on the results of the evaluation themselves but, more importantly, on how the findings were used. First, the graduates developed and implemented processes for improving their program, while the program management implemented the alterations to program design that had been identified. Second, the Graduate Program has now been expanded so that it has the capacity to take in three times the number of graduates and is available to all the divisions in NRE. The program now caters for both extension and research graduates. This expansion gives more university graduates entering NRE access to specialised training and supervision.

The proposed expansion of the Graduate Program also initiated a review of the three specialised recruitment programs in NRE. These programs target either current or recently graduated university students and provide them with specialised training. A steering committee has been set up to manage the three programs under what is now known as the ‘Graduate Strategy’. As a direct result of the evaluation and the subsequent development of the Graduate Strategy, the training and supervision of all general staff intake is being reviewed. The Secretary of NRE listed as one of the key achievements for the year ‘the reviewed intake of graduates and development of an expanded graduate recruitment program, that will result in an increasing focus on graduate employment over coming years’.


In conclusion, the recommendations from this case study that could be applied to achieve high utilisation of your evaluation are:

  • the evaluation must be initiated by people who want to use the findings;
  • communication with all the stakeholders is essential throughout all stages of the evaluation;
  • the methods used should engage participants and enable them to become actively involved in the evaluation of their program, which results in ownership;
  • conduct the evaluation internally, as the evaluators are familiar with the program and the organisation, which assists when designing the evaluation, tailoring the recommendations and accessing information and resources within the organisation;
  • that time must be allocated in the evaluation design to communicate with all the stakeholders to gain input and ownership of evaluation;
  • that time must also be allowed in the evaluation design to assist with the implementation of the recommendations.

The four most important features that contributed to the utilisation of the evaluation were the positive attitude of the key stakeholders towards evaluation, the methods used in the evaluation, the time devoted to communication with stakeholders and the time taken to assist in implementing the recommendations.

It has been pointed out by Patton (1997) that evaluators need ‘people skills’ in how to build relationships, to facilitate groups, to manage conflict, to walk ‘political tightropes’, and to communicate effectively. All of these were skills required by the evaluators of the Graduate Program. The evaluators had to become facilitators, collaborators, and teachers in support of program participants and staff engaging in their own evaluation (Patton 1997).


  1. Anon. (1999) CAS workforce data. (Department of Natural Resources and Environment: Melbourne)
  2. Bennett C and Rockwell C (1995) Targeting Outcomes of Programs (TOP): An Integrated Approach to Planning and Evaluation. (United States Department of Agriculture: Washington).
  3. Hendricks M (1984) Preparing and using briefing charts. Evaluation News 5 (3), 19-20.
  4. House ER (1972) The conscience of educational evaluation. Teachers College Record 73 (3), 405-14.
  5. Patton MQ (1997) Utilisation-focused evaluation: the new century text. 3rd Ed. (Sage Publications: California).
  6. Wargo MJ (1995) The impact of federal government re-invention on federal evaluation activity. Evaluation Practice 16 (3), 227-37.
