
A paper by the late Dr Paul Brock, looking at recent research into the elements of teacher professional development that lead to improved student outcomes.

How can evidence-based research answer the following question:

Are there forms of professional development or learning processes which, when applied to school teaching and learning contexts, make significant contributions to demonstrably improved student learning outcomes?

Download the Expert Paper (PDF, 450 KB)

Some basic principles that I suggest should underpin all quality policy development in education

It is certainly true that evidence-based research must underpin any authentic response to the fundamental question posed above, and this will be the principal focus of this paper. But from a broader policy perspective, evidence-based research is an example of when what is ‘necessary’ may not be ‘sufficient’. Already completed and published research is but one of at least four interdependent fundamental sources of information and understanding that need to be heeded.

The second fundamental source is scholarship – i.e. the ideas, speculation, imagination, creativity, innovation and so on – generated and articulated by thinkers who would not fit the mould of evidence-based researchers.

The third is the wisdom that excellent teachers, principals and other school leaders distil from reflection on their experience – practitioners who may never have undertaken evidence-based research and may never have published in the scholarship genre, but who are able to abundantly irrigate educational theory and practice through their own reflections on their expertise and experience.

The fourth is practical, good old-fashioned strategic nous, which might be described as that down-to-earth, insightful, flexible exercise of common sense, fully aware of the complexities of the relevant context.

Some reflections on research 

There is considerable educational research that merely confirms what good teachers, principals and educators in many contexts have known or suspected for quite a while. For example, research has demonstrated that the quality of teaching is the most significant within-school factor in the quality of student learning, and that within-school differences are often more significant than between-school differences. These are really ‘no brainers’ these days.

When seeking to establish any compelling link between cause and effect in research, it is always important not to mistake correlation for causation.

When reading the outcomes of any particular piece of educational research, it is necessary to stress the importance of context when assessing the value of that research. For example, one should generally respond cautiously to any black and white research pontifications about the significance of any one, isolated, factor within the rich and diverse landscape that constitutes teaching and learning.

We must carefully exercise our critical powers when reading research. The questions that should arise include the following: Who undertook the research? What is their reputation? What was the purpose of the research? What was its context? What methodology was used? What were the underlying assumptions? Who funded the research? Who may have benefitted from it? What data were included? Were any data excluded? How is the research intended to be used?

Teacher ‘professional development’ or teacher ‘professional learning’?

As in many areas of education, it is important not to get too caught up in semantics. For example, striving for any black and white distinction between teacher ‘professional development’ and teacher ‘professional learning’ can be counter-productive. Any professional development that does not involve professional learning – and any professional learning that does not involve professional development – is not worth very much at all. For the purpose of this short paper, I am focusing on the professional development of teachers through learning. I will use the expressions professional development, professional learning, professional development / learning and professional learning / development interchangeably in this paper.

False and misleading dichotomies

One of the features that too often bedevils educational theory and practice is asserting or imposing dichotomies where they do not exist.

What are known as the ‘Literacy wars’ provide classic examples of this very thing. Too often acolytes of ‘gurus’ hurled abuse (however scholarly the phrasing) at each other without fully understanding the depth and nuances of the theories or practices they were inveighing against. What one side claimed that the other side ignored may be found, on close forensic inspection, to be untrue – and vice versa. Another tactic not infrequently used is the ‘strawperson’ extremist misrepresentation of the opponent’s position, which is then rather easily demolished. Again, not infrequently it is the campaign waged by acolytes of notable ‘gurus’ which is more aggressive, even combative, than the positions taken by the originating researcher or scholar.

Incidentally, in some of the adversarial discourses within education in general, one sometimes hears the almost dichotomous assertion – made either in defence or attack – that ‘it works all right in theory, but it doesn’t work in practice’. I have always held the view, however, that if the theory does not work in practice, then there is something wrong with the theory; or the practice has not properly applied the theory; or a combination of both.

Teacher professional learning / development – a personal view

In general, professional learning / development for experienced teachers (early career teachers are not the focus of this paper), to assist them to become better professionals, will be characterised by at least an appropriate richness and rigour of understanding of content; expertise in engaging students in their learning; diversity and flexibility in pedagogical practice; and a thorough understanding of the importance of purpose and context in learning. This will all be true whether the learning is undertaken by the teacher herself or himself, or provided by the education system within which the teacher exercises her or his professionalism.

I believe that there are two major dimensions in the theory and practice of teacher professional development / learning:

• Like any member of a profession, every teacher or principal has a responsibility for their own professional development. For example, secondary school English teachers have a personal professional responsibility to keep abreast of research and scholarly developments in their field. Such professional development / learning can take a variety of forms.

• Those who employ teachers have a responsibility to provide systemic professional development / learning for their teachers and principals when significant systemic change is undertaken. For example, the NSW Department of Education recognises and accepts its responsibility to provide appropriate professional learning for those with the responsibility of implementing new systemic policies. Similarly, such professional development / learning can take a variety of forms.

The phrase ‘life-long learning’ has become almost a contemporary mantra, if not indeed a cliché. But it has validity. For teachers and principals this can be expressed as ‘career-long learning’. Apart from anything else, those who seek to have their students learn must be learners themselves. Learning is not a static process – either for teacher or for student.

Teacher professional development / learning – analogies with student learning  

As far as student learning is concerned, it is now the increasingly accepted view that there is no one silver bullet form of teaching and learning. Here are some examples of what can be components of effective pedagogy:

• ‘Stand and deliver’– for example, a teacher giving a lecture to combined classes of Year 10 students studying Macbeth – can be one perfectly legitimate component of a diversified teaching / learning strategy.

• A ‘typical’ classroom lesson – provided the teacher is thoroughly familiar with the content, is able to engage the students effectively, and incorporates a range of teaching / learning strategies – is another string to such a flexible strategic bow.

• Small discussion groups of students – properly set up and monitored by the teacher, with clearly enunciated principles, processes of engagement and authentic forms of assessment – also feed into the mix.

• Students focusing on their work in pairs also have a place.

• As does, of course, a student working on her or his own – in whatever learning space this may occur, whether it be the classroom, the library, under a tree or at home.

Any one particular approach, consistently applied in isolation, is not, sui generis, the silver bullet. And when you cross-reference or irrigate this (or any more extensive mixture) with more macro pedagogical approaches – for example, problem-solving or project-based methodologies – other possibilities come into play. When the considerable array of potential learning and teaching flexibilities and synergies generated by multimedia information and communication technology platforms is overlaid on all of the above, the possibilities are rich indeed. But at the same time, such technological wizardry in no way removes the timeless educational need for discernment, curiosity, knowledge, understanding, skills, values and all those other characteristics of quality education articulated and outlined, for example, in the Melbourne Declaration on Educational Goals for Young Australians.

Two perspectives from just over a decade ago on the issue of evidence-based research for identifying features of high quality professional learning

 

Borko, 2004

In 2004 Hilda Borko (in her paper ‘Professional development and teacher learning: mapping the terrain’) was trenchant in her criticism of the quality of professional development then available to school teachers in the United States of America. She lamented the absence of high quality, evidence-based research to underpin professional development. Indeed, she cited Sykes, who characterised the inadequacy of conventional professional development as ‘the most serious unsolved problem for policy and practice in American education today’ (Sykes 1996, p. 465). The premise of my paper is that it still remains a ’serious unsolved problem’ for educational research.

 

Ingvarson, Meiers and Beavis, 2005

A year after Borko published her article, the Australian scholars Lawrence Ingvarson, Marion Meiers and Catherine Beavis – in the introduction to their article ‘Factors affecting the impact of professional development programs on teachers’ knowledge, practice, student outcomes and efficacy’ (2005) – noted the vital role played by professional development in ensuring quality teaching and learning in schools, and the increased interest in research that identifies features of effective professional learning. They called for more sophisticated methods for evaluating professional development programs, arguing that the previous approach of distributing questionnaires at the door no longer sufficed. However, their paper made no explicit call for evaluating the efficacy of professional development for teachers in terms of student learning outcomes; or, in the case of professional development experienced by principals, in terms of improved learning outcomes across their schools.

More recent perspectives

A reading of the articles by Borko and by Ingvarson, Meiers and Beavis raises the question of what evidence-based research could demonstrate the efficacy of various forms of professional development / learning – other than surveying teachers about the better or best forms they have experienced. Of course this subjective assessment provides one legitimate source of information.

But it would clearly be of value if there could be some form or forms of evidence-based research that could show that improved development / learning outcomes of their students could be attributed to improved professional development / learning outcomes of their teachers.

 

Timperley, 2008

Timperley’s booklet Teacher Professional Learning and Development (2008) is based on a synthesis of research evidence produced for the New Zealand Ministry of Education’s Iterative Best Evidence Synthesis (BES) Programme, which was designed to be a catalyst for systemic improvement and sustainable development in education.

At the outset, Helen Timperley sets out the aim of this relatively short work as follows: ’The focus of this particular booklet is on the interrelated conditions for professional learning and development that impact positively on valued student outcomes’ (pp. 6-7).

The publication is built around 10 interdependent principles for driving effective forms of teacher professional development / learning, each explored in its own chapter. The full list of principles is reproduced later in this paper.

Chapter 10, ‘Maintaining momentum’, commences with the following highlighted statement: ’Sustained improvement in student outcomes requires that teachers have sound theoretical knowledge, evidence-informed inquiry skills, and supportive organisational conditions’ (p. 24). What follows in that chapter warrants direct quoting:

Research findings

Regrettably, most efforts to improve student outcomes through professional learning and development are short-lived. For improvement to be sustained, short-term perspectives need to be extended to more distant horizons.

Although the research base identifying the conditions associated with long-term improvement is somewhat thin, one thing does appear clear: sustainability depends both on what happens during the professional learning experience and on the organisational conditions that are in place when external support is withdrawn.

The professional learning experience 

A sustained improvement in student outcomes depends firstly on teachers developing strong theoretical frameworks that provide them with a basis for making principled changes to practice in response to student needs. When confronted with specific teaching–learning challenges, teachers can go back to the theory to determine what adjustments they need to make to their practice.

Sustained improvement also depends on teachers developing professional, self-regulatory inquiry skills so that they can collect relevant evidence, use it to inquire into the effectiveness of their teaching, and make continuing adjustments to their practice. Teachers with these crucial self-regulatory skills are able to answer three vital questions: ’Where am I going?’, ’How am I doing?’, and ’Where to next?’ The answer to the ’Where am I going?’ question is sometimes referenced explicitly to national or state standards; more often it is found in, for example, improvements in students’ mathematical problem solving or text comprehension. The answer to the question, ’How am I doing?’ is a measure of how effective teaching is in terms of student progress. The answer to the ’Where to next?’ question is guided by a detailed and theoretically sophisticated knowledge of curriculum content and student progressions.

Organisational conditions

Continued forward momentum also depends on an organisational infrastructure that supports professional learning and self-regulated inquiry. It is difficult for teachers to engage in sophisticated inquiry processes unless site-based leaders reinforce the importance of goals for student learning, assist teachers to collect and analyse relevant evidence of progress toward them, and access expert assistance when required (pp. 24-25).

 

Towards the end of this fairly short publication, Timperley lists six questions or issues that underpin the set of principles explored in the preceding chapters. These questions or issues are as follows:

• What educational outcomes are valued for our students and how are our students doing in relation to those outcomes?

• What has been the impact of our changed actions on our students?

• Engagement of students in new learning experiences.

• What knowledge and skills do we as teachers need to enable our students to bridge the gap between current understandings and valued outcomes?

• How can we as leaders promote the learning of our teachers to bridge the gap for our students?

• Engagement of teachers in further learning to deepen professional knowledge and refine skills (pp. 26-27)

The following is what Professor Timperley considers to be the 10 principles for driving effective forms of teacher professional development or learning. She emphasises that the 10 principles ‘do not operate independently; rather, they are integrated to inform cycles of learning and action’ (p. 28).

1. Focus on valued student outcomes

Professional learning experiences that focus on the links between particular teaching activities and valued student outcomes are associated with positive impacts on those outcomes.

2. Worthwhile content

The knowledge and skills developed are those that have been established as effective in achieving valued student outcomes.

3. Integration of knowledge and skills

The integration of essential teacher knowledge and skills promotes deep teacher learning and effective changes in practice.

4. Assessment for professional inquiry

Information about what students need to know and do is used to identify what teachers need to know and do.

5. Multiple opportunities to learn and apply information

To make significant changes to their practice, teachers need multiple opportunities to learn new information and understand its implications for practice. Furthermore, they need to encounter these opportunities in environments that offer both trust and challenge.

6. Approaches responsive to learning processes

The promotion of professional learning requires different approaches depending on whether or not new ideas are consistent with the assumptions that currently underpin practice.

7. Opportunities to process new learning with others

Collegial interaction that is focused on student outcomes can help teachers integrate new learning into existing practice.

8. Knowledgeable expertise

Expertise external to the group of participating teachers is necessary to challenge existing assumptions and develop the kinds of new knowledge and skills associated with positive outcomes for students.

9. Active leadership

Designated educational leaders have a key role in developing expectations for improved student outcomes and organising and promoting engagement in professional learning opportunities.

10. Maintaining momentum 

Sustained improvement in student outcomes requires that teachers have sound theoretical knowledge, evidence-informed inquiry skills, and supportive organisational conditions (pp. 8-25).

In outlining her principles for professional development or learning, Timperley hits the nail on the head. She points directly at the need to link professional development or learning with demonstrable student learning outcomes. However, in subsequent years it has been difficult to find any comprehensive evidence-based research able to demonstrate improved student learning outcomes that can be attributed to improved professional development or learning outcomes for teachers.

 

Schleicher, 2011

In his research report Building a high-quality teaching profession: lessons from around the world (2011), Dr Andreas Schleicher – well-known internationally as the OECD’s Special Advisor to the Secretary-General on Education Policy and Head of the Indicators and Analysis Division at the Directorate for Education – provided a critical collation of evidence-based international research on the efficacy of professional development. Drawing on one of the largest international surveys of teachers and school principals, the report was based on data from over 70,000 lower secondary teachers and school principals in the 23 participating countries. In reference to the report, Schleicher noted that ‘relatively few teachers participate in the kinds of professional development which they find has the largest impact on their work’ (see Figure 3.16 in his presentation, included in the bibliography).

Referring to results from the 2009 Teaching and Learning International Survey (TALIS), he noted that the professional development opportunities rated by teacher participants as having the highest impact on their work were: individual and collaborative research; qualification programs; informal dialogue to improve teaching; reading professional literature; courses and workshops; and professional development networks. But those with the highest levels of participation were informal dialogue to improve teaching, and courses and workshops.

However, the efficacy of all of these professional development programs was determined only by the aggregation of each participant’s own self-assessment of the ’impact’ of the professional development or learning on their teaching. No evidence was provided – or at least reported – on the efficacy as demonstrated by the improved learning of their students.

 

McIntyre, 2012

Early in the monograph A significant and direct impact by teachers and school leaders on student learning outcomes in literacy (2012) – written by Ann McIntyre, then Director of the NSW Department of Education and Training’s (as it was then known) Professional Learning and Leadership Development Directorate, following discussions with her colleagues – reference is made to then recent research on the impact of school leaders and teachers on student learning, and its link to professional learning or development. McIntyre noted that ’nearly 60 per cent of a school’s impact on student achievement is attributed to principal and teacher effectiveness’ (McIntyre, 2011, p. 9, citing McKinsey, 2010, p. 5, which itself cited Barber, Whelan and Clark, 2010) and that there is an interdependent link between student and school improvement and professional learning (cf. Elmore, 2006; Robinson, 2007).

McIntyre proceeded to argue that successful school improvement is dependent upon the capacity of a system to successfully undertake the following four actions:

1. Research and analyse the practices of school leaders and teachers that have the most significant impact on student learning.

2. Develop professional learning programs that articulate and promote these practices and develop teacher and school leader capacity to implement this learning in the context of their school and classrooms.

3. Analyse the learning needs of students, develop clear targets for school and classroom action and implement coherent, aligned professional learning strategies to build both teacher and school leader capacity to improve student learning.

4. Implement school improvement systems that enable the alignment of teacher and school leader learning to student learning. 

(cf. McIntyre, 2011, pp. 48-49)

It is worth noting that Dr Ben Jensen, in his analysis of PISA-successful East Asian countries and cities, emphasised the practical importance of creating a strong culture of teacher collaboration and mentoring; of teachers observing and providing feedback on their colleagues’ teaching; of sustained high quality professional development; and of highly focused research on the learning development of students within their classrooms (Grattan Institute, 2012).

 

McIntyre, 2013

The paper Teacher quality evidence for action (ACE 2013) by Ann McIntyre is a fine piece of recent research, based on surveys of a large number of public school teachers (approximately 6,000) who provided their own self-assessment of the relative value of a range of professional development or learning ‘programs’ they had experienced.

The six key elements that had the greatest impact for primary teachers were, in order of influence:

• the collaborative preparation of lessons and teaching resources,

• lesson observation and observing each other’s lessons,

• the collaborative assessment and evaluation of student work,

• structured feedback meetings,

• developing evidence to demonstrate the achievement of professional teaching standards,

• team teaching.

McIntyre noted that the benefit teachers gain from working together highlights the importance of reframing activities within schools to ensure that schools are not only places for students to learn but also places for teachers to learn. Structuring time within schools to enable lesson observation and feedback, and the collaborative development and evaluation of lessons, provides a significant source of professional learning for teachers.

Similar responses were given in a study that focussed on teachers who were described by their principals as being accomplished and of high quality.

While, by and large, these forms of professional development or learning can also be found in Dr Andreas Schleicher’s collation, what does not appear in this particular research undertaken by McIntyre is any reference to forms of professional development or learning delivered and/or experienced external to the school itself. This can be explained by the fact that the former NSW Department of Education and Training had become committed to a model of professional learning funding fully devolved to schools, which had to be ‘driven’ by one or more of a set of professional learning themes set down by the Department. McIntyre’s research, which surveyed approximately 6,750 teachers as part of an evaluation of the use of teacher professional learning funds, therefore had its dominant focus on within-school professional learning processes.

As a consequence, and to reiterate, none of the forms of external professional development or learning found to have an important impact on teachers in Schleicher’s research – individual and collaborative research; qualification programs; reading professional literature; courses and workshops; and professional development networks – is recorded as having a significant impact on the teachers in McIntyre’s research. However, individual and collaborative research was one of the strategies evidenced in the analysis of student learning and in approaches to lesson planning and observation. McIntyre found that Schleicher’s professional learning categories did not sit well with her research, because her focus was, as it always has been, on professional learning as an outcomes-driven rather than an input-driven process. This was made very clear in McIntyre’s (as yet unpublished) research paper Teacher learning to improve student learning: professional development policy and impact in NSW, Australia, presented at the Annual Meeting of the American Educational Research Association, Chicago, in April 2015.

In a subsequent piece of research undertaken for the NSW Primary Principals’ Association, commencing in 2013, Ann McIntyre set out to identify evidence of the efficacy – or otherwise – of the professional learning experienced by the principals involved in the pilot of a new principal credential for the Association. The project was modelled on her research into the elements of professional learning most likely to impact practice. The program involved face-to-face evidence-informed learning seminars, mentoring by successful practising principals, and action learning over the 18 months of the project. At the conclusion of the research project, participants were to present evidence that included the ‘School improvement challenge’, the ‘Performance and development plan’ and an ‘Executive summary’ outlining their learning as leaders through the learning development processes they had implemented in their schools as a result of the professional learning. Essential to this concluding document would be demonstrable evidence of efficacy, aligned to both the Australian Professional Standard for Principals and the key accountabilities of their role. Results of this research were not available at the time this paper was written; when published, it should provide demonstrable evidence of improved learning as a result of identified professional learning processes.

Conclusion

As indicated in the note below, this project commenced in 2014; therefore much, though not all, of the research cited in the preceding text does not go beyond 2013. Even from the fairly brief collation of research identified and discussed in this paper, what clearly emerges is the need for authentic, evidence-based causal (not merely correlational) links between the provision or experience of identified professional development or learning processes and demonstrable student learning outcomes that can be directly attributed to those professional development or learning causes.

 

A word from the author, 17 November 2015.
I commenced drafting this paper in August 2014. For a number of reasons, its completion was delayed. Most of the original draft remains. The first eight paragraphs of this paper can be found in my article, ‘Show an Affirming Flame: A Message to the Profession’ in the Journal of Professional Learning, https://cpl.asn.au/journal/semester-2-2015/show-an-affirming-flame-a-message-to-the-profession  

 About the author.
Dr Paul Brock AM FACE FACEL was Director of Learning and Development Research with the NSW Department of Education, an Adjunct Professor in the Faculty of Education and Social Work at the University of Sydney, an Honorary Research Fellow of the University of New England and an Honorary Associate with the Faculty of Medicine at the University of Sydney. He was also a Vice Patron of the Motor Neurone Disease Association of NSW. Dr Brock authored, co-authored and edited more than 130 publications, including books, book chapters, monographs, refereed journal articles and poetry. He also delivered over 100 academic papers to international and Australian conferences. Dr Brock passed away in March 2016.


 

 


All education programs are well-intentioned and many of them are highly effective. However, there are usually more ways than one to achieve good educational outcomes for students. When faced with this scenario, how do educators and education policymakers decide which alternative is likely to provide most ‘bang for buck’?

There’s also an uncomfortable truth that educators and policymakers need to grapple with: some programs are not effective and some may even be harmful. What is the best way to identify these programs so that they can be remediated or stopped altogether?

Program evaluation is a tool to inform these decisions. More formally, program evaluation is a systematic and objective process to make judgements about the merit or worth of our actions, usually in relation to their effectiveness, efficiency and appropriateness (NSW Government 2016). Evaluation and self-assessment are at the heart of strong education systems, and evaluative thinking is a core competency of effective educational leadership. Teachers, school leaders and people in policy roles should all apply the principles of evaluation to their daily work.

Research shows that:

  • Effective teachers use data and other evidence to constantly assess how well students are progressing in response to their lessons (Timperley & Parr, 2009).
  • Effective principals constantly plan, coordinate and evaluate teaching and the use of the curriculum with systematic use of assessment data (Robinson, Lloyd & Rowe, 2008).
  • Effective education systems engage all school staff and students in school self-evaluations so that program and policy settings can be adjusted to maximise educational outcomes (OECD, 2013).

 

This Learning Curve sets out five prerequisites for effective evaluation in education. These are not the only considerations and they are not unique to education. However, if these prerequisites are missing, evaluation will either not be possible or will be ineffective.
The five prerequisites for effective evaluation in education are:
  1. Start with a clear and measurable statement of objectives
  2. Develop a theory about how program activities will lead to improved outcomes (a program logic) and structure the evaluation questions around that logic
  3. Let the evaluation questions determine the evaluation method
  4. For questions about program impact, either a baseline or a comparison group will be required (preferably both)
  5. Be open-minded about the findings and have a clear plan for how to use the results.

 

1. Start with clear and measurable objectives

It may sound obvious, but understanding whether program activities have been effective requires a clear understanding of what the program is trying to achieve. The objectives also need to be measurable.

For some programs or activities this is very easy. For example, reading interventions like Reading Recovery aim to improve students’ ability to read. In these instances it is easy to start with a clear statement of objectives (i.e. to improve students’ ability to read). It is also quite easy to measure outcomes because reading progression is relatively easy to measure (although the issue of causal attribution is important – more on that later).

However, for some programs, it can be more difficult to develop a clear statement of objectives and it is even more difficult to measure whether they have been achieved. Take the Bring Your Own Device (BYOD) policy as an example. The objective of BYOD is often described as using technology to ‘deepen learning’, ‘foster creativity’ or ‘engage students’. These are worthy objectives. The challenge for schools and systems is to work out whether they have been achieved. What does ‘deep learning’ look like and how can it be measured? How will teachers know if a student is more ‘creative’ or ‘engaged’ now than they were before? How much of that gain is due to the program or policy (BYOD) and how much is due to other factors?

Figure 1 provides some examples of common objectives and possible measures that will inform whether they have been achieved. These are highly idealised examples and the problems that educators are trying to solve are usually more multi-faceted and complex than these. In some cases it may not even be possible to robustly measure outcomes. In other cases, there may be more than one outcome resulting from a set of activities. However, no matter how hard and complex the problem, if there is no clarity about what the problem is, there is also no chance of measuring whether it has been solved.

Figure 1. Some examples of common objectives and measures that might inform whether they’ve been achieved

 

Figure 2. A simple logic model

 

2. Linking activities and outcomes

Effective programs have a clear line of sight between the needs they are responding to, the resources available, the activities undertaken with those resources, and how activities will deliver outcomes. Logic modelling is one way to put these components on a piece of paper. Wherever possible, this should be done by those who are developing and implementing a program or policy, in conjunction with an experienced evaluator. At its most simple, a logic model looks like that shown in Figure 2.

The needs describe the problem at hand and why it is important to solve it. Inputs are the things put in to address the need (usually a combination of money, time and resources). Activities describe what happens with the inputs. Outcomes are usually expressed as measures of success. A logic model is not dissimilar to the processes used in school planning: needs are usually the strategic priorities identified in the plan; inputs are the resources allocated to address those needs; activities are often referred to as processes or projects; and outcomes and impacts are used interchangeably. Figure 3 gives some common examples of needs, inputs, activities and outcomes.

Some of these examples are ‘add-on’ activities to business-as-usual (e.g. speech pathology) and some simply reflect the way good teachers organise their classroom (e.g. differentiated instruction). Figure 3 merely serves to illustrate that the evaluative process involves thinking about the resources going into education, how those inputs are organised and how they might plausibly lead to change.
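To make the structure concrete, here is a minimal sketch of a logic model expressed as a small Python data structure, from which basic process and outcome questions can be derived. The class, its field names and the reading-program example are illustrative assumptions for this sketch, not a CESE template or worksheet.

```python
from dataclasses import dataclass, field

# A minimal sketch of a program logic model as a data structure.
# Class and field names are illustrative, not a CESE specification.
@dataclass
class LogicModel:
    need: str                                       # the problem being responded to
    inputs: list = field(default_factory=list)      # money, time, resources
    activities: list = field(default_factory=list)  # what happens with the inputs
    outcomes: list = field(default_factory=list)    # measurable statements of success

    def evaluation_questions(self):
        """Derive basic process and outcome questions from the model."""
        questions = [f"Was this activity delivered as planned: {a}?" for a in self.activities]
        questions += [f"Was this outcome achieved: {o}?" for o in self.outcomes]
        return questions

# Hypothetical example: a small-group reading intervention.
reading_program = LogicModel(
    need="Year 3 students reading below expected level",
    inputs=["0.2 FTE teacher time", "levelled reading materials"],
    activities=["daily small-group guided reading sessions"],
    outcomes=["students' reading levels grow faster than the expected benchmark"],
)

for question in reading_program.evaluation_questions():
    print(question)
```

Writing a program down in this form forces the ‘clear line of sight’ described above: if an activity cannot be connected to a need or an outcome, the logic is incomplete.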

Good evaluation will make an assessment of how well the activities have been implemented (process evaluation) and whether these activities made a difference (outcome evaluation). If programs are effective, it might also be prudent to ask whether they provide value for money (economic evaluation).

A simple logic modelling worksheet can be found in the Appendix.

 

Figure 3. Some examples of program needs, inputs, activities and outcomes

 

Types of evaluation

Process evaluation is particularly helpful where programs fail to achieve their goals. It helps to explain whether that occurred because of a failure of implementation, a design flaw in the program, or because of some external barrier in the operating environment. Process evaluation also helps to build an understanding of the mechanisms at play in successful programs so that they can be replicated and built upon.

Outcome evaluation usually identifies average effects: were the recipients better off under this program than they would have been in its absence? However, when viewed in combination with process evaluation, it can provide a more nuanced overview of the program: it can explore who the program had an impact on, to what extent, in what ways, and under what circumstances. This is important because very few programs work for everyone. Identifying people who are not responding to the program helps to target alternative courses of action.

Economic evaluations help us choose between alternatives when we have many known ways of achieving the same outcomes. In these circumstances, the choice often comes down to what is the most effective use of limited resources. If programs are demonstrably ineffective, there is little sense in conducting economic evaluations. Ineffective programs do not provide value for money.

 

When program logic breaks down – repeating a school year

While repeating a school year is relatively uncommon in NSW, it is quite common in some countries such as the United States. It is a practice that has considerable intuitive appeal – if a student is falling behind (need), the theory is that an additional year of education (input) will afford them the additional instruction (activity) required to achieve positive educational outcomes (outcome). Evidence suggests that this is true only for a small proportion of students who are held back. In fact, after one year, students who are held back are on average four months further behind similar-aged peers than they would have been had they not been held back.

According to research conducted by the UK Education Endowment Foundation, the reason that repeating a year is not effective is that it “just provides ‘more of the same’, in contrast to other strategies which provide additional targeted support or involve a new pedagogical approach. In addition, it appears that repeating a year is likely to have a negative impact on the student’s self-confidence and belief that they can be an effective learner”. In other words, for most recipients of the program the activities are poorly suited to the students’ needs. In situations like this, well-intentioned activities can actually have a negative impact on a majority of students.

Source: https://educationendowmentfoundation.org.uk/resources/teaching-learning-toolkit/repeating-a-year/

 

3. Let the evaluation questions determine the method

Once a clear problem statement has been developed, the inputs and activities are identified, and intended outcomes have been established, coherent evaluation questions can be developed.
Good evaluation will ask questions such as:

  • Did the program deliver what was intended? If not, why not?
  • Did the program reach the right recipients? If not, why not?
  • Did the program achieve the intended outcome and were there any unintended (positive or negative) outcomes?
  • For whom did it work and under what circumstances?
  • Is this the most efficient way to use limited resources?

All too often educational researchers get hung up on using ‘qualitative’ versus ‘quantitative’ methods when answering these questions. This is a false dichotomy. The method employed to answer the research question depends critically on the question itself.

Qualitative research usually refers to semi-structured techniques such as in-depth interviews, focus groups or case studies. Quantitative research usually refers to more structured approaches to data collection and analysis where the intention is to make statements about a population derived from a sample.

Both approaches will have merit depending on the evaluation question. In-depth interviews and focus groups are often the best ways of understanding whether a program has been implemented as intended and, if not, why not. These methods have limitations when trying to work out impact because, by definition, information is only gleaned from the people who were interviewed. Unless something is known about the people who weren’t interviewed, these sorts of methods can be highly misleading. For example, people who didn’t respond well to the intervention might also be less likely to participate in interviews or focus groups. This is where quantitative methods are more appropriate because they can generalise to describe overall effects across all individuals. However, combining both qualitative and quantitative methods can be useful for identifying for whom and under what conditions the program will be effective. For example, CESE researchers investigating the practices of high-growth NSW schools used quantitative analysis to identify high-growth schools and analyse survey results, and qualitative interviews to find out more about the practices these schools implemented.

The possible sources of data to inform evaluation questions are endless. The key issue is to think about the evaluation question and adopt the data and methods that will provide the most robust answer to that question.

 

4. For questions about program impact, either a baseline or a comparison group will be required (preferably both)

The number one question that most evaluations should set out to answer is: did the program achieve what it set out to achieve? This raises the vexing problem of how to attribute any observed outcomes to program activities.

No single evaluation approach will give a certain answer to the attribution question. However, some research designs will allow for more certain conclusions that the effects are real and are linked to the program. CESE uses a simple three-level hierarchy to classify the evidence strength, as shown in Figure 4. There are many variations on this hierarchy, most of which can be found in the health and medical literature.

Figure 4. CESE Evidence Hierarchy

Taking before (pre) and after (post) measures is a good start and is often the only way to measure outcomes. However, simple comparisons like this need to be treated cautiously because some outcomes will change over time without any special intervention by schools. For example, if a student’s reading level was measured at two time points, they would usually be at a higher level at the second time point just through the course of normal class and home reading practice.

This is where reference to benchmarks or comparison groups is critical. For example, if the typical growth in reading achievement over a specified period of time is known, it can be used to benchmark students against that expected growth. Statements can then be made about whether growth is higher or lower than expected as a result of program activities.
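As a worked illustration of this benchmarking logic, the sketch below compares each student's observed pre/post growth with an assumed expected-growth benchmark. The student labels, scores and benchmark value are hypothetical, invented purely for the example.

```python
# A minimal sketch (not a CESE tool) of benchmarking observed pre/post
# growth against the expected growth over the same period.

def growth_vs_benchmark(pre, post, expected_growth):
    """Return growth relative to the benchmark; positive means above expected."""
    observed_growth = post - pre
    return observed_growth - expected_growth

# Hypothetical reading scores for three students over one term,
# with an assumed benchmark of 4 points of typical growth.
EXPECTED_GROWTH = 4.0
students = [("A", 20, 27), ("B", 18, 21), ("C", 25, 29)]

for name, pre, post in students:
    diff = growth_vs_benchmark(pre, post, EXPECTED_GROWTH)
    print(f"Student {name}: {diff:+.1f} points relative to expected growth")
```

The point of the subtraction is that raw pre/post gains are not attributed to the program; only growth above (or below) what would have been expected anyway is informative.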

An even stronger design is when students (or schools, or whatever the target group is comprised of) are matched like-for-like with a comparison group. This design is more likely to ensure that differences are due to the program and not due to some other factor or set of factors. These designs are referred to as 'quasi-experiments' in Figure 4.
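The following sketch illustrates the matching idea in its simplest form: each participant is paired with the unmatched comparison student whose baseline score is closest. Real matched designs typically match on several covariates (often via a propensity score); the identifiers and baseline scores here are invented for illustration.

```python
# A minimal sketch of like-for-like (nearest-neighbour) matching on a
# single baseline variable, without replacement. Illustrative only.

participants = {"s1": 42, "s2": 55, "s3": 61}    # baseline scores, program group
pool = {"c1": 40, "c2": 54, "c3": 70, "c4": 60}  # potential comparison students

matches = {}
available = dict(pool)
for student, score in participants.items():
    # pick the closest remaining comparison student on baseline score
    best = min(available, key=lambda c: abs(available[c] - score))
    matches[student] = best
    del available[best]  # match without replacement

print(matches)  # {'s1': 'c1', 's2': 'c2', 's3': 'c4'}
```

Outcomes for the matched comparison group then stand in for what would have happened to participants without the program.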

Even better are randomised controlled trials (RCTs) where participants are randomly allocated to different conditions. Outcomes are then observed for the different groups and any differences are attributed to the experience they received relative to their peers. RCTs can also be conducted using a wait-list approach where everyone gets the program either immediately or after a waiting period. RCTs allow for strong causal attributions because the random assignment effectively balances the groups on all of the factors that could have influenced those outcomes.
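A minimal sketch of the RCT logic – random allocation followed by a comparison of group means – might look like the following. The outcome scores are simulated with a built-in effect purely so the example runs end to end; a real trial would measure actual outcomes and apply appropriate statistical tests rather than a raw difference in means.

```python
import random
import statistics

# A minimal sketch of random allocation and a difference-in-means
# estimate. Simulated data; illustrative only.
random.seed(42)  # reproducible example

participants = [f"student_{i}" for i in range(40)]
random.shuffle(participants)
treatment, control = participants[:20], participants[20:]

# Simulated outcome scores: the treatment group is given a higher mean
# purely to build a known effect into the toy data.
outcomes = {p: random.gauss(65 if p in treatment else 60, 10) for p in participants}

treatment_mean = statistics.mean(outcomes[p] for p in treatment)
control_mean = statistics.mean(outcomes[p] for p in control)
print(f"Estimated program effect: {treatment_mean - control_mean:.1f} points")
```

Because allocation is random, any systematic difference between the two groups' outcomes can be attributed to the program rather than to pre-existing differences between the groups.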

RCTs have a place in educational research but they will probably always be the exception rather than the rule. RCTs are usually reserved for large-scale projects and wouldn't normally be used to measure programs operating at the classroom level. Special skills are required to run these sorts of trials and most of the programs run by education systems would be unsuited to this research design. In the absence of RCTs, it is still important to think about ways to measure what the world looked like before the activity began and what it looked like after some period of activity has been undertaken. This requires taking baseline and follow-up measures and comparing these over time.

As a rule, the less rigorous the evaluation methodology, the more likely we are to falsely conclude that a program has been effective. This suggests that stronger research designs are required to truly understand what works, for whom and under what circumstances.

 

5. Be open-minded and have a clear plan for how to use the results

In all of the above, it is crucial for educators to be open-minded about what the results of the evaluation might show and be prepared to act either way. Evaluation should not be a tool for justifying or ‘evidence washing’ a predetermined conclusion or course of action. The reason for engaging in evaluation is to understand program impact in the face of uncertainty. It provides the facts (as best they can be estimated) to help make decisions about how to structure programs, whether they should be expanded, whether they need to be adjusted along the way, or whether they need to stop altogether.

Evaluation not only asks ‘what is so?’ – it also asks ‘so what?’ In other words, evaluation is most useful if it will lead to meaningful change. Before embarking on any evaluation, it is important to think about what can reasonably be achieved from the research. If continuation of the program is not in question, it may be better to focus on process questions bearing on program efficiency or quality improvement. It is also important to think about stakeholders, how they might react to the evaluation, and what needs to happen to keep them informed along the way.

In accordance with the NSW Government Program Evaluation Guidelines (NSW Government 2016), evaluation should be conducted independently of program delivery and it should be publicly available for transparency. Independence might not always be possible where no budget exists or where activity is business-as-usual or small in scale (e.g. classroom-level or school-level programs). Evaluative thinking is still critical in these circumstances as part of ongoing quality improvement.

Where a formal evaluation has been conducted, transparency is a critical part of the process. Stakeholders need to understand the questions the evaluation sought to answer, the methods employed to answer them, any assumptions that were made, what the evaluation found and the consequences of those findings. Transparency also helps people in later times or in other schools or jurisdictions to identify what works.

 

Conclusion

To embed the sort of evaluative thinking described above into activity across education requires everyone to be evaluative thinkers in one way or another. Everyone designing or implementing a program needs to be clear on what problem they are trying to solve, how they are planning to solve it and how success will be measured.

For smaller, more routine programs and policies, performance should be monitored using the sort of benchmarking described above to determine the effectiveness, efficiency and appropriateness of expenditure. This could be done by an early childhood service Director, a school teacher, a principal, a school leadership group, Directors Public Schools or Principals School Leadership. If more technical assistance is required, it may be better to bring in that expertise.

 

References

Centre for Education Statistics and Evaluation 2015, ‘Six effective practices in high growth schools’, Learning Curve Issue 8, Centre for Education Statistics and Evaluation, Sydney.

NSW Government 2016, ‘NSW Government Program Evaluation Guidelines', Department of Premier and Cabinet, NSW Government, Sydney.

OECD 2013, ‘Synergies for better learning: An international perspective on evaluation and assessment’, OECD Publishing, Paris.

Robinson, V, Lloyd, C & Rowe, K 2008, ‘The impact of leadership on student outcomes: An analysis of the differential effects of leadership types’, Educational Administration Quarterly, vol. 44, no. 5, pp. 635-674.

Timperley, H & Parr, J 2009, ‘Chain of Influence from policy to practice in the New Zealand literacy strategy’, Research Papers in Education, vol. 24, no. 2, pp. 135-154.

 

 

