The Supporting students' learning - insights from students, parents and teachers (PDF, 1MB) learning curve presents findings from the 2016 Tell Them From Me school surveys completed by primary and secondary students, parents/carers and teachers in NSW government schools. Students provide feedback on how much support they receive from their teachers and their parents/carers, while responses from teachers and parents/carers indicate how much support they provide in school and at home, respectively. The learning curve draws on all three perspectives to explore the provision of advocacy and support, and how this varies for different groups of students at different stages of school.
The Supporting students' learning - resources and case studies for schools, teachers and parents (PDF, 808kB) accompanies the learning curve, providing evidence-based strategies and two case studies that describe how to create supportive learning environments.
Read the audio paper transcript (PDF, 106kB).
Alongside effective teaching practices, students need a supportive learning environment to succeed. In an education context, advocacy and support for learning refers to the active consideration of, and support for, students’ academic and wellbeing needs.
The NSW Department of Education Strategic Plan 2018-2022 includes the commitment to ensure that every student is known, valued and cared for in our schools. School advocacy and support for learning are necessary components for happy and successful students. Schools can use the department’s Tell Them From Me surveys to engage with, clarify and strengthen the important relationship between teachers, parents and schools by providing an evidence-based platform to capture feedback. This knowledge can then help build an accurate and timely picture that schools can use for practical improvements.
The summary on this page is also available as a PDF. Download the summary of the two publications (PDF, 180kB).
The transition from primary to secondary school is important because of the impact it may have on students' engagement in learning and their sense of belonging at school. This publication examines the relationship between students' sense of belonging and other types of engagement across this transition. It includes an analysis of 12,000 students who completed surveys in Year 6, and then again in Year 7.
Between Year 6 and Year 7, there is a decline in the percentage of students who value school outcomes and those who are trying hard to succeed. Students' sense of belonging also declines over the transition. This decline is experienced even more by students from low-socioeconomic backgrounds and Aboriginal and Torres Strait Islander students.
Students who report having a positive sense of belonging in Year 6 are more likely to have a positive sense of belonging in Year 7. Factors that influence a student's sense of belonging at the beginning of high school include their relationships with teachers and peers, the support they receive at school and at home, and school practices.
Primary schools should be attentive to Year 6 students’ sense of belonging and their relationships with teachers and peers, especially in the lead up to the transition. Secondary schools should develop strong, supportive student-teacher relationships as early as possible. There are more practical tips on how to do this in the publication and the Homebush West case study.
This literature review provides the evidence base for the department’s anti-bullying strategy. Released in 2017, the NSW Anti-bullying Strategy brings together evidence-based resources and information to support schools, parents and carers, and students to prevent and respond to bullying effectively.
Bullying can be face-to-face, covert or online. It has three main features: it involves repeated actions, is intended to cause distress or harm, and is grounded in an imbalance of power.
In 2015, 14.8 per cent of Australian students reported being bullied at least a few times per month. Bullying peaks during the transition from primary school to high school, before decreasing to low levels by the end of high school. Boys tend to bully more than girls; however, girls use covert bullying more than boys.
Anti-bullying programs reduce bullying behaviours by an average of 20-23 per cent.
The most effective anti-bullying interventions:
• take a holistic, whole-school and whole-community approach, which includes promoting awareness of anti-bullying interventions
• include educational content in the classroom that allows students to develop social and emotional competencies, and to learn appropriate ways to respond to bullying – both as a student who experiences bullying and as a bystander
• provide support and sustainable professional development for school staff on how best to enhance understanding, skills and self-efficacy to address and prevent bullying behaviours
• ensure systematic implementation and evaluation.
There are Australian and international examples of whole-school approaches that have the characteristics common to effective anti-bullying interventions and have been subjected to program evaluations. Australian examples are the National Safe Schools Framework, Positive Behaviour for Learning, Friendly Schools, KidsMatter and MindMatters. International examples are the Olweus Bullying Prevention Program (Norway), Sheffield Anti-Bullying Project (England), Seville Anti-Bullying in School Project (Spain) and KiVa Anti-Bullying Program (Finland).
Schools need greater support to maximise the outcomes of anti-bullying interventions and to identify what is likely to be successful based on their specific contexts and requirements. There is very little available currently in the way of specific advice to guide schools in their choice of anti-bullying programs.
Visit the department's anti-bullying website.
To help share the evidence, Anti-bullying interventions is available as a summary poster (PDF, 1.4MB).
In 2017, the Centre for Education Statistics and Evaluation (CESE) released a literature review on effective anti-bullying interventions in schools, which became the evidence base for the NSW Department of Education's Anti-bullying Strategy.
The Primary school engagement and wellbeing publication (PDF, 1.1MB) presents findings from the 2015 Tell Them From Me primary school survey. The survey measures the engagement of primary students in Years 4, 5 and 6 and classroom, school and family factors that influence student engagement and achievement.
Learn more about the Tell Them From Me surveys.
All education programs are well-intentioned and many of them are highly effective. However, there is usually more than one way to achieve good educational outcomes for students. When faced with this scenario, how do educators and education policymakers decide which alternative is likely to provide the most 'bang for buck'?
There’s also an uncomfortable truth that educators and policymakers need to grapple with: some programs are not effective and some may even be harmful. What is the best way to identify these programs so that they can be remediated or stopped altogether?
Program evaluation is a tool to inform these decisions. More formally, program evaluation is a systematic and objective process to make judgements about the merit or worth of our actions, usually in relation to their effectiveness, efficiency and appropriateness (NSW Government 2016). Evaluation and self-assessment are at the heart of strong education systems, and evaluative thinking is a core competency of effective educational leadership. Teachers, school leaders and people in policy roles should all apply the principles of evaluation to their daily work.
It may sound obvious, but understanding whether program activities have been effective requires a clear understanding of what the program is trying to achieve. The objectives also need to be measurable.
For some programs or activities this is very easy. For example, reading interventions like Reading Recovery aim to improve students’ ability to read. In these instances it is easy to start with a clear statement of objectives (i.e. to improve students’ ability to read). It is also quite easy to measure outcomes because reading progression is relatively easy to measure (although the issue of causal attribution is important – more on that later).
However, for some programs, it can be more difficult to develop a clear statement of objectives and it is even more difficult to measure whether they have been achieved. Take the Bring Your Own Device (BYOD) policy as an example. The objective of BYOD is often described as using technology to ‘deepen learning’, ‘foster creativity’ or ‘engage students’. These are worthy objectives. The challenge for schools and systems is to work out whether they have been achieved. What does ‘deep learning’ look like and how can it be measured? How will teachers know if a student is more ‘creative’ or ‘engaged’ now than they were before? How much of that gain is due to the program or policy (BYOD) and how much is due to other factors?
Figure 1 provides some examples of common objectives and possible measures that will inform whether they have been achieved. These are highly idealised examples and the problems that educators are trying to solve are usually more multi-faceted and complex than these. In some cases it may not even be possible to robustly measure outcomes. In other cases, there may be more than one outcome resulting from a set of activities. However, no matter how hard and complex the problem, if there is no clarity about what the problem is, there is also no chance of measuring whether it has been solved.
Effective programs have a clear line of sight between the needs they are responding to, the resources available, the activities undertaken with those resources, and how activities will deliver outcomes. Logic modelling is one way to put these components on a piece of paper. Wherever possible, this should be done by those who are developing and implementing a program or policy, in conjunction with an experienced evaluator. At its most simple, a logic model looks like that shown in Figure 2.
The needs are about the problem at hand and why it is important to solve it. Inputs are the things put in to address the need (usually a combination of money, time and resources). Activities describe the things that happen with the inputs. Outcomes are usually expressed as measures of success. A logic model is not dissimilar to the processes used in school planning. Needs are usually the strategic priorities identified in the plan. Inputs are the resources allocated to address those needs. Activities are often referred to as processes or projects. Outcomes and impacts are used interchangeably. Figure 3 gives some common examples of needs, inputs, activities and outcomes.
Some of these examples are ‘add-on’ activities to business-as-usual (e.g. speech pathology) and some simply reflect the way good teachers organise their classroom (e.g. differentiated instruction). Figure 3 merely serves to illustrate that the evaluative process involves thinking about the resources going into education, how those inputs are organised and how they might plausibly lead to change.
Good evaluation will make an assessment of how well the activities have been implemented (process evaluation) and whether these activities made a difference (outcome evaluation). If programs are effective, it might also be prudent to ask whether they provide value for money (economic evaluation).
A simple logic modelling worksheet can be found in the Appendix.
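The need-inputs-activities-outcomes chain described above can also be captured as a simple data structure, which some schools may find a useful starting point for their own worksheets. The sketch below is illustrative only; the program details are hypothetical, not drawn from any departmental worksheet.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic-model worksheet: need -> inputs -> activities -> outcomes."""
    need: str
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

# Hypothetical example of a completed worksheet
reading_program = LogicModel(
    need="Year 1 students reading below the expected level",
    inputs=["specialist teacher time", "levelled reading materials"],
    activities=["daily one-on-one reading lessons for identified students"],
    outcomes=["identified students reach the expected reading level"],
)
```

Writing the model down this way forces each component to be stated explicitly, so gaps (for example, an activity with no corresponding outcome measure) become visible before the program starts.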
Process evaluation is particularly helpful where programs fail to achieve their goals. It helps to explain whether that occurred because of a failure of implementation, a design flaw in the program, or because of some external barrier in the operating environment. Process evaluation also helps to build an understanding of the mechanisms at play in successful programs so that they can be replicated and built upon.
Outcome evaluation usually identifies average effects: whether recipients were better off under the program than they would have been in its absence. However, when viewed in combination with process evaluation, it can provide a more nuanced overview of the program. It can explore who the program had an impact on, to what extent, in what ways, and under what circumstances. This is important because very few programs work for everyone. Identifying people who are not responding to the program helps to target alternative courses of action.
Economic evaluations help us choose between alternatives when we have many known ways of achieving the same outcomes. In these circumstances, the choice often comes down to what is the most effective use of limited resources. If programs are demonstrably ineffective, there is little sense in conducting economic evaluations. Ineffective programs do not provide value for money.
While repeating a school year is relatively uncommon in NSW, it is quite common in some countries such as the United States. It is a practice that has considerable intuitive appeal – if a student is falling behind (need) the theory is that an additional year of education (input) will afford them the additional instruction (activity) required to achieve positive educational outcomes (outcome). Evidence suggests that this is true only for a small proportion of students who are held back. In fact, after one year, students who are held back are on average four months further behind similar-aged peers than they would have been had they not been held back.
According to research conducted by the UK Education Endowment Foundation, the reason that repeating a year is not effective is that it “just provides ‘more of the same’, in contrast to other strategies which provide additional targeted support or involve a new pedagogical approach. In addition, it appears that repeating a year is likely to have a negative impact on the student’s self-confidence and belief that they can be an effective learner”. In other words, for most recipients of the program the activities are poorly suited to the students’ needs. In situations like this, well-intentioned activities can actually have a negative impact on a majority of students.
Once a clear problem statement has been developed, the inputs and activities are identified, and intended outcomes have been established, coherent evaluation questions can be developed.
Good evaluation will ask how well activities were implemented, whether they made a difference and, if so, whether they provided value for money.
All too often educational researchers get hung up on using ‘qualitative’ versus ‘quantitative’ methods when answering these questions. This is a false dichotomy. The method employed to answer the research question depends critically on the question itself.
Qualitative research usually refers to semi-structured techniques such as in-depth interviews, focus groups or case studies. Quantitative research usually refers to more structured approaches to data collection and analysis where the intention is to make statements about a population derived from a sample.
Both approaches will have merit depending on the evaluation question. In-depth interviews and focus groups are often the best ways of understanding whether a program has been implemented as intended and, if not, why not. These methods have limitations when trying to work out impact because, by definition, information is only gleaned from the people who were interviewed. Unless something is known about the people who weren’t interviewed, these sorts of methods can be highly misleading. For example, people who didn’t respond well to the intervention might also be less likely to participate in interviews or focus groups. This is where quantitative methods are more appropriate because they can generalise to describe overall effects across all individuals. However, combining both qualitative and quantitative methods can be useful for identifying for whom and under what conditions the program will be effective. For example, CESE researchers investigating the practices of high-growth NSW schools used quantitative analysis to identify high-growth schools and analyse survey results, and qualitative interviews to find out more about the practices these schools implemented.
The possible sources of data to inform evaluation questions are endless. The key issue is to think about the evaluation question and adopt the data and methods that will provide the most robust answer to that question.
The number one question that most evaluations should set out to answer is: did the program achieve what it set out to achieve? This raises the vexing problem of how to attribute activities to any observed outcomes.
No single evaluation approach will give a certain answer to the attribution question. However, some research designs will allow for more certain conclusions that the effects are real and are linked to the program. CESE uses a simple three-level hierarchy to classify the evidence strength, as shown in Figure 4. There are many variations on this hierarchy, most of which can be found in the health and medical literature.
Taking before (pre) and after (post) measures is a good start and is often the only way to measure outcomes. However, simple comparisons like this need to be treated cautiously because some outcomes will change over time without any special intervention by schools. For example, if a student’s reading level was measured at two time points, they would usually be at a higher level at the second time point just through the course of normal class and home reading practice.
This is where reference to benchmarks or comparison groups is critical. For example, if the typical growth in reading achievement over a specified period of time is known, it can be used to benchmark students against that expected growth. Statements can then be made about whether growth is higher or lower than expected as a result of program activities.
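The benchmarking idea above can be sketched in a few lines of code. This is a minimal illustration using made-up scores and an assumed benchmark of five points of expected growth; it is not a departmental tool or a real assessment scale.

```python
def growth_vs_benchmark(pre, post, expected_growth):
    """Compare each student's observed growth between two time points
    against the expected (benchmark) growth over the same period."""
    observed = [after - before for before, after in zip(pre, post)]
    mean_growth = sum(observed) / len(observed)
    n_above_benchmark = sum(g > expected_growth for g in observed)
    return mean_growth, n_above_benchmark

# Hypothetical reading scores at two time points for four students
pre = [40, 45, 50, 55]
post = [48, 51, 60, 59]
mean_growth, n_above = growth_vs_benchmark(pre, post, expected_growth=5)
# mean_growth is 7.0; three of the four students exceeded the benchmark
```

The key point is the comparison against `expected_growth`: without it, any positive change might wrongly be attributed to the program rather than to normal progression.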
An even stronger design is when students (or schools, or whatever the target group is comprised of) are matched like-for-like with a comparison group. This design is more likely to ensure that differences are due to the program and not due to some other factor or set of factors. These designs are referred to as 'quasi-experiments' in Figure 4.
Even better are randomised controlled trials (RCTs) where participants are randomly allocated to different conditions. Outcomes are then observed for the different groups and any differences are attributed to the experience they received relative to their peers. RCTs can also be conducted using a wait-list approach where everyone gets the program either immediately or after a waiting period. RCTs allow for strong causal attributions because the random assignment effectively balances the groups on all of the factors that could have influenced those outcomes.
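The mechanics of an RCT at their simplest are random allocation followed by a comparison of group means. The sketch below illustrates only that logic; real trials require careful design, adequate sample sizes and statistical testing, none of which is shown here.

```python
import random

def randomise(participants, seed=0):
    """Randomly allocate participants to treatment and control groups.
    A fixed seed is used here only so the example is reproducible."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_difference(treatment_outcomes, control_outcomes):
    """Estimated average effect: difference between the group means.
    Random assignment is what licenses reading this as a causal effect."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_outcomes) - mean(control_outcomes)

treatment_group, control_group = randomise(range(10))
```

Because assignment is random, the two groups are balanced (in expectation) on every background factor, observed or not, which is what makes the simple mean difference a credible estimate of the program's effect.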
RCTs have a place in educational research but they will probably always be the exception rather than the rule. RCTs are usually reserved for large-scale projects and wouldn't normally be used to measure programs operating at the classroom level. Special skills are required to run these sorts of trials and most of the programs run by education systems would be unsuited to this research design. In the absence of RCTs, it is still important to think about ways to measure what the world looked like before the activity began and what it looked like after some period of activity has been undertaken. This requires taking baseline and follow-up measures and comparing these over time.
As a rule, the less rigorous the evaluation methodology, the more likely we are to falsely conclude that a program has been effective. This suggests that stronger research designs are required to truly understand what works, for whom and under what circumstances.
In all of the above, it is crucial for educators to be open-minded about what the results of the evaluation might show and be prepared to act either way. Evaluation should not be a tool for justifying or ‘evidence washing’ a predetermined conclusion or course of action. The reason for engaging in evaluation is to understand program impact in the face of uncertainty. It provides the facts (as best they can be estimated) to help make decisions about how to structure programs, whether they should be expanded, whether they need to be adjusted along the way, or whether they need to stop altogether.
Evaluation not only asks ‘what is so?’ – it also asks ‘so what?’ In other words, evaluation is most useful if it will lead to meaningful change. Before embarking on any evaluation, it is important to think about what can reasonably be achieved from the research. If continuation of the program is not in question, it may be better focusing on process questions bearing on program efficiency or quality improvement. It is also important to think about stakeholders, how they might react to the evaluation and what needs to happen to keep them informed along the way.
In accordance with the NSW Government Program Evaluation Guidelines (NSW Government 2016), evaluation should be conducted independently of program delivery and it should be publicly available for transparency. Independence might not always be possible where no budget exists or where activity is business-as-usual or small in scale (e.g. classroom-level or school-level programs). Evaluative thinking is still critical in these circumstances as part of ongoing quality improvement.
Where a formal evaluation has been conducted, transparency is a critical part of the process. Stakeholders need to understand the questions the evaluation sought to answer, the methods employed to answer them, any assumptions that were made, what the evaluation found and the consequences of those findings. Transparency also helps people in later times or in other schools or jurisdictions to identify what works.
To embed the sort of evaluative thinking described above into activity across education requires everyone to be evaluative thinkers in one way or another. Everyone designing or implementing a program needs to be clear on what problem they are trying to solve, how they are planning to solve it and how success will be measured.
For smaller, more routine programs and policies, performance should be monitored using the sort of benchmarking described above to determine the effectiveness, efficiency and appropriateness of expenditure. This could be done by an early childhood service Director, a school teacher, a principal, a school leadership group, Directors Public Schools or Principals School Leadership. If more technical assistance is required, it may be better to bring in that expertise.
Centre for Education Statistics and Evaluation 2015, ‘Six effective practices in high growth schools’, Learning Curve Issue 8, Centre for Education Statistics and Evaluation, Sydney.
NSW Government 2016, ‘NSW Government Program Evaluation Guidelines', Department of Premier and Cabinet, NSW Government, Sydney. https://www.dpc.nsw.gov.au/__data/assets/pdf_file/0009/155844/NSW_Government_Program_Evaluation_Guidelines.pdf
OECD 2013, ‘Synergies for better learning: An international perspective on evaluation and assessment’, OECD Publishing, Paris.
Robinson, V, Lloyd, C & Rowe, K 2008, ‘The impact of leadership on student outcomes: An analysis of the differential effects of leadership types’, Educational Administration Quarterly, vol. 44, no. 5, pp. 635-674.
Timperley, H & Parr, J 2009, ‘Chain of influence from policy to practice in the New Zealand literacy strategy’, Research Papers in Education, vol. 24, no. 2, pp. 135-154.
Student voice refers to the views of students on their own schooling. This publication explores:
• why student voice should be measured
• how and when it should be measured
• what questions can and should be asked
• how student voice should be interpreted.
The act of capturing student voice gives students the opportunity to provide feedback and influence their own school experience. This can have an impact on their effort, participation and engagement in learning. Student feedback may also help teachers develop new perspectives on their teaching and can contribute to broader areas of school planning and improvement.
It is important to consider how student feedback is intended to be used. This will help inform when to capture the feedback, which methods are best for capturing the feedback and what questions to ask. Measuring student voice over time can help examine whether particular strategies have led to changes in the way students perceive school or learning.
Tell Them From Me is a suite of surveys used across NSW public schools. The surveys can help schools understand students’ perspectives on their school experience, including their engagement, wellbeing and exposure to quality teaching practices. Read the Tell Them From Me case studies to learn how other NSW schools have used Tell Them From Me for school planning and improvement.
Income mobility is a measure of whether children from disadvantaged backgrounds have access to economic opportunities later in life. The Income Mobility publication (PDF, 1.7MB) summarises recent research on income mobility in Australia and the role played by the Australian education system.
Reading Recovery: A sector-wide analysis (PDF, 1MB) briefly describes the results of an evaluation examining the impact of Reading Recovery on students' outcomes in NSW government schools. You can also read the Reading Recovery evaluation.
This publication presents a snapshot of the current workforce profile of principals in NSW government schools. It also outlines the research evidence on what makes an effective principal and the best ways to identify, develop and support aspiring school principals.
Drivers of school improvement are often complex and context specific. This publication describes the effective practices common to NSW government schools that achieved high growth in NAPLAN over a sustained period. These schools are defined as ‘High Value-Add’ (HVA) schools.
Effective collaboration is considered vital to driving whole-school improvement. It includes teachers sharing work samples to ensure consistency in teacher judgement, developing easily accessible platforms to share teaching resources and using peer coaching and support programs to promote and develop effective teaching practice.
Professional learning needs to support strategic school goals and be shared among staff so that learning is embedded across the school. It includes using staff meetings as a platform to share learning and internal expertise, having peer supports to ensure that professional learning is applied and obtaining tangible skills and materials for the classroom.
Educators need to work together and set shared goals for effective change to occur. This includes having whole-school planning days and regular staff meetings to discuss, support and evaluate progress towards achieving goals.
Showing students what success looks like and breaking down the steps required to achieve success is an important teaching strategy in high growth schools. Other strategies include using student data to identify students’ learning needs, developing learning targets and monitoring progress and developing accessible teaching resources that include templates for how to differentiate lessons and assessments.
Promoting a positive learning culture where students are engaged in school and value their outcomes is key to improving school performance. This includes using innovative teaching techniques, teaching students about literacy and numeracy through real-world examples such as transport, and organising trips to local universities for students and parents to help raise expectations about future study.
Creating high expectations for students, both academically and behaviourally, is essential to improving student performance. This could include displaying learning progressions in classrooms to show students what performance benchmarks are and having a common set of guidelines across a school that rewards positive behaviour.
For more information on how we selected HVA schools for this study, read High value-add schools: Key drivers of school improvement.
Student engagement and wellbeing in NSW (PDF, 2MB) presents findings from a pilot study undertaken in 2013 which measured student engagement, wellbeing and quality teaching in a group of NSW government secondary schools.
Value-added measures are based on learning growth and are used by schooling systems to indicate the contribution that a school makes to student learning, over and above the contribution made by the average school.
CESE has developed a set of value-added measures for NSW government schools that adjust for factors outside the control of schools, such as students' Socio-Economic Status (SES). This publication provides an introduction to the measures, including their use, interpretation, and what they tell us about the factors influencing student outcomes.
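The intuition behind a value-added measure can be shown with a deliberately simplified sketch: predict each student's current achievement from their prior achievement, then treat the residual (actual minus predicted) as the growth beyond expectation. This is not CESE's actual model, which adjusts for additional factors such as SES and uses far more robust statistical methods.

```python
def value_added_residuals(prior, current):
    """Fit a simple least-squares line predicting current achievement
    from prior achievement, and return each student's residual.
    Averaging residuals by school gives a crude value-added estimate."""
    n = len(prior)
    mean_x = sum(prior) / n
    mean_y = sum(current) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(prior, current))
             / sum((x - mean_x) ** 2 for x in prior))
    intercept = mean_y - slope * mean_x
    return [y - (intercept + slope * x) for x, y in zip(prior, current)]
```

A student whose residual is positive grew more than predicted from their starting point; a school whose students' residuals average above zero is, on this crude measure, adding value relative to the average school.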
CESE has developed a new measure of school socio-economic status, the Family Occupation and Education Index (FOEI), to accurately identify levels of socio-economic disadvantage. Getting the funding right (PDF, 1MB) outlines what FOEI is, how it assesses disadvantage, and how it compares with other related measures.
You can also read the accompanying report Methodological Advice on Family Occupation and Education Index.