Publications filter

Learning Curve (22)

Thursday, 23 May 2019

Supporting school completion

Supporting school completion: The importance of engagement and effective teaching Learning Curve (PDF, 1.3MB)

Supporting school completion: Resources and case studies for schools, teachers and parents/carers (PDF, 3MB)

The Supporting school completion: The importance of engagement and effective teaching Learning Curve (PDF, 1.3MB) explores the links between students' engagement and experience of teaching practices in the middle of high school (Year 10) and their likelihood of completing Year 12.
The Supporting school completion: Resources and case studies for schools, teachers and parents/carers (PDF, 3MB) accompanies the Learning Curve and outlines practical strategies that may help facilitate high school completion and post-school transition.

 

Background

In Australia, socioeconomic status remains a key factor in school completion. By age 19, only 61% of the most disadvantaged students have completed Year 12, compared with 89% of the most advantaged students. It is important that all young people are given the opportunity to complete Year 12, or an equivalent pathway, particularly students who are at risk of not completing school due to their socioeconomic disadvantage.

Using data from the NSW Tell Them From Me (TTFM) secondary school student survey, this Learning Curve explores the links between students’ engagement and experience of teaching practices in the middle of high school (Year 10) and their likelihood of completing Year 12 two years later.

 

Main findings

Positive engagement and effective teaching increase all students’ chances of completing Year 12. When students develop positive relationships with teachers and are supported and challenged by teachers, they are more likely to complete school. Likewise, when students put effort in at school, see value in doing homework and believe school is important and useful for future success, they are also more likely to complete Year 12.

Engaging disadvantaged students increases their chances of completing school. When students from low-SES backgrounds report high levels of engagement and effective teaching practice in the middle of high school they are more likely to complete school than students from high-SES backgrounds who are not engaged in school.

Students from low-SES backgrounds are more likely than students from high-SES backgrounds to be disengaged on the key predictors of school completion. In NSW, around half of all high-SES students in Year 10 report positive teacher relationships, positive attendance and valuing of school outcomes, whereas only a quarter of low-SES students report a similar level of engagement.

This Learning Curve is accompanied by the resource, Supporting school completion: Resources for schools, teachers and parents/carers, which outlines practical strategies that may help facilitate high school completion and post-school transition. The resource includes four case studies from low-SES schools across metropolitan and regional NSW.

Some of the common themes that emerge from the four case studies are:

  • Developing strong teacher-student relations in the years prior to students finishing high school is an important foundation for successful post-school transition
  • Setting high expectations for all students fosters high aspirations and encourages students to work towards those aspirations
  • Providing information and support to students and parents about post-school transition broadens their awareness of available options for post-school life
  • Having dedicated resources within the school, through staff and/or ‘drop-in’ centres that students can draw on, improves students’ chances of making a successful transition.

 

Friday, 20 July 2018

Supporting students' learning

Supporting students' learning - insights from students, parents and teachers (PDF, 1MB)

The Supporting students' learning - insights from students, parents and teachers (PDF, 1MB) Learning Curve presents findings from the 2016 Tell Them From Me school surveys completed by primary and secondary students, parents/carers and teachers in NSW government schools. Students provide feedback on how much support they receive from their teachers and their parents/carers, while responses from teachers and parents/carers indicate how much support they provide in school and at home, respectively. It draws on all three perspectives to explore the provision of advocacy and support and how this varies for different groups of students at different stages of school.

The Supporting students' learning - resources and case studies for schools, teachers and parents (PDF, 808kB) accompanies the Learning Curve, providing evidence-based strategies and two case studies that describe how to create supportive learning environments.

Read the audio paper transcript (PDF, 106kB). 

 

Background

Alongside effective teaching practices, students need a supportive learning environment to succeed. In an education context, advocacy and support for learning refers to the active consideration of, and support for, students’ academic and wellbeing needs.

Main findings

  • The results show that students and teachers report different levels of advocacy and support in school depending on the stage of schooling. Students’ perceptions of teacher support start to decline in the final years of primary school. Secondary school students perceive teacher support to dip in the middle years of school, before improving in Years 11 and 12. Teachers report that they increase the amount of classroom support they provide to students in key schooling years (Years 5-6 and Years 10-12).
  • In NSW, both parents and students report a continual decline in the frequency of supportive interactions at home that relate to school.
  • While there are some differences between boys’ and girls’ experiences of advocacy and support in school and at home, there is a large disadvantage gap between low and high-SES students. These findings suggest that more can be done to make sure all students have access to support sources, which they can turn to for advice and encouragement.
  • Accompanying this Learning Curve, CESE has used evidence-based practices and local examples to provide practical strategies for fostering advocacy and support in schools and at home. Case studies on Whalan Public School and Sir Joseph Banks High School highlight some of the programs and initiatives these schools have used to achieve high levels of advocacy at school. This qualitative research shows that schools that provide high levels of advocacy at school are also committed to strengthening the home-school partnership for their students.

More information

The NSW Department of Education Strategic Plan 2018-2022 includes the commitment to ensure that every student is known, valued and cared for in our schools. School advocacy and support for learning are necessary components for happy and successful students. Schools can use the department’s Tell Them From Me surveys to engage with, clarify and strengthen the important relationship between teachers, parents and schools by providing an evidence-based platform to capture feedback. This knowledge can then help build an accurate and timely picture that schools can use for practical improvements.

The summary on this page is also available as a PDF. Download the summary of the two publications (PDF, 180kB).

The role of student engagement in the transition from primary to secondary school (PDF, 2.2MB)


Related: Homebush West Public School case study

 

Summary

The primary to secondary transition marks a significant change for most students 

This transition period is important because of the impact it may have on students’ engagement in learning and their sense of belonging at school. This publication examines the relationship between students’ sense of belonging and other types of engagement across the transition from primary to secondary. It includes an analysis of 12,000 students who completed surveys in Year 6, and then again in Year 7. 

There is typically a decline in student engagement during the transition from Year 6 to Year 7

This decline is experienced even more by students from low-socioeconomic backgrounds and Aboriginal and Torres Strait Islander students. Between Year 6 and Year 7, there is a decline in the percentage of students who value school outcomes and those who are trying hard to succeed. Students’ sense of belonging also declines over the transition.

Students’ experiences in primary school can have flow on effects for their engagement and learning in secondary

Students who report having a positive sense of belonging in Year 6 are more likely to have a positive sense of belonging in Year 7. Factors that help influence a student’s sense of belonging at the beginning of high school include their relationships with teachers and peers, the support they receive at school and at home, and school practices.

Both primary and secondary schools can help make the transition easier for students

Primary schools should be attentive to Year 6 students’ sense of belonging and their relationships with teachers and peers, especially in the lead up to the transition. Secondary schools should develop strong, supportive student-teacher relationships as early as possible. There are more practical tips on how to do this in the publication and the Homebush West case study. 


Anti-bullying interventions literature review (PDF, 1.1MB)

One-page summary (PDF, 251kB)

Evidence summary poster for school staffrooms

Anti-bullying interventions myPL course 

Background

This literature review provides the evidence base for the department’s anti-bullying strategy. Released in 2017, the NSW Anti-bullying Strategy brings together evidence-based resources and information to support schools, parents and carers, and students to prevent and respond to bullying effectively.

Bullying can be face-to-face, covert or online. It has three main features: it involves repeated actions, is intended to cause distress or harm, and is grounded in an imbalance of power.

In 2015, 14.8 per cent of Australian students reported being bullied at least a few times per month. Bullying peaks during the transition from primary school to high school, before decreasing to low levels by the end of high school. Boys tend to bully more than girls; however, girls use covert bullying more than boys.

Main findings

Anti-bullying programs reduce bullying behaviours by an average of 20-23 per cent.

The most effective anti-bullying interventions:

• take a holistic, whole-school and whole-community approach, which includes promoting awareness of anti-bullying interventions

• include educational content in the classroom that allows students to develop social and emotional competencies, and to learn appropriate ways to respond to bullying – both as a student who experiences bullying and as a bystander

• provide support and sustainable professional development for school staff on how best to enhance understanding, skills and self-efficacy to address and prevent bullying behaviours

• ensure systematic implementation and evaluation.

There are Australian and international examples of whole-school approaches that have the characteristics common to effective anti-bullying interventions and have been subjected to program evaluations. Australian examples are the National Safe Schools Framework, Positive Behaviour for Learning, Friendly Schools, KidsMatter and MindMatters. International examples are the Olweus Bullying Prevention Program (Norway), the Sheffield Anti-Bullying Project (England), the Seville Anti-Bullying in School Project (Spain) and the KiVa Anti-Bullying Program (Finland).

Schools need greater support to maximise the outcomes of anti-bullying interventions and to identify what is likely to be successful based on their specific contexts and requirements. Currently, there is very little specific advice available to guide schools in their choice of anti-bullying programs.

More information

Visit the department's anti-bullying website.

Related publications:

The role of student engagement in the transition from primary to secondary school. 

 

Evidence summary poster for school staffrooms

To help share the evidence, Anti-bullying interventions is available as a summary poster (PDF, 1.4MB).

What does the poster say?

  • In 2015, 14.8% of Australian students reported being bullied at least a few times per month.
  • Bullying peaks during the transition from primary school to high school, before decreasing to low levels by the end of high school.
  • Boys tend to bully more than girls; however, girls use covert bullying more than boys.
  • Anti-bullying programs reduce bullying behaviours by an average of 20-23%.

The NSW Anti-bullying Strategy

In 2017, the Centre for Education Statistics and Evaluation (CESE) released a literature review on effective anti-bullying interventions in schools. This review became the evidence base for the NSW Department of Education’s Anti-bullying Strategy. This strategy brings together evidence-based resources and information to support schools, parents and carers, and students to prevent and respond to bullying effectively.
Bullying can be face-to-face, covert or online. It has three main features:

• it involves repeated actions
• it is intended to cause distress or harm
• it is grounded in an imbalance of power.

The most effective anti-bullying interventions:

• take a holistic, whole-school and whole-community approach
• include educational content in the classroom that allows students to learn appropriate ways to respond to bullying
• provide support and sustainable professional development for school staff
• ensure systematic implementation and evaluation.


The Primary school engagement and wellbeing publication (PDF, 1.1MB) presents findings from the 2015 Tell Them From Me primary school survey. The survey measures the engagement of primary students in Years 4, 5 and 6 and classroom, school and family factors that influence student engagement and achievement.

Learn more about the Tell Them From Me surveys. 

Gender and Engagement Learning Curve (PDF, 1.7MB)

The Gender and Engagement Learning Curve (PDF, 1.7MB) analyses gender and engagement in NSW public schools using data from the NSW Tell Them From Me secondary school survey.

Learn more about the Tell Them From Me surveys

All education programs are well-intentioned and many of them are highly effective. However, there is usually more than one way to achieve good educational outcomes for students. When faced with this scenario, how do educators and education policymakers decide which alternative is likely to provide the most ‘bang for buck’?

There’s also an uncomfortable truth that educators and policymakers need to grapple with: some programs are not effective and some may even be harmful. What is the best way to identify these programs so that they can be remediated or stopped altogether?

Program evaluation is a tool to inform these decisions. More formally, program evaluation is a systematic and objective process to make judgements about the merit or worth of our actions, usually in relation to their effectiveness, efficiency and appropriateness (NSW Government 2016). Evaluation and self-assessment are at the heart of strong education systems, and evaluative thinking is a core competency of effective educational leadership. Teachers, school leaders and people in policy roles should all apply the principles of evaluation to their daily work.

Research shows that:

  • Effective teachers use data and other evidence to constantly assess how well students are progressing in response to their lessons (Timperley & Parr, 2009).
  • Effective principals constantly plan, coordinate and evaluate teaching and the use of the curriculum with systematic use of assessment data (Robinson, Lloyd & Rowe, 2008).
  • Effective education systems engage all school staff and students in school self-evaluations so that program and policy settings can be adjusted to maximise educational outcomes (OECD, 2013).

 

This Learning Curve sets out five conditions for effective evaluation in education. These are not the only considerations and they are not unique to education. However, if these parameters are missing, evaluation will not be possible or it will be ineffective.
The five prerequisites for effective evaluation in education are:
  1. Start with a clear and measurable statement of objectives
  2. Develop a theory about how program activities will lead to improved outcomes (i.e. a program logic) and structure the evaluation questions around that logic
  3. Let the evaluation questions determine the evaluation method
  4. For questions about program impact, either a baseline or a comparison group will be required (preferably both)
  5. Be open-minded about the findings and have a clear plan for how to use the results.

 

1. Start with clear and measurable objectives

It may sound obvious, but understanding whether program activities have been effective requires a clear understanding of what the program is trying to achieve. The objectives also need to be measurable.

For some programs or activities this is very easy. For example, reading interventions like Reading Recovery aim to improve students’ ability to read. In these instances it is easy to start with a clear statement of objectives (i.e. to improve students’ ability to read). It is also quite easy to measure outcomes because reading progression is relatively easy to measure (although the issue of causal attribution is important – more on that later).

However, for some programs, it can be more difficult to develop a clear statement of objectives and it is even more difficult to measure whether they have been achieved. Take the Bring Your Own Device (BYOD) policy as an example. The objective of BYOD is often described as using technology to ‘deepen learning’, ‘foster creativity’ or ‘engage students’. These are worthy objectives. The challenge for schools and systems is to work out whether they have been achieved. What does ‘deep learning’ look like and how can it be measured? How will teachers know if a student is more ‘creative’ or ‘engaged’ now than they were before? How much of that gain is due to the program or policy (BYOD) and how much is due to other factors?

Figure 1 provides some examples of common objectives and possible measures that will inform whether they have been achieved. These are highly idealised examples and the problems that educators are trying to solve are usually more multi-faceted and complex than these. In some cases it may not even be possible to robustly measure outcomes. In other cases, there may be more than one outcome resulting from a set of activities. However, no matter how hard and complex the problem, if there is no clarity about what the problem is, there is also no chance of measuring whether it has been solved.

Figure 1. Some examples of common objectives and measures that might inform whether they’ve been achieved

Figure 1. Some examples of common objectives and measures that might inform whether they’ve been achieved

 

Figure 2. A simple logic model

A simple logic model

 

2. Linking activities and outcomes

Effective programs have a clear line of sight between the needs they are responding to, the resources available, the activities undertaken with those resources, and how activities will deliver outcomes. Logic modelling is one way to put these components on a piece of paper. Wherever possible, this should be done by those who are developing and implementing a program or policy, in conjunction with an experienced evaluator. At its most simple, a logic model looks like that shown in Figure 2.

The needs are about the problem at hand and why it is important to solve it. Inputs are the things put in to address the need (usually a combination of money, time and resources). Activities describe the things that happen with the inputs. Outcomes are usually expressed as measures of success. A logic model is not dissimilar to the processes used in school planning. Needs are usually the strategic priorities identified in the plan. Inputs are the resources allocated to address those needs. Activities are often referred to as processes or projects. Outcomes and impacts are used interchangeably. Figure 3 gives some common examples of needs, inputs, activities and outcomes.

Some of these examples are ‘add-on’ activities to business-as-usual (e.g. speech pathology) and some simply reflect the way good teachers organise their classroom (e.g. differentiated instruction). Figure 3 merely serves to illustrate that the evaluative process involves thinking about the resources going into education, how those inputs are organised and how they might plausibly lead to change.

Good evaluation will make an assessment of how well the activities have been implemented (process evaluation) and whether these activities made a difference (outcome evaluation). If programs are effective, it might also be prudent to ask whether they provide value for money (economic evaluation).

A simple logic modelling worksheet can be found in the Appendix.
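The needs, inputs, activities and outcomes chain described above can also be sketched as a simple data structure. This is only an illustrative sketch; the example program and all of its details are hypothetical, not a department template:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    """Minimal needs -> inputs -> activities -> outcomes chain."""
    needs: list[str]
    inputs: list[str]
    activities: list[str]
    outcomes: list[str]

    def summary(self) -> str:
        # Render the chain as a single line for quick review
        return " -> ".join(
            f"{label}: {'; '.join(items)}"
            for label, items in [
                ("Needs", self.needs),
                ("Inputs", self.inputs),
                ("Activities", self.activities),
                ("Outcomes", self.outcomes),
            ]
        )

# Hypothetical example: a small reading intervention
model = LogicModel(
    needs=["Year 3 students reading below stage level"],
    inputs=["teacher time", "intervention materials"],
    activities=["small-group reading sessions, three times per week"],
    outcomes=["reading growth above the expected benchmark"],
)
print(model.summary())
```

Writing the chain down in one place, even this informally, makes gaps visible: an activity with no plausible link to an outcome, or an outcome with no activity behind it, shows up immediately.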

 

Figure 3. Some examples of program needs, inputs, activities and outcomes

Some examples of program needs, inputs, activities and outcomes

 

Types of evaluation

Process evaluation is particularly helpful where programs fail to achieve their goals. It helps to explain whether that occurred because of a failure of implementation, a design flaw in the program, or because of some external barrier in the operating environment. Process evaluation also helps to build an understanding of the mechanisms at play in successful programs so that they can be replicated and built upon.

Outcome evaluation usually identifies average effects: were the recipients better off under this program than they would have been in its absence? However, when viewed in combination with process evaluation, it can provide a more nuanced overview of the program. It can explore who the program had an impact on, to what extent, in what ways, and under what circumstances. This is important because very few programs work for everyone. Identifying people who are not responding to the program helps to target alternative courses of action.

Economic evaluations help us choose between alternatives when we have many known ways of achieving the same outcomes. In these circumstances, the choice often comes down to what is the most effective use of limited resources. If programs are demonstrably ineffective, there is little sense in conducting economic evaluations. Ineffective programs do not provide value for money.

 

When program logic breaks down – repeating a school year

While repeating a school year is relatively uncommon in NSW, it is quite common in some countries such as the United States. It is a practice that has considerable intuitive appeal – if a student is falling behind (need) the theory is that an additional year of education (input) will afford them the additional instruction (activity) required to achieve positive educational outcomes (outcome). Evidence suggests that this is true only for a small proportion of students who are held back. In fact, after one year, students who are held back are on average four months further behind similar-aged peers than they would have been had they not been held back.

According to research conducted by the UK Education Endowment Foundation, the reason that repeating a year is not effective is that it “just provides ‘more of the same’, in contrast to other strategies which provide additional targeted support or involve a new pedagogical approach. In addition, it appears that repeating a year is likely to have a negative impact on the student’s self-confidence and belief that they can be an effective learner”. In other words, for most recipients of the program the activities are poorly suited to the students’ needs. In situations like this, well-intentioned activities can actually have a negative impact on a majority of students.

Source: https://educationendowmentfoundation.org.uk/resources/teaching-learning-toolkit/repeating-a-year/

 

3. Let the evaluation questions determine the method

Once a clear problem statement has been developed, the inputs and activities have been identified, and the intended outcomes have been established, coherent evaluation questions can be developed.

Good evaluation will ask questions such as:

  • Did the program deliver what was intended? If not, why not?
  • Did the program reach the right recipients? If not, why not?
  • Did the program achieve the intended outcome and were there any unintended (positive or negative) outcomes?
  • For whom did it work and under what circumstances?
  • Is this the most efficient way to use limited resources?

All too often educational researchers get hung up on using ‘qualitative’ versus ‘quantitative’ methods when answering these questions. This is a false dichotomy. The method employed to answer the research question depends critically on the question itself.

Qualitative research usually refers to semi-structured techniques such as in-depth interviews, focus groups or case studies. Quantitative research usually refers to more structured approaches to data collection and analysis where the intention is to make statements about a population derived from a sample.

Both approaches will have merit depending on the evaluation question. In-depth interviews and focus groups are often the best ways of understanding whether a program has been implemented as intended and, if not, why not. These methods have limitations when trying to work out impact because, by definition, information is only gleaned from the people who were interviewed. Unless something is known about the people who weren’t interviewed, these sorts of methods can be highly misleading. For example, people who didn’t respond well to the intervention might also be less likely to participate in interviews or focus groups. This is where quantitative methods are more appropriate because they can generalise to describe overall effects across all individuals. However, combining both qualitative and quantitative methods can be useful for identifying for whom and under what conditions the program will be effective. For example, CESE researchers investigating the practices of high-growth NSW schools used quantitative analysis to identify high-growth schools and analyse survey results, and qualitative interviews to find out more about the practices these schools implemented.

The possible sources of data to inform evaluation questions are endless. The key issue is to think about the evaluation question and adopt the data and methods that will provide the most robust answer to that question.

 

4. For questions about program impact, either a baseline or a comparison group will be required (preferably both)

The number one question that most evaluations should set out to answer is: did the program achieve what it set out to achieve? This raises the vexing problem of how to attribute any observed outcomes to program activities.

No single evaluation approach will give a certain answer to the attribution question. However, some research designs allow for more confident conclusions that the effects are real and are linked to the program. CESE uses a simple three-level hierarchy to classify the strength of evidence, as shown in Figure 4. There are many variations on this hierarchy, most of which can be found in the health and medical literature.

Figure 4. CESE Evidence Hierarchy

CESE Evidence Hierarchy

Taking before (pre) and after (post) measures is a good start and is often the only way to measure outcomes. However, simple comparisons like this need to be treated cautiously because some outcomes will change over time without any special intervention by schools. For example, if a student’s reading level was measured at two time points, they would usually be at a higher level at the second time point just through the course of normal class and home reading practice.

This is where reference to benchmarks or comparison groups is critical. For example, if the typical growth in reading achievement over a specified period of time is known, it can be used to benchmark students against that expected growth. Statements can then be made about whether growth is higher or lower than expected as a result of program activities.
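The benchmarking logic above amounts to a simple subtraction: observed growth minus expected growth. A minimal sketch, in which the scores and the expected-growth figure are entirely hypothetical:

```python
def growth_vs_benchmark(pre: float, post: float, expected_growth: float) -> float:
    """Return growth above (positive) or below (negative) the benchmark."""
    observed_growth = post - pre
    return observed_growth - expected_growth

# Hypothetical: a student's reading score rises from 42 to 55 over a term,
# and typical (benchmark) growth over the same period is assumed to be 10.
excess = growth_vs_benchmark(pre=42, post=55, expected_growth=10)
print(excess)  # 3.0 points above expected growth
```

The point of the benchmark is visible in the arithmetic: without subtracting the expected growth, the full 13-point gain would be credited to the program, even though most of it would likely have occurred anyway.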

An even stronger design is when students (or schools, or whatever the target group is comprised of) are matched like-for-like with a comparison group. This design is more likely to ensure that differences are due to the program and not due to some other factor or set of factors. These designs are referred to as 'quasi-experiments' in Figure 4.

Even better are randomised controlled trials (RCTs) where participants are randomly allocated to different conditions. Outcomes are then observed for the different groups and any differences are attributed to the experience they received relative to their peers. RCTs can also be conducted using a wait-list approach where everyone gets the program either immediately or after a waiting period. RCTs allow for strong causal attributions because the random assignment effectively balances the groups on all of the factors that could have influenced those outcomes.
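The mechanics of an RCT analysis reduce to two steps: random assignment, then a comparison of mean outcomes between the groups. The following is only a sketch under stated assumptions; the student names and outcome scores are hypothetical:

```python
import random
import statistics

def randomise(participants: list[str], seed: int = 0) -> tuple[list[str], list[str]]:
    """Randomly split participants into treatment and control groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def average_effect(treatment_outcomes: list[float],
                   control_outcomes: list[float]) -> float:
    """Difference in mean outcomes between treatment and control."""
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Hypothetical trial: eight students randomly assigned, outcomes observed later
treatment, control = randomise([f"student_{i}" for i in range(8)])
effect = average_effect([68, 72, 75, 70], [64, 66, 69, 65])
print(round(effect, 2))  # 5.25
```

Because assignment is random, the 5.25-point difference in means can be attributed to the program rather than to pre-existing differences between the groups (in a real trial, with enough participants and a test of statistical significance).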

RCTs have a place in educational research but they will probably always be the exception rather than the rule. RCTs are usually reserved for large-scale projects and wouldn't normally be used to measure programs operating at the classroom level. Special skills are required to run these sorts of trials and most of the programs run by education systems would be unsuited to this research design. In the absence of RCTs, it is still important to think about ways to measure what the world looked like before the activity began and what it looked like after some period of activity has been undertaken. This requires taking baseline and follow-up measures and comparing these over time.

As a rule, the less rigorous the evaluation methodology, the more likely we are to falsely conclude that a program has been effective. This suggests that stronger research designs are required to truly understand what works, for whom and under what circumstances.

 

5. Be open-minded and have a clear plan for how to use the results

In all of the above, it is crucial for educators to be open-minded about what the results of the evaluation might show and be prepared to act either way. Evaluation should not be a tool for justifying or ‘evidence washing’ a predetermined conclusion or course of action. The reason for engaging in evaluation is to understand program impact in the face of uncertainty. It provides the facts (as best they can be estimated) to help make decisions about how to structure programs, whether they should be expanded, whether they need to be adjusted along the way, or whether they need to stop altogether.

Evaluation not only asks ‘what is so?’ – it also asks ‘so what?’ In other words, evaluation is most useful if it will lead to meaningful change. Before embarking on any evaluation, it is important to think about what can reasonably be achieved from the research. If continuation of the program is not in question, it may be better to focus on process questions bearing on program efficiency or quality improvement. It is also important to think about stakeholders, how they might react to the evaluation and what needs to happen to keep them informed along the way.

In accordance with the NSW Government Program Evaluation Guidelines (NSW Government 2016), evaluation should be conducted independently of program delivery and it should be publicly available for transparency. Independence might not always be possible where no budget exists or where activity is business-as-usual or small in scale (e.g. classroom-level or school-level programs). Evaluative thinking is still critical in these circumstances as part of ongoing quality improvement.

Where a formal evaluation has been conducted, transparency is a critical part of the process. Stakeholders need to understand the questions the evaluation sought to answer, the methods employed to answer them, any assumptions that were made, what the evaluation found and the consequences of those findings. Transparency also helps people in later times or in other schools or jurisdictions to identify what works.

 

Conclusion

To embed the sort of evaluative thinking described above into activity across education requires everyone to be evaluative thinkers in one way or another. Everyone designing or implementing a program needs to be clear on what problem they are trying to solve, how they are planning to solve it and how success will be measured.

For smaller, more routine programs and policies, performance should be monitored using the sort of benchmarking described above to determine the effectiveness, efficiency and appropriateness of expenditure. This could be done by an early childhood service director, a teacher, a principal, a school leadership group, or by Directors Public Schools or Principals School Leadership. If more technical assistance is required, it may be better to bring in that expertise.

 

References

Centre for Education Statistics and Evaluation 2015, ‘Six effective practices in high growth schools’, Learning Curve Issue 8, Centre for Education Statistics and Evaluation, Sydney.

NSW Government 2016, ‘NSW Government Program Evaluation Guidelines’, Department of Premier and Cabinet, NSW Government, Sydney. https://www.dpc.nsw.gov.au/__data/assets/pdf_file/0009/155844/NSW_Government_Program_Evaluation_Guidelines.pdf

OECD 2013, ‘Synergies for better learning: An international perspective on evaluation and assessment’, OECD Publishing, Paris.

Robinson, V, Lloyd, C & Rowe, K 2008, ‘The impact of leadership on student outcomes: An analysis of the differential effects of leadership types’, Educational Administration Quarterly, vol. 44, no. 5, pp. 635-674.

Timperley, H & Parr, J 2009, ‘Chain of influence from policy to practice in the New Zealand literacy strategy’, Research Papers in Education, vol. 24, no. 2, pp. 135-154.

 

 

Tuesday, 01 November 2016

Capturing and measuring student voice

Capturing and measuring student voice (PDF, 1.1MB)

 

Summary

Student voice helps us to understand learning from the perspective of the learner

Student voice refers to the views of students on their own schooling. This publication explores:
• why student voice should be measured
• how and when it should be measured
• what questions can and should be asked
• how student voice should be interpreted.

Capturing student voice can improve engagement and provides useful data for school planning

The act of capturing student voice gives students the opportunity to provide feedback and influence their own school experience. This can have an impact on their effort, participation and engagement in learning. Student feedback may also help teachers develop new perspectives on their teaching and can contribute to broader areas of school planning and improvement.

The methodology used for capturing student voice is important

It is important to consider how student feedback is intended to be used. This will help inform when to capture the feedback, which methods are best for capturing the feedback and what questions to ask. Measuring student voice over time can help examine whether particular strategies have led to changes in the way students perceive school or learning.

NSW public schools can use Tell Them From Me to capture and measure student voice

Tell Them From Me is a suite of surveys used across NSW public schools. The surveys can help schools understand students’ perspectives on their school experience, including their engagement, wellbeing and exposure to quality teaching practices. Read the Tell Them From Me case studies to learn how other NSW schools have used Tell Them From Me for school planning and improvement.

Monday, 29 February 2016

Does changing school matter?


Does changing school matter? (PDF, 1MB) explores student mobility in NSW government schools and the impact mobility has on student outcomes.

Thursday, 21 January 2016

Income mobility in Australia

Income Mobility publication (PDF, 1.7MB)

Income mobility is a measure of whether children from disadvantaged backgrounds have access to economic opportunities later in life. The Income Mobility publication (PDF, 1.7MB) summarises recent research on income mobility in Australia and the role played by the Australian education system.

Reading Recovery: A sector-wide analysis (PDF, 1MB)

Reading Recovery: A sector-wide analysis (PDF, 1MB) briefly describes the results of an evaluation examining the impact of Reading Recovery on students' outcomes in NSW government schools. You can also read the Reading Recovery evaluation. 

Tuesday, 17 November 2015

Effective leadership

Effective Leadership (PDF, 1.1MB) 

Effective Leadership myPL course

This publication presents a snapshot of the current workforce profile of principals in NSW government schools. It also outlines the research evidence on what makes an effective principal and the best ways to identify, develop and support aspiring school principals.

The five key lessons publication (PDF, 1MB) extracts five key lessons that can be learnt from the collective findings of the Smarter Schools National Partnerships evaluations.

Six Effective Practices in High Growth Schools (PDF, 1.5MB)

Six effective practices in high growth schools (PDF, 1.5MB) 

Six effective practices myPL course

 

Summary 

This publication explores the effective practices common to high growth schools

Drivers of school improvement are often complex and context specific. This publication describes the effective practices common to NSW government schools that achieved high growth in NAPLAN over a sustained period. These schools are defined as ‘High Value-Add’ (HVA) schools.

The six effective practices in high growth schools are:

1. Effective collaboration

Effective collaboration is considered vital to driving whole-school improvement. It includes teachers sharing work samples to ensure consistency in teacher judgement, developing easily accessible platforms to share teaching resources and using peer coaching and support programs to promote and develop effective teaching practice.

2. Engaging and sharing in professional learning

Professional learning needs to support strategic school goals and be shared among staff so that learning is embedded across the school. It includes using staff meetings as a platform to share learning and internal expertise, having peer supports to ensure that professional learning is applied and obtaining tangible skills and materials for the classroom.

3. Setting whole-school goals and strategies for change

Educators need to work together and set shared goals for effective change to occur. This includes having whole-school planning days and regular staff meetings to discuss, support and evaluate progress towards achieving goals.

4. Using explicit and effective teaching strategies

Showing students what success looks like and breaking down the steps required to achieve success is an important teaching strategy in high growth schools. Other strategies include using student data to identify students’ learning needs, developing learning targets and monitoring progress and developing accessible teaching resources that include templates for how to differentiate lessons and assessments.

5. Creating an environment that promotes learning and high levels of student engagement

Promoting a positive learning culture where students are engaged in school and value their outcomes is key to improving school performance. This includes using innovative teaching techniques, teaching students about literacy and numeracy through real-world examples such as transport, and organising trips to local universities for students and parents to help raise expectations about future study.

6. Setting high expectations for achievement

Creating high expectations for students, both academically and behaviourally, is essential to improving student performance. This could include displaying learning progressions in classrooms to show students what performance benchmarks are and having a common set of guidelines across a school that rewards positive behaviour.

For more information on how we selected HVA schools for this study, read High value-add schools: Key drivers of school improvement.

Student engagement and wellbeing in NSW (PDF, 2MB)

Student engagement and wellbeing in NSW (PDF, 2MB) presents findings from a pilot study undertaken in 2013 that measured student engagement, wellbeing and quality teaching in a group of NSW government secondary schools.

Download Using value-added measures to identify school contributions to student learning (PDF, 1.2MB).

Value-added measures are based on learning growth and are used by schooling systems to indicate the contribution that a school makes to student learning, over and above the contribution made by the average school.

CESE has developed a set of value-added measures for NSW government schools that adjust for factors outside schools' control, such as students' socio-economic status (SES). This publication provides an introduction to the measures, including their use and interpretation, and what they tell us about the factors influencing student outcomes.
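As a highly simplified sketch (this is not CESE's actual model, and all figures are hypothetical), a value-added measure can be thought of as the gap between a school's observed growth and the growth predicted from contextual factors outside its control:

```python
# Illustrative sketch: a toy value-added calculation using a simple linear
# regression on one contextual factor. Real value-added models adjust for
# many factors and are far more sophisticated; all numbers are hypothetical.
ses = [1.0, 2.0, 3.0, 4.0, 5.0]      # hypothetical SES index per school
growth = [4.0, 5.5, 6.0, 7.5, 8.0]   # hypothetical observed score growth

n = len(ses)
mean_x = sum(ses) / n
mean_y = sum(growth) / n
# Ordinary least squares slope and intercept for growth predicted from SES.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ses, growth))
         / sum((x - mean_x) ** 2 for x in ses))
intercept = mean_y - slope * mean_x

# A school's value-added is its growth above or below the prediction.
value_added = [y - (intercept + slope * x) for x, y in zip(ses, growth)]
for school, va in enumerate(value_added, start=1):
    print(f"School {school}: value-added {va:+.2f}")
```

By construction, these residuals average to zero across schools: a positive value suggests growth above what contextual factors alone would predict, a negative value the reverse.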



Accessible documents

If you find a CESE publication is not accessible, please contact us
