What can my data tell me?

As PYP teachers, we rely on assessment data as a source of “evidence” in our professional inquiries. However, before we can begin using assessment data purposefully, we need to consider what types of questions it can help us answer, as this will help us identify which sources of data will be most helpful.

I am focusing on the use of questions here as they are the drivers of inquiry. As teachers, we have to be able to develop questions that will help us learn about trends, patterns, and exceptions in our students’ learning. Without a guiding question, it is easy for us to get sidetracked and waste precious time.

While there is undoubtedly overlap, I think most of our questions will fall into one of three categories:

  1. improving teaching and learning,
  2. developing learner agency, and
  3. engaging in programme evaluation.

I have included questions about learner agency here as this is a core concept in the PYP enhancements, and assessment plays an important role in helping students develop agency in their learning.

We can also work in reverse, which is the position I am in. My school has amassed data and I want to determine which category of questions it can or cannot be used to answer. For example, is our International Schools Assessment (ISA) data useful for answering questions about teaching and learning? Can our Developmental Reading Assessment (DRA) data be used to support learner agency?

For data to be useful for improving teaching and learning, it must:

  • be centred on the student:

The assessment data we collect must be able to tell us specifically about what each student has learned. Assessment data that tells us how the whole class did as a cohort will not provide us with enough detail about what each student has understood and what misconceptions they still hold. It is this level of detail that we need in order to answer the three big questions of learning: Where is the student going? Where is the student now? and What strategies can help the student get to where s/he needs to go? (Hattie & Timperley, 2007).

  • be collected frequently:

To impact our planning and teaching, we need to know daily what our students have or have not understood. This is because the dirty secret of teaching is that what we teach is not necessarily what students learn (Wiliam, 2011, 47). Useful data helps teachers identify what each student has (or has not) learned at that moment compared to the intended learning outcome.

  • be timely:

If we must wait weeks or months for the assessment data to be generated and analysed, the window for improving our instruction has passed.

  • directly assess what was taught:

This one seems obvious – we want our assessments to be based on what was taught; otherwise, how can they answer questions about the connection between teaching and learning?

Using standardised assessment data

Given these factors, an important question can be raised about the appropriateness of using standardised assessment data to answer questions about teaching and learning. Many of us in international schools either administer nationally or locally mandated standardised assessments, or our schools choose a standardised assessment tool such as the ISA or MAP. Before you can use these assessments to answer questions about teaching and learning, you’ll have to consider:

  • How frequently is the assessment done? Venables (2014) states that data collected yearly, such as my ISA data, has less impact on teaching and learning because its questions have to span a wide array of topics and thus do not give enough detail about our students’ mastery and misconceptions.
  • How closely does the content align with your curriculum? For example, when I worked at an international school in Ontario, Canada, we moved curriculum outcomes between year levels so we could create coherent units of inquiry. We met the provincial mandate for teaching the prescribed curriculum, but it was taught out of order, which reduced the usefulness of the standardised provincial testing data at Grade 3.
  • How quickly can we access and analyse the data? Herman et al. (2012) claim that this is one of the main challenges in using standardised assessment data: the results come back weeks or months after the assessment was given, or at the end of the academic year when it is too late to use them. Advances in technology are helping to close this time gap, but it is still something to be cognisant of.

Indeed, in the new PYP: From Principles into Practice (2018), the IB cautions schools using standardised tests to measure students’ performance to “carefully consider how to effectively use this data point to add to the comprehensive view of student learning” (75).

For data to be useful for developing learner agency, it must:

  • be understood by the learner

Nicol & Macfarlane-Dick (2006) found that many teachers assume students automatically know what the feedback they are given means. They call this the ‘transmission view’ of formative assessment: the teacher transmits feedback to students, but no time is set aside for the student and teacher to co-construct an understanding of it. For learners to be able to use assessment data to gain more voice, choice and ownership over their learning, they need to be clear about what the feedback means.

  • indicate how to improve

While teachers often provide written and oral feedback about a learner’s work, the “feedback and assessments typically lack suggestions for what students can do as active agents within the assessment process” (Braund & DeLuca, 2018, 80). Too often feedback is focused on what was done and does not address the “next steps” which would help the learner improve. Useful data makes the next steps clear to the learner.

  • help students develop an ‘anatomy of quality’

Assessment data should help learners develop their ability to understand what is expected of them and what quality work looks like. It is only by knowing this that they can effectively determine what and how to improve. Rodgers (2018) found that descriptive feedback “allowed students to develop a set of inner criteria for what learning felt like” (95) and students developed a more sophisticated language for describing their learning.

  • have a positive impact on motivation and self-efficacy

Research clearly shows that assessment data can reduce students’ motivation to learn, particularly amongst low-achieving students and in settings where the testing culture values performance over learning (Assessment Reform Group, 2002).

As PYP schools, one of our mandates is to develop lifelong learners. This requires students to develop self-efficacy – their belief in their ability to succeed based on past experiences. Thus, assessment has a powerful role to play. We must ensure that the type of assessment data we share with students:

  • helps them see how much they’ve learned,
  • provides feedback that helps them determine their next steps and how to take them successfully,
  • supports conversations about the value of mistakes,
  • encourages them to value persistence and the effort they invested, and
  • helps them develop responsibility for their learning (Assessment Reform Group, 2002).

For data to be useful for programme evaluation

What data will be useful here really depends on the type of programme evaluation you would like to conduct. The four main types are:

  • formative: “ensures that a program is feasible, appropriate, and acceptable before it is fully implemented” (Centers for Disease Control and Prevention, n.d.)
  • process: evaluates whether the program was implemented as designed
  • outcome: measures the program’s effect on the targeted group
  • impact: measures how effective the program was in achieving its stated goals

If your school is planning to engage in programme evaluation, I highly recommend the resources at Better Evaluation. Standardised assessment data typically has a role to play in programme evaluation because it allows us to answer questions about subgroups within the targeted population and about progress over time. It can also be used to answer questions about student ability, a factor over which the school has no direct control but which may influence the success of a programme.
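To make those two kinds of questions concrete, here is a minimal sketch in Python with pandas of how yearly results could be sliced by subgroup and tracked over time. Everything in it is hypothetical: the column names, the subgroup labels, and the scores are invented for illustration, so substitute whatever your school’s data export actually contains.

```python
# A minimal sketch of subgroup and progress-over-time analysis of
# standardised assessment results. All names and numbers below are
# hypothetical placeholders, not a real ISA export.
import pandas as pd

# Hypothetical yearly results: one row per student per administration.
data = pd.DataFrame({
    "year":     [2016, 2016, 2016, 2017, 2017, 2017],
    "subgroup": ["EAL", "EAL", "non-EAL", "EAL", "non-EAL", "non-EAL"],
    "reading":  [412, 430, 455, 428, 461, 470],
})

# Subgroup question: how did each subgroup perform in each year?
by_subgroup = data.groupby(["year", "subgroup"])["reading"].mean()
print(by_subgroup)

# Progress-over-time question: is the cohort average improving?
by_year = data.groupby("year")["reading"].mean()
print(by_year.diff())  # year-on-year change in the mean score
```

The point is less the code than the shape of the questions: both of these evaluation-friendly uses reduce to grouping the same data by a different column.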

What this means for me:

It is clear to me that for most of our questions about teaching and learning, we will need to rely on the data we collect through formal and informal classroom assessments, as these are done frequently, are timely, and are (hopefully) directly related to what we teach. Given that we have common formative and summative tasks for our units of inquiry, we have data that we can share and analyse as a team to determine next steps for our students and our units of inquiry.

I still have questions though about what data we may be missing that we will need to guide instructional decisions. Maybe this cannot be known until we have developed a specific inquiry question?

It also seems clear that classroom-based assessment data is the best type to use to answer questions about how students are developing learner agency.

I think our Developmental Reading Assessment data can be used to answer questions about teaching and learning, but we will need to be very careful about whether and how we share these results with students. The ‘levelled’ nature of the assessment means that students are aware of whether or not they have improved as readers, regardless of the effort they may have invested. Used incorrectly, the data could demotivate students.

However, our reading and writing continuums could be used with students, as they can easily see their progress and identify specific areas to improve on. The question that arises here is whether the form is accessible to our students, and from what year level. Ideally, if it is accessible, students would engage in co-evaluation with their teacher.

As for our International Schools Assessment (ISA) data, I think this is best kept for programme evaluation. The gap in time between when we administer it and get the results is a matter of months. We also run the assessment only once a year and the content does not align directly with our curriculum. I think it might be able to answer questions for us about whether our programme is improving by looking at the results over time, and may even be able to guide decision-making at the school leadership level.

My next post: Using data with teaching teams


References:

Assessment Reform Group (2002). Testing, Motivation and Learning. Retrieved from http://www.hkeaa.edu.hk/DocLibrary/SBA/HKDSE/Eng_DVD/doc/Testing_Motivation_Learning.pdf

Australian Council for Educational Research (n.d.). About the ISA. Retrieved from https://www.acer.org/isa/about

Braund, H., & DeLuca, C. (2018). Elementary students as active agents in their learning: an empirical study of the connections between assessment practices and student metacognition. The Australian Educational Researcher, 45(1), 65-85.

Centers for Disease Control and Prevention (n.d.). Types of Evaluations. Retrieved from https://www.cdc.gov/std/Program/pupestd/Types%20of%20Evaluation.pdf

Hattie, J., & Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), 81-112.

Nicol, D. J., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199-218.

Rodgers, C. (2018). Descriptive feedback: student voice in K-5 classrooms. The Australian Educational Researcher, 45(1), 87-102.

Wiliam, D. (2011). Embedded Formative Assessment. Bloomington, IN: Solution Tree.
