Since joining Ofsted as Deputy Director for the Data and Insight team and its Chief Statistician in January, I have been looking at how data is currently used to support our business and how it can be developed. It is fair to say that I am impressed with the importance already given to the good and effective use of data. There is a clear desire to ensure that inspection activity is evidence-based. There is also a desire to see that data isn’t used or presented in a way that can be misinterpreted. The latter is a challenge for anyone in the data industry. Statistics, a bit like art, can be open to interpretation.
That’s where the insight part of our role comes in. A lot of what we do is to place information in context. When we say that something is bigger or smaller than average, we have to be clear whether that is because of randomness, or whether there are other factors at play. It’s here that statistical testing and predictive modelling can be very powerful, helping identify settings whose data appear at odds with what we normally expect.
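To give a rough flavour of what that testing looks like in practice, the sketch below shows the simplest version of the idea: standardising each setting's figure against the group average and flagging anything more than a couple of standard deviations away. This is an illustration only, not Ofsted's actual methodology; the setting names, scores and the 2-standard-deviation threshold are all hypothetical.

```python
# A minimal sketch of flagging unusual values with a z-score test.
# All figures and the threshold are illustrative, not real Ofsted data.
from statistics import mean, stdev

# Hypothetical attainment scores for a small group of settings
scores = {"Setting A": 52.1, "Setting B": 49.8, "Setting C": 61.4,
          "Setting D": 50.3, "Setting E": 35.2, "Setting F": 51.0}

mu = mean(scores.values())
sigma = stdev(scores.values())

for name, score in scores.items():
    z = (score - mu) / sigma          # standardised distance from the average
    flag = "unusual" if abs(z) > 2 else "typical"
    print(f"{name}: score={score:.1f}, z={z:+.2f} -> {flag}")
```

In reality the models are richer than a single z-score, but the principle is the same: quantify how far a setting sits from what we would normally expect, and use that as a prompt for further questions rather than a verdict.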
But what is normal? Every child and every setting is different, and special. That is true, of course. Unlike art, however, the beauty of statistics (as a science) is that we can remove some subjectivity and provide high confidence about whether (or not!) we are observing something unusual.
Data starts the conversation
That said, another 'law' of statistics is that random events do happen, such as tossing a coin and getting 10 heads in a row, or winning the lottery. And then there are the limitations of the data itself. Data rarely tells us everything we need to know and can only represent the characteristics that are measured. It is for this reason that we produce data and data-driven products as the start of the conversation and not 'the answer'.
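To put a number on the coin-tossing example: a run of 10 heads has a probability of about 1 in 1,000, yet if you look across many thousands of settings you should still expect to see a handful of such 'flukes' by chance alone. The back-of-the-envelope sketch below works through that arithmetic; the number of settings used is purely illustrative.

```python
# Rare outcomes still occur when you look often enough.
# The number of settings here is hypothetical, purely for illustration.
p_ten_heads = 0.5 ** 10              # probability of 10 heads in a row: 1/1024
n_settings = 20_000                  # illustrative number of settings examined

expected = n_settings * p_ten_heads  # expected count of such runs by chance alone
print(f"P(10 heads in a row) = {p_ten_heads:.4f} (about 1 in {round(1 / p_ten_heads)})")
print(f"Across {n_settings} settings, roughly {expected:.0f} would show this by chance")
```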
For example, our work to risk-assess schools draws on a range of measures, including pupil attainment and progress, while recognising that there may be other factors at play. Our inspection data summary reports (given to inspectors before a visit) tell inspectors whether performance data is unusual. Inspection judgements rely on a broad range of evidence-based conversations and observations by trained and experienced inspectors to work out what is actually going on. Sean Harford’s recent blog made a similar point.
So, in a data-driven world, we want to better understand what works and what does not (from a data perspective) and keep telling the story. It is not just about looking through the lens of existing data (although that’s important), but also looking at what else we and the wider statistical/academic research community can show about what drives effective education and care.
Follow Jason on Twitter: @JRbradbury