How it Works
Why our data is representative
How do we make our data representative?
Teacher Tapp asks thousands of teachers three single- or multiple-response questions each day. We never ask any other sort of question – other formats are invariably terrible to answer on a mobile phone! In addition to these three questions, we can invite a sample of respondents to answer an additional batch of questions. We have never asked more than 12 questions in this additional batch (we wouldn't want to bore our users).
How do we know our respondents are teachers?
Like all surveys, we rely on trust — but we don’t take it at face value. New users aren’t included in published results straight away. Instead, they answer daily questions for about a week (around 20 in total), during which we collect demographic details such as role, school type, and gender. These are cross-checked against government databases, giving us extra assurance that respondents are genuine teachers in real schools.
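The cross-checking step above can be sketched as a simple lookup against an external schools register. This is an illustrative assumption, not Teacher Tapp's actual pipeline: the database, field names, and URNs below are hypothetical placeholders.

```python
# Hypothetical sketch: cross-check self-reported demographics against an
# external schools register. All records and field names are illustrative.
schools_db = {
    "URN100001": {"phase": "primary", "type": "state"},
    "URN100002": {"phase": "secondary", "type": "private"},
}

def plausible(respondent):
    """Return True if the respondent's self-reported school details
    match the external record for the school they named."""
    record = schools_db.get(respondent["school_urn"])
    if record is None:
        return False  # school not found in the register
    return (record["phase"] == respondent["phase"]
            and record["type"] == respondent["school_type"])

print(plausible({"school_urn": "URN100001",
                 "phase": "primary",
                 "school_type": "state"}))  # → True
```

In practice a check like this would combine several signals (role, school type, gender, answer patterns over the first week) rather than a single exact match.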
How do we know our sample is representative based on observed characteristics?
Teacher Tapp has the largest active sample of practising teachers in England, with over 10,000 responding every day. Our panel is representative rather than random: we apply post-stratification weights to ensure the composition of our respondents matches known teacher population benchmarks. Research comparing survey methods (Jerrim, 2023) highlights that so-called “gold standard” random surveys often suffer from very low response rates, meaning they may not capture the full range of teachers’ experiences. By contrast, teacher panels can gather much larger and more frequent samples, and when weighted and externally checked, often provide a more accurate and timely picture of the profession.
Currently we re-weight by gender, age group, senior leadership status, school phase, and school funding (private vs. state-funded). This re-weighting allows us to upweight or downweight particular respondents so the weighted composition of our sample reflects known teacher population margins. For example, in a recent analysis, female primary classroom teachers in their 20s were given a weighting of 2.4x the typical respondent, while male secondary senior leaders were weighted 0.5x.
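The arithmetic behind post-stratification is straightforward: a respondent's weight is their cell's population share divided by its share of the sample, so under-represented groups are up-weighted. The sketch below is illustrative only; the cell definitions and shares are hypothetical numbers chosen to reproduce the 2.4x and 0.5x figures above, not Teacher Tapp's actual benchmarks.

```python
# Illustrative post-stratification weighting. Cells and shares are
# hypothetical, not real teacher population benchmarks.

# Known population share of each weighting cell.
population_share = {
    ("female", "primary", "classroom", "20s"): 0.30,
    ("male", "secondary", "senior_leader", "any"): 0.10,
    ("all_other_cells",): 0.60,
}

# Observed share of each cell among survey respondents.
sample_share = {
    ("female", "primary", "classroom", "20s"): 0.125,  # under-represented
    ("male", "secondary", "senior_leader", "any"): 0.20,  # over-represented
    ("all_other_cells",): 0.675,
}

# Weight = population share / sample share.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

print(weights[("female", "primary", "classroom", "20s")])      # → 2.4
print(weights[("male", "secondary", "senior_leader", "any")])  # → 0.5
```

Weighted estimates are then computed as weighted averages of responses, so each cell contributes in proportion to its share of the teacher population rather than its share of the panel.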
How do we know our sample is representative based on unobserved characteristics?
This is more difficult — and a challenge for all surveys. Our concern is whether teachers who enjoy using Teacher Tapp differ systematically from those who don’t. We reduce this risk in several ways:
- Academic analyses: RAND (Burge et al., 2021) studied teacher retention using a discrete choice experiment, pooling data from Teacher Tapp and the NFER Teacher Voice omnibus. They found the combined dataset to be “broadly nationally representative,” showing Teacher Tapp responses aligned with a long-established panel.
- Validation against external data: During the 2023 teacher strikes, Teacher Tapp polled teachers on school closures. Results closely mirrored the Department for Education’s official transparency data on strike-day school status, confirming our survey findings tracked real-world outcomes.
Together, these checks show that Teacher Tapp’s weighted estimates consistently match other high-quality sources, boosting confidence that our data reflects teachers’ real experiences.
Limitations and Transparency
We are clear about our limits: Teacher Tapp is not a random sample, and weighting cannot adjust for everything, especially where no reliable benchmarks exist. We only re-weight on characteristics with trustworthy population data and avoid over-adjustment. Our methodology and assumptions are published so others can assess the robustness of our approach.