How We Track UK Polls: Poll-of-Polls Methodology
This page explains how UKPollingData.com constructs its poll-of-polls average, how we select and weight individual polls, how we handle house effects, and how to interpret the data we publish. It also includes a FAQ on common polling concepts that affect how UK political polls should be read.
What is a poll-of-polls?
A poll-of-polls — sometimes called a polling average — is a statistical aggregation of multiple individual opinion polls. Rather than relying on any single survey, a poll-of-polls averages across several, reducing the impact of any one firm’s house effects or random sampling variation.
The core intuition is simple: if five pollsters each produce an estimate with a margin of error of ±2.5 points, combining their results will produce an average with a smaller effective error than any one of them individually. The more independent estimates you combine, the closer the aggregate should be to the true population value — provided the errors are genuinely random rather than systematically biased in the same direction.
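The error reduction from averaging independent estimates can be sketched directly. This is the idealised case in which pollsters' errors are uncorrelated; the function name is illustrative.

```python
import math

def combined_margin_of_error(individual_moe: float, n_polls: int) -> float:
    """Standard error of a simple mean of independent, equally precise estimates.

    Assumes errors are purely random and uncorrelated across pollsters --
    the idealised case; correlated house effects violate this assumption.
    """
    return individual_moe / math.sqrt(n_polls)

# Five independent polls, each with a ±2.5-point margin of error:
print(round(combined_margin_of_error(2.5, 5), 2))  # 1.12 points
```

Averaging five such polls roughly halves the effective margin of error, which is why the aggregate is more stable than any single survey.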
The challenge with UK polling is that the errors are not always random. Different firms use different methodologies that produce systematic differences — so-called house effects. A naive average of all available polls may simply average biases rather than cancel them out. Our methodology is designed to address this.
Data collection and eligibility
We include polls in the average if they meet the following criteria:
- Published by a British Polling Council member firm. BPC membership requires disclosure of full data tables within two working days of a poll’s release. Non-BPC polls lack the transparency needed to assess their methodology.
- Cover GB adults. UK-wide polls that include Northern Ireland on a separate sampling basis are adjusted. Scotland-only, England-only, or constituency-level polls are tracked separately and do not enter the GB average.
- Fieldwork within the trailing 35 days. Polls older than 35 days are excluded from the current average, though they remain in our historical database and charts.
- Sample size of at least 800 respondents. Very small polls carry disproportionate statistical noise. We apply a soft floor at 800; polls with samples between 800 and 1,000 receive a slight downward weight adjustment.
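The four criteria above can be expressed as a simple filter. The record fields and the 0.9 down-weight value are illustrative assumptions, not the site's actual schema or parameters.

```python
from datetime import date, timedelta

def is_eligible(poll: dict, today: date) -> bool:
    """Apply the four inclusion criteria: BPC member, GB coverage,
    fieldwork within 35 days, and a hard sample-size floor of 800."""
    return (
        poll["bpc_member"]
        and poll["coverage"] == "GB"
        and (today - poll["fieldwork_end"]) <= timedelta(days=35)
        and poll["sample_size"] >= 800
    )

def size_adjustment(sample_size: int) -> float:
    """Soft floor: polls with 800-1,000 respondents get a slight down-weight.
    The 0.9 multiplier is an illustrative value."""
    return 0.9 if sample_size < 1000 else 1.0

poll = {"bpc_member": True, "coverage": "GB",
        "fieldwork_end": date(2025, 1, 10), "sample_size": 950}
print(is_eligible(poll, date(2025, 1, 20)), size_adjustment(950))  # True 0.9
```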
Recency weighting
More recent polls are more informative about current public opinion than older ones. We apply an exponential decay weighting function: a poll fielded today receives full weight, while a poll fielded 30 days ago receives approximately 45% of that weight. Polls are excluded entirely after 35 days.
The decay rate is calibrated so that the effective window — the period over which roughly half the weight is concentrated — is approximately 10–14 days. This means the average responds meaningfully to real shifts in public opinion over a two-week period without over-reacting to a single outlier poll.
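A minimal sketch of the decay function, calibrating the rate from the figures above (full weight today, roughly 45% at 30 days, exclusion after 35 days). The exact constant used in production is not published; this derives one consistent with the description.

```python
import math

# Calibrated so a 30-day-old poll keeps ~45% of full weight (an assumption
# derived from the description above).
DECAY_RATE = -math.log(0.45) / 30   # ≈ 0.0266 per day

def recency_weight(age_days: float) -> float:
    """Exponential decay weight; polls older than 35 days are excluded."""
    if age_days > 35:
        return 0.0
    return math.exp(-DECAY_RATE * age_days)

print(round(recency_weight(0), 2))    # 1.0
print(round(recency_weight(30), 2))   # 0.45
print(recency_weight(40))             # 0.0
```

With this rate, roughly half of the total weight in a steady stream of polls falls within the most recent two weeks, matching the 10–14 day effective window described above.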
Sample size weighting
Larger samples carry more statistical information. We apply a square-root weighting by sample size: a poll with 2,000 respondents receives approximately 1.41 times the weight of a poll with 1,000 respondents (the square root of 2 is approximately 1.41). We cap the size premium at a factor of 1.5 to prevent very large polls from dominating the average.
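The square-root weighting with its cap can be sketched as follows; the 1,000-respondent baseline is taken from the worked example above.

```python
import math

BASELINE_N = 1000   # reference sample size, as in the example above

def size_weight(sample_size: int) -> float:
    """Square-root weighting by sample size, capped at 1.5x the baseline."""
    return min(math.sqrt(sample_size / BASELINE_N), 1.5)

print(round(size_weight(2000), 2))  # 1.41
print(size_weight(4000))            # 1.5 (cap binds: sqrt(4) would give 2.0)
```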
House effect correction
House effects are the most important methodological challenge in building a UK polling average. Some firms systematically produce higher figures for certain parties than others, even when measuring the same underlying population at the same time.
We estimate house effects by comparing each firm’s rolling average against the cross-firm average over a trailing 90-day period. If Firm A consistently records Reform UK 3 points higher than the cross-firm average, we subtract approximately 1.5 points from its Reform UK figures (a partial, not full, correction, to avoid over-correcting).
House effect corrections are updated on a rolling basis. They are published transparently on each pollster’s profile page. Currently applied corrections:
| Firm | Reform UK adj. | Labour adj. | Conservative adj. |
|---|---|---|---|
| YouGov | +0.5 | +0.5 | 0 |
| Ipsos | 0 | 0 | +0.5 |
| Redfield & Wilton | -1.5 | +1.5 | 0 |
| Techne UK | -2.0 | +1.5 | 0 |
| Deltapoll | 0 | 0 | 0 |
| Survation | 0 | 0 | 0 |
Adjustments are in percentage points and are added to a firm’s raw figures before they are weighted into the average. A positive value indicates the firm records lower than the cross-firm average, so the correction adds points to bring it up to the cross-firm mean; a negative value indicates the firm records higher, so points are subtracted.
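A sketch of the estimation step, using the worked example above (a firm running 3 points above the cross-firm average gets a partial correction of about 1.5 points). The 0.5 damping fraction is an assumption consistent with that example.

```python
def house_effect(firm_polls: list[float], cross_firm_avg: float,
                 damping: float = 0.5) -> float:
    """Partial house-effect correction in percentage points.

    firm_polls: one firm's figures for a single party over the trailing
    90 days. damping=0.5 reflects the partial (not full) correction
    described above; the exact fraction is an assumption.
    """
    deviation = sum(firm_polls) / len(firm_polls) - cross_firm_avg
    return -damping * deviation   # positive when the firm runs low

# A firm recording Reform UK ~3 points above the cross-firm average:
print(house_effect([28.0, 29.0, 27.0], cross_firm_avg=25.0))  # -1.5
```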
How to read MRP polls
MRP stands for Multilevel Regression and Post-stratification. It is a statistical technique that uses a national poll sample — typically 10,000 to 40,000 respondents for a UK general election model — to estimate voting intention at the constituency level.
Here is how it works:
- Data collection: A very large national poll is conducted, asking people their voting intention alongside a battery of demographic and geographic variables.
- Multilevel regression: A statistical model is fitted to predict voting intention from individual and area-level characteristics (age, education, past vote, local deprivation scores, etc.).
- Post-stratification: The model is applied to Census-based data on the composition of each constituency’s electorate to generate a constituency-level vote share estimate.
- Seat projection: Estimated constituency vote shares are translated into seat counts using first-past-the-post rules.
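The post-stratification step can be illustrated with a toy example: model predictions per demographic cell are weighted by the Census count of each cell in a constituency. The cells, party, and all numbers here are entirely illustrative.

```python
# Toy post-stratification for one constituency and one party.
model_prediction = {            # P(vote Labour) by (age band, education)
    ("18-34", "degree"): 0.48,
    ("18-34", "no degree"): 0.40,
    ("35+", "degree"): 0.35,
    ("35+", "no degree"): 0.30,
}
constituency_cells = {          # electorate counts per cell, from the Census
    ("18-34", "degree"): 12_000,
    ("18-34", "no degree"): 18_000,
    ("35+", "degree"): 20_000,
    ("35+", "no degree"): 30_000,
}

total = sum(constituency_cells.values())
labour_share = sum(model_prediction[c] * n
                   for c, n in constituency_cells.items()) / total
print(round(labour_share, 3))   # 0.362
```

A real MRP uses many more cells (including constituency-level predictors such as past vote and deprivation) and a fitted multilevel model rather than a lookup table, but the weighting arithmetic is the same.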
MRP models are more informative than applying a uniform national swing to every constituency, but they are not infallible. The 2017 YouGov MRP model that correctly called the hung parliament was rightly celebrated; the 2019 MRP models varied in quality. At the 2024 election, multiple MRP models slightly underestimated the Liberal Democrat performance in their target seats.
Key limitations of MRP:
- The national poll underlying the MRP still has a margin of error; that error propagates through to the constituency estimates.
- MRP models are sensitive to assumptions about turnout and swing. Different modelling choices can produce materially different seat projections from the same underlying data.
- Small constituencies (particularly Scottish seats) have high uncertainty because the sample contains fewer respondents from those areas.
We track all published UK MRP polls on the MRP seat projections page.
FAQ: Common polling questions
What does “voting intention” actually measure?
Voting intention polls ask a version of the question: “If there were a general election tomorrow, which party would you vote for?” The responses are collected, weighted to match the demographic profile of the GB adult population, and reported as percentage shares. Crucially, these figures measure current stated preference, not actual future behaviour. How people say they would vote today often differs from how they actually vote at an election that may be years away, particularly during the campaign period when party positions become more salient.
What is a margin of error?
The margin of error reflects the statistical uncertainty in a sample-based estimate. For a poll with 1,000 respondents measuring a party at 25%, the 95% confidence interval is approximately ±2.7 percentage points — meaning the true value in the population is likely between 22.3% and 27.7%. For 1,500 respondents the interval narrows to approximately ±2.2 points. This is why single polls should never be treated as definitive; only a consistent pattern across multiple polls signals a genuine trend.
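The figures above follow from the standard formula for the sampling error of a proportion. Note that real polls are weighted, which typically widens the effective interval beyond this simple-random-sample calculation.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(0.25, 1000), 1))  # 2.7
print(round(margin_of_error(0.25, 1500), 1))  # 2.2
```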
Why do different pollsters show different numbers?
Different methodological choices produce different results — even when measuring the same underlying population at the same time. Key sources of divergence include: online vs. telephone sampling, how undecideds are handled, which demographic variables are used for weighting, how past vote recall is calibrated, and what likelihood-to-vote filter is applied. These systematic differences are house effects; our methodology section above explains how we account for them.
Do the polls predict the election result?
Polls measure current opinion; they do not predict future elections. The relationship between current VI figures and eventual election results depends on how much opinion shifts between now and polling day, how turnout is distributed across parties and demographics, and tactical voting patterns. In a UK context, the conversion of national vote share to seats is particularly non-linear under first-past-the-post, which makes seat projections especially sensitive to small changes in underlying vote shares.
What is the difference between a regular poll and an MRP poll?
A regular voting intention poll gives you national vote share estimates. An MRP poll uses a much larger sample and a sophisticated statistical model to estimate vote shares at the constituency level — enabling seat projections. See the section above for a full explanation of MRP methodology.
How do I tell if a polling movement is real?
Look for confirmation across multiple firms over multiple polls. A single 3-point shift in one firm’s figures is not sufficient evidence of a genuine trend — it could easily be within the margin of sampling error. When the cross-firm average moves by 2+ points in the same direction over 3–4 weeks, that is a more reliable signal. Our voting intention tracker makes this comparison easy by displaying all firms side by side.
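The rule of thumb above can be sketched as a simple check on a series of cross-firm averages (say, weekly snapshots over 3–4 weeks). The threshold and the function itself are illustrative, not part of our published methodology.

```python
def is_real_movement(cross_firm_avg: list[float],
                     threshold: float = 2.0) -> bool:
    """Flag a shift only when the cross-firm average moves by
    threshold+ points in a single sustained direction across the series."""
    deltas = [b - a for a, b in zip(cross_firm_avg, cross_firm_avg[1:])]
    net = cross_firm_avg[-1] - cross_firm_avg[0]
    same_direction = all(d >= 0 for d in deltas) or all(d <= 0 for d in deltas)
    return abs(net) >= threshold and same_direction

print(is_real_movement([24.0, 24.8, 25.5, 26.3]))  # True: +2.3, one direction
print(is_real_movement([24.0, 27.0, 24.5, 24.2]))  # False: no sustained trend
```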