With national polls showing Reform UK at 28%, Labour at 18%, and the Conservatives at 19%, it matters enormously how those numbers are produced. British polling methodology has changed dramatically since 2010, moving from telephone to online interviewing and developing increasingly sophisticated weighting procedures. Here is how it actually works.
Building the Panel: Who Takes Part in Online Surveys
The major British polling firms — YouGov, Savanta, Ipsos, Opinium, Techne — maintain large proprietary panels of people who have registered to take regular surveys. Panel sizes range from around 150,000 at the smaller firms to over 800,000 at YouGov, which operates one of the largest online research panels in the world. Panellists sign up voluntarily and are typically rewarded with points redeemable for vouchers or small cash payments.
For each voting intention poll, a firm will send invitations to a sample of panellists selected to broadly represent the national demographic profile. They typically aim for 1,000–2,000 completed responses, regarded as sufficient for national-level estimates with a margin of error of around ±2–3 percentage points. The raw responses are never used as collected — the real work begins with weighting.
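As a rough illustration, the textbook margin-of-error formula for a proportion takes only a few lines of Python. It assumes simple random sampling, which panel samples are not, so real-world uncertainty is somewhat larger than the formula suggests; the figures below are illustrative only.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion,
    under the (simplifying) assumption of simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A party on 28% in a sample of 1,500 completed responses:
print(f"{margin_of_error(0.28, 1500):.1%}")  # ~2.3 percentage points
```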
Demographic Weighting: Making the Sample Representative
Because online panels do not perfectly mirror the electorate — they typically over-represent younger, more educated, and more politically engaged respondents — every firm applies statistical weights to adjust the sample. The standard demographic variables include age, gender, region, and social grade. If your sample has too many 25–34 year olds, responses from that group are down-weighted; if it has too few over-65s, their responses count for more.
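The workhorse technique here is raking, or iterative proportional fitting: weights are adjusted until the sample matches population targets on each variable in turn. The sketch below is a minimal, illustrative implementation; the category labels and target shares are invented, not any firm's actual figures.

```python
import numpy as np
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    """Iterative proportional fitting: adjust weights until the weighted
    distribution of each variable matches its population target."""
    w = np.ones(len(df))
    for _ in range(max_iter):
        max_shift = 0.0
        for var, target in targets.items():
            for category, share in target.items():
                mask = (df[var] == category).to_numpy()
                current = w[mask].sum() / w.sum()
                if current > 0:
                    factor = share / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1))
        if max_shift < tol:
            break
    return w * len(df) / w.sum()  # normalise to a mean weight of 1

# Hypothetical sample and census-style targets:
sample = pd.DataFrame({
    "age":    ["18-34", "18-34", "35-64", "65+", "35-64", "18-34"],
    "gender": ["F", "M", "F", "M", "M", "F"],
})
targets = {
    "age":    {"18-34": 0.27, "35-64": 0.48, "65+": 0.25},
    "gender": {"F": 0.51, "M": 0.49},
}
weights = rake(sample, targets)
```

In practice firms weight on many more variables at once, but the underlying loop is the same.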
Past vote recall has become the most contested weighting variable in British polling. Firms weight their samples so that the proportion recalling a vote for each party in the last election roughly matches the actual result. This sounds simple but is complicated by “false recall” — people who subconsciously misremember how they voted, typically in the direction of the party they currently support. Getting this weighting right or wrong can move headline figures by 2–4 points.
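To see how past-vote weighting moves a headline figure, here is a toy example: a sample whose recalled 2024 vote over-represents Labour is scaled back to the (approximate) actual 2024 shares, and the Reform figure shifts by about two points. All respondent counts are invented for illustration.

```python
import pandas as pd

# Hypothetical raw sample of 1,000: recalled 2024 vote over-represents
# Labour, as online panels often do.
df = pd.DataFrame({
    "recall_2024": ["Lab"] * 420 + ["Con"] * 200
                   + ["Reform"] * 120 + ["Other"] * 260,
})
df["intention"] = (
    ["Lab"] * 340 + ["Reform"] * 80      # recalled Lab
    + ["Con"] * 140 + ["Reform"] * 60    # recalled Con
    + ["Reform"] * 120                   # recalled Reform
    + ["Other"] * 230 + ["Reform"] * 30  # recalled Other
)

# Approximate GB 2024 result used as the weighting target:
target = {"Lab": 0.34, "Con": 0.24, "Reform": 0.14, "Other": 0.28}
recalled = df["recall_2024"].value_counts(normalize=True)
df["w"] = df["recall_2024"].map(lambda party: target[party] / recalled[party])

unweighted = (df["intention"] == "Reform").mean()
weighted = df.loc[df["intention"] == "Reform", "w"].sum() / df["w"].sum()
# Correcting the over-Labour sample lifts Reform by about 2 points:
print(f"Reform: {unweighted:.1%} unweighted vs {weighted:.1%} weighted")
```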
Turnout Modelling: Filtering Likely Voters
Not every person who answers a survey will actually vote. British pollsters ask respondents to rate how likely they are to vote on a scale of 0–10. Most firms filter out or down-weight those rating themselves 5 or below, though the exact threshold varies. Some firms also use past voting behaviour as an additional filter, giving more weight to those who report having voted in previous elections.
Turnout modelling can significantly shift party shares. In the UK context, younger voters and supporters of smaller parties typically report lower likelihood-to-vote scores. Heavy turnout filtering therefore tends to reduce Green and Lib Dem shares while boosting the Conservatives and Labour. This is one reason why reported shares for the same party can vary by 3–4 points between firms applying different turnout models.
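A toy comparison makes the point. In the sketch below, the respondent counts and likelihood-to-vote scores are invented to mirror the pattern just described; applying a strict "certain to vote" filter rather than a lenient one visibly reshuffles the shares.

```python
import pandas as pd

# Hypothetical respondents: intention plus self-rated likelihood to
# vote (0-10). Smaller-party supporters report lower scores here
# purely for illustration.
rows = (
    [("Con", 10)] * 180 + [("Con", 8)] * 40
    + [("Lab", 10)] * 150 + [("Lab", 8)] * 60
    + [("Reform", 10)] * 200 + [("Reform", 8)] * 60
    + [("Green", 10)] * 20 + [("Green", 6)] * 70
    + [("LD", 10)] * 40 + [("LD", 6)] * 80
)
df = pd.DataFrame(rows, columns=["intention", "ltv"])

def shares(d: pd.DataFrame) -> pd.Series:
    return (d["intention"].value_counts(normalize=True) * 100).round(1)

print(shares(df[df["ltv"] >= 6]))   # lenient model: keep scores of 6+
print(shares(df[df["ltv"] == 10]))  # strict model: certain voters only
# Under the strict model, Green falls from 10.0 to 3.4 and the
# Lib Dems from 13.3 to 6.8, while the larger parties all rise.
```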
Question Wording and Order Effects
The wording of the voting intention question itself matters. The standard British formulation is: “If there were a UK general election tomorrow, which party would you vote for?” Some firms include a follow-up squeeze question for undecideds: “Which party would you be most inclined to support?” The squeeze question reliably boosts smaller parties by 1–2 points, since it captures soft preference rather than firm intention.
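Mechanically, folding the squeeze question into the headline figure is simple: substitute the squeeze answer wherever the main answer was "don't know", then exclude anyone still undecided. The column names and responses below are hypothetical.

```python
import pandas as pd

# Hypothetical responses: main voting-intention answer, plus the
# "squeeze" answer asked only of those who said "Don't know" (DK).
df = pd.DataFrame({
    "main":    ["Lab", "DK", "Con", "Reform", "DK", "Green", "DK", "Lab"],
    "squeeze": [None, "Green", None, None, "LD", None, "DK", None],
})

# Fold squeezed preferences into the headline figure; respondents
# still undecided after the squeeze are excluded.
combined = df["main"].where(df["main"] != "DK", df["squeeze"])
headline = combined[combined != "DK"].value_counts(normalize=True)
print(headline)
```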
Question order also matters. If respondents are asked about satisfaction with the government immediately before the voting intention question, the resulting voting intention figures tend to be slightly worse for the governing party. Reputable firms aim to minimise these priming effects, but they cannot be eliminated entirely. The British Polling Council requires member firms to publish full question wordings and topline results, allowing researchers to examine such effects systematically.
Why Polls Diverge: Herding, House Effects, and Genuine Uncertainty
In any given week, published polls from reputable firms can differ by 4–6 points on individual parties. Some of this reflects genuine sampling variation. Some reflects house effects: systematic methodological differences between firms that persistently push one firm higher or lower on a given party. YouGov has historically shown slightly higher Reform shares than some competitors, for instance; the precise cause of any house effect is hard to pin down, but differences in panel composition, weighting, and turnout modelling are the usual candidates.
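House effects can be estimated crudely by comparing each firm's polls with the cross-firm average taken at the same time. The sketch below uses invented firms and figures; serious estimates use regression models with many more controls, but the idea is the same.

```python
import pandas as pd

# Hypothetical poll database: one row per poll.
polls = pd.DataFrame({
    "firm":   ["A", "B", "C", "A", "B", "C"],
    "week":   [1, 1, 1, 2, 2, 2],
    "reform": [29, 27, 26, 30, 28, 27],
})

# Each poll's deviation from the cross-firm average in the same week;
# a firm's mean deviation is a crude estimate of its house effect.
weekly_avg = polls.groupby("week")["reform"].transform("mean")
polls["dev"] = polls["reform"] - weekly_avg
house_effects = polls.groupby("firm")["dev"].mean()
print(house_effects)  # A: +1.7, B: -0.3, C: -1.3 (illustrative)
```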
Herding is the most problematic phenomenon: pollsters adjusting their numbers to stay close to the consensus rather than publishing outliers that might harm their reputation. The 2015 general election polling failure was partly attributed to herding, with the final polls converging on a near tie between Conservative and Labour that implied a hung parliament; the Conservatives in fact won a majority. The official inquiry identified unrepresentative samples as the primary cause, but also found the final polls more tightly clustered than sampling variation alone would explain. Post-2015 methodological reforms at the major firms were designed in part to reduce the incentive to herd, though critics argue the industry has not fully solved the problem. See our full analysis of when UK polls have got it wrong.
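One standard diagnostic for herding is to compare the observed spread of final polls with the spread that sampling error alone would produce; a spread well below that benchmark is consistent with herding. The poll shares below are invented for illustration.

```python
import math
import statistics

# Hypothetical final-week poll shares for one party (as proportions),
# each from a sample of roughly 1,000.
shares = [0.33, 0.34, 0.33, 0.34, 0.33, 0.34]
n = 1000

observed_sd = statistics.stdev(shares)
# Spread expected from binomial sampling error alone:
p = statistics.mean(shares)
expected_sd = math.sqrt(p * (1 - p) / n)

print(f"observed {observed_sd:.4f} vs expected {expected_sd:.4f}")
# Observed spread well below the sampling-error benchmark is the
# classic statistical signature of herding.
```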