
Understanding a political decision in a fragmented Britain

Alexander Wheatley

Innovation Researcher

Politics 02.07.2018 / 18:00


Can polling help us make sense of new fragments in UK society? Part of our #FragmentNation series.

Political landscapes the world over are shifting. Nations are fragmented, and the fragments are not what they once were. As a result, it is becoming ever harder to understand what people think and feel in the run-up to political events. Indeed, political polling has been getting a bad press, as populist shocks and complicated debates make pollsters’ jobs more difficult.

But much of this criticism is unfounded. The democratisation of information means that the number of pundits, aggregators and small-time pollsters has grown in recent years. For every piece of rigorous research there is another that is less scientific… and the noise is deafening.

A well-sampled and well-understood piece of polling is still the most effective means of predicting the outcome of an election, but producing one is no easy task. Kantar Public invests great effort in understanding the shifts in demographics, debates and political landscapes that present new challenges for the industry, and it has an enviable track record in predicting election results to show for it.

Better predictions in a fragmented society

To predict successfully, you first need to understand who you are talking to. Samples have to be built carefully, and the weightings applied to them thoroughly researched – the fragments must be properly understood.
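To make the weighting idea concrete, here is a minimal sketch of post-stratification weighting. All the figures are invented for illustration; in practice, population targets would come from census data and many more demographic dimensions would be used.

```python
# Share of each age band in the target population (assumed values).
population = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}

# Share of each age band among survey respondents (assumed values):
# younger people are under-represented, older people over-represented.
sample = {"18-34": 0.18, "35-54": 0.32, "55+": 0.50}

# Weight = population share / sample share, so under-represented
# groups count for more and over-represented groups for less.
weights = {group: population[group] / sample[group] for group in population}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
```

Respondents in the 18-34 band would each count roughly 1.56 times, while those in the 55+ band would count 0.76 times, pulling the weighted sample back towards the population profile.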

Furthermore, you must be aware of the complexity of the polling question itself. “How do you intend to vote?” may seem a straightforward question, but it is not. In survey research, there are many barriers between a researcher and the truth, and this question is a good example of how these barriers manifest.

Firstly, it is a sensitive question. To get a valid response you must ensure that the person answering trusts you and feels comfortable giving you a truthful answer. Context is important.

Secondly, it is a question of self-observation, and such questions must be broken down into their simplest form if we want them answered truthfully. Any room for interpretation leaves a margin for error. For example, if you ask respondents in one questionnaire for their average alcohol consumption, and in another the simpler question of which drinks they consumed that week, you see reported figures change from nine units to twenty.

Finally, it is a prediction about future behaviour, not an objective statement. Asking about future behaviour in survey research is often more of an art than a science, and it is hard to keep consistent. The pitfalls are numerous, and it is easy to find that discrepancies and mistakes influence your conclusions. In the build-up to the EU Referendum, for example, telephone polls consistently underestimated the Vote Leave campaign.

Understandably, therefore, uncovering alternative methods for understanding political landscapes and predicting election outcomes is a field of great interest to researchers.

An alternative approach to uncovering truth

Over the last four years, across multiple elections, Lightspeed’s Innovation team has been seeking to develop such an alternative method. From the UK’s EU Referendum to Australia’s vote on marriage equality – looking at everything from isolating expert predictors to measuring reaction times to visual stimuli – we have built a set of methodological approaches that go beyond simply asking about voting intention.

We build a series of question approaches to understand the strength of a political choice in a way which lets the researcher quantify the choices comparatively. The approaches outlined here have proven their worth in the EU Referendum, the US Presidential election and various other elections.

The first relevant question is still how someone intends to vote. However, we do not simply want to quantify this response; we also want to understand the strength of the decision. Voting intention should be answered on a sliding scale, or followed by a question clarifying the voter’s certainty. When asked in this way, the strength of the decision is pulled apart in a quantifiable fashion.

Following on from certainty, there is the question of how difficult the decision was to make. What we see is that, while the numbers of people choosing one side over the other might be the same, how they rate the difficulty of that decision is far from equal.

The final aspect of quantifying the decision is how much commitment lies behind it. We ask, “Do you think you will vote?” to see how certain someone is to actually turn out. The purpose of these questions is to quantify the strength behind the decision. If required, this can then be compared across demographic groups to model and understand the relevant factors on both sides of the debate.
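One way the certainty and turnout questions above could feed a single "strength of support" figure is sketched below. This is not the article's own scoring rule: the respondent records, the 0-1 scales and the multiplication are all assumptions for illustration.

```python
# Each record: (intended side, certainty of choice 0-1, likelihood of
# actually turning out to vote 0-1). Values are invented.
respondents = [
    ("A", 0.9, 0.8),
    ("A", 0.4, 0.3),
    ("B", 0.8, 0.9),
    ("B", 0.7, 0.6),
]

# Strength of each side = sum of certainty * turnout likelihood, so a
# firm, likely voter counts for more than a wavering, unlikely one.
strength = {}
for side, certainty, turnout in respondents:
    strength[side] = strength.get(side, 0.0) + certainty * turnout

total = sum(strength.values())
for side, s in sorted(strength.items()):
    print(f"{side}: {s / total:.1%} of weighted support")
```

Note that the raw headcount here is tied 2-2, yet the weighted support is not: this is exactly the kind of asymmetry the strength questions are designed to surface.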

The “why” behind the intention

Having looked at the respondent’s voting intention in multiple ways, it may also be illuminating to investigate their reasons. An open-ended question asking for justifications and reasoning provides great insights. These reveal the common drivers, which can be placed into quantifiable follow-up questions. If you see that “immigration” or simply “there is no better option” is a common driver, for example, then you can go on to measure the strength of these views in the overall population.
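The step from open-ended verbatims to quantifiable drivers could be sketched as a simple keyword count. The responses and driver list below are invented, and real coding of verbatims is far more involved (synonym handling, human review), but the principle of surfacing common drivers for follow-up questions is the same.

```python
from collections import Counter

# Invented open-ended answers to "Why will you vote this way?"
responses = [
    "immigration is my main worry",
    "there is no better option really",
    "immigration and the economy",
    "just no better option",
]

# Candidate drivers to look for (assumed, not from the article).
drivers = ["immigration", "no better option", "economy"]

# Count how many responses mention each driver.
counts = Counter()
for text in responses:
    for driver in drivers:
        if driver in text.lower():
            counts[driver] += 1

print(counts.most_common())
```

The most frequent drivers found this way become candidates for closed, measurable follow-up questions in the wider population.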

While it may be straightforward to discover conscious drivers behind a decision, a more nuanced approach must be taken to understand unconscious drivers. This may involve gathering timed responses to various visual stimuli around the debate and those leading it, such as images of political leaders, icons framing the debates, and people at rallies.
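A rough sketch of how such timed responses might be summarised is below. The stimuli, responses and timings are all invented, and the reading of "faster answer = more instinctive reaction" is an assumption of the example, not a claim from the article.

```python
from collections import defaultdict

# Each trial: (stimulus shown, respondent's reaction, time in ms).
trials = [
    ("EU logo", "positive", 420),
    ("EU logo", "positive", 610),
    ("EU logo", "negative", 980),
    ("leader photo", "negative", 450),
    ("leader photo", "negative", 500),
    ("leader photo", "positive", 900),
]

# Group reaction times per (stimulus, response) pair.
times = defaultdict(list)
for stimulus, response, ms in trials:
    times[(stimulus, response)].append(ms)

# Faster average times suggest a more instinctive, "gut" association.
for key, values in sorted(times.items()):
    print(key, sum(values) / len(values))
```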

In a debate like the EU Referendum, with so many personalities and arguments involved, it may be hard to see a clear picture. But key imagery, such as the EU logo, can clearly show that the gut reaction of a population does not always align with how you expect them to vote.

Linked to this is the final consideration: perceptions. Perceptions frame the context; they can influence the decisions of the undecided, and they can blind the researcher to the truth. During the EU Referendum, when we asked respondents who they thought would win and compared that with how they and their immediate friends were voting, we could clearly see an underestimation of the Leave campaign. Team this with a general consensus that the media was against the Leave campaign, and you unveil the confirmation bias that needs to be accounted for.
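The perception check described above amounts to comparing two simple proportions. The five answers below are invented to show the shape of the gap, not real referendum data.

```python
# Each record: (who the respondent expects to win, how they will vote).
answers = [
    ("Remain", "Leave"),
    ("Remain", "Remain"),
    ("Remain", "Leave"),
    ("Leave", "Leave"),
    ("Remain", "Remain"),
]

# Proportion expecting a Leave win vs proportion actually voting Leave.
expected_leave = sum(1 for expect, _ in answers if expect == "Leave") / len(answers)
voting_leave = sum(1 for _, vote in answers if vote == "Leave") / len(answers)

print(f"expect Leave to win: {expected_leave:.0%}, voting Leave: {voting_leave:.0%}")
```

A large gap between the two figures (here 20% versus 60%) is the signature of the underestimation the article describes, and a signal that perceptions are skewing the apparent picture.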

In all the cases where we tested such methods, the results allowed us to draw strong conclusions as to who would win. Exploring the debate in this manner and evaluating the decision, rather than the people making it, bypassed the complexities of understanding and accounting for the fragments involved. Beyond simply evaluating a decision, exploring the decision-making process itself can be an effective means of quantifying the support for an argument and predicting the result.

Source: Lightspeed

Editor's Notes

See Alex present on this topic at #FragmentNation, a new event from Kantar taking place on 11 July in London. Register here.
