Surveys and polls should come with a health warning, advises Paul Finch
As that wise old owl Mervyn King remarked in his book on the financial crisis, The End of Alchemy, the only certain future we face is one of radical uncertainty. That was decidedly the case as a general election was triggered last week, despite all that stuff about fixed-term Parliaments.
I was intrigued by the AJ’s initial online poll of architects to ascertain instantly who they might vote for, especially in the context of general cynicism about opinion polls. However, there is a very considerable difference between amateurs and professionals in the world of surveys.
The problem is self-selection. Unless you do proper sampling, using the techniques of firms like Ipsos MORI, it is quite impossible to draw general conclusions from respondents who may have an axe to grind, or are simply unrepresentative.
Unfortunately the sort of graphics produced to illustrate the results often imply that they show how the profession as a whole is thinking, rather than just the respondents. You need a survey health warning.
Admittedly, if you have an overwhelming number of respondents then you may feel the chances of a survey accurately reflecting the entire group are increased. One example of this (to my chagrin) was a survey I organised back in the early 1990s about deregulating the profession. Architects were invited to express their views, and about 80 per cent of respondents wished to keep regulation and protection of title. Thousands responded and, as a proportion of the profession, the number was too big to ignore.
It is unimpressive, however, if the only survey statistics you get are percentages. What about, for example, the number of respondents? How do you know if there was only a handful? Nor is it just about numbers. If you are drawing conclusions about the views of an entire profession, you need some assurance that the respondents reflect the age profile, geographical distribution and gender/ethnic balance of the group as a whole.
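The point about bare percentages can be put in numbers. A rough sketch, using the standard margin-of-error formula and assuming what self-selecting polls precisely lack, a genuinely random sample, shows how much a headline figure of 60 per cent depends on how many people actually answered (the sample sizes below are illustrative, not drawn from any real poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    observed in a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# The same 60% headline at three illustrative sample sizes
for n in (50, 400, 2000):
    moe = margin_of_error(0.6, n)
    print(f"n={n}: 60% plus or minus {moe * 100:.1f} points")
```

A handful of respondents gives a margin of well over ten points either way; a properly sized sample narrows it to two or three. And the formula assumes random sampling in the first place, so for a self-selected online poll even these wide margins flatter the result.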
I always wonder exactly who has voted for people on The X Factor or similar shows; there is a distinct absence of information about the voters, both in terms of numbers and make-up. This may be down to embarrassment about sectoral voting on the basis of ethnicity, nationality or sexuality, but again, how can we tell?
Online surveys are another cause for concern unless there are checks on whether the respondents are who they claim to be, or indeed whether they are telling the truth.
Professional pollsters have ways of identifying people who are trying to mislead them, and of course depend in the long term on the accuracy of their conclusions. Despite being perceived as ‘getting things wrong’ about recent elections, pollsters have a pretty good and consistent record when it comes to their bread-and-butter polling, which they carry out all the time for many government departments, and where the issues and audiences are consistent – for example, what is worrying people most (more likely to be crime than Jeremy Corbyn, believe it or not).
What the public and clients think about architects and architecture is far more interesting than what architects think about politics, public or clients. If the profession is interested in improving its image, perhaps it should start commissioning some serious polling to find out how.
No doubt I will read with interest the results of all sorts of polls in the next few weeks, but let’s remember they all have a margin of error.