Writing in the National Post, Macdonald-Laurier Institute Senior Fellow Philip Cross says a controversy over Temporary Foreign Workers is just one example of why we should be more cautious about relying on data and statistics.
By Philip Cross, May 5, 2015
Recall the mystery from about a year ago: how could widespread business claims of labour shortages, and calls to preserve the Temporary Foreign Workers program, co-exist with the job vacancy rate of just 1.5% that critics pointed to in a Statistics Canada survey? Some of the mystery was cleared up last week, when Statistics Canada quietly put out a technical paper called "2011 Workplace Survey Summary and Lessons Learned." It holds lessons not just for understanding vacancy rates but also for anyone relying on data alone to answer every policy question and conundrum.
Statistics Canada found that the 1.5% vacancy rate in its business payroll survey nearly doubled to 2.7% in its Workplace Survey. The increase was most pronounced for the industries that were complaining the most about shortages in 2011, such as mining, transportation and construction, and for the Western provinces. In other words, these data make a lot more sense and better fit the narrative coming from the business community at the time.
So what was the difference between the two vacancy surveys? Statistics Canada pointed to two factors. One was a subtle change in the questionnaire that redrew the line between counting only searches outside the firm and including some internal searches as well. But mostly the change reflected who in the organization was targeted to respond, as "having a knowledgeable respondent answering the questionnaire was an issue in data collection."
No kidding. It makes a big difference whether human resources personnel answered instead of the usual "StatsCan guy" in the payroll department, who knows little about the organization's human resource needs. (And how far down the totem pole do you have to be to end up as the designated "StatsCan guy" who responds to its surveys? If it were me, I would definitely be updating my CV and LinkedIn contacts.)
In a December 2014 paper for a School of Public Policy conference in Calgary, I noted these and other problems with surveys of vacancy rates. The real conundrum is that, unlike straightforward variables such as retail sales, the concept of a vacancy is open to interpretation by the respondent, and that interpretation varies significantly with firm size. It is always problematic to ask people to interpret economic concepts themselves; ask 10 people what unemployment is, and you will get 10 different answers. So I have lingering doubts about the 2.7% vacancy rate estimate. Another survey could find a totally different answer. Pouring money into more surveys is unlikely to produce a definitive one, apart from further discrediting the implausible result of the first survey, which found vacancies were almost non-existent.
The real problem is not the challenge of measuring nebulous concepts like vacancies, but our society's exclusive reliance on data, of whatever quality, as an evidence-based guide to policy. Knowledge based on hard-to-measure variables like vacancies is "fragile," to borrow a concept from Nassim Taleb; that is, conclusions based on such data can change significantly in response to seemingly small modifications to their environment, such as how a question is phrased, the order in which questions are asked, or who receives the questionnaire. Even easily defined concepts like exports can be hard to measure; just this week, Statistics Canada revised down its estimate of energy exports for February by 14% (or $1.2 billion), which is not an unusually large revision for energy exports. The press goes bonkers over a record monthly trade deficit of $3 billion when Statistics Canada could easily revise that next month to less than $2 billion or more than $4 billion. Given its recent track record on preliminary estimates of energy exports, Statistics Canada might want to start using a dart board for its estimates. A common shortcoming in the use of data in public debates is the failure to inform users of the uncertainty that surrounds estimates.
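To make that point concrete, here is a back-of-the-envelope sketch using the column's own figures. Treating one February-sized revision as the plausible swing around the headline number is an illustrative assumption, not an official confidence interval:

```python
# A minimal sketch using the column's own figures: a reported $3-billion
# monthly trade deficit, bracketed by a swing the size of February's
# $1.2-billion energy-export revision (an illustrative assumption).
reported_deficit = 3.0  # $ billions, the headline figure
revision_band = 1.2     # $ billions, February's energy-export revision

low = reported_deficit - revision_band
high = reported_deficit + revision_band
print(f"Headline deficit: ${reported_deficit:.1f}B")
print(f"Within one revision of it: ${low:.1f}B to ${high:.1f}B")
```

One revision is enough to carry the "record" $3-billion deficit anywhere from $1.8 billion to $4.2 billion, which is exactly the range described above.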
The uncertainty about estimates increases exponentially when making forecasts. A great deal was made of the Parliamentary Budget Officer's projection of how much Tax-Free Savings Accounts would cost the government in 2080, as if that could be forecast with precision. Imagine all the assumptions and relationships that have to be known, layered on top of a morass of imperfect data about the present, to make a forecast 65 years ahead. Yet journalists ask straight-faced questions based on these forecasts when economists routinely demonstrate they can't forecast the economy 6.5 months in advance, never mind 65 years. Who, at the start of 2014, saw the slump in oil prices coming? Remember former economist Jeff Rubin and his quack forecast of $200-a-barrel oil? There's an old joke among economists: we forecast not because we're good at it, but because we're asked to. It is the responsibility of the economics profession to provide better guidance on the imprecision and limits of its knowledge. And it is incumbent on users to be skeptical of the notion that the world is best understood only with numbers.
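As a rough illustration of why the horizon matters so much, consider a toy simulation. The 2% annual forecast error and the assumption that errors are independent are hypothetical choices made purely for illustration; they come from no actual model, the PBO's or anyone else's:

```python
import random

# A toy simulation with hypothetical parameters: suppose each year's
# growth forecast carries only a modest, independent 2% error. The
# uncertainty band around the cumulative outcome widens dramatically
# as the horizon stretches from 1 year to 65.
random.seed(1)

def outcome_band(years, annual_error_sd=0.02, trials=10_000):
    """Return a rough 10th-90th percentile band of cumulative outcomes."""
    outcomes = []
    for _ in range(trials):
        level = 1.0
        for _ in range(years):
            level *= 1 + random.gauss(0, annual_error_sd)
        outcomes.append(level)
    outcomes.sort()
    return outcomes[trials // 10], outcomes[9 * trials // 10]

for years in (1, 10, 65):
    lo, hi = outcome_band(years)
    print(f"{years:>2} years out: {lo:.2f}x to {hi:.2f}x of the point forecast")
```

Even with errors this small and this well-behaved, a 65-year projection spans a range several times wider than a one-year one; real forecast errors are larger and correlated, so the true fan is wider still.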
Philip Cross is the former chief economic analyst at Statistics Canada.