So I guess there's no point in people bleating about poor statistics if they don't bother to send in their details. Yes, us newbies probably are the ones sending in our details; shame on the others that don't and then complain about the results. As far as I can see it, as a "newbie", the BBKA are trying to help us and the bees. I've seen this in other interests with other bodies: if people don't take part it will fall apart!
It's not a complaint about poor statistics, it's a complaint about the way they are gathered and presented. As it happens I did the survey, but I know what was entered was at best only a partial reflection of how productive the season had been.
Start with the question of who responds. There are two fundamental ways of getting an accurate number: record the whole population, or sample it. Recording the whole population of beekeepers isn't practical; depending on the area and who's estimating, around 20-50% are not members of any association. And even within the association the response rate is low: 24,000 members, under 2,000 responses, about 8%. It could be higher with more campaigning, but would doubling the number help? Not really, because the sample would still be biased. If you do try to record a whole population, there are always those who refuse to respond; even with legal sanctions, as with car registration or the national census, it's never 100% accurate.

So we're left with sampling, which can work, but it has to be carefully constructed. Putting out an open invitation for anyone to contribute is not a random sample; it's the basic flaw of many of the "surveys" reported in newspapers. The term is "self-selected", which means effectively only those who readily respond are counted: either because they respond to any survey, or they are enthusiastic beginners (nothing wrong with that), or they are proud of what they have achieved. Again, nothing wrong with wanting to record your good yield, but do those with poor yields want to reveal theirs? It's a built-in bias.
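To make the self-selection point concrete, here's a quick toy simulation. All the numbers in it are invented for illustration (the distribution of yields, the response probabilities); the only thing it demonstrates is the mechanism: if willingness to respond rises with yield, the responders' average will sit above the true population average.

```python
import random

random.seed(42)

# Hypothetical population of 24,000 beekeepers with per-hive yields in lb.
# The distribution is made up; only the shape of the argument matters.
population = [max(0.0, random.gauss(30, 15)) for _ in range(24000)]

# Self-selection: assume the chance of filling in the survey
# increases with how good your season was.
responders = [y for y in population if random.random() < 0.04 + y / 500]

pop_mean = sum(population) / len(population)
sample_mean = sum(responders) / len(responders)

# The self-selected sample overstates the population average,
# even though no individual response is dishonest.
print(f"population mean: {pop_mean:.1f} lb, responders' mean: {sample_mean:.1f} lb")
```

Note that no amount of extra responses fixes this: doubling the responders just gives you a bigger biased sample, which is the point about campaigning above.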
There are ways of sampling accurately. One is to take a few beekeepers chosen by a genuinely random selection method and put a lot of effort into getting a complete response record; that's how most ONS surveys work. Another is to follow a select number of volunteers year by year and compare one year with the next: not so much a measure of overall yield, but potentially accurate on how it varies. That's how some BTO surveys work, for instance sampling the same farmland year after year.
The other big question is in the numbers requested. The flaw is asking how many colonies a beekeeper has at the end of the season and dividing total honey by that. As they admit, 22% of responses are from starters, and previous years have seen as high as 40%. Will a bought-in nuc, a donated swarm or a split colony yield as much as an established hive? Of course not. It takes time to build up numbers, and the bees have to be available when the maximum foraging potential is there. Drawing new wax comb takes a lot of energy in the form of consumed honey; most books suggest that producing 1 pound of wax reduces honey production by around 7 pounds.
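A toy calculation shows how much first-season colonies drag the headline figure down. The per-hive figures here are made up purely for illustration; the 7:1 honey-to-wax ratio is the book figure mentioned above.

```python
# Hypothetical yields in lb: established hives vs first-season colonies
# (a bought-in nuc, swarm or split spends much of its first year drawing comb).
established = [40, 35, 45, 38]   # assumed figures
first_season = [5, 0, 8]         # assumed figures

# The survey's method: total honey divided by total colonies at season end.
naive_average = sum(established + first_season) / (len(established) + len(first_season))

# The average for established hives alone is much higher.
established_average = sum(established) / len(established)

# Rough wax economics: ~7 lb of honey consumed per 1 lb of wax drawn,
# so a new colony building, say, 2 lb of comb "spends" ~14 lb of potential crop.
honey_cost_of_comb = 2 * 7
```

So in a year when 40% of respondents are starters, the per-colony average falls even if every established hive does exactly as well as before.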
Honey is not the only measure of production. Total yield for a beekeeper is what they end the year with, in terms of honey, colonies (including those sold on or given away) and other products such as wax, minus what they started with. Many professional bee farmers make more from breeding and selling bees than from extracted honey. In other words, it doesn't matter how many respond if the wrong questions are asked.
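Put as a season "balance sheet", the idea looks like this. Every figure is invented; the point is only that colonies sold or given away count as production just as much as jars of honey do.

```python
# Toy accounting for one beekeeper's season (all figures assumed).
start = {"colonies": 4, "honey_lb": 0, "wax_lb": 0}
end = {"colonies": 6, "honey_lb": 80, "wax_lb": 3}
passed_on = {"colonies": 2}  # nucs sold or swarms given away during the year

# Net production: what you end with, plus what you passed on,
# minus what you started with.
net_colonies = end["colonies"] + passed_on["colonies"] - start["colonies"]
net_honey = end["honey_lb"] - start["honey_lb"]
net_wax = end["wax_lb"] - start["wax_lb"]
```

A survey that only asks for jars extracted misses the colony and wax lines entirely, which is the "wrong questions" point.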
The overall problem is not that the survey takes place, or even how it's constructed. The real problem is that it's presented to news outlets, and to members in BBKA publicity, as definitive statistics. As an informal survey using the same methods each year, it can serve as a rough guide to increases or decreases in honey production year on year. That's fine, but don't present it as anything more.
For a generally readable introduction to media treatment of surveys, try the book "Bad Science" by Ben Goldacre.