By Michael Burke | November 18, 2019

A solid survey might be the ‘crown jewel’ of PR and marketing agency efforts if it’s interesting to your target audience. Not only does it give journalists, social media users and bloggers a reason to reference you, but great surveys have legs: one survey we did for a client in 2008 is still getting written about in 2019!

With that said, there’s more to conducting a survey than simply running a poll. In this series, we’ll explore the fundamentals of a solid survey, from selecting the sample, to design and implementation, to interpreting and presenting the results.

The most fundamental question is, of course, what you want to learn. In some cases you might be trying to solve a business problem, such as finding out what users think about your new product, exploring the feasibility of entering a new market, or deciding whether or not to launch at all. These kinds of studies are typically done with an internal audience in mind, and are a pillar of market research.

In PR and content marketing, however, surveys are often conducted with external audiences in mind. For example, you may be searching for a news hook that will get your brand’s name into top-tier publications. Or, you might be looking to create content that will attract visitors and keep them on your site; this might even be gated content that serves as an entry point for your marketing/sales funnel.

Either way, survey results have to be credible, and credibility starts with your sample.

What is a ‘sample’?

One of my favorite things about Trader Joe’s is that they’ve got employees giving away free samples of their yummiest food. Unfortunately, they can’t afford to give away the whole pie, but I only need a bite of their pizza to suspect that the rest of it will be equally delicious. Likewise, in any survey you want to gather the opinions of a certain population, such as registered voters, homeowners in the state of California, or parents with children under 2, but surveying all of them is not practical (when you do survey everyone, it’s called a census, and as America knows, it’s a major pain in the neck).

A ‘sample’ is a subset that you believe will be representative enough to make observations that will hold true over the whole population of interest. Where do you get the sample? From what is known as a ‘sampling frame’. For instance, if you’re interested in what everyone in Scranton thinks about a certain issue, your population is all Scranton residents. If you use the Scranton phone book to find people to poll, your sampling frame is the phone book. 

What makes a sample representative?

So how do you know that your sample is truly representative? A sample is considered representative when its characteristics closely mirror those of the population (what statisticians call the population’s ‘parameters’). No sample is perfectly representative (we’ll talk about this later when we discuss margin of error), but one of the critical techniques that helps us be reasonably certain a sample is close enough to be useful is ‘randomization’.

While we like to say we have ‘random thoughts’ and do random things, in reality humans aren’t good at being random. Random sampling, therefore, takes human judgment out of the selection process and relies on chance instead. Randomization used to be handled by shaking numbered slips of paper in a hat, but now it’s handled by software. In fact, it’s so non-human that when you really think about it, it’s kind of difficult to describe what ‘random’ actually means (kind of like ‘infinite’). It is easy, however, to describe what it is not:

  • If you want to get the perspective of the average American on the protection of endangered species and your sampling frame is the Sierra Club’s membership records, it’s not random.
  • If you send out an unsolicited email asking people to voluntarily respond to a survey about attitudes toward unsolicited email, it’s not random.

Of course, populations are multi-faceted and often contain subgroups that are of interest. A simple random sample (SRS) is one in which every person in your sampling frame has an equal chance of being selected; on average, that keeps each subgroup represented in its proper proportion. If one subgroup is over-represented, you’ll get misleading results. For instance, there’s plenty of evidence that Americans have different voting patterns by gender. If your sample of registered voters contained 80 percent men, you’d probably get skewed results if you were polling them about whether or not they’d vote for Elizabeth Warren.
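To make the idea concrete, here’s a minimal sketch of a simple random sample in Python; the frame of made-up voter names and its size are hypothetical, stand-ins for whatever voter file, customer list or panel roster you’d actually be working from:

```python
import random

# Hypothetical sampling frame: in practice this would be a voter file,
# customer list, or panel roster rather than made-up names.
sampling_frame = [f"voter_{i}" for i in range(10_000)]

random.seed(42)  # seeded only so the example is repeatable

# Simple random sample: every entry in the frame has an equal chance
# of being chosen, and no one is selected twice.
sample = random.sample(sampling_frame, k=1000)

print(sample[:5])
```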

Stratification is a method for making sure that the proportions of subgroups within your sample match the proportions of those subgroups in the actual population. When sampling individuals directly is impractical (say, the population is scattered across many schools or neighborhoods), you can use cluster sampling, which divides the population into naturally occurring groups, or ‘clusters’, and then takes a random sample of whole clusters.
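As a rough illustration of proportional stratification, here’s a sketch using another made-up frame; the 53/47 gender split is invented for the example, not real voter data:

```python
import random

random.seed(7)

# Hypothetical frame tagged with a subgroup label (the split is invented).
frame = ([("woman", f"w_{i}") for i in range(5_300)] +
         [("man", f"m_{i}") for i in range(4_700)])

def stratified_sample(frame, total_n):
    """Draw a sample whose subgroup proportions match the frame's."""
    strata = {}
    for group, person in frame:
        strata.setdefault(group, []).append(person)

    sample = []
    for group, members in strata.items():
        share = len(members) / len(frame)   # subgroup's share of the frame
        sample += random.sample(members, round(total_n * share))
    return sample

sample = stratified_sample(frame, 1_000)
# Yields roughly 530 "woman" entries and 470 "man" entries,
# mirroring the proportions in the frame.
```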

Sample size is NOT relative

A final word about sampling. How can it be that a survey of 1,000 people can be used to predict who the next president will be? After all, there are more than 150 million registered voters in the U.S. With a population that large, shouldn’t we be surveying more people?

The answer to that is two-fold. First, sample size actually does matter: larger sample sizes result in smaller margins of error (which, once again, we’ll talk about in a future post). BUT…oddly enough, as long as the population is much larger than the sample, the accuracy of a survey is determined by the sample size alone; the ratio of the sample size to the population size is essentially irrelevant. A survey of 100 people from a population of a million billion people would be just as accurate as a survey of 100 people from a population of 100,000. It sounds crazy, but it’s true.
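If you want a preview of why that’s true, here’s a minimal sketch of the standard 95 percent margin-of-error approximation for a survey proportion; notice that the population size never appears anywhere in the calculation (we’ll unpack where the formula comes from in the margin-of-error post):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: critical value for 95% confidence. The population size appears
    nowhere in this formula.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>5}: +/- {margin_of_error(n) * 100:.1f} percentage points")
# n =   100: +/- 9.8 percentage points
# n =   400: +/- 4.9 percentage points
# n =  1000: +/- 3.1 percentage points
```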

Stay tuned, because we’ll be talking about ‘margin of error’.

Also be sure to check out:

10 Ways to Ensure Your PR and Marketing Survey Fails Part 1

10 Ways to Ensure Your PR and Marketing Survey Fails Part 2