At last! Your online survey has received numerous responses and there is plenty of fantastic raw data ready to be analyzed. You should jump straight into examining outcomes, reporting trends, and drawing some conclusions. Right?

Well, not quite.

There is one more step between receiving participants’ answers and beginning your data analysis. A step that, although very important, is often overlooked: cleaning your survey data (also called data evaluation).

By cleaning your survey data, you identify and remove all responses from individuals who either do not meet your target group criteria or who did not answer your questions carefully enough. As with cleaning anything, cleaning your survey data can be tedious and annoying. Yet it is essential for a reliable dataset, and a reliable dataset is what lets you reach well-founded conclusions later!

In this quick guide, we present 7 cases in which you absolutely need to evaluate your data. Because the PollPool team cares about the quality of *your* survey data!

Sometimes your survey data needs a deep-clean before analyzing

1: Participants who only partly answered your survey

Let’s face it: participants who answered only some of the required questions can mess up the results of your online survey. For a start, it may mean they were unsuitable for your poll in the first place - that’s why they didn’t complete it. It may also show that they were not as thoughtful and committed to providing feedback as other participants.

But it may also mean that your survey design was problematic. For example, it contained irrelevant questions or too many of them. Or maybe the internal logic of the online survey was inaccurate to start with!

Working with incomplete results from some participants distorts your final results. It also breaks some of the filters you use when grouping the data.
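Filtering out incomplete responses is straightforward once the data is exported. Here is a minimal sketch, assuming responses are stored as a list of dictionaries; the field names (`respondent_id`, `answers`) and the set of required questions are hypothetical:

```python
# Sketch: drop respondents who left any required question unanswered.
# Field names and question keys are illustrative assumptions.
responses = [
    {"respondent_id": 1, "answers": {"q1": "yes", "q2": "no", "q3": "maybe"}},
    {"respondent_id": 2, "answers": {"q1": "yes", "q2": None, "q3": "no"}},
]
REQUIRED = {"q1", "q2", "q3"}

def is_complete(resp):
    """A response is complete if every required question has a non-empty answer."""
    return all(resp["answers"].get(q) not in (None, "") for q in REQUIRED)

clean = [r for r in responses if is_complete(r)]
print([r["respondent_id"] for r in clean])  # → [1]
```

The same check works in a spreadsheet: any row with a blank in a required column gets excluded.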

At PollPool, we strive to make sure that all participants finish your online survey. We only award PollCoins to participants who have completed it in full!

2: Participants who did not meet your target group criteria

Suppose you want to interview young mothers, which makes your target group women between the ages of 16 and 29. You certainly don’t want a 50-year-old man’s answer to affect your survey results, right?

Whatever target group requirements you set for your online survey, you can simply filter out participants who do not meet them.

PollPool helps you with that! While sharing your online survey, you can select or tag a target group by its age, gender, and nationality. Your survey will then only be displayed to those who meet the general group criteria!

But what if you forgot to do that before uploading your survey? Don’t worry – you can still determine who met your target group after you have received the results. We recommend doing this manually for every respondent: access the datasheet of your received responses and filter out participants who don’t meet your criteria.
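A target group filter like the one in the example above (women aged 16 to 29) can be expressed as a simple predicate. This is a hedged sketch; the field names (`gender`, `age`) are assumptions about how your export is structured:

```python
# Sketch: keep only respondents matching the target group from the example
# (women aged 16-29). Field names are illustrative assumptions.
respondents = [
    {"id": 1, "gender": "female", "age": 24},
    {"id": 2, "gender": "male", "age": 50},
    {"id": 3, "gender": "female", "age": 31},
]

def in_target_group(r):
    return r["gender"] == "female" and 16 <= r["age"] <= 29

kept = [r for r in respondents if in_target_group(r)]
print([r["id"] for r in kept])  # → [1]
```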

3: Participants who answered your survey too quickly (speeding/lurking)

If your survey has 10 questions, it’s unrealistic for someone to complete it within a few seconds. Chances are they raced through the questions like a race car. They probably didn’t even bother to read the questions, let alone answer them carefully.

How do you spot a speedster? Well, it varies. The easiest way is to determine the average time it takes users to complete your whole survey. That way, you can easily spot abnormalities. Either discard the x fastest responders, or the fastest n% relative to the median. No time for that? Then you can simply ignore the survey data of participants that deviates too far from the average (otherwise known as outliers).
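One way to implement the median-based approach is to flag anyone who finished in a small fraction of the median completion time. A minimal sketch, where the 30% threshold is an arbitrary assumption you should tune per survey:

```python
import statistics

# Sketch: flag respondents whose completion time is far below the median.
# The 0.3 cutoff (30% of the median) is an assumed threshold, not a standard.
times = {"a": 120.0, "b": 95.0, "c": 8.0, "d": 110.0, "e": 130.0}  # seconds

median_time = statistics.median(times.values())
speeders = [rid for rid, t in times.items() if t < 0.3 * median_time]
print(speeders)  # → ['c']
```

Using the median rather than the mean keeps one very slow (or very fast) respondent from skewing the cutoff itself.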

PollPool prevents ‘speeding’ through automatic time control. We also apply sanctions to users who lurk excessively!

4: Participants who are layabouts (straightlining)

Straightlining describes a situation in which a participant repeatedly chooses the same answer.

For example, it may always be the first option in a multiple-choice question. Straightlining is thus a masking strategy for the speeding/lurking behaviour described above. Participants with a tendency to straightline rush through surveys, most likely giving little or no thought to your individual questions.

Do you suspect some of your participants are straightliners? It is easy to spot them by exporting all the responses to an Excel spreadsheet or statistical software.
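Once the answers are exported, the simplest straightlining check is to look for respondents who picked the identical option across an entire question block. A sketch under that assumption, with illustrative respondent IDs:

```python
# Sketch: flag respondents who chose the same option for every question
# in a rating/matrix block. Data layout is an illustrative assumption.
grid_answers = {
    "r1": [3, 3, 3, 3, 3],  # identical answer throughout: suspicious
    "r2": [1, 4, 2, 5, 3],
}

straightliners = [rid for rid, a in grid_answers.items() if len(set(a)) == 1]
print(straightliners)  # → ['r1']
```

Note that a perfectly consistent answer pattern is not proof of carelessness on its own; it is a flag for manual review, ideally combined with the response-time check above.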

5: Participants who gave unrealistic answers (freak values)

Let’s imagine a participant answering ‘165 hours’ to a question about how much TV they watch per week. Given that a week has only 168 hours, they were surely exaggerating!

This type of answer is called a ‘freak value’. It's not only outside the normal range of responses – it's completely unrealistic. As with straightlining, you can spot such delinquents in Excel or statistical software. In the end, you must exclude their responses from your data analysis.
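In practice, freak-value screening is a plausibility range per question. A minimal sketch for the weekly-TV example; the practical cap of 100 hours is an assumption you should set per question (168 is merely the hard physical limit):

```python
# Sketch: flag answers outside a plausible range for "TV hours per week".
# MAX_PLAUSIBLE is an assumed per-question cap, stricter than the hard
# limit of 168 hours in a week.
tv_hours = {"r1": 12, "r2": 165, "r3": 0, "r4": 40}

MAX_PLAUSIBLE = 100
freaks = [rid for rid, h in tv_hours.items() if not 0 <= h <= MAX_PLAUSIBLE]
print(freaks)  # → ['r2']
```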

Sadly, not all responses are valuable

6: Participants who provided mismatched answers

If the participant’s answer in one question contradicts their answer in another, they are either being dishonest or careless. Or both!

You can spot such issues with data filters. Say your questionnaire has a question on how much time participants spend on social media weekly. You then filter out participants who answered ‘no time on social media at all’.

Afterwards, you move to the next question, ‘Which social media platforms do you use most often?’. At this point, you are left with participants who claimed they use social media at least now and then. So if one of the responses to this question is ‘None’, this should raise your concerns. Double-check the raw data and exclude inconsistent participants!
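The cross-check described above can be automated as a consistency rule between the two questions. A sketch with hypothetical question keys and answer labels:

```python
# Sketch of the cross-check above: a respondent who reports zero social
# media time should not also name a most-used platform, and vice versa.
# Question keys and answer labels are illustrative assumptions.
responses = [
    {"id": 1, "weekly_time": "no time at all", "top_platform": "None"},
    {"id": 2, "weekly_time": "1-5 hours", "top_platform": "None"},  # contradicts itself
    {"id": 3, "weekly_time": "1-5 hours", "top_platform": "Instagram"},
]

def is_consistent(r):
    says_no_time = r["weekly_time"] == "no time at all"
    names_platform = r["top_platform"] != "None"
    return says_no_time != names_platform  # exactly one of the two must hold

flagged = [r["id"] for r in responses if not is_consistent(r)]
print(flagged)  # → [2]
```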

PollPool cares about your data quality. We provide random test-checks on our users’ responses and sanction inconsistent users!

7: Participants who provided absurd feedback in your compulsory open questions

Although answers like ‘Fdsklj’ may entertain you, they won’t get you far in your study. You should promptly delete them.

Yet this doesn't apply to feedback such as ‘None’ or ‘Nothing’. Participants might have found a question irrelevant or were unsure how to respond. You also shouldn’t discard answers just because they contain visible typos. Some people have a hard time typing their answers on a keyboard or aren’t aware of the correct spelling.
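You can pre-screen open answers with a rough heuristic that flags likely keyboard-mashing for manual review while keeping terse-but-legitimate answers. This is only a sketch; the vowel check and the whitelist of terse answers are assumptions, and flagged answers should still be reviewed by a human, not auto-deleted:

```python
import re

# Rough heuristic sketch: flag open answers that look like gibberish
# (very short, or no vowels at all) for MANUAL review. The whitelist
# and the vowel rule are illustrative assumptions, not a standard test.
def looks_like_gibberish(text):
    t = text.strip().lower()
    if t in {"none", "nothing", "n/a"}:
        return False  # legitimate terse answers, keep them
    has_vowel = re.search(r"[aeiou]", t) is not None
    return len(t) < 2 or not has_vowel

print(looks_like_gibberish("Fdsklj"))  # → True
print(looks_like_gibberish("None"))    # → False
```

A vowel-free heuristic will misfire on abbreviations and some non-English words, which is exactly why it should only shortlist answers for a human to check.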

In all the above cases, you should either filter out or delete faulty responses, along with all the other answers given by that participant. This keeps your data analysis focused and leads to better data quality in your final results!

Click for the next part of the guide --> Choosing the right software for analyzing your data