Which question is correctly stated for use in a research study questionnaire?

First is the semantic scale. You want to choose options that are simple and unambiguous. Among the most common: Agree—Disagree, Helpful—Not Helpful, Excellent—Poor, Satisfied—Dissatisfied, Always—Never. But just because they’re popular doesn’t mean they are clear.

And make sure the differences between the categories are valid and useful. Let’s say you want to measure how often a person gets up from their desk at work. You choose Never—Seldom—Sometimes—Often—Always. How do you quantify the difference between seldom and sometimes?

If a scale is potentially ambiguous, either explain the meanings in your introduction or change the scale. Don’t use ‘Sometimes’ when you really mean ‘Once a week.’

Second is the number of response choices. Likert-type responses often have an odd number, so respondents have a neutral option. The jury is still out on whether that is necessary or even desirable.

Most researchers agree that, at a minimum, you should use a 5-point Likert scale. Other research shows that the more choices there are, the less often respondents use the middle or neutral category. What seems clear is that a 7-point scale approaches the upper limit of reliability, so adding more options is likely to give you worse, not better, Likert scale data.

Think about those HappyOrNot terminals you find in airports, shopping malls, and even in toilets. You know, the ones with the happy and angry faces on them? Most of them use an even number of response choices. Why? Because it forces people to choose a side, making it easier to collapse responses into two categories (positive vs. negative experience).
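To see why, here's a rough sketch of how an even-numbered scale collapses into two buckets. The ratings and the cutoff are made up for illustration:

```python
# Hypothetical ratings on a 4-point scale: 1 = very negative ... 4 = very positive.
ratings = [1, 2, 4, 3, 4, 1, 2, 4]

# With an even number of options there is no neutral midpoint,
# so every response falls on one side or the other.
positive = sum(1 for r in ratings if r >= 3)
negative = sum(1 for r in ratings if r <= 2)

print(f"positive: {positive}, negative: {negative}")  # positive: 4, negative: 4
```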

Whether you choose an odd or even number of options depends on how you plan to evaluate the responses.

Here are a few more tips for creating your Likert scale responses:

Be creative! Just because you’re collecting data doesn’t mean you need to sound like a robot. Survey tools like Typeform let you edit the labels, so your brand voice can shine through your questions. Try out Interesting—Not Interesting, or even No Way—Meh—Totally! to keep people engaged.

Use unipolar responses. The aim of a Likert scale is to get at a larger concept with a series of questions. Don't label your response options with polar opposites like Safe—Dangerous or Strong—Weak. Instead, measure degrees of a single quality: Very Safe—Not at All Safe and Very Strong—Not at All Strong. Unipolar responses are easier for people to understand, and you can be sure that one extreme is the exact opposite of the other.

Stay consistent with your scales. Creating a Likert scale involves summing or averaging the responses to measure a concept or phenomenon. Without consistent scales, you can’t be certain you are measuring the same thing with each statement.
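As a minimal sketch, here's what that combining step might look like, assuming every statement uses the same 1–5 coding. The statements and responses below are invented for illustration:

```python
# Hypothetical responses to three statements measuring one concept,
# all coded on the same 1-5 scale (1 = Strongly Disagree ... 5 = Strongly Agree).
responses = {
    "respondent_1": {"statement_1": 4, "statement_2": 5, "statement_3": 3},
    "respondent_2": {"statement_1": 2, "statement_2": 1, "statement_3": 2},
}

def likert_score(items):
    """Average the item responses into a single scale score."""
    return sum(items.values()) / len(items)

for respondent, items in responses.items():
    print(respondent, round(likert_score(items), 2))
# respondent_1 4.0
# respondent_2 1.67
```

This only works because every statement shares the same coding; if one statement ran 1–5 and another 1–7, the averages would be meaningless.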

Loaded question: Do you think there are more postgraduates (Master’s, PhD, MBA) because of the country’s weak economy?

This question includes a false premise: to answer it, the participant must accept that the economy is weak. It also imposes a causal relationship between the economy and postgraduate study that a person may not see. Loaded questions are inherently biased and push respondents into confirming an argument they may not agree with.

Double-barreled question: Would you like to be rich and famous?

Double-barreled questions are difficult for people to answer. A person might like to be rich but not famous and would thus have trouble responding to this question. Additionally, you don’t know whether they are responding to both parts of the question or just one.

Biased question: Do you agree that the President is doing a wonderful job on foreign policy?

Biased language that either triggers emotional responses or imposes your opinion can skew the results of your survey. Survey questions should be neutral, simple, and free of emotionally charged wording.

Assumptive question: Do you have extra money after paying bills that you invest?

This question assumes that the participant has extra money after paying bills. When people encounter a question that feels irrelevant to them, they are more likely to abandon the survey. This is why Logic Jump is useful: surveys should adapt to respondents' answers so they can skip questions that don't apply to them.

This question would be better asked in two parts: "Do you have extra money after paying bills?" and, if the answer is yes, "Do you invest the extra money you have after paying bills?"
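If it helps to picture the branching, here's a generic sketch of that skip logic. This isn't Typeform's Logic Jump feature, just an illustration of how the follow-up only appears for people it applies to:

```python
# Generic sketch of skip logic for a two-part question (not Typeform's API).
def ask_yes_no(prompt):
    return input(prompt + " (yes/no): ").strip().lower() == "yes"

has_extra_money = ask_yes_no("Do you have extra money after paying bills?")

# Only respondents who answer yes see the follow-up; everyone else skips it.
if has_extra_money:
    invests_extra = ask_yes_no("Do you invest the extra money you have after paying bills?")
    print("Invests extra money:", invests_extra)
```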

Second-hand knowledge question: Does your community have a problem with crime?

Not only are 'crime' and 'problem' vague, but it's also challenging for a layperson to report accurately on the community at large. The responses to this question wouldn't be reliable. Stick to asking questions that cover people's first-hand knowledge.

If you are trying to understand the prevalence of criminal acts, it would be better to ask: In the past 12 months, have you been the victim of a crime?

Hypothetical questions: If you received a $10,000 bonus at work, would you invest it?

People are terrible at predicting future behavior, particularly in situations they’ve never encountered. Behavior is deeply situational, so what a person might do upon receiving a bonus could depend on whether they had credit card debt, whether they needed to make an immediate purchase, the time of year, and so on.