Likert Scale Questions and Data Analysis

The results are back from your online surveys. Now that you have your survey results and a data analysis plan, it's time to dig in, start sorting, and analyze the data. Here's how survey research scientists make sense of quantitative data (as opposed to qualitative data): they look at the answers in light of their top research questions and survey goals, crunch the numbers, and draw conclusions.

Here are four steps to help you analyze your data more effectively:

  1. Take a look at your top research questions.
  2. Cross-tabulate and filter your results.
  3. Crunch the numbers.
  4. Draw conclusions.

Take a look at your top research questions

First, let's talk about how to analyze the results for your top research questions. Did you ask empirical research questions? Did you use probability sampling? Remember that you should have outlined your top research questions when you set a goal for your survey.

For example, if you held an education conference and gave attendees a post-event feedback survey, one of your top research questions may look like this: how did the attendees rate the conference overall? Now take a look at the answers you collected for a specific survey question that speaks to that top research question:

  Do you plan on coming back next year?
    Yes       71%  (852)
    No        18%  (216)
    Not sure  11%  (132)

Notice that in the responses, you’ve got some percentages (71%, 18%) and some raw numbers (852, 216).

The percentages are just that–the percent of people who gave a particular answer. Put another way, the percentages represent the number of people who gave each answer as a proportion of the number of people who answered the question. So, 71% of your survey respondents (852 of the 1,200 surveyed) plan on coming back next year.

This table also shows you that 18% say they are not planning to return, and 11% say they are not sure.

The raw numbers are the actual number of individual survey respondents who gave each answer, not percentages or projections. So 852 people said, "Yes, I'm coming back next year!" If you assume that most of the people who said yes, and maybe some of those who said they were not sure, will come next year, you can build a simple forecasting model to estimate the number of people who will attend next year's conference. (You can make this estimate with more confidence if you had a very high response rate, meaning most of the people who attended the conference and received your survey filled it out.)
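To make that concrete, here is a minimal sketch of such an estimate in Python. The counts are the ones from the example above; the assumption that about half of the "not sure" group returns is invented for illustration.

    # Survey counts from the example above (1,200 respondents in total).
    yes_count = 852       # 71% said "Yes, I'm coming back next year!"
    no_count = 216        # 18% said no
    not_sure_count = 132  # 11% said they are not sure

    total = yes_count + no_count + not_sure_count

    # A percentage is just a count as a proportion of all respondents.
    print(f"Yes: {yes_count / total:.0%}")  # Yes: 71%

    # A naive forecast: everyone who said yes, plus an assumed share of the
    # "not sure" group. The 50% share is a made-up planning assumption.
    assumed_not_sure_return_rate = 0.5
    forecast = yes_count + assumed_not_sure_return_rate * not_sure_count
    print(f"Estimated returning attendees: {forecast:.0f}")  # 918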

Cross-tabulating and filtering results

Recall that when you set a goal for your survey and developed your analysis plan, you thought about which subgroups you were going to analyze and compare. Now is when that planning pays off. For example, say you wanted to see how teachers, students, and administrators compared to one another in answering the question about next year's conference. To figure this out, you can use cross tabulation, showing the results of the conference question by subgroup:

  Planning to come back next year (yes):
    Students        86%
    Teachers        80%
    Administrators  46%

From this table you see that a large majority of the students (86%) and teachers (80%) plan to come back next year. However, the administrators who attended your conference look different, with under half (46%) of them intending to come back! Hopefully, some of the other survey questions will help you figure out why this is the case and what you can do to improve the conference for administrators so more of them will return year after year.

Filtering is another useful analysis tool. Filtering means narrowing your focus to one particular subgroup and filtering out the others. So instead of comparing subgroups to one another, here we're just looking at how one subgroup answered the question. For instance, you could limit your focus to just women, or just men, then re-run the crosstab by type of attendee to compare female administrators, female teachers, and female students. One thing to be wary of as you slice and dice your results: every time you apply a filter or crosstab, your sample size decreases. To make sure your results are still statistically sound, it may be helpful to use a sample size calculator.
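If your responses live in a data frame, both techniques are one-liners. Here is a minimal sketch using pandas; the column names (attendee_type, gender, returning) and the handful of rows are hypothetical stand-ins for a real survey export.

    import pandas as pd

    # Hypothetical survey export; in practice, read this from a file.
    df = pd.DataFrame({
        "attendee_type": ["student", "teacher", "administrator",
                          "student", "teacher", "administrator"],
        "gender": ["female", "female", "male", "male", "female", "female"],
        "returning": ["yes", "yes", "no", "yes", "not sure", "yes"],
    })

    # Cross-tabulate: the share of each answer within each attendee type.
    print(pd.crosstab(df["attendee_type"], df["returning"], normalize="index"))

    # Filter: restrict to one subgroup (here, women), then re-run the crosstab.
    women = df[df["gender"] == "female"]
    print(pd.crosstab(women["attendee_type"], women["returning"], normalize="index"))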

Benchmarking, trending, and comparative data

Let’s say on your conference feedback survey, one key question is, “Overall how satisfied were you with the conference?” Your results show that 75% of the attendees were satisfied with the conference. That sounds pretty good. But wouldn’t you like to have some context? Something to compare it against? Is that better or worse than last year? How does it compare to other conferences?

Well, say you did ask this question in your feedback survey after last year's conference. You'd be able to make a trend comparison. Professional pollsters make poor comedians, but one favorite line is "the trend is your friend."

If last year’s satisfaction rate was 60%, you increased satisfaction by 15 percentage points!  What caused this increase in satisfaction? Hopefully the responses to other questions in your survey will provide some answers.

If you don't have data from prior years' conferences, make this the year you start collecting feedback after every conference. This is called benchmarking. You establish a benchmark or baseline number and, moving forward, you can see whether and how it changes. You can benchmark not just attendees' satisfaction but other questions as well. You'll be able to track, year after year, what attendees think of the conference. This is called longitudinal data analysis. Learn more about how SurveyMonkey Benchmarks can help give your survey results context.

What is longitudinal analysis?

Longitudinal data analysis (often called “trend analysis”) is basically tracking how findings for specific questions change over time.  Once a benchmark is established, you can determine whether and how numbers shift.  Suppose the satisfaction rate for your conference was 50% three years ago, 55% two years ago, 65% last year, and 75% this year. Congratulations are in order! Your longitudinal data analysis shows a solid, upward trend in satisfaction.

You can even track data for different subgroups. Say for example that satisfaction rates are increasing year over year for students and teachers, but not for administrators.  You might want to look at administrators’ responses to various questions to see if you can gain insight into why they are less satisfied than other attendees.

Crunching the numbers

You know how many people said they were coming back, but how do you know if your survey has yielded answers that you can trust and answers that you can use with confidence to inform future decisions? It’s important to pay attention to the quality of your data and to understand the components of statistical significance.

In everyday conversation, "significant" means important or meaningful. In survey analysis and statistics, significance is an assessment of accuracy: it means your results are accurate within a stated margin of error and confidence level, and are not due to random chance. This is where the inevitable "plus or minus" comes into survey work. Drawing an inference based on results that are inaccurate (i.e., not statistically significant) is risky. The first factor to consider in any assessment of statistical significance is the representativeness of your sample; that is, to what extent the group of people who were included in your survey "look like" the total population of people about whom you want to draw conclusions.

You have a problem if 90% of conference attendees who completed the survey were men, but only 15% of all your conference attendees were male. The more you know about the population you are interested in studying, the more confident you can be when your survey lines up with those numbers. At least when it comes to gender, you’re feeling pretty good if men make up 15% of survey respondents in this example.

If your survey sample is a random selection from a known population, statistical significance can be calculated in a straightforward manner. A primary factor here is sample size. Suppose 50 of the 1,000 people who attended your conference replied to the survey. Fifty is a small sample and yields a wide margin of error. In short, your results won't carry much weight.
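The margin of error behind that "plus or minus" is straightforward to compute for a simple random sample. Here is a minimal sketch of the standard formula for a proportion at roughly 95% confidence, including the finite population correction that matters when, as here, the sample is drawn from a small, known population.

    import math

    def margin_of_error(p, n, population=None, z=1.96):
        """Margin of error for a sample proportion (z = 1.96 for ~95% confidence)."""
        moe = z * math.sqrt(p * (1 - p) / n)
        if population is not None:
            # Finite population correction: shrinks the margin when the
            # sample is a sizeable fraction of the whole population.
            moe *= math.sqrt((population - n) / (population - 1))
        return moe

    # 50 respondents out of 1,000 attendees, 75% of them satisfied.
    print(f"{margin_of_error(0.75, 50, population=1000):.1%}")  # about 11.7%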

Say you asked your survey respondents how many of the 10 available sessions they attended over the course of the conference, and that your results form a frequency table showing how many of the 1,000 respondents attended each number of sessions.

You might want to analyze the average. As you may recall, there are three different kinds of averages: mean, median and mode.

In this table, the average number of sessions attended is 6.3. The average reported here is the mean, the kind of average that's probably most familiar to you. To determine the mean, you add up all the data and divide by the number of responses. In this example, 10 people attended one session, 50 people attended four sessions, 100 people attended five sessions, and so on. So you multiply each number of sessions by the number of people who attended that many sessions, sum those products, and divide by the total number of people.

The median is another kind of average. The median is the middle value, the 50% mark. In this table, we would locate the number of sessions with 500 people to the left of it and 500 to the right. The median is, in this case, 7 sessions. Using the median can reduce the influence of outliers, which might otherwise skew your results.

The last kind of average is the mode, the most frequent response. In this case the answer is six: 260 survey participants attended six sessions, more than attended any other number of sessions.
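Here is a minimal sketch of all three averages in Python. Only a few of the counts below appear in the text (10 people at one session, 50 at four, 100 at five, 260 at six); the rest are invented so the totals match the statistics quoted above (1,000 respondents, mean 6.3, median 7, mode 6).

    from statistics import mean, median, mode

    # Hypothetical frequency table: sessions attended -> number of respondents.
    counts = {1: 10, 2: 40, 3: 39, 4: 50, 5: 100,
              6: 260, 7: 259, 8: 193, 9: 14, 10: 35}

    # Expand the table into one value per respondent, then let the
    # statistics module do the work.
    responses = [sessions for sessions, n in counts.items() for _ in range(n)]

    print(mean(responses))    # 6.3
    print(median(responses))  # 7.0
    print(mode(responses))    # 6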

Means, and other types of averages, can also be computed when your results are based on Likert scales, although whether ordinal Likert data should be analyzed that way is a matter of long-running debate, as the article reprinted below explains.

Drawing conclusions

When it comes to reporting on survey results, think about the story the data tells.

Say your conference overall got mediocre ratings. You dig deeper to find out what's going on. The data show that attendees gave very high ratings to almost all the aspects of your conference (the sessions and classes, the social events, and the hotel) but they really disliked the city chosen for the conference. (Maybe the conference was held in Chicago in January and it was too cold for anyone to go outside!) That is part of the story right there: great conference overall, lousy choice of location. Miami or San Diego might be a better choice for a winter conference.

One aspect of data analysis and reporting you have to consider is causation vs. correlation.

What is the difference between correlation and causation?

Causation is when one factor causes another, while correlation is when two variables move together, but one does not influence or cause the other.

For example, drinking hot chocolate and wearing mittens are two variables that are correlated: they tend to go up and down together. However, one does not cause the other. In fact, they are both caused by a third factor, cold weather. Cold weather influences both hot chocolate consumption and the likelihood of wearing mittens. Cold weather is the independent variable, and hot chocolate consumption and the likelihood of wearing mittens are the dependent variables. In the case of our conference feedback survey, cold weather likely influenced attendees' dissatisfaction with the conference city and the conference overall. Finally, to further examine the relationship between variables in your survey, you might need to perform a regression analysis.
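Before moving on to regression, a quick simulation makes the point concrete: a third variable (temperature) drives both of the others, so they correlate strongly even though neither causes the other. All the numbers here are invented.

    import numpy as np

    rng = np.random.default_rng(0)

    # Daily temperature over a year is the confounding (independent) variable.
    temperature = rng.normal(loc=10, scale=12, size=365)

    # Colder days mean more hot chocolate and more mittens, plus some noise.
    hot_chocolate = -0.5 * temperature + rng.normal(0, 2, 365)
    mittens = -0.3 * temperature + rng.normal(0, 2, 365)

    # The two dependent variables correlate strongly, with no causal
    # link between them.
    print(np.corrcoef(hot_chocolate, mittens)[0, 1])  # roughly 0.8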

What is regression analysis?

Regression analysis is an advanced method of data analysis that lets you look at the relationship between two or more variables. There are many types of regression analysis, and the one(s) a survey scientist chooses will depend on the variables being examined. What all types of regression analysis have in common is that they look at the influence of one or more independent variables on a dependent variable.

In analyzing our survey data, we might be interested in knowing which factors most impact attendees' satisfaction with the conference. Is it the number of sessions? The keynote speaker? The social events? The site? Using regression analysis, a survey scientist can determine whether, and to what extent, satisfaction with these different attributes of the conference contributes to overall satisfaction. This, in turn, provides insight into which aspects of the conference you might want to change next time around.

Say, for example, you paid a high honorarium to get a top-flight keynote speaker for your opening session. Attendees gave both this speaker and the conference overall high marks. Based on these two facts, you might think that having a fabulous (and expensive) keynote speaker is the key to conference success. Regression analysis can help you determine whether this is indeed the case. You might find that the popularity of the keynote speaker was a major driver of satisfaction with the conference; if so, next year you'll want to get a great keynote speaker again. But say the regression shows that, while everyone liked the speaker, this did not contribute much to attendees' satisfaction with the conference. In that case, the big bucks spent on the speaker might be better spent elsewhere.

If you take the time to carefully analyze the soundness of your survey data, you'll be on your way to using the answers to help you make informed decisions.
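As a minimal sketch of the idea, here is an ordinary least squares fit using numpy. The respondents' 1-5 ratings and the attribute names are invented; a survey scientist would normally use a statistics package and check the model's diagnostics as well.

    import numpy as np

    # Hypothetical 1-5 ratings from eight respondents:
    # columns are speaker, sessions, venue; y is overall satisfaction.
    X = np.array([
        [5, 4, 2], [4, 5, 3], [5, 3, 2], [3, 4, 4],
        [4, 4, 3], [5, 5, 4], [2, 3, 3], [4, 2, 2],
    ], dtype=float)
    y = np.array([4, 5, 3, 4, 4, 5, 3, 2], dtype=float)

    # Add an intercept column and solve the least squares problem.
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Larger coefficients indicate attributes that move overall
    # satisfaction more, holding the others fixed.
    for name, b in zip(["intercept", "speaker", "sessions", "venue"], coef):
        print(f"{name}: {b:+.2f}")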


STATISTICS ROUNDTABLE

Likert Scales and Data Analyses

by I. Elaine Allen and Christopher A. Seaman

Surveys are consistently used to measure quality. For example, surveys might be used to gauge customer perception of product quality or quality performance in service delivery.

Likert scales are a common ratings format for surveys. Respondents rank quality from high to low or best to worst using five or seven levels.

Statisticians have generally grouped data collected from these surveys into a hierarchy of four levels of measurement:

  1. Nominal data: The weakest level of measurement representing categories without numerical representation.
  2. Ordinal data: Data in which an ordering or ranking of responses is possible but no measure of distance is possible.
  3. Interval data: Generally integer data in which ordering and distance measurement are possible.
  4. Ratio data: Data in which meaningful ordering, distance, decimals and fractions between variables are possible.

Data analyses using nominal, interval and ratio data are generally straightforward and transparent. Analyses of ordinal data, particularly as they relate to Likert or other scales in surveys, are not. This is not a new issue. The adequacy of treating ordinal data as interval data continues to be controversial in survey analyses in a variety of applied fields [1, 2].

An underlying reason for analyzing ordinal data as interval data might be the contention that parametric statistical tests (based on the central limit theorem) are more powerful than nonparametric alternatives. Also, the conclusions of parametric tests might be considered easier to interpret and more informative than those of nonparametric alternatives.

However, treating ordinal data as interval (or even ratio) data without examining the values of the dataset and the objectives of the analysis can both mislead and misrepresent the findings of a survey. To examine the appropriate analyses of scalar data, and when it's preferable to treat ordinal data as interval data, we will concentrate on Likert scales.

Basics of Likert Scales

Likert scales were developed in 1932 as the familiar five-point bipolar response format that most people are familiar with today [3]. These scales range over a group of categories, from least to most, asking people to indicate how much they agree or disagree, approve or disapprove, or believe to be true or false. There is really no wrong way to build a Likert scale; the most important consideration is to include at least five response categories. Some examples of category groups appear in Table 1.

The ends of the scale often are extended to create a seven-point scale by adding "very" to the top and bottom of the five-point scale. The seven-point scale has been shown to reach the upper limits of a scale's reliability [4]. As a general rule, Likert and others recommend using as wide a scale as possible, since you can always collapse the responses into condensed categories, if appropriate, for analysis, as sketched below.
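Collapsing a wider scale is just a mapping from the original codes to coarser categories. A minimal sketch, assuming a seven-point agreement scale coded 1 through 7:

    # Collapse a 7-point agreement scale (1 = strongly disagree ...
    # 7 = strongly agree) into three condensed categories.
    def collapse(response):
        if response <= 3:
            return "disagree"
        if response == 4:
            return "neutral"
        return "agree"

    responses = [1, 4, 6, 7, 3, 5, 2]
    print([collapse(r) for r in responses])
    # ['disagree', 'neutral', 'agree', 'agree', 'disagree', 'agree', 'disagree']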

With that in mind, scales are sometimes truncated to an even number of categories (typically four) to eliminate the "neutral" option in a "forced choice" survey scale. Rensis Likert's original paper clearly identifies that there might be an underlying continuous variable whose value characterizes the respondents' opinions or attitudes, and that this underlying variable is interval level, at best [5].

Analysis, Generalization To Continuous Indexes

As a general rule, mean and standard deviation are invalid parameters for descriptive statistics whenever data are on ordinal scales, as are any parametric analyses based on the normal distribution. Nonparametric procedures—based on the rank, median or range—are appropriate for analyzing these data, as are distribution free methods such as tabulations, frequencies, contingency tables and chi-squared statistics.

Kruskal-Wallis models can provide the same type of results as an analysis of variance, but based on the ranks and not the means of the responses (see the sketch below). Given that these scales are representative of an underlying continuous measure, one recommendation is to analyze them as interval data as a pilot prior to gathering the continuous measure.
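As a sketch of the nonparametric route, the Kruskal-Wallis test in scipy compares groups by the ranks of their responses rather than by their means, so the ordinal Likert codes are never treated as interval data. The three groups' 1-5 ratings below are invented.

    from scipy import stats

    # Hypothetical 1-5 satisfaction ratings from three groups of respondents.
    group_a = [5, 4, 5, 4, 3, 5, 4]
    group_b = [4, 4, 5, 3, 4, 4, 5]
    group_c = [2, 3, 2, 3, 1, 3, 2]

    # Kruskal-Wallis is the rank-based analog of a one-way analysis
    # of variance.
    statistic, p_value = stats.kruskal(group_a, group_b, group_c)
    print(f"H = {statistic:.2f}, p = {p_value:.4f}")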

Table 2 includes an example of misleading conclusions, showing the results from the annual Alfred P. Sloan Foundation survey of the quality and extent of online learning in the United States. Respondents used a Likert scale to evaluate the quality of online learning compared to face-to-face learning.

While 60%-plus of the respondents perceived online learning as equal to or better than face-to-face, there is a persistent minority that perceived online learning as at least somewhat inferior. If these data were analyzed using means, with a scale from 1 to 5 from inferior to superior, this separation would be lost, giving means of 2.7, 2.6 and 2.7 for these three years, respectively. This would indicate a slightly lower than average agreement rather than the actual distribution of the responses.

A more extreme example would be to place all the respondents at the extremes of the scale, yielding a mean of "same" but a completely different interpretation from the actual responses.
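That extreme case is easy to verify: a fully polarized sample and a fully neutral one produce exactly the same mean. The counts here are invented for illustration.

    from statistics import mean

    # 1 = inferior ... 5 = superior. Two very different response patterns:
    polarized = [1] * 500 + [5] * 500  # everyone at the extremes
    neutral = [3] * 1000               # everyone answering "same"

    # Identical means, completely different stories.
    print(mean(polarized), mean(neutral))  # 3 3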

Under what circumstances might Likert scales be used with interval procedures? Suppose the rank data included a survey of income measuring $0, $25,000, $50,000, $75,000 or $100,000 exactly, and these were measured as “low,” “medium” and “high.”

The "intervalness" here is an attribute of the data, not of the labels. Also, the scale item should have at least five, and preferably seven, categories.

Another example of analyzing Likert scales as interval values is when the sets of Likert items can be combined to form indexes. However, there is a strong caveat to this approach: Most researchers insist such combinations of scales pass the Cronbach’s alpha or the Kappa test of intercorrelation and validity.

Also, the combination of scales to form an interval level index assumes this combination forms an underlying characteristic or variable.
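One common formulation of Cronbach's alpha can be computed directly from the item scores. A minimal sketch with invented 1-5 responses to three related Likert items:

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for a respondents-by-items matrix of scores."""
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed index
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Hypothetical responses: five respondents, three items.
    scores = np.array([
        [4, 5, 4],
        [3, 3, 4],
        [5, 5, 5],
        [2, 3, 2],
        [4, 4, 5],
    ])
    print(f"alpha = {cronbach_alpha(scores):.2f}")  # alpha = 0.92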

Alternative Continuous Measures for Scales

Alternatives to using a formal Likert scale can be the use of a continuous line or track bar. For pain measurement, a 100 mm line can be used on a paper survey to measure from worst ever to best ever, yielding a continuous interval measure.

With the advent of online surveys, this can be done with track bars similar to those illustrated in Figure 1. Respondents can then calibrate their responses along a continuous interval, which survey software captures as a continuous value.

Conclusion

Your initial analysis of Likert scalar data should not involve parametric statistics but should rely on the ordinal nature of the data. While Likert scale variables usually represent an underlying continuous measure, analysis of individual items should use parametric procedures only as a pilot analysis.

Combining Likert scales into indexes adds values and variability to the data. If the assumptions of normality are met, analysis with parametric procedures can follow. Finally, converting a five- or seven-category instrument to a continuous variable is possible with a calibrated line or track bar.


REFERENCES

  1. Gideon Vigderhous, “The Level of Measurement and ‘Permissible’ Statistical Analysis in Social Research,” Pacific Sociological Review, Vol. 20, No. 1, 1977, pp. 61-72.
  2. Ulf Jakobsson, “Statistical Presentation and Analysis of Ordinal Data in Nursing Research,” Scandinavian Journal of Caring Sciences, Vol. 18, 2004, pp. 437-440.
  3. Rensis Likert, “A Technique for the Measurement of Attitudes,” Archives of Psychology, 1932, Vol. 140, No. 55.
  4. Jum C. Nunnally, Psychometric Theory, McGraw Hill, 1978.
  5. Dennis L. Clasen and Thomas J. Dormody, “Analyzing Data Measured by Individual Likert-Type Items,” Journal of Agricultural Education, Vol. 35, No. 4, 1994.


I. ELAINE ALLEN is an associate professor of statistics and entrepreneurship at Babson College in Babson Park, MA. She has a doctorate in statistics from Cornell University in Ithaca, NY. Allen is a senior member of ASQ.

CHRISTOPHER A. SEAMAN is a doctoral student in mathematics at the Graduate Center of City University of New York.
