Three-Year Snapshot Comparison: NSSE Results

By Sheilynda Stewart, Director of Institutional Research, East Central University

This dashboard provides a three-year snapshot comparison of results from the National Survey of Student Engagement (NSSE). Decision makers at East Central University use NSSE results to measure student engagement in educational practices that are empirically linked to outcomes such as persistence, satisfaction, and graduation. The source data was compiled in Microsoft Excel, which was also used to create the charts on this dashboard. The top visual is a horizontal stacked bar chart showing the class-preparation hours reported by ECU first-year and senior participants. The middle chart shows how first-year and senior students rate the quality of academic advising. The bottom chart compares coursework activities by year for first-year students on a horizontal bar chart.

[Image: NSSE-Dashboard.JPG]

Comments
 
Tim posted on 8/14/2014 10:30 AM
This is a lot of good information that will give administrators the ability to make positive improvements. The visualization is good, especially the titles, which note where the percentages are derived. My first question about the display concerns the first chart. I understand why the total percentage by level and year could be less than 100% (non-responses), but I am not sure why the total runs 1 or 2% over 100% in some of the years. I suspect a rounding issue; it just raises unnecessary questions and distracts from the information. My other question is why the last chart is not broken out into first-year and senior students. It might be a campus-culture decision, so that is appropriate.
Sam posted on 8/14/2014 10:50 AM
These visualizations are off to a great start. Clearly ECU (go Pirates!) is investing in data and analytics to improve student outcomes. I would suggest the following to improve the readability of these visuals.

1. Where applicable, use colors that convey the "heat" or theoretical value of an ordinal or Likert-scale variable. In this instance, preparing for class "10 hours or less" is less desirable, so its color might be red or orange; move toward green the more desirable the category. Or, if no theoretical value is appropriate, use shades of one particular color (see the sketch after this list).

2. Avoid reusing color palettes for variables with different values. Purple was "Greater than 20" in the first visual but "Poor" in the second. Likewise, blue is 2012 in the last visual but different categories in the previous two. This forces the reader to keep readjusting which meaning to associate with each color.

3. Be consistent in how time values are displayed. In the first two charts they are groups of bars; in the third they are colors. (This sort of relates to point 2.)

By incorporating and applying standards here and across all visual products, you can get colleagues used to a particular, consistent visual narrative. This will eventually make sense-making of new products easier (e.g., years or time elements are always colors or groups, and so on).
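A palette like that could be defined once and reused by every chart. Here is a minimal Python/matplotlib sketch of the idea; the middle category label is a placeholder, since only "10 hours or less" and "Greater than 20" are named above.

    import matplotlib.pyplot as plt

    # One shared mapping from category to color, used by every chart,
    # so a color never changes meaning across visuals.
    PREP_COLORS = {
        "10 hours or less": "#d62728",   # red: least desirable
        "11 to 20 hours":   "#ff7f0e",   # orange (placeholder label)
        "Greater than 20":  "#2ca02c",   # green: most desirable
    }

    def stacked_barh(ax, row_labels, pct_by_category):
        """Draw horizontal stacked bars using the shared palette."""
        left = [0.0] * len(row_labels)
        for category, color in PREP_COLORS.items():
            values = pct_by_category[category]
            ax.barh(row_labels, values, left=left, color=color, label=category)
            left = [l + v for l, v in zip(left, values)]
        ax.set_xlim(0, 100)  # cap the axis at 100% so overshoot is impossible
        ax.legend()

Any chart that pulls its colors from PREP_COLORS will render a given category the same way, which is the consistency I have in mind.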

I also noticed in the first visual that the overall percentage is more than 100% on some of the bars.

Good stuff Sheilynda!

Sam
Karolynn posted on 8/14/2014 11:01 AM
Great dashboard; a lot of interesting things are happening here. I agree with Tim regarding the first chart: it is distracting that the bars do not sum to 100%. The scale could also be reduced from 120% to 100% to make it clearer that going over 100% is not possible.

I would also change some of the colors in the 2nd and 3rd charts because you are using the same colors to indicate different things (the rating scale in the 2nd, years in the 3rd), and the reader's eye will try to link the similar colors together. What do you want the 3rd chart to highlight? Currently the 2011 red line stands out most to my eye. If you want the most recent year highlighted, you could keep that line blue and make the two previous years shades of gray, so 2012 pops out and the chart becomes easier to understand.
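To sketch what I mean (a rough Python/matplotlib example, assuming the series are drawn as lines and the three years are 2010 through 2012):

    import matplotlib.pyplot as plt

    # Prior years in muted grays, the most recent year in a saturated
    # blue, so the 2012 series is what the eye lands on first.
    YEAR_COLORS = {"2010": "#c0c0c0", "2011": "#8a8a8a", "2012": "#1f77b4"}

    def plot_years(ax, activities, pct_by_year):
        """pct_by_year maps a year label to one value per activity."""
        latest = max(pct_by_year)
        for year, values in pct_by_year.items():
            ax.plot(activities, values,
                    color=YEAR_COLORS[year],
                    linewidth=2.5 if year == latest else 1.25,
                    label=year)
        ax.legend()

The same trick works if the series stay as bars: gray fills for 2010 and 2011, one saturated fill for 2012.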

For the 2nd chart, if the point is to compare changes over time, it is difficult (but not impossible) to do so in the current format. It would be more helpful to put the years next to each other as stacked horizontal bars, similar to the first chart. And if you want to group the top two categories together, you could give them different shades of the same color, for example: Excellent = dark green, Good = light green, Fair = gray, Poor = red.
Betsy posted on 8/14/2014 12:45 PM
The others have covered most of my comments on the look and feel, but I am also immediately drawn into the data and want to ask questions, which is a good thing. For instance, why the apparent slack-off in studying, especially among seniors, in 2012? Or why did so few new students in 2010 select "excellent," when for all other groups and years the "excellents" surpass the "goods" by a significant margin? In the second chart, a summary statistic for overall satisfaction would mask this difference, and maybe that's a good reminder that the perceived difference between "excellent" and "good" might be much smaller than the distance between "good" and "fair."

Of course, the relatively small size of the population would introduce more variability, but the year-to-year changes are still striking.

Finally, one comment on the last chart: the end-of-bar totals look a little cramped in the space, at least as rendered here on the web. I also don't know how many choices the NSSE scales offered altogether, so I'm less sure how to interpret the results.
Sheilynda posted on 8/14/2014 2:51 PM
Thank you all for your comments! I had not thought about standardizing the color palettes for variables; this is something I will consider for future dashboards. I will also add a note explaining that the numbers don't always add up to 100% because of rounding. My aim is to use these dashboards to start conversations with faculty, staff, and administrators about student engagement trends to improve planning and institutional effectiveness.
Ijay posted on 8/15/2014 2:41 AM
This is indeed an excellent way of displaying results, and in essence it confirms the superiority of this new paradigm in data analysis. The choice of colors is good too; I find it easy to interpret. I was wondering whether these are actual or hypothetical results? The slight differences of 1 or 2% may not show any significant difference when tested, so while this is very good at the first level of data analysis, it may not bring out the needed variance in a variance analysis. This is quite encouraging to those of us who easily develop a phobia for figures. Well done, dear Sheilynda!
Terry posted on 8/15/2014 8:42 AM
I like this visualization. It meets the important criterion that the information be relevant to the subject. I like the way the first bar chart is formatted: a lot of the time, bar charts obscure some of the information, but this one makes clear what is being presented.
Alex posted on 8/15/2014 12:28 PM
Thanks for sharing this example, Sheilynda! It's also great to see the good comments and suggestions posted by others. Here's a simple fix for making sure the first display always sums to 100 (assuming you're creating these in Excel): copying and pasting the percentages from the Excel version of your NSSE reports would preserve their precision and ought to take care of the problem, but if you choose 100% Stacked Bar as your chart type, the percentages will be treated as values and the stacked bars will always sum to 100%.
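And if the rounding happens before the numbers ever reach the chart, a largest-remainder pass keeps each row summing to exactly 100. A minimal Python sketch of the idea (not part of the NSSE reports or Sheilynda's workflow; it assumes the unrounded percentages already total 100):

    def round_to_100(percentages):
        """Round percentages to whole numbers that total exactly 100.
        Assumes the unrounded inputs already sum to 100."""
        floored = [int(p) for p in percentages]
        leftover = 100 - sum(floored)
        # Hand the leftover points, one each, to the values that lost
        # the most when floored.
        by_remainder = sorted(range(len(percentages)),
                              key=lambda i: percentages[i] - floored[i],
                              reverse=True)
        for i in by_remainder[:leftover]:
            floored[i] += 1
        return floored

    # Naive rounding of these would give 41 + 41 + 19 = 101.
    print(round_to_100([40.6, 40.6, 18.8]))  # -> [41, 40, 19]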
Michelle posted on 8/15/2014 1:00 PM
Hello Sheilynda! Love the annual comparisons that move toward looking at trends. I find it helpful to look at cohorts when I'm trying to identify trends. Maybe "change" has more to do with the individuals responding than with programmatic changes (which NSSE is supposed to help track). But program changes take time to show impacts, if any.

I like adding commentary that notes when those changes were piloted, became operational, etc. Calling the list "Caveats" throws people off into debating the validity of the data rather than the patterns they may be seeing, so I title it "Notes" (bullets, not numbers, to avoid implying rank) and usually order the items chronologically. Since I may not be privy to many of those initiatives, I usually start with the program implementers, who own this data, and ask what else we should consider in looking at the information. They fill in the notes for me! :-)

NSSE results then become a piece for understanding a larger pattern. Program implementers and administrators then use that information in combination with other data to create recommendations.