2012 Perceptual Edge Dashboard Design Competition: A Solution of My Own
I have finally created my own version of the Student Performance Dashboard that contestants in the 2012 Perceptual Edge Dashboard Design Competition were asked to create. I don’t feel that I should judge the efforts of others unless I’m willing to submit my own work for scrutiny as well. This version, along with those created by the two winners, will appear in the second edition of Information Dashboard Design.
Examine this dashboard on your own for a few minutes. Before reading further, examine each measure and the way it’s expressed, including the context. Look at each component, both on its own and in relation to the whole. Consider the overall visual design: how it draws you into the information and draws your eyes to what’s important.
Hopefully, the reasons for each of my design choices became clear as you examined it closely. You may have noticed that I incorporated several of the ideas that were exhibited by dashboards that were submitted to the competition, especially the two winning solutions. Yes, I cheated, and for this reason I didn’t give myself an award. Here are a few of the good qualities of this dashboard that were present in others as well:
- All of the information is present.
- It is easy to spot the students who are most in need of attention.
- The organization is clear.
- The students that most need attention are clearly featured, using simple blue icons.
- Graphics have been used to support efficient scanning of the information.
- Everything about a student can be seen by scanning across a single row.
- Students can be easily compared by scanning down the columns.
- Even though there is a great deal of information, little training would be required to learn how to interpret this dashboard.
- The information has been displayed in an aesthetically pleasing manner.
- It is scalable in that more or fewer students could be accommodated by simply adding or removing rows.
Now let’s consider a few ways that this design succeeds where others fell short.
- Student-level and class-level information has been well integrated.
- The sparklines are more informative.
- It is easier to see time-based attendance patterns (absences and tardies).
By placing class-level summaries below related student-level information, the relationship between them is clearly shown and comparisons can be easily made.
The sparklines are a variation of Edward Tufte’s space-efficient invention, which I call bandlines and introduced in the current edition of the Visual Business Intelligence Newsletter in an article titled “Introducing Bandlines: Sparklines Enriched with Information about Magnitude and Distribution.” In this case, I’ve used horizontal bands of color to represent ranges of scores that correspond to the grades A, B, C, D, and F. With this design, I was able to provide the teacher with a quick glimpse of historical student achievement that reveals not only patterns and trends but also the magnitudes and variability of values. Usually, bandlines use bands of color to show how a measure is quantitatively distributed based on quartiles, similar to a box plot; as such, the technique is adaptable to a broad range of measures.
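To make the two banding schemes concrete, here is a minimal Python sketch (my own illustration, not code from the dashboard; the 90/80/70/60 grade cutoffs are assumptions, since the article doesn’t state them):

```python
def grade_band(score):
    """Map a percentage score to the letter-grade band it falls in
    (the horizontal colored bands behind each sparkline).
    Cutoffs of 90/80/70/60 are assumed, not taken from the article."""
    if score >= 90: return "A"
    if score >= 80: return "B"
    if score >= 70: return "C"
    if score >= 60: return "D"
    return "F"

def quartile_bands(values):
    """Return the (lower, upper) edges of the four bands that standard
    bandlines shade, computed like a box plot's quartiles."""
    xs = sorted(values)
    n = len(xs)
    def q(p):  # quantile via simple linear interpolation
        k = p * (n - 1)
        lo = int(k)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (k - lo) * (xs[hi] - xs[lo])
    q1, q2, q3 = q(0.25), q(0.5), q(0.75)
    return [(xs[0], q1), (q1, q2), (q2, q3), (q3, xs[-1])]
```

Either set of band edges can then be drawn as shaded horizontal stripes behind the sparkline.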
To show historical attendance information, I designed a display that was similar to the winning solution by Jason Lockwood, but is a little easier to perceive and comprehend at a glance.
I hope you can appreciate the design choices that I made to produce this dashboard and understand how they support performance monitoring. I have no illusion that this version of the Student Performance Dashboard is perfect. I have never designed anything that I couldn’t improve later. As new ideas come to mind, several of which will no doubt come from you, I’ll continue to improve this dashboard with each new printing of the book. Despite the evolutionary nature of design—time is a great teacher—I’m confident that this dashboard could be used by teachers to help their students achieve their best.
Take care,
38 Comments on “2012 Perceptual Edge Dashboard Design Competition: A Solution of My Own”
There is a huge amount of information conveyed in this example. I’ve never tried to put across that many variables at once. It is rather exciting to think that so much can be presented so concisely. The only thing that gives me any pause are the sparklines. I don’t feel like I’m getting as much information from them as from some of the other elements. Now to figure out why I feel that way.
Looks very interesting. Can’t wait to see the second edition of “Information Dashboard Design”
I missed the original brief for the competition so I am just curious were the submissions all static images or were any interactive examples produced?
The vertical bandlines that are used for the “Overall Course Grade” and “Spread” are helpful, but the horizontal bandlines used for “YTD” and “Last 5” are rather confusing.
I am not a fan of the shocking pink you have used; your dashboard winner used a very effective blue, which did not bombard the senses.
At bottom, why did you choose to use a line to represent the % of students achieving a certain grade rather than a bar?
This is great stuff; I agree with the poster above that it is exciting to realize how much insight can be provided so concisely.
I love the way you have presented the data; now I have to figure out how to make the tool I use produce some of those charts. My only concern is that you have sorted students by their current course grade, which I think would be fairly obvious to any attentive teacher. I would suggest thinking about the measures with an eye to detecting emerging problems: recent changes in attendance patterns or in test scores would let a teacher know that a problem is beginning to surface. An “A” student who gets a “C” on a test would be a red flag for intervention. But since you have the best performers buried at the bottom of the screen, we might not ever see them until they bubbled up.
A slightly unrelated question, you mentioned your newsletter, I already get an RSS feed of your blog, how do I subscribe to your newsletter?
Hi Stephen and thanks again for such an interesting “competition”, or should you call it “coopetition”? ;)
Great post and insights (as usual). Some thoughts; I already discussed some of them in the forum before. I’ll focus only on the tiny/few points where maybe it could be (even) further improved.
Regarding the sparklines, I still think that they should all use the same vertical scale… This way we get too much “noise”; we perceive far more “irrelevant” variations and trends that do not deserve our attention. The bandlines would still be useful; not sure you would need more vertical space for them to be distinguishable, though.
And I know, for practical reasons also I think, that the competition needed a very restricted scope, data visualization only, and not data “exploration” and interactivity. But these days, everything becoming touch “aware”, I don’t think we can think of data visualization without taking user interaction into account. Although they can still be studied separately, they will work much better when working together in real use cases, and dashboard designers should be very, very comfortable with this. “Group selectors” come to mind also, and many other patterns. Bet in a few years we won’t even have touch-less paper any more… :)
Final note :) I miss some color here, some heart, a little bit more “feeling” :)
Kind regards,
Rui
Hi Stephen
Love your viz – amazing how much information can be made easily accessible in an easy-to-understand fashion.
I’m a little curious – are your dashboards all Photoshop or do you use a frontend tool? If so, could you make the file public?
Looking forward to hearing you in Copenhagen later this spring
Cheers
Peter
Hi. My entry for dumbest question award: why is it last to first and not first to last? I’m assuming there was a reason, and am genuinely curious to know it.
Great graphic. Busy and crowded, but very, very information rich.
Terry, that was the first thing I noticed but a few seconds later it was very clear to me: teachers must pay more attention to students with the worst grade so they can take action and help them.
I like the vertical format, and the general mix of graphs vs. numerical analytic content–the format would also lend itself well to a customer rationalization process in a for-profit company. I don’t like the hot pink, though–blue prints better in black and white when necessary, and is easier on the retinal stem.
This would be miles more useful than just nice-to-look-at if the underlying software how-tos were available, like–can we see the actual file/formulas in Excel, or whatever you used to produce the visual?
Hi Stephen
This is an awesome dashboard. Just wondering which tool you have used to design this dashboard.
As always, very elegant. One thing I would really love to see is how this is made? What software do you use to build a dashboard like this? Can you share, even briefly, what you use and how you do it?
Thanks.
David
Thank you so much for sharing this competition and discussion with us.
Congratulations for all those that participated in the competition.
Regards,
Pedro Perfeito
PS: The Rui Quintino dashboards were also amazing!
Overall I think this is a great design, and really shows how a lot of complex ideas can be put into a design that ends up being relatively straightforward.
I love the summary graphs at the bottom – I don’t think many other dashboards came close to making it that clear which summaries related to which columns. I also wouldn’t have thought that they could be effective at such a small size, but they are.
My issues with it:
1) All of the notes on sparkline scaling that were brought up in discussions. I still very strongly feel that large variations for one student, and minute variations for another, should be scaled so that it’s clear which is major and which is minor. The bandlines help in that regard, of course, but it’s not enough IMO
2) the grey color scaling seems counter intuitive. In every other example we are familiar with, dark grey is bad, light grey and white are good.
I am very curious why you reversed that here?
3) I am on the fence with the searing pink color. It’s grown on me the more I’ve looked at the image, but it is a very strong color…
On inspecting further, and understanding the purpose of the strip plot/ range plot following the sparklines better…perhaps *that* makes it enough.
Stephen,
At first, I wasn’t sure what to think about your entry (especially because I had not yet read your latest newsletter). For me, the pink lines were jarring on first glance – but line color is part preference, I admit. Was there a specific reason for your choice of line color?
In any event, having read your description of bandlines, I see their power: performance and distribution are communicated quickly and effectively. I find it somewhat poetic (hopefully not too poetic for this audience) in the “era of big data” some of the best visual solutions are, in fact, quite small.
Hi Steve
There’s a lot of information conveyed in a small space!
I have one comment though – the heading ‘Attendance’ is a bit misleading as the data refers to absenteeism and tardiness. Perhaps ‘Non-Attendance’ would be better?
Cheers
Adrian
Aesthetics: A+
Data Coverage: A
Analytics: B-
Insight: B
Final Grade: ??
In a traditional grade book, Frederic Chandler’s entry might list his assignment scores and a current grade average: 71%, 65, 0, 60, 68 => F (53%).
The first line of the dashboard shows the same information in three plus graphs. Is this a major improvement?
I’m open to the argument that graphs are better than tabular data, but it seems like there should be a greater return for a teacher who has to invest time in a new tool; or rather an additional tool. This dashboard still requires a traditional grade book if the teacher wants to review students’ individual assignment scores and explain their final grades.
Analytics / derived information. For an individual student, I’d like to know what average score he needs on the remaining assignments to improve his grade by one step (e.g., B to an A) -– if it’s even possible or probable, given his scores to date. The blue dots next to the three D/F students, by contrast, have low analytic value. Teachers who would use this dashboard are likely conscientious and know, without the dots, that these three students are in academic trouble.
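The “what average does he need” calculation asked for here is simple arithmetic over an overall-average grading scheme. A hypothetical sketch (the function name is mine; the five scores below come from the grade-book example earlier in this comment, and equal weighting of assignments is assumed):

```python
def required_average(scores, remaining, target_pct):
    """Average score needed on the `remaining` assignments for the
    overall (equal-weighted) average to reach `target_pct`.
    Returns None if it's impossible, i.e. would require > 100% each."""
    n = len(scores) + remaining
    needed_total = target_pct * n - sum(scores)
    avg = needed_total / remaining
    if avg > 100:
        return None          # the grade step is out of reach
    return max(avg, 0.0)

# Frederic Chandler's scores (currently F, 52.8% average):
# to reach a hypothetical D cutoff of 60% with 5 assignments left,
# he needs to average 67.2% on them.
needed = required_average([71, 65, 0, 60, 68], remaining=5, target_pct=60)
```

A dashboard pop-up could show this derived number on demand without cluttering the monitoring display itself.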
Insight. At a class level, grouping students by their grades in the last math class or their scores on the last standardized assessment, and their grades in this class could be more helpful than ordering students from lowest to highest grade. Are students doing better in my class than their last math class? Are they underperforming relative to their standardized test scores?
The dashboard is beautiful. The sparkbands seem like a useful addition to our toolboxes. The overall visualization still leaves some part of me asking: So what?
Hi Steve,
This is great and helpful! I really like the bandlines for the sparklines. This is an elegant solution that clearly solves the major issue I had in my design. I probably agree with some of the other posters that I’d do the grey shading the other way (dark for low, light for high), but this is really minor and quite often a personal preference.
I also really like how you improved my attendance timeline. This is really much more clear. One of those things that seems so obvious when you think about it, but is hard to come up with at the time.
As always, thanks for inspiration!
Jason
Like others, I’m not keen on the pink. Also, could the bar at the top not be placed at the bottom? I would imagine that a teacher is well aware of the date (to a reasonable approximation). Finally, I don’t really understand the Spread charts. What is the number represented at the left end? Chandler got 0 for one score but every other score was 60 or above. Kim on the other hand had no zero score but one score in the 40’s and one in the 50’s. So I can’t understand how the two have the same lowest scores according to the spread.
On the other hand, the information density is very impressive. And it does look very elegant.
And having looked at it again… the graph showing the overall course grade and what I assume is the latest grades look identical. Am I missing something?
Superb work as usual. One smaller improvement opportunity:
The qualitative difference between grades (say F and D) is very significant. I suspect it would be beneficial to emphasize this by simply changing the current 3 row banding lines (which currently actually de-emphasize the difference between grades) to instead only have horizontal banding lines between different grades, i.e. one line between F and D, one line between D and C etc.
Cheers,
Kristian
The overall course grade, the spread and the last columns are all strip plots showing overlapping bits of information. Splitting these into three separate plots seems wasteful.
I now understand the spread charts in relation to the top two students. I can see that if I peer really closely the background colours are different. I’m not convinced this is obvious enough, however. The bright pink box and circles with the white fill dominate when compared to the slight difference in grey value between F and D. Or they do on my monitor and with my eyes.
This is a very good dashboard that conveys a tremendous amount of information. Well done.
I did want to mention a few things that came to mind.
I may very well be in the minority here, but I actually found the bandlines made it harder to see the data when used with the sparklines: in particular, under the Assignments column with YTD and in the Assessments column with Last 5. The changing scale of the bandlines forced me to look very closely at each item and recalibrate for each student.
Additionally, in the Assignments column, the Spread data, I found the box and circles helpful but not sure that the graded scale helped me get a better idea of the spread for that student. It seems like the box and circles gave me a good enough idea of the spread of the data without the graded scale.
Lastly, in the behavior column. It seems like I really have sparse data here and the larger differentiation seems to be whether the student had behavior issues or not and not necessarily the scale of the behavior issues. Perhaps just having a filled circle for two categories (Ref & Det) rather than a number for each of the four categories would make it faster to read.
I’ve been tied up working long days for the National Institutes of Health and the U.S. Census Bureau this week, which is why I haven’t responded to comments until now. I’ll try to address all of the comments that questioned aspects of my dashboard’s design in this one response, except for those about bandlines and sparkstrips, which I’ll respond to in my Discussion Forum.
Many of you expressed opinions of the pink color (actually magenta) that I used for all of the primary data. Here’s the spectrum of opinions:
As you can see, opinions varied from “it’s too bright” to “there isn’t enough color,” dominated by the former. My choice of pure magenta was not arbitrary. I experimented with many colors in the context of this dashboard and chose magenta for the following reasons:
Although it is unusual to see magenta on a data display such as a dashboard, this is not because magenta isn’t suitable. I wouldn’t use this particular shade of magenta for large objects (e.g., heavy bars in a bar graph) because it would be too bright, but small objects such as those that display primary data in this dashboard must be fairly bright or dark to be seen against a variety of backgrounds. When used for objects of this size, it is not true that magenta is glaring or stressful to the retinal system. I couldn’t use a semi-saturated shade of blue similar to the color that was used in the winning dashboard because it would not stand out sufficiently.
To illustrate how various colors work in this context, I’ve created six different versions of a section of the dashboard, each of which uses a different color for the primary data:
Notice that of these colors, magenta and black stand out best against the full set of background colors. The darker blue and green versions do better than the cyan (light blue) and orange.
Mike asked the following question: “At bottom, why did you choose to use a line to represent the % of students achieving a certain grade rather than a bar?” I assume that you’re referring to the frequency distribution display. I used a frequency polygon (i.e., a line that displays a frequency distribution) instead of a histogram (i.e., bars that display a frequency distribution) because it shows the same information with less ink. Heavy bars aren’t needed.
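A frequency polygon plots the same bin counts a histogram would, as points at the bin midpoints connected by a line. A minimal sketch of the computation (the function name and the bin edges are illustrative, not taken from the dashboard):

```python
def frequency_polygon_points(scores, bins):
    """Count scores per half-open bin [lo, hi) and return
    (bin-midpoint, count) pairs -- the vertices a frequency polygon
    connects with a line, carrying the same information as a
    histogram's bars but with far less ink."""
    points = []
    for lo, hi in bins:
        count = sum(1 for s in scores if lo <= s < hi)
        points.append(((lo + hi) / 2.0, count))
    return points

# Hypothetical grade bins (F below 60, then D, C, B, A):
grade_bins = [(0, 60), (60, 70), (70, 80), (80, 90), (90, 101)]
```

Plotting these points with a simple polyline yields the distribution displays at the bottom of the dashboard.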
Rui said: “Regarding the sparklines, I still think that they should all use the same vertical scale… This way we get too much ‘noise’; we perceive far more ‘irrelevant’ variations and trends that do not deserve our attention.” The design of these sparklines did not introduce any noise into the data. Much of the information contained in these sparklines, regardless of how they are designed, could be considered noise depending on what the teacher considers significant information about performance. What’s important is that the teacher can differentiate the signals from the noise, easily and quickly. A consistent scale across the entire set of sparklines would in this case produce flat lines for lower values, making their patterns of change difficult or impossible to see. Due to the small size of the sparklines, scaling them to fill the vertical space provides the best means to see the patterns and trends. The information provided by the bandlines and sparkstrips reveals the magnitudes and variability of the values.
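The scaling choice described here amounts to a per-student min-max normalization: each sparkline is stretched to fill its own vertical space, while the bandline shading carries the absolute magnitudes. A hypothetical helper (not the dashboard’s actual code) makes this explicit:

```python
def scale_to_fill(values, height=10.0):
    """Rescale one student's scores so their sparkline fills the full
    vertical space; patterns of change stay visible even when the
    values barely vary in absolute terms."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # a flat series stays centered
        return [height / 2.0] * len(values)
    return [(v - lo) / (hi - lo) * height for v in values]
```

With a shared scale instead, a student whose scores range from 60 to 65 would render as a nearly flat line; this transform trades comparable amplitudes for visible within-student patterns.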
Rui also wrote: “And I know, for practical reasons also I think, that the competition needed a very restricted scope, data visualization only, and not data ‘exploration’ and interactivity. But these days, everything becoming touch ‘aware’, I don’t think we can think of data visualization without taking user interaction into account. Although they can still be studied separately, they will work much better when working together in real use cases, and dashboard designers should be very, very comfortable with this. ‘Group selectors’ come to mind also, and many other patterns. Bet in a few years we won’t even have touch-less paper any more… :)” This statement makes a distinction between data visualization, data exploration, and interactivity that does not exist. Data exploration and interactivity are part of data visualization whenever needed. Data exploration is not a dashboard activity, however, and interactivity should play a limited role on a dashboard. If you know my work, you know that I define a dashboard more specifically than many others: “A dashboard is a visual display of the most important information needed to achieve one or more objectives, consolidated and arranged on a single computer screen so it may be monitored and understood at a glance.” A dashboard is an information display that is used to monitor what’s going on. If you design a dashboard to support other purposes, such as data exploration, it will no longer effectively support monitoring. If you design a dashboard in a way that requires a great deal of interaction, which changes the appearance of the dashboard and the information that’s displayed, the user will not be able to scan it efficiently, which can only be done if it looks the same each time it’s used, except for the fact that the information has been updated. Interactivity is integral to data exploration and analysis, which is why I use highly interactive tools and build highly interactive applications for those purposes.
We can only design effective data visualizations if we take the purpose of their use into account and design a display that serves that purpose effectively.
Jim Wahl wrote: “The blue dots next to the three D/F students…have low analytic value. Teachers who would use this dashboard are likely conscientious and know, without the dots, that these three students are in academic trouble.” The blue circle icons serve as alerts. They serve a performance monitoring purpose, not an analytical purpose. The dashboard’s role is to provide the information that is necessary to monitor student performance. When analysis of a student’s performance must be done to understand how to respond to a situation that the dashboard displays, the dashboard should make it easy to access that information. If the information can be conveyed briefly, it could be provided in a small pop-up window when the mouse hovers over the relevant information. If more information or an analytical tool is needed, then that should be accessible in a separate window from the dashboard, ideally a click away. Additional information and data analysis functionality will at times be needed in response to a condition that appears on the dashboard, but this should never be integrated into the dashboard itself in a way that causes the dashboard to change in a persistent manner. Jim Wahl gave this dashboard a B- grade for analytics, which I suppose is a compliment given that it was not designed to support analytics. Performance monitoring and data analysis are, and should be, complementary activities, but they are different activities that must be supported by different displays and interactivity.
Jim Wahl also wrote: “At a class level, grouping students by their grades in the last math class or their scores on the last standardized assessment, and their grades in this class could be more helpful than ordering students from lowest to highest grade.” The order in which students are listed in this dashboard could be arranged in various ways. I ordered them in the way that seems most useful overall for a teacher who is monitoring their performance, based on my experience as a teacher and that of the others whom I consulted when assessing requirements for this dashboard. Because the students’ grades/scores are vital to the assessment of performance, sorting the students by grade has made it easy to scan all of the information related to grades and assignments. Grades and scores that are close to one another are near one another on the dashboard, which makes them easy to scan and compare. Imagine how difficult it would be to scan and compare grades and assignments if the students were sorted on some other variable, such as alphabetically by name.
Jim Wahl wrote: “I’m open to the argument that graphs are better than tabular data, but it seems like there should be a greater return for a teacher who has to invest time in a new tool; or rather an additional tool. This dashboard still requires a traditional grade book if the teacher wants to review students’ individual assignment scores and explain their final grades.” A grading application will always be needed as a means to enter students’ scores. However, the dashboard could replace all monitoring activity and could be supplemented with additional screens and pop-up menus to provide everything that a teacher would need regarding student performance apart from data entry. Tabular data is not better than graphical data in general; each is better for specific purposes. Seeing patterns in quantitative data and rapid monitoring both require graphics.
Tim asked: “Could the bar at the top not be placed at the bottom?” Yes, the title bar at the top, which identifies the class, states the effective date of the data, names the dashboard, and provides a Help button, could be positioned at the bottom. I sometimes design dashboards in this way. Designed as it is, however, the gray band of information at the top would not slow the teacher down in scanning student information, nor would it serve as a distraction.
Tim also pointed out that “the graph showing the overall course grade and what I assume is the latest grades look identical” and asked if this is accurate. Good question! It looks like it is probably an error, which I’ll look into and correct if so. Thanks Tim.
Adrian wrote: “The heading ‘Attendance’ is a bit misleading as the data refers to absenteeism and tardiness. Perhaps ‘Non-Attendance’ would be better?” I understand the point, but schools typically refer to absences and tardies as attendance information. In other words, this is the heading that makes most sense to teachers.
Kristian suggested the following: “The qualitative difference between grades (say F and D) is very significant. I suspect it would be beneficial to emphasize this by simply changing the current 3 row banding lines (which currently actually de-emphasize the difference between grades) to instead only have horizontal banding lines between different grades, i.e. one line between F and D, one line between D and C etc.” I suspect that I’m not understanding Kristian’s suggestion properly. Band colors are different for every grade, so there is a boundary between each. I must be missing something.
Several of you wanted to know what software I used to produce this design. I almost always use Adobe Illustrator to do this, which was the case this time as well. I suspect that many products could reproduce this dashboard, except perhaps for the bandline and sparkstrip enhancements to sparklines. I also suspect that a few could handle these features as well, but I don’t know offhand which products fall into this category.
Very effective illustration in favor of the magenta.
I am still curious about the reversal of the gray shading – whereas in most cases I am familiar with (and as is the case with bullet graphs), shading from light to dark equates to a ‘good to bad’ scale. In this example it is the opposite – what is the reasoning behind that decision?
jbriggs,
The shading could run in the other direction. I’ve shaded the bands in the way that I’ve suggested as the standard for bandlines, which is to run from light gray for low values and darker gray for high values. It is intuitive to think of darker as higher in value. In any particular case, however, the direction of the shading could be reversed if that works better. One benefit of the current shading is that a value rendered in magenta (or any other fairly dark color) will pop out more against a light background than it will against a dark background, which causes the low scores to pop more than the high, which is useful.
It’s an impressive dashboard – visually pleasing, and conveys a lot of data.
Here are the few things I could find to “complain” about … :)
It’s somewhat “big”, which makes it a bit unwieldy. I have to view it at full-size to comfortably read the text, and when I do that it doesn’t quite fit in my 1280×1024 screen (since I’m viewing it in a browser, and the edges/borders/title-bar of my browser consume some of my screen).
It’s also difficult (for me) to follow a student’s name across the screen. Even with the horizontal gray stripes every 3 students, I still find it difficult. For example, if I’m looking at some graphic at the right of the screen, it’s difficult to determine which student it corresponds to. It might be easier if the horizontal lines were a tad darker. Also, with some space-saving measures, you might make the dashboard a bit more narrow, which would also help.
I assume the overall class absence line graph (at the bottom of the page) only has plot markers on the days that had absences, and then connects those days with a straight line? (I could be wrong!) I would recommend including the zeroes as well, for the days with zero absences – that will change the way the line looks drastically. Skipping the zeroes and only connecting the non-zero days creates the visual effect of much higher absences. (For example, if there are 5 absences on Monday, and 5 on Friday, and you connect them with a line… it will visually look like there are also 5 absences on Tuesday, Wednesday, and Thursday.) You’ll also want to skip/omit the days when class was not held. Since there are several issues/gotchas with using a line chart for the total absence data, an alternative would be to use a very small (micro) bar chart.
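The suggestion to include explicit zeros can be sketched as follows (a hypothetical helper, not anything from the dashboard; it treats every weekday as a class day, which a real school calendar with holidays would refine):

```python
from datetime import date, timedelta

def fill_class_days(absences, start, end):
    """Expand a sparse {date: absence_count} mapping into one value per
    class day (assumed here to be every weekday), inserting explicit
    zeros so the plotted line drops to the axis on absence-free days
    instead of bridging them at a false height."""
    series = []
    d = start
    while d <= end:
        if d.weekday() < 5:                       # skip Sat/Sun; real
            series.append((d, absences.get(d, 0)))  # holidays need a calendar
        d += timedelta(days=1)
    return series
```

For example, absences on Feb 17 and Feb 21 with nothing in between now produce an explicit dip to zero on the intervening class days rather than a level line joining the two peaks.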
You could save some real estate, and reduce the amount of ‘ink’, by leaving off the ‘%’ signs. If there is really a need to make sure the reader knows the grades are percents, maybe show a ‘%’ sign on the first value in a column (but no need to show it with every value, imho).
The column for the number of ‘Late’ assignments seems to take up a lot of real estate. Seems like it could be a simple/narrow column.
Two of the graphs at the bottom, showing the % of students in each grade range – I would lean towards using a bar chart / histogram there. But I can see how the argument could be made for using the “bell curve” (which you used) also.
Robert,
Thanks for the feedback. Here are a few responses:
Per the requirements set forth in the competition, this dashboard has a resolution of precisely 1280×1024 pixels. When viewed full screen (not in a browser), it would exactly fill a screen running at that resolution.
The class absences and tardies line graphs do include zero values. The lines are continuous from left to right without any breaks. If you’re not seeing this, something’s wrong. Non-class days (weekends and holidays) were omitted, as you suggested. There are no gotchas that I’m aware of when using a line graph as I have here.
Using line graphs in the form of frequency polygons to display frequency distributions at the bottom rather than bars in the form of histograms was intentional. The lines show the same information in a way that involves less ink and show the shape of the distribution more clearly than a histogram.
Per the absence graph … let me give an example…
Notice in your line chart for Absences, towards the left/middle of the graph, where it looks a little like the top of a “cat’s head” (with 2 ears sticking up). The left ear ends at Feb 17 and the right ear starts at Feb 21 — there are no absences between these two dates, yet there is a line joining them at a height of 1 absence (which visually implies that there was 1 absence on Feb 18, 19, and 20 … but actually there were no absences on those dates).
Robert,
I believe that the discrepancy you’re pointing out is due to an error. When I constructed the line graphs that summarize absences and tardies, I used a different set of dates than Bryan did when he constructed the individual student absence and tardy graphics. When I discovered this, I never went back to correct it. I was lazy. Now that you’ve found me out, I’ll fix it.
By the way, if you look at the dashboard again, you will see that I removed the percentage signs, per your suggestion. Thanks.
I have created a SAS implementation of Stephen’s dashboard design:
http://robslink.com/SAS/democd63/sfew_dashboard2.htm
I tried to keep the graphical design as exact/pixel-perfect to Stephen’s original as possible. I took the liberty to add some “interactivity” as a proof-of-concept to show what is possible — most of the text and graphics in the SAS dashboard have html mouse-over text and href drilldown links associated with them, and 3 of the columns are “sortable” by clicking the column header. The ‘Help’ button is also active, and provides some basic info to help the user interpret what they’re seeing.
Here is a link to some technical info about how the dashboard was created, along with the SAS program:
http://robslink.com/SAS/democd63/sfew_dashboard2_info.htm
The magenta color is easier to read under the fluorescent lights of the classroom. The blue of the contest winner is completely washed out.
It is not about what the designer thinks looks best, but what design will work best for the end user.
Gregg – I’ve been viewing the dashboard under fluorescent lights, and the blue shows up just fine on my monitor. (Perhaps this varies from person to person, monitor to monitor, or with lighting conditions?)
Stephen,
Thank you for this example. Comparing it with the other winners’ submissions, I can clearly review my own and understand how I could’ve presented the data better.
However, one thing I’m struggling with is not how the information should be displayed, but whether it should be displayed at all. Take the number of tardies, for example. It has almost no predictive power on a grade outcome: a simple linear regression returns an r-squared of 9% vs. the latest standardized math score, and similarly 2% vs. the average of the 5 most recent assignments. What obligation do we as designers have to assess whether the data is important to the decision-making process before displaying it?
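[Editorial note: the r-squared figures Adam cites can be reproduced with a quick sketch; the tardy counts and scores below are made up for illustration, since the competition’s fabricated dataset is not reproduced here:]

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)       # least-squares line
    predicted = slope * np.array(x) + intercept
    ss_res = np.sum((np.array(y) - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((np.array(y) - np.mean(y)) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot

# Hypothetical data: tardies vs. latest standardized math score.
tardies = [0, 1, 1, 2, 3, 5, 8]
scores = [88, 72, 95, 81, 77, 90, 69]

print(round(r_squared(tardies, scores), 3))
```

A value near zero, as Adam reports for the competition data, means the regression line explains almost none of the variation in scores.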
Secondly, all the solutions focused on highlighting the bottom performers. I’m not an educator myself, but the approach I took with my design was to highlight individuals who were on the cusp of meeting/not meeting their goal grade for the class. If you have limited time and resources (which I presume you do as a teacher), shouldn’t you expend your energy where it stands the greatest chance of succeeding? Bae Kim, for example, has a grade goal of a C but, presuming a final test weighted equally with the completed assignments, has essentially zero chance of making that goal (on the assumption that future grades will be substantially similar to prior ones). Highlighting this person for attention strikes me as the dashboard encouraging the reader to waste resources, no? Finally, what obligation do we have to use knowledge about the future to help guide/predict rather than simply display the past data?
I appreciate your input!
Adam,
Regarding the predictive power of tardies in relation to grade outcomes, you are assuming that grades are the only valid measure of a student’s performance. Absenteeism and tardiness affect a student’s educational experience and are therefore of interest to teachers even if they don’t influence grade performance. The fact is, however, that absenteeism and tardiness do potentially impact a student’s grade performance, even if the data that we fabricated for this competition cannot be used to make this case statistically.
Regarding an emphasis on students who are performing poorly in an absolute sense rather than in relation to their personal goals, here’s my thinking. You could certainly make a valid argument that in some cases the primary ranking of students should be based on the variance of actual grade performance to their own goals. In the case of high school students, however, allowing students to set their own goals primarily serves as a means of encouraging them to take responsibility for their own performance and to track their progress toward their own stated goals. Most of them are not yet mature enough, however, to take this exercise seriously and to set realistic goals. In many cases their goals would be fairly arbitrary. By making the students’ goals the primary area of focus, a teacher would not be focusing on more objective measures of performance, which are probably more meaningful in this case. This was an assumption on which I based my evaluations of the dashboards that were submitted.
Stephen,
Thank you for your response. Your answer to the second part of my question re-raised, in my mind, the first question: if student goals are “fairly arbitrary” and other measures are “more meaningful in this case,” then why present the student goal data point at all? But rather than debate which specific data elements in this data set do or don’t have value, my more general question, based on your experience, is this: do we simply seek to present the data we are given in the best way possible, or does the designer have an obligation to question what is given and help separate the signal from the noise for the viewer?
Thanks again!
Adam,
Even though the goals set by many of the students are arbitrary, the teacher still finds them useful. Keep in mind that the purpose of having students set their own goals is to encourage them to take personal responsibility for their performance. Even if a student isn’t mature enough to take the goal-setting process seriously and set a realistic goal, the teacher can still use the goal as a means of motivating the student to work harder or, in some cases, as a means of encouragement, by pointing out that the student is doing better than he or she expected.
Robert,
You did a fantastic job of recreating the dashboard! Thanks for including the code for us geeks to view. Kudos to you and SAS for showing that this type of dashboard is within reach.
It might be cool to add Steve’s color legend from his defense-of-magenta post to the header, to allow people to change your fewcolor variable. It would add a little more interactivity that addresses something people commented on a lot. No, this is not a challenge.