Even though I’ve questioned the usefulness and integrity of Gartner’s Magic Quadrant many times when it’s been applied to products related to analytics and data visualization, I’ve recently realized that there’s at least one aspect of the Magic Quadrant for which we should be grateful: the honesty of its name. By calling the quadrant “magic,” Gartner helpfully hints that it should not be taken seriously—it’s magical. We should approach it as we would the performance of a stage magician. When reading it, we should suspend disbelief and simply enjoy the ruse. Gartner’s Magic Quadrant is an act of misdirection, sleight of hand, smoke and mirrors. Understood as such, it’s grand entertainment.
Gartner recently published the 2017 edition of its “Magic Quadrant for Business Intelligence and Analytics Platforms.” As in past years, it is not a valid assessment of the products and vendors. Unfortunately, it will nevertheless be used by many organizations to select products and vendors. There is a dirty little secret about the Magic Quadrant that most industry leaders won’t publicly admit: few of them, including the vendors themselves, take the Magic Quadrant seriously as a valid assessment. They laugh about it in whispers and behind closed doors. They do take it seriously, however, as a report that exercises an undeserved degree of influence. The Magic Quadrant is a highly subjective evaluation of products and vendors that reflects the interests of the vendors that appear on it, and Gartner’s interests as well, for Gartner is paid dearly by those vendors. Gartner’s coverage is about as “fair and balanced” as Fox News.
Although analytics (i.e., data sensemaking) should always have been central to any serious review of Business Intelligence tools, it took Gartner several years to shift the focus of its Magic Quadrant for Business Intelligence to one that supposedly embraces its importance, a transition that Gartner now claims is complete. Unfortunately, Gartner does not understand analytics very well, so it bases its evaluation on criteria that reflect the interests and activities of the vendors rather than a clear understanding of analytical work and its requirements. The criteria largely focus on technical features rather than on the fundamental needs of data analysts, and many of the features on which Gartner bases its assessment are distractions at best and, in some cases, recipes for disaster.
I won’t take the time to critique this year’s Magic Quadrant in detail, but will instead highlight a few of its flaws.
The Magic Quadrant displays its findings in a scatterplot that has been divided into four equal regions: “Niche Players,” “Challengers,” “Visionaries,” and “Leaders.” As with all scatterplots, a quantitative scale is associated with each of the axes: “Completeness of Vision” on the X-axis and “Ability to Execute” on the Y-axis.
The actual measures that have been assigned to each vendor for these two variables are not shown, however, nor are the underlying measures that were combined to produce these high-level measures. Gartner is not willing to share this data, so we have no way to assess the merits of the results that appear in the Magic Quadrant. Even if we could see the data, the Magic Quadrant would be of little use, for it doesn’t measure the most important qualities of BI and analytics products, nor is it based on data that is capable of assessing the merits of these products. We cannot actually measure a vendor’s ability to execute or its completeness of vision. Gartner’s conclusion that the vendors with the most complete visions are Microsoft, followed at some distance by Salesforce and ClearStory Data, is laughable. The visions that place these vendors at the forefront in the Magic Quadrant will not lead to improved data sensemaking. Something is definitely wrong with the way that vision is being measured.
If I decided to use a scatterplot to provide a summary assessment of these products, I would probably associate “Usefulness” with one axis and “Effectiveness” with the other. What matters most is that the tools that we use for data sensemaking provide the functionality that is most useful and do so in a way that works well.
The Magic Quadrant is almost entirely based on responses to questionnaires that are completed by the vendors themselves and by those who use their products. This is not the basis for a meaningful evaluation. It is roughly equivalent to evaluating Mr. Trump’s performance by asking only for his opinion and that of those who voted for him. The degree of bias that is built into this approach is enormous. Obviously, we cannot trust what vendors say about themselves, nor can we trust the opinions of those who use their products, for they will almost always be biased in favor of the tools that they selected and use, and will lack direct knowledge of other tools. The best way to evaluate these products would involve a small team of experts using a good, consistent set of criteria to review and test each product as objectively as possible. Questionnaires completed by those who routinely use the products could be used only to alert the experts to particular flaws and merits that might not be obvious without extensive use. Why doesn’t Gartner evaluate the field of vendors and products in this manner? Because it would involve a great deal more work and require a team of people with deep expertise acquired through many years of doing the actual work of data sensemaking.
Immediately following a two-sentence “Summary” at the beginning of the report, Gartner lists its “Strategic Planning Assumptions,” which are in fact a set of six prognostications for the near future. Calling them assumptions lends these guesses a credibility that they don’t deserve. They are not predictions based on solid evidence; they read more like a wish list. Let’s take a look at the list.
By 2020, smart, governed, Hadoop/Spark-, search- and visual-based data discovery capabilities will converge into a single set of next-generation data discovery capabilities as components of modern BI and analytics platforms.
At least one member of Gartner’s team of BI analysts must have a marketing background. This is meaningless drivel that can neither be confirmed nor denied in 2020.
By 2021, the number of users of modern BI and analytics platforms that are differentiated by smart data discovery capabilities will grow at twice the rate of those that are not, and will deliver twice the business value.
For some unknown reason, this prediction will take a year longer than the others to be realized. What are these “smart data discovery capabilities”?
Smart data discovery — introduced by IBM Watson Analytics and BeyondCore (acquired by Salesforce as of September 2016) — leverages machine learning to automate the analytics workflow (from preparing and exploring data to sharing insights and explaining findings). Natural-language processing (NLP), natural-language query (NLQ) and natural-language generation (NLG) for text- and voice-based interaction and narration of the most statistically important findings in the user context are key capabilities of smart data discovery.
First off, I hope this doesn’t come true, because these so-called “smart data discovery capabilities” are almost entirely hokum. Relinquishing control of data sensemaking to algorithms will be the death of meaningful and useful analytics. Regardless, there is no actual way to confirm whether those who use these capabilities will “grow at twice the rate” of those who don’t, and there certainly isn’t a way to measure a two-fold increase in business value. Even if Gartner defined what it means by these measures, it wouldn’t have a way to gather the data.
By 2020, natural-language generation and artificial intelligence will be a standard feature of 90% of modern BI platforms.
This prediction is somewhat redundant, because Gartner defines smart data discovery, addressed in the previous prediction, as capabilities that incorporate machine learning and natural-language processing. I’m assuming that by “artificial intelligence” Gartner is actually referring to machine learning algorithms, because none of these products will incorporate true AI by 2020, and probably never will.
By 2020, 50% of analytic queries will be generated using search, natural-language processing or voice, or will be autogenerated.
According to this, by 2020 the 90% of BI products that incorporate natural-language processing will be used to generate 50% of all queries through natural-language interfaces. That sounds cool, but isn’t. Natural language is not an efficient way to generate data sensemaking queries. Anyone who knows what they’re doing will prefer well-designed interfaces that allow them to directly manipulate information and objects on the screen rather than using words.
By 2020, organizations that offer users access to a curated catalog of internal and external data will realize twice the business value from analytics investments than those that do not.
What do they mean by a “curated catalog”? Here’s the closest that they come to a definition:
A curated agile data catalog where business users can search, access, find and rate certified internal data as well as open and premium external data with workflow — in order to promote harmonized data to certified status — is becoming key to governed modern deployments leveraging complex distributed data with an increasing number of distributed content authors.
This is mostly gobbledygook. Without a clear idea of what a curated catalog is, this prediction can never be confirmed, and even if the meaning were clear, we would not be able to determine whether these features led to “twice the business value.”
The sixth and final prediction is one of my favorites:
Through 2020, the number of citizen data scientists will grow five times faster than the number of data scientists.
As I’ve written before, there is no science of data. The term data scientist is a misnomer. Even if this were not the case, there is no commonly accepted definition of the term, so this prediction is meaningless. Now, add to this a new term that is even more meaningless—“citizen data scientist”—and we have the makings of complete nonsense. And finally, if in 2020 you can demonstrate that so-called citizen data scientists grew five times faster than the number of so-called data scientists, I’ll give you my house.
It’s ironic that Gartner makes such unintelligent statements about business intelligence and such unanalytical statements about analytics and then expects us to trust them. Unfortunately, this irony is missed by most of the folks who rely on Gartner’s advice.