Gartner’s Annual Magic Show

Even though I’ve questioned the usefulness and integrity of Gartner’s Magic Quadrant many times when it’s been applied to products related to analytics and data visualization, I’ve recently realized that there’s at least one aspect of the Magic Quadrant for which we should be grateful: the honesty of its name. By calling the quadrant “magic,” Gartner helpfully hints that it should not be taken seriously—it’s magical. We should approach it as we would the performance of a stage magician. When reading it, we should suspend disbelief and simply enjoy the ruse. Gartner’s Magic Quadrant is an act of misdirection, sleight of hand, smoke and mirrors. Understood as such, it’s grand entertainment.

Gartner recently published the 2017 edition of its “Magic Quadrant for Business Intelligence and Analytics Platforms.” As in past years, it is not a valid assessment of the products and vendors. Unfortunately, it will nevertheless be used by many organizations to select products and vendors. There is a dirty little secret about the Magic Quadrant that most industry leaders won’t publicly admit: few of them, including the vendors themselves, take the Magic Quadrant seriously as a valid assessment. They laugh about it in whispers and behind closed doors. They do take it seriously, however, as a report that exercises an undeserved degree of influence. The Magic Quadrant is a highly subjective evaluation of products and vendors that reflects the interests of the vendors that appear on it and Gartner’s interests as well, for Gartner is paid dearly by those vendors. Gartner’s coverage is about as “fair and balanced” as Fox News.

Although analytics (i.e., data sensemaking) should always have been central to any serious review of Business Intelligence tools, it took Gartner several years to shift the focus of its Magic Quadrant for Business Intelligence to one that supposedly embraces its importance—a transition that Gartner now claims is complete. Unfortunately, Gartner does not understand analytics very well, so it bases its evaluation on criteria that reflect the interests and activities of the vendors rather than a clear understanding of analytical work and its requirements. The criteria largely focus on technical features rather than on the fundamental needs of data analysts, and many of the features on which Gartner bases its assessment are distractions at best and, in some cases, recipes for disaster.

I won’t take the time to critique this year’s Magic Quadrant in detail, but will instead highlight a few of its flaws.

The Magic Quadrant displays its findings in a scatterplot that has been divided into four equal regions: “Niche Players,” “Challengers,” “Visionaries,” and “Leaders.” As with all scatterplots, a quantitative scale is associated with each of the axes: “Completeness of Vision” on the X-axis and “Ability to Execute” on the Y-axis.

[Figure: Gartner’s 2017 Magic Quadrant for Business Intelligence and Analytics Platforms]

The actual measures that have been assigned to each vendor for these two variables are not shown, however, nor are the underlying measures that were combined to come up with these high-level measures. Gartner is not willing to share this data, so we have no way to assess the merits of the results that appear in the Magic Quadrant. Even if we could see the data, though, the Magic Quadrant would be of little use, for it doesn’t measure the most important qualities of BI and analytics products, nor is it based on data that is capable of assessing the merits of these products. We cannot actually measure a vendor’s ability to execute or its completeness of vision. Gartner’s conclusion that the vendors with the most complete visions are Microsoft, followed at some distance by Salesforce and ClearStory Data, is laughable. The visions that place these vendors at the forefront in the Magic Quadrant will not lead to improved data sensemaking. Something is definitely wrong with the way that vision is being measured.

If I decided to use a scatterplot to provide a summary assessment of these products, I would probably associate “Usefulness” with one axis and “Effectiveness” with the other. What matters most is that the tools that we use for data sensemaking provide the functionality that is most useful and do so in a way that works well.
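Incidentally, if anyone wants to see how simple such a chart would be to produce, here is a minimal sketch in Python using matplotlib. The vendors and scores are entirely hypothetical placeholders (no such expert ratings exist today), so this illustrates only the form of the chart, not an actual assessment:

import matplotlib.pyplot as plt

# Hypothetical 0-10 expert ratings: (usefulness, effectiveness).
# These are illustrative placeholders, not assessments of real vendors.
vendors = {
    "Vendor A": (8, 7),
    "Vendor B": (5, 9),
    "Vendor C": (3, 4),
}

fig, ax = plt.subplots()
for name, (usefulness, effectiveness) in vendors.items():
    ax.scatter(usefulness, effectiveness, color="steelblue")
    ax.annotate(name, (usefulness, effectiveness),
                xytext=(5, 5), textcoords="offset points")

ax.set_xlabel("Usefulness: provides the functionality that matters most")
ax.set_ylabel("Effectiveness: that functionality works well")
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_title("A summary assessment (hypothetical data)")
plt.show()

Unlike the Magic Quadrant, a chart like this could be published together with the underlying scores and the criteria behind them, so readers could judge the assessment for themselves.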

The Magic Quadrant is almost entirely based on responses to questionnaires that are completed by the vendors themselves and by those who use their products. This is not the basis for a meaningful evaluation. It is roughly equivalent to evaluating Mr. Trump’s performance by asking only for his opinion and that of those who voted for him. The degree of bias that is built into this approach is enormous. Obviously, we cannot trust what vendors say about themselves, nor can we trust the opinions of those who use their products, for they will almost always be biased in favor of the tools that they selected and use and will lack direct knowledge of other tools. The best way to evaluate these products would involve a small team of experts using a good, consistent set of criteria to review and test each product as objectively as possible. Questionnaires completed by those who routinely use the products could be used only to alert the experts to particular flaws and merits that might not be obvious without extensive use. Why doesn’t Gartner evaluate the field of vendors and products in this manner? Because it would involve a great deal more work and require a team of people with deep expertise acquired through many years of doing the actual work of data sensemaking.

Immediately following a two-sentence “Summary” at the beginning of the report, Gartner lists its “Strategic Planning Assumptions,” which are in fact a set of six prognostications for the near future. Calling them assumptions lends these guesses a credence that they don’t deserve. They are not predictions based on solid evidence; they read more like a wish list. Let’s take a look at the list.

By 2020, smart, governed, Hadoop/Spark-, search- and visual-based data discovery capabilities will converge into a single set of next-generation data discovery capabilities as components of modern BI and analytics platforms.

At least one member of Gartner’s team of BI analysts must have a marketing background. This is meaningless drivel that can neither be confirmed nor denied in 2020.

By 2021, the number of users of modern BI and analytics platforms that are differentiated by smart data discovery capabilities will grow at twice the rate of those that are not, and will deliver twice the business value.

For some unknown reason, this prediction will take a year longer than the others to be realized. What are these “smart data discovery capabilities”?

Smart data discovery — introduced by IBM Watson Analytics and BeyondCore (acquired by Salesforce as of September 2016) — leverages machine learning to automate the analytics workflow (from preparing and exploring data to sharing insights and explaining findings). Natural-language processing (NLP), natural-language query (NLQ) and natural-language generation (NLG) for text- and voice-based interaction and narration of the most statistically important findings in the user context are key capabilities of smart data discovery.

First off, I hope this doesn’t come true, because these so-called “smart data discovery capabilities” are almost entirely hokum. Relinquishing control of data sensemaking to algorithms would be the death of meaningful and useful analytics. Regardless, there is no actual way to confirm whether those who use these capabilities will “grow at twice the rate” of those who don’t, and there certainly isn’t a way to measure a two-fold increase in business value. Even if Gartner defined what it means by these measures, it wouldn’t have a way to gather the data.

By 2020, natural-language generation and artificial intelligence will be a standard feature of 90% of modern BI platforms.

This is somewhat redundant, because Gartner defines smart data discovery, addressed in the previous prediction, as incorporating machine learning and natural-language processing. I’m assuming that by “artificial intelligence” Gartner is actually referring to machine learning algorithms, because none of these products will incorporate true AI by 2020, and probably never will.

By 2020, 50% of analytic queries will be generated using search, natural-language processing or voice, or will be autogenerated.

According to this, by 2020 the 90% of BI products that incorporate natural language processing will be used to generate 50% of all queries through natural language interfaces. That sounds cool, but isn’t. Natural language is not an efficient way to generate data sensemaking queries. Anyone who knows what they’re doing will prefer to use well-designed interfaces that allow them to directly manipulate information and objects on the screen rather than using words.

By 2020, organizations that offer users access to a curated catalog of internal and external data will realize twice the business value from analytics investments than those that do not.

What do they mean by a “curated catalog”? Here’s the closest that they come to a definition:

A curated agile data catalog where business users can search, access, find and rate certified internal data as well as open and premium external data with workflow — in order to promote harmonized data to certified status — is becoming key to governed modern deployments leveraging complex distributed data with an increasing number of distributed content authors.

This is mostly gobbledygook. Without a clear definition, this prediction can never be confirmed, and even if the meaning were clear, we would not be able to determine whether these features led to “twice the business value.”

The sixth and final prediction is one of my favorites:

Through 2020, the number of citizen data scientists will grow five times faster than the number of data scientists.

As I’ve written before, there is no science of data. The term data scientist is a misnomer. Even if this were not the case, there is no commonly accepted definition of the term, so this prediction is meaningless. Now, add to this a new term that is even more meaningless—“citizen data scientist”—and we have the makings of complete nonsense. And finally, if in 2020 you can demonstrate that the number of so-called citizen data scientists grew five times faster than the number of so-called data scientists, I’ll give you my house.

It’s ironic that Gartner makes such unintelligent statements about business intelligence and such unanalytical statements about analytics and then expects us to trust them. Unfortunately, this irony is missed by most of the folks who rely on Gartner’s advice.

Take care,

Stephen Few

17 Comments on “Gartner’s Annual Magic Show”


By Nicol Sandoli. March 5th, 2017 at 12:22 am

What about the Critical Capabilities Report? Do you have a similar opinion, or is it something we could use as a reference for understanding the BI and analytics landscape?

Thank you,
Nicola

By Stephen Few. March 5th, 2017 at 9:40 am

Nicol,

I’ve never read Gartner’s Critical Capabilities Report. If it’s based on the same sources as the Magic Quadrant report, it is similarly biased, and if it’s based on the same criteria, it doesn’t reflect the most important capabilities of BI/Analytics products.

By David Hinchee. March 5th, 2017 at 2:21 pm

Hi Steve,

In my role as an accountant and financial analyst, I often prepare analytic reports and visualizations. I’m no expert in these areas, and I rely on sites like yours to make my presentations as informative as possible.

To date Excel has been my tool of choice, but I’ve recently started to learn R, and I’m beginning to appreciate the advantages of a 1980s-era command line scripting interface over a “modern” GUI. Learning R is subtly changing the way I think about the analytic problem I’m trying to solve by forcing me to think more about the problem and less about the process.

It seems to me that the natural language and voice recognition features that Gartner touts are a lot like a GUI—just another interface to get between analyst and data. For me, interacting with data is integral to the learning process. Maybe by 2020 I’ll be able to offload my analysis work to Alexa, but it’ll come at the expense of my understanding of the data I’m working with.

Thanks again for the tremendous work you do and the information you provide in your books and on this site.

David

By Dale Lehman. March 6th, 2017 at 5:33 am

Personally, I like GUI interfaces (being a JMP user myself – which I find superior in so many ways to the alternatives). But that is an aside. Stephen, thank you for this column. I’ve been testing a number of these products for use in my Business Analytics program and I’ve tried to make sense out of the Magic Quadrant – but I can’t. It seems to me that all of the vendors are moving across the value chain so that they will soon “offer” all of the capabilities required to do analytical work (or data sensemaking, to use your term). However, realizing their “offer” is not easy. All of the products fall down at one or more of the critical parts of the value chain. What Gartner’s analysis should have done is illuminate where their strengths and weaknesses lie.

Of course, the critical component is, and always will be, the ability of the user to meaningfully think about data and how to improve decision making. Tools can help this ability – but many do not. I don’t know what “completeness of vision” is either, but it certainly doesn’t seem to include the critical component of the user’s thought process. It almost looks like Gartner is evaluating outdoor grills rather than business intelligence products.

By Stephen Few. March 6th, 2017 at 9:30 am

Dale,

The full Magic Quadrant report does cover the supposed “strengths” and “cautions” of each vendor that Gartner has chosen to include in the list. Unfortunately, however, Gartner does not have a clear sense of what these products ought to do, so these strengths and cautions are of limited value.

By Kenneth Black. March 7th, 2017 at 7:56 am

Hi Stephen,

I like your candid style and I agree with your statement: “The best way to evaluate these products would involve a small team of experts using a good, consistent set of criteria to review and test each product as objectively as possible.”

Well, I might not represent a small team of experts, but I do have a lot of experience in software development, testing and evaluation by making direct comparisons between products. I also happen to have a lot of experience in the real-world application of analytics software.

With this in mind, a few months ago I decided to cut through the hype of Microsoft Power BI to see if there is any truth behind the marketing claims. The journey I am on is continuing, but it has already uncovered some interesting findings.

I do believe that this type of testing is what you are referring to in your article. The problem is, most people don’t want to do it because it takes a lot of work and they have a fear of recrimination in their career. I’m not too concerned with speaking the truth. In fact, there is a lot more I could say that has emerged from this testing that directly supports several of your claims in this article. Maybe one day, I’ll write those findings and share them, too.

Thanks,

Ken

By David Jones. March 7th, 2017 at 9:13 am

Stephen,

I’m curious: if each vendor is equally biased towards itself, and each customer of each vendor is equally biased towards their own choice of vendor, doesn’t the bias across the population sort of negate itself? I’m not saying you should ignore it, but rather that it is consistent and therefore less significant when isolating any individual vendor’s location.

I don’t think this makes the quadrant any more useful, I just don’t understand the significance of the bias if it’s consistent across all data points. For an analogy, what if each point was artificially moved up 1 increment? The relation between the data points remains, so what is the true net impact of the 1 incremental movement?

Another way I am trying to make sense of this is: if each vendor’s customers have the same opportunity to be biased towards their vendor choice, but some end up more biased than others, isn’t that a meaningful data point? Client A prefers their choice but isn’t nearly as biased as client B is about theirs… is there no value in trying to measure that?

I understand this is completely subjective and there may not be any significance to the exact relation of (or distance between) one vendor to the next, but if one vendor is so much more preferred by its own clients than another, then I would think there is value in understanding, even broadly, that one vendor is preferable to another on one axis.

Final thought, and I am curious to get your feedback.

The MQ is not an empirical piece of research, it’s a tool used to make business decisions. It is, therefore, not a surprise to many that it would also be influenced by business. My real estate agent just helped me buy my second home. I am not surprised the real estate broker association has literature about the virtues of home ownership. I don’t have the expectation the data points used in their literature paint a full picture of all factors of home ownership, but I don’t understand the value (or meaning) in pointing out their content is designed to fuel their own industry. If I didn’t know that going in, I would think there are far greater problems with the concept of me spending that kind of money in the first place. I get that every data point needs context, but if this needs to be pointed out with the MQ, you likely are not in a position to spend a ton of money on one of these vendors anyway.

By Stephen Few. March 7th, 2017 at 10:05 am

David,

The data that Gartner gathers from its questionnaires is different from what you seem to imagine. Gartner does not ask vendors and their customers to cast votes for the top vendor. If that is what it did, with each vendor getting one vote and an equal number of customers for each vendor getting one vote each, then an equal level of bias would result in each vendor earning an equal number of votes. Instead, the questionnaires ask questions about particular features and plans. Responses from vendors and their customers don’t reflect objective reality about the products, not just because of their biases, but also because they are the wrong people to assess product merits and because the questionnaires don’t capture the data that is needed to assess product usefulness and effectiveness.

You seem to assume that business decisions should not be made based on empirical research. In fact, decisions such as product selection should definitely be made based on empirical research. Organizations such as Gartner are expected by their customers to conduct valid empirical research, but they don’t. If you ask them to reveal how they came to their conclusions, they will all claim that they used reliable and sophisticated research methods, but will never reveal them. Imagine a scientist who published findings but refused to reveal her data or research methods. We wouldn’t trust that scientist, would we? Similarly, we shouldn’t trust organizations such as Gartner that hide their data and methods in a black box.

P.S. Even your real estate agent should provide you with objective information and advice to serve your interests. When they don’t, they’re not doing their jobs.

By David Jones. March 7th, 2017 at 11:05 am

Stephen,

I appreciate your answers. I obviously have not participated in one of their questionnaires, so this is all helpful context. I am not doubting that the MQ has its flaws, I’m just trying to better understand them to determine if there is any function for their work.

I guess my disconnect is that while the idea of empirical research is ideal, I am not surprised that in reality it doesn’t exist. Being able to bridge the gap between what I am reading vs. what I am told I am reading is just wisdom, accumulated over time. I’m still working on accumulating mine, and feel I have a long way to go.

I am curious, what is so fundamentally flawed about a company going to a vendor’s clients and asking, ‘you are an imperfect user of vendor A, in an imperfect environment, attempting an imperfect task. Would you mind rating your experience?’ If you collect enough data points, trends will certainly appear, separating some vendors from others in certain ways, wouldn’t they? Is there no value in the collective responses? Or am I off in left field and the questions are not categorized like this at all?

Now, there is the critical issue that the criteria are hidden from the reader. No doubt, an indefensible flaw. Take me, for example: an uneducated reader misinterpreting what I see.

The irony of your point is well made: concealing the data, in a conversation about understanding data, is just bad. Shouldn’t that be the warning a reader needs to know that the content needs context?

I guess if the folks at Gartner are overstating their findings, giving more credit to the significance of their work than what is true, and you are trying to help folks put it in perspective, then I see your point.

By Stephen Few. March 7th, 2017 at 11:16 am

David,

As I mentioned in my original blog post, I think that questionnaires that are answered by users of these products are at best useful for exposing flaws that might not be obvious without extensive use of those products. This could clue a team of expert evaluators into the existence of problems that they might not find on their own. Other than this, I don’t see any value in these questionnaires because few users of these products have enough expertise in data sensemaking or experience with a broad range of products, which are both needed to assess the merits of these products.

By David Jones. March 7th, 2017 at 12:13 pm

Stephen,

Thanks for taking the time to respond to my questions, and for your patience with my lack of understanding of the questionnaire. I certainly have a better understanding of what the mq is and isn’t after your post.

By Chris Brobin. March 9th, 2017 at 4:45 am

Hi Stephen, good article – thank you for sharing.

It would be interesting to see whether there is a correlation between the premier and platinum vendors who sponsor Gartner and their relative locations on the Magic Quadrant. I cannot seem to identify who actually sponsors the event, though.

Regards,

Chris

By Stephen Few. March 9th, 2017 at 10:15 am

Chris,

I suspect that there is a strong correlation between the relative rankings of vendors and the relative amounts of money that they pay Gartner. We’ll never know for sure, because Gartner won’t make this information available. Vendors that appear in the Magic Quadrant subscribe to Gartner’s vendor services. I doubt that there are any vendors that appear in the Magic Quadrant that don’t subscribe to Gartner’s services or any vendors that do not appear in it who do subscribe. In my opinion, organizations that evaluate vendors should not be allowed to accept money from those vendors. The risk of bias is too great.

By Broken Analysis. January 8th, 2018 at 3:02 pm

Stephen Few, I hate to tell you, but your analysis of Gartner’s analysis is also flawed. Putting out statements such as “I suspect that there is a strong correlation between the relative rankings of vendors and the relative amounts of money that they pay Gartner” shows your ignorance. Let me explain.

You have bashed a particular Forrester analyst (Boris Evelson) for jumping outside of his area of expertise, yet you have exhibited the same. You have complained that Gartner hasn’t made the data for the Quadrant public, yet you expect others to just “accept” your statement of correlation.

Have you considered the opposite as a possibility? What if there were an inverse correlation? Why wouldn’t it be in the best interest of an analyst to bash vendors and then charge them to become a client to raise their score?

By Stephen Few. January 8th, 2018 at 4:38 pm

Broken Analysis,

You are always welcome to point out flaws in my reasoning, but I insist that you do so accurately if you wish to engage in discussion. Unlike Evelson, I have made no claims that are not firmly within my realm of expertise. Evelson made claims regarding data visualization, yet I know for a fact, based on personal interaction with him, that he has no expertise in this field. Also, you state that I claimed a correlation without evidence. I made no such claim. As you pointed out yourself, I merely said that I “suspect” a correlation between Gartner’s ratings of vendors and the amount of money that those vendors have paid Gartner. I cannot demonstrate the existence of this correlation one way or the other because Gartner will not share the relevant data. If Gartner ever provides the data, we can determine if this correlation exists. The possibility that you suggested—that it might be in Gartner’s interest to “bash vendors and then charge them to become a client to raise their score”—does not conflict with my claim. In fact, evidence suggests that Gartner has indeed done this. The fact that Tableau Software never appeared anywhere on the Magic Quadrant until after it began to pay for Gartner’s services certainly hints at this possibility.

If you wish to engage in discussion in this forum in the future, please identify yourself by name. I’ve found that those who remain anonymous or use pseudonyms in discussions such as this often do so for a reason.

By Scott Stephens. March 2nd, 2018 at 8:31 am

Stephen,

You say that a better way of evaluating vendors is to use a “small group of experts using a good, consistent set of criteria.” Is there any place I can go to read what you would consider to be a “good, consistent set of criteria”? I am new to this and would love to learn more. So far, I’ve really enjoyed learning your thoughts on this subject. I’d love to dive deeper.

Best,
Scott

By Stephen Few. March 2nd, 2018 at 9:32 am

Scott,

I wish I had an encouraging answer for you, but I don’t. I’m not aware of any organization that evaluates the merits of analytics products in the manner that’s needed. Building a good team of experts and allowing them to work without being influenced by the vendors would be expensive. It could certainly be done, and should be, but organizations such as Gartner are managed like typical businesses with a focus on maximum profits. As such, they don’t hire the best experts, who are expensive, and they accept money from the vendors, which introduces biases. Another reason why this venture is so expensive is the fact that there are far too many products to evaluate. The marketplace is overrun by mostly poor products. It simply wouldn’t be possible to evaluate every product. The solution to this particular problem involves narrowing the field: a basic set of criteria would be needed to perform a quick, relatively inexpensive first pass to determine which products merit full evaluation.

As it is, organizations that are shopping for analytics products must rely primarily on their own expertise to evaluate them. To do this effectively, they must have at least one person on staff who is a truly expert analytics practitioner. They must also make sure that their experts are not subjected to inappropriate incentives to favor some products over others, including the engaging smiles of attractive sales representatives.
