Thanks for taking the time to read my thoughts about Visual Business Intelligence. This blog provides me (and others on occasion) with a venue for ideas and opinions that are either too urgent to wait for a full-blown article or too limited in length, scope, or development to require the larger venue. For a selection of articles, white papers, and books, please visit my library.


Building Better BI

June 14th, 2017

I was recently made aware of an article about Business Intelligence (BI) that struck me as extraordinary and smart. The author, Paul Holland, is a BI consultant who has a great deal of real-world BI experience. In the article, he emphasizes the role of people rather than technologies in BI success, which is correct and under-appreciated. Paul has given me permission to share his article with you here in my blog.


Building Better BI: Should I Ride the Donkey or the Bullet Train?

Recently, I was in conversation with a senior executive of a business who was explaining to me how spending many hundreds of thousands of pounds on a very shiny, new and aggressively marketed BI visualisation platform would enable them to access even more of their data than ever before.

Now, it would have been impolite of me to point out that accessing data (any and nearly all of this company’s data) is not a problem to begin with, and that this statement alone indicates the deeply flawed understanding and approach that many senior managers in need of analysing and understanding their business(es) seem to arrive at: namely, that we should spend more money on cutting-edge IT systems to gain a competitive advantage. Furthermore, these senior managers often control the purse strings of an organisation or remain deeply influential in how a company invests in its IT infrastructure. On these occasions, such processes as there are amount to little more than a fait accompli; your organisation will end up with a new and shiny silver-bullet IT/information system whether you like it or not or, worse still, whether you need it or not.

Consequently, it should come as no surprise to many of you reading this that many organisations are putting down enough money to buy a nice apartment in Paris in order to buy a contemporary BI visualisation tool; the one someone has enthusiastically told you ‘will solve all your reporting problems, believe me, really it will’. What is a surprise, to me at least, is that having sunk this money into something so powerful (and I’ve seen what I’m going to say next happen on many, many occasions), they then simply connect this expensive, shiny ‘BI system’, full of Promethean promise, straight up to their existing data sources. Or, in a similar vein, they will take the myriad spreadsheets they’ve built over the years and just bang them into their shiny new system, spending an incalculable amount of time trying to get what they had before working exactly like it did before and looking exactly like it did before. So, to reduce it to its crudest and least sophisticated form, they unplug the old thing and then simply plug the old thing into the new thing, thereby producing a new old thing. Get it?

Job done then.

And so the rallying cry goes out through the organisation, “relax everybody, you are now going to get access to the most powerful information you’ve ever seen!”.

Only you are not.

You see, BI is not simply an IT system any more than an F1 car is simply a driving machine. It is the amalgamation and unification of a set of processes, different business units, strategies, people, and skill-sets combining within an organisation to produce meaningful and viable information. And this makes it most of all a ‘people thing’ rather than a technology thing. In fact, to be more precise, I’m going to argue that BI is about what I call the three Ps: people, purposes, and processes. So in my world, there are no Ts in the Ps!

Certainly, from my experience, understanding and managing these three factors is what makes producing good BI much harder than simply buying what are increasingly becoming very large investments in visualisation systems or BI suites. And if you rely on the IT system to deliver your reporting salvation, no matter how advanced and cutting-edge you’ve been told it is, then you are probably heading straight for the interminable BI graveyard. Don’t just take my word for it; any review of the literature on the subject will reveal you are in good company here. I know of one large organisation in England that has invested its scarce resources in three, yes that is three, new shiny silver-bullet BI visualisation tools in the last decade. All of them failures. In fact, two of them went virtually unused for the lifespan of the investment. This should serve as a cautionary tale to anyone thinking of getting an easy fix for their money.

Furthermore, what has most commonly been gained for this considerable investment is simply easier access to a morass of existing company data, resplendent with all its inherent data quality problems. And all too often this comes with the added ‘benefit’ of actually increasing costs or workloads due to the subsequent addition of a ‘BI’ team to focus on making these old data connections work just like they did before. Typically, in a classic case of technological determinism, the system creates the organisation, the workflow, and the resourcing post facto, after the thing itself has been bought. So rather than simplifying things and reducing costs, you can end up with a larger team consisting of business or data analysts and IT people, all of whom will spend considerable time working towards returning you, potentially, to the state you began the exercise in. I mean, what is the point in spending something like a quarter of a million pounds if you can’t replicate the trusty, ill-defined spreadsheet I had before, huh? I’ve actually seen people sit with their spreadsheet, which they call a ‘dashboard’, spending valuable time recreating the exact same thing in their sophisticated and powerful visualisation tool, usually sucking in the resources of two or three other office staff nearby. I’ve also been in training sessions where people have asked for the system to be redesigned so that they can recreate their local, specific, and rather limited-use spreadsheet, which they then keep active afterwards to cross-check the new view that has been created at great cost for them in their shiny new system full of Promethean promise. Is it just me or does this seem wrong, crazy even?

This type of behaviour, which I think is not uncommon across organisations (just look at the books and conferences out there), seems to me akin to building something like a railroad and then continuing to ride along it with your donkeys instead of trains. Sure, your donkeys now have better and easier access to the existing route, and you know your donkey so well, too, that you can keep it straight on the tracks, but you absolutely do not need a train line to ride along on a donkey, do you? So why spend so much money on a railroad to keep your donkeys running? Similarly, I ask myself, why would any organisation spend close to £300k on hooking up a sophisticated visualisation platform only to recreate what it already had, or has had before: a proliferation of rows and columns and some (far too often) red and green bar charts and pie graphs? I expect you to get bullet trains for £300k, not donkeys.

Now I’d argue that you don’t buy a new system just because you want better, or more efficient, or easier access to existing reports. You buy a system like this because your organisation has made a strategic decision to understand its business better, to measure activity and performance, to seek out inefficiencies and wasted resources or use of time; to understand, measure and refine its processes or proposition to its customers. Bear in mind though that often this is still looking backwards at your business, what’s happened already, but with a good strategy and the right people you can also use such a system to look forwards, to predict certain outcomes for you or to measure the magnitude of some change that is occurring in your business too. That’s the kind of power I expect to get at my fingertips for £300k.

I’m willing to argue that 90% or more of the companies, organisations, and large corporations out there already have access to more data than they can possibly manipulate, never mind contemplate. We’ve been collecting lots of data for a long time now and, furthermore, accessing this data has not really been a problem; people have been doing it routinely for decades. Understanding how to define and make better use of this data is actually what we have been doing wrong for decades, and for me it is this work that is fundamental to successfully building and deploying effective BI solutions in an organisation. This is where I think the focus needs to change: to move away in the first instance from the software, the data, and the databases, and to focus your time and investment instead on engaging with people, purposes, and processes. The three ‘Ps’, if you will.

In the work I do with clients, it’s the three Ps first, rather than data, that are the focus of any BI undertaking I am involved in. Even NASA and contemporary astrophysicists know that people are really what you need to build a model, to confirm a hypothesis, to verify the data, and to help turn it into useful information. It is surprising in this day and age, but there are some jobs people are just better at than a computer. That’s why Professor Brian Cox, on the latest series of Stargazing Live, ‘farmed out’ to viewers the task of analysing large amounts of astronomical data to identify patterns that might indicate a ninth body in the solar system. Surely science departments the world over have supercomputers and programmers to analyse this data, no? And yet it is deemed that people at home can do this job better. And that’s because data is just that, data, but with people you garner understanding, comprehension, nuances, and connections about the subject of your inquiry too.

See it here: https://www.zooniverse.org/projects/marckuchner/backyard-worlds-planet-9/about/research

So, even with the greatest dataset, computers, and powerful algorithms to hand, some jobs are done best by people. And in keeping with this point of view, that’s why, when it comes to BI, I always start with people and not with data. Data will not build you an effective BI system, no matter how much of it you cram into your data warehouse. But people who require information about their business to make informed decisions, to predict problems, and to deploy resources efficiently will help you build an effective BI system: one that is fit for purpose, one that informs decision making, and one that they themselves will have confidence in using too.

So what do I do then? It would be too much to detail here, so I’ll outline my methodology briefly to counterpoint my arguments above.

Well, you now know I don’t start with data when I help someone build a BI solution. Instead, I start with the purpose, the reason why someone needs it. I investigate the processes that are the subject of the purpose, which helps me understand the breadth of the subject area and the systems related to it. I discuss these needs and work out with individuals and groups why they need that information, when they need it, and to whom they need to present it, whether internally or externally to the organisation. In unison with the relevant people, we then grade the importance of each item or category identified for serving the purpose and collate the results, thereby building a record of prioritised needs for the technical team and any associated project members to continue working from. In short, we essentially build what I think of as an information landscape, an information map of requirements for the organisation, which leads to the compilation of a set of contained business questions that address the purpose(s) we started out with. I call this my ‘virtuous circle’: everything done should be harmonious with, and work towards, satisfying the purpose. Ultimately, this process also helps to delimit the scope of the design and solution, thus helping to avoid the insidious ‘scope creep’ of a project. These processes also have the benefit of producing a definitive record of what has been included, excluded, assessed, defined, and agreed upon by the business unit/owner of the solution.
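To make the shape of such a record concrete, here is a minimal sketch of what one fragment of a prioritised information landscape might look like if captured as a data structure rather than in an A4 file. The field names, questions, and gradings are entirely hypothetical, invented for illustration; the point is only that each contained business question is tied to a purpose, an audience, and an agreed priority:

```python
from dataclasses import dataclass

@dataclass
class InformationNeed:
    question: str   # the contained business question
    purpose: str    # the purpose this question serves
    audience: str   # who consumes the answer, internally or externally
    priority: int   # importance agreed with the business; 1 = highest

# A hypothetical fragment of an information landscape.
landscape = [
    InformationNeed("How has referral volume changed month on month?",
                    "capacity planning", "executive board", 2),
    InformationNeed("Which sites exceed agreed staffing ratios?",
                    "resource deployment", "site managers", 1),
]

# The collated record is handed to the technical team in priority order.
for need in sorted(landscape, key=lambda n: n.priority):
    print(f"P{need.priority}: {need.question}")
```

Nothing about this requires a BI tool; it is simply an agreed, ordered list of questions that the eventual system must answer.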

It is only after all of this work has been done that we begin to sit down with technical database staff and the like to identify the right data items to bring into the data warehouse and, subsequently, how that data will be treated when it comes into the warehouse to ensure the veracity of the information that is produced. This process ensures that the data being brought into the warehouse matches the business’s definitions and meets the purpose of inquiry, not a technical definition of a field somewhere or a piece of data that may be compiled from sources that are unknown to the business.

I have no doubt that sometime in the future this method and its record will also prove invaluable to you when system problems occur, when data items change in core systems, and for those arguments that happen in meetings when people claim you have the wrong numbers or are measuring the wrong thing. You simply pull out your lovely fat A4 file and patiently take them through it, and if you are feeling cheeky, you can ask them to show how their numbers were derived, who defined them, who agreed they are fit for this purpose, and so on. These definitions should, in practice, become the authoritative source for reporting in the area concerned. No more arguments about what something means; well, maybe a lot less argument! It also provides more confidence in using and sharing the reports built from this approach between departments, managers, and analysts alike.

And of course we do end up talking about data with ‘the business’, no matter how hard I try to avoid it in the nascent stages. This is to a large degree historical, in that people are often conditioned to see data as information and vice versa. Often the people responsible for providing requested information are also system/data gatekeepers in some part of the organisation. Understandably, they often make their own decisions about how to compile or consolidate different data to create a metric. They think in fields, tables, and lookups, not in terms of information and the life-cycle it encapsulates for the consumer: how it will be consumed, its potential audience(s), and its purpose.

I know I’ve covered a lot of issues and ideas here already, but consider this before I finish: I haven’t once mentioned the BI system itself, the software, have I? No technology-company fireworks and sexy quadrants, no industry white papers, no product names, no slick features, no current and fashionable packages or systems at all. And that’s because you don’t really need a silver bullet to begin building your BI suite or programme. You can do all the things I’ve suggested above without handing over a single penny to a software seller. In fact, remaining system- or solution-agnostic at an early stage will allow for more open thinking and for ideas to percolate to the top. So it will be no surprise by now to learn that I’m of the opinion that if you do some of these things prior to tendering for a system, it will only aid your journey in finding the best-aligned and most economical visualisation system for your organisation. And who knows, you may even find that the tools you already have in house are capable of delivering the types of information and visualisations you need, which means you get to keep that hard-earned £300k in your back pocket after all. Good for you and good for your business.

Paul Holland

Details Regarding the Future of the Visual Business Intelligence Workshops

June 2nd, 2017

This blog entry was written by Nick Desbarats.

As Steve has recently announced, he’s decided to transition away from teaching in the coming months to focus on new projects. After his more than 15 years in the field of data visualization and more than 30 in IT, hopefully we can all find our way to forgiving him for shifting his focus. It seems futile to try to estimate the number of people and organizations that have learned how to understand and communicate their data from Steve, but to say he’s changed the day-to-day practice of data visualization for more people in more ways than anyone else doesn’t strike me as hyperbolic.

While anyone who’s attended one of Steve’s workshops will tell you that it was transformative, I was completely bowled over by the public workshop that I attended in Minneapolis in 2013. My data analysis and visualization work suddenly and unexpectedly collided with my longstanding interest in research findings in the fields of perception, cognition, neuroscience, decision-making, and design. I’d seen and done plenty of public speaking by that point, but the skill with which this potentially esoteric knowledge was explained and the accessible and engaging way in which it was delivered were astonishing. My eyes were opened to the importance of these crucial skills, the absence of which leads to bad decisions that cause untold suffering and waste around the globe every day.

Shortly after attending that workshop, I approached Steve and rather sheepishly asked if he’d ever considered adding a second instructor to teach his courses. Our subsequent conversations quickly veered away from data visualization and into a strikingly wide array of topics, touching on pedagogical research, evolutionary psychology, critical and statistical thinking, organized religion, artificial intelligence, and the nature of science, to name but a few. Seven months and the steepest learning curve I’ve ever experienced later, I began teaching Steve’s courses as on-site group workshops at organizations such as NASA, Bloomberg, and the Central Bank of Tanzania. Seeing lightbulbs go off above more than 1,000 workshop participants’ heads since then has been incredibly gratifying.

As Steve mentioned, I’ll soon begin teaching public workshops in addition to the private workshops and consulting engagements that I’ve been delivering via Perceptual Edge since 2014. I won’t say that I’ll try to fill Steve’s shoes, since that would clearly be delusional; however, I will say that I’ll bring the same drive to increase data analysis and communication competence in the world to this work. Specifically, the following changes will occur on January 1st, 2018:

  • The Show Me the Numbers, Information Dashboard Design, and Now You See It courses will be offered via my new consultancy, Practical Reporting Inc., the website for which will be launched this summer and announced on this blog. The content of these courses will remain the same aside from some updating and minor tweaking, and Steve’s books will continue to be provided to all workshop participants.
  • Public workshops will continue to be offered in the U.S. and internationally. Public workshop locations, dates and registration links will be posted on the Practical Reporting website.
  • I’ll continue to deliver dashboard design consulting services and private, on-site training workshops for groups of 30 to 70 participants. These services will start being offered through Practical Reporting instead of Perceptual Edge.
  • I’ll continue to write about data visualization, dashboard design, and other topics, but will begin to do so on the Practical Reporting blog following its launch. I’ll also be soliciting feedback on sections of a new book on which I’m working that proposes a blueprint for organizing and designing whole data presentation systems that include dashboards, as well as other types of information displays, such as lookup displays and self-serve analysis displays.

Being mentored by Steve has been a unique and life-changing experience for which I will always be grateful, and his friendship is one that I’ll continue to hold dear. Teaching his workshops is an awesome responsibility, but it’s one that I relish. I hope you’ll join me as I take the torch from Steve and continue to teach his courses, courses that I found so transformative and insightful back when he first taught them to me.

Nick

Future Plans for the Visual Business Intelligence Workshops

June 1st, 2017

Since founding Perceptual Edge back in 2003, I’ve put in an enormous number of miles on airplanes. I’ve made most of these flights to teach workshops. After 14 years of this, I’m sure you’ll understand when I say that I’m weary of travel.

I’m leading up to some news. I will only be teaching three more of my Visual Business Intelligence Workshops. This work is no less important than when I began, but I’m now ready to focus my efforts differently and to spend more of my time close to home.

When I began this venture, data visualization was not well known, but that would soon change. I helped to build data visualization into the popular field of study and work that it is today, but with popularity has come an incursion of nonsense and bad practices. I trust that the tide will shift in time, but I’ve done my part as an educator and will now leave it to others to carry on this important work.

In September of this year I’ll teach my final U.S.-based Visual Business Intelligence Workshop in Portland, Oregon, and in April of next year I’ll teach my final non-U.S.-based workshop in Stockholm, Sweden. The only other workshop that I’ll be teaching will take place in Sydney, Australia next week. If you find this disappointing, I have some good news. My workshops will continue, just not with my direct involvement. Nick Desbarats, who has been teaching my courses privately for the last few years, will begin teaching my courses in public workshops as well through his own company, Practical Reporting. Nick will describe his plans in this blog soon.

Now that little of my time will be dedicated to teaching and travel, I’ll be shifting the focus of my work to more research and writing. I have several projects that have been waiting in the wings for uninterrupted time to become available. I’m thrilled that this time is finally at hand.

Take care,

Signature

Lollipop Charts: “Who Loves You, Baby?”

May 17th, 2017

If you were around in the ‘70s, you probably remember the hard-edged, bald-headed TV police detective named Kojak. He had a signature phrase—“Who loves you, baby?”—and a signature behavior—sucking a lollipop. The juxtaposition of a tough police detective engaged in the childish act of sucking a lollipop was entertaining. The same could be said of lollipop charts, but data visualization isn’t a joke (or shouldn’t be). “Lollipop Chart” is just a cute name for a malformed bar graph.

Bar graphs encode quantitative values in two ways: the length of the bar and the position of its end. So-called lollipop charts encode values in the same two ways: the length of the line, which functions as a thin bar, and the position of its bulbous end.

Lollipop Chart Example

A lollipop chart is malformed in that its length has been rendered harder to see by making it thin, and its end has been rendered imprecise and inaccurate by making it large and round. The center of the circle at the end of the lollipop marks the value, but the location of the center is difficult to judge, making it imprecise compared to the straight edge of a bar, and half of the circle extends beyond the value that it represents, making it inaccurate.
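The inaccuracy is easy to quantify. Here’s a small sketch, using hypothetical pixel sizes rather than measurements from any particular charting tool, that converts a lollipop dot’s radius into data units to show how far past the true value the dot extends:

```python
def lollipop_overshoot(dot_radius_px, axis_max, axis_length_px):
    """How far past the encoded value the dot extends, in data units.

    The dot is centered on the value, so it always reaches beyond
    (and falls short of) the value by its radius; a bar's flat edge
    stops exactly at the value.
    """
    data_units_per_px = axis_max / axis_length_px
    return dot_radius_px * data_units_per_px

# Example: a dot with an 8 px radius on a 400 px axis running from
# 0 to 100 overreaches the value it marks by 2 full data units.
print(lollipop_overshoot(dot_radius_px=8, axis_max=100, axis_length_px=400))
```

A bar of any width has zero overshoot by this measure, which is the whole point: the lollipop trades away accuracy for nothing but a rounder shape.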

What inspired this less effective version of a bar graph? I suspect that it’s the same thing that has inspired so many silly graphs: a desire for cuteness and novelty. Both of these qualities wear off quickly, however, and you’re just left with a poorly designed graph.

You might feel that this is “much ado about nothing.” After all, you might argue, lollipop charts are not nearly as bad as other dessert or candy charts, such as pies and donuts. This is true, but when did it become our objective to create new charts that aren’t all that bad, rather than those that do the best job possible? Have we run out of potentially new ways to visualize data effectively? Not at all. Data visualization is still a fledgling collection of visual representations, methods, practices, and technologies. Let’s focus our creativity and passion on developing new approaches that work as effectively as possible and stop wasting our time striving for good enough.

Take care,

Signature

What Is Data Visualization?

May 4th, 2017

Since I founded Perceptual Edge in 2003, data visualization has transitioned from an obscure area of interest to a popular field of endeavor. As with many fields that experience rapid growth, the meaning and practice of data visualization have become muddled. Everyone has their own idea of its purpose and how it should be done. For me, data visualization has remained fairly clear and consistent in meaning and purpose. Here’s a simple definition:

Data visualization is a collection of methods that use visual representations to explore, make sense of, and communicate quantitative data.

You might bristle at the fact that this definition narrows the scope of data visualization to quantitative data. It is certainly true that non-quantitative data may be visualized, but charts, diagrams, and illustrations of this type are not typically categorized as data visualizations. For example, neither a flow chart, nor an organization chart, nor an ER (entity relationship) diagram qualifies as a data visualization unless it includes quantitative information.

The immediate purpose of data visualization is to improve understanding. When data visualization is done in ways that do not improve understanding, it is done poorly. The ultimate purpose of data visualization, beyond understanding, is to enable better decisions and actions.

Understanding the meaning and purpose of data visualization isn’t difficult, but doing the work well requires skill, augmented by good technologies. Data visualization is primarily enabled by skills—the human part of the equation—and these skills are augmented by technologies. The human component is primary, but sadly it receives much less attention than the technological component. For this reason, data visualization is usually done poorly. The path to effective data visualization begins with the development of relevant skills through learning and a great deal of practice. Tools are used during this process; they do not drive it.

Data visualization technologies only work when they are designed by people who understand how humans interact with data to make sense of it. This requires an understanding of human perception and cognition. It also requires an understanding of what we humans need from data. Interacting with data is not useful unless it leads to an understanding of things that matter. Few data visualization technology vendors have provided tools that work effectively because their knowledge of the domain is superficial and often erroneous. You can only design good data visualization tools if you’ve engaged in the practice of data visualization yourself at an expert level. Poor tools exist, in part, because vendors care primarily about sales, and most consumers of data visualization products lack the skills that are needed to differentiate useful from useless tools, so they clamor for silly, dysfunctional features. Vendors justify the development of dumb tools by arguing that it is their job to give consumers what they want. I understand their responsibility differently. As parents, we don’t give our children what they want when it conflicts with what they need. Vendors should be good providers.

Data visualization can contribute a great deal to the world, but only if it is done well. We’ll get there eventually. We’ll get there faster if we have a clear understanding of what data visualization is and what it’s for.

Take care,

Signature