“Should We?”: The Question That Is Rarely Asked

The unique ability of the human brain to create technologies has taken us far. The benefits of technology, however, are not guaranteed, yet we celebrate and pursue them with abandon. When we imagine new technological abilities, we tend to ask only one question: “Can we?” That is, “Can we create such a thing?” Unfortunately, we are all too good at creating what we can but shouldn’t. “Should we?”, though rarely asked, is by far the more important question.

I recently read a book by Samuel Arbesman entitled Overcomplicated. I found it intriguing, yet also utterly frightening. Arbesman is Scientist in Residence at Lux Capital, a science and technology venture capital firm. He is a fine spokesperson for his employer’s interests, for he gives the technologies that make venture capitalists rich free license to do what they will by declaring them inevitable.

Many modern technologies are now complicated in ways and to degrees that place them beyond our understanding. Arbesman accepts these over-complications as a given. In light of this, he proposes ways to study them that might yield a bit more understanding, even though, in his opinion, they will forever remain beyond our full grasp. He argues that modern technologies are like biological systems—the result of evolution rather than design—sometimes a mishmash of kluges embedded in millions of lines of programming code and sometimes the results of computers generating their own code with little or no human involvement. At no point in the book does Arbesman ask the question that was constantly screaming in my head as I read it: “Should we?” Should we create technologies that exceed our understanding and can therefore never be fully controlled? The only rational and moral answer to this question is “No, we shouldn’t.”

Arbesman assumes that we often cannot design and develop modern technologies in ways that remain within the reach of human understanding. Even though he acknowledges several examples of technologies that have created havoc because they were not understood, such as financial trading systems and power grids, he accepts these over-complications as inevitable.

As a technology professional of many years, I see things differently. These technological monsters that we create today as the products of kluges are over-complicated not because they cannot be kept within the realm of understanding and our control but because of poor, sloppy, undisciplined, and shortsighted design. Arbesman and others who pull the strings of modern information technologies want us to believe that these technologies are inherently and necessarily beyond human understanding, but this is a lie. Those who create these technologies are simply not willing to do the work that’s required to build them well.

We have a choice. We could demand better design. We could and should set the limits of human understanding as the unyielding boundary of our technologies. We can choose to only build what we can understand. This is harder than quickly and carelessly throwing together kluges or trusting algorithms to manage themselves, but it is a path that we must take to avoid destruction.

Arbesman advocates humility in the face of technologies that we cannot understand, but this is an odd humility, for it’s wrapped in hubris—a belief that we have the right to unleash on the world that which we can neither understand nor control. We may have this ability, but we do not have this right, for it is an almost certain path to destruction. Along with most of the technologists that he admiringly quotes in the book, Arbesman seems to embrace all information technologies that can be created as both inevitable and good—a reverence for Technology with a capital “T” that is both irrational and dangerous.

I’m certainly not the only technology professional who is concerned about this. Many share my perspective and express it, but our concerns are not backed by the deep pockets of technology companies, which currently set the agendas and shape the values of cultures throughout the developed world. The fear that our technologies could do great harm if left uncontrolled has been around for ages. This is a reasonable fear. In his film Jurassic Park, Steven Spielberg poignantly expressed this fear regarding biological technologies. There’s a great scene in the movie when a scientist played by the actor Jeff Goldblum asks the questions that we should always ask about potential technologies before we create and unleash them on the world. The scene accurately frames the problem as one that results from the selfishness of those who care only about their own immediate gains, never raising their eyes to look further into the future and never doubting the essential goodness of their creations, despite the monsters we are capable of creating.

Although this concern about unbridled technological development is occasionally expressed, it has had little effect on modern culture so far. Each of us who cares about the future of humanity and understands that the arc of technological development can be brought into line with the interests of humanity without sacrificing anything of real value should do what we can to voice our concerns. In your own organization, when an opportunity to create, modify, or uniquely apply a technology arises, you can ask, “Should we?” This might not be the path to popularity—those who choose to do good are often unappreciated for a time—but it is the only path that doesn’t lead to destruction. Be courageous, because you should.

Take care,

Stephen Few

15 Comments on ““Should We?”: The Question That Is Rarely Asked”


By John Long. October 17th, 2016 at 9:08 pm

On your recommendation, I read the book “Risk Savvy,” which I have found to be instrumental in understanding all kinds of decisions and situations in my life. I am a software engineer who focuses on ensuring quality and reliable products, and I immediately recognized that software engineering is characterized by uncertainty, not risk. Complicated systems, built and tested by humans of varying backgrounds and abilities, are all mixed together in the hope that they will work the way they were intended. But the possible interactions simply confound any attempt to ensure that nothing will go wrong.
Yet, in this article, it sure sounds like you believe software development to be a matter of risk, that if we can just measure the right things and put in the right controls, all will be safe and understood. I’ve been programming and testing programs for 30 years, and I have no idea what you’re talking about.
It’s not that I don’t think design could be better, testing more robust and systems better managed. But I have worked with some amazing software engineers, even entire teams of them. And not one of them could tell you exactly how an entire system the size of, say, Adobe Photoshop, works. It is, literally, beyond human understanding because the number of things that can go wrong is staggering, even in the best designed code.
And it seems to me that, just as it was in the housing bubble, attempts to come up with metrics and measurements that introduce “certainty” into a fundamentally uncertain exercise will only serve to exacerbate and magnify problems, not solve them.

By Mark. October 18th, 2016 at 8:00 am

Excellent article, which I agree with. I’ve seen many IT departments pick a tool and then try to solve a problem with it, rather than establishing what the problem was first and then selecting the appropriate tool.

I like your warning from Jurassic Park – I was thinking of Skynet from the Terminator series before I got that far in your article.

By Nick Desbarats. October 18th, 2016 at 8:15 am

Great read. Thanks, Steve.

Having just read Nick Bostrom’s Superintelligence, which catalogs the risks of an AI “intelligence explosion”, I share your concerns even more now than I did a few months ago. One of Bostrom’s insights, however, is that we may or may not have the option to abort the development of a new technology, depending on what that technology is. Technologies that require large capital investments to develop are, of course, relatively easy to regulate and monitor, so the concern is really around new technologies that small groups with relatively little funding (i.e., that may not even have VC backing) could potentially develop. Because the costs of powerful R&D ingredients such as cloud computing and gene sequencing are plummeting, more and more “breakthroughs” are within reach of larger numbers of smaller (i.e., less and less disciplined/ethical) groups.

Therefore, when an ethically-minded group is within reach of a breakthrough, they must not only ask themselves if they should proceed, but also, if they don’t proceed, how long it might take for a potentially less ethical, more short-sighted group to develop that same technology, and to perhaps develop it with fewer safeguards in place. With the plummeting cost of R&D inputs, the answer to that question is more and more often, “not very long”. As such, I suspect that the answer to the question, “Should we develop this technology?” is moot in many cases, and should be replaced with, “Is this technology less risky in our hands right now than in someone else’s hands a few years (or months) from now?”

I’m not suggesting that this is a good or desirable state of affairs (it’s not), only that asking “Should we develop this technology?” may not be the right question to ask in an increasing number of cases.

By Stephen Few. October 18th, 2016 at 9:09 am

John,

Yes, I think that we can build information technologies in a way that keeps them understandable. It’s certainly possible that some technologies cannot be built in this way, but I’m not convinced that this is true. Part of the solution is to avoid the use of huge teams of mediocre developers whose efforts cannot be coordinated, instead using smaller teams of more highly skilled designers and developers. Another part of the solution is to give development teams enough time to do good work.

Whether or not we can build understandable technologies, the heart of my argument remains: we should not build what we cannot understand and control. As we turn more and more of our lives over to computers, we can no longer ignore the potential for harm. I am not advocating a retreat from technologies. Rather, I’m advocating a more thoughtful approach to them.

Like Arbesman, you seem to accept overcomplicated technologies as inevitable. You don’t seem to consider the possibility of saying “No” to technologies that we cannot understand and therefore cannot control. How can we passively accept the destructiveness of our creations as inevitable? By doing so we are playing God; a myopic, careless God.

By Stephen Few. October 18th, 2016 at 9:22 am

Nick,

The question “Is this technology less risky in our hands right now than in someone else’s hands a few years (or months) from now?” is not a replacement for the question “Should we develop this technology?” It is a part of the “Should we?” question. “Should we develop this technology in light of the fact that, if we don’t, someone else will do it less well and in ways that lead to even more harm than our implementation?” Your question and the fact that you see it as a viable replacement for “Should we?” make me nervous. How much evil has been unleashed on the world by people who argued, “If we don’t do it, someone else will and the harm will be even greater”? This strikes me as a convenient rationalization for some very self-serving and harmful decisions.

By John Long. October 18th, 2016 at 10:59 am

Stephen,
I have huge amounts of respect for you and your work. But I am completely baffled by what you are saying here.
Maybe I don’t understand what you mean when you say “understand and control”. Is this, in your mind, some kind of absolute understanding and complete control? Or is it more of a threshold of understanding and control?
Even if you get agreement there, I also see no way to make any of your objections binding without global state controls that would surpass the Soviet Union at its worst. Code is written, cheaply, by millions of people across the globe. Where do you insert these requirements?
I also think you overestimate how much has been understood of technological progress before now. Have you ever watched or read Connections, by James Burke? Technological progress has been chaotic, poorly understood and subject to accidents of history and individuals throughout human history. It is a messy business, and understanding has been the exception, not the rule.
Please help me understand you. What kind of software development process are you proposing? What kinds of systems and languages do you think would produce the kind of understanding and control you are asking for?

By John Long. October 18th, 2016 at 11:32 am

Let me simplify my question even further: What does “saying no” look like in actual practice? You say I haven’t considered it as a possibility, but I say you haven’t defined it in a way that I can consider.

By Stephen Few. October 18th, 2016 at 11:35 am

John,

I am not suggesting that this will be easy or that we can do it perfectly. If we make the attempt, we will certainly make mistakes along the way. The problem is that we are not making the attempt. We are failing to do this, in part, because of a false and dangerous assumption that over-complications leading potentially to destruction are inevitable, so why bother?

Technological advances do not need to be as messy as they typically are. The fact that they’ve been messy in the past is no excuse for allowing them to be as messy in the future. Can we do better? I believe we can. Should we do better? You bet we should. Do we have a choice when the alternative is almost certainly suicidal? We are not entirely subject to the whims of evolution. These powerful yet flawed brains of ours can chart a better course. Not entirely and not perfectly, but we can do a hell of a lot better than we are. The first step is to acknowledge the need, along with our ability and responsibility to address it.

Saying “no” looks like this. When we contemplate the creation of a new technology, we do our best to anticipate its ramifications. If it appears that the technology will present a significant risk of harm, we try to figure out if that risk can be eliminated. If it cannot be eliminated or, at a minimum, significantly reduced, we decide that, until the risks can be addressed, we don’t create the technology. We also put safeguards in place to prevent others from creating the technology, much as we do today when faced with known threats, such as potential annihilation by nuclear weapons.

When faced with potential destruction, do you suggest that we do nothing? Don’t wait for me to propose a fully-formed solution before taking this seriously. I don’t have a fully-formed solution. I’m expressing a concern. This concern is not exclusively mine. I’m inviting others, such as you, to share and address this concern. If the concern is legitimate, let’s not ignore it until someone comes up with a perfect, fully-formed solution. Let’s work together to create that solution, even though it will be difficult and will never be perfect.

By Alberto Cairo. October 18th, 2016 at 12:59 pm

Steve, a related concept, often discussed in the literature about the ethics of science and technology, is the “precautionary principle” (http://www.sehn.org/precaution.html).

Here’s an introduction:

http://unesdoc.unesco.org/images/0013/001395/139578e.pdf

By Brian M. October 18th, 2016 at 2:23 pm

I think there is a practical, rather than ethical, reason why some people argue that these types of technologies (and their inevitable blowback) are inevitable… human nature. The observational evidence on humanity is not encouraging. Human beings seem, as a species, woefully unable to control their cleverness and love of technological tools and toys. Yes, there are individuals that are able to do so, who are capable of restraint, who consider the long-term implications of their work, and who act accordingly, driven by ethics and moral values rather than simply by profit or ego. But for every one of these people there are those on the other end of the spectrum, equally skilled but without the restraint. History implies (if not screams) that if there is money or power to be had through the creation of a tool or technology, then it will be created by somebody somewhere. And it will be used in the attempt to leverage that use into more money and/or power.

The question is not whether we should ask “should we” (I mean, “duh!”). The question is, does it really matter if those that care enough to ask are the only ones that do? After all, the ones who stop long enough to ask the question are probably not the ones you have to worry about. If the church burns down, the choir probably did not set the fire.

I find this reality to be rather depressing, but I also find it to be reality. That doesn’t mean that you are wrong to urge precaution and thought. Nor does it mean that Arbesman is right to just throw up his hands and say there is nothing to be done. But it does mean that the problem is much more intractable than simply somehow espousing caution or teaching the precautionary principle in schools. There are those humans who have the ability and the means to build the technology, but who do not have the restraint or the inclination to acquire it. Protecting us from our technology is not just a matter of getting responsible people to agree to act responsibly. It’s a matter of actively stopping the irresponsible from exercising their vision of creativity. This is a much more Orwellian problem, and one for which I have no realistic solution. And that is why I find the reality to be so depressing, because I think that it is entirely plausible (if not almost certain) that we will end up where Arbesman suggests, totally overwhelmed by the effects (both direct and indirect) of our technologies, but, unlike Arbesman, I don’t think humility is likely to help much.

By Stephen Few. October 18th, 2016 at 3:08 pm

Brian,

You’ve framed this eloquently. I don’t underestimate the magnitude and difficulty of the task. It would be easy to throw up my hands in frustration, but I have some hope that, if the need for greater responsibility in our approach to technologies were clearly exposed, there might be enough good in humanity to put useful ethical guidelines in place. I’ll gladly do what I can to craft a practical solution if and when an opportunity arises, but for now it’s important to convince people that one’s needed. As much as I share your pessimism regarding humanity, I desperately want to believe that there is reason for hope.

By Andrew Craft. October 19th, 2016 at 9:45 am

It doesn’t help that a lot of these creators are software developers, many of whom can be exceedingly defensive about what they do, unable to take any criticism. Which is unfortunate, because criticism is needed to stop them from doing things like designing frameworks on top of frameworks on top of frameworks ad nauseam, or completely rewriting every existing computer technology in their latest favorite programming language.

I once started a flame war on a dev forum by saying (not to anyone in particular) that code is not art. So many complete strangers seemed to take it as a direct personal insult, as if the suggestion that what they create every day isn’t a priceless masterpiece were deliberately meant to wound them. Sad, since code is tremendously useful in practical applications – why does it need to be more?

Of course this all probably sounds familiar to you.

By Nick Desbarats. October 19th, 2016 at 12:41 pm

Thanks, Steve. I should be clear, though, that I’m not suggesting that the “Is this technology less risky in our hands right now than in someone else’s hands a few years from now?” question is always a replacement for the “Should we develop it?” question. The first question should only be appended to the second when the risk of a less-responsible actor developing the same technology seems high. Based on your rewording of the question, I think we’re saying more or less the same thing.

I’m more of a technology skeptic than most and am certainly not looking for reasons to justify the development of as many new technologies as possible. I recently read Kevin Kelly’s “The Inevitable” and found several of his predictions to be nightmarish even though that wasn’t his intent.

I also wholeheartedly agree that we need to do everything possible to try to avoid or at least contain harmful new technologies but, as Brian and others on this thread have eloquently underscored, we need new ways of doing so because the current ones, including regulation and encouraging R&D organizations to act with more foresight, don’t have a very good track record. Your example of nuclear technologies illustrates this well, with North Korea having detonated several warheads and a number of Russian devices now unaccounted for despite large investments in regulation and security.

Bostrom’s “Superintelligence” was a real eye-opener for me regarding just how hard it would be to prevent or contain something like a generalized artificial intelligence. It would be surprisingly easy, for example, to actually increase the risk of civilizational catastrophe via regulations or policies based on the best of intentions. I’m not saying that we shouldn’t try (we have to try), but coming up with policies or other interventions that don’t accidentally backfire is harder than I thought it was a few books ago.

By Stephen Few. October 19th, 2016 at 1:03 pm

I appreciate the thoughtfulness of everyone’s responses. Solving this will take great and intelligent effort. To begin, I think it’s important to clue the general public into the fact that technologies, including information technologies, are not inherently good for the world. Despite the tremendous benefits of good technologies, we must create and use technologies with care.

By Christopher Butler. November 4th, 2016 at 8:03 am

Steve,

A friend of mine pointed me to your site after reading a post of mine on a similar theme — whether building conversant machines is, in the grand scheme of things, best for humanity, and whether it’s as inevitable as we are led to believe (http://chrbutler.com/talking-to-machines). As you so aptly put it, those who promote technological progress (especially those involved in VC) promote specific visions that benefit very few as if they were inevitable, as opposed to what they really are: possibilities for which they need people’s money to pursue.

One additional thread that I addressed is the nature of predicting the future. What’s in vogue right now is “predicting” — market analysis that yields an expectation for what will come next — rather than imagining possible futures and specifically investing in the ones that everyone agrees are good for humanity. You can look at so many devices and trace them back to inceptions in science fiction, which means that our imaginations tend to install ideas into the common consciousness before they are technically possible. The iPad was preceded by Star Trek’s PADD, among many others. My exploration had to do with approaching something like voice interfaces in the same way. We are told they’re inevitable (and that the price is surveillance), but why not invent them differently?

Thanks for such a thoughtful discussion. I’m glad to have been introduced to your writing!

– CB
