Today, many products we use in our daily lives are nothing short of miraculous. There’s an increasingly impressive array of well-crafted mobile apps that do a myriad of things. Many modern appliances are easier to operate than ever; even our thermostats have become smart and connected. But all these advances apply mostly to the products we choose to use — consumer products. The products we must use at work are frankly sad in comparison. Since most of us spend more time at work than at home, this is a travesty. Often, when I rent a car or ask to change an airplane ticket, my heart sinks. These transactions are performed using software that looks incredibly similar to interfaces we saw on computers back before the mouse was commonplace. If it isn’t black and white, or even a green screen, it’s a garish set of 8-bit colors (without any hint of retro-cool). Granted, there are many web-based enterprise apps that make better use of a mouse-based interface. But even these bear a closer resemblance to Web 1.0 than to the consumer products of the last 10 years.
At times, the enterprise software landscape can feel hopeless. But it isn’t any more hopeless than consumer design was as recently as a decade ago. In a world of iPhones, it’s easy to forget that most everyday things used to be designed without any serious attempt to make them approachable or functionally intuitive, let alone enjoyable to use. In 1988, that was certainly true of nearly everything we used that had an interface. That was the year Donald Norman published a book called The Design of Everyday Things. In it he presented a set of ideas intended to change the way we approached designing daily objects. Norman’s book helped pave the way for the design of consumer products that we have today.
Seeing Potential, Not Just Problems
I first read The Design of Everyday Things sometime around 1994, as an undergraduate student. At the time, I was spending large portions of my day typing Bash commands (even on a Mac that had a relatively friendly graphical user interface). Simple tasks like writing an email were laborious. I would often type and then have to wait 10 seconds or more to see a handful of letters appear. But I didn’t think twice about using a ridiculously slow connection to compose my messages. The value of the telnet client and other clunky software seemed blindingly obvious to me.
The same was not true for the vast majority of my friends studying design or other applied arts. To them, it was just painfully inaccessible, unintuitive technology. This stuff was for experts, aka geeks, and it wasn’t that interesting.
But those of us who did find it interesting found it breathtaking. Together with a handful of compatriots who were crossing the divide between computer science and design, I would endlessly evangelize that computers and software were in the process of changing our world. Soon, we professed, new ways of thinking about computers would make them accessible to everyone; their value would become undeniable. Learning how to harness computers’ potential would be well worth the effort. We were faced with an opportunity to help reshape computing and be part of redesigning our world.
I believe that a similar opportunity exists today, and it’s one that too many designers are missing out on. But fully understanding that opportunity might require a couple of steps further back into the history of software design.
Looking Back to Look Ahead
As young designers in the mid-1990s, we’d been inspired by the work of people like Douglas Engelbart, Mark Weiser, and John Seely Brown (aka JSB). In a 1991 paper called “The Computer for the 21st Century,” Weiser had described a world where computers would drift into the background and be capable of leveraging our intuition. This was the paper that first described “ubiquitous computing,” and it has been endlessly referenced ever since. Illustrated with real prototypes, the paper described how computers could “account [for] the natural human environment.” To my friends and me, Weiser’s ideas seemed within reach, but few of our design peers agreed. To them, it was just an academic paper full of hard-to-grasp concepts. The prototypes might have been real, but all we had were pictures.
Another inspiration was MIT’s truly incredible “Aspen Movie Map” (1978–1981). Though it was already over 15 years old, to us, this work clearly showed the future of navigation. Using LaserDiscs, the movie map demo showcased controllable video footage that enabled a virtual drive around Aspen, Colorado. As with Google Street View, you could go anywhere, moving backward and forward, and look in any direction. We’d dream about how amazing it would be to create maps like these of the whole world. Though it was an easy concept to grasp, it was just as easy to dismiss as a theoretical novelty. Clearly, making it a reality would require inordinate resources. “It’s just more geeky tech,” our friends would say.
Our last big inspiration was even older: Dr. Engelbart’s so-called “Mother of All Demos,” a 1968 presentation in San Francisco that lasted a little more than an hour. Engelbart’s demo had revolutionized the direction of computer interfaces. He demonstrated the first working examples of video conferencing, the word processor, and a graphical interface controlled with a mouse — interactions that showed how computers could be “humanized.”
By 1994, these technologies weren’t ubiquitous, but they were relatively accessible. Engelbart had also demonstrated “hypertext,” and we were now watching it become a reality in the form of Tim Berners-Lee’s just-unveiled World Wide Web.
Arguing with our skeptical designer friends, we tried to use the logic that Engelbart’s work had probably seemed as intangible in 1968 as Weiser’s concepts — and even the Aspen Movie Map — did in 1994. And technology was only set to advance faster — hadn’t they heard of Moore’s law? Surely anyone could see that battling with 1994’s computers was worth the effort?
Perhaps not surprisingly in hindsight, all my budding designer friends heard was a bunch of hyperbole. All they saw were clunky, pixelated demos. Our impassioned protestations of an impending computer revolution inspired mild derision. I am sure that some were concerned that we were losing it. I will never forget the look of anxiety on the face of a girlfriend as I blathered on about my explorations with the latest release of HTML and an update to the “graphic interchange format” — the gif89a, otherwise known as the “animated gif.”
But the transformation they couldn’t see was already taking shape. After Donald Norman’s book, The Design of Everyday Things, something shifted. He’d made some of the same connections between technology, social science, philosophy, psychology, and anthropology that Weiser, JSB, and others had, but Norman did so in a way that was accessible. He described how it wasn’t just computers that were unintuitive and over-engineered: almost all everyday objects failed to communicate their function effectively. Everything from door handles to telephones was designed in a manner that made it unapproachable and difficult to use. Norman argued that designers needed to adopt new methods to make things usable and engaging. To this end, he outlined a concept he called user-centered design (UCD).
Over the next 20 years, many designers adopted Norman’s concepts and evolved his notions. The terms he introduced, like affordance, have become commonplace in organizations throughout the world. UCD methodologies and adaptations of human-centered approaches gave design a credible voice that helped many companies begin to see the value of designing products rather than just engineering them to work. As a consequence, we’ve seen both software and hardware evolve in a way that was unimaginable even to the most enthusiastic of us. Well-crafted interfaces that present meaningful content, rich imagery, and real-time video have become commonplace. Computers connect billions of people, not just in desktop browsers but across a multitude of other connected devices. The transformation is still barely fathomable.
An equally amazing shift has happened in the way we design products. Design has grown from a terribly undervalued discipline to an integral component of many companies. Designing for technology is no longer a niche specialty of little interest to the vast majority of budding designers. Surrounded by inspiring examples of well-designed products, many young designers look to get into interaction and experience design.
Designers have successfully embraced the promise of technology. We’ve learned to collaborate with product owners and engineers and often advocate for ways to build products. Now, there are long-established methodologies, processes, and toolkits that designers commonly use. And the way design practices should evolve is a matter of vigorous debate.
The Next Transformation
But for all of this radical change, we’re far from eradicating products and software that are clunky, cumbersome, and downright unfulfilling to use. Ironically, many of those early visions of humanized computing — like Weiser’s, JSB’s, and Engelbart’s — focused on technology for work. The products we use in everyday life have changed significantly, but the software made specifically for us to use at work, particularly specialist enterprise software, hasn’t seen anything like the same profound transformation.
Explanations for this state of affairs abound. A complex web of legacy systems underpins many enterprise systems, and it isn’t uncommon for 30 years’ worth of data and business policy to be wrapped up in ancient mainframe technology, making it difficult and expensive to replace. Often, various groups within organizations and external to them have influence over the software businesses use, and they hold wildly differing perspectives on replacing that software — and on whether it’s even important to do so. Inevitably, the concern arises that introducing anything new will require workforces to be retrained — not just for one role but probably for many. There may even be legal implications. Redesigning and rebuilding enterprise software, and then getting organizations to migrate, means corralling and aligning a lot of different people — and seeing beyond short-term obstacles. Partly as a result of this difficulty, there aren’t many inspiring examples out there to draw designers in. Why work on enterprise apps when you can work on a beautiful consumer product?
All of these obstacles are real. So why am I so excited to work in enterprise software design? Part of it is the feeling of déjà vu I get from the state of design for business and enterprise applications. In some ways, it doesn’t look all that different from the landscape of consumer technology 20 years ago. Just like then, the best examples do little more than suggest what the software could be.
But even seeing that potential requires a leap of faith. Just like then, the constraints that can hinder creativity and design excellence need to be managed. Just like then, it seems easier to conceive of better solutions and illustrate them than it will ever be to implement these designs. And as with the design of any product two decades ago, it requires designers to acquire skills that they didn’t sign up for.
All these factors represent an opportunity to deliver true impact and contribute to the evolution of design, just as they did 20 years ago, when new methodologies and approaches ended up reshaping how people spend their days. We need new ways of designing that will make the workplace a more approachable and rewarding place to be. These new design methodologies need to be aware of everything that preceded them, but they need to be new, too. We need to evolve the methodologies that reshaped consumers’ interactions with technology for the business context. We need to embrace this different context and determine what is approachable, rewarding, and easy to use. As designers, we need to focus more on efficiency and effectiveness than on engagement or common models of intuition. We’ll need to rethink notions of co-creation. Perhaps the whole notion of how we think about user-centered design will need to evolve…
Taking the Leap
Enterprise and business software — for commerce or non-profit — should be as human-centered and approachable as today’s consumer technology. And it is the responsibility of designers to help get us there. Just like we did in the late ’90s and early 2000s, we need designers who can look beyond where things are today and strive toward a state that may seem unattainable. We need designers who are willing to work on things that aren’t in the limelight or perceived as cool — at least not yet.
We need designers who don’t have a preconceived notion of the “right way to design,” but instead are actively looking to define new ways to design. We need designers who aren’t afraid to be completely wrong. Designers who are comfortable with challenging themselves in ways that make others uncomfortable. Designers who can cut through ambiguity and make a leap of faith. Designers who can listen and empathize, advocate and persuade. This is what it takes to create great software experiences — regardless of purpose or audience.
For me, as a designer, this is what design is really about: focusing on the promise of what things can be and working out how we can get there. It’s about solving real problems to make the things we use work better for us. Today, the best place to do that — the place where we can make giant strides, not just minor enhancements — is business software. That’s why I’m excited to be designing software for the workplace — and why I believe other designers should be, too.