UI/UX From the Dark Ages

(Inspired by a spirited discussion on #lesswrong)

You may not realize it, but our existing UI/UX paradigms mostly date from the 1970s, when human-oriented computing was still in its dark ages: menus, forms, check boxes, search bars, file systems, you name it. They are fine tools, polished by decades of doing the best we could with the limited computing power we had, but they are still very much dated. Various attempts to introduce better UX have fallen flat due to the limitations of algorithms, memory, and compute. Bold attempts like Microsoft Bob and Clippy failed because they made the user experience worse, not better.

Things are different now. Natural language processing in limited domains is good enough, or nearly good enough, to provide a spoken or typed DWIM ("do what I mean") interface without forcing the user to dig through levels of menus (usually poorly designed by the developers). This is not to replace the menus or the check boxes, but to add another way to do things. Here are some things you might say or type into an ever-present "do box" (not just a "search box"), some of which already sort of work on some gadgets:

  • “call mom on whatsapp”

  • “play that song I used to listen to a lot last year”

  • “turn on the dark theme”

  • “show me a list of academic articles on …”

  • “play the soup nazi Seinfeld episode on the big TV”

  • “send me an email when someone replies to my comment or post”

  • “connect wireless headphones I just turned on”

  • “connect to the public wifi nearby”

  • “mark me as being on vacation from now till the end of the month”

  • “open diary entry from … or the nearest one with discussion of topic X”

  • “print a black and white copy of this document I am working on”

  • “insert a link to the wiki entry on UI/​UX in this post”
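Under the hood, a "do box" is just a dispatcher from free-form text to a registry of actions. Here is a minimal, deliberately toy sketch in Python: it matches on keywords where a real implementation would use an NLU or LLM intent parser, and every action name and handler below is a hypothetical stand-in for an actual OS or app integration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    keywords: set[str]          # crude stand-in for real intent detection
    run: Callable[[str], None]  # handler receives the raw command text

ACTIONS: list[Action] = []

def register(name: str, keywords: set[str], run: Callable[[str], None]) -> None:
    """Apps expose their capabilities by registering actions here."""
    ACTIONS.append(Action(name, keywords, run))

def do_box(command: str) -> None:
    """Dispatch a free-form command to the best-matching registered action."""
    words = set(command.lower().split())
    best = max(ACTIONS, key=lambda a: len(a.keywords & words), default=None)
    if best is None or not (best.keywords & words):
        print(f"Sorry, I don't know how to: {command!r}")
        return
    best.run(command)

# Hypothetical actions standing in for real OS / app integrations.
register("dark_theme", {"dark", "theme"},
         lambda cmd: print("Switching to the dark theme..."))
register("pair_headphones", {"connect", "headphones"},
         lambda cmd: print("Pairing with the headphones that just appeared..."))

do_box("turn on the dark theme")  # -> Switching to the dark theme...
do_box("connect wireless headphones I just turned on")
```

The interesting part is the shape, not the toy matcher: if apps expose their capabilities as registered actions, then any improvement in the language-understanding front end immediately makes every one of those actions reachable without menu-diving.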

Google Assistant, Siri, and their ilk can already recognize and interpret many of those, and execute some if asked just right, but it is still an afterthought most of the time, compared to the traditional way of doing things. My point is that the technology is now good enough to make a natural language interface a convenient way to do things that currently require digging through menus and settings, and searching for solutions online. But the UI/UX mindset is still stuck in the old days, and the results are hit-and-miss.

This is a bit ironic, given that this UI/UX is front and center in sci-fi movies and shows, where, in addition to pressing some weird symbols on a colorful Star Trek touch panel, one can also say "Computer, find all Ferengi ships nearby and list those that recently stopped at Risa", or "set course for Starbase 23", or "recalibrate the tachyon sensors".

My hope is that the UI/UX paradigm will evolve to take advantage of these new AI capabilities and will require less memorization and less digging through the forest of settings hidden in the bowels of each app or device.

Of course, the next logical step is a version of Scott Alexander's Whispering Earring, though maybe not as sinister.