Summoning the Next Interface: Agentive Tools & SAUNa Technology

http://www.cooper.com/journal/2013/05/summoning-the-next-interface-agentive-tools-sauna-technology.html

Part 1: Toward a New UX

If we consider the evolution of technology—from thigh-bones-as-clubs to the coming singularity (when artificial intelligence leaves us biological things behind)—there are four supercategories of tools that influence the nature of what’s to come:

  1. Manual tools are things like rocks, plows, and hammers; well-formed masses of atoms that shape the forces we apply to them. Manual tools were the earliest tools.
  2. Powered tools are systems—like windmills and electrical machines—that set things in motion and let us manipulate the forces present in the system. Powered tools came after manual tools and took a quantum leap with the age of electricity. They grew more and more complex until World War II, when the most advanced technology of the time, military aircraft, became so complex that even well-trained people couldn’t manage it. The entire field of interaction design was invented in response, as “human factors engineering.”
  3. Assistive tools do some of the low-level information work for us—like spell check in word processing software and proximity alerts in cars—harnessing algorithms, ubiquitous sensor networks, smart defaults, and machine learning. These tools came about decades after the silicon revolution.
  4. The fourth category is the emerging one, the new thing that bears some consideration and preparation, and the one I have been thinking and presenting about across the world:
    Agentive tools, which do more and more of their own accord, like learning about their users, and which are approaching the artificial intelligence that, if you believe Vernor Vinge, will eventually begin evolving beyond our ken.

"WIthin 30 years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

Where We Are vs. Where We Think We Are

For the most part, our clients at Cooper approach us to make powered tools for their users, like the video tools for the (now-gone) Flip video camera. That’s fine. There’s a great deal of history and a body of established best practices to help us knock these problems out of the park. We’re good at this.

Over the course of these projects we often identify and suggest cutting-edge opportunities farther along the arc, opportunities that would have them building assistive tools for their users, but if I had to guess, I’d say only one out of five organizations is well-positioned enough to move on these ideas. I suspect that over time, the number of clients moving on assistive tools will increase. So that’s where we are: most clients asking for powered tools, and most interaction designers providing designs for assistive tools. Both are still a bit behind.

Based on the advancements, patents, popular familiarity, and commercialization of a host of particular technologies (listed below), technology is actually farther along that arc: we are able to create designs for genuinely agentive tools. That has implications worth discussing, but first, let’s clarify the categories of technology that are enabling this sea change.

As a group I call these agentive-enabling technologies SAUNa tech. The acronym itself means nothing, really; there’s no generative connection to a literal sauna. But it’s a useful label for the four types of technologies involved: Social systems, Agentive algorithms, and Ubiquitous technologies accessed via Natural user interactions.

Social Systems

“Social” has come to mean “social networks,” but my use of the term covers more than just Facebook and Twitter. The paradigm of one user using one machine to perform one task is becoming the exception; multiple users working across multiple systems to accomplish shared goals is becoming the common case (and the more problematic one to design for). For example, this article itself was originally written in Google Docs with permissions granted to an author and a number of editors, who have worked on it from laptops in airports and coffee shops, on phones and on public transport, and on desktop computers at Cooper.

We must also deal with the fact that, when combined with agentive algorithms, social systems can now access Big Social Data. Systems looking to learn have humanity and its history as their dataset. Doctors using agentive tech won’t just know what happened to their own patients; they’ll know what happened to all patients at all times in recorded history (to varying levels of detail) and can adjust diagnoses and treatments accordingly.
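
As a deliberately tiny sketch of that idea, the Python below ranks candidate diagnoses by how often they co-occurred with the observed symptoms across an aggregate, population-scale record. The records and the likely_diagnoses function are invented for illustration; no real medical system or dataset is implied.

```python
# Toy sketch: rank diagnoses by co-occurrence with symptoms across a
# population-scale record. All data and names here are invented.

from collections import Counter

# Stand-in for "all patients at all times in recorded history."
POPULATION_RECORDS = [
    {"symptoms": {"fever", "cough"}, "diagnosis": "flu"},
    {"symptoms": {"fever", "cough", "aches"}, "diagnosis": "flu"},
    {"symptoms": {"fever", "rash"}, "diagnosis": "measles"},
]

def likely_diagnoses(symptoms):
    """Rank diagnoses by how often they co-occurred with these symptoms."""
    counts = Counter(
        record["diagnosis"]
        for record in POPULATION_RECORDS
        if symptoms <= record["symptoms"]  # observed symptoms are a subset
    )
    return counts.most_common()

print(likely_diagnoses({"fever", "cough"}))  # [('flu', 2)]
```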

Agentive Algorithms

Perhaps the most cutting-edge of these are the agentive algorithms, those that feel like low-level artificial intelligence. These technologies are aware of us—our identities, our intentions, and possibly even our emotional states. They adhere to Gricean maxims for interaction, and they perform low-level machine learning about us as they help us achieve our goals. (Be sure to note the difference between these agentive algorithms and the broader category of agentive tools that is named for them.)
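
What might “adhering to Gricean maxims” look like in practice? Here is a minimal sketch, assuming a hypothetical GriceanNotifier agent, that speaks only when a message is relevant to the user’s current goal (relation), believed true (quality), and not already said (quantity).

```python
# A minimal sketch of Gricean restraint in an agent's output channel.
# The GriceanNotifier class and its thresholds are hypothetical.

from dataclasses import dataclass, field

@dataclass
class GriceanNotifier:
    user_goal: str
    already_said: set = field(default_factory=set)

    def consider(self, message, topic, confidence):
        if topic != self.user_goal:        # maxim of relation: stay relevant
            return
        if confidence < 0.9:               # maxim of quality: say only what you believe
            return
        if message in self.already_said:   # maxim of quantity: don't repeat yourself
            return
        self.already_said.add(message)
        print(f"Agent: {message}")

agent = GriceanNotifier(user_goal="travel")
agent.consider("Your gate changed to B12.", topic="travel", confidence=0.99)  # spoken
agent.consider("Your gate changed to B12.", topic="travel", confidence=0.99)  # suppressed
agent.consider("Stock XYZ is up 2%.", topic="finance", confidence=0.95)       # suppressed
```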

Ubiquitous Technology

The technological skin that humans wear is getting thicker and more interconnected all the time. Technology is already in our skies, in space, in hospitals, on the battlefield, and under the sea. For information workers in the Western world, technology is under our fingers most of the day: on our desktops, in our hands, in our cars, with us at the gym, carried with us to the bathroom, and pulled to our faces first thing when we wake up in the morning. It’s on the streets of our cities and inside the shops we patronize. Our users are constantly moving across these ever-present technological touchpoints, and experiences have to shift to take advantage of them seamlessly.

Natural User Interactions

Personally, I hate the term “natural” UI. Given the old saw, “The only intuitive interface is the nipple. Everything else is learned,” “natural” is an overblown promise, especially for a set of technologies that are often DOS-like in their absent affordances. But no better label has taken hold for the set of technologies that engage more of our bodies and capabilities than WIMP paradigms do, so we’re going forward with the term, with objections noted. Though it’s evolving and adapting, my working list of those technologies at the time of writing follows.

  • Haptic technology: Outputting information to our skin.
  • Gesture recognition: The ability to communicate with computers with our bodies and especially our arms and fingers.
  • Tangible and touch tech: The ability to directly indicate selections and manipulate objects in ways that computers can understand.
  • Voice recognition and generation: The ability to speak to a computer as we would speak to another person.
  • Ocular control or gaze monitoring: The ability to point with our eyes.
  • Cerebral (brain) interfaces: Using thoughts or brain waves to communicate to computers.
  • Near field communications: Letting us place objects in proximity to initiate data transfer and indicate selections or focus.
  • OLED and eInk displays: Visualizations of the abstractions around us, everywhere.
  • Heads-up displays: The personalized augmentation of the world around us.

SAUNa technologies are each powerful on their own, but once they’re combined into holistic platforms, we’ll be deep into assistive territory with an outstretched foot into the agentive zone. These things are coming, and I believe they’re coming sooner rather than later.

In the next post I’ll touch on the implications that the full-scale adoption of SAUNa tech has for interaction design.