AI and AR can supercharge ‘ambient computing’
“Sal awakens; she smells coffee. A few minutes ago her alarm clock, alerted by her restless rolling before waking, had quietly asked ‘coffee?’ and she mumbled ‘yes.’ ‘Yes’ and ‘no’ are the only words it knows.” Then, the alarm clock tells the coffee maker to get busy — and Sal’s morning has begun.
This scenario was described by Mark Weiser, a computer scientist at Xerox PARC (and later its CTO), in a 1991 piece for Scientific American that laid out the vision now called “ambient computing.” (The concept and related ideas are also referred to as “ubiquitous computing” — Weiser’s own term — and “invisible computing.”)
Ambient computing is not a technology. Instead, it’s a broad usage pattern, akin to “desktop computing” and “mobile computing.”
The idea has been in the ether for decades, and it gained new momentum a few years ago with the rise of the Internet of Things (IoT). While IoT describes networks of low-power, connected, sensor-based home and office appliances — along with dedicated IoT devices — ambient computing describes seamless, natural human interaction with those devices: the “user” doesn’t really “use” anything; instead, digital devices anticipate what people in the environment want and respond accordingly.
And while most of us don’t have alarm clocks that tell the coffee maker to make coffee — and, of course, we could and should — we do have some ambient computing devices in our lives. Think of smart thermostats that adjust the temperature based on the time of day, past behavior and whether anyone is home, or thermostats that communicate with lighting, blinds and home security systems.
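To make that anticipate-and-respond behavior concrete, here is a minimal, hypothetical Python sketch of the kind of rule such a thermostat might apply; the class, values and thresholds are invented for illustration, not taken from any vendor’s software.

```python
from datetime import datetime


class AmbientThermostat:
    """Illustrative only: picks a setpoint without being asked."""

    def __init__(self, learned_preferences):
        # learned_preferences: hour of day -> temperature (degrees F) the
        # occupants have historically chosen at that hour
        self.learned_preferences = learned_preferences

    def target_temperature(self, now: datetime, someone_home: bool) -> float:
        if not someone_home:
            return 62.0  # nobody home: save energy
        # fall back to a comfortable default when there is no learned preference
        return self.learned_preferences.get(now.hour, 70.0)


# At 7:15 a.m. with someone home, the device applies the learned 7 a.m. preference.
thermostat = AmbientThermostat({7: 71.0, 22: 66.0})
print(thermostat.target_temperature(datetime(2024, 5, 1, 7, 15), someone_home=True))  # 71.0
```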
Nascent ambient computing systems have also emerged in workplaces: smart conference rooms with automated meeting setup, adaptive lighting that adjusts itself based on occupancy and ambient light, voice-activated assistants that provide proactive notifications, automated maintenance and monitoring in manufacturing, automated guided vehicles that optimize production routes, and more.
A surge, and retreat, in interest
In the three decades since Weiser’s article, interest in ambient computing has surged and retreated with the arrival, or absence, of new technologies to support it.
Everybody talked about ambient computing when low-cost components from the smartphone revolution supercharged IoT, making connected, sensor-based devices faster, better, smaller and cheaper. Those components included microprocessors, memory chips, tiny cameras, touchscreen displays, batteries, sensors (accelerometers, gyroscopes, proximity sensors), miniaturized antennas, wireless communication modules (Wi-Fi, Bluetooth, NFC), microphones and speakers, power management ICs, LEDs and OLEDs, and GPS modules, among others.
When Amazon released the Amazon Echo in 2014, and Apple, Google and other companies unveiled their own smart speakers over the next few years, talking to an assistant without looking at it, or even knowing exactly where it is, became normalized. Chatter about ambient computing rose again.
Some companies have exploited the ambient computing halo effect to promote bad ideas.
The Humane Ai Pin, for example, launched by former Apple engineers in April, was basically smartglasses without the glasses. The company markets the gadget as “ambient computing for the real world.” In addition to its functional problems, Humane decided to pack the electronics into a form factor nobody uses or wants (a pin or magnet hanging on clothing) instead of glasses, which roughly 4 billion people already wear. The product will be gone and forgotten in a year.
Four years ago at Google I/O, Google beat the ambient computing drum mercilessly, mostly around Google Nest integration, Project Connected Home over IP (CHIP), Google Assistant enhancements, Ambient Mode for Android devices, Google Home app updates, AI and machine learning integration, Android Auto and Google Assistant Driving Mode.
Google also showcased research into hidden displays, which can project a digital display through wood and other materials.
That’s all interesting, but you don’t much hear about ambient computing from Google anymore.
Meanwhile, the automobile is gradually becoming a fully realized ambient computing space. New cars are increasingly integrating built-in voice assistants like Amazon Alexa or Google Assistant for hands-free control, intelligent ambient lighting that adjusts for visual cues and aesthetics, and smart sensors that enable features like adaptive cruise control and lane-keeping assistance. These technologies work together to create a cohesive and responsive driving experience, operating largely in the background.
For years, ambient computing has been slowly emerging and developing. And now, it seems a new set of technologies will drive a new surge of interest.
How AR and AI make computing more ambient
Historically, ambient computing aimed to make technology interactions natural and unobtrusive. With IoT and smart devices, we took steps in that direction. Now, with the fusion of AI and AR, the concept is fully realizable.
Weiser called the effect of ambient computing, and of what we would later call IoT, “embodied virtuality.” While virtual reality builds a world inside computers, embodied virtuality does the opposite: it builds a computer out of the world — it’s real life, peppered with the attributes of a digital, connected environment.
That’s what AR does as well — it digitizes, connects and provides a digital layer on top of real-world physical space. The best glimpse of the future of AR we’ve seen so far is, of course, Apple Vision Pro. The combination of looking at something and making a subtle gesture (pinching the fingers together, for example, or pinching and dragging) to change holographic digital information that seems to hover in real-world space is almost certainly how ordinary-looking glasses will operate some day.
AR glasses will tell sensors and devices in our living and working spaces who we are, where we are and what we’re looking at. And AI will tell them what we want to happen. In other words, AR glasses with AI complete the ambient computing picture by enabling humans to participate as connected electronic “devices,” rather than as just biological people.
AI will both anticipate our needs based on past preferences and respond to our spoken requests. AI-based AR glasses will remember things for us. Multimodal AI, which combines inputs such as images, text and sound, will prove crucial for ambient computing, where the goal is to anticipate and respond to human needs without explicit commands.
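As a rough, hypothetical sketch of what that fusion might look like, the Python snippet below combines invented scene, speech and sound signals to decide on a proactive action; the field names and actions are illustrative and not drawn from any real product.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AmbientSignals:
    # Hypothetical fused inputs; none of these names come from a real product.
    scene_label: str   # from image recognition, e.g. "kitchen"
    transcript: str    # from speech-to-text; may be empty if nobody spoke
    sound_event: str   # from audio classification, e.g. "kettle_whistle"


def anticipate(signals: AmbientSignals) -> Optional[str]:
    """Return a proactive action, or None, without waiting for an explicit command."""
    if "coffee" in signals.transcript.lower():
        return "start_coffee_maker"
    if signals.scene_label == "kitchen" and signals.sound_event == "kettle_whistle":
        return "turn_off_stove_burner"
    return None


print(anticipate(AmbientSignals("bedroom", "yes, coffee please", "none")))  # start_coffee_maker
```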
I also suspect the ambient computing idea corrects the perception left by Mark Zuckerberg’s “metaverse” misdirection, in which AR and VR were cast as two sides of the same coin rather than as opposites (a fake world made inside a computer versus the real world made into a computer).
Gene Munster recently suggested that Meta’s move away from the “closed Metaverse Quest” and toward Ray-Ban Meta glasses was a step toward ambient computing; I think he’s exactly right.
Beyond that, while we can expect AR and AI to complete the ambient computing picture, they will also redefine it. This is already happening.
Gartner Global Chief of Research Chris Howard defines ambient computing as computing that takes place in “ambient spaces,” where physical space and digital space interact “in interesting ways.” To Howard, the key enabling technology will be edge computing, including small language models (SLMs) running at or near the edge, rather than large language models running in the cloud. Edge computing will drive performance, innovation and efficiency in this new ambient computing world.
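To picture what Howard is describing, here is a minimal sketch of an SLM running locally rather than in the cloud; it assumes the open-source Hugging Face transformers library and uses one small open model as a stand-in, though any sufficiently small model would do.

```python
# Assumes the Hugging Face "transformers" library is installed; the model name is
# just one example of a small open model, not a recommendation.
from transformers import pipeline

# Load a small language model locally -- no cloud round trip involved.
assistant = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompt = (
    "It is 11 p.m., the living room lights are on, and no motion has been "
    "detected for an hour. Suggest one action for the home to take."
)
print(assistant(prompt, max_new_tokens=40)[0]["generated_text"])
```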
Ambient computing offers us a whole new way of understanding the era of AI+AR, and what it means for our everyday lives.
In the meantime, alarm clock: Tell that coffee pot to make us some coffee. We need to wake up to a new world.