The Rise of Screenless Computing: A Glimpse into a Future Without Screens

This article was first published by the Wiser! Newsletter on 28th Sept 2023.

w/Ambient Computing

The Future Is Screenless

Imagine a future where you don’t need to look at a screen at all. You could ask your computer assistant to read your emails for you, control your smart home devices, send messages, make calls, book flights, and more, all without needing to look at a screen. Instead, you interact with your computer through voice commands or even gestures. In some ways, it would be similar to how we interact with other humans. We don’t need screens to communicate with each other, and we’re able to get things done without needing to look at screens.

I’m talking about interacting with devices like smart speakers and wearables, such as Humane’s AI Pin, as well as something more futuristic, like augmented or virtual reality glasses. This is the world of screenless computing.

One of the most interesting things about screenless computing is the concept of ambient computing. The idea is that instead of interacting with a screen or keyboard and mouse, we can simply talk to our computers to get things done. With AI, these computers can understand our intention because they’ve built a personal understanding of who we are. In this context, they carry out our requests without us needing to look at a screen.

It’s like having an always on, personal assistant in the background, ready to help at any moment. This vision can have lots of benefits, but it also raises questions about privacy and security. How comfortable would you feel if your computer was always listening and responding to you? I’ll come on to that…


Use Cases

Alexa and other smart speakers are a great starting point. They’re really the first wave of mainstream, consumer-facing, screenless computing devices. But it’s fair to say that, nine years after Alexa’s launch, the technology has seen little real innovation or advancement. Up until now, Alexa has largely been used as a clock radio. Will that change with Amazon’s announcement that it has added generative AI to Alexa?

Apparently, Alexa’s generative AI capabilities are designed to make interaction feel natural, as if you were talking to another human. You just say, “Alexa, let’s chat” and the interactive conversation starts. Amazon says that “Alexa’s ability to reason, infer customer intent, and understand complex requests will remarkably improve.” I’m optimistic, but it’s been a long time coming!

Another use case is navigation using smart glasses. Instead of following a maps app on your mobile, the information is presented before your eyes, through your glasses. Imagine you’re in an unfamiliar city and you need directions. With smart glasses, you could get turn-by-turn directions projected right into your field of view, without needing to look at your phone. This would allow you to navigate safely and without losing focus on your surroundings.

As I write this article, we’re hours away from Meta Connect, the annual developers’ conference for Mark Zuckerberg’s metaverse aspirations. There’s an expectation that the 2nd generation of Ray-Ban smart glasses will be announced, as well as Meta’s entry into the augmented reality space.

Another realistic use case is in healthcare. Imagine you’re a surgeon and you could get access to a patient’s medical records displayed before your eyes. Or get live information about their vital signs right in your field of view without needing to look away from the surgery and focus on a screen.

Virtual and augmented reality, and smart glasses, are going to play a big role in the shift to screenless computing. These technologies will allow us to interact with computers in a much more immersive and natural way. Smart glasses projecting information into your field of view would allow for a more seamless and intuitive computing experience, where you can get information when you need it, without needing to break your focus or look away from the world around you.

But are smart glasses “screenless”? In a way, a lens in a pair of glasses could be considered a screen, since it’s displaying information. But at the same time, there’s an important difference between a lens and a traditional screen, like a phone or computer monitor. With a lens, the information is integrated into your field of view, and it’s not a separate device that you have to look at. So even though it’s technically a screen, the experience of using it is very different from looking at a traditional screen.


Privacy and Attention

Since information is being projected into your field of view, there could be concerns about who else can see that information. This is something that developers and manufacturers will need to address, to make sure that screenless computing is secure and private. There’s also the question of how these technologies will impact our attention spans and ability to focus. If we’re constantly getting information projected into our field of view, will we be able to focus on the task at hand? Or will we be constantly distracted by the information coming in?

In researching this article, I found that there’s quite a bit of research on this. One study from the University of Washington found that using augmented reality glasses for navigation while walking led to slower walking speeds and more errors, suggesting that the information being presented was distracting and led to cognitive overload.

Another study examined how augmented reality affects people’s behaviour in the real world after they remove the AR device. The study found that people became more socially distant, less attentive, and less confident in their own abilities than those who did not experience AR. However, it’s worth noting that these studies were done with early versions of augmented reality glasses, and the technology is still in development, so it’s possible that future versions could address these concerns.

It seems to me that we can’t have both. If we want a wearable, always-on, hands-free computing assistant, then it has to be listening. I guess the issue is “who” is listening and “what” they do with what they’ve heard.

It’s not just about privacy either. It’s also about how much trust we’re willing to place in these systems. If we’re going to have a computer assistant that can carry out our requests without us needing to give instructions via a screen, then we have to be sure that it’ll do the right thing and act in our best interests. There are a lot of ethical questions here. How can we ensure that these devices act in an unbiased way, for example?

This is a recurring debate and it’s an ongoing area of research in the field of AI ethics. One idea is to design these systems with something called “value alignment”. In other words, the AI’s goals and values should be aligned with our own. So instead of just being programmed to complete a task, the AI should also be programmed to do so in a way that aligns with our ethical framework. For example, we might want an AI assistant to always act in a way that’s fair, respects privacy, and avoids causing harm.

One example is OpenAI’s language model, GPT-3. It was trained on a large dataset of text, and the developers also gave it a set of guiding principles, or values, to follow. One of these is to be safe and ethical. This means the model was trained not just to generate text based on patterns in the training data, but also to avoid generating harmful or unethical content.

The same can be said for Anthropic’s Claude, which uses a principles-based training approach called Constitutional AI. Instead of relying on humans to moderate and correct its outputs, the AI critiques and revises them itself, based on a set of pre-defined principles and values.
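To make that idea a little more concrete, here is a minimal, illustrative sketch of what a principles-based critique-and-revise loop could look like. The model_generate function, the list of principles and the prompts are placeholders I’ve invented for the example; this is not Anthropic’s actual implementation, just the general shape of the idea.

```python
# Illustrative sketch only: a simplified critique-and-revise loop in the spirit
# of principles-based ("constitutional") AI. model_generate() is a hypothetical
# stand-in for a real language-model call; the principles and prompts are
# invented for illustration.

PRINCIPLES = [
    "Avoid content that could cause harm.",
    "Respect the user's privacy.",
    "Be honest about uncertainty.",
]

def model_generate(prompt: str) -> str:
    """Hypothetical placeholder for a language-model call."""
    return f"[model output for: {prompt[:60]}...]"

def critique_and_revise(user_request: str) -> str:
    # 1. Draft an initial answer to the user's request.
    draft = model_generate(user_request)

    # 2. Ask the model to critique its own draft against each principle.
    for principle in PRINCIPLES:
        critique = model_generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        # 3. Revise the draft in light of the critique — no human moderator in the loop.
        draft = model_generate(
            f"Rewrite the response to address this critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft

if __name__ == "__main__":
    print(critique_and_revise("Summarise my private medical notes aloud."))
```

The point is simply that the model is asked to check its own draft against a set of stated principles and rewrite it, rather than a human doing the moderation.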

Here’s The Thing:

There will always be a need for a screen as we interact with computers. But the direction of travel is to make computing seamlessly integrated into our lives, so that we become less dependent on the unnatural act of picking up a phone every five minutes. Simply say what you want, and the computer will hear you and act accordingly.

With time, the AI systems you use will become trained to know you personally, in much the same way that, 15 years ago, Facebook developed algorithms that learnt to understand you better than anyone in your life except your spouse. The same thing will happen with AI, and you can see it in action with systems like Pi from Inflection.AI.

I use Pi most days and it is getting to know me better. That is evident in our conversations. But Pi still needs a screen to have a conversation.

Humane’s AI Pin promises to take that one stage further, by providing personalised artificial intelligence that doesn’t need a screen. See this for yourself…

That’s all for this week, thanks for reading, Rick


About The Author

Rick Huckstep is a writer, podcaster and YouTuber with a passion for emerging technologies and the way they will shape tomorrow’s digital world.

🤔 Get Wiser! every week (newsletter): https://rickhuckstep.substack.com/

📽️ Follow on YouTube: https://youtube.com/@rickhuckstep

🎙️ Listen to Big Tech Little Tech: https://btltpod.com

This article was partly written with the use of AI tools from Anthropic, NotionAI and Canva.

