Everyone in tech journalism is talking about Google Glass these days. Glass is Google’s new wearable computing interface. It’s a tiny transparent display screen that sits in the upper right corner of your vision. The frame includes a camera and a microphone, for taking pictures and video. Currently it’s not being sold commercially, and is only available to a select few developers, journalists, and enthusiasts, who may or may not have too much time and money. The device is new and interesting, but I am having a hard time envisioning any practical uses for the thing.
Over dinner, a friend of mine was expressing his excitement for the device. He exclaimed that finally, after decades of promises from science fiction, true augmented reality is just around the corner. Soon, our vision will be aided by superimposed information, like The Terminator, RoboCop, and Star Trek’s Geordi La Forge! I completely understand this excitement. Seeing the awesome inventions of science fiction become reality is a fun experience. The smart phones we carry in our pockets are in many ways similar, if not superior, to the tricorders in Star Trek. And the last decade, when smart phones were evolving and we were all figuring out their potential, was a really exciting time. Smart phones, and their usefulness on a day-to-day basis, make sense. I am not, however, convinced that’s the case for augmented vision. Especially for Google Glass’s take on augmented vision.
Google Glass’s screen doesn’t fill your vision. It simply displays text and images on a small display up in the corner. So forget walking into a biker bar and having people’s pants sizes superimposed around their waists. In essence, it’s not augmented reality. Nothing is being augmented. It’s reality, with a tiny screen of information over to the side. And really, what benefit does this bring? Smart phones solved a practical problem. Before smart phones, in order to check our email or get information online, we had to seek out an internet-connected computer, sit down, and interact with a keyboard and mouse. Now we can do all that anywhere, by pulling a small device out of our pocket. So what problem is Glass solving? The pulling-the-device-out-of-the-pocket problem? Is taking the phone out of your pocket such a chore that we need to remove that step entirely?
Maybe it’s a device for people who can’t, or don’t want to, give their full attention to any one thing at any given time. With Google Glass, you can read your text messages while having a conversation with someone, and you don’t even have to look down at your phone. I am sure both the person you’re texting, and the person you’re talking to, will applaud your efficiency.
Honestly, I don’t know what Google Glass is for. The vast majority of people interested in this product are likely first-world information workers: people who work in an office in front of a giant computer display for eight hours a day. Why, then, would you need a tiny screen attached to your head when you’re staring at a giant screen on your desk? Is it for the commute? For the small amount of time between the screens at work and the screens at home? If using a cell phone in the car causes distracted driving, I can’t imagine Glass being any better. Focusing on the Glass screen, and the information on it, still takes your attention away from the road. Some may argue that Glass would be great for turn-by-turn navigation. Possibly, but is it needed? How often do you find yourself driving somewhere you need navigation? For me, it’s no more than a few times a year. And I’ve never had much trouble reading a map before getting into the car, then remembering the directions.
Photo taken with Google Glass (by Robert Scoble, extreme Google Glass enthusiast). Is this guy making eye contact with the photographer? Shouldn’t he be looking into the camera?!
Google Glass’s photo and video abilities are another arguable selling point, but again, I am not convinced. Maybe, if you really can’t handle holding a camera in your hands, it’s a good solution. Google Glass supposedly takes pictures of what your eyes are seeing. But it doesn’t. The camera sits slightly above and to the right of your line of sight, so any picture it takes will be from a viewpoint slightly above and to the right of what you intend. This is obvious in any pictures of people taken with Glass. The advantage of real cameras, even cell phone cameras, is the viewfinder. With a viewfinder, you can compose the shot you want to take and know that what you’re seeing is exactly what will end up in the final picture.
I can only come up with one practical, day-to-day use for Google Glass. If you’re doing a job that requires both your hands, and you need to quickly access information you don’t know off the top of your head, then Glass might come in handy. For example, a Starbucks barista making drinks: if a customer orders something you don’t know how to make, the recipe can display in Glass, and you can go about making the drink without having to check any employee manuals. Or a mechanic looking up repair information while his hands are busy under the hood. Besides cases like these, I don’t see how Glass can enrich any of our lives.
Maybe I am becoming a troglodyte. Maybe I am, like generations before me, becoming too old to realize the potential benefit of new technologies. I doubt it, though, because I am only turning 30 this year, and 30 is the new 20. And I think I’m a pretty creative guy! The current Google Glass units are intended for developers, so they can come up with useful applications for the product. Hopefully they’ll be more creative than I am. Hopefully someone will develop some compelling application for Glass that makes it useful. Because, while it’s cool hardware, I just don’t see the point.