Google Glass is likely to include a touch-pad, camera, microphone and see-through display all rolled into one to fit on the bridge of your nose.
A patent document submitted to the United States Patent and Trademark Office gives a few hints about Google Glass' controls, specifications, features and design.
Google Glass is described as a "wearable computing device" in the document, comprising a see-through display lens with a projector, a camera positioned on the extending side-arm and a touch-pad interface including a keypad.
Apart from the buttons and touch-pad, the glasses will also detect speech commands. Shout out the name of a friend, and it will open your contacts book. Start the engine of your car, and Glass could throw up GPS directions.
Even cooler is the mention of a "controller" which can determine the wearer's skin tone, hand shape and size based on an image of the wearer's hand.
The user interface for the glasses is located "outside of the field of view of the wearer", which means it is probably located near the temple of the user's head, along one of the arms of the glasses.
The patent, which names Google's David Petrou as the inventor, explains user interaction through an interface that "could be, for example, a touch pad, keypad, an arrangement of buttons, or any combination thereof".
To spark the imagination, Google says it is considering using "coloured dots" for visual representations.
"Coloured dots may appear in response to physical contact or close proximity with the interface. Further, the dots may create an illusionary effect such that the wearer feels like that the interface is right in front of the wearer," the application reads.
Sources of input data
Some illustrations, which you can see as part of the patent application, show that the glasses include a microphone, a keyboard, a camera, and a touch-pad.
Project Glass speech commands can also be used to play videos, open applications, translate sentences heard or spoken, answer questions and convert spoken words to text.
Winter is coming? No problem
It seems Google Glass is also winter-friendly. Glass will detect the temperature outside and switch to a hands-free mode if the user is likely to be wearing gloves.
Glass will include "an antenna and transceiver device" for wired/wireless communications between itself and "a remote device or communication network". This could mean 3G and 4G networks, as well as Bluetooth and Wi-Fi.
This will probably allow connectivity between Glass and phones, tablets or other "wearable computing devices or a server in a communication network".
The Wall Street Journal profiles Apple's "go-slow" approach to mobile payments. In June, Apple announced the inclusion of a feature called Passbook to iOS 6. Passbook allows users to keep loyalty cards, tickets and coupons in one central app. Passbook, however, does not offer a full payment system which has been a rumored area of research for Apple.
The Wall Street Journal reveals that this is a very deliberate decision from Apple:
Holding back in mobile payments was a deliberate strategy, the result of deep discussion last year. Some Apple engineers argued for a more-aggressive approach that would integrate payments more directly.
But Apple executives chose the go-slow approach for now. An Apple spokeswoman declined to comment on the decision-making process. Apple's head of world-wide marketing, Phil Schiller, in an interview last month, said that digital-wallet mobile-payment services are "all fighting over their piece of the pie, and we aren't doing that."
According to the Wall Street Journal's sources, a small group within Apple has been investigating a new service that would embed payment methods into the iPhone, or even build an entirely new payment network. Discussions reportedly ranged from Apple facilitating payments with merchants all the way to the possibility of Apple itself acting as a bank. Apple also considered simpler wallet-app possibilities, or working with existing middlemen and taking a small cut of each transaction.
Meanwhile, the Apple iPhone team had indeed explored NFC communications options in the next iPhone. Various concerns included impact on battery life, security, vendor adoption and customer satisfaction.
Ultimately, Passbook is said to be the current compromise while Apple presumably waits to see how the mobile payment market matures.
I thought that glasses with "augmented reality" would be hopelessly dorky and could never go mainstream - until I saw the technology in action.
At first glance, Thad Starner does not look out of place at Google. A pioneering researcher in the field of wearable computing, Starner is a big, charming man with unruly hair. But everyone who meets him does a double take, because mounted over the left lens of his eyeglasses is a small rectangle. It looks like a car's side-view mirror made for a human face. The device is actually a minuscule computer monitor aimed at Starner's eye; he sees its display—pictures, e-mails, anything—superimposed on top of the world, Terminator-style.
Starner's heads-up display is his own system, not a prototype of Project Glass, Google's recently announced effort to build augmented-reality goggles. In April, Google X, the company's special-projects lab, posted a video in which an imaginary user meanders around New York City while maps, text messages, and calendar reminders pop up in front of his eye—a digital wonderland overlaid on the analog world. Google says the project is still in its early phases; Google employees have been testing the technology in public, but the company has declined to show prototypes to most journalists, including myself.
Instead, Google let me speak to Starner, a technical lead for the project, who is one of the world's leading experts on what it's like to live a cyborg's life. He has been wearing various kinds of augmented-reality goggles full time since the early 1990s, which once meant he walked around with video displays that obscured much of his face and required seven pounds of batteries. Even in computer science circles, then, Starner has long been an oddity. I went to Google headquarters not only to find out how he gets by in the world but also to challenge him. Project Glass—and the whole idea of machines that directly augment your senses—seemed to me to be a nerd's fantasy, not a potential mainstream technology.
But as soon as Starner walked into the colorful Google conference room where we met, I began to question my skepticism. I'd come to the meeting laden with gadgets—I'd compiled my questions on an iPad, I was recording audio using a digital smart pen, and in my pocket my phone buzzed with updates. As we chatted, my attention wandered from device to device in the distracted dance of a tech-addled madman.
Starner, meanwhile, was the picture of concentration. His tiny display is connected to a computer he carries in a messenger bag, a machine he controls with a small, one-handed keyboard that he's always gripping in his left hand. He owns an Android phone, too, but he says he never uses it other than for calls (though it would be possible to route calls through his eyeglass system). The spectacles take the place of his desktop computer, his mobile computer, and his all-knowing digital assistant. For all its utility, though, Starner's machine is less distracting than any other computer I've ever seen. This was a revelation. Here was a guy wearing a computer, but because he could use it without becoming lost in it—as we all do when we consult our many devices—he appeared less in thrall to the digital world than you and I are every day. "One of the key points here," Starner says, "is that we're trying to make mobile systems that help the user pay more attention to the real world as opposed to retreating from it."
By the end of my meeting with Starner, I decided that if Google manages to pull off anything like the machine he uses, wearable computers seem certain to conquer the world. It simply will be better to have a machine that's hooked onto your body than one that responds to it relatively slowly and clumsily.
I understand that this might not seem plausible now. When Google unveiled Project Glass, many people shared my early take, criticizing the plan as just too geeky for the masses. But while it will take some time to get used to interactive goggles as a mainstream necessity, we have already gotten used to wearable electronics such as headphones, Bluetooth headsets, and health and sleep monitoring devices. And even though you don't exactly wear your smart phone, it derives its utility from its immediate proximity to your body.
In fact, wearable computers could end up being a fashion statement. They actually fit into a larger history of functional wearable objects—think of glasses, monocles, wristwatches, and whistles. "There's a lot of things we wear today that are just decorative, just jewelry," says Travis Bogard, vice president of product management and strategy at Jawbone, which makes a line of fashion-conscious Bluetooth headsets. "When we talk about this new stuff, we think about it as 'functional jewelry.'" The trick for makers of wearable machines, Bogard explains, is to add utility to jewelry without negatively affecting aesthetics.
This wasn't possible 20 years ago, when the technology behind Starner's cyborg life was ridiculously awkward. But Starner points out that since he first began wearing his goggles, wearable computing has followed the same path as all digital technology—devices keep getting smaller and better, and as they do, they become ever more difficult to resist. "Back in 1993, the question I would always get was, 'Why would I want a mobile computer?'" he says. "Then the Newton came out and people were still like, 'Why do I want a mobile computer?' But then the Palm Pilot came out, and then when MP3 players and smart phones came out, people started saying, 'Hey, there's something really useful here.'" Today, Starner's device is as small as a Bluetooth headset, and as researchers figure out ways to miniaturize displays—or even embed them into glasses and contact lenses—they'll get still less obtrusive.
At the moment, the biggest stumbling block may be the input device—Starner's miniature keyboard requires a learning curve that many consumers would find daunting, and keeping a trackpad in your pocket might seem a little creepy. The best input system eventually could be your voice, though it could take a few years to perfect that technology. Still, Starner says, the wearable future is coming into focus. "It's only been recently that these on-body devices have enough power, the networks are good enough, and the prices have gone down enough that it's actually capturing people's imagination," Starner says. "This display I'm wearing costs $3,000—that's not reasonable for most people. But I think you're going to see it happen real soon."
One criticism of Google's demo video of Project Glass is that it paints a picture of a guy lost in his own digital cocoon. But Starner argues that a heads-up display will actually tether you more firmly to real-life social interactions. He says the video's augmented-reality visualizations—images that are tied to real-world sights, like direction bubbles that pop up on the sidewalk, showing you how to get to your friend's house—are all meant to be relevant to what you're doing at any given point and thus won't seem like distracting interruptions.
Much of what I think you'll use goggles for will be the sort of quotidian stuff you do on your smart phone all the time—look up your next appointment on your calendar, check to see whether that last text was important, quickly fire up Shazam to learn the title of a song you heard on the radio. So why not just keep your smart phone? Because the goggles promise speed and invisibility. Imagine that one afternoon at work, you meet your boss in the hall and he asks you how your weekly sales numbers are looking. The truth is, you haven't checked your sales numbers in a few days. You could easily look up the info on your phone, but how obvious would that be? A socially aware heads-up display could someday solve this problem. At Starner's computer science lab at the Georgia Institute of Technology, grad students built a wearable display system that listens for "dual-purpose speech" in conversation—speech that seems natural to humans but is actually meant as a cue to the machine. For instance, when your boss asks you about your sales numbers, you might repeat, "This week's sales numbers?" Your goggles—with Siri-like prowess—would instantly look up the info and present it to you in your display.
You could argue that the glasses would open up all kinds of problems: would people be concerned that you were constantly recording them? And what about the potential for deeper distraction—goofing off by watching YouTube during a meeting, say? But Starner counters that most of these problems exist today. Your cell phone can record video and audio of everything around you, and your iPad is an ever-present invitation to goof off. Starner says we'll create social and design norms for digital goggles the way we have with all new technologies. For instance, you'll probably need to do something obvious—like put your hand to your frames—to take a photo, and perhaps a light will come on to signal that you're recording or that you're watching a video. It seems likely that once we get over the initial shock, goggles could go far in mitigating many of the social annoyances that other gadgets have caused.
I know this because during my hour-long conversation with Starner, he was constantly pulling up notes and conducting Web searches on his glasses, but I didn't notice anything amiss. To an outside observer, he would have seemed far less distracted than I was. "One of the coolest things is that this makes me more socially graceful," he says.
I got to see this firsthand when Starner let me try on his glasses. It took my eye a few seconds to adjust to the display, but after that, things began to look clearer. I could see the room around me, except now, hovering off to the side, was a computer screen. Suddenly I noticed something on the screen: Starner had left open some notes that a Google public-relations rep had sent him. The notes were about me and what Starner should and should not say during the interview, including "Try to steer the conversation away from the specifics of Project Glass." In other words, Starner was being coached, invisibly, right there in his glasses. And you know what? He'd totally won me over.
Written by Farhad Manjoo.
Farhad Manjoo is the technology columnist at Slate and contributes regularly to Fast Company and the New York Times. He is the author of True Enough: Learning to Live in a Post-Fact Society.
Project Glass Is The Future Of Google
by Peter Ha
Over the last few years one could easily say that Google had lost their way. They were no longer known for search. Somehow they’d turned into a company that acquired a series of nonsensical entities, launched half-baked products that eventually hit the dead pool or just got into some really weird shit.
But last year that all started to change as the company announced that it would focus on its core products. Hindsight always being 20/20, it all makes sense. It’s like anything else, really. Spitball as many ideas as you possibly can just to see what sticks. And so whether it was by design or not, Project Glass is the future of Google. Not as a product that will make them billions of dollars, but for what it means for Google as a company and its future.
“The charter of Google X is to take bold risks and push the edges of technology beyond what they’ve been to where the future might be,” Sergey Brin told a small group of reporters during a demo of Project Glass. “We want you to be less of a slave to your devices. It’s been really liberating and I’m really excited to share it with all of you.”
Brin noted that Project Glass is what Google believes could be the next form factor of computing. As it stands now, many of us are willingly beholden to our smartphones with all the web browsing, twittering, pathing, instagramming and whatever else consuming most of our time. Human interaction has all but faded away. The fact that people play the “stacking game” is comical and cute but a sign of how infatuated we are with technology. Glass has the potential to buck that trend by “keeping people in the moment,” said Steve Lee, Product Manager for Glass. Brin also mentioned that Glass shouldn’t be used to fill idle time or to browse the web and that your phone or tablet perfectly fits those needs.
Dorky as they might look, Glass signals the first glimpse of how to integrate such invasive and important technology into our lives in a more seamless way. Isabelle Olsson, the industrial design guru on the team, says the design of Glass ensures “you can look into people’s eyes.” During my brief time with Sergey’s Glass, I can say that the display didn’t hinder my ability to see or look around. The display disappeared until I needed to see what was being shown. I might never have to pull my phone out again to reply to a text, get directions or snap a photo. So, yeah, I’ll deal with looking like a dork but don’t be surprised to see Glass integrated with existing glasses. Brin did mention that Google has been in talks with eyeglass makers and the like.
While the hardware is still in prototype phase, I overheard Brin say that he’s experienced up to six hours of juice off a single charge. But that can and will likely change based on usage (uploading photos, capturing video, etc.). Photos, for instance, will be stored locally and can be synced with the cloud later. Both Lee and Brin said that they’re working hard to optimize what data is being transmitted and stored both on the device and in the cloud to alleviate any battery woes. There may be settings that allow users to control the content being shared until you’re within reach of Wi-Fi or when you’ve plugged in your Glasses for the night. Babak Parviz, a contributor to Project Glass, said a previous build allowed him to query a voice search for the capital of China, broadening his own knowledge base to everything that’s available on the Web.
I asked what actually worked on Glass now and Brin politely skirted the question by saying that they’re testing and implementing various features with each build to see what sticks. Facial recognition, while discussed and experimented with, doesn’t sound like it’s been compelling enough that the team wants to immediately integrate it.
Here’s what you won’t see in Glass: advertising. Brin stated pretty vehemently that they have no plans to integrate advertising into Glass and that the only plan is to simply sell the hardware, which will be “significantly” cheaper than the $1,500 Explorer Editions that were announced today. The Glass team says they’re focused on the quality of the experience and not making it as cheap as possible. (Thank gawd.)
Core Google apps like Gmail and Plus (Hangouts) are being tested now along with Android apps. What isn’t clear is whether or not the Android and Google apps teams are working with the team at Glass and vice versa.
So what was the reason for today’s announcement of the $1,500 Explorer Edition of Project Glass? It’s actually a slight pivot from what they’ve done in the past. For once, the typical Google way of pushing out half-done products might work to their advantage. Parviz, Lee and Brin emphasized how important it will be to involve the developer community to further push the platform before Glass becomes available to consumers some time next year. Speaking of ship dates, Brin says the consumer version will ship within a year of when the Explorer Editions ship. Developers will have access to a cloud-based API that is “pretty far along.”
Does this mean Google wants to compete with Microsoft or Apple toe-to-toe? No. Google will always be the weird kid in the corner who sporadically does something mindblowing. They’re not thinking about what’s going on now but what might happen in the distant future. Everything they’ve done up until now seems like a tiny speck of something larger and greater. The late Ray Bradbury said it best: “Life is trying things to see if they work.” And that appears to be what Google is doing.
SOLVING THE NEED FOR UBIQUITOUS COMPUTING--WHILE MAKING THE TECH AS UNOBTRUSIVE AS POSSIBLE:
At Google X, the company’s now-not-so-top-secret R&D lab, engineers and neuroscientists and artificial-intelligence experts dream up a future without the pressure of market deadlines: driverless cars, robots, space elevators. But for lead product manager Steve Lee, his X pursuits are anything but an exercise in the fantastical: Project Glass, the futuristic eyeware he’s developing with an interactive heads-up display, might just hit market in the near future alongside products like Gmail and Android.
For Lee, it’s a matter of wrangling a sci-fi idea into a practical product. Whereas Apple and Microsoft have grounded their mobile future in the belief that the Post-PC World will revolve around the pillars of smartphones and tablets, Google is adding a late, left-field entry into the mobile space that’s as much of a technical feat as it is a fashion statement. In the first part of our interview, published earlier this week, Lee told Fast Company "something like this has never been created before." Today, he tells us what goes into designing a product like Google Glass.
THE PROBLEM: HOW DO YOU KEEP PEOPLE CONNECTED, BUT STILL PRESENT IN MEATSPACE?
Lee calls much modern-day technology a distraction. "If you walk around the streets of New York, people have their smartphones out and they’re looking down. They do that while they’re standing around, waiting for a bus or a taxi, or while they’re walking. Even if you go out to dinner with a friend or a date, the technology is taking them away," he says. "But at the same time, people clearly have a desire to be connected to the Internet. We thought that was a really interesting problem to solve: trying to get technology out of the way while allowing people to still be connected out in the real world."
WHAT TAKES 60 SECONDS ON A PHONE WILL TAKE TWO TO FOUR SECONDS ON GLASS.
Lee ticks off some annoying use cases of traditional tech: holding a device up for an extended period to record a video, which he calls "fatiguing," or yanking out your phone to look up a map or share a photo. "Let’s say you’re meeting a friend at a bar you haven’t been to before. You just want to quickly check a map to make sure you’re going in the right direction. Even with my phone, it still takes a long time today to pull it out of my pocket, unlock it, open up Google Maps, and zoom into my location," he says. "A design goal for Project Glass is to make that much faster and much easier. What may take 30 to 60 seconds on a phone will instead take two to four seconds on Glass. Making that substantial of an improvement on speed and access will hopefully prove to be a game changer. I mean, you can capture moments with something like our Project Glass prototype that are impossible or just inconvenient or awkward with a camera phone. If you can do those things in a few seconds, that’s going to be really meaningful to people."
It’s actually why the team called the device Google Glass, as opposed to say, Google Glasses. "Glass has a lot of connotations but certainly one is how to view this technology: being transparent and getting out of the way," he says. "Past wearable computer projects that people have seen likely conjure up something that gets in your way and blocks your vision or senses. That’s actually counter to our project goal. Everything around our design is exactly the opposite of that."
ANOTHER PROBLEM: HOW DO USERS INTERACT WITH IT?
We’ve gone from mouse and keyboard controls on PCs, to touch screens on smartphones and tablets, to hand gestures on Xboxes. Now with Glass, Google wants to move to the next level of interactivity. But how? Lee’s team hasn’t settled on specifics but he takes me through the experimentation process.
WE’VE DABBLED AND EXPERIMENTED WITH LOTS OF DIFFERENT TYPES OF INPUT.
"The input to this device is a real challenge because there is no physical keyboard--just like a phone doesn’t have a physical keyboard--and there is no touch screen either. How do you input? We’ve dabbled and experimented with lots of different types of input, including using your voice, using some type of touch interface on the side of the device itself, as well as using your head," Lee says. "So using your head as input, for example, we’ve tried dozens and dozens of different types of head gestures. As you can imagine, some are more extreme than others. It creates a pretty funny experience. In fact, we created a game internally, both to exercise and test things out, but also to demonstrate the absurdity of using your head. It’s kind of like DDR but with your head instead of your feet."
"We created some pretty funny videos," he adds, with a chuckle. "I think there will likely be some way to move your head, which is comfortable and natural for a user, as well as not make them look odd and strange. But there’s many, many [gestures] that would definitely make you look strange to observers."
TRYING TO SOLVE THE DORK FACTOR
Designing Google Glass isn’t like designing a smartphone app that only its user will see. All factors must be considered--the comfort, the style, the ergonomics, the societal acceptance--because, as Lee puts it, "You care about how it looks on your face. Unlike software or even hardware like a phone, which you can sort of sneak into your pocket, this is quite visible--you can’t hide it."
It was a lesson Lee learned while playing around with early prototypes--such as a wearable computer he housed in a backpack to power the eyeware. "As you can imagine: not super comfortable. Setting aside style issues, it was simply not comfortable. Another thing that you really start to appreciate when you put things on your head is how important weight is--every single gram. I’ve never thought about 0.1 grams before as much as I have on this project," Lee says. "Societal acceptance and style are also extremely important. Because we can create the coolest Google technology and functionality, but if it’s embarrassing to wear around people, then it’s not going to get adoption."
I’VE NEVER THOUGHT ABOUT 0.1 GRAMS BEFORE AS MUCH AS I HAVE ON THIS PROJECT.
Lee acknowledges that because of all the style, functionality, and ergonomic issues involved, it is "unlikely that we’ll be able to service everyone, though that is our ultimate aim." When asked whether Google might outsource the product’s hardware to fashion designers such as Gucci and Prada, as we have suggested--and just as Android does with smartphone makers--Lee seems open to the idea.
"I think it’s TBD," he says. "There’s just all these different connotations and permutations--like eyeglasses and sunglasses--so it’s really hard to address everyone. Certainly we’re going to consider partnering with various folks to accelerate that. But it’s really early on, and I don’t know how exactly it’s going to unfold. But we’re certainly considering it."
THE PROTOTYPING BREAKTHROUGH: GETTING THE THING OUT OF THE WAY.
We’ve seen the current prototype now in design renderings, concept videos, on Charlie Rose, and even on Sergey Brin. Lee takes us through the design behind the current iteration of Project Glass. (For insight into early iterations, check out the first part of our interview with Lee.)
"There are a number of attractive things about this form factor: It’s actually quite sleek and has a nice profile," he says. "One key aspect of the prototype that we’re showing right now is that the display piece is actually up and out of the way. That was a key insight that we learned while developing this. Again, it gets back to the point of trying to free up people’s senses, and get out of the way. Putting lenses and things in the way of your eyes, we think is certainly a real challenge and has drawbacks. Putting the display up and out of the way, I feel very comfortable having a face-to-face conversation with someone, who doesn’t feel weird or odd, because I can make eye contact with them."
KEVIN MAKO, MAKO INVENT
06/06/2012 05:15 PM
Mako is a new-product development company that helps inventors and product developers design, prototype, manufacture, and sell their inventions. Established in 1999, Mako is North America's top company for taking customers’ products from idea to store shelves.
It's amazing: as soon as a new technology concept is released (a while ago for this), at our invention development firm we get inundated with home inventors coming up with brilliant applications for it, long before the base technology is even released to market. It's almost as if the innovation lifecycle of a product is quickly compressing as information exchange and collaborative efforts increase in society.