Augmented Reality (AR, or I guess "spatial computing" now) has always been what I thought would push the whole thing over the edge, and not VR. The "killer app" for VR is immersive experiences - while there's a niche for them, and I think we'll see more and better iterations, I don't think it has the growth potential that AR does.
The killer app for AR is the pair of glasses that pops up information as an overlay on the rest of the world. Directions? You get an arrow and line telling you where to go. Meet someone for lunch? Facial recognition tells you information about them - either recent social media posts if you're already connected, or something like a public profile if you're not. Testing out a new recipe? It shows you step-by-step directions and floating video of the task you need to do right then. Watching a movie? Built-in subtitles, plus it tells you who the actors (or voice actors) are. Looking to buy a car? The AR headset gives you pricing information and stats for any car that you look at.
I could keep going. Think a combination of Ready Player One, any spy-gadget movie, and a video game isekai: menus that let you make notes for when you go to the store, calendar reminders popping up visually, and so on.
The potential for abuse of privacy is high, but the convenience factor and instant gratification probably mean that people will still adopt it in droves. Add in some kind of AI assistant that does a halfway decent job at anticipating the user's needs and it's a done deal.
The main things holding it back are battery tech and miniaturization. Two hours of battery isn't gonna cut it, and the hardware needs to shrink to a more innocuous form before we get there, but that's the direction I see it going. That's probably also why these companies are working so hard at iterating - to get that first-mover advantage.
If we do get there I pity anyone with photosensitivity issues or epilepsy. They're gonna have a rough time.
The phone already has all that minus the heads-up display. I wonder if anyone's working on a glasses display that uses your phone's considerable compute, networking, etc. resources.
One of their main goals was making the latency imperceptible, and from the initial hands-on reviews they've achieved it, by integrating it right down at the hardware level using an RTOS for the graphics and display portion. And since Apple is all about SoCs nowadays, it doesn't really make sense for them to use half of an iPhone's chip when they might as well use the CPU already sitting next to their GPU on the same chip.
They probably thought about it, but found it was too slow. Everything about Vision Pro hardware to me screams: this is the absolute cutting edge and that's what's needed to make it usable. In fact, it's barely there so we had to do something we don't like to do - preannounce for next year - because it will only be then that we can mass produce it.
If you think about it, the product is essentially a MacBook that sits on your face, miniaturized with massive optimizations.