I cut a hole for my phone's camera in my Cardboard, hoping I could "minimize" VR the way you minimize a window to see your desktop, except it would show the camera feed, so you could see someone without taking off your headset.
The "minimizing" VR world to show real-world is one that I want as well.
With AR in general, you want to anchor virtual objects to particular real-world locations. That requires depth perception and mapping it into a (simplified) 3D world. For interactive use it has to run in real time, which takes quite a lot of compute. The single camera on standard smartphones isn't enough hardware for this.
Maybe we'll get stereo cameras as standard. The megapixel race is over, the high-FPS/slow-mo fight is happening now; maybe stereo/depth/3D will be next.
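The reason stereo helps is simple triangulation: once you match a feature between the two views, its horizontal offset (disparity) gives you depth directly. A minimal sketch of that conversion, with a made-up focal length and baseline just for illustration:

```python
# Stereo triangulation: depth = focal_length * baseline / disparity.
# The focal length (in pixels) and baseline (distance between the two
# cameras, in meters) here are illustrative values, not real hardware specs.

def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Depth in meters for a feature seen disparity_px apart in the two views."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity = point effectively at infinity
    return focal_px * baseline_m / disparity_px

# A feature 42 px apart between the two views, with a 6 cm baseline:
print(disparity_to_depth(42))  # 1.0 meter away
```

With one camera you have no baseline, so you'd have to recover depth from motion over time instead, which is exactly the expensive real-time computation mentioned above.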
Lytro, I think that's the name: the technology that captures a photo you can refocus at any depth of field afterwards. If you had that in 360, and captured sound too, you could move around a spot and zoom/refocus on things after the fact. I want that.
I get your point about one camera versus two. It's probably cost? Same reason they use spinning LiDARs instead of phased-array (non-spinning) ones. Though even with phased array you'd still need something 360-like to see all around.
Why? Processing?