Consider this:

There has been an exciting convergence of LIDAR and augmented reality. While it has remained on the fringes so far, the core idea is adding depth perception to computer vision. There is a familiar pattern here: big tech rolls out a “fun experiment” and then scales that experiment up in a different domain.
Consider this as well:
Initially, the AI could derive depth data from photographs. Since then, state-of-the-art machine learning algorithms have learned to extract two-dimensional objects from photographs and render them faithfully in three dimensions. It’s a technique that’s applicable to augmented reality apps and robotics as well as navigation, which is why it’s an acute area of research for Facebook.
Source: The Death of the Photo Studio. How GPT-3, your smartphone and… | by Sai Krishna V. K | Jul, 2020 | Medium
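To make the quoted claim concrete: below is a minimal sketch of single-image depth estimation using the open-source MiDaS model from PyTorch Hub. This is my own illustration, not the pipeline Facebook uses, and “photo.jpg” is a hypothetical placeholder path.

```python
import cv2
import torch

# Load the lightweight MiDaS variant and its matching input transform.
# (MiDaS is an open-source monocular depth model; it stands in here for
# whatever proprietary model Facebook actually runs.)
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# "photo.jpg" is a placeholder for any ordinary 2D photograph.
img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
input_batch = transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    # Upsample the predicted depth map back to the original image size.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Each pixel now holds a relative depth estimate: the raw material for
# re-rendering a flat photo with 3D parallax.
depth_map = prediction.cpu().numpy()
```

Note that models like this predict relative depth, not metric distance; that is enough for parallax-style 3D photo effects, while LIDAR is what supplies true scale.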
Tie this in with computer vision in “assisted robotics” for surgery, where surgeons wear VR headsets. I agree that this would help them gain a geospatial perspective, but it would also open the door to autonomous surgeries.
I am keeping a close watch on computational photography.