Date: 19/07/16
Microsoft Research’s Fusion4D project can enable high fidelity immersive telepresence
The ability to capture and render scenes in 3D has been around for a while now, but for the most part this technology has been limited to static objects. The reason for this limitation is obvious: as soon as you incorporate motion into the picture, there are far more parameters the rendering algorithm needs to account for.

Although computers have gotten pretty good at rendering moving objects in 3D in recent years, this motion usually occurs only within a strict, predetermined range, and may still involve hours of post-processing. In this sense, the ability to capture moving objects and render them in 3D in real time is something of a holy grail, and with Fusion4D, Microsoft’s new experimental motion capture technology, the computing giant may have struck gold.
As detailed in the above video and corresponding paper, Fusion4D uses eight cameras to capture a subject’s motion, then runs these images through a new algorithm developed by Microsoft that reconstructs the 3D object in real time with unprecedented accuracy. It's kind of like how game companies motion-capture actors in those bodysuits with ping-pong balls attached, only there's no need for the goofy suit, and the cameras can capture the entire scene in great detail, as opposed to just body movement.
First, the algorithm uses a “learning-based technique” to analyze the object across a number of RGB frames (colored still images) to get a rough idea of its movement. The algorithm then matches these RGB images with corresponding depth images of the frame, and combining the two yields a high-fidelity 3D reconstruction of the object (4D, if you count its temporal fidelity as another dimension).
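To give a rough sense of what "combining RGB with depth" means in practice, here is a minimal sketch of the most basic building block of any RGB-D pipeline: back-projecting a depth map into a colored 3D point cloud using pinhole-camera intrinsics. This is a generic illustration, not Microsoft's Fusion4D algorithm (which additionally estimates nonrigid motion across frames); the function name and intrinsic values are assumptions for the example.

```python
import numpy as np

def depth_rgb_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map into a colored 3D point cloud.

    depth: (H, W) array of depths in meters
    rgb:   (H, W, 3) array of colors
    fx, fy, cx, cy: pinhole-camera intrinsics (focal lengths, principal point)
    """
    h, w = depth.shape
    # Build a grid of pixel coordinates (u along width, v along height)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Standard pinhole back-projection: pixel + depth -> 3D point
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    # Drop pixels with no valid depth reading
    valid = points[:, 2] > 0
    return points[valid], colors[valid]

# Toy example: a 2x2 depth map with every pixel 1 m from the camera
depth = np.ones((2, 2))
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
pts, cols = depth_rgb_to_point_cloud(depth, rgb, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3): one colored 3D point per pixel
```

A full multi-camera system like the one described above repeats this per camera, transforms each cloud into a shared world frame using the cameras' calibrated poses, and then fuses the clouds into a single surface.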
The really exciting aspect of this development is the algorithm’s ability to capture and reconstruct motion in real time without the system having to be pre-configured for a particular type of movement. This is repeatedly demonstrated in the video with algorithmically complex and totally arbitrary movements, such as undressing, the motion of long hair, or a martial arts sequence.
Although this technology is still in its infancy, its further development could lead to some pretty wild applications. The researchers involved in the project speculated that it could one day be used for things like remotely attending a live concert or sporting event in full 3D, or be combined with a virtual reality device to create what essentially amounts to a 3D Skype (something Microsoft is already exploring with holoportation).
©ictnews.az. All rights reserved.