Future Of Markerless Motion Capture: A Chat With iPi Soft Founder Michael Nikonov
Friday, February 3, 2017 10:55 pm
Over the eight years since iPi Soft launched with its flagship product iPi Motion Capture, the markerless motion capture industry has moved quickly from academic experiments to an affordable, accurate, and easy-to-use production tool that is unconstrained by sensor suits and green-screen stages.
More and more large film and game studios, as well as independent content creators, have adopted image-based markerless motion capture for animating human characters, finding it especially helpful for previz and for simulating large crowd scenes. Among them is Iloura, the high-powered animation and visual effects studio, which recently used iPi Motion Capture in its Emmy-winning work on HBO's Game of Thrones.
However, for the majority of high-end entertainment content creators, a real-time markerless motion capture solution remains mission-critical.
We spoke with Michael Nikonov, iPi Soft founder/chief technology architect, to get his take on the evolution of his company, the markerless motion capture industry overall, and what needs to happen for the technology to compete creatively with more traditional sensor-based motion capture solutions.
Q. Overall, the entertainment industry seems to be gradually adopting markerless motion capture software. What are your general thoughts on how you see entertainment studios using markerless motion capture?
MN: Seeing our software used on Game of Thrones was a real thrill for us. I'm a big fan of the action genre, and Game of Thrones is among the most popular television action shows of today, so it was exciting to see how they used it. Beyond the entertainment sector, we're also seeing growth in segments such as education and biomechanical research. In general, for high-end visual effects studios, markerless motion capture use is growing, but so far their use has been limited to a few key tasks such as crowd simulation and previsualization.
Q. What needs to happen for it to go further than that?
MN: Basically two things: the speed of motion capture processing and the accuracy of the software need to improve. For our part, we are working on processing-speed optimizations and expect to release a new version of iPi Motion Capture that will essentially offer real-time capability. This will solve the speed issues with current algorithms, but that leaves the problem of accuracy, which requires improved algorithms that in turn require even more computer processing power. Known markerless mocap algorithms require an exponentially increasing amount of resources (e.g. computer processing power) for only a constant increase in problem size (e.g. higher accuracy). At the same time, newer hardware and our advances in algorithmic optimizations lead to tremendous speed-ups in our R&D. It's an amazing race of exponential complexity versus exponential growth. I believe we will meet our speed and accuracy goals, eventually.
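Nikonov's "exponential complexity versus exponential growth" point can be made concrete with a small illustrative calculation (this is an editorial sketch, not iPi Soft's actual cost model): if compute cost grows exponentially with each accuracy step while hardware throughput doubles on a fixed cadence, then each constant step in accuracy costs a roughly constant number of years of hardware progress.

```python
import math

def years_until_feasible(cost_growth_per_step, accuracy_steps,
                         hardware_doubling_years=2.0):
    """Years of hardware doubling (Moore's-Law-style growth) needed to
    absorb a compute-cost increase of cost_growth_per_step ** accuracy_steps.

    All parameters are hypothetical, for illustration only.
    """
    total_cost_factor = cost_growth_per_step ** accuracy_steps
    doublings_needed = math.log2(total_cost_factor)
    return doublings_needed * hardware_doubling_years

# If each accuracy step quadruples compute cost, one step costs two
# hardware doublings, i.e. about four years at a two-year cadence:
print(years_until_feasible(4.0, 1))  # 4.0
print(years_until_feasible(4.0, 3))  # 12.0
```

The takeaway matches the interview: exponential hardware growth can keep pace with exponential algorithmic cost, but only linearly in accuracy, which is why progress is steady rather than sudden.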
Q. Would you say that your development efforts are hampered by the processing power of today's PCs?
MN: To some degree, yes. But we cannot just sit and wait for Intel and Nvidia to magically solve all our problems by releasing new hardware, because even with Moore's Law (computer power doubling every two years due to advances in semiconductor technology) this could take some time. We're working toward an algorithm that allows our customers to perform markerless motion capture accurately in real time on near-future hardware. While we hope to release this version later this year, the question remains whether there will be enough available computing power. Simply put: we need hardware to be four times faster than the current generation. Despite present limitations, in 2016 we were still able to deliver processing speeds four times faster than our previous algorithms (on the same hardware). We achieved a processing speed of 15fps, very close to real time, for customers using standard, off-the-shelf Kinect devices, with twice the accuracy.
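A back-of-the-envelope check of the figures in this answer (an editorial sketch, not iPi Soft's internal model): a fourfold hardware shortfall under Moore's Law, and the gap between 15fps and a conventional 30fps real-time target (the 30fps target is our assumption; the interview only says "very close to real-time").

```python
import math

def moores_law_wait_years(speedup_needed, doubling_years=2.0):
    """Years of hardware progress to reach a given speedup factor,
    assuming throughput doubles every doubling_years."""
    return math.log2(speedup_needed) * doubling_years

# A 4x hardware shortfall is two doublings, roughly four years:
print(moores_law_wait_years(4.0))  # 4.0

# Closing the gap from 15 fps to an assumed 30 fps real-time target
# needs a further 2x, from hardware, algorithms, or both:
print(30.0 / 15.0)  # 2.0
```

This is why Nikonov pairs hardware gains with algorithmic ones: the 2016 release's fourfold algorithmic speedup delivered, on existing hardware, roughly what four years of waiting for Moore's Law alone would have.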
Q. From a technology standpoint, what needs to happen for studios, like Iloura, to use markerless motion capture beyond just crowd simulation and previz?
MN: Accuracy of mocap is the most important factor for high-end studios. The accuracy of our markerless software is still lower than that of more expensive high-end marker-based mocap systems. This is not a fundamental limitation: even at 640-by-480-pixel resolution, our accuracy is limited by lack of processing power, not by camera resolution. I believe there is still a lot of untapped potential for improving the accuracy of our algorithms even with low-resolution cameras and depth sensors (like Kinect). With HD cameras, the potential accuracy of markerless mocap can be even higher than that of traditional marker-based systems. But the computer processing power requirements will be truly tremendous in that case.
Q. Tell us about making iPi Motion Capture available via game development platforms like Unity and Unreal and its impact on the creative development of markerless motion capture.
MN: It's been huge. Tools from Unity, Unreal and Valve have democratized game development, so that right now young developers are creating great-looking games with a minimal investment. It's enabled a lot of indie companies to take game development in dynamic new directions, and for many of them that means inventive uses of markerless mocap.
Q. iPi Soft launched eight years ago. Looking back, has the company come a long way, and where do you see things heading?
MN: I still feel we are just at the beginning. I'm pleased with the pace of our research and development and encouraged by the creativity of our customers across the production spectrum. I'm hopeful 2017 will bring new hardware and software optimizations for an overall improved markerless mocap workflow. Where we go from here is hard to say right now, but it will be exciting to see how the year unfolds.
Visit the iPi Soft website for more information on their motion capture technology.