Michael Nikonov - iPi Soft, Founder/Chief Technology Officer
Renderosity Q&A Exclusive:
How has iPi Soft grown over the years from appealing to an indie market to larger production houses?
MN: Adoption of our software by larger studios has been increasing gradually as we continue to improve it, fixing bugs and adding new features. Our development efforts continue to focus on improvements and optimizations that bring us to the accuracy level of older, more expensive marker-based mocap systems. But the flexibility and convenience of our markerless mocap system make it a great choice for many tasks, such as previz, background characters, and gameplay animations.
I noticed iPi Soft added Unreal to its workflow. How did this come about? What advantages do you now have using iPi Soft with Unreal?
MN: Yes, we now have presets for working with Unreal’s standard bipedal characters. The standard skeleton is pretty stable in recent versions of Unreal Engine, and adding a preset for Unreal in iPi Mocap was a logical step. In older versions of our software users could tune motion transfer for any reasonable bipedal character, but having an out-of-the-box preset for Unreal Engine standard characters is so much easier for novice users. Unreal Engine uses a Z-up coordinate system, as opposed to the Y-up coordinate system used in Maya, iPi Mocap and many other animation systems. We used to have an option for conversion between Z-up and Y-up orientations of characters, but it was not obvious to many users. Now it is as easy as just selecting "Unreal" from the menu of available characters and rigs.
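The Z-up/Y-up conversion described above boils down to a fixed rotation of every point (or bone orientation) between the two conventions. As a minimal sketch of the idea (this is an illustration, not iPi Mocap's actual implementation; handedness and unit differences such as Unreal's centimeters are separate concerns):

```python
import numpy as np

# Rotation of +90 degrees about the X axis: maps the Y axis onto the Z axis.
# This is one common way to convert points from a Y-up system (Maya, iPi Mocap)
# to a Z-up system like Unreal Engine's.
Y_UP_TO_Z_UP = np.array([
    [1.0, 0.0,  0.0],
    [0.0, 0.0, -1.0],
    [0.0, 1.0,  0.0],
])

def y_up_to_z_up(point):
    """Convert a 3D point from a Y-up to a Z-up coordinate system."""
    return Y_UP_TO_Z_UP @ np.asarray(point, dtype=float)

# The Y-up "up" vector (0, 1, 0) lands on the Z axis:
print(y_up_to_z_up([0.0, 1.0, 0.0]))  # → [0. 0. 1.]
```

The inverse conversion (Z-up back to Y-up) is simply the transpose of this rotation matrix, since rotation matrices are orthogonal.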
Speed is one of the things I see in artists who use iPi Soft: how does iPi Soft enable users to work faster?
MN: One of the main advantages of our markerless mocap system is its very quick preparation of an actor for motion capture. Zero preparation time, in fact - no markers, no suits. This works great in two scenarios.
First, an animator can test his/her ideas almost instantly. As one of our users put it, "The moments where I need a capture, I pull out the Kinect, place it on a stool and capture action right there at my desk in roughly a 4×4 foot space." This is a very valuable capability for many animation tasks, including gameplay animations.
The second scenario is using mocap for various mass scenes that require lots of background actors performing different motions. Zero preparation time for actors means you can motion capture many actors efficiently, in a relatively short time.
What are your future goals for iPi Soft?
MN: We are working to improve in two directions: better accuracy and better processing speed. We’re working to improve the accuracy of head and foot tracking. Tracking these body parts is challenging and has required a lot of development time and resources. In addition, we are working to improve accuracy in low-light and variable-lighting situations. This should make life easier for many of our users.
Processing speed is a matter of software optimization. We have come a long way in that regard, recently releasing a version of our software with real-time tracking for one or two depth cameras. In the future, we plan to provide real-time tracking for a larger number of cameras, for example, for a configuration with 8 RGB cameras.
How does a beginner get started using iPi Soft?
MN: Beginners' challenges are more about understanding motion capture and acting in general than about working with our software in particular. That's why I recommend beginners familiarize themselves with acting theory before working with any mocap system. Ed Hooks' excellent book "Acting for Animators" is an invaluable resource, applicable both to acting for motion capture (in many smaller studios, that can be the animator's job) and to directing mocap sessions.
Another topic not well understood by beginners is the importance of props in motion capture. If a character interacts with a weapon, for example, it is important to give the actor something physical to hold during the mocap session. Otherwise, your results will not be very convincing. Designing good props that do not interfere with optical mocap is an art in itself, but it is also a lot of fun. I also recommend the writings of Ace Ruele and his masterclasses on Acting for Animators. He has a very good understanding of the physical aspects of motion capture in general and of props in particular and maintains a very interesting Facebook page https://www.facebook.com/iamaceruele/.
A great way of starting the mocap journey is to use our software with one depth-sensing camera. For those able to get hold of one, I suggest trying the new Azure Kinect. While this configuration is somewhat limited because it uses just one camera, the benefit is that it's very easy to set up. As you study the software and gain experience, you'll be able to move to multi-camera configurations.