Abstract: We present a real‐time system for character control that relies on the classification of locomotive actions in skeletal motion capture data. Our method is both progress-dependent and style-invariant. Two deep neural networks correlate body shape and implicit dynamics to locomotive types and their respective progress. In comparison to related work, our approach does not require a setup step and lets the user act in a natural, unconstrained manner. Our method also outperforms related work in scenarios where the actor performs sharp changes in direction and highly stylized motions, while remaining at least as accurate in other scenarios. Our motivation is to enable character control of non‐bipedal characters in virtual production and live immersive experiences, where mannerisms in the actor’s performance may be an issue for previous methods.
Presented at the 31st Conference on Graphics, Patterns and Images (SIBGRAPI) – awarded an Honorable Mention
Summary: Modern motion capture systems can record human motion with high precision. Editing this kind of data, however, is troublesome due to its volume and complexity. In this paper, we present a method for decoupling the aspects of human motion that are strictly related to locomotion and balance from other movements that may convey expressiveness and intentionality. We then demonstrate how this decoupling is useful for creating variations of the original motion, or for mixing different actions together.
Many Machine Learning libraries expose a Python interface, and many DCC applications embed Python. But you will often find that these DCC applications and libraries are binary incompatible. Here is how to solve this problem in Maya for Windows.
About three weeks ago, Fabric Software abruptly ended the development of Fabric Engine without any follow-up announcements. In this second post of a two-post series, I’ll go over why Fabric was a great fit for Machine Learning in the context of 3D content creation (animation, games, and VFX). Bear in mind these are my own personal opinions.
AI, beyond the hype
Nowadays the acronym AI (artificial intelligence) seems to be everywhere: AI cars, AI stealing your job, AI processing in your GPU, and so on. It is not unusual to see the buzzword AI attached to things that have no direct relation to artificial intelligence, like this tweet in which the WSJ uses VR goggles to depict AI.
Most of the time, when people talk about AI they are really talking about a rapidly developing segment of AI called Machine Learning (ML). Machine Learning is the field of computer science concerned with techniques that make it possible to program computers with data instead of explicit instructions. ML techniques are based on either a statistical or a connectionist approach. The connectionist approach is in vogue nowadays, especially a specific variant known as Deep Learning.
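To make "programming with data instead of explicit instructions" concrete, here is a minimal sketch of my own (not from the post): rather than hard-coding the Celsius-to-Fahrenheit formula, we recover it from example pairs by fitting a line.

```python
import numpy as np

# Example pairs instead of an explicit formula: (celsius, fahrenheit).
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])

# Fit f = a * c + b by least squares -- this is the "learning" step.
A = np.vstack([celsius, np.ones_like(celsius)]).T
(a, b), *_ = np.linalg.lstsq(A, fahrenheit, rcond=None)

print(round(a, 2), round(b, 2))  # recovers a = 1.8, b = 32.0
```

The program never states the conversion rule; it is induced from the data, which is the essence of the ML approach.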
Data is what makes ML tick. This data can be of two kinds: labeled and unlabeled. Unlabeled data can be used to find patterns within the data itself. A very nice example of the use of ML with unlabeled data in a 3D content creation setting is the work of Loper et al. (2015). The authors used a dataset consisting of 2000 scanned human bodies (per gender) and, using a statistical technique called PCA, found that they could describe scanned bodies with less than 5% error using only 10 blendshapes. You can experiment with the results of this work in Maya by clicking here.
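The PCA recipe behind that result can be sketched in a few lines. The toy data below stands in for the scan dataset (the real one had thousands of scans, each a long vector of vertex positions); the point is only to show the mechanics: center the data, take the SVD, keep the top 10 components, and measure reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the scans: 200 "bodies", each a flattened vector of
# vertex coordinates, built with low intrinsic dimensionality the way
# real body-shape variation behaves.
latent = rng.normal(size=(200, 10))
basis = rng.normal(size=(10, 300))
bodies = latent @ basis + rng.normal(scale=0.01, size=(200, 300))

# PCA via SVD on mean-centered data.
mean = bodies.mean(axis=0)
centered = bodies - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)

# Keep the top 10 principal components (the "blendshapes") and reconstruct.
k = 10
reconstructed = mean + (centered @ Vt[:k].T) @ Vt[:k]
error = np.linalg.norm(bodies - reconstructed) / np.linalg.norm(bodies)
print(f"relative reconstruction error: {error:.4f}")
```

Because the toy data really is 10-dimensional plus noise, 10 components reconstruct it almost perfectly; on real scans the same procedure gave the authors their sub-5% figure.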
A couple of weeks ago, Fabric Software abruptly ended the development of Fabric Engine without any follow-up announcements. In this two-post series, I’ll go over what Fabric Engine was, the different positionings it assumed through the years, and what voids it leaves in the CG community. Bear in mind these are my own personal opinions.
Update, check the second post in this series: Fabric Engine and a Void in 3DCC Machine Learning.
In the beginning, there was KL
However, why was it any good for CG folks?
I’ve recently published the code for feNeuralNet as an open source project. It is a Fabric Engine extension for evaluating previously trained neural networks. Fabric Engine is a platform for CG development that works standalone but is also integrated into many animation packages, such as 3ds Max, Maya, Softimage, and Modo. Yes, that means you can now play with neural nets in any of those packages.
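What "evaluating a previously trained network" amounts to is just the forward pass: a chain of matrix multiplies and activations over weights exported by the training framework. Here is a minimal Python sketch of that idea (my own illustration, not feNeuralNet's KL code; the weights below are random placeholders where real, loaded weights would go).

```python
import numpy as np

def forward(x, layers):
    """Evaluate a feed-forward network. `layers` is a list of (W, b)
    pairs; tanh on hidden layers, linear output -- a common small-MLP
    setup. Training happens elsewhere; this is inference only."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x

# Placeholder "pre-trained" weights; in practice these would be loaded
# from a file written by the training framework.
rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(3, 8)), np.zeros(8)),   # input -> hidden
    (rng.normal(size=(8, 1)), np.zeros(1)),   # hidden -> output
]
y = forward(np.array([[0.1, 0.2, 0.3]]), layers)
print(y.shape)  # (1, 1)
```

Since the forward pass needs no ML framework at runtime, it is the natural piece to re-implement inside a DCC-integrated environment.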
If you are wondering what good having neural networks in an animation package is, so am I! On a more serious note, machine learning is enabling people in many fields to come up with computational solutions to problems that were previously very hard to solve… I am curious to see what real applications emerge in animation and VFX pipelines in the next few years.
In this three-part video series, I show the challenges in pre-processing motion capture data for machine learning and how one can go about this task, using Fabric Engine as the development tool. If you find these videos helpful, or if you have further comments or questions on this topic, please leave a reply below.
A thread from the Fabric Engine forum: http://forums.fabricengine.com/discussion/797/processing-mocap-data-for-machine-learning-with-fabric-engine#latest
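As a taste of the kind of pre-processing involved, here is a minimal Python sketch of two steps that come up constantly with mocap data (my own illustration of common practice, not the exact pipeline from the videos): making joints root-relative so the clip does not depend on where the actor stood, and standardizing channels so learning algorithms see well-scaled inputs.

```python
import numpy as np

def preprocess_clip(joint_positions, hip_index=0):
    """Pre-process one mocap clip of shape (frames, joints, 3).

    1. Express every joint relative to the hip, removing the actor's
       absolute position in the capture volume.
    2. Standardize each channel to zero mean / unit variance, which
       most learning algorithms expect.
    Returns the flattened, normalized clip plus the mean/std needed
    to undo the normalization later.
    """
    root = joint_positions[:, hip_index:hip_index + 1, :]
    local = joint_positions - root                            # step 1
    flat = local.reshape(local.shape[0], -1)
    mean, std = flat.mean(axis=0), flat.std(axis=0)
    normalized = (flat - mean) / np.where(std > 0, std, 1.0)  # step 2
    return normalized, mean, std
```

Real pipelines add more steps (facing-direction alignment, resampling to a fixed frame rate, foot-contact labeling), but almost all of them start with some variant of the two above.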
In this work we propose a framework for using human acting as input for the animation of non-humanoid creatures: captured motion is classified using machine learning techniques, and a combination of preexisting clips and motion retargeting is used to synthesize new motions. This should lead to a broader use of motion capture.
This work was presented as a poster at SIGGRAPH 2016. Click here for the full publication.