Many machine learning libraries expose a Python interface, and many DCC applications embed Python. But you will often find that a DCC application and an ML library are binary incompatible. Here is how to work around this problem in Maya for Windows.
About three weeks ago Fabric Software abruptly ended the development of Fabric Engine, with no follow-up announcement. In this second post of a two-post series, I’ll try to go over why Fabric was a great fit for machine learning in the context of 3D content creation (animation, games, and VFX). Bear in mind these are my own personal opinions.
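One common way around binary incompatibility (not necessarily the exact approach described in the post) is process isolation: run the ML library in a separate Python interpreter and exchange data with it instead of loading its binaries into Maya. A minimal sketch, where the worker script and its `sum` computation are stand-ins for a real ML call, and `interpreter` would normally point at a system Python with the ML library installed:

```python
import json
import subprocess
import sys

def run_in_external_python(payload, interpreter=sys.executable):
    """Send data to a separate Python process and read back its result.

    Inside Maya, `interpreter` would be a system Python that has the ML
    library installed, keeping its binaries out of Maya's own process.
    """
    worker = (
        "import json, sys\n"
        "data = json.load(sys.stdin)\n"
        "# stand-in for the actual ML library call\n"
        "result = {'sum': sum(data['values'])}\n"
        "json.dump(result, sys.stdout)\n"
    )
    proc = subprocess.run(
        [interpreter, "-c", worker],
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)

print(run_in_external_python({"values": [1, 2, 3]}))  # {'sum': 6}
```

The trade-off is serialization overhead on every call, but neither process ever links against the other's incompatible binaries.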
AI, beyond the hype
Nowadays the acronym AI (artificial intelligence) seems to be everywhere. AI cars, AI stealing your job, processing AI in your GPU, and so on… It is not unusual to see the buzzword AI attached to things that have no direct relation to artificial intelligence, like this tweet in which the WSJ uses VR goggles to depict AI.
Most of the time when people talk about AI, they are really talking about a rapidly developing segment of it called Machine Learning (ML). Machine Learning is the field of computer science concerned with techniques that make it possible to program computers with data instead of explicit instructions. ML techniques generally follow either a statistical approach or a connectionist approach. The connectionist approach is in vogue nowadays, especially the family of techniques known as Deep Learning.
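"Programming with data" can be shown in a few lines. Instead of hard-coding the rule y = 2x + 1, we hand the computer example points and let a least-squares fit recover the rule (the data and the hidden rule here are made up for illustration):

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b: the 'program' is learned from data."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Points generated by the hidden rule y = 2x + 1; the fit recovers it.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(data)
print(a, b)  # 2.0 1.0
```

Real ML models are vastly larger, but the principle is the same: parameters come from data, not from the programmer.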
Data is what makes ML tick. This data can be of two kinds: labeled and unlabeled. Unlabeled data can be used to find patterns within the data itself. A very nice example of using ML with unlabeled data in a 3D content creation setting is the work of Loper et al. (2015). The authors used a dataset of 2000 scanned human bodies (per gender) and, using a statistical technique called PCA, found that they could describe scanned bodies with less than 5% error using only 10 blendshapes. You can experiment with the results of this work in Maya by clicking here.
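The "10 blendshapes" result boils down to a linear model: a body is the mean shape plus a weighted sum of basis shapes (the principal components PCA found). A minimal sketch with toy data standing in for real vertex arrays:

```python
def blend(mean_shape, basis, weights):
    """Reconstruct a shape as mean + sum_i weights[i] * basis[i].

    mean_shape: flattened vertex coordinates of the average body
    basis: list of blendshape deltas, each the same length as mean_shape
    weights: one scalar per blendshape (10 per body in Loper et al.)
    """
    out = list(mean_shape)
    for w, shape in zip(weights, basis):
        for i, d in enumerate(shape):
            out[i] += w * d
    return out

# Toy data: a 3-"vertex" mean shape and two PCA deltas (illustrative only).
mean = [0.0, 1.0, 2.0]
deltas = [[1.0, 0.0, -1.0], [0.0, 0.5, 0.5]]
print(blend(mean, deltas, [2.0, -1.0]))  # [2.0, 0.5, -0.5]
```

The compression is in the weights: instead of storing thousands of scanned vertices per body, you store roughly ten numbers.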
A couple of weeks ago Fabric Software abruptly ended the development of Fabric Engine, with no follow-up announcement. In this two-post series, I’ll try to go over what Fabric Engine was, the different positionings it assumed through the years, and what voids it leaves in the CG community. Bear in mind these are my own personal opinions.
Update: check out the second post in this series, Fabric Engine and a Void in 3DCC Machine Learning.
In the beginning, there was KL
However, why was it any good for CG folks?
I’ve recently published the code for feNeuralNet as an open source project. It is a Fabric Engine extension for running inference on previously trained neural networks. Fabric Engine is a platform for CG development that works standalone but is also integrated into many animation packages, like 3ds Max, Maya, Softimage, and Modo. Yes, that means you can now play with neural nets in any of those packages.
If you are wondering what good having neural networks in an animation package is, so am I! On a more serious note, machine learning is enabling people in many fields to come up with computational solutions to problems that were previously very hard to solve… I am curious to see what real applications can emerge in animation and VFX pipelines in the next few years.
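feNeuralNet's actual KL API isn't shown here, but "simulating a previously trained network" just means a forward pass through fixed weights. A sketch in Python, with hard-coded weights standing in for a trained model and tanh chosen as an illustrative activation:

```python
import math

def forward(x, layers):
    """Forward pass through fully connected layers given as (weights, biases).

    weights is row-major: weights[j] holds the incoming weights of output
    unit j. A tanh activation is applied after every layer.
    """
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# "Trained" parameters, hard-coded: 2 inputs -> 2 hidden units -> 1 output.
layers = [
    ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),
    ([[1.5, 1.5]], [0.0]),
]
print(forward([0.5, -0.5], layers))
```

Training happens elsewhere; an extension like this only needs to evaluate the network fast enough to be usable inside a rig or deformer graph.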
In this three-part video series I show the challenges of pre-processing motion capture data for machine learning and how one can go about this task, using Fabric Engine as a development tool. If you find these videos helpful, or if you have further comments or questions on this topic, please leave a reply below.
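The videos cover the full pipeline; as a small taste, one typical first step (an assumption here, not a transcript of the videos) is making joint positions root-relative, so that clips recorded in different parts of the capture volume become comparable:

```python
def root_relative(frames, root_index=0):
    """Subtract the root joint from every joint, frame by frame.

    frames: list of frames, each a list of (x, y, z) joint positions.
    Removes global translation, a common normalization before feeding
    mocap data to a learning algorithm.
    """
    out = []
    for joints in frames:
        rx, ry, rz = joints[root_index]
        out.append([(x - rx, y - ry, z - rz) for x, y, z in joints])
    return out

# One frame with a root joint and one other joint (toy values).
frame = [[(1.0, 2.0, 3.0), (2.0, 2.0, 3.0)]]
print(root_relative(frame))  # [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]]
```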
A thread from the Fabric Engine forum: http://forums.fabricengine.com/discussion/797/processing-mocap-data-for-machine-learning-with-fabric-engine#latest
In this work we propose a framework for using human acting as input for the animation of non-humanoid creatures; captured motion is classified using machine learning techniques, and a combination of pre-existing clips and motion retargeting is used to synthesize new motions. This should lead to a broader use of motion capture.
This work was presented as a poster at SIGGRAPH 2016. Click here for the full publication.
Abstract: The number of products capable of displaying stereoscopic (also known as 3D) images has been growing in recent years. The use of this technology has outgrown the silver screen and is now available in televisions, computers, tablets, and even cell phones. Due to its nature, content created for stereoscopic media requires attention to some characteristics not present in the context of monoscopic media. With a focus on image creation, the objective of this research was to assess how different stereoscopic image generation methods can affect human perception. To achieve this, a virtual environment was created, and from it different videos were generated using various methods, including converging cameras, parallel cameras, and depth image-based rendering (DIBR). These videos were shown to participants who assessed the picture quality, depth quality, and visual comfort of the media. It was found that there was very little difference between the perception of images generated by parallel and convergent cameras, while there was a substantial difference in perception between these two types of image and DIBR images. Such results can significantly affect the choice of technology for stereoscopic image generation, influencing the production costs, the methods involved, and human and machine time consumption.
Originally presented at the International Conference on 3D Imaging, December 2014.
Dias Velho e os Corsários (1988) is a graphic novel, written and illustrated by Eleutério Nicolau da Conceição, which depicts a pirate invasion of the island of Santa Catarina in the 17th century. Using a historic event as inspiration, the work successfully represents local themes such as culture and social organization. The audience to which the publication was originally targeted (13+) allowed the story to be represented in a realistic manner, with complex characters and episodes of violence and death. This work details the adaptation of that story to the medium of animation and a significantly younger audience (pre-schoolers).
This work was originally presented at the XVIII Colóquio Internacional da Escola Latino-Americana de Comunicação, December 2014.