Fabric Engine and a Void in 3DCC Machine Learning

cg news

About three weeks ago Fabric Software abruptly ended the development of Fabric Engine without any follow-up announcement. In this second post of a two-post series, I’ll try to go over why Fabric was a great fit for Machine Learning in the context of 3D content creation (animation, games and VFX). Bear in mind these are my own personal opinions.

AI, beyond the hype

Nowadays the acronym AI (artificial intelligence) seems to be everywhere. AI cars, AI stealing your job, processing AI in your GPU, and so on… It is not unusual to see the buzzword AI alongside things that have no direct relation to artificial intelligence, like this tweet in which the WSJ uses VR goggles to depict AI.

VR goggles used to depict the concept of AI (credit: WSJ, Twitter)

Most of the time when people talk about AI, they are really talking about a rapidly developing segment of AI called Machine Learning (ML). Machine Learning is a field of computer science concerned with techniques that make it possible to program computers with data instead of explicit instructions. ML techniques are based on either a statistical approach or a connectionist approach. The connectionist approach is in vogue nowadays, especially a specific flavor known as Deep Learning.

Gimme data

Data is what makes ML tick. This data can be of two kinds: labeled and unlabeled. Unlabeled data can be used to find patterns within the data itself. A very nice example of the use of ML with unlabeled data in a 3D content creation setting is the work of Loper et al. (2015). The authors used a dataset consisting of 2000 scanned human bodies (per gender) and, using a statistical technique called PCA (principal component analysis), found that they could describe the scanned bodies with less than 5% error using only 10 blendshapes. You can experiment with the results of this work in Maya by clicking here.

The three principal components of the human shape (credit: Loper et al. 2015)
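To make the unlabeled-data idea concrete, here is a minimal sketch of reducing registered body scans to a handful of blendshapes with PCA. It is not the authors’ actual pipeline; the array shapes, the scikit-learn usage and the random placeholder data are all my own assumptions.

```python
# A sketch of the PCA idea: registered body scans (same vertex count and order)
# flattened into rows, reduced to 10 blendshape weights each.
# The data below is a random placeholder, NOT real scan data.
import numpy as np
from sklearn.decomposition import PCA

n_scans, n_verts = 2000, 6890                  # assumed dataset size and mesh resolution
scans = np.random.rand(n_scans, n_verts * 3)   # one flattened (x, y, z, ...) row per body

pca = PCA(n_components=10)                     # keep only 10 "blendshapes"
weights = pca.fit_transform(scans)             # per-body blendshape weights
mean_body = pca.mean_                          # the average body shape

# Rebuild every scan from its 10 weights and measure the relative error.
rebuilt = pca.inverse_transform(weights)
rel_error = np.linalg.norm(rebuilt - scans) / np.linalg.norm(scans - mean_body)
print("reconstruction error: {:.1%}".format(rel_error))
```

On random placeholder data the error is of course meaningless; on real, registered scans the first few components already capture most of the variation, which is what the 10-blendshape figure above refers to.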

Labeled data, on the other hand, is useful for solving problems where one wants to get a specific answer (output) from certain data (input). This means you must label the training data yourself, so the computer can learn what it is expected to output. Keeping the examples within the same research group, Streuber et al. (2016) labeled those same scanned bodies through crowdsourcing, using descriptive words such as fat, skinny, tall, short and so on. The authors used support vector regression to correlate the labels to the scanned bodies and in practice created a method for generating 3D models from a set of written words. For more info on this project click here.

Crowdsourced labels regressed to blendshapes (credit: Streuber et al. 2016)
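And a matching sketch for the labeled case: regress the crowdsourced word ratings onto the blendshape weights, in the spirit of Streuber et al. (2016) but not their actual implementation. The descriptor count, the rating scale and the scikit-learn SVR setup are assumptions for illustration.

```python
# A sketch of the labeled case: support vector regression from crowdsourced word
# ratings to blendshape weights. Ratings, descriptors and weights are placeholders.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

n_bodies, n_words, n_shapes = 2000, 5, 10
word_ratings = np.random.rand(n_bodies, n_words) * 5    # e.g. 0-5 scores for fat, skinny, tall, ...
shape_weights = np.random.rand(n_bodies, n_shapes)      # per-body blendshape weights (e.g. from the PCA above)

# One support vector regressor per blendshape weight.
model = MultiOutputRegressor(SVR(kernel="linear")).fit(word_ratings, shape_weights)

# "Generate" a body from words: predict its blendshape weights from new ratings,
# then feed those weights to the shape model to get vertex positions.
new_ratings = np.array([[4.5, 1.0, 3.0, 2.0, 2.5]])
predicted_weights = model.predict(new_ratings)
```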

But why Fabric?

As I have explained in the first post, Fabric was a high-performance computing platform targeted at computer graphics tasks; it could read some industry standard formats, it had its own rendering engine, and could output data directly to a host DCC or game engine.

Most Machine Learning approaches require massaging data quite a bit. You might want to infer the speed of a character from its positions, or normalize objects with respect to their size; you may need to remove the global transforms of your models or register one data sample against another in some way. The list goes on and changes a lot depending on your data and what you are trying to achieve.
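As a small illustration of what that massaging tends to look like in plain Python, here is a sketch with hypothetical arrays; the frame rate, the joint data and the normalization choice are all assumptions.

```python
# A sketch of common data massaging steps: infer speed from positions,
# strip the global (root) transform, normalize for size.
import numpy as np

fps = 24.0
positions = np.random.rand(100, 3)        # placeholder: a world-space position per frame
root_positions = np.random.rand(100, 3)   # placeholder: the character root per frame

# Speed inferred from consecutive positions (scene units per second).
speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps

# Remove the global transform by expressing positions relative to the root.
local_positions = positions - root_positions

# Normalize with respect to overall size so differently sized characters compare.
scale = np.linalg.norm(local_positions, axis=1).max()
normalized = local_positions / scale
```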

Fabric Engine’s node programming environment (Canvas) was just the perfect place for prototyping such data manipulation, orders of magnitude better than typing code and checking a stream of scalar values for errors.

Besides prototyping, you could output data in CSV, JSON, and more recently SQL, pretty much all you need to transfer data to an ML library. Or you could even wrap your library of choice in KL, like Stephen T. did with TensorFlow (a very prominent library developed by Google).
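On the receiving end, such an export is trivial to pick up. A sketch, assuming a hypothetical CSV file where the last column holds the label:

```python
# Loading a hypothetical Canvas CSV export (one sample per row, label in the
# last column) into numpy, ready for whatever ML library comes next.
import numpy as np
from sklearn.linear_model import LinearRegression

data = np.loadtxt("canvas_export.csv", delimiter=",", skiprows=1)  # skip the assumed header row
inputs, targets = data[:, :-1], data[:, -1]

# Any library that accepts numpy arrays will do from here; a trivial example:
model = LinearRegression().fit(inputs, targets)
```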

The era of MacGyver

I guess what I will miss the most about Fabric is having a hub for ML work. A hub that has all the proper methods to manipulate kinematic and geometric data. One will still be able to use a combination of Python libraries and DCC APIs to read and manipulate data, then use other libraries and APIs to display the data in that same DCC or somewhere else.
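For reference, that duct-tape route might look something like this in Maya: pull kinematic data out with the Python API and hand it to numpy for the ML side. The joint name, frame range and output path are hypothetical.

```python
# Pulling a joint's world-space position per frame out of Maya with the Python API,
# then handing it to numpy. Joint name, frame range and output path are made up.
import numpy as np
import maya.cmds as cmds

joint = "hips_jnt"
positions = []
for frame in range(1, 101):
    cmds.currentTime(frame, edit=True)
    positions.append(cmds.xform(joint, query=True, worldSpace=True, translation=True))

positions = np.array(positions)              # now usable by any ML library
np.save("hips_positions.npy", positions)     # or hand it straight to the next script
```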

I can think of two clear disadvantages the lack of a hub brings: (1) a steeper learning curve, and (2) an environment that is less prone to sharing.

Maybe, in time, if ML does prove relevant to VFX, games and animation pipelines, something in this fashion will come up. For the time being, one will have to duct-tape together what’s already out there.

More on this topic

Processing data for Machine Learning with Fabric Engine

Using Neural Networks within Fabric Engine

Citroen Aircross

professional work

The last job I participated in was of a very rare breed. Animaking was bold enough to mix a bunch of different techniques, namely live action, miniature models, stop motion, CG and post… phew. It all blends into a nice view of the Aircross automobile running through the Atacama Desert…

I was a small part of this great effort, running most particle sims. It was very nice to work with some old friends and meet a few more nice people.

Watch in HD, if you will…

workshop

training

Back from the metropolis! This past week I presented a workshop on ICE to 40 people at the Melies school of cinema and animation in São Paulo. Those who attended got a clear idea of what an interactive visual programming environment like ICE can bring to the table in the context of animation and effects. We also established a panorama of most types of sims that exist in this and other platforms, trying to understand the pros and cons of each. Besides this overview, we got into the guts of the math behind ICE and some of the things that those not acquainted with the tool have a hard time with (like data types and contexts). To finish it all off, tornadoes and explosions were simulated. Good times!

Motion Tools

research and development

Motion Tools is a small collection of tools that aims to aid motion graphics work created inside Softimage. It does so by providing many ICE compounds and partially abstracting the ICE tree construction process. It eases the creation, procedural animation and simulation of many objects or chunks of geometry.

Check out this product’s page!

motion tools v0.4

research and development

Motion Tools is a small collection of tools that aims to aid motion graphics work created inside Softimage. This latest release adds the capability of controlling polygons and polygon islands with particles, enabling one to drive these elements with regular ICE nodes or Motion Tools’ compounds.
Some high-priority bugs and workflow enhancements were also tackled, although, due to time constraints, these improvements fell short of the original intention. I hope this still proves useful to some.

Download (right click, or drag into Softimage):
Softimage 2013 – MotionTools/v0.4/2013/MotionTools.xsiaddon
Softimage 2012 – MotionTools/v0.4/2012/MotionTools.xsiaddon

Release Notes:
MotionTools/v0.4/ReleaseNotes.txt

Video:
vimeo.com/44039628