Fabric Engine and a Void in 3DCC Machine Learning

About three weeks ago, Fabric Software abruptly ended the development of Fabric Engine, with no follow-up announcements. In this second post of a two-post series, I’ll go over why Fabric was a great fit for Machine Learning in the context of 3D content creation (animation, games, and VFX). Bear in mind these are my own personal opinions.

AI, beyond the hype

Nowadays the acronym AI (artificial intelligence) seems to be everywhere: AI cars, AI stealing your job, AI processing on your GPU, and so on. It is not unusual to see the buzzword AI attached to things that have no direct relation to artificial intelligence, like this tweet in which the WSJ uses VR goggles to depict AI.

WSJ tweet on AI, with an image of a man using a VR headset

VR goggles used to depict the concept of AI (credit: WSJ, Twitter)

Most of the time, when people talk about AI, they are really talking about a rapidly developing segment of AI called Machine Learning (ML). Machine Learning is a field of computer science concerned with techniques that make it possible to program computers with data instead of explicit instructions. ML techniques are based on either a statistical approach or a connectionist approach. The connectionist approach is in vogue nowadays, especially a specific flavor known as Deep Learning.

Gimme data

Data is what makes ML tick. This data can be of two kinds: labeled and unlabeled. Unlabeled data can be used to find patterns within the data itself. A very nice example of the use of ML with unlabeled data in a 3D content creation setting is the work of Loper et al. (2015). The authors used a dataset of 2000 scanned human bodies (per gender) and, using a statistical technique called PCA, found that they could describe the scanned bodies with less than 5% error using only 10 blendshapes. You can experiment with the results of this work in Maya by clicking here.

9 different virtual reproductions of the human body

The three principal components of the human shape (credit: Loper et al. 2015)
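
To make the idea concrete, here is a minimal Python sketch of this kind of unsupervised dimensionality reduction, using scikit-learn. The array sizes and values are made-up stand-ins; the real work used registered scans that share the same mesh topology:

    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in data: one flattened (x, y, z) vertex list per scanned body.
    rng = np.random.default_rng(0)
    n_bodies, n_vertices = 2000, 500
    scans = rng.normal(size=(n_bodies, n_vertices * 3))

    pca = PCA(n_components=10)          # 10 blendshapes, as in the paper
    weights = pca.fit_transform(scans)  # per-body blendshape coefficients
    mean_body = pca.mean_.reshape(-1, 3)
    blendshapes = pca.components_.reshape(10, -1, 3)

    # Each body is then approximated as
    #   mean_body + sum_i weights[body, i] * blendshapes[i]
    reconstructed = pca.inverse_transform(weights)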

Labeled data, on the other hand, is useful for solving problems where one wants to get a specific answer (output) from certain data (input). This means you must label the training data yourself, so the computer can learn what it is expected to output. Keeping examples within the same research group, Streuber et al. (2016) labeled those same scanned bodies through crowdsourcing, using descriptive words such as fat, skinny, tall, short, and so on. The authors used support vector regression to correlate the labels with the scanned bodies, in practice creating a method for generating 3D models from a set of written words. For more info on this project, click here.

6 different virtual bodies with accompanying word descriptions of their shapes

Crowdsourced labels regressed to blendshapes (credit: Streuber et al. 2016)
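
As a rough sketch (not the authors’ actual code), the regression step could look like this in Python, with made-up ratings and blendshape weights standing in for the real data:

    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    # Stand-in data: crowdsourced scores for 15 words ("fat", "skinny", ...)
    # and the 10 PCA blendshape weights of each rated body.
    rng = np.random.default_rng(0)
    ratings = rng.uniform(1, 5, size=(2000, 15))
    shape_weights = rng.normal(size=(2000, 10))

    # One support vector regressor per blendshape weight, driven by the words.
    model = MultiOutputRegressor(SVR(kernel="linear"))
    model.fit(ratings, shape_weights)

    # Scores for a new written description now map to a body shape.
    new_weights = model.predict(rng.uniform(1, 5, size=(1, 15)))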

But why Fabric?

As I explained in the first post, Fabric was a high-performance computing platform targeted at computer graphics tasks; it could read some industry-standard formats, it had its own rendering engine, and it could output data directly to a host DCC or game engine.

Most Machine Learning approaches require massaging the data quite a bit. You might want to infer a character’s speed from its positions, or normalize objects with respect to their size; you may need to remove the global transforms of your models, or register one data sample against another in some way. The list goes on and changes a lot depending on your data and on what you are trying to achieve.
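
To illustrate the kind of massaging involved, here is a small Python sketch of three of the steps above, assuming a hypothetical array of world-space joint positions:

    import numpy as np

    def preprocess(positions, fps=24.0):
        """positions: (n_frames, n_joints, 3) world-space joint positions."""
        # Infer per-frame speed from positions via finite differences.
        velocity = np.diff(positions, axis=0) * fps
        speed = np.linalg.norm(velocity, axis=-1)

        # Remove the global transform: express joints relative to the root.
        local = positions - positions[:, :1, :]

        # Normalize with respect to size (mean root-to-joint distance).
        scale = np.linalg.norm(local[:, 1:], axis=-1).mean()
        return local / scale, speed

    # Example: 120 frames of 24 joints.
    rng = np.random.default_rng(0)
    normalized, speed = preprocess(rng.normal(size=(120, 24, 3)))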

Fabric Engine’s node-based programming environment (Canvas) was just the perfect place for prototyping such data manipulation, orders of magnitude better than typing code and checking a stream of scalar values for errors.

Besides prototyping, you could output data as CSV, JSON, and (more recently) SQL, which is pretty much all you need to transfer data to an ML library. Or you could even wrap your library of choice in KL, as Stephen T. did with TensorFlow (a very prominent library developed by Google).
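
On the receiving end, such a dump is trivial to pick up in Python. A minimal sketch, where the file name and column layout are assumptions for illustration, not Fabric’s actual output:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical CSV export: one row per sample, last column is the target.
    data = np.loadtxt("export.csv", delimiter=",")
    X, y = data[:, :-1], data[:, -1]
    model = LinearRegression().fit(X, y)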

The era of MacGyver

I guess what I will miss the most about Fabric is having a hub for ML work: a hub with all the proper methods to manipulate kinematic and geometric data. One will still be able to use a combination of Python libraries and DCC APIs to read and manipulate data, then use other libraries and APIs to display the data in that same DCC or somewhere else.

I can think of two clear disadvantages the lack of a hub brings: (1) a steeper learning curve, and (2) an environment that is less prone to sharing.

Maybe, in time, if ML proves to have any relevance in VFX, games, and animation pipelines, something in this fashion will come up. For the time being, one will have to duct-tape together what’s already out there.

More on this topic

Processing data for Machine Learning with Fabric Engine

Using Neural Networks within Fabric Engine

What was and what was not Fabric Engine?

A couple of weeks ago, Fabric Software abruptly ended the development of Fabric Engine, with no follow-up announcements. In this two-post series, I’ll try to go over what Fabric Engine was, the different positionings it assumed through the years, and what voids it leaves in the CG community. Bear in mind these are my own personal opinions.

Update: check out the second post in this series, Fabric Engine and a Void in 3DCC Machine Learning.

In the beginning, there was KL

From the start, Fabric Engine was a “high-performance computation platform” (https://goo.gl/muzRzV); it was not supposed to be a plug-in for Maya or other DCCs (https://goo.gl/SxqzxG). Anything one wanted to compute in Fabric needed to be coded in its own special language, called KL. KL was an object-oriented scripting language with JavaScript syntax. Its big plus over something like Python was its just-in-time compilation and parallel computing capabilities. So, it was fast.
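
As a rough illustration from memory (so take the details with a grain of salt), a parallel KL operator looked something like this:

    // A PEX operator: Fabric compiled this just in time and executed the
    // body in parallel, once per index, over the whole points array.
    operator scalePoints<<<index>>>(io Vec3 points[], Scalar factor) {
      points[index] *= factor;
    }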

Diagram: Fabric Engine’s main components

However, why was it any good for CG folks?