
Expanding Brain
Face Filter

Discovering Instagram Face Filters as somewhat late bloomers, our fascination rapidly grew over the past months to an irreversible scale. After the obvious "puking type and mapping cucumbers on your eyes" filters, yet another even better idea was waiting around every corner, with no stop in sight anymore.

So, for the first release of the Dinamo Face Filter series, we translated the famous Expanding Brain meme into the augmented face filter format. In the original meme, which started circulating on Reddit in 2017, the brain size expands relative to other variables. Though the expanding brain usually implies intellectual superiority over various objects, it is more commonly used ironically to imply the opposite.


For our interactive version, we modelled an entire 3D character and its expandable brain onto the Instagram user’s upper body and face. The user controls how their brain expands by tapping or holding a finger on the screen, relative to what is happening in the video.

Idea by Dinamo after janskishimanski
Development in collaboration with Moritz Tontsch
using Spark AR V70 and Blender 2.8


Testing out the brain expansion together with the Spark AR face mesh


Using a 3D Model and testing out different environments and styles


Painting an alpha map for the 3D model so the edge fades out smoothly and the model is only visible from the neck upwards, so it feels more like it’s your own body that is changing into the abstract model
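The neck fade described in the caption above can be thought of as a vertical alpha ramp. The function below is a hypothetical plain-Python illustration of that idea (the actual map was hand-painted in Blender), assuming a normalized vertical coordinate where 0 is the neck edge:

```python
def neck_fade_alpha(v, fade_start=0.0, fade_end=0.2):
    """Vertical alpha ramp: fully transparent below the neck line,
    fading smoothly to fully opaque above the fade band.
    `v` is a normalized vertical coordinate (0 = neck edge, 1 = top of head)."""
    if v <= fade_start:
        return 0.0
    if v >= fade_end:
        return 1.0
    # Smoothstep instead of a linear ramp, for a softer edge
    t = (v - fade_start) / (fade_end - fade_start)
    return t * t * (3.0 - 2.0 * t)
```

The smoothstep curve avoids the visible hard line a linear cutoff would produce where the model meets the user's real neck.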


Top: Refining materials and shading in Spark AR
Bottom: The Scene view and Preview in Spark AR


Moritz Tontsch
about developing the
Expanding Brain Face Filter

Face filters add a virtual layer on top of the filmed camera input. With face-tracking technology it is possible to track the facial features of the face in front of the camera and to simulate a three-dimensional scene. This makes it possible to virtually extend the physical body with 3D elements, images and visual effects, and to alienate the face from its natural form.
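Anchoring a virtual element to the tracked face boils down to transforming a local offset by the face's tracked position and rotation. Spark AR does this internally via its face tracker; the function below is only a simplified, hypothetical sketch of the idea, reduced to a position plus a yaw rotation:

```python
import math

def place_on_face(face_pos, face_yaw, local_offset):
    """Place a virtual element relative to a tracked face:
    rotate the local offset by the face's yaw (rotation around the
    vertical axis), then translate it to the tracked face position."""
    ox, oy, oz = local_offset
    cos_y, sin_y = math.cos(face_yaw), math.sin(face_yaw)
    rx = cos_y * ox + sin_y * oz
    rz = -sin_y * ox + cos_y * oz
    return (face_pos[0] + rx, face_pos[1] + oy, face_pos[2] + rz)
```

A real tracker provides a full 3D rotation and the face mesh itself, but the principle is the same: virtual content lives in the face's coordinate space, so it follows the head as it moves.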

In this project we tried to translate the Expanding Brain meme into a face filter while creating a fun and shareable experience. We tried to stay as conceptually and graphically close to the original meme as possible. Still, we decided to make some small changes to improve the usability and representation of the filter: we replaced the first step, where the original meme displays a skeleton, with the user's physical face, and continued with the subsequent brain steps from there.

The Expanding Brain meme displays the human body in a slightly abstracted form, without any personal facial features. The depicted body reads as biologically male, as it shows features that are associated with the male sex. For the face filter, which should represent a broad user group, a 3D model with more androgynous facial features was chosen.
The background of the original meme is relatively plain and monochrome. In the face filter we mixed these colors, as well as the abstract human model, with the camera input to make the filter more personal and more relatable to the physical situation captured by the camera. While in the first step the physical world is completely visible, the augmented virtual world becomes more and more dominant, climaxing in the last step of the face filter, where the physical environment is no longer visible at all.

The style of the human body in the Expanding Brain meme was recreated technically by feeding the camera input as an environment texture into a glass shader on a layer that adds its color information to the color of the camera image. This reproduces the glow that can be seen in the Expanding Brain model.
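The additive layer described above can be illustrated as a per-channel addition with clamping. This is a plain-Python stand-in for the Spark AR shader setup, not the actual patch graph:

```python
def additive_glow(camera_rgb, glass_rgb):
    """Add the glass shader's color on top of the camera image and
    clamp each channel to the displayable range, producing the
    blown-out highlights that read as a glow."""
    return tuple(min(1.0, c + g) for c, g in zip(camera_rgb, glass_rgb))
```

Because bright regions saturate to white while darker regions keep detail, the addition naturally brightens the silhouette the way the meme's glowing figure does.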

The different states of the Expanding Brain meme were translated into different steps that animate into each other when the screen is tapped. Technically, the animations were built with a system of patch groups that transition the animated scale, color and transparency values of the rendered model.
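The tap-driven step logic can be sketched as a small state machine. In the filter itself this was built from patch groups in Spark AR, so the Python below is only a conceptual stand-in with an assumed step count:

```python
class BrainExpand:
    """Conceptual stand-in for the patch-group step logic:
    each tap advances to the next brain stage and wraps around."""

    def __init__(self, num_steps=4):
        self.num_steps = num_steps
        self.step = 0

    def on_tap(self):
        """Advance to the next step on a screen tap."""
        self.step = (self.step + 1) % self.num_steps
        return self.step

    def blend_progress(self):
        """Normalized 0..1 value that the scale, color and
        transparency animations would follow."""
        return self.step / (self.num_steps - 1)
```

In Spark AR the equivalent role is played by a counter patch feeding transition patches, which interpolate the scale, color and transparency values between the current and next step.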

The abstract model into which the physical human is transformed was prepared and rigged in Blender. The implementation, scene building and interaction of the face filter were done in Spark AR Studio.

— Moritz Tontsch, September 2019
