VIDNESS VISUALS (2012-2014)

This half-hour live recording shows, for the first time, a vidness set played with only two iPads: one running a custom Lemur controller layout, controlling the other, which runs a custom Unity3D renderer.

Note: If you're too impatient or simply don't have the time to watch the whole piece, please press play anyway, skip to the 20:00 mark, and watch a few minutes.

Music: "this is not..." DJ-set by Prince of Denmark

Materials used: plexiglass screen, projector, 2x iPad, Unity3D Pro and Lemur software


My goal in programming this prototype was to integrate into one system the main functions I had used across other software to create my visuals. In the resulting system, comparatively few things can go wrong while it's running, and nothing distracts me from engaging with the instrument.

One iPad runs a custom rendering app built with the Unity3D engine, displaying its imagery full screen. At its core, a virtual camera points directly at a stack of four rectangular layers, each displaying this very camera's image as its texture. The shader behind that texture exposes parameters for opacity and basic additive color mixing, plus extra coloring and brightening to feed starting energy into the feedback loop; saturation and contrast parameters complete the shader's color controls. The layers can be scaled, translated, and rotated as a whole, and the textures themselves can additionally be scaled, tiled, and offset relative to the rectangle of their layer. The horizontal and vertical axes of the touchscreen directly control the horizontal and vertical angle of the virtual camera relative to the layer stack as a whole; this is the app's only user-facing control. All other parameters are set via OpenSoundControl (OSC) messages sent over WiFi from the second iPad.
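To make the feedback structure concrete, here is a minimal sketch of how such a rig can be wired up in Unity with C#. It is an illustration under assumed names (FeedbackRig, layerRenderers, maxAngle), not the original renderer code, and it omits the per-layer shader parameters and the OSC listener:

    // FeedbackRig.cs -- minimal sketch, attached to the virtual camera.
    using UnityEngine;

    public class FeedbackRig : MonoBehaviour
    {
        public Renderer[] layerRenderers;  // the four stacked layer quads (assumed field)
        public float maxAngle = 30f;       // maximum camera tilt per axis in degrees (assumed value)

        RenderTexture feedback;

        void Start()
        {
            // All four layers sample the texture the camera renders into,
            // which closes the video feedback loop.
            feedback = new RenderTexture(Screen.width, Screen.height, 0);
            foreach (Renderer r in layerRenderers)
                r.material.mainTexture = feedback;
        }

        void Update()
        {
            // Touchscreen x/y directly controls the camera's horizontal/vertical
            // angle relative to the layer stack -- the app's only on-screen control.
            if (Input.touchCount > 0)
            {
                Vector2 p = Input.GetTouch(0).position;
                float yaw   = (p.x / Screen.width  - 0.5f) * 2f * maxAngle;
                float pitch = (p.y / Screen.height - 0.5f) * 2f * maxAngle;
                transform.localRotation = Quaternion.Euler(-pitch, yaw, 0f);
            }
        }

        // Requires Unity Pro (image effects): copy each finished frame into the
        // feedback texture while still passing it through to the screen.
        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            Graphics.Blit(src, feedback);
            Graphics.Blit(src, dst);
        }
    }

Because the layers display the previous frame of the very camera that films them, every transform and color adjustment is applied again on each pass, which is where the characteristic feedback trails come from.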

The second iPad runs a custom controller layout built with the Lemur software. All of the above-mentioned parameters can be adjusted manually for each layer, or they can be rhythmically animated by one of the four LFO groups. Each group mixes two LFOs, each a sine, triangle, sawtooth, or square wave, with individual frequency multiples, amplitudes, and phases. The whole LFO synthesis revolves around a central BPM value that has to be tapped in manually. Additionally, the layout supports crossfading, splitting the four-layer stack into two two-layer stacks. Lastly, it offers a control for the trace level, which determines how long old frames remain visible before being overdrawn by newer ones.
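Each LFO group can be pictured as a small two-oscillator synth. The sketch below shows one plausible implementation in C#; the class and member names (Lfo, LfoGroup, FrequencyMultiple) are assumptions for illustration, not the actual Lemur patch:

    // LfoGroup.cs -- sketch of one of the four LFO groups (assumed structure).
    using System;

    public enum Waveform { Sine, Triangle, Sawtooth, Square }

    public class Lfo
    {
        public Waveform Shape = Waveform.Sine;
        public float FrequencyMultiple = 1f; // multiple of the group's base beat rate
        public float Amplitude = 1f;
        public float Phase;                  // phase offset in cycles, 0..1

        // "beats" is elapsed time expressed in beats; returns -Amplitude..+Amplitude.
        public float Sample(float beats)
        {
            float t = (beats * FrequencyMultiple + Phase) % 1f; // position in cycle, 0..1
            switch (Shape)
            {
                case Waveform.Sine:     return Amplitude * (float)Math.Sin(2.0 * Math.PI * t);
                case Waveform.Triangle: return Amplitude * (4f * Math.Abs(t - 0.5f) - 1f);
                case Waveform.Sawtooth: return Amplitude * (2f * t - 1f);
                default:                return Amplitude * (t < 0.5f ? 1f : -1f); // square
            }
        }
    }

    public class LfoGroup
    {
        public Lfo A = new Lfo();
        public Lfo B = new Lfo();

        // Mix both oscillators, clocked by the manually tapped-in BPM.
        public float Sample(float seconds, float bpm)
        {
            float beats = seconds * bpm / 60f;
            return A.Sample(beats) + B.Sample(beats);
        }
    }

Deriving both oscillator rates from one tapped-in BPM keeps all animations locked to a single musical tempo while the clock itself stays manual, with no audio analysis involved.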

In short, I have started building a video synthesizer that offers a basic set of abstract functions proven over years of experimenting and performing live visuals for many different flavors of music, while deliberately refusing any kind of sound-controlled automation along the way.