Gen.AV 2

Hackathon and performance on Generative Audiovisuals
Hackathon: 25-26/Jul/2015, London Music Hackspace
Performance: 30/Jul/2015, Goldsmiths, University of London


About the projects

Butterfly
Butterfly is a sort of general-purpose scope viewer that can be mapped arbitrarily to any internal bits of SuperCollider synths. It is intended as an interactive instrument that can be manipulated in real time for generative audiovisual performances. The visual tool can be used with any SuperCollider synth definitions, including your own, as long as they follow some naming conventions.
Authors: Fiore Martin, Patrick Hartono

Project link:

Cantor Dust
Cantor Dust is an audiovisual instrument that generates, displays, and sonifies Cantor-set-like fractals. It allows the user to control the starting seed and number of iterations for several fractals at once, either through the graphical interface or through a MIDI controller. The project is written in plain JavaScript and runs entirely in a (modern) web browser.
Authors: Louis Pilfold, Berkan Eskikaya
Project link:
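The middle-thirds construction behind such fractals can be sketched in a few lines. This is an illustrative Python reduction, not the project's JavaScript code; the `seed` and `iterations` parameters stand in for the user-controllable values described above.

```python
def cantor_intervals(seed=(0.0, 1.0), iterations=3):
    """Return the intervals of a Cantor set after n iterations.

    Each iteration removes the open middle third of every interval,
    so 2**n intervals survive after n iterations.
    """
    intervals = [seed]
    for _ in range(iterations):
        next_intervals = []
        for lo, hi in intervals:
            third = (hi - lo) / 3.0
            next_intervals.append((lo, lo + third))  # keep left third
            next_intervals.append((hi - third, hi))  # keep right third
        intervals = next_intervals
    return intervals
```

Each surviving interval can then be drawn as a bar and, for sonification, mapped to a frequency band or rhythmic pulse.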

Esoterion Universe Gestenkrach
Esoterion Universe Gestenkrach is a fork of Esoterion Universe that adds support for the SuperCollider sound engine, with a much wider variety of sounds, and Leap Motion sensor integration for more playful universe navigation and planet sculpting. The UI was also adapted and made more directly responsive to Leap Motion input. Original description: under Gen.AV 1.
Authors: Borut Kumperščak, Jens Meisner
Previous contributors: Coralie Diatkine, Matthias Moos, Will Gallia
Project link:

Stap
Stap is a tap-reactive audiovisual system. It plays with the tactile, analog feel of tapping surfaces as a digital input device. This input and its gestures in turn drive sound and visuals expressively.
Authors: Alois Yang, George Profenza, Sabba Keynejad
Project link:

residUUm
residUUm is an attempt to sonify a particle system whose inhabitants exchange and discard their sonic characteristics as they collide, leaving remnants that contribute to a din of noise as their larger bodies fade. The sound engine and graphics are done, but the exchanging of characteristics has yet to be accomplished. The project uses Processing to send visual characteristics of particle bodies to be sonified in Pure Data.
Authors: Ireti Olowe, Giulio Moro
Project link:
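Since the exchange of characteristics is still unimplemented, here is a minimal sketch of how such a collision rule could look. Everything about it, including the parameter names `pitch` and `amplitude` and the 10% "remnant" shed on collision, is an invented illustration, not the authors' design.

```python
class Particle:
    """Toy particle carrying sonic characteristics (names are illustrative)."""

    def __init__(self, pitch, amplitude):
        self.pitch = pitch
        self.amplitude = amplitude


def collide(a, b):
    """Swap the particles' pitches and shed a little amplitude as a remnant.

    The returned shed amount could feed a shared noise layer as the
    larger bodies fade (an assumption, not the project's actual logic).
    """
    a.pitch, b.pitch = b.pitch, a.pitch
    shed = 0.1 * (a.amplitude + b.amplitude)
    a.amplitude *= 0.9
    b.amplitude *= 0.9
    return shed
```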

wat
wat (an audio-visual exploration of chaotic 2-dimensional dynamical systems) aims to develop a 2D or 3D visualization based on continuous cellular automata with various evolving rulesets, and to sonify the result in a musical way. The core principle involves applying a matrix of mathematical operations (generally non-linear functions) to an image specifying the starting conditions. We apply our operation matrix by sliding it across the image (as in convolution), applying the operation in each element of the matrix to the corresponding element of the image. The result is complex, evolving, unpredictable moving textures which can be sonified with the right method. Furthermore, the rules of the system can be 'performed' by varying the operation set and coefficients in real time through some form of input.
Authors: Alessia Milo, Bogdan Vera, Christian Heinrichs
Project link:
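One step of such a continuous cellular automaton can be sketched as follows. The specific rule here, a sine map applied to the 3x3 neighbourhood average on a wrapping grid, is an illustrative stand-in for the project's operation matrix, and the coefficient is the kind of value a performer could vary live.

```python
import math

def step(grid, coeff=3.7):
    """One update of a toy continuous cellular automaton.

    Each cell becomes a non-linear function of the average of its
    3x3 neighbourhood, with toroidal wrap-around at the edges.
    Cell values stay in [0, 1].
    """
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += grid[(y + dy) % h][(x + dx) % w]
            avg = acc / 9.0
            out[y][x] = abs(math.sin(coeff * avg))  # non-linear rule
    return out
```

Iterating `step` on a seed image and varying `coeff` over time produces the evolving textures described above; sonification could then read features off the resulting grid.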


Thank you to London Music Hackspace for hosting the Hackathon.
Thank you to Peter Mackenzie for the technical support.
Thank you to all hackathon participants.

Gen.AV 2 is co-organized by the EAVI (Embodied AudioVisual Interaction) group at Goldsmiths, University of London, in collaboration with London Music Hackspace. It is part of the Enabling AVUIs research project being conducted at EAVI, and supported by the European Union (Marie Curie programme). More information:

Gen.AV 1

Hackathon and performance on Generative Audiovisuals
Hackathon: 6-7/Dec/2014, Goldsmiths, University of London
Performance: 6/Feb/2015, Goldsmiths, University of London


About the projects:

ABP
In ABP, a Pure Data patch creates music and sends data to the Cinder visuals software via OSC. Several visual parameters, such as color, alpha, zoom, repetition of objects, and tempo, are set by the Pure Data patch.
Authors: Adam John Williams, Bruce Lane, Piotr Nierobisz
Project link:
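For readers unfamiliar with what travels between the patch and the visuals, here is a didactic encoder for the OSC wire format (null-padded strings, a type tag list, big-endian 32-bit floats). In practice both Pure Data and Cinder use existing OSC objects and libraries; the `/visuals/zoom` address below is an invented example, not one from the project.

```python
import struct

def encode_osc(address, *floats):
    """Encode a minimal OSC message with float arguments.

    Strings are null-terminated and padded to 4-byte boundaries;
    each float argument is a big-endian 32-bit value.
    """
    def pad(b):
        return b + b'\x00' * (4 - len(b) % 4)

    msg = pad(address.encode('ascii'))
    msg += pad((',' + 'f' * len(floats)).encode('ascii'))
    for f in floats:
        msg += struct.pack('>f', f)
    return msg

# e.g. a zoom parameter sent from the patch to the visuals (hypothetical address)
packet = encode_osc('/visuals/zoom', 1.5)
```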

drawSynth
drawSynth consists of a GUI to control sound and image. Users can draw shapes and select colours; by doing this, they control the synthesis engine. Each pair of vertices of a shape controls an FM oscillator, with one vertex for the carrier and the other for the modulator. The project is built with openFrameworks for graphics and interaction and Max/MSP for sound, using FM synthesis. OSC is used for communication between the two.
Authors: Antonio Creo, Diego Fagundes, George Haworth
Project link:
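The carrier/modulator pairing described above follows the standard two-operator FM formula, y(t) = sin(2π·fc·t + I·sin(2π·fm·t)). A minimal sketch, with the understanding that in drawSynth the frequencies would come from vertex positions and the actual synthesis happens in Max/MSP:

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, index):
    """One sample of a two-operator FM voice.

    `index` scales how deeply the modulator bends the carrier's phase;
    index 0 reduces to a plain sine at the carrier frequency.
    """
    mod = index * math.sin(2 * math.pi * modulator_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod)
```

Raising `index` thickens the spectrum with sidebands spaced at the modulator frequency, which is how one vertex pair can yield a wide range of timbres.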

Esoterion Universe
Esoterion Universe consists of an empty 3D space that can be filled with planet-like audiovisual objects. The objects can be manipulated and can be given different appearances and sounds. Users can navigate in space and the audiovisual outcome is influenced by that navigation. Generic, media-neutral terms such as warmth, sharpness, size and oscillation are used to characterise and connect sound and visuals. A semantic approach was chosen for this connection instead of a one-to-one parameter mapping. The GUI consists of sliders distributed concentrically, in the shape of a star graph, embedded in the centre of the object. It integrates aesthetically with the objects. openFrameworks with OpenGL is used for graphics and interaction, and Max/MSP for sound. OSC is used for communication.
Authors: Borut Kumperscak, Coralie Diatkine, Matthias Moos, Will Gallia
Project link:
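The semantic approach means one high-level slider fans out to several low-level sound and visual parameters at once, rather than mapping one-to-one. A rough sketch; all parameter names, ranges and curves below are invented for illustration and are not taken from the project:

```python
def apply_warmth(warmth):
    """Map a single semantic 'warmth' value in [0, 1] to several
    lower-level parameters at once (hypothetical names and ranges)."""
    return {
        'filter_cutoff_hz': 200.0 + 4000.0 * (1.0 - warmth),  # warmer = darker tone
        'harmonic_richness': 0.3 + 0.7 * warmth,
        'colour_temperature': 0.2 + 0.8 * warmth,             # visual hue bias
    }
```

The appeal of this design is that a performer reasons about one perceptual quality while the system keeps sound and visuals coherently coupled underneath.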

GS.avi
GS.avi is a gestural instrument that generates continuous spatial visualizations and music from the input of a performer. The features extracted from a performer's gesture define the color, position, form and orientation of a 3-dimensional Delaunay mesh: its composite triangles, vertices, edges and walk. The music, composed using granular synthesis, is generated from features extracted from the mesh: its colors, strokes, position, orientation and patterns. The project was created using Processing and Max/MSP. OSC is used to communicate between the two.
Authors: Ireti Olowe, Miguel Ortiz, Will Brown
Project link:
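Granular synthesis, mentioned above, builds sound from many short windowed "grains" of tone. A minimal sketch of a single grain, assuming a Hann window and a sinusoidal source; in GS.avi the grain parameters would presumably be driven by mesh features, which is a simplification of the mapping the authors describe:

```python
import math

def grain(duration_s, freq_hz, sample_rate=44100):
    """Generate one Hann-windowed sinusoidal grain.

    The window fades the sine in and out so that grains can be
    overlapped densely without clicks.
    """
    n = int(duration_s * sample_rate)
    out = []
    for i in range(n):
        window = 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))  # Hann window
        out.append(window * math.sin(2 * math.pi * freq_hz * i / sample_rate))
    return out
```

A granular texture then layers hundreds of such grains per second, scattering their pitch, duration and onset according to the control features.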

Modulant
Modulant allows for the creation of images and their sonification. The present implementation is built upon image-importing and freehand-drawing modules that can be used to create arbitrary visual scenes, with more constrained functional and typographical modules in development. The audio engine is inspired by a 1940s synthesizer, the ANS, which scans across images: one axis is time and the other axis is frequency. Modulant thus becomes a graphical space to be explored sonically, and vice versa. The project is built with Processing for graphics and interaction, and Ruby with Pure Data for sound.
Authors: Berkan Eskikaya, Louis Pilfold, Alessandro Zippilli
Guest performer: Blanca Regina
Project link:
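The ANS-style scanning can be sketched as: each image column is one moment in time, each row maps to a frequency, and pixel brightness sets the amplitude of that frequency's sinusoid. The linear 100-2000 Hz frequency spacing below is an illustrative assumption (the ANS itself used a dense microtonal scale), not Modulant's actual tuning.

```python
import math

def sonify_column(column, t, f_lo=100.0, f_hi=2000.0):
    """Turn one image column into a single audio sample at time t.

    `column` holds per-row brightness values in [0, 1]; row index is
    mapped linearly onto the [f_lo, f_hi] frequency range.
    """
    n = len(column)
    sample = 0.0
    for row, brightness in enumerate(column):
        freq = f_lo + (f_hi - f_lo) * row / (n - 1)
        sample += brightness * math.sin(2 * math.pi * freq * t)
    return sample / n  # normalise by row count
```

Sweeping `t` forward while stepping through columns turns any drawn or imported image into an evolving spectrum.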


Thank you to Alessandro Altavilla and Peter Mackenzie for the documentation and technical support.
Thank you to all workshop participants and performers.

Gen.AV 1 is co-organized by the EAVI (Embodied AudioVisual Interaction) group at Goldsmiths, University of London, in collaboration with London Video Hackspace. It is part of the Enabling AVUIs research project being conducted at EAVI, and supported by the European Union (Marie Curie programme). More information: