The exo music visualization engine is fully hardware accelerated and features a 2D procedural texture generation unit and a 3D scene description system.
It allows mixing both approaches in the same environment.
Currently it runs on Mac OS X and uses OpenGL for acceleration. It uses PBuffers to create the procedural textures.
There are a few nice things about the engine:
- Uses XML to load descriptions, so it is very flexible
- Has a mathematical parser to evaluate expressions for all behaviours (particles, colors, waveform...)
- Can load 3D shapes in the XML description
- Has a fully modular 3D shape synthesizer, really fun to explore!
- A preset system allows storing effect descriptions and reloading them with a simple statement
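For illustration, here is a minimal sketch of what such an XML description might look like. All element and attribute names below are hypothetical, not the engine's actual schema; the point is how expressions (using the global time and beat variables) can drive layer and shape parameters directly from the description:

```xml
<!-- Hypothetical description: element and attribute names are illustrative only. -->
<scene>
  <texture2d name="plasma" width="256" height="256">
    <layer effect="rotozoom" angle="time * 0.5" zoom="1.0 + 0.3 * beat"/>
    <layer effect="fade" amount="0.1"/>
  </texture2d>
  <shape type="tunnel" texture="plasma" speed="1.0 + beat"/>
  <preset name="warp" load="tunnel-warp"/>
</scene>
```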
The number of 2D or 3D layers is only limited by the GPU and the available texture memory.
There are global and local variables; global variables expose the time and beat information.
There can be any number of local variables; they are allocated dynamically when the expressions are read from the description.
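The idea of on-demand local variables can be sketched as follows. This is a simplified Python illustration, not the engine's code: globals are pre-allocated, and a local slot is created the first time an expression mentions its name:

```python
class VariableTable:
    """Globals are fixed; locals are allocated on first reference."""
    GLOBALS = ("time", "beat")

    def __init__(self):
        self.values = {name: 0.0 for name in self.GLOBALS}

    def slot(self, name):
        # Allocate a local variable the first time an expression uses it.
        if name not in self.values:
            self.values[name] = 0.0
        return name

    def set(self, name, value):
        self.values[self.slot(name)] = value

    def get(self, name):
        return self.values[self.slot(name)]

table = VariableTable()
table.set("time", 1.5)    # global, updated by the engine each frame
table.set("speed", 0.2)   # local, created when the expression is first read
```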
The 2D effects are the classic dissolves, rotozoom, fades... all hardware accelerated through PBuffers.
Most effects allocate a PBuffer to render their content, then mix it into the final texture.
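The render-then-mix flow can be sketched like this. This is a plain Python stand-in, assuming buffers are simple arrays of intensities; in the real engine the buffers are PBuffers and the mix happens on the GPU:

```python
def render_effect(effect, size):
    # Stand-in for rendering an effect into its own offscreen buffer
    # (a PBuffer in the real engine).
    return [effect(x) for x in range(size)]

def mix(dest, src, amount):
    # Blend an effect's buffer into the final texture.
    return [d * (1.0 - amount) + s * amount for d, s in zip(dest, src)]

final = [0.0] * 4
fade = render_effect(lambda x: 1.0, 4)   # a "fade to white" buffer
final = mix(final, fade, 0.25)
```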
Expressions can be attached to several events: on init, on beat, and on render element, which makes it very flexible.
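The event binding can be illustrated with a small Python sketch. Here Python's built-in compile/exec stands in for the engine's mathematical parser, and the event names mirror the ones above; everything else is hypothetical:

```python
# Expressions are compiled once, then bound to engine events.
handlers = {"on_init": [], "on_beat": [], "on_render_element": []}

def bind(event, expression):
    # Python's compile() stands in for the engine's expression parser.
    handlers[event].append(compile(expression, "<expr>", "exec"))

def fire(event, env):
    # Run every expression attached to the event against the variable table.
    for code in handlers[event]:
        exec(code, {}, env)

env = {"size": 0.0}
bind("on_init", "size = 1.0")
bind("on_beat", "size = size * 1.5")
fire("on_init", env)
fire("on_beat", env)
```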
The current implementation uses few OpenGL extensions and so runs on a very wide range of hardware.
When building the 3D scene, any number of layers can be stacked, and the procedural texture can be rendered as a simple quad or mapped onto complex shapes like blobs, tunnels...
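Mapping a procedural texture onto such a shape comes down to generating texture coordinates along with the geometry. As a sketch (not the engine's code), here is one ring of a tunnel where the u coordinate wraps the texture around the circumference and v runs along the depth:

```python
import math

def tunnel_ring(segments, depth, radius=1.0):
    """One ring of a tunnel: positions plus UVs that wrap the
    procedural texture around the circumference."""
    ring = []
    for i in range(segments):
        a = 2.0 * math.pi * i / segments
        position = (radius * math.cos(a), radius * math.sin(a), depth)
        uv = (i / segments, depth)  # u wraps around, v runs along the tunnel
        ring.append((position, uv))
    return ring

ring = tunnel_ring(segments=8, depth=0.0)
```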
The modular 3D shape is just another layer; it is inspired by an article written by Andrew Glassner.
There are a few basic blocks that can be interconnected; the first one is the clock, which generates the basic nodes processed by the synthesizer.
The final one is a polygon module that has 4 inputs. In between there are parsers, translators, delays... that can be interconnected to build all kinds of dynamic shapes.
Through the parser the shape can be influenced by the spectrum, wave or beat values.
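A rough Python sketch of that module graph, with hypothetical class names (the real synthesizer also has parsers, delays, and other in-between modules): a clock emits nodes, a translator shifts them, and the final polygon module zips its 4 inputs into quads:

```python
import math

class Clock:
    """Emits the basic nodes (points on a circle) that flow through the graph."""
    def __init__(self, count):
        self.count = count
    def nodes(self, t):
        return [(math.cos(2 * math.pi * i / self.count + t),
                 math.sin(2 * math.pi * i / self.count + t), 0.0)
                for i in range(self.count)]

class Translate:
    """An in-between module: shifts every node it receives."""
    def __init__(self, source, dx, dy, dz):
        self.source, self.offset = source, (dx, dy, dz)
    def nodes(self, t):
        return [tuple(c + o for c, o in zip(n, self.offset))
                for n in self.source.nodes(t)]

class Polygon:
    """The final module: takes 4 inputs and emits one quad per node index."""
    def __init__(self, a, b, c, d):
        self.inputs = (a, b, c, d)
    def quads(self, t):
        corners = [inp.nodes(t) for inp in self.inputs]
        return list(zip(*corners))

clock = Clock(count=6)
shifted = Translate(clock, 0.0, 0.0, 1.0)
poly = Polygon(clock, shifted, clock, shifted)
quads = poly.quads(t=0.0)
```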
The first thing developed was a small application that can open an mp3 file and send the audio to the engine.
Once it was complete enough, I also turned it into an iTunes plug-in.
In the future I will add support for pixel shaders, first for the procedural texture generation.
It would also be fun to send real pictures in the rendering process.
Camera movement in the scene description could also be improved.
The main motivation to work on this prototype is to see how far I can go with a mathematical parser to build music visualization modules connected to hardware acceleration.