Inspired by concepts from granular synthesis and phase shifting in sound, “Grain by grain” is a visual performance generated through the composition of “visual grains” in real time using fragment shaders, where a “visual grain” is defined as a sample of a video signal or recording.
It is a displacement loop of the “frame by frame” paradigm, in which the environment for live coding visuals is perceived as an infinitely nested sequence of frames. Mutating frame into grain introduces a new generation of visual narratives.
Working with “visual grains” leads to building “visual phrases”, which can be defined by a finite number of frames, originally from a recording; they can be looped, transformed in time and/or space, mixed, and overlaid.
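To make the definition concrete, here is a minimal, hypothetical fragment-shader sketch of a single “visual grain”: a spatial window of the incoming video frame, shown only during a short time window. The uniform names (u_video, u_time) and the varying v_uv are assumptions for illustration, not the performance's actual code.

```glsl
// Hypothetical sketch: a "visual grain" as a sample of a video signal.
precision mediump float;

uniform sampler2D u_video; // assumed: current frame of the video signal
uniform float u_time;      // assumed: seconds since start
varying vec2 v_uv;         // 0..1 across the output, from the vertex stage

void main() {
    // Spatial part of the sample: a quarter-size window of the frame.
    vec2 windowUV = v_uv * 0.25 + vec2(0.4, 0.4);
    // Temporal part: the grain is visible for 0.5 s out of every 2 s.
    float alive = step(mod(u_time, 2.0), 0.5);
    gl_FragColor = texture2D(u_video, windowUV) * alive;
}
```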
How can concepts from granular synthesis be applied to video signals? Can we control, through GPU programming, a “phase shifting” effect on video samples in real time? How can we generate more structure in visual compositions and, at the same time, find our own expression? What is the role of computers?
“Grain by grain” is a visual performance generated through the composition of “visual grains” in real time using fragment shaders, where a “visual grain” is defined as a sample of a video signal or recording.
It shows an evolution of thought on live coding visuals. My practice used to work through a feedback system, perceiving the environment, a shader editor, as an infinitely nested sequence of frames, which I referred to as a “frame by frame” paradigm.
The evolution from this paradigm into another can be thought of in the same way DNA mutates during the D-loop, or displacement loop. Mutating frame into grain introduces a new “grain by grain” paradigm, and a new generation of visual narratives.
Working with “visual grains” leads to building “visual phrases”, which can be defined by a finite number of frames, originally from a recording; they can be looped, transformed in time, mixed, overlaid, and transformed in space by a fragment shader. Introducing these grains detaches the concept of a frame from a linear time basis.
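As a hedged sketch of how grains might compose into a phrase inside a fragment shader, the following assumes the recording has been unrolled into a sprite-sheet texture, which is exactly what the “grain generator” described in the next paragraph produces. Every uniform and helper name here (u_sheet, u_grid, grainFrame, playGrain) is illustrative, not the performance's actual code.

```glsl
// Hypothetical sketch: a "visual phrase" built from two looped grains.
precision mediump float;

uniform sampler2D u_sheet; // assumed: recording rendered as a grid of frames
uniform vec2 u_grid;       // assumed: columns and rows of frames in the sheet
uniform float u_time;      // seconds since start
varying vec2 v_uv;

// Read one frame of the recording out of the sprite sheet.
vec4 grainFrame(float frame, vec2 uv) {
    vec2 cell = 1.0 / u_grid;
    float col = mod(frame, u_grid.x);
    float row = floor(frame / u_grid.x); // orientation depends on the sheet layout
    return texture2D(u_sheet, (vec2(col, row) + uv) * cell);
}

// A looped grain: a finite range of frames, wrapped over and over in time.
vec4 playGrain(float start, float length, float rate, vec2 uv) {
    float frame = start + floor(mod(u_time * rate, length));
    return grainFrame(frame, uv);
}

void main() {
    // Two grains from the same recording form a short phrase:
    vec4 a = playGrain(0.0, 24.0, 12.0, v_uv);                        // looped at 12 fps
    vec4 b = playGrain(48.0, 12.0, 30.0, vec2(1.0 - v_uv.x, v_uv.y)); // faster, mirrored in space
    gl_FragColor = mix(a, b, 0.5);                                    // overlaid by mixing
}
```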
Throughout 2022, there have been two main lines of exploration: building a real-time video sampler, and building a “grain generator” that transforms a video (MP4) into a sprite sheet (JPEG). Using the latter makes it possible to control how (with what frequency and duration) the sample is reproduced at each frame, leading to exploring a non-uniform “play” of a recording, and to having multiple lines of a sample reproduced at different speed rates and with different offsets (starting points).
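Under the same assumptions as the previous sketch (and reusing its grainFrame and playGrain helpers), several lines of one sample can run concurrently simply by calling the looping helper with different rates and offsets. The rates, offsets, and weights below are illustrative, and the sheet is assumed to hold at least 90 frames.

```glsl
// Replaces main() in the sketch above: several "lines" of the same
// 60-frame sample, each with its own speed rate and starting offset.
void main() {
    vec3 acc = vec3(0.0);
    acc += playGrain( 0.0, 60.0,   6.0, v_uv).rgb * 0.4; // slow line from frame 0
    acc += playGrain(10.0, 60.0,  24.0, v_uv).rgb * 0.3; // faster line, offset by 10 frames
    acc += playGrain(30.0, 60.0, -12.0, v_uv).rgb * 0.3; // reversed line, offset by 30 frames
    gl_FragColor = vec4(acc, 1.0);
}
```

Note that a negative rate works because GLSL's mod() always returns a non-negative result for a positive modulus, so the frame index wraps backwards through the grain, giving a reversed, non-uniform play of the recording.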