Publication: https://doi.org/10.5281/zenodo.15530091
In today’s landscape of diverse sound generation methods, we focus on novel auditory experiences enabled by bi-directional streaming for storytelling and generative sound synthesis. To this end we built Gencaster, a platform for live-streaming dynamically generated, non-linear compositions and stories over the internet without the need for an app. Gencaster provides audio streaming experiences with dynamic and generative content that can be controlled and modified on the fly by the listener or the creator, enabling live-coding situations with multiple participants. Each user receives a unique stream, instead of the same synchronized stream for everyone (as in traditional radio broadcasting) or purely on-demand playback (as on current streaming and podcast platforms). This way of streaming audio enables formats that are shaped by user preferences and interactions. Thanks to the low latency of WebRTC, we can realize new forms of storytelling and sound distribution that are not limited to spoken content but extend to new musical compositions. Relying on WebRTC also makes it possible to embed our player into any website, lowering the barrier for creators and listeners to access and create unique, generative streams. The state of each stream can be influenced by the listener (e.g. through position or other user input) or shaped through a real-time graph-editor interface. This allows live coding and the management of multiple divergent states, each listener receiving a unique version of the stream, which can also be merged when needed, a concept we explore further in this paper.
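The idea of per-listener stream states that diverge through user input and can later be merged can be illustrated with a minimal sketch. This is not Gencaster’s actual implementation; all names (`ListenerState`, `merge`, the parameter keys) are hypothetical and chosen only to make the concept concrete.

```python
# Hypothetical sketch of per-listener stream state (not Gencaster's API):
# each listener's stream is driven by its own parameter set, which can
# diverge through user input and later be merged with others.

from dataclasses import dataclass, field


@dataclass
class ListenerState:
    """State driving one listener's unique version of the stream."""
    listener_id: str
    params: dict = field(default_factory=dict)  # e.g. position, tempo

    def update(self, **changes):
        # User input (e.g. a position update) nudges this stream's state.
        self.params.update(changes)


def merge(states):
    """Merge divergent listener states, e.g. to re-synchronize a group.

    Later states win on conflicting keys -- a deliberately simple policy
    chosen for this sketch; a real system would need a richer strategy.
    """
    merged = {}
    for state in states:
        merged.update(state.params)
    return ListenerState("merged", merged)


a = ListenerState("a", {"scene": "intro"})
b = ListenerState("b", {"scene": "intro"})
a.update(position="north")              # listener a diverges
b.update(position="south", tempo=90)    # listener b diverges differently
shared = merge([a, b])
print(shared.params)  # {'scene': 'intro', 'position': 'south', 'tempo': 90}
```

The last-write-wins merge policy is only one possible choice; the point of the sketch is that divergence and merging operate on explicit per-listener state rather than on a single shared broadcast.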