There were six half-day workshops to choose from, and all took place on Monday morning (13th July 2015).

Slowness

Slow coding, slow doing, slow listening, slow performance

  • Hester Reeve, Sheffield Hallam University
  • Tom Hall, Anglia Ruskin University

Overview

Explorations of slowness over the last generation have ranged from the ‘slow food’ movement, ‘slow scholarship’ and ‘slow technology’ in design practice (Hallnäs and Redström 2001), to recent academic work on aesthetic slowness (Koepnick 2014) and, in popular culture, the emergence of ‘slow television’ (BBC Four Goes Slow, [http://www.bbc.co.uk/programmes/p02q34z8]). Whilst speed has been a ubiquitous trope surrounding cultures of the technological, in this workshop we focus on the implications of different notions of, and approaches to, slowness in relation to the body (hearing, seeing, performing), sound and composition (composition of code, music, writing or a mixture of all three).

This workshop is open to coders/performers and writers/theorists alike. Our objective is to allow participants to experience and understand the aesthetic value of altering the timescale of live coding and live performance, and to consider the wider social and theoretical meanings of doing so. We consider this workshop to be an experimental laboratory rather than a masterclass.

What to expect: firstly, through simple workshop situations, we will enable participants to experience the ‘texture of the present’ via a series of slow activities aimed at the senses. Secondly, through participants’ own work, we will provide a framework for considering and experimenting with how the insights gained above might open up potentials for their own practice of composition (and therefore for the experience of their audience or readership). At the end of the session there will be an opportunity for an open group discussion of the issues and discoveries that have arisen.

Some of the questions that have informed the content of our workshops are:

  • How does timescale adjust the meaning and experience of a work? Can we meaningfully distinguish between aesthetic and purely temporal slowness?
  • How can slowness be used as a non-oppositional force of the contemporary?
  • ‘To slow code, is to know code’? (Hall 2007)
  • What might a mindful experience of the electronic feel like?
  • To what extent does subjectivity–authorship control code and under what circumstances does it collaborate with code to let in creative ‘unknowns’?

We ask participants to bring a current piece of work in progress with them to the workshop (and a laptop if a coder, an instrument if a musician, and writing materials if a writer).

Workshop leaders

Hester Reeve has been working with time-based live art performance for over 20 years. Many of her solo works are over 4 hours long, sometimes spread over a period of days. In 2012 Reeve devised the AHRC-funded project “Live Notation – Extending Matters of Performance” with live coder Alex McLean ([http://blog.livenotation.org]). [http://www.shu.ac.uk/research/c3ri/people/hester-reeve]

Composer, performer and musicologist Tom Hall coined the music-related meaning of the term ‘slow code’ in a 2007 manifesto (Hall, 2007) and has a decade’s worth of experience teaching and performing using SuperCollider. Recent compositional work uses traditional instruments with electronics whilst embracing notions of aesthetic slowness. [http://www.ludions.com]

References

  • Hall, Tom. 2007. Towards a Slow Code Manifesto. Available online: http://ludions.com/texts/2007a/
  • Hallnäs, Lars, and Johan Redström. 2001. Slow Technology: Designing for Reflection. Personal and Ubiquitous Computing 5 (3): 201–212.
  • Koepnick, Lutz. 2014. On Slowness. New York: Columbia University Press.

Movement

Livecoding relations between body movement and sound

  • Marije Baalman, nescivi

A workshop on livecoding relationships between body movement and sound. Using wireless sensors that capture the movement of the body and a data sharing network, participants will collaboratively create mappings of sensor data to sonic or other media output.

Description

As (wireless) sensor interfaces become more available and accessible to artists, the creation of relationships between the data the sensors produce and the output in media has increasingly become a subject of research. Rather than fixing relationships or mappings between sensor data and output media, livecoding can create a dialog between all the performers in an interactive performance: between the movement and the code, between the movers and the coders. Even in the preparation of interactive performances, livecoding is a very valuable skill for quickly prototyping different approaches to mapping data, trying them out, and evaluating their artistic quality. The adaptations can range from changing parameter ranges to rewriting the algorithms that establish rules for mapping. As a mode of performance, this approach can enable a new form of improvisation, creating a dialog between dancers and livecoders: the dancers adapt their movements based on the mappings to output media created by the livecoders, and the livecoders adapt their code based on the movement of the dancers.

During the workshop we will collaboratively explore the livecoding of such mappings, using a wireless sensor system (the Sense/Stage MiniBee; [https://docs.sensestage.eu]), equipped with accelerometers and a selected range of other body-based sensors, and a data sharing network ([https://github.com/sensestage/xosc]).

The workshop will start with an exploration of the framework within which we will work, before going on to explore body movements, the sensor data they produce, and strategies for mapping this data to output media (mainly sound). While examples of algorithms will be given in SuperCollider, experienced livecoders should find them easy to translate into their preferred programming language, as long as it provides quick access to incoming OSC data. The workshop should end with a collaborative livecoding and movement session of all participants.
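To give a flavour of such a mapping, here is a minimal SuperCollider sketch (not from the workshop materials) that routes one incoming accelerometer value to the filter cutoff of a noise drone. The OSC address and message layout are assumptions for illustration; the Sense/Stage documentation defines the actual format.

    // Minimal sketch: map one accelerometer axis to a filter cutoff.
    // The address '/minibee/data' and the layout [id, x, y, z] are
    // assumptions; consult the Sense/Stage documentation.
    (
    s.waitForBoot {
        SynthDef(\wash, { |cutoff = 500|
            var sig = LPF.ar(PinkNoise.ar(0.2), cutoff.lag(0.1));
            Out.ar(0, sig ! 2);
        }).add;
        s.sync;
        ~synth = Synth(\wash);
        OSCdef(\acc, { |msg|
            var x = msg[2].asFloat; // first accelerometer axis, assumed 0..1
            ~synth.set(\cutoff, x.linexp(0, 1, 200, 8000));
        }, '/minibee/data');
    };
    )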

Participant requirements

  • A laptop with the livecoding programming language of your choice installed.
  • The livecoding programming language should be capable of receiving (and sending) custom OSC messages.
  • The laptop should have WiFi capabilities (or you should bring an ethernet cable).
  • Participants with a background in movement (dance, theatre, etc.) are also welcome, even if they are not livecoders themselves, as long as they are interested in collaborating with livecoders.

Fragments

Live Coding the OpenGL Fragment Shader with Audio Responsiveness

  • Shawn Lawson, Rensselaer Polytechnic Institute

Purpose

The intent of this workshop is to create a larger community of graphics-based live coders and to introduce a performance IDE, The Force. The Force is open source ([https://github.com/shawnlawson/The_Force]); run it here: [http://shawnlawson.github.io/The_Force/].

Curriculum

All portions are hands-on; the total length is approximately 3 hours. Additional detailed material can easily be included should a 4-hour slot be available. For self-starters, the IDE has Open Sound Control communication, should anyone wish to connect to external applications (see the sketch after the schedule below).

  • 15 min - Introductions and getting everyone up and running
  • 10 min - Explanation of the tool, how it's constructed, and how it works: lecture/demonstration
  • 25 min - Some basic functionalities of the OpenGL Fragment Shader: color, texture, uniforms, coordinate spaces
  • 10 min - Break/breather
  • 50 min - Some simple coding examples: basic math, shapes, lines, patterns, animation
  • 10 min - Break/breather
  • 60 min - Integrate audio source with some examples, go live coding crazy: audio input, OSC for the adventurous
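For those tempted by the OSC route, the following SuperCollider sketch sends the amplitude of the microphone input to The Force as a stream of OSC messages. The port (57121) and the address (/audio/amp) are hypothetical placeholders; check The Force’s documentation for the scheme it actually listens on.

    // Minimal sketch: forward microphone amplitude to The Force via OSC.
    // Port and address are hypothetical; see The Force's documentation.
    (
    s.waitForBoot {
        ~force = NetAddr("127.0.0.1", 57121); // hypothetical port
        SynthDef(\ampTrack, {
            var amp = Amplitude.kr(SoundIn.ar(0));
            SendReply.kr(Impulse.kr(30), '/amp', amp); // ~30 updates per second
        }).add;
        s.sync;
        Synth(\ampTrack);
        OSCdef(\ampForward, { |msg|
            ~force.sendMsg('/audio/amp', msg[3]); // msg[3] holds the amplitude
        }, '/amp');
    };
    )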

Utopia

Live Coding with Utopia

  • Birmingham Ensemble for Electroacoustic Research

This workshop will introduce participants to networked live coding in SuperCollider using the Utopia library ([https://github.com/muellmusik/Utopia]). Utopia is a modular library for the creation of networked music applications, and builds upon the work of the Republic Quark and other older network systems in SuperCollider. It aims to be modular (features available largely 'à la carte'), secure (provides methods for authentication and encryption), and flexible (to the extent possible, it tries not to impose a particular design or architecture). It provides functionality for synchronisation, communication, code sharing, and data sharing.
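As a flavour of the library’s modularity, the sketch below follows the examples in the Utopia README: an address book, decentralised peer discovery, chat, a network-synchronised clock, and code sharing. The class names are taken from the library; exact arguments may differ between versions.

    // A minimal sketch following the Utopia README examples.
    (
    ~addrBook = AddrBook.new;        // who is on the network?
    ~addrBook.addMe;                 // add this machine as a Peer
    ~hail = Hail(~addrBook);         // decentralised peer discovery
    ~chat = ChatClient(~addrBook);   // text chat between performers
    ~clock = BeaconClock(~addrBook); // network-synchronised tempo clock
    ~relay = CodeRelay(~addrBook);   // share evaluated code with everyone
    )
    // BeaconClock follows the TempoClock interface, so peers can schedule
    // events against a shared sense of time:
    ~clock.play({ "a beat shared by every peer".postln; 4 }); // every 4 beats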

For the last two years, the Birmingham Ensemble for Electroacoustic Research has used Utopia as the primary platform for its activities, and actively developed it in workshops, rehearsals, and concerts. In these sessions, members of BEER will introduce participants to the library and its possibilities for live coding, with a particular emphasis on creating networked structures for live coded improvisation.

Extramuros

Extramuros: collaborative live coding and distributed JavaScript visualizations

  • David Ogborn, McMaster University
  • Eldad Tsabary, Concordia University
  • Ian Jarvis, McMaster University
  • Alexandra Cárdenas, University of the Arts in Berlin

The extramuros software was developed to explore live coding and network music, bringing live coding musicians together around shared text buffers. A half-day workshop at ICLC 2015 around extramuros will be divided into two sections. The first half of the workshop introduces the software and leads to a collective jam session in the Tidal language. The second half of the workshop is a hack-in around new possibilities for distributed, live-coded JavaScript visualizations of collective live coding activity.

Details

extramuros was developed to explore live coding and network music, bringing live coding musicians together around shared text buffers. In its basic operation, a server is run on one computer. Some number of performers use a web browser to connect to the server and edit shared buffers of text code, with each person’s editing visible to the others. At any computer where the sounding result of the code is desired (for example, the individual machines in a globally distributed ensemble), a client application is run that receives evaluated text from the server and pipes it to the audio programming language of choice. The software is released on an ongoing basis through GitHub under a GPL version 3 license.

At ICLC 2015, we will conduct a half-day workshop around the extramuros software. In the first part, we will introduce the software, assist with troubleshooting, and perform a networked jam session in the Tidal language. In the second part, we will have a “hack-in” on new distributed visualization facilities. These will allow Open Sound Control messages to the server to arrive back in the connected web browsers as calls to a JavaScript stub function, which could be used to give feedback about syntax errors, other aspects of system state (such as audio levels), or more sophisticated forms of visualization of live coding activity. The intent for the hack-in is to collectively produce some strong examples of JavaScript visualization of live coding activity, which could potentially be incorporated into the extramuros codebase as examples for further development.
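As a rough illustration of that feedback path, the SuperCollider sketch below sends an OSC message to a running extramuros server, to be relayed to the connected browsers as a call to the JavaScript stub function. The port and address pattern are hypothetical placeholders; the extramuros repository defines the actual protocol.

    // Hypothetical sketch: report an audio level to the extramuros server
    // so browsers can visualize it. Port and address are placeholders.
    (
    var server = NetAddr("127.0.0.1", 8000); // hypothetical server port
    server.sendMsg('/feedback/levels', 0.8); // e.g. a measured audio level
    )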

Participants are invited to bring their own laptops with WiFi. No software installation is required as everything to be done in the workshop can be done through a web browser. Previous exposure to Tidal and/or JavaScript, while beneficial, is not required and we will demonstrate basic strategies applicable to both halves of the workshop.

Software: [https://github.com/d0kt0r0/extramuros]
Example of extramuros in action: [https://www.youtube.com/watch?v=zLR02FQDqOM]

Opusmodus

Opusmodus: Live Coding and Music Composition Environment Workshop

  • Stephane Boussuge, Composer, Sound-designer, Associate Composer at Opusmodus Ltd.

"Composing music today is a fascinating business. As the technology gets more complex, I feel more like a computer programmer than a musician. But I take the sounds in my head and in the world around me and bend the technology into the world I want to create." - Robert Rich

Algorithmic software is revolutionizing creative disciplines: 3D design, film animation and CGI effects, textiles, ceramics, fine art practice… We are now also seeing extraordinary signature buildings designed by applying parametric software (Rhino and Grasshopper).

Why Opusmodus?

Opusmodus operates by the same principles. It is a new live coding and music composition environment with hundreds of functions for generating and processing musical events, combined with features that also enable output in the form of score notation.

Designed to realize and render thoughts and ideas in standard staff notation, Opusmodus is particularly appropriate for creating music for human performance, combining instruments and voices. That is not to say it cannot be useful in other areas of music, but we have identified a gap in the production of concert and media music (including games) that Opusmodus can fill effectively.

At the heart of Opusmodus is OMN (Opus Modus Notation), a special notation describing every possible score notation output as a script. The many functions in Opusmodus can generate and manipulate OMN notation, for live coding or music score composition, giving a very elegant, powerful and innovative way to compose music.

This workshop will introduce you to the Opusmodus Environment and teach the basics of this powerful music creation software.

What are you going to get from this workshop?

  1. New possibilities for exploration and experimentation in music composition
  2. An understanding of a new toolbox with hundreds of tools
  3. Access to the Opusmodus user’s community
  4. Good knowledge of score generation and creation with Opusmodus tools
  5. A global view of all the software features:
       • Fast score generation
       • Export to MusicXML
       • MIDI import capabilities
       • A powerful music description language

What are the requirements?

A computer, a desire to learn, a willingness to share your passion for music composition with your peers around a truly innovative composition tool designed to leverage your creativity!

The Opusmodus workshop: how you can embrace your creativity with a powerful, user-centric tool designed by composers, for composers!

The workshop will start with the presentation of the Opusmodus environment…

How a composer uses the Opusmodus interface, with its many features, will always depend on her/his experience and personal preference. But one of the objectives behind the design of Opusmodus is to respond to the various approaches composers must take in particular projects under certain circumstances.

No creative task is ever quite the same, so it would be foolish to expect one workspace design to ‘catch all’ desires. Nevertheless, the Opusmodus interface is multi-faceted and has already proved it can deal with very exacting musical situations from the highly conceptual and experimental to the pragmatic needs of a commissioned work.

… followed by a presentation of the key features of Opus Modus Notation,

As music notation moves inexorably from printed pages to backlit digital displays, the OMN language responds fully to the future of music presentation. With computers replacing pianos as the composer’s helpmate, the conditions surrounding music creation have changed accordingly.

New music technology has focused largely on production and presentation, whereas the conceptualisation and origination of new music requires a very different paradigm. Opusmodus takes a step towards that paradigm, offering a third way forward, driven by its own notation script, OMN.

Working directly in OMN is perfect for those ‘on the fly’ experiments (test and learn!) that all composers make when they are starting out on a project.

It is almost like having a piano close by on which to lay down new creative thoughts, but one that always plays what’s written quite flawlessly and captures your creativity at a particular time, anywhere, everywhere.

… with a review of some typical approaches to music composition using key features of the Opusmodus toolbox (System Functions)

With such a powerful application, there are many things that are not just useful but necessary, and you need to dive into them. The advantage of a digital workspace for a composer is that it can bring together in a single location the many different and essential things we need for lasting music creation. We will take this opportunity to study some of the System Functions that form the vocabulary of the Opusmodus scripting language… in just a few hours!

… and finally improvise together with Opusmodus showing the features of the “Live Coding Instrument” (LCI):

The LCI gives the composer an intuitive control panel and the possibility of working in true live coding style directly with the script. In practice, composers who use the Live Coding Instrument often begin with a script, make a change, then ‘play’ that change from the buttons of the LCI control panel. Where it becomes a true composer’s instrument is in its “Save” button. Yes, the “Save” button. This function captures every nuance of a ‘live’ session to a MIDI file… and from MIDI file, via the Export menu, to OMN script, and from there, well, let’s say there is no frontier between technology and creativity.