
Neural magazine interview on live coding (2007)

Here’s an interview which appeared in the excellent Neural magazine in June 2007 (issue 27).  A scan is also available.

Live Coding: I think in text
Alex McLean

Alessandro Ludovico: The term ‘live coding’ is usually meant to
describe the coding of music on the fly. It seems a process of
unveiling the (running) machine to manipulate it, resounding
accordingly. What are your main concerns while performing live?

Alex McLean: When it’s good I have no concerns, and can just get on
with developing the music. I’m really just switching focus between
what Ade and Dave (the other ‘slub’ members) are doing and what I’m
adding to that. Whether I need to stop doing something to give them
room, or whether they’re reaching a conclusion with their stuff and I
need to get ready to take the lead with some new code.

AL: Live coding is “deconstructing the idea of the temporal dichotomy
of tool and product”, as stated on the TOPLAP website. So the tool
mutates into a product. In your opinion is it regaining its status of
magmatic digital data? Or is it mutating into a hybrid, powerful
machine-oriented code?

AM: I’m not sure what you mean by ‘magmatic digital data’. I think
though that live coding isn’t about tools or products, but instead
about languages and musical activity. Tool doesn’t really come into
it.

With the commercial model it goes:

[code] -> compiled -> [tool] -> used -> [music]

The dichotomy arises because the person making the tool is different
from the person making the music.

With livecoding it goes:

[code] -> interpreted -> [music]

where the code is modified to change the music.

So the code and the music come closer together by cutting out the
tool stage. Of course the big secret about the commercial model is
that a lot of the music comes from the code, as compiled into the
tool. As Kim Cascone says, "The tool is the message." Well, in the
case of livecoding the tool isn't the message - there is no tool. The
code is the message! And the music is the message... And the music is
the code...
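The interpreted pipeline above can be sketched in a few lines. This is a hypothetical illustration in Python, not slub's actual software: the "code" is a pattern string, the interpreter turns it into timed events, and editing the string while a scheduler loops over the events is what changes the music.

```python
def interpret(pattern, cycle_length=2.0):
    """Interpret a space-separated pattern string into (onset, sound) events.

    '~' marks a rest; everything else names a sound. A scheduler looping
    over the returned events would realise them as audio.
    """
    steps = pattern.split()
    step_dur = cycle_length / len(steps)
    return [(i * step_dur, name) for i, name in enumerate(steps) if name != "~"]

# Editing this string *is* the performance: the interpreter re-reads it
# and the resulting event list (the "music") changes immediately.
events = interpret("bd ~ sn bd")
# events -> [(0.0, 'bd'), (1.0, 'sn'), (1.5, 'bd')]
```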

AL: And how do you feel the ambivalent code evolution / music so
generated? Is it a parallel (but conceptually linked) flow or a
digital cause/effect relationship?

AM: There is a feedback loop. The livecoder writes code, which makes
sound, which the livecoder hears and perceives as music, and which they
then react to by editing the code. The code is a kind of notation for
the music. Unlike traditional notation the code describes the sound
completely, because it is read by a formal interpreter that is in turn
described as code. Lovely!

AL: Changing the code while it runs seems similar to composing phrases
on the fly (as we humans are used to doing). Do you think that live
coding has some 'semiotic' characteristics that can be compared to
live poetry improvisation?

AM: No, but I believe it will go in this direction. In fact I have
become very interested in articulatory speech synthesis, which makes
sound from models of the human body. My current research project is
to apply techniques from speech synthesis to musical sounds, not
necessarily human-like sounds. There is a rich history of people talking
about writing down musical sounds as 'vocable' words, for example
Canntaireachd for bagpipe and Bols for tabla sounds. I want to make a
synthesis system for livecoding so I can type the word
"krapplesnaffle" and have it turned into sound and immediately placed
into a livecoded rhythmic structure. [http://speechless.lurk.org/]
contained my experiments towards this...
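The vocable idea can be made concrete with a toy mapping — purely illustrative, and unrelated to the actual speechless.lurk.org experiments: split a word into consonant-cluster-plus-vowel syllables, let the consonants pick a timbre and the vowel set the event length.

```python
import re

# Hypothetical vowel-to-duration mapping; a real system would map
# articulatory features to synthesis parameters instead.
VOWEL_LENGTH = {"a": 1.0, "e": 0.5, "i": 0.25, "o": 1.5, "u": 0.75}

def vocable_to_events(word):
    """Split a word into consonant+vowel syllables, mapped to (timbre, length)."""
    syllables = re.findall(r"[^aeiou]+[aeiou]?", word)
    return [(s.rstrip("aeiou"), VOWEL_LENGTH.get(s[-1], 0.5)) for s in syllables]

vocable_to_events("krapplesnaffle")
# -> [('kr', 1.0), ('ppl', 0.5), ('sn', 1.0), ('ffl', 0.5)]
```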

AL: Another key point of live coding performances is to have no backup
(MiniDisc, DVD, safety-net computer). Is this meant to legitimise the
(possible) accident as a part of the performance?

AM: Yes, a little danger is good, adding an edge to the performance
both for us and the audience... There are three of us though, and if
one of our systems goes down the others can take over. Then the
audience has some fun watching our boot up procedure :)

AL: You also used to play live (as half of the 'slub' band) with your
own 'command-line music'. Why did you choose the minimal command-line
interface? Which software was involved?

AM: Correction: since 2006 there are now three of us: Adrian Ward,
Dave Griffiths and myself Alex McLean. I use the UNIX shell because I
think in text. It's fast, there's a beautiful relationship between
data and code, and it's easy to recall and modify past actions - you
don't have to repeat yourself all the time like with GUIs. When
interactive commandline shells were first developed they were called
"conversational languages," part of a field of research called
"conversational computing." It's a shame that this terminology fell out
of use.
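Part of why text is so direct is that a single formula over a sample counter is already an instrument. The sketch below is illustrative, in the spirit of "bytebeat" one-liners rather than anything slub actually played: it computes a second of raw 8-bit audio that could be written to stdout and piped to a sound device from the shell (e.g. to `aplay` on Linux, whose default format is 8 kHz unsigned 8-bit).

```python
def sample(t):
    # An arbitrary bit-twiddling formula over the sample counter t;
    # editing the expression and re-running is the "conversation".
    return (t * 5 & t >> 7 | t * 3 & t >> 10) & 0xFF

# One second of audio at 8 kHz, as raw unsigned 8-bit samples.
samples = bytes(sample(t) for t in range(8000))
```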

AL: What's 'moving' in your performance is not an arm that plays a
violin, but the shape of your algorithms, forcing your fingers to move
fast on the keyboard. Even if this is barely seen by the audience,
there's a gesture, more evident and theatrical than the usual laptop
performance. How important do you consider the gesture in your live
set?

AM: Well, livecoders always project their screens, so people can see
the typing gestures, which I think are really beautiful even if you
can't see the fingers that are typing them. Jaromil and Jodi's "time
based text" (http://tbt.dync.org/) highlights this really well. As
there are three of us improvisers there are human gestures between us
too. I think all this is important; if you can see that someone is on
stage but can't see any movement making the music, then there is no
performance.

AL: When performing live coding, do you feel that you purely
"improvise"? Do you feel a substantial difference from the school of
improvised music? If so, what is it?

AM: Improvisation is the creation of work while it is being performed,
so it's clear that livecoding is a form of that. I have had really
enjoyable improvisations with vocalists, guitarists, rappers and
drummers as well as other laptopists, so I don't see much
difference. The only real difference is that livecoding is quite new,
and I think has a bit more developing to do...

AL: TOPLAP (whose acronym has a number of interpretations, one being
the Temporary Organisation for the Proliferation of Live Audio
Programming) is advocating live coding practices in different areas.
In its 'draft' manifesto it's written: "Programs are instruments that
can change themselves." Do you think that software is the ultimate
music instrument?

AM: No I don't. I think computer languages are great mediums for
making instruments though, and livecoding allows you to change those
instruments while you're playing them in some interesting ways. But
you can make amazing sounds with an egg whisk. Who am I to say that
Perl or Haskell is better than an egg whisk? In fact if I were to pick
an ultimate instrument I think the human voice would be it.

AL: The TOPLAP crew also stated that they advocate the "humanisation
of generative music." What's wrong with 'classic' generative music
software?

AM: According to Brian Eno, generative music is like making seeds and
sitting back to see what they produce. There's nothing at all wrong
with this idea, I love gardening. But livecoding is something a bit
different - it's instead more like modifying the DNA of plants while
they're growing, by hand. In this way, generative music is nature and
livecoding is nurture, in fact it's possible to have a combination of
the two.

Live interfaces: Performance, Art, Music conference

Happily we’ve been awarded some funding for a conference on live performance technology from Vitae Yorkshire!  This will be a great start to my new position in ICSRiM.  Here’s the call:

LIVE INTERFACES
Performance, Art, Music
http://icsrim.org.uk/liveinterfaces/

Date: 7th-8th September, 2012
Venue: ICSRiM, School of Music, University of Leeds, UK

CALL FOR PAPERS AND PERFORMANCES

Live Interfaces is a conference on live, technology-mediated interaction in performance.  The conference seeks to investigate cross-disciplinary understandings of performance technology with a particular focus on issues related to the notion of ‘liveness’ in interaction.

Live Interfaces will consist of paper and poster presentations, performances and workshops over two days.   Researchers, theorists and artists from diverse fields are encouraged to participate, including: digital performance, live art, computer music, choreography, music psychology, interaction design, human computer interaction, digital aesthetics, computer vision, smart materials and augmented stage technology.

We invite submissions addressing the conference theme of technology-mediated live interaction in performance, and suggest the following indicative topics:

- Audience perception/interaction
- Biophysical sensors
- Brain-computer interfaces
- Computer vision/real-time video in performance
- Cross-modal perception/illusion
- Digital dramaturgy/choreography/composition
- Digital performance phenomenology
- Gesture recognition and control
- Historical perspectives
- Live coding in music, video animation and/or dance
- Participatory performance
- Performance technology aesthetics
- Redefining audience interaction
- Tangible interaction

Paper submissions should be in extended abstract form, with a suggested length of 500 words.  Please format all submissions using either the Word or LaTeX template available from the website.

Performance proposals should include a description of the performance and the live interaction technology used, as well as a list of technical requirements.  Attaching recordings of past performances is strongly encouraged.

We hope to announce a journal special issue on performance technology following the conference as a publication opportunity for extended papers.

Extended abstracts must be submitted electronically via the website by midnight (GMT+1) on the 17th June 2012.  All submissions will be subject to cross-disciplinary peer review, with notification of acceptance by 1st July.

Please address all queries to liveinterfaces@icsrim.org.uk

Key dates:

- 17th May – Submissions system open
- 17th June – Submission deadline
- 1st July – Notification of selected papers/performances
- 29th July – Camera-ready deadline for accepted papers
- 7-8th September – Conference

Registration will open nearer the date, with a fee in the region of £25, including lunch for both days.

Please keep an eye on one of the following for updates, including information on conference workshops and co-located events.

Website: http://icsrim.org.uk/liveinterfaces/
Facebook: http://facebook.com/liveinterfaces/
Twitter: http://twitter.com/liveinterfaces/
Identica: http://identi.ca/liveinterfaces/

Planning committee:
Alex McLean, University of Sheffield, University of Leeds (from August)
Kate Sicchio, University of Lincoln
Maria Chatzichristodoulou, University of Hull
Scott Hewitt, University of Huddersfield
Ben Dornan, University of Sheffield
Stephen Pearse, University of Sheffield
Phoebe Bakanas, ICSRiM, University of Leeds
Ash Sagar, York St John University

Senior advisor:
Kia Ng, Director of ICSRiM, University of Leeds

Supported by Vitae Yorkshire, the University of Leeds and the Arts and Humanities Research Council