Category Archives: livecoding

Residency in Barcelona

I’m very happy to have a month’s residency in Barcelona coming up, starting 22nd July 2013. I’ll be running workshops (probably at the beginning, while locals are still in town) and working on some new things. It’s produced by L’ull Cec for Hangar, in the context of the Addicted2Random project, which is funded by the European culture programme. It’ll be really great to get some real focus on things I’m desperate to get done.

What on earth is live coding?

Busy times at the moment, but a quick pause to link to the aforementioned full interview in Dazed and Confused by the fine Stephen Fortune. I think the on-line version is a bit longer than the print one. There’ll likely be another algorave-related article in Wired magazine (the UK version, I think) in the next month or so. Anyway, here’s the text from Dazed and Confused for posterity:

Alex McLean is a programmer and live coder. He performs with a livecoding band called Slub and tours with the travelling Algorave festival. But what is “livecoding” exactly? “Live coders are basically performing by writing computer programs live on stage, while the programs are generating their art – whether that’s visuals or music,” McLean says. “Their computer screens are projected, so that the audience can see the code being manipulated. But the focus is on the music, on people dancing and seriously enjoying themselves”. In the run-up to an Algorave aboard the MS Stubnitz, London, we met McLean, who did his best to scramble our brains.

Do you think a newcomer to the algorave scene would leave enlightened or mystified?
Hopefully they would enjoy the music without feeling that they were compelled to understand it. Also because we’re making music, not doing formally specified software engineering, there’s no real ground of understanding anyway, apart from the music itself. Even those making the software don’t really have to understand it – “bugs” often get into the code which don’t make sense, but still sound good, so we just go with it.

Is there any genre or activity which you feel livecoding resembles?
In terms of algorithmic music, on one side there’s the “electroacoustic” focus on experimental sound, the search for new dimensions of timbre and musical movement. But live coding is a way of making music, and is not tied to any particular genre. I’ve heard live coders make drone music, jazz, Indian classical music, indie covers, and hip hop manipulated beatbox.

How do ideas circulate throughout the scene?
There’s a big overlap with free and open source culture, so sharing ideas in the form of software and source code happens a great deal. There are many languages for algorithmic music and video, such as SuperCollider, Fluxus, ChucK, Impromptu and Pure Data, and strong communities of practice have grown around them.

Are your fellow algoravers proficient programmers?
Yes, many livecoders make and adapt their own programming environments: that takes some experience. But proficiency at coding dance music is different to making financial systems or whatever. I’ve run workshops where I’ve got non-programmers making acid house together in a couple of hours. I think there’s a real possibility of making algorave music production more like drumming circles, where beginners can just join in and learn through doing.

Can any sort of coding be a creative activity? Or only certain forms, like livecoding?
Creativity is a surprisingly recent concept, and not that well defined, but I like to think of it as everyday behaviour, which most people engage in daily. Coding generally involves making sense out of huge, crazy structures, and it’s impossible to get anywhere without zoning out into a state of focussed, creative flow.

You claim you’d like to make programming more like a synthesiser. How would that be different from the other software systems that people use to make music?
I think it’s important to consider programming as exploration rather than implementation, because then we are using computer languages more like human languages. Any software interface can be thought of as a language, but the openness of programming allows us to set our own creative limits to explore, instead of working inside fixed, pre-defined limits. To me this is using computers on a deep level for what they are – language machines.

Who (or what) inspires you?
If I had to pick one person it would have to be Laurie Spiegel; I love the way she writes about using computer language to transform musical patterns.

Check out the original article.

Things coming up

I’m having a bit of a breather at the moment, but here are some of the things I’m up to over the summer:

16th May 2013

Another Algorave on the MS Stubnitz. The last one was masses of fun, and really looking forward to seeing what the next one turns up..

27th-30th May 2013

I won’t actually be there, but EunJoo Shin will present our collaboration microphone at NIME 2013, as a paper and installation.

15th June 2013

A performance collaboration with xname at the Audacious Space in Sheffield. Difficult to say much about this, but it’s going to be noisy..

27-28th June 2013

A paper on the Textural X (dodgy preprint here), and a performance at xCoAx2013: Computation, Communication, Aesthetics and X in Bergamo.

11th July 2013

Another outing to London for the Thursday Club, for a presentation and performance with Kate Sicchio of our piece in development “Sound Choreography <> Body Code”. Here’s some footage from our first performance at Audio:Visual:Motion.

19th-21st July 2013

Performance and workshops at the awesome Deer Shed Festival in North Yorks with Dave.

15th August 2013

A pre-warning of a gig at Cafe OTO, the return of lurk, featuring Leafcutter John, Alexandra Cardenas, Roger Dean and a new collaboration between Paul Hession and myself. Especially looking forward to this after recently realising I’ve actually seen Paul play before, several years ago with Tom Jenkinson and Matthew Yee-King:

15-20th September 2013

Co-organising the Schloss-Dagstuhl seminar Collaboration and Learning through Live Coding. Really excited about this, and we plan to do some other things around Europe before and/or after..

Transient and ephemeral code

Be sure to read the comments – Sam Aaron makes some important corrective points… The below is left as documentation of thinking in progress.

There is now an exciting resurgence of interest in live programming languages within certain parts of the software engineering and programming language theory community. In general, the concerns of liveness seen from the “programming experience design” and psychology of programming perspectives, and the decade-old view of live coding and live programming languages from an arts research/practice perspective, are identical, with some researchers working across all these contexts. However, I think there is one clear difference which is emerging: the assumption of code being live in terms of transience — code which exists only to serve the purposes of a particular moment in time. This goes directly against an underlying assumption of software engineering in general, that we are building code towards an ideal end-game, code which will be re-used many times by other programmers and end-users.

I tried injecting a simple spot of satire into my previous post, by deleting the code at the end of all the video examples. I’m very curious about how people thought about that, although I don’t currently have the methods at my fingertips to find out. Introspections very welcome, though. Does it seem strange to write live code one moment, and delete it the next? Is there a sense of loss, or does it feel natural that the code fades with the short-term memory of its output?

For me transient code is important: it switches focus from end-products and authorship to activity. Programming becomes a way to experience and interact with the world right now, by using language which expands experience into the semiotic in strange ways, but stays grounded in live perception of music, video, and (in the case of algorave) bodily movement in social environments. It would be a fine thing to relate this beyond the performance arts — creative manipulation of code during business meetings and in school classrooms is already commonplace, through live programming environments such as spreadsheets and Scratch. I think we need to understand more about this kind of activity, and support its development into new areas of life. We’re constantly using (and being used by) software, so why not open it up more, so that we can modify it through use?

Sam Aaron recently shared a great talk he gave to FP Days, reflecting on live programming, including the ephemeral nature of code. It’s a great talk, excellently communicated, but from the video I got the occasional impression that he was dragging the crowd somewhere they might not want to go. I don’t doubt that programming code for the fleeting moment could enrich many people’s lives, but perhaps it would be worthwhile to also consider how “non-programmers” or end-user programmers (who I earlier glibly called real programmers) might change the world through live coding. [This is not meant as advice to Sam, who no doubt has thought about this in depth, and actively engages all sorts of young people in programming through his work]

In any case, my wish isn’t to define two separate strands of research — as I say, they are interwoven, and I certainly enjoy engineering non-transient code as well. But, I think the focus on transience and the ephemeral nature of code naturally requires such perspectives as philosophy, phenomenology and a general approach grounded in culture and practice. To embrace wider notions of liveness and code then, we need to create an interdisciplinary field that works across any boundaries between the humanities and sciences.

Demonstrating tidal

After posting at length about the history of my musical pattern representation, I thought I’d better show some demos and explain a bit about how it works in practice.

Demonstrating music tech is difficult, because it seems to be impossible to listen to demos without making aesthetic judgements. The below is not meant to be good music, but if you find yourself enjoying any of it, please think sad thoughts. If you find yourself reacting badly to the broken rhythms, try humming a favourite tune over the top. Or alternatively, don’t bother reading this paragraph at all, and go and tell your friends about how the idea is kind of interesting, but the music doesn’t make you weep hot tears like S Club did back in the day.

Anyway, this demo video shows how polyrhythmic patterns can be quickly sequenced:

Strings in this context are automatically parsed into Patterns, where comma-separated patterns are stacked on top of each other. Subpatterns can be specified inside square brackets to arbitrary depth, and then the speed of those can be modified with an asterisk.

In the above example the patterns are of sample library names, where bd=bass drum, sn=snare, etc.
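For a flavour of the syntax, something along these lines (a hypothetical sketch rather than the exact demo code, assuming a sound function for parsing pattern strings and a d1 stream connected to the sampler):

-- two patterns stacked with a comma; three hi-hats are squeezed into
-- the same cycle as the two-step kick/snare
d1 $ sound "[bd sn, hh hh hh]"

-- square brackets nest a subpattern into a single step, and the
-- asterisk plays it twice per cycle
d1 $ sound "bd [sn sn]*2"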

By the way, the red flashes indicate when I trigger an evaluation. Lately people have associated live coding with evaluate-per-keypress. This doesn’t work outside well-managed rigged demos and educational sandboxes; computer language generally doesn’t work on a character level, it works on a word and sentence level. I had an evaluate-per-keypress mode in my old Perl system ten years ago, but always kept it switched off, because I didn’t want to evaluate 1 and 12 on the way to 120. *Some* provisionality is not necessarily a bad thing; mid-edits may be both syntactically valid and disastrous.

That rant aside, this video demonstrates brak, a fairly straightforward example of a pattern manipulation:

Here’s the code for brak:

brak :: Pattern a -> Pattern a
brak = every 2 (((1%4) <~) . (\x -> cat [x, silence]))

In other words, every 2nd repetition, squash some silence on to the end of the pattern, and then shift the whole thing 1/4 of a cycle to the left. This turns any pattern into a simple breakbeat.
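As a usage sketch (hypothetical, in the same style as the demos above):

-- every second cycle, the kick/snare is squashed into the first half
-- of the cycle and nudged a quarter-cycle earlier
d1 $ brak $ sound "bd sn"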

Let’s have a closer look at every in action:

This demonstrates how a function can be applied to a pattern conditionally, in the above shifting (with <~) or reversing (with rev) every specified number of repetitions.
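In code that looks something like the following (hypothetical snippets; the % rational syntax comes from Data.Ratio):

-- reverse the pattern every third cycle
d1 $ every 3 rev $ sound "bd sn hh sn"

-- shift the pattern a quarter-cycle earlier every fourth cycle
d1 $ every 4 ((1%4) <~) $ sound "bd sn hh sn"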

These demos all trigger sounds using a software sampler, but it’s possible to get to subsample level:

The striate function cuts a sample into bits for further manipulation, in the above case through reversal. This is a technique called granular synthesis.

Here’s the code for striate:

striate :: Int -> OscPattern -> OscPattern
striate n p = cat $ map (\x -> off (fromIntegral x) p) [0 .. n-1]
  where off i p = p 
                  |+| begin (atom (fromIntegral i / fromIntegral n)) 
                  |+| end (atom (fromIntegral (i+1) / fromIntegral n))

It takes n copies of the pattern, and concatenates them together, but selecting different portions of the patterns to play with the begin and end synthesiser parameters. The |+| operator knits together different synth parameters into a whole synth trigger message, which is then sent to the synth over the network (the actual sound is not rendered with Haskell here).
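A usage sketch (hypothetical, with a made-up sample name):

-- cut each sound into eight consecutive grains, then reverse the
-- pattern so the grains play out in the opposite order
d1 $ rev $ striate 8 $ sound "break"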

This video demonstrates the |+| combinator a little more, blending parameters to pan the sounds using a sine function, do a spot of waveshaping, and to apply a vowel formant filter:
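Hypothetically, combining parameters like that might look as follows (a sketch assuming pan, shape and vowel parameter functions matching the Dirt sampler’s controls, and glossing over scaling the sinewave into the pan range):

-- pan following a sinewave, add a touch of waveshaping distortion,
-- and filter through an "a" vowel formant
d1 $ sound "bd sn bd sn"
   |+| pan sinewave
   |+| shape (atom 0.4)
   |+| vowel (atom "a")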


Finally (for now) here’s a video demonstrating Haskell’s “do syntax” for monads:

A pattern of integers is used to modulate the speed of a pattern of samplenames, as one way of creating a stuttering rhythm.
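A sketch of what such a do-block might look like (hypothetical, but using listToPat and density from my pattern library):

-- for each value in the integer pattern, play the sample-name
-- pattern at that density, for the duration of that value's event
do n <- listToPat [1, 2, 3]
   density n $ sound "bd sn"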

That’s it, hopefully this gives some flavour of what is possible — any kind of feedback is always very welcome.

Haskell patterns ad nauseam

TL;DR I’m now describing algorave music as functions from time ranges to lists of events, with arbitrary time precision, where you can query continuously varying patterns for more detail by specifying narrower time ranges.

For more practical demo-based description of my current system see this post.

I’ve been restructuring and rewriting my Haskell pattern library for quite some time now. I’ve just done it again, and thought it would be a useful point to compare the different approaches I’ve taken. In all of the following my underlying aim has been to get people to dance to my code, while I edit it live (see this video for an example). So the aim has been to make an expressive language for describing periodic, musical structures quickly.

First some pre-history – I started by describing patterns with Perl. I wrote about this around ten years ago, and here’s a short video showing it in action. This was quite frustrating, particularly when working with live instrumentalists — imperative language is just too slow to work with, for a number of reasons.

When I first picked up Haskell, I tried describing musical patterns in terms of a tree structure:

data Event = Sound String
           | Silence
data Structure = Atom Event
               | Cycle [Structure]
               | Polymetry [Structure]

(For brevity, I will just concentrate on the types — in each case there was a fair amount of code to allow the types to be composed together and used).

Cycles structure events into a sequence, and polymetries overlay several structures which, as the name suggests, may have different metres.

The problem with this structure is that it doesn’t really lend itself to live improvisation. It represents musical patterns as lists embedded within lists, with no random access — to get at the 100th metric cycle (or musical loop) you have to generate the 99 cycles before it. This is fine for off-line batch generation, but not so good for live coding, and is restrictive in other ways — for example transforming events based on future or past events is awkward.

So then I moved on to representing patterns as functions, starting with this:

data Pattern a = Pattern {at :: Int -> [a], period :: Int}

So here a pattern is a function, from integers to lists. This was quite a revelation for me, and might have been brought on by reading Conal Elliott’s work on functional reactive programming; I don’t clearly remember. I still find it strange and wonderful that it’s possible to manipulate this kind of pattern, for example trivially reversing it, without first turning it into a list of first-order values. Because these patterns are functions from time to values, you can manipulate time without having to touch the values. You can still generate music from recursive tree structures, but with functions within functions instead of in the datatypes. Great!
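As a rough sketch of what I mean (my reconstruction against the type above, not the actual library code), reversing a pattern only touches the time argument, never the values:

-- reverse each cycle of a pattern by reflecting the time index
-- within its cycle; the values themselves are never inspected
rev :: Pattern a -> Pattern a
rev (Pattern f p) = Pattern (\t -> f (cycleStart t + (p - 1) - (t `mod` p))) p
  where cycleStart t = (t `div` p) * p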

In the above representation, the pattern kept note of its “period”. This was to keep track of the duration of the cycle, useful when combining patterns of different lengths. This made things fiddly though, and was a code smell for an underlying problem — I was representing time with an integer. This meant I always had to work to a predefined “temporal atom” or “tatum”, the lowest possible subdivision.

Having a fixed tatum is fine for acid house and other grid-based musics, but at the time I wanted to make structures more expressive on the temporal level. So in response, I came up with this rather complex structure:

data Pattern a = Atom {event :: a}
                 | Arc {pattern :: Pattern a,
                        onset :: Double,
                        duration :: Maybe Double
                       }
                 | Cycle {patterns :: [Pattern a]}
                 | Signal {at :: Double -> Pattern a}

So lists are back in the form of Cycles. However, time is represented with floating point (Double) values, where a Cycle is given a floating point onset and duration as part of an Arc.

Patterns may also be constructed as a Signal, which represents constantly varying patterns, such as sinewaves. I found this a really big deal – representing discrete and continuous patterns in a single datatype, and allowing them to be composed together into rich structures.

As with all the other representations, this did kind of work, and was tested and developed through live performance and audience/collaborator feedback. But clearly this representation had got complex again, so had the supporting code, and the use of doubles presented the ugly problem of floating point precision.

So simplifying again, I arrived at this:

  data Pattern a = Sequence {arc :: Range -> [Event a]}
                 | Signal {at :: Rational -> [a]}
  type Event a = (Range, a)
  type Range = (Rational, Rational)

This is back to a wholly higher-order representation and is much more straightforward. Now we have Sequences of discrete events (where each event is a value which has a start and end time), and Signals of continuously varying values. Time is now represented as fractions, with arbitrary precision. An underlying assumption is that metric cycles have a duration of 1, so that all time values with a denominator of 1 represent the end of one cycle and the beginning of the next.

A key insight behind the above was that we can represent patterns of discrete events with arbitrary temporal precision, by representing them as functions from time ranges to events. This is important, because if we can only ask for discrete events occurring at particular points in time, we’ll never know if we’ve missed some short-lived events which begin and end in between our “samples” of the structure. When it comes to rendering the music (e.g. sending the events to a synthesiser), we can render the pattern in chunks, and know that we haven’t missed any events.
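For example, a renderer might step through the pattern cycle by cycle (a hypothetical helper against the types above):

-- query all events within cycle n; because events are returned for
-- the whole range at once, short-lived events can't slip through
cycleEvents :: Sequence a -> Integer -> [Event a]
cycleEvents s n = arc s (fromInteger n, fromInteger n + 1)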

At this point, things really started to get quite beautiful, and I could delete a lot of housekeeping code. However, I still wasn’t out of the woods..

Having both Sequence and Signal as part of the same type meant that it was somehow not possible to make patterns a clean instance of Applicative Functor. It meant that patterns could “change shape” when combined in various ways, causing problems. So I split them out into their own types, and defined them as instances of a type class, with lots of housekeeping functions so that they could be treated the same way:

data Sequence a = Sequence {range :: Range -> [Event a]}
data Signal a = Signal {at :: Time -> [a]}

class Pattern p where
  pt :: (p a) -> Time -> [a]
  atom :: a -> p a
  silence :: p a
  toSignal :: p a -> Signal a
  toSignal p = Signal $ \t -> pt p t
  squash :: Int -> (Int, p a) -> p a
  combine' :: p a -> p a -> p a
  mapOnset :: (Time -> Time) -> p a -> p a
  mapTime :: (Time -> Time) -> p a -> p a
  mapTime = mapOnset
  mapTimeOut :: (Time -> Time) -> p a -> p a

I’ll save you the instance declarations, but things got messy. But! Yesterday I had the insight that a continuous signal can be represented as a discrete pattern, which just gets more detailed the closer you look. So both discrete and continuous patterns can be represented with the same datatype:

type Time = Rational
type Arc = (Time, Time)
data Pattern a = Pattern {arc :: Arc -> [Event a]}

Much simpler! And I could delete about half of the supporting code. Here’s an example of what a “continuous” pattern looks like:

sig :: (Time -> a) -> Pattern a
sig f = Pattern f'
  where f' (s,e) | s > e = []
                 | otherwise = [((s,e), f s)]

sinewave :: Pattern Double
sinewave = sig $ \t -> sin $ pi * 2 * (fromRational t)

It just gives you a single value for the range you ask for (the start value in the range, although on reflection perhaps the middle one or an average value would be better), and if you want more precision you just ask for a smaller range. If you want a value at a particular point, you just give a zero-length range.
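To make that concrete, sampling the sinewave pattern would give something like this (hypothetical ghci session, with % from Data.Ratio):

-- one sample per query, taken at the start of the range
arc sinewave (0, 1%4)    -- [((0 % 1, 1 % 4), 0.0)]

-- a zero-length range samples a single point: sin(pi/2) = 1.0
arc sinewave (1%4, 1%4)  -- [((1 % 4, 1 % 4), 1.0)]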

I’ve found that this representation actually makes sense as a monad. This has unlocked some exciting expressive possibilities, for example taking one pattern, and using it to manipulate a second pattern, in this case changing the density of the pattern over time:

listToPat [1%1, 2%1, 1%2] >>= (flip density) (listToPat ["a", "b"])

Well this isn’t fully working yet, but I’ll work up some clearer examples soon.
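For the curious, here’s my guess at the shape such a Monad instance might take (a sketch only, not the library’s actual definition):

instance Monad Pattern where
  -- a pure value simply fills whatever arc is queried
  return x = Pattern (\(s, e) -> [((s, e), x)])
  -- query the outer pattern over the arc, then query each inner
  -- pattern produced by f over its event's arc
  p >>= f  = Pattern (\a -> concatMap (\(a', v) -> arc (f v) a') (arc p a))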

So I hope that’s it for now; it’s taken me a ridiculous amount of effort to get to this point, and I’ve ended up with less code than I began with. I’ve found programming with Haskell a remarkably humbling experience, but an enjoyable one. I really hope that this representation will stick though, so I can concentrate more on making interesting functions for transforming patterns.

In case you’re wondering what the mysterious “a” type is in the above definitions of “Pattern a”, well of course it could be anything. In practice what I end up with is a pattern of hashes, which represent synthesiser control messages. I can represent all the different synthesiser parameters as their own patterns (which have different types depending on their function), combine them into a pattern of synthesiser events, and manipulate that further until the events eventually end up at a scheduler, which sends the messages to the synth. For a close-up look at an earlier version of my system in use, here’s a video.

The current state of the sourcecode is here if you fancy a look; I’ve gone back to calling it “tidal”. It’s not really in a state where other people could use it, but hopefully it will be one day soon.. Otherwise, it’s coming to an algorave near you soon.

As ever, thanks to those who have given me advice along the way.

Happy new year + upcoming

Looking forward to 2013, some things I’m up to so far:

+ more on the cards..

Busy week

That was fun..

Slub at the Mozilla party

First, full Slub (Dave, Ade and I) at the Mozilla party. The most interested crowd we’ve had; it was hard to get any live coding done between all the questions! Dave collected some photos from the many that appeared online..

Then to Mexico City for the week-long /* vivo */ live coding festival. They have a really great scene there, with so many accomplished performances, philosophical talks and fun workshops. They also have great food and mezcal. A festival report will hopefully appear on the TOPLAP website soon, but here is some video from the 2/3 Slub performance (Dave and I) there (check out the 3D and HD options..):

Hester Reeve and I performing at the AHRC moot

Then back to London for the AHRC Digital Transformations Moot, where Hester Reeve and I made an experimental, and (fairly) durational live code/art performance work, where I made marks by live coding, and Hester made marks on a blackboard-painted pole.

Next is a panel session at PPIG, a solo performance at iFIMPAC in December (as well as a co-written paper on live coding in education), and more to follow in the new year..