Tidal cycles continued

I’ve continued with the Tidal cycles project, pushing forward with at least one cycle per weekday, apart from one day when I made a longer recording (to appear on chordpunch soon). All the audio is downloadable and Creative Commons licensed (CC-BY); check the descriptions for the tweet-sized Tidal code for each cycle, and follow on Twitter or SoundCloud for updates.

I should note that this is of course inspired by the long-lived sctweets tradition in the SuperCollider community.

Remote performance via ZeroMQ

I did a remote performance streamed to Barcelona last week as part of a “Perspectives on multichannel live coding” concert, which involved me sitting on my studio floor in Sheffield, live coding broken techno for 16 speakers. The music was beamed over to an audience of 30-40 people at Universitat Pompeu Fabra, who were surrounded by 16 speakers, while I created the music locally, monitoring in quadraphonic surround sound (sadly I didn’t have 16 speakers to hand). I really enjoyed the challenge of making a coherent multi-channel performance, and got some positive feedback on the music, but thought I’d share the more technical side..

The organiser/curator Gerard Roma and I discussed the possibility of streaming audio, compressed with Ogg Vorbis and streamed over Icecast. Encoding, decoding and streaming 16 channels of audio is a bit problematic though; we probably had the bandwidth, but the libraries just aren’t there with 16-channel support. It’s straightforward to stream 4 channels, or 5.1, but for some reason every channel has to be labelled with a speaker location, and I couldn’t get sixteen channels working with GStreamer.

In any case, streaming synth control messages rather than audio output is a better approach, and that’s what we went with. I just ran my synthesiser Dirt in both places, and sent trigger messages to both over Open Sound Control (OSC). Unfortunately it wasn’t quite that simple, due to the various institutional firewalls between us, so I sent the OSC over ZeroMQ. This involved running a simple daemon on my (unfirewalled) server, which received OSC over plain UDP and forwarded it to any ZeroMQ subscribers. It was then easy to add some code to Dirt which subscribed to the ZeroMQ server, and piped the OSC messages into liblo for processing. Using ZeroMQ made for fault-tolerant code that was really easy to write.
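For the curious, here’s a minimal sketch of what such a forwarding daemon can look like. This is a hypothetical reconstruction in Haskell (using the zeromq4-haskell bindings; the port numbers are made up), not the actual daemon:

import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import qualified Network.Socket as N
import Network.Socket.ByteString (recv)
import System.ZMQ4.Monadic

main :: IO ()
main = do
  -- plain UDP listener for incoming OSC datagrams
  udp <- N.socket N.AF_INET N.Datagram N.defaultProtocol
  N.bind udp (N.SockAddrInet 7771 0)      -- 0 = listen on all interfaces
  runZMQ $ do
    pub <- socket Pub                     -- subscribers connect to this socket
    bind pub "tcp://*:7772"
    forever $ do
      packet <- liftIO $ recv udp 4096    -- one OSC message per datagram
      send pub [] packet                  -- forward the raw bytes unchanged

The nice property of the PUB/SUB pattern is that the daemon doesn’t need to know who is listening; each subscriber (such as Dirt) just connects, subscribes with an empty prefix to receive everything, and hands each packet on to liblo.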

A slightly amusing side effect is that anyone running a recent git checkout of Dirt during my various tests and the performance itself would have received my OSC messages, and heard me messing around and playing.. Something that could be made more of in the future..

I’d love to do more multichannel performances, streamed or in person, let me know if you’d like me to propose something for your system!

Dagstuhl seminar on collaboration and learning through live coding

A wonderful time at Dagstuhl last week. Aspects of the seminar have already been covered very nicely in blog posts by Mark Guzdial and Dave Griffiths. I’ve tended to blog about live coding over on the TOPLAP blog, but over the coming days I’ll be unravelling my thoughts about live coding here. To start with though, here are a couple of thoughts about the Dagstuhl format.

Dagstuhl seminars fit well with live coders, because organisers are encouraged to organise on-the-fly, reacting to themes as they arise and develop through the workshop. A solid week of discussion passed very quickly, but despite the relaxing surroundings it was remarkably hard work. This was partly because I was suppressing a cold throughout, with varying levels of success, but mostly because it was all so interesting, with discussions starting over breakfast and flowing through the day and into the evening.

The whole thing re-invigorated a host of my interests in live coding, and brought together many perspectives into a field that we could share in. As Mark and Dave have noted, this was a rather cross-disciplinary group of cross-disciplinary people, and although the odd technical discussion probably did exclude some participants, we managed to drift between discussions of education, engineering, philosophy, politics and music without hitting too many obstacles. The involvement of cross-disciplinary people – artist-programmers, engineer-ethnographers, textile-mathematicians, computer science-philosophers, and so on – meant that misunderstandings were quickly identified and bridged.

More soon..

Tidal cycles

I’ve started a Twitter feed called @tidalcycles, posting minimal Tidal programs and their output. I’ll try to add one a day, but let’s see how things go. Here are the first couple:

brak $ let x = "bd [sn [[sn bd] sn]]*1/3" in interlace (sound $ slow 3 $ x) (sound $ every 3 (append "[bd]*6") x)

weave 4 (speed $ (1+) sinewave1) [density 4 $ every 5 ((0.25 <~) . rev) $ striate 16 $ sound "[bd sn/2]/2", sound "bd [~ hc]*3"]

Colourful texture

Texture v.2 is getting interesting now; it reminds me of fabric travelling around a loom..

Everything apart from the DSP is implemented in Haskell. The functional approach has worked out particularly well for this visualisation — because musical patterns are represented as functions from time to events (using my Tidal EDSL), it’s trivial to get at future events across the graph of combinators. Still much more to do though.
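To give a flavour of why this works, here’s a simplified sketch of the core representation (Tidal’s actual types carry a bit more structure, but the idea is the same):

-- a pattern is a function from a timespan to the events active within it,
-- so querying the future is no different from querying the present
type Time = Rational
type Arc = (Time, Time)            -- a span of time, measured in cycles
type Event a = (Arc, a)            -- when an event happens, and its value
type Pattern a = Arc -> [Event a]

-- to visualise upcoming events, just query a span that hasn't happened yet
upcoming :: Pattern a -> Time -> Time -> [Event a]
upcoming p now ahead = p (now, now + ahead)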

Vocal

A quick improv from Sheffield:

Here’s the state of my editor at the end:

d1 $ slow 2 $ sound "bd [sn sn bd]/2"

let x = density 2 $ striate' 8 0.75 $ sound (slow 4 $ "[bd bd/4] [ht mt lt]") in
d2 $ stack [every 3 rev $ every 4 (0.75 <~) x
            |+| pan "0.2",
            every 4 rev $ every 3 (0.5 <~) x
            |+| pan "0.8"
           ]
  |+| speed "1"
  |+| shape "0.6"

d4 $ every 4 (density 2) $ echo 0.5 $ brak $ every 3 (0.25 <~) $ sound "[future,odx,bd]*3"
  |+| shape "0.7"


let perc = 0.2 in
d3 $ slow 2 $ whenmod 10 12 (echo 0.25) $ density 2 $ sound (pick <$> "~ [operaesque]" <*> (slow 5 $ run 24))
  |+| slow 16 ((begin $ (*(1-perc)) <$>  sinewave1) |+| (end $ (+perc) <$> sinewave1))
  |+| speed (slow 2 "0.75 0.7")
  |+| pan "0.6"
  |+| shape "0.6"

let perc = 0.2 in
d4 $ slow 3 $ every 2 (rev) $ whenmod 10 12 (echo 0.25) $ density 2 $ sound (pick <$> "~ [operaesque]*3" <*> (slow 10 $ run 16))
  |+| slow 16 ((begin $ (*(1-perc)) <$>  sinewave1) |+| (end $ (+perc) <$> sinewave1))
  |+| speed "0.75"
  |+| pan "0.4"
  |+| vowel "i"

hush

d6 $ whenmod 10 12 (density 2) $ whenmod 12 4 (rev) $ slow 2 $ sound "[futuremono]*3 [odx/3]"


d7 $ whenmod 6 4 (0.25 <~) $ every 4 (density (3/2)) $ slow 2 $ sound "[jungle/2]*2 [jungle/3]*2"
  |+| shape "0.7"


d7 $ (whenmod 2 4 ((|+| speed "0.9") . rev) $ every 2 (0.25 <~) $ sound "odx [sn/2 ~ sn/2]")

d2 silence


d8 $ ((slow 8 $ double (0.25 <~) $ striate 12 $ sound "[diphone2/1 ~ diphone2/3]*4")
  |+| (slow 4 $ speed ((*) <$> "[2 1] 1.5" <*> ((+0) <$> ((+0.4) <$> (slow 4 $ sinewave1))))))
  |+| vowel "i"

d9 $ slow 2 $ sound "[[odx]*4]/3 [[odx]*4 [odx]*8]/3"
  |+| speed "1"
  |+| cutoff "0.04"
  |+| resonance "0.7"
  |+| shape "0.8"

bps 1

Extending human ability

I don’t always enjoy praise, but it’s really great when commentators see through the cold reality of live coding or algorave and get at the promise that motivates what we’re doing.

Here’s a comment by a reddit user called Tekmo from a few months back, which I think is about the promise of a more embodied approach to the practice of programming:

I think the entire premise of this project is really brilliant. Right now it’s probably not immediately inspiring because it takes a minute or so to switch between patterns for an average user, but imagine somebody getting REALLY good at improving on this, with their own custom library of one or two-letter function names and performing by constantly improvising patterns every few seconds while programming at lightning speed.

But the real reason I think this is brilliant is because this is sort of what I always imagined programming was about: extending human ability. I feel like the super-heroes of the future will be programmers that command an impressive array of remote machinery as if it were an extension of their own body.

Here’s an excerpt from a nice blog post by DuBose Cole which to me hints at a cultural tipping point when more people start programming:

Events like Algorave highlight that by making more people creators through programming, we don’t just get new technical creations, but social and cultural ones as well. Algorave features electronic music created by algorithms programmed on the fly for a crowd. Revellers seem to attend due to either an interest in how the music is created, a particular love of electronic music, or just to have a party. An idea like Algorave takes the image of coding as a solitary experience and moves it forward, making the programmer a collaborative and immediate creator, as well as a bit of a rock star.

What the idea highlights however, is that learning to create with code is less about the skill itself and more about what you do with it. Pushing coding literacy is only the beginning. Coders are creating an ever expanding culture of creation, which anyone with a basic appreciation or skill for programming can join in with. The increasing simplicity with which people can learn coding has not only changed who can create, but also the scope of what’s being created.

Brilliant stuff.

Texture 2.0 bug exposure

Texture 2.0 (my Haskell-based visual live programming language) is working a bit more. It has reached gabber zero – the point at which a programming language is able to support the production of live techno. I’ve also made some small steps towards getting some of my live visualisation ideas working. Here’s a video which exposes some nice bugs towards the end:

This is an unsupported, very pre-alpha experiment, but if you want to try to get it working, first install Tidal (and if you want sound, the associated “dirt” sampler). Then download the code from here:

https://github.com/yaxu/hstexture

.. and run it with something like runhaskell Main.hs


Texture version 2.0 pre-alpha

During my residency period, I’m rewriting “Texture”, the visual front-end for Tidal that I started making way back in the closing moments of my PhD. The first step is to re-implement Texture in Haskell — previously it was written in C, and spat out code that was then piped into the Haskell interpreter, which was a bit nuts. I’m taking a bricolage approach so don’t have a clear plan, but a rudimentary interface is starting to work:

As before, the idea is that values are applied to the closest function with a compatible type signature. I’ve still had to ‘reimplement’ the Haskell type system in itself to some extent. While I could get Haskell to tell me whether a value could be type-compatible with a function, that turns out not to be enough: in practice almost everything is type-compatible at the level of unification, and the real constraints come from whether the necessary type class instances exist. Or something like that.
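Here’s a toy example of the distinction (not taken from Texture itself), where unification happily accepts an application that instance resolution then rejects:

-- 'describe' unifies with any argument type whatsoever...
describe :: Show a => a -> String
describe = show

ok :: String
ok = describe (3 :: Int)   -- compiles: a Show instance exists for Int

-- bad = describe (+ 1)    -- still unifies (a ~ Integer -> Integer), but is
--                         -- rejected: no Show instance exists for functions

So a tool can’t rely on type compatibility alone; it also has to take instance resolution into account.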

My next step is where the real point of this rewriting exercise comes in – visualisation of patterns as they are passed through a tree of transformations. I’m not sure exactly how this will look, but because this is all about visualising higher-order functions of time rather than streams of data, it will be quite different from dataflow visualisation; it’ll be able to include past and future values without any buffering.

The (currently useless) code is available here, under the GPLv3 license.