Category Archives: misc

Texture version 2.0 pre alpha

During my residency period, I’m rewriting “Texture”, the visual front-end for Tidal that I started making way back in the closing moments of my PhD. The first step is to re-implement Texture in Haskell; previously it was written in C, and spat out code that was then piped into the Haskell interpreter, which was a bit nuts. I’m taking a bricolage approach so don’t have a clear plan, but a rudimentary interface is starting to work.

As before, the idea is that values are applied to the closest function with a compatible type signature. I’ve still had to ‘reimplement’ the Haskell type system in itself to some extent. While I can get Haskell to tell me whether a value could be type-compatible with a function, it turns out this is not enough: in practice, almost everything is type-compatible, and the real constraints come from the presence (or absence) of type class instances. Or something like that.
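To illustrate that point, here’s a tiny toy model (entirely hypothetical, nothing like the real Texture internals, with invented names like `Ty` and `Sig`): a polymorphic parameter will structurally unify with almost any value, so whether an application is really legal hinges on a separate instance check.

```haskell
-- Hypothetical sketch: structural unification over-approximates
-- applicability; class constraints do the real filtering.

-- Toy types: a type variable, two base types, and lists.
data Ty = TVar | TInt | TString | TList Ty deriving (Eq, Show)

-- Structural compatibility: a bare type variable matches anything.
compatible :: Ty -> Ty -> Bool
compatible TVar _ = True
compatible (TList a) (TList b) = compatible a b
compatible a b = a == b

-- A toy function signature: a parameter type plus a class constraint.
data Sig = Sig { param :: Ty, needsNum :: Bool }

-- Which toy types have a Num instance in this model.
hasNum :: Ty -> Bool
hasNum TInt = True
hasNum _    = False

-- A value is applicable only if it both unifies structurally and
-- satisfies the instance constraint.
applicable :: Sig -> Ty -> Bool
applicable s t = compatible (param s) t && (not (needsNum s) || hasNum t)

-- (+) :: Num a => a -> a -> a, modelled as a constrained polymorphic
-- parameter: TString unifies with TVar but fails the Num constraint.
plusSig :: Sig
plusSig = Sig TVar True
```

Here `applicable plusSig TInt` holds but `applicable plusSig TString` does not, even though the structural check alone would accept both.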

My next step is where the real point of this rewriting exercise comes in – visualisation of patterns as they are passed through a tree of transformations. I’m not sure exactly how this is going to look, but because this is all about visualising higher order functions of time and not streams of data, it’s going to be something quite a bit different from dataflow; it’ll be able to include past and future values in the visualisation without any buffering.

The (currently useless) code is available here, under the GPLv3 license.

Workshop: Drawing, Weaving, and Speaking Live Generative Music

Some more details about my workshops coming up at Hangar, Barcelona. Sign up here.

This workshop will explore alternative strategies for creating live sound and music. We will make connections between generative code and our perception of music, using metaphors of speech, knitting and shape, and playing with code as material. We will take a fresh look at generative systems, not through formal understanding but just by trying things out.
Over the course of the workshops, we will work up through the layers of generative code. We will take a sideways look at symbols, inventing alphabets and drawing sound. We will string symbols together into words, exploring their musical properties, and how they can be interpreted by computers. We will weave words into the patterns of language, as live generation and transformation of musical patterns. We will learn how generative code is like musical notation, and how one can come up with live coding environments that are more like graphical scores.

We will visit systems like Python, SuperCollider, Haskell, openFrameworks, Processing and OpenCV, and experiment as well with more esoteric interfaces.

Schedule:

Session #01
Symbols – This first session will deal with topics such as sound symbology, mental imagery, perception and invented alphabets. We will try out different ways to draw sounds, map properties of shape to properties of sound using computer vision (“acid sketching”, https://vimeo.com/7492566), and draw lines through a sound space created from microphone input. This will allow us to get a feel for the real difference between analogue and digital, how they support each other, and how they relate to human perception and generative music.

Session #02
Words – We will talk some more about strings of symbols as words, as articulations or movements, and relate expression in speech (prosody) to expression in generative music. We will experiment with stringing sequences of drawn sounds together, inventing new “onomatopoeic” words. We will look at examples of musical traditions which relate words with sounds (ancient Scottish Canntaireachd, chanting the bagpipes), and also try out vocable synthesis (http://slub.org/world or http://oldproject.arnolfini.org.uk/projects/2008/babble/), which works like speech synthesis but uses words to describe articulations of a musical instrument.

Session #03
Language – This session will explore the historical and metaphorical connections between knitting and computation, and between code and pattern. After some in-depth talk about live coding, and the problems and opportunities it presents, we’ll spend some time exploring Tidal, a simple live coding language for musical pattern, and understand it using the metaphor of knitting with time.
Tidal: http://yaxu.org/demonstrating-tidal/

Session #04
Notation – Here we will look at the relationship between language and shape, and a range of visual programming languages. We will try out Texture, a visual front-end for Tidal, and experiment with ways of controlling it with computer vision that create feedback loops through body and code.
Texture: http://yaxu.org/category/texture/

Session #05
Final presentation and workshop wrap up.

Level: Introductory/intermediate. Prior programming experience is not required, but participants will need to bring a laptop (preferably a PC, or a Mac able to boot off a DVD), an external webcam and a pair of headphones.

Language: English

Tutor: Alex McLean

Alex McLean is a live coder, software artist and researcher based in Sheffield, UK. He is one third of the live coding group Slub, getting crowds to dance to algorithms at festivals across Europe. He promotes anthropocentric technology as co-founder of the ChordPunch record label, the Algorave event series, the TOPLAP live coding network and the Dorkbot electronic art meetings in Sheffield and London. Alex is a research fellow in Human/Technology Interface within the Interdisciplinary Centre for Scientific Research in Music, University of Leeds.

http://yaxu.org/
http://slub.org/
http://algorave.com/
http://chordpunch.com/
http://toplap.org/
http://icsrim.org.uk/
http://music.leeds.ac.uk/people/alex-mclean/

Dates:
Tuesday 23.07.2013, 17:00-21:00h
Thursday 25.07.2013, 17:00-21:00h
Saturday 27.07.2013, 12:00-18:00h
Monday 29.07.2013, 17:00-21:00h
Wednesday 31.07.2013, 17:00-21:00h

Location: Hangar. Passatge del Marquès de Santa Isabel, 40. Barcelona. Metro Poblenou.

Price: Free.

To sign up, please send an email to info@lullcec.org with a brief text outlining your background and motivation for attending the workshop. Note that applications won’t be accepted if candidates are unable to commit to attending the course in its entirety.

+info: http://lullcec.org/en/2013/workshops/drawing-weaving-and-speaking-live-generative-music/

This workshop has been produced by l’ull cec for Hangar.

Appearances elsewhere

I’ve had a couple of kind mentions lately:

That’s it! Hopefully I will survive all this attention.

New projects and events

Taking stock of the new and fast-developing projects I’m involved with.

Sound Choreography <> Body Code

A performance which creates a feedback loop through code, music, choreography, dance and back through code, in collaboration with Kate Sicchio. The first performance is this Friday at Audio:Visual:Motion in Manchester. The source code for the sound choreographer component, which choreographs using a shifting, sound-reactive diagram, is already available. I’m working on my visual programming language Texture as part of this too, which Kate will be disrupting via computer vision.

Algorave

Collaborating with other live coders, musicians and video artists who use algorithms, creating events which shift the focus back onto the audience having a seriously good time. A work in progress, but upcoming events are already planned in Brighton, London (onboard the MS Stubnitz!), Karlsruhe and Sydney. More info

Declaration Kriole

Working with world music band Rafiki Jazz, making a new Kriole based on the Universal Declaration of Human Rights. I’ll be working with a puppeteer, giving a puppet a live coded voice which sings in this new language. The puppet will hopefully become a new member of the band, created through interaction within the band. First recording session soon, with live performances to follow fairly soon after. One of the more ambitious projects I’ve been involved with!

Microphone II

Working with EunJoo Shin on a new version of the Microphone. Our previous version got accepted to a couple of big international festivals, but it turned out to be too big to ship! So the next iteration will have a new body, and more of a visual focus.

Slubworld

Slubworld is an on-line commission from the Arnolfini: “You are invited to join a new, on-line, sonic world co-inhabited by beatboxing robots. Participants will be able to make music together by reprogramming their environment in a specially invented language, based on state-of-the-art intarsia, campanology and canntaireachd technology. The result will be a cross between a sound poetry slam, yarn bombing, and a live coded algorave, experienced entirely through text and sound.” All for launch in May. Another ambitious project, then.

Dagstuhl seminar: Collaboration and Learning through Live Coding

Co-organising a Dagstuhl seminar bringing together leading thinkers in programming experience design, computing education and live coding.

Plus more in the pipeline, including neuroimaging and programming, a sound visualisation project at Sage Gateshead and hopefully a return of the live interfaces conference and live notation project.

Audio blast festival

Audio Blast is a streaming festival by apo33, running in both Nantes and at the Piksel festival in Bergen.

I’m performing this Saturday November 24th, for an hour from 7pm GMT (8pm CET). I’ll be streaming quadrophonic sound from my studio in Sheffield, which will be played in both spaces, with a stream for remote listeners from two AKG mics in one of the spaces. More info and a link to the network stream are on the website. If anyone wants to pop by Sheffield for a listen and a beer, they’re welcome too :)

SmoothDirt programme notes

I’m doing a few solo performances over the next days, in Cambridge, Uxbridge and Birmingbridge. Here are the programme notes/rationale:

Yaxu – SmoothDirt

From a linear perspective of time, live coding will always be somewhat distant from human experience. As computer programming is a fundamentally indirect manipulation of sound, is live coding really live? If we consider the flow of time from past to future, the time necessary to modify an algorithm acts as an impenetrable barrier between coder and experience. An alternative perspective is to think of time in terms of cycles. From this perspective, if a coder’s actions lag behind the present moment, then they are also ahead of it. They are inside time, the cycle of development enmeshed with rhythmic cycles of music, in mutual resonance. SmoothDirt is a simple language built around this idea, allowing extremes of repetition at multiple scales to be explored as musical performance.

Yaxu will produce broken techno from his laptop for around twenty minutes.

Live Interfaces: Performance, Art, Music conference

Happily we’ve been awarded some funding for a conference on live performance technology from Vitae Yorkshire!  This will be a great start to my new position in ICSRiM.  Here’s the call:

LIVE INTERFACES
Performance, Art, Music
http://icsrim.org.uk/liveinterfaces/

Date: 7th-8th September, 2012
Venue: ICSRiM, School of Music, University of Leeds, UK

CALL FOR PAPERS AND PERFORMANCES

Live Interfaces is a conference on live, technology-mediated interaction in performance.  The conference seeks to investigate cross-disciplinary understandings of performance technology with a particular focus on issues related to the notion of ‘liveness’ in interaction.

Live Interfaces will consist of paper and poster presentations, performances and workshops over two days.   Researchers, theorists and artists from diverse fields are encouraged to participate, including: digital performance, live art, computer music, choreography, music psychology, interaction design, human computer interaction, digital aesthetics, computer vision, smart materials and augmented stage technology.

We invite submissions addressing the conference theme of technology-mediated live interaction  in performance, and suggest the following indicative topics:

- Audience perception/interaction
- Biophysical sensors
- Brain-computer interfaces
- Computer vision/real-time video in performance
- Cross-modal perception/illusion
- Digital dramaturgy/choreography/composition
- Digital performance phenomenology
- Gesture recognition and control
- Historical perspectives
- Live coding in music, video animation and/or dance
- Participatory performance
- Performance technology aesthetics
- Redefining audience interaction
- Tangible interaction

Paper submissions should be in extended abstract form, with a suggested length of 500 words.  Please format all submissions using either the Word or LaTeX template available from the website.

Performance proposals should include a description of the performance and the live interaction technology used, as well as a list of technical requirements.  Attaching recordings of past performances is strongly encouraged.

We hope to announce a journal special issue on performance technology following the conference as a publication opportunity for extended papers.

Extended abstracts must be submitted electronically via the website by midnight (GMT+1) on the 17th June 2012. All submissions will be subject to cross-disciplinary peer review, with notification of acceptance by 1st July.

Please address all queries to liveinterfaces@icsrim.org.uk

Key dates:

- 17th May – Submissions system open
- 17th June – Submission deadline
- 1st July – Notification of selected papers/performances
- 29th July – Camera-ready deadline for accepted papers
- 7-8th September – Conference

Registration will open nearer the date, with a fee in the region of £25, including lunch for both days.

Please keep an eye on one of the following for updates, including information on conference workshops and co-located events.

Website: http://icsrim.org.uk/liveinterfaces/
Facebook: http://facebook.com/liveinterfaces/
Twitter: http://twitter.com/liveinterfaces/
Identica: http://identi.ca/liveinterfaces/

Planning committee:
Alex McLean, University of Sheffield, University of Leeds (from August)
Kate Sicchio, University of Lincoln
Maria Chatzichristodoulou, University of Hull
Scott Hewitt, University of Huddersfield
Ben Dornan, University of Sheffield
Stephen Pearse, University of Sheffield
Phoebe Bakanas, ICSRiM, University of Leeds
Ash Sagar, York St John University

Senior advisor:
Kia Ng, Director of ICSRiM, University of Leeds

Supported by Vitae Yorkshire, the University of Leeds and the Arts and Humanities Research Council

Fellowship

I’m excited to be joining Kia Ng in the Interdisciplinary Centre for Scientific Research in Music (ICSRiM), within the faculty of Performance, Visual Arts & Communications (PVAC), for the new academic year, on a two-year fellowship.

I’ll be a research fellow in Human/Technology Interface, a research strand supported within the cross disciplinary Culture, Society & Innovation Hub.

All very central to my interests, and perhaps the ideal context for developing embodied approaches to live coding. I’m really looking forward to getting started, although it won’t be for another four months or so.