Category Archives: supercollider

Pure dyne

I’ve been through a few Linux distros over the years, each progressively easier to install and configure as I’ve grown less willing to spend time recompiling kernels, culminating in Ubuntu, whose attention to detail and simplicity of use I enjoyed.  Recently though, I’ve had to give Ubuntu up and go back upstream to the rather higher-maintenance Debian again.  Linux suffers from creeping featurism in its layers of audio APIs.  It started with OSS, a straightforward API based on files; then came ALSA, a wildly complex API with broken documentation in a wiki you can’t edit, and an architecture that somehow means only one OSS application can output sound at a time.  It seems to me a failing of ALSA that further layers of abstraction keep being piled on top of it, creating a rather complex landscape for sound hackers to navigate.

Ubuntu has joined in the fun by shipping with PulseAudio, which is probably great for general users but a pain for those needing to work with audio at a low level without using loads of CPU.  Pulse is not straightforward to remove, and once I had removed it I had problems with volume controls not working, plus the likelihood that future system upgrades wouldn’t go so well.  That’s why I switched to the Debian-based sidux, but then I couldn’t get laptop hibernation or my firewire sound card working, and had the stress of maintaining an unstable distribution.

However, this week Puredyne “carrot and coriander” came out, and it’s really great.  The kernel is optimised for realtime sound, and JACK audio runs solidly without any dropouts, something I haven’t seen before.  My firewire sound works reliably, better than I managed under Ubuntu.  It has a really nice logo and clean look, with no plump penguins in sight.  It comes with all the best a/v software beautifully packaged, including all the live coding languages.  The people behind it are super friendly and helpful.  It’s downstream from Ubuntu, so all the software is available.  It’s a dream!

They make a big deal of it being good for booting off a USB key, and I think they’ve worked out some nice practicalities of working that way.  This makes it great for doing workshops, running Linux in a non-Linux lab, and so on.  It installs and works just as nicely on a permanent hard drive though, and that’s what I’ve done.

Anyway, heartily recommended, a dream come true, congratulations to all those involved.

Dedication to RSI is what I have

harvey and his scarf

I’ve kept a bit quiet about a great achievement in my life, but now that I’ve come to terms with it I think the time has come to go public – last September I was knitter of the month for knitting the zig-zag scarf from Aneeta’s excellent knitting-for-beginners book Knitty Gritty.  I made it for my son Harvey (another of my achievements), shown wearing it.

My knitter-of-the-month prize was some beautiful hand-dyed yarn, which I’ve since turned into another scarf with a nice wavy pattern.  I estimate this second scarf took about 7,500 stitches; it took me a while, but I managed to go a bit faster after adjusting my knitting towards a more continental style, holding the yarn in my left hand.

knitting at dorkcamp

The pattern took a bit of concentration, but at some point I started being able to watch videos while knitting.  I’ve found this an excellent way of exploring new fields of science for a couple of hours each night.  I think somehow stitching the knits and purls helps weave new ideas into my understanding.  In any case, often when I’m not in the mood to spend an hour either watching a lecture or knitting, I am in the mood to do both.

Here are some of the videos I’d particularly recommend watching while knitting (note: I’m adding to this as I remember what I’ve watched):

The physical modelling of drums using digital waveguides

Joel Laird completed a fine PhD thesis on the physical modelling of drums in 2001, which included C++ source code for an accurate model of a drum and a felt mallet to hit it with.  I’ve been in contact with Joel and am very happy to have prompted him to license the source under the GPL.

A .tar.gz file including some Windows demo programs and the (Borland) C++ source is here.  I hope to make some time to translate some of it into realtime SuperCollider unit generators soon…

Thanks, Joel!

*update*

I ported it to GNU C++, a version with my edits is available here.  There’s also a darcs repository — patches very welcome!
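
The building block of these models is the 1-D digital waveguide: a vibrating string is simulated as two delay lines carrying the right- and left-going travelling waves, with inverting reflections at the rigid ends.  Here’s a toy sketch of that idea in C++ (my own illustration of the technique, not Joel’s code):

```cpp
// 1-D digital waveguide sketch: two rails of single-sample delays carry the
// right- and left-going travelling waves, and the rigid terminations at each
// end reflect the arriving wave back with inverted sign.
#include <vector>

struct Waveguide {
    std::vector<double> right, left;       // right-going and left-going rails
    explicit Waveguide(int n) : right(n, 0.0), left(n, 0.0) {}

    // Advance one sample: shift each rail along and reflect at the ends.
    void step() {
        double outR = right.back();        // wave arriving at the right end
        double outL = left.front();        // wave arriving at the left end
        right.pop_back();
        right.insert(right.begin(), -outL);  // left-end reflection, inverted
        left.erase(left.begin());
        left.push_back(-outR);             // right-end reflection, inverted
    }

    // "String" displacement at one point: the sum of the two rails.
    double pickup(int i) const { return right[i] + left[i]; }
};
```

With rails of length n the loop is lossless, so a pulse placed on both rails recirculates exactly every 2n samples, and that period is what gives a waveguide its pitch.  A real string model adds a lowpass filter in the loop for frequency-dependent loss, and Joel’s drum models extend the same travelling-wave idea to two dimensions.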

Waveguide mesh unit generator

After quite a bit of fiddling, I got a waveguide mesh working. It’s a physical model of a drum head: basically lots of bidirectional, single-sample delays connected in a triangular mesh to form a hexagon. [update: there’s now a second extern in there that tessellates a circle instead].

It sounds pretty good already; the next plan is to play with different ways of exciting it.

The SuperCollider plugin, together with some Haskell (hsc) code for testing it, is downloadable here.

[update: native sclang code and classes included now too]

[another update: new version with patch from Dan Stowell, it uses less CPU now]

ASCII Rave in Haskell

I’ve been playing with using words to control the articulation of a physical modelling synthesiser based on the elegant Karplus-Strong algorithm.

The idea is to be able to make instrumental sounds by typing onomatopoeic words. (extra explanation added in the comments)
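
For reference, the Karplus-Strong loop itself is tiny: a delay line seeded with a burst of noise, recirculated through an averaging lowpass so the tone decays like a plucked string.  Here’s a sketch of the bare algorithm in C++ (not the hsc3 patch in the video; the noise comes from a hand-rolled generator just to keep the sketch self-contained):

```cpp
// Karplus-Strong plucked string: a delay line is filled with noise, then
// recirculated through a one-zero lowpass (the average of adjacent
// samples), so the tone decays like a plucked string.
#include <vector>
#include <cstdint>

std::vector<double> karplusStrong(int delayLen, int nSamples) {
    // seed the delay line with a noise burst from a hand-rolled LCG
    std::vector<double> delay(delayLen);
    std::uint32_t seed = 1;
    for (double& s : delay) {
        seed = 1103515245u * seed + 12345u;
        s = (seed >> 1) / 1073741824.0 - 1.0;   // roughly uniform in [-1, 1)
    }
    std::vector<double> out;
    out.reserve(nSamples);
    int i = 0;
    for (int n = 0; n < nSamples; ++n) {
        double cur = delay[i];
        int next = (i + 1) % delayLen;
        out.push_back(cur);
        // feedback: averaging this sample with the next damps the highs a
        // little on every trip round the loop
        delay[i] = 0.5 * (cur + delay[next]);
        i = next;
    }
    return out;
}
```

The delay length sets the pitch (roughly sample rate divided by delay length), and the averaging acts as frequency-dependent damping, so the highs die quickly while the lows ring on.  The articulation, the part the typed words control, comes from how this loop is excited and damped.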

Here’s my first ever go at playing with it:

For a fuller, more readable experience you’re better off looking at the higher-quality AVI than the above flash transcoding.

As before, I’m using HSC3 to do the synthesis. If anyone’s interested, I plan to release the full source in September, but the synthesis part is available here.

Canntaireachd for sinewaves

An early sketch of a system of vocables for describing manipulations of a sine wave.

The text is a bit small there; it’s better in the original AVI version.

Vowels give pitch, and consonants give movements between pitches.
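
To make that concrete, here’s a toy interpreter along those lines in C++.  The vowel-to-pitch table and consonant rules below are invented for illustration; they are not the actual canntaireachd-inspired mapping, which drives scsynth via hsc:

```cpp
// Toy vocable interpreter: vowels pick target pitches, consonants pick how
// to move to the next pitch.  The particular mapping is made up for
// illustration only.
#include <string>
#include <vector>
#include <utility>
#include <cctype>

enum class Move { Jump, Slide, Trill };

std::vector<std::pair<Move, double>> interpret(const std::string& vocable) {
    std::vector<std::pair<Move, double>> events;
    Move move = Move::Jump;                     // pending articulation
    for (char raw : vocable) {
        char c = std::tolower(static_cast<unsigned char>(raw));
        switch (c) {
            // vowels: emit a pitch event using the pending movement
            case 'a': events.push_back({move, 220.0}); move = Move::Jump; break;
            case 'e': events.push_back({move, 330.0}); move = Move::Jump; break;
            case 'i': events.push_back({move, 440.0}); move = Move::Jump; break;
            case 'o': events.push_back({move, 550.0}); move = Move::Jump; break;
            case 'u': events.push_back({move, 660.0}); move = Move::Jump; break;
            // consonants: set how we reach the next vowel's pitch
            case 't': case 'd': move = Move::Jump;  break;
            case 'l': case 'r': move = Move::Slide; break;
            default:            move = Move::Trill; break;
        }
    }
    return events;
}
```

With this toy mapping, interpret("tara") gives a jump to 220 Hz followed by a slide back to 220 Hz: two events from a four-letter word.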

Inspired by the notation of canntaireachd. Made with hsc (the Haskell client for scsynth). As ever, code is available under the GPL on application.

I’m not sure where I’m going with this. It’s nice to describe a sound in this way, but to use it in music the sound has to change over time, otherwise it gets repetitive and therefore boring in many situations. I think I either have to develop ways of manipulating these strings programmatically, or ways of manipulating how they are interpreted. Both approaches would involve livecoding of course…