Experimenting with webcam overlay. Video recorded using gstreamer, source for screencaster here (screensave.c).
UPDATE, here’s another from a different angle to appease douglas.
The Text live coding workshop went really well, surprisingly well considering it was the first time anyone apart from me had used it, and (so I found out afterwards) most of the participants didn’t have any programming experience. The six participants took to the various combinators remarkably quickly, the main stumbling block being getting the functions to connect in the right way… Some UI work to do there, and I got some valuable feedback on it.
Once the participants had got the hang of things on headphones, we all switched to speakers and the seven of us played acid techno for an hour or so together, in perfect time sync thanks to netclock. Here’s a mobile phone snippet:
The sound quality doesn’t capture it there, but for me things got really interesting musically, and it was fun walking around the room panning between the seven players…
I’ve been rather busy writing lately; my PhD funding runs out in April, and I hope by then I’ll have finished and will be looking for things to do next.
I have had a bit of time to make Text, the visual language I mentioned earlier, a bit more stable. Here’s a test run:
A bit of a struggle, partly due to the small screen area I gave myself for the grab, but also due to some UI design issues I need to sort out before my workshop at Access Space in Sheffield next week, on the 5th February. Access Space is a really nice free media lab, but will turn nasty unless I free the workshop software, so expect a release soon.
In case anyone is interested, here’s the Linux command line I use to record a screencast with audio from jackd:
gst-launch-0.10 avimux name=mux \
! filesink location=cast.avi \
ximagesrc name=videosource use-damage=false endx=640 endy=480 \
! video/x-raw-rgb,framerate=10/1 \
! videorate \
! ffmpegcolorspace \
! videoscale method=1 \
! video/x-raw-yuv,width=640,height=480,framerate=10/1 \
! queue \
! mux. \
jackaudiosrc connect=0 name=audiosource \
! audio/x-raw-float,rate=44100,channels=2,width=32 \
! audioconvert \
! queue \
! mux.
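If you’re on a newer system gst-launch-0.10 is long gone, so here’s a rough, untested sketch of what I believe the same pipeline looks like under GStreamer 1.0 (the element and caps names changed, e.g. ffmpegcolorspace became videoconvert and the raw caps strings were unified):

```shell
# Untested GStreamer 1.0 translation of the 0.10 pipeline above.
# videoconvert replaces ffmpegcolorspace; video/x-raw-yuv and
# audio/x-raw-float become video/x-raw and audio/x-raw with a format field.
gst-launch-1.0 avimux name=mux \
! filesink location=cast.avi \
ximagesrc name=videosource use-damage=false endx=640 endy=480 \
! videorate \
! videoconvert \
! videoscale method=1 \
! video/x-raw,width=640,height=480,framerate=10/1 \
! queue \
! mux. \
jackaudiosrc connect=0 name=audiosource \
! audioconvert \
! audio/x-raw,format=S16LE,rate=44100,channels=2 \
! queue \
! mux.
```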
Text is an experimental visual language under development. Code and docs will appear here at some point, but all I have for now is this video of a proof of concept.
It’s basically Haskell, but with syntax based on proximity in 2D space rather than adjacency. Type-compatible things connect automatically, made possible through Haskell’s strong types and currying. I implemented the interface in C using Clutter, and ended up implementing a lot of Haskell’s type system. Whenever something changes, it compiles the graph into Haskell code, which gets piped to ghci. The different colours are the different types; stripes are curried function parameters. Lots more to do, but I think this could be a really useful system for live performance.
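To make the proximity idea concrete, here’s a tiny Python sketch of type-driven wiring. The data model and the `wire` function are my own invention for illustration, not Text’s actual implementation: each node carries a 2D position and a curried type signature, and each function argument slot grabs the nearest unused node whose result type matches.

```python
from math import dist

# A value has signature ["Int"]; a function Int -> Int -> Int has
# signature ["Int", "Int", "Int"] (curried arguments first, result last).
def wire(nodes):
    code = {}
    for name, (pos, sig) in nodes.items():
        if len(sig) == 1:
            code[name] = name            # plain value, nothing to apply
            continue
        args = []
        for wanted in sig[:-1]:          # one slot per curried parameter
            # nearest unused node whose *result* type matches the slot
            candidates = [(dist(pos, p), n)
                          for n, (p, s) in nodes.items()
                          if n != name and s[-1] == wanted and n not in args]
            if candidates:
                args.append(min(candidates)[1])
        code[name] = "(" + " ".join([name] + args) + ")"
    return code

nodes = {
    "add": ((0.0, 0.0), ["Int", "Int", "Int"]),   # Int -> Int -> Int
    "one": ((0.1, 0.0), ["Int"]),
    "two": ((0.0, 0.1), ["Int"]),
    "far": ((9.0, 9.0), ["Int"]),
}
print(wire(nodes)["add"])        # → (add one two)
```

The nearby `one` and `two` get applied while the type-compatible but distant `far` is ignored, which is the whole trick: spatial layout stands in for textual adjacency.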
I’ve been through a few Linux distros over the years, each getting progressively easier to install and configure as I became less willing to spend time recompiling kernels, culminating in Ubuntu, where I enjoyed the attention to detail and simplicity of use. Recently though, I’ve had to give Ubuntu up and go back upstream to the rather higher-maintenance Debian. Linux suffers from creeping featurism in its layers of audio APIs. It started with OSS, a straightforward API based on files; then came ALSA, a wildly complex API with broken documentation in a wiki you can’t edit, and an architecture that somehow means only one OSS application can write sound at a time. It seems to me a failing of ALSA that further layers of abstraction are piled on top of it, creating a rather complex landscape for sound hackers to navigate.
Ubuntu has joined in the fun by shipping with PulseAudio, which is probably great for general users but a pain for those needing to work with audio at a low level without using loads of CPU. Pulse is not straightforward to remove, and when I removed it I had problems with volume controls not working, plus the likelihood that future system upgrades wouldn’t go so well. That’s why I switched to Debian sidux, but then I couldn’t get laptop hibernation or my FireWire sound card working, and had the stress of maintaining an unstable distribution.
However, this week Puredyne carrot and coriander came out, and it’s really great. The kernel is optimised for realtime sound, and JACK audio runs solidly without any dropouts, something I hadn’t seen before. My FireWire sound works reliably, better than I ever managed under Ubuntu. It has a really nice logo and clean look, with no plump penguins in sight. It comes with all the best a/v software beautifully packaged, including all the live coding languages. The people behind it are super friendly and helpful. It’s downstream from Ubuntu, so all the software is available. It’s a dream!
They make a big deal out of it being good for booting off a USB key, and I think they’ve worked out some nice practicalities of working that way. This makes it great for doing workshops, running Linux in a non-Linux lab and so on. It installs and works just as nicely on a permanent hard drive though, and that’s what I’ve done.
Anyway, heartily recommended, a dream come true, congratulations to all those involved.
I’ve been thinking about visual languages and the morphology of symbols (as opposed to words) for a while. I had the opportunity to start putting some of these ideas into code at a really excellent openframeworks workshop this week, run by Joel Gethin Lewis and Arturo Castro.
Here’s what it does:
Makes the point nicely that symbols and spaces can intertwine.
Using opencv blob detection, the regularity, direction and area of the shapes map to envelope modulation, resonance and pitch. The drawing is then sequenced into a melody using the minimum spanning tree (from the boost library) of the shape centroids, where distance maps to inter-onset interval.
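The sequencing step can be sketched in a few lines of plain Python. This is a hedged re-implementation of the idea rather than the original openframeworks/boost code: Prim’s algorithm builds the minimum spanning tree over the shape centroids, and a depth-first walk turns each edge’s length into the inter-onset interval before the child shape’s note (the centroids and the `scale` factor here are made-up numbers).

```python
from math import dist

def mst_edges(points):
    """Prim's algorithm: grow the tree from point 0, always adding the
    cheapest edge from inside the tree to a point outside it."""
    in_tree, edges = {0}, []
    while len(in_tree) < len(points):
        d, i, j = min((dist(points[i], points[j]), i, j)
                      for i in in_tree
                      for j in range(len(points)) if j not in in_tree)
        in_tree.add(j)
        edges.append((i, j, d))
    return edges

def schedule(points, scale=0.01):
    """Depth-first walk over the MST; each edge's length becomes the
    gap (inter-onset interval) before the child's note."""
    children = {}
    for i, j, d in mst_edges(points):
        children.setdefault(i, []).append((j, d))
    onsets, stack = [], [(0, 0.0)]
    while stack:
        node, t = stack.pop()
        onsets.append((node, round(t, 3)))
        for child, d in children.get(node, []):
            stack.append((child, t + d * scale))
    return onsets

centroids = [(0, 0), (30, 0), (30, 40), (100, 40)]   # toy shape centres
print(schedule(centroids))   # → [(0, 0.0), (1, 0.3), (2, 0.7), (3, 1.4)]
```

Shapes drawn close together sound in quick succession, and an outlying shape arrives late, which is what makes the melody follow the layout of the drawing.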
It also has a mode for projecting the red circles and highlights back on the drawing surface which worked well.
This is only the second thing I’ve made with openframeworks, and while I don’t really get on with the codeblocks editor recommended for linux, I’m impressed with how accessible it makes opencv and all that.
Update: openframeworks source code
Another update (1st August 2013): I ported this to Python, get the source here
I’m going to do a live a/v stream from my sofa 10pm GMT this Saturday 13th December ’08, livecoding with Perl and hopefully also a little language parsed with Haskell. You can find info about how to watch, listen to the stream and join the chat over on the toplap site.
I did something similar last weekend, a remote performance to the Piksel festival in Norway, and I enjoyed it so much I had to repeat it. Hopefully it’ll become a regular thing, yeeking has already offered to do the next one.
I’m doing the streaming with gstreamer; I don’t know whether it’s possible to do live screencasts this way with anything else, and it offers a huge amount of control. I reached the limits of gst-launch, so have written a little gstreamer app to use for this weekend. I’ll be releasing that soon…
Another thing – it’s the xmas dorkboteastlondon tomorrow (Thurs), and one of our best line-ups ever. Unmissable if you’re around…
Two posts rolled in to one, to annoy the aggregators a bit less (sorry haskellers, more haskell stuff soon).
First, dorkcamp is a lovely event in its third year. The idea is for around 60 of us to go to a campsite an hour out of London, well equipped with showers, toilets, a big kitchen and hall, and do fun dorky stuff like soldering and knitting. It happens at the end of August, tickets are running low so grab yours now. More info on the website and wiki.
Second here’s a new demo, this time with two drum simulations, one high and one low:
Joel Laird completed a fine PhD thesis on physical modelling drums in 2001, which included C++ source code for an accurate model of a drum and a felt mallet for hitting it with. I’ve been in contact with Joel and am very happy to have prompted him to license the source under the GPL.
A .tar.gz file including some Windows demo programs and the (Borland) C++ source is here. I hope to make some time to translate some of it into realtime SuperCollider unit generators soon…
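For a feel of what this family of technique involves, here’s a toy Python sketch, emphatically not a translation of Laird’s model: a square membrane obeying the 2D wave equation, struck at the centre, with fixed edges and mild damping, stepped with a finite-difference leapfrog scheme. All the constants are my own placeholder values.

```python
# Toy finite-difference drum membrane: u holds displacements on an
# N x N grid; each step applies the discrete 2D wave equation.
N, STEPS = 16, 200
C2, DAMP = 0.25, 0.999       # wave constant (stable if <= 0.5) and decay
u_prev = [[0.0] * N for _ in range(N)]
u = [[0.0] * N for _ in range(N)]
u[N // 2][N // 2] = 1.0      # the "mallet" strike
signal = []                  # membrane sampled at one point = audio out
for _ in range(STEPS):
    u_next = [[0.0] * N for _ in range(N)]
    for y in range(1, N - 1):          # edges stay clamped at zero
        for x in range(1, N - 1):
            lap = (u[y - 1][x] + u[y + 1][x] + u[y][x - 1] + u[y][x + 1]
                   - 4 * u[y][x])      # discrete Laplacian
            u_next[y][x] = DAMP * (2 * u[y][x] - u_prev[y][x] + C2 * lap)
    u_prev, u = u, u_next
    signal.append(u[N // 2][N // 2])
print(len(signal))           # → 200
```

In a real version `signal` would be written out at audio rate; Laird’s work adds the crucial part this sketch omits, a properly modelled felt mallet interacting with the head.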
I’m working with Jamie Forth on ideas around spaces of rhythm. Here’s a demo (which might not work in feed readers):
[kml_flashembed movie="http://doc.gold.ac.uk/~ma503am/software/space/audio.swf" height="300" width="400" bgcolor="#000000" /]
The space has two quality dimensions, “intensity” (X) and “disorder” (Y). Drum patterns are arranged along these dimensions, so more intense ones are towards the left and more ordered ones towards the top.
Draw a line from a high hat to a kick drum. If you draw a short line, the rhythms will be more homogeneous. Certain angles have certain feels to them. Maybe. It seems a nice way of playing with polymetric rhythms as vectors, anyway.
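One way to read the line idea in code, with parameter mappings that are my own guesses rather than our actual implementation: a point’s X (intensity) maps to event density, its Y (disorder) nudges odd-numbered events off the even grid, and a drawn line gives each voice a point along it.

```python
def pattern(point, steps=16):
    """Hypothetical mapping from a (intensity, disorder) point to a
    list of step numbers on a 16-step grid."""
    intensity, disorder = point
    n_events = max(1, round(intensity * steps))    # X -> density
    shift = round(disorder * steps / 4)            # Y -> grid offset
    return sorted({(round(i * steps / n_events) + (shift if i % 2 else 0))
                   % steps
                   for i in range(n_events)})

def voices_on_line(a, b, n_voices):
    """Give each voice a point spaced evenly along the line from a to b."""
    ts = [i / max(1, n_voices - 1) for i in range(n_voices)]
    return [pattern((a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t))
            for t in ts]

hat, kick = (0.8, 0.1), (0.25, 0.1)   # endpoints of a drawn line
print(pattern(kick))                  # → [0, 4, 8, 12]
```

A shorter line keeps the endpoint points close together, so the generated parts come out nearly identical; stretching or rotating it separates the voices in density and disorder, which is one reading of the homogeneity and angle effects above.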