Category Archives: rant

Motivation

Now here’s an hour well spent, Bret Victor giving a talk on “Inventing on Principle”:

He demos some really nice experiments in live interfaces, including some JavaScript live coding with a nice implementation of time scrubbing.  He uses this great work as an illustration of his main point though, which is about why he has done these things. He puts forward a vision of the inventor as someone who isn’t motivated by building a career, making a startup, or engineering challenges in industry or research, but by clear moral principles.

Among others he mentions Richard Stallman, which reminded me of the MOTIVATION file that comes with emacs.

Anyway, watch it — I’m going to watch it again before commenting further…

Computational thinking

Some great news today that the UK school ICT programme is going to be replaced/updated with computer science.  As far as I can tell a lot of schools have actually been doing this stuff already with Scratch, but this means targeting teacher training for broader roll-out.

This has immediately triggered bikeshedding about which programming language should be used.  To quote Twitter, “iteration is iteration and variables are variables. Doesn’t matter if its VB, ASP, Java, or COBOL”.  Apparently one of these should be used because they are “real languages” and Scratch isn’t.

This brought to the fore something I’ve been thinking about for a while: “computational thinking”.  The term seems most often to be used interchangeably with “procedural thinking”, i.e. breaking a problem down into a sequence of operations that solve it.  From this view it makes perfect sense to focus on iteration, alternation and state, to see the language as incidental, and therefore to pick a mainstream language designed for business programming rather than for teaching.

The problem with this view is that thinking of problems in terms of sequences of abstract operations is only one way of thinking about programming languages.  Furthermore it is surface level, and perhaps rather dull.  Ingrained Java programmers might find other approaches to programming difficult, but fresh minds do not, and I’d argue that a broader perspective would serve a far broader range of children than the traditional group of people who tend to be atypical on the autistic spectrum, and who have overwhelmed the programming language design community for far too long.  (This is not meant to be an outward attack; after all, I am a white, middle-aged male working in a computer science department.)

I’d argue then that computational thinking is far richer than procedural thinking alone.  For example, programmers engage mental imagery when they program, and so in my view what matters most to computational thinking is the interaction between mental imagery and abstract thinking.  Abstract procedures are only half of the story, and the whole is far greater than the sum of its parts.  For this reason I believe the visuospatial scene of the programmer’s environment is key to how well it supports computational thinking.

Computation is increasingly becoming about human interaction rather than abstract, halting Turing machines, which in turn should direct us to re-imagining the scope of programming as creative exploration of human relationships with the world.  In my view this calls for engaging with the various declarative and multi-paradigm approaches to programming, and with radical UI design in fields such as the HCI of programming.  If the school programming languages that serve children best end up looking quite different from conventional programming languages, maybe it’s actually the conventions that need changing.

There must be no generative, procedural or computational art

This blog entry feels like a work in progress, so feedback is especially encouraged.

Lately I’ve been considering a dichotomy running through the history of computer art.  On one side of the dichotomy, consider this press statement from SAP, the “world’s leading provider of business software”, on sponsoring a major interactive art group show at the V&A:

London – October 08, 2009 – Global software leader SAP AG (NYSE: SAP) today announced its exclusive partnership with the Victoria and Albert (V&A) Museum in London for an innovative and interactive exhibition entitled Decode: Digital Design Sensations. Central to the technology-based arts experience is Bit.Code, a new work by German artist Julius Popp, commissioned by SAP and the V&A. Bit.Code is themed around the concept of clarity, which also reflects SAP’s focus on transparency of data in business, and of how people process and use digital information.

As consumers, people are overwhelmed with information that comes from a wide variety of electronic sources. Decode is about translating into a visual format the increasing amount of data that people digest on a daily basis. The exhibit seeks to process and make sense of this while engaging the viewer in myriad ways.

As far as art sponsorship goes, this is pretty damn weird.  The “grand entrance installation” was commissioned to reflect the mission statement of the corporate sponsor.  I found nothing in this exhibition about the corporate ownership and misuse of personal data, just something about helping confused consumers.

Of course this is nothing new: the Cybernetic Serendipity exhibition at the ICA in 1968 was an early showcase of electronic and computer art, and was similarly compromised by the intervention of corporate sponsors. As Usselmann notes, despite the turbulence of the late sixties, there was no political dimension to the exhibition.  Usselmann highlights the inclusion of exhibits by the sponsoring corporations in the exhibition itself as excluding such a possibility, and suggests that this created a model of entertainment well suited to interactive museum exhibits, but compromised in terms of socio-political engagement.  Cybernetic Serendipity was well received, and is often lauded for bringing together some excellent work for the first time, but in curatorial terms it seems possible that it has had a lasting negative impact on the computer art field.

As I was saying though, there is a dichotomy to be drawn, and Inke Arns drew it well in this 2004 paper.  Arns makes a lucid distinction between generative art on one side and software art on the other.  Generative art considers software as a neutral tool, a “black box” which generates artworks.  Arns gets to the key point of generative art: it negates intentionality.  The artworks are divorced from any human author, and considered only for their aesthetic.  This lack of an author is celebrated by generative artists, as if the lack of cultural context could set the artwork free towards infinite beauty.  Arns contrasts this with software art, which instead focuses on the software itself as the work, thereby placing responsibility for the work back on the human programmer.  In support, Arns invokes the notion of performative utterances from speech act theory: the process of writing source code is equivalent to performing source code.  Humans project themselves through the act of programming, just as they do through the act of speech.

Arns relates the generative art approach with early work in the 60s, and the software art approach with contemporary work, but this is unfair.  As could be seen in much of the work at Decode, the presentation of source code as a politically neutral tool is still very much alive.  More importantly, she neglects similar arguments to her own that were already being made in the late sixties and early seventies.  A few years after Cybernetic Serendipity, Frieder Nake published his essay There should be no computer art, giving a leftist perspective that decried the art market, in particular the model of art dealers and galleries selling artworks for the aesthetic pleasure of the ruling elite. Here Nake retargets the criticism of sociopolitical emptiness against the art world as a whole:

… the role of the computer in the production and presentation of semantic information which is accompanied by enough aesthetic information is meaningful; the role of the computer in the production of aesthetic information per se and for the making of profit is dangerous and senseless.

From this we already see the dichotomy between a focus on the aesthetic output of processes, and a focus on the processes of software and their role in society. These are not mutually exclusive, and indeed Nake advocates both.  But it seems there is a continuing tendency, with its public beginnings in Cybernetic Serendipity, for computer artists to focus on the output.

So this problem is far from unique to computer art, but as huge corporations gain ever greater control over our information and our governments, the absence of critical approaches in computer art in public galleries looks ever more stark.

So returning to the title of this blog entry, which borrows from the title of Nake’s essay, perhaps there should be no generative, procedural or computational art. Maybe it is time to leave generative and procedural art for educational museum exhibits.  I think this is also true of the term “computational art”, because the word “computation” strongly implies that we are only interested in the end results of processes that halt, rather than in the activity of perpetual processes and their impact on our lives.  Is it time to return to software art, or processor art, or turn to something new, like critical engineering?

Best known and wrong: Dreyfus and Dreyfus

Since dipping my toe into cross-disciplinary research, I’ve noticed that the best-known results of a field are often derided or ignored within that field.  For example:

  • Speech perception: Motor theory – based on the outmoded idea of a special module that evolved for speech perception and action
  • Linguistics: Inuit words for snow – it turns out that they don’t have a particularly large number
  • Neuropsychology: We draw things using one side of the brain and do maths with the other – it’s a bit more complicated than that I believe, although I’d like to know more…
  • Psychology of emotion (?): Kübler-Ross model – the model of five stages of grief doesn’t have any experimental basis
  • Music psychology: Mozart effect – a rather questionable hypothesis, with a conflict of interest, that doesn’t seem to be replicable (except to the extent that it’s also true of death metal). I’ve not met any music psychologists who take it at all seriously.

I’d be interested to hear of more examples…

I guess research is nuanced, and ideas that can be understood from bite-sized quotes get ingrained in folklore over a couple of decades and are impossible to dislodge if/when they are superseded.

These things really get in the way of understanding a field, though. For example, Alan Blackwell’s pioneering masters module on programming language usability found its way onto reddit lately.  One commenter couldn’t understand how the course text could have a chapter on “Acquisition of Programming Knowledge and Skills” without referencing the Dreyfus model of skills acquisition.  The Dreyfus model is detailed in a 30-year-old paper which, while enjoyable to read, does not introduce any empirical research, makes some arbitrary distinctions, and does not seem to figure in any contemporary field of academic research.  In their paper, Dreyfus and Dreyfus suggest that people should not learn by exploration and experimentation, but by reading manuals and theoretical instruction structured around five discrete modes of learning.  It is surprising then that this model appears to be highly regarded among agile development proponents, who through a lot of squinting manage to fit it to the five stages of becoming an agile developer.  For example, this talk by Patrick Kua somehow invokes homeopathy in support of this rather fragile application of Dreyfus’ air pilot training manual design to agile development.

On the surface this seems fairly harmless pseudoscience, but for anyone trying to take a more nuanced view of applied research in software development practices, it can be extremely irritating.  There is no reason why Rogalski and Samurçay should mention the Dreyfus model in their review of programming skills acquisition, but because it is fashionable amongst agile development coaches, its absence seems unforgivable to agile practitioners.  This reddit thread is a clear case of pseudoscience acting as a serious barrier to dialogue between research and practice.

That said, I’m quite naive both about agile development and education studies, so am very happy to be enlightened on any of the above.

To end on a positive note, perhaps the answer to this is open scholarship.  As campaigning and funding organisations lead us towards a future where all publicly funded research is freely available, practitioners are increasingly able to immerse themselves in real, contemporary research.  Perhaps then over-simplistic and superseded ghosts from the past will finally be replaced, and we can live our lives informed by a more nuanced understanding of ourselves.

New old laptop

My old laptop was falling apart, but buying a new one presented all kinds of ethical problems of which I have become increasingly aware.  Also, new laptops are badly made, and I always much preferred the squarer 4:3 screens that weirdly got phased out in the switch to widescreen five years ago (around the same time that storing a collection of films on a laptop became practical, I guess).

So, I built my dream laptop from ebay purchases (all prices include postage):

  • IBM ThinkPad T60 with 1024×768 screen and 2GB RAM – £164.95
    The last IBM-branded ThinkPad, widely considered the best laptop amongst Linux musicians :)  Apparently it is possible to find T61s with 4:3 screens, but I couldn’t find one.
    I did buy a T60 for £118, which had a higher resolution screen, but it arrived damaged and only had 1GB RAM.  This one arrived beautifully reconditioned, well worth the extra, and the 1024×768 screen is good for matching projector resolutions.
  • T7600 CPU – £94.99
    Replacing the 1.8GHz processor with a faster 2.33GHz one, the fastest that the T60 is compatible with.  Installing it was tricky and nerve-wracking, but a YouTube video helped me through it.  £95 is expensive for a second-hand CPU, but that’s because it’s the fastest of its class and so in high demand.
  • Arctic Silver paste – £5.75
    To help keep the faster processor cool.  I was worried I’d have to upgrade the fan too, but the CPU temperature has been fine so far.
  • Kingston 96GB SSD – £85.00
    This probably makes a bigger speed difference than replacing the CPU, and makes the laptop much quieter.  I didn’t put much research into this, but read that more expensive drives aren’t any faster, because of the limitations of an older laptop.
  • 9-cell battery – £20.55
    The laptop came with a working battery, but £20 for 6+ hours of battery life is a no-brainer.

So the total is £371: not that cheap, but it’s a really nice, fast (for my uses), quiet and robust laptop.  Returning to a 4:3 screen feels like opening the door after years of squinting through a letterbox.   Also, screw planned obsolescence: hopefully this five-year-old laptop will be with me for years to come.

Sonic boom

Jew's Harp

I’ve been peeved by this FT article, and failing to express my annoyance over on Twitter, so it’s time for a post.

The article’s central question is: “New technology is leading to some innovative instruments – but will musicians embrace them?”  To start with, this is the wrong way round: musicians have been inventing their own instruments for millennia and willingly embracing them.  For example, one of the oldest pieces of music technology is the Jew’s Harp, a highly expressive timbral instrument that augments the human body. I think all new instruments should be judged against it.

So, on the whole, technology is not some abstract machine churning out devices for musicians to scratch their heads over.  As the antithesis of this point the article introduces Ken Moore, paraphrased and quoted as laying into the ReacTable as a fad which is not often used for real music.  He says a better way forward is to use motion-sensing equipment, in particular his own use of Wii controllers to create theremins.  Now I like theremins very much, but Moore profoundly misunderstands the ReacTable, which actually has motion-sensing technology at its heart.  Indeed, Moore’s videos could easily show him using a ReacTable in the air, but without visual feedback and with only two pucks.

The genius of the ReacTable, which in my view shows the way forward for music tech as a whole, is in combining visual, continuous gestures in a space of relative distance and rotation, defined by and integrated with the abstract, discrete symbols of language.  This is what the Jew’s Harp had already done beautifully, thanks to human categorical vowel perception and a continuous, multidimensional range of embodied expression in vowel space.  The ReacTable pushes this further, however, by bringing dataflow, and to an extent computation, into the visuospatial realm.  This is a very human direction to take things, humans being creatures who fundamentally express themselves both with language and with prosody and movement, engaging with the striated and the smooth simultaneously and intertwined.

I could rant about those crass arguments around ‘real music’ too. People dance to the ReacTable in large numbers, and I don’t see how you can get any more real than that.  Still, if the ReacTable is starting to get bad press then that’s potentially a good sign: it’s forcing people into an uncomfortable position, towards changing their minds about where musician-led technology could really drag us…  towards new embodied languages.

Novels are digital art too

The striated and the smooth

Digital means discrete, and analog means continuous. Digital and analog support each other, as Deleuze and Guattari put it:

… in the case of the striated, the line is between two points, while in the smooth, the point is between two lines.

When we speak, we articulate our vocal tracts in analog movements, creating discontinuities that the listener is able to segment into the digital phonemes, diphones and words of language.  Language is digital, as is clear when we write it with a typewriter.  The analog arrangement of ink on paper is woven into a perfectly discrete sequence of symbols as our eyes saccade across them.  But we reconstruct the analog movements of speech when we read; even when we read silently, we add paralinguistic phrasing in our heads to aid processing of the text.  This analog phrasing is important; for example, modulating the tone of voice with a slight sarcastic edge can completely negate the meaning of what is said.  Prosody can convey far subtler emotional feeling than this.

A great deal of what is called `digital art’ is not digital art at all, and many digital artists seem ashamed of the digital.  In digital installation art, the screen and keyboard are literally hidden in a box somewhere, as if words were a point of shame.  The digital source code behind the work is not shown, and all digital output is only viewable by the artist or a technician for debugging purposes.  The experience of the actual work is often entirely analog: the participant moves an arm, and observes an analog movement in response, in sight, sound or motor control.  They may choose to make jerky, discontinuous movements, and get a discontinuous movement in response, but this is far from the complexity of digital language.  This kind of installation forms a hall of mirrors: you move your arm around and look for how your movement has been contorted.

This is fine; computers allow abstractions away from particular perceptual modes and senses, and so are quite good at translation between modes.  But computers really excel as machines for formal language, and so I hope the practice of hiding linguistic computation in `digital art’ will be a short-lived fashion.

Cyclic revision control

There is something about artist-programmers, the way they’re caught using general purpose languages and tools in specific, unusual circumstances.  Many of the basic assumptions underlying the development of these general purpose systems – that errors are bad, that the passing of time need not be structured, only minimised, that standards and pre-defined plans are good, and so on – often just don’t apply.  It’s not that artist-programmers can get away with being bad programmers.  Far from it: in my opinion they should be fluent in their language; it’s no good being baffled by syntax errors and spaghetti code while you’re trying to work out some weird idea.  However, if you are following your imagination as part of a creative process, then established and fashionable software development methods often look arbitrary and inhibiting.

The last few days I’ve been thinking about revision control.  Revision control systems are really excellent and have a great deal to offer artist-programmers, particularly those working in groups.  What I’ve been wondering though is whether they assume a particular conception of time that doesn’t always apply in the arts.

Consider a live coder, writing software to generate a music performance.  In terms of revision control they are in an unusual situation.  Normally we think of programmers making revisions towards a final result or milestone, at which point they ship. For live coders, every revision they make is part of the final result, and nothing gets shipped; they are already the end users.  We might somewhat crassly think about shipping a product to an audience, but what we’re `shipping’ them isn’t software, it’s a software development process, as musical development.

Another unusual thing about live coding revisions is that whereas software development conventionally begins with nothing and finishes with a complete, complex structure, a live coder begins and ends with nothing.  Rather than aim for a linear path towards a predefined goal, musicians are instead concerned with how to return to nothing in a satisfying manner.  Indeed, perhaps the biggest problem for Live Algorithms is how to stop playing.  The musician’s challenge is both how to build and how to deconstruct.

There are two ways of thinking about time: either as a linear progression, or as a recurrent cycle or oscillation.  Here’s a figure from the excellent book Rhythms of the Brain by György Buzsáki:

“Oscillations illustrate the orthogonal relationship between frequency and time and space and time. An event can repeat over and over, giving the impression of no change (e.g., circle of life). Alternatively, the event evolves over time (pantha rei). The forward order of succession is a main argument for causality. One period (right) corresponds to the perimeter of the circle (left).” (pg. 7)

This illustrates nicely that the two approaches aren’t mutually exclusive; they’re just different ways of looking at the same thing.  Indeed, it’s normal to think of conventional design processes as cycles of development, with repeating patterns between milestones.  It’s not conventional to think of the code itself ending up back where it started, however, but this can happen several times during a music performance: we are all familiar with chorus and verse structure, for example, and performances necessarily begin and end at silence.

So where am I going with this?  I’m not sure, but I think there’s plenty of mileage in rethinking revision control for artist-programmers.  There’s already active, radical work in this area: the code timeline scrubbing in Field looks awesome, for example, and Julian Rohrhuber et al have some great research on time and programming, and have worked on non-linear scheduling of code changes in SuperCollider.

As far as I can see though, the revision control timeline has so far been treated as a linear structure, with occasional branches that rejoin the main flow later on.  You do sometimes get instances of timelines feeding back on themselves, a process called backporting, but this is generally avoided, only done in urgent circumstances such as applying security fixes to old code.

What if instead timelines were cycles within cycles, with revision control designed not to aid progression towards future features, but to help the programmer wrestle their code back towards the state it was in ten minutes ago, and ten minutes before that?  Just questions for now, but I think there is something to be done here.  After all, there is something about artist-programmers, the way they’re caught using general purpose languages and tools in specific, unusual circumstances.
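
To make that a little more concrete, here is a rough Python sketch of the sort of thing I have in mind – all the names here (CyclicHistory, tick, rewind) are made up, and this is a thought experiment rather than a real tool: a rolling buffer of snapshots that lets a performer wind their code back a cycle at a time.

    import time
    from collections import deque

    class CyclicHistory:
        """Keep a rolling history of code snapshots, so the state of the
        code one or more cycles ago can be recalled on demand."""

        def __init__(self, period_seconds=600, max_cycles=8):
            self.period = period_seconds               # length of one cycle, e.g. ten minutes
            self.snapshots = deque(maxlen=max_cycles)  # oldest snapshots quietly fall away
            self.last_snapshot = float("-inf")

        def tick(self, code, now=None):
            """Call on every edit; records a snapshot at most once per cycle."""
            now = time.time() if now is None else now
            if now - self.last_snapshot >= self.period:
                self.snapshots.append(code)
                self.last_snapshot = now

        def rewind(self, cycles_back=1):
            """Return the code roughly as it was `cycles_back` cycles ago."""
            if not self.snapshots:
                raise LookupError("no snapshots recorded yet")
            index = max(len(self.snapshots) - cycles_back, 0)
            return self.snapshots[index]

    # In use, an editor hook would call tick() on every change; here we fake one edit.
    history = CyclicHistory(period_seconds=600)
    history.tick("-- whatever code is in the editor right now")
    ten_minutes_ago = history.rewind(1)

The point being that the primitive operation is not “commit towards the future” but “return towards the past”, with the cycle length itself something the performer could play with.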

Languages are Languages – follow up

There are some interesting comments to my “languages are languages” post that I wanted to highlight — a disadvantage of blogs is that comments are often the best bit but are subservient to the posts they are on.  I also brought the subject up on the PPIG (Psychology of Programming Interest Group) mailing list, again prompting some enlightening discussion.

By the way, PPIG are holding a Work In Progress meeting here in Sheffield on the 18th–19th of April.  A call for abstracts is out now.  Heartily recommended!

Languages are languages

Ian Bogost has an interesting argument that computer languages are not languages, but systems.

He starts off arguing that learning a programming language shouldn’t meet a curricular requirement for learning a natural language.  That’s a fair argument, except he does so on the basis that computer languages are not languages at all.

“the ability to translate natural languages doesn’t really translate (as it were) to computer languages”

It clearly does translate.  You can either translate literally from C to Perl (but not really vice-versa), or idiomatically.  It’s straightforward to translate from C to English, but difficult to translate from English to C.  But then, it’s difficult to translate a joke between sign and spoken language; that doesn’t mean that sign language isn’t a language, indeed sign languages are just as rich as spoken ones…  The experience of signing is different from speaking, and so self-referential jokes don’t translate well.

We can approach translating from English to C in different ways though.  We can model the world described in a narrative in an object oriented or declarative fashion.  A human can get the sense of what is written in this language either by reading it, or perhaps by using it as an API, to generate works of art based on the encoded situation.  Or we could try to capture a sense of expectation in the narrative within temporal code structure, and output it as music.
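
As a toy illustration of that first, object-oriented approach – in Python rather than C for brevity, and with invented names – a single narrative sentence might be modelled like this:

    class Character:
        """A toy model of a person mentioned in a narrative."""

        def __init__(self, name):
            self.name = name
            self.location = None

        def walk_to(self, place):
            """Record the movement and return an English rendering of it."""
            self.location = place
            return f"{self.name} walked to the {place}."

    # "The woman walked to the river", modelled as an object and a method call.
    woman = Character("The woman")
    print(woman.walk_to("river"))

Reading the class definition gives one sense of the sentence; calling it as an API, perhaps from a piece that generates imagery or music from the encoded situation, gives another.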

From the comments:

“If we allow computer languages, we should allow recipes. Computer codes are specialized algorithms. So are recipes.”

This seems to be confusing utterances with languages.  Recipes are written in e.g. English.  Computer programs are written in e.g. C.

“[programming code is] done IN language, but it ISN’T language”

You could say the same of poetry, surely?  Poetry is done in language, but part of its power is to reach beyond language in new directions.  Likewise code is done in language, but you can also do language in code, by defining new functions or parsing other languages.

The thing is that natural languages develop in a close relationship with the speaker, words being grounded in the human experience of their body and environment, and of movements and relationships within it.  Computer languages aren’t based around these words, but we can still make the same symbolic references by using those words in the secondary notation of function names and variables, or even by working with an encoded lexicon such as WordNet as data.  In doing so we are borrowing from a natural language, but we could just as easily have used an invented language such as Esperanto.  Finally, the language is grounded in the outside world when it is executed, through whatever modality or modalities its actuators allow, usually vision, sound and/or movement.
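
For instance, a few lines of Python are enough to treat a lexicon as data, here using the WordNet interface that ships with the NLTK library (this assumes NLTK and its WordNet corpus are installed, and is only meant as an illustration):

    # Requires: pip install nltk, then nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    # Look up the senses of a word and print their dictionary glosses,
    # treating the lexicon as plain data for a program to work with.
    for synset in wn.synsets("river"):
        print(synset.name(), "-", synset.definition())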

… replacing a natural language like French with a software language like C is a mixed metaphor.

Discussing computer language as if it were natural language surely isn’t a mixed metaphor; if anything it’s just a plain metaphor.  But the two bear a strong family resemblance, because both are languages.

The claim that computer languages are not languages reads as an attempt to portray computer languages as somehow not human.  Get over it: digital computation is something that humans do with or without electronic hardware; we can do it in ways that engage fully with all of our senses, and we can do it with language.  Someone (who I’ll keep anonymous, just in case) said this on a mailing list recently:

“Having done a little bit of reading in Software Studies, I was surprised by just how many claims are invalidated with a single simple example of livecoding.”

I think that this is one of them.