Returned from ICMC 2007

Søren Knudsen EMAIL HIDDEN
Wed Sep 5 16:00:37 CEST 2007


This is my experience with ICMC 2007... It's also blogged at
blog.audiomind.dk (with pictures)

- Søren

---

I have just returned to my office after attending ICMC 2007.

I attended ICMC 2007 as a student volunteer, which meant that I had to do some
work during the conference: mainly taking care of any technical problems in
the auditoria, and the technical aspects of recording some interviews (in
collaboration with Thibaut de Ruyter and Erik Christensen) and panel
discussions, which will be published here in the near future. It was quite OK
and I got to meet some interesting people too.

We did interviews with John Chowning, Perry Cook, Lars Graugaard, Kristoffer
Jensen, Gary Kendall, Steve Mann, Barbara Tillmann, Nicola Bernardini, Roger
Dannenberg, Henrik Frisk, Kia Ng, Jerome Thomas, Marg Anga, Norah Zuniga
Shaw, Birgitte Alsted, Cynthia Grund and Fuzzi. All in all a very broad range
of people with interesting perspectives on electronic music and more.

I had the pleasure of listening to John Chowning's Stria in Musikteatret Plex
(The Music Theatre Plex), "extended" with some visualizations to celebrate its
30th anniversary. A great piece imho - the visualizations too, btw. The
picture below shows the general idea of the visuals: the horizontal axis is
frequency and the vertical axis is time, the narrow line in the middle is the
current time, the orange/yellow area is what is to come, and the green area is
what has already been played. John Chowning also gave one of the keynote
speeches at the conference (the only one I had time to attend), in which he
looked back at the previous fifty years of computer music - the fact that the
speech came from one of the guys who has been in the field more or less the
whole time gave it a dimension that was very inspiring.

I also had the chance to hear Roger Dannenberg talk about an idea of his,
which basically extends the familiar idea of the spelling checker in ordinary
word processing software to multitrack recording software - check the
proceedings when (and if, I have no idea) they come online. Anyway, it was an
interesting idea and not that hard to implement, I think. Let's see this in
Ardour, please :) Can't find the paper online, btw. The title should be
"An Intelligent Multi-Track Audio Editor".
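
To make the "spelling checker for audio" analogy concrete, here is a rough
Python sketch of what I imagine such a check could look like - the specific
checks (clipping and dead air) and all the names are my own guesses, not
anything from Dannenberg's paper:

    import numpy as np

    def check_track(samples, sr, clip_level=0.999, silence_db=-60.0, win_s=0.05):
        """Flag suspicious regions in one track, spell-checker style.

        samples: 1-D float array in [-1, 1]; sr: sample rate in Hz.
        Returns a list of (time_in_seconds, problem) tuples.
        """
        problems = []
        win = int(win_s * sr)
        for start in range(0, len(samples) - win + 1, win):
            frame = samples[start:start + win]
            rms_db = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)
            if np.max(np.abs(frame)) >= clip_level:
                problems.append((start / sr, "possible clipping"))
            elif rms_db < silence_db:
                problems.append((start / sr, "dead air"))
        return problems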

Then there was the water concert in DGI-byen (a public indoor swimming pool
and more), with Steve Mann among others playing their so-called
hydraulophones, which are acoustic instruments. But where ordinary acoustic
instruments make their sound by modes (or resonances) in gaseous matter
(normally air), these instruments make their sound in liquid matter (normally
water). Apparently this not only changes the speed of the sound, and thereby
the modes which occur at a given length, but also gives rise to more
turbulence in the liquid. According to my calculations, a 1 meter pipe at 20
degrees Celsius, closed at both ends and filled with air, should have
resonances at (344 m/s) / (2 * 1 m) = 172 Hz and multiples of that. The same
pipe filled with water should then have resonances at
(1482 m/s) / (2 * 1 m) = 741 Hz and multiples of that. How the pipe in the
picture shown on the left can produce sound at frequencies much lower than
that, with a much shorter pipe, is beyond me. That said, I think the
instruments sounded very organic. Another aspect of these sorts of instruments
is their applicability to public outdoor installations, as they require little
maintenance and clean themselves quite well. Audio examples of these
instruments will be published as part of the audio interviews referred to at
the beginning. They published a paper in the conference proceedings, but I
can't find it anywhere on the net. The title is "Inventing new instruments
based on a computational 'hack' to make a badly tuned or unpitched instrument
play in perfect harmony".
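
For the curious, the calculation above is just f_n = n * v / (2L) for a pipe
closed at both ends; a tiny Python sketch using the standard speeds of sound
at 20 degrees Celsius:

    def pipe_resonances(speed_of_sound, length_m, n_modes=3):
        """Resonances (Hz) of a closed-closed pipe: f_n = n * v / (2 * L)."""
        return [n * speed_of_sound / (2 * length_m) for n in range(1, n_modes + 1)]

    print(pipe_resonances(344.0, 1.0))   # air:   [172.0, 344.0, 516.0]
    print(pipe_resonances(1482.0, 1.0))  # water: [741.0, 1482.0, 2223.0]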

Ge Wang and Rebecca Fiebrink at Princeton gave a presentation of the new
version of the ChucK language. They have made some pretty interesting
extensions for working with realtime audio analysis such as MIR (Music
Information Retrieval). Get the paper here.
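
This is not ChucK code, but to give an idea of the kind of frame-by-frame
analysis these extensions are about, here is a small Python sketch of one
classic MIR feature, the spectral centroid (a rough "brightness" measure):

    import numpy as np

    def spectral_centroid(frame, sr):
        """Magnitude-weighted mean frequency of one windowed frame."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

    # Analyse a signal frame by frame, as a realtime unit analyser would:
    sr, hop, n = 44100, 512, 2048
    signal = np.random.randn(sr)  # stand-in for a live input buffer
    centroids = [spectral_centroid(signal[i:i + n], sr)
                 for i in range(0, len(signal) - n, hop)]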

Matt Hoffman (also at Princeton) demoed FeatSynth, a framework that handles
synthesizer settings by choosing the synthesizer parameters that make the
synth sound close(st) to a soundfile you give as input to the system. Look
here.
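
As I understood it, the core idea is a search over the synth's parameter
space, minimizing the distance between the features of the synth output and
those of the target file. A naive random-search sketch in Python (toy synth,
toy features, all names mine - the real system is surely cleverer):

    import numpy as np

    def features(sound):
        """Toy feature vector; a real system would use MFCCs, centroid, etc."""
        spectrum = np.abs(np.fft.rfft(sound))
        return spectrum / (np.sum(spectrum) + 1e-12)

    def synth(params, n=4096, sr=44100):
        """Toy two-oscillator synth controlled by (freq1, freq2, mix)."""
        f1, f2, mix = params
        t = np.arange(n) / sr
        return mix * np.sin(2 * np.pi * f1 * t) + (1 - mix) * np.sin(2 * np.pi * f2 * t)

    def match_target(target, n_trials=2000, rng=np.random.default_rng(0)):
        """Random search for the parameters whose output is nearest the target.

        target: array of the same length as the synth output (4096 here).
        """
        goal, best, best_dist = features(target), None, np.inf
        for _ in range(n_trials):
            params = (rng.uniform(50, 2000), rng.uniform(50, 2000), rng.uniform(0, 1))
            dist = np.linalg.norm(features(synth(params)) - goal)
            if dist < best_dist:
                best, best_dist = params, dist
        return best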

In the same area, SoundSpotter, a realtime MIR program developed by Michael
Casey at the University of London, was demoed. The software makes it possible
to retrieve video clips whose sound matches what you feed into a sound input
in realtime, allowing for a concept that was new to me, namely concatenative
synthesis. You could for instance sing a melody into a microphone and have the
system show movie clips which, when concatenated, result in the same melody.
Quite cool, I'd say! More here.
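
The matching behind this kind of concatenative approach is quite simple in
principle: precompute a feature vector for every segment in the clip database,
then do a nearest-neighbour lookup for each incoming frame. A Python sketch
(my own simplification, with a crude stand-in feature):

    import numpy as np

    def band_features(frame, n_bands=13):
        """Crude stand-in for a real timbre/pitch feature (e.g. MFCCs)."""
        spectrum = np.abs(np.fft.rfft(frame))
        bands = np.array_split(spectrum, n_bands)
        return np.log(np.array([b.mean() for b in bands]) + 1e-9)

    def build_corpus(segments):
        """segments: iterable of (clip_id, audio_frame) pairs."""
        return [(clip_id, band_features(frame)) for clip_id, frame in segments]

    def spot(live_frame, corpus):
        """Return the id of the stored segment nearest the live input frame."""
        query = band_features(live_frame)
        return min(corpus, key=lambda item: np.linalg.norm(item[1] - query))[0]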

Other interesting ideas presented were RasterPiece, The Sliding Phase
Vocoder, The ElectroAcoustic Resource Site, Int.Lib, ... I really can't
remember it all right now :)

I also got to hear live performances by Andreas Weixler and Se-Lien Chuang,
Christopher Penrose, Sten-Olof Hellström and John Bowers (live circuit
bending), and finally Zach Layton. Many of the performers did some wonderful
visuals as well as music, of course.

I talked to one of the aa-cell guys when he demoed Impromptu. Nice piece of
software... Now _why_ is it that I don't own a Mac?

I also _almost_ got the name of a nice piece of VJing software - but then not
quite. So: does anyone know of such a piece of software? As far as I remember,
its name started with "Mu" and consisted of 6 letters. Also: John Chowning
linked to a new FM tutorial authored by him during his keynote, and I, stupid
as I was, thought I would be able to find it in no time on the net. Turns out
I could not...

I think that's all for now. I have to look into the table setup now. More
info/pics on that soon, so stay tuned.



