
[Csnd] Re: Re: BArCMuT @ CNMAT - Dec 4

Date: 2008-12-03 22:21
From: victor
Subject: [Csnd] Re: Re: BArCMuT @ CNMAT - Dec 4
Judging by what we saw at ICMC, you could be
quite disappointed...

Victor
----- Original Message ----- 
From: "Richard Dobson" 
To: 
Sent: Wednesday, December 03, 2008 7:43 PM
Subject: [Csnd] Re: BArCMuT @ CNMAT - Dec 4


> Wish I could be there, especially for the parallel  computing feature - 
> might you be able to post a review of the event here?
>
> Richard Dobson
>
> Noah Thorp wrote:
> ..
>> Our presenters this month from CNMAT will be:
>> - Adrian Freed (Research Director) will give a brief overview of CNMAT 
>> research and will demonstrate some new IDE-free micro-controller 
>> programming techniques using OSC that are part of a project to substitute 
>> Arduino/Wiring for more modern, rapid prototyping techniques. He will 
>> also show some music and general gestural controllers and other DIY 
>> projects using fiber fabric and malleable materials.
>> - David Wessel (Co-Director) will present his many-touch instrument and 
>> briefly describe the new Berkeley parallel computing lab 
>> (http://parlab.eecs.berkeley.edu/)
>> - John MacCallum will present CNMAT’s 120-element Spherical Loudspeaker 
>> with various beam-forming and musical applications.
>
>
>
> Send bug reports to this list.
> To unsubscribe, send email to sympa@lists.bath.ac.uk with body "unsubscribe 
> csound" 


Date: 2008-12-03 22:34
From: Richard Dobson
Subject: [Csnd] Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Maybe; but these institutions set out to be leaders and to set the 
agenda***, and the advantage of being there might simply be to ask 
questions!


*** See e.g.:

http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=212201281

"
"We have the opportunity to reset the software stack for the next 30 
years," said David A. Patterson, veteran Berkeley computer science 
professor and director of the lab. "I can't think of another research 
project that has as much upside or risk," he added.
"

Whether they can, um, reset the music software stack, and whether we 
like it when they've done it, are of course the Big Questions!


So if I was there I would ask something along the lines of "what NEW 
sounds will we be able to make, what NEW processes will we be able to 
explore,  with all this parallel processing?"


Richard Dobson





victor wrote:
> Judging by what we saw at ICMC, you could be
> quite disappointed...
> 


Date: 2008-12-03 23:53
From: "Steven Yi"
Subject: [Csnd] Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Attachments: None

Date: 2008-12-04 03:58
From: "Michael Gogins"
Subject: [Csnd] Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Attachments: None

Date: 2008-12-04 04:50
From: Darren Nelsen
Subject: [Csnd] Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
As I'm on the other side of the continent, I look forward to hearing  
your report!

-Darren

On Dec 3, 2008, at 6:53 PM, Steven Yi wrote:

> Well, I think I will be able to attend this meeting as it's maybe 8-10
> blocks walking distance away. =)  Though, I think we know already that
> parallel processing will simply provide for more processing at one
> time and that's about it.  Since parallel algorithms can just as
> easily be computed on a single processor, I don't think it's going
> to open up any new possibilities in that regard.  The last time I was
> able to attend the BArCMuT meeting was also at CNMAT much earlier this
> year and it looks like they'll be covering a lot of the same ideas
> from that meeting.  At that meeting they were doing processing of
> many, many channels of audio-rate signals, which I think may be
> important for their spherical speaker, but also they were using the
> audio-rate signals for control-rate data, using custom controllers.
> In those contexts which were all realtime, I can see parallel
> processing opening up options for what is practical in realtime.  If
> you're not concerned with realtime performance, then I don't see any
> inherent musical possibilities that could not be done even on a single
> processor and with software rendering to disk.
>
> So, at least for me, the issue is not whether a musical operation can
> be done, but rather whether it can be done in realtime.  Anyways, I will
> be attending and can report on the meeting to this list. =)
>
> steven
>
>
>
> On Wed, Dec 3, 2008 at 2:34 PM, Richard Dobson
>  wrote:
>> Maybe; but these institutions set out to be leaders and to set the
>> agenda***, and the advantage of being there might simply be to ask
>> questions!
>>
>>
>> *** See e.g.:
>>
>> http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=212201281
>>
>> "
>> "We have the opportunity to reset the software stack for the next  
>> 30 years,"
>> said David A. Patterson, veteran Berkeley computer science  
>> professor and
>> director of the lab. "I can't think of another research project  
>> that has as
>> much upside or risk," he added.
>> "
>>
>> Whether they can, um, reset the music software stack, and whether  
>> we like it
>> when they've done it, are of course the Big Questions!
>>
>>
>> So if I was there I would ask something along the lines of "what  
>> NEW sounds
>> will we be able to make, what NEW processes will we be able to  
>> explore,
>> with all this parallel processing?"
>>
>>
>> Richard Dobson
>>
>>

Date: 2008-12-04 09:51
From: Richard Dobson
Subject: [Csnd] Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Steven Yi wrote:
> Well, I think I will be able to attend this meeting as it's maybe 8-10
> blocks walking distance away. =)  Though, I think we know already that
> parallel processing will simply provide for more processing at one
> time and that's about it.  Since parallel algorithms can just as
> easily be computed on a single processor, 

There are plenty of algorithms out there that are beyond what a single 
processor, even with a 6GHz (or 20GHz for that matter) clock speed, can 
do in real time, with low power consumption. The Sliding Phase Vocoder 
now in Csound is but one example.
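
To make the shape of that problem concrete, here is a minimal sketch - my 
own illustration in plain C, emphatically not the Csound source - of the 
sliding-DFT recurrence that a sliding phase vocoder rests on. Every bin is 
updated on every input sample, so the cost is O(N) per sample rather than 
O(N log N) per hop; but each bin's update is independent of the others, 
which is exactly the shape of work that SIMD or GPU hardware handles well.

/* Sketch of the sliding-DFT recurrence (illustrative only, not the
 * Csound implementation).  For each bin k, a running spectrum value is
 * updated once per input sample:
 *
 *     X_k[n] = (X_k[n-1] + x[n] - x[n-N]) * exp(j*2*pi*k/N)
 *
 * The loop over bins has no cross-bin dependencies, so it can be spread
 * across SIMD lanes or GPU threads; the cost is N complex updates per
 * sample, per channel.
 */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 1024                        /* number of bins / window length */

int main(void)
{
    static double complex X[N];       /* running spectrum               */
    static double ring[N];            /* last N input samples           */
    double complex rot[N];            /* per-bin twiddle e^(j*2*pi*k/N) */
    const double twopi = 6.283185307179586;
    const double sr = 44100.0;

    for (int k = 0; k < N; k++)
        rot[k] = cexp(I * twopi * k / N);

    /* one second of a 440 Hz test tone, processed sample by sample */
    for (int n = 0; n < (int)sr; n++) {
        double x_new = sin(twopi * 440.0 * n / sr);
        double x_old = ring[n % N];
        ring[n % N] = x_new;
        for (int k = 0; k < N; k++)   /* parallel across bins */
            X[k] = (X[k] + x_new - x_old) * rot[k];
    }
    printf("magnitude near 440 Hz (bin 10): %f\n", cabs(X[10]));
    return 0;
}

Compile with something like "cc -std=c99 sdft.c -lm"; even this toy does 
about 45 million complex bin-updates per second of audio per channel, 
before any resynthesis or per-bin processing.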

I have approached the issue somewhat differently, making a distinction 
between mere parallel processing per se, and what we have called 
"High-Performance Audio Computing" (HiPAC), which considers what new 
things we can do with a ~lot~ more processing power than we have now. It 
so happens that as Moore's law reaches its limit, it is being replaced 
by a new measure related to multi-core processing in a number of forms, 
not least massive SIMD-style vector accelerators (e.g. Clearspeed, 
GPGPU, etc.); hence to achieve most of the goals of HiPAC inevitably 
means employing large-scale vector acceleration (and quite possibly 
"conventional" multi-core processing too). These architectures are ideal 
for computing FFTs, FIRs, 2D and 3D meshes, and other "embarrassingly 
parallel" algorithms.  There is a lot more to this topic than just 
running multiple Csound instruments simultaneously.
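
To show what "embarrassingly parallel" means in practice, here is a plain-C 
direct-form FIR - again just an illustrative sketch of mine, not anyone's 
optimised code. Each output sample depends only on the input and the fixed 
coefficients, never on other outputs, so every iteration of the outer loop 
could be handed to a different SIMD lane or GPU thread with no 
communication between them.

/* Direct-form FIR:  y[n] = sum_k h[k] * x[n-k]   (illustrative sketch) */
#include <stddef.h>

void fir(const double *x, size_t nx,   /* input samples                 */
         const double *h, size_t nh,   /* filter coefficients           */
         double *y)                    /* output, nx samples            */
{
    for (size_t n = 0; n < nx; n++) {  /* each n is independent         */
        double acc = 0.0;
        for (size_t k = 0; k < nh && k <= n; k++)
            acc += h[k] * x[n - k];
        y[n] = acc;
    }
}

A vector or GPU version is the same arithmetic with the outer loop replaced 
by a lane or thread index; an FFT-based version trades that simplicity for 
fewer operations when the filter is long.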

So, parallel processing in one form or another is the likely means, but 
not the end.  The end (IMO) is a lot more processing power, not simply 
to do more of what we can already do, but do NEW things that until now 
have been prohibitively demanding computationally (e.g. full-bandwidth 
room modelling in real time - high frequencies demand many more nodes, 
so tend to be avoided; the published mesh-based room models stop around 
4 kHz, or even lower).
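
A rough back-of-envelope figure of my own, to show why that 4 kHz ceiling 
is where it is: in a 3D mesh the node spacing has to resolve the shortest 
wavelength, so the node count grows as the cube of the top frequency and 
the update rate grows linearly with it, giving roughly

\[ \mathrm{cost} \;\propto\; f_{\max}^{3} \times f_{\max} \;=\; f_{\max}^{4} \]

so going from a 4 kHz model to full 20 kHz bandwidth is on the order of 
(20/4)^4, i.e. about 600 times more computation.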

That's how I am looking at things, anyway. The powers that be at 
Berkeley may well look at things very differently, or at different 
things altogether.

Richard Dobson


Date: 2008-12-04 11:03
From: Victor Lazzarini
Subject: [Csnd] Re: Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
I think the issue here is not so much having many parallel processors,
but fast communication between them. You see, things like mesh models
are parallelisable, but they are also fine-grained, which makes them
perform badly in a lot of systems.

Also, just to give you an idea of what I meant about the uselessness of
the ICMC panel, one of its members claimed that the parallel issue had
been solved many years ago and then proceeded to show us a pipeline.
It simply did not occur to him that in realtime audio, one of the most typical
applications, this would give us a large latency between input and output,
no matter how fast the computation is.
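
To put a number on that, a rough worked example of my own: a pipeline of S 
stages, each handing on blocks of B samples, adds roughly

\[ \mathrm{latency} \;\approx\; \frac{S\,B}{f_s} \]

of delay between input and output, regardless of how fast each stage runs; 
e.g. 8 stages on 1024-sample blocks at 44.1 kHz is already about 186 ms, 
hopeless for live performance.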

Victor

At 09:51 04/12/2008, you wrote:
>So, parallel processing in one form or another is the likely means, 
>but not the end.  The end (IMO) is a lot more processing power, not 
>simply to do more of what we can already do, but do NEW things that 
>until now have been prohibitively demanding computationally (e.g. 
>full-bandwidth room modelling in real time - high frequencies demand 
>many more nodes, so tend to be avoided; the published mesh-based 
>room models stop around 4 kHz, or even lower).

Victor Lazzarini
Music Technology Laboratory
Music Department
National University of Ireland, Maynooth 


Date: 2008-12-04 20:16
From: "Steven Yi"
Subject: [Csnd] Re: Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Attachments: None

Date: 2008-12-04 20:50
From: "Steven Yi"
Subject: [Csnd] Re: Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Attachments: None

Date: 2008-12-04 21:00
From: Richard Dobson
Subject: [Csnd] Re:: BArCMuT @ CNMAT - Dec 4
Steven Yi wrote:
> Hi Richard,
> 
> I think again the key term is realtime, as we could certainly model a
> room now, just that we would be waiting a long time for the results!
> I guess for me, my concern is whether a sound is capable of being
> produced, and whether I have to wait for it or not is somewhat
> secondary.  For example, I could use the Sliding Phase Vocoder now, though
> I might need to wait for it, but ultimately I would get that sound.  I
> guess that is how I interpreted your question about new sounds and new
> processes.
> 

Yes, that's it, pretty much. I remember the good old days of CDP on the 
Atari ST (8MHz CPU, software f/p), where processing one second of sound 
through pvoc took an hour. The funny thing was, one did it a lot despite 
the wait, simply because there were results impossible to get any other 
way. Things are somewhat different today - I get the impression that if 
that second of source took 10 seconds to process, many users would 
grumble and look for a faster solution. The idea of a 3600-fold wait 
would get very short shrift these days! Though of course the 
astrophysicists and molecular modellers don't seem too bothered, with 
"latency" measured in weeks or months.

My approach is simply: the processing power ~will~ (we are assured) be 
there by 2020 (Andy Moorer's prediction in JAES, May 2000), so we have 
that much time to discover all the processes that are hardly even known 
about, they are so costly. He also predicted 700 channels of audio (or 
was that 7000?) and intelligent agents to deal with them. Well, with 
Wavefield Synthesis very much the flavour of the decade, we may get to 
700 well in advance of 2020, not so sure about the intelligent agents 
though - we will need to decide what we want to do with them before we 
can train an agent to do it.

With this subject we are in effect revisiting the old hardware paradigm. 
Not so very long ago now, the innovative step was to build a dsp 
accelerator directly into a personal workstation - e.g. the Atari Falcon 
and the NeXT machine, both with a 56000 chip inside them and an 
external port that connected directly to the dsp to talk to codecs, 
other dsps, etc.

Now, it is the dsp chip itself that needs the accelerator. ADI have 
recently taken a step on this path by adding accelerators to their 
latest Sharc device (21469), specifically to accelerate FFT and FIR 
computations, together with a somewhat modest SIMD facility with two 
parallel arithmetic Processing Elements. So, it is still measured in 
MFlops, albeit quite a lot of them, whereas we could do with dsp chips 
measured in GFlops.

Some people may remember "Extended Csound" which ran on (typically) a 
pair of Sharc chips, with direct hardware connection  to audio and MIDI 
ports. This was before the days of the streaming pvoc opcodes; I posit 
some future chip that will run the Sliding Pvoc in real time (which 
offers ~lower~ latency compared to normal block-based pvoc) - that will 
definitely need many Gflops, the more so since we clearly need 
double-precision floating point. An audio counterpart to the GPU, 
therefore, and something that can equally form the basis of the next 
all-singing Korg (or whoever) workstation. It will need to have low 
power consumption, so that no fan is needed.
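
As a back-of-envelope figure (my own numbers, not from the SPV papers): the 
sliding analysis alone needs roughly one complex multiply and a couple of 
additions per bin per sample, say 8 flops, so

\[ 8 \times 1024~\text{bins} \times 44100~\text{samples/s} \;\approx\; 0.36~\text{GFLOPS per channel} \]

before resynthesis (which roughly doubles it), any per-bin processing, 
higher sample rates, larger windows or extra channels - all of it in double 
precision. Multi-GFLOP dsp hardware is not a luxury here.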

This is the other side of massively-parallel computation, the more 
publicised one being the multi-core or cluster approach as driven by 
Intel and co, where all the issues raised by Victor will surely rear up 
and confront us. Clearly we will continue to desire workstations with 
such multi-core capacity built in (and hope to have software that drives 
it efficiently), but I suspect the real revolution for audio will be the 
TFlop-class dsp+accelerator  processors that will be being announced, 
well, some 10 years from now.

In the meantime, as you say, even if it takes a long time now, we 
already have the Sliding Pvoc to try out, so that when that chip does 
appear, we already have stuff to run on it, and most importantly, know 
~why~ we will want to!


Richard Dobson




Date: 2008-12-04 21:23
From: Richard Dobson
Subject: [Csnd] Re: : BArCMuT @ CNMAT - Dec 4
Indeed - the Tesla C1060 processor (now actually available, at a mere 
$1700) is on my Christmas list to Santa (together with a PC with PCI 
Express to put it in).

They show, as one example project on their applications page, a FIR filter 
implementation; as of the last time I looked, that was the only 
audio example. Ironically, a mere FFT would amount to a somewhat 
unambitious task for a GPU.

The next version of OS X ("Snow Leopard") will include a new GPU 
programming language "OpenCL" as standard, no details available but 
probably very similar to CUDA, but presumably for use with both Nvidia 
and ATI graphics chips. Apple have quietly snuck in a new 
Nvidia-designed graphic chip into to most recent MacBook Pros. So I 
think it is highly likely that a lot of GPU-based audio code will emerge 
in the next year or so. If I get hold of the hardware, some of it might 
even come from me.


Richard Dobson




Steven Yi wrote:
> BTW: I just came across this library, CUDA, for using GPUs:
> 
> http://www.nvidia.com/object/cuda_get.html?CMP=KNC-CUD-K-GOOG&gclid=CKiyuPjjp5cCFQhJagodaAMq-w
> 
> this page:
> 
> http://www.nvidia.com/object/cuda_learn_products.html
> 
> mentions "audio" so maybe worth looking it.  It seems Nvidia-specific,
> but seems like enough products to make it beyond a niche hardware
> platform.
> 
> steven
> 


Date: 2008-12-05 03:45
From: "Steven Yi"
Subject: [Csnd] Re: Re: Re: Re: Re: Re: BArCMuT @ CNMAT - Dec 4
Attachments: None