
[Csnd] [OT] ICMC 2010

Date: 2010-06-14 09:35
From: john ffitch
Subject: [Csnd] [OT] ICMC 2010
Mainly addressed to those of you that attended ICMC this year; was
there anything particularly interesting, dramatic or unexpected?  And
how was the music?
==John ffitch


Send bugs reports to the Sourceforge bug tracker
            https://sourceforge.net/tracker/?group_id=81968&atid=564599
Discussions of bugs and features can be posted here
To unsubscribe, send email sympa@lists.bath.ac.uk with body "unsubscribe csound"

Date: 2010-06-14 13:54
From: Michael Gogins
Subject: [Csnd] Re: [OT] ICMC 2010
I did not attend every paper session or every concert, but I did go to
most of them. Nothing truly unexpected to me, unfortunately.

I have previously reported here my conversation with Miller Puckette
about the difficulty of going further than the multi-processing poly~
object to implement concurrency in Pure Data. I believe Max/MSP does
the same as Pure Data except that poly~ in Max seems to be
multi-threading, not multi-processing.

There was a paper on concurrency in audio software, which I did not
attend, but read, "Advances In The Parallelization Of Music And Audio
Applications" by Eric Battenberg, Adrian Freed, and David Wessel.
Interestingly enough, this paper does not so much consider concurrency
in signal flow graphs, but rather DSP algorithms that lend themselves
to parallelization such as partitioned convolution and non-negative
matrix factorization, and running other parts of music software in
parallel with audio engines. Most interesting of all is the call to
consider GPU processors for audio purposes; huge speedups may be
obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
these CUDA speedups mostly occur in matrix products and
factorizations, which do not figure much in most Csound orchestras.
The paper also discusses I/O bottlenecks that occur when multiple
cores begin to be used for audio processing.
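For anyone who hasn't seen it, the trick in partitioned convolution is to split a long impulse response into short blocks, pre-transform each block, and overlap-add the per-block products, so a long reverb tail costs only one block of latency. A toy NumPy sketch of the general idea (my own illustration, not code from the paper; the block size B is arbitrary):

```python
import numpy as np

def partitioned_convolve(x, h, B=64):
    """Uniformly partitioned FFT convolution (overlap-add), block size B.

    Matches np.convolve(x, h) but processes the input block by block,
    so a long impulse response h adds only one block of latency.
    """
    Nx, Nh = len(x), len(h)
    P = -(-Nh // B)                                  # number of IR partitions
    # Pre-transform each length-B partition of h, zero-padded to 2B.
    H = np.stack([np.fft.rfft(h[p*B:(p+1)*B], 2*B) for p in range(P)])
    nblk = -(-Nx // B) + P                           # extra blocks flush the tail
    xp = np.pad(x, (0, nblk*B - Nx))
    y = np.zeros(nblk*B + B)
    X = np.zeros((P, B + 1), dtype=complex)          # frequency-domain delay line
    for n in range(nblk):
        X = np.roll(X, 1, axis=0)                    # shift older block spectra
        X[0] = np.fft.rfft(xp[n*B:(n+1)*B], 2*B)     # newest input block
        y[n*B:n*B + 2*B] += np.fft.irfft((X * H).sum(axis=0), 2*B)
    return y[:Nx + Nh - 1]
```

The per-block work is a pile of independent complex multiply-accumulates, which is presumably why the authors see it as GPU-friendly.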

I enjoyed speaking with Lydia Ayers about her poster with Andrew
Horner, "Synthesizing The Dynamic Spectra Of The Didgeridoo" in
Csound. This is pretty high quality stuff. There was also an
interesting poster "Hrtfearly & Hrtfreverb: Flexible Binaural
Reverberation" by Brian Carty and Victor Lazzarini. Then "Introducing
Belle, Bonne, Sage" by William Burnson is of interest because it may
soon make it possible to embed high-quality music notation display in
Csound front ends.

A high point of the papers for me was "Petrol: Reactive Pattern
Language For Improvised Music" by Alex McLean and Geraint Wiggins.
This presented a system for live coding in the context of algorithmic
composition in a functional language. Another interesting paper was
Christopher Bailey's "A Database System For Organising Musique
Concrete." It would be interesting to try to do something like this
using an open source database system integrated with the Csound API
and CsoundAC for the algorithmic, or just plain scripted, composition
of musique concrete and electroacoustic music.

Now as to music. Again on a personal note, I like and enjoy all the
real-time and interactive pieces, but I truly would love to see more
high-quality fixed media works, and especially more algorithmic
composition. I strongly feel that the real power of the computer is
being tossed out the window in order to "put on a show" and be seen
doing something on stage.

On a more positive note, I felt that the stylistic range of the
concerts was somewhat broader than in some past ICMCs that I have
attended. There were outbreaks of outright consonance and even hints
or simulations of tonality. There were several pieces that I enjoyed
hearing, would enjoy hearing again, and would probably buy if I saw
them in a music store. Most of the pieces that I did like seem to have
ended up on the ICMC CD.

Does anyone who is more interested in the real-time, interactive stuff
have anything to say about this ICMC?

Regards,
Mike

On Mon, Jun 14, 2010 at 4:35 AM, john ffitch  wrote:
> Mainly addressed to those of you that attended ICMC this year; was
> there anything particularly interesting, dramatic or unexpected?  And
> how was the music?
> ==John ffitch



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com




Date: 2010-06-14 15:42
From: Richard Dobson
Subject: [Csnd] Re: Re: [OT] ICMC 2010
On 14/06/2010 13:54, Michael Gogins wrote:
...
> There was a paper on concurrency in audio software, which I did not
> attend, but read, "Advances In The Parallelization Of Music And Audio
> Applications" by Eric Battenberg, Adrian Freed, and David Wessel.


Thanks for mentioning this - I have just found it is available online as 
a pdf download.


> Interestingly enough, this paper does not so much consider concurrency
> in signal flow graphs, but rather DSP algorithms that lend themselves
> to parallelization such as partitioned convolution and non-negative
> matrix factorization, and running other parts of music software in
> parallel with audio engines. Most interest of all is the call for
> consideration of GPU processors for audio purposes, huge speedups may
> be obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
> these CUDA speedups mostly occur in matrix products and
> factorizations, which do not occur so much in most Csound orchestras.
> The paper also discusses I/O bottlenecks that occur when multiple
> cores begin to be used for audio processing.
>

It has an obvious application to low-latency FFT and general 
transform-based processing, not least the sliding phase vocoder which 
John and I have worked on and is in Csound (for the special case where 
ksmps = 1 or very low). It is, I must say, a tad disappointing that our 
papers on that, and on massively parallel acceleration generally (at 
ICMC 2007 especially), seem not to have made the cut as a reference for 
their paper.
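For those who haven't run into the sliding transforms: instead of recomputing an N-point FFT every sample, each incoming sample updates all N bins recursively. A bare-bones sliding DFT in Python (my own sketch of the underlying recurrence, not the Csound sliding phase vocoder code):

```python
import numpy as np

def sliding_dft(x, N=16):
    """Sliding DFT: update all N bins per sample in O(N) operations.

    After sample n, X equals np.fft.fft of the last N input samples.
    """
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    X = np.zeros(N, dtype=complex)      # running spectrum
    buf = np.zeros(N)                   # circular buffer of the last N inputs
    frames = []
    for n, sample in enumerate(x):
        oldest = buf[n % N]             # x[n - N] (zero before the start)
        buf[n % N] = sample
        X = twiddle * (X + sample - oldest)
        frames.append(X.copy())
    return np.array(frames)
```

Every bin updates independently, which is what makes the per-sample (ksmps = 1) case such a natural fit for massively parallel hardware.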

Richard Dobson




Date: 2010-06-14 15:49
From: Victor Lazzarini
Subject: [Csnd] Re: Re: Re: [OT] ICMC 2010
John Lato here in Maynooth is doing some interesting work with CUDA
and sound synthesis of complicated physical models. He might have
something to say about this.

Victor

On 14 Jun 2010, at 15:42, Richard Dobson wrote:

> On 14/06/2010 13:54, Michael Gogins wrote:
> ...
>> There was a paper on concurrency in audio software, which I did not
>> attend, but read, "Advances In The Parallelization Of Music And Audio
>> Applications" by Eric Battenberg, Adrian Freed, and David Wessel.
>
>
> Thanks for mentioning this - I have just found it is available  
> online as a pdf download.
>
>
>> Interestingly enough, this paper does not so much consider  
>> concurrency
>> in signal flow graphs, but rather DSP algorithms that lend themselves
>> to parallelization such as partitioned convolution and non-negative
>> matrix factorization, and running other parts of music software in
>> parallel with audio engines. Most interest of all is the call for
>> consideration of GPU processors for audio purposes, huge speedups may
>> be obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
>> these CUDA speedups mostly occur in matrix products and
>> factorizations, which do not occur so much in most Csound orchestras.
>> The paper also discusses I/O bottlenecks that occur when multiple
>> cores begin to be used for audio processing.
>>
>
> It has an obvious application to low-latency FFT and general  
> transform-based processing, not least the sliding phase vocoder  
> which John and I have worked on and is in Csound (for the special  
> case where ksmps = 1 or very low). It is, I must say, a tad  
> disappointing that our papers on that, and on massively parallel  
> acceleration generally (at ICMC 2007 especially), seem not to have  
> made the cut as a reference for their paper.
>
> Richard Dobson
>




Date: 2010-06-14 15:50
From: Michael Gogins
Subject: [Csnd] Re: Re: Re: [OT] ICMC 2010
Of course you are correct about transform-based processing. And I
think the sliding phase vocoder will have many uses.

If people with Csound were suddenly on stage with live orchestras
obviously richer than Max/MSP, then other people would very quickly
sit up and take notice.

It is a genuine problem for us that we have a system that is clearly
superior for some uses to any other software synthesizer, but what
gets taught is Max, and therefore what gets used is Max.

I don't foresee a total solution to this, since I don't see that
Csound will ever be perceived by producers, film composers, and laptop
musicians as easier to use than Max (or Reaktor or Kyma), but being
faster than anything else would certainly help.

Regards,
Mike

On Mon, Jun 14, 2010 at 10:42 AM, Richard Dobson
 wrote:
> On 14/06/2010 13:54, Michael Gogins wrote:
> ...
>>
>> There was a paper on concurrency in audio software, which I did not
>> attend, but read, "Advances In The Parallelization Of Music And Audio
>> Applications" by Eric Battenberg, Adrian Freed, and David Wessel.
>
>
> Thanks for mentioning this - I have just found it is available online as a
> pdf download.
>
>
>> Interestingly enough, this paper does not so much consider concurrency
>> in signal flow graphs, but rather DSP algorithms that lend themselves
>> to parallelization such as partitioned convolution and non-negative
>> matrix factorization, and running other parts of music software in
>> parallel with audio engines. Most interest of all is the call for
>> consideration of GPU processors for audio purposes, huge speedups may
>> be obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
>> these CUDA speedups mostly occur in matrix products and
>> factorizations, which do not occur so much in most Csound orchestras.
>> The paper also discusses I/O bottlenecks that occur when multiple
>> cores begin to be used for audio processing.
>>
>
> It has an obvious application to low-latency FFT and general transform-based
> processing, not least the sliding phase vocoder which John and I have worked
> on and is in Csound (for the special case where ksmps = 1 or very low). It
> is, I must say, a tad disappointing that our papers on that, and on
> massively parallel acceleration generally (at ICMC 2007 especially), seem
> not to have made the cut as a reference for their paper.
>
> Richard Dobson
>



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com




Date: 2010-06-14 15:59
From: Michael Gogins
Subject: [Csnd] Re: Re: Re: [OT] ICMC 2010
I wonder whether some differential-equation models of instruments
could be solved numerically as finite element method matrix equations.
If so, CUDA might give speedups of about 10 times.
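As a concrete (if hypothetical) illustration of what I mean: even a plain explicit finite-difference scheme for the ideal string reduces to a repeated matrix-vector product, exactly the operation CUDA accelerates. The grid size and Courant number below are made-up values for the sketch:

```python
import numpy as np

M = 64                                   # interior grid points (illustrative)
lam = 0.9                                # Courant number c*dt/dx, <= 1 for stability
# Second-difference operator with fixed ends (tridiagonal matrix).
D = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
     + np.diag(np.ones(M - 1), -1))
u = np.sin(np.pi * np.arange(1, M + 1) / (M + 1))   # initial displacement
u_prev = u.copy()                        # start from rest (zero velocity)

# Leapfrog time stepping: each audio sample is one matrix-vector product.
for _ in range(1000):
    u, u_prev = 2 * u - u_prev + lam**2 * (D @ u), u
```

On a GPU one would batch many strings (or a 2D membrane) into one large sparse product; any 10x figure would depend entirely on problem size.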

Regards,
Mike

On Mon, Jun 14, 2010 at 10:50 AM, Michael Gogins
 wrote:
> Of course you are correct about transform-based processing. And I
> think the sliding phase vocoder will have many uses.
>
> If people with Csound were suddenly on stage with live orchestras
> obviously richer than Max/MSP, then other people would very quickly
> sit up and take notice.
>
> It is a genuine problem for us that we have a system that is clearly
> superior for some uses to any other software synthesizer, but what
> gets taught is Max, and therefore what gets used is Max.
>
> I don't foresee a total solution to this, since I don't see that
> Csound will ever be perceived by producers, film composers, and laptop
> musicians as easier to use than Max (or Reaktor or Kyma), but being
> faster than anything else would certainly help.
>
> Regards,
> Mike
>
> On Mon, Jun 14, 2010 at 10:42 AM, Richard Dobson
>  wrote:
>> On 14/06/2010 13:54, Michael Gogins wrote:
>> ...
>>>
>>> There was a paper on concurrency in audio software, which I did not
>>> attend, but read, "Advances In The Parallelization Of Music And Audio
>>> Applications" by Eric Battenberg, Adrian Freed, and David Wessel.
>>
>>
>> Thanks for mentioning this - I have just found it is available online as a
>> pdf download.
>>
>>
>>> Interestingly enough, this paper does not so much consider concurrency
>>> in signal flow graphs, but rather DSP algorithms that lend themselves
>>> to parallelization such as partitioned convolution and non-negative
>>> matrix factorization, and running other parts of music software in
>>> parallel with audio engines. Most interest of all is the call for
>>> consideration of GPU processors for audio purposes, huge speedups may
>>> be obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
>>> these CUDA speedups mostly occur in matrix products and
>>> factorizations, which do not occur so much in most Csound orchestras.
>>> The paper also discusses I/O bottlenecks that occur when multiple
>>> cores begin to be used for audio processing.
>>>
>>
>> It has an obvious application to low-latency FFT and general transform-based
>> processing, not least the sliding phase vocoder which John and I have worked
>> on and is in Csound (for the special case where ksmps = 1 or very low). It
>> is, I must say, a tad disappointing that our papers on that, and on
>> massively parallel acceleration generally (at ICMC 2007 especially), seem
>> not to have made the cut as a reference for their paper.
>>
>> Richard Dobson
>>
>>
>>
>
>
>
> --
> Michael Gogins
> Irreducible Productions
> http://www.michael-gogins.com
> Michael dot Gogins at gmail dot com
>



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com




Date: 2010-06-14 16:07
From: Richard Dobson
Subject: [Csnd] Re: Re: Re: Re: [OT] ICMC 2010
All the latest Macs with Nvidia GPUs can run OpenCL, but sadly, as Apple
themselves have acknowledged, OpenCL is not in any way optimized for
real-time audio - deadlines are basically "whenever the GPU feels like
it". Were it not for that, Mac-based Csound devs could have at least
made a start in accelerating via the GPU.

Richard Dobson

On 14/06/2010 15:50, Michael Gogins wrote:
> Of course you are correct about transform-based processing. And I
> think the sliding phase vocoder will have many uses.
>
> If people with Csound were suddenly on stage with live orchestras
> obviously richer than Max/MSP, then other people would very quickly
> sit up and take notice.
>
> It is a genuine problem for us that we have a system that is clearly
> superior for some uses to any other software synthesizer, but what
> gets taught is Max, and therefore what gets used is Max.
>
> I don't foresee a total solution to this, since I don't see that
> Csound will ever be perceived by producers, film composers, and laptop
> musicians as easier to use than Max (or Reaktor or Kyma), but being
> faster than anything else would certainly help.
>




Date: 2010-06-14 16:10
From: Peiman Khosravi
Subject: [Csnd] Re: Re: Re: Re: [OT] ICMC 2010
I keep telling people that Csound is easier than Max for DSP, and only
recently did I manage to convince a composer! In a sense, in Csound all
you need is an opcode, which acts like a powerful synthesiser, whereas
in Max you need to create your own 'opcodes' to do anything
interesting (unless you're using externals).

What is more difficult in Csound is the control of instruments. So I
would say that Csound is certainly easier than MSP, but as far as
real-time control is concerned it is more hassle than Max. This is what
makes the Csound/Max combination so perfect: for GUI control we have
the flexibility of Max, and for DSP the flexibility of Csound. I have a
feeling that this combination is becoming more and more popular, and
it won't be long before it works its way into university courses as an
alternative to Max/MSP. Having a faster engine would certainly open
the way for this to happen more quickly! (Getting IRCAM to gear its
course around this may help too!)

I also think that one of the reasons Csound is less recognised by
sound designers is the historical emphasis on score composition,
which is somewhat out of fashion these days. People don't realise that
they don't actually have to write 'notes', and that the score can
itself become a powerful tool for creating complex textures (e.g. with
cmask).
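To make that concrete: a few lines of script can emit hundreds of overlapping grain events, with no hand-written notes at all. A toy generator (instr 1 and its p-fields are hypothetical; any grain instrument would do):

```python
import random

random.seed(42)                     # reproducible texture
lines = []                          # Csound score lines
t = 0.0
while t < 10.0:                     # ten seconds of cloud
    dur = random.uniform(0.05, 0.3)
    amp = random.uniform(0.05, 0.2)
    freq = random.uniform(200.0, 2000.0)
    # p1=instr, p2=start, p3=dur, p4=amp, p5=freq (hypothetical instrument)
    lines.append(f"i1 {t:.3f} {dur:.3f} {amp:.3f} {freq:.1f}")
    t += random.uniform(0.01, 0.1)  # dense, irregular onsets

print("\n".join(lines[:3]))         # first few events
```

Swap the uniform distributions for tendency masks and you are most of the way to what cmask does.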

Best,

Peiman


On 14 Jun 2010, at 15:50, Michael Gogins wrote:

> Of course you are correct about transform-based processing. And I
> think the sliding phase vocoder will have many uses.
>
> If people with Csound were suddenly on stage with live orchestras
> obviously richer than Max/MSP, then other people would very quickly
> sit up and take notice.
>
> It is a genuine problem for us that we have a system that is clearly
> superior for some uses to any other software synthesizer, but what
> gets taught is Max, and therefore what gets used is Max.
>
> I don't foresee a total solution to this, since I don't see that
> Csound will ever be perceived by producers, film composers, and laptop
> musicians as easier to use than Max (or Reaktor or Kyma), but being
> faster than anything else would certainly help.
>
> Regards,
> Mike
>
> On Mon, Jun 14, 2010 at 10:42 AM, Richard Dobson
>  wrote:
>> On 14/06/2010 13:54, Michael Gogins wrote:
>> ...
>>>
>>> There was a paper on concurrency in audio software, which I did not
>>> attend, but read, "Advances In The Parallelization Of Music And  
>>> Audio
>>> Applications" by Eric Battenberg, Adrian Freed, and David Wessel.
>>
>>
>> Thanks for mentioning this - I have just found it is available  
>> online as a
>> pdf download.
>>
>>
>>> Interestingly enough, this paper does not so much consider  
>>> concurrency
>>> in signal flow graphs, but rather DSP algorithms that lend  
>>> themselves
>>> to parallelization such as partitioned convolution and non-negative
>>> matrix factorization, and running other parts of music software in
>>> parallel with audio engines. Most interest of all is the call for
>>> consideration of GPU processors for audio purposes, huge speedups  
>>> may
>>> be obtainable in this way, e.g. using the CUDA toolkit.  
>>> Unfortunately
>>> these CUDA speedups mostly occur in matrix products and
>>> factorizations, which do not occur so much in most Csound  
>>> orchestras.
>>> The paper also discusses I/O bottlenecks that occur when multiple
>>> cores begin to be used for audio processing.
>>>
>>
>> It has an obvious application to low-latency FFT and general  
>> transform-based
>> processing, not least the sliding phase vocoder which John and I  
>> have worked
>> on and is in Csound (for the special case where ksmps = 1 or very  
>> low). It
>> is, I must say, a tad disappointing that our papers on that, and on
>> massively parallel acceleration generally (at ICMC 2007  
>> especially), seem
>> not to have made the cut as a reference for their paper.
>>
>> Richard Dobson
>>
>>
>>
>
>
>
> -- 
> Michael Gogins
> Irreducible Productions
> http://www.michael-gogins.com
> Michael dot Gogins at gmail dot com
>
>




Date: 2010-06-14 17:05
From: Anthony Palomba
Subject: [Csnd] Re: Re: Re: Re: Re: [OT] ICMC 2010
I have to agree with Peiman: the Max/Csound combination is incredibly
powerful. It is actually only recently that this ability came to PC users.

Having been exposed to csound as an undergrad, I was familiar with
its power to let me experiment with DSP at a very low level. But trying
to perform with it took a lot of wrestling and left me exasperated.

But running it in Max makes all the difference in the world. Discovering this
combination has created a Csound rebirth for me, because I can easily
take my Csound instruments, slap a UI on them, connect any interface I
want, and play with it.

It would be great if Csound supported CUDA acceleration, although I still
don't know how you would get around the latency generated in sending
data from user mode to the GPU, then back up, then down to the audio
interface. Kinda makes real-time performance impractical.

But it would be great for calculating super accurate complex physical models.
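The budget is easy to put numbers on: at typical settings the engine has only a millisecond or two to produce each block, and a host-to-GPU round trip has a fixed overhead no matter how little audio you send. (The transfer figure below is a hypothetical order of magnitude, not a measurement.)

```python
sr = 44100                  # sample rate (Hz)
block = 64                  # samples per audio block
deadline_ms = 1000.0 * block / sr     # time available to compute one block
transfer_ms = 0.5           # assumed host->GPU->host fixed overhead (hypothetical)

# Fraction of the real-time budget spent before any DSP happens.
overhead = transfer_ms / deadline_ms
print(f"deadline {deadline_ms:.2f} ms, transfer overhead {overhead:.0%}")
```

Larger blocks amortize the transfer better, which is one reason GPU audio favors offline or high-latency work.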




Anthony





On Mon, Jun 14, 2010 at 10:10 AM, Peiman Khosravi <peimankhosravi@gmail.com> wrote:
I keep telling people that csound is easier than max for DSP and only recently I managed to convince a composer! In a sense in csound all you need is an opcode, which acts like a powerfull synthesiser, whereas in max you need to create your own 'opcodes' to do anything interesting (unless you're using externals).

What is more difficult in csound is the control of instruments. So I would say that csound is certainly easier than msp, but as far as real-time control is concerned more hassle than max. This is what makes the csound/max combination so perfect. For GUI control we have the flexibility of max and for DSP the flexibility of csound. I have a feeling that this combination is becoming more and more popular, and it won't be long before it works its way into university courses as an alternative to max/msp. Having a faster engine would certainly open the way for this to happen more quickly! (getting IRCAM to gear its course around this may help too!)

I also think that one of the reasons csound is less recognised by sound designers is the historical emphasis on the score composition, which is somehow out of fashion these days. People don't realise that they don't actually have to write 'notes' and that the score can itself become a powerful tool for creating complex textures (e.g. with cmask).

Best,

Peiman



On 14 Jun 2010, at 15:50, Michael Gogins wrote:

Of course you are correct about transform-based processing. And I
think the sliding phase vocoder will have many uses.

If people with Csound were suddenly on stage with live orchestras
obviously richer than Max/MSP, then other people would very quickly
sit up and take notice.

It is a genuine problem for us that we have a system that is clearly
superior for some uses to any other software synthesizer, but what
gets taught is Max, and therefore what gets used is Max.

I don't foresee a total solution to this, since I don't see that
Csound will ever be perceived by producers, film composers, and laptop
musicians as easier to use than Max (or Reaktor or Kyma), but being
faster than anything else would certainly help.

Regards,
Mike

On Mon, Jun 14, 2010 at 10:42 AM, Richard Dobson
<richarddobson@blueyonder.co.uk> wrote:
On 14/06/2010 13:54, Michael Gogins wrote:
...

There was a paper on concurrency in audio software, which I did not
attend, but read, "Advances In The Parallelization Of Music And Audio
Applications" by Eric Battenberg, Adrian Freed, and David Wessel.


Thanks for mentioning this - I have just found it is available online as a
pdf download.


Interestingly enough, this paper does not so much consider concurrency
in signal flow graphs, but rather DSP algorithms that lend themselves
to parallelization such as partitioned convolution and non-negative
matrix factorization, and running other parts of music software in
parallel with audio engines. Most interest of all is the call for
consideration of GPU processors for audio purposes, huge speedups may
be obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
these CUDA speedups mostly occur in matrix products and
factorizations, which do not occur so much in most Csound orchestras.
The paper also discusses I/O bottlenecks that occur when multiple
cores begin to be used for audio processing.


It has an obvious application to low-latency FFT and general transform-based
processing, not least the sliding phase vocoder which John and I have worked
on and is in Csound (for the special case where ksmps = 1 or very low). It
is, I must say, a tad disappointing that our papers on that, and on
massively parallel acceleration generally (at ICMC 2007 especially), seem
not to have made the cut as a reference for their paper.

Richard Dobson








--
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com









Date: 2010-06-14 19:49
From: Anthony Palomba
Subject: [Csnd] Re: Re: Re: Re: Re: [OT] ICMC 2010
Michael, thanks again for the ICMC update.

"Petrol: Reactive Pattern Language For Improvised Music"
by Alex McLean and Geraint Wiggins. This presented a
system for live coding in the context of algorithmic
composition in a functional language.

This sounds fascinating, is there a place I might be able to read
this paper?



-ap



On Mon, Jun 14, 2010 at 11:05 AM, Anthony Palomba <apalomba@austin.rr.com> wrote:
I have to agree with Peiman, Max/Csound combination is incredibly
powerful. It is actually only recently that this ability came to PC users.

Having been exposed to csound as an undergrad, I was familiar with
its power to let me experiment with DSP at a very low level. But trying
to perform with it took a lot of wrestling and left me exasperated.

But running it in Max makes all the difference in the world. Discovering this
combination has created a csound rebirth for me becuase I can easily
take my csound instruments, slap a UI on them, connect any interface I
want and play with it.

It would be great if csound supported CUDA acceleration. Although I still
don't know how you would get around the latency that would be generated
in sending data from user mode to the GPU, then back up, then down to the audio
interface. Kinda makes real time performance impractical.

But it would be great for calculating super accurate complex physical models.




Anthony






On Mon, Jun 14, 2010 at 10:10 AM, Peiman Khosravi <peimankhosravi@gmail.com> wrote:
I keep telling people that csound is easier than max for DSP and only recently I managed to convince a composer! In a sense in csound all you need is an opcode, which acts like a powerfull synthesiser, whereas in max you need to create your own 'opcodes' to do anything interesting (unless you're using externals).

What is more difficult in csound is the control of instruments. So I would say that csound is certainly easier than msp, but as far as real-time control is concerned more hassle than max. This is what makes the csound/max combination so perfect. For GUI control we have the flexibility of max and for DSP the flexibility of csound. I have a feeling that this combination is becoming more and more popular, and it won't be long before it works its way into university courses as an alternative to max/msp. Having a faster engine would certainly open the way for this to happen more quickly! (getting IRCAM to gear its course around this may help too!)

I also think that one of the reasons csound is less recognised by sound designers is the historical emphasis on the score composition, which is somehow out of fashion these days. People don't realise that they don't actually have to write 'notes' and that the score can itself become a powerful tool for creating complex textures (e.g. with cmask).

Best,

Peiman



On 14 Jun 2010, at 15:50, Michael Gogins wrote:

Of course you are correct about transform-based processing. And I
think the sliding phase vocoder will have many uses.

If people with Csound were suddenly on stage with live orchestras
obviously richer than Max/MSP, then other people would very quickly
sit up and take notice.

It is a genuine problem for us that we have a system that is clearly
superior for some uses to any other software synthesizer, but what
gets taught is Max, and therefore what gets used is Max.

I don't foresee a total solution to this, since I don't see that
Csound will ever be perceived by producers, film composers, and laptop
musicians as easier to use than Max (or Reaktor or Kyma), but being
faster than anything else would certainly help.

Regards,
Mike

On Mon, Jun 14, 2010 at 10:42 AM, Richard Dobson
<richarddobson@blueyonder.co.uk> wrote:
On 14/06/2010 13:54, Michael Gogins wrote:
...

There was a paper on concurrency in audio software, which I did not
attend, but read, "Advances In The Parallelization Of Music And Audio
Applications" by Eric Battenberg, Adrian Freed, and David Wessel.


Thanks for mentioning this - I have just found it is available online as a
pdf download.


Interestingly enough, this paper does not so much consider concurrency
in signal flow graphs, but rather DSP algorithms that lend themselves
to parallelization such as partitioned convolution and non-negative
matrix factorization, and running other parts of music software in
parallel with audio engines. Most interesting of all is the call for
consideration of GPU processors for audio purposes; huge speedups may
be obtainable in this way, e.g. using the CUDA toolkit. Unfortunately
these CUDA speedups mostly occur in matrix products and
factorizations, which do not occur so much in most Csound orchestras.
The paper also discusses I/O bottlenecks that occur when multiple
cores begin to be used for audio processing.
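For context on why the partitioned convolution mentioned above parallelizes well: the input is cut into fixed-size blocks, each block is convolved in the frequency domain independently, and the overlapping tails are summed. A rough NumPy sketch of the overlap-add form (block size and signals are arbitrary choices; a real-time partitioned convolver would also partition the impulse response itself):

```python
import numpy as np

def overlap_add_convolve(x, h, block=256):
    """Block-wise FFT convolution: every block is independent, so blocks
    can in principle be farmed out to separate cores or GPU streams."""
    n_fft = 1
    while n_fft < block + len(h) - 1:   # next power of two avoiding circular wrap-around
        n_fft *= 2
    H = np.fft.rfft(h, n_fft)           # transform the impulse response once
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        out = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
        y[start:start + len(seg) + len(h) - 1] += out[:len(seg) + len(h) - 1]
    return y
```

The result matches direct convolution; the win is that each block's FFT, multiply, and inverse FFT touch no shared state except the final overlap-add.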


It has an obvious application to low-latency FFT and general transform-based
processing, not least the sliding phase vocoder which John and I have worked
on and is in Csound (for the special case where ksmps = 1 or very low). It
is, I must say, a tad disappointing that our papers on that, and on
massively parallel acceleration generally (at ICMC 2007 especially), seem
not to have made the cut as a reference for their paper.
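For readers who have not met the "sliding" idea Richard refers to: rather than hopping by a full frame, a sliding DFT updates every bin once per input sample from the previous spectrum, via the recurrence X_k[n] = e^(j2*pi*k/N) * (X_k[n-1] + x[n] - x[n-N]). A minimal sketch of that recurrence (an illustration of the general technique, not the actual code of Csound's sliding phase vocoder):

```python
import numpy as np

def sliding_dft(x, N):
    """Per-sample DFT update: O(N) work per sample instead of an
    O(N log N) FFT, and every bin is refreshed at every sample."""
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    X = np.fft.fft(x[:N])                     # spectrum of the first full frame
    frames = [X.copy()]
    for n in range(N, len(x)):
        X = twiddle * (X + x[n] - x[n - N])   # slide the window by one sample
        frames.append(X.copy())
    return frames                             # frames[i] equals the DFT of x[i:i+N]
```

Since all N bins update independently from the same three values, the per-sample work is also embarrassingly parallel, which is why it pairs naturally with the massively parallel hardware the paper discusses.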

Richard Dobson



Send bugs reports to the Sourceforge bug tracker
         https://sourceforge.net/tracker/?group_id=81968&atid=564599
Discussions of bugs and features can be posted here
To unsubscribe, send email sympa@lists.bath.ac.uk with body "unsubscribe
csound"





--
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com










Date2010-06-14 19:57
FromMichael Gogins
Subject[Csnd] Re: Re: Re: Re: Re: Re: [OT] ICMC 2010
If you are a member of the International Computer Music Association,
you should have access to online archives of ICMC proceedings.

If not, try googling on the author names.

Hope this helps,
Mike

On Mon, Jun 14, 2010 at 2:49 PM, Anthony Palomba  wrote:
> Michael, thanks again for the ICMC update.
>
> " "Petrol: Reactive Pattern Language For Improvised Music"
> by Alex Mclean and Geraint Wiggins. This presented a
> system for live coding in the context of algorithmic
> composition in a functional language."
>
> This sounds fascinating, is there a place I might be able to read
> this paper?
>
>
>
> -ap
>
>
>
> On Mon, Jun 14, 2010 at 11:05 AM, Anthony Palomba 
> wrote:
>>
>> I have to agree with Peiman, Max/Csound combination is incredibly
>> powerful. It is actually only recently that this ability came to PC users.
>>
>> Having been exposed to csound as an undergrad, I was familiar with
>> its power to let me experiment with DSP at a very low level. But trying
>> to perform with it took a lot of wrestling and left me exasperated.
>>
>> But running it in Max makes all the difference in the world. Discovering
>> this combination has created a csound rebirth for me because I can easily
>> take my csound instruments, slap a UI on them, connect any interface I
>> want, and play with it.
>>
>> It would be great if csound supported CUDA acceleration. Although I still
>> don't know how you would get around the latency that would be generated
>> in sending data from user mode to the GPU, then back up, then down to
>> the audio interface. Kinda makes real time performance impractical.
>>
>> But it would be great for calculating super accurate complex physical
>> models.
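Anthony's latency worry is easy to put in rough numbers. The transfer and kernel times below are assumptions for illustration, not measurements of any real GPU:

```python
# Back-of-envelope budget for one GPU round trip inside one audio buffer.
sr = 44100                     # sample rate, Hz
block = 256                    # samples per audio buffer
buffer_ms = 1000.0 * block / sr

pcie_round_trip_ms = 0.5       # assumed host -> GPU -> host transfer cost
kernel_ms = 0.2                # assumed GPU compute time per buffer

headroom_ms = buffer_ms - (pcie_round_trip_ms + kernel_ms)
print(f"one buffer = {buffer_ms:.2f} ms, headroom after GPU = {headroom_ms:.2f} ms")
```

With these (optimistic) assumed figures a 256-sample buffer still leaves a few milliseconds of headroom; the round trip only becomes fatal as the buffer shrinks or per-buffer transfers multiply, which is why small-buffer real-time use is the hard case while offline physical modelling is a natural fit.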
>>
>>
>>
>>
>> Anthony
>>
>>
>>
>>
>>
>> On Mon, Jun 14, 2010 at 10:10 AM, Peiman Khosravi wrote:
>>> ...



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com




Date2010-06-15 14:01
FromFrançois Roux
Subject[Csnd] Re: Re: Re: Re: Re: Re: [OT] ICMC 2010
Yes, the Max/Csound combination is so important that I now present it
officially as standard practice here in Lyon, one of the two Conservatoires
Supérieurs in France. Students are beginning to use this solution for their
compositions. Before, we used Csound above all, not in real time, together
with Lisp.

Even if they are not aimed at exactly the same things, IRCAM's recent DSP
developments (FTM, Gabor) end up offering composition students much more
complicated solutions, because you have to use Max/MSP with IRCAM's own
cryptic object syntax, where a DSP process is allocated inside a
heterogeneous MSP/IRCAM patch. Csound/Max can instead be seen as two
coherent pieces of software, each with its own practice, so the MSP part is
progressively used less than Csound, reinforced by the fact that, in the
absence of new DSP object development in MSP, the IRCAM libraries
(jimmies, etc.) look very basic today to students concerned with sound
quality and precision.


Anthony Palomba wrote:
> ...