
[Csnd] Beginner questions on realtime csound work

Date: 2009-05-21 07:21
From: Jason Conklin
Subject: [Csnd] Beginner questions on realtime csound work
I have some beginnerish questions for the list, inspired in part by
some recent threads (esp. the ones on the beginner/intermediate
learning curve and this current Haskell/functional programming
thread). I've gotten a sense from these discussions of how much
variety good approaches to algorithmic composition and experimentation
can have, whether using csound, csound in concert with other tools, or
entirely different software. I'm looking for some sense of how to
approach this variety without too much dizziness.

My intentions, although flexible and experimental/open-ended, are
centered around a desire to put together some algorithmically based
performance pieces (compare Risset's "Duet for one pianist" and stuff
along those lines, perhaps). I play the flute, and have for years
wanted to build programs capable of "playing along", according to
various rules, with live flute (or other instrumental) playing. I have
recently, finally dived into csound, which has been great, but I
currently feel like I'm aiming at a point that I can't see: a place
where I can play with those ideas.

This leads to my two "big" questions, one more long-term and
csound/programming-oriented, and the other more immediate, about
system platform and hardware requirements.

First, and mainly, am I even on the right track? From what I can tell,
csound is plenty appropriate for developing such ideas, but I don't
know enough yet to be certain. I have a basic understanding of the
orchestra and score, but familiarity with only a handful of synthesis
opcodes and i/o techniques. I'm a total newb when it comes to
score-processing and other "add-ons" like Python or Blue - although
working on Tobiah's sinewave challenge, entirely in vi, definitely
gave me a sense of that stuff's value! Basically, what do I need to
delve into and work on now in order to work with real-time input?

The second question is much more direct. I hope to put together a
machine in the near future, something capable of serving as a system for this
kind of work. I haven't thought a lot about new-computer parts in a
while, though, and wonder what I'll need to have or know as I go
shopping/building. Any certain minimum specs on processor or
sound-card options? Any tricky system-building considerations? For a
few reasons I'll most likely be running Ubuntu Studio (the new release
with the realtime kernel), but that's not set in stone. The rest is up
in the air at this stage. Any recommendations or caveats?

I know there's a lot here; feel free to respond only to little chunks
or ask me if I haven't been clear about anything. I'm still figuring
it out myself. Thanks

/jc

Date: 2009-05-21 07:50
From: victor
Subject: [Csnd] Re: Beginner questions on realtime csound work
Yes, I think you are. Although Csound is oriented mainly towards
sound synthesis and processing, there are elements that
can be used for performance tracking (tempest, pitch trackers, etc.),
and you can also use Csound in conjunction with lots of other
software, so that it concentrates, for instance, on the synthesis
side (it's very flexible that way). I use Csound for all my live-electronics
pieces and it does the job well for me.
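
To make the live-processing side concrete, here is a minimal sketch of a
realtime input/output setup (not from any actual piece; the device flags,
buffer sizes and input channel are assumptions to adjust for your own system):

<CsoundSynthesizer>
<CsOptions>
-iadc -odac -b 128 -B 512
</CsOptions>
<CsInstruments>
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr 1                       ; pass the live signal through a simple effect
  ain   inch   1              ; microphone assumed on input channel 1
  arev  reverb ain, 2.5       ; 2.5-second reverb tail
        outs   ain + arev*0.2, ain + arev*0.2
endin
</CsInstruments>
<CsScore>
i 1 0 3600                    ; keep the instrument running for an hour
</CsScore>
</CsoundSynthesizer>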

As for the OS, I have heard good things about Ubuntu Studio, so IMHO
it might be a good way to go. I use Fedora myself, but that is because I
have been using Red Hat Linux since I first touched Linux in 1999 and I am
kind of used to it (and I build my own kernels from time to time). In terms of
hardware, pretty much everything these days gives you usable performance,
from netbooks to desktops, so I guess you might just look around
for something well supported by Ubuntu.



Date: 2009-05-21 09:15
From: jpff@cs.bath.ac.uk
Subject: [Csnd] Re: Beginner questions on realtime csound work
There are a number of answers that you could be given, but for now I am
going to point you at a very small and simple example, and I realise that
this is a bit of self-promotion on my part.

@InProceedings{JPF94,
  author =       {Andrew Brothwell and John ffitch},
  title =        {{An Automatic Blues Band}},
  booktitle =    {6th International Linux Audio Conference},
  pages =        {12--17},
  year =         {2008},
  editor =       {Frank Barknecht and Martin Rumori},
  address =      {Kunsthochschule f\"ur Medien K\"oln},
  month =        {March},
  organization = {LAC2008},
  publisher =    {Tribun EU, Gorkeho 41, Brno 602 00},
  note =         {ISBN 978-80-7399-362-7},
  annote =       {\url{http://lac.linuxaudio.org/download/papers/13.pdf}}
}

It uses a very simple listening component in Csound to keep a synthetic blues
band in time and in tune.  A more detailed explanation is available as a
tech report:
http://www.cs.bath.ac.uk/pubdb/download.php?resID=219

On the other hand, this may be way too simple for your use.

==John (still on second cup of coffee)




Date: 2009-05-21 17:15
From: apalomba@austin.rr.com
Subject: [Csnd] Re: Beginner questions on realtime csound work
Hey Jason,

Welcome to the world of Csound! I am very interested
in many of the same things you are: real-time integration
of electronics in music composition.

I don't know what your budget is, or if you prefer Mac or PC, but
I would recommend getting a laptop with a dual-core processor and a FireWire
audio interface. A laptop because you can take it to performances,
and FireWire because it is in general more reliable than USB: FireWire
interfaces do their I/O processing on the device, while USB interfaces
rely on the host to do it.

There are many Csound resources out there, but one I have found
very useful for getting started using Csound in a real-time environment
is Iain McCurdy's examples: http://iainmccurdy.org/csound.html

I would also recommend trying Pure Data, a real-time graphical programming 
environment for audio, video, and graphical processing. It is free.
http://puredata.info/

But if you have the resources, I would recommend getting Max/MSP
http://www.cycling74.com/products/max5  

They are both great for building real-time performance environments,
and you can run Csound from inside them as well! Max is more developed
and now has integration with Ableton Live, which you may want to upgrade
to later on if you want a sequencing environment.
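
To give a rough idea of what the Csound side of such a setup can look like,
here is a minimal sketch of an instrument reading a named control channel
that a Pd or Max host patch would set (the channel name "cutoff" is just an
example, the .csd wrapper and score are omitted, and the exact message syntax
differs between csoundapi~ and csound~, so check their help patches):

instr 1
  kcut  chnget "cutoff"             ; control value sent from the host patch
  kcut  limit  kcut, 100, 8000      ; keep the cutoff in a safe range (Hz)
  ain   inch   1                    ; live input
  aflt  moogladder ain, kcut, 0.4   ; resonant lowpass steered by the host
        outs   aflt, aflt
endin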

Lots of stuff to think about, have fun!



Anthony







Date: 2009-05-21 17:52
From: Rory Walsh
Subject: [Csnd] Re: Re: Beginner questions on realtime csound work
I'm not sure how you intend to control your Csound instruments in
real time, but I've found the following quite useful for different
pieces:

1) MIDI controllers: I use an Evolution UC-33 and really like it. It's
got quite an array of sliders, buttons and knobs. I generally use the
buttons to start instruments and then use the sliders/pots to control
different elements of the audio. I also usually leave one slider to
control the level of the incoming audio, which means I don't need to be
near the main mixing console in a concert.

2) Build your own GUI interface: For this I would normally write my
own GUI frontend in C++, and through the use of the Csound API I can
use my on-screen sliders to control aspects of the audio. More
recently I've used Pd to build my GUI frontends, and then through the
use of the csoundapi~ object I'm able to control my Csound instruments.
The nice thing about using Pd is that it's simple to build timers and
generate numerical data that can be sent to Csound. While Pd offers
so much in terms of processing, you don't really need to know any of
that stuff to get going: with just a few simple objects and csoundapi~
you can get really nice results.

3) Analysing the incoming signal. The two approaches mentioned above
mean that you need to have someone to control the audio instruments.
If you prefer a hands free approach then you can analyse the incoming
signal for amplitude and frequency. A lot of information can be got
from analysing the incoming sound. In some cases you might even be
able to determine what type of instrument is playing from analysing
the incoming audio stream. For instance, if you check the amplitude of
the odd harmonics and they seem to be much stronger than the even
ones, you can assume that the instrument making the sound is open at
one end and closed at the other, such as a clarinet. You
can also use pitch triggers to set off sounds. If you play a middle C,
for example, your patch could respond by transposing your last phrase
up a particular interval and then start playing along with you as you
move into the next musical phrase. In terms of algorithmic stuff you
can sample a note and then use any algorithm you like to generate a
set of notes by transposing your original tones up or down. The
possibilities are endless.
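
A minimal sketch of that last idea (only a sketch, not a piece; the pitch and
amplitude thresholds are guesses for a flute-like source and assume 0dbfs = 1,
and instr 1 would be started from the score and left running):

gisine ftgen 0, 0, 8192, 10, 1               ; sine table for the responder

instr 1                                      ; listener: tracks the live input
  ain        inch      1
  kcps, krms pitchamdf ain, 100, 1000        ; track pitch (Hz) and amplitude
  kabove     =         (kcps > 261 && krms > 0.05 ? 1 : 0)
  ktrig      trigger   kabove, 0.5, 0        ; fire only on the upward crossing
  if ktrig == 1 then
    event "i", 2, 0, 2, kcps                 ; start the responder, pass the pitch
  endif
endin

instr 2                                      ; responder: answers a fifth above
  asig       poscil    0.2, p4 * 1.5, gisine
  aenv       linen     asig, 0.05, p3, 0.5
             outs      aenv, aenv
endin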

Of all the above approaches I use the MIDI controller in conjunction
with a GUI most often. This is because I like to have complete control
over the audio. In pieces (I should really say tests...) where I've
analysed incoming signals I've found the results to be somewhat
unpredictable at times. I must admit that in most cases this
unpredictability added to the piece and gave a very strong sense of
interaction between performer and computer.
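
For what it's worth, the MIDI-controller approach can be as small as this
(a sketch only; it assumes Csound is started with a MIDI input flag such as
-M0, and the controller numbers are whatever your sliders actually send):

instr 1                          ; always on: sliders shape the live input
  klev  ctrl7  1, 7, 0, 1        ; MIDI channel 1, CC 7 -> input level
  kpan  ctrl7  1, 10, 0, 1       ; CC 10 -> stereo position
  ain   inch   1
  asig  =      ain * klev
        outs   asig * (1 - kpan), asig * kpan
endin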

Rory.




Date: 2009-05-21 18:36
From: Michael Gogins
Subject: [Csnd] Re: Re: Beginner questions on realtime csound work
Yes, the McCurdy site is a fantastic resource. Highly recommended. He
seems to be doing a pretty good job of keeping it up to date, e.g.
with examples for the physical models inspired by the work of Stefan
Bilbao (also highly recommended), prepiano and barmodel etc.

Regards,
Mike



-- 
Michael Gogins
Irreducible Productions
http://www.michael-gogins.com
Michael dot Gogins at gmail dot com

Date: 2009-05-22 02:02
From: Jason Conklin
Subject: [Csnd] Re: Beginner questions on realtime csound work
First of all, thanks for all your responses! I am extremely impressed
by the wealth of information I have gleaned already. There is a lot for me to
look into, as my background in programming and system design is pretty
cursory compared with many of yours, but I never thought this would
be an "easy" project, either!

As for operating systems, I think I will approach this through Ubuntu,
at least to start - primarily because making art using
free/open-source tools - all the monkeying around included - appeals
to me. If Ubuntu doesn't want to cooperate for whatever reason, I'm
likely to try Fedora next - I was a Fedora user for a long time before
my brother, who had been a Red Hat employee, went to Canonical as a
kernel programmer. Support communities are decent for either distro,
but it's nice to have a bit of inside expertise so handy! I like
Apple's products, too, but that's probably just not on my menu for
now. And Windows... well I get to use that at work....

The Pure Data recommendations are great, and I may look into Max/MSP
down the road, too -- it looks like a fantastic product. I have seen
Pd referenced in several contexts, but didn't want to check it out
myself before getting some solid corroboration that it may apply
(which you've more than adequately provided!), or at least a better
sense of csound's innards.

I'm not really astute enough as a programmer to develop a performance
environment/frontend of my own -- not yet. But I do have some
programming knowledge (mostly in PHP and a touch of C, along with some
other web-development stuff) and plenty of interest. While I mount
the learning curve, though, it seems this is where Pd and/or MIDI control will
come in. I'll just have to play with options to know for sure.

All that said, it is this option...

> 3) Analysing the incoming signal. ...
> If you prefer a hands free approach then you can analyse the incoming
> signal for amplitude and frequency. A lot of information can be got
> from analysing the incoming sound.

...that really describes what I'd like to explore, and that interests
me most. Of course, the added dimensions of MIDI or frontend controls
prove desirable as well, but that's down the road a ways. The
unpredictability in this approach that Rory mentioned is fine with me,
even a bit exciting -- I expect to introduce some aleatoric or
chaotic aspects into the input processing anyway. At least to start,
I'll be very happy even if my output is relatively noisy and
unpredictable, as long as it's *working*.

Thanks especially for the examples, too -- the Brothwell report, Iain
McCurdy's page, and reference to the PVS opcodes, which I'm reading up
on now. All promise to be very helpful.

If anyone's doing work along similar lines that you find interesting,
I'd love to hear about it and, if possible, to hear it or even check
out the tools in use. Of course, I hope to develop my own ideas and
applications as I learn some technique, but inspiration is always
good!

Thanks again, everyone.
/jc

Date: 2011-01-30 11:10
From: Enrico Francioni
Subject: [Csnd] Re: Re: Beginner questions on realtime csound work

Hello Rory,

Could you say more about this topic?
I am very interested.
Have you experimented with this yourself?
Do you have a patch?

Thank you,
enrico


I quote your own words below:

Analysing the incoming signal. The two approaches mentioned above
mean that you need to have someone to control the audio instruments.
If you prefer a hands free approach then you can analyse the incoming
signal for amplitude and frequency. A lot of information can be got
from analysing the incoming sound. In some cases you might even be
able to determine what type of instrument is playing from analysing
the incoming audio stream. For instance, if you check the amplitude of
the odd harmonics and they seem to be much stronger than the even
ones, you can assume that the instrument making the sound is open at
one end and closed at the other, such as a clarinet. You
can also use pitch triggers to set off sounds. If you play a middle C,
for example, your patch could respond by transposing your last phrase
up a particular interval and then start playing along with you as you
move into the next musical phrase. In terms of algorithmic stuff you
can sample a note and then use any algorithm you like to generate a
set of notes by transposing your original tones up or down. The
possibilities are endless.

Date: 2011-01-31 13:01
From: Rory Walsh
Subject: Re: [Csnd] Re: Re: Beginner questions on realtime csound work
I posted an instrument for real-time processing to this list a few
weeks back; if I'm not mistaken it was in response to an email from
you?





Date: 2011-01-31 14:08
From: Enrico Francioni
Subject: [Csnd] Re: Re: Beginner questions on realtime csound work


Rory, OK... I'm sorry.

But I did not want to work with MIDI;
I want to use only the microphone.

Thanks

e

Date: 2011-01-31 18:30
From: Rory Walsh
Subject: Re: [Csnd] Re: Re: Beginner questions on realtime csound work
I haven't done too much of this. I know Iain McCurdy wrote a live
piece that uses the notes of the flute to trigger different
processes. Perhaps you could ask him how he tackled the problem.

Rory.







Date: 2011-02-02 11:26
From: Rory Walsh
Subject: Re: [Csnd] Re: Re: Beginner questions on realtime csound work
I'm pretty sure he used one of the pitch-tracking opcodes and then,
according to whatever notes may have been input, his instrument would
in turn process the audio in a particular way. Are you having problems
with this approach? If so, present the simplest instrument you can to
the list and we will see where the problem might be. If you wish to
pose a more in-depth question on people's personal experiences with
realtime control, I'm afraid you may have to wait longer for an answer.
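
Something along these lines would be a reasonable starting point (a sketch
only, certainly not Iain's actual patch; the register boundary and modulator
frequency are arbitrary choices):

gisine ftgen 0, 0, 8192, 10, 1

instr 1
  ain        inch      1
  kcps, krms pitchamdf ain, 100, 1000      ; rough flute range
  if kcps > 500 then                       ; high register: bring in ring modulation
    kmix = 0.6
  else                                     ; low register: leave the sound dry
    kmix = 0
  endif
  kmix       port      kmix, 0.05          ; smooth the switch to avoid clicks
  amod       poscil    1, 200, gisine      ; fixed 200 Hz modulator
  aout       =         ain*(1-kmix) + ain*amod*kmix
             outs      aout, aout
endin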

Rory.








Date: 2011-02-02 18:09
From: Enrico Francioni
Subject: [Csnd] Re: Re: Beginner questions on realtime csound work

Thanks Rory,

I am in communication with Iain.
His examples, and his pieces that employ pitchamdf and pitch, are very
instructive (e.g. Three Breaths for flute and live electronics).
Now I am using pitchamdf in the simple examples.

I understand that there are two ways for the incoming signal to
control the algorithm: by its pitch or by its amplitude...
Do you know other ways to treat the signal coming from a microphone
so that it controls the algorithm? Perhaps by acting on the sound spectrum (FFT)?

...

e

Date: 2011-02-02 18:29
From: Rory Walsh
Subject: Re: [Csnd] Re: Re: Beginner questions on realtime csound work
Analysing the spectrum would give you even more information to work
with. Regarding interactive music, I've never been that bothered by
the analysis stage. It's what composers do with that analysis
information that interests me.
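
As a minimal sketch of spectral analysis as a control source (the FFT sizes
here are just common defaults, not taken from any particular piece): pvsanal
gives a streaming analysis of the live input, and pvscent reduces it to a
single brightness measure that can steer processing the same way pitch or
amplitude does.

instr 1
  ain   inch    1
  fsig  pvsanal ain, 1024, 256, 1024, 1    ; streaming spectral analysis
  kcent pvscent fsig                       ; spectral centroid ("brightness") in Hz
  kamp  rms     ain                        ; amplitude for comparison
  printks "centroid: %d Hz   rms: %f\n", 0.25, kcent, kamp
endin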

Rory.



