
[Csnd] composing with Csound, score question

Date: 2012-07-18 02:41
From: peiman khosravi
Subject: [Csnd] composing with Csound, score question
I have a rather difficult question.

These days I tend to do more synthesis, but all the mixing happens in
Pro Tools. However, I'm much better at doing synthesis in context. So
once I have an instrument that I like (mostly one generating textures),
I will render it N times with completely different parameters and
parameter envelopes (linseg).

The problem is that I'm getting tired of moving between Csound and Pro
Tools and was wondering how I could do most of the mixing in Csound. An
obvious problem is that different linsegs for the same instrument may
produce drastically different results, but it would not make sense to
have 100 copies of the same instrument with different linsegs. It would
also be far better if one could see the different envelopes in the
score, next to the note statement.

I suppose this counts both as a request for advice (what is the best
approach?) and as a suggestion for the Csound 6 feature list. Say you
have an instrument with a k-rate or a-rate linseg or similar opcode.
Something like this (OK, I'm just thinking aloud and I know this is a
bit out there!) would be so useful:

instr 1
kfreq linseg {freqEnvelop} ;or something like that
aamp linseg {ampEnvelop}
printk2 kfreq
endin


And then this in the score:

{i1 0 10
freqEnvelop: 100, p3/2, 40, p3/2, 4000
ampEnvelop: 0, .2, 1, p3-.2, 0
}

{i1 3 20
freqEnvelop: 500, p3/2, 100, p3/2, 500
ampEnvelop: 0, .2, 1, p3-.2, 0
}


I do realise that this kind of thing is normally done via the API, but
I also stand by my opinion that Csound should improve its compositional
interface. Along the same lines, I think that implementing patterns
would be very useful in Csound. So really, I'm wondering two things:
(1) the possibility of defining bits of the orchestra in the score
(maybe this is possible with strings in the current system?) and (2) an
expansion of the score language to allow more elaborate processes. Of
course the latter may be a case of designing a compositional language,
in Python or whatever, and that's fine by me, but I think such a tool
needs to be released with Csound or be easily available.


Thanks
Peiman

Date: 2012-07-18 03:49
From: Steven Yi
Subject: Re: [Csnd] composing with Csound, score question
Hi Peiman,

Regarding Csound-only usage, and instrument design as a whole, I'd
suggest considering a couple of existing options.

1. If the envelopes are being modified per instrument instance, then
the p-field signature of the instrument could be designed to either
take in the parameters (i.e. p5-p8 could be linseg parameters), or
tables could be used to hold the values.  With tables, you could pass
individual tables in, or create one large table that holds multiple
values and index into it.  Either of these ways may be a bit
cumbersome, however, as you'd have to write some tricky code to handle
an arbitrary number of segments if you were going to map it to an
instance of linseg.  A simpler notion would be just to create a table
and read it with oscil or phasor/table.  (I think Prent Rodgers uses
envelopes as articulations to good effect in his pieces.)
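
For illustration, here is a minimal sketch of the table-based approach; the
table numbers, GEN07 break-points and p-field layout are assumptions for the
example, not taken from the description above.  Each note passes a
frequency-envelope table and an amplitude-envelope table and scans them once
over its duration:

<CsoundSynthesizer>
<CsInstruments>
sr     = 44100
ksmps  = 32
nchnls = 2
0dbfs  = 1

instr 1
  ; p4 = peak amplitude, p5 = freq-envelope table, p6 = amp-envelope table
  kidx  line   0, p3, 1            ; normalised position within the note
  kfreq tablei kidx, p5, 1         ; read the frequency envelope (0..1 index)
  kamp  tablei kidx, p6, 1         ; read the amplitude envelope
  asig  poscil kamp * p4, kfreq, 10
  outs  asig, asig
endin
</CsInstruments>
<CsScore>
f 10 0 16384 10 1                    ; sine table for poscil
; per-note envelopes as non-normalised GEN07 break-point tables
f 1 0 1025 -7 100 512 40  513 4000   ; freq envelope, note 1
f 2 0 1025 -7 0   64  1   961 0      ; amp envelope, both notes
f 3 0 1025 -7 500 512 100 513 500    ; freq envelope, note 2
i 1 0 10 0.5 1 2
i 1 3 20 0.5 3 2
</CsScore>
</CsoundSynthesizer>

One trade-off of this sketch is that segment durations become fractions of
the note length rather than values in seconds, as they would be with linseg.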

2. You may want to look at the instrument design in:

http://www.csounds.com/journal/issue13/emulatingMidiBasedStudios.html

In the article, you'll see that parameters for an instrument are
exposed with chnget and are set via chnset in separate instruments.
This allows MIDI-like instrument designs, where continuous data (e.g.
pressure, volume, etc.) is set separately from the discrete per-note
values (velocity, key number).  This gives you global per-instrument
parameters.  To achieve per-note continuous data, you can use the
technique found in the iOS/Android examples for multitouch:

http://csound.git.sourceforge.net/git/gitweb.cgi?p=csound/csound5.git;a=blob_plain;f=android/CsoundAndroidExamples/res/raw/multitouch_xy.csd;hb=HEAD

The linked CSD shows values being read via chnget, but the name of the
channel is dynamically created using p4.  You could simplify this for
your own usage by using fractional instrument numbers and building the
channel name from the fractional value of p1.
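
For example, a small reading-side sketch of that idea (the channel-name
pattern is only illustrative): each fractional instance builds its own
channel name from p1, so i3.1 and i3.2 read from different channels.

instr 3
  Sfreq   sprintf "freq.%g", p1   ; "freq.3.1", "freq.3.2", ...
  kfreq   chnget  Sfreq           ; whatever an automation instrument writes there
  printk2 kfreq
endin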

Referring back to the MIDI-emulation article, instr 1 and 2 show how
values are set.  You could create a few of these for different ways of
setting channel values.  In this regard, the design is open and
flexible: you could have the signal generated with sample-and-hold, an
oscillator, linear segments, exponential segments, etc., by creating
different parameter-setting instruments like 1 and 2 there.

You'd then be able to organize sets of notes together in time, e.g.:

i2 0 2 "i3.amplitude" 0 .2 1 .2 0  ; example 3pt linseg with .2 durations
i2 . . "i3.freq" 0 .2 1 .2 0  ; example 3pt linseg with .2 durations
i3 . . "someSampleToPlay.wav" ; perhaps a sample playing instrument

i2 2 2 "i3.1.amplitude" 0 .2 1 .2 0  ; example 3pt linseg with .2 durations
i2 . . "i3.1.freq" 0 .2 1 .2 0  ; example 3pt linseg with .2 durations
i3.1 . . "someSampleToPlay.wav" ; perhaps a sample playing instrument

(Note: the above is mostly how blue's automation works, though
currently blue only does per-instrument continuous values, not
per-note.)
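
To make the score fragment above concrete, a minimal sketch of an
instr 2-style parameter setter could look like this; the 3-point layout
follows the score lines, while reading the channel name from p4 is an
assumption of the sketch:

instr 2                              ; write a 3-point line to a named channel
  Schan  = p4                        ; e.g. "i3.amplitude" or "i3.1.freq"
  ksig   linseg p5, p6, p7, p8, p9   ; value, duration, value, duration, value
  chnset ksig, Schan
endin

The playing instrument (i3 above) would then simply chnget the matching
channel names for the duration of its note.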

Perhaps these ideas will be flexible enough for your needs?

As for things like pattern libraries, I'd say that for Csound 5 it
might be best to use a scripting language.  For Csound 6, I'm doing
some research work now on introducing a type system.  I think it will
lend itself to user-defined data types, which together with
user-defined functions could allow a functional style of programming
that would make designing composing libraries in Csound 6 much easier.
It's early in the exploratory phase, though.  There are a number of
steps to take to get there, and I'd rather see this done in a careful
way than introduce something quickly to address a short-term problem,
only to create bigger, long-term issues.

Hope that helps!
steven


Date: 2012-07-18 13:13
From: peiman khosravi
Subject: Re: [Csnd] composing with Csound, score question
Hi Steven,

Thanks for your reply. I just had a look at your article, and yes this
seems like a very good solution actually. This is very clever!

Yes, I agree that a long-term solution is the way forward.

Best,
Peiman


Date: 2012-07-19 01:09
From: peiman khosravi
Subject: Re: [Csnd] composing with Csound, score question
Hello again,

It just occurred to me that I have a problem if I want to have more than one instance of an instrument with different automations. Mhh.

P   

Date: 2012-07-19 01:29
From: Steven Yi
Subject: Re: [Csnd] composing with Csound, score question
There shouldn't be a problem if you use the technique from the Android
example.  I'll write an example later tonight and will post it here.



Date: 2012-07-19 02:39
From: peiman khosravi
Subject: Re: [Csnd] composing with Csound, score question
Thanks Steven,

I see your point. Do you mean something like this?

<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>


<CsInstruments>

sr=44100
ksmps=1
nchnls=2
0dbfs=1


instr 1 ; Automation - set value

Sparam = p4 ; name of parameter to control
ival = p5   ; value
kcounter = 0

if (kcounter == 0) then
chnset k(ival), Sparam
turnoff
endif

endin


instr 2

Sparam = p4 ; name of parameter to control
imode = p5
istart = p6 ; start value
iend = p7   ; end value

if (imode==0) then
ksig line istart, p3, iend
else
ksig expon istart, p3, iend
endif

chnset ksig, Sparam

endin


instr 3

ifrac = frac(p1)
Sfreq sprintf "freq.%g", p1
kvalue chnget Sfreq
;puts Sfreq, 1
printk2 kvalue
endin


</CsInstruments>


<CsScore>

i3.1 0 1
i3.2 0 1

i 1 0 .1 "freq.3.1" 3
i 1 0 .1 "freq.3.2" 30

</CsScore>


</CsoundSynthesizer>

Date: 2012-07-19 02:58
From: Steven Yi
Subject: Re: [Csnd] composing with Csound, score question
Attachments: noteAutomation.csd
Hi Peiman,

Something like that would work.  I think I wrote a slightly
overly-complicated example, but I've attached it anyway.  The overly
complicated part is that i1 and i2 were made to construct the string,
when it could just as easily have been passed in from the score.  i3
shows the parameters being read in.  I found that frac(p1) did not
give the result I thought it would; my guess is that the score parser
may be throwing away the fractional part.  So I went with a noteNum
identifier passed in as a p-field.

Either way, the attached CSD shows 5 notes of the same instrument
using different automations per note (a pair of notes descending, and
a 3-note glissando up coming in two seconds later).  Hopefully it
illustrates the technique.

The implementation could probably be further refined with a default:
for example, if p4 is 0 or not set, read from i3.freq and i3.amp, but
if it is greater than 0, read from i3.p4.freq (with p4 replaced by its
value).  That would then allow per-instrument automation as well as
per-note automation.

Anyways, hope this helps! :D

steven
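
The attachment isn't reproduced in the archive, but a minimal sketch of the
noteNum-in-a-p-field idea might look like the fragment below; the channel
names, table number and p-field positions are guesses, not necessarily what
noteAutomation.csd actually uses:

giSine ftgen 10, 0, 16384, 10, 1

instr 3
  inote  = p4                          ; note identifier passed from the score
  Sfreq  sprintf "i3.%d.freq", inote   ; "i3.1.freq", "i3.2.freq", ...
  Samp   sprintf "i3.%d.amp",  inote
  kfreq  chnget  Sfreq
  kamp   chnget  Samp
  asig   poscil  kamp, kfreq, giSine
  outs   asig, asig
endin

; in the score, each note of i3 carries its identifier in p4, and the
; automation instruments write to the correspondingly named channels, e.g.:
; i 3 0 2 1
; i 3 0 2 2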




Date: 2012-07-19 03:56
From: peiman khosravi
Subject: Re: [Csnd] composing with Csound, score question
Thanks Steven,

This is really nice. 

I came up with this, which is a bit more convoluted.

Can I ask why you have the conditional in instrument 1?

Best,
Peiman

<CsoundSynthesizer>

<CsOptions>
-odac
</CsOptions>


<CsInstruments>

sr=44100
ksmps=1
nchnls=1
0dbfs=1


instr 1 ; string - set value
Smember = p4 ; name of parameter to control

    ; Parse Smember
    istrlen    strlen   Smember
    idelimiter strindex Smember, ":"

    S1    strsub Smember, 0, idelimiter  ; channel name (before the ":")
    S2    strsub Smember, idelimiter + 1, istrlen  ; value string (after the ":")

kcounter = 0
if (kcounter == 0) then
chnset S2, S1
turnoff
endif

endin

instr 2 ; Automation - set value

Sparam = p4 ; name of parameter to control
ival = p5   ; value
kcounter = 0

if (kcounter == 0) then
chnset k(ival), Sparam
turnoff
endif

endin


instr 3 ; Automation segment

Sparam = p4 ; name of parameter to control
imode = p5
istart = p6 ; start value
iend = p7   ; end value

if (imode==0) then
ksig line istart, p3, iend
else
ksig expon istart, p3, iend
endif

chnset ksig, Sparam

endin


instr instrument
instance=p4
Srate sprintf "instrument.rate.%g", instance
krate chnget Srate

Sfile sprintf "instrument.file.%g", instance
Sname chnget Sfile

Sloop sprintf "instrument.loop.%g", instance
iloop chnget Sloop

aout    diskin    Sname, krate, 0, iloop
out    aout
;puts Sfreq, 1
;printk2 krate
endin


</CsInstruments>


<CsScore>

i"instrument" 0.01 10 1
i 1 0 .1 "instrument.file.1:/Applications/Max5/examples/sounds/cherokee.aif"
i 3 0  3 "instrument.rate.1" 0 0.1  1
i . +  2 .             1 1   0.001


i"instrument" 0.01 10 2
i 1 0 .1  "instrument.file.2:/Applications/Max5/examples/sounds/jongly.aif"
i 2 0 .1  "instrument.loop.2" 1
i 3 0  1  "instrument.rate.2" 1 1 10
i 3 +  10 "instrument.rate.2" 0 2 .5


</CsScore>


</CsoundSynthesizer>

Date2012-07-19 13:58
FromSteven Yi
SubjectRe: [Csnd] composing with Csound, score question
Hi Peiman,

In the instrument 1 I had created, I think I added the conditional because
it was setting a k-value and I assumed it would need the k-rate version of
the opcode.  That assumption turns out to have been wrong: it can just use
the "chnset ival, Sparamstr" version of the opcode and call turnoff right
after.

I just tried it and, sure enough, it works fine with just the i-time
version, so the conditional is unnecessary. :)
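
For reference, here is a minimal sketch of what the simplified setter could
look like, assuming the same p-field layout as instrument 2 in the quoted
CSD below (p4 = channel name, p5 = value); the names are just placeholders:

instr 2 ; Automation - set value (simplified, no conditional)
Sparam = p4          ; name of the channel to write to
ival   = p5          ; value to set
chnset ival, Sparam  ; i-time chnset writes the value once at init
turnoff              ; nothing left to do, so deactivate at the first k-pass
endin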

Thanks!
steven


On Wed, Jul 18, 2012 at 10:56 PM, peiman khosravi
<peimankhosravi@gmail.com> wrote:
> Thanks Steven,
>
> This is really nice.
>
> I came up with this which is a bit more convoluted.
>
> Can I ask why you have the conditional in instrument 1?
>
> Best,
> Peiman
>
>
> <CsoundSynthesizer>
>
> <CsOptions>
> -odac
> </CsOptions>
>
>
> <CsInstruments>
>
> sr=44100
> ksmps=1
> nchnls=1
> 0dbfs=1
>
>
> instr 1 ; string - set value
> Smember = p4 ; name of parameter to control
>
>     ; Parse Smember
>     istrlen    strlen   Smember
>     idelimiter strindex Smember, ":"
>
>     S1    strsub Smember, 0, idelimiter  ; "String1"
>     S2    strsub Smember, idelimiter + 1, istrlen  ; "String2"
>
>
> kcounter = 0
> if (kcounter == 0) then
> chnset S2, S1
>
> turnoff
> endif
>
> endin
>
> instr 2 ; Automation - set value
>
> Sparam = p4 ; name of parameter to control
> ival = p5   ; value
> kcounter = 0
>
> if (kcounter == 0) then
> chnset k(ival), Sparam
> turnoff
> endif
>
> endin
>
>
> instr 3 ; Automation segment
>
>
> Sparam = p4 ; name of parameter to control
> imode = p5
> istart = p6 ; start value
> iend = p7   ; end value
>
> if (imode==0) then
> ksig line istart, p3, iend
> else
> ksig expon istart, p3, iend
> endif
>
> chnset ksig, Sparam
>
> endin
>
>
> instr instrument
> instance=p4
> Srate sprintf "instrument.rate.%g", instance
> krate chnget Srate
>
> Sfile sprintf "instrument.file.%g", instance
> Sname chnget Sfile
>
> Sloop sprintf "instrument.loop.%g", instance
> iloop chnget Sloop
>
> aout    diskin    Sname, krate, 0, iloop
> out    aout
> ;puts Sfreq, 1
> ;printk2 krate
> endin
>
>
> </CsInstruments>
>
>
> <CsScore>
>
> i"instrument" 0.01 10 1
> i 1 0 .1 "instrument.file.1:/Applications/Max5/examples/sounds/cherokee.aif"
> i 3 0  3 "instrument.rate.1" 0 0.1  1
> i . +  2 .             1 1   0.001
>
>
> i"instrument" 0.01 10 2
> i 1 0 .1  "instrument.file.2:/Applications/Max5/examples/sounds/jongly.aif"
> i 2 0 .1  "instrument.loop.2" 1
> i 3 0  1  "instrument.rate.2" 1 1 10
> i 3 +  10 "instrument.rate.2" 0 2 .5
>
>
> </CsScore>
>
>
> </CsoundSynthesizer>
>
>

Date2012-07-19 14:10
Frompeiman khosravi
SubjectRe: [Csnd] composing with Csound, score question
Thanks Steven,

Makes sense. Great, I'm really happy about this way of working!

Best,
P


Date2012-07-19 14:12
FromSteven Yi
SubjectRe: [Csnd] composing with Csound, score question
Fantastic!  Glad this is working for you, and thanks for asking about that chnset!

steven
