
[Csnd] phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)

Date: 2008-01-23 09:06
From: Tim Mortimer
Subject: [Csnd] phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
I'm back! ; )

So I've been browsing the manual this evening, trying to find out the best
way to go about designing some "crossfading oscillator" synthesis
instruments with Csound, & to be honest I'm a little uncertain how best to
proceed (as mostly, up until now, Csound has been about sample playback or
resynthesis in one form or other for me).

I touched upon some aliasing concerns in an earlier post (about 6 weeks or
so back..) & yes, "aliasing happens!" Anthony Kozar ; ) ... but, err, I'm
still not sure how to approach this.

I'm concerned because I am likely to want to use square, impulse, &
generally "pointy" oscillator shapes, but I also want to be able to
crossfade the oscillator shape during playback with other "found" & "random"
type shapes (snipped from samples or created in a text file), or even other,
more additive GEN-routine-based ftables (do I dare add FM to the mix at this
point?), to create diverse & changing spectra in a single crossfadeable but
phase-aligned synthesis method.

Now the obvious way to start would be some sort of oscil-based table read -
but surely (from what I can gather reading the manual..) vco2 & GEN30 (???)
are designed to overcome any aliasing issues (particularly again with those
"squarer/pointy" waveshapes..) by performing some sort of analysis on the
spectrum implied by any given oscillator shape, & reproducing the signal
band-limited to within Nyquist-safe regions?

So do I load all my oscillator shapes into a bunch of ftables, analyze them
all with separate GEN30s to bandlimit each spectrum to an appropriate level, &
then tablemix from these selected GEN30 tables to create the desired effect??
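For anyone reading along, a minimal score sketch of this approach (table
size and harmonic count are arbitrary assumptions, not a recommendation):
f1 is a naive square drawn with GEN7, and f2 uses GEN30 to extract only a
bandlimited set of its harmonics.

```csound
; GEN30 args after the routine number: source table, lowest harmonic,
; highest harmonic to keep.
f 1 0 16384 7  1 8192 1 0 -1 8192 -1   ; naive square wave (will alias)
f 2 0 16384 30 1 1 40                  ; harmonics 1-40 of f1 only
```

An oscili reading f2 stays alias-free as long as 40 times the fundamental
remains below Nyquist; higher notes would want a table with fewer partials.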

The GEN30 manual entry does refer to "harmonic partials" - does that mean it
creates a sine-based reduction of integer-valued frequency ratios only, &
doesn't represent the complete spectrum in all its complexity?

Alternately, I'm not sure any vco2-based implementation is going to make the
job of crossfading phase-synced oscillators any easier....

I know what I'm looking for, for the rest of this synth design:

> LP/HP/Notchfilters 
> allpass filters
> envelopes 
> feedback, pitchshifting & modulating delaylines

to mainly create some nice classic ambient pad sounds that can "expand
upwards" into lots of top-end sizzle! But I want to make sure I get the
crossfadeable, phase-synced oscillator section right, particularly since I
can't phase-sync gbuzz or the like to get Nyquist-friendly pulse waves that
way. So basically it seems I have to look at some sort of table-based
alternative to keep the "pulse oscillator state" in phase alignment with the
other, less harmonically rich oscillator shapes...

There are plenty of synthesis gurus on this list. Do any of you have any
useful advice as to how I might best proceed with this?

I also recall the recent crossfadeable waveshaping discussion, & wonder if
as part of that there was a method I might borrow to better align tablemix
transitions to zero crossings in the wave playback, for example...??? (as
opposed to simply at a constant k-rate - though that would hopefully be ok
anyway, as long as the crossfading wasn't too ambitious in terms of speed of
execution...) - Anthony, did you achieve this "transition of crossfade at 0
crossings only" at all? Was it necessary to achieve the effect you were
after?

Please share your experiences & insights & suggestions! 

& many thanks as usual.

T.









-- 
View this message in context: http://www.nabble.com/phase-synced-crossfade-synthesis-without-aliasing---how--%28-Gen30--%29-tp15037443p15037443.html
Sent from the Csound - General mailing list archive at Nabble.com.


Date: 2008-01-23 09:25
From: Oeyvind Brandtsegg
Subject: [Csnd] Re: phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
Attachments: None

Date: 2008-01-23 09:50
From: Tim Mortimer
Subject: [Csnd] Re: phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
Thanks Oeyvind,

Based on your observation, I think I will begin experimenting by reading
phase-synced GEN30 tables, & begin to crossfade those.

Anthony K, I'm still keen to hear whether you felt the need to synchronise
waveshaping crossfades with 0 crossings of the phase position or not..
(there were UDO versions of that process around, I believe; must go off & try
& dig those up...)

I look forward very much to seeing what you have achieved with PSGS, Oeyvind.

Once I have this basic synth idea happening I'll probably switch the
oscillator bank for partikkel & get stuck into it further from there...

There's more to the story in terms of effects etc., but this is the key issue
I need to resolve to get started.







Date: 2008-01-23 16:17
From: aaron@akjmusic.com
Subject: [Csnd] The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
Attachments: None

Date: 2008-01-23 23:34
From: Tim Mortimer
Subject: [Csnd] Re: The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
Very briefly, Aaron (as I'm running late for work, & to be honest don't really
want to go. Oh the perils of indispensability! ; ) ...)

Sorry - briefly, yes, I do share many of your interests & concerns (remember I
independently tried "boulder synthesis" 12 months ago...) & I also am
"somewhat disappointed" with resynthesis "in general" in Csound, but mainly
because most of the various analysis techniques don't seem to make
their data "available" to the user in ways that allow further "human
extrapolation & deduction" to take place (which may not be Csound's fault
entirely...).

I continually use Loris as an example, as its method involves identifying
fundamentals, partials, & noise, & allows a kind of "statistical
interpolation" to take place on this data (in theory using Csound, but not
presently possible on Windows...) to create "more than what you got" in the
first place.

My point is, having identified "fundamentals, partials, & noise" using any
resynthesis technique, I don't see why this analysis can't be used to perform
(with a bit of work) a resynthesis based on any number of alternate
resynthesis methods (including ATS, adsyn et al...), but also simply a "bank
of sines & resonators" for a more synthetic realisation, or LPC for
partials with noise supplied from ATS... those are my general guiding thoughts...

The main reason being that the process of resynthesis could then be decided
upon based on a combination of those processes that lent themselves to the
most "sympathetic" (or if you want, "synthetic"...) recreation, depending on
whether you are modelling kotos, flutes, or sandpits on the occasion....

The main thing I think is needed & missing at the moment is generic
playback handling of additive synthesis data in the same ktimpnt-reader
format as is synonymous with PV & ATS file playback...

I'm not sure there's an answer to your post per se. It's a big topic. But
"thinking big" is what being creative is about (at least in my somewhat
Teutonic version of events anyway...)

I'm still a little nervous about aliasing though! Looking forward to getting
down to some experiments starting tomorrow.... (which may see me moving on to
the as yet unexplored territory of designing some effects in Csound...)

If anyone has a good pitch shifting delay with feedback please can we all
have a look?
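Since nobody has posted one yet, here is a rough, untested sketch of one
common approach (two moving delay-line taps half a window apart, each faded
by a raised cosine, with the shifted signal fed back into the line; the
ratio, window size, feedback amount, and source are all arbitrary
assumptions, not anyone's tested orchestra):

```csound
instr 1
  iratio = 0.75                        ; transposition ratio (down a 4th)
  iwin   = 0.1                         ; tap window size in seconds
  ifb    = 0.4                         ; feedback amount
  asrc   oscili 0.3, 220, 1            ; placeholder input source
  abuf   delayr 1                      ; 1-second delay line
  ; two taps sweep through the line at a rate set by the ratio;
  ; each is windowed by a raised cosine to hide the tap resets
  ; (a small constant offset keeps the tap delay above zero)
  aph1   phasor (1 - iratio) / iwin
  aph2   phasor (1 - iratio) / iwin, 0.5
  at1    deltapi aph1 * iwin + 0.001
  at2    deltapi aph2 * iwin + 0.001
  ashift = at1*(0.5 - 0.5*cos(6.2832*aph1)) + at2*(0.5 - 0.5*cos(6.2832*aph2))
  delayw asrc + ashift*ifb             ; feed the shifted signal back
  out    asrc*0.5 + ashift*0.5
endin
```

With feedback, each repeat is shifted again by the same ratio, so the echoes
spiral downwards (or upwards for iratio > 1) rather than repeating at one
fixed transposition.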






Date: 2008-01-24 05:43
From: Tim Mortimer
Subject: [Csnd] Re: phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
I just ran a test render of unsynced gbuzz- & oscili-generated signals running
side by side at the same frequency, & I'm VERY PLEASED to report that after
exporting 6 mins of audio (a reasonable enough test, I thought) any phase
deviation between the 2 signals was basically non-existent!

I think I expected the worst here because things like Max, for example,
are notoriously "sloppy" with this type of thing, & I was convinced that
only running table reads from a single oscili source was the guaranteed way
to keep signals phase-synced...
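A hedged reconstruction of the test described above (the frequency and table
numbers are my assumptions, not Tim's actual orchestra): gbuzz and oscili at
the same frequency, one per channel, so any relative drift shows up as a
slowly changing stereo image.

```csound
; f1: sine for oscili (GEN10), f2: cosine for gbuzz (GEN11), e.g.
;   f 1 0 8192 10 1
;   f 2 0 8192 11 1
instr 1
  a1 gbuzz  0.3, 220, 20, 1, 0.5, 2   ; bandlimited pulse train
  a2 oscili 0.3, 220, 1               ; plain sine at the same cps
  outs a1, a2                         ; compare channels for drift
endin
```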

Also pleased to report that Dr. B's tutorials from chapter 1 of the book
helped me achieve this realisation, ironically through his demonstration of
putting several opcodes slightly out of phase... wow, so they don't drift by
themselves!? Amazing ... ; )

OK, so now the fun commences in earnest... (& I'm glad I put those concurrent
pvx-related ideas in a separate thread, as I'd still like some ideas &
insight there...)

Great to be able to sing some praise over how "rock solid" & "easy" this
appears to be using Csound.


Date: 2008-01-24 07:39
From: Anthony Kozar
Subject: [Csnd] Re: Re: phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
I made no efforts to observe zero crossings or anything like that -- the
mathematics of the dynamic waveshaping technique that I was describing
involves purely continuous functions.

Regarding your crossfading question, I am having some trouble understanding
what your goal is.  Certainly, crossfading between two (or more) signals is
just a matter of mixing them?  If your crossfade "amount" moves slowly
enough, there should be no aliasing due to the fade.  So, I do not
understand the need for "phase synced" oscillators.  Clearly I must be
misunderstanding something ...
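Anthony's point - that the crossfade itself is just a mix - can be sketched
like this (the table numbers, frequency, and the choice of an equal-power
gain law are my assumptions):

```csound
instr 1
  kfade line   0, p3, 1              ; crossfade position, 0 -> 1
  kg1   =      cos(kfade * 1.5708)   ; equal-power gain pair:
  kg2   =      sin(kfade * 1.5708)   ; kg1^2 + kg2^2 = 1
  a1    oscili 0.3, 220, 1           ; both oscillators share one
  a2    oscili 0.3, 220, 2           ; frequency & start phase, so
  out   a1*kg1 + a2*kg2              ; they remain phase-aligned
endin
```

If kfade moves over seconds rather than samples, the fade itself introduces
no audible sidebands, which is Anthony's point about slow crossfades not
aliasing.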

Are you trying to accomplish wave sequencing (a la Korg Wavestation) ??
This technique typically involves using a series of single-cycle waveforms
that are spliced or "crossfaded" together one after another with each
waveform only being played for one to a few cycles.

Anthony Kozar
mailing-lists-1001 AT anthonykozar DOT net
http://anthonykozar.net/


Tim Mortimer wrote on 1/23/08 4:50 AM:

> Anthony K i'm still keen to hear if you felt the need to syncronise
> waveshaping crossfades with 0 crossings of the phase position or not..
> (there were udo versions of that process around i believe, must go off & try
> & dig those up...)


Date: 2008-01-24 09:53
From: Tim Mortimer
Subject: [Csnd] Re: Re: phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
Not to worry, Anthony - you have answered my question, & the problems I
envisaged might arise so far have not...

Moving on to the issue of a pitch-shifting delay with feedback... that's
proving a not-so-easy thing to nail. More posting on that possibly to follow
(might poke around a bit more first...)

Thank you for your response. It is, as always, much appreciated...

T



Date: 2008-01-24 15:30
From: aaron@akjmusic.com
Subject: [Csnd] wave sequencing
Attachments: None

Date: 2008-01-24 17:16
From: John Lato
Subject: [Csnd] Re: wave sequencing
One wavestation implementation in csound (written by Russell Pinkston) can be found 
at http://ems.music.utexas.edu/program/mus329j/ClassStuff/wavestat.html

It's pretty simple.  4 oscillators crossfaded together (randomly).  It's pretty easy 
to change from random crossfades to user-controlled values; some mechanism to produce 
XY coordinates is the usual controller.  I've written implementations that crossfade 
sample playback instead of single-cycle waveforms using the same basic principles.

John W. Lato
School of Music
The University of Texas at Austin
1 University Station E3100
Austin, TX 78712-0435
(512) 232-2090

aaron@akjmusic.com wrote:
> Quoting Anthony Kozar :
>>
>> Are you trying to accomplish wave sequencing (a la Korg Wavestation) ??
>> This technique typically involves using a series of single-cycle 
>> waveforms
>> that are spliced or "crossfaded" together one after another with each
>> waveform only being played for one to a few cycles.
> 
> This would be cool...how is it done in csound...clocks and table reads?
> 
> how would you 'chain-trigger' the single cycles?
> 
> -AKJ
> 

Date: 2008-01-24 18:21
From: Anthony Kozar
Subject: [Csnd] Re: The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
aaron@akjmusic.com wrote on 1/23/08 11:17 AM:

> so anything we  can do in CSound (or the like) to achieve anything
> remotely close is  going to:
> 
> 1) involve samples, and/or:
> 2) involve resynthesis, and/or:
> 3) involve lots of work and a complex setup.

Hmmm ... not sure that I agree with this list ... see below.

> The ultimate example of complexity--a bow driving a string. Or
> something I thought about today as my daughter was playing in my
> nightstand--a wooden cup with a lid filled with pennies that she likes
> to shake. Just imagine synthesizing *that* with an adsyn bank...

I think that adsyn would certainly be the wrong tool for this task.  See
instead the PhISM opcodes based on techniques developed by Perry Cook.
(bamboo, crunch, shaker, sleighbells, etc.)

> My current interest--and please chime in, any gurus who know the
> secrets!--is to recreate dynamically evolving instruments, using
> samples, but showing no looping artifacts. [...]
> But there's the rub--in 'hiding'
> the loop artifacts by overlaps, I got an unrealistic, unwanted
> (although attractive in it's own right) chorusing effect, and my
> question of how to realistic reproduce a *single* instrument sound
> that dynamically changes while having no artifacts of looping remains
> an open one.

These are generally considered inherent limitations in sample-playback
techniques -- and became the impetus for much research in physical modeling
instead.  Acoustic instruments ARE very complex as you pointed out and no
sample-looping, additive synthesis, or other analysis-resynthesis techniques
can compensate for the fact that a sample is just one static snapshot of how
an instrument responded to one set of "input parameters."  The attack is
typically full of all kinds of transients that you do not want to
pitch-shift or time-stretch.  And these transients vary greatly depending on
articulation.  The "steady-state tone" of a real instrument is usually
anything but a steady harmonic spectrum.

Have you tried any of the Csound physical models that are floating around
out there?

> if the manual's demo .csd files are an illustration of their greatest
> power, I remain disappointed.

I would instead assume that the manual examples illustrate the least power
of an opcode.  They are typically designed for maximum clarity and show the
simplest, functional manner of using an opcode.
 
> One obvious way to do achieve my goal of realism is to multisample the
> hell out of the instrument, and have *no* looping, but just 'diskin' a
> sustained sound for every sample.... (This is the 'mellotron'
> approach ...)

I love mellotrons, but a mellotron flute always plays 'G' with exactly the
same nuances.  Again, samples are probably not the ultimate solution for
"dynamic realism".

I don't typically try to imitate acoustic instruments with Csound (or
conventional keyboard synthesizers for that matter).  I am more interested
in synthesizing unique sounds, so I cannot really help with any Csound
examples of realistic imitations.  The Horner/Ayers book and the work of
Perry Cook is where I would start my research if I wanted to imitate
acoustic instruments.
 
Anthony Kozar
mailing-lists-1001 AT anthonykozar DOT net
http://anthonykozar.net/


Date: 2008-01-24 21:34
From: Anthony Kozar
Subject: [Csnd] Re: Re: wave sequencing
Attachments: tables, tablesequencing.orc, tablesequencing.py, tablesequencing.sco, waveseq.csd
Thanks very much for this example, John.  I love swirly sounds like that,
and I was surprised at how simple it is to achieve a nice effect just by
crossfading between partials as that example does.

So, that orchestra is a great example of what Korg called "Vector
Synthesis".  VS is just crossfading between four sources set up as the four
corners of a 2D square.  (Which, I believe, limits the combinations that you
can have -- four parameters would be needed for the most flexibility).  Neat
suggestion:  use the xyin opcode in Csound to turn your mouse into a
"joystick" to control the crossfading in real time.
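Anthony's xyin suggestion might look something like this (a bilinear mix of
four corner tables; the table numbers and frequency are placeholders):

```csound
instr 1
  kx, ky xyin 0.01, 0, 1, 0, 1, 0.5, 0.5   ; mouse as "joystick"
  a1 oscili 0.2, 220, 1                    ; corner A (x=0, y=0)
  a2 oscili 0.2, 220, 2                    ; corner B (x=1, y=0)
  a3 oscili 0.2, 220, 3                    ; corner C (x=0, y=1)
  a4 oscili 0.2, 220, 4                    ; corner D (x=1, y=1)
  ; bilinear (vector) mix of the four corners
  out a1*(1-kx)*(1-ky) + a2*kx*(1-ky) + a3*(1-kx)*ky + a4*kx*ky
endin
```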

IIRC though, VS is just one of the ideas implemented by the Korg
Wavestation.  The other is wave sequencing synthesis.  Each of the four
sound sources in a Wavestation patch could be an oscillator that sequenced
multiple waveforms, one after another.  I _think_ they implemented it as a
dual oscillator crossfading between two tables -- as soon as one table
finished fading out, that half of the oscillator could start reading a
different table and then fade back in as the other half faded out.

This implementation of wave sequencing avoids aliasing but I am not sure
that it got down to sequencing the waveforms at the level of a single cycle
per table.  The Casio CZ series of synths actually had an option for
choosing two waveforms that would alternate on a per cycle basis.  This
usually results in a "suboctave" effect -- the two waveforms are perceived
as a single periodic waveform an octave lower.  I believe that Waldorf
synths also did some wave sequencing.

I would like to experiment with applying the per cycle sequencing technique
to much longer sequences of waveforms.  I would also be happy just to
imitate the Wavestation's idea of wave sequencing.  However, I am not sure
that there are any existing Csound opcodes that are up to this task  (unless
you run at sr = kr;  Csound can do almost anything then ;)

Possible opcodes for experimenting would be tablekt, tableikt, and tablexkt.
The problem is that the table number changes at k-rate, when I would want it
to change exactly at the frequency of the oscillator.  tableimix could also
probably be used to splice together several other tables.  GEN18 might be an
easier method for splicing tables.
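As a sketch of the GEN18 splicing idea (the index ranges and source tables
here are arbitrary): three single-cycle tables are written end to end into
one larger table, so a single oscili read sequences them once per composite
cycle. This also produces the "suboctave" effect mentioned above, since the
composite period is longer than any one source cycle.

```csound
; f1-f3: any three single-cycle waveforms, e.g. sine, square, saw.
; GEN18 args repeat as: source-fn, amplitude, start index, finish index.
f 10 0 16384 18  1 1 0 5460  2 1 5461 10921  3 1 10922 16383
```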

I'm attaching two not-so-great experiments that I made several years ago
using tableikt and GEN18.  Ultimately, I may want a new wave sequencing
opcode for maximum flexibility.  Perhaps I will write one for Csound 5.09
...

Anthony Kozar
mailing-lists-1001 AT anthonykozar DOT net
http://anthonykozar.net/

John Lato wrote on 1/24/08 12:16 PM:

> One wavestation implementation in csound (written by Russell Pinkston) can be
> found 
> at http://ems.music.utexas.edu/program/mus329j/ClassStuff/wavestat.html
> 
> It's pretty simple.  4 oscillators crossfaded together (randomly).  It's
> pretty easy 
> to change from random crossfades to user-controlled values; some mechanism to
> produce 
> XY coordinates is the usual controller.  I've written implementations that
> crossfade 
> sample playback instead of single-cycle waveforms using the same basic
> principles.
> 
> John W. Lato
> School of Music
> The University of Texas at Austin
> 1 University Station E3100
> Austin, TX 78712-0435
> (512) 232-2090
> 
> aaron@akjmusic.com wrote:
>> Quoting Anthony Kozar :
>>> 
>>> Are you trying to accomplish wave sequencing (a la Korg Wavestation) ??
>>> This technique typically involves using a series of single-cycle
>>> waveforms
>>> that are spliced or "crossfaded" together one after another with each
>>> waveform only being played for one to a few cycles.
>> 
>> This would be cool...how is it done in csound...clocks and table reads?
>> 
>> how would you 'chain-trigger' the single cycles?
>> 
>> -AKJ


Date: 2008-01-24 22:02
From: Tim Mortimer
Subject: [Csnd] Re: The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
Aha! BUT....(with regard to below...)

In the case of analysis for resynthesis (again using the Loris paradigm as
an example...), if you have a fundamental, partials &/or resonant nodes, &
a noise residual factor, it could (& should) be possible to at least "model
subjectively" how changes in dynamics or articulation or playback pitch MIGHT
alter the "static snapshot" of the sound you have made (not to mention small
stochastic variations to the data upon each "call for playback"..)

Basically what I'm continually harping on about is the desired capacity to
apply this principle to pvx files, ATS data, SDIF-based resynthesis etc. etc.
(as long as you can meaningfully separate the parts of the spectrum of your
analysis into the constituent representative classifications ... fundamental,
partials or resonants (admittedly the most subjective &/or tricky bit..), &
residual)... Basically my understanding is that all analysis & resynthesis
techniques are ultimately concerned with one form of spectral analysis or
other - it's just that some deal "directly" with this "classification" issue,
& some do not....

Using, say, Python & a text representation of any given spectral analysis, it
should be possible to apply this classification to any type of analysis data
(I have written some experiments to this effect on SDIF data obtained via
SPEAR, for example, using Python.... & also on GEN43 (??) data from pvx
files...)

If the data "behaved nicely", you could even look at interpolating pitches
for a "multisample" style of playback instrument based on only a handful of
analysis samples to begin with (like you might have a koto playing at c3, c4,
& c5, & that's it).

Sure it wouldn't be acoustically perfect - but it would be dynamic, &
subject to greater articulation (& variation) during performance....





Anthony Kozar-2 wrote:
> 
> 
> Acoustic instruments ARE very complex as you pointed out and no
> sample-looping, additive synthesis, or other analysis-resynthesis
> techniques
> can compensate for the fact that a sample is just one static snapshot of
> how
> an instrument responded to one set of "input parameters."  
> 
> 



Date: 2008-01-24 22:24
From: Steven Yi
Subject: [Csnd] Re: Re: Re: wave sequencing
Attachments: None

Date: 2008-01-24 22:25
From: Tim Mortimer
Subject: [Csnd] Re: The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
The dream, therefore, is a situation where analysis for resynthesis becomes a
kind of series of discrete plots in a "3-dimensional sonic continuum" of
time, amplitude, & fundamental frequency (known as "the instrument").

Information is continually interpolated from this field of discrete, known
"analysis points", & data can be inferred anywhere on the map to create a
changing spectral content not only across time (the current "frame" of
reference .. nice pun there) but also by modulating dynamics &/or pitch, so
that an endless, dynamically expressive resynth continuum is achievable &
interpolatable - not to mention also including access to some extended &
hopefully quite impressive-sounding glissando effects....





Tim Mortimer wrote:
> 
> Aha! BUT....(with regard to below...)
> 
> in the case of analysis for resynthesis (again using the LORIS paradigm as
> an example...) if you have a fundamental, partials & / or resonant nodes,
> & a noise residual factor, it could (& should) be possible to at least
> "model subjectively" how changes in dynamic or articulation or playback
> pitch MIGHT alter the "static snapshot" of the sound you have made (not to
> mention small stochastic variations to the data upon each "call for
> playback"..)
> 
> basically what i'm continually harping on about is the desired capacity to
> apply this principle to pvx files, ats data, sdif based resynthesis etc
> etc (as long as you can meaningfully separate the parts of the spectrum of
> your analysis into the consituent representative classifications ...
> fundamental, partials or resonants (addmittedly the most subjective &/ or
> tricky bit..) & residual... basically my understanding is that all
> analysis & resynthesis techniques are ultimately concerned with one form
> of spectral analysis or other - it's just some deal "directly" with this
> "classification" issue, & some do not.... 
> 
> Using say python & a txt represeantation of any given spectral analysis,
> it should be possible to apply this classification to any type of analysis
> data (i have written some experiments to this effect on SDIF data obtained
> via spear for example using python.... & also on GEN43 (??) data from pvx
> files... 
> 
> if the data "behaved nicely", you could even look at interpolating pitches
> for a "multisample" style of playback instrument based on only a handful
> of analysis samples to begin with (like u might have a koto playing at c3,
> c4, & c5, & that's it)
> 
> sure it wouldn't be acoustically perfect - but it would be dynamic, &
> subject to greater articulation (& variation) during performance....
> 
> 
> 
> 
> 
> Anthony Kozar-2 wrote:
>> 
>> 
>> Acoustic instruments ARE very complex as you pointed out and no
>> sample-looping, additive synthesis, or other analysis-resynthesis
>> techniques
>> can compensate for the fact that a sample is just one static snapshot of
>> how
>> an instrument responded to one set of "input parameters."  
>> 
>> 
> 
> 



Date: 2008-01-25 00:41
From: aaron@akjmusic.com
Subject: [Csnd] Re: Re: The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
Attachments: None

Date: 2008-01-25 05:23
From: Anthony Kozar
Subject: [Csnd] Re: Re: Re: The 'big picture' of Tim M's question--was: 'phase-synced crossfading, etc.'
aaron@akjmusic.com wrote on 1/24/08 7:41 PM:

> Quoting Anthony Kozar :

>> Have you tried any of the Csound physical models that are floating around
>> out there?
> 
> Yes, and ditto, I'd love to see who has put enough faith in them to
> work with them in a large scale musical context...the most impressive
> quasi-realistic acoustic instrument work I've seen is still mostly
> sample-based to my knowledge. what you say about transients is true,
> so I should correct myself--multisamples. Obviously, using one sample
> for the range of an entire keyboard does not give anything close to
> realism, although some of the sounds might have interesting
> characteristics in their own right.

I was assuming multiple samples for pitch but not necessarily dynamics and
articulation.  One of the best examples of physical modeling work that I
have heard so far that was done with Csound, is a nice imitation of a
trombone in some compositional sketches that Michael Mossey wrote.

> The Horner/Ayers 'horn chapter' has the best sounding example in the
> whole 'CSound Book' (the Strauss "Eulenspiegel" excerpt) if you ask
> me...musical, warm, and not jarring in the least. And nice realism to
> boot.

They have written an entire book that expands on that work, I believe.  (I
have never read it).  "Cooking with Csound, Part 1"

Anthony Kozar
mailing-lists-1001 AT anthonykozar DOT net
http://anthonykozar.net/




Date: 2008-01-25 09:14
From: Oeyvind Brandtsegg
Subject: [Csnd] Re: Re: Re: wave sequencing
Attachments: None

Date: 2008-01-28 10:34
From: Oeyvind Brandtsegg
Subject: [Csnd] Re: phase synced crossfade synthesis without aliasing - how? ( Gen30 ?)
Attachments: None