Hi List and Dr. Dobson,

In my recent work I have come across the paradigm of creating a
continuum between endpoint stimuli in experimental procedures, using
synthetic sounds as the endpoints.

I'm specifically wondering: in the abstract, what are the major
differences between linear predictive coding (LPC) analysis with
pitch-synchronous overlap-add (PSOLA) resynthesis and the streaming
spectral phase vocoder analysis/resynthesis as it is implemented in
Csound today?

Many people use LPC/PSOLA, but I know from musical experience that the
PVS/PVX format sounds much better. I'm trying to get a better idea of
why this is so; any scholarly papers, websites, or similar online
resources would be greatly appreciated!


Thank you for your time and consideration,

David Akbari