Hi All,

Apologies, an emergency at my job has required me to keep working this
evening, and since it is already 45 minutes past the start of the meeting
I don't think I will be able to attend and report back. Hopefully someone
else who went can report!

steven

On Thu, Dec 4, 2008 at 12:50 PM, Steven Yi wrote:
> BTW: I just came across this library, CUDA, for using GPUs:
>
> http://www.nvidia.com/object/cuda_get.html?CMP=KNC-CUD-K-GOOG&gclid=CKiyuPjjp5cCFQhJagodaAMq-w
>
> This page:
>
> http://www.nvidia.com/object/cuda_learn_products.html
>
> mentions "audio", so it may be worth looking into. It is Nvidia-specific,
> but there seem to be enough products to make it more than a niche
> hardware platform.
>
> steven
>
> On Thu, Dec 4, 2008 at 12:16 PM, Steven Yi wrote:
>> Hi Richard,
>>
>> I think again the key term is realtime, as we could certainly model a
>> room now, just that we would be waiting a long time for the results!
>> For me, the main concern is whether a sound is capable of being
>> produced at all; whether I have to wait for it is somewhat secondary.
>> For example, I could use the Sliding Phase Vocoder now; I might have
>> to wait for it, but ultimately I would get that sound. That is how I
>> interpreted your question about new sounds and new processes.
>>
>> Don't get me wrong, though: I think the practicality of processing in
>> realtime is very important, otherwise I wouldn't have bothered working
>> on the initial naive implementation of multicore support that is in
>> Csound now. It would certainly help make some of the things I'm
>> interested in more practical to work with when composing. So I do
>> agree with Michael that it is not to be underestimated.
>>
>> From Victor's email and how that part of the presentation is described
>> in the announcement, I'm not expecting much detail about parallel
>> processing and music, but rather an overview of what the lab is all
>> about. It looks like the audio section of the lab's site is here:
>>
>> http://parlab.eecs.berkeley.edu/applications/hearing.html
>>
>> and it hints at what they are focusing on. Anyway, I'll be taking
>> notes!
>>
>> steven
>>
>>
>> On Thu, Dec 4, 2008 at 1:51 AM, Richard Dobson wrote:
>>> Steven Yi wrote:
>>>>
>>>> Well, I think I will be able to attend this meeting as it's maybe
>>>> 8-10 blocks' walking distance away. =) Though, I think we already
>>>> know that parallel processing will simply provide for more
>>>> processing at one time and that's about it. Since parallel
>>>> algorithms can just as easily be computed on a single processor,
>>>
>>> There are plenty of algorithms out there that are beyond what a
>>> single processor, even with a 6GHz (or 20GHz, for that matter) clock
>>> speed, can do in real time with low power consumption. The Sliding
>>> Phase Vocoder now in Csound is but one example.
>>>
>>> I have approached the issue somewhat differently, making a
>>> distinction between mere parallel processing per se and what we have
>>> called "High-Performance Audio Computing" (HiPAC), which considers
>>> what new things we can do with a ~lot~ more processing power than we
>>> have now. It so happens that as Moore's law reaches its limit, it is
>>> being replaced by a new measure related to multi-core processing in a
>>> number of forms, not least massive SIMD-style vector accelerators
>>> (e.g. Clearspeed, GPGPU, etc.); hence achieving most of the goals of
>>> HiPAC inevitably means employing large-scale vector acceleration (and
>>> quite possibly "conventional" multi-core processing too). These
>>> architectures are ideal for computing FFTs, FIRs, 2D and 3D meshes,
>>> and other "embarrassingly parallel" algorithms. There is a lot more
>>> to this topic than just running multiple Csound instruments
>>> simultaneously.
>>>
>>> So, parallel processing in one form or another is the likely means,
>>> but not the end. The end (IMO) is a lot more processing power, not
>>> simply to do more of what we can already do, but to do NEW things
>>> that until now have been prohibitively demanding computationally
>>> (e.g. full-bandwidth room modelling in real time - high frequencies
>>> demand many more nodes, so they tend to be avoided; the published
>>> mesh-based room models stop around 4kHz, or even lower).
>>>
>>> That's how I am looking at things, anyway. The powers that be at
>>> Berkeley may well look at things very differently, or at different
>>> things altogether.
>>>
>>> Richard Dobson
>>>
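
As a rough sketch of the kind of "embarrassingly parallel" audio work
mentioned above (FFTs, FIRs and the like on a GPU via CUDA), a naive
time-domain FIR filter might look like the following. This is only an
illustration under assumed names and buffer sizes; it is not taken from
Csound or from the HiPAC work discussed in the thread.

    // Minimal CUDA sketch: naive time-domain FIR, one output sample per
    // thread. Names and sizes are illustrative assumptions.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void fir_kernel(const float *in, const float *coefs,
                               float *out, int nsamps, int ntaps)
    {
        int n = blockIdx.x * blockDim.x + threadIdx.x;
        if (n >= nsamps) return;
        float acc = 0.0f;
        for (int k = 0; k < ntaps; k++) {
            int idx = n - k;
            acc += (idx >= 0) ? coefs[k] * in[idx] : 0.0f; // zero-padded history
        }
        out[n] = acc;
    }

    int main(void)
    {
        const int nsamps = 1 << 16;   // one block of audio
        const int ntaps  = 64;        // FIR length
        float *in, *coefs, *out;

        // Unified memory keeps the example short; a realtime system
        // would stream buffers explicitly.
        cudaMallocManaged(&in, nsamps * sizeof(float));
        cudaMallocManaged(&coefs, ntaps * sizeof(float));
        cudaMallocManaged(&out, nsamps * sizeof(float));

        for (int i = 0; i < nsamps; i++) in[i] = (i == 0) ? 1.0f : 0.0f; // impulse
        for (int k = 0; k < ntaps; k++)  coefs[k] = 1.0f / ntaps;        // moving average

        int threads = 256;
        int blocks  = (nsamps + threads - 1) / threads;
        fir_kernel<<<blocks, threads>>>(in, coefs, out, nsamps, ntaps);
        cudaDeviceSynchronize();

        printf("out[0] = %f\n", out[0]);  // expect 1/ntaps for the impulse input

        cudaFree(in); cudaFree(coefs); cudaFree(out);
        return 0;
    }

Each thread computes one output sample independently of the others, which
is exactly the data-parallel pattern that suits GPUs and other SIMD-style
vector accelerators.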