
Re: [Cs-dev] Second Draft of 6.05 release notes

Date: 2015-04-23 15:25
From: Art Hunkins
Subject: Re: [Cs-dev] Second Draft of 6.05 release notes
Exciter opcode now included in Android distribution.

Art Hunkins

----- Original Message ----- 
From: "jpff" 
To: 
Sent: Thursday, April 23, 2015 7:34 AM
Subject: [Cs-dev] Second Draft of 6.05 release notes


> Please read/check.  I have added words on srconv and sndload for
> example.  Are they right?
> ==John ffitch
> ========================================================================
>
> ============================
> CSOUND VERSION 6.05
> RELEASE NOTES   VERSION 6.05
> ============================
>
> As ever there are new facilities and numerous bug fixes.  A major part of
> this release is the removal of a number of memory leaks and of
> excessive memory use.  Naturally these changes are all but invisible,
> apart from a smaller memory footprint.
>
> Note that we track bugs and requests for enhancements via the github
> issues system, and these had a significant effect on this release.
>
> The Developers
>
>
>
> USER-LEVEL CHANGES
> ==================
>
> New opcodes:
>
>    o    **None**
>
>    o    The opcode sndload is now deprecated
>
> New Gen and Macros:
>
>    o    Paul Octavian Nasca's padsynth algorithm implemented as a gen.
>
> Orchestra:
>
> Score:
>
>    o    Fixed string location calculation bug when processing score
>         lines [fixes #443]
>
> Options:
>
>    o    A short-format copyright option is available, with a fixed
>         number of well-known licences (CC, etc)
>
>    o    New command-line option to report MIDI devices in simple
>         format
>
>    o    New command-line option to set ksmps
>
>
> Modified Opcodes and Gens:
>
>  o    adsynt handles amplitude changes better
>
>  o    sfont has better checking for corruptions
>
>  o    better checking in physical models for out-of-range frequencies
>
>  o    ftgenonce and others now allow string parameters
>
>  o    gausstrig reworked and extended with new features
>
>  o    use of the p() function no longer triggers the pcnt warning
>
>  o    fix to midirecv
>
>  o    OSCsend now cleans up properly after use
>
>  o    fillarray is limited to 1- or 2-dimensional arrays; previously it
>       failed silently for 3D and higher.
>
>  o    oscbnk now works when the equaliser is used.
>
>  o    mp3in now works with both mono and stereo input files
>
>  o    flooper & flooper2 now allow stereo tables
>
>  o    Release phase of expsegr fixed
>
>  o    f-tables created with a large number of arguments could overwrite
>       memory; now fixed
>
>  o    performance of plltrack improved
>
>  o    init of arrays clarified and checked
>
>  o    gen23 corrected to stop an infinite loop
>
>  o    alwayson now starts from score offset; this is part of a fix to
>       the long-standing problem with alwayson in CsoundVST
>
>  o    invalue now checks for output string size and reallocates
>       memory if smaller than default string size (set at 256 bytes
>       for backwards compatibility)
>
> Utilities:
>
>  o    The srconv utility has been improved but still does not work
>       well, producing bursts of noise in otherwise good output.  We
>       recommend Erik de Castro Lopo's Secret Rabbit Code (aka
>       libsamplerate) instead, as it provides high-quality sample
>       rate conversion.  srconv will be removed shortly, possibly to
>       be replaced by an SRC-based utility.
>
>
> Frontends:
>
>  pnacl: added interface to allow the use of Csound's MIDI input system.
>         fixed audio input to conform to the latest Pepper API spec.
>
>
>  icsound:
>
>  csound~:
>
>  Emscripten:
>
>  csdebugger:
>
>
> General usage:
>
>
> Bugs fixed:
>
>    o   bugs in fastabi, oscktp, phasorbnk, adsr, xadsr and hrtfer fixed
>
>    o   bugs in harmon, harmon2, harmon3 and harmon4 fixed
>
>    o   Csound could crash after a parsing error, a case now removed
>
> ====================
> SYSTEM LEVEL CHANGES
> ====================
>
> System changes:
>
>    o    There are now checks that xin/xout types match those defined
>         as part of UDO definition.
>
>    o    jack now has a timeout
>
>
> Internal changes:
>
>    * Many defects indicated by Coverity fixed or code changed; this
>      should make Csound more robust in edge cases.
>
>    * Parser-related changes simplify allocation of temporary
>      variables, with some new optimisations.
>
>    * Code for multi-threaded rendering improved and stabilised
>      with respect to redefinition of instruments.
>
> API
> ===
>    *
>
> Platform Specific
> =================
>
> iOS
> ---
>
>    * fixed audio callback to work correctly with Lightning output and
>    Apple TV.
>
>
> Android
> -------
>
>    * New experimental audio IO mode: csoundPerformKsmps() is called
>    from the OpenSL ES output callback. This mode can be optionally
>    enabled by passing a value of "false" to a new second parameter to
>    the CsoundObj constructor (bool isAsync).  The default constructor
>    and the one-parameter constructor set this to "true" (keeping
>    backwards compatibility with existing code).
>
>    * The OSC opcodes are included in distribution.
>
>    * There are new file open and save dialogs that permit the user to
>    access the SD card on the device, if there is one, in addition to
>    internal storage.
>
>    * There is a new "Save as..." button that permits the user to save
>    the csd as a new file with a new name.
>
>    * Many of the examples in the archive of Android examples are now
>    built into the app and can be run from the app's menu.
>
>
>
> Windows
> -------
>
> OSX
> ---
>       Installation now places csladspa.so rather than csladspa.dylib
>       on disk.
>
> Linux
> -----
>        Linux is now built without FLTK threads.  This removes system
>        hangs and is in line with other builds.
>
>
> ========================================================================
>
>
> ------------------------------------------------------------------------------
> BPM Camp - Free Virtual Workshop May 6th at 10am PDT/1PM EDT
> Develop your own process in accordance with the BPMN 2 standard
> Learn Process modeling best practices with Bonita BPM through live 
> exercises
> http://www.bonitasoft.com/be-part-of-it/events/bpm-camp-virtual-event?utm_source=Sourceforge_BPM_Camp_5_6_15&utm_medium=email&utm_campaign=VA_SF
> _______________________________________________
> Csound-devel mailing list
> Csound-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/csound-devel 



Date: 2015-04-23 15:58
From: Michael Gogins
Subject: Re: [Cs-dev] Second Draft of 6.05 release notes
Correct.

Regards,
Mike


-----------------------------------------------------
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com

On Thu, Apr 23, 2015 at 10:25 AM, Art Hunkins <abhunkin@uncg.edu> wrote:
Exciter opcode now included in Android distribution.

Art Hunkins

----- Original Message -----
From: "jpff" <jpff@codemist.co.uk>
To: <csound-devel@lists.sourceforge.net>
Sent: Thursday, April 23, 2015 7:34 AM
Subject: [Cs-dev] Second Draft of 6.05 release notes




Date: 2015-04-24 12:40
From: Anders Genell
Subject: [Cs-dev] OT: network synced audio
Dear devs!

My previous attempt to send a mail very similar to this one was lost in the bitwise limbo somewhere. If you actually did receive it and are now faced with the same drivel once more, I sincerely apologize.

There were some questions about syncing audio over network recently and it has kept me thinking. 

I have a Sonos system at home with two small speakers and a sub in a 2.1 setup. Each speaker is an independent wireless audio player in its own right, but in the controller software they can be grouped together as a whole. In order to have functioning stereo, the stream in each speaker must be closely synced to the other. I wonder how they might achieve that.

Or, more importantly, how can it be achieved using e.g. two raspberry pi?

My current interest is to make a multi-channel sound recording system where each channel (or pair of channels) is wirelessly synced to all the others, so that a multichannel file can be created after the recording session is finished.

I was thinking that it e.g. should be possible to sync the beginning of a buffer to a certain time stamp, or something...

The reason I send this to the csound dev list is that you are the most clever and experienced audio related devs I know of. 

Regards,
Anders
------------------------------------------------------------------------------
One dashboard for servers and applications across Physical-Virtual-Cloud 
Widest out-of-the-box monitoring support with 50+ applications
Performance metrics, stats and reports that give you Actionable Insights
Deep dive visibility with transaction tracing using APM Insight.
http://ad.doubleclick.net/ddm/clk/290420510;117567292;y
_______________________________________________
Csound-devel mailing list
Csound-devel@lists.sourceforge.net

Date: 2015-04-24 17:25
From: Justin Smith
Subject: Re: [Cs-dev] OT: network synced audio
The problem with synchronizing to a time stamp is that knowing the actual time to any accuracy is a hard problem. In fact the difference between the time on any two computers that are supposedly synchronized to the same external source is typically going to be greater than the phase drift between two speakers you are optimistically sending data to over wifi. http://en.wikipedia.org/wiki/Clock_synchronization

Wireless adds some hard problems here. For example you could get everything calibrated for two speakers to be in phase, but a cat sits down between a speaker and its wireless transmitter, leading to increased data loss and thus larger network delay (assuming use of TCP where you get added delay rather than simply dropped audio data). Or one of the speakers is closer than the other to a running microwave oven (these operate on the same frequency range as wifi).

I'm sure one could put together a control system that tracks and corrects for phase drift via a pair of mics in the center, but I'd think the added mic installation would negate any benefit the wireless speaker system provides over a wired one (not to mention the complexity of implementing the control system). A pingback system to track wireless channel latency would be less reliable but also less intrusive, though it would potentially require modification to the firmware on the speakers.



Date: 2015-04-25 22:35
From: Pete Goodeve
Subject: Re: [Cs-dev] OT: network synced audio

Date: 2015-04-25 23:22
From: Michael Gogins
Subject: Re: [Cs-dev] OT: network synced audio
I have some professional experience here, not much, but some. 

Without taking special measures you will not get adequate sync.

If you use the Windows timestamp project system you can get sync within a millisecond or so. This may not be enough for audio, or it may be enough, depending on your requirements. If you need sync for MIDI or other control data this should be good enough.

I think other systems of implementing sync on Windows would do no better than the Windows timestamp project.

Regards,
Mike


-----------------------------------------------------
Michael Gogins
Irreducible Productions
http://michaelgogins.tumblr.com
Michael dot Gogins at gmail dot com

On Sat, Apr 25, 2015 at 5:35 PM, Pete Goodeve <pete.goodeve@computer.org> wrote:
I thought I'd chip in here, even though I'm certainly no expert (!), but
I'm not nearly as pessimistic as Justin...

I'd think it should be easy to get sub-millisecond sync on a local
wifi net using standard NTP.  The ntp site talks about a typical
100 microsec uncertainty in such a situation.

Further, if you have two 'clients' with parallel routes to their NTP
master (as would be the case on a local wifi) their various delays
should be similar, so their agreement would probably be even
closer, even if they both had some offset from the master.

The Raspbian on my Pi has full NTP installed by default, so presumably
one could set up one's own timing network between several of them.
And I guess one could get a good check of their actual sync by
cross-connecting GPIO pins and comparing time stamps on pulses between
them.
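Pete's point can be made concrete. Below is a minimal sketch of the four-timestamp offset/delay calculation that NTP-style protocols use (a simplified model only; real NTP filters and disciplines the clock over many such exchanges, and the function name here is just illustrative):

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """Estimate clock offset and round-trip delay from one exchange.

    t0: client send time     (client clock)
    t1: server receive time  (server clock)
    t2: server reply time    (server clock)
    t3: client receive time  (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0  # how far the server clock is ahead
    delay = (t3 - t0) - (t2 - t1)           # total network round-trip time
    return offset, delay

# Server clock 5 ms ahead, 2 ms one-way latency in each direction:
offset, delay = ntp_offset_and_delay(100.000, 100.007, 100.008, 100.005)
# offset ≈ 0.005 s, delay ≈ 0.004 s
```

The offset estimate is exact only when the two one-way latencies are equal; asymmetric paths bias it, which is why similar routes to the master (as noted above) help two clients agree closely.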

My thoughts, anyway.

        -- Pete --





Date: 2015-04-26 09:55
From: Anders Genell
Subject: Re: [Cs-dev] OT: network synced audio
Thank you all for these answers!

I suppose achieving good enough sync is possible, as Sonos (www.sonos.com) manages to do so. There is very little technical information available about the Sonos system, so how they do it is not easily known, unless someone knows someone on the inside. Thus I don't know to what degree of time resolution the individual channels are in sync, but listening to music I know well from other setups, the stereo pair is at least very convincing to my ears. Come to think of it, I should try some mono material. It should then be two identical streams in the speakers, keeping the sound image dead centered. If there is random stream sync variation, it should be revealed as random off-axis positioning. In an anechoic room the directional resolution of hearing around the "straight forward" direction is about one degree for suitable sounds (broad band, short duration). That would correspond to an interaural time difference of about 20 μs for a smallish head (0.2 meters between the ears). I'm not sure that kind of resolution is needed for a home stereo system, but it says something about what ideal sync criterion we're looking at.
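As a rough cross-check of the numbers above (purely illustrative, using the figures quoted in the paragraph), the simplest straight-line path-difference model dt = d*sin(theta)/c gives on the order of 10 µs per degree for 0.2 m ear spacing, the same order of magnitude as the ~20 µs estimate; different head models vary by roughly a factor of two:

```python
import math

def itd_seconds(angle_deg, ear_distance_m=0.2, speed_of_sound_m_s=343.0):
    """Interaural time difference for a source angle_deg off centre,
    using the simple straight-line path-difference model."""
    return ear_distance_m * math.sin(math.radians(angle_deg)) / speed_of_sound_m_s

# One degree off axis, 0.2 m between the ears:
print(itd_seconds(1.0) * 1e6)  # roughly 10 microseconds
```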


As you say, Pete, using NTP should be possible in some way. If, for example, one machine regularly syncs its system clock to an official NTP server and acts as a local NTP server for the rest of the machines, it should be possible to keep the system clocks in good sync. So the question is how to sync an audio stream to the system clock. Does the audio stream have to sync to a sound card clock? Can the sound card clock be synced to the system clock? Can the (recorded) stream be synced to the system clock independently of the sound card clock?
Maybe for my use case there could be some kind of time code included in the resulting audio file, so that individual files from different RPis could be merged into a synced multi-channel file?

Regards,
Anders






Date: 2015-04-26 22:13
From: pete.goodeve@computer.org
Subject: Re: [Cs-dev] OT: network synced audio

Date: 2015-04-27 08:53
From: Anders Genell
Subject: Re: [Cs-dev] OT: network synced audio


On Sun, Apr 26, 2015 at 11:13 PM, <pete.goodeve@computer.org> wrote:
On Sun, Apr 26, 2015 at 10:55:42AM +0200, Anders Genell wrote:
> [....]
>
> As you say, Pete, using NTP should be possible in some way. If e.g. having one machine regularly synching its system clock to an official NTP server, and setting it up as local NTP server and having the rest of the machines synch to it, it should be possible to keep the system clocks in good synch.

This is sort of what I was thinking.

> So the question is how to sync an audio stream to the system clock? Does the audio stream have to sync to a sound card clock? Can the sound card clock be synced to system clock? Can the (recorded) stream be synced to the system clock independently of the sound card clock?

I hadn't considered this!  I guess most (all?) audio cards have their own sample clock,
and recording/playback is synced to this rather than the system clock.  You'd have
to account for drift between the two.  (Though I did see a reference somewhere that
pro studio systems often have a means of using an external clock for sampling,
so I suppose this could be synced somehow.)

The Haiku OS that I'm most familiar with has an elaborate scheme for translating
between "Performance Time" (the sample clock) and "Real Time" (the system).
Buffers are passed around as events with a timestamp, so that should be translatable
to the common system time.  Don't know what other OSs do.

Aha!
Interesting! That sounds almost exactly like what I want...
Haven't ever tried Haiku OS, but maybe it's time.

 

> Maybe for my use case there could be some kind of time code included in the resulting audio file so that individual files from different RPi:s could be merged into a synced multi-channel file?

This sounds like a good way.  Shouldn't be hard to devise a suitable format.
You might have to do some heavy interpolation in the post-processing if the
frame rates (from the original sample clocks) were too far off.

Yes. Actually, if there were a way to know the sample clock frequency very precisely, and assuming it does not vary during recording, it would likely be enough to start recording at a precisely synced moment and then do the interpolation afterwards.

So does anybody here know a) how to find an exact value for the sound card clock frequency and b) how to align the first frame of a recording buffer to a certain system clock time?
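Assuming (a) and (b) can be answered, i.e. the effective sample-clock ratio of each recorder is known and the first frame carries a common time stamp, the post-hoc merge reduces to resampling each file onto a shared nominal grid. A minimal linear-interpolation sketch (numerically crude next to libsamplerate, and the function name is just for illustration):

```python
def resample_to_common_clock(samples, ratio):
    """Re-grid samples recorded on a clock running `ratio` times the
    nominal rate (ratio > 1 means the recorder's clock ran fast).
    Linear interpolation; assumes `ratio` was measured externally,
    e.g. against an NTP-disciplined system clock."""
    out = []
    n = len(samples)
    t = 0.0  # index on the nominal-rate grid
    while True:
        src = t * ratio  # fractional index into the recorded stream
        i = int(src)
        if i + 1 >= n:
            break
        frac = src - i
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        t += 1.0
    return out

# A ramp whose value equals true time, sampled by a clock 0.1% fast
# (so sample k was taken at true time k/1.001), lands back on the grid:
fixed = resample_to_common_clock([k / 1.001 for k in range(1000)], 1.001)
# fixed[500] ≈ 500.0
```

For serious use, linear interpolation degrades high frequencies; a band-limited resampler such as libsamplerate would do the same job properly.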

Regards,
Anders
 

        -- Pete --
