
[Csnd] OT live tempo tracking and comparison

Date: 2014-04-22 22:05
From: peiman khosravi
Subject: [Csnd] OT live tempo tracking and comparison
I'm sorry for this very OT post. 

I've been asked to make a program - Max, Pd, or Csound, it doesn't matter - for syncing a video file to an orchestra in real time. The tempo following is easy since there is a percussion player who simply follows the conductor and beats a MIDI drum. And the video part already has a click track. 

The video is being played with Quartz Composer, which has a speed wheel, so one can speed up or slow down the video to fit it around the tempo of the live music.  

The part that I can't get my head around is how to compare and sync the detected beats with the click track of the video. I've gone as far as detecting both tempi and calculating the difference, which is all pretty straightforward, but how do I ensure that the downbeats remain locked together (of course drifting for a bar or so is acceptable)? 

Is this even possible?

Any pointers are much appreciated. 

Many Thanks
Peiman     

Date: 2014-04-23 03:40
From: Steven Yi
Subject: Re: [Csnd] OT live tempo tracking and comparison
I'm not sure, but for some reason the question made me think of
phase-locked loops[1]. I just checked, though, and my todo list has an
entry for delay-locked loops[2].  Fons on the Linux Audio mailing list
once posted about it[3] and linked to a paper [4] (the link in the
original thread is old, the link provided works). I don't know if this
is a standard way to do the kind of thing you're wanting to achieve.

I also feel like I read about performance synchronization in some of
Barry Vercoe's and Roger Dannenberg's papers.  Okay, I searched my
Zotero collection and found the Dannenberg paper is from 1984 and is
called "An On-Line Algorithm for Real-time Accompaniment", available
at [5]. (You can download the full pdf there using the more info
link).  The Vercoe paper is from 1985 and is called "Synthetic
Rehearsal: Training the Synthetic Performer" [6].

Hope that's useful in some way.  Perhaps others might know some
simpler solutions that are readily available, but I think those could
be useful if you're implementing it yourself.

Good luck!
steven

[1] - http://en.wikipedia.org/wiki/Phase-locked_loop
[2] - http://en.wikipedia.org/wiki/Delay-locked_loop
[3] - http://linux-audio.4202.n7.nabble.com/Progressive-Quantisation-td26534.html#a26543
[4] - http://kokkinizita.linuxaudio.org/papers/usingdll.pdf
[5] - http://quod.lib.umich.edu/i/icmc/bbp2372.1984.025/1/--on-line-algorithm-for-real-time-accompaniment?rgn=full+text;view=image;q1=accompaniment
[6] - http://quod.lib.umich.edu/i/icmc/bbp2372.1985.043/1/--synthetic-rehearsal-training-the-synthetic-performer?rgn=full+text;view=image;q1=synthetic+performer
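
For reference, here is a minimal sketch in Python of the second-order delay-locked loop described in [4] (the class name, the tap-time interface, and the bandwidth value are illustrative, not taken from the paper's code):

import math

class BeatDLL:
    # Second-order delay-locked loop tracking the period and phase of a beat
    # from a stream of tap times, in the spirit of the update rule in [4].
    def __init__(self, first_tap, nominal_period, bandwidth_hz=0.5):
        omega = 2.0 * math.pi * bandwidth_hz * nominal_period  # loop bandwidth
        self.b = math.sqrt(2.0) * omega   # phase-correction coefficient
        self.c = omega * omega            # period-correction coefficient
        self.period = nominal_period                  # period estimate, seconds per beat
        self.next_pred = first_tap + nominal_period   # predicted time of the next tap

    def update(self, tap_time):
        # Feed one measured tap time (in seconds); returns the updated
        # prediction of the next tap and the current period estimate.
        err = tap_time - self.next_pred   # how late (+) or early (-) the tap arrived
        self.next_pred += self.period + self.b * err
        self.period += self.c * err
        return self.next_pred, self.period

Fed with the MIDI drum hits, the period estimate gives the live tempo and the predicted tap time gives the phase to compare against the video's click track.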

Date: 2014-04-23 09:30
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
Hi Steven,

This is really useful. Thanks so much for taking the time to provide all this info. 

All the best,
Peiman 



Date: 2014-04-23 09:40
From: joachim heintz
Subject: Re: [Csnd] OT live tempo tracking and comparison
if you've already got the difference, it should be possible to implement an 
algorithm like this:

if difference < 0 then ;real-time is faster
   klick = slower
elseif difference > 0 then
   klick = faster
else
   relax
endif

the "slower" and "faster" could be implemented first as a sudden change of 
the tempo. if this works, it could be softened by something like port.

i hope ...

	joachim
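
A minimal Python transcription of this nudging idea (the sign convention, step size, and half-time are placeholders to be tuned, not part of the original suggestion):

def adjust_speed(speed, difference, step=0.01):
    # difference = live tempo minus the video's effective click tempo
    if difference > 0:            # the orchestra is faster
        return speed + step       # speed the video up
    elif difference < 0:          # the orchestra is slower
        return speed - step       # slow the video down
    return speed                  # matched: relax

def port_like(previous, target, half_time, dt):
    # One-pole lag in the spirit of Csound's port opcode: the output moves
    # halfway from its previous value toward the target every half_time seconds.
    return target + (previous - target) * 0.5 ** (dt / half_time)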


Date: 2014-04-23 10:03
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
Hi joachim,

I've tried this and, unless I'm overthinking it, it won't work as expected. The problem is that this system is still open to the beat counts drifting, so the downbeat can become the next upbeat and continue to move out of phase over the course of the performance.

The simplest solution I can think of is to implement a waiting mechanism, which makes me think that perhaps the best way would be to generate SMPTE timecode from the live performance and use it to drive the video forward frame by frame. 

Thanks,
Peiman      
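
One way to keep the downbeats locked, following the phase-locked-loop pointer above, is to drive the video speed from the accumulated beat-position error rather than from the tempo difference alone. A rough Python sketch, with all names and the gain purely illustrative:

def video_speed(live_beat_pos, video_beat_pos, live_tempo, click_tempo,
                phase_gain=0.25):
    # live_beat_pos / video_beat_pos: beats elapsed since the common start,
    # counted from the drum taps and from the video's click track respectively.
    ratio = live_tempo / click_tempo             # matches tempo only
    phase_err = live_beat_pos - video_beat_pos   # beats the video is behind (+) or ahead (-)
    return ratio + phase_gain * phase_err        # pulls the phase error back toward zero

Because the correction term is driven by the cumulative error, the downbeats cannot walk apart indefinitely; they only wander within the tolerance set by the gain.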



Date: 2014-04-23 17:57
From: Justin Smith
Subject: Re: [Csnd] OT live tempo tracking and comparison
Calculate a moving average by taking the timings of the last N measures, throwing away the one furthest from the median, and averaging the rest. This way you are resilient even in the face of an accelerando or decelerando. Also remember that the speed of sound is such that people are used to things they see being somewhat out of sync with things they hear; this gives you some leeway (you could even plan on always being a little ahead of the beat for realism).
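
A small Python sketch of that heuristic, assuming the last N timings (inter-beat or per-bar durations, in seconds) are kept in a list:

def tempo_estimate(recent_timings):
    # Discard the single timing furthest from the median, then average the rest.
    s = sorted(recent_timings)
    median = s[len(s) // 2]
    outlier = max(recent_timings, key=lambda t: abs(t - median))
    kept = list(recent_timings)
    kept.remove(outlier)
    return sum(kept) / len(kept)   # smoothed duration; 60 / duration gives BPM

In use one would push each new timing into a fixed-length queue (say the last 8) and recompute the estimate on every beat or bar.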


Date: 2014-04-25 11:19
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
Thanks Justin,

Could I ask you to send me an example of the formula please?

Many Thanks
Peiman




Date: 2014-04-28 22:34
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
To be specific: by "taking the timings of the last N measures" do you mean the time intervals between beats? Or do you actually mean 'bars' by 'measures'?

Thanks again!
Peiman



Date: 2014-04-28 22:43
From: Justin Smith
Subject: Re: [Csnd] OT live tempo tracking and comparison
Sorry, I haven't had the chance to sketch out what this would look like in Csound code. I should have said "the last N timings", whatever unit of timing you want to track. For a piece with a strong time signature, tracking measures may give better results than beats, for example (it will correct for the noise that syncopation could otherwise introduce).

Also, as a first attempt, one could just average the last N timings without the outlier-removal step. This is probably similar enough to the algorithm of the port opcode that one could simplify things by choosing the right half-time argument.
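
That simpler variant amounts to an exponentially weighted average, which is essentially what a port-style one-pole filter computes; a Python sketch (the half-life value is a placeholder):

def ema_update(previous_estimate, new_timing, half_life_beats=4.0):
    # Each new timing pulls the estimate toward itself; the influence of
    # older timings decays by half every half_life_beats beats.
    alpha = 1.0 - 0.5 ** (1.0 / half_life_beats)
    return previous_estimate + alpha * (new_timing - previous_estimate)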


Date: 2014-04-28 23:07
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
Thanks Justin. So I just tried it in Max without the outlier-removal step. The one thing I don't understand now: "the one furthest from the median". Is this the list member whose value is furthest from the middle element in the list?

Thanks
Peiman 



Date: 2014-04-28 23:14
From: Justin Smith
Subject: Re: [Csnd] OT live tempo tracking and comparison
I meant furthest from average, but I guess it could also mean the one with the biggest absolute value difference from the median (either the shortest or longest in the collection). This is a heuristic, so it is probably worth trying a few variations and seeing what works with real input.


Date: 2014-04-28 23:16
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
Oh yes, I see. So first take the average, then remove the one furthest from the average, and then take the average again?

Sorry for all the questions!

Thanks
Peiman



Date: 2014-04-28 23:21
From: Justin Smith
Subject: Re: [Csnd] OT live tempo tracking and comparison
Yes, I think something like that. Or, maybe simpler, remove the smallest and largest inputs before averaging. I think an average will give a better result than the median because of the musical likelihood of an accelerando or decelerando over multiple measures.
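
That simpler variant in Python (it needs at least three timings to leave anything to average):

def trimmed_tempo(recent_timings):
    # Drop the single shortest and longest timings, then average the rest.
    s = sorted(recent_timings)
    core = s[1:-1]
    return sum(core) / len(core)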


Date: 2014-04-28 23:27
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
Yes, I see. That makes sense. Thanks very much. I'm experimenting now...

All the best,
Peiman



Date: 2014-04-29 22:18
From: peiman khosravi
Subject: Re: [Csnd] OT live tempo tracking and comparison
FYI, here it is. It seems to be working. Thanks Justin! 

P



Date: 2014-04-29 22:22
From: Justin Smith
Subject: Re: [Csnd] OT live tempo tracking and comparison
very cool, glad you got it working, and thanks for sharing the result

