Sending one copy from this end. ;-)
Test matrix, logging, regression testing, more logging...
Where is all this going to be stashed so it stays readily accessible and the
test logs stay consistent and available?
Prioritizing test 'areas of concern' against the available person-hours would
help define what is achievable and/or desirable as the focus of the Csound QA
Test crew.
What areas/types of testing would be most beneficial? Which mega-time-consuming
tests would be considered absolute must-dos?
Repository of 'standard' test scripts is where?
Things like that.
> -----Original Message-----
> From: rasmus ekman [mailto:rasmuse@hem.passagen.se]
> Sent: Wednesday, November 10, 1999 4:47 PM
> To: Csound list
> Subject: Re: [Csnd] beta testing
>
>
> Nicola Bernardini wrote:
> >
> > Right. What I was hinting at is: how do you produce the so-called
> > 'correct' samples? With Matlab/Octave, or what? Do these exist
> > somewhere as references?
>
> I think there may be a slight overestimation of the ambition
> of Gabriel's "beta testing" proposal. The idea as I've construed it
> is to test minimal platform consistency of Csound, the DSP
> programming language; that is, to investigate whether the same
> orch/score will, within reasonable limits, produce "the same"
> output on all the supported platforms, and further, to check that
> "normal" and almost-normal usage of the various opcodes at least
> does not crash Csound.
>
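Chiming in from this end: the does-it-crash half could start out as crude as
the sketch below, which just renders every test pair and flags abnormal exits.
It assumes a command-line csound binary on the path and a directory of paired
.orc/.sco test files; the directory layout and file names are placeholders I
made up, nothing agreed on.

    import glob, os, subprocess

    def run_crash_tests(test_dir="tests", out_dir="renders"):
        # Render every .orc/.sco pair and record anything that exits abnormally.
        os.makedirs(out_dir, exist_ok=True)
        failures = []
        for orc in sorted(glob.glob(os.path.join(test_dir, "*.orc"))):
            sco = orc[:-4] + ".sco"
            name = os.path.basename(orc)[:-4]
            wav = os.path.join(out_dir, name + ".wav")
            # -W writes a WAV file, -o names the output
            result = subprocess.run(["csound", "-W", "-o", wav, orc, sco],
                                    capture_output=True, text=True)
            if result.returncode != 0:
                failures.append((name, result.returncode))
        return failures

    if __name__ == "__main__":
        for name, code in run_crash_tests():
            print("FAIL %s: csound exited with code %d" % (name, code))

Anything that turns up in the FAIL list gets a closer look on that platform,
and the rendered files are kept around for the comparison step further down.
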
> If the task is to verify that opcode authors have done a correct
> implementation of DSP technologies (or translation of algorithms,
> or snippage of published code), or that published DSP algorithms
> work as the article authors suggest, then we'd indeed need a team
> of skilled mathematicians with professional tools (and I for one
> had better go missing from this project immediately, as I know
> no maths whatsoever).
>
> Since there are over 500 opcodes to test, and opcodes often
> come in families, I thought (naively?) that e.g. differences in
> floating-point precision would turn up in a multitude of cases,
> and thus still give us half a chance of spotting wild divergences.
> Any suspicious cases should of course be subjected to further
> testing. And if we don't know what's happening, we can at least
> document this, so those who know more will know where to look
> out for trouble.
>
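Again from this end: 'spotting wild divergences' could be as blunt as comparing
a platform's render against a reference render sample by sample and reporting
the largest difference. A sketch, assuming both renders are 16-bit WAV files
with the same channel count; the tolerance figure is pure guesswork on my part,
not a considered number.

    import sys, struct, wave

    def max_sample_difference(path_a, path_b):
        # Compare two renders sample by sample.
        # Assumes 16-bit PCM and the same channel count in both files.
        with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
            frames = min(a.getnframes(), b.getnframes())
            sa = struct.unpack("<%dh" % (frames * a.getnchannels()),
                               a.readframes(frames))
            sb = struct.unpack("<%dh" % (frames * b.getnchannels()),
                               b.readframes(frames))
        return max(abs(x - y) for x, y in zip(sa, sb))

    if __name__ == "__main__":
        reference, candidate = sys.argv[1], sys.argv[2]
        tolerance = 4   # in 16-bit steps; a made-up threshold
        diff = max_sample_difference(reference, candidate)
        verdict = "within tolerance" if diff <= tolerance else "SUSPECT"
        print("max sample difference: %d (%s)" % (diff, verdict))

Anything marked SUSPECT goes on the 'further testing' pile rasmus mentions.
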
> I understand some people on the list have actually worked
> with QA; I hope they (or you) can come up with some useful
> suggestions on how to set up the testing.
>
>
> Regards,
>
> re
> --
> To unsubscribe, send email to csound-unsubscribe@lists.bath.ac.uk
>
--