| > Just curious, are you running Csound on the same computer as the UI?
In general, yes, but the idea is to allow a server (the model?) to
hold Csound in a high-priority thread. A display will also run on this
machine, and the controller (the input parser) will receive input from
mouse, keyboard, MIDI, and other clients over the network. So the
display may also be running on a networked client eventually. I don't
plan on implementing this all at once, but I want to do the research
to start with the best architecture for that goal.
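For concreteness, here's a minimal sketch of that server thread,
assuming the Csound C API (runCsound is just my name for the routine,
and actually raising the thread's priority is platform-specific, so
it's left out):

    #include <stdint.h>
    #include <csound.h>

    /* Performance loop: runs in the thread created below. */
    static uintptr_t runCsound(void *data)
    {
        CSOUND *cs = (CSOUND *)data;
        /* csoundPerformKsmps() returns nonzero once the score ends */
        while (csoundPerformKsmps(cs) == 0)
            ;
        return 0;
    }

    int main(int argc, char **argv)
    {
        CSOUND *cs = csoundCreate(NULL);
        if (csoundCompile(cs, argc, argv) == 0) {
            /* plain thread; something like pthread_setschedparam
               would be needed on POSIX to make it high priority */
            void *th = csoundCreateThread(runCsound, cs);
            /* ... display and controller would run here ... */
            csoundJoinThread(th);
        }
        csoundDestroy(cs);
        return 0;
    }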
> For typical MVC, the input is done with the controller, the controller
> updates the model, and the model notifies the view to update. When
> you interact with a UI, I've found that the interaction (mouse
> control, keyboard) goes through the controller first, then the model,
> then the view again. So, when you move a knob, the interaction
> doesn't move the knob, but rather the interaction notifies the model
> which notifies the knob to update.
So far, that sounds like what I want. I'm not sure how the model
updates the display, though. I was thinking that the display would run
some sort of timed callback where it asked the server what the
settings on screen should be. Maybe it should keep a copy of all that
data itself and only ask if there has been a change?
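Something like this, maybe, using FLTK timeouts. Only Fl::add_timeout
and Fl::repeat_timeout are real FLTK calls here; model_revision() and
refresh_widgets_from_model() are hypothetical stand-ins for whatever
the server ends up exposing:

    #include <FL/Fl.H>

    /* Hypothetical hooks into the server/model -- not a real API. */
    extern unsigned long model_revision();      /* bumped on change */
    extern void refresh_widgets_from_model();   /* re-read + redraw */

    static unsigned long last_seen = 0;

    static void poll_model(void *)
    {
        unsigned long rev = model_revision();
        if (rev != last_seen) {   /* only touch widgets on a change */
            last_seen = rev;
            refresh_widgets_from_model();
        }
        Fl::repeat_timeout(0.05, poll_model);   /* ~20 Hz refresh */
    }

    int main()
    {
        /* ... build the UI ... */
        Fl::add_timeout(0.05, poll_model);
        return Fl::run();
    }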
> In Java's Swing library, there's a slightly modified version of MVC
> where it's more MV, where the model and the controller are the same
> object. I think with how FLTK and Csound are implemented, there
> isn't a true MVC way to do all of this, since the widgets themselves
> will update their views when interacted with, before updating values
> in Csound.
OK, but that would only *need* to be the case for mouse movements,
right? I kinda figured that might have to happen, and I think that
will be OK. But for keystrokes, I could grab them before the FLTK
thread sees them somehow, right? Anyone have tips on how?
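One guess on my end (an assumption, not something anyone here has
confirmed): since FLTK dispatches key events through the top-level
window's handle() before they reach the focused child, subclassing
the window might be enough. sendToController() is a hypothetical hook
into the input parser:

    #include <FL/Fl.H>
    #include <FL/Fl_Window.H>

    extern bool sendToController(int key);   /* hypothetical hook */

    class AppWindow : public Fl_Window {
    public:
        AppWindow(int w, int h) : Fl_Window(w, h) {}
        int handle(int event) {
            if (event == FL_KEYDOWN || event == FL_SHORTCUT) {
                if (sendToController(Fl::event_key()))
                    return 1;   /* consumed: widgets never see it */
            }
            return Fl_Window::handle(event);   /* normal dispatch */
        }
    };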
> Maybe the thing to think about is limiting how change occurs. So all
> input, instead of directly affecting Csound, can call FLset on FLTK
> widgets. The widgets would then change and thus affect Csound.
> The layering becomes something like:
>
> valueChanges -> widgets -> csound
>
> Csound could also then affect widgets, but that would count as input
> too I guess.
I was hoping to have an input parser module that would control how all
change occurs. (Except mousing, which I guess would be sent through
the input module after the widget, and then go back to the display.)
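Roughly this shape, say. csoundGetChannelPtr() is a real API call;
applyChange() and the channel wiring are my own scaffolding, and a
real version would have to think about locking between the UI and
audio threads:

    #include <FL/Fl_Valuator.H>
    #include <csound.h>

    static CSOUND *cs;   /* assumed created/compiled elsewhere */

    /* Widget callback: the only place that writes into Csound.
       The channel name arrives as the callback's user data. */
    static void widget_cb(Fl_Widget *w, void *chan_name)
    {
        MYFLT *p;
        if (csoundGetChannelPtr(cs, &p, (const char *)chan_name,
                CSOUND_CONTROL_CHANNEL | CSOUND_INPUT_CHANNEL) == 0)
            *p = (MYFLT)((Fl_Valuator *)w)->value();
    }

    /* Input-parser entry point: all non-mouse input funnels through
       here, so every change takes the same widget -> Csound path. */
    void applyChange(Fl_Valuator *w, double v)
    {
        w->value(v);        /* update the view... */
        w->do_callback();   /* ...which pushes the value to Csound */
    }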
> If you have all input handling go through Csound, then this might
> work. Csound's priority would still be top, handling is about on the
> same level, and FLTK should be in its own lower-priority thread
> anyway.
Actually, I meant to have MIDI input handled from the controller
thread (separate from FLTK) using PortMidi, and relayed to Csound
using API calls. This is mostly because writing complex multi-client
input modes in Csound is a pain, and it would be much easier to do in
C/C++. File I/O would be simpler too. Then the display would do the
same kind of thing: every once in a while, on a timer callback or a
change notification, it would check the state of tables within Csound
and update whatever is showing accordingly.
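The relay loop might look something like this, assuming PortMidi and
csoundScoreEvent(). Device selection, error handling, and everything
beyond note-ons are elided, and a real loop would block on Pm_Poll()
rather than spin:

    #include <portmidi.h>
    #include <csound.h>

    /* Controller-thread MIDI relay: read PortMidi events and
       forward note-ons to Csound as score events. */
    void midiLoop(CSOUND *cs, PmDeviceID dev, volatile int *running)
    {
        PortMidiStream *in;
        PmEvent buf[32];
        Pm_Initialize();
        Pm_OpenInput(&in, dev, NULL, 32, NULL, NULL);
        while (*running) {
            int n = Pm_Read(in, buf, 32);   /* event count */
            for (int i = 0; i < n; i++) {
                long msg = buf[i].message;
                if ((Pm_MessageStatus(msg) & 0xF0) == 0x90) {
                    /* note-on -> "i1 0 -1 <key> <vel>" (held note) */
                    MYFLT pf[5] = { 1, 0, -1,
                                    (MYFLT)Pm_MessageData1(msg),
                                    (MYFLT)Pm_MessageData2(msg) };
                    csoundScoreEvent(cs, 'i', pf, 5);
                }
            }
        }
        Pm_Close(in);
        Pm_Terminate();
    }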
> However, I'm not sure about keystroke handling and Csound, especially
> if FLTK has focus.
Yeah, that's the problem. Anyone know? I guess I should ask on the
FLTK lists too.
Thanks for the input. = )
Iain