As I mentioned before, I have GMS set up to produce specific scales. At this stage they are all based on the key of C. Eventually I’ll set up the application so that the key and scale are dynamically adjustable. I will also include a wider variety of scales, including all the modes, diminished, whole tone, and more.
One thing I haven’t decided how to approach is timing and tempo. The way it works at the moment is that the tempo is determined by applying a multiplier to the frame rate. In this example I’m dynamically changing the multiplier using the arrow keys to achieve different note durations.
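The frame-rate/multiplier idea could be sketched roughly like this (plain Java rather than a Processing sketch, and the names are illustrative, not from the actual GMS source):

```java
// Minimal sketch of tempo-from-frame-rate timing: with a steady frame
// rate, a note fires every `multiplier` frames, so arrow keys that
// change the multiplier effectively change the note duration.
public class FrameClock {

    // True on the frames where a new note should be triggered.
    static boolean noteDue(int frameCount, int multiplier) {
        return frameCount % multiplier == 0;
    }

    // Effective note duration in milliseconds for a given frame rate
    // and multiplier: multiplier frames at (1000 / frameRate) ms each.
    static double noteDurationMs(double frameRate, int multiplier) {
        return 1000.0 * multiplier / frameRate;
    }

    public static void main(String[] args) {
        // At 30 fps with a multiplier of 15, each note lasts 500 ms.
        System.out.println(noteDurationMs(30.0, 15));
    }
}
```

The obvious catch, as discussed below, is that the tempo drifts with the frame rate, which is why a clock on a separate thread (or a BPM-based library mechanism) is more reliable.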
GMS Pentatonic Scale
I should probably pass the source code for that sequencer bank I created on to you; it might give you some ideas about how to implement some tempo mechanisms. One thing I found is that the timing mechanism needs to run in another thread, otherwise it will never really stay synced up. Plus, I’ve already done the key and scale stuff once too, and I’d hate to see someone have to go through it again :)
Thanks, Grant. I’ve already got the key and scale stuff under control. I think I will stick with my frame rate and multiplier technique for now and work in the clock at a later date. Also, it looks like the proMIDI library has a mechanism to set the tempo in BPM, so I may look at using it rather than rwmidi. If I get stuck or it seems too time consuming, I may take you up on your offer. Thanks!
Cool, can’t wait to see it!
Perhaps one way to get note duration more dynamically would be to get the total number of pixels in a frame above say 50% brightness and normalise to a reasonable duration. This way, it would be possible to control duration by twisting your hand so that different amounts of it are visible to the camera.
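Nick’s suggestion might look something like this as a sketch (the helper name and duration range are mine, purely for illustration):

```java
// Map the fraction of bright pixels in a frame onto a note duration:
// the more of your hand the camera sees, the longer the note.
public class BrightnessDuration {

    // `gray` holds 8-bit grayscale pixel values (0-255). Pixels above
    // 50% brightness (> 127) are counted, and the bright fraction is
    // normalised linearly onto [minMs, maxMs].
    static int durationMs(int[] gray, int minMs, int maxMs) {
        int bright = 0;
        for (int p : gray) {
            if (p > 127) bright++;
        }
        double fraction = (double) bright / gray.length;
        return (int) (minMs + fraction * (maxMs - minMs));
    }
}
```

In Processing this would be fed from the camera frame’s pixel array, with brightness taken per pixel; the linear mapping is just one choice, and a curve might feel more natural for hand gestures.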
I really like that idea, Nick. Thanks for your input. Of course this would mean that the timing and tempo would be difficult (nearly impossible) to match to other performers, but perhaps I’ll set it up with a “tempo mode” that bases the duration on beats per minute, and a “free mode” that uses some algorithm based on pixel brightness for note durations.
You could also have a hybrid mode in which note-ons and/or note-offs are quantised, potentially combining the expressiveness of one with the collaborative potential of the other.
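One way that quantisation could be sketched, assuming a fixed BPM and a subdivision grid (again, names and parameters are mine, not from GMS):

```java
// Snap an event timestamp to the nearest point on a beat grid, so a
// freely-timed note-on lands on a subdivision other performers can
// follow. E.g. sixteenth notes at 120 BPM put grid points 125 ms apart.
public class Quantize {

    static long snap(long timeMs, double bpm, int divisionsPerBeat) {
        // Milliseconds between grid points: one beat is 60000/bpm ms,
        // divided into `divisionsPerBeat` subdivisions.
        double gridMs = 60000.0 / (bpm * divisionsPerBeat);
        // Round to the nearest grid point.
        return Math.round(Math.round(timeMs / gridMs) * gridMs);
    }
}
```

A hybrid mode could then snap only the note-on (keeping the brightness-driven duration for the note-off), or vice versa.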