WOW THAT WAS A LOT OF WORK PUT INTO 4 DAYS (Texturizing)

So, now that we have a system, let's discuss for a bit the plausibility of this process as well as what I ended up doing. The framework of the program remained the same: keywords are placed and checked by the speech recognition software, and the triggering events send messages into VCV to affect the music being played. It is here that we have to discuss my first conceptual issue. In previous posts I've shared the amount of work and modules that had to be put in place to construct a mere 4-chord harmony (there are most likely better ways to do it, but I digress), and creating a single-voice melody proved to be equally difficult. Having multiple voices control a single oscillator (or worse, several single-voice oscillators to be controlled) is a really inefficient way to create music in this system, and since most tools and tutorials specialize in constructing and developing textures through single voices (which mine technically is, but since it's coming from multiple sources it registers as a polyphonic value), creating textures and utilizing modules was very hard. After a loooooot of playing around I couldn't find many satisfying ways to generate textures that would successfully use more modules and adapt to the mood in the process. 

This does not mean that it cannot be done, far from it: VCV is extremely flexible, but it works best when sounds are products of their own interactions. A system that uses only a couple of modules to generate signals and then re-sends those signals to build up the sound is what this type of software is made for. It was, then, a little overzealous of me to try to apply general rules of music creation to a system that adapts better to a different approach to music making.

It is here that I admit that I couldn't construct all the moods that I had defined. After a lot of work and trial and error I managed to build all 3 moods for the minor Progression, but only 1 mood for the Major Progression. Since the patch starts in the Major Progression, the default and unedited values of the patch, which account for the "Calm" mood, are present, and no changes are being transmitted by the code to influence them apart from the mentioned bpm and Gain values.
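
Concretely, that means the Major/"Calm" handler only has to push a tempo and a Gain message and leave everything else at the patch's starting values. A minimal sketch of that idea, using the same mido/outport setup as the Minor function further down; the Gain control number here is a placeholder, not the one in my actual patch:

def Major_Calm(bpm):
    # Tempo goes out on the same control/channel the Minor function below uses
    tempo = mido.Message('control_change', control=2, value=bpm, channel=1)
    outport.send(tempo)
    # Gain: placeholder control number, everything else keeps the patch defaults
    gain = mido.Message('control_change', control=3, value=100, channel=1)
    outport.send(gain)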



The minor progression came together after the Major one did, and for better or worse, that meant a lot more sonic exploration, which led to an easier experience when coming up with mood swings. For this patch I placed its own MIDI communication device, which allowed me to send signals specifically catered to the resulting textures. 

minor:
  • Tense (60bpm) - Dry (0.34-0.2)[4,0] - Wet (0.5-0.75)[5,7] - PreD (0.1-0.3)[6,3] - Clock (2-8)[6,67]
  • Worry (100bpm) - Dry (0.34-0.75)[4,7] - Wet (0.5-0.2)[5,0] - PreD (0.1)[6,0] - Clock (2-8)[6,80]
  • Somber (100bpm) - Vol0,2 (127-0)[0,0] - Vol2 (120-127)[]
Each value in parentheses represents the change needed in the voltage value, and the one in brackets represents the input needed in the code. Once this was organized, I devised a function that could interact with the designated values:

import mido

# 'outport' is the MIDI output port opened earlier in the script,
# e.g. outport = mido.open_output(port_name)

def Minor(Dry, Wet, PreD, Clk, bpm):
    # Controls on the minor patch's channel: reverb dry/wet/pre-delay and a clock value
    dry = mido.Message('control_change', control=4, value=Dry, channel=2)
    outport.send(dry)
    wet = mido.Message('control_change', control=5, value=Wet, channel=2)
    outport.send(wet)
    pred = mido.Message('control_change', control=6, value=PreD, channel=2)
    outport.send(pred)
    clk = mido.Message('control_change', control=7, value=Clk, channel=2)
    outport.send(clk)
    # Tempo goes out on channel 1
    bpm_msg = mido.Message('control_change', control=2, value=bpm, channel=1)
    outport.send(bpm_msg)
    # Volume controls for the minor patch
    vol1 = mido.Message('control_change', control=1, value=110, channel=2)
    outport.send(vol1)
    vol2 = mido.Message('control_change', control=0, value=127, channel=2)
    outport.send(vol2)
    Activate(3)  # Activate() is defined earlier in the project (not shown here)
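
To close the loop with the speech side, this is roughly the shape of the keyword check that triggers these functions. It's only a sketch: I'm assuming the Python speech_recognition package here, and the keyword spellings and the numbers in the table are placeholders rather than the exact ones in my script.

import speech_recognition as sr

recognizer = sr.Recognizer()

# Placeholder keyword-to-arguments table; the values are just examples
MINOR_MOODS = {
    'tense': dict(Dry=0, Wet=96, PreD=38, Clk=67, bpm=60),
    'worry': dict(Dry=96, Wet=25, PreD=13, Clk=80, bpm=100),
}
# (Somber works the same way, but tweaks the volume controls instead)

def listen_for_moods():
    while True:
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            continue  # nothing intelligible was heard, keep listening
        for keyword, args in MINOR_MOODS.items():
            if keyword in text:
                Minor(**args)  # push that mood's values into the patch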

It is then only necessary to input the required value changes when the corresponding keyword is called. (It is worth noting once again that MIDI communicates in a range from 0-127, but CV values are voltages that are not necessarily equivalent across modules, meaning each value had to be measured to make sure the interaction with the code maintained the desired result.) Most of the controls being edited are part of the reverb module I applied to the patch, plus a change in speed.
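
To give a sense of what that measuring step implies: if you note the voltage a parameter sits at when the MIDI value is 0 and when it is 127, you can interpolate the MIDI value for any target voltage, assuming the module responds roughly linearly (which is not a given). A quick sketch, with made-up endpoint voltages:

def cv_to_midi(target_v, v_at_0, v_at_127):
    # Map a target CV voltage to the 0-127 MIDI value that should produce it,
    # assuming the module behaves linearly between the two measured endpoints
    fraction = (target_v - v_at_0) / (v_at_127 - v_at_0)
    return max(0, min(127, round(fraction * 127)))

# e.g. a Dry knob measured at 0.0 V for MIDI 0 and 1.0 V for MIDI 127:
# cv_to_midi(0.34, 0.0, 1.0) -> 43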


The next post will probably be the last of this particular project, but I wanna come back and take note of a couple of things that I learned along the way and how I solved them.
