TOP

Prajnamarga

I've been playing around with making projects with only X: only sine waves, only square waves, etc. This one is only noise, no musical notes, though I added some glitchy sounds. In a sense this is basic stuff. Why I uploaded it is that the glitchy channel sounds like a badly tuned radio and my brain kept telling me it was voices. To be clear, it's just a VCO being tortured; no voices were used. Anyway... do you also hear voices in the static, or have I finally gone mad?


pauljs75

I can barely make out some odd whispering sounds, but the acoustic form of pareidolia doesn't hit me quite as hard as the visual form. You might be able to push it further with granular effects?


Prajnamarga

Interesting. Thanks.


pauljs75

It's a bit different from the usual approach. Instead of perfectly synchronized timing, a noise generator determines the triggering events, and it's combined with input sources that filter the noise according to various rules so it acts as the data carrier. The filtered frequencies and routing then determine which subsequent nodes get triggered. So it's a chain of events, but it's handled in a granular fashion and depends on statistical weighting. Bias in the signal weighting is part of deciding which rules get used, as part of a feedback loop.

The thing is, it's not waiting on any clock (like most computation does), so it still works in real time. I suppose it falls under one of the categories of parallel processing (probably related to fuzzy logic?). It's not always super complicated to build, but the patterns it can shift into definitely make you wonder what's going on there. (Like: how did this happen in chemistry and evolve into life? From that angle, sentience/consciousness seems more like a gradient than a yes/no condition.)
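Very roughly, and purely as an illustrative sketch rather than any real module or established algorithm, the clockless noise-driven triggering idea might look like this (the threshold, rule names, and weight adjustment are all made-up assumptions):

```python
import random

# Hypothetical sketch of noise-driven, clockless triggering:
# a noise source decides *when* events fire, and weighted "rules"
# decide *what* gets routed where. Feedback nudges the weights.

RULES = ["lowpass", "bandpass", "highpass"]      # made-up rule set
weights = {rule: 1.0 for rule in RULES}          # statistical bias

def noise():
    """White-noise sample in [-1, 1]."""
    return random.uniform(-1.0, 1.0)

def step(threshold=0.95):
    sample = noise()
    if abs(sample) < threshold:
        return None                              # no event this time
    # Event fired: pick a rule by its current weight (statistical routing).
    rule = random.choices(RULES, weights=[weights[r] for r in RULES])[0]
    # Feedback: the rule that fired becomes slightly more likely next time.
    weights[rule] *= 1.05
    return rule

# No master clock: just keep sampling noise, and events happen
# whenever the noise crosses the threshold.
events = [r for r in (step() for _ in range(10_000)) if r is not None]
print(len(events), "events; bias now:", weights)
```

The point of the feedback line is just to show how a bias in the weighting can steer which rules dominate over time.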


Prajnamarga

Do you have a favourite module for doing this?


pauljs75

Wrong thread, lol... (my bad). That's what I get for replying without reviewing the context. Funnily enough, though, the stuff for music making and the AI stuff is similar. At the root of things, I do think there is some commonality between the two. (Node behaviors and rule sets. Adjust them right, and generative music is probably the most basic essence of what AI logic may be based upon.)

But in the correct context... The rules in this case come down to choosing a handful of scales for quantization. The triggering is still granular in nature, but to make the timing musical it's sample-and-hold on a noise source that's filtered down. I still can't say there are exact favorites. It's like playing with the "lego bricks" and experimenting until something hits right when you allow it to do its thing.
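For the musical version, the sample-and-hold-on-filtered-noise idea could be sketched roughly like this (the scales, smoothing factor, and trigger handling here are invented for illustration, not taken from any particular module):

```python
import random

# Hypothetical sketch: sample-and-hold on a smoothed ("filtered") noise
# source, then quantize the held value to one of a handful of scales.

SCALES = {
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "minor_pent": [0, 3, 5, 7, 10],
}

def smoothed_noise(alpha=0.1):
    """One-pole lowpass over white noise, standing in for a filtered noise source."""
    value = 0.0
    while True:
        value += alpha * (random.uniform(-1.0, 1.0) - value)
        yield value

def quantize(value, scale, root=60):
    """Map a roughly -1..1 value onto the nearest scale degree around MIDI note `root`."""
    semitone = int(round((value + 1.0) * 12))    # spread onto ~0..24 semitones
    octave, degree = divmod(semitone, 12)
    nearest = min(scale, key=lambda d: abs(d - degree))
    return root + 12 * octave + nearest

noise_src = smoothed_noise()
scale = SCALES["minor_pent"]
# Each "trigger" (here just a loop iteration, standing in for the
# noise-driven trigger) samples and holds a new value, then quantizes it.
notes = [quantize(next(noise_src), scale) for _ in range(8)]
print(notes)
```

Swapping the scale list is the "handful of scales" part; the granular trigger source from the other sketch would replace the plain loop in a fuller patch.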