/r/algorithmicmusic
Links and discussion on algorithmic music.
Generative, stochastic, and aleatoric approaches, sonification, and other forms of automation in audio and music creation.
Posts can include programming libraries, other software and technical tools, performance code, recordings, philosophical discussion, questions, event listings or any other relevant items.
I need a music generation or composition library for generating music from a set of scales and notes based on a person's emotion. Is there any such library in C, JavaScript, or Python with which I could achieve this?
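Not a library recommendation, but a minimal sketch of the kind of thing you could build yourself in Python with the mido MIDI library. The emotion-to-scale mapping here is an invented illustration, not an established standard:

```python
# Hypothetical sketch: emotion -> scale -> random melody, written out as MIDI.
# Requires the mido library (pip install mido). The SCALES mapping below is
# an illustrative assumption, not a validated emotion model.
import random
import mido

# Scales as semitone offsets from a root note (illustrative choices).
SCALES = {
    "happy": [0, 2, 4, 5, 7, 9, 11],   # major scale
    "sad":   [0, 2, 3, 5, 7, 8, 10],   # natural minor scale
}

def generate(emotion: str, root: int = 60, length: int = 16) -> mido.MidiFile:
    scale = SCALES[emotion]
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for _ in range(length):
        note = root + random.choice(scale)
        # time is a delta in ticks; 240 ticks = an eighth note at the
        # default 480 ticks per beat.
        track.append(mido.Message("note_on", note=note, velocity=64, time=0))
        track.append(mido.Message("note_off", note=note, velocity=64, time=240))
    return mid

generate("sad").save("melody.mid")
```

From there, a fuller system would replace the random note choice with whatever melodic logic you want to drive from the emotion input.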
I would like to share a recently added "Weekly Algorithmic Music" challenge. The idea is to post a new piece of music every week, be it a simple loop or a more complete piece. The purposes of this are to:
You can join the streak at https://streak.club/s/1768/weekly-loops.
All algorithmic music makers and listeners are very welcome!
I'm currently running an in-person study of computer-generated music at my college, and I'm worried that I don't really have a control group.
I created a generative music system that takes two different compositions as input and makes a new composition that attempts to synthesize the thematic material of the inputs. I'm testing for two things: whether my generated music synthesizes the thematic material and emotional quality of the two input pieces, and whether my generated music is of similar quality to the output of other generative systems.

For the first part, I have people listen to a series of three music clips in a random order (one clip generated by my system, the other two being the compositions used as input). Each participant rates each clip on a couple of emotional scales and then compares the clips with regard to their emotional qualities. For the second part, I have people listen to several more series of three clips in a random order (one generated by my system, the others generated by other generative systems). Participants rate each clip on quality and then verbally compare them.
This feels like a good experiment, but am I missing a control group? What would the control group be in this case? This is a long message, so I'd appreciate any feedback.
Computer-generated MIDI and two vocal samples used per song: https://youtu.be/A-GyOXxbrXI?si=GL3lendIePREOh4n
Miniature Recs specializes in ultraminimalist procedural and algorithmic laptop music, released as albums collecting short sonic miniatures. Every track is just one of many possible instances of its algorithm. The idea of releasing music only in this extremely compact format comes from a provocation: as the philosopher of technique Bernard Stiegler suggested, we are experiencing an "industrial capture of attention" that partly short-circuits our previous relational modalities, so why not explore what this attentive contraction affords aesthetically? Miniature Recs explores this question with the tools of algorithmic composition and improvisation, trying to devise new forms of human-machine interaction outside the dominant big-data paradigm.
I have made a program for coding music in Python. It's available in a browser at this address. Basically, you write patterns in the form of functions that can respond to a chord progression and a timeline (a rough sketch of the idea appears below). It's meant to be a less experimental take on Sonic Pi that simulates a song instead of executing it in real time.
This is not in a finished state yet and there are bugs that are yet to be fixed, so the front end is very bare bones, but I'm posting it to see if people are interested.
Library version for coding locally.
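For readers curious what "patterns as functions" might mean, here is a purely hypothetical sketch in plain Python. It is not this tool's actual API, just an illustration of a pattern function that responds to a chord progression and a position on the timeline:

```python
# Hypothetical sketch only -- not the API of the tool above.
# A "pattern" is a function of (chord, beat) that returns a note.
from dataclasses import dataclass

@dataclass
class Chord:
    root: int              # MIDI root note
    intervals: tuple       # semitone offsets, e.g. (0, 4, 7) for a major triad

def arpeggio(chord: Chord, beat: int) -> int:
    """Return the MIDI note for this beat by cycling through chord tones."""
    return chord.root + chord.intervals[beat % len(chord.intervals)]

progression = [Chord(60, (0, 4, 7)), Chord(57, (0, 3, 7))]  # C major, A minor

# "Simulate" a two-bar song: four beats per chord, no real-time execution.
for bar, chord in enumerate(progression):
    for beat in range(4):
        print(f"bar {bar}, beat {beat}: note {arpeggio(chord, beat)}")
```

The appeal of this style is that the same pattern function can be reused over any progression, and simulating the song offline makes the output reproducible.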
Does anyone know of a MIDI-performance-to-clean-score AI project, on GitHub or elsewhere, that takes the MIDI of a piano performance (such as an improvisation) and turns it into a clean score (with quantised, regular tempo and corrected mistakes)?
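No specific project to point to here, but for the quantisation step alone, a minimal sketch with the pretty_midi Python library might look like the following. This assumes a fixed, known tempo; real performance-to-score tools also have to infer tempo and correct mistakes, which is much harder:

```python
# Minimal quantisation sketch, assuming a fixed tempo of 120 BPM.
# Requires pretty_midi (pip install pretty_midi).
import pretty_midi

GRID = 60.0 / 120 / 4  # 16th-note grid at 120 BPM, in seconds

def quantise(path_in: str, path_out: str) -> None:
    pm = pretty_midi.PrettyMIDI(path_in)
    for inst in pm.instruments:
        for note in inst.notes:
            dur = note.end - note.start
            # Snap the onset to the nearest grid point.
            note.start = round(note.start / GRID) * GRID
            # Snap the duration too, keeping at least one grid step
            # so short notes don't collapse to zero length.
            note.end = note.start + max(GRID, round(dur / GRID) * GRID)
    pm.write(path_out)

quantise("performance.mid", "quantised.mid")
```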
Hey folks,
Good news! The submission deadline for evoMUSART 2024 has been extended to November 15th!
You still have time to submit your work to the 13th International Conference on Artificial Intelligence in Music, Sound, Art and Design (evoMUSART).
If you work with Artificial Intelligence techniques applied to visual art, music, sound synthesis, architecture, video, poetry, design, or other creative tasks, don't miss the opportunity to submit your work to evoMUSART.
EvoMUSART 2024 will be held in Aberystwyth, Wales, UK, between 3 and 5 April 2024.
For more information, visit the conference webpage: https://www.evostar.org/2024/evomusart
Hi all,
I am a PhD student exploring human-computer co-creative algorithmic music composition.
In a world where the search for an AI to replace every human activity seems to predominate, it tends to be forgotten that humans (still) have incredible insight into the creative process, and that engaging with such a process in a non-trivial way uplifts the human spirit and improves our lives. Computers offer powerful ways to contribute to such a process, with capabilities that complement rather than attempt to replace a human user. To put it more lightly, this is what you get when you are both a jazz musician and a tech geek.
I have designed a web-based interactive music composing tool featuring some ideas around this subject, and I am looking for respondents who will interact with the site and then fill in a short survey. The interaction should take 30-45 minutes and the survey about 5 minutes. Follow the link if you are curious; there are further instructions on the site.
Please do complete the survey, as it provides the data for my dissertation.
https://letsplay.musicrobot.link
Peace, love and cyborg future.