/r/algorithmicmusic
Links and discussion on algorithmic music
Generative, stochastic, aleatoric, sonification and other forms of automation in audio and music creation.
Posts can include programming libraries, other software and technical tools, performance code, recordings, philosophical discussion, questions, event listings or any other relevant items.
Updates for the Melogy music generator; try it here: https://www.melogyapp.com/
Hello,
I am researching the sonorization technique and its origins. The technique involves translating information or numerical data from non-musical sources into sound. An example is Mamoru Fujieda's Patterns of Plants, where he scanned plant surfaces and used the numerical results to create different tunings and melodic phrases.
However, I haven't found many resources on the internet, except for some musicians and composers who have published their sonorization exercises on YouTube. Am I searching for the wrong term?
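For what it's worth, the technique described above is more commonly known as "sonification" (specifically parameter-mapping sonification), which may explain the sparse search results. A minimal Python sketch of the idea follows; the pitch range, the pentatonic scale, and the function name are all illustrative assumptions, not anyone's published method:

```python
# Minimal parameter-mapping sonification sketch (illustrative only):
# scale a numeric data series into a MIDI pitch range, then snap each
# pitch to the nearest degree of a chosen scale.

def sonify(data, low=60, high=84, scale=(0, 2, 4, 7, 9)):  # C pentatonic
    """Map each value in `data` linearly into [low, high] MIDI pitches,
    quantized to the nearest scale degree."""
    lo, hi = min(data), max(data)
    span = hi - lo or 1  # avoid division by zero for constant data
    pitches = []
    for x in data:
        p = low + (x - lo) / span * (high - low)
        octave, degree = divmod(round(p) - low, 12)
        nearest = min(scale, key=lambda s: abs(s - degree))
        pitches.append(low + 12 * octave + nearest)
    return pitches

# Example: sonify a short series of "measurements"
print(sonify([3.1, 4.7, 2.2, 9.8, 6.0]))
```

The resulting pitch list could then be written out as MIDI or a score; any such mapping (to duration, dynamics, timbre, tuning) follows the same pattern.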
https://twoshot.app/model/454
This is a free UI for the MelodyFlow model, which Meta research has since taken offline.
Here's the paper: https://arxiv.org/html/2407.03648v1
Hi everyone!
We’re hosting the first “AI for Music” workshop at AAAI on March 3, 2025. The workshop will explore how AI is transforming music creation, recognition, education, and more. Topics include AI-driven composition, sound design, legal and ethical challenges, and AI’s impact on musicians’ careers.
Submissions (up to 6 pages) are welcome until November 22, 2024. Work in progress is encouraged!
Workshop Summary
This one-day workshop will explore the dynamic intersection of artificial intelligence and music: how AI is transforming music creation, recognition, and education, its ethical and legal implications, and the business opportunities it creates. We will investigate how AI is changing the music industry and education, from composition to performance, production, collaboration, and audience experience. Participants will gain insights into the technological challenges in music and how AI can enhance creativity, enabling musicians and producers to push the boundaries of their art.

The workshop will cover topics such as AI-driven music composition, where algorithms generate melodies, harmonies, and even full orchestral arrangements. We will discuss how AI tools assist in sound design, remixing, and mastering, opening up new sonic possibilities and efficiencies in music production. We will also examine AI's impact on music education and the careers of musicians, exploring advanced learning tools and teaching methods.

As AI technologies are increasingly adopted in the music and entertainment industry, the workshop will also address the legal and ethical implications of AI in music, including questions of authorship, originality, and the evolving role of human artists in an increasingly automated world. This workshop is designed for AI researchers, musicians, producers, and educators interested in the current status and future of AI in music.
Call for Papers
Submissions should be a maximum of 6 pages. Work in progress is welcome. Authors are encouraged to include descriptions of their prototype implementations. Additionally, authors are encouraged to interact with workshop attendees by including posters or demonstrations at the end of the workshop. Conceptual designs without any evidence of practical implementation are discouraged.
Topics of interest include (but are not limited to)
Important Dates
We hope to see you there! 🎶
Hey folks! 👋
Are you working on research in Artificial Intelligence for creative purposes like visual art, music generation, sound synthesis, architecture, poetry, video, or design? EvoMUSART 2025 offers a great opportunity to present your work!
EvoMUSART 2025 focuses on showcasing applications of Artificial Neural Networks, Evolutionary Computation, Swarm Intelligence, and other computational techniques in creative tasks.
📍 Location: Trieste, Italy
📅 Date: 23-25 April 2025
🗓️ Paper Submission Deadline: 1 November 2024
Visit the link for details and submission guidelines:
https://www.evostar.org/2025/evomusart
I apologize if this is not the right place, but I would like to create some lessons for generative music for my 7th grade computer science class using Scratch. I was wondering if anyone has tried doing this with much success or could lead me down the right path to follow. Thank you ahead of time.
Hi all, I've been working on this Kontakt instrument where you can load a sample and then fold it into all kinds of different patterns using a bunch of step sequencers.
Have a look and let me know what you think.
https://youtu.be/FG0LvOwnF5g?si=_zTjW4Ik-RSzdduS
Thank you,
Tak
After an awesome listening party launch, the latest drone album has been released. The algorithmic work is heard most heavily in track 2, but the other tracks also have an element of the machine taking control. Check it out here
Dear colleagues,
Juan Romero, Penousal Machado and Colin Johnson are editing a Special Issue associated with EvoMUSART on "Complex Systems in Aesthetics, Creativity and Arts", and it would be a pleasure if you submitted an extended version of your contribution.
Journal: Complexity (ISSN 1076-2787)
JCR Journal with Impact factor: 1.7 (Q2)
Deadline for manuscript submissions: 18 October 2024
Special Issue URL: https://onlinelibrary.wiley.com/doi/toc/10.1155/8503.si.941484
Instructions for authors: https://onlinelibrary.wiley.com/page/journal/8503/homepage/author-guidelines
One of the main - possibly unattainable - challenges of computational arts is to build algorithms that evaluate properties such as novelty, creativity, and aesthetics in artistic artifacts or representations. Approaches in this regard have often been based on information-theoretic ideas. Ideas relating mathematical notions of form and balance to beauty date back to antiquity, and in the 20th century attempts were made to develop aesthetic measures based on a balance between order and complexity. In recent years, these ideas have been formalized into the notion that aesthetic engagement occurs when a work sits on the "edge of chaos," between excessive order and excessive disorder, expressed through measures such as the Gini coefficient and Shannon entropy, and through links between cognitive theories of the Bayesian brain and free-energy minimization and aesthetic theories. These ideas have been used both to understand human behavior and to build creative systems.
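As a toy illustration of the order/disorder measures mentioned above, the Shannon entropy and Gini coefficient of a melody treated as a symbol sequence can be computed in a few lines (this sketch is illustrative, not part of the call itself):

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Shannon entropy (in bits) of the symbol distribution in seq.
    0 for a fully repetitive sequence; log2(n) when all n symbols differ."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in counts.values())

def gini(values):
    """Gini coefficient of non-negative values: 0 = perfectly even,
    approaching 1 as the distribution concentrates on one value."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Rank-weighted (mean absolute difference) formulation
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * total)

ordered = "CCCCCCCC"   # maximal order
chaotic = "CDEFGABc"   # every symbol distinct
print(shannon_entropy(ordered), shannon_entropy(chaotic))
print(gini(list(Counter(ordered).values())))
```

An "edge of chaos" heuristic would then look for values between these extremes rather than at either end.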
The use of artificial intelligence and complex systems for the development of artistic systems is an exciting and relevant area of research. In recent years, there has been an enormous interest in the application of these techniques in fields such as visual art and music generation, analysis and performance, sound synthesis, architecture, video, poetry, design, game content generation, and other creative endeavors.
This Special Issue invites original research and review articles focusing both on the use of complexity ideas and artificial intelligence methods to analyze and evaluate aesthetic properties, and on driving systems that generate aesthetically appealing artifacts, including music, sound, images, animation, design, architectural plans, choreography, poetry, text, jokes, etc.
Potential topics include but are not limited to the following:
Dr. Penousal Machado
Dr. Colin Johnson
Dr. Iria Santos
Guest Editors (EvoMUSART 2025)
These variations of the famous Soviet-era folk song, with a jazz-like progression towards the original theme, were created by changing the tonal properties of the theme using the Tonamic method. Apart from the added drums, the output score is not modified.
Please find below a tool for the sonification of integer sequences in the form of a score:
https://musescore1983.pythonanywhere.com/
Here is a demo with the beginning of the Moonlight Sonata, third movement, and a favourite integer sequence of mine: Abstract Moonlight Sonata 3. The tool works like this: it takes a score as input in the form of a MIDI file and then, depending on the sequence, runs back and forth over the score to create a variation. The minimum of the sequence corresponds roughly to the beginning of the score, while the maximum corresponds to the end. Other sequences for sonification can be found at the OEIS.
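The mapping described above can be sketched roughly as follows, assuming the score is simply a list of notes; this is an illustration of the idea, not the actual tool's code:

```python
def sequence_variation(notes, seq):
    """Re-order the score `notes` by mapping each term of an integer
    sequence onto a position in the note list: the sequence minimum
    lands (roughly) on the first note, the maximum on the last, and
    intermediate values make the playhead run back and forth."""
    lo, hi = min(seq), max(seq)
    span = hi - lo or 1  # guard against constant sequences
    last = len(notes) - 1
    return [notes[round((v - lo) / span * last)] for v in seq]

# Example: a C major scale "played" by the first Fibonacci numbers
scale = ["C", "D", "E", "F", "G", "A", "B", "c"]
fib = [1, 1, 2, 3, 5, 8, 13, 21]
print(sequence_variation(scale, fib))
```

With a real score, each entry of `notes` would be a note event (pitch plus duration) read from the MIDI file rather than a letter name.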
A basic L-system, written in JavaScript, converting A to ABA and B to BBB (the standard Cantor dust). I then take generation N and, on the same stave, pair it with generations N+1, N+2, and N+3. Each A token is then mapped to a specific note, according to its generation.
(Since my video won't upload, I have linked it instead.)
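The rewriting scheme described above can be sketched as follows (a Python translation of the idea; the generation-to-pitch mapping here is an illustrative assumption, not the original's):

```python
def lsystem(axiom, rules, n):
    """Expand `axiom` n times under the given rewrite rules;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "ABA", "B": "BBB"}  # Cantor-dust rewrite
gen2 = lsystem("A", rules, 2)
print(gen2)  # ABABBBABA

def to_notes(generation_string, g, base=60):
    """Map each A token to a pitch that depends on its generation g,
    e.g. generation g -> MIDI pitch base + g (illustrative mapping)."""
    return [base + g for ch in generation_string if ch == "A"]

print(to_notes(gen2, 2))
```

Stacking `lsystem("A", rules, n)` for n, n+1, n+2, n+3 on the same stave then gives the paired generations the post describes, each voice with its own pitch offset.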