How Artists Are Using AI to Make Music: Findings from Analyzing Hundreds of Musical Works
How are artists using AI to make music? That’s what our Audio Research team set out to understand when they analyzed 337 musical works created since 2017, including singles, albums, performances, installations, soundtracks, and more.
The analysis examines how artists use AI tools in their creative and production processes. The findings challenge common narratives about AI-powered music creation, such as the idea that AI limits creative possibilities or, at the other extreme, utopian visions of effortless hit-making.
What’s really happening is more nuanced. Artists are experimenting with AI and starting to develop their own strategies for AI integration, with approaches that vary depending on their creative goals and technical expertise.
We wanted to share a few of the strategies identified in our research, which artists and creative professionals can draw on in their own AI exploration. You can read the full paper here: Music and Artificial Intelligence: Artistic Trends, or read on below for the key findings.
#1: Artists prioritize creative control when using AI
Artistic agency and control emerged as a key theme. Rather than automating the entire composition and production process, professionals mainly use AI as a co-composition or sound design tool, adopting modular approaches that preserve their creative agency.
For example, artists might use AI to generate harmonic progressions or drum patterns, but then manually arrange, edit, and layer these elements with traditional instruments and vocals. This approach allows them to leverage AI's generative capabilities while maintaining their distinctive artistic style.
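As a rough illustration of this modular workflow, here is a minimal Python sketch. The `generate_drum_pattern` function is a hypothetical placeholder for whatever generative tool an artist actually uses (it just renders a decaying-noise click track so the example runs as-is); the point is the shape of the workflow: generate one element, then layer and mix it by hand.

```python
import numpy as np
import soundfile as sf

SR = 44_100  # sample rate in Hz

def generate_drum_pattern(prompt: str, seconds: float, seed: int) -> np.ndarray:
    """Hypothetical stand-in for a generative audio tool.

    A real workflow would call a model API here; this stub just renders
    a decaying noise hit on every half-beat so the sketch runs as-is.
    """
    rng = np.random.default_rng(seed)
    n = int(seconds * SR)
    audio = np.zeros(n, dtype=np.float32)
    for start in range(0, n, SR // 2):          # one hit every half second
        length = min(2_000, n - start)
        envelope = np.exp(-np.linspace(0.0, 8.0, length))
        audio[start:start + length] = rng.uniform(-1, 1, length) * envelope
    return audio

# 1. Use AI for one element only: an eight-second drum pattern.
drums = generate_drum_pattern("dry breakbeat, 120 bpm", seconds=8.0, seed=42)

# 2. Layer it with a human-made part (here, a placeholder sine pad).
t = np.linspace(0.0, 8.0, len(drums), endpoint=False)
pad = 0.2 * np.sin(2 * np.pi * 220.0 * t).astype(np.float32)

# 3. Arrange and mix by hand; the artist keeps control of the final balance.
mix = 0.7 * drums + pad
mix /= max(1.0, float(np.abs(mix).max()))       # normalize to avoid clipping
sf.write("modular_mix.wav", mix, SR)
```

In a real session the generated stem would be edited, re-sequenced, and layered with recorded instruments and vocals rather than mixed in one pass; the AI step is just one link in the chain.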
Important note: This study's scope was limited to artists who create music professionally. Our analysis focused on 337 works, a small sample relative to the millions of tracks creators have generated.
#2: Artists are building their own custom creative engines
We also found that artists are training custom models on their own material. They’re curating datasets (such as collections of audio samples and musical data) to develop their own specialized models, much as producers program synthesizers to create their signature sounds. Another form of customization is AI voices: singers like Holly Herndon, Grimes, and Sevdaliza have released their vocal likenesses to the public so that others can make music with them.
These custom models and vocal likenesses give artists AI tools that generate music in their specific style.
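To make the dataset-curation step concrete, here is a minimal, hypothetical Python sketch of assembling a personal training set. The directory layout, quality thresholds, and manifest format are illustrative assumptions, not any specific trainer's requirements, and in a real project the captions would be written by the artist.

```python
import json
from pathlib import Path

import soundfile as sf

# Illustrative paths and thresholds; a real project would set its own.
STEMS_DIR = Path("my_stems")       # the artist's own recordings
MANIFEST = Path("dataset.jsonl")   # one JSON record per training example
MIN_SECONDS = 4.0                  # drop fragments too short to learn from
TARGET_SR = 44_100                 # keep the whole set at one sample rate

records = []
for wav in sorted(STEMS_DIR.glob("*.wav")):
    info = sf.info(str(wav))
    if info.samplerate != TARGET_SR or info.duration < MIN_SECONDS:
        continue                   # skip files that don't meet the spec
    records.append({
        "audio_path": str(wav),
        # In practice the artist would write these descriptions by hand.
        "caption": f"original stem: {wav.stem.replace('_', ' ')}",
        "duration_seconds": round(info.duration, 2),
    })

with MANIFEST.open("w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

print(f"Curated {len(records)} examples into {MANIFEST}")
```

The curation itself, deciding which stems represent your sound and how to describe them, is where the artistic judgment lives; the fine-tuning step then feeds this manifest to whatever training pipeline the team uses.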
#3: Much like past technologies, AI is expanding artistic possibilities
As discussed further in our paper, past technological advances have enabled new forms of expression. Amplifiers, for instance, facilitated the rise of genres like rock, blues, and jazz. Synthesizers opened the door to prog rock and EDM. Sampling was critical to hip-hop.
Each of these technologies was met with resistance at the time. AI could be the next chapter in this story. What we do know from our analysis is that AI is opening up new possibilities for musical expression:
Interactive, dynamic generation: Musicians can now create music that responds instantly to live input. Our research uncovered performances where AI generates accompaniment that adapts to the performer's style or audience reactions in real time (see the sketch after this list).
Rapid iteration: Artists can generate dozens of musical ideas in minutes, compare them quickly, and refine the best ones. This speed allows for experimentation that wasn't practical before.
New genres and artistic formats: The digital nature of AI-generated music is pushing artists beyond standard song structures and album releases. We're seeing musical works that exist as online experiences, installations that generate music continuously, and other formats.
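As a purely illustrative sketch of the real-time loop described above, here is a minimal Python example of the basic pattern: read a live signal, condition generation on it, and stay in time. Both `read_performer_intensity` and `generate_bar` are hypothetical placeholders for a real sensor and a real low-latency model, not anything drawn from the works we analyzed.

```python
import random
import time

BAR_SECONDS = 2.0  # one 4/4 bar at 120 bpm

def read_performer_intensity() -> float:
    """Placeholder sensor: in a live rig this might be the RMS level
    of an audio input or incoming MIDI velocities."""
    return random.random()

def generate_bar(intensity: float) -> str:
    """Placeholder generator: map the live signal to a generation
    parameter, e.g. denser accompaniment when the performer plays harder."""
    density = "sparse" if intensity < 0.5 else "dense"
    return f"{density} accompaniment bar (intensity={intensity:.2f})"

for _ in range(4):                      # four bars of the loop
    level = read_performer_intensity()  # 1. listen to the live input
    bar = generate_bar(level)           # 2. condition generation on it
    print("playing:", bar)              # 3. in a real rig: send to audio out
    time.sleep(BAR_SECONDS)             # 4. stay in time with the performance
```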
What this means for creative teams
These findings suggest a few strategies for creative professionals looking to integrate AI into their music production workflows:
Take a modular approach: The creative process isn't a single one-and-done generation; it's a series of steps. Try using AI for specific elements, like generating harmonic progressions, creating percussion samples, or exploring textural possibilities, then build around them to get the finished piece you want.
Invest in customization: The artists pushing boundaries are those creating bespoke tools tailored to their creative needs. Consider how your team might partner with technical experts to fine-tune models (customize AI systems to better match your style) or curate training data specific to your projects.
Use AI as an iterative tool: AI excels at quickly generating multiple options for comparison and refinement. For instance, you might create twenty variations of a melody, pick the best one, and develop it further.
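To illustrate that generate-and-select loop, here is a minimal Python sketch. Both `generate_melody` and `score` are hypothetical placeholders: the first stands in for a model call (it renders a random pentatonic line so the example runs without an external service), and the second stands in for the artist's own judgment, or an automated proxy for it.

```python
import numpy as np

SR = 44_100  # sample rate in Hz

def generate_melody(seed: int, seconds: float = 4.0) -> np.ndarray:
    """Hypothetical stand-in for a model call; it renders a random
    pentatonic line so the sketch runs without any external service."""
    rng = np.random.default_rng(seed)
    scale = np.array([220.0, 246.9, 277.2, 329.6, 370.0])  # A-major pentatonic
    note_len = int(seconds * SR / 8)                        # eight notes
    t = np.linspace(0.0, note_len / SR, note_len, endpoint=False)
    notes = [np.sin(2 * np.pi * rng.choice(scale) * t) for _ in range(8)]
    return np.concatenate(notes).astype(np.float32)

def score(audio: np.ndarray) -> float:
    """Placeholder taste function: in practice the artist auditions the
    takes; an automated pass might rank them with an embedding model."""
    return float(np.abs(np.diff(audio)).mean())  # e.g. prefer brighter takes

# Generate twenty variations, compare them, and keep the strongest one.
variations = {seed: generate_melody(seed) for seed in range(20)}
best_seed = max(variations, key=lambda s: score(variations[s]))
print(f"Kept variation seed={best_seed}, score={score(variations[best_seed]):.4f}")
```

Varying only the seed keeps the comparison fair across takes; the chosen variation then becomes raw material for the manual arrangement and editing described above.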
Looking ahead
We hope our work contributes to understanding how early adopters have used AI so far and serves as inspiration for future musicians working with AI.
AI offers exciting opportunities as well as important challenges and limitations. The full paper provides more detail on both the creative breakthroughs and the practical considerations artists encounter when working with AI tools.
You can learn more here about the Audio Research team and our Stable Audio model family.