Users can create music tracks from written prompts


Adobe has revealed its Project Music GenAI Control, which allows users to create music from written prompts.

Users can then edit the software’s creation to fine-tune it to their needs. Examples of prompts given by Adobe include “powerful rock,” “happy dance,” and “sad jazz.” The generated music can then be edited to adjust the tempo, structure, and repeating patterns of a piece; increase or decrease the audio’s intensity at chosen points; extend the length of a clip; remix a section; or generate a seamlessly repeatable loop.

A use case given by the tech giant is creating a new piece of music rather than cutting down an existing track, which could potentially save time.

The tool is being developed in collaboration with the University of California, San Diego (Zachary Novack, Julian McAuley, Taylor Berg-Kirkpatrick) and the School of Computer Science at Carnegie Mellon University (Shih-Lun Wu, Chris Donahue, Shinji Watanabe).

Nicholas Bryan, senior research scientist at Adobe Research and one of the creators of the technologies, said: “With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length.

“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music.”

Project Music GenAI Control will be part of Adobe’s Firefly tools. Last year, Adobe revealed a number of other AI tools it is working on, including automated dubbing, generative fill for video content, and more.