Google’s new AI system is so advanced that it can generate music in any genre from a written description. However, due to the potential risks, the company has no current plans to release it.
Google’s MusicLM isn’t the first AI music-generation system, but it’s a notably capable one. Riffusion, an AI that composes music by generating spectrogram images; Dance Diffusion; Google’s own AudioLM; and OpenAI’s Jukebox are just a few of the other projects. However, due to technical constraints and limited training data, no system has yet been able to generate music that is especially complex in composition or high in audio quality.
It’s possible that MusicLM is the first of its kind to do so.
As described in an academic paper, MusicLM was trained to generate coherent songs from descriptions of “substantial intricacy,” such as “enchanting jazz blues with a noteworthy sax solo and a solitary singer” or “Berlin ’90s techno with a heavy bottom and forceful kick.” Interestingly, the resulting songs sound much like what a human musician might create, albeit without the same originality or cohesion.
The quality of the samples is undeniable, especially considering that no musicians or instrumentalists were involved in their creation. MusicLM can pick up on subtleties like instrumental riffs, melodies, and moods, even when given rather long and rambling descriptions.
MusicLM can do more than just generate snippets of songs. Google’s researchers demonstrate that the system can build on existing melodies, whether hummed, sung, whistled, or played on an instrument. Furthermore, MusicLM can take a series of sequentially written descriptions (such as “time to meditate,” “time to wake up,” “time to run,” and “time to give 100%”) and turn them into a musical “story” or narrative up to several minutes in length, suitable for use as a film score.
Not only that: images and text can be used to prompt MusicLM to “play” a certain instrument or compose in a certain genre. The system can compose music evoking specified places, time periods, and other criteria, and even the experience level of the AI “musician” can be adjusted.
That said, MusicLM is far from perfect. Some of the samples have a distorted quality, an inevitable byproduct of the training process. And while MusicLM can generate vocals, including choral harmonies, they leave much to be desired: most of the “lyrics” are either completely incomprehensible or contain only a few intelligible English words, performed by synthesized voices that sound like composites of the styles and voices of multiple performers.
Notably, the Google researchers themselves point out the system’s many ethical issues, such as its propensity to incorporate copyrighted content from training examples into the generated songs. During testing, they found that around 1% of the audio the system generated was replicated directly from the tracks on which it was trained; based on that finding, they have decided not to release MusicLM in its current form.
“We recognize the possibility of exploitation of creative content related to the use case,” the authors of the paper admitted, stressing the importance of further research into mitigating these risks in music generation.
Even though systems like MusicLM are marketed as tools to aid musicians rather than replace them, it seems likely that substantial legal difficulties will come to the fore if they are ever made available. They already have, albeit with more basic forms of AI. In 2020, Jay-Z’s record label filed copyright strikes against a YouTube channel called Vocal Synthesis for allegedly using artificial intelligence to create Jay-Z covers of songs such as Billy Joel’s “We Didn’t Start the Fire.” YouTube initially removed the clips but later reinstated them, finding the takedown requests “incomplete.” The current legal status of deepfaked music remains unclear.
Eric Sunray, currently a legal intern at the Music Publishers Association, wrote a whitepaper arguing that AI music generators like MusicLM violate music copyright by creating “tapestries of coherent audio from the works they ingest in training, thereby infringing the United States Copyright Act’s reproduction right.” Since Jukebox’s debut, some have questioned the ethics of training AI models on copyrighted music. Similar criticism has been leveled at the training data used by AI systems that generate images, code, and text, which is often scraped from the web without the creators’ permission.
Andy Baio of Waxy has speculated that music generated by an AI system might be considered a derivative work, in which case only its original elements would be protected by copyright. Of course, it’s not obvious what would count as “original” in such music; using it commercially would be a leap into the unknown. Baio believes that courts will have to make case-by-case determinations, even when the generated music is used for fair use purposes such as parody or commentary.
An answer to these questions may come soon. Several pending lawsuits could affect music-generating AI, including one concerning the legal protections owed to artists whose work is used without their permission to train AI systems. Time will tell.