How AI May Transform the Music Industry for Better or Worse
Companies such as AIVA and OpenAI have created powerful automated music creation technology that serves as yet another reminder that artificial intelligence may have a large impact on the music industry.
- AIVA has released an automated soundtrack-creation tool that many companies are already using. Generated music lets them skip the maze of music licensing, royalties, and potential lawsuits.
- OpenAI has also released very powerful models that can generate music from scratch, known as MuseNet and Jukebox (though both are still in the early stages).
Is the impact of AI going to be positive or negative for the music industry?
It’s very difficult to tell, but the music industry has already been paving the way for technology to become deeply ingrained in many aspects of both music creation and listening. For example:
- Music licence fees and complicated clearance processes have made companies much more open to using automatically generated music, because it reduces both costs and legal risk.
- Music producers often use plugin-assisted and automated tools in their workflows, from chord generation to beat making.
- AI is used for automated music mixing and mastering.
- Spotify and other streaming services use big data to influence listeners’ habits far more than you might think. Their algorithms can analyze a release before a human has even listened to it.
Before we continue, check out AI generated rock music composed by AIVA:
And here is an early example of OpenAI’s automated music creation efforts (the model behind this has since been succeeded by a much more powerful one, known as GPT-3).
The current global situation may have changed the music industry forever, with many music professionals left jobless. By the time we are able to play festivals and concerts again, the world might be a very different place.
However, there may also be even more shifts in the music industry in the near future, driven by technology.
We will discuss where we are right now, and how we think music might be impacted by AI in the near future.
How Music Makers are Currently Paving the Way for Automation
Big Data Use by Streaming Services
Did you know that Spotify can already tell a lot about a track before any human has even listened to it?
Many music makers feel compelled to write and produce music that will satisfy the streaming algorithm, in fear of releasing music that nobody will listen to.
Spotify even has an API that you can hook into which provides this information.
It can tell, with a good degree of confidence:
- How lively or sad the music is.
- The musical key and tempo.
- Whether the music is live.
- Whether it has vocals or is an instrumental.
- …and likely MUCH more information.
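Spotify exposes exactly these attributes through its Web API audio-features endpoint. Below is a minimal sketch of fetching them with only the Python standard library; the function names are our own, and you would need a valid OAuth access token from Spotify:

```python
import json
import urllib.request

# Spotify Web API endpoint for precomputed track attributes
# (valence, energy, key, tempo, liveness, instrumentalness, ...).
AUDIO_FEATURES_URL = "https://api.spotify.com/v1/audio-features/{track_id}"

def build_features_request(track_id: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for one track's audio features."""
    return urllib.request.Request(
        AUDIO_FEATURES_URL.format(track_id=track_id),
        headers={"Authorization": f"Bearer {token}"},
    )

def get_audio_features(track_id: str, token: str) -> dict:
    """Fetch and decode the audio-features JSON for a track."""
    with urllib.request.urlopen(build_features_request(track_id, token)) as resp:
        return json.load(resp)
```

The returned JSON includes fields such as `valence`, `energy`, `key`, `tempo`, `liveness`, and `instrumentalness`, which map directly onto the attributes listed above.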
When Spotify users start listening to a track, it can also tell:
- If users are skipping the track (a sign that they don’t like it)
- How long they are listening to the track
- How often they repeat the track
- If they are adding the track to playlists
- …and much more.
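As a rough illustration (not Spotify’s actual pipeline; the event fields and the 30-second skip threshold here are assumptions), signals like these can be derived from raw play events with very little code:

```python
SKIP_THRESHOLD_SECONDS = 30  # assumed cutoff below which a play counts as a skip

def track_stats(events):
    """Summarize listening behaviour for one track.

    events: list of dicts like {"user": str, "seconds_played": float}.
    """
    plays = len(events)
    skips = sum(1 for e in events if e["seconds_played"] < SKIP_THRESHOLD_SECONDS)
    total_seconds = sum(e["seconds_played"] for e in events)
    return {
        "plays": plays,
        "skip_rate": skips / plays,
        "avg_seconds_played": total_seconds / plays,
    }

stats = track_stats([
    {"user": "a", "seconds_played": 12.0},   # skipped early
    {"user": "b", "seconds_played": 180.0},  # listened through
])
print(stats)  # {'plays': 2, 'skip_rate': 0.5, 'avg_seconds_played': 96.0}
```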
All of this information feeds into the algorithms that automatically create and tailor users’ playlists to their interests, which means these streaming services are heavily influencing people’s musical tastes by automated means.
We even wrote a list of websites that can help you analyze your Spotify listening habits.
Hardware to Software. Analog to Digital.
In music production, we have moved from hardware instruments and analog gear to virtual instruments.
This is not just happening in electronic music; it spans a wide variety of musical genres.
Orchestras, guitars, analog synthesizers and physical drum kits are now often replaced with music sampled or generated from a laptop (e.g. through Logic Pro, Ableton Live, Cubase, GarageBand, etc.).
Music producers have also made heavy use of samples for the last few decades. This is particularly prevalent for drums, vocals, and pads, but samples can stand in for practically any instrument or sound you can think of.
Audiences and listeners have gotten accustomed to virtual instruments and sampled sounds. This could make it quite easy for AI to be part of the music creation process.
Chord and Melody Generation
Music generation tools are now in heavy use among producers, helping them generate chords, melodies, and harmonies at the click of a button.
These tools generally do not yet use AI, but given that most modern pop genres follow a very limited set of chord progressions and melody types, it’s easy to imagine artificial intelligence producing some very good-sounding progressions in the near future.
This may start in the form of creative automated tools for producers, such as modern plugins for melody generation.
But eventually, the music producer may be removed from the equation entirely, as in fully automated music generation.
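To show how little machinery the current generation of (non-AI) chord tools actually needs, here is a toy sketch in plain Python; the note spelling and the choice of the I-V-vi-IV pattern are our own illustrations, not taken from any specific plugin:

```python
# Build diatonic triads in a major key and emit the ubiquitous
# I-V-vi-IV pop progression by stacking thirds within the scale.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the tonic

def major_scale(tonic):
    root = NOTE_NAMES.index(tonic)
    return [NOTE_NAMES[(root + step) % 12] for step in MAJOR_SCALE_STEPS]

def diatonic_triad(tonic, degree):
    """Triad built on a scale degree (1-7): root, third, and fifth of the scale."""
    scale = major_scale(tonic)
    return [scale[(degree - 1 + offset) % 7] for offset in (0, 2, 4)]

def progression(tonic, degrees):
    return [diatonic_triad(tonic, d) for d in degrees]

# The classic "four chords" pattern in C major:
print(progression("C", [1, 5, 6, 4]))
# [['C', 'E', 'G'], ['G', 'B', 'D'], ['A', 'C', 'E'], ['F', 'A', 'C']]
```

A real chord plugin layers voicing, rhythm, and genre presets on top, but the harmonic core is this small, which is exactly why it is such an inviting target for automation.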
Automated Mixing and Mastering
During the mixing and mastering process, many music makers are making use of intelligent tools, such as automated mixing and mastering services.
For example, the iZotope Neutron plugin currently has a detection feature that can set mix levels to what it classes as optimal.
There are also online mastering services, such as LANDR, whose automated mastering many artists are quite happy with. We are still quite a long way from getting top-quality professional mixing and mastering done in an automated fashion. However, if AI keeps moving in its current direction, we can imagine these processes being automated very effectively a few iterations down the line.
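The internals of tools like Neutron and LANDR are not public, but one sub-task of automated mastering, loudness normalization, can be sketched in a few lines. Samples are assumed to be floats in the range -1.0 to 1.0, and the target level is an arbitrary choice:

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_to_target(samples, target_dbfs=-18.0):
    """Linear gain factor that brings the signal's RMS to the target dBFS."""
    target_linear = 10 ** (target_dbfs / 20)  # dBFS -> linear amplitude
    return target_linear / rms(samples)

def normalize(samples, target_dbfs=-18.0):
    """Scale all samples so the block sits at the target loudness."""
    g = gain_to_target(samples, target_dbfs)
    return [s * g for s in samples]
```

A real mastering chain adds equalization, multiband compression, and limiting on top of simple level matching, but those stages are equally amenable to automation.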
AI Music Generation
Automated music generation might seem like an unsavoury concept, but think about it from the perspective of a company or brand. The music industry is a complicated maze of licences, royalties, and lawsuits.
If a company can get clean access to unique music that they can generate at the click of a button, then this may be a good business proposition for any company, and they will happily pay for it.
Even if musical artists come to a brand with a great song, the brand may still choose to go for generated music, because of the lower costs and lower legal risks.
As mentioned in the introduction of this article, AIVA is an artificial intelligence engine that can create some very passable music, which would be highly usable for many brands.
With the previous generation of OpenAI’s models (GPT-2), they made two very impressive, and yet slightly worrying, releases:
- MuseNet is an AI model capable of generating MIDI files for many different music genres. MIDI is effectively the industry-standard notation for electronic instruments and gear, and can drive synthesizers, drum machines, and basically any other virtual instrument.
- Jukebox takes this one step further, outputting raw audio files directly. This could eventually be improved to the point where an audience is fooled into thinking it is a human-produced and recorded song.
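To see why MIDI is such a convenient target for a model like MuseNet, note that a MIDI performance is just a timed stream of tiny byte messages. Here is a minimal sketch of the two core message types (our own illustration of the MIDI format, not OpenAI’s code):

```python
# MIDI channel voice messages: a status byte followed by two data bytes.
# 0x90 is the note-on status nibble, 0x80 is note-off; the low nibble
# carries the channel (0-15). Helper names here are our own.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Note-on message: status byte (0x90 | channel), note number, velocity."""
    return bytes([0x90 | channel, note, velocity])

def note_off(channel: int, note: int) -> bytes:
    """Note-off message: status byte (0x80 | channel), note number, zero velocity."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) played at velocity 100 on channel 0:
print(note_on(0, 60, 100).hex())  # -> "903c64"
```

Because a whole piece reduces to a sequence of small discrete tokens like these, MIDI generation is a much easier problem than generating raw audio, which is what makes Jukebox the bigger step.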
Both are still at quite an early phase, and keep in mind that MuseNet and Jukebox come from the previous generation of OpenAI’s releases (GPT-2). OpenAI has recently released GPT-3, a significantly more powerful successor. So if you don’t think GPT-2 is impressive, wait until you see what the third iteration can do.
But it’s not just AI: the music industry in general has been using automated tools, deep learning, big data, and simply intelligent plugins to make both the music creation and consumption processes quicker and more efficient.
Are Jobs at Risk in the Music Industry?
What is the outlook for AI and the music industry, based on all of the information above?
We are currently at a point where music is becoming heavily influenced by automated technology.
There are many parts of the chain of both the music creation and music listening process where AI can come in and have quite an impact.
This may lead to much lower demand for certain professional services, such as mixing and mastering for certain music producers, and it may also mean less income for artists.
For example, if automatically generated music is used as background music in videos and movies, those creators do not need to pay artists any royalties, and the artists lose that income.
Think about how the modern music industry has damaged orchestras. Classical music is now in quite limited demand.
It’s relatively difficult to make a very good living in classical music unless you are part of a good orchestra.
On top of that lower demand, orchestral sampling is heavily used in modern music production to avoid the large expense of paying a full orchestra.
AI in music may be both good and bad.
It may help automate certain parts of music production that act as a barrier for people getting into making music.
It may make music-making far more accessible to people in the same way that current music DAWs and music plugins have made it easier and more cost-effective for people to make music.
However, there is always the danger that AI may have too much of an input into the creative process in music and remove its artistic element.
These algorithms may shape musical tastes to the point where musical purists no longer think that music has the same level of feeling that it used to.
What do you think is the future of AI in music? Do you think it’s going to be a good thing or a bad thing?
Make sure to write a comment in the comment section below. We would be really interested to hear your thoughts!