Artificial Intelligence in the Music Industry
Artificial Intelligence in the Music Industry is a recent phenomenon that has already had a significant impact on the way we conceive, produce, and consume music. Artificial intelligence (AI) works by learning and responding in ways that mimic human thought processes. Major corporations in the music industry have developed applications that can create, perform, and/or curate music through AI's simulation of human intelligence. This subset of the music industry is relatively new but growing at a rapid rate. Although its advancements have been great, the use of AI in music could force artists to compete against machines for recognition. Potential legal disputes over copyright infringement are also a major concern.
A Look Into AI
AI as a concept came into popular consciousness in the 1950s with British mathematician Alan Turing's paper on how to build an 'intelligent machine' and how to test its intelligence. Since then, interest in AI as a potential field of study has waxed and waned. The use of AI in media, like books and movies, has increased awareness of AI in both the general public and academia. Recently, we have seen an exponential increase in the implementation of AI in our everyday lives, from virtual assistants like Amazon's Alexa and Apple's Siri to fraud detection in personal banking.
AI In Music
The first recorded instance of computer-generated music was produced in 1951 on a computer Turing helped design at the Computing Machine Laboratory in Manchester, England. The machine was programmed to play individual musical notes, and a BBC outside-broadcast unit recorded the melodies it generated. Artificial intelligence was first used to compose music in 1957 by Lejaren Hiller and Leonard Isaacson of the University of Illinois at Urbana–Champaign. Hiller and Isaacson programmed the ILLIAC (Illinois Automatic Computer) to generate music written start-to-finish by artificial intelligence. Around the same time, Russian researcher R. Kh. Zaripov published the first widely available paper on algorithmic music composition, using the historical Russian computer Ural-1 to do so.
Since those milestones, research and software in AI-generated music have flourished. In 1974, the first International Computer Music Conference (ICMC) was hosted at Michigan State University in East Lansing, Michigan. The ICMC is now an annual event hosted by the International Computer Music Association (ICMA) for AI composers and researchers alike.
One of the most notable breakthroughs in computer-generated music was the Experiments in Musical Intelligence (EMI) system. Developed in 1981 by David Cope, an American composer and scientist at the University of California, Santa Cruz, EMI was able to analyze different types of music and create unique compositions by genre. It has since created more than a thousand musical works modeled on over 30 different composers.
In 2016, Google, a leader in AI technology, released Magenta. Its mission statement was to become “an open-source research project exploring the role of machine learning as a tool in the creative process”. Instead of following hard-coded rules, Magenta learns by example from human-made music. In this way, it acts as an assistant to humans in the creative process rather than a replacement for them.
In 2012, Sony created Flow Machines, a research and development project focused on improving AI in music production. Flow Machines is an attempt to bridge the gap between human and AI production. Sony states that it "is a tool for a creator to get inspiration and ideas to have their creativity greatly augmented". The main component of the project is Flow Machines Professional, an AI-assisted music composing system that lets creators compose melodies in many different styles. By operating Flow Machines, creators can generate melodies, chords, and basslines, then develop their own ideas inspired by those suggestions. From there, the process is the same as regular music production: creators arrange the output in a DAW and handle lyrics, recording, mixing, mastering, and so on.
Audio mastering has also grown through AI. A company called LANDR is one of multiple AI-based mastering services. LANDR provides musicians with a more affordable alternative to human-based mastering. The company is used by streaming services such as Spotify and Apple Music, as well as music companies like Pioneer DJ and BeatPort. To date, more than 2 million musicians have used LANDR to master more than 10 million songs.
Another way AI is used in the music industry is in recommendation models. One of the most prominent users of this tool today is Spotify, which employs a host of machine learning techniques to predict and customize playlists for its users. The most prominent example is the "Discover Weekly" playlist, a collection of 30 songs curated for each user every week based on search history, listening patterns, and predictive models. One technique behind this is collaborative filtering, which compares users with similar behaviors to predict what a given user might enjoy or want to listen to next. Another tool is natural language processing (NLP), which analyzes human speech patterns through text. The AI accumulates words associated with different artists by scanning the internet for articles and posts written about them; it can then link artists who share similar "cultural vectors," or top terms, and recommend similar artists to users. Spotify also utilizes convolutional neural networks (CNNs) to recommend songs. CNN-based models allow Spotify to analyze raw audio data such as a song's BPM, musical key, loudness, and other parameters; the model then finds songs with similar parameters and recommends them to users. CNNs have been an extremely effective tool for discovering quality music yet to be recognized by the masses. One final tool Spotify uses to curate user-specific playlists is audio models, which are most useful when an artist is new and has few listeners or little written about them online. Audio models analyze raw audio tracks and categorize them with similar songs; this way, lesser-known artists can be recommended alongside popular tracks.
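The collaborative filtering idea described above can be sketched in a few lines: users are represented as vectors of play counts, similarity between users is measured, and unheard songs are scored by the similarity-weighted plays of other users. This is a minimal illustration only; the user names, song names, and play counts are invented, and production systems like Spotify's operate on vastly larger matrices with more sophisticated models.

```python
# A minimal sketch of user-based collaborative filtering.
# All users, songs, and play counts here are made up for illustration.
from math import sqrt

# Each user's play counts for four hypothetical songs.
plays = {
    "ana":  {"song_a": 5, "song_b": 3, "song_c": 0, "song_d": 0},
    "ben":  {"song_a": 4, "song_b": 4, "song_c": 0, "song_d": 1},
    "cara": {"song_a": 0, "song_b": 0, "song_c": 5, "song_d": 4},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    dot = sum(u[s] * v[s] for s in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user, plays):
    """Score songs the user hasn't heard by similarity-weighted plays of others."""
    target = plays[user]
    scores = {}
    for other, vec in plays.items():
        if other == user:
            continue
        sim = cosine(target, vec)
        for song, count in vec.items():
            if target[song] == 0 and count > 0:
                scores[song] = scores.get(song, 0.0) + sim * count
    # Highest-scoring unheard songs first.
    return sorted(scores, key=scores.get, reverse=True)

# "ana" and "ben" have similar listening behavior, so ben's plays of
# song_d push it to the top of ana's recommendations.
print(recommend("ana", plays))
```

Real systems replace the explicit similarity loop with matrix-factorization or neural models, but the underlying intuition is the same: similar listeners predict each other's future tastes.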
In 2021, Spotify was approved for a controversial patent that outlined technology designed to discern information about a user based on an AI analysis of their voice, detecting qualities such as age, gender, and emotion in order to improve music recommendations. Users have expressed privacy concerns, seeing the idea of Spotify listening to their daily speech and background noises as invasive.
Spotify also releases many of the variables it collects for use in its recommendation algorithms. Through its web API, developers can pick a subset of the Spotify library and pull data on all of those songs. This data includes basic musical features like BPM and key, as well as more nuanced measurements such as danceability, loudness, and acousticness. While this data is publicly available, Spotify is vague about how the measurements are produced; by keeping those methods classified, it maintains control and ownership over the information. These measurements are used within Spotify's own AI models, but also in third-party apps and programs that use AI to recommend music. Spotify's API is available for free with an account and greatly increases the availability of musical analytics. In addition to helping users find new music, this data can be used to analyze trends in music and predict future popularity.
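To make the API data concrete, here is a small sketch of the kind of trend analysis the paragraph above describes. The track records below are invented stand-ins for the per-track audio features (danceability, tempo, acousticness, etc.) that Spotify's Web API returns; a real client would fetch them over HTTPS with an OAuth access token rather than hard-coding them.

```python
# Hypothetical audio-feature records, shaped like the fields Spotify's
# Web API exposes per track. The names and values are made up.
tracks = [
    {"name": "track_1", "danceability": 0.82, "tempo": 122.0, "acousticness": 0.05},
    {"name": "track_2", "danceability": 0.41, "tempo": 84.0,  "acousticness": 0.73},
    {"name": "track_3", "danceability": 0.67, "tempo": 118.0, "acousticness": 0.12},
]

def mean(values):
    return sum(values) / len(values)

# Aggregate features across a subset of the catalog, e.g. to track how
# average danceability or tempo shifts over time.
summary = {
    "avg_danceability": mean([t["danceability"] for t in tracks]),
    "avg_tempo": mean([t["tempo"] for t in tracks]),
}

# Rank tracks by a single feature, e.g. to seed a high-energy playlist.
most_danceable = max(tracks, key=lambda t: t["danceability"])["name"]

print(summary, most_danceable)
```

Third-party tools that recommend music or predict popularity from Spotify's data are, at bottom, richer versions of this: pull the feature vectors, aggregate or compare them, and rank.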
Warner Bros and Endel
In 2019, Warner Music Group became the first major music label to sign a deal for the rights to an artificial intelligence software. The development company of this software, Endel, agreed to create 20 albums throughout the course of the year, each focusing on a different mood. The development team uses algorithms to create “personalized” songs designed to lower listeners' stress, boost their productivity, and more. As the songs for each mood are still created through the core software, every album is created “just by pressing one button.” Although this news was met with some concern from artists and listeners alike, Endel does not view itself as competition for more established musicians, instead generating sounds designed to “blend with the background.”
The Commodification of Music
Some ethical concerns about the future of music and AI have arisen over recent years, one of which is the commodification of music. With the popularization of AI-generated music on the horizon, there is concern that music will be, or is currently being, made and sold solely for profit. Music has long been seen as an essentially human art medium that draws from emotion and experience, and many would assume real music is something no machine could recreate. However, emotion and experience might not be key ingredients of a chart-topping song these days: with the power to analyze and remix music that is already popular, AI technology can easily generate songs that are trendy and catchy. This also poses the threat of music becoming homogeneous or lacking variety.
One of the biggest ethical dilemmas that AI-generated music is facing is copyright infringement. Allowing AI systems to listen to copyrighted music and then generate similar songs without compensating or citing the original artist could result in huge legal complications. As laws currently stand, copyright infringement can only occur if AI creates a song that sounds similar to an existing song and claims it as its own. This is a hard line to draw because the call depends on how similar the songs are: what does "similar" look like, and can it be defined in a legal context? Copyright law was written without AI systems in mind, so lawmakers likely never considered scenarios like a machine listening to and pulling from an artist's entire discography to create one song. From an artist's point of view, it may seem unfair not to be credited or compensated for such use.
In the case of Endel, the deal posed a unique challenge to Warner. A copyright lawyer was hired, and it was eventually decided that every software engineer involved would be credited as a songwriter. Whether this will become the standard remains to be seen, as there are many valid interpretations of how involved these engineers truly were in the creation of the music.
In anticipation of similar cases in the future, the Copyright Office released a statement asserting that for a work to be registered, it “must be created by a human being.” This is in contrast to the United Kingdom, where works created by a machine are credited to the creator(s) of the machine in question. While parallels can be drawn between AI in music and AI in other artistic mediums, such as animators partially utilizing algorithmic software in the creation of movies, a consensus has yet to be reached in the realm of music.
AI's Impact on Employment
Businesses are being reshaped by technology, and those in the music industry are no exception. According to a McKinsey report, 70 percent of companies will have adopted at least one AI technology by 2030. Artificial intelligence is expected to complement and augment human capabilities: as decision-making becomes more effective and efficient thanks to the insights and support AI provides, it can drive growth and innovation, and the creative process will likely transform as a result. Since its inception, AI has threatened human employment in various fields. This phenomenon has already begun in the music industry and will continue as AI is increasingly implemented. While AI may make many jobs replaceable, it can also provide new opportunities for human-centric careers. Like many aspects of AI's impact on music, how its effect on employment and opportunities will play out in the future is unclear.
- Freeman, Jeremy. “Artificial Intelligence and Music — What the Future Holds?” Medium, 24 Feb. 2020.
- Anyoha, Rockwell. “The History of Artificial Intelligence” Harvard University, August 28, 2017, https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.
- Adams, R. “10 Powerful Examples Of Artificial Intelligence In Use Today.” Forbes, 6 Nov. 2017, www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-today/?sh=337d635e420d.
- Li, Chong. “A Retrospective of AI + Music - Prototypr.” Medium, 25 Sept. 2019, blog.prototypr.io/a-retrospective-of-ai-music-95bfa9b38531.
- Cope, David. "Experiments in Musical Intelligence" University of California, Santa Cruz, http://artsites.ucsc.edu/faculty/cope/experiments.htm
- Magenta. Google, magenta.tensorflow.org.
- “How Google Is Making Music with Artificial Intelligence.” Science | AAAS, 8 Dec. 2017, www.sciencemag.org/news/2017/08/how-google-making-music-artificial-intelligence.
- Flow Machines. Sony, https://www.sonycsl.co.jp/paris/2811/.
- LANDR. “Create, We'll Do The Rest.” https://www.landr.com/en/.
- LFlexion. “How AI Helps Spotify Win in the Music Streaming World.” Medium, 22 Aug. 2020, https://becominghuman.ai/how-big-data-and-ai-has-changed-the-music-industry-c28ad1573d7f.
- Sen, Ipshita. “How AI Helps Spotify Win in the Music Streaming World.” Outside Insight, 26 Nov. 2018, outsideinsight.com/insights/how-ai-helps-spotify-win-in-the-music-streaming-world.
- Savage, Mark. "Spotify wants to suggest songs based on your emotions" BBC, 28 January 2021, https://www.bbc.com/news/entertainment-arts-55839655
- Spotify. "Spotify for Developers: Web API Documentation" https://developer.spotify.com/documentation/web-api/
- Leclercq, Paul "Music Recommendation service with the Spotify API, Spark MLlib and Databricks" https://medium.com/@polomarcus/music-recommendation-service-with-the-spotify-api-spark-mllib-and-databricks-7cde9b16d35d
- Menten, Matthew et al. "Temporal Trends in Music Popularity - A Quantitative analysis of Spotify API data" 12 December 2018
- Wang, Amy. "Warner Music Group Signs an Algorithm to a Record Deal" Rolling Stone, 23 March 2019.
- Staff. “What Does Commodification Mean for Modern Musicians?” Dorico, 13 Mar. 2018, blog.dorico.com/2018/01/commodification-music-mean-modern-musicians.
- “How AI Is Benefiting The Music Industry?” Tech Stunt, 20 Aug. 2020, techstunt.com/how-ai-is-benefiting-the-music-industry.
- “We’ve Been Warned About AI and Music for Over 50 Years, but No One’s Prepared.” The Verge, 17 Apr. 2019, www.theverge.com/2019/4/17/18299563/ai-algorithm-music-law-copyright-human.
- Burroughs, Scott. "Endel And The Coming Robot Copyright Reckoning" Above The Law, 27 March 2019, https://abovethelaw.com/2019/03/endel-and-the-coming-robot-copyright-reckoning/
- Marr, Bernard. “The Amazing Ways Artificial Intelligence Is Transforming The Music Industry” Forbes, 5 July. 2019, https://www.forbes.com/sites/bernardmarr/2019/07/05/the-amazing-ways-artificial-intelligence-is-transforming-the-music-industry/?sh=6283f7785072.
- Thomas, Mike. “Artificial Intelligence's Impact on the Future of Jobs” Built In, 8 Apr. 2020, https://builtin.com/artificial-intelligence/ai-replacing-jobs-creating-jobs.