Artificial Intelligence in the Music Industry

From SI410

Revision as of 20:49, 13 March 2021


Artificial Intelligence in the Music Industry is a relatively new phenomenon that has already had a major impact on the way we produce, consume, and experience music. Artificial intelligence (AI) is essentially a system created to learn and respond in a way similar to human beings. To some degree, AI technologies can simulate human intelligence, and many different companies have developed applications that can create, perform, and/or curate music through the power of AI[1]. The industry is young but growing rapidly, which may cause future problems for artists competing against machines for recognition, as well as potential legal disputes over copyrighted material used in the AI learning process.

[Image: The Illinois Automatic Computer (ILLIAC)]

History

The first recorded instance of computer-generated music was in 1951 by British mathematician Alan Turing. At the Computing Machine Laboratory in Manchester, England, Turing programmed individual musical notes into the computer, and a BBC outside-broadcast unit recorded the variety of different melodies it generated. Artificial intelligence was first used to create music in 1957 by Lejaren Hiller and Leonard Isaacson of the University of Illinois at Urbana–Champaign[2]. They programmed the ILLIAC (Illinois Automatic Computer) to generate music written entirely by artificial intelligence. Around the same time, Russian researcher R. Kh. Zaripov published the first widely available paper on algorithmic music composition, using the historical Russian computer URAL-1 to do so[3][4].


From then on, research and software around AI-generated music began to flourish. In 1974, the first International Computer Music Conference (ICMC) was hosted at Michigan State University in East Lansing, Michigan. It became an annual event hosted by the International Computer Music Association (ICMA) for AI composers and researchers alike[5].

Applications

Music Production

One of the biggest breakthroughs in computer-generated music was the Experiments in Musical Intelligence (EMI) system. Developed by David Cope, an American composer and scientist at the University of California, Santa Cruz, EMI was able to analyze different types of music and create unique compositions by genre. It has since created more than a thousand musical works based on over 30 different composers[6][7].

In 2016, Google, one of the leaders in AI technology, released Magenta. Magenta is “an open source research project exploring the role of machine learning as a tool in the creative process”[8]. Instead of following hard-coded rules, Magenta learns by example from human-created work. It can be thought of as an assistant to humans in the creative process, rather than a replacement[9].

Recommendation Models

Another way AI is used in the music industry is in recommendation models. One of the most prominent users of this tool today is Spotify, which uses a variety of machine learning techniques to predict and customize playlists for its users. One example is the Discover Weekly playlist, a list of 30 songs curated specifically for each user every week based on search history, listening patterns, and predictive models. One technique behind this is collaborative filtering: Spotify compares users with similar behaviors to predict what a given user might enjoy or want to listen to next. Another tool is natural language processing (NLP), which analyzes human language in text. The system accumulates words associated with different artists by scanning the internet for articles and posts written about them; artists whose “cultural vectors”, or top associated terms, overlap can then be recommended alongside one another. One final tool Spotify uses to curate user-specific playlists is audio models, which are most useful when an artist is new and has few listeners or little written about them online. Audio models analyze raw audio tracks and group them with similar songs, allowing lesser-known artists to be recommended alongside popular tracks[10].
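The collaborative-filtering idea above can be illustrated with a minimal sketch. The play-count matrix, user/song indices, and scoring rule below are illustrative assumptions for demonstration only, not Spotify's actual pipeline, which operates at vastly larger scale:

```python
import numpy as np

# Toy user-song play-count matrix: rows = users, columns = songs.
# Real systems use huge, sparse matrices; these numbers are made up.
plays = np.array([
    [5, 3, 0, 1],   # user 0
    [4, 0, 0, 1],   # user 1
    [1, 1, 0, 5],   # user 2
    [0, 0, 5, 4],   # user 3
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two play-count vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(user, plays, k=1):
    """Score each unheard song by the similarity-weighted plays of other users."""
    sims = np.array([cosine_sim(plays[user], plays[u])
                     for u in range(len(plays))])
    sims[user] = 0.0                   # ignore the user's own row
    scores = sims @ plays              # weighted vote over all users
    scores[plays[user] > 0] = -np.inf  # only recommend songs not yet heard
    return list(np.argsort(scores)[::-1][:k])

print(recommend(0, plays))  # user 0 has only song 2 unheard -> [2]
```

The key step is that a user never needs to have heard a song for it to be scored: behaviorally similar users "vote" for it, which is what lets the model surface music outside a listener's existing library.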

Ethical Implications

Commodification of Music

[Image: Computers generating new music through AI]

Big Data

Copyright

Conclusion

Even though artificial intelligence in music has been around for over 60 years now, it's still considered to be in its very early stages of development. No one knows for sure what the future holds for the music industry as we currently know it. Maybe in 20 years, the chart-topping songs will have been computer-generated or our favorite music streaming services will know exactly what we want to listen to based on our mood. We will just have to wait and see.

See Also

References

  1. Freeman, Jeremy. “Artificial Intelligence and Music — What the Future Holds?” Medium, 24 Feb. 2020, medium.com/@jeremy.freeman_53491/artificial-intelligence-and-music-what-the-future-holds-79005bba7e7d.
  2. Li, Chong. “A Retrospective of AI + Music - Prototypr.” Medium, 25 Sept. 2019, blog.prototypr.io/a-retrospective-of-ai-music-95bfa9b38531.
  3. Li, Chong. “A Retrospective of AI + Music - Prototypr.” Medium, 25 Sept. 2019, blog.prototypr.io/a-retrospective-of-ai-music-95bfa9b38531.
  4. Freeman, Jeremy. “Artificial Intelligence and Music — What the Future Holds?” Medium, 24 Feb. 2020, medium.com/@jeremy.freeman_53491/artificial-intelligence-and-music-what-the-future-holds-79005bba7e7d.
  5. Freeman, Jeremy. “Artificial Intelligence and Music — What the Future Holds?” Medium, 24 Feb. 2020, medium.com/@jeremy.freeman_53491/artificial-intelligence-and-music-what-the-future-holds-79005bba7e7d.
  6. Li, Chong. “A Retrospective of AI + Music - Prototypr.” Medium, 25 Sept. 2019, blog.prototypr.io/a-retrospective-of-ai-music-95bfa9b38531.
  7. Freeman, Jeremy. “Artificial Intelligence and Music — What the Future Holds?” Medium, 24 Feb. 2020, medium.com/@jeremy.freeman_53491/artificial-intelligence-and-music-what-the-future-holds-79005bba7e7d.
  8. Magenta. Google, magenta.tensorflow.org.
  9. “How Google Is Making Music with Artificial Intelligence.” Science | AAAS, 8 Dec. 2017, www.sciencemag.org/news/2017/08/how-google-making-music-artificial-intelligence.
  10. Sen, Ipshita. “How AI Helps Spotify Win in the Music Streaming World.” Outside Insight, 26 Nov. 2018, outsideinsight.com/insights/how-ai-helps-spotify-win-in-the-music-streaming-world.