Music and artificial intelligence: a bond that is growing at breakneck speed

Photo by Gerd Altmann (CC0/Pixabay)

Over the past decade or so, artificial intelligence (AI) has become more prevalent in everyday life, from online ads that seem to know exactly what you're looking for to music composition and other creative applications.

The idea of making music with artificial intelligence raises questions about the nature of creativity and the future of human composers. From practical tools to groundbreaking prototypes, here's a look at some of the latest innovations in AI-assisted music writing.

ScoreCloud Songwriter

DoReMIR Music Research AB recently announced the launch of ScoreCloud Songwriter, a tool that turns recorded songs into lead sheets. The software works from a recording made with a single microphone, which can include both vocals and instruments. Various AI processes separate the sounds, then transcribe the music, including melody and chords, along with English lyrics. The result is a lead sheet with the melody, lyrics, and chord symbols.

Sven Ahlbäck, CEO of DoReMIR, explained in a media release: "Our vision is that ScoreCloud Songwriter will help songwriters, composers, and other music professionals, such as teachers and performers. It may even inspire music lovers who never thought they could write a song. We hope it will become an indispensable tool for creating, sharing, and preserving musical works."

Harmonai's Dance Diffusion

Harmonai is a company that creates open-source models for the music industry, and Dance Diffusion is its latest innovation in AI sound generation. It draws on a mixture of publicly available models to create audio clips, so far about one to three seconds long, essentially from nothing; these can then be interpolated into longer recordings. Since it is AI-based, the more audio files users feed it to learn from, the more the models develop and improve. If you're interested in how Dance Diffusion came together, there's a video interview with the creators.
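Dance Diffusion's interpolation actually happens in the model's latent space, but the basic idea of joining short generated clips into a longer take can be illustrated in plain sample space with an equal-power crossfade. A minimal sketch (plain Python lists standing in for mono audio; this is an analogy, not Harmonai's code):

```python
import math

def crossfade(a, b, overlap):
    """Join clip `a` to clip `b`, blending `overlap` samples between them."""
    assert overlap <= len(a) and overlap <= len(b)
    out = a[:len(a) - overlap]
    for i in range(overlap):
        t = i / overlap                    # runs 0 -> 1 across the overlap
        gain_a = math.cos(t * math.pi / 2) # fade the first clip out
        gain_b = math.sin(t * math.pi / 2) # fade the second clip in
        out.append(gain_a * a[len(a) - overlap + i] + gain_b * b[i])
    out.extend(b[overlap:])
    return out

# Two short "clips" spliced into one longer take.
clip1 = [0.5] * 100
clip2 = [-0.5] * 100
long_take = crossfade(clip1, clip2, 40)
```

The cosine/sine gains keep the combined loudness roughly constant through the transition, which is why this shape is preferred over a plain linear fade when splicing audio.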

Here is one of their projects, an AI-powered endless solo that has been streaming since January 27, 2021. It is based on the work of musician Adam Neely.

It’s still in the testing stages, but its implications are profound.

AudioLM by Google

Google's new AudioLM bases its approach to audio generation on the way language is processed. Given a short excerpt as input, it can continue a piece of piano music. Just as speech breaks down into words and sentences, music is built from individual notes that combine into melody and harmony, and Google's engineers used concepts from advanced language modeling as their guide. The AI captures the melody as well as the overall structure and fine details of the audio waveform to create realistic sound, reconstructing it in layers designed to capture nuance.
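The core idea of treating audio like language can be shown with a toy example: turn a waveform into a sequence of discrete tokens, learn which token tends to follow which, then generate a continuation one token at a time. This is only a sketch of the concept; the real AudioLM uses neural audio codecs and Transformer models, whereas here the tokens come from crude quantization and the "model" is just bigram counts:

```python
import math
from collections import Counter, defaultdict

def tokenize(wave, levels=8):
    """Quantize samples in [-1, 1] into `levels` discrete tokens."""
    return [min(levels - 1, int((s + 1) / 2 * levels)) for s in wave]

# A short "recording": a few cycles of a sine wave standing in for audio.
wave = [math.sin(2 * math.pi * 5 * t / 200) for t in range(200)]
tokens = tokenize(wave)

# Learn which token tends to follow which (a bigram "language model").
follows = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    follows[a][b] += 1

def continue_sequence(prompt, n):
    """Extend a token prompt by repeatedly picking the likeliest next token."""
    out = list(prompt)
    for _ in range(n):
        nxt = follows[out[-1]]
        out.append(nxt.most_common(1)[0][0] if nxt else out[-1])
    return out

# Continue a short excerpt, the way AudioLM continues a piano prompt.
generated = continue_sequence(tokens[:10], 50)
```

The payoff of the framing is that once audio is a token sequence, any sequence-prediction machinery built for text can be applied to sound.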

Meta AudioGen

Meta's new AudioGen uses a text-to-audio AI model to create sound effects as well as music. The user enters a text prompt, such as "the wind is blowing", or even a combination, such as "the wind is blowing and the leaves are rustling", and the AI responds with matching audio. Developed by Meta and the Hebrew University of Jerusalem, the system is capable of generating sound from scratch. The AI can separate different sources in complex scenes, such as several people speaking at once. The researchers trained it on a combination of audio samples, and it can create new sounds beyond its training dataset. Besides sound effects, it can generate music, but that part of its functionality is still in its infancy.

What’s Next?

With AI music generation in its infancy, it is easy to dismiss its future impact on the industry. But it cannot be ignored.

The electronic band YACHT recorded an entire album using artificial intelligence in 2019, with technology that has already been surpassed. Essentially, they taught the AI how to sound like YACHT, and it wrote the music. The band then turned that output into their album.

YACHT member and tech writer Claire L. Evans summed up the ambivalence in a documentary about Chain Tripping, their AI-powered 2019 album (as quoted in TechCrunch). "I don't want to go back to my roots and play acoustic guitar because I'm so afraid of the coming robot apocalypse, but I also don't want to jump into the trenches and welcome our new robot overlords either."

The onslaught of new technology is relentless. The only option is to jump on the train.


Anya Wassenberg