South Korea’s largest music label, HYBE, made internationally famous by its top group BTS, is looking to AI to help bridge the language gap. According to a report by Reuters, the label used AI to release a track by label singer MIDNATT in six languages.
These are Korean, English, Spanish, Chinese, Japanese, and Vietnamese. The track was released back in May and marked the first use of the technology for a simultaneous six-language release. According to HYBE, if it proves a success, the approach could be extended to other popular K-pop acts.
Speaking with Reuters, Chung Wooyong, the head of HYBE’s interactive media arm, said, “We would first listen to the reaction, the voice of the fans, then decide what our next steps should be.” According to the report, MIDNATT recorded the song in each of the six languages, with native speakers in the studio reading out the lyrics as the singer recorded.
Expanding on how the AI works, Chung Wooyong said, “We divided a piece of sound into different components – pronunciation, timbre, pitch, and volume… We looked at pronunciation, which is associated with tongue movement, and used our imagination to see what kind of outcome we could make using our technology.”
Then, in a demonstration for Reuters, the team played a before-and-after comparison of an elongated vowel sound being added to a word – in this case, the English word “twisted” – to make it sound more natural while preserving the singer’s natural voice.
Afterward, the recordings were all combined using HYBE’s in-house AI. As some have noted over the last few months, this is another example of music and AI coming together. The tool relies on a deep learning framework called Neural Analysis and Synthesis (NANSY), developed by Supertone, a company HYBE acquired back in January.
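To make the vowel-elongation step concrete, here is a minimal toy sketch in Python. It stretches one segment of a waveform (standing in for a vowel) by resampling it onto a longer time grid with linear interpolation. This is purely illustrative – the function name, the sample region, and the interpolation approach are assumptions for the example, not HYBE’s or NANSY’s actual method, which operates on learned representations rather than raw samples.

```python
import numpy as np

def elongate_segment(audio, start, end, factor):
    """Time-stretch audio[start:end] by `factor` via linear interpolation.

    A toy stand-in for elongating a vowel sound; real pipelines use far
    more sophisticated, pitch-preserving techniques.
    """
    segment = audio[start:end]
    new_len = int(len(segment) * factor)
    # Map the longer output grid back onto the original sample indices.
    positions = np.linspace(0, len(segment) - 1, num=new_len)
    stretched = np.interp(positions, np.arange(len(segment)), segment)
    # Splice the stretched segment back into the surrounding audio.
    return np.concatenate([audio[:start], stretched, audio[end:]])

# Example: a 1000-sample sine tone with samples 100-200 stretched to 1.5x.
wave = np.sin(2 * np.pi * 220 * np.arange(1000) / 16000)
out = elongate_segment(wave, 100, 200, 1.5)
print(len(wave), len(out))  # 1000 1050
```

Naive interpolation like this also lowers the pitch of the stretched region, which is exactly why production systems separate pitch, timbre, and pronunciation into independent components before editing, as Chung describes.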
According to Supertone’s chief operating officer, Choi Hee-doo, the AI makes the song sound more natural and is an improvement over the non-AI software used in the past. But what does the artist in question think? In a statement, MIDNATT said that AI allowed him to have a “wider spectrum of artistic expressions.”
He went on to say, “I feel that the language barrier has been lifted and it’s much easier for global fans to have an immersive experience with my music… It’s going to lower the barrier to music creation. It’s a little bit like Instagram for pictures, but in the case of music.”
Back in February, famous DJ David Guetta commented that he believed AI would become a critical element in music moving forward. During an interview, he said in part, “I’m sure the future of music is in AI. For sure. There’s no doubt. But as a tool.” And he’s not wrong, at least when it comes to fans. All one has to do is search YouTube for mash-ups, and you’ll hear the influence of AI in music.
All over social media, these mash-ups have been shared, liked, and pushed into feeds for months. This is partly why the Grammy Awards recently adopted new rules to deal with AI in music. So it seems that, like many other industries, music is turning to AI to knock down barriers.
Editor’s Note: Deep learning is becoming a critical topic in the future of AI development, and if you want to stay on the frontlines of the latest developments, then you need to hear from the industry leaders driving the charge. You’ll get that at the ODSC West 2023 Deep Learning & Machine Learning Track. Save your seat and register today.