
UGC, Converse and Dunkin’, along with many other brands, have signed up for Deezer’s newly introduced music solution for commercial spaces

Paris, June 23, 2025 – Deezer, the global music experiences platform, has launched Deezer Business, a service enabling businesses to legally play music and create engaging atmospheres in commercial spaces like shops, restaurants, hotels, cinemas, gyms and offices across France and internationally.

Cinemas from UGC Group are among the many businesses that have chosen to let their customers live the music with Deezer, along with American doughnut and coffee icon Dunkin’, which is setting up shop in France for the first time. Converse has also chosen to soundtrack its Parisian showroom with tailor-made playlists from Deezer Business.

With access to a vast catalog of high-quality music and editorial expertise, these and other businesses can now manage music programming and build custom playlists tailored to their brand and customer segment, without the hassle of negotiating music licenses or curating playlists from scratch. Commercial spots can also be added throughout the music programming.

“As a global music experience platform, we know the impact music has on how people feel in a space,” said Julien Delbourg, Chief Commercial Officer at Deezer. “From global names like UGC, Dunkin’, and Converse to local heroes, our mission is to help businesses turn everyday moments into memorable experiences, using music to express their brand and forge lasting connections with the people who walk through their doors.” 

Maxime Schmidt, Director of Brand, Communication and Digital at UGC, said: “Our cinemas are built to be vibrant and welcoming places for everyone and we’re constantly working to improve the experience in our theaters. This is why we’re thrilled to enrich the sound experience within 15 UGC lobbies thanks to Deezer Business!”

Jean-Wilfrid Thibault, General Manager Dunkin’ France (QSRP Group) said: “At Dunkin’, every shop is designed as a vibrant living space, where people come to recharge with positive energy from the morning onward. Thanks to Deezer Business, music and pop culture will be a natural part of creating a dynamic and optimistic atmosphere at Dunkin’. It’s a key ingredient for connecting with our core target — an urban, mobile generation in search of good vibes and brands that reflect who they are. This partnership strengthens our ambition: to offer a daily dose of happiness, whether it’s in the shop or on the go.”

The Deezer Business platform offers a variety of features to legally stream music and ensure a seamless and customizable experience, including: 

  • Multi-zone management for controlling music across different areas
  • Deezer editorial playlists tailored to specific business types and customer preferences
  • AI-powered playlist creation
  • Intuitive scheduling and broadcasting options via dedicated hardware or app
  • Offline mode

For more information about Deezer Business and its music solution, please visit:

Website: https://deezer-business.com

LinkedIn: https://www.linkedin.com/showcase/deezer-business

*** ENDS ***

Press Contact Deezer

Jesper Wendel | jwendel@deezer.com 

This work introduces PeakNetFP, the first neural audio fingerprinting (AFP) system designed specifically around spectral peaks. The system leverages the sparse spectral coordinates typically computed by traditional peak-based AFP methods: it extracts hierarchical point features in the manner of the computer vision model PointNet++, and is trained with contrastive learning, as in the state-of-the-art deep learning AFP system NeuralFP. This combination allows PeakNetFP to outperform conventional AFP systems and to achieve performance comparable to NeuralFP on challenging time-stretched audio. In extensive evaluation, PeakNetFP maintains a Top-1 hit rate above 90% for stretching factors ranging from 50% to 200%. Moreover, PeakNetFP offers significant efficiency advantages: compared to NeuralFP, it has 100 times fewer parameters and uses 11 times smaller input data. These features make PeakNetFP a lightweight and efficient solution for AFP tasks involving time stretching. Overall, the system represents a promising direction for future AFP technologies, as it merges the lightweight nature of peak-based AFP with the adaptability and pattern recognition capabilities of neural networks, paving the way for more scalable and efficient solutions in the field.
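
As a rough illustration of the idea, the sketch below pairs a PointNet++-style set-abstraction block with a projection head over sparse (time, frequency) peak coordinates. It is a minimal stand-in, not the paper’s implementation: the layer widths, neighbourhood size k, embedding dimension, and the omission of farthest-point subsampling are all simplifying assumptions.

```python
# Minimal sketch of a peak-based fingerprinter (illustrative sizes, not the
# paper's). Each block groups every peak with its k nearest neighbours,
# lifts the group through a small MLP, and max-pools, PointNet++-style.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetAbstraction(nn.Module):
    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, xyz, feats):
        # xyz: (B, N, 2) peak coordinates (time, freq); feats: (B, N, C)
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices
        grouped = torch.gather(                       # (B, N, k, C)
            feats.unsqueeze(1).expand(-1, feats.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, feats.size(-1)))
        return self.mlp(grouped).max(dim=2).values    # (B, N, out_dim)

class PeakFingerprinter(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.sa1, self.sa2 = SetAbstraction(2, 64), SetAbstraction(64, 128)
        self.head = nn.Linear(128, emb_dim)

    def forward(self, peaks):                         # peaks: (B, N, 2)
        f = self.sa2(peaks, self.sa1(peaks, peaks))
        f = f.max(dim=1).values                       # global pool over peaks
        return F.normalize(self.head(f), dim=-1)      # unit-norm fingerprint
```

Training would then pull the fingerprints of a clip and its time-stretched version together with a contrastive objective such as NT-Xent, as in NeuralFP.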

This paper is a joint work between Deezer Research, BMAT Licensing S.L., and the Music Technology Group at Universitat Pompeu Fabra.

Contrastive learning and equivariant learning are effective methods for self-supervised learning (SSL) in audio content analysis. Yet their application to music information retrieval (MIR) faces a dilemma: the former is more effective on tagging (e.g., instrument recognition) but less effective on structured prediction (e.g., tonality estimation); the latter can match supervised methods on the specific task it is designed for, but it does not generalize well to other tasks. In this article, we adopt a best-of-both-worlds approach by training a deep neural network on both kinds of pretext tasks at once. The proposed architecture is a Vision Transformer with 1-D spectrogram patches (ViT-1D), equipped with two class tokens that are specialized to different self-supervised pretext tasks but optimized through the same model: hence the name self-supervised multi-class-token multitask (MT2). The first class token optimizes cross-power spectral density (CPSD) for equivariant learning over the circle of fifths, while the second optimizes normalized temperature-scaled cross-entropy (NT-Xent) for contrastive learning. MT2 combines the strengths of both pretext tasks and consistently outperforms both single-class-token ViT-1D models trained with either contrastive or equivariant learning alone. Averaging the two class tokens further improves performance on several tasks, highlighting the complementary nature of the representations learned by each token. Furthermore, using the same single-linear-layer probing method on the features of the last layer, MT2 outperforms MERT on all tasks except beat tracking, with 18x fewer parameters thanks to its multitasking capabilities. Our SSL benchmark demonstrates the versatility of our multi-class-token multitask learning approach for MIR applications.
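
A minimal sketch of such a two-class-token trunk is shown below, under assumed (not the paper’s) patch size, depth, and width; positional encodings are omitted for brevity.

```python
# Sketch of a ViT-1D trunk with two class tokens sharing one encoder.
# Widths, depth, and the absence of positional encodings are simplifications.
import torch
import torch.nn as nn

class TwoTokenViT1D(nn.Module):
    def __init__(self, n_bins=128, d=256, depth=6, heads=8):
        super().__init__()
        self.patch = nn.Linear(n_bins, d)              # one 1-D patch per frame
        self.cls_equi = nn.Parameter(torch.randn(1, 1, d) * 0.02)
        self.cls_contra = nn.Parameter(torch.randn(1, 1, d) * 0.02)
        layer = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, spec):                           # spec: (B, T, n_bins)
        x = self.patch(spec)
        toks = torch.cat([self.cls_equi.expand(len(x), -1, -1),
                          self.cls_contra.expand(len(x), -1, -1), x], dim=1)
        h = self.encoder(toks)
        # One token per pretext task, optimized through the shared trunk:
        # h[:, 0] feeds the CPSD (equivariant) loss, h[:, 1] the NT-Xent loss.
        return h[:, 0], h[:, 1]
```

Averaging the two returned tokens corresponds to the ensembling described above.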

In music information retrieval (MIR), contrastive self-supervised learning of general-purpose representation models is effective for global tasks such as automatic tagging. However, for local tasks such as chord estimation, it is widely assumed that contrastively trained general-purpose self-supervised models are inadequate and that more sophisticated SSL, e.g., masked modeling, is necessary. Our paper challenges this assumption by revealing the potential of contrastive SSL paired with a transformer on local MIR tasks. We consider a lightweight vision transformer with one-dimensional patches in the time–frequency domain (ViT-1D) and train it with simple contrastive SSL through the normalized temperature-scaled cross-entropy loss (NT-Xent). Although NT-Xent operates only on the class token, we observe that, potentially thanks to weight sharing, informative musical properties emerge in ViT-1D’s sequence tokens. On global tasks, the temporal average of class and sequence tokens improves performance compared to the class token alone, showing that the sequence tokens carry useful properties. On local tasks, sequence tokens perform unexpectedly well, despite not being specifically trained for them. Furthermore, high-level musical features such as onsets emerge from layer-wise attention maps, and self-similarity matrices show that different layers capture different musical dimensions. Our paper does not focus on improving performance; rather, it advances the musical interpretation of transformers and sheds light on some overlooked abilities of contrastive SSL paired with transformers for sequence modeling in MIR.
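
For reference, the NT-Xent loss itself is compact; a sketch over the class-token embeddings of two views follows, with the temperature as an assumed value.

```python
# NT-Xent (normalized temperature-scaled cross-entropy) over paired views.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """z1, z2: (B, D) class-token embeddings of two views of the same clips."""
    z = F.normalize(torch.cat([z1, z2]), dim=-1)       # (2B, D)
    sim = z @ z.T / tau                                # scaled cosine similarity
    sim.fill_diagonal_(float('-inf'))                  # mask self-similarity
    B = z1.size(0)
    # The positive for row i is the other view of the same clip.
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)]).to(z.device)
    return F.cross_entropy(sim, targets)
```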

The recent rise in capabilities of AI-based music generation tools has created an upheaval in the music industry, necessitating accurate methods to detect such AI-generated content. This can be done with audio-based detectors; however, these have been shown to struggle to generalize to unseen generators and to be vulnerable to audio perturbations. Furthermore, recent work detected AI-generated music using accurate, cleanly formatted lyrics sourced from a lyrics-provider database. In practice, however, such perfect lyrics are not available (only the audio is), leaving a substantial gap in real-life applicability. In this work, we propose closing this gap by transcribing songs with general automatic speech recognition (ASR) models and feeding the transcripts to several detectors. Results on diverse, multi-genre, and multilingual lyrics show generally strong detection performance across languages and genres, particularly for our best-performing model using Whisper large-v2 and LLM2Vec embeddings. In addition, we show that our method is more robust than state-of-the-art audio-based ones when the audio is perturbed in different ways and when evaluated on different music generators.
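
A hedged sketch of the transcribe-then-classify pipeline: the best configuration above pairs Whisper large-v2 with LLM2Vec embeddings, but here a generic sentence-transformer stands in for the embedder, and the file paths, labels, and classifier are placeholders.

```python
# Transcribe songs with a general ASR model, embed the transcript, classify.
# The embedder and the training data below are illustrative stand-ins.
import whisper
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

asr = whisper.load_model("large-v2")                 # no clean lyrics required
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for LLM2Vec

def lyrics_embedding(audio_path):
    text = asr.transcribe(audio_path)["text"]        # imperfect sung-lyrics transcript
    return embedder.encode(text)

# Hypothetical labelled examples: 0 = human-made, 1 = AI-generated.
X = [lyrics_embedding(p) for p in ["human_song.mp3", "ai_song.mp3"]]
y = [0, 1]
detector = LogisticRegression().fit(X, y)
print(detector.predict([lyrics_embedding("unknown_song.mp3")]))
```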

The rapid rise of generative AI has transformed music creation, with millions of users engaging with AI-generated music. Despite its popularity, concerns regarding copyright infringement, job displacement, and ethical implications have led to growing scrutiny and legal challenges. In parallel, AI-detection services have emerged, yet these systems remain largely opaque and privately controlled, mirroring the very issues they aim to address. This paper explores the fundamental properties of synthetic content and how it can be detected. Specifically, we analyze deconvolution modules commonly used in generative models and mathematically prove that their outputs exhibit systematic frequency artifacts, manifesting as small yet distinctive spectral peaks. This phenomenon, related to the well-known checkerboard artifact, is shown to be inherent to the chosen model architecture rather than a consequence of training data or model weights. We validate our theoretical findings through extensive experiments on open-source models, as well as commercial AI-music generators such as Suno and Udio. We use these insights to propose a simple and interpretable detection criterion for AI-generated music. Despite its simplicity, our method achieves detection accuracy on par with deep learning-based approaches, surpassing 99% accuracy in several scenarios.
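
The resulting criterion can be as simple as measuring how much the long-term spectrum protrudes at the frequencies the architecture predicts. The sketch below is illustrative only: the window size and prominence measure are assumptions, and the actual artifact frequencies are derived in the paper from the model’s upsampling factors.

```python
# Toy spectral-peak criterion: compare the PSD at a predicted artifact
# frequency against the median of its surrounding band.
import numpy as np
from scipy.signal import welch

def artifact_score(audio, sr, artifact_hz):
    freqs, psd = welch(audio, fs=sr, nperseg=4096)   # long-term average spectrum
    idx = int(np.argmin(np.abs(freqs - artifact_hz)))
    band = psd[max(idx - 20, 0): idx + 21]           # local spectral neighbourhood
    return psd[idx] / (np.median(band) + 1e-12)      # >> 1 means a narrow peak
```

Scores well above 1 at the predicted frequencies, consistently across a catalog of clips, would flag a generator’s output.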

The rapid advancement of AI-based music generation tools is revolutionizing the music industry but also posing challenges to artists, copyright holders, and providers alike. This necessitates reliable methods for detecting such AI-generated content. However, existing detectors, relying on either audio or lyrics, face key practical limitations: audio-based detectors fail to generalize to new or unseen generators and are vulnerable to audio perturbations, while lyrics-based methods require cleanly formatted and accurate lyrics, which are unavailable in practice. To overcome these limitations, we propose a novel, practically grounded approach: a multimodal, modular late-fusion pipeline that combines automatically transcribed sung lyrics with speech features capturing lyrics-related information within the audio. By relying on lyrical aspects drawn directly from audio, our method enhances robustness, mitigates susceptibility to low-level artifacts, and enables practical applicability. Experiments show that our method, DE-detect, outperforms existing lyrics-based detectors while also being more robust to audio perturbations. It thus offers an effective, robust solution for detecting AI-generated music in real-world scenarios.
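
A minimal sketch of the late-fusion step follows; the branch internals and the fusion model are placeholders, not the DE-detect configuration.

```python
# Late fusion: each branch (transcribed-lyrics detector, speech-feature
# detector) emits a probability; a small model learns to combine them.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_fusion(p_lyrics, p_speech, labels):
    """p_lyrics, p_speech: per-track AI probabilities from the two branches."""
    X = np.column_stack([p_lyrics, p_speech])
    return LogisticRegression().fit(X, labels)
```

Because the fusion layer only sees branch scores, swapping a branch (e.g. a new ASR model) requires refitting this final layer rather than the whole pipeline, which is what makes the design modular.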

All albums on Deezer with AI-generated tracks are now clearly tagged to give music fans full transparency

Paris, June 20, 2025 – Deezer, the global music experiences platform, today introduced the world’s first AI tagging system for music streaming, clearly displaying which albums include fully AI-generated tracks. The company recently announced the launch of its cutting-edge AI music detection tool, revealing that nearly one fifth (18%) of all music uploaded daily, more than 20,000 tracks, is 100% AI-generated.

“We’ve detected a significant uptick in the delivery of AI-generated music in just the past few months, and we see no sign of it slowing down. It’s an industry-wide issue and we are committed to leading the way in increasing transparency by helping music fans identify which albums include AI music,” said Alexis Lanternier, CEO, Deezer. “AI is not inherently good or bad, but we believe a responsible and transparent approach is key to building trust with our users and the music industry. We are also clear in our commitment to safeguarding the rights of artists and songwriters at a time when copyright law is being put into question in favor of training AI models.”

Although fully AI-generated music currently accounts for only a small fraction of streams on Deezer — approximately 0.5% — it’s evident that the primary purpose of uploading these tracks to streaming platforms is fraud. Deezer has found that up to 70% of the streams generated by fully AI-generated tracks are in fact fraudulent. When detecting stream manipulation of any kind, Deezer excludes the streams from royalty payments.

Deezer currently excludes fully AI-generated tracks from algorithmic and editorial recommendations, in order to minimize any negative impact on artist remuneration and the user experience.

*** ENDS ***

Notes to editors: 

Deezer’s AI music detection tool sets an industry standard, with the ability to detect 100% AI-generated music from the most prolific generative models, such as Suno and Udio, and the possibility of adding detection capabilities for practically any other similar tool as long as relevant data examples are available. Deezer has also made significant progress toward a system with increased generalizability, able to detect AI-generated content without a specific dataset to train on. The reported increase comes at a time of growing concern about AI companies training their models on copyrighted material, and about governments potentially diminishing copyright laws to facilitate AI development. Deezer is committed to protecting the rights of artists and creators, and remains the only streaming platform to have signed the global statement on AI training.

AI is a critical challenge for the music industry – According to a study conducted by CISAC and PMP Strategy, with participation from key industry players (including Deezer), nearly 25% of creators’ revenues are at risk by 2028, which could amount to as much as €4 billion by that time. This represents a colossal, even critical, challenge for the music creation sector as a whole. https://www.cisac.org/services/reports-and-research/cisacpmp-strategy-ai-study  

Delivery vs streams – Most of the AI tracks delivered daily are never streamed on Deezer, but they dilute the catalog and are used for fraudulent activity – today, up to 70% of all streams of fully AI-generated tracks are fraudulent.

Two new patents – In December 2024, Deezer applied for two patents for its AI Detection technology, focused on two different methods of detecting unique signatures that are used to distinguish synthetic content from authentic content.

Press Contact Deezer

Jesper Wendel
jwendel@deezer.com

Big news in the world of streaming: Deezer has become the first platform to clearly label albums that include fully AI-generated tracks. If you’ve been wondering how to tell the difference between human-created music and AI content, this is a game changer.

We recently launched a powerful AI detection tool, and the results are wild: about 18% of music uploaded each day (over 20,000 tracks!) is now fully AI-generated. While most of these tracks don’t go viral, we found that around 70% of their streams are fake, often uploaded to exploit royalty systems.

To fight back, we are taking a bold step:

  • AI-generated tracks are now clearly tagged so listeners know what they’re hearing.
  • These tracks won’t show up in editorial playlists or algorithm-based recommendations.
  • And most importantly, fraudulent streams are being filtered out of royalty payments.

For now, AI-only songs make up just 0.5% of all streams on Deezer, but the trend is growing fast, and with this new tagging system, Deezer is setting a precedent for how streaming platforms can handle the rise of AI music responsibly.

What do you think? Should other platforms keep it real and tag AI music too?

The increasing availability of user data on music streaming platforms opens up new possibilities for analyzing music consumption. However, understanding the evolution of user preferences remains a complex challenge, particularly as musical tastes change over time. This paper uses the dictionary learning paradigm to model user trajectories across different musical genres. We define a new framework that captures recurring patterns in genre trajectories, called pathlets, enabling the creation of comprehensible trajectory embeddings. We show that pathlet learning reveals relevant listening patterns that can be analyzed both qualitatively and quantitatively. This work improves our understanding of users’ interactions with music and opens up avenues of research into user behavior and into fostering diversity in recommender systems. A dataset of 2,000 user histories tagged by genre over 17 months, supplied by Deezer (a leading music streaming company), is also released with the code.

Figure: Overview of the methodology.
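
As a toy illustration of the pathlet idea, standard sparse dictionary learning over flattened genre-trajectory matrices is sketched below. The real framework learns patterns over trajectories, so this is a simplified stand-in, and all shapes and hyperparameters are assumptions.

```python
# Toy stand-in: learn a dictionary of recurring listening patterns and embed
# each user as a sparse combination of them. Shapes are illustrative.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_users, n_months, n_genres = 200, 17, 12
# One row per user: a flattened (months x genres) matrix of listening shares.
X = rng.random((n_users, n_months * n_genres))

dl = DictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dl.fit_transform(X)     # (n_users, 32) interpretable trajectory embedding
pathlets = dl.components_       # each row: a recurring genre-trajectory pattern
```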