The rise of synthetic artists
A new strain of AI music is beginning to reshape streaming culture, not through obvious novelty tracks or functional background sound, but through fully packaged acts that simulate the history, identity and artistic aura of real bands. The point is no longer simply to generate a song. It is to generate a musical world: a name, an origin story, a vanished lineup, a supposed golden era, even the illusion of cultural lineage. What makes this shift significant is that it blurs the boundary between creation and fabrication in a medium where listeners have traditionally expected some human presence behind the sound.
The fictional group Concubanas captures that transformation with unusual clarity. Presented on YouTube as a Havana band founded in 1971 and devoted to a hybrid of Cuban and Congolese music, it arrives with all the narrative cues of a rediscovered cult act. Only a small notice at the bottom of the description reveals that the material is “altered or synthetic.” By that point, the deception has already done its work. The music is not merely AI-generated; it is context-generated, wrapped in enough stylistic and historical detail to feel plausible to anyone not actively looking for a warning label.
Why disclosure has become the central issue
That ambiguity is now at the heart of the debate. María Teresa Llano of the University of Sussex argues that the real problem is not only the existence of AI music, but the fact that listeners are often left without a clear way to know what they are hearing. Her concern is less about policing taste than about protecting trust. Without transparency, every encounter with new music risks becoming an exercise in doubt, and doubt changes the way art is experienced.
That frustration is already visible in user communities. Threads on Reddit, commentary elsewhere online and petitions in Spotify’s own community forums all point to a growing demand for clear labeling and even tools that would allow users to avoid AI-generated tracks altogether. The emotional response is not uniform. Some listeners feel cheated, others are impressed, and many oscillate between fascination and disappointment. That tension matters because it reveals something deeper than resistance to technology: people are reacting not only to the sound itself, but to the disappearance of certainty about where the sound comes from.
Platforms are moving at different speeds
The platforms have not responded with equal clarity. YouTube formally requires creators to disclose realistic content made with synthetic or altered media, including generative AI, and it acknowledges that undisclosed use can mislead viewers. Yet even there, the system is only partially effective, because the disclosure can be easy to miss, especially on desktop, where users may need to scroll to the end of a description to find it. A rule that exists without visible enforcement does not fully restore confidence.
Spotify appears even less settled. The service has emphasized AI's creative potential and drawn distinctions between wholly generated music and music that uses AI only in part, but it has not yet articulated a comparable public labeling framework. That leaves copyright as the main line of moderation, even though copyright is often the hardest issue to prove when generative systems are involved. The gap between YouTube's imperfect disclosure model and Spotify's vaguer posture suggests that the industry still lacks a common standard for synthetic music.
The business logic behind the flood
This uncertainty is emerging at the same time that AI music is becoming economically meaningful. A study by CISAC, the international confederation of authors' societies, estimates that revenue from AI-generated music could rise from $100 million in 2023 to around $4 billion in 2028, with one-fifth of streaming-platform revenue coming from such content by then. Whether or not those projections are met in full, the direction is unmistakable. Synthetic music is no longer a fringe experiment; it is becoming a scalable commercial category.
Channels such as Zaruret show how quickly that category can expand. In only a few months, it built a catalogue of long-form videos, invented bands and elaborate AI-written backstories, collecting millions of views and tens of thousands of subscribers in the process. That success points to a harder truth behind the novelty: audiences do not need to be fully convinced for the model to work. They only need to keep listening. And that is why labeling matters. In art, listeners do not connect only with arrangement, mood or genre; they connect with a maker, a biography, a sense of intention. When that human anchor disappears, music may still function, but its meaning begins to shift from expression to simulation.
What streaming may lose in the process
The deeper cultural question is not whether AI can generate convincing salsa, jazz or rock. It clearly can, at least convincingly enough to pass in crowded recommendation systems and autoplay feeds. The more important question is what happens to musical culture when authenticity becomes optional and provenance becomes obscured. If listeners can no longer tell whether a song comes from a lived artistic trajectory or from a prompt, then the relationship between audience and artist weakens. What remains may still be enjoyable, even impressive, but it risks becoming detached from the human stories that have long given recorded music its emotional and social weight.
Author:
Lucia Mihalkova
COO of Webiano Digital & Marketing Agency

Source: Fake bands and artificial songs are taking over YouTube and Spotify