
In the ongoing debate about the impact of artificial intelligence on creativity, one that has been unfolding ever since AI-generated images stopped being just a funny meme, we have a problem. I understand that some AI enthusiasts may come across as simplistic (or even lazy) in their approach, but demonizing those who use AI, while ignoring that legal mechanisms to address plagiarism already exist, distracts us from the real issue: the concentration of power in the hands of a few.
Some visual artists, understandably concerned about AI’s impact, advocate for absolute control over data as a form of protection. However, this narrow-minded stance only builds walls that benefit large corporations. What happens when copyright, instead of promoting innovation, becomes an instrument of exclusion?
Imagine a world where only big tech companies had access to libraries. This is not science fiction; it is the future we are heading toward if we demand explicit permission for every piece of data used in AI training. The idea that "every piece of data has an owner" does not protect creators; it consolidates an oligopoly. Universities, startups, and open-source projects would be left out. Why? Because negotiating licenses at massive scale requires resources that only giants like Google or Meta can afford. The result is not justice for artists but a production model in which independent voices are silenced and AI narratives are dictated by those who can pay for them.

It is paradoxical that some creators criticize independent AI users while supporting companies like Adobe, which integrate AI tools trained on private image banks. Isn't it just as problematic that a corporation controls such data, with even less transparency?

This entire logic only leads to a hyper-commodified world. Access to knowledge should not be a privilege; it is the foundation of every creative revolution. Should a writer pay royalties for every book they read before writing a novel? Should a scientist pay for every paper they cite in their research? The idea that training an AI on publicly accessible data is equivalent to "theft" ignores a fundamental truth: all learning builds on existing knowledge, and AI does not copy literally; it identifies patterns in data.

The real issue is not imitation but who gets to decide which data is accessible and under what conditions. Demanding payment for every pixel, text, or sound used in an algorithm is like charging a painter for visiting a museum. It does not protect creativity; it imprisons it. And while corporations pay for licenses from major stock image banks, open and cooperative AI models, lacking lawyers and budgets, are trapped in a system that marginalizes them.
The solution is not to close doors but to build bridges. Datasets should be treated as public infrastructure: accessible to all, but with clear rules of reciprocity. If a company uses open data to train its model, shouldn't it give some of its advancements back to the community? This approach is not utopian. Open-source projects like Stable Diffusion and various academic initiatives already demonstrate that it is possible to compete with tech giants when knowledge is shared.
Control over data under free-market logic is not a shield for creators: it is a gift to corporations.