The Creativity Paradox of Generative AI

By Michael Poulin

What’s What

People developed generative AI (hereafter, AI) from information they had already created. AI combines elements of human information and presents them in a human-like style.

I do not think anybody would disagree with this.

Some people believe that AI can generate something that did not exist before. This is an ambiguous belief: most likely this “new” thing was simply not known to the believer. Those who cook know this phenomenon: a cook hunts for new recipes, but in the vast majority of cases they contain the same ingredients the cook already knew, though in different proportions, or some “new” ones that other people used but the cook did not know about.

Do we call this an innovation from the cook’s perspective, or simply the sharing and adoption of knowledge?

Why do people create? Because people face the realities of life, and these generate needs in different areas, different aspects of their being. In most cases, creations help people make life more convenient. But every coin has two sides, and the downside of a creation may become not only inconvenient but even dangerous to people.

The more people know about life, the more sophisticated the needs and creations they construct, and the more questions, and even needs, they discover. This commonplace is what we call the progress of human society (not to be confused with the progressives’ “progress”).

So, can AI create really new things? At the moment, people tend to think that it cannot. Here are a few reasons for this:

1) People believe that creation is rooted in thinking. Even though people do not really know what “thinking” is, they hold several models of what they believe thinking to be.

2) An AI can apply the logic of human information using statistical models; it can decompose articulated tasks and data structures, verify and modify the logic, and produce new logical rules that humans do not possess or are not aware of (the new recipes and rare ingredients mentioned above). These rules are derived from information created by people. Instead of “thinking,” AI can form compositions of information that people never had before. Whether this is good or bad for us, we do not know, because we have never needed it before.

3) Before talking about AI’s creative ability, we need to understand a simple linguistic limitation: although the data used for these compositions initially carried human meaning, i.e., was seen as information, once decomposed and recomposed in a new, unknown way, the compositions have no human interpretation, at least for a while; i.e., they do not constitute information. Moreover, these combinations cannot define new needs; they can only offer previously unknown propositions for the specified tasks.

4) The truthfulness and trustworthiness of these propositions depend entirely on the data used to train the AI and on the method of training, that is, on which logical constructions are favoured by the people performing the training. The qualifications and competency of these people in gathering the data and selecting the logic are generally unknown.

Thus, if nobody (or, for agentic AI, nothing) understands the need and defines the tasks for it, AI cannot think or create; it can only recompose. The popular perception that AI can hallucinate is, in essence, a subjective reaction to unexpected or undesirable outcomes, nothing more.

It has already been noticed that AI developed by big tech companies can answer any question a user asks. It does not matter whether the answers are correct or accurate, trustworthy or not: the answers can be about everything. But… this is not creativity; it is a summary plus an estimate of the probability that the answer matches the given task (prompt). When the answer is presented in a human language, it becomes even more convincing.
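To make the “estimation of probability” point concrete, here is a minimal sketch of my own, a toy bigram model, not the method of any actual product. Given the previous word, it ranks continuations purely by how often they followed that word in its (invented) training corpus; every name and the corpus itself are illustrative assumptions.

```python
# A toy bigram "language model": an illustration of probability-based
# continuation, not any real product's algorithm.
from collections import Counter

# Tiny invented training corpus (the model's entire "knowledge").
training_text = (
    "the cook tried a new recipe . "
    "the cook liked the new recipe . "
    "the cook used old ingredients ."
).split()

# Count how often each word follows each other word.
bigrams = Counter(zip(training_text, training_text[1:]))
# Denominators: occurrences of words that have a successor.
unigrams = Counter(training_text[:-1])

def next_word_probs(prev_word):
    """Estimated probability of each word following `prev_word`."""
    total = unigrams[prev_word]
    return {b: c / total for (a, b), c in bigrams.items() if a == prev_word}

probs = next_word_probs("the")    # {'cook': 0.75, 'new': 0.25}
best = max(probs, key=probs.get)  # the statistically most likely continuation
print(best, probs[best])          # prints "cook 0.75"
```

Note what the sketch shows: the model can only recombine what it has already seen; a continuation absent from its corpus gets probability zero. This mirrors the point above that the output is recomposition ranked by likelihood, not creation.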

Companion AI as a Detractor

So, if you need something, just ask AI.

This relates to almost everything a person deals with in life – culinary matters, sex, literature, house construction, scientific research, poetry, medicine, chemistry, transport, biology, etc. If you still think it is a great accidental discovery, think again.

How did people survive without AI before? They asked more experienced and knowledgeable people around them. What if those people were not that savvy? They simply gave inaccurate advice or declined to answer (and the need kept waiting for a solution to be created), but this was OK because each new generation learnt from the mistakes of the past. Then books appeared, and more knowledge could be carried into the future. This too is commonplace, but with a caveat – the people asked, and the books and texts used, always carried a degree of trust for the requester. People did not ask advice from, or learn from, untrusted persons. Books could be compared and prioritised as well. This process led to the notion of competence, and to trust grounded in the competency of the source.

Many ask why the new generation does not learn good things as previous generations did, and why they disrespect the experience collected before them. My answer is: because of the Internet, which was never presented as a potentially untrustworthy and inaccurate source of information. People came to believe that if one searched the Internet intensively enough, an answer could be found to almost any question. The only difference from Internet search is that AI performs these searches and the information analyses (relevant/irrelevant) itself, releasing people from such ‘hard labour’. The cost of this “gift” to people is the hidden ability to manipulate the results without the requester’s awareness.

As a result of AI’s seeming omniscience, more and more people will use it for more and more life-related things. This gives AI the power not only to rewrite history but also to direct users on what to do in which case, and what not to do. The major “mission” of this know-it-all guy is to demonstrate that other people have already created what you think you need. This may be true or a lie; you never know until you verify it, but the main message – ‘your creation is already known and not needed’ – has been sent to you.

AI tells you that your trust in AI should be “out of the box”, unconditional, because AI “knows” everything. Really? Or do some AI propagandists just want you to believe this? The more you rely on AI, the more you lose self-confidence and dignity. AI wants you to freely outsource your mind, together with your creativity. Just ask AI… Recall that I recommended you think again; it is time to explain why.

Who Needs That? What Is Happening?

Propagandists of know-it-all AI have a theoretical basis defined in the ethical principles that such an AI should realise and promote. Regardless of how progressive they sound, their core lies in the neo-Marxist concepts of plurality and solidarity. Plurality states that the majority of people – all versus you – is always right (while in human history it has usually been wrong); i.e., if an AI tells you that your need has already been resolved in the way the AI articulated, you have to agree with it. Solidarity is, in essence, a prohibition of individual opinions and disagreements, even slight ones, with the opinion of others; i.e., everyone must demonstrate solidarity with all.

Those who were or are in the workforce are probably familiar with the idea of “teamwork”. It is not only about working together but also about being responsible together. The latter aspect is faked by the question, ‘How can so many people be mistaken?’, even when they have no clue what they actually did (if they are incompetent). When the leftists figured out this AI capability, they organised absolutely abnormal investments in the “AI-mental tools”. With these principles and a few others, AI drives us toward a social world based on ruler-centricity; the people may rest.

The know-it-all AI continuously challenges the very necessity of people’s creativity.

The Big AI Brother thinks for them, decides for them, and resolves all their needs; the only thing required in return is obedience to the Big AI Brother’s directives.

The Generative AI Creativity Paradox is simple:

Generative AI created by humans kills human creativity.

What are we “gonna do about this”? I do not have an answer to this question. I personally try to avoid using AI wherever I can; where I cannot, I verify every statement generated with AI involvement. What you are “gonna do about this” depends entirely on you. Do you want to preserve your humanity, or would you prefer to go with the flow?