AI Art is created using Darwinian Evolution of Ideas


Generative AIs like MidJourney or DALL-E have become very popular, but they also have their detractors. We hear a lot about these systems cutting and pasting human artists’ work and parroting the distinct styles of established artists. Lawmakers are attempting to adjust copyright law to exclude AI Art. This matters to creators at all levels.

I would like to discuss AI Art starting from the Epistemology of Creativity.

The Selectionist Manifesto says “All useful novelty in the universe comes from processes of variation and selection”.

I know this is true. I hope you can see it too.

What would be the best example of useful novelty and diversity? The Earth’s ecosphere, with all its living things. All life on Earth, including us humans, exists because Natural Evolution of Species… works.

Human creativity is pathetic compared to creativity resulting from Natural Evolution. If we had never seen a gecko, a giraffe, or an octopus (or pictures thereof), we could likely not imagine them. And no human can create a platypus from scratch, but Evolution did.

Selectionism is the generalization of Darwinism to all processes. Many everyday phenomena, such as which automobiles we can buy, are the result of a Selectionist process: Car manufacturers won’t make cars that people won’t buy, and so car designs evolve towards a set of likeable standards.

This is also true for the processes in our brains. Creativity is the ability to select a few concepts, put things together in novel ways, and compare them mentally both against a World Model we have built since birth and against some desired pattern of outcomes, such as whether the results are aesthetically pleasing. We generate and evaluate many alternatives in our minds and then let the best solutions breed with each other to generate even better solutions. We can do this incredibly quickly. Every sentence we speak has been created in a Selectionist (Darwinian) battle of concepts, ideas, and sentence fragments. The battle continues unabated even while we are speaking.
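To make this generate-evaluate-breed loop concrete, here is a minimal sketch in Python. It is a toy illustration of variation and selection, not a model of the brain; the target phrase, population size, and mutation rate are arbitrary choices made for this example.

```python
# A minimal sketch of the variation-and-selection loop described above:
# generate candidates, evaluate them against a desired pattern, let the
# best ones "breed" with occasional variation, and repeat.
# All constants here are illustrative choices, not claims from the article.
import random
import string

TARGET = "a cat in a box"                 # the "desired pattern of outcomes"
ALPHABET = string.ascii_lowercase + " "
POP_SIZE = 200
MUTATION_RATE = 0.05

def random_idea() -> str:
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

def fitness(idea: str) -> int:
    # How well does the candidate match what we are selecting for?
    return sum(a == b for a, b in zip(idea, TARGET))

def breed(parent_a: str, parent_b: str) -> str:
    # Crossover: take each character from one parent or the other,
    # with occasional random variation (mutation).
    child = [
        random.choice(ALPHABET) if random.random() < MUTATION_RATE
        else random.choice(pair)
        for pair in zip(parent_a, parent_b)
    ]
    return "".join(child)

population = [random_idea() for _ in range(POP_SIZE)]
generation = 0
while max(map(fitness, population)) < len(TARGET):
    generation += 1
    # Selection: keep the best quarter of the candidates...
    survivors = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 4]
    # ...and let them breed to refill the population.
    population = [breed(*random.sample(survivors, 2)) for _ in range(POP_SIZE)]

print(f"Reached '{TARGET}' after {generation} generations.")
```

Nothing in this loop "knows" how to write the target phrase; it emerges purely from variation and selection, which is the point of the Selectionist Manifesto above.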

The irony of this is that since brains use Selectionist Methods, with ideas breeding with each other to create better ideas… “Intelligent Design” is really an Evolutionary process inside a skull. If God existed, He would be running Evolution of ideas in His brain. The same goes for car designers and for people buying cars.

Throughout life, we humans build a “World Model” in our heads. Most of the things we know we learn through direct experience. There is also a small fraction that we learn from enjoying art in various forms. For an artist, part of the take-away from enjoying art is that it may influence our own style and techniques.

I’d like to emphasize that our knowledge of art, even among artists, is a small fraction of all our knowledge about the world.

In the case of LLMs, almost everything in their corpus is NOT art. Their learning corpora contain pictures of items and agents in the world: cats, people, cars, buildings, nature, news. These posted pictures of things in the real world are not normally classified as art.

And the LLMs read about things that are not in pictures, and somehow manage to connect all the images of cats to the word “cat”. If I ask MidJourney or DALL-E for an image of a cat in a box, I won’t get back pixels from corpus images of cats in boxes. Instead, the results are synthesized from pure Understanding of what cats look like, what boxes look like, and what cats in boxes look like.
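As a concrete illustration of prompting rather than pasting, here is a minimal sketch assuming the OpenAI Python SDK and a DALL-E model; the model name and parameters are illustrative and may differ in your setup.

```python
# A minimal sketch of prompting an image model, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment. Model name and parameters are illustrative.
from openai import OpenAI

client = OpenAI()

# The model returns a newly synthesized image: no pixels are copied
# from any particular training picture of a cat or a box.
response = client.images.generate(
    model="dall-e-3",
    prompt="a cat sitting inside a cardboard box",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```

The prompt names concepts (“cat”, “box”), and the model synthesizes an image from its learned understanding of those concepts.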

This is also what we humans do when we create. Combine. Select. Repeat.

The “Non-art” part of the World Model in our LLMs (and our brains) dominates the “Art” by orders of magnitude. My idea of an elephant is unlikely to be Salvador Dalí’s idea of an elephant, and unlikely to be close to DALL-E’s idea of an elephant.

So when a computer makes AI art, most of the input is drawn from its understanding of non-art images and from related language, rather than from actual art. It also makes novel connections between normally disjoint concepts based on the prompt. This prompt is an important part of the creative process, but its effect on the resulting image is dependent on the system’s understanding of the prompt, which again depends on the LLM (as both a Language Model and a partial World Model).

If the prompt contains instructions to paint a cat in the style of Rembrandt, the impact of Rembrandt’s style can be seen at every layer, from composition to lighting to brush strokes. What the AI takes away from being trained on images by Rembrandt is basically Rembrandt’s style; (in theory) it never uses anything he actually painted at the pixel level.

Because Rembrandt’s style is not his paintings. Styles are abstractions, an emergent property we (or an LLM) can learn by studying Rembrandt’s work.

If a human paints something that looks like something Rembrandt might have painted, then Rembrandt might deserve some recognition for both inspiration and technique, but this does not in any way give Rembrandt a copyright claim. And if you ask an LLM-based AI for something in the style of Rembrandt, it will adjust the output towards its understanding of that style, which obviously came from the rather small selection of available real Rembrandt art: 300+ paintings and 2000+ sketches. I don’t expect to find cats in boxes in any of Rembrandt’s paintings.

So when we examine LLM-generated art, there are no copy or paste lines or borders anywhere, any more than when a human paints in the style of Rembrandt. It’s nothing like Photoshop.

The LLM’s World-Model-based Understanding of the world becomes Art when we ask for Art in the prompt.

And the results are original art.
