Generative Art Using Machine Learning

Yesterday OpenAI released a showcase of examples from DALL-E, which generates images from text. The complexity of never-before-seen examples is impressive. The quality of the illustrations and images makes you wonder where this will be used when anything you can think up can be convincingly generated.

Sleepy owl illustrations are not SuperbOwl, but not too bad.

Art, design, fashion, or possibly just to deceive? Still, I’d love to see more categories or variations. Possibly an open API.

Dendrocreation

The examples are uncanny and familiar while still being “unique” or new. The low effort required to produce many satisfactory results will be enough for many people, especially if the cost is low.

It’s easy to point to jobs or industries that this could one day replace; however, I see more people working with machine learning. Human preference should steer the results. A fleshy discriminator.

I joked that I’m now a Canadian fashion designer.

And just the day before, another generative adversarial network (GAN) called Taming Transformers was shown. This technique uses semantic segmentation maps trained on landscape images to generate high-resolution landscapes.

Generate van art

Cryptoart seems to be booming for well-known generative artists at the moment, and tools like these now eclipse some of the earliest GAN art sold at auction.

Previously, other trained networks such as StyleGAN and BigGAN piqued my interest in generating art with code. Neural networks for image generation have improved in leaps and bounds. Better UIs that let anyone explore make this really accessible and easy.

Exciting times.

“Big Flood” lightbox

Blue LED animation in a laser-cut steel Coast Salish artwork

Using the Adafruit AdaBox #017, I added LEDs and an e-ink display to a laser-cut steel lightbox by Coast Salish artist Xwalacktun as a gift this year.

A miniature version of He-yay meymuy (Big Flood), it’s 30 cm tall with an 11 cm diameter. The original piece is an impressive 487.8 cm tall by 167.6 cm wide, made of aluminum. Located at the entrance to the Audain Art Museum, it’s a powerful piece inside a beautiful building.

I used a metre of RGB NeoPixels wrapped around a cardboard tube, diffused with bubble wrap, and plugged into the Adafruit MagTag to animate rain. In this mode the lightbox will eventually fill to a full blue colour.
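For flavour, here’s a minimal sketch of how a rain-fill effect like this could be written; the pin (D10, one of the MagTag’s STEMMA ports), pixel count, and timing are my assumptions, not the code from the build.

```python
# Hypothetical sketch of the rain mode: a blue drop falls down the strip
# and the "water level" rises one pixel each time it lands, until the
# whole tube is solid blue. Pin and pixel count are assumptions.
import time
import board
import neopixel

NUM_PIXELS = 30  # one metre of 30-per-metre NeoPixels (assumption)
strip = neopixel.NeoPixel(board.D10, NUM_PIXELS, brightness=0.4, auto_write=False)

WATER = (0, 0, 255)  # filled pixels
DROP = (0, 80, 255)  # the falling drop

filled = 0
while filled < NUM_PIXELS:
    for pos in range(NUM_PIXELS - 1, filled - 1, -1):  # drop falls from the top
        strip.fill(0)
        for i in range(filled):
            strip[i] = WATER  # water accumulated so far
        strip[pos] = DROP
        strip.show()
        time.sleep(0.03)
    filled += 1  # the drop joins the water; the level rises

strip.fill(WATER)  # the lightbox is now a full blue colour
strip.show()
```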

There are three other modes with different colours and animations using the adafruit_led_animation library. Each mode animates differently and updates the e-ink display with a section of art from the lightbox.
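Here’s a hedged sketch of what mode cycling with the library’s AnimationSequence can look like; the particular animations, colours, and button handling are stand-ins, not the actual modes:

```python
# Hedged sketch: three modes cycled with the MagTag's A button.
# The animations and colours here are stand-ins for the real ones.
import board
import digitalio
import neopixel
from adafruit_led_animation.animation.comet import Comet
from adafruit_led_animation.animation.pulse import Pulse
from adafruit_led_animation.animation.sparkle import Sparkle
from adafruit_led_animation.sequence import AnimationSequence
from adafruit_led_animation.color import BLUE, GREEN, AMBER

strip = neopixel.NeoPixel(board.D10, 30, brightness=0.4, auto_write=False)
modes = AnimationSequence(
    Comet(strip, speed=0.05, color=BLUE, tail_length=8),
    Pulse(strip, speed=0.05, color=GREEN),
    Sparkle(strip, speed=0.05, color=AMBER, num_sparkles=5),
    auto_clear=True,
)

button = digitalio.DigitalInOut(board.BUTTON_A)  # MagTag's leftmost button
button.switch_to_input(pull=digitalio.Pull.UP)   # pressed reads as False

while True:
    modes.animate()
    if not button.value:         # advance to the next mode on a press
        modes.next()
        # ...this is also where the e-ink would be redrawn with the
        # section of artwork for the new mode (see the BMP sketch below)
        while not button.value:  # crude debounce: wait for release
            pass
```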

Adafruit MagTag e-ink display
Top view: with e-ink, the last image remains even after removing power.

I even upcycled the spool from the NeoPixel strip to mount the MagTag, as it fit snugly. There are plenty of other features I’ve yet to take advantage of: the built-in Wi-Fi, light sensor, etc.

Art by Coast Salish artist Xwalacktun
296 × 128 px indexed-colour .BMP used for the e-ink display
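For anyone curious how a BMP like that gets onto the panel, here’s a minimal displayio sketch, assuming a recent CircuitPython and a placeholder filename:

```python
# Minimal sketch: show an indexed-colour BMP on the MagTag's built-in
# 296x128 e-ink display. The filename is a placeholder.
import time
import board
import displayio

display = board.DISPLAY  # the MagTag's e-ink panel
bitmap = displayio.OnDiskBitmap("/artwork.bmp")  # 296 x 128 px indexed BMP
group = displayio.Group()
group.append(displayio.TileGrid(bitmap, pixel_shader=bitmap.pixel_shader))
display.root_group = group  # use display.show(group) on older CircuitPython
time.sleep(display.time_to_refresh)  # e-ink enforces a cooldown between refreshes
display.refresh()  # the image persists even with power removed
```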

The base is made from glued pieces of western red cedar, shaped to mimic the architecture of the museum and carved to receive the artwork.

Untitled

Low-angle sun illuminates Whistler Mountain

Whistler Mountain and crystallized snow from a walk today.

Adafruit MagTag

Just started playing with the Adafruit MagTag, an internet-connected e-ink display.

Running CircuitPython and using the ESP32-S2 to connect over Wi-Fi. It’s wild that the image on the four-shade B&W e-ink display stays even after power is removed.

STEMMA connectors make adding sensors quick and easy. The live-updating code.py file and plenty of libraries and examples make testing super quick.
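As a taste of that workflow, this is the kind of tiny code.py you can save straight to the CIRCUITPY drive and watch re-run, reading the MagTag’s built-in light sensor:

```python
# Tiny code.py sketch: save it to the CIRCUITPY drive and it reruns
# immediately, printing the MagTag's built-in light sensor to serial.
import time
import analogio
import board

light = analogio.AnalogIn(board.LIGHT)  # built-in phototransistor

while True:
    print("light:", light.value)  # 0-65535, brighter is higher
    time.sleep(1)
```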

This greyscale e-ink explainer is a great way to understand what’s going on.

The most difficult thing so far has been correctly double-clicking the reset button to enter the UF2 bootloader.

More to come…

Refresh 2020

An ongoing rebuild… Please watch your step. [ under-construction.gif ]