Yesterday OpenAI released a showcase of examples from DALL-E, a model that generates images from text. The complexity of the never-before-seen examples is impressive. The quality of the illustrations and images makes you wonder where this will be used when anything you can think up can be convincingly generated.
Art, design, fashion, or possibly just to deceive? Still, I'd love to see more categories or variations, and possibly an open API.
The examples are uncanny and familiar while still being "unique" or new. The low effort required to produce many satisfactory results will be enough for many, especially if the cost is low.
It's easy to point to jobs or industries this could one day replace; however, I see more people working with machine learning. Human preference should steer results. A fleshy discriminator.
And just the day before, another generative adversarial network, or GAN, was shown: Taming Transformers. This technique uses segmentation maps trained on landscape images to generate high-resolution landscapes.
Cryptoart seems to be booming for well-known generative artists at the moment, and tools like these now eclipse some of the earliest GAN art sold at auction.
Previously, other trained networks such as StyleGAN and BigGAN piqued my interest in generating art with code. Neural networks for image generation have improved in leaps and bounds, and better UIs that let anyone explore make this genuinely accessible.
Using the Adafruit AdaBox #017, I added LEDs and an e-ink display to a laser-cut steel lightbox by Coast Salish artist Xwalacktun as a gift this year.
A miniature version of He-yay meymuy (Big Flood), it's 30 cm tall with an 11 cm diameter. The original piece is an impressive 487.8 cm tall by 167.6 cm wide, made of aluminum. Located at the entrance to the Audain Art Museum, it is a powerful piece inside a beautiful building.
I used a metre of RGB NeoPixels wrapped around a cardboard tube, diffused with bubble wrap, and plugged into the Adafruit MagTag to animate rain. In this mode the lightbox eventually fills to a solid blue.
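The rain "fill" idea can be modelled without any hardware. Below is a minimal sketch in plain Python: a single drop falls down the strip, lands on the settled blue "water line", and the line rises one pixel until the whole strip is blue. On the MagTag this list of RGB tuples would be a `neopixel.NeoPixel` strip instead; the `step_rain` and `run_until_full` names are my own, not from the project's code.

```python
BLUE = (0, 0, 255)
OFF = (0, 0, 0)

def step_rain(pixels, drop, filled):
    """Advance the rain animation by one frame.

    pixels: list of RGB tuples (stand-in for the NeoPixel strip)
    drop: index of the falling drop, or None to spawn a new one
    filled: count of settled blue pixels at the bottom of the strip
    Returns the updated (drop, filled) state.
    """
    n = len(pixels)
    # Move the drop down one pixel (or spawn a new one at the top).
    drop = 0 if drop is None else drop + 1
    if drop >= n - filled:
        # The drop reached the water line: the line rises by one.
        filled += 1
        drop = None
    # Redraw the whole strip from the current state.
    for i in range(n):
        settled = i >= n - filled
        falling = drop is not None and i == drop
        pixels[i] = BLUE if (settled or falling) else OFF
    return drop, filled

def run_until_full(n):
    """Run the animation until the whole strip is blue; return the strip."""
    pixels = [OFF] * n
    drop, filled = None, 0
    while filled < n:
        drop, filled = step_rain(pixels, drop, filled)
    return pixels
```

On the real strip you would call `step_rain` inside the main loop with a short `time.sleep()` between frames, then `pixels.show()` if `auto_write` is off.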
There are three other modes with different colours and animations using the adafruit_led_animation library. Each mode animates differently and updates the e-ink display with a section of art from the lightbox.
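The mode-switching itself is just a cycle that pairs each animation with an e-ink image. Here's a minimal sketch of that pairing in plain Python; the mode names and image filenames are hypothetical placeholders, and on the MagTag each name would map to an adafruit_led_animation animation object and a display refresh.

```python
# Hypothetical mode table: (animation name, e-ink image to show).
# Only the four-mode structure comes from the project; the specific
# names and filenames here are illustrative placeholders.
MODES = [
    ("rain", "rain_section.bmp"),
    ("pulse", "pulse_section.bmp"),
    ("comet", "comet_section.bmp"),
    ("sparkle", "sparkle_section.bmp"),
]

def cycle_mode(current):
    """Return (next_index, animation_name, eink_image) after a button press."""
    nxt = (current + 1) % len(MODES)
    name, image = MODES[nxt]
    # On hardware: stop the current animation, start the next one, and
    # refresh the e-ink display with `image`.
    return nxt, name, image
```

Wrapping at the end of the list means a single button can step through all four modes indefinitely.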
I even upcycled the spool from the NeoPixel strip to mount the MagTag, as it fit snugly. There are plenty of other features I've yet to take advantage of, including the built-in Wi-Fi, light sensor, and more.
The base is made with glued pieces of western red cedar to mimic the architecture of the museum and carved to receive the artwork.