The Rainbow Goblins animated movie is a tribute to a beloved artist and an incredible musician. The story was first told in the form of a book by Ul de Rico, published in 1978. The unique artistry of Ul de Rico's enchanting oil-on-oak paintings made The Rainbow Goblins a timeless classic. The artwork captured the imagination of Japanese musician Masayoshi Takanaka. In 1981, Takanaka released a musical masterpiece recounting the fateful tale of the seven goblins, accompanied by his transcendent guitar solos.

I came to the story several years ago through the music of Takanaka. I fell in love with both the music and the artwork. Oftentimes I'd stare at the pages of the book whilst listening to the album. The characters and the world came to life in my mind, and I dreamed of one day sharing this vision in the form of a movie. The music was so cinematic and the artwork so rich and alive that I knew they would blend perfectly into a film.

Years later, in 2024, when generative AI tools emerged, I realised the potential to animate the story. This was incredibly exciting, and I immersed myself in learning this new medium to discover what was possible. I figured the story was an ideal candidate for this sort of adaptation, considering it has already inspired many other artists and musicians to reinterpret it. Another notable adaptation came from the band Primus, whose 2017 album The Desaturating Seven reignited interest in the book for a new generation. The video accompanying their album can be seen on YouTube here: Primus - The Desaturating Seven

Sadly, Ul de Rico is no longer with us. I would have loved to show him this film. I also hope it finds its way to Masayoshi Takanaka. I attempted to find a way to contact him but haven't had any luck, so if anyone out there can put us in touch, it would be greatly appreciated.

The film was created with the assistance of generative AI. At GooRoo Animation, we specialise in stop motion animation, which seems like the antithesis of AI generated video. Having spent years painstakingly manipulating miniature models frame by frame to create my previous films, I was strangely drawn to the idea of bringing characters to life with a simple prompt. In reality, it wasn't quite that simple.

Playing around with these tools in their early stages of development was fascinating; I was discovering new possibilities every day. While AI tools make it possible to create incredible imagery, it's also really hard to get them to do exactly what you want them to do. It was frustrating but also really fun trying to find solutions to create the vision I had in mind. As generative AI develops, the limitations will surely decrease and the possibilities increase.

It's all changing so rapidly, and new tools are released every day. Even over the course of a few months, while making the film, the tools available evolved so much that I changed my workflow multiple times. It made me want to go back and remake the scenes I did first, but I soon realised I could endlessly chase my tail with that approach. There's also more I could do to clean up all the little AI errors, but I'm not sure it's really worth the time, so after fixing all the main issues, I've accepted it and moved on. It will be interesting to see how this film dates.

I thought I'd try to describe how I made this film. Firstly, I needed a good image-to-video generator that would maintain the style of the artwork. I tested all the main players on the market at the time and found that Minimax was the best for this purpose. Most of the others often drifted towards realism and turned things a bit too 3D. While I was working on the project, Minimax released a new video model, Live-01, designed specifically for 2D character animation. It didn't improve the results dramatically over the old model, but it was reassuring that I'd invested in a video generator geared towards my kind of project. And that investment was significant, considering this film was a fun little side project for me: two months of their unlimited plan cost AU$300. But I made good use of it, with over 1,500 generations.

The majority of the time I spent on this project was in Adobe After Effects, compositing together parts of different generations to create a single shot. This laborious process is not what comes to mind when you think of making AI generated video. What I've noticed from watching a lot of other early AI films is that the best examples come from creators with a background in visual effects. Part of that is just knowing how to clean up video generations and make them usable. I'm no expert, but I have enough experience from years of working on animations. Half the time in our stop motion films, what we shoot straight out of the camera isn't usable either. There are rigs to remove or backgrounds to replace.

It was a lot of fun creating the dream sequence because it allowed for more variety and experimentation. I enjoyed playing around with the Deforum plugin for Stable Diffusion, which creates incredible morphing animations. I find it works best with intricate patterns and strange psychedelic imagery. I love meditating to these videos, allowing myself to be hypnotised by the stream of imaginative visuals. I think they are a great demonstration of the type of creativity that AI is capable of.

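To give a sense of how Deforum is driven, here's a rough sketch of the kind of keyframed motion schedule it works from. The parameter names follow the Deforum extension for Stable Diffusion, but the exact fields vary between versions, and these values are illustrative rather than the actual settings I used.

```python
# Illustrative Deforum-style motion schedule. Each schedule string maps
# frame numbers to values: "frame:(value), frame:(value), ...".
deforum_settings = {
    "animation_mode": "3D",            # 2D pan/zoom or full 3D camera moves
    "max_frames": 240,                 # length of the morphing sequence
    "zoom": "0:(1.02)",                # slow, constant zoom-in
    "translation_z": "0:(0.5), 120:(1.5)",  # push the camera forward over time
    "rotation_3d_y": "0:(0.2)",        # gentle drift to the right
    "strength_schedule": "0:(0.65)",   # how much each frame inherits from the
                                       # last; lower values morph faster
    "noise_schedule": "0:(0.02)",      # added noise keeps textures evolving
}
```

Each frame is diffused from a warped copy of the previous one, which is what produces that hypnotic, endlessly morphing quality.
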
I'd take an image from the book, or sometimes just a small part of an image. I'd upscale it using Topaz Gigapixel and clean it up a bit in Photoshop. I found the video generators got better results if the image was sharpened and denoised; I think they needed distinct outlines to differentiate objects clearly. This was sometimes a bit detrimental to the artistic style, because the original oil paintings often had a nice, soft blending to them. Usually I'd take those images into Minimax to animate them. Sometimes, when I struggled to get the results I wanted, I'd try other video generators. Kling and Luma both proved useful because they had the option of supplying a last frame, which was necessary at times when the scene needed to end on something specific. I subscribed to Krea, which allowed access to a bunch of video generators in one place. Runway and Haiper got some use as well.

From there I took the generated videos into Topaz Video AI and upscaled them from 720p to 4K. Sometimes I'd slow them down or speed them up there too; Topaz was pretty good at adding inbetween frames when slowing was required. Then they'd go into After Effects if anything needed compositing or cleaning up. Finally, I edited it all together in Magix Vegas, where I also added the final effects like colour grading and a little bit of pan/crop/zooming to keep the frame constantly moving. The music was all edited in Vegas too.

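As a rough illustration of that prep step, here's what it looks like in code, using Pillow as a stand-in for Photoshop and Gigapixel (the filter choices and values below are illustrative, not my actual settings):

```python
# A minimal sketch of prepping a book scan before feeding it to a video
# generator: denoise, sharpen, then upscale.
from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("book_scan.png").convert("RGB")

# Denoise first: a small median filter smooths out grain without
# destroying the brushwork.
img = img.filter(ImageFilter.MedianFilter(size=3))

# Then sharpen, so objects have the distinct outlines the video
# generators seem to latch onto.
img = ImageEnhance.Sharpness(img).enhance(1.8)

# Upscale last (Gigapixel does this far better; Lanczos resampling is
# just the built-in approximation).
img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
img.save("prepped_for_minimax.png")
```
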
I didn't want to be limited to just the scenes in the book, so I searched for an image generator that could effectively imitate the same style of artwork. PicLumen did a great job of this and opened the door to whole new environments that blended seamlessly with the world Ul de Rico created. This was especially valuable when the goblins went on their dangerous journey, as I wanted that scene to be a montage of them crossing various landscapes.

Hopefully it's not obvious which landscapes came directly from the book and which didn't. The only one taken straight from the book was the scene of the goblins precariously crossing the fallen tree high above a chasm. PicLumen used the images below from the book as a style reference to create the scenes above, along with a relatively simple prompt such as "towering cliffs viewed from the beach below, with clouds and yellow hues in the sky."

Being obsessed with rock climbing, I decided I needed a climbing scene high up on a cliff. I couldn't quite get the movement I wanted from the goblins, so most of this scene was left on the cutting room floor, at least for now.

Any time I was using an AI generated background, I needed to insert the goblins into that environment. I did this in Adobe Photoshop, and by adding some rough shadows, it generally blended in alright. I also needed to pose the goblins in a way that suggested the action I wanted them to perform in the video. Early in the project I did this manually, by cutting and pasting limbs and using the liquify tool to bend them into place. This was time consuming and didn't get great results.

Throughout the project I was looking for an easy way to train an AI image generator on goblin images so that I could pose them in different positions simply with a prompt. I briefly tried to set up open source models like Stable Diffusion locally on my computer to enable this kind of controlled training and create a custom LoRA. But lacking the programming knowledge and the motivation to get too deep into the weeds, I decided to persist with online tools instead. Then, halfway through the project, Krea released the ability to train a model online. This was a game changer. Luckily, by this stage I already had lots of images of the goblins in different poses from the Photoshop cut-and-paste method, so I used 10 or so of these to train a model.

There were seven goblins, but they all looked pretty similar, just with different colours, so I only needed to train one model. I picked the yellow goblin, as he was the leader and featured the most. With the new yellow goblin generations, I just had to bring them back into Photoshop and change their hue. I always liquified the faces a little to differentiate them. For example, Orange had a plumper, fuller face, Green was more angular, and Red had sunken eyes and looked older. The trained model still struggled to provide much variation in facial expression, but I'm sure that if it were trained on a broader dataset with more expression variation, it would be able to create whatever I asked of it.

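For the curious, here's roughly what that hue trick amounts to in code. I did the real thing with Photoshop's hue/saturation adjustment; this Pillow/NumPy sketch just illustrates the idea, and the offsets are made up.

```python
# Recolouring the yellow goblin to make the other six by rotating the hue.
import numpy as np
from PIL import Image

def shift_hue(image: Image.Image, degrees: float) -> Image.Image:
    """Rotate the hue channel of an image by the given angle."""
    hsv = np.array(image.convert("RGB").convert("HSV"), dtype=np.uint8)
    offset = int(degrees / 360 * 256)  # PIL stores hue as 0-255
    hsv[..., 0] = ((hsv[..., 0].astype(int) + offset) % 256).astype(np.uint8)
    return Image.fromarray(hsv, mode="HSV").convert("RGB")

yellow = Image.open("yellow_goblin_pose.png")
red = shift_hue(yellow, -60)    # illustrative offsets; in practice each
green = shift_hue(yellow, 60)   # goblin's colour (and face) was tuned by eye
```
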
I had fun trying to build on the personalities I saw in the original images. Orange and Red were always together, so I made Orange a helpful, caring goblin, always looking out for his old buddy Red, who was scared and timid. Yellow was the fearless leader, always bossing everyone around, with Green always at his heels, quick to follow any order. Violet was often lost in a daydream, whilst Blue and Indigo were always quibbling about something. These little details are probably lost on most viewers, but hopefully they're picked up subconsciously. If not, at least I enjoyed developing their personalities in my mind.

For shots with multiple characters moving, it was almost never possible to get them all doing the correct thing in the same generation. I found I got better results when I gave the video generators simple prompts, asking for just one thing at a time. To composite the results together easily, I'd also need to prompt for no camera movement, so that the shot would remain stationary. This wasn't ideal, because moving camera shots generally look nicer. If there were, say, three actions that needed to take place in one shot, I might initially try to prompt for all of them at once, plus a camera move, and see how it went. When that inevitably failed, I'd simplify things and prompt for one action at a time. Most advice I heard recommended the opposite: provide as much detail as possible. That's good advice for image generation, and probably for text-to-video as well, because there the model needs to create everything from the prompt, so you want it to be as descriptive as possible. However, when you're working from detailed images and just want small, specific changes to them, it seems like less information gets the model to focus on those specific changes without hallucinating other crazy things.

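As a made-up example of what that decomposition looks like in practice (these aren't my actual prompts):

```python
# One overloaded prompt that tends to fail...
overloaded = (
    "the yellow goblin points at the rainbow while the red goblin cowers "
    "behind a rock, storm clouds roll in, slow pan right"
)

# ...versus three simple prompts run against the same still image, each
# producing a stationary clip that composites cleanly with the others.
split_prompts = [
    "the yellow goblin raises his arm and points, static camera",
    "the red goblin ducks behind the rock, static camera",
    "storm clouds drift across the sky, static camera",
]
```
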
One of the common errors in the generated videos was that they would often lose the pupils in the eyes of the characters. To restore them in After Effects, I'd motion track the eyes and add a new pupil that moved as the head did. Sometimes things randomly moved that should have been stationary, so I often used parts of the original still image in the final composition to cover that up.

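The same fix could be sketched outside After Effects. Here's the idea with OpenCV's sparse optical flow, tracking a point on the eye and stamping a pupil onto each frame (purely illustrative; the file names and starting position are made up):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("goblin_shot.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
point = np.array([[[412.0, 188.0]]], dtype=np.float32)  # hand-picked eye position

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("fixed.mp4", fourcc, 25, (frame.shape[1], frame.shape[0]))
cv2.circle(frame, (412, 188), 4, (20, 20, 20), -1)  # pupil on the first frame
out.write(frame)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade optical flow follows the eye as the head moves.
    point, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
    if status[0][0] == 1:
        x, y = point[0][0]
        cv2.circle(frame, (int(x), int(y)), 4, (20, 20, 20), -1)  # the new pupil
    out.write(frame)
    prev_gray = gray

cap.release()
out.release()
```
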
There are so many new AI tools to explore that I couldn't possibly try them all. I was excited that the dance scene gave me an opportunity to play around with Viggle, a tool designed for putting your recorded actions onto another character. I wasn't sure how well it would handle the transition to a 2D animated character. It reproduced the actions clearly and accurately, but it really wanted to give the character pants, even when I dressed up in a sheet to represent the goblin's robe. So I had to crop the shot at the goblin's waist and was limited to upper body actions. I layered a bunch of copies of the same goblin dancing footage, staggered them slightly, changed the colours, and ended up with all of them dancing roughly in sync, layered on top of a background with storm clouds rolling in.

It didn't help that I'm really tall and skinny, so I didn't match the goblins' stature. I tried generating some videos of a bigger guy dancing in a black robe, so I could have a more suitable reference character, but I couldn't quite get the specific dance moves I was after. I still used one of these shots in the final film. In the end, most of the dance scene was created with my usual workflow: pose the characters, place them on the background, prompt their actions, and piece it together with good background animation.

I always attempted to animate using AI first, but occasionally I had to resort to good old fashioned manual animation. For example, in the dream scene there were a few elements where I couldn't get the desired result with AI. The colourful drips, the goblins opening and closing their mouths, and their feet tapping were all created by editing a sequence of images in Photoshop and then timing the sequence to the music in After Effects. This one scene took a few weeks, as it was a complex project, integrating my animation of the goblins with the incredible AI generated animation of the colourful dream bubbles surrounding them.

It's interesting to compare the original images from the book with the animated scenes from the film. Some scenes, like the dream sequence, came a long way and required the most work. Other scenes, like when the goblins arrived at the cave at sunset, didn't change much from the original imagery and were relatively simple to create. You can make this comparison without the physical book on hand by watching the two videos below. However, I would really encourage you to buy the book, as there is so much detail in those images that isn't seen on screen, and there's nothing better than holding that incredible artwork in your hands.

Many people argue that AI isn't creative because it cannot produce original ideas. While that might be true, I'd argue that human brains work in much the same way: they integrate a huge collection of previous ideas from past experiences to create new ones. In my opinion, novelty is simply an amalgamation of existing ideas; in that sense, nothing is truly original. In science and technology we embrace this notion, with new concepts and innovations building a layer on top of our current understanding. It seems we're reluctant to do the same in the space of art and music, but I've come to the conclusion that the same evolution of ideas is at play. Philosophically, that's why I feel comfortable working with the art and music of creative masters like Ul de Rico and Takanaka. Hopefully I can build upon their work and add something of value, but that's for you, the audience, to decide.

The original Takanaka album is over an hour long, so it was a real challenge to cut it down to seven and a half minutes while still retaining its essence. I wanted a short, action-packed film, but that meant I couldn't include some great parts of the album. One of my favourite songs, Plumed Bird, doesn't feature at all because it didn't advance the story. The first edit was actually 18 minutes long, but the feedback I got when I showed friends and family was that it could be much tighter. Cutting it back drastically allowed me to keep the best shots and get rid of the rest. I'm thinking of creating a longer version of the film that utilises all this extra content and does justice to the music in a way that I feel the seven minute version doesn't. I was inspired recently by watching the Daft Punk movie Interstella 5555 to create something similar with The Rainbow Goblins, using the entire album and creating 14 short videos, one for each song. But I'll only undertake a project like that if I can get approval from Takanaka.

I'm excited to see where AI film making tools go in the future. I think it's pretty safe to assume we've hardly scratched the surface. They are going to enable everyone to bring their ideas to life with much less time and effort, which I think is ultimately positive. There are things that could be lost in that process, but I'm hopeful that human creativity can continue to shine through. For now, I'll be returning to my roots as a stop motion animator. We have a Claymation film in the works at GooRoo Animation. I learned a lot from experimenting with AI that I think will be useful on this upcoming project. I don't think AI is quite ready to replace the need for animators just yet, but I'll be using AI tools whenever possible to help streamline the process. For example, animating tiny movements at 25fps as we did on earlier films now seems unnecessary, when AI-assisted inbetweening will enable us to capture only the key poses and interpolate the frames in between (there's a rough sketch of the idea below). In the early phases of developing characters and environments, we've been using AI image generators to rapidly test designs, like these AI generated clay squirrels.

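Here's that inbetweening idea in miniature: shoot only the key poses at a low frame rate, then let an interpolator synthesise the frames between them. ffmpeg's minterpolate filter is motion-compensated rather than AI (dedicated AI interpolators like RIFE do this much better), but it demonstrates the workflow with no extra dependencies. The file names are made up.

```python
import subprocess

# Interpolate a stop motion clip shot on key poses only (8 fps) up to a
# smooth 25 fps using motion-compensated frame interpolation.
subprocess.run([
    "ffmpeg",
    "-i", "key_poses_8fps.mp4",
    "-vf", "minterpolate=fps=25:mi_mode=mci",
    "interpolated_25fps.mp4",
], check=True)
```
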
I'll probably return to AI film making at some stage because I can see exciting potential. I made The Rainbow Goblins Animation largely as an experiment to see what was possible, but now I also view it as a proof of concept for other book adaptations. I enjoyed making this so much that I would love to create more, and I'm thinking of reaching out to a few authors and illustrators to see if they're interested. Another obvious choice would be The White Goblin, the sequel to The Rainbow Goblins. If anyone has another children's book they would like to see animated, let me know. Or if you have any project that you'd like to commission me and my little team at GooRoo Animation to create, using AI, stop motion or otherwise, please don't hesitate to get in touch. We're passionate about bringing exciting, unique ideas to life in short films, advertising or music videos. What I love about AI film making is that it makes ambitious ideas accessible for individuals or small teams on a very low budget. Amazing special effects are no longer limited to Hollywood studios. What we choose to create is really only limited by our own imagination.
