In June 2025, Chicken Fruit animation studio had an immersive short film play at the Outernet London in partnership with LGBTQ+ charity Galop.
We got our first email from the team at Galop while we were in Japan, about four hours after we unexpectedly won the Grand Prize at Tokyo Anime Award Festival for our short film Loneliness & Laundry.
We were spending the following day travelling back to the UK, so although we usually try to be quite disciplined with our work hours, we replied at 11pm, trusting the time difference to hide our late-night mailing and the spell-check to catch our celebratory sake-fuelled typos.
The initial details were quite vague: all we knew was that it would be at the Outernet, and it would be a project renouncing LGBTQ+ hate. Armed with this hazy knowledge, enthusiasm, and a proper chat booked in for the following week, we got back to the UK and almost immediately went to Tottenham Court Road to figure out what we’d actually be making.
This on-site visit did not quite clear things up.
See, the Outernet London has several immersive locations at street level:
the huge 5-screen, 4-storey NOW Building
Now Trending, which has screens on two adjacent walls and the ceiling
Now Pop One and Now Pop Two, two spaces with floor-to-ceiling screens along one long wall
Now Arcade, an immersive LED tunnel, with screens on the walls and ceiling.
We immediately dismissed the big one. There was no way they’d trust us, a tiny animation studio that had spent most of the last few years working on a short film (which nobody can watch yet because it’s still in festivals), to make something for the NOW Building.
The previous Galop / Outernet project was in the Now Trending space, so we thought maybe we’d be making something for this one? But we reckoned it was equally likely that it would be for one of the Pop spaces or the tunnel. We spent about 40 minutes across all of the locations strategising how we’d best utilise those smaller spaces.
We also didn’t know exactly when our animation would be playing; all we knew at this stage was “Pride 2025”, which we took to mean the day of Pride London.
So, armed with the expectation that we’d probably be making something for 1-3 big-ish screens, and it would play for maybe one day (which, to be clear, was already a very cool idea), we had our first chat with the Galop and Outernet teams.
From these initial meetings, we learned that we’d be making a two-minute animation for the five huge screens of the NOW Building, and it would be playing for the entirety of June.
Cool cool cool.
With only two months until launch, we got stuck in.
Pre-Production
The message Galop wanted to highlight was that hate hurts all of us: even when targeted at a specific group of people, hatred and discrimination end up affecting everyone.
Beyond this message, however, the brief was fairly open. Now we knew we had five huge immersive screens to work with, we wanted to make something that took advantage of the space – something that was best experienced in person.
We came up with eight early ideas. Some of them played with movement (for example having something rising upwards so it would feel like you were sinking underground as you watched), some with the immersive space (e.g. making the ceiling design give the impression of an endless tunnel) and some that were a bit more meta (incorporating the non-screen elements of the space to make the animation feel embedded in reality).
After consulting with the Galop and Outernet teams, we landed on the concept of the Squidgies.
The Squidgies would be cute, colourful spheres, which would be attacked by spiky, hateful characters called the Spikeys. We would show how the Spikeys’ hate would spread throughout the Squidgies, and then have the Spikeys turn on the audience, using a glass-smashing effect to make it seem like they were breaking through the giant screens.
Up until this point, this project had been nothing but fun and exciting and quick. That first email from Galop was on March 10th; by April 10th we’d locked the concept, done some lookdev, delivered the storyboards – and had squeezed in attending several other days-long film festivals with Loneliness & Laundry.
This is when things started getting a little trickier.
We got the delivery specs through from the Outernet technical team. As a small studio, we’re used to making things for social media. We’ve done some stuff for TV. We spent a chunk of the last few years making assets for a mobile game. All of these things are fairly small file sizes, usually 1080p max.
The Outernet screens were 8k.
We hand-drew a lot of our most recent short film, and much of that was done at 12 frames per second. For our 3D work, we usually work at 24 fps.
The Outernet wanted 50 fps.
There were also – not sure if we’ve mentioned this yet – five screens.
So, while usually two minutes of animation takes us maybe an hour to render, for this project, we bookmarked two solid weeks.
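The maths behind that estimate is simple but brutal. Here’s an illustrative back-of-the-envelope sketch – the per-frame time is an assumed figure for the sake of the arithmetic, not an actual benchmark:

```python
seconds = 2 * 60          # two minutes of animation
fps = 50                  # Outernet delivery spec
screens = 5               # NOW Building screens

frames = seconds * fps * screens   # total frames to render
per_frame_s = 40                   # assumed average seconds per 8k frame

total_days = frames * per_frame_s / 3600 / 24

print(frames)                # 30000
print(round(total_days, 1))  # 13.9 -- roughly two solid weeks of non-stop rendering
```

Even generous changes to the assumed per-frame time leave the total in the order of weeks, not hours.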
Were there quicker options? Sure! We played with the idea of cloud rendering, which would bring the render time down to our usual hours rather than literal weeks, but two things held us back. Firstly, the cost: since we made this project as a donation to Galop, we couldn’t afford it.
Secondly, we had to consider the immersive nature of the screens. We had squishy characters travelling from the north screen, along the west screen, over onto the south screen – if the cloud renderer had slightly different settings to what we were expecting, the screens might not line up exactly and that immersive feeling would be lost. If the colour settings differed between renders, the screens wouldn’t match visually. If our squishy simulation calculated differently on one screen, even the still characters wouldn’t match up. And if anything came back wrong and we had to re-render, it would cost thousands of pounds to redo.
Apart from cloud rendering, we also considered seeing if we could enlist some animation pals and borrow their machines for overnight renders. But this would come with the same concerns re: render settings and lack of consistency – plus, the couple of people we were brave enough to reach out to never got back to us, which we’re sure will not contribute to any kind of insecurity going forwards.
We had a couple of back-up plans in case things got really dire, but we figured we could handle the huge render times. We just had to get going.
We quickly put together a rough animatic of the south wall – the biggest one – to get an idea of the timing of the main story beats. At the same time, we made some 360° styleframes playing with colour variations, as well as some options for the ceiling: we thought it would be cool to have a giant tunnel on the ceiling screen, with dozens more squidgies fading off into the sky.
Galop opted for the brighter colours mixed together, rather than a gradient. We also mutually agreed that the endless tunnel of squidgies might overstretch us a bit, because we really thought we were making good decisions re: keeping things simple.
The thing is, we designed our characters as spheres with little faces on. You couldn’t get any simpler!
And maybe if we were designing for our usual single screen, this would have been true.
But with five wrap-around four-storey screens, you need a lot of spheres to fill the space.
We also pitched these spherical babies as “Squidgies” – so they had to actually, you know, squidge. This meant running dozens of simulations, which then had to match up across screen borders.
Not only that, but when several squidgy spheres are smushed together, moving one sphere affects all the spheres around it, which in turn affect all the ones around them, and so on.
We had to hand-animate all that.
Finally, we needed a mechanism for these squidgy spheres to turn into hard, spiky, mean characters. This meant building that spiky geometry into the squidgy version – which upped the complexity of these super simple characters by a lot.
For our first on-site test on May 8th, we had big dreams of being able to preview an entire 360° animatic. This second month was much rougher than the first, though – once we’d figured out all the technical challenges of actually setting up our scenes, we simply didn’t have enough time to render 2 minutes of animation for all 5 screens.
We did, however, manage to render out wrap-around styleframes for all five screens, show a Squidgy transforming into a Spikey, and test out some initial audio.
Although we were initially apprehensive about them, these styleframe tests were actually super useful – they allowed us to check that our guesses re: perspective were right, make corrections, and confirm that the scale we were working at wasn’t too overwhelming.
The first time we ever saw the Squidgies in the NOW Building
After a super stressful week preparing for the first test, we felt more reassured now we’d seen something actually work in the space.
This was about the last low-stress day we had for the rest of the month.
Production
Through a great many trials and tribulations, we slowly figured out solutions to our problems.
To simplify the Squidgy rigs, over the course of production we:
Merged some deformers
Rigged an XPresso slider to increase the poly count of our Spikeys only during the transformation
Split the renders into passes: a low-sample de-noised pass for the Squidgies (as they were basically a flat colour) and a high-sample pass for the Spikeys, which had more texture and detail
Opted to do facial animation using 2D planes so they could be in high resolution.
And all of these steps did save us time! But, as is often the way, they flagged up new problems of their own.
For example, separating out the facial animation into 2D planes meant that we were rendering out the face pass separately to the 3D pass – and this in turn meant that the faces weren’t affected by the squidgy simulations. So, when a Squidgy got squidged, the face would slide around completely independently of the rest of the body.
We fudged this for a while, having our producer Lindsey get back into the Ink & Paint department and mask the facial features by hand, before we figured out a proper solution: attaching a surface deformer to the faces, which meant we could use the simulated geometry of the Squidgies to drive the position of the 2D face rig.
(This in turn gave us another issue, where the surface deformer cancelled out the rotation of the face rig, but luckily we spotted it quickly enough to fix.)
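Conceptually, a surface deformer works by pinning a point to a triangle of the mesh via fixed barycentric weights, so wherever the simulation pushes the vertices, the pinned point – here, the face – follows. This is a hand-rolled sketch of that idea, not C4D’s actual implementation:

```python
def follow_surface(tri, weights):
    """Return the pinned point on a (possibly deformed) triangle.

    tri     -- three (x, y, z) vertex positions from the simulated mesh
    weights -- barycentric weights summing to 1, fixed at bind time
    """
    return tuple(
        sum(w * v[axis] for w, v in zip(weights, tri))
        for axis in range(3)
    )

# Bind the face to the centre of a triangle at rest...
rest = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 3.0, 0.0)]
w = (1 / 3, 1 / 3, 1 / 3)
print(follow_surface(rest, w))  # (1.0, 1.0, 0.0)

# ...then, after the squidge simulation moves the vertices,
# the same fixed weights give the face's new position automatically.
squidged = [(0.0, 0.5, 0.0), (2.4, 0.2, 0.3), (0.1, 3.6, 0.0)]
print(follow_surface(squidged, w))
```

Because the weights are bound once and never change, the face sticks to the same patch of skin however violently it gets squidged.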
Even with all our tweaks, we continued to encounter a LOT of issues with our renders.
Because C4D struggles with real-time previewing when a scene gets to a certain size, it was impossible to see our animation without rendering it first. Even when a render only takes a few seconds per frame, this quickly adds up.
And that’s not to mention that we needed to preview things in 360°. Since we don’t (yet?) work with Unreal, the best solution we could think of was to:
Render each screen from C4D
Skew these renders in After Effects to make the perspective work in the Outernet space
Bring those AE renders back into C4D and map them onto the 3D screen set-up
Render THAT
Upload everything to Vimeo, tagged as a 360° video in the metadata.
All this meant that if we wanted to properly preview our progress, we needed somewhere in the region of a day and a half.
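Summed up, one full preview pass looked roughly like this – every duration below is an assumed, illustrative figure rather than a measured one:

```python
# Rough time budget for one complete 360-degree preview loop.
# All figures are assumed estimates for illustration only.
hours_per_screen_c4d = 4   # render one screen's frames from C4D
screens = 5
hours_ae_skew = 8          # skew + composite passes in After Effects
hours_c4d_reassembly = 6   # map AE output onto the 3D screen set-up and re-render
hours_export_upload = 2    # final export and Vimeo upload

total_hours = (hours_per_screen_c4d * screens
               + hours_ae_skew
               + hours_c4d_reassembly
               + hours_export_upload)

print(total_hours)  # 36 -- roughly a day and a half per preview
```

The point isn’t the exact numbers; it’s that the stages are sequential, so every review cycle costs the whole chain.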
With now only a few weeks to go, we did not have that sort of time.
These After Effects renders were part of our final export process, too. Not only did we use them to skew the C4D renders, but also to composite the faces and backgrounds.
Per screen, these AE renders took a modest ~8 hours, but unlike C4D renders, if After Effects renders fail halfway through for whatever reason, you have to start them again from the beginning, rather than just picking up where you left off.
Additionally, After Effects renders fail frequently, and often without telling you why 🙂
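The resume-versus-restart difference mostly comes down to output format: a render written as an image sequence can skip frames that already exist on disk, while a single movie file has to be written in one unbroken go. Here’s a minimal sketch of the skip-existing pattern, with a hypothetical output path and a stand-in render function:

```python
from pathlib import Path


def render_frame(frame: int) -> bytes:
    # Stand-in for an expensive per-frame render.
    return f"frame {frame}".encode()


def render_sequence(out_dir: str, first: int, last: int) -> int:
    """Render frames [first, last] as individual files, skipping any
    frame already on disk -- so a crashed run resumes where it left off.
    Returns the number of frames rendered this run."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    rendered = 0
    for frame in range(first, last + 1):
        target = out / f"frame_{frame:05d}.png"
        if target.exists():
            continue  # rendered in a previous run; skip it
        target.write_bytes(render_frame(frame))
        rendered += 1
    return rendered
```

Run it twice against the same folder and the second run renders nothing – which is exactly the behaviour a movie-file export can’t give you.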
Luckily, we did manage to squeeze in some frivolity amongst the render stress.
During our initial concept presentations, Galop were keen on the idea of presenting the Squidgies as cute, sensitive, loveable creatures, allowing people to get attached to them before the horrors of hate crime ruined them.
This meant, of course, that they needed cute little voices.
We obviously did not have the budget to hire a professional voice actor. But we did have a secret weapon: our director Jonny, who, for a cis man in his mid-to-late 30s, is alarmingly good at doing cute little voices.
Featuring the high-tech set-up of a blanket fort for soundproofing, a mic we have for Zoom calls, and Apple’s Voice Memo app, we recorded over 350 different voice clips for the Squidgies and Spikeys.
As well as sound effects, we also needed music.
We cut our initial animatic to a track by Reeder. We intended it to be a placeholder, just to show the vibe of what we had in mind, but the more we tried to find a final piece, the more we realised the Reeder track (“Unwhere”) was actually perfect.
As it happens, as well as being an incredible composer, Reeder is also an incredibly generous person – they donated Unwhere to the project, and even did some tweaks for us to make it more suited to a huge event space, rather than an intimate personal listening experience.
At this point, time was becoming a pressing concern. We began to rely heavily on a production spreadsheet, calculating exactly how long each render took per frame, so we could be absolutely sure we would make the deadline – which was now looming ominously on the horizon.
While we had initially intended to have everything but rendering finished by two weeks before delivery, in reality we’d encountered so many unexpected problems that we had to waterfall renders, still doing production work on some scenes while others were rendering, switching between machines like mad scientists.
Our second and final testing slot was on Thursday 29th May; the launch for the project was the following Tuesday.
Somehow, we managed to get everything rendered for the testing slot. And it all worked – much more smoothly than we had feared. In fact, we could have left it there and it would have been fine, but there were a couple of things we figured we’d improve just a little over the weekend – a tiny simulation mismatch where two of the screens met, some audio levels, a couple of faces that were still going a bit rogue. Miraculously, just tweaks.
Somehow, despite all the stresses of doing something new and scary and fiercely difficult, we’d pulled it off.
After
Once we’d delivered the final files, attended the launch event, made some final bits and bobs for social media and spent a good few hours hanging out at the Outernet experiencing the novelty of watching other people watch our animation, we were able to reflect on the frenzy of the previous few months.
Obviously, first and foremost, it was an honour to be involved in a project for such a good cause. Galop do so much good work in campaigning against LGBTQ+ discrimination, and working with them was a joy and a privilege.
Secondly – look, the Outernet is cool, isn’t it. We knew it was cool when we took on the project, we knew it was cool all the way through production, and we knew it was cool when a thing we made was playing on five immersive screens all the way through June. We feel incredibly lucky that we got the chance to make something for this space, and really appreciate all the wisdom and support the teams at both Galop and the Outernet lent us during this production.
As for us…
We can’t pretend this one was a walk in the park. In fact, it was probably the toughest project we’ve taken on to date – mostly because the sheer size of the renders made us feel insane. Our usual workflow uses C4D, Redshift and After Effects, which necessitated literal weeks of render time, while the idea of Unreal Engine was sitting in the back of our minds, unhelpfully reminding us that if we’d just spent a few years learning different software, we could be exporting everything in real time.
But ultimately, this project taught us so much about a type of work that we’d previously only dipped our toesies into. We’d love the opportunity to do something similar in future so we can test out everything we learned.
Additionally, on the back of our Squidgy struggles, we’re more determined than ever to simplify our processes and branch out into the new software we’ve been telling ourselves we don’t have enough time to learn.
Since the Squidgies were released into the wild, we’ve already done our first project using Substance Painter, and we’re finally starting to properly experiment with Unreal. In a bid to free ourselves from Adobe entirely (we switched from the Creative Suite to Affinity years ago), we’ve been editing in DaVinci Resolve and Fusion instead of After Effects, and, like everyone else, we really want to get stuck into Blender to see if it can smooth out any of C4D’s ugly edges for us.
But, ultimately, our animation woes and rendering stresses only lasted a couple of months. We signed up for it, brought a lot of the struggles on ourselves, and now it’s over and we can look back and learn from it.
The hate and discrimination that LGBTQ+ people experience isn’t temporary. It isn’t something that anyone signs up for, or chooses, or brings upon themselves.
And it affects everyone. It affects LGBTQ+ people the most, absolutely. But literally nobody is safe from the effects of it.
Hate hurts all of us.
And it takes all of us to stop it.
That’s what we wanted to show with our animation, and that’s why we’re proud to support Galop and their work.
If you experience hate or are made to feel unsafe because of your sexual orientation or gender identity, you can talk to Galop for advice and support on their helpline at 0800 999 5428, via email at help@galop.org.uk, or on their website, galop.org.uk.
CREDITS
Galop
Head of Fundraising and Comms: Boom Macleod
Media and Communications Manager: Rachel Perry
Outernet
Content Relations and Production Executive: Jake Carlyon
Senior Media Solutions Engineer: Eduard Martinenko
Creative Director of Culture/Lifestyle: Scott Neal
Editorial Campaign Manager: Nikki Worth
Chicken Fruit
Creative Director: Jonny Eveson
Producer: Lindsey Williams
Music by Reeder
Photos taken within the NOW Building supplied by the Outernet.