A former anthropology student from Los Angeles may be the George Lucas of artificial intelligence

By Anand Kumar
Senior Journalist, Global India Broadcast News
Anand Kumar covers national affairs, education, and digital media. He focuses on fact-based reporting and in-depth analysis of current events.

This story comes from The Hollywood Reporter's upcoming AI issue, which will be published on March 31. Check out more stories throughout the week, and the full issue next week.

On Instagram, the director known as Gossip Goblin posts dark sci-fi epics set on strange worlds inhabited by mutant creatures and dungeon communities. The images are accompanied by philosophical narration that contemplates reality. The short films feel strangely like bits of big-budget cinema. But they weren't shot on soundstages or rendered by a visual effects studio: they were created and assembled using artificial intelligence.

At a time when Hollywood and Silicon Valley are still debating what AI actually is — a cost-cutting tool, a visual gimmick, or the basis of an entirely new cinematic language — Gossip Goblin offers a more provocative possibility: that AI already has an emerging aesthetic, and that it belongs not to studios but to individuals willing to turn it into something personal. His films do not reject the strangeness of the medium — the dream logic, the artificial textures, the sense of images that are half-remembered rather than fully observed — but lean into it, suggesting a kind of storytelling that feels less like traditional filmmaking and more like visual thought.

The man behind the account is Zach London, 35, a Los Angeles native who has so far maintained a relatively low public profile even as his work has gone viral online. He points out that the name "Gossip Goblin" began as a deliberately unserious internet alias, but it has since become attached to a growing body of work that he can no longer shake off. London studied sculpture and anthropology at Pitzer College before drifting into product design and working in virtual reality at tech companies like Oculus. Four years ago, he moved to Stockholm after meeting his Swedish partner. While experimenting with early image-generating programs after work, he found a new way to visualize the stories he had long been writing.

Since then, Gossip Goblin has quietly amassed over a million followers on Instagram and millions of additional views across platforms. London recently quit his tech job, raised a small round of funding, and launched a studio to produce longer, AI-driven films with a small international team. His first major effort — a 20-minute short titled Bachoright, set in a grungy, Blade Runner-style world populated by hybrid flesh-metal characters, and featuring a full cast of voice actors, a foley artist, and an original score — is scheduled for release in the coming weeks after nearly five months of production.

This approach puts him in an intriguing position within the fast-moving AI landscape. While social media is filled with one-click AI-generated videos (often dismissed as "slop"), London insists that his projects still involve many of the same steps as traditional filmmaking: scripts, shot lists, voice actors, foley artists, and extensive editing.

Whether this process represents the future of independent filmmaking or merely a transitional curiosity remains an open question. But Hollywood is starting to pay attention. London says he has received calls from studios, actors, and directors who are curious about what AI storytelling could become.

The Hollywood Reporter spoke with London about how he makes his films, why most AI content fails to stand out, and whether a hit film might one day emerge from this new medium.

You are originally from Los Angeles. How did you end up doing this from Stockholm?

I grew up in the Valley and studied sculpture and anthropology at Pitzer College — two very lucrative majors. I thought maybe I would go to law school after that, but I ended up getting a Fulbright scholarship to Malaysia and spent almost two years traveling around Southeast Asia. I then moved to the Bay Area and started working in tech as a product designer at startups and eventually at Facebook on Oculus doing VR work. I moved to Sweden about four years ago after I met a Swedish girl — it was either she moved to the U.S. or I moved here, and here I am. Making movies was never part of the plan. I've always drawn and written stories, and even self-published a few small books of travel writing and short fiction, but it never occurred to me that filmmaking was something available to me. AI kind of changed that.

How did you first start experimenting with AI tools?

About three and a half years ago I was messing around with early image-generation tools with a co-worker after work. We were trying to use them in a design project and the results were terrible — completely unusable for corporate work — but the technology itself was fascinating. Before video generation existed, I began writing a series of travel pieces about a fictional country called Orumquan, in the style of a 1980s National Geographic. I created a completely fake ethnography of this fictional Soviet country and used Midjourney to generate images that look almost documentary but are surreal. It unexpectedly took off online and got me excited about telling stories again. When video tools started to emerge, I realized that moving images meant you could actually build narrative worlds — although early on the technology was so limited that the storytelling had to adapt to what AI could realistically produce.

Your work looks much more polished than most AI videos online. How are these films actually made?

The biggest misconception is that someone writes “sci-fi movie” in a prompt and the movie appears on the other side. Maybe we’ll get there eventually, but that’s not where technology is today. Our process starts with a script, and then we break that script down into what looks like a traditional shot list – every scene, every angle, every environment. Then we start exploring the visual world: what the characters look like, what the creatures look like, what kind of lighting and architecture this world has. Once we define that aesthetic, we create and enhance hundreds or thousands of images and videos that fit the story, then everything is assembled and edited in DaVinci Resolve like a regular movie. We also work with voice actors and even a foley artist to create sound effects, so there’s still a lot of traditional filmmaking craft involved.

What AI tools do you use to create images?

Quite a lot of them — 15 to 25 tools across the entire pipeline. There is no single magic generator that does it all. Some tools are better for creating raw images, others are better for rendering characters consistently across different scenes, and others are better for motion or animation. Midjourney is still a favorite for image generation, but we also use other models that are better at reproducing a given character from multiple angles or lighting conditions. Consistency is one of the hardest problems in AI filmmaking — if a character changes appearance from one shot to the next, the illusion breaks down — so a lot of the work is about figuring out how to control the output across different tools.

One thing I noticed while watching your short film is that it relies heavily on narration rather than dialogue. Was that intentional?

Mostly it was a technical limitation. When we made this film, the tools simply weren’t good enough to produce convincing dialogue scenes with synchronized speech and performances. If we had tried to do that, it would have been awkward or contrived, so we focused on narrative and atmosphere instead. The next project we’re working on is about 25 minutes long and is more dialogue-based because technology has improved significantly since then. Tools are evolving so rapidly that what seemed impossible a year ago is now achievable.

Are the voices in your films generated by artificial intelligence?

No, they’re all human voice actors. We’re working with two performers – one was an opera singer and is now a DJ in San Francisco, and the other is a jazz singer in the UK. Synthetic voices have become incredibly convincing, but real performers still deliver something that is difficult to replicate. Eventually, motion capture performance will likely become a bigger part of this workflow as well, where you record an actor’s performance and translate it into an AI-generated character, but that part of the technology is still very early.

You've built a following of over a million people online. Why do you think your work stands out from other AI content?

Honestly, because most AI content is what people call "slop." This technology has a sort of default visual style, and if you just press the button and accept what's generated, you end up with generic sci-fi visuals that look like everything else. It actually takes a lot of work to push AI away from that baseline and impose a specific creative vision. The other difference is storytelling. Many creators focus entirely on visuals — impressive imagery with no story behind it. I'm more interested in building a mythology, with recurring characters and stories that exist in a larger world.

You recently quit your job and started building a studio around that work. What is the goal?

The goal is to build a larger world of stories — not mass-manufactured content, but thoughtful science fiction created by a small team. What’s exciting about AI is that it may allow people to create ambitious stories without needing hundreds of millions of dollars. Historically, if you wanted to produce large-scale science fiction, you needed a huge studio production. Now a few people may be able to create something visually comparable with far fewer resources.

Have Hollywood studios started contacting you?

Yes, I’ve talked to most of the studios and streamers at this point, as well as some actors and directors whose work I really like. A lot of these conversations are just curiosity — people trying to understand what the future of filmmaking might look like. Some actors are raising questions about whether they should license their voices or likenesses for AI use. I don’t think anyone really knows the answers yet, but there’s definitely a lot of interest.

Would you like to eventually partner with Hollywood or build this independently?

Our goal is to retain as much ownership of intellectual property as possible. In a future where artificial intelligence allows anyone to create massive amounts of content, there will be an enormous amount of noise online. The things that will really hold value are recognizable characters and worlds that the audience connects with. If we can build a small body of stories and intellectual property that people really care about, that's where the long-term value lies.

Do you think there’s a real AI-generated blockbuster coming?

Probably. The technology is improving so quickly that it seems inevitable. But I'm less interested in being the first person to prove that it can happen. There are already well-funded companies trying to win that race. What matters to me is doing it well and focusing on storytelling rather than just demonstrating the technology. Ultimately, the audience doesn't care about the tool; they care about whether the story is compelling.

It feels like someone is going to be the George Lucas of this. Wouldn't it be interesting if it were you?

That's what we tell investors, but I don't want to jinx it. That's basically the elevator pitch: "We can tell a sprawling, unfiltered sci-fi epic that spans all these different worlds and ideas and stories, and we can do it fairly reliably with a fairly small team." Plus, there's not much risk in doing this — it's not like we're asking for the world.
