Julian Morel — Automated Media Systems
I build automated systems that produce creative work at scale — music, video, narrative, documents. The hard part is never the prompt. It's everything that happens after it.
One prompt. One finished music video. Lyrics written, music composed, vocals recorded, visuals generated and timed, karaoke subtitles burned in — everything rendered and ready to publish.
That can be fully automated — fire a prompt, walk away, get a video. Or it can be a creative tool: choose your visual style, pick your video supplier, set resolution and cost parameters, upload a character or generate one, toggle continuous chaining on or off. The pipeline handles the complexity either way. The user decides how much control they want.
Type a prompt. Get a fighter. Complete backstory, fighting style, lore, character art, theme song — generated from a single input. Enter them into a league where outcomes aren't random: a narrative engine reads each fighter's history and form, then weights the odds. The whole league is rendered as an automated broadcast.
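In miniature, a form-weighted odds calculation can look like the sketch below. This is a hypothetical illustration of the principle only; the actual narrative engine also reads backstory and lore, and its weighting is more involved.

```python
import random

# Hypothetical sketch: weight a fighter's win odds by recent form.
def win_probability(recent_results_a, recent_results_b):
    """Each list holds 1 for a win, 0 for a loss, most recent first."""
    def form(results):
        # Recent fights count more: geometric decay per step back in time.
        # The +0.5 smoothing keeps a winless fighter from having zero chance.
        return sum(r * 0.8 ** i for i, r in enumerate(results)) + 0.5
    a, b = form(recent_results_a), form(recent_results_b)
    return a / (a + b)

# Fighter A is on a streak; Fighter B's only win is their oldest fight.
p = win_probability([1, 1, 0], [0, 0, 1])
outcome = "A" if random.random() < p else "B"  # weighted, not a coin flip
```

The draw is still stochastic, so upsets happen, but the odds are grounded in each fighter's history rather than pure chance.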
Fighter creation
Daily broadcast
Daily Top 10 — Automated Broadcast
A professional translation, reconstruction, and certification platform for certified translators: Word documents and PDFs in, faithfully reconstructed translated documents out. It eliminates the need for expensive DTP rework, because structure, layout, formatting, and meaning all survive the same pass. Built for professional translation workflows where document fidelity isn't optional.
Where it started. When generative video first appeared, the tools were barely usable — two seconds of footage before the model dissolved into its training data, if you were lucky. Heroes of the Wood was the lab where those early experiments happened. Now it's where ThemeTuned and FightAnything outputs go public — a running record of how far the tools have come, and what these pipelines actually produce in the wild.
How I Work
I invest as much time as possible understanding what I'm building before I build it. It's always iterative in practice — sometimes I only learn what's wrong by fighting the system. But the more I think upfront, the less I unpick later. Two years of full-time AI-native development has made the pattern recognition faster. The instinct for where things are about to break is the skill.
That means mapping data flows, transformation points, and failure modes before writing code, then getting into the system fast enough that it tells me what I missed. Where possible, logic lives server-side: rate limiting, cost controls, business rules.
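A server-side cost control can be as simple as a token bucket denominated in dollars rather than requests. This is a minimal sketch under assumed numbers and names, not a production implementation: the key property is that a request is rejected before the model call, not after the bill arrives.

```python
import time

# Minimal sketch of a server-side cost control: a token bucket that caps
# generation spend per window. Budget figures and names are illustrative.
class CostBucket:
    def __init__(self, budget_usd: float, refill_per_sec: float):
        self.capacity = budget_usd
        self.tokens = budget_usd
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def try_spend(self, cost_usd: float) -> bool:
        # Refill in proportion to elapsed time, capped at the full budget.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if cost_usd <= self.tokens:
            self.tokens -= cost_usd
            return True
        return False  # reject before the model call, not after the bill

bucket = CostBucket(budget_usd=10.0, refill_per_sec=0.01)
allowed = bucket.try_spend(2.50)   # within budget
blocked = bucket.try_spend(50.0)   # would exceed the remaining budget
```

Because the check runs server-side, no client misbehaviour, retried job, or runaway chain can spend past the ceiling.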
I avoid third-party dependencies where I can build the equivalent myself.
I've learned to be deliberate about how I use AI tooling. Letting an AI copilot touch multiple files simultaneously makes debugging close to impossible. File-by-file changes, keeping markdown notes of what changed and why — it feels slower but it's the only way to stay in control of a codebase you actually understand.
Background
English & Film Studies at university — then a career spent at the coalface of deliberately disruptive industries. I headed up the B2B division of a UK-licensed secondary lottery company, building commercial products in a regulated category that barely existed yet, before moving to a City-based professional gambling operation primarily staffed by quants. Independent, lateral thinking — finding the edge, the way through — was the thing those environments rewarded.
When generative AI arrived, I taught myself to code, with AI as the development environment from day one. I started where most people start: Webflow, Make, Zapier. As AI made working directly in code not just possible but preferable, I ditched them. I started experimenting immediately, back when a five-second clip might yield two usable seconds. Some of those early experiments are still up. ↗
Now I build full-time, with generative AI woven through the stack at every level — not as the product, but as the material. Still learning every day. That's the point.
Some instincts go back further still. In the early 2000s I founded the UK's first automated TV bingo platform — video and audio, generated and simulcast to live television in real time. A production pipeline before production pipelines were a thing.
Get in touch
If you're working on automated media, generative AI systems, or anything that involves getting models to work together reliably — I'm interested.