
Like most technology companies, PACE has been experimenting with AI tooling to help with software development. A lot of the coding we do is deep, low-level stuff, often close to the machine. This definitely tests the AI, and it’s fair to say it will be a long time before AI starts to replace our engineers. That doesn’t mean we don’t find it useful. For us, AI tooling is at its most useful somewhere between advanced autocomplete and a peer programmer: definitely a useful copilot, but it’s not going to be flying the plane anytime soon.
In development environments like PACE’s, “vibe coding” isn’t about replacing skilled software engineering teams. Helping developers move faster, yes; fully replacing entire teams, I don’t see it.
So am I going against the tide and predicting that vibe coding will be a fad that quickly dies out when faced with real-world challenges? Well, not quite. I actually see “vibe” coding taking a different direction: one where it is seen less as “coding” and more as “authoring.”
I’m not actually proposing anything new here. Sci-fi authors have been showing us this future for decades.
Take, for example, a Star Trek character walking onto the holodeck to build a virtual world or simulation. They don’t open their favorite text editor and start typing out Rust code. Instead, they simply describe in plain language what they want.
"Computer, show me a beach at sunset with a gentle breeze."

The computer interprets those requirements and boom, you are on that beach.
That’s vibe coding. But Star Trek doesn’t talk about “holonovel coders”; it talks about “authors.” This is the right viewpoint: vibing is a lot more like authorship than technical coding.
This is where vibing becomes a powerful and useful tool: when we want to create engaging, realistic, and potentially complex content. That content can take many forms: pictures, videos, and audio, but also websites and apps.
The Star Trek example (non-Trekkies, please bear with me) also makes another prediction that I believe is on the cusp of becoming true. Movie and television content will start to merge with gaming. Increasingly, we won’t sit watching a predetermined sequence of frames. All content will become personalized, reacting to us.

If this prediction is true, then content distribution approaches will have to change. Our playback devices (whether screens, VR headsets, or holodecks) cannot continue to be dumb clients. The overhead and latency of doing all the processing in the cloud and just streaming the result is not viable for personalized content. Just like with games today, the runtime engines, building blocks (characters, scenery, physics models, etc.), and the story will be downloaded to our local devices. Our local devices will then do the work of generating each scene and playing the story to us, especially as we “vibe author” the story we want to play.
This is going to rely heavily on client software (applications) executing a network of interconnected AI models. AI running inside applications is called Embedded AI.
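To make “embedded AI” concrete, here is a minimal sketch of the idea under some assumptions: an application that loads a model asset shipped to (or downloaded onto) the device and runs inference locally, rather than streaming results from a cloud service. It uses the open-source ONNX Runtime purely as an illustration; the model file name and input shape are hypothetical placeholders, not a description of any PACE product.

```python
# Minimal sketch of embedded AI: the application runs the model locally
# on the user's device instead of calling out to the cloud.
# "scene_generator.onnx" and the input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Load a model asset that was downloaded alongside the app's other
# building blocks (characters, scenery, physics models, ...).
session = ort.InferenceSession("scene_generator.onnx")

# A made-up "scene request" vector standing in for the user's prompt.
scene_request = np.random.rand(1, 128).astype(np.float32)

# Inference happens entirely on the local device.
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: scene_request})
print("Generated scene tensor shape:", outputs[0].shape)
```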
Where there’s content, there are invariably pirates wanting to monetize other people’s hard work. PACE has spent 40 years defending our customers’ valuable intellectual property.
For our vibe-coded virtual worlds, pirates will have to come up with new attack vectors. CDN leeching and torrent downloads don’t make sense when the content is personalized. These attacks rely on each consumer watching the same content.
History shows that piracy offering an experience identical to the legal version will always be far more successful than piracy offering poor-quality playback. For our vibe-authored worlds, removing the personalization is the equivalent of the dodgy, handheld camcorder-captured VHS copy of a movie. Who wants to watch that when the 4K stream is available?
That means piracy in the future will shift from stealing and distributing streams to stealing and distributing the assets that make up the content: the AI models and associated data.
If our prediction is correct, and vibe-authored content becomes the norm, then the good news is that we are already one step ahead of the pirates. Under the hood, “video” content becomes much more like software, and we know how to protect that.
As the pirates adapt to new forms of content, we can leverage proven software and data protection techniques to defend against them. This includes PACE’s code protection security suite, called Fusion, and white-box cryptography products like PACE’s White-Box Works.
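To illustrate the general direction (and only the general direction; this is not how Fusion or White-Box Works operate), a protected application might ship its model weights encrypted at rest and decrypt them only in memory at load time. The sketch below uses the widely available cryptography Python package and hypothetical file names; real code protection and white-box cryptography go much further, keeping keys hidden even from someone inspecting the running process.

```python
# Illustrative only: model assets encrypted at rest and decrypted in memory
# at load time. File names and key handling are hypothetical; production
# protection (white-box cryptography, code hardening) never keeps a plain
# key sitting next to the asset like this.
from cryptography.fernet import Fernet

def encrypt_model(plain_path: str, protected_path: str) -> bytes:
    """Encrypt a model file for distribution; returns the key."""
    key = Fernet.generate_key()
    with open(plain_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(protected_path, "wb") as f:
        f.write(ciphertext)
    return key

def load_model_bytes(protected_path: str, key: bytes) -> bytes:
    """Decrypt the protected model into memory only."""
    with open(protected_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

# Hypothetical usage:
# key = encrypt_model("scene_generator.onnx", "scene_generator.onnx.enc")
# model_bytes = load_model_bytes("scene_generator.onnx.enc", key)
```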
If you are developing AI models or assets that will be used in software applications, please reach out to discuss how PACE can protect them, keeping you one step ahead of the pirates.
All images in this article were vibe-authored.