I’ve been all over the place lately, coming up with ideas, trying them out, throwing away more code than I actually kept, etc. And this whole firehose of thoughts has uncovered another issue. We’re in this weird transitional phase where the old deterministic way of doing things and the new “ask an LLM to figure it out” way are both valid approaches to the same problem. And figuring out when to use which one is its own challenge.

With this new programming paradigm come a few interesting new classes of problems. The one I’ve been thinking about most recently is the class of problems that can be solved both deterministically and “non-deterministically” (through the use of LLMs). It’s like having two completely different tools for the same job, and you have to decide which one to reach for.

An example I’ve been working through is migrating a full application from one framework to another. In my case, it’s taking very old legacy applications and fast-forwarding them 40 years into the future. You do a lot of transpiling, obviously, but there are also all these other forms of transformation: taking existing reporting systems, UIs, and business logic and figuring out how to translate them to a new framework. Some of it maps cleanly. Some of it absolutely does not.

The traditional approach would be to write increasingly complex transformation rules. Parse this, map that, handle these edge cases, add more edge cases, realize you have 50 edge cases, cry a little. You end up with this massive deterministic system that technically works but is a nightmare to maintain.
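To make that shape concrete, here’s a deliberately toy sketch of the rule-based approach. The widget names and mappings are invented for illustration, not from any real migration:

```python
# Hypothetical legacy widget types mapped to hypothetical modern components,
# with the edge cases already starting to pile up.
LEGACY_TO_MODERN = {
    "TEXTFIELD": "TextInput",
    "DROPLIST": "Select",
    "GRID": "DataTable",
    # ...edge case #37: some screens use GRID for layout, not data...
}

def transform_widget(widget: dict) -> dict:
    kind = widget["type"]
    if kind == "GRID" and not widget.get("columns"):
        # A layout grid, not a data grid -- special-case it.
        return {"component": "Flexbox", "children": widget.get("children", [])}
    if kind not in LEGACY_TO_MODERN:
        raise ValueError(f"no rule for widget type {kind!r}")
    return {"component": LEGACY_TO_MODERN[kind], "props": widget.get("attrs", {})}
```

Every new screen you look at adds another branch like that `GRID` special case, and the rule table only grows.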

The LLM approach is almost the opposite problem. You can throw anything at it and get something back, but you lose guarantees. It might hallucinate a function that doesn’t exist. It might interpret your intent slightly wrong. It might produce perfectly valid code that does the wrong thing. And when you’re dealing with production systems, “mostly right” isn’t really good enough.

I think ideally you have a generic enough system that can take any input and, with some elbow grease, output what a user would want. But when the input has as much complexity as “a computer program” can have, it’s difficult to do it all deterministically. There are just too many valid interpretations, too many contextual decisions, too many “well, it depends” moments.

So in general, I think you do what you can through a deterministic program, and do the rest through an LLM. Use the parser and AST transformations for the mechanical stuff that has clear rules. Use the LLM for the judgment calls, the “how should this legacy form map to our new component library” questions, the parts where a human would normally have to make a decision.
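In code, that split tends to look something like the sketch below. It’s a rough outline, not a real migration tool: `call_llm` is a placeholder for whatever model client you use, and the renamed identifiers are made up.

```python
import ast

def migrate_source(source: str, call_llm) -> str:
    """Mechanical renames go through the AST; judgment calls go to the LLM.

    `call_llm` is a stand-in for a model client: it takes a prompt string
    and returns a string.
    """
    tree = ast.parse(source)

    class RenameOldApi(ast.NodeTransformer):
        # Deterministic part: purely mechanical renames with clear rules.
        # (These names are invented for the example.)
        RENAMES = {"old_fetch": "query", "old_render": "render_view"}

        def visit_Name(self, node):
            if node.id in self.RENAMES:
                new = ast.Name(id=self.RENAMES[node.id], ctx=node.ctx)
                return ast.copy_location(new, node)
            return node

    mechanical = ast.unparse(RenameOldApi().visit(tree))

    # Non-mechanical part: how a legacy form maps onto the new component
    # library is a judgment call, so that piece goes to the model.
    prompt = (
        "Rewrite the UI-construction code below to use our component library, "
        "keeping behavior identical:\n\n" + mechanical
    )
    return call_llm(prompt)
```

The deterministic pass runs first so the model only ever sees code that’s already been normalized as far as the rules can take it.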

The trick is figuring out where that line is. I’ve found myself writing hybrid systems where deterministic code handles the structure and the LLM fills in the gaps. It’s like having a very smart intern who’s great at creative problem-solving but needs clear boundaries and review. You don’t want them rewriting your entire codebase unsupervised, but you also don’t want to micromanage every single decision.
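The “clear boundaries and review” part can itself be deterministic code. A minimal sketch of the kind of guardrail I mean, with checks that are illustrative rather than a complete review process:

```python
import ast

def accept_llm_rewrite(original: str, candidate: str) -> str:
    """Cheap deterministic checks around an LLM rewrite.

    The model proposes, the deterministic side reviews. Fall back to the
    original if the candidate doesn't parse or drops top-level functions.
    """
    try:
        new_tree = ast.parse(candidate)
    except SyntaxError:
        return original  # boundary #1: it has to be valid code

    old_funcs = {n.name for n in ast.walk(ast.parse(original))
                 if isinstance(n, ast.FunctionDef)}
    new_funcs = {n.name for n in ast.walk(new_tree)
                 if isinstance(n, ast.FunctionDef)}
    if not old_funcs <= new_funcs:
        return original  # boundary #2: don't silently lose functions

    return candidate  # passed the cheap checks; still goes to human review
```

None of this makes the LLM’s output trustworthy on its own, but it keeps the obviously broken rewrites from ever reaching a human reviewer.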

I’m still figuring this out, honestly. But I think that’s where we are right now—in this transitional moment where the answer to “should this be deterministic or LLM-powered?” is increasingly “yes.”