I’ve just been ignoring my boss every time he says something about how we should leverage AI. What we’re building doesn’t need it and can’t tolerate hallucinations. They just want to be able to brag up the chain that AI is being used, which is the wrong reason to use it.
If I was forced to use it, I’d probably be writing pretty extensive guardrails (outside of the AI) to make sure it isn’t going off the rails and the results make sense. I’m doing that anyway with all user input, so I guess I’d be treating all LLM generated text as user input and assuming it’s unreliable.
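To make that concrete, here's a minimal sketch of what "treat LLM output like user input" could look like. Everything here is invented for illustration (the field names, the whitelist, the limits); the point is only that the checks live outside the model and reject anything that doesn't conform:

```python
import json

# Hypothetical example: the model is asked to extract a product ID and
# quantity from free text. Its output is treated as untrusted, exactly
# like form input from a user.

ALLOWED_PRODUCT_IDS = {"A-100", "A-200", "B-300"}  # made-up whitelist
MAX_QUANTITY = 50                                   # made-up business limit

def parse_llm_output(raw: str) -> dict:
    """Validate LLM output outside the model; reject anything malformed
    rather than trying to prompt the model into behaving."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("LLM output was not valid JSON")

    product_id = data.get("product_id")
    quantity = data.get("quantity")

    if product_id not in ALLOWED_PRODUCT_IDS:
        raise ValueError(f"Unknown product_id: {product_id!r}")
    if not isinstance(quantity, int) or not (1 <= quantity <= MAX_QUANTITY):
        raise ValueError(f"Quantity out of bounds: {quantity!r}")

    return {"product_id": product_id, "quantity": quantity}
```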
That’s a very sane stance. Treating LLM output as untrusted input is probably the correct default when correctness matters.
The worst failures I’ve seen happen when teams half-trust the model — enough to automate, but still needing heavy guardrails. Putting the checks outside the model keeps the system understandable and deterministic.
Ignoring AI unless it can be safely boxed isn’t anti-AI — it’s good engineering.
We ran into this while building GTWY.ai. What reduced hallucinations for us wasn’t more prompting or verification layers, but narrowing what the agent was allowed to do at each step. When inputs, tools, and outputs were explicit, the model stopped confidently inventing things. Fewer degrees of freedom beat smarter models.
This matches our experience too. The biggest reduction in hallucinations usually comes from shrinking the action space, not improving the prompt. When inputs, tools, and outputs are explicitly constrained, the model stops “being creative” in places where creativity is actually risk.
It’s less about smarter models and more about making the system boring and deterministic at each step.
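For anyone who wants to picture "shrinking the action space": a rough sketch of a constrained tool dispatcher is below. This isn't anyone's actual implementation, and the tool names and schemas are invented; it just shows the model only ever selecting from an explicit registry, with arguments checked before anything runs:

```python
# Sketch of a constrained dispatcher: the model can only request tools from
# an explicit registry, and arguments are checked against a declared schema.
# Tool names and schemas here are invented for illustration.

TOOL_REGISTRY = {
    "lookup_invoice": {"required_args": {"invoice_id": str}},
    "send_reminder":  {"required_args": {"invoice_id": str, "days_overdue": int}},
}

def dispatch(tool_call: dict):
    name = tool_call.get("tool")
    args = tool_call.get("args", {})

    spec = TOOL_REGISTRY.get(name)
    if spec is None:
        # The model asked for something outside its allowed action space.
        raise PermissionError(f"Tool not permitted: {name!r}")

    for arg_name, arg_type in spec["required_args"].items():
        if not isinstance(args.get(arg_name), arg_type):
            raise ValueError(f"Bad or missing argument: {arg_name!r}")

    # Only now hand off to real, deterministic code (stubbed here).
    return f"executed {name} with {args}"
```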
I've found that I can use a very similar approach to the one I've used when handling the risks associated with blockchain, cryptocurrencies, "web scale" infrastructure, and of course the chupacabra.
Fair enough. A healthy dose of skepticism has served us well for every overhyped wave so far. The difference this time seems to be that AI systems don’t just fail noisily — they fail convincingly, which changes how risk leaks into production.
Treating them with the same paranoia we applied to web scale infra and crypto is probably the right instinct. The chupacabra deserved it too.
> they fail convincingly
To someone that has paid zero attention and/or deliberately ignores any coverage about the numerous (and often hilarious) ways that spicy autocomplete just completely shits the bed? Sure, maybe.
That’s fair — if you’re already skeptical and paying attention, the failures are obvious and often funny. The risk tends to show up more with non-experts or downstream systems that assume the output is trustworthy because it looks structured and confident.
Autocomplete failing loudly is annoying; autocomplete failing quietly inside automation is where things get interesting.
> The risk tends to show up more with non-experts
This hits a key point that isn't emphasised enough. A few interactions with technology and people have shaped my view:
I fiddled with Apple's Image Playground thing sometime last year, and it was quite rewarding to see a result from a simple description. It wasn't exactly what I'd asked for, but it was close, kind of. As someone who has basically zero artistic ability it was fun to be able to create something like that. I didn't think much about it at the time, but recently I thought about it again, and I keep that in mind when I see people waxing poetic about using spicy autocomplete to "write code" or "analyse business requests" or whatever the fuck it is they're using it for. Of course it seems fucking magical and foolproof if you don't know how to do the thing you're asking it for yourself.
I had to fly back to my childhood home at very short notice in August to see my (as it turned out, dying) father in hospital. I spoke to more doctors, nurses and specialists in two weeks than I think I have ever spoken to about my own health in 40+ years. I was relaying the information from doctors to my brother via text message. His initial response to said information was to send me back a Chat fucking GPT summary/analysis of what I'd passed along to him... because apparently my own eyeballs seeing the physical condition of our father, and a doctor explaining the cause, prognosis and chances of recovery, were not reliable enough. Better ask Dr Spicy Autocomplete for a fucking second opinion I guess.
So now my default view about people who choose to use spicy autocomplete for anything besides shits and giggles like "Write a Star Trek fan fiction where Captain Jack Sparrow is in charge of DS9", or "Draw <my wife> as a cute little bunny" is essentially "yeah of course it looks like infallible magic, if you don't know how to do it yourself".
The real risk with LLMs isn’t when they fail loudly — it’s when they fail quietly and confidently, especially for non-experts or downstream systems that assume structured output equals correctness.
When you don’t already understand the domain, AI feels infallible. That’s exactly when unvalidated outputs become dangerous inside automation, decision pipelines, and production workflows.
This is why governance can’t be an afterthought. AI systems need deterministic validation against intent and execution boundaries before outputs are trusted or acted on — not just better prompts or post-hoc monitoring.
That gap between “sounds right” and “is allowed to run” is where tools like Verdic Guard are meant to sit.
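For what it's worth, the general shape of such a gate can be sketched without any particular product. The following is not Verdic Guard's API and the policy fields are hypothetical; it only illustrates a deterministic check sitting between "the model proposed an action" and "the system executes it":

```python
from dataclasses import dataclass

# Hypothetical policy gate: a proposed action is checked against the stated
# intent and hard execution boundaries before anything runs. An illustration
# of the concept only, not any particular product's API.

@dataclass
class Boundary:
    allowed_actions: set[str]
    max_amount: float

@dataclass
class ProposedAction:
    intent: str       # what the pipeline was asked to do
    action: str       # what the model wants to execute
    amount: float

def is_allowed(proposal: ProposedAction, boundary: Boundary) -> bool:
    """Deterministic check: 'sounds right' is not enough to run."""
    if proposal.action not in boundary.allowed_actions:
        return False
    if proposal.amount > boundary.max_amount:
        return False
    # Crude intent check: the action must stay within the scope it was asked for.
    return proposal.action.startswith(proposal.intent)

boundary = Boundary(allowed_actions={"refund_issue", "refund_partial"}, max_amount=100.0)
proposal = ProposedAction(intent="refund", action="refund_issue", amount=250.0)
print(is_allowed(proposal, boundary))  # False: amount exceeds the boundary
```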