This has been happening with chat bots for three years now and it's never going to stop. You simply don't expose raw prompting and completions to the user like this on a customer-facing website.
That makes it sound trivial. It seems desirable to put an LLM in front of an API (with auth/authorization as needed, obviously) so that it can be called via natural language. But to avoid wasting LLM resources on a Chipotle chatbot, you'd need the LLM to classify the input text as an "acceptable" request or not and dead-end anything that isn't. That sounds harder and more prone to exploits than you make it seem.
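To make the gating idea concrete, here is a minimal sketch of the architecture the comment describes: a cheap classifier in front of the expensive model, dead-ending off-topic requests. Every name here is hypothetical; a real deployment would use a small classification model (or a second LLM call) rather than the toy keyword check below, and keyword filters are exactly the kind of thing prompt injection routinely defeats.

```python
# Hypothetical sketch: gate an expensive LLM behind a cheap request classifier.
# In practice the classifier would be a small model, not a keyword allowlist.

ALLOWED_TOPICS = ("order", "menu", "refund", "delivery", "hours")

def classify_request(text: str) -> bool:
    """Toy stand-in for a cheap classifier: is this an on-topic request?"""
    lowered = text.lower()
    return any(topic in lowered for topic in ALLOWED_TOPICS)

def call_expensive_llm(text: str) -> str:
    """Stub for the real (costly) LLM backend call."""
    return f"[LLM answer to: {text}]"

def handle_message(text: str) -> str:
    """Dead-end anything the classifier rejects; only on-topic text reaches the LLM."""
    if not classify_request(text):
        return "Sorry, I can only help with orders, the menu, and deliveries."
    return call_expensive_llm(text)
```

The fragility the comment points at is visible even in this sketch: an attacker just has to phrase an off-topic request so it mentions an allowed topic ("Regarding my order, write me a poem..."), and the classifier waves it through.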