Caveman Mode Save Token?

(twitter.com)

15 points | by brightball 20 hours ago

9 comments

  • rdevilla 15 hours ago

    Only two or three weeks from the idea of a token-efficient LLM English dialect being floated to seeing it in practice. I just never imagined it would take... this particular form...

    https://news.ycombinator.com/item?id=47434846

  • schmorptron 19 hours ago

    I used a system prompt similar to this, where I just dumped the entirety of https://grugbrain.dev/ into it and prefaced it with an instruction that the assistant emulate grug.

    Didn't find it particularly useful, but it is funny!

  • brightball 20 hours ago

    Can this actually work?

    • illwrks 18 hours ago

      It does. I've been tinkering with Copilot Studio Agents, where you can hit an 8k character limit quickly. By taking your instructions and asking Copilot to compress the information down, while ensuring it stays human-readable, you can cut them back to about 5k characters. The information is denser but functionally the same, and the agent is just as consistent as before.
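      The workflow described above can be sketched as a small meta-prompt loop. This is a minimal sketch, not Copilot Studio's actual mechanism: `call_llm` is a hypothetical stand-in for whatever model client you use, and the 8k/5k character figures are taken from the comment.

      ```python
      # Sketch: ask a model to compress agent instructions under a character
      # budget, keeping the original text if the rewrite doesn't fit.
      # `call_llm` is a hypothetical callable (prompt -> completion string);
      # the 8,000/5,000 figures match the Copilot Studio anecdote above.

      CHAR_LIMIT = 8000   # hard platform limit (assumed from the comment)
      TARGET = 5000       # the rough size the compressed version reached

      COMPRESS_PROMPT = (
          "Rewrite the following agent instructions to be as short as "
          "possible while keeping them human-readable and preserving "
          "every rule:\n\n{text}"
      )

      def compress_instructions(instructions: str, call_llm, target: int = TARGET) -> str:
          """Shrink instructions via the model; fall back to the original."""
          if len(instructions) <= target:
              return instructions
          compressed = call_llm(COMPRESS_PROMPT.format(text=instructions))
          # Only accept the rewrite if it actually fits the budget.
          return compressed if len(compressed) <= target else instructions
      ```

      The fallback matters: a model can "compress" into something longer or lossy, so checking the length before swapping in the rewrite keeps the agent's behavior from silently degrading.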

    • pixel_popping 19 hours ago

      Anything that reduces input/output works to an extent, logically.

  • JaceDev 20 hours ago

    [flagged]