Show HN: TabGPT - Ask Gemini, ChatGPT, Claude at the Same Time in Chrome

(franz101.substack.com)

31 points | by hoerzu 9 months ago

18 comments

  • mackmcconnell 9 months ago

    I wonder how sustainable the free model is for AI startups. This shows how easily you can switch from one to another. Maybe we're in the golden days, like back when Uber was cheap…

    • verdverm 9 months ago

      For Uber, car prices and salaries go up over time naturally

      For computing, silicon has become cheaper and more efficient over time

      I expect a race to the bottom and then some stabilization, much like we have seen in general cloud computing and with token prices.

      • daghamm 9 months ago

        "For Uber, car prices and salaries go up over time naturally"

        Or, your VC money runs out and you start treating your gig workers like crap to save a few cents here and there.

      • bravetraveler 9 months ago

        The bottom can still be pretty high! Storage has become an order of magnitude cheaper, yet I still don't bother with block storage pricing

        Dedicated or S3 is where it's at, still plenty of room for gamification

      • hoerzu 9 months ago

        The point is that VC money is funding something unsustainable (burning through billions). Token prices will never be zero, IMO.

        • meiraleal 9 months ago

          Yes they will. They already are, if you run local models, which are only getting better. There are 7-11B models that are as good as GPT-3.5.

          • chatmasta 9 months ago

            Token costs are not zero when you’re running local models, because you paid for the hardware, and you can’t scale inference indefinitely without paying for more hardware.
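A back-of-envelope sketch of that point: even "free" local tokens carry an amortized hardware and electricity cost. Every number below (GPU price, throughput, power draw, electricity rate) is an illustrative assumption, not a measurement:

```javascript
// Back-of-envelope amortized cost per million tokens for local inference.
// All inputs are illustrative assumptions, not measurements.
function costPerMillionTokens({ hardwareUsd, lifetimeYears, tokensPerSecond,
                                utilization, powerWatts, usdPerKwh }) {
  const activeSeconds = lifetimeYears * 365 * 24 * 3600 * utilization;
  const totalTokens = activeSeconds * tokensPerSecond;
  const energyKwh = (powerWatts / 1000) * (activeSeconds / 3600);
  const totalUsd = hardwareUsd + energyKwh * usdPerKwh;
  return (totalUsd / totalTokens) * 1e6;
}

// Hypothetical rig: a $2,000 GPU busy half the time for 3 years,
// 30 tokens/s, 300 W under load, $0.15 per kWh.
const usd = costPerMillionTokens({
  hardwareUsd: 2000, lifetimeYears: 3, tokensPerSecond: 30,
  utilization: 0.5, powerWatts: 300, usdPerKwh: 0.15,
});
console.log(usd.toFixed(2)); // small, but not zero, per million tokens
```

Under these assumed numbers the amortized cost lands in the low single-digit dollars per million tokens: cheap, but never zero.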

          • hoerzu 9 months ago

            OK, but running an 11B model only gets things right 60% of the time and consumes the maximum of your machine's electricity. Not sure that makes your product the best. Further, video generation is very compute-intensive. I guess prices will decrease over time, but the technical advance will always be with the smarter model.

            • meiraleal 9 months ago

              > and consumes the maximum of your machine's electricity

              OpenAI isn't an electricity company, so the token price is still zero for what it's worth to VCs.

              > but the technical advance will always be with the smarter model

              Not true. Currently, the small models are advancing much faster, with new releases daily.

  • hoerzu 9 months ago

    Author here, you can read the source code here: https://github.com/franz101/tabgpt

  • radicality 9 months ago

    Related: I’ve been using openrouter.ai a lot recently. You can chat with however many models you want simultaneously, set API parameters, use self-moderated models, etc.
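The "many models at once" pattern is simple to sketch: fan one prompt out concurrently and keep every answer, even if some calls fail. The `send` function below is a placeholder for whatever performs the real request (an OpenAI-compatible HTTP call, a browser tab, etc.); this is not OpenRouter's or TabGPT's actual API:

```javascript
// Sketch: send one prompt to several models concurrently and collect every
// result, including failures. `send(model, prompt)` is a placeholder for the
// real request function and should resolve to the model's reply text.
async function askAll(models, prompt, send) {
  const results = await Promise.allSettled(
    models.map((model) => send(model, prompt))
  );
  return results.map((r, i) => ({
    model: models[i],
    ok: r.status === "fulfilled",
    answer: r.status === "fulfilled" ? r.value : String(r.reason),
  }));
}
```

`Promise.allSettled` (rather than `Promise.all`) matters here: one slow or broken model shouldn't discard the answers from the others.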

    • hoerzu 9 months ago

      Ah, very cool, thanks for sharing. In the next version I'll implement falling back to the next model if you are rate limited :D
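A minimal sketch of that fallback idea, assuming a fetch-like `send` function that returns a Response-style object (`status`, `ok`, `text()`); the helper and its names are hypothetical, not TabGPT's actual code:

```javascript
// Sketch: try each model in order and skip to the next one when the
// provider answers 429 (rate limited). `send(model, prompt)` is a
// placeholder for the real fetch-like request function.
async function askWithFallback(models, prompt, send) {
  for (const model of models) {
    const res = await send(model, prompt);
    if (res.status === 429) continue;            // rate limited: try next model
    if (!res.ok) throw new Error(`${model}: HTTP ${res.status}`);
    return { model, answer: await res.text() };  // first non-limited answer wins
  }
  throw new Error("all models are rate limited");
}
```

A production version would also want retry with backoff on the last model instead of giving up outright, but the ordering logic is the core of the feature.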

    • meiraleal 9 months ago

      This is not related, this is spamming.

      • radicality 9 months ago

        I don’t know what to tell you. I’m in no way affiliated with that site and simply found it useful for tasks similar to what the post is about (chatting with multiple LLMs at once).

        • meiraleal 9 months ago

          From the POV of someone posting a Show HN, that's not nice. They aren't even similar tools: OP's project runs in the browser and doesn't use APIs, which is a much more innovative approach. You commented on none of that and just suggested an alternative nobody was looking for.

  • tacone 9 months ago

    Nice idea! Hopefully someone will make something like that for Firefox as well.

  • 486sx33 9 months ago

    Dogpile for LLMs, love it!

    • hoerzu 9 months ago

      Woah, love that, didn't know about it.