5 comments

  • subscribed 12 hours ago

    Looks cool, but I think your maths isn't mathing :)

    It's the second day of the first week (as per Google Play), and it shows $9.99 already (£8.99 in the Play Store).

    I'm not saying it's expensive, feature-wise it's awesome, I'm saying it's inconsistent :)))

    BTW, is there any chance of a trial key (even one day)? My phone is running GrapheneOS and I'd need to see if everything I want works (or can be made to work).

    Maybe beta programme?

  • newsdeskx 21 hours ago

    does this work with purely local models through Ollama, or do you still need the Ollama server running on another machine? been looking for something that actually works offline for basic voice commands

    • yincrash 17 hours ago

      Still needs a server. You could run the server locally if you had a model your device could handle, then point aide at the localhost URL.
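
      For reference, a minimal sketch of what "point it at the localhost URL" could look like. Ollama's server listens on port 11434 by default and exposes a `/api/generate` endpoint; the model name `gemma3` here is just an example, not something the app requires:

      ```python
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

      def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
          """Build a POST request against Ollama's /api/generate endpoint."""
          payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
          return urllib.request.Request(
              f"{OLLAMA_URL}/api/generate",
              data=payload,
              headers={"Content-Type": "application/json"},
          )

      # Actually sending it requires a running server, e.g.:
      #   ollama serve &
      #   ollama pull gemma3
      # resp = urllib.request.urlopen(build_generate_request("gemma3", "hello"))
      ```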

      • subscribed 12 hours ago

        New phones can run Gemma 4 quants pretty nicely. It's a surprisingly good model. Google's Edge Gallery also offers a few models to try.

        • subscribed 8 hours ago

          Missed the edit window: I agree that ideally I'd have a tiny local MoE-style model that gauges the complexity of each request, handles simple ones with the instantly available local agent, and routes everything else outside (to one of several models).
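
          That routing idea fits in a few lines. The word-count heuristic below is purely a placeholder for the small local model that would actually judge complexity; the "local"/"remote" labels are illustrative, not from any real app:

          ```python
          def route(request: str) -> str:
              """Toy complexity router: short, question-free prompts stay local,
              everything else goes to a remote model.  A real implementation
              would replace this heuristic with a small on-device classifier."""
              simple = len(request.split()) <= 8 and "?" not in request
              return "local" if simple else "remote"
          ```

          A dispatcher built on this would then call the local agent for `"local"` results and fan out to one of several hosted models otherwise.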