Kagi Assistants

(blog.kagi.com)

145 points | by ingve 10 hours ago ago

84 comments

  • jryio 10 hours ago ago

    I think there's a very important nugget here unrelated to agents: Kagi as a search engine is a higher-signal source of information than Google's PageRank- and AdSense-funded model. Primarily because Google as it is today includes a massive amount of noise and has suffered from blowback/cross-contamination as more LLM-generated content pollutes the pool of trustworthy information.

    > We found many, many examples of benchmark tasks where the same model using Kagi Search as a backend outperformed other search engines, simply because Kagi Search either returned the relevant Wikipedia page higher, or because the other results were not polluting the model’s context window with more irrelevant data.

    > This benchmark unwittingly showed us that Kagi Search is a better backend for LLM-based search than Google/Bing because we filter out the noise that confuses other models.

    • clearleaf 9 hours ago ago

      Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

      Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)

      Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.

      • viraptor 9 hours ago ago

        > Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

        It's likely they can filter the results for their own agents, but will leave other results as they are. Half the issue with normal results is their ads - that's not going away.

      • pixelready 6 hours ago ago

        “Show me the incentive and I’ll show you the outcome” - Charlie Munger

        Kagi works better and will continue to do so as long as Kagi’s interests are aligned with users’ needs and Google’s aren’t.

      • sroussey 9 hours ago ago

        There are several startups providing web search solely for AI agents. Not sure any agent uses Google for this.

        • clearleaf 7 hours ago ago

          Maybe we should learn to pass reverse Turing tests and pretend to be LLMs so we can use this stuff lol.

      • MangoToupe 9 hours ago ago

        > Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

        They spent the last decade and a half encouraging the proliferation of garbage via "SEO". I don't see this reversing.

      • idiotsecant 8 hours ago ago

        >Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

        Unlikely. There are very few people willing to pay for Kagi. The HN audience is not at all representative of the overall population.

        Google can have really miserable search results and people will still use it. It's not enough to be as good as Google; you have to be 30% better than Google and still free in order to convert users.

        I use Kagi and it's one of the few services I am OK with a recurring charge from, because I trust the brand for whatever reason. Until they find a way to make it free, though, it can't replace Google.

    • bitpush 10 hours ago ago

      > Primarily because Google as it is today includes a massive amount of noise and has suffered from blowback/cross-contamination as more LLM-generated content pollutes the pool of trustworthy information.

      I'm not convinced about this. If the strategy is "let's return wikipedia.org as the most relevant result", that's not sophisticated at all. In fact, it only works for a very narrow subset of queries. If I search for 'top luggage for solo travel', I don't want to see Wikipedia and I don't know how Kagi will be any better.

      • VHRanger 10 hours ago ago

        (Kagi staff here)

        Generally we do particularly well on product research queries [1] compared to other categories, because most poor review sites are full of trackers and other stuff we downrank.

        However there aren't public benchmarks for us to brag about on product search, and frankly the SimpleQA digression in this post made it long enough that it was almost cut.

        1. (Except hyper local search like local restaurants)

        • oidar 8 hours ago ago

          do you use pinned/deranked sites as an indicator for quality?

          • VHRanger 8 hours ago ago

            I don't think we share them across accounts, no, but we do use your personal kagi search config in assistant searches.

      • viraptor 9 hours ago ago

        They wrote "returned the relevant Wikipedia page higher", not "wikipedia.org as the most relevant result" - that's an important distinction. There are many irrelevant Wikipedia pages.

  • natemcintosh 10 hours ago ago

    As a Kagi subscriber, I find this to be mostly useful. I'd say I do about 50% standard Kagi searches, 50% Kagi assistant searches/conversations. This new ability to change the level of "research" performed can be genuinely useful in certain contexts. That said, I probably expect to use this new "research assistant" once or twice a month.

    • milch 25 minutes ago ago

      I've already used the Research assistant half a dozen times today and am super happy with the outcomes. It does seem to be more trigger-happy about doing multiple searches based on information it found in earlier results, and I've found the resulting output to be reasonably accurate. Some models in particular seem to never want to do more than one search, and you can tell the output in those cases is often not very useful if the sources partially contradict each other or don't provide enough detail. The best I've found to avoid this is o3 pro, but o3 pro is very slow and expensive. If the Research assistant gets 85% of the results in half the time of o3 pro...

    • VHRanger 10 hours ago ago

      I'd say the most useful part for me is appending ? / !quick / !research to a query directly from the browser search bar
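
      For example, the same made-up query at three levels of effort (illustrative only - as I understand it, these map to quick answer, quick assistant, and research assistant respectively):

      ```
      best way to dedupe a 2tb photo library?
      !quick best way to dedupe a 2tb photo library
      !research best way to dedupe a 2tb photo library
      ```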

      • jen729w 6 hours ago ago

        I love your `?` mechanic. Brilliantly simple. Thank you from a very happy customer.

        • vunuxodo 4 hours ago ago

          Seconded, I love that the AI feature stays away from me until and unless I specifically ask for it.

    • adriantoine 6 hours ago ago

      Same, I'm quite happy with it. I first subscribed because I was fed up with the promoted results in Google but now I find their assistant searches actually useful too.

  • smallerfish 8 hours ago ago

    I tried a prompt that consistently gets Gemini to badly hallucinate, and Kagi responded correctly.

    Prompt: "At a recent SINAC conference (approx Sept 2025) the presenters spoke about SINAC being underresourced and in crisis, and suggested better leveraging of and coordination with NGOs. Find the minutes of the conference, and who was advocating for better NGO interaction."

    The conference was actually in Oct 2024. The approx date in parens causes Gemini to create an entirely false narrative, which includes real people quoted out of context. This happens in both Gemini regular chat and Gemini Deep Research (in which the narrative gets badly out of control).

    Kagi reasonably enough answers: "I cannot find the minutes of a SINAC conference from approximately September 2025, nor any specific information about presenters advocating for better NGO coordination at such an event."

    • adriantoine 6 hours ago ago

      I also tried two prompts that trigger hallucinations I've seen in the latest Gemini and Claude. The first one is the famous "how many r in strawberry" and the other is asking about king sacrifice in chess, where most LLMs tell you it's a very powerful move and tell you how to do it (obviously nonsense, since the point of the game is to protect the king).

      Kagi actually got those two correct, explaining how losing your king means losing your chess game, which is quite impressive.

      • sometimes_all 3 hours ago ago

        I tried both of these prompts (along with the SINAC one as per GP) in Sonnet 4.5 and Gemini 3, and they both answered correctly for all three. Both also provided context on the chess question.

    • sometimes_all 3 hours ago ago

      You might be interested to know your comment caused Gemini 3 to go meta on this question. This is what I found in its thinking:

      ```
      Evaluating Date Discrepancies

      I've just uncovered a significant discrepancy: a "Hacker News" result flags a date error originating from the very format of parenthetical dates. The system appears to be misinterpreting these, leading to demonstrably false information. This warrants further investigation to understand the root cause and ensure data integrity.

      Analyzing Date Ambiguity

      I'm now grappling with the user's date. The Hacker News comment suggests the user's prompt might be flawed, as it contradicts the actual date of an event I've indexed. Specifically, the conference referenced occurred in October 2024, not September 2025 as the prompt implies. This leads me to question the user's understanding of the timeline or if the prompt refers to an unrelated event in September 2025. This inconsistency demands further clarification.

      ...

      Investigating User Prompt Intent

      I've just realized that the Hacker News snippet reveals this might be a specific problematic test prompt. My strategy is to focus on the October 2024 forum, a strong match, organized by Kattia Cambronero and CCT. I'll verify if there was a 2025 event, but the 2024 event fits the described "crisis" and "NGO coordination". If the user says "approx Sept 2025," they likely mean a recent event.
      ```

    • VHRanger 8 hours ago ago

      Ah yes we have some benchmarks on this sort of misguided prompt trap, so it should perform well on this

  • ranyume 9 hours ago ago

    I used quick research and it was pretty cool. A couple of caveats to keep in mind:

    1. It answers using only the crawled sites. You can't make it crawl a new page.
    2. It doesn't use a page's search function automatically.

    This is expected, but it doesn't hurt to keep in mind. I think it'd be pretty useful: you ask for recent papers on a site, the engine could use Hacker News' search function, and then Kagi would crawl the page.

    • Rehanzo 6 hours ago ago

      What exactly do you mean by "You can't make it crawl a new page"? It has the ability to read webpages, if that is what you're referring to

    • everlier 6 hours ago ago

      Their agents can natively read files and webpages from URLs. It's so convenient that I've implemented an identical feature for our product at work.

  • spott 3 hours ago ago

    I want a kagi mcp server I can use with ChatGPT or Claude.

    I don’t want to use kagi ultimate (I use too many other features of ChatGPT and Claude), I just want to be able to improve the results of my AI models with kagi.
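
    Something like this seems buildable already by pointing the MCP Python SDK at Kagi's Search API - a rough sketch (the endpoint and response fields are my assumptions based on the documented Search API, not an official integration):

    ```python
    # Rough sketch, not an official Kagi integration: expose Kagi Search as an MCP tool
    # so MCP-capable clients (Claude, etc.) can call it. Assumes the documented Kagi
    # Search API ("Authorization: Bot <key>", GET /api/v0/search) and the MCP Python
    # SDK's FastMCP helper; field names like "t"/"snippet" are my best understanding.
    import os

    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("kagi-search")

    @mcp.tool()
    def kagi_search(query: str, limit: int = 5) -> str:
        """Run a Kagi web search and return titles, URLs, and snippets as plain text."""
        resp = httpx.get(
            "https://kagi.com/api/v0/search",
            params={"q": query, "limit": limit},
            headers={"Authorization": f"Bot {os.environ['KAGI_API_KEY']}"},
            timeout=30,
        )
        resp.raise_for_status()
        results = []
        for item in resp.json().get("data", []):
            if item.get("t") == 0:  # t == 0 appears to mark an ordinary search result
                results.append(f"{item.get('title')}\n{item['url']}\n{item.get('snippet') or ''}")
        return "\n\n".join(results) or "No results."

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; point the MCP client at this script
    ```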

  • teecha 6 hours ago ago

    Could Kagi Ultimate be a good replacement for ChatGPT? A subscription there would save me some money if it was similar.

    Just recently started paying for Kagi search and quite love it.

    • everlier 6 hours ago ago

      Mostly depends on which features are most important for you. I'm a SWE, so I use their Assistant for Web RAG nearly exclusively for work-related stuff and for most of my personal queries. I'm rarely using multi-modal content, mostly sticking to text. They support many providers, and notable new models are typically rolled out only a few days after release, which is always great for testing them out. I have a standalone subscription for a coding-agent LLM. If the above aligns with your needs, it might be a good choice.

    • coffeefirst 6 hours ago ago

      That’s what I did and I’m pretty happy with it. I just fall back to something free on the rare occasion I want an image generated (tbh, mostly emojis of my dog).

    • Atotalnoob 6 hours ago ago

      Mostly, yes.

      You have a spend limit, but the assistant has dozens of models.

  • hatthew 8 hours ago ago

    I'm a little confused about what the point of these is compared to the existing features/models that Kagi already has. Are they just supposed to be a one-stop shop where I don't have to choose which model to use? When should I use the Kagi quick/research assistant instead of, e.g., Kimi?

    I tried the quick assistant a bit (don't have ultimate so I can't try research), and while the writing style seems slightly different, I don't see much difference in information compared to using existing models through the general kagi assistant interface.

    • VHRanger 8 hours ago ago

      Quick assistant is a managed experience, so we can add features to it in a controlled way that we can't for all the models we otherwise support at once.

      For now Quick assistant has a "fast path" answer for simple queries. We can't support the upgrades we want to add in there on all the models because they differ in tool calling, citation reliability, context window, ability to not hallucinate, etc.

      The responding model is currently qwen3-235B from Cerebras, but we want to decouple user expectations from that so we can upgrade it down the road to something else. We like Kimi, but couldn't get a stable experience for Quick on it at launch with current providers (tool-calling unreliability).
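
      To illustrate the shape of the "fast path" vs. full agent split, roughly (a generic sketch, not our actual implementation - the helper names are made up):

      ```python
      # Generic sketch of a "fast path" router. classify_query, search, fast_model,
      # and agent_loop are hypothetical stand-ins, not Kagi code or APIs.
      from typing import Callable

      def classify_query(query: str) -> str:
          """Cheap heuristic/classifier deciding whether the full agent loop is needed."""
          agentic_hints = ("compare", "vs", "latest", "research", "near me")
          return "agentic" if any(h in query.lower() for h in agentic_hints) else "simple"

      def answer(
          query: str,
          search: Callable[[str], str],
          fast_model: Callable[[str], str],
          agent_loop: Callable[[str], str],
      ) -> str:
          if classify_query(query) == "simple":
              # Fast path: one search, one call to a small/cheap model grounded on the snippets.
              snippets = search(query)
              return fast_model(f"Answer using only these sources:\n{snippets}\n\nQuestion: {query}")
          # Otherwise hand off to the slower tool-calling loop (more searches, page fetches, etc.).
          return agent_loop(query)
      ```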

      • hatthew 2 hours ago ago

        That makes sense, thanks!

  • itomato 10 hours ago ago

    I'm seeing a lot of investment in these things that have a short shelf life.

    Agents/assistants but nothing more.

    • VHRanger 9 hours ago ago

      We're building tools that we find useful, and we hope others find them useful too. See these notes on our view of LLMs and their flaws:

      https://blog.kagi.com/llms

    • ugurs 9 hours ago ago

      Why do you think the shelf life is short?

  • ceroxylon 9 hours ago ago

    Kagi reminds me of the original search engines of yore, when I could type what I want and it would appear, and I could go on with my work/life.

    As for the people who claim this will create/introduce slop, Kagi is one of the few platforms actively fighting against low-quality AI-generated content, with their community-fueled "SlopStop" campaign.[0]

    Not sponsored, just a fan. Looking forward to trying this out.

    [0] https://help.kagi.com/kagi/features/slopstop.html

  • paradox460 7 hours ago ago

    One thing I've wished for is the ability to use my kagi AI features in my editor (currently Zed)

    • Rehanzo 6 hours ago ago

      We've got an assistant API coming soon™

  • nsonha 4 hours ago ago

    I use Perplexity a lot and pretty much exclusively with "deep research" on. Is this on the same level? Because Perplexity often takes more than a minute, and this is only 20 secs.

    • milch 20 minutes ago ago

      A query I ran earlier ran for about 70s, and did 12 web searches and 2 site fetches.

  • bananapub 9 hours ago ago

    regular reminder: kagi is - above all else - a really really good search engine, and if google/etc, or even just the increasingly horrific ads-ocracy, makes you sad, you should definitely give it a go - the trial is here: https://kagi.com/pricing

    if you like it, it's only $10/month, which I regrettably spend on coffee some days.

    • skydhash 9 hours ago ago

      I know that the price hasn't changed for a while, but I would pay for unlimited search and no AI.

      • Dusseldorf 5 hours ago ago

        Well, good news. Unlimited search was $10 before. Then they added the AI access for free but - and this part is key - did not make it a requirement that you use it.

    • iLoveOncall 8 hours ago ago

      > above all else

      What they've been building for the past couple of years makes it blindingly clear that they are definitely not a search engine *above all else*.

      Don't believe me? Check their CEO's goal: https://news.ycombinator.com/item?id=45998846

      • debo_ 7 hours ago ago

        I think you have a problem.

  • HotGarbage 10 hours ago ago

    I really wish Kagi would focus on search and not waste time and money on slop.

    • drewda 10 hours ago ago

      What they're saying in this post is that they are designing these LLM-based features to support search.

      The post describes how their use case is finding high-quality sources relevant to a query and providing summaries with references/links to the user (not generating long-form "research reports").

      FWIW, this aligns with what I've found ChatGPT useful for: a better Google, rather than a robotic writer.

      • theoldgreybeard 10 hours ago ago

        I'm sure Google also says they built "AI mode" to "support search".

        Their search is still trash.

        • esafak 9 hours ago ago

          Except the AI mode filters out the bad results for you :)

          • saghm 8 hours ago ago

            I have a no-AI mode that filters out the bad results too. The problem is that it doesn't return any results at all, since it doesn't help with the harder problem of filtering out only the bad results while keeping the good ones. So far it's not clear to me that LLMs have significantly moved the needle on the ability to differentiate the two.

          • theoldgreybeard 6 hours ago ago

            In my experience the same slop garbage I get in search is the same slop garbage, only “summarized”, in AI mode.

    • barrell 10 hours ago ago

      If you look at my post history, I’m the last person to defend LLMs. That being said, I think LLMs are the next evolution in search. Not what OpenAI and Anthropic and xAI are working on - I think all the major models are moving further and further away from that with the “AI” stuff. But the core technology is an amazing way to search.

      So I actually find it the perfect thing for Kagi to work with. If they can leverage LLMs to improve search, without getting distracted by the "AI" stuff, there's tons of potential value.

      Not saying that’s what this is… but if there’s any company I’d want playing with LLMs it’s probably Kagi

      • skydhash 8 hours ago ago

        A better search would be rich metadata and powerful filter tools, not a result summarizer. When I search, I want to find stuff; I don't want an interpretation of what was found.

    • 0x1ch 10 hours ago ago

      This is building on top of the existing core product, so the output is directly tied to the quality of their core search results being fed into the assistants. Overall I really enjoy all of their AI products, using their prompt assistant frequently for quick research tasks.

      It does miss occasionally, or I feel like "that was a waste of tokens" due to a bad response or something, but overall I like supporting Kagi's current mission in the market of AI tools.

    • bigstrat2003 9 hours ago ago

      Same, though in fairness as long as they don't force it on me (the way Google does) and as long as the real search results don't suffer because of a lack of love (which so far they haven't), then it's no skin off my back. I think LLMs are an abysmal tool for finding information, but as long as the actual search feature is working well then I don't care if an LLM option exists.

    • VHRanger 10 hours ago ago

      It's not -- this was posted literally yesterday as a position statement on the matter (see early paragraphs in OP):

      https://blog.kagi.com/llms

      Kagi is treating LLMs as potentially useful tools to be used with their deficiencies in mind, and with respect for user choices.

      Also, we're explicitly fighting against slop:

      https://blog.kagi.com/slopstop

      • saghm 8 hours ago ago

        Is there anyone selling LLM tools who would claim they aren't keeping their deficiencies in mind, or admit that they're ignoring user choices? I'm not saying you are or aren't wasting money on slop, because I have no way of knowing, but it's hard to imagine someone who is concerned about a company acting in bad faith finding this compelling.

  • iLoveOncall 9 hours ago ago

    The fact that people applaud Kagi taking the money they gave for search to invest it in bullshit AI products and spit on Google's AI search at the same time tells you everything you need to know about HackerNews.

    • data-ottawa 8 hours ago ago

      Search is AI now, so I don’t get what your argument is.

      Since 2019, Google and Bing have both used BERT-style encoder-only models in their search architecture.

      I’ve been using Kagi ki (now research assistant) for months and it is a fantastic product that genuinely improves the search experience.

      So overall I’m quite happy they made these investments. When you look at Google and Perplexity this is largely the direction the industry is going.

      They're building tools on top of other LLMs, basically running OpenRouter or something behind the scenes. They even show you your token use/cost against your allowance/budget on the billing page, so you know what you're paying for. They're not training their own from-scratch LLMs, which I would consider a waste of money at their size/scale.

      • VHRanger 8 hours ago ago

        We're not running on OpenRouter; that would break the privacy policy.

        We get specific deals with providers and use different ones for production models.

        We do train smaller scale stuff like query classification models (not trained on user queries, since I don't even have access to them!) but that's expected and trivially cheap.

    • w10-1 8 hours ago ago

      Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.

      • stefan_ 8 hours ago ago

        High-margin AI products? Yes the world is just filled with those!

        • VHRanger 8 hours ago ago

          Our stuff is profitable.

          Actually, if you use LLMs sized responsibly for the task, it's cheaper than a lot of APIs for the final product.

          The expensive LLMs are expensive, but the cheap ones are cheaper than other infrastructure in something like quick answer or quick assistant

    • VHRanger 9 hours ago ago

      We're explicitly conscious of the bullshit problem in AI and we try to focus on only building tools we find useful. See position statement on the matter yesterday:

      https://blog.kagi.com/llms

      • grayhatter 8 hours ago ago

        > LLMs are bullshitters. But that doesn't mean they're not useful

        > Note: This is a personal essay by Matt Ranger, Kagi’s head of ML

        I appreciate the disclaimer, but never underestimate someone's inability to understand something, when their job depends on them not understanding it.

        Bullshit isn't useful to me, I don't appreciate being lied to. You might find use in declaring the two different, but sufficiently advanced ignorance (or incompetence) is indistinguishable from actual malice, and thus they should be treated the same.

        Your essay, while well written, doesn't do much to convince me any modern LLM has a net positive effect. If I have to duplicate all of its research to verify none of it is bullshit, which will only be harder after using it given the anchoring and confirmation bias it will introduce... why?

      • iLoveOncall 9 hours ago ago

        Your words don't match your actions.

        And to be clear, you shouldn't build the tools that YOU find useful, you should build the tools that your users, who pay for a specific product, find useful.

        You could have LLMs that are actually 100% accurate in their answers and it would not matter at all to what I am raising here. People are NOT paying Kagi for bullshit AI tools, they're paying for search. If you think otherwise, prove it: make subscriptions entirely separate for both products.

        • freediver 8 hours ago ago

          Kagi founder here. We are moving to a future where these subscriptions will be separate. Even today more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate - mostly in the sense that we are not in the business of creating bullshit tools. Life is too short for that. I also happen to like the Star Trek version of the future, where smart computers we can talk to exist. I also like that Star Trek is still 90% human drama and 10% technology quietly working in the background in service of humans - and this is the kind of future I would like to build towards and leave for my children. Having the most accurate search in the world that has users' best interest in mind is a big part of it, and that is not going anywhere.

          edit: seeing the first two (negative) replies to my comment made me smile. HN is a tough crowd to please :) The thing is, similar to how I did paid search and went all in with my own money when everyone thought I was crazy - out of my own need, and my family's need, to have search done right - I am doing the same now with AI, wanting to have it done right as a product. What you see here is the best effort of this group of humans that call themselves Kagi - not more, not less.

          • Dusseldorf 5 hours ago ago

            Just wanted to chip in with a positive comment among the hail of negativity here. Thank you for what you and your team are doing. I've been getting tons of great use daily out of the search and news features, as well as occasionally using the assistant. It can definitely be hard to find decent paid alternatives to the freeware crap model so prevalent on the web, so seeing your philosophy here is a huge breath of fresh air.

          • zythyx 7 hours ago ago

            I found Kagi quite recently, and after blowing through my trial credits, and now almost blowing through my low-tier (300-search) credits, I'm starting to look at the next tier up. However, it's approaching my threshold of value vs. price.

            I have my own payment methods for AI (OpenWebUI hosted on a personal home server connected to OpenRouter API credits, which costs me about $1-10 per month depending on my usage), so seeing AI bundled with searches in the pricing for Kagi really just sucks the value out of the main reason I want to switch to Kagi.

            I would love to be able to just buy credits freely (say 300 credits for $2-3) and just use them whenever. No AI stuff, no subscription, just pay for my searches. If I have a lull in my searches for a month, then a) no extra resources from Kagi have been spent, and b) my credits aren't used and roll over. Similarly, if I have a heavy search month, then I'll buy more and more credits.

            I just don't want to buy extra AI on top of what I already have.

          • saghm 8 hours ago ago

            > We are moving to a future where these subscriptions will be separate. Even today more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate - mostly in the sense that we are not in the business of creating bullshit tools.

            For what it's worth, as someone who tends to be pretty skeptical of introducing AI tools into my life, this statistic doesn't really convince me much of the utility of them. I'm not sure how to differentiate this from selection bias where users who don't want to use AI tools just don't subscribe in the first place rather than this being a signal that the AI tools are worthwhile for people outside of a niche group who are already interested enough to pay for them.

            This isn't as strong a claim as what the parent comment was saying; it's not saying that the users you have don't want to be paying for AI tools, but it doesn't mean that there aren't people who are actively avoiding paying for them either. I don't pretend to have any sort of insight into whether this is a large enough group to be worth prioritizing, but I don't think the statement of your perspective here is going to be particularly compelling to anyone who doesn't already agree with you.

          • SleekoNiko 7 hours ago ago

            People are very passionate about their views on LLMs. :)

          • iLoveOncall 8 hours ago ago

            > I also happen to like the Star Trek version of the future, where smart computers we can talk to exist [...], this is the kind of future I would like to build towards

            Well if that doesn't seal the deal in making it clear that Kagi is not about search anymore, I don't know what does. Sad day for Kagi search users, wow!

            > Having the most accurate search in the world that has users' best interest in mind is a big part of it

            It's not, you're just trying to convince yourself it is.

        • VHRanger 8 hours ago ago

          I can't really do anything with the recommendation you're making.

          The recommendation you made worked from your personal preference as an axiom.

          The fact is that the APIs in search cost vastly more than the LLMs used in quick answer / quick assistant.

          If you use the expensive AI stuff (research assistant or the big tier 1 models) that's expensive. But also: it is in a separate subscription, the $25/month one.

          We used to not give any access to the assistant at the $5 and $10 tiers; now we do - it's a free upgrade for users.

  • AuthAuth 9 hours ago ago

    Kagi is already expensive for a search engine. Now I know part of my subscription is going towards funding AI bullshit. And I know the cost of that AI bullshit will get jacked up, forcing the Kagi sub price up as well. I'm so tired of AI being forced into everything.

    • progval 8 hours ago ago

      These are only available on the Ultimate tier. If (like me) you don't care about the LLMs then there is no reason to be on the Ultimate tier so you don't pay for it.

    • johnnyanmac 8 hours ago ago

      >expensive for a search engine.

      As in, not "free"?

      Either way, I guess we'll see how this affects the service.

      • AuthAuth 5 hours ago ago

        This is $10 USD a month for a search engine. $5 is a more reasonable price, but 300 searches is far too little to be useful. Converted to my currency, this is over 2 hours of wages for a search engine.

        • johnnyanmac 5 hours ago ago

          I see. $10 is half an hour of minimum wage in my state, and around 1.25 hours of federal minimum wage. For anyone who hasn't been laid off in the US, it's a pittance for US developers.

          Sounds like Kagi might need to implement some better regional pricing.

  • daft_pink 10 hours ago ago

    Not for nothing, but I wish there was an anonymized AI built into Kagi that was able to have a normal conversation about sexual topics or search for pornographic topics, like a safe-search-off function.

    I understand the safety needs around things like LLMs not helping build nuclear weapons, but it would be nice to have a frontier model that could write or find porn.

    • VHRanger 10 hours ago ago

      You'll want de-censored models like cydonia for that -- can be found on openrouter, or through something like msty

  • DontForgetMe an hour ago ago

    Awesome, a search engine with an LLM / AI!

    I hadn't been sure about Kagi before, but this has really swung it for me, I'm off to sign up post haste. It's a revolutionary move that really shows how fast ahead of the competition Kagi is, how dexterous their fingers at the pulse of humanity, how bold.