24 comments

  • goda90 5 hours ago

    A few years ago I set out to refactor some of my team's code that I wasn't particularly familiar with, but that we wanted to modularize and reuse in more places. The primary file alone was 18k+ lines of TypeScript, a terrible mess of spaghetti. Most of it had been written in JavaScript and later converted haphazardly. I ended up writing myself a little app that used the TypeScript compiler APIs to help me explore all the many branches of the code and annotate how I would refactor different parts. It helped a bit, but I never got the time to add some of the more intelligent features I wanted, like finding every execution path between two points.
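
    A minimal sketch of that kind of branch walk with the TypeScript compiler API (file name and output are illustrative, not the original app):

        import * as ts from "typescript";

        // Parse the file, then recursively visit every node and log branch points.
        const program = ts.createProgram(["src/main.ts"], { allowJs: true });
        const source = program.getSourceFile("src/main.ts")!;

        function visit(node: ts.Node): void {
          if (
            ts.isIfStatement(node) ||
            ts.isSwitchStatement(node) ||
            ts.isConditionalExpression(node)
          ) {
            const { line } = source.getLineAndCharacterOfPosition(node.getStart());
            console.log(`${ts.SyntaxKind[node.kind]} at line ${line + 1}`);
          }
          ts.forEachChild(node, visit);
        }

        visit(source);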

  • dcreater 7 hours ago

    You say "local-first" but have made the Voyage API the default for embeddings (I had to go to the website and dig to find that you can in fact use local embedding models). Please fix.

    • esafak an hour ago

      It would be convenient if it could load local SLMs itself; otherwise I have to manually start the LLM server before I can use it, and that's not something I leave running all the time.

    • ofriw an hour ago

      Thank you, yes, the docs are overdue for a refresh. It's in the works.

  • romperstomper an hour ago

    I don't understand how/why all of this is local-first if all these providers are supported and used - could you elaborate on what is sent to them?

    • ofriw 13 minutes ago

      The DB is stored locally, and any embedding model, reranker, and LLM will work. It's up to you whether you self-host these or bring them in externally from one SaaS or another.

  • henryhale 3 hours ago

    I have been working on depgraph (https://github.com/henryhale/depgraph) for a while now. It is truly local, with several output options (json, mermaid, jsoncanvas). Multiple languages are supported (js, go, c) - expanding the list slowly but surely.

  • Neywiny 7 hours ago

    Might give this a try to experiment, if it's really free to use (I'll have to read up on that, I guess). The QEMU codebase is huge and every contributor seems to solve problems in slightly different ways. Would be nice if this tool could help distill it.

    • ofriw an hour ago

      Completely free, MIT licensed. You can fully self-host it if you have the hardware to run the Qwen3-Embedding and reranker models.
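
      If you serve those behind any OpenAI-compatible endpoint (vLLM, llama.cpp, etc.), any standard client can talk to them. A minimal sketch, assuming a local server on port 8000 and the 0.6B embedding model:

          import OpenAI from "openai";

          // Point the client at a locally hosted, OpenAI-compatible server.
          // Base URL, port, and model name depend on how you serve the model.
          const client = new OpenAI({
            baseURL: "http://localhost:8000/v1",
            apiKey: "not-needed-locally",
          });

          const res = await client.embeddings.create({
            model: "Qwen/Qwen3-Embedding-0.6B",
            input: ["function add(a: number, b: number) { return a + b; }"],
          });

          console.log(res.data[0].embedding.length); // embedding dimension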

  • apgwoz 5 hours ago

    Perhaps I am missing something, but this seems to require a Lemon (LLM)? Is the idea that the Lemon is used to help build an index AOT that can then be queried locally?

    I want to figure out how to build advanced tools, potentially by leveraging Lemons to iterate quickly, that allow us all to rely _less_ on Lemons but still get 10, 20, 30x efficiency gains when building software, without needing to battle the ethics of it all.

    • ofriw an hour ago

      ChunkHound does it a bit differently, since at true enterprise scale it's very slow and costly to pass all code chunks through an LLM at indexing time. Instead, ChunkHound implements a customized "deep research" algorithm that's been optimized for code exploration, so it can answer, on demand, any deep technical question about the indexed codebase. This research agent can be powered by a lower-tier LLM (think Haiku, Codex low, etc.) that's already included in your subscription.
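
      In spirit, the loop looks something like this (a heavy simplification for illustration; the real implementation differs):

          // Iterative retrieve-rerank-reason loop over a code index.
          type Chunk = { file: string; text: string };

          interface Index {
            search(query: string, k: number): Promise<Chunk[]>;
            rerank(query: string, chunks: Chunk[]): Promise<Chunk[]>;
          }

          interface Llm {
            complete(prompt: string): Promise<string>;
          }

          async function research(question: string, index: Index, llm: Llm): Promise<string> {
            const notes: string[] = [];
            let query = question;
            for (let step = 0; step < 5; step++) {
              // Retrieve candidates for the current sub-query, then rerank them.
              const hits = await index.rerank(query, await index.search(query, 50));
              const context = hits
                .slice(0, 10)
                .map((c) => `${c.file}:\n${c.text}`)
                .join("\n---\n");
              const reply = await llm.complete(
                `Question: ${question}\nNotes:\n${notes.join("\n")}\nContext:\n${context}\n` +
                  `Answer "FINAL: <answer>" if you can, else "NEXT: <follow-up query>".`
              );
              if (reply.startsWith("FINAL:")) return reply.slice(6).trim();
              notes.push(reply);
              query = reply.replace(/^NEXT:\s*/, "");
            }
            return notes.join("\n"); // best effort once the step budget runs out
          }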

  • conception 4 hours ago

    I have ChunkHound in a few projects and it's noted in both the agent md file and MCP, and Claude never uses it. Ever. Never once.

    Is there a prompt special sauce y’all use to get it to use it?

  • dmos62 3 hours ago

    Will try this out. Was always envious of how Augment was able to do this. Kudos.

  • dogman123 8 hours ago

    Is there a way to have the model inside of Codex make use of chunkhound instead of its “built in” search/explore functionality with rg? Whenever I spin up a new agent using xhigh thinking, it spins its wheels for a while to get up to speed — wondering if chunkhound can make this process faster.

    • esafak 2 hours ago

      That's what the MCP is for, if you can get the LLM to use it. Sometimes they just like to do it their own way :)

  • bravura 7 hours ago

    Can you please expose the functionality as a self-documenting CLI command with machine-readable output? (Or did I misunderstand, and MCP isn't the only way to use it?)

    I am curious to try it but do not want to adopt MCP servers.

    Telling Claude to call the CLI tool is more efficient.

    • ofriw an hour ago

      `chunkhound search <query>`, `chunkhound search --regex <query>`, and `chunkhound research <query>` are the main CLI entry points, and you can already use them today.

    • blackqueeriroh 4 hours ago

      Am I confused or is this not an open-source project on GitHub?

      You have every ability to make these modifications yourself; is there a reason you feel the need to require the creator to do so?

      • from_memory 4 hours ago

        I think the term is "Instrumentalism".

    • dcreater 7 hours ago

      Agree. And to make the CLI usage more effective/efficient, it would be excellent if you could publish a skill.
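
      A minimal sketch of what that could look like (frontmatter per Anthropic's skill format; the wording is illustrative and the commands are the CLI entry points mentioned above):

          ---
          name: chunkhound
          description: Semantic and regex search over the local ChunkHound code index. Use when exploring or answering questions about this codebase.
          ---

          Use `chunkhound search <query>` for semantic search,
          `chunkhound search --regex <query>` for exact matches, and
          `chunkhound research <query>` for deep questions about the codebase.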

      • esafak 2 hours ago

        That's why we're asking for the CLI: so we can write the skills.

  • CamperBob2 5 hours ago

    Looks like the tutorial link is broken.