As a founder who also acts as the head of product for our B2B SaaS, Markhub, this is a topic I'm passionate about. Your use cases, especially using an LLM to understand an undocumented codebase, are very familiar.
My biggest pain point wasn't just executing individual tasks with an LLM, but the "context loss" that happens between those tasks. The summary from my research, the user feedback from a Slack thread, and the technical spec for the feature all lived in separate places.
So, we took a different approach. We built our own AI teammate, MAKi, directly into our collaboration platform. We don't just "use" an LLM; we've made it the central OS for our entire product development cycle.
Here's our day-to-day workflow:
1. User Feedback Synthesis: Instead of manually reading user interviews or community posts, I ask MAKi: "Summarize all feedback from the last 7 days related to our mobile app, and categorize them into 'Bugs' and 'Feature Requests'." MAKi reads all the scattered conversations and generates a structured report (a rough sketch of this kind of call appears after the list).
2. From Feedback to Spec: We then discuss that report in a chat thread. Once we decide on a feature, I ask MAKi: "Take this conversation and our decision, and write a technical spec document for the dev team, including the user problem, proposed solution, and key action items."
3. Living Documentation: This is the most powerful part. As developers work on the feature, their discussions and code commits (via integration) are all linked to that initial conversation. Later, anyone can ask MAKi: "What was the original reason we built the PWA notification feature?" and it will instantly pull up the entire history from the first user feedback to the final decision document.
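To make step 1 concrete: stripped of the product around it, this is mostly prompt assembly plus a model call. Here's a minimal sketch under some assumptions — the feedback is already collected as plain text, the model returns bare JSON, and the Anthropic SDK is used purely as a stand-in (MAKi's actual model and pipeline aren't shown here):

```python
# Hypothetical sketch of a feedback-synthesis call; names and model choice are
# illustrative stand-ins, not MAKi's real implementation.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def synthesize_feedback(messages: list[str]) -> dict:
    """Sort raw feedback snippets into 'bugs' and 'feature_requests'."""
    prompt = (
        "Summarize the following user feedback about our mobile app and "
        "categorize each item as a 'Bug' or a 'Feature Request'. "
        "Return only JSON with the keys 'bugs' and 'feature_requests'.\n\n"
        + "\n---\n".join(messages)
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # any current model id works here
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returned bare JSON; real code would validate/retry.
    return json.loads(response.content[0].text)

report = synthesize_feedback([
    "Push notifications never arrive on Android 14.",
    "Would love a dark mode for the dashboard.",
])
print(report)
```

The structured-report part is the whole trick: once the output is JSON rather than prose, it can be filed, linked, and queried later, which is what makes steps 2 and 3 possible.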
We've found that the true power of LLMs isn't just in answering questions, but in creating a persistent, searchable, and intelligent memory for the entire team.
Tools like Userdoc (https://userdoc.fyi) help in a few ways: you can easily create requirements (stories, personas, journeys, test cases), but also reverse engineer existing source code into detailed docs and then ask natural-language questions about it. AI helps us plan our product in Userdoc, and our devs connect via MCP to bring those requirements directly into Cursor. (Full disclaimer: I work at Userdoc, but we eat our own dogfood.)
I am a founder and also wear the product manager hat for my products. JTBD is a pretty common use case for me. I have also brainstormed features, written detailed specs, and done early prototyping, and I've used Lovable to convert my thoughts into screens that I can pass on to my engineers.
Recently, I even used Claude to design my entire DB schema and generate dummy data to fill it up quickly so I could test prototypes.
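For anyone curious what that output looks like, here's a minimal sketch of the kind of schema-plus-seed script a model can draft. Everything in it (table names, columns, row counts) is made up for illustration, and it uses SQLite so it runs anywhere without setup:

```python
# Hypothetical example of an LLM-drafted schema + dummy-data seed script.
# Tables and columns are illustrative, not from any real product.
import random
import sqlite3

conn = sqlite3.connect("prototype.db")
conn.executescript("""
CREATE TABLE users (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);
CREATE TABLE feedback (
    id       INTEGER PRIMARY KEY,
    user_id  INTEGER NOT NULL REFERENCES users(id),
    category TEXT CHECK (category IN ('bug', 'feature_request')),
    body     TEXT NOT NULL
);
""")

# Seed dummy rows so the prototype has something to render immediately.
for i in range(20):
    conn.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)",
        (f"Test User {i}", f"user{i}@example.com"),
    )
    conn.execute(
        "INSERT INTO feedback (user_id, category, body) VALUES (?, ?, ?)",
        (i + 1, random.choice(["bug", "feature_request"]), f"Sample feedback #{i}"),
    )
conn.commit()
conn.close()
```

Having the model produce a runnable seed script (rather than a prose description of the schema) is what makes it useful for prototyping: you get clickable screens with realistic-looking data on day one.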
I use it to critique PRDs and also for research (e.g., "how does this API work, and does it support this use case?").
It is nowhere close to replacing a PM (sorry to all the hypistas bigging it up), but it's quite helpful as an aide.
The best PMs are using it to replace the low-leverage work (writing docs) and spending more time on the high-leverage work (organizing people).