Congratulations on the launch.
While I appreciate the effort put into development, I have some questions about the underlying premise.
1. How do detailed metrics about coding sessions, git interactions, and test runs actually lead to meaningful improvements in your productivity?
2. Assuming you have used this plugin for a while now, how does tracking these metrics correlate with better code quality?
I hope I'm not coming across as overly critical, as that's not my intent. I appreciate the effort that goes into developing software regardless of its final purpose.
Yes, these are great questions!
When I look at my own experiences, I wouldn't focus strictly on code quality and productivity in the narrow sense. What I mean is that focusing on smaller commits and smaller branches helped team members understand changes more quickly, making knowledge transfer easier. For branches where code reviews/approvals were necessary, this also improved our overall flow, because stories were no longer blocked for long waiting on approvals.
Another point was that, with Kasama, I was able to track the runtime of long-running tests and build tasks, which allowed me to point to the tasks where optimization was needed. Otherwise, such discussions were always based on gut feeling, and improvements were usually postponed…
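If it helps to make that concrete, here's a rough standalone sketch of the idea (not Kasama's actual code; the wrapped command and the log file are just placeholders): wrap a build or test command, time it, and append the duration somewhere you can chart later.

    # Illustrative sketch only: time a long-running build/test command and log the duration.
    # The wrapped command and the CSV path are placeholders, not anything Kasama prescribes.
    import csv
    import subprocess
    import sys
    import time
    from datetime import datetime, timezone

    def run_and_log(cmd, log_path="task_durations.csv"):
        start = time.monotonic()
        result = subprocess.run(cmd)
        duration = time.monotonic() - start
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.now(timezone.utc).isoformat(),
                " ".join(cmd),
                f"{duration:.1f}",
                result.returncode,
            ])
        return result.returncode

    if __name__ == "__main__":
        # e.g. python time_task.py ./gradlew test
        sys.exit(run_and_log(sys.argv[1:] or ["./gradlew", "test"]))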
Congratulations on the launch. But I don't like it when the number of hours is used to quantify my work, because x hours of work is not necessarily a good representation of the complexity or quality of that work. Even the number of commits is not a good metric. I'd like to know what HN thinks: how do you measure your productivity?
You should give our free vscode extension a spin https://www.exceeds.ai/ (I am the CTO/co-founder)
If this turns into some sort of hiring metric, I'm going to be pissed.
> What’s your Kasama score?
Was this named after the Tagalog word?
"Holla, swear to my kasamas / When I grow up I wanna be just like Yuri Kochiyama."
Yes, well noticed!
That’s quite easy for me because I’m Filipino! And coincidentally, I’m also residing in Germany. :D
Mabuhay! We also have a Michelin-starred restaurant here in Chicago with the same name.
The same restaurant that was featured in the TV series The Bear, right?
It's a legible Thai name as well.
It would be interesting if this also hooked into an AST to capture different syntaxes and structures used to express procedures and entities in code.
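For what it's worth, here's a rough idea of what that could look like for Python sources (purely illustrative; as far as I can tell the plugin doesn't do this today): walk the AST and count the node types, so a tool could report which constructs you actually lean on.

    # Illustrative only: count which syntactic constructs a Python file uses.
    # A real IDE plugin would hook into the language's own parser instead of the ast module.
    import ast
    import sys
    from collections import Counter

    def construct_counts(path):
        with open(path, encoding="utf-8") as f:
            tree = ast.parse(f.read(), filename=path)
        return Counter(type(node).__name__ for node in ast.walk(tree))

    if __name__ == "__main__":
        for name, count in construct_counts(sys.argv[1]).most_common(10):
            print(f"{name:25} {count}")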
Is the code available somewhere? The GitHub repo seems to be only for documentation and issues.
I'm curious: how did you come up with the name? In my language it means 'companion'.
Yes, the Tagalog meaning of 'companion' was the reason behind the name.
Cool! Thank you for using our dialect. Congratulations on the launch too!
How actionable will these insights be? Are you going to write better code because you don't want your "failed test" metric to go up?
As with many of these "quantified self" stats, it feels like it will result in a colorful, nice-to-look-at dashboard... with no real benefit.
These "quantified self" stats provide countless insights.
One example.
There is a refactoring view in there: by time, by project, even breaking the figures down by type of action (a rough sketch of that kind of aggregation is below).
Benefits:
- Self-awareness. It is hard to gauge how much time you spend refactoring. If your priority isn't refactoring but meeting an upcoming deadline, these stats tell you whether you are actually sticking to that priority.
- Quantification. If you are trying to explain to your colleagues that you find yourself needing to do a lot of refactoring on a particular project, you've got numbers to communicate. "What's a lot?" is a question colleagues often ask.
- Evidence. Showing these numbers conveys more certainty than "I think I've been doing a lot of refactoring on this project today".
Plus, oftentimes with visualisations, we don't know what we are looking for. Until we find it.
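To be concrete about the kind of breakdown I mean, the aggregation behind such a view is simple to sketch (illustrative only; the event records here are made up, not the plugin's data model):

    # Illustrative only: group refactoring events by type and by project.
    # The event records are invented for this example; the plugin's real data model will differ.
    from collections import Counter

    events = [
        {"project": "billing", "action": "Rename", "minutes": 4},
        {"project": "billing", "action": "Extract Method", "minutes": 12},
        {"project": "frontend", "action": "Rename", "minutes": 2},
    ]

    by_action = Counter()
    by_project = Counter()
    for e in events:
        by_action[e["action"]] += e["minutes"]
        by_project[e["project"]] += e["minutes"]

    print("minutes by action: ", dict(by_action))
    print("minutes by project:", dict(by_project))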
I use refactoring tools all the time during development and bugfixing. The distinction between refactoring / feature development / bugfixing is mostly in the intention. If it just tracks the usage of refactoring tools, I think there will be many false positives.
Yes, metrics alone are not sufficient here. But I did not want to include opinionated targets that might be problematic to optimize towards.
For me, I had two reasons to look at the metrics: First, I wanted to split my work into smaller chunks and commit more often, and I wanted to track whether I achieved this goal. Second, it occurred to me that I was using some IDE refactorings very often, but I was wondering where my "blind spots" were, i.e. which types of refactorings I was using rarely or wasn't even aware of. This inspired me to track IDE refactorings in the plugin as well.
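To make the first goal measurable, this is roughly the kind of check I had in mind; the sketch below is standalone against plain git rather than the plugin itself, and the 30-day window is an arbitrary choice:

    # Rough standalone sketch: are my commits getting smaller and more frequent?
    # Uses plain `git log`; the time window and output format are arbitrary for illustration.
    import subprocess
    from collections import defaultdict

    def commit_stats(since="30 days ago"):
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--pretty=format:%ad", "--date=short", "--shortstat"],
            capture_output=True, text=True, check=True,
        ).stdout
        per_day = defaultdict(int)
        commits = 0
        lines_changed = 0
        for line in out.splitlines():
            line = line.strip()
            if len(line) == 10 and line[4] == "-" and line[7] == "-":  # a date line like 2024-05-01
                per_day[line] += 1
                commits += 1
            elif "changed" in line:  # e.g. "3 files changed, 40 insertions(+), 2 deletions(-)"
                parts = line.replace(",", "").split()
                for i, tok in enumerate(parts):
                    if tok.startswith(("insertion", "deletion")):
                        lines_changed += int(parts[i - 1])
        print(f"{commits} commits over {len(per_day)} active days, "
              f"~{lines_changed / max(commits, 1):.0f} lines touched per commit")

    if __name__ == "__main__":
        commit_stats()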
However, there might still be a use case for some kind of "trainers" in the plugin that help you improve your coding. Running tests more often, continuously integrating your branches, committing more often - these are not silver bullets, but they do make sense a lot of the time, and a tool might help here.
It's data. Creating a colorful dashboard turns it into information. Getting to knowledge and action is another step. The plug-in does what it says on the tin; it's up to the user to make something of it, or leave it as inert knowledge.
> How actionable will these insights be?
I mean, that's something you should try to answer yourself, if you think you can extract any benefit from having these stats.
To me it's similar to a fitness app that can tell you things like time, speed, distance, elevation, but won't really tell you how to run better.
I have a hard time coming up with ways this could help my coding habits right away, but I think this would be on the user to find these, not the stats reporting tool, no? And if you find no real use, then it's maybe just not for you.
> To me it's similar to a fitness app that can tell you things like time, speed, distance, elevation, but won't really tell you how to run better.
Most of these stats are very simple to interpret: higher speed or distance means you're getting better.
My fear is that people will apply such simplistic evaluation on these stats as well - "your daily commit rate has been going down lately, we need to focus on that".
It'll tell your boss if they should keep you around or replace you with AI.
1. Create local (or private remote) repos with names similar to the remote repos in your VCS
2. Use scripts to commit junk to local or remote repos
2a. Extra points if you use chatgpt, Claude, gemini beta alpha0 to generate junk commits
3. ???
4. Profit. Sit back for a few months or quarters. Interview for new jobs, and then bounce out of there
Do NOT show this to most managers
I was about to comment the same thing. Managers might interpret this in the totally wrong way and will probably end up using this against employees, like "you're not coding enough" and dump more work on you. Knowledge work, especially software engineering is way more than just time spent coding.
As a programmer, however, I might find this insightful, but then again I might be too hard on myself and wonder if I'm not working hard enough. There's a lot to say about a tool like this, though. Interesting work. I'm too scared to try this out lol.
No, but who is the one interested in you improving? :-)