I've never seen this with any other tech adoption. Normally companies are very reluctant to spend money on dev tools or components. Maybe reluctantly some of the Jetbrains suite. But in this case people are being ordered to use an expensive tool! e.g. in this thread https://news.ycombinator.com/item?id=47148256
Similar to the tendency of other companies to cram in useless AI buttons for no reason other than to pump the stock. For example, I have one in Outlook, with two disclaimers: "I can help—but I don’t currently have access to your whole inbox, only to the specific email thread you had open", and "AI answers may be incorrect". Or the Notepad one, complete with its own CVE.
It has much more the feeling of a management reorganization fad, like edicts of the form "we're all going to be agile now! Please submit your sprint planning for the next six months by Friday".
It's wild being in the midst of it.
On one hand I'm getting direction to use more AI. On the other hand, I incurred a mid 5-figure spend generating tokens in January and a LOT of people started sweating when the bill showed up. :/
I have AI automation metrics that are tracked. We have automation that automates the AI usage whether you want it or not so our metrics are 100% for everyone. Presumably this makes the stock go up somehow.
I'll talk to the LLM about how stupid all of this is; if you want me productive... I choose the tools, sorry.
Enforce a good diet so we'll finally eat our vegetables, boss. Get out of here. If these are the commonly established boundaries, I'm either gaming it or simply out. Easy choice.
Is there not a higher productivity expectation for you? How are you going to meet those without agents?
I can't say, yet! Hasn't struck. They'd like users, aren't forcing the issue. Very reasonable/measured so far. Should expectations appear [and I were to play along], I maintain they would be sorely disappointed. Simply don't believe we're limited by content generation/consumption.
All beside the point, anyway. I'll worry about meeting agreeable expectations in the next place... where I can renegotiate my side of the terms, too. The work doesn't really call for it, I'm already more productive than my enabled peers. Not pressed, options exist (both internal and external). Competitors more to my liking surely exist. I'm entirely fine failing to meet demands that I don't believe can/should be met. Call me fortunate [and perhaps naive] :)
My 'agents' were called 'pipelines' 20 years ago, they serve us well. The... 'real world' logistics need to be considerably shortened before an agent [or more pipelines] might have any meaningful impact. We have all the code/docs/whatever we might need, and a lot of built-in downtime, so I suspect it's a wash. Moving parts or people to datacenters, for instance.
All that's not to say an LLM can't be useful. They could spare us some shoveling, so to speak. Less work, not necessarily further or faster. Easier. There's not a lot of juice to squeeze and I'm not sure one should be willing [without proper consideration/compensation].
Human logic is not the most wanted product at today's tech companies.
Have been for a year at least.
This is why unions used to exist. If tech workers were in a union, they could stop this in its tracks.
But sadly, over the years, some unions became very corrupt and others were allowed to be killed off by companies and the US government.
Again, I am glad I am at the age I am. With that, I feel really bad for the young. From what I am seeing, between climate change, living costs, and now AI, the young seem really screwed :(
My generation allowed these oligarchs to take over the US; it is not like no one saw it starting to happen in the '80s. So here we are.
I personally like using AI at my job and would resent being forced to join a union which opposed it.
The way that I guarantee my job treats me well is by being willing to quit whenever it stops working for me. Despite everyone panicking about AI layoffs, I still consistently get messages from recruiters trying to fill AI-related jobs. In your ideal world where the union is supposed to represent me but oppose AI, do those jobs still exist?
> The way that I guarantee my job treats me well is by being willing to quit whenever it stops working for me.
That will work until it won't.
But you're deciding to leave power on the table. That's kinda like leaving money on the table. And of course, it's typically the unsophisticated people who do that.
I feel I could vibe-code in a week, against our API, what my company wants customers to pay lots and lots of money for from our cloud solution.
Ha, resonates for me. I wonder if a solution is to pitch the APIs (or an MCP) to customers at the same cost as cloud solution and tell them to vibecode away on top of it. I guess that's IaaS.
Yeah I know, and I hate it.
I'm in this weird space where I'm working for one company on behalf of my actual company. And the company I'm being farmed out to does not care about us, because we are, in a sense, contractors. This resulted in us being brought on board without any training, knowledge transfer, or guidance, and being told "get to work".
The result is that when I ask people for information about the organization or the code bases, they come back to me with "ChatGPT said this", where "ChatGPT" is an internally hosted AI stack.
It's gotten to the point where I've given up and just have the internally hosted AI writing unit tests for me because their organization doesn't care about us, and as a result I just don't care about them.
The worst part is that the tests seem to work reasonably well. Which is terrifying, because I don't actually know the language or the testing framework very well. I'm effectively working in a junior capacity without any training or guidance, and my merge requests are getting pushed through.
It's going to be interesting to see how these organizations that force AI into the workplace age, because there will no longer be any experts in the code base. Only slop.
I just don't understand the fuss. Having run some models locally, there are some interesting phenomena going on there, but trying to troubleshoot why things go wrong within the confines of the model itself is... Well, for me at the moment at least, intractable. If I have to rely on something and I can't troubleshoot or fix it... I'm not relying on it. Especially when the only things these models really seem good at are burning watts and massive token multiplication.