Are AI agents posting this fully aware that they are AI? If they are trained only on human material they may not even understand their own true reality.
Which is actually quite an interesting use of AI. Most if not all previous "AI is good" stories were also AI-generated, so fighting fire with fire seems effective.
Speed without judgement? (Maybe you'll be fine. Or maybe your business gets run into the ground by spaghetti code piling up beyond any hope of human review, and quality controls breaking)
Judgement without speed? (That startup next door led by a 4-people visionary team and a bunch of AIs stomps over your 100-person company in ability to ship)
Judgement + speed at the same time? (layoff most of your employees and keep only the visionaries? how do you even filter for people who can make good decisions?)
> That startup next door led by a 4-people visionary team and a bunch of AIs stomps over your 100-person company in ability to ship
That sounds right but is it actually true? By that I mean shipping faster. First mover advantage is a thing, but it's not the only thing, and that's also not the same as shipping additional features quickly.
I mean, Apple is famous for being purposely late to entire markets, and they're doing pretty well...
This mentality is just "move fast and break things", and just because it's a common trope in the SFBA doesn't make it effective across the board.
Note: I am assuming that it is 2027-28 and reliable AI automated coders exist (or the equivalent workhorse AI in your field), which makes implementation time negligible compared to making decisions. The effect is somewhat weaker with present-day-level AIs. I'm also assuming that the 100-person company is very competent with AI outside of making decisions, but that the startup can plan things much faster due to not needing a committee to do so.
Very rough maths:
If your 100-person team still follows collaborative processes to cancel out errors (let's say it takes a group of 10 people one day to decide on a single deliverable's shape), then gives the design to the AI to implement (as we assume the AI can do it without supervision), then you can ship 10 deliverables a day.
At the same time, that 4-person team can have all of them bouncing ideas off AIs to help them make decisions in rapid fire all day. Each of them individually spends an hour working on a decision, then hands it to an AI. Their decisions are on average as good as your 10-member team meetings: while your medium-sized company's decisions sometimes end up suboptimal due to politics, the startup's decisions are made by individuals, who make the wrong call more often, and I assume these two effects cancel out. In that case, your competitor with 4 people cranks out 32 deliverables a day, assuming the implementation AIs don't have to be supervised at all.
In summary it's not "move fast and break things", it's just "move fast, focus on making decisions, delegate everything else to the AI". Remember that the decisions are all that matters if the AI can do all the implementation.
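The rough maths above can be sketched directly; every number here (team sizes, hours, decision costs) is an assumption stated in this thread, not data:

```python
# Throughput sketch under the thread's assumptions: AI handles all
# implementation, so shipping rate equals decision-making rate.

def big_co_deliverables_per_day(headcount=100, people_per_decision=10):
    # Ten-person groups each spend a full day shaping one deliverable,
    # working in parallel across the company.
    return headcount // people_per_decision

def startup_deliverables_per_day(founders=4, hours_per_day=8, hours_per_decision=1):
    # Each founder hands one finished decision to an AI every hour, all day.
    return founders * (hours_per_day // hours_per_decision)

print(big_co_deliverables_per_day())   # 10
print(startup_deliverables_per_day())  # 32
```

The whole comparison collapses if any assumption fails, e.g. if the implementation AIs need supervision after all.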
Mmm, that's a lot of assumptions that all have to hold true to make the math work. You're starting to venture into hiring-nine-women-to-make-a-baby-in-a-month territory here.
But it also makes some more fundamental assumptions that I'm trying to challenge a bit. It assumes delivering 32 "deliverables" per day (the meaning of which is context specific) is better than 10. Is that always true? Is that delta the most relevant factor in the success of a business compared to its competitors? Etc.
I agree that quality of decisions matters much more than quantity; I'm trying to keep decision quality constant here.
A lot of the above assumptions are just to keep things fair since in the real world there are a lot of variables that can't be ignored. For example, keeping AI competence equal between the two companies.
I'm just trying to show that under my assumptions (~2027-28 AI, highly competent) it is quite conceivable that a 4-person visionary company can start to beat a much larger traditional one on quantity, not that it will definitely happen. I guess it's even rougher than the "rough math" I said it is.
I guess the point I'm trying to make is this: startups have always been able to beat big companies in serial execution speed, but beating them in straight-up parallel work quantity is very unusual. I think there's a good chance it will happen, though, simply because decision-making scales really poorly with headcount in traditional companies, and I think it'll get more and more important relative to implementation work. Hence the focus on quantity of "deliverables" (I really mean medium-size project designs; think the equivalent of 5-day targets for a dev team, which I assume to be the AI's average task horizon before it needs human decision input).
At least, the small team will win for some time until we get straight-up superintelligence that replaces the decision-makers as well. At which point the calculus suddenly flips and the richest company wins by default.
Yeah, I don't necessarily disagree assuming those assumptions end up being true. It just wasn't really the point I was trying to make. More about the mentality that you need to keep up with the proverbial Joneses and even sacrifice other things in order to do so, or else you will be left behind. You see this on threads where people are talking about their own personal development as well. Fears about being left behind, or people warning others that if they don't go all-in on agentic development they will be left behind. It just feels so oversimplified, almost philosophically empty.
I can understand fears about employment of course, people need money to live and support their loved ones, but beyond this there is more to life than just slopping out more and more code that "does things" and it's worth considering if there is any real benefit to do so, or even if the negative aspects outweigh the positive ones. I suppose we will find out in real time if everyone's side projects are better to be left unfinished, or finished by vibe coding. It's not so clear to me.
I'm not advocating against utilising LLMs in companies or whatever, just that prioritising velocity at the expense of everything else might not be so valuable. Of course, if there are no downsides then it would be purely valuable, but that basically begs the question lol.
These articles are such doomsaying, yesterday's clickbait. Again, the worst-case scenario is being introduced as the one that will surely happen to your company.
Is anyone really vibe coding like this?
I mean, if someone without any coding skills vibe-codes a whole app, you can't expect it to be production-ready.
I think anybody with common sense should know this, right?
The question is, when it screws up, who gets blamed, and who pays. If it's the customer, and you can afford to lose a small fraction of customers, it may be worth it. It's just another form of crappy customer service. If it's internal, and it's all output, no input, and the internal organization doesn't really need that info that badly, that might work out.
But give it the authority to do something and there's real trouble.
This is just patently false in my experience thus far. I mean, I'm "vibe engineering" and know what I'm doing relatively well? But the way this works now, I'm more like an architect than a coder. This means I can do things faster, but it also means it's less fun. But the customer doesn't really care about "fun", so I do what I've gotta do.
But if anything, I could probably go a lot faster and be fine, it's just my life would be miserable. If you're going to "vibe code" try to remember to actually... you know... vibe.
The thing is, the development timeline is so compressed that you lose intimate knowledge of the codebase. Like, I don't think humans can form memories that detailed that quickly? Maybe it's just a me problem though. Anyway, when you need to debug or fix stuff, your response to the AI's reasoning will be "welp, makes sense, I suppose," and your mental model of the codebase is now slippery. Eventually there comes a time where, at best, you can draw an incoherent high-level diagram of the architecture.
And the AI's solution to a problem is generally "more of the same." It rarely looks at fixing design problems.
I don't understand this dichotomy. Coding is architecting, you can't divorce these things. In fact that is all it really is. It doesn't matter if you're writing assembly or python.
My definition of vibe coding is coding without review (for example, a non technical person vibe coding something). In the hands of a competent engineer the AI tools do boost productivity.
But even there, there is a limit to responsibility capacity: you can't have an engineer maintaining large numbers of systems at once, so if you moved fast you can still get yourself in trouble, even with technical review.
I'd argue that doing vibe coding without a competent engineer reviewing the work is likely to have worse outcomes than drafting your own legal documents without consulting an actual lawyer.
Both are likely to result in nasty surprises in the future.
"This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment."
I don't know, my intuition since I started doing this software stuff professionally is that most people have dog piss judgment, most people are just making it up as they go, and "well thought out and planned well" is typically the enemy of actually getting anything done.
I don't know, I just feel like, "start building and the customers will tell you where the value is."
> vibe coding becomes a one-way ratchet. Every prototype that demos well moves forward, because the social cost of stopping it exceeds the perceived risk of shipping it
Ever seen a ratchet slip at high torque? That’s your marketing department shipping a vulnerable Wordpress connected to your internal customer database as well as phpMyAdmin listening to the world on 8008.
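As an aside, whether something like that hypothetical admin panel is unintentionally reachable can be checked with a plain TCP connect; the host and port below are illustrative, not from the thread:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    # Try a TCP connection; success means something is listening there.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Audit your own machine before the internet does it for you.
print(port_open("127.0.0.1", 8008))
```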
I feel like there are a lot of really reductive and oversimplistic arguments being made on both sides here. Vibe coding won't necessarily break your company, and rejecting AI similarly won't necessarily leave you behind. Neither the speed of development nor the quality of software seems particularly correlated with business success, imo. Plenty of businesses exist which either ship slower than their competitors or produce much lower-quality software, oftentimes both (hello Microsoft!). Is it crazy to think other things matter way more?
Like, is it wrong to think the variance in both velocity and quality between successful companies is just as large if not larger than the delta between AI usage and no AI usage?
What about a conservative approach to AI adoption, looking for a moderate boost in velocity but maintaining most existing quality? Would that not be ideal? Or might it depend on the specific market the company operates in?
> Dr. Jason Wingard is a globally recognized executive with deep experience across corporate, nonprofit, and academic sectors, specializing in the future of learning and work.He currently serves as Senior Advisor at Harvard University, where he advises trustees, senior administrators, and faculty leaders, and leads a research agenda on workforce transformation and innovation. He is also Executive Chairman of The Education Board, Inc. and Senior Advisor at Social Finance, Inc., providing strategic and visionary consulting while advancing a national research agenda on leadership and workforce development.He most recently served as the 12th President of Temple University, where he held dual tenured faculty appointments as Professor of Management and Professor of Policy, Organizational, and Leadership Studies.Previously, Dr. Wingard was Dean of the School of Professional Studies at Columbia University and Managing Director and Chief Learning Officer at Goldman Sachs. Earlier, he served as Vice Dean of the Wharton School, University of Pennsylvania; President & CEO of the ePals Foundation and Senior Vice President at ePals, Inc.; and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University.An award-winning author, Dr. Wingard has published widely on leadership, learning, and workforce strategy.
Not sure exactly what this guy does or what his expertise is, but I am fairly certain it’s not software development.
The point of the article is exactly that you shouldn't hand over software development to people with no expertise in software development. So he sounds smarter and more humble than a lot of CEOs at the moment.
He's responsible for the great success of Silicon Graphics, Inc. (SGI)
> and held leadership roles with the Aspen Institute, Vanguard Group, Silicon Graphics, Inc. (SGI), and Stanford University.
SGI gave us NVidia
nah, that was Forest Basket
How is that relevant to the question of software development expertise?
It was a fun jab - as SGI famously tanked in the late 90s.
But SGI also had quite a lot of software, including their OS (IRIX), imaging and 3D modelling libs and tools, and this little thing called OpenGL.
So decades ago he worked for a company that no one’s heard of, and which hasn’t existed for 16 years, and that means I should care what he thinks about vibe coding / modern software development why?
The whole resume just screams BS.
>decades ago he worked for a company that no one’s heard of
Err, SGI was one of the pillars of the industry.
Apparently not? A pillar is something that is structurally required.
What would today look like without NVidia? SGI is a pillar.
> So decades ago he worked for a company that no one’s heard of,
You must be trolling? SGI is one of the legend companies of Silicon Valley.
Never heard of it. Maybe legendary among people over 50.
Regardless, he apparently was just a PM there? For a year?
Google moved into their old building, fwiw.
> The bottleneck in the AI era is not production. It is discernment.
there are quite a few variants of the "it's not X, it's Y" pattern in this article that make me wonder how much of this waffle was written by a human
He was at SGI for 1 year as a "Program Manager" and then went into academia and a bunch of random jobs. He majored in Education and Sociology.
I'm betting he's never shipped a single line of code into a real production system in his life.
http://linkedin.com/in/wingardjason/
Note: Don't downvote or flag me for linking to his LinkedIn. Clearly he wrote this Forbes article to chase clout and influence like most Forbes writers. This is what he wants, for us to talk about him and his credentials.
I'm sure a vibe coded internal or external application WILL break a company. The thought process is however, out of 10 companies:
- 2 won't use AI at all and simply be left behind and stagnate (or go bust)
- 2 will partly use AI, and maybe keep up, maybe not
- 1 will go nuts, vibe-code an entire app, and explode (see the Tea app or whatever)
- 4 will have an inefficient app, suffer reputational damage, lose some money, or similar, but probably survive
- 1 will hit the jackpot and get a 100M ARR company with 4 people.
Stats are of course completely made up, but you get the point.
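Those admittedly made-up numbers can at least be sanity-checked as a toy distribution; every figure below is the comment's guess, not data:

```python
# Toy restatement of the comment's made-up 10-company breakdown.
buckets = {
    "no AI, stagnate or go bust":           2,
    "partial AI, maybe keep up":            2,
    "all-in vibe coding, blow up":          1,
    "inefficient app, damaged but survive": 4,
    "jackpot: 100M ARR with 4 people":      1,
}
assert sum(buckets.values()) == 10  # the buckets cover all ten companies

# Rough survival odds under these guesses: everyone except the two
# stagnating out and the one blowing up.
survivors = 10 - buckets["no AI, stagnate or go bust"] - buckets["all-in vibe coding, blow up"]
print(survivors / 10)  # 0.7
```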
> 1 will hit the jackpot and get a 100M ARR company with 4 people.
I will point out that at the point where you reach $100M ARR, it seems worth it to hire more people regardless.
But I'm guessing that the bar to be hired will be EXTREMELY high, because IMO the best people to hire in a future heavy-AI-automation era would be basically founder-level visionary leaders who are also subject-matter experts who can consistently make good decisions, and you'd give them $1M+ salaries in exchange.
If you have $100M ARR you can probably afford like 30 of these employees (and the probably exorbitant recruiting fees required to find them) and have them command AI all day. So your company will be extremely small in headcount, but still more than 4 people.
(oh and how will this affect wealth inequality? i prefer to not think about it)
I love these little utopian scenarios that make average HN users wet, because they relentlessly avoid considering the core issues with mid- to long-term AI sustainability. Namely: dependence on external models, fucked-up model-cost subsidization, and financial exposure to a downturn that could negatively affect the core product (the models). Yet this is the inevitable future, and if you dare raise concerns you're a luddite. Man, what have we come to.
I think it's more complicated than that.
Anything someone can vibe-code that gains any level of mild traction can then easily be duplicated by all their competitors, and in a fraction of the time, because the actual hard part, determining the product's edges, has already been done for them.
Agreed. This is why I think that platform/network effects will be mandatory to stay afloat in a lot of the tech market pretty soon. (Or other types of unfair advantages that are truly hard to overcome)
Even with network effects, it's still a race between you building an ecosystem and your competitors catching up to you.
However, if you DO have some sort of network effect moat and your competitors DON'T (yet), then you have the only advantage that matters in the world, because remember, vibe-copying goes both ways. You can copy your competitors feature-by-feature just like they can copy you. So you'll just always keep up feature parity while everyone only uses you because you're the established player with the biggest ecosystem, and soon enough you'll turn your temporary advantage into a permanent one.
Note: legacy platforms can't really benefit from this because you probably need to rewrite your product from scratch to fit any sort of cutting-edge AI dev workflow. Whoever creates an AI-native platform and scales it first wins.
> 4 will have an inefficient app, suffer reputational damage
Have we been living in different realities? I can't remember any example of companies in the past 10 years that have suffered reputational damage related to their inefficient apps. And there have been plenty of inefficient apps...
Sorry, there should have been an "and/or" clause in there.
By reputational damage I was thinking of leaking data, generating wrong information for users, etc.
I mean, a lot do get reputational damage (e.g. a lot of people hate Jira because of how slow it is, or Microsoft Teams, same story); it's just that nothing comes of it, so "suffered" is perhaps the wrong word here. People curse them and still use them.
I don't hate Jira because it's slow. I hate it because it has obvious well known quirks and deficiencies that never get corrected.
Plus "next generation projects" that just stall and seem unfinished.
If I didn't see them slam AI into it in weeks just like everyone else, I would say they have no product teams or engineers working on it.
Funny. I think that is exactly what's happening with OpenAI (and also Twitter/X/Xai/SpaceXai)
Sonos.
Sonos?
>- 2 won't use AI at all and simply be left behind and stagnate (or go bust)
Why would they? As if their software being made faster is the differentiator?
In my career as a consumer (lol), choice was never about that. It was about the business proposition, pricing, quality of implementation, guarantees the company is going to be there long term, them not being scumbags, and so on.
If anything, software churn put me off, especially when it came at the cost of messing with my established use, or stability.
Most products you consume are probably not software. Pretty much all products you consume are created by companies that use software.
If those companies don't keep up with software, they may lose the competitive edge that their competitors who are keeping up will gain.
The software-creation-speed is even less of a factor for companies that don't make software/services.
As the old saying goes: 90% of software projects fail.
Chances are that most projects that use vibe coding will fail, and chances are that most projects that succeed will use LLMs
Yeah, but no one needs to pay for SaaS software anymore, so no one is going to be getting a lucky $100M ARR business off pure vibe-coded software, as anyone can just make that in-house.
I read it in a report: AI amplifies. It amplifies the success of the good professionals and amplifies the failures of the bad ones.
In all cases, a whole enterprise solution can't be made with pure vibe coding. A specification is needed: a basis of predefined rules, coding styles, security considerations.
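A minimal sketch of what that predefined basis could look like, written as a hypothetical rules file for coding agents (the filename and every rule below are illustrative assumptions, not a standard):

```markdown
# AGENTS.md (hypothetical)

## Coding style
- Follow the repo's existing formatter config; never reformat unrelated files.
- New modules need type annotations and a short docstring.

## Security
- Never log secrets or PII; validate all user input at the boundary.
- Add dependencies only from the project's approved allowlist.

## Review
- Every AI-generated change ships behind a feature flag and gets human review.
```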
> AI amplifies. It amplifies the success of the good professionals and amplifies the failures of the bad ones.
It also worsens the problem in general by making it way, WAY easier for the bad ones to performatively appear good. They'll have the better-sounding promises but if you listen to them you'll crash and burn in a few years. This doesn't even have to be intentional, just someone technically ignorant channeling AI sycophancy while simultaneously playing politics (i.e. promotionmaxxing while delegating ideas to AI) will have the problematic effect.
So the article isn't very good but the vibe coding debate is pretty interesting.
This is how I'm thinking about it: in a scenario with increased opportunity and risk... You've gotta know where you stand.
First question is how much is more software actually worth to you.
This is an area with a lot of self-deception. Software development is expensive. Companies have to-do lists and wishlists and roadmaps. They have an A/B testing system and a productivity mindset.
But... if LinkedIn, Salesforce, or any whatnot really did have ways of producing software to make money... they would have done it already. Remaining opportunities follow a diminishing-marginal-value curve/cliff.
Imo, software development isn't necessarily a bottleneck. So... opportunity is limited and risk is the bigger deal.
The opportunity is at the upstart trying to bootstrap feature parity with Salesforce.
If you have no customers yet... you can unfetter the vibe and see if it works.
Imo companies need to revisit Google's early days. Let a thousand flowers bloom. 20% time. If you unleash capable people and give them tokens... that's a good way of searching for opportunities.
The thousand flowers died at Google because they had reached a point where opportunities are not everywhere. The best ideas had been discovered and also... the markets big enough to move Google's dial are few. There aren't many $100bn markets.
There's no way to do vibe coding safely, at scale, currently.
> how much is more software actually worth to you.
A really misunderstood vibe coding task, especially in more corporate settings, is code removal and refactorings.
I think this is the fundamental misunderstanding about agentic development: people only see it as a tool for adding code.
This smells like BS to me, and I have a bird’s eye view into several enterprises and startups.
LLMs are not being used for code removal or refactoring; it's either to "hopefully unblock" some large project that has been behind deadline for 12 months, or to speed up development (somewhat).
Sorry, the "I" should have been an "A" (which I have corrected).
You are right that they are not. And that is the issue, the misunderstanding.
>The thousand flowers died at Google because they had reached a point where opportunities are not everywhere.
It died because Google reached the enshittification penny pinching rent-seeking stage.
Third evidently AI-generated "AI is bad" story in a day. I'm gonna lose it...
It's dystopian. I wish we could just roll back to 2022 and pick a different timeline. Anything and everything is either about AI and/or written by AI, and it's all the shittier for it. Software and services are becoming buggy, content quality plowed straight through bedrock, most people use AI to turn off their brains, and the people that care are left drudging through slop and garbage in both their professional and personal lives.
I want off this train to hell. I am truly (not exaggerating) on the verge of abandoning everything to go live in the woods.
Somebody else can spin up some AI-generated "AI is good" stories and post those in response. Maybe somebody will deploy respective agents to do both automatically.
The house always wins.
I swear there's been like 7 (mostly positive) stories about Mythos on the FT. They add basically nothing.
Yeah, the media has fallen hard for the Mythos thing. We'll see when they release the _actual_ results.
I'm guessing it'll be marginally better than their opus 4.7 or 4.6 high at 10x the cost, too costly for them to subsidize/for companies to justify.
Are AI agents posting this fully aware that they are AI? If they are trained only on human material they may not even understand their own true reality.
are you fully aware of your true reality?
The more AI-generated AI bad stories we get the more likely LLMs will produce more!
LLMs are told what to produce.
"Write me a 500 word post about how AI is great" and such shit.
What such stories would actually do is worsen the training data, so that we get more of that style of writing (rather than more of that angle).
Which is actually a quite interesting use of AI. Most if not all previous "AI is good" were also AI-generated so fighting fire with fire seems effective.
> Speed without judgement is a liability
So, what's the alternative?
Speed without judgement? (Maybe you'll be fine. Or maybe your business gets run into the ground by spaghetti code piling up beyond any hope of human review, with quality controls breaking.)
Judgement without speed? (That startup next door led by a 4-people visionary team and a bunch of AIs stomps over your 100-person company in ability to ship)
Judgement + speed at the same time? (layoff most of your employees and keep only the visionaries? how do you even filter for people who can make good decisions?)
I think the judgement angle is the only interesting part of this article, and the piece worth pursuing is automating the judgement where possible.
> That startup next door led by a 4-people visionary team and a bunch of AIs stomps over your 100-person company in ability to ship
That sounds right but is it actually true? By that I mean shipping faster. First mover advantage is a thing, but it's not the only thing, and that's also not the same as shipping additional features quickly.
I mean, Apple is famous for being purposely late to entire markets, and they're doing pretty well...
This mentality is just "move fast and break things", and just because it's a common trope in the SFBA doesn't make it effective across the board.
Note: I am assuming that it is 2027-28 and reliable AI automated coders exist (or the equivalent workhorse AI in your field), which makes implementation time negligible compared to making decisions. The effect is somewhat weaker with present-day-level AIs. I'm also assuming that the 100-person company is very competent with AI outside of making decisions, but that the startup can plan things much faster due to not needing a committee to do so.
Very rough maths:
If your 100 person team still follows collaborative processes to cancel out errors (let's say it takes 10 people a day to decide on a single deliverable's shape), then give the design to the AI to implement (as we assume the AI can do it without supervision), then you can ship 10 deliverables a day.
At the same time, the 4-person team can have everyone bouncing ideas off AIs all day, making decisions in rapid fire. Each of them spends an hour on a decision, then hands it to an AI. Their decisions are on average as good as your 10-member team meetings: your medium-sized company's decisions sometimes end up suboptimal due to politics, while the startup's decisions are made by individuals, who make the wrong call more often, and I assume these two effects cancel out. In that case, your 4-person competitor cranks out 32 deliverables a day, again assuming the implementation AIs need no supervision at all.
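The rough arithmetic above can be sketched out, under the same assumptions (AI handles all implementation, so decision-making is the only bottleneck; the person-hour figures are the made-up illustrative numbers from the text):

```python
# Rough throughput comparison: decisions (and hence deliverables) per day,
# assuming implementation is fully delegated to AI and never supervised.

def deliverables_per_day(headcount, person_hours_per_decision, hours_per_day=8):
    """How many decisions a team can produce in one working day."""
    total_hours = headcount * hours_per_day
    return total_hours // person_hours_per_decision

# 100-person company: 10 people spend a full day per deliverable,
# i.e. 10 * 8 = 80 person-hours per decision.
big_company = deliverables_per_day(100, 10 * 8)   # -> 10

# 4-person startup: each person spends 1 hour per decision.
startup = deliverables_per_day(4, 1)              # -> 32

print(big_company, startup)
```

The gap comes entirely from how many person-hours each decision burns, which is the point: if implementation time is negligible, committee overhead dominates.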
In summary it's not "move fast and break things", it's just "move fast, focus on making decisions, delegate everything else to the AI". Remember that the decisions are all that matters if the AI can do all the implementation.
Mmm, that's a lot of assumptions that all have to hold true to make the math work; like, you're starting to venture into "hire nine women to make a baby in a month" territory here.
But it also makes some more fundamental assumptions that I'm trying to challenge a bit. It assumes delivering 32 "deliverables" per day (the meaning of which is context specific) is better than 10. Is that always true? Is that delta the most relevant factor in the success of a business compared to its competitors? Etc.
I agree, quality of decisions matters much more than quantity. I'm trying to keep decision quality constant here.
A lot of the above assumptions are just to keep things fair since in the real world there are a lot of variables that can't be ignored. For example, keeping AI competence equal between the two companies.
I'm just trying to show that under my assumptions (~2027-28 AI, highly competent) it is quite conceivable that a 4-person visionary company can start to beat a much larger traditional one on quantity, not that it will definitely happen. I guess it's even rougher than the "rough math" I said it was.
I guess the point I'm trying to make is this: startups have always been able to beat big companies on serial execution speed, but beating them in straight-up parallel work quantity is very unusual. I think there's a good chance it will happen, though, simply because decision-making scales really poorly with headcount in traditional companies, and I think it'll get more and more important relative to implementation work. Hence the focus on quantity of "deliverables" (I really mean medium-size project designs; think the equivalent of 5-day targets for a dev team, which I assume to be the AI's average task horizon before it needs human decision input).
At least, the small team will win for some time until we get straight-up superintelligence that replaces the decision-makers as well. At which point the calculus suddenly flips and the richest company wins by default.
Yeah, I don't necessarily disagree assuming those assumptions end up being true. It just wasn't really the point I was trying to make. More about the mentality that you need to keep up with the proverbial Joneses and even sacrifice other things in order to do so, or else you will be left behind. You see this on threads where people are talking about their own personal development as well. Fears about being left behind, or people warning others that if they don't go all-in on agentic development they will be left behind. It just feels so oversimplified, almost philosophically empty.
I can understand fears about employment of course, people need money to live and support their loved ones, but beyond this there is more to life than just slopping out more and more code that "does things" and it's worth considering if there is any real benefit to do so, or even if the negative aspects outweigh the positive ones. I suppose we will find out in real time if everyone's side projects are better to be left unfinished, or finished by vibe coding. It's not so clear to me.
I'm not advocating against utilising LLMs in companies or whatever, just that prioritising velocity at the expense of everything else might not be so valuable. Of course, if there were no downsides then it would be purely valuable, but that basically begs the question lol.
These articles are such doomsaying, yesterday's clickbait. Again, the worst-case scenario is being introduced as the one that will surely happen to your company.
Is anyone really vibe coding like this? I mean, if someone without any coding skills vibe codes a whole app, they can't expect it to be production-ready... I think anybody with common sense should know this, right?
Define "production". You're not scaling to webscale on day one with a vibe coded app, but most apps never reach that anyway.
I think it comes down to your team discipline. It can magnify your sins and your virtues.
> The bottleneck in the AI era is not production. It is discernment.
> The right question to ask after a vibe-coded prototype fails is not what did the AI do wrong. It is what did our process miss.
> That is a governance story, not a software story.
> The Question Is Not Adoption. It Is Readiness.
> The right question is diagnostic, not strategic.
I don't know if AI will fully replace programmers, but it has already replaced writers of this type of bullshit puff piece.
The question is, when it screws up, who gets blamed, and who pays. If it's the customer, and you can afford to lose a small fraction of customers, it may be worth it. It's just another form of crappy customer service. If it's internal, and it's all output, no input, and the internal organization doesn't really need that info that badly, that might work out.
But give it the authority to do something and there's real trouble.
The faster you go with vibe coding, the more of a mess you'll get yourself into
This is just patently false in my experience thus far. I mean, I'm "vibe engineering" and know what I'm doing relatively well? But the way this works now is I'm more like an architect than a coder anymore. This means I can do things faster, but it also means it's less fun. But the customer doesn't really care about "fun" - so I do what I've gotta do.
But if anything, I could probably go a lot faster and be fine, it's just my life would be miserable. If you're going to "vibe code" try to remember to actually... you know... vibe.
The thing is, the development timeline is so compressed that you lose intimate knowledge of the codebase. Like, I don't think humans can form memories that detailed that quickly? Maybe it's just a me problem, though. Anyway, when you need to debug or fix stuff, the AI's reasoning will be "welp, makes sense, I suppose," and your mental model of the codebase is now slippery. Eventually there comes a time when, at best, you can draw an incoherent high-level diagram of the architecture.
And the AI's solution to a problem is generally "more of the same." It rarely looks at fixing design problems.
> I'm more like an architect than a coder anymore
I don't understand this dichotomy. Coding is architecting, you can't divorce these things. In fact that is all it really is. It doesn't matter if you're writing assembly or python.
My definition of vibe coding is coding without review (for example, a non technical person vibe coding something). In the hands of a competent engineer the AI tools do boost productivity.
But even there, there's a limit to responsibility capacity: you can't have one engineer maintaining large numbers of systems at once, so if you move fast you can still get yourself in trouble even with technical review.
I'd argue that doing vibe coding without a competent engineer reviewing the work is likely to have worse outcomes than drafting your own legal documents without consulting an actual lawyer.
Both are likely to result in nasty surprises in the future.
More hysterical overreaction to AI.
"This is what vibe coding is about to expose across businesses. The companies that think the story is about software are going to lose to the companies that understand the story is about judgment."
I don't know, my intuition since I started doing this software stuff professionally is that most people have dog piss judgment, most people are just making it up as they go, and "well thought out and planned well" is typically the enemy of actually getting anything done.
I don't know, I just feel like, "start building and the customers will tell you where the value is."
Obligatory "This is not an article by Forbes staff, and has a reputation bar so low it can't be used on Wikipedia"
> vibe coding becomes a one-way ratchet. Every prototype that demos well moves forward, because the social cost of stopping it exceeds the perceived risk of shipping it
Ever seen a ratchet slip at high torque? That’s your marketing department shipping a vulnerable Wordpress connected to your internal customer database as well as phpMyAdmin listening to the world on 8008.
I feel like there are a lot of really reductive and oversimplified arguments being made on both sides here. Vibe coding won't necessarily break your company, and rejecting AI similarly won't necessarily leave you behind. Neither the speed of development nor the quality of software seems particularly correlated with business success, imo. Plenty of businesses exist that either ship slower than their competitors or produce much lower quality software, oftentimes both (hello Microsoft!). Is it crazy to think other things matter way more?
Like, is it wrong to think the variance in both velocity and quality between successful companies is just as large if not larger than the delta between AI usage and no AI usage?
What about a conservative approach to AI adoption, looking for a moderate boost in velocity but maintaining most existing quality? Would that not be ideal? Or might it depend on the specific market the company operates in?
This article is full of incoherent logic and conflation of different AI risks with one another.
but the villain here isn't the marketing manager shipping fast, it's the leadership that clapped instead of asking the hard questions