Nothing, because I’m a senior and LLMs never provide code that passes my sniff test; so far it remains a waste of time.
I have a job at a place I love, and more people in my direct and extended network are contacting me about work than ever before in my 20-year career.
And finally, I keep myself sharp by always making sure I challenge myself creatively. I’m not afraid to delve into areas that might look “solved” to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently added 3D to it from scratch as well.
All the while re-evaluating my own assumptions and those of others.
If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.
Honestly, this fear that there will soon be no need for human programmers comes from people who either don’t understand how LLMs work, or who do understand but have a business interest in convincing others that the technology is more than it is. I say that with confidence.
The last fairly technical career to get surprisingly and thoroughly automated, in the way this post worries about, was trading.
I spent a lot of time with traders in early '00's and then '10's when the automation was going full tilt.
Common feedback I heard from these highly paid, highly technical, highly professional traders in a niche industry running the world in its way was:
- How complex the job was
- How high a quality bar there was to do it
- How current algos never could do it and neither could future ones
- How there'd always be edge for humans
Today, the exchange floors are closed, SWEs run trading firms, and the traders who remain steer algos or work in specific markets such as bonds - and now bonds are getting automated. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed, and it happened quickly.
And lastly - LLMs don't have to be "great," they just have to be "good enough."
See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.
Edit - Advice: the job will change, the job might change in that you steer LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early and at various points in the settlement chain.
1) I believe we need true AGI to replace developers.
2) I don't believe LLMs are currently AGI or that if we just feed them more compute during training that they'll magically become AGI.
3) Even if we did invent AGI soon and replace developers, I wouldn't even really care, because the invention of AGI would be such an insanely impactful, world changing, event that who knows what the world would even look like afterwards. It would be massively changed. Having a development job is the absolute least of my worries in that scenario, it pales in comparison to the transformation the entire world would go through.
Back in the late 80s and early 90s there was a craze called CASE - Computer-Aided Software Engineering. The idea was humans really suck at writing code, but we're really good at modeling and creating specifications. Tools like Rational Rose arose during this era, as did Booch notation which eventually became part of UML.
The problem was it never worked. When generating the code, the best the tools could do was create all the classes for you and maybe define the methods for the class. The tools could not provide an implementation unless it provided the means to manage the implementation within the tool itself - which was awful.
Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.
Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?
At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.
Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.
I am 61 and have been a full-time developer since I was about 19. I have lost count of the 'next thing to replace developers' technologies I've seen over the years. Many of them showed promise. Many of them continue to be developed. Frameworks with higher and higher levels of abstraction.
I see LLMs as the next higher level of abstraction.
Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks that, as before, I simply see it has a long, long way to go.
Will it put me out of a job? I highly doubt it within my career. I still love the work and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.
Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.
I've been thinking about this a bunch and here's what I think will happen as cost of writing software approaches 0:
1. There will be way more software
2. Most people / companies will be able to opt out of predatory VC-funded software and just spin up their own custom versions that do exactly what they want, without having to worry about being spied on or rug pulled. I already do this with Chrome extensions; with the help of Claude I've been able to throw together things like a time-based website blocker in a few minutes.
3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking
4. Companies will hire far fewer people, and probably mostly engineers to automate routine tasks that would have previously been done by humans (ex: bookkeeping, recruiting, sales outreach, HR, copywriting / design). I've heard this is already happening with a lot of new startups.
EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:
As a junior dev, I do two conscious things to make sure I'll still be relevant for the workforce in the future.
1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some python, let it rip through a dataset from kaggle, let models run locally etc. Have my linalg and stats down and practiced. Basically if I had to make the switch to be an ML/AI engineer it would be easier than if I had to start from zero.
2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.
At the end of the day I think you don't have to worry if you have the “curiosity” that it takes to be a good software engineer. That is because, in a world where knowledge, experience, and willingness to probe out of curiosity are even more scarce than they are now, you'll stand out. You may leverage AI to assist you, but as long as you don't fully and blindly rely on it, you'll always be more qualified than someone who does.
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.
I don't see this trend. It just sounds like a weird thing to say, it fundamentally misunderstands what the job is
From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are
Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect
If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well known way of cheaping out
Firstly, as many commenters have mentioned, I don't see AI taking jobs en masse. They simply aren't accurate enough and they tend to generate more code faster which ends up needing more maintenance.
Advice #1: do work on your own mind. Try to improve your personal organization. Look into methodologies like GTD. Get into habits of building discipline. Get into the habit of storing information and documentation. From my observations many developers simply can't process many threads at once, making their bottleneck their own minds.
Advice #2: lean into "metis"-heavy tasks. There are many programming tasks which can be easily automated: making an app scaffold, translating a simple algorithm, writing tests, etc. This is the tip of the iceberg when it comes to real SWE work though. The intricate connections between databases and services, the steps you have to go through to debug that one feature, the hack you have to make in the code so the code behaves differently in the testing environment, and so on. LLMs require legibility to function: a clean slate, no tech debt, low entropy, order, etc. Metis is a term discussed in the book "Seeing Like a State", and it encompasses knowledge and skills gained through experience which are hard to transfer. Master these dark corners, hack your way around the code, create personal scripts for random one-off tasks. Learn how to poke and pry the systems you work on to get out the information you want.
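To make "personal scripts for random one-off tasks" concrete, here's the flavor of throwaway script I mean - a quick poke at a couple of environments to answer a question nobody documented. The hosts and the /health endpoint here are made-up placeholders, not anything real:

    # Throwaway script: compare build info and feature flags across environments.
    # Hosts and the /health endpoint are hypothetical placeholders.
    import json
    import urllib.request

    HOSTS = {
        "staging": "https://staging.internal.example/health",
        "prod": "https://prod.internal.example/health",
    }

    for env, url in HOSTS.items():
        with urllib.request.urlopen(url, timeout=5) as resp:
            payload = json.load(resp)
        print(env, payload.get("build"), payload.get("feature_flags"))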
I use Copilot a bit, and it can be really, really good.
It helps me out, but in terms of increasing productivity, it pales in comparison to simple auto-complete. In fact it pales in comparison to just having a good, big screen vs. battling away on a 13" laptop.
LLMs are useful and provide not insignificant assistance, but probably less assistance than the tools we've had for a long time. LLMs are not a game changer like some other things have been since I've been programming (since the late 1980s). Just going to operating systems with protected memory was a game changer: I could make mistakes and the whole computer didn't crash!
I don't see LLMs as something we have to protect our careers from, I see LLMs as an increasingly useful tool that will become a normal part of programming same as auto-complete, or protected memory, or syntax-highlighting. Useful stuff we'll make use of, but it's to help us, not replace us.
My anecdata shows people who have no/limited experience in software engineering are suddenly able to produce “software”. That is, code of limited engineering value. It technically works, but is ultimately an unmaintainable, intractable Heath Robinson monstrosity.
Coding LLMs will likely improve, but what will happen first: a good-at-engineering LLM; or a negative feedback cycle of training data being polluted with a deluge of crap?
1) have books like 'The Art of Programming' on my shelf, as AI seems to propagate solutions that are related to code golf more than robustness due to coverage in the corpus.
2) Force myself to look at existing code as abstract data types, etc., to help reduce the cost of LLMs' failure mode (confident, often competent, and inevitably wrong)
3) curry whenever possible to support the use of coding assistants and to limit their blast radius (see the sketch after this list).
4) Dig deep into complexity theory to understand what LLMs can't do, either for defensive or offensive reasons.
5) Realize that SWE is more about correctness and context than code.
6) Realize what many people are already discovering, that LLM output is more like clip art than creation.
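On point 3, a minimal sketch of what I mean by currying to limit blast radius, in Python with functools.partial; the send_email helper and its arguments are made up for illustration:

    from functools import partial

    def send_email(smtp_host: str, sender: str, recipient: str, body: str) -> None:
        """Wide interface: infrastructure details and message content together."""
        print(f"[{smtp_host}] {sender} -> {recipient}: {body}")

    # Bind the infrastructure arguments once, in code the assistant never touches.
    send_support_email = partial(send_email, "smtp.internal.example", "support@example.com")

    # A coding assistant only ever sees (and can only break) this narrow surface:
    send_support_email("alice@example.com", "Your ticket has been resolved.")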
I think in some sense the opposite could occur, where it democratizes access to becoming a sort of pseudo-junior-software engineer. In the sense that a lot more people are going to be generating code and bespoke little software systems for their own ends and purposes. I could imagine this resulting in a Cambrian Explosion of small software systems. Like @m_ke says, there will be way more software.
Who maintains these systems? Who brings them to the last mile and deploys them? Who gets paid to troubleshoot and debug them when they reach a threshold of complexity that the script-kiddie LLM programmer cannot manage any longer? I think this type of person will definitely have a place in the new LLM-enabled economy. Perhaps this is a niche role, but figuring out how one can take experience as a software engineer and deploy it to help people getting started with LLM code (for pay, ofc) might be an interesting avenue to explore.
This blew up way more than I expected. Thanks everyone for the comments, I read almost all of them.
For the sake of not repeating myself, I would like to clarify/state some things.
1. I did not intend to signal that SWE will disappear as a profession, but rather that it will undergo a transformation, as well as shrink in terms of the needed workforce.
2. Some people seem to be hanging on to the idea that they are doing unimaginably complicated things. And sure, some people are, but I doubt they are the majority of the SWE workforce. Can an LLM replace a COBOL developer in the financial industry? No, I don't think so. Can it replace the absurd number of people whose job description can be distilled to "reading/writing data to a database"? Absolutely.
3. There seems to be a conflict of opinion. Some people say that code quality matters a lot and LLMs are not there yet, while other people seem to focus more on "SWE is more than writing code".
Personally, based on some thinking and reading the comments, I think the best way to future-proof a SWE career is to move to a position that requires more people skills. In my opinion, good product managers who are eager to learn coding and combine it with LLMs for code writing will be the biggest beneficiaries of the upcoming trend. As for SWEs, it's best to start acquiring people skills.
There's a great Joel Spolsky post about developers starting businesses and realising that there's a bunch of "business stuff" that was abstracted away at big companies. [1]
One way to future proof is to look at the larger picture, the same way that coding can't be reduced to algorithm puzzles:
"Software is a conversation, between the software developer and the user. But for that conversation to happen requires a lot of work beyond the software development."
I was advising this MBA student's nascent startup (with the idea I might technical cofound once they're graduating), and they asked about whether LLMs would help.
So I listed some ways that LLMs practically would and wouldn't fit into the workflow of the service they were building. And I related it to a bunch of other stuff, including how to make the most of the precious real-world customer access they'd have, generating a success in the narrow time window they had, and the special obligations of that application domain niche.
Later, I mentally replayed the conversation in my head (as I do), and realized they were actually probably asking about using an LLM to generate the startup's prototype/MVP for the software they imagined.
And also, "generating the prototype" is maybe the only value that an MBA student had been told a "technical" person could provide at this point. :)
That interpretation of the LLM question didn't even occur to me when I was responding. I could've easily whipped up the generic Web CRUD any developer could do and the bespoke scrape-y/protocol-y integrations that fewer developers could do, both to a correctness level necessarily higher than the norm (which was required by this particular application domain). In the moment, it didn't occur to me that anyone would think an LLM would help at all, rather than just be an unnecessary big pile of risk for the startup, and potential disaster in the application domain.
I no longer have skin in the game since I retired a few years back.
But I have had over 30 years in a career that has been nothing if not dynamic the whole time. And so I no doubt would keep on keepin' on (as the saying goes).
Future-proof a SWE career though? I think you're just going to have to sit tight and enjoy (or not) the ride. Honestly, I enjoyed the first half of my career much more than where SWE ended up in the latter half. To that end, I have declined to encourage anyone from going into SWE. I know a daughter of a friend that is going into it — but she's going into it because she has a passion for it. (So, 1) no one needed to convince her but 2) passion for coding may be the only valid reason to go into it anyway.)
Imagine the buggy-whip makers gathered around the pub, grousing about how they are going to future-proof their trade as the new-fangled automobiles begin rolling down the street. (They're not.)
I have been unemployed for almost a year now (it started with a full-division layoff and then no willingness or motivation to look for work at the time). Seeing the way AI can do most of the native app development code I wrote (which is what I did), I am losing almost any motivation to even try now. But I have been sleeping the best I have since college (where I slept awesome), and I have been working out, watching lots of theatre and cinema, playing lots of sports (two of them almost daily), reading a lot of literature, lots of podcasts. I guess I will just wait for my savings to run dry and then see what options I'd have and which I wouldn't, if any at all. I know the standard thing to do and say is "up-skill", "change with the times" etc., and I am sure those have merit, but I just feel I am done with the constant catch-up, kind of checked out. I don't give a fuck anymore maybe, or I do and I am too demoralised to confront it.
LLMs will just write code without you having to go copy-pasta from SO.
The real secret is talent stacks: have a combination of talents and knowledge that is desirable and unique. Be multi-faceted. And don't be afraid to learn things that are way outside of your domain. And no, you wouldn't be pigeon-holing yourself either.
For example there aren't many SWEs that have good SRE knowledge in the vehicle retail domain. You don't have to be an expert SRE, just be good enough, and understand the business in which you're operating and how those practices can be applied to auto sales (knowing the laws and best practices of the industry).
As others have stated, I don't think we have anything to worry about.
As a SWE you are expected to neatly balance code, its architecture and how it addresses the customers' problems. At best, what I've seen LLMs produce is code monkey level programming (like copy pasting from StackOverflow), but then a human is still needed to tweak it properly.
What would be needed is General AI and that's still some 50 years away (and has been for the past 70 years). The LLMs are a nice sleight of hand and are useful but more often wrong than right, as soon as you delve into details.
Beware of the myopia and gatekeeping displayed in this thread.
There will be fewer SWE, DevOps, and related jobs available in the next 24 months. Period.
Become hyper-aware of how a business measures your value as a SWE. How? Ask pointed, uncomfortable questions that force the people paying you to think and be transparent.
Stay on the cutting edge of how to increase your output and quality using AI.
E.g.: how long does it take for a new joiner to produce code? How do you cut that time down by 10x using “AI”?
Most organizations don't move that fast. Certainly not fast enough to need this kind of velocity.
As it is I spend 95% of my time working out what needs to be done with all of the stakeholders and 5% of my time writing code. So the impact of AI on that is negligible.
When doing X becomes cheaper with the invention of new tools, X is now more profitable and humanity tends to do much more of it.
Nearly all code was machine-generated after the invention of compilers. Did the compiler destroy programming? Absolutely not. Compilers and other tools like higher-level programming languages really kickstarted the software industry.
IMO the potential transition from writing programming languages -> writing natural language and having an LLM generate the program is still a smaller change than machine code/assembly -> modern programming languages.
If the invention of programming languages expanded the population of programmers from thousands to the 10s of millions, I think LLMs could expand this number again to a billion.
Build something with an LLM outside of your comfort zone.
I was a pretty early adopter to an LLM based workflow. The more I used it, the worse my project became, and the more I learned myself. It didn’t take long for my abilities to surpass the LLM, and for the past year my usage of LLMs has been dropping dramatically. These days I spend more time in docs than in a chat conversation.
When chatGPT was announced, many people thought programming was over. As in <12 months. Here we are several years later, and my job looks remarkably the same.
I would absolutely love to not have to program anymore. For me, programming is a means to an end. However, after having used LLMs pretty much everyday for 2.5 years, it’s very clear to me that software engineering won’t be changing anytime soon. Some things will get easier and workflows may change, but if you want to build and maintain a moderately difficult production grade application with decent performance, you will still be programming in 10 years
My solution has been to work with LLMs and be on the one side of the industry trying to replace the other. I switched focus fairly early on in the "AI hype" era, mainly because I thought it looked like a lot of fun to play with LLMs. After a few years I realized I'm quite a bit ahead of my former coworkers who stayed still. I've worked on both the product end and closer to the hardware, and as more and more friends ask for help on problems I've realized I do in fact have a lot of understanding of this space.
A lot of people in this discussion seem to be misunderstanding the way the industry will change with LLMs. It's not as simple as "engineers will be automated away", in the same sense that we're still a long way away from Uber drivers disappearing because of self-driving cars.
But the impact of LLMs on software is going to be much closer to the impact of the web and web development on native application development. People used to scoff at the idea that any serious company would be run from a web app. Today I would say the majority of software engineers are, directly or indirectly, building web-based products.
LLMs will make coding easier, but they also enable a wide range of novel solutions within software engineering itself. Today any engineer can launch a zero-shot classifier that performs better than what would have taken a team of data scientists just a few years ago.
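As a rough sketch of what that looks like today, assuming the Hugging Face transformers library and one commonly used public NLI model (the labels and input text below are made up):

    # Zero-shot classification with an off-the-shelf NLI model.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    result = classifier(
        "The checkout page throws a 500 error when I apply a coupon.",
        candidate_labels=["billing", "bug report", "feature request", "account access"],
    )
    print(result["labels"][0], result["scores"][0])  # best label and its score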
I'm a decent engineer working as a DS in a consulting firm. In my last two projects, I checked in (or corrected) so much more code than the other two junior DS's in my team, that at the end some 80%-90% of the ML-related stuff had been directly built, corrected or optimized by me. And most of the rest that wasn't, was mostly because it was boilerplate. LLMs were pivotal in this.
And I am only a moderately skilled engineer. I can easily see somebody with more experience and skills doing this to me, and making me nearly redundant.
"The use of FORTRAN, like the earlier symbolic programming, was very slow to be taken up by the professionals. And this is typical of almost all professional groups. Doctors clearly do not follow the advice they give to others, and they also have a high proportion of drug addicts. Lawyers often do not leave decent wills when they die. Almost all professionals are slow to use their own expertise for their own work. The situation is nicely summarized by the old saying, “The shoe maker’s children go without shoes”. Consider how in the future, when you are a great expert, you will avoid this typical error!"
Richard W. Hamming, “The Art of Doing Science and Engineering”
Today, lawyers delegate many paralegal tasks like document discovery to computers and doctors routinely use machine learning models to help diagnose patients.
So why aren’t we — ostensibly the people writing software — doing more with LLM in our day-to-day?
If you take seriously the idea that LLM will fundamentally change the nature of many occupations in the coming decade, what reason do you have to believe that you’ll be immune from that because you work in software? Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
I will not believe the AI takeover until there's evidence. I haven't seen any examples, apart from maybe TODO list apps. Needless to say, that's nowhere near the complexity required at most jobs. Even if my career were endangered, I would continue the path I've taken so far: have a basic understanding of as much as possible (push out the edges of the knowledge circle, or whatever it's called), and strive to have expert knowledge in maybe 1, 2, or 3 subjects which pay for your daily bread. Basically, just be good at what you do, and that should be fine. As for beginners, I advise diving deep into a subject, starting with a solid foundation, and taking a hands-on approach, while maintaining a consistent effort.
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output, until eventually LLMs will become so good, that senior people won't be needed any more.
A steeper learning curve in a professional field generally translates into higher earnings. The longer you have to be trained to be helpful, the more a job generally earns.
The Cloud was meant to decimate engineering AND development. But what it did was create enough chaos that there's a higher demand for both than ever, just maybe not in your region and for your skillset.
LLMs are guaranteed to cause chaos, but the outcome of that chaos is not predictable. Will every coder now output the same as a team of 30, but there are 60 times as many screwed-up projects made by wannabe founders that you have to come in and clean up? Will businesses find ways to automate code development, then turn around and have to bring the old guys back in constantly to fix up the pipeline? Will we all be coding in black boxes that the AI fills in?
I would make sure you just increase your skills and increase your familiarity with LLMs in case they become mandatory.
It depends on whether you think they are a paradigm change (at the very least) or not. If you don't then either you will be right or you will be toast.
For those of us who do think this is a revolution, you have two options:
1. Embrace it.
2. Find another career, presumably in the trades or other hands-on vocations where AI ingress will lag behind for a while.
To embrace it you need to research the LLM landscape as it pertains to our craft and work out what interests you and where you might best be able to surf the new wave, it is rapidly moving and growing.
The key thing (as it ever was) is to build real world projects mastering LLM tools as you would an IDE or language; keep on top of the key players, concepts and changes; and use your soft skills to help open-eyed others follow the same path.
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code
LLMs do help, but to a limited extent. I've never heard of anyone in the second category.
> how do you future-proof your career in light of, the inevitable, LLM take over?
Generally speaking, coding has never been a future-proof career. Ageism, changes in technology, economic cycles, offshoring... When I went into the field in the early 2000s, it was kind of expected that most people, if they wanted to be somewhat successful, would eventually have to move to a leadership/management position.
Things changed a bit with successful tech companies competing for talent and offering great salaries and career paths for engineers, especially in the US, but it could very well be temporary and shouldn't be taken for granted.
LLMs are one factor among many that can impact our careers, and probably not the most important. I think there's a lot of hype, and we're not being replaced by machines anytime soon. I don't see a world where an entrepreneur commands an LLM to write a service or a novel app for them, or simply to maintain an existing complex piece of software.
Who knows what the future holds? As a SWE you are expected to adapt and use modern technology. Always learning is a part of the job. Look at all the new things to build with, frameworks being updated/changes, etc. Making things easier.
LLMs will make things easier, but it's easy to disagree that they will threaten a developer's future with these reasons in mind:
* Developers should not be reinventing the wheel constantly. LLMs can't work very well on subjects they have no info on (proprietary work).
* The quality is going to get worse over time with the internet being slopped up with the mass disregard for quality content. We are at a peak right now. Adding more parameters isn't going to make the models better. It's just going to make them better at plagiarism.
* Consistency - a good codebase has a lot of consistency to avoid errors. LLMs can produce good coding examples, but they will not have much regard for how -your- project is currently written. Introducing inconsistency makes maintenance more difficult, let alone the bugs that might slip in and wreak havoc later.
I try to go as low-level as I can. During my recent research into PowerPC 32-bit assembly language I found 1) there is not much material online, and what is available is usually PDFs with pictures, which can be difficult for LLMs to pick up, and 2) indeed, ChatGPT didn't give a good answer even for a Hello, World example.
I think hardware manufacturers, including ones that produce chips, are far less motivated to put things online and thus have a wide moat. "Classic" chips such as the 6502 or 8086 definitely have far more material. "Modern" popular ones such as x86/64 also have a lot of material online. But "obscure" ones don't.
On the software side, I believe LLMs or other AI can easily replace, within 10 years, juniors who only know how to "fill in" code designed by someone else in a popular language (Python, Java, JavaScript, etc.). In fact they have greatly supported my data engineering work in Python and Scala -- do they always produce the most efficient solution? No. Do they greatly reduce the time I need to get to a solution? Yes, definitely!
Disclaimer: I wholeheartedly hate all the systems they call AI these days and I hate the culture around it for technological, ecological, political, and philosophical reasons.
I won't future-proof my career against LLMs at all. If I ever see myself in the position that I must use them to produce or adjust code, or that I mostly read and fix LLM-generated code, then I'll leave the industry and do something else.
I see potential in them to simplify code search/navigation or to even replace stackoverflow, but I refuse to use them to build entire apps. If management in turn believes that I'm not productive enough anymore then so be it.
I expect that lots of product owners and business people will be using them in order to quickly cobble something together and then hand it over to a dev for "polishing". And this sounds like a total nightmare to me. The way I see it, devs make this dystopian nightmare a little more true every time they use an LLM to generate code.
In a market, scarce services will always be more valuable than abundant services.
Assuming that AI will at some point be capable of replacing an SWE, to future-proof your career, you will need to learn how to provide services that AI cannot provide. Those might not be what SWEs currently usually offer.
I believe it's actually not that hard to predict what this might be:
1. Real human interaction, guidance and understanding: This, by definition, is impossible to replace with a system, unless the "system" itself is a human.
2. Programming languages will be required in the future as long as humans are expected to interface with machines and work in collaboration with other humans to produce products. In order not to lose control, people will need to understand the full chain of experience required to go from junior SWE to senior SWE - and beyond. Maybe fewer people will be required to produce more products, but they will still be required as long as humanity doesn't decide to give up control over basically any product that involves software (which will very likely be almost all products).
3. The market will get bigger and bigger to the point where nothing really works without software anymore. Software will most likely be even more important to have a unique selling point than it is now.
4. Moving to a higher level of understanding of how to adapt and learn is beneficial for any individual and actually might be one of the biggest jumps in personal development. This is worth a lot for your career.
5. The current state of software development in most companies that I know has reached a point where I find it actually desirable for change to occur. SWE should improve as a whole. It can do better than Agile for sure. Maybe it's time to "grow up" as a profession.
I think the real-world improvements will plateau, and it'll take a while for current enterprises just to adopt what is possible today, but that is still going to cause quite a bit of change. You can imagine us going from AI chat bots with RAG on traditional datastores, to AI-enhanced but still human-engineered SaaS products, to bespoke AI-generated and maintained products, to fully E2E AI agentic products.
An example: do you tell the app to generate a Python application to manage customer records, or do you tell it "remember this customer record so other humans or agents can ask for it" and it knows how to do that efficiently and securely?
We'll probably see more 'AI Reliability Engineer' type roles, which will likely be around building and maintaining evaluation datasets, tracking and stomping out edge cases, figuring out human intervention/escalation, model routing, model distillation, context window vs. fine-tuning, and overall intelligence-cost management.
>junior to mid level software engineering will disappear mostly, while senior engineers will transition
It's more likely that the number of jobs at all levels of seniority will decrease, but none will disappear.
What I'm interested to see is how the general availability of LLMs will impact people's "willingness" to learn coding. Will people still "value" coding as an activity worth their time?
For me, as an already "senior" engineer, using LLMs feels like a superpower: when I think of a solution to a problem, I can test and explore some of my ideas faster by interacting with one.
For a beginner, I feel that having all of this available can be super powerful too, but also truly demotivating. Why bother to learn coding when the LLM can already do better than you? It takes years to become "good" at coding, and motivation is key.
As a low-dan Go player, I remember feeling a bit that way when AlphaGo was released. I'm still playing Go but I've lost the willingness to play competitively, now it's just for fun.
I just copied the html from this thread into Claude to get a summary. I think being very realistic, a lot of SWE job requirements will be replaced by LLMs.
The expertise to pick the right tool for the right job, based on previous experience, that senior engineers possess is something that can probably be taught to an LLM.
Having the ability to provide a business case for the technology to stakeholders that aren't technologically savvy is going to be a people job for a while still.
I think positioning yourself as an expert / bridge between technology and business is what will future-proof a lot of SWE, but in reality, especially at larger organizations, there will be a trimming process where the workload of what was thought to need 10 engineers can be done with 2 engineers + LLMs.
I'm excited about the future where we're able to create software quicker and more contextual to each specific business need. Knowing how to do that can be an advantage for software engineers of different skill levels.
Learn the tools, use them where they shine avoid them where they do not. Your best bet is just learn to start using LLM in your day to day coding and find out what works and what doesn’t.
I work on a pretty straightforward CRUD app in a niche domain and so far they haven’t talked about replacing me with some LLM solution. But LLMs have certainly made it a lot faster to add new features. I’d say working in a niche domain is my job security. Not many scientists want to spend their time trying to figure out how to get an LLM to make a tool that makes their life easier - external competitors exist but can’t give the same intense dedication to the details required for smaller startups and their specific requirements.
A side note - maybe my project is just really trivial, maybe I’m dumber or worse at coding than I thought, or maybe a combination of the above, but LLMs have seemed to produce code that is fine for what we’re doing especially after a few iteration loops. I’m really curious what exactly all these SWEs are working on that is complex enough that LLMs produce unusable code
I’ve carved out a niche of very low-level systems programming and optimization. I think it’ll be a while before LLMs can do what I do. I also moved to staff, so I think a lot of what I do now will still exist, with junior/mid-level devs being reduced by AI.
But I am focusing on maximizing my total comp so I can retire in 10-15 years if I need to. I think most devs are underestimating where this is eventually going to go.
Short term defense is learning about, and becoming an expert in, using LLMs in products.
Longer term defense doesn't exist. If software engineering is otherwise completely automated by LLMs, we're in AGI territory, and likely recursive self-improvement plays out (perhaps not AI-foom, but a huge uptick in capability / intelligence per month / quarter).
In AGI territory, the economy, resource allocation, labor vs. capital all transition into a new regime. If problems that previously took hundreds of engineers working over multiple years can now be built autonomously within minutes, then there's no real way to predict the economic and social dynamics that result from that.
I'm working as if in 2-3 years the max comp I will be able to get as a senior engineer will be 150k. And it will be hard to get that. It's not that the job will disappear, it's that the bar to produce working software will go way down. Most knowledge and skill sets will be somewhat commoditized.
Also, I'm pretty sure this will make outsourcing easier, since foreign engineers will be able to pick up technical skills more easily.
Another thing I want to note is; even if I get replaced by AI, I think I'd be sad for a bit. I think it'd be a fun period to try to find a "hand-focused" job. Something like a bakery or chocolatier. I honestly wouldn't mind if I could do the same satisfying work but more hands-on, rather than behind a desk all day
I might be too optimistic but I think LLMs will basically replace the worst and most junior devs, while the job of anyone with 5 or 10+ years of experience will be babysitting AI codevelopers, instead of junior developers.
I find a lot of good use for LLMs but it's only as a multiplier with my own effort. It doesn't replace much anything of what I do that actually requires thought. Only the mechanical bits. So that's the first thing I ensure: I'm not involved in "plumbing software development". I don't plug together CRUD apps with databases, backend apis and some frontend muck. I try to ensure that at least 90% of the actual code work is about hard business logic and domain specific problems, and never "stuff that would be the same regardless of whether this thing is about underwear or banking".
If I can delegate something to it, it's every bit as difficult as delegating to another developer. Something we all know is normally harder than doing the job yourself. The difference between AI Alice and junior dev Bob, is that Alice doesn't need sleep. Writing specifications, reviewing changes and ensuring Alice doesn't screw up is every bit as hard as doing the same with Bob.
And here is the kicker: whenever this equation changes, that we have some kind of self-going AI Alice, then we're already at the singularity. Then I'm not worried about my job, I'll be in the forest gathering sticks for my fire.
To me it seems possible that seniors will become even more in demand, because learning to become a decent developer is actually harder if you're distracted by leaning on LLMs. Thus, the supply of up-and-coming good new seniors may be throttled. This, to me, is because LLMs don't abstract code well.
Once upon a time electronics engineers had to know a lot about components. Then along came integrated circuits and they had to know about them, and less about components.
Once upon a time programmers had to know machine code or assembler. I've never had to know about those for my job.
I programmed in C++ for years and had to know plenty about memory. These days I rarely need to as much, but some basic knowledge is needed.
It's fine if a student learns to code mostly in Python, but a short course in C is probably a good idea even today.
But as for LLMs, you can't say "now I need to know less about this specific thing that's been black-boxed for me", because it isn't wrapped conveniently like that.
I'm extremely glad that when I was a junior, LLMs weren't around. They really seem like a barrier to learning. It's impossible to understand all the generated code, but it's also difficult, without significant career experience, to judge what you need to know about and what you don't. I feel sorry for juniors today, to be honest!
I've been quite worried about it at this point. However, I see that "this is not going to happen" thinking is likely not going to help me. So I'd rather go with the flow and use it where reasonable, even if it's not clear to me whether AI will ever truly leave the hype stage.
FWIW I've been allowed to use AI at work since ChatGPT appeared, and usually it wasn't a big help for coding. For education, though, and for trying to "debug" funny team interactions, I've certainly seen some value.
My guess is though that some sort of T-shaped skillset is going to be more important while maintaining a generalist perspective.
I feel the influencer crowd is massively overblowing the actual utility of LLMs. It feels akin to the "cryptocurrency will take over the world" trope 10 years ago, and yet... I don't see any crypto in my day-to-day life to this day. Will LLMs improve general productivity and the boring tasks nobody wants to do? Sure, but to believe any more than that, frankly I'd like some hard evidence of them actually being able to "reason". And reason better than most devs I've ever worked with, because quite honestly humans are also pretty bad at writing software, and LLMs learn from humans, so...
I see this sort of take from a lot of people and I always tell them to do the same exercise. A cure for baseless fears.
Pick an LLM. Any LLM.
Ask it what the goat river crossing puzzle is. With luck, it will tell you about the puzzle involving a boatman, a goat, some vegetable, and some predator. If it doesn’t, it’s disqualified.
Now ask it to do the same puzzle but with two goats and a cabbage (or whatever vegetable it has chosen).
It will start with the goat. Whereupon the other goat eats the cabbage left with it on the shore.
Hopefully this exercise teaches you something important about LLMs.
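If you'd rather run the exercise programmatically than in a chat window, here's a rough sketch using the OpenAI Python client; the model name is only an illustrative choice, and any chat-capable LLM would do:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; pick any chat model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(ask("What is the goat river crossing puzzle?"))
    print(ask("Now solve the same puzzle, but with two goats and a cabbage. "
              "The boat still carries only the boatman plus one item."))
    # Check whether the answer ferries a goat first and leaves the other goat
    # alone with the cabbage on the shore - the failure described above.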
I recall code generation from class diagrams, and then low-code, being declared the death of all devs.
The current generation of LLMs is immensely expensive, and will become more so if the VC money disappears.
A FT dev is happy to sit there and deal with all the whining, meetings, alignment, 20 iterations of refactoring, architectural changes, and late Friday evenings putting out fires. To make an LLM work 40h/week with that much context would cost an insane amount and require several people steering it. Also, the level of ambiguous garbage spewed by management and requirements engineers, which I turn into value, is... difficult with LLMs.
Let's put it this way: before LLMs, we had wonderful outsourcing firms that cost slightly less than maintaining an in-house team; if devs were going to disappear, that would have been the nail in the coffin. LLMs need steering and don't deal well with ambiguity, so I don't see a threat.
Also, for all the people singing the LLM holy song: try asking Windsurf or Cursor to generate something niche that doesn't exist publicly and see how well it does. As an aside, I closed several PRs last week because people had started submitting generated code with 100+ LOC that could have been done in one or two lines if the authors had taken some time to review the latest release of the library.
Here’s a question for you: have they automated trains yet? They’re literally on tracks. Until trains are fully automated, then after that cars, then later airplanes, then maybe, just maybe, “AI” will come for thought work. Meanwhile, Tesla’s “AI” still can’t stop running into stopped firetrucks[1]…
The simple answer is to use LLMs so you can put it on your resume. Another simple answer is to transition to a job where it's mostly about people.
The complex answer is we don't really know how good things will get and we could be at the peak for the next 10-20 years, or there could be some serious advancements that make the current generation look like finger-painting toddlers by comparison.
I would say the fear of no junior/mid positions is unfounded though since in a generation or two, you'd have no senior engineers.
LLMs are most capable where they have a lot of good data in their training corpus and not much reasoning is required. Migrate to a part of the software industry where that isn't true, e.g. systems programming.
The day LLMs get smart enough to read a chip datasheet and then realize the hardware doesn't behave the way the datasheet claims it does is the day they're smart enough to send a Terminator to remonstrate with whoever is selling the chip anyway so it's a win-win either way, dohohoho.
LLMs for now only have 2-3 senses. The real shift will come when they can collect data using robotics. Right now a human programmer is needed to explain the domain to the AI and review the code based on the domain.
On the bright side, every programmer can start a business without needing to hire an army of programmers. I think we are getting back to an artisan-based economy where everyone can be a producer without a corporate job.
A couple reasons why I am not scared of AI taking my job:
1. They are trained to be average coders.
The way LLMs are trained is by giving them lots of examples of previous coding tasks. By definition, half of those examples are below average. Unless there is a breakthrough in how they are trained, any above-average coder won't have anything to worry about.
2. They are a tool that can (and should) be used by humans.
Computers are much better at chess than any human, but a human with a computer is better than any computer. The same is true with a coding LLM. Any SWE who can work with an LLM will be much better than any LLM.
3. There is enough work for both.
I have never worked for a company where I have had less work when I left than when I started. I worked for one company where it was estimated that I had about 2 years worth of work to do and 7 years later, when I left, I had about 5 years of work left. Hopefully LLMs will be able to take some of the tedious work so we can focus on harder tasks, but most likely the more we are able to accomplish the more there will be to accomplish.
It's lowering the bar for developers to enter the marketplace, in a space that is wildly under saturated. We'll all be fine. There's tons of software to be built.
More small businesses will be able to punch-up with LLMs tearing down walled gardens that were reserved for those with capital to spend on lawyers, consultants and software engineering excellence.
It's doing the same thing as StackOverflow -- hard problems aren't going away, they're becoming more esoteric.
If you're at the edge, you're not going anywhere.
If you're in the middle, you're going to have a lot more opportunities because your throughput should jump significantly so your ROI for mom and pop shops finally pencils.
Just be sure you actually ship and you'll be fine.
I am not afraid of companies replacing Software Engineers with LLMs while being able to maintain the same level of quality. The real thing to worry about is that companies do what they did to QA Engineers, technical writers, infrastructure engineers, and every other specialized role in the software development process. In other words, they will try to cut costs, and the result will be worse software that breaks more often and doesn't scale very well.
Luckily, the bar has been repeatedly lowered so that customers will accept worse software. The only way for these companies to keep growing at the rate their investors expect is to try and cut corners until there's nothing left to cut. Software engineers should just be grateful that the market briefly overvalued them to the degree that it did, and prepare for a regression to the mean.
The biggest "fault" of LLMs (which continues) is their compliance. Being a good software dev often means pushing back and talking through tradeoffs, and finding out what the actual business rules are. I.e. interrogating the work.
Even if these LLM tools do see massive improvements, it seems to me that they are still going to be very happy to take the set of business rules that a non-developer gives them, and spit out a program that runs but does not do what the user ACTUALLY NEEDS them to do. And the worst thing is that the business user may not find out about the problems initially, will proceed to build on the system, and these problems become deeper and less obvious.
If you agree with me on that, then perhaps what you should focus out is building out your consulting skills and presence, so that you can service the mountains of incoming consulting work.
For thousands of years, the existence of low cost or even free apprentices for skilled trades meant there was no work left for experts with mastery of the trade.
> the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.
It's not enough to make generalizations yet. What kind of projects? What tuning does it need? What kind of end users? What kind of engineers?
In the field I work in, I can't see how LLMs can help with a clear path to converging on a reliable product. If anything, I suspect we will need more manual analysis to fix the insanity we receive from our providers if they start working with LLMs.
Some jobs will disappear, but I've yet to see signs of anything serious emerging yet. You're right about juniors though, but I suspect those who stop training will lose their life insurance and starve under LLMs, either through competition or the amount of operational instability they will bring.
I think what we're seeing echoes a pattern we have lived through many times before, just with new tooling. Every major leap in developer productivity - from assembly to higher-level languages, from hand-rolled infrastructure to cloud platforms, from code libraries to massive open-source ecosystems - has sparked fears that fewer developers would be needed. In practice, these advancements have not reduced the total number of developers; they have just raised the bar on what we can accomplish.
LLMs and code generation tools are no exception. They will handle some boilerplate and trivial tasks, just like autocompletion, frameworks, and package managers already do. This will make junior-level coding skills less of a differentiator over time. But it is also going to free experienced engineers to spend more time on the complex, high-level challenges that no model can solve right now - negotiating unclear requirements, architecting systems under conflicting constraints, reasoning about trade-offs, ensuring reliability and security, and mentoring teams.
It is less about "Will these tools replace me?" and more about "How do I incorporate these tools into my workflow to build better software faster?" That is the question worth focusing on. History suggests that the demand for making complex software is bottomless, and the limiting factor is almost never just "typing code." LLMs are another abstraction layer. The people who figure out how to use these abstractions effectively, augmenting their human judgment and creativity rather than fighting it, will end up leading the pack.
"until eventually LLMs will become so good, that senior people won't be needed any more"
You are assuming AGI will come eventually.
I assume eventually the earth will be consumed by the sun, but I am equally less worried as I don't see it as a near future.
I am still regularly disappointed when I try out the newest hyped model. They usually fail my tasks and require lots of manual labour.
So if that gets significantly better, I can see them replacing junior devs. But without understanding, they cannot replace a programmer for any serious task. They may, however, enable more people to become good-enough programmers for their simple tasks. So less demand for less-skilled devs indeed.
My solution - the same as before - improve my skills and understanding.
LLMs can help us engineers gain context quickly on how to write solutions. But I don't see them replacing us anytime soon.
I'm currently working on a small team with a senior engineer. He's the type of guy who preaches letting Cursor or whatever new AI IDE is relevant nowadays do most of the work. Most of his PRs are utter trash. Time to ship is slow and code quality is trash. It's so obvious that the code is AI-generated. Bro doesn't even know how to rebase properly, resulting in overwriting (important) changes instead of fixing conflicts. And guess who has to fix their mistakes (me, and I'm not even a senior yet).
I have as much interest in the art of programming as in building products, and becoming some sort of AI whisperer sounds tremendously tedious to me. I opted out of the managerial track for the same reason. Fortunately, I have enough money saved that I can probably just work on independent projects for the rest of my career, and I’m sure they’ll attract customers whether or not they were built using AI.
With that said, looking back on my FAANG career in OS framework development, I’m not sure how much of my work could have actually been augmented by AI. For the most part, I was designing and building brand new systems, not gluing existing parts together. There would not be a lot of precedent in the training data.
So far I haven't found much use for LLM code generation. I'm using Copilot as a glorified autocomplete and that's about it. I tried to use LLM to generate more code, but it takes more time to yield what I want than to write it myself, so it's just not useful.
Now, ChatGPT really has become an indispensable tool for me, right up there with Google and StackOverflow.
So I don't feel threatened so far. I can see the potential, and I think it's very possible for LLM-based agents to replace me eventually - probably not this generation, but a few years later, who knows. But that's just hand-waving, so getting worried about a possible future is not useful for mental well-being.
I think there's been a lot of fear-mongering on this topic and "the inevitable LLM take over" is not as inevitable as it might seem, perhaps depending on your definition of "take over."
I have personally used LLMs in my job to write boilerplate code, write tests, make mass renaming changes that were previously tedious to do without a lot of grep/sed-fu, etc. For these types of tasks, LLMs are already miles ahead of what I was doing before (do it myself by hand, or have a junior engineer do it and get annoyed/burnt out).
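To make that concrete, here's a rough sketch of the kind of mechanical rename I mean - the directory, glob, and identifiers are made up for illustration, and an LLM (or previously a long sed incantation) now handles this sort of thing:

    # Hypothetical one-off rename script; paths and names are illustrative.
    from pathlib import Path

    OLD_NAME = "LegacyPaymentClient"
    NEW_NAME = "PaymentClient"

    for path in Path("src").rglob("*.py"):
        text = path.read_text()
        if OLD_NAME in text:
            # Plain string replace; a real pass would also grep for
            # stragglers in docs, configs, and tests.
            path.write_text(text.replace(OLD_NAME, NEW_NAME))
            print(f"updated {path}")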
However, I have yet to see an LLM that can understand an already established large codebase and reliably make well-designed additions to it, in the way that an experienced team of engineers would. I suppose this ability could develop over time with large increases in memory/compute, but even state-of-the-art models today are so far away from being able to act like an actual senior engineer that I'm not worried.
Don't get me wrong, LLMs are incredibly useful in my day-to-day work, but I think of them more as a leap forward in developer tooling, not as an eventual replacement for me.
Better tools that accelerate how fast engineers can produce software? That's not a threat, just a boon. I suspect the actual transition will just be people learning/focusing on somewhat different, higher-level skills rather than lower-level coding. Like going from assembly to C, we're hoping we can transition more towards natural language.
> junior to mid level software engineering will disappear mostly
People don't magically go to senior. You can't get seniors without juniors and mids leveling up. We'll always need to take in and train new blood.
My advice? Focus on the business value, not the next ticket. Understand what the actual added value of your work is to your employer. It won’t help in the day-to-day tasks but it will help you navigate your career with confidence.
Personally - and I realize this is not generalizable advice - I don’t consider myself a SWE but a domain expert who happens to apply code to all of his tasks.
I’ve been intentionally focusing on a specific niche - computer graphics, CAD and computational geometry. For me writing software is part of the necessary investment to render something, model something or convert something from domain to domain.
The fun parts are really fun, but the boring parts are mega-boring. I'm actually eagerly awaiting LLMs reaching some level of human parity, because there simply isn't enough talent in my domains to do all the things that would be worthwhile to do (cost and return on investment, right).
The reason is that my domain is so niche you can't just web-scrape and label your way to the intuition and experience of two decades spent working in various industries, from graphics benchmarking and automotive HUDs to industrial mission-critical AEC workflows and realtime maps.
There is enough knowledge out there to train LLMs to get a hint as soon as I tie a few concepts together, and then they fly. But the code they write at the moment, apart from simple subroutines, is not good enough for an unsupervised assistant - honestly, most of it is useless. I'm optimistic, though, and hope they will improve.
Right now LLMs have a slight advantage over stackoverflow etc in that they'll react to your specific question/circumstances, but they also require you to doublecheck everything they spit out. I don't think that will ever change, and I think most of the hype comes from people whose salaries depend on it being right around the corner or people who are playing a speculation game (if I learn this tool I'll never have to work again/ avoid this tool will doom me to poverty forever).
Sysadmin here. That's what we used to be called. Then some fads came, some went. DevOps, SRE, etc.
I have notes on particular areas I am focusing on, but I have a small set of general notes on this, and they seem to apply to you SWEs also.
Headline: Remember data is the new oil
Qualifier: It's really all about IP portfolios these days
1) Business Acumen: How does the tech serve the business/client needs, from a holistic perspective of the business? (e.g. sysadmins have long had to have big-picture finance, ops, strategy, and industry knowledge) - aka turn tech knowledge into business knowledge.
2) Leadership Presence: Ability to meet w/c-suite, engineers, clients, etc, and speak their languages, understand their issues, and solve their issues. (ex: explain ROI impacts for proposals to c-suite)
3) Emotional Intelligence: Relationship building in particular.
(note: this is the thing I neglected the most in my career and regretted it!)
4) Don't be afraid to use force-multiplier tools. In this discussion that means LLMs, but it can mean other things too. Adopt early, keep up with tooling, but focus on the fundamental tech and don't get bogged down in proprietary Stockholm syndrome. Augment yourself to be better; don't try to replace yourself.
----
Now, I know that's a simplistic list, but you asked, so I gave you what I had. What I am doing (besides trying to get my mega-uber-huge side project off the ground) is recentering on certain areas I don't think are going anywhere: on-prem, datacenter buildouts, high-compute, ultra-low-latency, scalable systems, energy, construction of all the previous things, and the banking knowledge to round it all out.
If my side-project launch fails, I'm even considering data-center sales instead of the tech side. Why? I'm tired of rescuing the entire business to no fanfare while sales people get half my salary in a bonus. Money aside, I can still learn and participate in the builds as sales (see it happen all the time).
In other words, I took my old-school niche set of knowledge and adapted it over the years as the industry changed, focusing on what I do best (in this case, operations - aka the ones who actually get shit into prod and fix it when it's broke, regardless of the title associated).
The future at this point is... unpredictable. It's an unsatisfactory answer, but it's true.
So what does it take for LLM to replace SWE?
1. It needs to get better, much better
2. It needs to be cheaper still
Those two things are at odds with each other. If scaling laws are the god we're praying to, then we've apparently already hit diminishing returns; maybe if we scale up 1000x we get AGI, but that won't be economically reasonable for a long time.
Back to reality: what does it mean to survive in the market, assuming coding assistants get marginally better over, say, the next 5 years? Just use them - they are genuinely useful tools for the really boring and mundane stuff. Things like writing Dockerfiles will go to LLMs, and humans won't be able to compete there, nor will they have to. They are also great for second opinions; it's fun to see what an LLM thinks of your design proposal and to build on its advice.
Overall, I don't think much will change overnight. The industry might contract in how many developers it hires - I think the demand won't be there for a long time. For people already in the industry, as long as you keep learning, it is probably going to be fine. Well, for now.
I worked at a big tech co years ago. Strolling up to my bus stop in my casual attire while others around me wore uniforms, rushing to get to work. A nice private shuttle would pick me up. It would deliver me pretty much at the doors of the office. If it were raining, somebody would be standing outside to hand me an umbrella even though the door was a short distance away. Other times there would be someone there waiting on the shuttle to hand me a smoothie. When I got to the door, there would be someone dedicated to opening it. When I got inside, a breakfast buffet fit for a king would be served. Any type of cuisine I wanted was served around campus for lunch and dinner, and it was high quality. If I wanted dessert, there were entire shops (not one but many) serving free handcrafted desserts. If I wanted my laundry done, someone would handle that. If I wanted snacks, each floor of my office had its own little 7/11. If I didn't feel like having all this luxury treatment, I'd just work from home and nobody cared.
All of that, and I was being paid a very handsome amount compared to others outside of tech? Several times over the national average? For gluing some APIs together?
What other professions are like this where there's a good chunk of people who can have such a leisurely life, without taking much risk, and get so highly compensated compared to the rest? I doubt there's many. At some point, the constrained supply must answer to the high demand and reality shows up at the door.
I quit a year into the gig to build my own company. Reality is much different now. But I feel like I've gained many more skills outside of just tech that make me more equipped for whatever the future brings.
1. Similar to autonomous driving: going from 90% to 99% reliability can take longer than going from 0% to 90%.
2. You can now use LLMs and public clouds to abstract away a lot of skills that you don't have (managing compute clusters, building iOS and Android apps, etc.). So you can start your 3-person company and do things that previously required hundreds of people.
IMHO LLMs and cloud computing are similar in that you need a lot of money to build an offering, so perhaps only a few big players are going to survive.
Long horizon problems are a completely unsolved problem in AI.
See the GAIA benchmark. While it will surely be beaten soon enough, the point is that we carry out far longer-horizon tasks than that benchmark every single day.
It's very possible we will move away from raw code implementation, but solving long-horizon problems via multiple interconnected steps remains far out of reach. If AI can achieve that, then we are all out of a job, not just some of us.
Take 2 competing companies that have a duopoly on a market.
Company 1 uses AI and fires 80% of their workforce.
Company 2 uses AI and keeps their workforce.
AI in its current form is a multiplier, so we will see Company 2 massively outcompete Company 1 as each employee now performs the work of 3-10 people. Company 2's output per person rises dramatically, which significantly weakens the first company. Standard market forces haven't changed.
The reality, as I see it, is that interns will now perform at senior SWE level, senior SWEs will perform at VP-of-engineering level, and VPs of engineering will perform at nation-state levels of output.
We will enter an age where goliath companies are commonplace - hundreds or even thousands of trillion-dollar companies. Billion-dollar startups will be expected almost at launch.
Again, unless we magically find a solution to long horizon problems (which we haven't even slightly found). That technology could be 1 year or 100 years away. We're waiting on our generation's Einstein to discover it.
What does it mean to be a software engineer? You know the syntax, standard library, and common third party libraries of your domain. You know the runtime, and the orchestration around multiple instances of your runtime. You know how to communicate between these instances and with third-party runtimes like databases and REST APIs.
A large model knows all of this as well. We already rely on generative language model conversations to fill in the knowledge gaps that Googling for documentation (or “how do I do X?” stackoverflow answers) filled.
What’s harder is debugging. A lot of debugging is guesswork and action taking, note-taking, and brain-storming for possible ideas as to why X crashes on Y input.
Bugs that boil down to isolating a component and narrowing down what’s not working are hard. Being able to debug them could be the moat that will protect us SWEs from redundancy. Alternatively, pioneering all the new ways of getting reproducible builds and reproducible environments will be the route to eliminating this class of bug entirely, or at least being able to confidently say that some bug was probably due to bad memory, bad power supplies, or bad luck.
Back in the late 80s and early 90s there was a craze called CASE - Computer-Aided Software Engineering. The idea was humans really suck at writing code, but we're really good at modeling and creating specifications. Tools like Rational Rose arose during this era, as did Booch notation which eventually became part of UML.
The problem was it never worked. When generating the code, the best the tools could do was create all the classes for you and maybe define the methods for the class. The tools could not provide an implementation unless it provided the means to manage the implementation within the tool itself - which was awful.
Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.
Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?
At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.
Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.
I am 61 and have been a full-time developer since I was about 19. I have lost count of the number of 'next things to replace developers' I have seen over the years. Many of them showed promise. Many of them continue to be developed. Frameworks with higher and higher levels of abstraction.
I see LLMs as the next higher level of abstraction.
Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks that I simply see, as before, it has a long, long way to go.
Will it put me out of a job? I highly doubt it within my career. I still love it and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.
Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.
I've been thinking about this a bunch and here's what I think will happen as cost of writing software approaches 0:
1. There will be way more software
2. Most people / companies will be able to opt out of predatory VC-funded software and just spin up their own custom versions that do exactly what they want, without having to worry about being spied on or rug-pulled. I already do this with Chrome extensions; with the help of Claude I've been able to throw together things like a time-based website blocker in a few minutes.
3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking
4. Companies will hire far fewer people, and probably mostly engineers to automate routine tasks that would previously have been done by humans (e.g. bookkeeping, recruiting, sales outreach, HR, copywriting / design). I've heard this is already happening with a lot of new startups.
EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:
- https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...
As a junior dev, I do two conscious things to make sure I'll still be relevant for the workforce in the future.
1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some python, let it rip through a dataset from kaggle, let models run locally etc. Have my linalg and stats down and practiced. Basically if I had to make the switch to be an ML/AI engineer it would be easier than if I had to start from zero.
2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.
At the end of the day, I think you don't have to worry if you have the "curiosity" it takes to be a good software engineer. In a world where knowledge, experience, and the willingness to probe out of curiosity are even more scarce than they are now, you'll stand out. You may leverage AI to assist you, but as long as you don't fully and blindly rely on it, you'll always be more qualified than someone who does.
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.
I don't see this trend. It just sounds like a weird thing to say, it fundamentally misunderstands what the job is
From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are
Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect
If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well known way of cheaping out
Firstly, as many commenters have mentioned, I don't see AI taking jobs en masse. They simply aren't accurate enough and they tend to generate more code faster which ends up needing more maintenance.
Advice #1: do work on your own mind. Try to improve your personal organization. Look into methodologies like GTD. Get into habits of building discipline. Get into the habit of storing information and documentation. From my observations many developers simply can't process many threads at once, making their bottleneck their own minds.
Advice #2: lean into "metis"-heavy tasks. There are many programming tasks which can be easily automated: making an app scaffold, translating a simple algorithm, writing tests, etc. This is the tip of the iceberg when it comes to real SWE work, though. The intricate connections between databases and services, the steps you have to go through to debug that one feature, the hack you have to make in the code so it behaves differently in the testing environment, and so on. LLMs require legibility to function: a clean slate, no tech debt, low entropy, order, etc. Metis is a term from the book "Seeing Like a State"; it encompasses knowledge and skills gained through experience which are hard to transfer. Master these dark corners, hack your way around the code, create personal scripts for random one-off tasks. Learn how to poke and pry the systems you work on to get out the information you want.
I use Copilot a bit, and it can be really, really good.
It helps me out, but in terms of increasing productivity it pales in comparison to simple auto-complete. In fact it pales in comparison to just having a good, big screen vs. battling away on a 13" laptop.
LLMs are useful and provide not-insignificant assistance, but probably less assistance than the tools we've had for a long time. LLMs are not a game changer like some other things have been since I've been programming (since the late 1980s). Just moving to operating systems with protected memory was a game changer: I could make mistakes and the whole computer didn't crash!
I don't see LLMs as something we have to protect our careers from, I see LLMs as an increasingly useful tool that will become a normal part of programming same as auto-complete, or protected memory, or syntax-highlighting. Useful stuff we'll make use of, but it's to help us, not replace us.
My anecdata shows people who have no/limited experience in software engineering are suddenly able to produce "software". That is, code of limited engineering value. It technically works, but is ultimately an unmaintainable, intractable Heath Robinson monstrosity.
Coding LLMs will likely improve, but what will happen first: a good-at-engineering LLM; or a negative feedback cycle of training data being polluted with a deluge of crap?
I’m not too worried at the moment.
I remember John Carmack talking about this last year. Seems like it's still pretty good advice more than a year later:
"From a DM, just in case anyone else needs to hear this."
https://x.com/ID_AA_Carmack/status/1637087219591659520
1) Keep books like 'The Art of Computer Programming' on my shelf, as AI seems to propagate solutions closer to code golf than to robustness, due to what dominates the corpus.
2) Force myself to look at existing code as abstract data types, etc., to help reduce the cost of LLMs' failure mode (confident, often competent, and inevitably wrong).
3) Curry whenever possible to support the use of coding assistants and to limit their blast radius (see the sketch after this list).
4) Dig deep into complexity theory to understand what LLMs can't do, either for defensive or offensive reasons.
5) Realize that SWE is more about correctness and context than code.
6) Realize what many people are already discovering, that LLM output is more like clip art than creation.
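A minimal sketch of what I mean by point 3, using functools.partial as a stand-in for currying; the helper and names are invented for illustration:

    from functools import partial

    def render(template: str, user: str, greeting: str) -> str:
        # General-purpose helper with several knobs: lots of surface area
        # for an assistant-generated call site to get wrong.
        return template.format(greeting=greeting, user=user)

    # Partially applying it pins most arguments down, so a coding assistant
    # only has one small, checkable thing left to fill in.
    welcome = partial(render, "{greeting}, {user}!", greeting="Welcome")

    print(welcome(user="Ada"))  # -> "Welcome, Ada!"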
I think in some sense the opposite could occur, where it democratizes access to becoming a sort of pseudo-junior-software engineer. In the sense that a lot more people are going to be generating code and bespoke little software systems for their own ends and purposes. I could imagine this resulting in a Cambrian Explosion of small software systems. Like @m_ke says, there will be way more software.
Who maintains these systems? Who brings them to the last mile and deploys them? Who gets paid to troubleshoot and debug them when they reach a threshold of complexity that the script-kiddie LLM programmer cannot manage any longer? I think this type of person will definitely have a place in the new LLM-enabled economy. Perhaps this is a niche role, but figuring out how one can take experience as a software engineer and deploy it to help people getting started with LLM code (for pay, ofc) might be an interesting avenue to explore.
This blew up way more than I expected. Thanks everyone for the comments, I read almost all of them.
For the sake of not repeating myself, I would like to clarify/state some things.
1. I did not intend to signal that SWE will disappear as a profession, but would rather undergo transformation, as well as shrinking in terms of the needed workforce.
2. Some people seem to be hanging on to the idea that they are doing unimaginably complicated things. And sure, some people are, but I doubt they are the majority of the SWE workforce. Can an LLM replace a COBOL developer in the financial industry? No, I don't think so. Can it replace the absurd number of people whose job description can be distilled to "reading/writing data to a database"? Absolutely.
3. There seems to be a conflict of opinion. Some people say that code quality matters a lot and LLMs are not there yet, while other people focus more on "SWE is more than writing code".
Personally, based on some thinking and on reading the comments, I think the best way to future-proof a SWE career is to move to a position that requires more people skills. In my opinion, good product managers who are eager to learn coding and combine it with LLMs for code writing will be the biggest beneficiaries of the upcoming trend. As for SWEs, it's best to start acquiring people skills.
There's a great Joel Spolsky post about developers starting businesses and realising that there's a bunch of "business stuff" that was abstracted away at big companies. [1]
One way to future proof is to look at the larger picture, the same way that coding can't be reduced to algorithm puzzles:
"Software is a conversation, between the software developer and the user. But for that conversation to happen requires a lot of work beyond the software development."
[1] The Development Abstraction Layer https://www.joelonsoftware.com/2006/04/11/the-development-ab...
I was advising this MBA student's nascent startup (with the idea I might technical cofound once they're graduating), and they asked about whether LLMs would help.
So I listed some ways that LLMs practically would and wouldn't fit into the workflow of the service they were building. And I related it to a bunch of other stuff, including how to make the most of the precious real-world customer access they'd have, how to generate a success in the narrow time window they had, and the special obligations of that application domain niche.
Later, I mentally replayed the conversation in my head (as I do), and realized they were actually probably asking about using an LLM to generate the startup's prototype/MVP for the software they imagined.
And also, "generating the prototype" is maybe the only value that an MBA student had been told a "technical" person could provide at this point. :)
That interpretation of the LLM question didn't even occur to me when I was responding. I could've easily whipped up the generic Web CRUD any developer could do and the bespoke scrape-y/protocol-y integrations that fewer developers could do, both to a correctness level necessarily higher than the norm (which was required by this particular application domain). In the moment, it didn't occur to me that anyone would think an LLM would help at all, rather than just be an unnecessary big pile of risk for the startup, and potential disaster in the application domain.
I no longer have skin in the game since I retired a few years back.
But I have had over 30 years in a career that has been nothing if not dynamic the whole time. And so I no doubt would keep on keepin' on (as the saying goes).
Future-proof a SWE career though? I think you're just going to have to sit tight and enjoy (or not) the ride. Honestly, I enjoyed the first half of my career much more than where SWE ended up in the latter half. To that end, I have declined to encourage anyone from going into SWE. I know a daughter of a friend that is going into it — but she's going into it because she has a passion for it. (So, 1) no one needed to convince her but 2) passion for coding may be the only valid reason to go into it anyway.)
Imagine the buggy-whip makers gathered around the pub, grousing about how they are going to future-proof their trade as the new-fangled automobiles begin rolling down the street. (They're not.)
I have been unemployed for almost a year now (it started with a full division layoff and then no willingness or motivation to look for work at the time). Seeing the way AI can do most of the native app development (which is what I did) code I wrote I am losing almost any motivation to even try now. But I have been sleeping the best after college (where I slept awesome) and I have been working out, watching lots of theatre and cinema and playing lots of sports (two of them almost daily), reading a lot of literature, lots of podcasts. I guess I will just wait for my savings to run dry and then see what option then I'd have and what I would not if at all. I know the standard thing to do and say is "up-skill", "change with the times" etc and I am sure those have merit but I just feel I am done with the constant catch up, kind of checked out. I don't give a fuck anymore maybe, or I do and I am too demoralised to confront it.
LLMs will just write code without you having to go copy-pasta from SO.
The real secret is talent stacks: have a combination of talents and knowledge that is desirable and unique. Be multi-faceted. And don't be afraid to learn things that are way outside of your domain. And no, you wouldn't be pigeon-holing yourself either.
For example there aren't many SWEs that have good SRE knowledge in the vehicle retail domain. You don't have to be an expert SRE, just be good enough, and understand the business in which you're operating and how those practices can be applied to auto sales (knowing the laws and best practices of the industry).
As others have stated, I don't think we have anything to worry about.
As a SWE you are expected to neatly balance code, its architecture and how it addresses the customers' problems. At best, what I've seen LLMs produce is code monkey level programming (like copy pasting from StackOverflow), but then a human is still needed to tweak it properly.
What would be needed is General AI and that's still some 50 years away (and has been for the past 70 years). The LLMs are a nice sleight of hand and are useful but more often wrong than right, as soon as you delve into details.
Beware of the myopia and gatekeeping displayed in this thread.
There will be less SWE and DevOps and related jobs available in the next 24 months. Period.
Become hyper-aware of how a business measures your value as a SWE. How? Ask pointed, uncomfortable questions that force the people paying you to think and be transparent.
Stay on the cutting edge of how to increase your output and quality using AI.
I.e.: how long does it take for a new joiner to produce code? How do you cut that time down by 10x using "AI"?
Most organizations don't move that fast. Certainly not fast enough to need this kind of velocity.
As it is I spend 95% of my time working out what needs to be done with all of the stakeholders and 5% of my time writing code. So the impact of AI on that is negligible.
When doing X becomes cheaper with the invention of new tools, X is now more profitable and humanity tends to do much more of it.
Nearly all code was machine-generated after the invention of compilers. Did the compiler destroy programming? Absolutely not. Compilers and other tools like higher-level programming languages really kickstarted the software industry. IMO the potential transition from writing programming languages -> writing natural language and have LLM generate the program is still a smaller change than machine code/assembly -> modern programming languages.
If the invention of programming languages expanded the population of programmers from thousands to the 10s of millions, I think LLMs could expand this number again to a billion.
Build something with an LLM outside of your comfort zone.
I was a pretty early adopter to an LLM based workflow. The more I used it, the worse my project became, and the more I learned myself. It didn’t take long for my abilities to surpass the LLM, and for the past year my usage of LLMs has been dropping dramatically. These days I spend more time in docs than in a chat conversation.
When chatGPT was announced, many people thought programming was over. As in <12 months. Here we are several years later, and my job looks remarkably the same.
I would absolutely love to not have to program anymore. For me, programming is a means to an end. However, after having used LLMs pretty much everyday for 2.5 years, it’s very clear to me that software engineering won’t be changing anytime soon. Some things will get easier and workflows may change, but if you want to build and maintain a moderately difficult production grade application with decent performance, you will still be programming in 10 years
My solution has been to work with LLMs, putting myself on the side of the industry trying to replace the other. I switched focus fairly early on in the "AI hype" era, mainly because I thought it looked like a lot of fun to play with LLMs. After a few years I realized I'm quite a bit ahead of my former coworkers who stayed still. I've worked on both the product end and closer to the hardware, and as more and more friends ask for help on problems, I've realized I do in fact have a lot of understanding of this space.
A lot of people in this discussion seem to be misunderstanding the way the industry will change with LLMs. It's not as simple as "engineers will be automated away", in the same sense that we're a long way away from Uber drivers disappearing because of self-driving cars.
But the impact of LLMs on software is going to be much closer to the impact of the web and web development on native application development. People used to scoff at the idea that any serious company would be run from a web app. Today I would say the majority of software engineers are, directly or indirectly, building web-based products.
LLMs will make coding easier, but they also enable a wide range of novel solutions within software engineering itself. Today any engineer can launch a 0-shot classifier that's better performing than what would have taken a team of data scientists just a few years ago.
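For a sense of scale, a zero-shot classifier like that is now a few lines with an off-the-shelf model - the model choice, text, and labels below are illustrative assumptions, not a recommendation:

    # Rough sketch of a zero-shot classifier via the Hugging Face pipeline.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    ticket = "The app crashes every time I try to export my invoices."
    labels = ["billing", "bug report", "feature request", "account access"]

    result = classifier(ticket, candidate_labels=labels)
    print(result["labels"][0], round(result["scores"][0], 3))  # top label and its score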
Not a clue.
I'm a decent engineer working as a DS in a consulting firm. In my last two projects, I checked in (or corrected) so much more code than the other two junior DSs on my team that in the end some 80%-90% of the ML-related stuff had been directly built, corrected, or optimized by me. Most of the rest that wasn't was basically boilerplate. LLMs were pivotal in this.
And I am only a moderately skilled engineer. I can easily see somebody with more experience and skills doing this to me, and making me nearly redundant.
"The use of FORTRAN, like the earlier symbolic programming, was very slow to be taken up by the professionals. And this is typical of almost all professional groups. Doctors clearly do not follow the advice they give to others, and they also have a high proportion of drug addicts. Lawyers often do not leave decent wills when they die. Almost all professionals are slow to use their own expertise for their own work. The situation is nicely summarized by the old saying, “The shoe maker’s children go without shoes”. Consider how in the future, when you are a great expert, you will avoid this typical error!"
Richard W. Hamming, “The Art of Doing Science and Engineering”
Today, lawyers delegate many paralegal tasks like document discovery to computers and doctors routinely use machine learning models to help diagnose patients.
So why aren’t we — ostensibly the people writing software — doing more with LLM in our day-to-day?
If you take seriously the idea that LLM will fundamentally change the nature of many occupations in the coming decade, what reason do you have to believe that you’ll be immune from that because you work in software? Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
We’re really not as clever as we think we are.
I will not believe the AI takeover until there's evidence. I haven't seen any examples, apart from maybe TODO-list apps. Needless to say, that's nowhere near the complexity required at most jobs. Even if my career were endangered, I would continue the path I've taken so far: have a basic understanding of as much as possible (push out the edges of the knowledge circle, or whatever it's called), and strive for expert knowledge in maybe 1, 2, or 3 subjects which pay for your daily bread. Basically, just be good at what you do, and that should be fine. As for beginners, I advise diving deep into a subject, starting with a solid foundation and a hands-on approach, while maintaining a consistent effort.
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output, until eventually LLMs will become so good, that senior people won't be needed any more.
A steeper learning curve in a professional field generally translates into higher earnings. The longer you have to be trained to be helpful, the more a job generally earns.
I am already trained.
I don't think the prediction game is worthwhile.
The Cloud was meant to decimate engineering AND development. But what it did was create enough chaos that there's higher demand for both than ever, just maybe not in your region and for your skillset.
LLMs are guaranteed to cause chaos, but the outcome of that chaos is not predictable. Will every coder now output as much as a team of 30, BUT with 60 times as many screwed-up projects made by wannabe founders that you have to come in and clean up? Will businesses find ways to automate code development and then turn around and have to bring the old guys back in constantly to fix up the pipeline? Will we all be coding in black boxes that the AI fills in?
I would make sure you just increase your skills and increase your familiarity with LLMs in case they become mandatory.
It depends on whether you think they are a paradigm change (at the very least) or not. If you don't then either you will be right or you will be toast.
For those of us who do think this is a revolution, you have two options:
1. Embrace it.
2. Find another career, presumably in the trades or other hands-on vocations where AI ingress will lag behind for a while.
To embrace it you need to research the LLM landscape as it pertains to our craft and work out what interests you and where you might best be able to surf the new wave, it is rapidly moving and growing.
The key thing (as it ever was) is to build real world projects mastering LLM tools as you would an IDE or language; keep on top of the key players, concepts and changes; and use your soft skills to help open-eyed others follow the same path.
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code
LLMs do help, but to a limited extent. I've never heard of anyone in the second category.
> how do you future-proof your career in light of, the inevitable, LLM take over?
Generally speaking, coding has never been a future-proof career. Ageism, changes in technology, economic cycles, offshoring... When I went into the field in the early 2000s, it was kind of expected that most people, if they wanted to be somewhat successful, would eventually have to move to a leadership/management position.
Things changed a bit with successful tech companies competing for talent and offering great salaries and career paths for engineers, especially in the US, but that could very well be temporary and shouldn't be taken for granted.
LLMs are one factor among many that can impact our careers, and probably not the most important one. I think there's a lot of hype, and we're not being replaced by machines anytime soon. I don't see a world where an entrepreneur commands an LLM to write a service or a novel app for them, or simply to maintain an existing complex piece of software.
Who knows what the future holds? As a SWE you are expected to adapt and use modern technology. Always learning is a part of the job. Look at all the new things to build with, frameworks being updated/changes, etc. Making things easier.
LLMs will make things easier, but with these reasons in mind it's easy to disagree that they threaten a developer's future:
* Developers should not be reinventing the wheel constantly. LLMs can't work very well on subjects they have no info on (proprietary work).
* The quality is going to get worse over time as the internet fills up with slop and quality content is drowned out. We are at a peak right now. Adding more parameters isn't going to make the models better; it's just going to make them better at plagiarism.
* Consistency - a good codebase has a lot of consistency to avoid errors. LLMs can produce good coding examples, but they will not have much regard for how -your- project is currently written. Introducing inconsistency makes maintenance more difficult, let alone the bugs that might slip in and wreak havoc later.
I try to go to the lowest level I can. During my recent research into PowerPC 32-bit assembly language I found 1) not much material online, and what is available is usually PDFs full of pictures, which can be difficult for LLMs to pick up, and 2) indeed, ChatGPT didn't give a good answer even for a Hello, World example.
I think hardware manufacturers, including ones that produce chips, have far less incentive to put things online and thus have a wide moat. "Classic" chips such as the 6502 or 8086 definitely have way more material. "Modern" popular ones such as x86/64 also have a lot of material online. But "obscure" ones don't.
On the software side, I believe LLMs or other AI can, within 10 years, easily replace juniors who only know how to "fill in" code designed by someone else in a popular language (Python, Java, JavaScript, etc.). In fact they have greatly supported my data engineering work in Python and Scala - do they always produce the most efficient solution? No. Do they greatly reduce the time I need to get to a solution? Yes, definitely!
Disclaimer: I wholeheartedly hate all the systems they call AI these days and I hate the culture around it for technological, ecological, political, and philosophical reasons.
I won't future-proof my career against LLMs at all. If I ever see myself in the position that I must use them to produce or adjust code, or that I mostly read and fix LLM-generated code, then I'll leave the industry and do something else.
I see potential in them to simplify code search/navigation or to even replace stackoverflow, but I refuse to use them to build entire apps. If management in turn believes that I'm not productive enough anymore then so be it.
I expect that lots of product owners and business people will use them to quickly cobble something together and then hand it over to a dev for "polishing". That sounds like a total nightmare to me. The way I see it, devs make this dystopian nightmare a little bit more true every time they use an LLM to generate code.
In a market, scarce services will always be more valuable than abundant services. Assuming that AI will at some point be capable of replacing an SWE, to future-proof your career, you will need to learn how to provide services that AI cannot provide. Those might not be what SWEs currently usually offer.
I believe it's actually not that hard to predict what this might be:
1. Real human interaction, guidance and understanding: This, by definition, is impossible to replace with a system, unless the "system" itself is a human.
2. Programming languages will be required in the future as long as humans are expected to interface with machines and work in collaboration with other humans to produce products. In order to not lose control, people will need to understand the full chain of experience required to go from junior SWE to senior SWE - and beyond. Maybe less people will be required to produce more products but still, they will be required as long as humanity doesn't decide to give up control over basically any product that involves software (which will very likely be almost all products).
3. The market will get bigger and bigger to the point where nothing really works without software anymore. Software will most likely be even more important to have a unique selling point than it is now.
4. Moving to a higher level of understanding of how to adapt and learn is beneficial for any individual and actually might be one of the biggest jumps in personal development. This is worth a lot for your career.
5. The current state of software development in most companies that I know has reached a point where I find it actually desirable for change to occur. SWE should improve as a whole. It can do better than Agile for sure. Maybe it's time to "grow up" as a profession.
I think the real-world improvements will plateau, and it'll take a while for current enterprises just to adopt what is possible today, but that is still going to cause quite a bit of change. You can imagine us going from AI chat bots with RAG on traditional datastores, to AI-enhanced but still human-engineered SaaS products, to bespoke AI-generated and maintained products, to fully E2E AI agentic products.
An example: do you tell the app to generate a Python application to manage customer records, or do you tell it "remember this customer record so other humans or agents can ask for it" and it knows how to do that efficiently and securely?
We'll probably see more 'AI Reliability Engineer' type roles, which will likely revolve around building and maintaining evaluation datasets, tracking and stomping out edge cases, figuring out human intervention/escalation, model routing, model distillation, context-window vs. fine-tuning trade-offs, and overall intelligence-cost management.
>junior to mid level software engineering will disappear mostly, while senior engineers will transition
It's more likely that the number of jobs at all levels of seniority will decrease, but none will disappear.
What I'm interested to see is how the general availability of LLMs will impact people's "willingness" to learn coding. Will people still "value" coding as an activity worth their time?
For me, as an already "senior" engineer, using LLMs feels like a superpower: when I think of a solution to a problem, I can test and explore some of my ideas faster by interacting with one.
For a beginner, I feel that having all of this available can be super powerful too, but also truly demotivating. Why bother to learn coding when the LLM can already do better than you? It takes years to become "good" at coding, and motivation is key.
As a low-dan Go player, I remember feeling a bit that way when AlphaGo was released. I'm still playing Go but I've lost the willingness to play competitively, now it's just for fun.
Being very realistic, I think a lot of SWE job requirements will be replaced by LLMs. (I just copied the HTML from this thread into Claude to get a summary - roughly the sketch below.)
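A minimal version of that, assuming the Anthropic Python SDK and a saved copy of the page; the model name, file path, and prompt are placeholders:

    # Hypothetical sketch: summarize a saved copy of the thread with Claude.
    import anthropic

    html = open("hn_thread.html", encoding="utf-8").read()

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1000,
        messages=[{"role": "user", "content": f"Summarize the main arguments in this thread:\n\n{html}"}],
    )
    print(response.content[0].text)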
The expertise to pick the right tool for the right job based on previous experience, which senior engineers possess, is something that can probably be taught to an LLM.
Having the ability to provide a business case for the technology to stakeholders that aren't technologically savvy is going to be a people job for a while still.
I think positioning yourself as an expert / bridge between technology and business is what will future-proof a lot of SWE, but in reality, especially at larger organizations, there will be a trimming process where the workload of what was thought to need 10 engineers can be done with 2 engineers + LLMs.
I'm excited about the future where we're able to create software quicker and more contextual to each specific business need. Knowing how to do that can be an advantage for software engineers of different skill levels.
Learn the tools; use them where they shine, avoid them where they do not. Your best bet is to just start using LLMs in your day-to-day coding and find out what works and what doesn't.
I work on a pretty straightforward CRUD app in a niche domain and so far they haven’t talked about replacing me with some LLM solution. But LLMs have certainly made it a lot faster to add new features. I’d say working in a niche domain is my job security. Not many scientists want to spend their time trying to figure out how to get an LLM to make a tool that makes their life easier - external competitors exist but can’t give the same intense dedication to the details required for smaller startups and their specific requirements.
A side note - maybe my project is just really trivial, maybe I’m dumber or worse at coding than I thought, or maybe a combination of the above, but LLMs have seemed to produce code that is fine for what we’re doing especially after a few iteration loops. I’m really curious what exactly all these SWEs are working on that is complex enough that LLMs produce unusable code
I've carved out a niche of very low-level systems programming and optimization. I think it'll be a while before LLMs can do what I do. I also moved to staff, so I think a lot of what I do now will still exist even with junior/mid-level devs being reduced by AI.
But I am focusing on maximizing my total comp so I can retire in 10-15 years if I need to. I think most devs are underestimating where this is eventually going to go.
Short term defense is learning about, and becoming an expert in, using LLMs in products.
Longer-term defense doesn't exist. If software engineering is otherwise completely automated by LLMs, we're in AGI territory, and likely recursive self-improvement plays out (perhaps not AI-foom, but a huge uptick in capability / intelligence per month / quarter).
In AGI territory, the economy, resource allocation, labor vs. capital all transition into a new regime. If problems that previously took hundreds of engineers working over multiple years can now be built autonomously within minutes, then there's no real way to predict the economic and social dynamics that result from that.
I'm working as if in 2-3 years the max comp I will be able to get as a senior engineer will be 150k, and it will be hard to get that. It's not that the job will disappear; it's that the bar to produce working software will go way down. Most knowledge and skill sets will be somewhat commoditized.
I'm also pretty sure this will make outsourcing easier, since foreign engineers will be able to pick up the technical skills more easily.
Another thing I want to note is; even if I get replaced by AI, I think I'd be sad for a bit. I think it'd be a fun period to try to find a "hand-focused" job. Something like a bakery or chocolatier. I honestly wouldn't mind if I could do the same satisfying work but more hands-on, rather than behind a desk all day
I might be too optimistic but I think LLMs will basically replace the worst and most junior devs, while the job of anyone with 5 or 10+ years of experience will be babysitting AI codevelopers, instead of junior developers.
I find a lot of good use for LLMs but it's only as a multiplier with my own effort. It doesn't replace much anything of what I do that actually requires thought. Only the mechanical bits. So that's the first thing I ensure: I'm not involved in "plumbing software development". I don't plug together CRUD apps with databases, backend apis and some frontend muck. I try to ensure that at least 90% of the actual code work is about hard business logic and domain specific problems, and never "stuff that would be the same regardless of whether this thing is about underwear or banking".
If I can delegate something to it, it's every bit as difficult as delegating to another developer. Something we all know is normally harder than doing the job yourself. The difference between AI Alice and junior dev Bob, is that Alice doesn't need sleep. Writing specifications, reviewing changes and ensuring Alice doesn't screw up is every bit as hard as doing the same with Bob.
And here is the kicker: whenever this equation changes, that we have some kind of self-going AI Alice, then we're already at the singularity. Then I'm not worried about my job, I'll be in the forest gathering sticks for my fire.
To me it seems possible that seniors will become even more in demand, because learning to become a decent developer is actually harder if you're distracted by leaning on LLMs. Thus, the supply of up-and-coming good new seniors may be throttled. This, to me, is because LLMs don't abstract code well.
Once upon a time electronics engineers had to know a lot about components. Then along came integrated circuits, and they had to know about those and less about components. Once upon a time programmers had to know machine code or assembler. I've never had to know those for my job. I programmed in C++ for years and had to know plenty about memory. These days I rarely need to as much, but some basic knowledge is still needed. It's fine if a student learns to code mostly in Python, but a short course in C is probably a good idea even today.
With LLMs, though, you can't say "now I need to know less about this specific thing that's been black-boxed for me", because nothing is wrapped conveniently like that. I'm extremely glad that when I was a junior, LLMs weren't around. They really seem like a barrier to learning. It's impossible to understand all the generated code, but it's also difficult, without significant career experience, to judge what you need to know about and what you don't. I feel sorry for juniors today, to be honest!
I've been quite worried about it at this point. However, I can see that "this is not going to happen" is likely not going to help me. So I'd rather go with the flow and use it where reasonable, even if it's not clear to me whether AI will ever truly leave the hype stage.
FWIW I've been allowed to use AI at work since ChatGPT appeared, and it usually wasn't a big help for coding. For education, though, and for trying to "debug" funny team interactions, I've definitely seen some value.
My guess is though that some sort of T-shaped skillset is going to be more important while maintaining a generalist perspective.
I feel the influencer crowd is massively overblowing the actual utility of LLMs. It feels akin to the "cryptocurrency will take over the world" trope of 10 years ago, and yet I don't see any crypto in my day-to-day life to this day. Will LLMs improve general productivity and handle boring tasks nobody wants to do? Sure. But to believe anything more than that, frankly, I'd like some hard evidence of them actually being able to "reason" - and to reason better than most devs I've ever worked with, because quite honestly humans are also pretty bad at writing software, and LLMs learn from humans, so ...
I see this sort of take from a lot of people and I always tell them to do the same exercise. A cure for baseless fears.
Pick an LLM. Any LLM.
Ask it what the goat river crossing puzzle is. With luck, it will tell you about the puzzle involving a boatman, a goat, some vegetable, and some predator. If it doesn’t, it’s disqualified.
Now ask it to do the same puzzle but with two goats and a cabbage (or whatever vegetable it has chosen).
It will start with the goat. Whereupon the other goat eats the cabbage left with it on the shore.
Hopefully this exercise teaches you something important about LLMs.
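Scripted, the whole exercise is only a few lines. A minimal sketch, assuming the OpenAI Python client and an illustrative model name; any chat-capable LLM and API would do:

    # Minimal sketch of the two-step exercise. Client library, model name and
    # prompt wording are illustrative assumptions, not prescriptions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice; pick any LLM
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Step 1: confirm the model knows the classic puzzle (if not, it's disqualified).
    print(ask("What is the goat river crossing puzzle?"))

    # Step 2: the variant with two goats and a cabbage, and no predator.
    print(ask("Now solve the same puzzle with two goats and a cabbage. "
              "The boat still carries only the farmer plus one passenger."))

As noted above, the answer to the second prompt typically ferries a goat first and leaves the cabbage alone with the other goat, which is the point of the exercise.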
I recall code generation from class diagrams, and then low-code, each being declared the death of all devs.
The current generation of LLMs is immensely expensive, and will become more so if the VC money disappears.
A full-time dev is happy to sit there and deal with all the whining, meetings, alignment, 20 iterations of refactoring, architectural changes, and late Friday evenings putting out fires. Making an LLM work 40 hours a week with that much context would cost an insane amount and require several people to steer it. Also, the ambiguous garbage spewed by management and requirements engineers, which I turn into value, is… difficult for LLMs.
Let's put it this way: before LLMs, we had wonderful outsourcing firms that cost slightly less than maintaining an in-house team. If devs were going to disappear, that would have been the nail in the coffin. LLMs need steering and don't deal well with ambiguity, so I don't see a threat.
Also, for all the people singing the LLM gospel: try asking Windsurf or Cursor to generate something niche that doesn't exist publicly and see how well it does. As an aside, I closed several PRs last week because people had submitted generated code of 100+ LOC that could have been one or two lines if the authors had taken the time to review the latest release of the library.
Here's a question for you: have they automated trains yet? They're literally on tracks. Until trains are fully automated, then after that cars, then later airplanes, then maybe, just maybe, "ai" will come for thought work. Meanwhile, Tesla's "ai" still can't stop running into stopped firetrucks[1]…
[1] https://www.wired.com/story/tesla-autopilot-why-crash-radar/
The simple answer is to use LLMs so you can put it on your resume. Another simple answer is to transition to a job where it's mostly about people.
The complex answer is we don't really know how good things will get and we could be at the peak for the next 10-20 years, or there could be some serious advancements that make the current generation look like finger-painting toddlers by comparison.
I would say the fear of there being no junior/mid positions is unfounded, though, since in a generation or two you'd have no senior engineers.
LLMs are most capable where they have a lot of good data in their training corpus and not much reasoning is required. Migrate to a part of the software industry where that isn't true, e.g. systems programming.
The day LLMs get smart enough to read a chip datasheet and then realize the hardware doesn't behave the way the datasheet claims it does is the day they're smart enough to send a Terminator to remonstrate with whoever is selling the chip anyway so it's a win-win either way, dohohoho.
Make lots of incompatible changes to libraries. No way LLMs keep up with that since their grasp on time is weak at best.
LLMs, for now, only have 2-3 senses. The real shift will come when they can collect data using robotics. Right now a human programmer is needed to explain the domain to the AI and to review the code against that domain knowledge.
On the bright side, every programmer can start a business without needing to hire an army of programmers. I think we are getting back to an artisan-based economy where everyone can be a producer without a corporate job.
A couple reasons why I am not scared of AI taking my job:
1. They are trained to be average coders.
The way LLMs are trained is by giving them lots of examples of previous coding tasks. By definition, roughly half of those examples are below the median. Unless there is a breakthrough in how they are trained, any above-average coder won't have anything to worry about.
2. They are a tool that can (and should) be used by humans.
Computers are much better at chess than any human, but a human with a computer is better than any computer. The same is true with a coding LLM. Any SWE who can work with an LLM will be much better than any LLM.
3. There is enough work for both.
I have never worked for a company where I have had less work when I left than when I started. I worked for one company where it was estimated that I had about 2 years worth of work to do and 7 years later, when I left, I had about 5 years of work left. Hopefully LLMs will be able to take some of the tedious work so we can focus on harder tasks, but most likely the more we are able to accomplish the more there will be to accomplish.
It's lowering the bar for developers to enter the marketplace, in a space that is wildly undersaturated. We'll all be fine. There's tons of software to be built.
More small businesses will be able to punch up, with LLMs tearing down walled gardens that were reserved for those with capital to spend on lawyers, consultants and software engineering excellence.
It's doing the same thing as StackOverflow -- hard problems aren't going away, they're becoming more esoteric.
If you're at the edge, you're not going anywhere.
If you're in the middle, you're going to have a lot more opportunities because your throughput should jump significantly so your ROI for mom and pop shops finally pencils.
Just be sure you actually ship and you'll be fine.
I am not afraid of companies replacing Software Engineers with LLMs while being able to maintain the same level of quality. The real thing to worry about is that companies do what they did to QA Engineers, technical writers, infrastructure engineers, and every other specialized role in the software development process. In other words, they will try to cut costs, and the result will be worse software that breaks more often and doesn't scale very well.
Luckily, the bar has been repeatedly lowered so that customers will accept worse software. The only way for these companies to keep growing at the rate their investors expect is to cut corners until there's nothing left to cut. Software engineers should just be grateful that the market briefly overvalued them to the degree that it did, and prepare for a regression to the mean.
The biggest "fault" of LLMs (which continues) is their compliance. Being a good software dev often means pushing back and talking through tradeoffs, and finding out what the actual business rules are. I.e. interrogating the work.
Even if these LLM tools do see massive improvements, it seems to me that they are still going to be very happy to take the set of business rules that a non-developer gives them, and spit out a program that runs but does not do what the user ACTUALLY NEEDS them to do. And the worst thing is that the business user may not find out about the problems initially, will proceed to build on the system, and these problems become deeper and less obvious.
If you agree with me on that, then perhaps what you should focus on is building out your consulting skills and presence, so that you can service the mountains of incoming consulting work.
For thousands of years, the existence of low cost or even free apprentices for skilled trades meant there was no work left for experts with mastery of the trade.
Except, of course, that isn't true.
> the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.
It's not enough to make generalizations yet. What kind of projects? What tuning does it need? What kind of end users? What kind of engineers?
In the field I work in, I can't see how LLMs offer a clear path to converging on a reliable product. If anything, I suspect we will need more manual analysis to fix the insanity we receive from our providers if they start working with LLMs.
Some jobs will disappear, but I've yet to see signs of anything serious emerging. You're right about juniors, though; I suspect those who stop training will lose their safety net and starve under LLMs, either through competition or through the amount of operational instability they will bring.
I think what we're seeing echoes a pattern we have lived through many times before, just with new tooling. Every major leap in developer productivity - from assembly to higher-level languages, from hand-rolled infrastructure to cloud platforms, from code libraries to massive open-source ecosystems - has sparked fears that fewer developers would be needed. In practice, these advancements have not reduced the total number of developers; they have just raised the bar on what we can accomplish.
LLMs and code generation tools are no exception. They will handle some boilerplate and trivial tasks, just like autocompletion, frameworks, and package managers already do. This will make junior-level coding skills less of a differentiator over time. But it is also going to free experienced engineers to spend more time on the complex, high-level challenges that no model can solve right now - negotiating unclear requirements, architecting systems under conflicting constraints, reasoning about trade-offs, ensuring reliability and security, and mentoring teams.
It is less about "Will these tools replace me?" and more about "How do I incorporate these tools into my workflow to build better software faster?" That is the question worth focusing on. History suggests that the demand for making complex software is bottomless, and the limiting factor is almost never just "typing code." LLMs are another abstraction layer. The people who figure out how to use these abstractions effectively, augmenting their human judgment and creativity rather than fighting it, will end up leading the pack.
"until eventually LLMs will become so good, that senior people won't be needed any more"
You are assuming AGI will come eventually.
I assume the earth will eventually be consumed by the sun, but I am equally unworried, as I don't see it in the near future.
I am still regularly disappointed when I try out the newest hyped model. They usually fail at my tasks and require lots of manual labour.
So if that gets significantly better, I can see them replacing junior devs. But without understanding, they cannot replace a programmer for any serious task. They may, however, enable more people to become good-enough programmers for their simple tasks. So less demand for less-skilled devs, indeed.
My solution - the same as before - improve my skills and understanding.
LLMs can help us engineers quickly gain context on how to write solutions. But I don't see them replacing us anytime soon.
I'm currently working on a small team with a senior engineer. He's the type of guy who preaches letting Cursor, or whatever new AI IDE is relevant nowadays, do most of the work. Most of his PRs are utter trash. Time to ship is slow and code quality is poor. It's so obvious that the code is AI-generated. Bro doesn't even know how to rebase properly, resulting in overwriting (important) changes instead of resolving conflicts. And guess who has to fix his mistakes (me, and I'm not even a senior yet).
I have as much interest in the art of programming as in building products, and becoming some sort of AI whisperer sounds tremendously tedious to me. I opted out of the managerial track for the same reason. Fortunately, I have enough money saved that I can probably just work on independent projects for the rest of my career, and I’m sure they’ll attract customers whether or not they were built using AI.
With that said, looking back on my FAANG career in OS framework development, I’m not sure how much of my work could have actually been augmented by AI. For the most part, I was designing and building brand new systems, not gluing existing parts together. There would not be a lot of precedent in the training data.
So far I haven't found much use for LLM code generation. I'm using Copilot as a glorified autocomplete and that's about it. I tried to use LLM to generate more code, but it takes more time to yield what I want than to write it myself, so it's just not useful.
ChatGPT, though, has become a truly indispensable tool for me, right up there with Google and StackOverflow.
So I don't feel threatened so far. I can see the potential, and I think it's very possible for LLM-based agents to replace me eventually, probably not this generation, but a few years later, who knows. That's just hand-waving, though, and worrying about a possible future is not useful for mental well-being.
I think there's been a lot of fear-mongering on this topic and "the inevitable LLM take over" is not as inevitable as it might seem, perhaps depending on your definition of "take over."
I have personally used LLMs in my job to write boilerplate code, write tests, make mass renaming changes that were previously tedious to do without a lot of grep/sed-fu, etc. For these types of tasks, LLMs are already miles ahead of what I was doing before (do it myself by hand, or have a junior engineer do it and get annoyed/burnt out).
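For concreteness, the "mass renaming" kind of task is the sort of thing I mean; a minimal sketch of what I'd previously have done by hand or with sed (the identifiers and source path here are made up for illustration):

    # Rename an identifier across a source tree. OLD/NEW and the "src" path
    # are hypothetical examples, not from any real project.
    import re
    from pathlib import Path

    OLD, NEW = "getUserData", "fetchUserProfile"
    pattern = re.compile(rf"\b{re.escape(OLD)}\b")  # whole-word matches only

    for path in Path("src").rglob("*.py"):
        text = path.read_text(encoding="utf-8")
        updated = pattern.sub(NEW, text)
        if updated != text:
            path.write_text(updated, encoding="utf-8")
            print(f"updated {path}")

This sort of throwaway script is exactly the category of tedium I'm happy to hand to an LLM.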
However, I have yet to see an LLM that can understand an already established large codebase and reliably make well-designed additions to it, in the way that an experienced team of engineers would. I suppose this ability could develop over time with large increases in memory/compute, but even state-of-the-art models today are so far away from being able to act like an actual senior engineer that I'm not worried.
Don't get me wrong, LLMs are incredibly useful in my day-to-day work, but I think of them more as a leap forward in developer tooling, not as an eventual replacement for me.
Better tools that accelerate how fast engineers can produce software? That's not a threat, just a boon. I suspect the actual transition will just be people learning and focusing on somewhat different, higher-level skills rather than lower-level coding. Like going from assembly to C, we're hoping we can transition more towards natural language.
> junior to mid level software engineering will disappear mostly
People don't magically go to senior. You can't get seniors without juniors and mids to level up. We'll always need to take in and train new blood.
My advice? Focus on the business value, not the next ticket. Understand what the actual added value of your work is to your employer. It won’t help in the day-to-day tasks but it will help you navigate your career with confidence.
Personally - and I realize this is not generalizable advice - I don’t consider myself a SWE but a domain expert who happens to apply code to all of his tasks.
I’ve been intentionally focusing on a specific niche - computer graphics, CAD and computational geometry. For me writing software is part of the necessary investment to render something, model something or convert something from domain to domain.
The fun parts are really fun, but the boring parts are mega-boring. I'm actually eagerly awaiting LLMs reaching some level of human parity, because there simply isn't enough talent in my domains to do all the things that would be worthwhile to do (cost and return on investment, right).
The reason is that my domain is so niche you can't web-scrape and label your way to the intuition and experience of two decades working in various industries, from graphics benchmarking and automotive HUDs to mission-critical industrial AEC workflows and realtime maps.
There is enough published knowledge for LLMs to get a hint as soon as I tie a few concepts together, and then they fly. But the code they write at the moment, apart from simple subroutines, is not good enough for them to act as an unsupervised assistant… most of it is honestly useless. Still, I'm optimistic and hope they will improve.
Learning woodworking in order to make fine furniture. This is mostly a joke, but the kind that I nervously laugh at.
AI's are going to put SWE's out of a job at roughly the same time as bitcoin makes visa go bankrupt.
Aka never, or at least far enough in the future that you can't really predict or plan for it.
Right now LLMs have a slight advantage over StackOverflow et al. in that they'll react to your specific question and circumstances, but they also require you to double-check everything they spit out. I don't think that will ever change, and I think most of the hype comes from people whose salaries depend on it being right around the corner, or people who are playing a speculation game (if I learn this tool I'll never have to work again / avoiding this tool will doom me to poverty forever).
Sysadmin here. That's what we used to be called. Then some fads came and some went: DevOps, SRE, etc.
I have notes on particular areas I am focusing on, but I have a small set of general notes on this, and they seem to apply to you SWEs also.
Headline: Remember, data is the new oil.
Qualifier: It's really all about IP portfolios these days.
1) Business Acumen: How does the tech serve the business/client needs, from a holistic perspective of the business? (e.g., sysadmins have long had to have big-picture finance, ops, strategy, industry, etc. knowledge) - aka - turn tech knowledge into business knowledge
2) Leadership Presence: Ability to meet with c-suite, engineers, clients, etc., speak their languages, understand their issues, and solve their issues. (ex: explain ROI impacts for proposals to c-suite)
3) Emotional Intelligence: Relationship building in particular. (note: this is the thing I neglected the most in my career and regretted it!)
4) Don't be afraid to use force multiplier tools. In this discussion, that means LLMs, but it can mean other things too. Adopt early, keep up with tooling, but focus on the fundamental tech and don't get bogged down into proprietary stockholm syndrome. Augment yourself to be better, don't try to replace yourself.
----
Now, I know that's a simplistic list, but you asked, so I gave you what I had. What I am doing (besides trying to get my mega-uber-huge side project off the ground) is recentering on certain areas I don't think are going anywhere: on-prem, datacenter buildouts, high-compute, ultra-low-latency, scalable systems, energy, construction of all the previous things, and the banking knowledge to round it all out.
If my side-project launch fails, I'm even considering data-center sales instead of the tech side. Why? I'm tired of rescuing the entire business to no fanfare while salespeople get half my salary in a bonus. Money aside, I can still learn and participate in the builds as sales (I see it happen all the time).
In other words, I took my old-school niche set of knowledge and adapted it over the years as the industry changed, focusing on what I do best (in this case, operations - aka - the ones who actually get shit into prod, and fix it when it's broke, regardless of the title associated).
The future at this point is... unpredictable. That is an unsatisfactory answer, but it is true.
So what does it take for LLM to replace SWE?
1. It needs to get better, much better.
2. It needs to be cheaper still.
Those two things are at odds with each other. If scaling laws are the god we're praying to, then they have apparently already hit diminishing returns; maybe if we scale up 1000x we can get AGI, but that won't be economically reasonable for a long time.
Back to reality: what does it mean to survive in a market where coding assistants keep getting marginally better over, say, the next 5 years? Just use them. They are genuinely useful tools for accomplishing really boring and mundane stuff. Things like writing Dockerfiles will go to LLMs, and humans won't be able to, and won't have to, compete there. They are also great for second opinions; it is fun to hear what an LLM thinks of your design proposal and to build on its advice.
Overall, I don't think much will change overnight. The industry might contract in terms of how many developers it hires; I think the demand won't be there for a long time. For people already in the industry, as long as you keep learning, it is probably going to be fine. Well, for now.
I worked at a big tech co years ago. I'd stroll up to my bus stop in casual attire while others around me wore uniforms, rushing to get to work. A nice private shuttle would pick me up. It would deliver me pretty much at the doors of the office. If it were raining, somebody would be standing outside to hand me an umbrella even though the door was a short distance away. Other times there would be someone there waiting on the shuttle to hand me a smoothie. When I got to the door, there would be someone dedicated to opening it. When I got inside, a breakfast buffet fit for a king would be served. Any type of cuisine I wanted was served around campus for lunch and dinner, and it was high quality. If I wanted dessert, there were entire shops (not one but many) serving free handcrafted desserts. If I wanted my laundry done, someone would handle that. If I wanted snacks, each floor of my office had its own little 7/11. If I didn't feel like having all this luxury treatment, I'd just work from home and nobody cared.
All of that, and I was being paid a very handsome amount compared to others outside of tech? Several times over the national average? For gluing some APIs together?
What other professions are like this where there's a good chunk of people who can have such a leisurely life, without taking much risk, and get so highly compensated compared to the rest? I doubt there's many. At some point, the constrained supply must answer to the high demand and reality shows up at the door.
I quit a year into the gig to build my own company. Reality is much different now. But I feel like I've gained many more skills outside of just tech that make me more equipped for whatever the future brings.
Two thoughts:
1. Similar to autonomous driving: going from 90% to 99% reliability can take longer than going from 0% to 90%.
2. You can now use LLMs and public clouds to abstract away a lot of skills that you don't have (managing compute clusters, building iOS and Android apps, etc.). So you can start your 3-person company and do things that previously required hundreds of people.
IMHO LLMs and cloud computing are very similar in that you need a lot of money to build an offering, so perhaps only a few big players are going to survive.
Long horizon problems are a completely unsolved problem in AI.
See the GAIA benchmark. While it will surely be beaten soon enough, the point is that we do exponentially longer-horizon tasks than that benchmark every single day.
It's very possible we will move away from raw code implementation, but the core concepts of solving long horizon problems via multiple interconnected steps are exponentially far away. If AI can achieve that, then we are all out of a job, not just some of us.
Take 2 competing companies that have a duopoly on a market.
Company 1 uses AI and fires 80% their workforce.
Company 2 uses ai and keeps their workforce.
AI in its current form is a multiplier, so we will see Company 2 massively outcompete Company 1 as each employee now performs 3-10 people's worth of tasks. Company 2's output per person increases dramatically, which significantly weakens Company 1. Standard market forces haven't changed.
The reality, as I see it, is that interns will now perform at senior SWE level, senior SWEs will perform at VP-of-engineering level, and VPs of engineering will perform at nation-state levels of output.
We will enter an age where goliath companies are commonplace: hundreds or even thousands of multi-trillion-dollar companies. Billion-dollar startups will be expected almost at launch.
Again, unless we magically find a solution to long horizon problems (which we haven't even slightly found). That technology could be 1 year or 100 years away. We're waiting on our generation's Einstein to discover it.
What does it mean to be a software engineer? You know the syntax, standard library, and common third party libraries of your domain. You know the runtime, and the orchestration around multiple instances of your runtime. You know how to communicate between these instances and with third-party runtimes like databases and REST APIs.
A large model knows all of this as well. We already rely on generative language model conversations to fill in the knowledge gaps that Googling for documentation (or “how do I do X?” stackoverflow answers) filled.
What's harder is debugging. A lot of debugging is guesswork, taking action, taking notes, and brainstorming possible reasons why X crashes on Y input.
Bugs that boil down to isolating a component and narrowing down what’s not working are hard. Being able to debug them could be the moat that will protect us SWEs from redundancy. Alternatively, pioneering all the new ways of getting reproducible builds and reproducible environments will be the route to eliminating this class of bug entirely, or at least being able to confidently say that some bug was probably due to bad memory, bad power supplies, or bad luck.