* Consulting. Businesses are so fond of repeating mistakes with great dedication that it sometimes takes outside help to steer the ship right, to the great animosity of the people writing code.
* Accessibility. Accessibility isn’t a huge challenge unless you’re in a business with a pattern of largely ignoring it. Then it can be a huge challenge to fix. AI won’t be enough, and it will most likely require outside help.
* Speed. If you want faster-executing software you need to measure things; AI will be learning from existing code that likely wasn’t well measured. A quick sketch of "measure first" is below.
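To make the "measure things" point concrete, here is a minimal sketch in plain Python (the two example functions are hypothetical, not from anyone's codebase): time two equivalent implementations instead of guessing which one is faster.

```python
# A minimal sketch of "measure before optimizing": time two equivalent
# implementations instead of assuming which is faster.
import timeit

def join_concat(items):
    # Builds the string with repeated concatenation.
    out = ""
    for s in items:
        out += s
    return out

def join_builtin(items):
    # Uses str.join, usually faster for many small strings.
    return "".join(items)

items = ["x"] * 10_000

for fn in (join_concat, join_builtin):
    elapsed = timeit.timeit(lambda: fn(items), number=200)
    print(f"{fn.__name__}: {elapsed:.3f}s")
```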
There's quite a few, although LLMs are slowly creeping in:
1. Everything with less data to train on:
- Compiler / language toolchain development
- Specialized embedded robotics (industrial robotics, drones)
- Scientific / high-performance computing
2. Low tolerance for LLM-induced errors:
- Network protocols / telecom software
- Medical software
- Aerospace, automotive
3. Performance-critical code:
- Game engine / graphics engine development (probably an area where we'll see them soon)
- Kernels, drivers, microcontrollers
etc. Not all is lost yet.
From what I’ve seen, LLMs are good at making stuff that has already been made and posted to GitHub a thousand times before. At my job we’re constantly asked to do things that really haven’t been done before, at least not by people sharing source code, so the LLMs suck at most of it.
LLMs make for great tech demos, but when it comes to writing code for production that actually does something new and useful, they haven’t impressed me at all.
Ain't nothing new under the sun.
That’s why I added the caveat about it being open sourced. I’m sure what I’m doing has been solved many times by many companies, but it’s not information they share publicly, just like my code isn’t shared publicly. There is also the issue of integrating with existing legacy systems, which may include bespoke internal tools.
Maybe the thing has been done in general, but not in a way that’s useful for me. That’s why it looks good in tech demos. If I ask AI to write what I need, it will give me an answer, but it won’t actually work and integrate in the ways I need for production. The last time I tried, it gave me 70 lines of code; the real end result was thousands. The AI version would look cool in a demo though.
Many excellent LLMs are being created. I feel this era is similar to the early days of the automotive industry. In other words, we are currently in an era of engine performance competition, competing on power and speed. However, I believe this era will eventually transition to the next phase.
I am good with the current power and speed. Let's jump straight to the smart era.
Also, my main issue is not really that AI isn't good enough. If a company is fine getting sh*t code then let's go full AI, but I love my job. I love solving issues, coding, working with new paradigms, trying solutions, failing, improving, etc. I don't want to become a prompt expert who is asked to review AI-generated code all day long.
Of course, it is a very personal opinion, but I think it is still shared by a decent bunch of people.
Embedded systems in infrastructure should be safe, as they not only need to be very specific but are also important and dangerous to get wrong. But you never know.
Just get really good at something, into the top 10%, where you would be writing books and disagreeing with Reddit.
AI is predictive. Most people will fall into a comfort zone where AI tells them what to do. But you should become an expert and be one of the few who are telling it what to do.
Managers and CTOs don't care about you being an expert. They just push you to use whatever they saw 100 times on LinkedIn, like using Cursor to improve code delivery time by 60%.
Every monthly CTO meeting is about pushing software engineers to use Cursor.
Legacy systems - there are legacy systems that are like a house of cards, where you have to move forward very carefully. These areas might have older code and languages, so the LLM won't have as much training data to learn from.
Businesses often rely on these systems - and on the processes that protect them - so they are reluctant to adopt AI.
Defence. We don't use any LLMs, and couldn't even if we wanted to.
To be fair the code they produce is dogshit, so it isn't a problem.
That might be a good candidate, right.
I am baffled at how companies are jumping into LLMs without considering anything about their own privacy, when 10 years ago just using GitHub with a private repository could have been an issue.
> To be fair the code they produce is dogshit, so it isn't a problem.
That's not a problem for managers and CTOs who are being brainwashed by marketing and LinkedIn posts into believing all their engineers should use Cursor.
Fintech, banking...
Plenty of LLMs here. Probably more than in other sectors, and Stripe is the poster boy for OpenAI.
Fintech has a ton of regulations. Everything is layered over and over with tests. There's a form of extreme engineering where fintech runs tests in production, meaning the systems in place are robust enough to handle bad code and junk data; a rough sketch of that defensive style is below.
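As a rough illustration of that "robust enough to handle junk data" idea, here is a minimal sketch in Python with hypothetical names (process_payment_event, dead_letter); real fintech pipelines are far more involved, this only shows malformed input being quarantined instead of crashing the flow.

```python
# Hypothetical sketch: validate every incoming event and divert anything
# malformed to a dead-letter list instead of letting it break the pipeline.
from decimal import Decimal, InvalidOperation

dead_letter = []  # stand-in for a real dead-letter queue

def process_payment_event(event: dict) -> bool:
    """Return True if the event was processed, False if it was quarantined."""
    try:
        amount = Decimal(str(event["amount"]))
        currency = event["currency"]
        if amount <= 0 or len(currency) != 3:
            raise ValueError("invalid amount or currency")
    except (KeyError, InvalidOperation, ValueError, TypeError) as err:
        dead_letter.append((event, str(err)))  # quarantine, don't crash
        return False
    # ... hand off to the real payment flow here ...
    return True

process_payment_event({"amount": "19.99", "currency": "EUR"})  # processed
process_payment_event({"amount": "oops", "currency": "EUR"})   # quarantined
```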