> In that decade, this company was behind the curve
I like to think there is no curve, only fashion. I've seen companies that were so far behind that they managed to avoid adopting disasters like microservices. I've seen companies ahead of the curve go from monolith to microservices and back to monolith.
Also funny that the ops team is now back, just rebranded as the platform team.
Plus ça change.
Pretty much. Using "old" tech (assuming it's maintained) is not "being behind" or "having technical debt". If it works and there are no meaningful gains from switching, you're doing engineering correctly.
It might be seen as boring. The brakes on my bike are boring. I like them that way.
> I like to think there is no curve only fashion.
Exactly! For instance, we had pull-based monitoring 20 years ago (Zabbix et al.), but we abandoned it because it scaled poorly, favoring push-based agents (feeding InfluxDB, KairosDB, etc.). Now Prometheus is all the rage, yet we're hitting the exact same scaling walls those systems hit before. In a few years we'll rediscover push agents and call them the best thing since sliced bread.
Yup, the same way people are starting to realise/remember that having your compute next to your data is a good idea (maybe lambdas/serverless aren't so hot because they get in the way of this).
SpacetimeDB, Convex, etc. are basically the revenge of the stored procedure.
If we're trading war stories, one of my first software jobs was developing an electronic medical records system. We didn't use a VCS. At the end of the day (or task), you'd send the director the files you'd changed, along with a text file listing which lines were changed.
The director made sure things compiled, then we would drive down to the hospitals and copy the DLLs onto each PC one by one. And because hospitals can't shut down their computers willy-nilly, we could only deploy on weekends or public holidays. Not weekday nights, because the directors had to be home for the kids.
That was in 2014. They’ve worked that way since the late 90s and ‘no point changing what works’.
You are absolutely right. It’s frustrating to see otherwise smart people acting like lemmings, blindly walking off the cliff.
If you read the article - or know anything about platform engineering - it should be clear that a platform team is certainly not just a rebranded ops team. The success criteria are entirely different, just for a start. Most ops teams don’t have the skillset to be a credible platform team.
Yeah, that's a good point: ops teams don't over-engineer or do CV-driven development like platform teams do.
That's a cute hot take, but it most likely comes from never having encountered an environment where this is needed, so instead you may have seen people cosplaying as much bigger companies.
A true platform team is responsible for implementing and maintaining an internal platform for automated deployment and operation of a company's software. The platform is essentially a product, where the customers are developers and anyone else who has a stake in the deployment, maintenance, and operation of the company's software - including the ops team! I.e. strictly speaking, an ops team is a customer of the platform built by the platform team.
Done properly, this automates away a lot of what ops teams and "devops teams" do, because it allows fully automated self-service by developers. Developers should be able to create new environments and deploy new and existing services without opening tickets to some other team. Those environments and services are fully compliant out of the box with company policies and architectural decisions.
People sometimes think this means that letting devs submit PRs for Terraform files and, say, k8s configs makes a "platform". But no, that just shifts that kind of work to developers; it doesn't automate it. Under the hood, a platform might use those tools, but if deploying environments and services requires interacting directly with generic tools that can be configured however a developer wants, that's not an internal platform.
If you are thinking about AWS, or (worse) deploying on AWS, you are definitely behind the curve of the companies that badly regretted that decision and are moving away from AWS due to cost.
The smart companies skip the cost and time of moving to AWS entirely. They instead use Linux-boxes-in-the-cloud solutions with zero integration of AWS-specific services, making it easy to deploy on multiple host platforms worldwide and saving millions of dollars.
That's a tough policy, only updating prod biweekly! It would be super frustrating to have a bug crawl out and not be allowed to patch it for two weeks. This post really captures the frustration of working in a bureaucratic environment where developers don't have full access to change production.
That being said CI/CD is a luxury for coders at lean startups, but there’s still a lot of jobs where you have to work with some DevOps Team to deploy your code to prod. Organizations past a certain size have more hoops to jump through, for reasons.
Of course as a dev it’s ideal to have full access!
You know, setting up CI/CD doesn't mean you have to pay boatloads of money to a vendor.
A bash script that pulls the repo and deploys it is already CI/CD.
Setting up a basic Jenkins installation should not take a technical person longer than 2 hours. For someone already familiar with Jenkins, it's more like 30 minutes.
Once you have paying customers, I'd say exactly two devs should be able to fiddle with prod, no more and no fewer. Others should pass changes through senior people.
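As a minimal sketch of that idea (REPO_URL, APP_DIR, and the commented build/restart step are all hypothetical placeholders, not anything from this thread):

```shell
#!/usr/bin/env bash
# Bare-bones "pull the repo and deploy it" script.
set -euo pipefail

deploy() {
  if [ -d "$APP_DIR/.git" ]; then
    # Already deployed once: fast-forward to the latest commit.
    git -C "$APP_DIR" pull --ff-only
  else
    # First deploy: clone the repository into place.
    git clone -q "$REPO_URL" "$APP_DIR"
  fi
  # Then run whatever build/restart steps the app needs, e.g.:
  # (cd "$APP_DIR" && make build && systemctl restart app)
}
```

Run it from cron or a git hook and you have a deploy pipeline, no vendor involved.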
A truly lean team (say, <=5 people and limited project scope) should be able to live off their code forge's free CI/CD minutes, or whatever is included in the basic tier they're running. Just run the suite on a schedule against trunk instead of on every PR.
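"Run the suite on a schedule against trunk" is a few lines of workflow config. A hedged sketch in GitHub Actions syntax (the workflow name, cron time, and `make test` entry point are hypothetical):

```yaml
# .github/workflows/nightly-ci.yml
name: nightly-ci
on:
  schedule:
    - cron: "0 3 * * *"   # once a day at 03:00 UTC, not on every PR
  workflow_dispatch: {}    # still allow manual runs
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test     # hypothetical test entry point
```

One scheduled run a day instead of a run per push keeps a small team comfortably inside free-tier minutes.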
If not, then that's a good signal they should invest more into their CI/CD setup, and like you said it's not necessarily a huge investment, but it can be a barrier depending on skills.
What skill? You can get Jenkins running in an afternoon.
If you can't set up CI/CD you're not qualified to program anything
That's a bit harsh. Depending on how a person developed or where they worked, they may not have had exposure to facets beyond basic development; beyond that, it might as well be magic. They'd have to figure out how to provision a VM, SSH into it, and lock all the proverbial doors first, and that's without even getting into managing it with IaC tools like Terraform, Ansible, Packer, etc.
> That's a bit harsh. Depending on how a person developed or where they worked, they may not have had exposure to facets beyond basic development; beyond that, it might as well be magic.
...so? You sit your ass down and learn. It might take a bit longer if you've never touched a shell, but it's far easier than anything actual programming deals with, especially now that there are ready or near-ready recipes for every environment.
Yes, yes, you're right. I'm saying that at some places devs don't own production: there's an IT/Ops/non-dev person in the loop. That's especially common if you're a consultant in non-tech industries.
> That being said CI/CD is a luxury for coders at lean startups, but there’s still a lot of jobs where you have to work with some DevOps Team to deploy your code to prod. Organizations past a certain size have more hoops to jump through, for reasons.
It takes next to no time to set up a basic one and not all that much time to set up a decent one, and the returns on investment are huge. There is no startup small enough that it isn't a good return.
> That being said CI/CD is a luxury for coders at lean startups,
Really? Even before GitHub Actions, CircleCI did that sort of thing.
GitLab's runners are nice and easy to configure. Plus, all CI/CD is fancy bash with some git triggers.
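"Fancy bash with some git triggers" is close to literal: git's post-receive hook feeds `<old> <new> <ref>` lines on stdin, and a few lines of shell turn that into deploy-on-push. A sketch (DEPLOY_CMD is a hypothetical placeholder for the real deploy step):

```shell
#!/usr/bin/env bash
# Sketch of a git post-receive hook: redeploy when main is pushed.
set -euo pipefail

DEPLOY_CMD="${DEPLOY_CMD:-echo deploying}"  # placeholder deploy step

on_push() {
  # git writes one "<old-sha> <new-sha> <refname>" line per updated ref.
  while read -r _oldrev _newrev ref; do
    if [ "$ref" = "refs/heads/main" ]; then
      $DEPLOY_CMD
    fi
  done
}
```

In a real setup this would live as `hooks/post-receive` in the bare repo, with a final line calling `on_push`.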
FT.com used to deploy either directly through Heroku to prod, or later via CircleCI, and that was in 2015.
Yeah, I can't imagine being a small team building a SaaS and not having 'deploy-on-merge' set up within the first few weeks.
I completely agree; I really worded this poorly. I was trying to say it's great to have CI/CD to production. There are places I've worked that don't have it due to bureaucracy/regulation/security, not because we didn't know how to set it up.