I don't know if this is realistic, but as a general rule, if I was contracting with someone so that my business would have higher reliability, I would ask for a service level agreement with an agreed-upon amount the vendor pays out for every unit of time their service is not up.
At least then your pain is their pain, and they are incentivised to prevent problems and fix them quickly.
Usually those agreements just give you credits for the same service, pay out far less than you actually lost, or exclude basically everything under force majeure.
If it works for you, that's great, but when the shit actually hits the fan I don't think you should expect real compensation.
At our scale I doubt we could get any cloud provider to write custom contracts. But if I had the negotiating power, I'd completely agree.
Nobody who uses Kubernetes and random shit from GitHub would sign such an agreement if they actually had to pay out and couldn't weasel their way out of it. That would be signing up for near-unlimited liability and business suicide.
Let's say an incident costs you (the customer) ~$5k, counting just the time it takes to get a professional in on very short notice to debug (since the whole promise of managed services is that you no longer need technical staff at all). That also ignores the actual cost to your business: lost sales, reputational damage, or missing your own SLAs.
For the provider to be willing to pay out something like this, they'd need to charge you several times that amount per month (otherwise just one incident puts them forever underwater on the LTV). Yet such a monthly price would make the service unaffordable to all but the most deep-pocketed customers... for whom the impact of an outage on their business would be even larger, meaning they'd want the payouts to be even bigger. It's a catch-22.
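To put rough numbers on that (everything below is an illustrative assumption except the ~$5k incident cost above):

```python
# Back-of-the-envelope: how long it takes the provider to earn back ONE payout.
monthly_fee = 100        # assumed price of a typical managed-Postgres plan
gross_margin = 0.30      # assumed provider margin on that fee
payout = 5_000           # the incident cost from the comment above

margin_per_month = monthly_fee * gross_margin        # $30/month of margin
months_to_recover = payout / margin_per_month        # ~167 months
print(f"One payout burns ~{months_to_recover:.0f} months "
      f"(~{months_to_recover / 12:.0f} years) of margin on this customer.")
```

Tweak the assumptions however you like; unless the monthly fee is a large multiple of the payout, a single incident wipes out years of margin.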
High availability good enough for the provider to put five-figure sums on the line is actually really hard (there's a reason genuinely critical stuff like stock exchange order processing or card transactions doesn't run on the "cloud", nor on Kubernetes for that matter). So the next best thing is make-believe "high availability", where everyone (except the occasional poor soul like you who actually believed the marketing) understands the charade and plays along, because their own SLAs are often make-believe too.
See also: the recent Cloudflare or AWS outages.
100% uptime is impossible, of course; a 100% reliable service would have to survive the next ice age.
But reliability at the holy grails of four and five nines (99.99%, 99.999% uptime) means ever-greater investment: geographically dispersing your service, distributed systems, clock drift, multi-master setups, eventual consistency, replication, sharding... it's a long list.
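For a sense of what those nines actually buy, the downtime budget is simple arithmetic (a quick sketch, nothing vendor-specific):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("two nines", 0.99), ("three nines", 0.999),
                            ("four nines", 0.9999), ("five nines", 0.99999)]:
    budget = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability}): ~{budget:,.0f} minutes of downtime per year")

# two nines   -> ~5,256 min (~3.7 days)
# three nines -> ~526 min   (~8.8 hours)
# four nines  -> ~53 min
# five nines  -> ~5 min
```

Five nines leaves you roughly five minutes a year; a single botched upgrade blows the entire budget.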
Questions to ask: could you do better yourself, with the resources you have? Is it worth the investment of a migration to get there? What's the payoff period for that extra sliver of uptime? Will it cost you in focus over the longer term? Is the extra uptime worth all those costs?
> could you do better yourself
For this particular failure mode, absolutely - this is amateur-level stuff that shouldn't have happened.
You know how to make something that works keep working? Not messing with it. Of course, this doesn't pay salaries if your entire career is based on "fixing" things that work until they don't.
There is no reason to hurry a Postgres upgrade - the thing shouldn't be internet-accessible anyway, so the security exposure is minimal.
If you do want to upgrade, it's best to test the upgrade on a test/staging system first. Which I'm sure they would have if they didn't have to pay a 10-90x markup on the compute price.
Finally, when you do the upgrade, you'd do it manually at a time when you are present, and outside of business hours, to further minimize the impact if something goes wrong, instead of the upgrade happening out of the blue at a random time.
One amateur moment doesn’t make a service’s management amateur.
If you run it yourself there’s a chance you will trade the mistakes made by DO for different mistakes made by your own team - and still have similar overall reliability.
Simply moving the time at which the mistakes occur can be extremely valuable. Doing it yourself means you can say "no touching the server during business hours". You can't guarantee that with a provider.
Even if you work out that you cannot do better, at least you are no longer paying the insane premium of the managed highly-available service (since it's not actually capable of delivering).
I just had a 12-hour outage due to Fly.io's quick-and-easy Postgres minor patch update cooking my database.
I ended up downloading the entire volume, setting up my own Docker container locally, exporting the data, and creating a new cluster (on the latest major version).
Lost most of my day yesterday.
Since this is about DO managed Postgres: if you're using it with replicas, they use async replication and the RPO can be greater than 15 minutes. Because failover is triggered during upgrades, there end up being plenty of windows where you can lose multiple minutes of committed data.
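If you're on a setup like this, it's worth watching replica lag yourself rather than trusting the dashboard. A minimal sketch, assuming plain psycopg2 access to the primary and a user allowed to read pg_stat_replication (managed providers sometimes restrict this; the DSN below is a placeholder):

```python
import psycopg2

# Placeholder connection string - point it at the primary node of your cluster.
PRIMARY_DSN = "host=primary.example.com dbname=app user=monitor password=..."

def replica_lag_seconds(dsn):
    """Ask the primary how far behind each replica is (PostgreSQL 10+)."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("""
            SELECT application_name,
                   EXTRACT(EPOCH FROM replay_lag) AS lag_seconds
            FROM pg_stat_replication
        """)
        return cur.fetchall()

for name, lag in replica_lag_seconds(PRIMARY_DSN):
    print(name, "n/a" if lag is None else round(float(lag), 1), "seconds behind")
```

If that lag is minutes rather than seconds right before a maintenance window, that's roughly your data-loss exposure.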
Do they at least allow you to set your own schedule for upgrade windows? That way you could schedule them for quiet times of day, minimising the likelihood of there being significant replica lag.
It's common to do this on AWS and the other hyperscale providers (though, of course, they tend to do synchronous replication anyway, meaning that this particular failure mode wouldn't apply) - upgrades are a common source of unforeseen issues, so it makes sense to minimise the potential blast radius by running them out of hours.
> I chose managed services specifically to avoid ops emergencies. We're a tiny startup paying the premium so someone else handles this. Instead, I spent late night hours debugging VPC routing issues in a networking layer I don't control.
This happens with managed services and I understand the frustration, but vendors are just as fallible as the rest of us and are going to have wonky behaviour and outages, regardless of the stability they advertise. This is always part of build vs. buy; buying doesn't guarantee a friction-free result.
It happens with the big cloud providers as well. I've spent hours with AWS chasing why some VMs were missing routing table entries inside the VPC, and on GCP we had to outright ban a class of VMs because the packet processing was so bad we couldn't even get a file copy to complete between VMs.
> vendors are just as fallible as the rest of us
One of the issues I have with this is the insane markups they're charging for services that ultimately aren't any better than what you can do yourself.
If they aren't any better at least save yourself some money.
> but vendors are just as fallible as the rest of us
Isn't the point that they shouldn't be? They should have specialists dedicated to running these kinds of things, test upgrades before rolling them out, etc., while for the rest of us it's just one of many things we try to handle.
Oh, I've run into exactly the same issue on my personal cluster and had no clue what the issue was. Is this solvable?
Oof. I have a very similar setup, except I'm using their managed MySQL instead of PostgreSQL. It appears I wasn't hit.
Same thought as you... I just didn't want to figure out and manage MySQL-with-failover myself, so I switched to their managed solution a year or two ago and my bill went up 300% or more (it was running fine on a ~$12 or maybe $24 droplet + $5 volume, but now costs, I don't even remember, $150 or so).
> I chose managed services specifically to avoid ops emergencies
You may not be spending enough time on HN reading all the horror stories =P
The benefit of a managed service isn't that it doesn't go down; though it probably goes down less than something you self-manage, unless you're a full-time SRE with the experience to back it.
The benefit of a managed service is you say: "It's not my problem, I opened a ticket, now I'm going to get lunch, hope it's back up soon."
> though it probably goes down less than something you self-manage, unless you're a full-time SRE with the experience to back it.
I wonder how true that is. This went down because of a bad update, which is probably like 99.99% of outages. The other 0.01% is cosmic rays causing hardware failures.
My server was up for 3.5 years with no outages because I just didn't touch it. I had to take it offline a couple of days ago to move it, which made me a little sad. Took a snapshot, moved it to a new droplet, brought it back up as-is, and it's running great again.
Anyway, emergencies are less emergy if things go down while you're upgrading and shuffling things around yourself. You expect hiccups if you're the one causing the hiccups. It's when someone else is tinkering on the other side of the country/planet and blows something up that suddenly you have an emergency.
I concur. I've seen a lot of companies outside the techbro world where the entire thing runs on a single VPS/dedicated server with a setup that would make any sysadmin squirm. And yet, it just works and makes them money?
Which isn't too surprising - hardware is extremely reliable nowadays. When's the last time your laptop broke? And that laptop lives a much harsher life than server HW in a datacenter. Obviously everyone is going to have their own anecdotes about this, but I think it's fair to say that overall the failure rates are quite low.
You know why their (often awful) setups work and consistently beat the major clouds in terms of uptime? No moving parts from K8s and all the "best practices", and, most importantly, nobody "fixing" the working setup until it doesn't work. Ironically, they get better uptime by avoiding all the things that are marketed as improving uptime.
I've read a few horror stories, but I always thought it wouldn't happen to me :)
> It's not my problem, I opened a ticket, now I'm going to get lunch, hope it's back up soon.
That's a good way of thinking about it.
Try a different managed service. We've been using Render for a year with no DB outages, although we have gone down with Cloudflare several times.
As far as DBs go, I believe Amazon RDS is quite reliable. I think Render uses it under the hood.
You could also consider AWS ECS directly with RDS.
At my work we pay a boring, regional VPS host that is not fancy. In fact, it's maybe a few levels above "your 2000s web host, with a LAMP stack, an FTP login and a bad admin panel". Just a bit above that.
However, they ALWAYS pick up the phone on the 3rd ring with a capable, on-call Linux sysadmin who has good general DB, services, networking, DNS, and email knowledge.
The bonus is that with such a simple stack you rarely need to phone them, because the thing just works.
Most cloud outages are self-inflicted by the endless churn of reinventing things, not actual hardware failure. Simply not touching the working system would boost reliability and uptime, but then a lot of people would lose the justification for their salaries, so it won't happen.
Wait, customer support with a competent sysadmin? You're not making this up? It sounds ethereal.
This is the way... you live in operator heaven.
Lower prices come with a cost. I am not a fan of AWS, but they do have higher reliability.
I've had an RDS instance fail in an equally weird way that required manual AWS operator intervention 24 hours later; until then we were effectively locked out of our data and had to restore from a recent backup and rebuild the missing data from logs to bring the service back up in the meantime.
All the supposed "savings" of using managed services to cut staff costs evaporated immediately. No refund from the provider, obviously, despite it being an edge case in their implementation.
We were on AWS for a while. The complexity was way higher than what our team could manage. DOKS is simpler, and this is the first major issue we've hit in many months.
The font color implies this comment is downvoted, but I earnestly encourage readers to take very seriously the difference in SLOs and SLAs between high-cost vendors like AWS and GCP and low-cost vendors like DigitalOcean. Read their docs; do not assume DO is "the same, but lower cost."
… are the published SLAs worth any more than toilet paper?
I think it boils down to who offers the highest quality / $, and that's an impossible metric to really measure except via experience.
But with a number of the "big" clouds, there's what the SLA says, and then there's the actual lived performance of the system. Half the time the SLA weasels out of the outage: e.g., "the API works" is not in SLA scope for a number of cloud services, only things like "the service is serving your data". Your database is up? SLA. You can make API calls to modify it? Not so much. VMs are running? SLA. API calls to alloc/dealloc? No. Support responded to you? SLA. The response contains any meaningful content? Not so fast.
Even if your outage is covered by the SLA, getting that SLA to action often requires a mountain of work: I have to prove to the cloud vendor that they've strayed from their own SLA¹ and force them to issue a credit, and often the cost of my time in salary outweighs the benefit of the credit. Oftentimes the exchanges with support seem to reveal that the cloud provider has, apparently, no monitoring whatsoever that lets them see the actual performance I am experiencing. (E.g., I have had tickets with Azure where they seemed blithely unaware their APIs were returning 500s…)
So, the published numbers are one thing. On paper, IDK, Azure & GCP probably look pretty on par. In practice, I would laugh at that idea.
¹AWS is particularly guilty of this; I could summarize their support as "request ID or GTFO".
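Given the "request ID or GTFO" stance, it pays to collect that evidence as you go rather than reconstruct it afterwards. A rough sketch of the idea with boto3 (S3 is just an example service; the CSV file and the helper are my own illustration, not anything AWS requires):

```python
import csv
import time

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def logged_call(api_fn, **kwargs):
    """Run one API call and record its request id, HTTP status and latency."""
    start = time.time()
    error = None
    try:
        meta = api_fn(**kwargs)["ResponseMetadata"]
    except ClientError as err:
        meta = err.response["ResponseMetadata"]
        error = err
    with open("api_evidence.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            meta.get("RequestId"),
            meta.get("HTTPStatusCode"),
            round(time.time() - start, 3),
        ])
    if error is not None:
        raise error

# Every call leaves a line of evidence behind, request ID included.
logged_call(s3.list_buckets)
```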
AWS frequently has outages. us-east-1, anyone?
AWS designs and implements their foundational services holistically. I can understand that the services "higher up the stack" may not always feel this way to AWS customers. However, the foundation of VPCs, EC2, EBS and S3 is very strong.
If the word "production" is supposed to really mean something to you, move your workload to Google Cloud, or to AWS, or onto https://cast.ai
Disclaimer: I have no commercial affiliation with Cast AI.
Obligatory: do you actually need Kubernetes? I struggle to imagine any tiny startup that does.
As a solo dev who just started his second cluster a few days ago... I like it.
Upfront costs are a little higher than I'd like. I'm paying $24 for a droplet + $12 for a load balancer, plus maybe $1 for a volume.
I could probably run my current workload on a $12 droplet, but apparently Cilium is a memory hog, which makes the smaller droplet infeasible, and it doesn't seem practical to skip the load balancer.
But now I can run several distinct apps running different frameworks and versions of PHP, Node, Bun, nginx, whatever, and spin them up and tear them down in minutes, and I kind of love that. And if I ever get any significant number of users, I can press a button and scale up or out.
I don't have to muck about with pm2 or supervisord or cronjobs, that's built in. I don't have to muck about with SSL certs/certbot, that's built in.
I have SSO across all my subdomains. That was a little annoying to get running (it took a day and a half to figure out), but it was a one-time thing and the config is all committed in YAML, so if I ever forget how it works I have something to reference instead of trying to remember 100 shell commands I randomly ran on a naked VPS.
Upgrades are easy. I can upgrade the distro or whatever package without much fuss.
The downside is that deploys take a minute or two instead of being sub-second.
It took weeks of tinkering to get a good DX going, but I've happily settled on DevSpace. Again, it takes a couple of minutes and probably oodles of RAM to start up instead of milliseconds, but I can maintain 10 different projects without trying to keep my dev machine in sync with everything.
So some trade-offs but I've decided it's a net win after you're over the initial learning hump.
> I can run several distinct apps running different frameworks and versions
> don't have to muck about with pm2 or supervisord or cronjobs, that's built in. I don't have to muck about with SSL certs/certbot
But doesn't literally any PaaS or provider with a "run a container" feature (AWS Fargate/ECS, etc.) fit the bill without the complexity, moving parts and failure modes of K8s?
K8s makes sense when you need a control plane to orchestrate workloads on physical machines - its complexity and moving parts are somewhat justified there because that task is actually complex.
But to orchestrate VMs from a cloud provider, where the hypervisor and control plane already offer all of the above? Why take on the overhead of layering yet another orchestration layer on top?
As the sibling comment already mentioned, k8s is not much more complex once you're past the learning curve. I used to host with EC2 + scripts. K8s actually solves a lot of problems that you'd have to solve yourself anyway.
Running Kubernetes in a managed environment like DO is no harder than using docker compose.
Why did you feel the need to have ChatGPT write this for you instead of writing it yourself? Don't think that it's not completely and blindingly obvious.
On the note of managed services, running postgres or mysql is so much easier these days. Just run postgres on bare metal dedicated servers and save tons of money and time. And the reduced complexity actually leads to more reliability and less maintenance.
Your grand total of one submission is a link to Razer.com. When you actually contribute something to this community, then perhaps you can make a statement like that. Even then, probably not the best.
It's 100% absolutely positively written by AI. Why are you going to bat to defend a likely made up story copy pasted straight out of ChatGPT?
Your grand total of zero submissions doesn't even exist so why don't you contribute something to this community instead of complaining about me?
I am contributing by admonishing you for not contributing anything worthwhile at all while complaining about others' content. Learn some awareness.
This thread maybe needs this https://news.ycombinator.com/newsguidelines.html :)