Starting a new service was a path towards promotion at AWS, so they ended up launching 100s of services to the point where there were 10 different ways to do everything. I’m glad they’re culling them.
There were (are?) challenges in product placement and compatibility as well. Yes, some “services” are very niche and don’t have a credible target market/adoption plan. My old service is one of those on this list.
But there are others where a “service” is really a _feature_ of another existing product offering. But for handwave reasons it can be extremely difficult to implement it that way, and a new API, service name, etc. make the development tractable. To throw a stone: the CloudWatch org is an interesting example of this, where it’s both too broad and too narrow, leading to an umbrella of undersized feature/services.
I guess this explains why AWS manages to run the whole gamut from the most generally applicable tooling, such as EC2, to something I’ve never heard of that sounds specific enough to just be its own business: ‘AWS HealthOmics - Variant and Annotation Store’.
Other cloud vendors aren't much better, every time I go back it is a new world in some fashion.
Amazon Glacier on the list is a pretty big surprise to me.
It was consolidated into S3 as a storage class: https://docs.aws.amazon.com/amazonglacier/latest/dev/introdu...
That's interesting, as Glacier was based on a completely different hardware implementation for a different use case.
If you click the Glacier link, it seems like it's some sort of standalone service and API that's very old. The page says to use S3's Glacier storage tier instead, so no change for the majority of folks that are likely using it this way
Same capability is now just a storage class in S3.
Read the header here for an explanation, it's not going away.
https://docs.aws.amazon.com/amazonglacier/latest/dev/introdu...
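As a hedged sketch of what “just a storage class” means in practice: archiving via S3 no longer involves the standalone vault API at all; you set `StorageClass` on an ordinary `PutObject`. The bucket and key names below are made up, and the helper that builds the request kwargs is my own illustration, not an AWS API.

```python
# Sketch (hypothetical bucket/key names): with Glacier folded into S3,
# archiving is a plain PutObject with StorageClass set -- no vault API.

# Archive-tier storage classes the S3 API accepts.
ARCHIVE_CLASSES = {"GLACIER_IR", "GLACIER", "DEEP_ARCHIVE"}

def archive_put_args(bucket: str, key: str, body: bytes,
                     storage_class: str = "DEEP_ARCHIVE") -> dict:
    """Build the kwargs for an s3.put_object() call targeting an archive tier."""
    if storage_class not in ARCHIVE_CLASSES:
        raise ValueError(f"not an archive storage class: {storage_class}")
    return {"Bucket": bucket, "Key": key, "Body": body,
            "StorageClass": storage_class}

# With real credentials, the call would be roughly:
#   import boto3
#   boto3.client("s3").put_object(**archive_put_args("my-archive", "backup.tar", data))
args = archive_put_args("my-archive", "2024/backup.tar", b"...")
print(args["StorageClass"])  # DEEP_ARCHIVE
```

Reads still go through the usual `RestoreObject`/restore workflow for the colder tiers, but it is all one bucket namespace rather than a separate vault concept.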
That's probably not the Glacier most people are using now. I still have it from ancient times so I got this email:
----
Hello,
After careful consideration, we have decided to stop accepting new customers for Amazon Glacier (original standalone vault-based service) starting on December 15, 2025. There will be no change to the S3 Glacier storage classes as part of this plan.
Amazon Glacier is a standalone service with its own APIs, that stores data in vaults and is distinct from Amazon S3 and the S3 Glacier storage classes [1]. Your Amazon Glacier data will remain secure and accessible indefinitely. Amazon Glacier will remain fully operational for existing customers but will no longer be offered to new customers (or new accounts for existing customers) via APIs, SDKs, or the AWS Management Console. We will not build any new features or capabilities for this service.
You can continue using Amazon Glacier normally, and there is no requirement to migrate your data to the S3 Glacier storage classes.
Key Points:
* No impact to your existing Amazon Glacier data or operations: your data remains secure and accessible, and you can continue to add data to your Glacier Vaults.
* No need to move data to S3 Glacier storage classes: your data can stay in Amazon Glacier in perpetuity for your long-term archival storage needs.
* Optional enhancement path: if you want additional capabilities, S3 Glacier storage classes are available.
For customers seeking enhanced archival capabilities or lower costs, we recommend the S3 Glacier storage classes [1] because they deliver the highest performance, most retrieval flexibility, and lowest cost archive storage in the cloud. S3 Glacier storage classes provide a superior customer experience with S3 bucket-based APIs, full AWS Region availability, lower costs, and AWS service integration. You can choose from three optimized storage classes: S3 Glacier Instant Retrieval for immediate access, S3 Glacier Flexible Retrieval for backup and disaster recovery, and S3 Glacier Deep Archive for long-term compliance archives.
If you choose to migrate (optional), you can use our self-service AWS Guidance tool [2] to transfer data from Amazon Glacier vaults to the S3 Glacier storage classes.
If you have any questions about this change, please read our FAQs [3]. If you experience any issues, please reach out to us via AWS Support for help [4].
[1] https://aws.amazon.com/s3/storage-classes/glacier/
[2] https://aws.amazon.com/about-aws/whats-new/2021/04/new-aws-s... implementation-amazon-s3-glacier-re-freezer/
[3] https://aws.amazon.com/s3/faqs/#Storage_Classes
[4] https://aws.amazon.com/support
Is the implication these services are used so little it isn't worth AWS continuing to invest in developing or maintaining beyond bare-minimum KTLO ops?
Some of them also ended up getting consolidated into other larger services
They should get real and include App Runner on this list.
So much promise as a Heroku alternative with all the AWS integrations, but it's basically dead now. Not a peep from them on their public roadmap over at GitHub.
We're having to go back to Fargate with all the operational overhead that entails.
If you are fine with running lots of apps on one beefy machine, the project I am building https://github.com/openrundev/openrun provides a similar abstraction as App Runner and Cloud Run (automatically deploy web apps from source). It supports scaling down to zero, but does not yet scale an app beyond one container.
Super cool project!
If you ever get this working for Kubernetes, I'd love to add it to https://canine.sh for a one click deploy, been looking for something like this.
Genuinely curious: what would make you trust a PaaS-like platform again? Are you looking for control/transparency? Is it about pricing?
Amazon S3 Object Lambda seems like a massive category to deprecate
Yeah, that will greatly impact one of our products. As usual with AWS documentation, it's not very clear what the migration path is.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon...
I wonder why they're going that direction too, if anything those lambdas must be making money for them.
At least it’s not S3 triggers for lambdas, just about gave me a heart attack.
Oh, maybe that's what we're using. Made it months ago and I'm not 100% sure. Lambda on putObject.
That sounds like it might be a Lambda trigger to me. The feature being deprecated is Lambdas that operate at the S3 API level.
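To make the distinction concrete, here's a minimal sketch of an S3 Object Lambda handler, the API-level feature being deprecated: it intercepts GetObject itself, fetches the original via a presigned URL carried in the event, and returns a transformed body through WriteGetObjectResponse. The uppercasing transform is an invented stand-in for real logic.

```python
# Sketch of an S3 Object Lambda handler (the deprecated API-level feature).
# The transform (uppercasing) is purely illustrative.

def transform(body: bytes) -> bytes:
    """Pure transform applied to the object on the GetObject path."""
    return body.upper()

def handler(event, context):
    # Object Lambda events carry a presigned URL for the original object
    # plus routing info for WriteGetObjectResponse.
    ctx = event["getObjectContext"]
    # With real AWS access, the body would be roughly:
    #   import urllib.request, boto3
    #   original = urllib.request.urlopen(ctx["inputS3Url"]).read()
    #   boto3.client("s3").write_get_object_response(
    #       Body=transform(original),
    #       RequestRoute=ctx["outputRoute"],
    #       RequestToken=ctx["outputToken"])
    return {"status_code": 200}
```

The caller never sees the stored bytes, only the transformed response; that interception of the S3 API itself is what goes away, not bucket event triggers.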
Yeah, it's an Event Notification that triggers a Lambda that acts on the bucket. I had to give it permissions to the bucket, so I guess it's outside it :). We'll see!
Yeah, I agree. We're currently using it to dump original images into a bucket at a path... then the attached AWS Lambda function generates all the thumbnails.
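That thumbnail pattern, a Lambda fired by s3:ObjectCreated events rather than one sitting in the GetObject path, isn't affected by this deprecation. A hedged sketch of the event-parsing half (bucket names and the thumbnail step are illustrative; note that object keys arrive URL-encoded in event notifications):

```python
# Sketch of the (non-deprecated) event-notification pattern: a Lambda fired
# by s3:ObjectCreated:* parses bucket/key pairs from the event and processes
# each object. The thumbnailing step is left as a comment.
from urllib.parse import unquote_plus

def objects_from_event(event: dict) -> list:
    """Return (bucket, key) pairs from an S3 event notification.
    Keys arrive URL-encoded, so a space comes through as '+'."""
    out = []
    for rec in event.get("Records", []):
        s3 = rec["s3"]
        out.append((s3["bucket"]["name"], unquote_plus(s3["object"]["key"])))
    return out

def handler(event, context):
    for bucket, key in objects_from_event(event):
        # With real AWS access, fetch the original and write a thumbnail, e.g.:
        #   img = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        #   s3.put_object(Bucket=bucket, Key=f"thumbs/{key}", Body=make_thumb(img))
        print(f"would thumbnail s3://{bucket}/{key}")
```

The URL-decoding is the classic gotcha here: a key like `my photo.jpg` shows up in the event as `my+photo.jpg`.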
Glad to see this, but there’s a lot more cleanup to do. AWS went from having a few core excellent services with a strong innovation pipeline to a chaotic “jack of all trades, master of none” approach with no clear product strategy. Some of the recent panic trying to catch up on AI has resulted in even more slop thrown at the wall hoping something sticks.
We love you, but focus on the core infrastructure bits and stop chasing everything that moves! Your customers build better apps and services than you do… just build great building blocks and folks will be very happy.
Wow. I can’t believe Glacier is on that list.
Does “not accessible to new customers” mean a new test account that rolls into the same parent org would no longer have access either?
It's the standalone Glacier service, which I wasn't even aware existed - nothing changes for the S3 Glacier storage class.
That original Glacier API was infamous for being extremely cheap to write to but prohibitively expensive to read from. Something like 10 cents per list objects request or something ridiculous like that. Can't remember the specifics but I do remember reading blog posts from people that wanted to restore a couple files and had to pay several thousand dollars for that.
I believe that they did alter the pricing at some point. Regardless, the move to just a storage class on S3 made everything much simpler.
I just implemented Incident Manager org-wide :(
I knew the service was rough, but it had the right building blocks plus CloudFormation/CDK support and has been working well.
My lack of trust in AWS is increasing; feels like a Google move.
Amazon following Google Cloud's lead in shutting down services is not a good sign.
Where will the money and resources to develop AWS AI come from? Not from Incident Manager, that is for certain.
Ironically that's a domain where AI could be genuinely helpful and reduce MTTR.
This is why you should never use niche aws services.
> This is why you should never use niche aws services.
Niche? From whose perspective? Anyway, if AWS is offering a service, why would you ever need to consider 'is this too niche for long-term support?'