2 comments

  • munch-o-man 10 hours ago

    author here: I've been running this in my homelab Nomad/Consul/Vault cluster for a while now and it has been working great. My Temporal job that does nightly backups of Nomad/Consul/Vault/Postgres now has an extra step to push to s3-orchestrator too, and if it gets an error that no backend has available space, it deletes the oldest backup of that type and then tries again. Right now I have it doing "spread" routing between OCI and Cloudflare S3-compatible storage, because they offer the best always-free S3 storage and I was already using Cloudflare and running four Nomad clients on Oracle Cloud connected to my cluster via WireGuard (I would never give Oracle a cent of my money, but when they're offering 26 GB of compute in the always-free tier, I'll take every bit of it, thanks).
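    The delete-oldest-and-retry step in that backup job can be sketched roughly like this (a minimal sketch only; `NoSpaceError`, `push_with_eviction`, and the upload/delete callables are hypothetical stand-ins, not the orchestrator's or Temporal's actual API):

    ```python
    class NoSpaceError(Exception):
        """Hypothetical error for 'no backend has available space'."""


    def push_with_eviction(upload, delete_oldest_backup, max_evictions=3):
        """Try to push a backup; on a no-space error, delete the oldest
        backup of the same type and retry, up to max_evictions times.
        Returns how many old backups had to be evicted."""
        for evictions in range(max_evictions + 1):
            try:
                upload()
                return evictions
            except NoSpaceError:
                if evictions == max_evictions:
                    raise  # give up and let the job fail loudly
                delete_oldest_backup()
    ```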

    The coolest way to test this out is to just clone it and then run either:

    make nomad-demo

    make kubernetes-demo

    That will spin up the docker-compose setup used for integration testing (two MinIO instances and a Postgres), then start Kubernetes via k3d or Nomad via -dev mode, build the Docker image, ingest it, run it, and print out a handy list of URLs for the various dashboards/metrics/UIs/etc. The Grafana dashboard in the repo is automatically ingested by Grafana in the two "-demo" modes, so you can literally run one command and immediately play with the UI, see visualizations of the metrics, and start experimenting in a safe, sandboxed environment.

    For people who aren't just trying to get as much free storage as possible, the storage and API/ingress/egress quotas can still be super useful for cost management, since you can cap yourself.
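    A quota gate like that could look something like the following (my own sketch of the idea; the `Quota` fields, `QuotaExceeded`, and `check_request` are invented names, not the project's real config or API):

    ```python
    from dataclasses import dataclass


    @dataclass
    class Quota:
        """Hypothetical monthly caps; None means unlimited."""
        storage_bytes: int | None = None
        api_calls: int | None = None
        egress_bytes: int | None = None


    class QuotaExceeded(Exception):
        pass


    def check_request(quota, used_storage, used_calls, used_egress,
                      new_bytes=0, egress_bytes=0):
        """Reject a request that would push usage past any configured cap."""
        if quota.api_calls is not None and used_calls + 1 > quota.api_calls:
            raise QuotaExceeded("api calls")
        if quota.storage_bytes is not None and used_storage + new_bytes > quota.storage_bytes:
            raise QuotaExceeded("storage")
        if quota.egress_bytes is not None and used_egress + egress_bytes > quota.egress_bytes:
            raise QuotaExceeded("egress")
    ```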

    The other cool use: if you need data replicated across two different clouds for [reasons], this will do all that work for you. Set a replication factor and your application doesn't have to know anything about it... just point it at this instead of the actual S3 backend.
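    Combining that with "spread" routing, the fan-out a proxy like this hides from the application might look roughly like this (my own guess at the semantics; `Backend`, `spread_replicate`, and the least-full selection rule are assumptions for illustration):

    ```python
    from dataclasses import dataclass, field


    @dataclass
    class Backend:
        """Toy stand-in for an S3-compatible backend (e.g. OCI or R2)."""
        name: str
        bytes_used: int = 0
        objects: dict = field(default_factory=dict)

        def put(self, key, data):
            self.objects[key] = data
            self.bytes_used += len(data)


    def spread_replicate(backends, key, data, replication_factor=2):
        """Pick the replication_factor least-full backends ('spread')
        and write the object to each; the caller never sees the fan-out."""
        if replication_factor > len(backends):
            raise ValueError("not enough backends for requested replication")
        targets = sorted(backends, key=lambda b: b.bytes_used)[:replication_factor]
        for b in targets:
            b.put(key, data)
        return [b.name for b in targets]
    ```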

    Also, the ability to drain a backend could be super useful if you're trying to get off a certain cloud without taking downtime.

    This is engineered to be highly durable... instead of failing outright it degrades, and it returns to healthy when conditions improve and Postgres is back. It stops all writes while Postgres is down, since no usage would be tracked otherwise.
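    That degraded mode boils down to a simple rule: reads keep working, writes are refused until the metadata store returns. A sketch of the gate (the function name and the 503 status code are my assumptions, not the project's confirmed behavior):

    ```python
    def handle_request(method, postgres_up):
        """Refuse mutating requests while the metadata DB is down, since
        usage couldn't be tracked; reads stay available. Recovers
        automatically once postgres_up is True again."""
        if method in ("PUT", "POST", "DELETE") and not postgres_up:
            return 503  # degraded: writes rejected, no failure state to clear
        return 200
    ```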

    Also, if you have an existing bucket that you want to bring under management by s3-orchestrator, it has sync functionality... the only thing it can't import is the monthly API-call/ingress/egress usage from before the sync.

    I'm open to all advice and comments. Pretty nervous sharing this.
