I don’t fully understand the problem this is trying to solve. Or at least, if this solves your problem then it feels like you have bigger problems?
If you have staging/production deployments in CI/CD and have your Kubernetes clusters managed in code, then adding feature deployments is not any different from what you have done already. Paying for a third party app seems (to me) both a waste of money and a problem waiting to happen.
How we do it: For a given Helm chart, we have three sets of values files: prod, staging and preview. An Argo application exists for each prod, staging and preview instance.
When a new branch is created, a pipeline runs that renders a new preview chart (with some variables based on the branch/tag name), creates a new Argo application and commits this to the Kubernetes repo. Argo picks it up, deploys it to the appropriate cluster and that’s it. Ingress hostnames get picked up and DNS records get created.
When the branch gets deleted, a job runs to remove the Argo application, and that’s the whole cleanup.
It’s the same for staging and production. I really wouldn’t want a different deployment pipeline for preview environments - that just increases complexity and the chances of things going wrong.
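To make the per-branch step concrete: it roughly boils down to generating an Argo Application manifest and committing it to the repo. A simplified sketch, where the repo URL, chart path, hostname and cluster name are all made up:

```python
# Sketch of the per-branch pipeline step: build an Argo CD Application
# for a preview instance and print it as YAML (the pipeline would commit
# this file to the kubernetes repo). Repo URL, chart path, hostname and
# destination cluster below are placeholders, not real values.
import re
import yaml  # pip install pyyaml

def preview_application(branch: str) -> dict:
    # Turn the branch name into a DNS-safe slug for names and hostnames.
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")[:40]
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": f"myapp-preview-{slug}", "namespace": "argocd"},
        "spec": {
            "project": "previews",
            "source": {
                "repoURL": "https://git.example.com/org/charts.git",
                "targetRevision": branch,
                "path": "charts/myapp",
                "helm": {
                    "valueFiles": ["values.yaml", "values-preview.yaml"],
                    "parameters": [
                        {"name": "ingress.host",
                         "value": f"{slug}.preview.example.com"},
                    ],
                },
            },
            "destination": {"name": "preview-cluster",
                            "namespace": f"preview-{slug}"},
            # prune=True means deleting the Application also removes the
            # deployed resources, which is what the branch-deletion job relies on.
            "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
        },
    }

if __name__ == "__main__":
    print(yaml.safe_dump(preview_application("feature/new-ingress"), sort_keys=False))
```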
Great way to apply the Kubernetes knowledge you’ve gathered!
But I find the pricing tough and I don't like to give 3rd party tools that level of access to my clusters.
I know it’s early stage, but I see several problems: right now it seems to be GH only, and a lot of people are on self-hosted GitLab. Does it only support Helm, or also Kustomize and raw extra manifests? What about GitOps?
I've built similar solutions for clients, mostly CI-based only, often with Flux/ArgoCD support. The thing I found difficult was showing a diff of the rendered manifests while also applying the app. Since I'm not a fan of the rendered-manifests pattern, this often involved extra branches. Is this handled by the app?
Thanks for the thoughtful feedback — these are all fair concerns.
The app doesn’t access your production clusters. Previews run in managed, isolated clusters, and each preview gets its own namespace with deny-all NetworkPolicies, quotas, and automatic teardown. That said, if the concern is about installing charts into any external K8s cluster at all, then I agree this won’t be a fit — and that’s a reasonable constraint.
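To give a rough idea of what those per-namespace guardrails look like, here’s a simplified sketch of the default-deny NetworkPolicy and ResourceQuota pattern (the quota numbers are illustrative, not the actual limits):

```python
# Rough sketch of the per-namespace guardrails: a default-deny NetworkPolicy
# plus a ResourceQuota. The quota numbers are illustrative, not the real limits.
import yaml  # pip install pyyaml

def namespace_guardrails(namespace: str) -> list:
    deny_all = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": namespace},
        "spec": {
            # An empty podSelector matches every pod in the namespace; listing both
            # policy types with no rules blocks all ingress and egress by default.
            "podSelector": {},
            "policyTypes": ["Ingress", "Egress"],
        },
    }
    quota = {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "preview-quota", "namespace": namespace},
        "spec": {
            "hard": {
                "pods": "20",
                "requests.cpu": "2",
                "requests.memory": "4Gi",
                "limits.cpu": "4",
                "limits.memory": "8Gi",
            },
        },
    }
    return [deny_all, quota]

if __name__ == "__main__":
    print(yaml.safe_dump_all(namespace_guardrails("preview-example"), sort_keys=False))
```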
It’s GitHub-first today simply because that’s where I personally hit the problem. GitLab is supported via the REST API using a personal access token that you can scope as tightly as you want, so you can trigger previews from GitLab CI today.
Native GitLab App integration (auto-triggering on MRs, status updates, etc.) is something I’ve thought about, but I wanted to validate the core workflow first.
It is intentionally Helm-only for now. The specific pain I was trying to solve was reviewing Helm changes — values layering, dependencies, and template changes — by seeing them running in a real environment, rather than trying to generalise across all deployment models.
I’m not trying to replace or compete with Flux or Argo CD. The idea is to validate Helm changes before they land in a GitOps repo or get promoted through environments — essentially answering the question of “does this look OK and actually work when deployed, so is it safe to merge?”
It doesn’t expose rendered manifest diffs today, but I agree that would be valuable — especially a readable “what changed after Helm rendering” view tied back to the PR. I’m still thinking through the cleanest way to do that without adding a lot of complexity to the workflow.
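In the meantime, the usual workaround is to render both sides with `helm template` and diff the output yourself. A rough sketch of that, where the release name, chart paths and values files are placeholders:

```python
# Rough sketch of the workaround: render the chart from two checkouts with
# `helm template` and print a unified diff. The chart paths and values files
# below are placeholders (e.g. two git worktrees, one on main and one on the PR).
import difflib
import subprocess

def render(chart_dir: str, values_file: str) -> list:
    # `helm template` renders the manifests locally without touching a cluster.
    result = subprocess.run(
        ["helm", "template", "myapp", chart_dir, "-f", values_file],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.splitlines(keepends=True)

if __name__ == "__main__":
    before = render("worktrees/main/charts/myapp",
                    "worktrees/main/charts/myapp/values.yaml")
    after = render("worktrees/pr/charts/myapp",
                   "worktrees/pr/charts/myapp/values.yaml")
    print("".join(difflib.unified_diff(before, after, fromfile="main", tofile="pr")))
```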
Appreciate you taking the time to give your feedback. Thanks.
Congrats! I could see the value of this, for sure. I handle this problem by spinning up a preview environment in a namespace. Each branch gets its own and a script takes care of setting up namespaces for a couple of shared resources for staging (rabbit and temporal).
It was a lot of work setting that up though. Preview environments based on a helm deploy makes sense. I wish this had been available before I did all that.
Thanks for the feedback — you’re spot on about the setup this is trying to speed up. The namespace-per-branch approach works well (and that’s what this does), but the setup around ingress, DNS, secrets, and cleanup tends to be the real time sink. Glad it resonates.
Thanks again.