The docker quickstart asks me to mount the docker socket, an incredibly dangerous act that fundamentally breaks the isolation that containers are supposed to provide. I see no attempt to explain why this level of access is necessary.
I can guess why you think you need it, but whatever the reason it's not good enough. If you need job workers or some other kind of container, tell me how to run those with docker compose.
It breaks the isolation for that one container, the rest are just fine. That's clearly done in order to dynamically spin up CI/CD containers, which you obviously can't do with something like compose.
I get why you don't want to do that on a machine running other things, and I wouldn't either, but you're pretending this is some strange, unnecessary, unexpected thing to require, when in reality basically everything does it this way and there isn't really a good alternative without a ton of additional complexity that the vast majority of people won't need.
> It breaks the isolation for that one container, the rest are just fine.
Wrong. A container with access to the socket can compromise any other container, and start new containers with privileged access to the host system.
It compromises everything. This is a risk worth flagging.
If you want to be able to spin up CI/CD containers, don’t you kinda already need to have docker socket access? In that case, you’ve already decided that this isn’t a threat vector you’re concerned about. Yes, this probably makes it easier, but the ability to start up new containers for CI/CD is what makes this threat possible.
So, I’m not sure this is something I’d worry much about. Perhaps they should flag this in the documentation as something to be noted, but otherwise, I’m not sure how else you get this functionality. Is there another way?
It's a multi-user Git / CI/CD / project management platform. If you introduce this in your organization, a single vulnerability can take down the entire system and any other application running on the same host. You can't just "decide that this isn’t a threat vector" without taking the use case into account. Or at least it should come with alarm bells warning users that it's unsafe.
What is "entire system" here? I'd run something like that in a VM, so the "entire system" would be nothing but the app itself.
If there is an RCE vuln in the app, your users are just as unsafe whether it's running as root on the host or as nobody in a container. The valuable data is all inside.
Running a binary as a non-root user with scoped access to Docker commands seems more appropriate to me.
What do you mean by scoped access? A bunch of regexes checking that the app doesn't add any dangerous flags to docker run? That sounds like a fun CTF challenge to me, which is not a good thing for a security feature...
Yes, that's exactly what I said. The container with the socket is not isolated, but all the other containers are, including the CI containers, which is the whole point.
No containers, existing or potential, are isolated from the one with socket access.
The code inside those containers is isolated, which is the whole point.
Only the app or runner container has socket access, which it uses to create new containers without socket access, and it runs user code in there. If you get RCE in the app/runner, you get RCE on the host, yes, no shit. But if you get RCE in any other container on the system, you're properly contained.
It appears you fundamentally don't understand what mounting the docker socket is doing. I'm sorry to give you homework but you need to go look it up to participate in this conversation.
> The code inside those containers is isolated, which is the whole point.
A container with socket access can replace code or binaries in any other container, read any container's volumes and environment variables, replace whole containers, etc. That does not meet any definition of "isolated".
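To make that concrete, here is a rough sketch (Python, using the docker SDK) of what any process that can reach a mounted socket is able to do; the image name and paths are placeholders for illustration, not anything OneDev actually ships:

```python
# Sketch only: what a process with access to the mounted Docker socket can do.
# Assumes the docker Python SDK is installed and /var/run/docker.sock is mounted in.
import docker

client = docker.DockerClient(base_url="unix:///var/run/docker.sock")

# Read every other container's environment (API keys, DB passwords, ...).
for c in client.containers.list():
    print(c.name, c.attrs["Config"]["Env"])

# Start a fresh privileged container with the host filesystem bind-mounted,
# which is effectively root on the host.
out = client.containers.run(
    "alpine",                            # placeholder image
    "cat /host/etc/shadow",
    volumes={"/": {"bind": "/host", "mode": "rw"}},
    privileged=True,
    remove=True,
)
print(out.decode())
```

Note that with a default (rootful) Docker install, none of this needs root inside the calling container; the daemon on the other end of the socket does the work as root on the host.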
But those containers DON'T have socket access. ONE container has socket access, then it creates other containers WITHOUT socket access. Those containers ARE isolated. Since the untrusted (user provided) code runs in those, the setup is reasonably secure. An RCE in OneDev is an RCE on the host, but that's a completely different threat model. The important part is that user code is isolated, which it is.
> The important part is that user code is isolated, which it is.
It isn't for the reasons I stated in previous comments, which you are unable to refute. Your dogged insistence to the contrary is bizarre.
I hope you do not work in this area.
I actually don't know who is misunderstanding who here. I work with containers daily and this is how I understand this situation:
The runner (trusted code) is tasked with taking job specifications from the user (untrusted code) and running them in isolated environments. Correct?
The runner is in a container with a mounted docker socket. It sends a /containers/create request to the socket. It passes a base image, some resource limits and maybe a directory mount for the checked out repository (untrusted code). The code could alternatively be copied instead of mounted. Correct?
The new container is created by dockerd without the socket mounted, because that wasn't specified by the runner ("Volumes": [] or maybe ["/whatever/user/repo/:/repo/"]). Correct?
The untrusted code is now executed inside that container. Because the container was created with no special mounts or privileges, it is as isolated as if it was created manually with docker run. Correct?
The job finishes executing, the runner uses the socket to collect the logs and artifacts, then it destroys the container. Correct?
So please tell me how you think untrusted code could get access to the socket here?
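For anyone following along, here is a minimal sketch of that flow with the Python docker SDK; the image, resource limits and checkout path are placeholders, and this is my reading of the steps above rather than OneDev's actual runner code:

```python
# Sketch of the runner flow described above (not OneDev's actual runner code).
# Only this process, the runner, ever talks to the mounted socket.
import docker

client = docker.DockerClient(base_url="unix:///var/run/docker.sock")

job = client.containers.create(
    "debian:bookworm",                                    # base image (placeholder)
    command=["/bin/sh", "-c", "cd /repo && ./build.sh"],  # untrusted job steps
    volumes={"/srv/checkouts/job-1234": {"bind": "/repo", "mode": "rw"}},
    mem_limit="2g",
    nano_cpus=2_000_000_000,                              # 2 CPUs
    # Crucially: no /var/run/docker.sock bind, no privileged=True.
)
job.start()
job.wait()                       # block until the untrusted job finishes
print(job.logs().decode())       # collect logs; artifacts e.g. via job.get_archive()
job.remove()
```

The job container gets the bind-mounted checkout and the resource limits and nothing else; the socket is simply never passed to it.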
Would rootless docker help? (Potentially even running that specific workflow with its own dedicated user)
It's poor security practice that shouldn't be overlooked. Mounting the Docker socket effectively allows the entire application to run with root privileges on the host. Given that this seems to be a multi-tenant application, the implications are even more concerning. The component responsible for spinning up CI/CD containers shouldn't operate within the security boundary of the rest of the application.
On a related note, I believe Docker's design contributes to this issue. Its inflexible sandboxing model encourages such risky practices.
Apparently multiple people were triggered by the idea that their organization's Git forge, CI/CD, and project management shouldn't be a single system running as root. I can't fathom why.
No shit, I don't know why everyone is trying to explain Docker basics to me. All I'm saying is that socket access is required to spin up containers and it's nothing out of the ordinary for this use case.
Of course it's an issue if you're using Docker to isolate OneDev from the rest of the apps running on your systems. But that's not everyone's use-case. Anything that intentionally spins up user-controlled containers should be isolated in a VM. That's how every sane person runs GitLab runners, for example.
Isn't kaniko designed to solve this?
As far as I know kaniko handles the "I'm a CI job inside a container and I want to build a container image" part. The reason CI/CD runners need socket access is to create those job containers in the first place. Using Podman to create job containers inside the app Docker container would be a solution, but Podman containers have many subtle incompatibilities with Docker and its ecosystem, so it makes sense they wouldn't want to use that, at least by default.
I guess one way of doing it would be to have two instances of rootless docker running: one meant to run all the containers, including this project, and the other exposed to the app container as the Docker engine used only for CI/CD jobs.
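If it helps, a rough sketch of what that split could look like from the app's side, assuming a second rootless dockerd whose socket lives under the CI user's runtime directory (the socket path and UID here are assumptions, not anything this project documents):

```python
# Sketch: send CI jobs to a dedicated rootless Docker daemon instead of the
# daemon running the forge itself. Rootless dockerd normally listens on
# $XDG_RUNTIME_DIR/docker.sock for its user; the UID below is made up.
import docker

ci_engine = docker.DockerClient(base_url="unix:///run/user/1001/docker.sock")

output = ci_engine.containers.run(
    "alpine",
    "echo 'CI job running on the dedicated rootless daemon'",
    remove=True,
)
print(output.decode())
```

A compromise of that daemon should then be limited to the unprivileged CI user rather than root on the host.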
Iirc that's how GitLab runners do this as well. Not saying that just because someone else does it, it makes it ok, but just putting it out there.
This looks super cool - especially given the changes to github’s leadership.
One minor note - on mobile Safari there didn’t seem to be any state update on pressing buttons, and submission wasn't clear until the backend server seemed to respond. My internet connection is a little slow and it was unclear whether tapping the button worked. I would expect a little loading state, or at least UI showing the button as disabled during submission.
> Code discussion anywhere anytime
> Select any code or diff to start discussion. Suggest and apply changes. Discussions stay with code to help code understanding.
How do the discussions stay with the code? Git notes?
From their docs, apparently it just modifies the source code, although their LOC didn't change when they hit save in their video demo, so I'm not sure.
https://docs.onedev.io/tutorials/code/code-review#free-style...
Seems like a heavier version of Gitea with a pricing section.
I recall looking at it maybe 2-3 years ago and deciding against it. And it never seemed to get traction in the selfhosted community either.
...but I can't for the life of me recall why. Definitely wasn't a radioactive red flag issue...but some aspect around CI wasn't for me.
In general though, with things like this that carry an open license and have a docker image, you're better off trying it yourself than listening to randoms like me.
This is why I dropped it. The CI/CD configurations were some weird proprietary format, whereas gitlab/gitea/forgejo are all (mostly) compatible with my already existing GitHub workflow files.
I evaluated moving from Gitea to OneDev before Gitea had CI. OneDev was usable, I didn't mind it, but I don't run java anywhere else so I decided against adopting it. A few years later and now Gitea/Forgejo are at feature parity.
For me the bus factor was a bit of a red flag, plus I prefer the layout in forgejo/gitea. Also didn't like that there wasn't really any obvious way to link in an external CI, and of course it's written in java so had that to factor in too.
I actually have a side project running on it. The CI documentation is really lacking and it was hard setting it up. Other than that I am happy with it.
Fossil is that you?
Fossil started as "not invented here", but it has grown into something I like a lot more than Git. Knowing how to use a version control system should be an incidental skill (akin to simple shell commands like "cp" and "ln"), not something that needs to be mentioned in a job posting's role description.
I also appreciate that the default workflow for undoing bad merges is a whiteout rather than a true "delete".
To each his own, but having worked with CVS, SVN, Perforce, Git, and Fossil, the centralized model is much less work for release engineering and administration most of the time. If I were a maintainer of the Linux kernel or one of the many Linux distros where you have potentially thousands of contributors to one codebase, I would use git because it scales up better.
However, I wouldn't underestimate the value of scaling down well, especially for all the people around here building some startup out of a barndominium. A VCS that includes its own GUI-based admin tool and is simple enough to be used by some high school intro to web design class is a good thing in my book.
I was turned off by the necessity of using users and permissions; it feels unnecessary for local development and kind of a PITA if you have many repos.
It works exactly as advertised though; my gripes aren't technical.
I can understand that, but it sounds more like an argument in favor of ye-olde CVS than git... Not that I have ever managed, nor desire to manage, my own git server, which would involve its own authentication and administrative tasks.
Note: Before some third person pitches in their condescending two cents on the limitations of CVS, nobody here is recommending CVS.
It looks cool, but I think some/most of the features on the front page are behind a paywall?
I'll add it to my list of things to try out though, having more competition in the space is definitely a good thing.
Ooh! My interest is peaked seeing it is written in Java.
*piqued
Why do people self-host things like this instead of Github or Gitlab? I don't want to maintain more services for my services. Who has time for that.
Our gitea instance had roughly five minutes of downtime in total over the past year, just to upgrade gitea itself. All in the middle of the night. How much downtime has GitHub seen over the same period, and how many people's work was affected by that?
I've been hosting a git service for quite a while now and it's maybe around half an hour per year of maintenance work. It's totally worth it in my opinion; it's so much better in almost every way. One big reason is decentralization: full control of the data, change what you want, and things like the current npmjs attack show the downsides of having everyone using the same thing. And so much more.
Backups, OS upgrades, version upgrades, firewall management, DDoS management. I just find self-hosting to be excessive to do right.
The concerns are valid, but I'd like to point out that managing all that isn't as frightening as it sounds.
If you do small scale (we're talking self-hosted git here, after all), all of these are either a non-issue or a one-time issue.
Figuring out backups and the firewall is the latter. Once figured out, you don't have to worry about them at all, and figuring them out isn't rocket science either.
As for the former: for minimum maintenance, I often run services in docker containers - one service (as in one compose stack) per Debian VM. This makes OS upgrades very stable and, since docker is the only "3rd-party" package, they are very unlikely to break the system. That allows setting unattended-upgrades to upgrade everything.
With this approach, most of the maintenance comes from managing container versions. It's good practice to pin container versions, which does mean some labor when it comes to upgrading them, but you don't always have to stick to the exact version. Many containers have tags for major versions, and these are fairly safe to rely on for automatic upgrades. The manual part, when a new major release comes out, becomes a really rare occasion.
If your services' containers don't do such versioning (GitLab and YouTrack are examples of that), then you aren't as lucky, but bumping a version every few months or so shouldn't be too laborious either.
Now, if DDoS is a concern, there is probably already staff in place to deal with that. DDoS is mostly something for popular public services to worry about, not a private Gitea instance. Such attacks cost money to launch at random and need some actual incentive behind them.
But why keep a private instance out in the open anyway? Put it behind a VPN and then you don't really have to account for security and upgrades as much.
For the GP's DDoS concern, you could just expose the service through a Cloudflare tunnel or something.
tinkering with services and networks and the whole self-hosting concept is pretty fun for many people.
It's not that hard though, and to be honest I trust myself more than a large org like Microsoft to get that right.
One answer might be to avoid LLMs training off the intellectual property that your humans typed out for you. But as LLM code generation tools take off, it's a losing battle for most orgs to prevent staff from using LLMs to generate the code in the first place, so this particular advantage is being subverted.
This probably only matters for 1-2 years tops. LLMs are taking off pretty fast.
As a mirror of my GitHub repositories following the 3-2-1 backup principle.
Especially as self-hosting means losing the community aspect of GitHub. Every potential contributor already has an account. Every new team member already knows how to use it.
You’re assuming people are self-hosting open source projects on their git servers. That’s often not the case. Even if it were, GitHub irked a lot of people by using their code to train Copilot.
I self-host gitea. It took maybe 5 minutes to set up on TrueNAS and even that was only because I wanted to set up different datasets so I could snapshot independently. I love it. I have privacy. Integrating into a backup strategy is quite easy: it goes along with the rest of my off-site NAS backup without me needing to retain local clones on my desktop. And my CI runners are substantially faster than what I get through GitHub Actions.
The complexity and maintenance burden of self-hosting is way overblown. The benefits are often understated, and the deficiencies of whatever hosted service you'd use instead are left unaddressed.
Microsoft/GitHub has no model training. How do you think Copilot works? Also if you provide open source, people and companies are gonna use it.
When I publish open source code, I don't mind if people or companies use it, or maybe even learn from it. What I don't like is feeding it into a giant plagiarism machine that is perpetuating the centralization of power on the internet.
> Microsoft/GitHub has no model training. How do you think Copilot works?
I'm sure if you used that big, smug brain of yours you'd piece together exactly what I meant. Here's a search query to get the juices flowing:
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
Whether you agree with why someone may be opting to self-host a git server is immaterial to why they've done so. Likewise, I'm not going to rehash the debate over fair use vs software licenses. Pretending you don't understand why someone who published code under a copyleft license is displeased with it being locked into a proprietary model used to build proprietary software is willful ignorance. But, again, it makes no difference whether you're right or they're right; no one is obligated to continue pushing open source code to GitHub or any other service.
I mean git was exactly designed to be decentralized
A better question is why does it take any time to maintain a tool like this? I spend zero time maintaining my open-source browser (Firefox). It just periodically updates itself and everything just works. I maybe spend a bit of time maintaining my IDE by updating settings and installing plugins for it, but nothing onerous.
A tool like this is not fundamentally more complex than a browser or a full-fledged IDE.
You ever have a Firefox update where you have to go and change Settings, or adjust your extensions? Never?
I am using 14 different extensions in Firefox. I don't think any of them have broken due to a Firefox update for at least the past 3 years.
The only maintenance I have had to do was when the "I don't care about cookies" extension got sold out, so I had to switch to a fork [1]. That was 2-3 years ago.
[1] https://github.com/OhMyGuus/I-Still-Dont-Care-About-Cookies
because:
1) privacy - don't want projects leaving a closed circle of people
2) compliance - you have to self-host, and gitlab/github are way too expensive for what they provide when open-source alternatives exist
3) you just want to say fuck you to corporate (nothing is free) and join the clippy movement.