> Unfortunately, the gphotos-sync tool stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup.
I didn't even realize this tool existed. I tried something like it a while back, but it didn't work to my satisfaction (I don't remember why), so my awful, awful, awful workflow is to use Google Takeout to generate something like 8 .tar.gz files (50 gigabytes each), manually download each one (being prompted for authentication each time), rsync them over to my local server, and finally uncompress them.
It's very lovely how much Google doesn't want you to exfiltrate your own data.
I wonder at which point I'll get annoyed enough to go through the effort of setting up immich. Which, naturally, will probably involve me re-working my local server as well. The yak's hair grows faster than I can shave it.
> I wonder at which point I'll get annoyed enough to go through the effort of setting up immich. Which, naturally, will probably involve me re-working my local server as well. The yak's hair grows faster than I can shave it.
LLM + Nix (ideally NixOS) changed everything imo.
After reading TFA last night, it was less work to tell Claude Code to get Immich running on my home server (NixOS), add the service to Tailscale, and then give me a todo list reminder of what I needed to do to mirror my Macbook iCloud/Photo.app gallery to it and then see it on my iPhone...
...than any of the times I've had to work around "black box says no", much like your example.
Just a couple years ago, this wasn't the case. I didn't have the energy to ssh into my server and remember how things are set up and then read a bunch of docs and risk having to go into a manual debug loop any time a service breaks. LLM does all that. I never even read Nix docs. LLM does that too.
In fact, it was fairly fun to finally get a good cross-platform setup working in general to divest from Apple thanks to LLM + Nix. I really like where things are going in this regard. I don't need any of this crap anymore that I used to use because it was the only way to get something that Just Worked.
By the time I lose my software job and have to compete with you lot, H1Bs, and teenagers to fold sweaters at Hollister, I won't need to use a single bit of proprietary software. It will be a huge consolation.
As critical as I am of LLM use, the nice thing about it here is your configs can be version controlled, and rolling back changes is pretty painless.
I'd still want to go through any changes with a fine tooth comb to look for security issues and to make sure I know what it is adding and removing, but it's saner than letting an LLM run amok on a live system.
There is something to be said for NixOS: it really is a matter of setting `services.immich.enable = true;` in a configuration file. I find this really powerful and simpler than docker and docker-compose. But don't get me wrong, I am all for containerization when it comes to other OS/distros. Yes, there is a learning curve for the Nix language and creating your own packages. But anyone who can install a distro can install NixOS. Instead of running your apt/dnf/pacman commands, you edit a file with the packages and services you want, and run `nixos-rebuild switch`. You might find that standalone binaries such as uv and its portable Python bundles don't work out of the box, but it's a few lines of configuration to get them working. Having a single language for configuring all services/applications (neovim, nginx, syncthing, systemd, etc.) is refreshing. And of course, combined with generative AI, you can set up a lot quickly.
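To make that concrete, here is a minimal sketch of what it looks like in `configuration.nix`. The option names beyond `enable` (host, port, mediaLocation) may vary by nixpkgs release, so treat them as illustrative rather than authoritative:

```nix
# configuration.nix (fragment)
{ config, pkgs, ... }:
{
  services.immich = {
    enable = true;
    # Listen address, port, and media path are illustrative; check the
    # option list for your nixpkgs release before relying on them.
    host = "0.0.0.0";
    port = 2283;
    mediaLocation = "/srv/immich";
  };

  # Let other devices on the LAN reach the web UI.
  networking.firewall.allowedTCPPorts = [ 2283 ];
}
```

Apply it with `nixos-rebuild switch` and the service, its database, and its systemd units come along for the ride.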
Immich is one of the only apps on iOS that properly does background sync. There is also PhotoSync, which is notable for working properly with background sync. I'll take a wild guess that Ente may have got this working right too (at least I'd hope). This works around the limitation that iOS apps can't really run as background apps (it appears to me that the app can wake up on some interval, run/sync for a little, and try again on the next interval). This is much more usable than, for example, the Synology apps for photo sync, which, the last time I tried, were for some reason insanely slow and required the phone to have the app open and the screen on for it to fully sync.
One issue I ran into was the Immich iOS app updating and then being incompatible with the older version of the server installed on my machine. You'd have to disable app updates for all apps, as iOS doesn't support disabling updates for individual apps.
In my specific scenario, the latest version of Immich for NixOS didn't perform a certain migration for my older version of Immich. I had to track down the specific commit that contained the version of Immich which had the migration, apply that, then I was able to get back to the latest version. Luckily, even though I probably applied a few versions before getting the right one, it didn't corrupt the Immich install.
I've hosted Immich since it came out and all my photos have been migrated to it at this point. I would never host Immich on NixOS (and I do use it for certain things). The reason? It's not simpler than a container option and creates a single point of issue. The container option is tested and supported by Immich, they recommend it. So everything I need is part of that. I moved servers midway through the year and the storage for my Immich implementation is NAS hosted and the mount is simply exposed to the Immich container. It took me less than 15 minutes to move Immich. And while that would have likely been the same with NixOS it's actually more of a chore to roll back with Nix. My Compose file is locked to major/minor and I choose when to do upgrades. But rollbacks are actually simpler IMO. I just stop the container, tar the operational directory, flip the bits in the Compose file and restart. I've not actually had an issue with Immich ever while doing it this way and I manage about 10TB of photos and videos currently in Immich.
I actually thought about doing this with NixOS last year, but it seemed counterproductive compared to how I self-host, I don't want to manage configurations in multiple places. If I switched everything it would likely be just as much work and then I'm reliant on Nix. Over the years I've gone from the OS being a mix of Arch and Ubuntu to mostly just Debian for my self hosting LXC or VMs. I already have the deployments templated so there's nothing for me to do other than map an IP, give it a hostname and start it.
To each their own, but I don't want to be beholden to NixOS for everything. I like the container abstraction on LXC and VMs and it's been very good to minimize the work of self-hosting over 40+ services both in my home lab and in the bare metal server I lease from Hetzner.
> It's not simpler than a container option and creates a single point of issue. The container option is tested and supported by Immich, they recommend it. I don't want to be beholden to NixOS for everything.
I think there's a misunderstanding here. You aren't beholden to NixOS here. You don't have to use nixpkgs nor home-manager modules. You can make your own flakes and you can use containers, but the benefit is still that you set it up declaratively in config.
It's not incompatible with anything you've said, it's just cool that it has default configurations for things if you aren't opinionated.
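For example, even if you keep the upstream container image, the whole thing can still live in your NixOS config via `virtualisation.oci-containers`. A rough sketch (the image tag, paths, and env file are placeholders, and the Postgres/Redis side containers are omitted):

```nix
{
  virtualisation.oci-containers = {
    backend = "podman"; # or "docker"
    containers.immich-server = {
      image = "ghcr.io/immich-app/immich-server:v1.122.0"; # pin a tag you've actually tested
      ports = [ "2283:2283" ];
      volumes = [ "/srv/immich:/usr/src/app/upload" ];
      environmentFiles = [ "/etc/immich/immich.env" ]; # DB/Redis credentials, kept out of the Nix store
    };
  };
}
```

You get the Immich-supported container image, but the wiring is still declarative and version-controlled.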
> I don't want to manage configurations in multiple places.
I've accumulated one big Nix config that configures across all my machines. It's kind of insane that this is possible.
Of course, it would seem complicated looking at the end result, but I iterated there over time.
Is it? Why? If a NixOS module doesn’t support what you need, you can just write your own module, and the module system lets you disable existing modules if you need to. Doing anything custom this way still feels easier than doing it in an imperative world.
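For a sense of what "write your own module" amounts to, a bare-bones sketch (the service name and the rsync job are made up for illustration):

```nix
{ config, lib, pkgs, ... }:
let
  cfg = config.services.myphotosync; # hypothetical service name
in
{
  options.services.myphotosync = {
    enable = lib.mkEnableOption "periodic photo sync job";
    interval = lib.mkOption {
      type = lib.types.str;
      default = "hourly";
      description = "systemd OnCalendar expression for how often to run";
    };
  };

  config = lib.mkIf cfg.enable {
    # The module just turns options into ordinary systemd units.
    systemd.services.myphotosync = {
      serviceConfig.Type = "oneshot";
      script = "${pkgs.rsync}/bin/rsync -a /data/photos/ backup-host:/photos/";
    };
    systemd.timers.myphotosync = {
      wantedBy = [ "timers.target" ];
      timerConfig.OnCalendar = cfg.interval;
    };
  };
};
}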
I can see your point that it can be daunting to have all the pain upfront. When I was using Ubuntu on my servers it was super simple to get things running
The problem was when I had to change some obscure .ini file in /etc for a dependency to something new I was setting up. Three days later I'd realise something unrelated had stopped working and then had to figure out which change in the last many days caused this
For me this is at least 100x more difficult than writing a Nix module, because I'm simply not good at documenting my changes in parallel with making them
For others this might not be a problem, so then an imperative solution might be the best choice
Having used Nix and NixOS for the past 6-7 years, I honestly can't imagine myself using anything other than declarative configuration again - but again, it's just a good fit for me and how my mind works
In the NixOS scenario you described, what keeps you from finding an unrelated thing stopped working three days later and having to find what changed?
I’m asking because you spoke to me when you said “because I'm simply not good at documenting my changes in parallel with making them”, and I want to understand if NixOS is something I should look into. There are all kinds of things like immich that I don’t use because I don’t want the personal tech debt of maintaining them.
I think the sibling answer by oasisaimlessly is really good. I'd supplement it by saying that because you can have the entire configuration in a git repo, you can see what you've changed at what point in time
In the beginning I was doing one change, writing that change down in some log, then doing another change (a discipline I'd mess up in about five minutes)
Now I'm creating a new commit, writing a description for it to help myself remember what I'm doing and then changing the Nix code. I can then review everything I've changed on the system by doing a simple diff. If something breaks I can look at my commit history and see every change I've ever made
It does still have some overhead in terms of keeping a clean commit history. I occasionally get distracted by other issues while working and I'll have to split the changes into two different commits, but I can do that after I've checked everything works, so it becomes a step at the end where I can focus fully on it instead of yet another thing I need to keep track of mentally
I just realised I didn't answer the first question about what keeps me from discovering the issues earlier
The quick answer is complexity and the amount of energy I have, since I'm mostly working on my homelab after a full work day
Some things also don't run that often or I don't check up on them for some time. Like hardware acceleration for my jellyfin instance stopped working at some point because I was messing around with OpenCL and I messed up something with the Mesa drivers. Didn't discover it until I noticed the fans going ham due to the added workload
I'm not really sure what your point is, but I'll try to take it in good faith and read it as "why doesn't docker solve the problem for it, since you can also keep those configurations in a git repo?"
If any kind of apt upgrade or similar command is run in a dockerfile, it is no longer reproducible. Because of this it's necessary to keep track of which dockerfiles do that and keep track of when a build was performed; that's more out-of-band logging. With NixOS I will get the exact same system configuration if I build the same commit (barring some very exotic edge cases)
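To give a flavour of where that pin lives with a flake-based setup (hostname and channel here are just placeholders; the exact revision is recorded in `flake.lock`):

```nix
# flake.nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    # nixpkgs is resolved to an exact revision in flake.lock, so checking out
    # the same commit of this repo and rebuilding gives the same system.
    nixosConfigurations.homelab = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```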
Besides that, docker still needs to run on a system, which must also be maintained, so Docker only partly addresses a subset of the issue
If Docker works for you and you're not facing any issues with such a setup, then that's great. NixOS is the best solution for me
That’s all my point was, yeah. Genuinely no extra snark intended.
> it is no longer reproducible
The problem I have with this is that most of the software I use isn’t reproducible, and reproducible isn’t something that is the be all and end all to me. If you want reproducible then yes nix is the only game in town, but if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
> docker still needs to run on a system
This is a fair point but very little of that system impacts the app you’re running in a container, and if you’re regularly breaking running containers due to poking around in the host, you’re likely going to do it by running some similar command whether the OS wants you to do it or not.
> if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
For some I'm sure that's the case; it wasn't in my case.
I ran docker for several years before. First docker-compose, then docker swarm, finally Nomad.
Getting things running is pretty fast, but handling volumes, backups, upgrades of anything in the stack (OS, scheduler, containers, etc) broke something almost every time - doing an update to a new release of Ubuntu would pretty much always require backing up all the volumes and local state to external media, wiping the disk, installing the new version, and restoring from the backup
That's not to talk about getting things running after an issue. Because a lot of configuration can't be done through docker envs, it has to be done through the service. As a consequence that config is now state
I had an nvme fail on me six months ago. Recovering was as simple as swapping the drive, booting the install media, installing the OS, and transferring the most recent backup before rebooting
Took about 1.5 hours and everything was back up and running without any issues
Not OP, and not very experienced with NixOS (I just use Nix for building containers), but roughly speaking:
* With NixOS, you define the configuration for the entire system in one or a couple .nix files that import each other.
* You can very easily put these .nix files under version control and follow a convention of never leaving the system in a state where you have uncommitted changes.
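In practice that usually looks like a small entry point that imports everything else, all of it in one repo (the file names here are just an example):

```nix
# configuration.nix: the single entry point, kept in git together with everything it imports
{ ... }:
{
  imports = [
    ./hardware-configuration.nix
    ./users.nix
    ./services/immich.nix
    ./services/tailscale.nix
  ];

  system.stateVersion = "24.05";
}
```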
I've written a dozen flakes because I want some niche behavior that the home-manager impl didn't give me, and I just used an LLM and never opened Nix docs once.
It's just declarative configuration, so you also get a much better deliverable at the end than running terminal commands in Arch Linux, and it ends up being less work.
Have you seen how bad the Nix documentation is and how challenging Nix (the language) is? Not to mention that you have to learn Yet Another Language just for this corner case, which you will not use for anything else. At least Guix uses a lisp variant so that some of the skills you gain are transferable (e.g. to Emacs, or even to a GP language like Common Lisp or Racket).
Don't get me wrong, I love the concept of Nix and the way it handles dependency management and declarative configuration. But I don't think we can pretend that it's easy.
The documentation is not great (especially since it tends to document nix-the-language and not the conventions actually used in Nixpkgs), but there are very few languages on earth with more examples of modules than Nix.
Not really. No. You can easily check out the repo containing the Dockerfile, add a Dockerfile override, change most of the stuff while keeping the original Dockerfile intact and the ability to use git to update it. Then you change one line in docker-compose.yaml (or override it if it's also hosted by the repo) and build the container locally. Can't imagine an easier way to modify existing docker images; I do this a lot with my self-hosted services.
It is straightforward, but so is the NixOS module system, and I could describe writing a custom module the same way you described custom Docker images.
But it works on Ubuntu, it works on Debian, it works on Mac, it works on Windows, it works on a lot of things other than a Nix install.
And I have to know Docker for work anyhow. I don't have to know Nix for anything else.
You can't win on "it's net easier in Nix" than anywhere else, and a lot of us are pretty used to "it's just one line" and know exactly what that means when that one line isn't quite what we need or want. Maybe it's easier after a rather large up-front investment into Nix, but I've got dozens of technologies asking me for large up-front investments.
This is a familiarity problem. I've never used NixOS, and all your posts telling me how simple it is sound like super daunting challenges to me versus just updating a Dockerfile or a one-liner in compose that I am already familiar with. I suspect it's the inverse for you.
I'm running NixOS on some of my hosts, but I still don't fully commit to configuring everything with nix, just the base system, and I prefer docker-compose for the actual services. I do it similarly with Debian hosts using cloud-init (nix is a lot better, though).
The reason is that I want to keep the services in a portable/distro-agnostic format and decoupled from the base system, so I'm not tied too much to a single distro and can manage them separately.
Ditto on having services expressed in more portable/cross distro containers. With NixOS in particular, I've found the best of both worlds by using podman quadlets via this flake in particular https://github.com/SEIAROTg/quadlet-nix
If you're the one building the image, rebuild with newer versions of constituent software and re-create. If you're pulling the image from a public repository (or use a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
How does one do it on nix? Bump version in a config and install? Seems similar
Now do that for 30 services and system config such as firewall, routing if you do that, DNS, and so on and so forth. Nix is a one stop shop to have everything done right, declaratively, and with an easy lock file, unlike Docker.
Doing all that with containers is a spaghetti soup of custom scripts.
Perhaps. There are many people, even in the IT industry, that don't deal with containers at all; think about the Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. I have experienced containers as nothing other than pervasive. I guess my surprise just stems from the fact that I, a non-CS person, even know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports biz in my metro area. I've never used docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail and brick/mortar retail. Virtualization here is Hyper-V.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be at the lack of dockerism in my world, I'm equally confounded at the lack of compartmentalization in their browsing - using just one browser and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote," without caring what language it's in, or looking for rpms, etc etc.
If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course there need to be some provisions for when the state (i.e. schema) needs to be updated by the containerized software. But that is the same as for non-containerized services.
I'm a bit surprised this has to be explained in 2025, what field do you work in?
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and mess with all the potential issues of software builds ...
Yes, in the happy path it is just a "docker build" which updates stuff from a Linux distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, as people all write their Dockerfiles differently, handle build steps differently, use different base Linux distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.
So you go from having to worry about one image + N services to up-to-N images + N services.
Just that state _can_ be outside the container, and in most cases should. It doesn't have to be outside the container. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that stuff is basically gone, which is why the state usually does live outside, like you are saying.
Your understanding of not-containers is incorrect.
In non-containerized applications, the data & state live outside the application, stored in files, a database, a cache, S3, etc.
In fact, this is the only way containers can decouple programs from state — if it’s already done so by the application. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
I hope someone will create a Debian package for Immich. I’m running a bunch of services and they are all nicely organized with user foo, /var/lib/foo, journalctl -u foo, systemctl start foo, except for Immich which is the odd one out needing docker compose. The nix package shows it can be done but it would probably be a fair amount of work to translate to a Debian package.
Indeed! This morning I needed a service to port forward ssh from my server to a firewalled machine, to access stuff while I work from a mountain cabin over the next few days. ChatGPT gave me a nice nix config snippet, and it just worked! Auto reconnecting and everything.
I would of course have thrown up a port forward manually today, and maybe even spent the time to add a service later, but now it was fixed once and “forever” in two minutes!
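For flavour, something along these lines is what such a snippet tends to look like (hosts, ports, and the user are placeholders, not the actual config; it assumes key-based SSH auth is already set up):

```nix
{ pkgs, ... }:
{
  # Persistent reverse SSH tunnel: the firewalled machine dials out to the
  # server, and the server's port 2222 then reaches the machine's sshd.
  systemd.services.reverse-ssh-tunnel = {
    wantedBy = [ "multi-user.target" ];
    after = [ "network-online.target" ];
    serviceConfig = {
      Restart = "always";   # auto-reconnect when the link drops
      RestartSec = 10;
      ExecStart = "${pkgs.openssh}/bin/ssh -NT -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes -R 2222:localhost:22 tunnel@server.example.com";
    };
  };
}
```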
AI generated Nix is equally deterministic and repeatable. The deterministic behavior makes Nix well suited for AI yolo code, either it evaluates and builds or it doesn't, and if the result isn't functional you revert back to the previous generation.
That is the use case for NixOS yes, can you clarify how it is no longer deterministic? I have been using it for a few months and was not aware of this change
Immich was my gateway into NixOS. It did a really good job of showing how well it can work. I'm only a couple of months in, so we'll see if it sticks, but I'm also running it on my laptop now.
But what's the performance of NixOS compared to other distros? Also, I imagine CUDA installation is not as simple as changing a few lines of config file?
> There is something to be said about NixOS, it really is a matter of setting `services.immich.enable = true;` in a configuration file.
Assuming someone has added it to NixOS, yeah. There are plenty of platforms even easier than that where you can click "install" on "apps" that have already been configured.
> There are plenty of platforms even easier than that where you can click "install" on "apps" that have already been configured.
Yeah, like TrueNAS, where they've decided it was a good idea to run Kubernetes on NAS hardware, with all the fun and speed that comes with it. You just hit "Install", wait five minutes, and you get something half-working but integrated with the rest of their "product".
I'll stick with configuration I can put in git, patch when needed and is easy to come back to after 6 months when you've forgotten all about the previous context you had.
Regarding NixOS, I'm mostly afraid of them going on a user purge after their developer purge. You just never know who this group of people will come after next, especially after they started defining "Fascism" as "anyone asking for how they define Fascism".
And the jump of getting rid of people you hate who contribute to your project and you can do little harm to, to getting rid of people you hate who are of no use to you and you can do genuine damage to (e.g. by installing a tor exit node) is a step down if you think you could get away with it.
> Regarding NixOS, I'm mostly afraid of them going on a user purge after their developer purge
... Why? I don't know what developer purge you're talking about, but getting rid of people running a project almost never means they'll suddenly start to get rid of users; I'm not sure why that assumption is there. Not to mention that they couldn't even "purge users" if they wanted to, unless they made the download URLs private and started including some licensing scheme, which, come on, is hardly realistic to be worried about...
To provide some opinionated context for this unhinged rant:
The community developing Nix had a falling out with a couple of highly unsavory groups that basically consisted of the Palmer Luckey Slaughter Bot Co. and a couple of guys who keep trying to monetize the project in extremely sleazy ways. This wasn't some sort of Stalinistic purge; it was people rejecting having their name attached to actual murder and sleazy profiteering.
> But anyone who can install a distro can install NixOS. Instead of running your apt/dnf/pacman commands, you edit a file with your package names and services you want to enable, and run `nixos-rebuild switch`.
You can do the same with any configuration manager such as Puppet, Salt, or Chef.
Self hosting used to mean conceding on something. I can honestly say Immich is better in every way than Google Photos or whatever Apple calls it. The only thing is having to set it up yourself.
There are still some features that I miss from Google Photos. There isn't any way (that I know of) to auto-add pictures to an album based on a face. I used to have dedicated albums for family members, and it was nice to have them auto-updated.
Face recognition in general just isn't as good as Google Photos.
It's still an amazing piece of software and I'd never go back, but it isn't perfect yet.
Are we using the same Google Photos? I've found Immich face recognition and context/object search to be miles better than Google Photos. In particular, Google Photos is exceptionally bad at distinguishing non-European looking faces (though it's not great in general), and it completely gave up on updating / scanning new photos in 2024 after I imported party photos with a lot of different people.
Almost all my Google Photos "people" are mix-and-matched similar looking faces, so it's borderline useless. Immich isn't perfect, but it gives me the control to rerun face recognition and reassign faces when I want, even on my ancient GTX 1060.
For the record, I think Immich is very good, and I use it myself. But there is something about the design and performance in the mobile app that still makes it feel "not quite there yet" on iOS at least.
Yes, it does silently and reliably upload all my photos to my server. That's like, the entire selling point of the app? You even have control over how and when (on wifi or not) and the ability to change hostnames depending on what network you are on. And yes I can browse my entire collection back to 2001 no problem. I have no idea what the offline support is.
That was my selling point for Nextcloud, and it turns out it doesn't work reliably. It works most of the time, but for backing up photos it's not enough, and when it fails it's super annoying (you have to resync EVERYTHING from scratch).
People seem very happy about Immich, I'm tempted to try. But people seemed very happy about Nextcloud as well, so it's difficult to tell.
The sync really is quite good. On wifi it's basically seamless. If I had 30k new images though it would be much faster to use the immich-go tool mentioned in the blog post.
Offline support is alright, though I haven't worried about this much. I think it doesn't do any local deletion, so whatever stays in your DCIM folder is still on device.
The Nextcloud iOS app does it. For some reason it requires the location permission "all the time" for that, presumably as a way to "wake" the app from time to time?
I decided to try Nextcloud exactly because of this. My problem with it is more that the whole thing is a bit unreliable. Like once in a while the app will get into a state where the only way I found to recover is to just erase everything and re-sync everything. And the app will resend ALL the pictures, even though they are already on the server.
And I can't do that with my family members' phones. It doesn't matter to me if the app takes a month to sync the photos, but it has to require zero maintenance. I can deal with the server side, but I need it to "just work" on the smartphones.
Searching for "nextcloud ios background sync" shows a whole bunch of forum posts and bug reports about it not working well unless you have the application open.
For something that works well it seems like a ton of people have a lot of issues with it. Are you sure you're on the latest iOS version? Seems like people experience the issues when they're on a later version.
> We got to this stage of having to sync ̶b̶e̶c̶a̶u̶s̶e̶ ̶A̶p̶p̶l̶e̶ ̶c̶a̶n̶’̶t̶ ̶s̶t̶a̶n̶d̶ ̶p̶u̶t̶t̶i̶n̶g̶ ̶m̶o̶r̶e̶ ̶s̶t̶o̶r̶a̶g̶e̶ ̶o̶n̶ ̶c̶l̶i̶e̶n̶t̶ ̶d̶e̶v̶i̶c̶e̶s̶.
"because a company that sells you Cloud storage has very few incentives to give away more local storage, or compress/optimize the files generated by its camera app." might be more accurate
> We got to this stage of having to sync because Apple can’t stand putting more storage on client devices.
It's not why I use sync services. All my photos fit on my devices (more or less). But I want to have seamless access to my files from both of my devices. And most importantly the sync is my first line of backup, i.e. if my phone gets obliterated I don't lose a day or two of files and photos, I only lose a couple of minutes.
I have not shared it with many people. But one of my most wanted features is to completely share my photos with my partner. None of the services I tried (Plex, Synology Photos) had it. In Immich, it’s just a flip of a button.
Flip a switch and then what? Are you getting an isolated public URL to share? Or do you have your infrastructure exposed to the internet, with the shared URL pointing to your actual server where the data is hosted?
> you have your infrastructure exposed to the internet and the shared URL is pointing to your actual server where the data is hosted
I think the previous commenter misunderstood your question, this is the answer (you can also put it behind something like cloudflared tunnels).
Immich is a service like any other running on your server, if you want it exposed to the internet you need to do it yourself (get a domain, expose the service to the internet via your home ip or a tunnel like cloudflared, and link that to your domain).
After that, Immich allows you to share public folders (anyone with the link can see the album, no auth), or private folders (people have to auth with your immich server, you either create an account for them since you're the admin, or set up oauth with automatic account creation).
Ugreen has it. It has conditional albums in which one can set up rules like person, file type, location, anniversary and more, and share a live album. Or leave all params empty and simply mirror the entire library.
You get a link and you can set read or write permissions on it.
Whoever gets that link can browse it in a web browser.
I've used this to share albums of photos with gatherings of folks; it works very well. It does assume you have your Immich installation publicly available, however. (Not open to the public, but on a publicly accessible web server)
How safe is that to set up for novice IT people? I have a Pi with Pi-hole on it and am thinking about putting Immich on it, but the fact that it exposes itself outside my LAN frightens me.
I have it set up in a container that I keep updated. Then it's reverse proxied by another container which runs nginx proxy manager, which keeps the HTTPS encryption online. So far, the maintenance has only been checking whether a new version has been released and docker pulling the images, then restarting the containers.
OK. Then you concede on security, as I can't imagine any single person self-hosting being better at keeping their public service secure than engineers at Google are. Especially with limited time.
You definitely have a dull imagination. If the software itself is secure, containerized version of Immich behind a containerized version of nginx proxy manager is probably as secure as you can get. Also google security tends to be mainly leaning towards securing google and less towards securing google's (non paying) customers.
I mean, if you’re confident about security best practices, have a moderate amount of networking experience, and are a seasoned web developer, it’s not too scary at all. I realize that’s a lot of prerequisites though.
it’s not a fair comparison with Google because Google has a much bigger target on their back. There are millions of users of Google, so the value of hacking Google is very high. The value of hacking a random Immich instance is extremely low.
Other than redundant hosting, what will I get as an Apple user by setting this up? It would be very easy to set up, just not sure what I’m gaining from it
I don't think it would add any value for you. For me, it adds value because I only have to turn my head to the left to see the computer that contains all my photos since I started taking pictures with a smartphone.
Supporting someone who is not TooBigTech is a valid concern, IMO.
The selling point for me is that it is NOT TooBigTech. It doesn't have to be as good as TooBigTech, but it has to be reliable enough. In my case it means that it should be able to sync from iOS/Android, in the background, even if the user never opens the app, and it should never get out of sync and require setting up everything again. Nextcloud fails at that.
For one, iCloud has terrible sync speed. Even 500GB of photos/videos takes forever to sync, like a week, and I can't imagine what it would take for someone with multi-TB archives.
I'd imagine if you're a person who takes a lot of photos/videos, slow sync can be pretty annoying. Unfortunately I'm not one of them so I can't say, but I just had to wait about a week for the first sync of my wife's iPhone to finish.
My biggest worry with Immich is how to future-proof the albums. With photos sorted into folders, it should be no problem to access them in a couple of decades. With Immich, I have to rely on the software still working or finding some kind of tool to dump the database.
I work on an image search engine[0]; the main idea has been to preserve all the original metadata and directory structure while allowing semantic and metadata search from a single interface. All metadata is stored in a single JSON file, with the original paths and filenames, in case you ever need to recreate backups. Instead of uploading photos to a server, you could host it on a cheap VPS with enough space and index there (by default it's a local app). It is an engine though, and doesn't provide any auth or specific features like sharing albums!
I use Single File PHP Gallery. Put the file in root dir of your photos and set it executable in web server. That's it. The settings are also inside the file, if you need any tweaking.
You should print them ;-). But yeah, I’m also old school in that I make directories for each album. I used MacOS photos before, but it’s terrible when you change systems (which eventually will happen).
This was why I was driven to use Photoprism. I use syncthing-fork to upload from phones, and a custom made thing to copy them to folders (this also works with Cameras that aren't phones).
Although Immich does backup from your phone, I don't see it as a viable backup solution. Git-annex, Unison, and Syncthing are much better at keeping files synchronized across devices. Immich will create its own copies of photos and transcode videos for playback on the web. That may be fine if you have enough storage space, but for me it makes the phone backup useless. I suppose you could use a git-annex special remote directory as an Immich external library.
The database is Postgres, and the schema is quite sensible. You can (and I have) write normal SQL queries in psql to modify the data.
It might not be as easy as rsync to transfer data out, but I would trust it way more than some of the folder based systems I've had with local apps that somehow get corrupted/modified between their database and the local filesystem. And I don't think ext4 is somehow magically more futureproof than Postgres. And if no-one else writes an export tool, and you feel unable to, your local friendly LLM will happily read the schema and write the SQL for you.
I have the same concerns and that’s why I only use software which accepts my directory structure as input and doesn’t mess with it. I, for example, added the top-level directories of my image directory structure by hand, each one as its own shared (read-only) directory in Immich.
The main reason: I don’t trust software NOT to delete my photos. (Yes, I have an off-site backup, but the restore would take time.)
I use PhotoSync to upload to a folder that is an external library for Immich. Immich then periodically scans that folder to load new assets. I usually use digiKam to manipulate that folder. Immich is there just for easy remote browsing of those files.
I remember that Immich has a mode to not use cryptic hashes but folders for storage. When I used it, it was somehow deprecated due to some problems, but supported. I actually stopped using Immich because newer versions run the keep-alive via socket.io with a Postgres notify, which does constant empty WAL flushes, triggering empty page writes on idle.
Thank you, well put.
That's why I am using Nextcloud and manual curation. Folders are the ultimate future-proof structure. But I do see the value of a nice UI. But Immich hides the files from me too much for my taste.
Although I am sure I can back them up to my PC somehow. But having them just on the server is not my favourite solution.
You can configure the storage template for the photos and include an "album" part, so if a photo is in some album it'll get sorted into that folder. Then the file tree on disk is as you wish.
I haven't tested what it does when a photo is in multiple albums, but it does handle the no album case fine as well.
In the same boat. It seems there is an API to export photos, so I was thinking about a script that would export photos into a separate folder and use hard links so as not to take more space.
This is what I like about Ente. I can pay them to give me an e2e encrypted cloud service, and then the desktop app has a continuous export feature that will dump everything into a plain folder structure on my home NAS automatically.
Ente could go out of business tomorrow and I’d still have all my photos, neatly organized into folders.
And I don’t have to bother with self-hosting overhead. Or I could self host, too, if I wanted. But I still need an off-site backup so I might as well pay for the cloud service.
Have you had any issues with the continuous export writing to a network volume? And does it work for all users in a family plan? That was my plan as well, but I’d like to only have to run one export job
I can’t tell you about family plan since I don’t have one. I assume you’d have to set this up on a per-user basis.
I haven’t had any network volume issues. It’s an SMB volume provided by trueNAS mounted on a Windows machine.
I will say, if you mess up your volume like the time I took my NAS down for maintenance for a few days, the export failure wasn’t incredibly loud. I don’t think it notified and screamed at me that it wasn’t working. So I guess that is a significant risk.
I adore Immich. I set it up a while ago, and I'm finally looking at my photos again. I was previously using Nextcloud for photos, but it was such a slog to find anything that I never took or looked at photos.
Immich put the joy back in photography for me, it's so easy to find anything, even with just searching with natural language.
I do that with DayOne and curation, but obviously this means I keep only 2/3 pictures per event, but most of the time that's enough (and even better, since I choose the ones I prefer and keep those)
I never even used Google Photos (because, you know), so if somebody could explain more concretely: how do you use it? Is it actually a backup app (and if so, is it really much different from using a generic backup app or even just syncthing), or does it somehow magically allow you to keep the preview gallery and search on your device, while your actual 200 GB of photos are somewhere in the cloud and the local storage is basically just auto-managed cache, where everything you didn't access in the last 6 months gets deleted? Does it preserve all this additional data Android cameras add, like HDR, video fragments before photos, does it handle photospheres well, etc? I'm asking because I don't even fully understand how the camera app handles it itself, and if all the data is fully portable.
FWIW, I also don't use any fancy collection management and barely understand what all these Lightrooms and XMP files are for. Maybe I should, but up to this day photos for me are just a bunch of files in the folder, that I sometimes manually group into subfolders like 2025-09, mostly to make it easier on thumbnail-maker.
It auto uploads all your photos to the cloud and you can delete them locally and still have them. The biggest feature is the AI search: you can type anything and it will find your pictures without you doing any work categorizing them. It can do objects or backgrounds or colors, and it can even do faces so you can search by people's names. That, and there are share links to albums and multiplayer albums.
It keeps the originals locally after uploading, forever, unless you delete them. There's a one-click "free up space on this device" button to delete the local files. It's actually somewhat annoying to export in bulk; you pretty much have to use Takeout.
Key features that matter to me:
1) backup from android or iOS. This helps when I have switched phones over the years.
2) shared albums with family or friends where invited people can both see and contribute photos. Think kids albums, weddings, holidays.
3) ability to re-download at full resolution
1) You don't have backups of other data on your phone (chat history, 2FA secrets and private keys, text notes, anki cards, game progress, configuration of all apps, etc.)? I had assumed everyone who cares about their data has backups of their data anyway, so that's not really a selling point to install another app for
2) that's nice!
3) "it doesn't throw my data away" is the last selling point?! Isn't that just assumed?!
1) I do have separate backups, as well as this, which runs more frequently (after picture is taken) vs daily for device backup
3) not compared to iCloud photos which I migrated from. You can export a whole album with Google at original quality with 1 click. With Apple you can only do 1000 at a time. For apple you can ask for a whole account export, but that takes a few days and gives you all photos. (Similar to Google Takeout).
For nearly a decade I've been using Google Photos with a love-hate relationship. I've tried a few alternative photo apps, even tried building one myself as a side side side side project, but nothing really felt like it could replace how I use Google Photos (haven't tried in the past couple of years mind).
I have a daughter, and my family lives in another country, so I want to be able to share photos with them. These are the features I need:
- Sharing albums with people (read only). It sounds pretty simple, but even Google fucked it up somehow. I added family members by their Google account to the album, and somehow later I saw someone I didn't know was part of the album. Apparently adding people gives them (or did?) permission to share the album with other people, which is weird. I want to be able to control exactly who sees the photos, and not allow them to share or download them with others. On the topic of features, I should note that zero of the other social features (comments / reactions) have ever been used.
- Shared album with my spouse (write). I take photos of the kid, she takes photos of the kid. We want to be able to both add our photos to the shared album.
- Automatic albums or grouping by faces. Being able to quickly see all the photos of our kid is really great, especially if it works with the other sharing features. On Google you could set up Live Albums that did this... (automatic add and share between multiple people) but I can't see the option anymore on Android. I feel it could be a bit simpler though: just tag a specific face, so that all photos of it get shared within my Google One family.
- The way we use it is we have a shared album between us for all the photos, and then a curated album of the best photos shared with family members.
Other than that I just use it as a place to dump photos (automatically backed up from my phone) and search if needed. Ironically the search is not very good, but usually I can remember when the photo I need was taken roughly so can scroll through the timeline. In total my spouse and I have ~200GB of media on Google Photos, some of it is backed up elsewhere.
What about automatic background sync without ever having to open the app on mobile? Does that work or do you have to open the app regularly for it to sync properly?
This doesn't work properly on Nextcloud (it sometimes gets out of sync and then I'm screwed because I have to reset the app on my family member's phone and have them resync for hours).
Wouldn't recommend. When I wanted to move from Google Photos to iCloud, there was no way to simply get all my photos. I had to use a JS script that would keep scrolling the page and download photos one by one.
You can back up to Immich using various methods, including dumb file copy into a dropbox folder. For a while, I was using PhotoSync that uploaded photos to my NAS with Immich using WebDAV.
Immich also has an app that can upload photos to your server automatically. You can store them there indefinitely. There are galleries, timelines, maps for geotagged photos, etc.
The app also allows you to browse your galleries from your phone, without downloading full-resolution pictures. It's wickedly fast, especially in your home network.
> Does it preserve all this additional data Android cameras add, like HDR, video fragments before photos, does it handle photospheres well, etc?
It preserves the information from sidecar files and the original RAW files. The RAW processing is a bit limited right now, and it doesn't support HDR properly. However, the information is not lost, and once they polish the HDR support, you'll just need to regenerate the thumbnails.
Immich is a Google Photos clone, and when they say "self-hosting", they mean SELF-HOSTING. You need to be a web dev or a sys admin to be able to wrangle that thing. Nightmare upgrades, tons of weird bugs related to syncing.
If your solution to an issue is "just reset the Redis cache", this is when I am done.
Immich solves the wrong problem. I just want the household to share photos - I don't want to host a Google Photos for others.
Not my experience hosting immich for close to two years now. There was only one "breaking change" a long time ago where you would have to manually change a docker image in the compose file, but since then things have been smooth for me.
Immich may not be the pinnacle of all software development, but with the alternative being Google photos:
- Uploading too many photos won't clog my email and vice versa
- I'm not afraid of getting locked out of my photo account for unclear reasons and being unable to reach anyone to regain access
- If I upload family photos from the beach, then my account won't get automatically flagged/disabled for whatever
- Backups are trivially easy compared to Google takeout
- The devs are reachable and responsive. Encounter a problem? You'll at least reach a human being instead of getting stranded with a useless non-support forum
I would instead say that my (and my family's) photos are too important to me to pass their hosting on to a company known for its arbitrary decisions and then being an impenetrable labyrinth if there is an issue.
So you do pay some price, but it is an illusion to think that the price of Google photos (be that in cash, your data or your effort) is much lower.
Things that did break during this time:
- my hacky remote filesystem
- network connectivity of a too cheap server
but these were on me and my stinginess.
> Immich solves the wrong problem. I just want the household to share photos
That is a totally reasonable view. But others have different preferences. I, for example, do not want to share all my photos with Google, govvies and anyone else they leak them to.
So I self host, back up and share my files with the family. I can always dump what I want to insta, etc. but it is my choice what to share, picture by picture, with default "off". And have no dark patterns trying to catch a finger accidentally hitting a "back up to cloud" for the full album.
That, to me, is a big deal, worth dealing with occasional IT hassles for. Which is just a personal preference.
>> Immich solves the wrong problem. I just want the household to share photos
Pixelfed may be what the parent wants then. I don't like that it is PHP, but as long as they adhere to the ActivityPub protocol, we can roll our own in whatever flavor.
Actually, I set up a Proxmox server last week that runs a couple of self-hosted applications. I have Nextcloud running and it was fairly easy to set up. The next item on my list WAS Immich. I decided against trying to deploy it. The reason is simple: they are essentially forcing the use of Docker, which I won't touch at all. Either a native Proxmox container (which is just LXC) or a proper VM, but I keep those in reserve as they can be heavy. I'm not asking them to create a native package for Debian or a container image; a simple install script that bootstraps the application (checks and installs itself and its dependencies), bootstraps the database and basic config (data directory, URL & ports, admin password) is more than enough. The same script should be usable to update the application if possible, or provide an updater on the admin panel to update the application without manual steps or data migrations. AdGuard Home does all of this perfectly in my opinion. I know Immich thinks they are making things "easier" by just dumping everything into a docker container, but some of us won't touch it at all. Same reason I avoid any projects that heavily rely on the nodejs/npm ecosystem.
I really don't understand this take.
A script that installs all required dependencies is fine if and only if you are dedicating a machine to Immich. It probably requires some version of node, possibly with hidden dependencies on some Python; it uses ffmpeg, so all related libraries and executables need to be there. You then have a couple of separate DBs, all communicating together.
Let's not talk about updates! What if you're skipping versions? Now your "simple install script" becomes a fragile behemoth.
I would NOT consider this if it was non docker-native.
Plus, I don't have a server with enough resources for a lot of VMs, with all of their overhead and complications, just to have one per service.
Nowadays there are many ways to run a container not just the original docker.com software, and you can do that on pretty much any platform. Even Android now!
I've never understood it either. I still deploy some things into their own respective manual deployments but for lots of things having a pre-made docker compose means I can throw it on my general app VM and it'll take 5 seconds to spin up and auto get HTTPS certs and DNS. Then I don't lose hours when I get two days into using something and realize it's not for me.
Also have you read some of the setup instructions for some of these things? I'd be churning out 1000 lines of ansible crap.
Either way since Proxmox 9.1 has added at least initial support for docker based containers the whole argument's out the window anyway.
Me neither. Docker is the platform agnostic way to deploy stuff and if I maintained software, it is ideal - i can ship my environment to your environment. Reproducing that yourself will take ages, or alternatively I also need to maintain a lot of complex scripts long-term that may break in weird ways.
These things are a proxmox home lab user's lifeline. My only complaint is that you have to change your default host shell to bash to run them. You only have to do that for the initial container creation though.
I think it's the best of every world. Self contained, with an install script. Can bring up every dependent service needed all in one command. Even your example of "a simple script" has 5 different expectations.
I've been waiting what feels like years for immich stable to be released for this reason. Luckily it finally happened about a month ago. I'm about to go through swapping out the main OS SSD on my server. If I'm able to see the immich backups after reinstalling TrueNAS I'm going to call it resilient enough for me.
Such a weird take. Of course "self hosting" means "self hosting".
Sure it could be easier/safer to manage, everything can be better.
Over the last couple of years hosting it, I had a single issue with an upgrade, but that was because I simply ignored the upgrade instructions and YOLOed the docker compose update.
Again, is it perfect? No.
Would I expect a non-tech-savvy user to manage their own instance? Again, no.
I've run Immich for more than two years, and there was an upgrade to 1.33, I think around spring 2024, that required special instructions on editing the docker compose file because they changed the vector database. I think there was also a database migration the same year where - if you did not update the version regularly - you would need to run a two-step upgrade. They always provided plenty of documentation. A while ago sync was quite wonky, but they improved that a lot lately.
Huh? What are you maintaining? The PostgreSQL db and extensions are provided in the container image. You do not have to use your own external PostgreSQL.
Of course, you may have reasons to do that. But then you also own the maintenance.
I have never had to maintain any PG extensions. Whatever they put in the image, I just run. And so far it has just worked. Upgrades are frequent and nothing has broken on upgrade - yet at least
> You need to be a web dev or a sys admin to be able to wrangle that thing.
I totally disagree. You do need a tiny bit of command line experience to install and update it (nothing more than using a text editor and running `docker compose up`), but that's really it. All administration happens from the web UI after that. I've been using Immich for at least 2 years and I've never had to manually do something other than an update.
> Immich solves the wrong problem. I just want the household to share photos - I don't want to host a Google Photos for others.
Honestly, I can't understand what exactly you're expecting. If Google Photos suits your needs for sharing photos with others, that's great! As for Immich, have you read how it started[0]? I think it's solved the problem amazingly well and it still stays true to its initial ambitions.
Every time I go the self hosting route, everything goes smoothly for a while, and then decides to break 6 months down the line, and I have to waste a Saturday figuring it all out and upgrading things. Not what I want to do with my weekend, when I'm already doing software dev and maintenance for work. This happens even for super dependable, well written self hosted software.
On the other hand, maybe AI can help remove some of that pain for me now. Just have Claude figure out what's wrong. (Until it decides to hallucinate something, and makes things worse)
I was just telling a nonprofit the other day, who in the name of “self hosting” was running their business on a 73-plugin WordPress site:
Move to Shopify and LearnWorlds. Integrate the two. Stop self hosting. (They’re not large enough to do it well; and it already caused them a two week outage.)
Having seen a lot of companies and startups do exactly that, more or less everyone regrets it. Either you end up with so much traffic through these vendors that you regret it financially, or you want to change some specific part of your page or your purchase process which Shopify doesn't let you change, and you end up needing to switch or be sad. Or, as I regularly have to (because we don't get the resources and time to switch): you try to manipulate the site through weird, hacky JavaScript snippets that manipulate the DOM after it loads.
It's literally always the same. They get you running in no time, and in no time you're locked into their ecosystem: No customization if they don't want it; pricing won't scale and just randomly changes without any justification; if you do something they don't like they'll just shut you down.
> Stop self hosting.
Worst mantra of the century. Leading to huge dependencies, vendor lock-in, monopolies, price gouging. This is only a good idea for a prototype, and only as long as you're not going to run the prototype indefinitely but will eventually replace it. And maybe for one-person companies that just want to get going and don't have the resources for this.
Let me empathize but say, to put it bluntly, they do not have qualified IT Staff. They have 1 or 2 people who understand only basic web server stuff and nothing else. Thus the two week outage.
Paying LearnWorlds + Shopify $30K a year, if it were even that extreme, is cheaper than an engineer and certainly cheaper than an outage over Giving Tuesday, as they found out the hard way. They got hacked and were down for the most high-traffic nonprofit donor day of the year in their effort to save a few bucks. It wasn’t even the plugins, but the instance underlying the shared hosting.
> It's literally always the same. They get you running in no time, and in no time you're locked into their ecosystem: No customization if they don't want it; pricing won't scale and just randomly changes without any justification; if you do something they don't like they'll just shut you down.
You’re also locked into an ecosystem. It’s called Stripe or PayPal. Almost all of that applies anyway. Don’t forget that a significant amount of customization is restricted to streamline PCI compliance; without those restrictions you can do illegal things very easily. Install an analytics script that accidentally captures their credit card numbers, and suddenly you’re in hot water.
> Leading to huge dependencies, vendor lock-in, monopolies, price gouging
Have you analyzed how many dependencies are in your self hosted projects? What happens to them if maintainers retire? How long did it take your self hosted projects to resolve the 10/10 CVE in NextJS? And as for price gouging, if it’s cheaper than an engineer to properly support a self-hosted solution, I’ll still make that trade as even $80K for software is cheaper than $120K to support it. If you’re at the scale where you don’t have a proper engineer to manage it, do not self host. Business downtime is always more expensive than software (in this case, 5 salaries for 2 weeks to do absolutely nothing + lost donations + reputational damage + customer damages, because “self hosting is easy and cheaper”).
disagree. as the sister comment mentions, wordpress may have been the wrong choice, but self hosting is never wrong, especially for a non profit who may not have the resources to deal with a situation if a hosting service decides to shut them out.
If they don't have the resources to switch to a different hosting provider, why do you assume they will have the resources to fix things when their self-host solution shits the bed?
Switching the ecosystem from something like Shopify to some other shop software requires a lot of manual work, and some of the stuff won't even be transferable 1:1.
Fixing some issue with your WordPress installation will require a person who can google and knows a little about web servers, and maybe containers, and it will usually go pretty fast, as WordPress is open source and runs almost half the internet, and almost every problem that comes up will have been solved in some StackOverflow thread or GitHub issue.
Usually though, if you run WordPress and you're not doing a lot of hacky stuff, you will not encounter problems.
Vendors shutting you down, increasing their pricing, or shutting down vital features in their software, happens regularly though. And if it happens, shit hits the fan.
I’ve experimented with both Immich and Ente over the last year and run Immich in parallel with Google Photos right now for my family. Once they add a few more features to support things like smart albums, I’ll be able to drop Google Photos entirely.
I love that the consumer space is getting this kind of attention. It’s one of the biggest opportunities for big tech to lock people into their ecosystem, as photos are something everyone cherishes. You can extort people with ever increasing subscription fees because over time they reach a scale with their own photos that makes it inconvenient to manage themselves. It’s nice to have multiple options that are not Google or Apple.
Wow. When factoring in the OS, that's an entire system's worth of RAM dedicated to just hosting files!
What does it use all this for? Or is this just for when it occasionally (upon uploading new pictures) loads the image recognition neural net?
I'd have to stop Immich whenever I want to do some other RAM-heavy task. All my other services (including database, several web servers with a bunch of web services, a Windows VM, git server, email, redis...) + the host OS and any redundancy caused by using containers, use 4.6GB combined, peaking to 6GB on occasion
> CPU: Minimum 2 cores, recommended 4 cores
Would be good to know how fast those cores should be. My hardware is a mobile platform from 2012, and I've noticed each core is faster than a modern Pi as well as e.g. the "dedicated cores" you get from DigitalOcean. It really depends what you run it on, not how many of them you have
It has modern features that require relatively heavy processing, such as facial recognition, finding similar images, OCR, transcoding videos, etc. I think it only needs those computing resources when you upload new images/videos.
Immich is wonderful in a Docker setup, passing the GPU through for ML, which works pretty well. The amazing new OCR feature does miracles: I’m able to find notes that I photographed for that purpose but then forgot, and I can find memories just by remembering the name of the place and searching for it. And everything runs locally!
Tailscale (and similar services) is an abstraction on top of Wireguard. This gives you a few benefits:
1. You get a mesh network out of the box without having to keep track of Wireguard peers. It saves a bunch of work once you’re beyond the ~5 node range.
2. You can quickly share access to your network with others - think family & friends.
3. You have the ability to easily define fine grained connectivity policies. For example, machines in the “untrusted” group cannot reach machines in the “trusted” group.
4. It “just works”. No need to worry about NAT or port forwarding, especially when dealing with devices in your home network.
Also it has a very rich ACL system. The Immich node can be locked out from accessing any other node in the network, but other nodes can be allowed to access it.
Tailscale uses wireguard, which is better in a lot of ways compared to OpenVPN. It's far more flexible, secure, configurable and efficient. That said, you probably won't notice a significant difference
OpenVPN is far from "no fuss", especially when compared to Tailscale.
I like to self host things, so I also self host Headscale (private tailnet) and private DERP proxy nodes (it is like TURN). Since DERP uses HTTPS and can run on 443 using SNI, I get access to my network even at hotels and other shady places where most UDP and TCP traffic is blocked.
Tailscale ACL is also great and requires more work to achieve the same result using OpenVPN.
And Tailscale creates a wireguard mesh which is great since not everything goes through the central server.
Wireguard is great, I have personally donated to it and have used Wireguard for years before it became stable. And I still use it on devices (routers) where Tailscale is not supported. But as Jason stated - it is quite basic and is supposed to be used in other tools and this is what we are seeing with solutions like Tailscale.
Tailscale makes it simple for the user - no need to set up and maintain complex configurations, just install it, sign in with your SSO and it does everything for you. Amazing!
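For anyone who hasn't tried it, the whole setup on a Linux box is roughly this (official install script plus the CLI; details vary by platform):

```sh
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up        # prints a login URL / opens the browser
tailscale status         # shows the other nodes you can now reach
```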
So, I wanted to use tailscale for a few local services in my home, but I run a few of them on the same device, and have a simple reverse proxy that switches based on hostname.
Afaict I can't use a tailnet address to talk to that (or is it magic dns I'm thinking about? it was a while since I dug in). I suppose I could have a different device be an exit node on my internal network, but at that point I figure I may as well just keep using my wireguard vpn into my home network. I'm not sure if tailscale wins me anything.
Do other people have a solution for this? (I definitely don't want to use tailscale funnel or anything. I still want all this traffic to be restricted like a vpn.)
Not GP. My guess is that they’re self hosting this at home (not on a server that’s on the internet), and Tailscale easily and securely allows them to access this when they’re elsewhere.
I host at home and can access the things at home just fine by having the server as DMZ in the router, or whatever it is called these days. This doesn't really answer what Tailscale does more than port forwarding. If it punches NAT, that sounds like it actually makes you rely on a third party to host your STUN, i.e. you're not self hosting the Tailscale server?
Even if you are self hosting in the cloud or on a rented box, Tailscale is still really nice from a security perspective. No need to expose anything to the internet, and you can easily mix and match remotely hosted and home servers since they all are on the same Tailnet.
Tailscale routes my mobile device dns through my pile back at the home. I have nginx setup with easy to remember domains (photos.my domain.com) that work when i’m away as well without exposing anything to the open internet.
Why not call it VPN if that's what it is? In your case, it sounds like configuring your "pile" (is that a DNS server, short for pihole maybe?) on your phone would do the same thing, but if the goal is to not expose anything to the open internet, a VPN would be the thing that does that
In my words, I use Tailscale at home but not for this (yet). Tailscale is a simple mesh network that joins my home computers and phones while on separate networks. Like a VPN, but only the phone to PC traffic flows on that virtual private network.
Tailscale gives me access to my home network when I'm not at home. I can be on a train, in another country even, and watch shows streamed off the Raspberry Pi in my home office.
I have been experimenting with Immich off and on for over a year, first in docker-compose and now in podman. It is slick and seamless in a lot of ways, but the portability and upgradability are questionable, as others have highlighted.
For example, when they moved between Postgres container versions, it required a manual edit to the compose file to adjust the image. Even if you managed to get it set up initially in docker, it’s these sorts of concepts that are way more advanced than the vast majority of people who may even be interested in self-hosting.
For a hobbyist self-hoster it’s cool and fun, but not something at this point I’d trust my photos to alone. I have considered Ente for that but today it’s still iCloud Photos.
I gave it a try a few months ago. Unfortunately, my experience was not that great. I was hosting it on Synology through Docker and found that the iOS client was a bit buggy and quite slow. Synology Photos completed the initial sync in a few hours, while Immich took several days. After a few months, I switched back to Synology Photos. I might try Immich again in the future.
I started looking for alternatives after Synology became more restrictive with their hardware. I'm curious if anyone else has had a similar experience.
Long time synology user. Switched 3 weeks ago to ugreen. They rolled back their fiasco decision about drives (synology), but I wanted some good hardware in 2025. Everything that synology offers is outdated and slow.
Got myself a 6800 pro. It chewed through 98k photos, many of which are raw, within 24h AFAIK. Then came face recognition, text recognition etc. Within 2-3 days all was done.
The performance is night and day. Photos and movies load instantly. Finally can watch home movies on my TV without stuttering (4k footage straight from a nikon).
The photos app is similar to the synology one. Face recognition was better for me. Have compared the amount of photos tagged to a few people and ugreen found 15% more. Have seen photos of my grandma which I didn't see for years!
There's much more positive i could say. For the negatives: no native drive app (nextcloud which supposedly was an alternative doesn't sync folders on android), no native security cam app.
I am running now 10 docker containers without a sweat. My ds920+ was so slow, that I gave up on docker entirely after a few attempts.
The photos app has some nice features which synology didn't have. Conditional albums. Baby albums.
My guess would be that Synology is an expensive but weak computer, bare minimum for NAS.
Immich does require some CPU and also GPU for video transcoding and vector search embedding generation.
I had Immich (and many other containers) running successfully on AMD Ryzen 2400G for years. And recently I upgraded to 5700G since it was a cheap upgrade.
I'll throw in another "+1, quite satisfied with immich" comment, because I'm honestly that impressed.
The project as a whole feels competent.
Stuff that should be fast is fast. E.g. upload a few tens of thousands of photos (saturates my wifi just fine), wait for indexing and thumbnailing to finish, and then jump a few years in the scroll bar - odds are very good that it'll have the thumbnails fully rendered in like a quarter of a second, and fuzzy ones practically instantly. It's transparently fast.
And the image folder structure is very nearly your full data, with metadata files along side the images, so 99% backups and "immich is gone, now what" failure modes are quite easy. And if you change the organization, it'll restructure the whole folder for you to match the new setup, quietly and correctly.
Image content searching is not perfect (is it ever?), but I can turn it on in a couple clicks, search for the breed of my dog, and get hundreds of correct matches before the first mistake. That's more than good enough to be useful, and dramatically better than anything self-hosted that I've tried before, and didn't take an hour of reading to enable.
It's "this is like actually decent" levels that I haven't seen much in self-hosted stuff. Usually it's kinda janky but still technically functional in some core areas, or abysmally slow and weird like nextcloud, but nope. Just solid all around. Highly recommended.
> the image folder structure is very nearly your full data, with metadata files along side the images
Wait, other comments were saying that one of Immich's weak points is backups. Someone else replied that the postgres structure is sane so you can run sql queries to get your data out if needed. Now you're saying it's plain old files. I'm confused
Some minor data is in postgres, but to test it I just fed it a previous install's library folder (images and metadata files). Worked fine, restored all my albums and tags, though perhaps not "people" iirc. And not e.g. ML image content search, of course, you need to re-generate that. And the metadata files were more than obvious enough to satisfy my "I can do this by hand if I really need to" bar, and recreating the accounts by hand is trivial.
The main "weak point" is probably that it doesn't have S3 integration, which is entirely fair. But for my purposes, rcloning the library folder (or e.g. rsync to a btrfs for free deduplication if you reorganize) is more than good enough, because that folder provides enough data for it to restore everything I care about.
For DB backups for keeping everything, there are configurable auto-backups, but it's only a snapshot to a local filesystem. So you'd need to mirror that out somehow, but syncthing/rclone/etc exist and there are plenty of options.
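As a concrete sketch of the "mirror it out somehow" part, with placeholder paths and an already-configured rclone remote:

```sh
rclone sync /srv/immich/library remote:immich/library  --progress
rclone sync /srv/immich/backups remote:immich/db-dumps --progress
```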
Really looking for a system where I can install the app on my parents' iPhones and it backs up their photos to my server without them having to even know about the app. They won't open it, ever.
It's pretty easy to set it up to upload automatically, if that's the question. No need to launch it. I've only got my server on my home network so I can't sync while away, and I occasionally check to make sure it's working - no problems at all yet, a few months in.
As someone who loves immich: a VM is overkill, and without vGPUs or extra configuration it also loses you access to some of the best features, namely local AI searching akin to what Google and Apple Photos offer.
It's not perfect, but it's great to be able to just search for things in a photo and find any matches across dozens of TBs of raws, without having some 3rd party cloud AI nonsense do all the work.
The only thing I wish they could get integrated is support for JXL-compressed raws, which requires them to compile libraw with Adobe's SDK.
It seems like you are saying the AI features don't work if you don't have a GPU, if I understood correctly, but I have my install on a server with no GPU and the object search and facial recognition features work fine. Probably slower to generate the embeddings, but I don't have any comparison to make.
> As someone who loves immich, a VM is overkill but also without vGPUs or external configuration loses you access to some the best features, local AI searching akin to what google and apple photos offer.
I installed immich in a VM. And the VM is using GPU passthrough. I don't see how it's overkill: immich is a kitchen sink with hundreds if not thousands of dependencies and hardly a month goes by without yet another massive exploit affecting package managers.
I'm not saying VM escapes exploit aren't a thing but this greatly raises the bar.
When one installs a kitchen sink as gigantic as immich, anything that can help contain exploits is most welcome.
So: immich in a VM and if you want a GPU, just do GPU passthrough.
That said, I agree that the facial recognition search in immich is nice.
Unfortunately Immich doesn't (yet) support object storage natively, which IMHO would make things way easier in a lot of ways.
You can still mount an object storage bucket to the filesystem, but it's not officially supported by Immich, and you anyway have additional latency caused by the fact that your device reaches out to your server, and your server reaches out to the bucket.
It would be amazing (and I've been working on that) to have an Immich that supports natively S3 and does everything with S3.
This, together with the performance issues of Immich, is what pushed me to create immich-go-backend (https://github.com/denysvitali/immich-go-backend) - a complete rewrite of Immich's backend in Go.
The project is not mature enough yet, but the goal is to reach feature parity + native S3 integration.
I have the main volume for images in a zpool with two SSDs in a raid-1 configuration. I also have a daily cronjob that makes an encrypted off-site backup with Borg. I've also got healthchecks.io jobs setup so that if the zpool becomes unhealthy, the backups fail, or anything stops, then both me and my partner get alerted.
My partner isn't very technical, but having an Immich server we are both invested in has gotten her much more interested in self hosting and the skills to do it.
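A sketch of what a nightly job like that can look like; the repo URL, paths, and healthchecks.io UUID are placeholders:

```sh
#!/bin/sh
# Nightly off-site backup plus a healthchecks.io ping on success.
set -e
export BORG_REPO=ssh://backup@offsite.example.com/./immich
borg create --stats --compression zstd ::immich-{now} /srv/immich/library
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
# Only reached if create + prune succeeded:
curl -fsS -m 10 --retry 5 https://hc-ping.com/your-check-uuid
```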
My setup has Immich in a Docker container, which is itself in a Proxmox LXC container.
I then have Proxmox back it up to Proxmox Backup Server running in a VM, and it has a cron job that uploads the whole backup of everything to Backblaze B2.
The backup script to B2 is a bit awful at the moment because it re-uploads the whole thing every night... I plan on switching to something better like Kopia at some point when I get the time.
I'm using restic to backup the Immich photo directories as well as automatically generated Immich database dumps to an external drive and a Hetzner Storage Box.
Immich is great, but I like Ente more because of the E2E encryption. I can't rule out that someday my hardware gets stolen and all my photos end up in someone else's possession.
I'm much more worried about the Ente install getting broken for some reason and my pictures being locked and lost, than a burglar stealing a hard disk in my basement.
That's why I like how Photoprism just uses my files as they are without touching them (I think immich can do that as well now, but it wasn't so in the past). I can manage the filesystem encryption myself if I want to.
I like Ente's E2EE for hosting on a remote server.
In my case I want to host on my personal server at home, so it feels actually nicer to not have E2EE. I basically would like to have the photos of all my family members on a hard disk that they could all access if needed (by plugging it into their computer).
Ente looks interesting and worth looking into, thanks for mentioning it.
In the context of having a phone stolen, it's possible to at least limit the damage and revoke accesses via the Tailscale control server. Then the files on device are still vulnerable, but not everything in Immich (or whatever other service is running).
Why would you need Tailscale for revoking an access token in an unrelated service? Just kick the device out of the sessions list in Immich, or change your password if that's stored directly on the device
I’d like to provide the service to my semi-extended family — not just me and my partner, but also my parents and siblings. And I respect their privacy, so I want to eliminate even the possibility of me, system administrator, accessing their photos.
I'm running Immich on NanoPi R6C (arm64, even lower idle power usage, still plenty fast for running Immich).
I use Cloudflare Tunnel to make it available outside the home network. I've set up two DNS names – one for accessing it directly on the local network, and a second one that goes through the tunnel. The Immich mobile app supports internal/external connection settings – it uses the direct connection when connected to home wifi, and the tunnel when out and about.
For uploading photos taken with a camera I either use immich-go (https://github.com/simulot/immich-go) or upload them through the web UI. There's a "publish to Immich" plugin for Adobe Lightroom which was handy, but I've moved away from using Lightroom.
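For reference, the tunnel side of a setup like this is only a few cloudflared commands. The tunnel name and hostname below are placeholders, and 2283 is Immich's default port:

```sh
cloudflared tunnel login
cloudflared tunnel create immich
cloudflared tunnel route dns immich photos.example.com
cloudflared tunnel run --url http://localhost:2283 immich
```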
Are you also facing the 100 MB upload limit when using the Cloudflare tunnel?
Sometimes I want to upload a video from my phone while away from home, but I can't and need to use the VPN.
You have to disable Cloudflare proxy which is not an option with tunnels. It's technically against TOS to proxy non-HTML media anyway. I just ended up exposing my public IP.
I considered doing that too. My main problem with it is privacy. Let's say I set up some sort of dynamic DNS to point foo.bar.example.org to my home IP. Then, after some family event, I share an album link (https://foo.bar.example.org/share/long-base64-string) with friends and family. The album link gets shared on, and ends up on the public internet. Once somebody figures out foo.bar.example.org points to my home IP, they can look up my home IP at all times.
Surprised that neither the article nor the comments mention Photoprism from what I can see. I’ve been hosting Photoprism and syncing my photos with PhotoSync from my iPhone for a while now.
I would consider switching to another solution if it had in-browser basic editing (cropping, contrast / white balance adjustment, etc).
Immich, ente and photoprism all compete in a similar space?
Immich seems the most polished on the web side, but which solution will become the next cloud for photos is yet to be seen. Surely it's not next cloud anymore, considering the comments here.
> Surely it's not next cloud anymore, considering the comments here.
I have been testing Nextcloud for backing up photos from my family members' phones. Wouldn't recommend.
The sync on iOS works well for a while, then it stops working, then some files are "locked" and error messages appear, or it just stops syncing, and the only way I find to recover is to essentially restart the sync from scratch. It will then reupload EVERYTHING for hours, even though 95% images are already on the server.
Note that in my use-case, the user never opens the app. It has to work in the background, always, and the user should not have to know about it.
Same here. I also have an ente account, but only to check if they made relevant progress. So far, I don't understand why Ente has so much traction when PhotoPrism has the better feature set, in my opinion.
I've switched from photoprism to immich. Immich is a much more active project, bugs are fixed, face recognition is an order of magnitude better, just an overall more solid experience. If you are choosing, I wouldn't doubt for a second to go with immich.
Immich struggles to act as a true unifying solution for users with large, existing archival collections (DSLRs, scanned film, etc.).
Those "archival assets" are often decades old, already organized into complex, user-defined file structures (e.g., 1998/DATE_PLACE_PROJECT/PLACE_PROJECT_DATE.jpg), and frequently contain incomplete or inconsistent metadata (missing dates, no GPS, different file formats).
Immich's current integration solutions (like "External Libraries") treat the archive as a read-only view, which leads to a fragmented user experience:
- Changes, facial recognition, or tagging remain only within Immich’s database, failing to write metadata back to the archival files in their original directory structure (last time I checked; it might be better now).
- My established, meaningful directory structure is ignored or flattened in the Immich view, forcing the user to rely entirely on Immich’s internal date/AI-based organization.
My goal (am I the only one?) of having one app view all photos while maintaining the integrity and organizational schema of the archival files on disk is not yet fully met.
Immich needs a robust, bi-directional import/sync layer that respects and enhances existing directory structures, rather than just importing files into its own schema.
This is where I'm at, really. I have my own filing hierarchy and storage templates can't really deal with it (and I don't get why they would be needed when all I want is for it to handle an "uploads" directory and re-scan the file tree after I file things)
I also run NixOS (btw) but opted for the container. My Docker compose setup has moved from Arch to Ubuntu to NixOS now, so I like the flexibility of that setup.
I also use Tailscale, with Cloudflare as nameserver and Caddy in front of Immich to get a nice URL and HTTPS. For DNS redirects I use AdGuard on the tailnet, but (mostly for family) I also set some redirects in my Mikrotik hEX (E50UG). This way Immich is reachable from anywhere without being on the open internet. Unfortunately it looks like the Immich app caches the IP address somewhere, because it always reports as disconnected when Tailscale turns off while I'm at home (or the other way around) and takes some time/attempts/restarts to get going again. It's been pretty flaky that way...
Other than that: Best selfhosted app ever. It has reminded me that video > photos, for family moments. Regularly I go back through the years for that day, love that feature.
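For anyone replicating the Caddy part, the reverse-proxy config is tiny. A sketch with a placeholder hostname (2283 being Immich's default port):

```sh
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
photos.example.com {
    reverse_proxy 127.0.0.1:2283
}
EOF
sudo systemctl reload caddy
```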
immich is neat, but I tire of fiddling around with computers more than necessary, so I pay for iCloud for the family because I don't want to be on call 24/7/365. I do self-host Home Assistant, sadly, just because certain things I want to do are just not possible with SmartThings. Planning on moving to their hosted solution for that eventually too, though.
I actually did the math earlier, and the iCloud 12TB plan for a family is way cheaper than the equivalent S3 storage assuming frequent access, even assuming a 50% discount. So that's nice.
Yes, I don't recommend doing that. My experience is that people understand you are human because they know you. They don't expect 9 9s availability, but if they somehow do, that can be clarified from the start: "I'm hosting this free of charge for family members because (insert your reasons here; it's important to clarify WHY it's different, because Apple and Big Tech in general somehow still have a ton of goodwill), but as you know I also have a job and our family life. Consequently sometimes, e.g. during an electricity outage or when I have to update the server, there will be downtime. Do not panic when this happens, as the files are always safe (backup details if you want), but please do be patient. Typically it might take (insert your realistic expectation, do NOT be too optimistic) a day per month for updates. If you do have better solutions, please do contribute."
... or something of the kind. I've been doing that for years and people are surprisingly understanding. IMHO it stems from the why.
The "way cheaper than the equivalent" argument reminds me of, and apologies I know it's a bit rough, Russian foreign minister days ago who criticize the EU for its plan to decouple with their oil & gas saying something like "Well if they do that they will pay a lot more elsewhere" and he's right. The point isn't the money though, the point is agency and sovereignty.
One option is use immich just to browse photos. I back my photos up to various places, one of which is my NAS. You can set up immich to browse but not modify photos so you can still use it as a "front end".
I didn't know about Lychee before your comment, but given that they support what should be a basic feature of photo management software (unlike Immich), I'll give it a try.
This is great timing, I'm just setting up a homelab and planning to run Immich on a mini PC server connected to a NAS. I did find icloudpd, which seems like a pretty reliable syncing tool for people in Apple ecosystem. https://github.com/icloud-photos-downloader/icloud_photos_do...
The Nextcloud app kind of does it, it seems. The fact that it stops working seems unrelated: starting the app doesn't make it recover, so it just seems buggy.
Nextcloud uses the location permission for some reason, presumably to wake up the app in the background once in a while? At least it can be closed (and "swiped away") for 2 months and keep syncing. Until it breaks and stops working entirely.
I had immich running great for a while, maybe for months. It would seamlessly sync photos from phone to local home server. I was going to setup nightly outbound sync too (1 is none, 2 is some).
I updated the container for usual appliance maintenance. Entire thing is toast. Metadata files can't be read, mounted, permission issues and more. It's been four months since.
as an un-solicited drive-by suggestion: see if they're owned by root? you may have sudo'd the original run.
since you're at least a few months behind though, do check for breaking changes: https://github.com/immich-app/immich/discussions?discussions... they've pretty consistently had instructions, but you unfortunately mostly have to know to look for it. not sure why the upgrade notification doesn't make it super incredibly painfully obvious.
This is why I just can't deal with self hosting... I'm already burnt out on this kind of stuff in my day job. And something like this will ALWAYS happen eventually.
I've been using my Immich instance locally as photo and video storage for 7-8 months now. It's really amazing. Reading the comments here, I'd say it's medium-difficulty tier to set up. I have it on my EliteDesk 800 G3 mini, in Docker Compose, with hardware encoding, and it's more than sufficient to handle this work. Never had a bug.
As a counter-anecdote: literally just `docker compose up` on a spare laptop for me, and it's working great (though it's only available on the local network). There might be stuff to tune (e.g. I'm pretty sure it's not using my GPU), but it's almost totally unnecessary for just one household of people's use - the initial huge google-photos-takeout took an hour or three to finish indexing with all the features enabled, but all new stuff is done within seconds. The most I've done is to swap the actual photo storage to an external drive, which is just a "move that folder, custom mapping in the docker command" change.
On hardware that doesn't have docker, or is significantly more resource constrained somehow: yea, I completely believe it. I haven't tried that, but given the features it makes total sense that it'd be harder.
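For reference, moving the storage really is about that small. A sketch assuming the stock .env with its UPLOAD_LOCATION variable and a placeholder mount point:

```sh
docker compose down
rsync -a /srv/immich/library/ /mnt/external/immich/library/
# point UPLOAD_LOCATION at the new path (variable name from the example .env; verify in yours):
sed -i 's|^UPLOAD_LOCATION=.*|UPLOAD_LOCATION=/mnt/external/immich/library|' .env
docker compose up -d
```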
I fail to get the problems with self hosting Immich. I mean, obviously you gotta have a server ready, and at least some knowledge about self hosting and docker. But apart from that, installing Immich was like a breeze. It was actually not more than pasting some lines into a docker-compose, and running it.
Have been using it for about 1.5 years, and I have not had a single problem, which is quite incredible for software that basically has all the features that Google Photos has.
I've had nothing but trouble with Immich. It's a CPU hog if you enable any kind of AI/ML (face detection is a notable culprit) or when preprocessing even small phone videos, I can't get it to import an existing photo tree from a filesystem, and the iOS app can't seem to sync reliably...
Very nice that the author uses tailscale serve! It's an underrated, and unfortunately under-documented, way to host a web service directly on Tailscale. With that you can run a docker compose stack with one extra tailscale container, and then it's immediately a self-contained and reasonably portable web server in your tailnet.
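The host-side version is basically a one-liner; the exact arguments have shifted between Tailscale releases, so treat this as the general shape and check `tailscale serve --help` on your version:

```sh
tailscale serve --bg 2283      # expose local port 2283 over tailnet HTTPS
tailscale serve status
```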
Immich really is fantastic software, and their roadmap is promising. I hope they have enough funding to keep going.
I would disagree there. I've tried lots of photo managers, and for organizing thousands of photos, I think Google Photos has it pretty much nailed. When choosing a photo/video manager, "works pretty much exactly like Google Photos but without all the AI bullshit and privacy issues" is a major selling point for me. Ideally it would even have the same shortcuts so that my muscle memory still works.
The only thing that's really missing is a feature on the mobile app to delete local copies of uploaded assets ... Something like Google Photos "Free up space" feature.
It has that. Select the media you want to delete, tap & hold, then scroll to the right in the menu and select Delete from device. At least on Android this is the way.
One thing I really like is the performance... its smooth and fluid. The api is really useful as well: I wrote a small job to auto add descriptions and tags to the images.
I honestly thought there was some app - even one from Google - that let you just run on your Windows or Mac and downloaded all of your Google Photos to your computer.
That’s all the author is trying to do. He isn’t trying to avoid or replace Google Photos - just have a local backup.
Even Apple has a Windows app that does that for iCloud Photos
Immich started the same time and with the same backstory/reasoning to my (failed) project.
I love the immich success story but it seems like it's missing a crucial use case in my view: I don't actually want a majority of the photos on my phone. I want something like a shared album that me and my wife both have access to, and so we can share photos specifically to that album (quickly and without hassle), so we can do it in the moment and both have access.
I would probably estimate 90% of my photos are junk, but I want to isolate and share the 10% that are really special.
My app failed, but I'm thinking about reviving it as an alternative front-end to immich, to build upon that.. But I feel like I'm the only one who wants this. Everyone else seems fine with bulk photo backup for everything.
I want something with a simpler backend than immich. I don't really want to host it because it needs lots of stuff to run. I would love one that can do sqlite and is a single binary go (or rust) program.
I have a homegrown app too. It's too tinkery for anyone else. I throw whole iOS device backups at it so it can pluck out media from texts. Then the frontend has an efficient bulk sorting workflow with vi keys to navigate a grid of photos and tag with a few different tags or delete. I feel like this is not the same use case as immich, it's maybe a curation step before exporting a refined set of media.
just disable auto-upload and then manually upload the ones you want to. There is a setting to share your immich library with someone else. Between those two features, you should get something close to what you want.
For me one of the killer things would be to click "share" on a photo I took, and then have the immich albums show up so I can put them in that specific place as like a 3 click process. That's basically what I was building my whole app around
You can pick which albums on your phone to upload to Immich. You and your wife could have separate users on the server too if you want that. I think you can probably share a user account, or share albums between users, but the syncing might get confusing if you both have an album with the same name. The only reason I can think of to not upload everything on your phone and try to share one or two albums is that it might get hard to search through many pictures, even with the AI.
As for not wanting most of your photos, Immich also includes AI search and facial recognition which both work really well. I can't remember if it detects near-duplicates, but I thought it did. I think you should play around with it before you leap into the giant project of making your own app.
> For every cloud service I use, I want to have a local copy of my data for backup purposes and independence.
I know how much Adobe is hated around any creative circle, but tbf I find that Lightroom CC does this pretty well.
Adobe has a well-done, simple helper app that does just that: it downloads your entire library locally, with all pictures, all edits, everything. For backup purposes it's perfect.
Lightroom might be expensive for amateurs, but if you even just do a couple of photo jobs per year, it's worth every cent.
I've been running Immich on my Kubernetes cluster for a few months now. It was one of the harder things to install. I didn't use the "official" Helm chart because I didn't like it, instead just set it up myself. I use Cloud Native Postgres for DBs so I have backups already configured. I had to use a special image with vectorchord in it. It auto updates with flux and has been fine. The only time it wasn't fine was when I needed to manually upgrade vectorchord in the db.
The Android app is good but does quite often fail to open, just getting stuck on the splash screen indefinitely. Means I have to have another app for viewing photos on my phone.
One of the main reasons I wanted to install it is because my partner runs out of space on her iPhone and I don't want to pay Apple exorbitant amounts for piffling storage. Unfortunately it doesn't quite work for that; I can't find an option to delete local copies after upload.
I would say Claude is proprietary software after all, no?
LLM managing a NixOS install lol
Immich is one of the only apps on iOS that properly does background sync. There is also PhotoSync, which is notable for working properly with background sync. I'll take a wild guess that Ente may have gotten this working right too (at least I'd hope). This works around the limitation that iOS apps can't really run as background apps (it appears the app can wake up on some interval, run/sync for a little, and try again on the next interval). This is much more usable than, for example, the Synology apps for photo sync, which, the last time I tried, were for some reason insanely slow and required the phone to have the app open and the screen on to fully sync.
Some issues I ran into is the Immich iOS app updating and then being incompatible with the older version of the server installed on my machine. You'd have to disable app updates for all apps, as iOS doesn't support disabling updates for individual apps.
In my specific scenario, the latest version of Immich for NixOS didn't perform a certain migration for my older version of Immich. I had to track down the specific commit that contained the version of Immich which had the migration, apply that, then I was able to get back to the latest version. Luckily, even though I probably applied a few versions before getting the right one, it didn't corrupt the Immich install.
I've hosted Immich since it came out and all my photos have been migrated to it at this point. I would never host Immich on NixOS (and I do use it for certain things). The reason? It's not simpler than a container option and creates a single point of issue. The container option is tested and supported by Immich, they recommend it. So everything I need is part of that. I moved servers midway through the year and the storage for my Immich implementation is NAS hosted and the mount is simply exposed to the Immich container. It took me less than 15 minutes to move Immich. And while that would have likely been the same with NixOS it's actually more of a chore to roll back with Nix. My Compose file is locked to major/minor and I choose when to do upgrades. But rollbacks are actually simpler IMO. I just stop the container, tar the operational directory, flip the bits in the Compose file and restart. I've not actually had an issue with Immich ever while doing it this way and I manage about 10TB of photos and videos currently in Immich.
I actually thought about doing this with NixOS last year, but it seemed counterproductive compared to how I self-host, I don't want to manage configurations in multiple places. If I switched everything it would likely be just as much work and then I'm reliant on Nix. Over the years I've gone from the OS being a mix of Arch and Ubuntu to mostly just Debian for my self hosting LXC or VMs. I already have the deployments templated so there's nothing for me to do other than map an IP, give it a hostname and start it.
To each their own, but I don't want to be beholden to NixOS for everything. I like the container abstraction on LXC and VMs and it's been very good to minimize the work of self-hosting over 40+ services both in my home lab and in the bare metal server I lease from Hetzner.
A few thoughts.
> It's not simpler than a container option and creates a single point of issue. The container option is tested and supported by Immich, they recommend it. I don't want to be beholden to NixOS for everything.
I think there's a misunderstanding here. You aren't beholden to NixOS here. You don't have to use nixpkgs nor home-manager modules. You can make your own flakes and you can use containers, but the benefit is still that you set it up declaratively in config.
It's not incompatible with anything you've said, it's just cool that it has default configurations for things if you aren't opinionated.
> I don't want to manage configurations in multiple places.
I've accumulated one big Nix config that configures across all my machines. It's kind of insane that this is possible.
Of course, it would seem complicated looking at the end result, but I iterated there over time.
Example: https://github.com/johnae/world -- fully maintained by a clanker (https://github.com/johnae/world/pulls?q=is%3Apr+is%3Aclosed)
My problem with NixOS is the second you try to go "outside the guardrails", the difficulty increases 100x
Is it? Why? If a NixOS module doesn’t support what you need, you can just write your own module, and the module system lets you disable existing modules if you need to. Doing anything custom this way still feels easier than doing it in an imperative world.
> you can just write your own module, and the module system lets you disable existing modules if you need to
That sounds about 100x more difficult to me
I can see your point that it can be daunting to have all the pain upfront. When I was using Ubuntu on my servers it was super simple to get things running
The problem was when I had to change some obscure .ini file in /etc for a dependency to something new I was setting up. Three days later I'd realise something unrelated had stopped working and then had to figure out which change in the last many days caused this
For me this is at least 100x more difficult than writing a Nix module, because I'm simply not good at documenting my changes in parallel with making them
For others this might not be a problem, so then an imperative solution might be the best choice
Having used Nix and NixOS for the past 6-7 years, I honestly can't imagine myself using anything than declarative configuration again - but again, it's just a good fit for me and how my mind works
In the NixOS scenario you described, what keeps you from finding an unrelated thing stopped working three days later and having to find what changed?
I’m asking because you spoke to me when you said “because I'm simply not good at documenting my changes in parallel with making them”, and I want to understand if NixOS is something I should look into. There are all kinds of things like immich that I don’t use because I don’t want the personal tech debt of maintaining them.
I think the sibling answer by oasisaimlessly is really good. I'd supplement it by saying that because you can have the entire configuration in a git repo, you can see what you've changed at what point in time
In the beginning I was doing one change, writing that change down in some log, then doing another change (something I'd mess up within about five minutes).
Now I'm creating a new commit, writing a description for it to help myself remember what I'm doing and then changing the Nix code. I can then review everything I've changed on the system by doing a simple diff. If something breaks I can look at my commit history and see every change I've ever made
It does still have some overhead in terms of keeping a clean commit history. I occasionally get distracted by other issues while working and I'll have to split the changes into two different commits, but I can do that after I've checked everything works, so it becomes a step at the end where I can focus fully on it instead of yet another thing I need to keep track of mentally.
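A sketch of that loop, with a hypothetical commit message and host name:

```sh
git add -A
git commit -m "immich: enable hardware transcoding"   # one change per commit (example message)
sudo nixos-rebuild switch --flake .#myhost             # or plain `nixos-rebuild switch`
git log --oneline                                      # every change ever made to the system
git diff HEAD~1                                        # exactly what the last tweak touched
```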
I just realised I didn't answer the first question about what keeps me from discovering the issues earlier
The quick answer is complexity and the amount of energy I have, since I'm mostly working on my homelab after a full work day
Some things also don't run that often or I don't check up on them for some time. Like hardware acceleration for my jellyfin instance stopped working at some point because I was messing around with OpenCL and I messed up something with the Mesa drivers. Didn't discover it until I noticed the fans going ham due to the added workload
> because you can have the entire configuration in a git repo, you can see what you've changed at what point in time
That’s true of docker too.
I'm not really sure what your point is, but I'll try to take it in good faith and read it as "why doesn't docker solve the problem for it, since you can also keep those configurations in a git repo?"
If any kind of apt upgrade or similar command is run in a dockerfile, it is no longer reproducible. Because of this it's necessary to keep track of which dockerfiles do that and keep track of when a build was performed; that's more out-of-band logging. With NixOS I will get the exact same system configuration if I build the same commit (barring some very exotic edge cases)
Besides that, docker still needs to run on a system, which must also be maintained, so Docker only partly addresses a subset of the issue
If Docker works for you and you're not facing any issues with such a setup, then that's great. NixOS is the best solution for me
That’s all my point was, yeah. Genuinely no extra snark intended.
> it is no longer reproducible
The problem I have with this is that most of the software I use isn’t reproducible, and reproducible isn’t something that is the be all and end all to me. If you want reproducible then yes nix is the only game in town, but if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
> docker still needs to run on a system
This is a fair point but very little of that system impacts the app you’re running in a container, and if you’re regularly breaking running containers due to poking around in the host, you’re likely going to do it by running some similar command whether the OS wants you to do it or not.
> if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
For some I'm sure that's the case; it wasn't in my case.
I ran docker for several years before. First docker-compose, then docker swarm, finally Nomad.
Getting things running is pretty fast, but handling volumes, backups, upgrades of anything in the stack (OS, scheduler, containers, etc) broke something almost every time - doing an update to a new release of Ubuntu would pretty much always require backing up all the volumes and local state to external media, wiping the disk, installing the new version, and restoring from the backup
That's not to talk about getting things running after an issue. Because a lot of configuration can't be done through docker envs, it has to be done through the service. As a consequence that config is now state
I had an NVMe fail on me six months ago. Recovering was as simple as swapping the drive, booting the install media, installing the OS, and transferring the most recent backup before rebooting.
Took about 1.5 hours and everything was back up and running without any issues
Not OP, and not very experienced with NixOS (I just use Nix for building containers), but roughly speaking:
* With NixOS, you define the configuration for the entire system in one or a couple .nix files that import each other.
* You can very easily put these .nix files under version control and follow a convention of never leaving the system in a state where you have uncommitted changes.
* See the NixOS/infra repo for an example of managing multiple machines' configurations in a single repo: https://github.com/NixOS/infra/blob/6fecd0f4442ca78ac2e4102c...
Both of those things are true of running containers, no?
I've written a dozen flakes because I want some niche behavior that the home-manager impl didn't give me, and I just used an LLM and never opened Nix docs once.
It's just declarative configuration, so you also get a much better deliverable at the end than running terminal commands in Arch Linux, and it ends up being less work.
"Just" write your own module?
Have you seen how bad the Nix documentation is and how challenging Nix (the language) is? Not to mention that you have to learn Yet Another Language just for this corner case, which you will not use for anything else. At least Guix uses a lisp variant so that some of the skills you gain are transferable (e.g. to Emacs, or even to a GP language like Common Lisp or Racket).
Don't get me wrong, I love the concept of Nix and the way it handles dependency management and declarative configuration. But I don't think we can pretend that it's easy.
The documentation is not great (especially since it tends to document nix-the-language and not the conventions actually used in Nixpkgs), but there are very few languages on earth with more examples of modules than Nix.
I’ve never seen nix, but I’d rather learn “yet another language” than fight yet another yaml syntax.
That time would be better spent just learning YAML, since you’ll encounter it in a hell of a lot more places than Nix
Kind of the same for docker? Plopping down a docker compose file and setting up a few environment vars vs writing Dockerfiles from scratch.
Not really, no. You can easily check out the repo containing the Dockerfile, add a Dockerfile override, and change most of the stuff while keeping the original Dockerfile intact and the ability to use git to update it. Then you change one line in docker-compose.yaml (or override it if it's also hosted by the repo) and build the container locally. I can't imagine an easier way to modify existing docker images; I do this a lot with my self-hosted services.
I'll be honest, that does not sound "easy".
It is straightforward, but so is the NixOS module system, and I could describe writing a custom module the same way you described custom Docker images.
It isn't the absolutely easiest process.
But it works on Ubuntu, it works on Debian, it works on Mac, it works on Windows, it works on a lot of things other than a Nix install.
And I have to know Docker for work anyhow. I don't have to know Nix for anything else.
You can't win on "it's net easier in Nix than anywhere else", and a lot of us are pretty used to "it's just one line" and know exactly what that means when that one line isn't quite what we need or want. Maybe it's easier after a rather large up-front investment in Nix, but I've got dozens of technologies asking me for large up-front investments.
This is a familiarity problem. I've never used NixOS, and all your posts telling me how simple it is sound like super daunting challenges to me versus just updating a Dockerfile or a one-liner in compose that I am already familiar with. I suspect it's the inverse for you.
If it wasn't easy, I wouldn't be using it. I'm the laziest of programmers or users.
If that’s not easy I don’t know what is.
I'm running NixOS on some of my hosts, but I still don't fully commit to configuring everything with nix, just the base system, and I prefer docker-compose for the actual services. I do it similarly with Debian hosts using cloud-init (nix is a lot better, though).
The reason is that I want to keep the services in a portable/distro-agnostic format and decoupled from the base system, so I'm not tied too much to a single distro and can manage them separately.
Ditto on having services expressed in more portable/cross distro containers. With NixOS in particular, I've found the best of both worlds by using podman quadlets via this flake in particular https://github.com/SEIAROTg/quadlet-nix
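For anyone who wants the same "portable containers, declarative host" split without pulling in an extra flake, plain `virtualisation.oci-containers` covers the basic case too; a rough sketch (the image tag, port, and paths are illustrative, and a real Immich deployment also needs its database containers):

    { ... }:
    {
      virtualisation.oci-containers = {
        backend = "podman";
        containers.immich-server = {
          image = "ghcr.io/immich-app/immich-server:release";  # pin a specific tag in practice
          ports = [ "2283:2283" ];
          volumes = [ "/tank/photos:/data" ];
          environment.TZ = "Europe/Berlin";
        };
      };
    }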
How do you update the software in the containers when new versions come out or vulnerabilities are actively being exploited?
My understanding is that when using containers, updating is an ordeal, and you avoid the need by never exposing the services to the internet.
If you're the one building the image, rebuild with newer versions of constituent software and re-create. If you're pulling the image from a public repository (or use a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates.
To me, that workflow is no more arduous than what one would do with apt/rpm - rebuild package & install, or just install.
How does one do it on nix? Bump version in a config and install? Seems similar
Now do that for 30 services and system config such as firewall, routing if you do that, DNS, and so on and so forth. Nix is a one stop shop to have everything done right, declaratively, and with an easy lock file, unlike Docker.
Doing all that with containers is a spaghetti soup of custom scripts.
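To make the lock-file point concrete, this is roughly what it looks like with flakes (the structure is standard; the hostname and nixpkgs branch here are illustrative):

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs, ... }: {
        nixosConfigurations.homeserver = nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [ ./configuration.nix ];
        };
      };
    }

    # `nix flake update` bumps flake.lock for every package and service at once;
    # `nixos-rebuild switch --flake .#homeserver` applies it, and the previous
    # generation is still there to roll back to if something breaks.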
> How do you update the software in the containers when new versions come out or vulnerabilities are actively being exploited?
You build a new image with updated/patched versions of packages and then replace your vulnerable container with a new one created from the new image.
Am I the only one surprised that this is a serious discussion in 2025?
I refer you to:
https://xkcd.com/1053/
Perhaps. There are many people, even in the IT industry, that don't deal with containers at all; think about the Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
Really? I'm a biologist, just do some self-hosting as a hobby, and need a lot of FOSS software for work. I have experienced containers as nothing other than pervasive. I guess my surprise just stems from the fact that even I, a non-CS person, know about containers and see them as almost unavoidable. But what you say sounds logical.
I'm a career IT guy who supports businesses in my metro area. I've never used docker nor run into it with any of my customers' vendors. My current clients are Windows shops across med, pharma, web retail and brick/mortar retail. Virtualization here is hyper-v.
And this isn't a non-FOSS world. BSD powers firewalls and NAS. About a third of the VMs under my care are *nix.
And as curious as some might be at the lack of dockerism in my world, I'm equally confounded at the lack of compartmentalization in their browsing - using just one browser and that one w/o containers. Why on Earth do folks at this technical level let their internet instances constantly sniff at each other?
But we live where we live.
The world is too complex, and life paths too varied, to reliably assume "everyone" in a community or group knows about some fact.
You're usually deep within a social bubble of some sort if you find yourself assuming otherwise.
Self-hosting and bioinformatics are both great use cases for containers, because you want "just let me run this software somebody else wrote," without caring what language it's in, or looking for rpms, etc etc.
If you're e.g: a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
Your understanding of containers is incorrect!
Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software. But that is the same as for non-containerized services.
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It's not that easy.
First I need to monitor all the dependencies inside my containers, which is half a Linux distribution in many cases.
Then I have to rebuild and deal with all the potential issues of the software build ...
Yes, in the happy path it is just a "docker build" which updates stuff from a Linux distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, as everyone writes their Dockerfiles differently, handles build steps differently, uses different base Linux distributions, ...
I'm a bit surprised this has to be explained in 2025, what field do you work in?
It does feel like one of the side effects of containers is that now, instead of having to worry about dependencies on one host, you have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host.
So you go from having to worry about one image + N services to up-to-N images + N services.
I think you are not too wrong about this.
Just that state _can_ be outside the container, and in most cases should be. It doesn't have to be outside the container. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you take the container down, that stuff is basically gone, which is why the state usually does live outside, like you are saying.
Your understanding of not-containers is incorrect.
In non-containerized applications, the data & state live outside the application, stored in files, a database, a cache, S3, etc.
In fact, this is the only way containers can decouple programs from state — if it’s already done so by the application. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation.
But I’m not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel that is made obsolete by NVidia NFTs.
pull new container, stop old and start new. can also make immutable containers.
I hope someone will create a Debian package for Immich. I’m running a bunch of services and they are all nicely organized with user foo, /var/lib/foo, journalctl -u foo, systemctl start foo, except for Immich which is the odd one out needing docker compose. The nix package shows it can be done but it would probably be a fair amount of work to translate to a Debian package.
I'll try to install it in a short-ish while and look into its installation in detail.
I may try to package it, and if it proves to be easy to maintain, I might file an ITP.
You could try setting it up with Podman Quadlets; those hook into systemd so you can treat them like a normal service.
This is my favorite use of CLI AI coding tools: updating my nix config. I can just ask my computer to configure services for me!
Indeed! This morning I needed a service to port forward ssh from my server to a firewalled machine, to access stuff while I work from a mountain cabin over the next few days. ChatGPT gave me a nice nix config snippet, and it just worked! Auto reconnecting and everything.
I would of course have thrown up a port forward manually today, and maybe even spent the time to add a service later, but now it was fixed once and “forever” in two minutes!
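For anyone wanting to do something similar, the result tends to look roughly like this (a sketch only; the relay host, port, and key path below are placeholders, not the snippet from the comment above):

    { pkgs, ... }:
    {
      systemd.services.reverse-ssh-tunnel = {
        description = "Keep a reverse SSH tunnel open to the firewalled box";
        after = [ "network-online.target" ];
        wants = [ "network-online.target" ];
        wantedBy = [ "multi-user.target" ];
        serviceConfig = {
          Restart = "always";   # auto-reconnect if the connection drops
          RestartSec = 10;
          ExecStart = "${pkgs.openssh}/bin/ssh -N -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 -i /var/lib/tunnel/id_ed25519 -R 2222:localhost:22 tunnel@relay.example.com";
        };
      };
    }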
I remember when people claimed to use NixOs in order to have a deterministic, repeatable setup.
AI generated Nix is equally deterministic and repeatable. The deterministic behavior makes Nix well suited for AI yolo code, either it evaluates and builds or it doesn't, and if the result isn't functional you revert back to the previous generation.
That is the use case for NixOS yes, can you clarify how it is no longer deterministic? I have been using it for a few months and was not aware of this change
Immich was my gateway into NixOS. It did a really good job of showing how well it can work. I'm only a couple of months in, so we'll see if it sticks, but I'm also running it on my laptop now.
> There is something to be said about NixOS
OK, I'll stick with Ubuntu + KDE (so Kubuntu really) on all my machines.
But what's the performance of NixOS compared to other distros? Also, I imagine CUDA installation is not as simple as changing a few lines of config file?
https://nixos.wiki/wiki/CUDA
It’s not too bad. As others have said, AI makes it easy to get right.
It's confusing, but bear in mind that nixos.wiki is an unofficial wiki, the official one is at: https://wiki.nixos.org/wiki/CUDA
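For what it's worth, the "few lines of config" end up looking roughly like this (a sketch; option names have moved around between NixOS releases, so check the wiki pages above rather than trusting these verbatim):

    { ... }:
    {
      nixpkgs.config.allowUnfree = true;            # NVIDIA driver and CUDA are unfree
      nixpkgs.config.cudaSupport = true;            # build packages with CUDA support where available
      services.xserver.videoDrivers = [ "nvidia" ];
      hardware.graphics.enable = true;              # hardware.opengl.enable on older releases
      hardware.nvidia.open = false;                 # use the proprietary kernel module
    }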
> There is something to be said about NixOS, it really is a matter of setting `services.immich.enable = true;` in a configuration file.
Assuming someone has added it to NixOS, yeah. There are plenty of platforms even easier than that where you can click "install" on "apps" that have already been configured.
> There are plenty of platforms even easier than that where you can click "install" on "apps" that have already been configured.
Yeah, like TrueNAS, where they decided it was a good idea to run Kubernetes on NAS hardware, with all the fun and speed that comes with it. You just hit "Install", wait five minutes, and you get something half-working but integrated with the rest of their "product".
I'll stick with configuration I can put in git, patch when needed, and easily come back to after 6 months when I've forgotten all the previous context I had.
Nit: TrueNAS migrated away from Kubernetes in 2024.
Nix can be easy, but it's not simple.
Obligatory link: https://youtube.com/watch?v=SxdOUGdseq4
Regarding NixOS, I'm mostly afraid of them going on a user purge after their developer purge. You just never know who this group of people will come after next, especially after they started defining "Fascism" as "anyone asking for how they define Fascism".
And the jump from getting rid of people you hate who contribute to your project (and whom you can do little harm to) to getting rid of people you hate who are of no use to you (and whom you can do genuine damage to, e.g. by installing a Tor exit node) is a step down if you think you could get away with it.
NixOS is open-source, if needed it can be forked anytime and continued to work on with new maintainers.
And flakes make that viable
> Regarding NixOS, I'm mostly afraid of them going on a user purge after their developer purge
... Why? I don't know what developer purge you're talking about, but getting rid of people running a project almost never means they'll suddenly start getting rid of users; I'm not sure why that assumption is there. Not to mention that they couldn't even "purge users" if they wanted to, unless they made the download URLs private and started including some licensing scheme, which, come on, is hardly realistic to be worried about...
To provide some opinionated context for this unhinged rant:
The community developing nix had a falling out with a couple highly unsavory groups that basically consisted of the Palmer Lucky Slaughter Bot Co. and a couple guys who keep trying to monetize the project in extremely sleazy ways. This wasn't some sort of Stalinistic purge, it was people rejecting having their name attached to actual murder and sleazy profiteering.
Honestly one of the funniest things I've read on HN in a while. Would you call yourself anti-anti-fascist?
They're likely just doing the ol' "what opinions, mfer" goose meme.
> But anyone who can install a distro can install NixOS. Instead of running your apt/dnf/pacman commands, you edit a file with your package names and services you want to enable, and run `nixos-rebuild switch`.
You can do the same with any configuration manager such as Puppet, Salt, or Chef.
Self hosting used to mean conceding on something. I can honestly say Immich is better in every way than Google Photos or whatever Apple calls it. The only thing is having to set it up yourself.
There are still some features that I miss from Google Photos. There isn't any way (that I know of) to auto-add pictures to an album based on the face. I used to have dedicated albums for family members, and it was nice to have them auto-updated.
Face recognition in general just isn't as good as Google Photos.
It's still an amazing piece of software and I'd never go back, but it isn't perfect yet.
Are we using the same Google Photos? I've found Immich face recognition and context/object search to be miles better than Google Photos. In particular, Google Photos is exceptionally bad at distinguishing non-European looking faces (though it's not great in general), and it completely gave up on updating / scanning new photos in 2024 after I imported party photos with a lot of different people.
Almost all my Google Photos "people" are mix-and-matched similar looking faces, so it's borderline useless. Immich isn't perfect, but it gives me the control to rerun face recognition and reassign faces when I want, even on my ancient GTX 1060.
The main issues I see are immich identifying things like statues or paintings as people, and not dealing with people (especially kids) aging.
Google photos isn't perfect either but I never saw these kind of issues when I was still using it.
Don’t all non-European people look alike?
That's something that should be possible with the upcoming Workflow feature. Some details can be found in the November Recap blog post.
https://immich.app/blog/2025-november-recap
The /people page looks a lot like albums based on face to me, is that not what you are talking about?
It seems like he’s saying that he could create an album and add a rule saying “add all pictures of John and Jan”
Not who you're replying to, but no - this isn't the same. You can't share that album with others. Or have others collaborate on it.
You can do this with a few scripts and the Immich API - but that's not something the average user will do.
I'm waiting for that first point too. The good news is that they just started work on workflows, which should allow for that.
Not in every way. Seems it has issues with Ultra HDR (https://github.com/immich-app/immich/issues/23094)
>Self hosting used to mean conceding on something.
Of what I selfhost, I've never felt I was having to concede on anything.
For the record, I think Immich is very good, and I use it myself. But there is something about the design and performance in the mobile app that still makes it feel "not quite there yet" on iOS at least.
Are you on the latest version?
Does your phone silently and reliably upload all the photos to your server? My guess is you're conceding on that part.
How's the offline app support? My full library (30k items) is available on my phone (not in high res). There are a lot more concessions I'm sure.
Yes, it does silently and reliably upload all my photos to my server. That's like, the entire selling point of the app? You even have control over how and when (on wifi or not) and the ability to change hostnames depending on what network you are on. And yes I can browse my entire collection back to 2001 no problem. I have no idea what the offline support is.
That was my selling point for Nextcloud, and it turns out it doesn't work reliably. It works most of the time, but for backing up photos it's not enough, and when it fails it's super annoying (you have to resync EVERYTHING from scratch).
People seem very happy about Immich, so I'm tempted to try. But people seem very happy about Nextcloud as well, so it's difficult to tell.
Nextcloud is dogshit.
Immich is the best end-user-focused app I've ever run in a container.
The sync really is quite good. On wifi it's basically seamless. If I had 30k new images though it would be much faster to use the immich-go tool mentioned in the blog post.
Offline support is alright, though I haven't worried about this much. I think it doesn't do any local deletion, so whatever stays in your DCIM folder is still on device.
> The sync really is quite good.
Do you have to ever open the app though? On iOS/Android?
In my case I would need it to run on the phones of my family members, and they probably will never open the app.
iOS doesn't allow that sort of pattern for non-Apple applications, last time I looked, so it probably doesn't work on iOS at all.
The Nextcloud iOS app does it. For some reason it requires the location permission "all the time" for that, presumably as a way to "wake" the app from time to time?
I decided to try Nextcloud exactly because of this. My problem with it is more that the whole thing is a bit unreliable. Like once in a while the app will get into a state where the only way I found to recover is to just erase everything and re-sync everything. And the app will resend ALL the pictures, even though they are already on the server.
And I can't do that with my family members' phones. It doesn't matter to me if the app takes a month to sync the photos, but it has to require zero maintenance. I can deal with the server side, but I need it to "just work" on the smartphones.
> The Nextcloud iOS app does it.
Searching for "nextcloud ios background sync" shows a whole bunch of forum posts and bug reports about it not working well unless you have the application open.
One issue (https://github.com/nextcloud/ios/issues/2225) been open since 2022, seems to still be not working properly. Another (https://github.com/nextcloud/ios/issues/2497) been open since 2023.
For something that works well it seems like a ton of people have a lot of issues with it. Are you sure you're on the latest iOS version? Seems like people experience the issues when they're on a later version.
The offline sync was a bit problematic in the past but this year they finally got it working properly.
Can confirm, they put in a ton of effort to fix it and they delivered. Flawless on ios since many versions ago.
I’d gladly take manual but bulletproof sync over paying a fee forever for essentially… storing files on drives.
We got to this stage of having to sync because Apple can’t stand putting more storage on client devices.
> We got to this stage of having to sync ̶b̶e̶c̶a̶u̶s̶e̶ ̶A̶p̶p̶l̶e̶ ̶c̶a̶n̶’̶t̶ ̶s̶t̶a̶n̶d̶ ̶p̶u̶t̶t̶i̶n̶g̶ ̶m̶o̶r̶e̶ ̶s̶t̶o̶r̶a̶g̶e̶ ̶o̶n̶ ̶c̶l̶i̶e̶n̶t̶ ̶d̶e̶v̶i̶c̶e̶s̶.
"because a company that sells you Cloud storage has very few incentives to give away more local storage, or compress/optimize the files generated by its camera app." might be more accurate
> We got to this stage of having to sync because Apple can’t stand putting more storage on client devices.
It's not why I use sync services. All my photos fit on my devices (more or less). But I want to have seamless access to my files from both of my devices. And most importantly, the sync is my first line of backup, i.e. if my phone gets obliterated I don't lose a day or two of files and photos, I only lose a couple of minutes.
More device storage wouldn’t help. I couldn’t fit all of my pictures on any phone sold today.
I'm actually in the process of building a home NAS server primarily for this purpose. Delighted to hear everyone has such a good experience.
How does sharing an album with others work on Immich?
I have not shared it with many people. But one of my most wanted features is to completely share my photos with my partner. None of the services I tried (Plex, Synology Photos) had it. In Immich, it’s just a flip of a button.
> In Immich, it’s just a flip of a button.
Flip a switch and then what, are you getting an isolated public URL to share? Or do you have your infrastructure exposed to the internet, with the shared URL pointing to your actual server where the data is hosted?
> you have your infrastructure exposed to the internet and the shared URL is pointing to your actual server where the data is hosted
I think the previous commenter misunderstood your question, this is the answer (you can also put it behind something like cloudflared tunnels).
Immich is a service like any other running on your server, if you want it exposed to the internet you need to do it yourself (get a domain, expose the service to the internet via your home ip or a tunnel like cloudflared, and link that to your domain).
After that, Immich allows you to share public folders (anyone with the link can see the album, no auth), or private folders (people have to auth with your immich server, you either create an account for them since you're the admin, or set up oauth with automatic account creation).
Ugreen has it. It has conditional albums in which one can set up rules like person, file type, location, anniversary and more, and share a live album. Or leave all params empty and simply mirror the entire library.
You get a link and you can set read or write permissions on it.
Whoever gets that link can browse it in a web browser.
I've used this to share albums of photos with gatherings of folks; it works very well. It does assume you have your Immich installation publicly available, however. (Not open to the public, but on a publicly accessible web server)
How safe is that to set up for novice IT people? I have a Pi with Pi-hole on it and am thinking about putting Immich on it, but the fact that it exposes itself outside my LAN frightens me.
I have it set up in a container that I keep updated. Then it's reverse proxied by another container which runs nginx proxy manager, which keeps the HTTPS encryption online. So far, the maintenance has only been checking whether a new version has been released and docker pulling the images, then restarting the containers.
OK. Then you concede your security, as I can't imagine any single person self-hosting can keep their public service more secure than engineers at Google can. Especially with limited time.
You definitely have a dull imagination. If the software itself is secure, a containerized version of Immich behind a containerized version of nginx proxy manager is probably as secure as you can get. Also, Google's security tends to lean mainly towards securing Google and less towards securing Google's (non-paying) customers.
I mean, if you’re confident about security best practices, have a moderate amount of networking experience, and are a seasoned web developer, it’s not too scary at all. I realize that’s a lot of prerequisites though.
it’s not a fair comparison with Google because Google has a much bigger target on their back. There are millions of users of Google, so the value of hacking Google is very high. The value of hacking a random Immich instance is extremely low.
If you're not Cloudflare averse...
Setup immich VM or docker container with a cloudflare tunnel
Front access with Cloudflare Access (ZeroTrust) for free.
Set "can only be accessed by users with email = xyz@myuser”
Done.
Now assuming this is the same user email as the one you shared photos with, there is a base level of security keeping the riffraff away.
Home IP is never exposed either, because it's proxied through the cf tunnel.
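If the host happens to be NixOS, the tunnel half of that can live in the same config via the `services.cloudflared` module; a sketch (tunnel ID, credentials path and hostname are placeholders, and the Access/ZeroTrust email policy is still configured in the Cloudflare dashboard):

    { ... }:
    {
      services.cloudflared = {
        enable = true;
        tunnels."my-tunnel-id" = {
          credentialsFile = "/var/lib/cloudflared/my-tunnel-id.json";
          default = "http_status:404";
          ingress."photos.example.com" = "http://localhost:2283";
        };
      };
    }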
If you want to share with family, you can permanently add them as users to your Immich instance. Otherwise, you can create a link that they can use.
Other than redundant hosting, what will I get as an Apple user by setting this up? It would be very easy to set up, just not sure what I’m gaining from it
Recall the 3-2-1 backup rule. iCloud is an offsite cloud copy, your phone is (arguably) a local copy, so you're missing one additional local copy.
I don't think it would add any value for you. For me, it adds value because I only have to turn my head to the left to see the computer that contains all my photos since I started taking pictures with a smartphone.
I plan to set up Immich so that I can have a central photo storage.
Apple Photos plays poorly when you want to put the library on an external drive (and even more poorly when you want to put it on a networked drive).
Supporting someone who is not TooBigTech is a valid concern, IMO.
The selling point for me is that it is NOT TooBigTech. It doesn't have to be as good as TooBigTech, but it has to be reliable enough. In my case it means that it should be able to sync from iOS/Android, in the background, even if the user never opens the app, and it should never get out of sync and require setting up everything again. Nextcloud fails at that.
For one, iCloud has terrible sync speed. Even 500GB of photos/videos takes forever to sync, like a week, and I can't imagine what it would take for someone with multi-TB archives.
Yes, but it's a one-time occurrence, isn't it?
I'd imagine if you're a person who takes a lot of photos/videos, slow sync can be pretty annoying. Unfortunately I'm not one of them, so I can't say, but I just had to wait about a week for the first sync of my wife's iPhone to finish.
My biggest worry with Immich is how to future-proof the albums. With photos sorted into folders, it should be no problem to access them in a couple of decades. With Immich, I have to rely on the software still working or finding some kind of tool to dump the database.
I work on an image search engine[0]; the main idea has been to preserve all the original metadata and directory structure while allowing semantic and metadata search from a single interface. All metadata is stored in a single JSON file, with the original paths and filenames, in case you ever need to create backups. Instead of uploading photos to a server, you could host it on a cheap VPS with enough space and index there (by default it's a local app). It is an engine though, and doesn't provide any auth or specific features like sharing albums!
[0] https://github.com/eagledot/hachi
I use Single File PHP Gallery. Put the file in the root dir of your photos and make it executable on the web server. That's it. The settings are also inside the file, if you need any tweaking.
https://sye.dk/sfpg/
You should print them ;-). But yeah, I’m also old school in that I make directories for each album. I used MacOS photos before, but it’s terrible when you change systems (which eventually will happen).
Don't storage templates handle this out of the box? I haven't actually checked my instance but that was my impression from reading the docs!
This was why I was driven to use Photoprism. I use syncthing-fork to upload from phones, and a custom made thing to copy them to folders (this also works with Cameras that aren't phones).
https://www.photoprism.app/
Although Immich does backup from your phone, I don't see it as a viable backup solution. Git-annex, Unison, and Syncthing are much better at keeping files synchronized across devices. Immich will create its own copies of photos and transcode videos for playback on the web. That may be fine if you have enough storage space, but for me it makes the phone backup useless. I suppose you could use a git-annex special remote directory as an Immich external library.
The database is Postgres, and the schema is quite sensible. You can (and I have) write normal SQL queries in psql to modify the data.
It might not be as easy as rsync to transfer data out, but I would trust it way more than some of the folder based systems I've had with local apps that somehow get corrupted/modified between their database and the local filesystem. And I don't think ext4 is somehow magically more futureproof than Postgres. And if no-one else writes an export tool, and you feel unable to, your local friendly LLM will happily read the schema and write the SQL for you.
I have the same concerns, and that's why I only use software that accepts my directory structure as input and doesn't mess with it. I, for example, added the top directories of my image directory structure by hand, each one as a read-only shared directory in Immich.
The main reason: I don't trust software NOT to delete my photos. (Yes, I have an off-site backup, but the restore would take time.)
I use PhotoSync to upload to a folder that is an external library for Immich. Immich then periodically scans that folder to load new assets. I usually use digiKam to manipulate that folder. Immich is there just for easy remote browsing of those files.
I remember that Immich has a mode to use folders for storage instead of cryptic hashes. When I used it, it was somewhat deprecated due to some problems, but supported. I actually stopped using Immich because newer versions run the keep-alive via socket.io with a Postgres notify, which does constant empty WAL flushes, triggering empty page writes on idle.
Storage template is not deprecated https://docs.immich.app/administration/storage-template/
Also according to https://immich.app/cursed-knowledge the notify issue was fixed July 2024.
Thank you, well put. That's why I am using Nextcloud and manual curation. Folders are the ultimate future-proof structure. But I do see the value of a nice UI; Immich just hides the files from me too much for my taste.
Although I am sure I can back them up to my PC somehow. But having them just on the server is not my favourite solution.
I share this worry.
You can configure the storage template for the photos and include an "album" part, so if a photo is in some album it'll get sorted into that folder. Then the file tree on disk is as you wish.
I haven't tested what it does when a photo is in multiple albums, but it does handle the no album case fine as well.
In the same boat. It seems there is an API to export photos, so I was thinking about a script that would export photos into a separate folder and use hard links so as not to take up more space.
At this point you could just point Claude code at the database and the image folder and ask it to write a migration script
This is what I like about Ente. I can pay them to give me an e2e encrypted cloud service, and then the desktop app has a continuous export feature that will dump everything into a plain folder structure on my home NAS automatically.
Ente could go out of business tomorrow and I’d still have all my photos, neatly organized into folders.
And I don’t have to bother with self-hosting overhead. Or I could self host, too, if I wanted. But I still need an off-site backup so I might as well pay for the cloud service.
Have you had any issues with the continuous export writing to a network volume? And does it work for all users in a family plan? That was my plan as well, but I’d like to only have to run one export job
I can’t tell you about family plan since I don’t have one. I assume you’d have to set this up on a per-user basis.
I haven’t had any network volume issues. It’s an SMB volume provided by trueNAS mounted on a Windows machine.
I will say, if you mess up your volume like the time I took my NAS down for maintenance for a few days, the export failure wasn’t incredibly loud. I don’t think it notified and screamed at me that it wasn’t working. So I guess that is a significant risk.
This is why I still use Piwigo as I don't need to mess with file names and structure as far as I have seen.
As long as it is running on an open source database engine, I don't understand the difficulty.
You'll have plenty of time to write your export script before Postgres ever disappears completely from all the bytes stored on our planet.
Also, are you saying you don't do backups?
I adore Immich. I set it up a while ago, and I'm finally looking at my photos again. I was previously using Nextcloud for photos, but it was such a slog to find anything that I never took or looked at photos.
Immich put the joy back in photography for me, it's so easy to find anything, even with just searching with natural language.
Yeah, I started with Memories for Nextcloud. But it was buggy/slow unfortunately.
Being able to scroll to dates with immich is golden. And the facial recognition is on device and works great.
I don't have that experience with Nextcloud Memories.
Everything works well and it's comparably fast with Google Photos for me, and scrolling to specific dates works fine.
How long ago did you try it? I've only been using it for a few months so maybe it's improved over time.
I do that with DayOne and curation, but obviously this means I keep only 2-3 pictures per event; most of the time that's enough (and even better, since I choose the ones I prefer and keep those).
I never even used Google Photos (because, you know), so if somebody could explain more concretely: how do you use it? Is it actually a backup app (and if so, is it really much different from using a generic backup app or even just syncthing), or does it somehow magically allow you to keep the preview gallery and search on your device, while your actual 200 GB of photos are somewhere in the cloud and the local storage is basically just auto-managed cache, where everything you didn't access in the last 6 months gets deleted? Does it preserve all this additional data Android cameras add, like HDR, video fragments before photos, does it handle photospheres well, etc? I'm asking because I don't even fully understand how the camera app handles it itself, and if all the data is fully portable.
FWIW, I also don't use any fancy collection management and barely understand what all these Lightrooms and XMP files are for. Maybe I should, but up to this day photos for me are just a bunch of files in the folder, that I sometimes manually group into subfolders like 2025-09, mostly to make it easier on thumbnail-maker.
It auto uploads all your photos to the cloud and you can delete them locally and still have them. The biggest feature is the AI search, you can type anything and it will find your pictures without you doing any work categorizing them. It can do objects or backgrounds or colors and it can even do faces so you can search by people's name. That and there's share links to albums and multiplayer albums.
It keeps the originals locally forever after it uploads them, unless you delete them. There's a one-click "free up space on this device" button to delete the local files. It's actually somewhat annoying to export in bulk, you pretty much have to use Takeout.
It's annoying to export in bulk because you have to use the bulk export tool?
Yes. Google's Takeout tool is pretty bad, UX-wise.
Key features that matter to me: 1) backup from android or iOS. This helps when I have switched phones over the years. 2) shared albums with family or friends where invited people can both see and contribute photos. Think kids albums, weddings, holidays. 3) ability to re-download at full resolution
1) You don't have backups of other data on your phone (chat history, 2FA secrets and private keys, text notes, anki cards, game progress, configuration of all apps, etc.)? I had assumed everyone who cares about their data has backups of their data anyway, so that's not really a selling point to install another app for
2) that's nice!
3) "it doesn't throw my data away" is the last selling point?! Isn't that just assumed?!
1) I do have separate backups, as well as this, which runs more frequently (after picture is taken) vs daily for device backup
3) not compared to iCloud photos which I migrated from. You can export a whole album with Google at original quality with 1 click. With Apple you can only do 1000 at a time. For apple you can ask for a whole account export, but that takes a few days and gives you all photos. (Similar to Google Takeout).
For nearly a decade I've been using Google Photos with a love-hate relationship. I've tried a few alternative photo apps, even tried building one myself as a side side side side project, but nothing really felt like it could replace how I use Google Photos (haven't tried in the past couple of years mind).
I have a daughter, and my family lives in another country, so I want to be able to share photos with them. These are the feaures I need:
- Sharing albums with people (read only). It sounds pretty simple, but even Google fucked it up somehow. I added family members by their Google account to the album, and somehow later I saw someone I didn't know was part of the album. Apparently adding people gives (or did?) them permission to share the album with other people, which is weird. I want to be able to control exactly who sees the photos, and not allow them to share or download them with others. On the topic of features, I should note that zero of the other social features (comments / reactions) have ever been used.
- Shared album with my spouse (write). I take photos of the kid, she takes photos of the kid. We want to be able to both add our photos to the shared album.
- Automatic albums or grouping by faces. Being able to quickly see all the photos of our kid is really great, especially if it works with the other sharing features. On Google you could set up Live Albums that did this... (automatic add and share between multiple people) but I can't see the option anymore on Android. I feel it could be a bit simpler though: just tag a specific face so that all photos of it get shared within my Google One family.
- The way we use it is we have a shared album between us for all the photos, and then a curated album of the best photos shared with family members.
Other than that I just use it as a place to dump photos (automatically backed up from my phone) and search if needed. Ironically the search is not very good, but usually I can remember when the photo I need was taken roughly so can scroll through the timeline. In total my spouse and I have ~200GB of media on Google Photos, some of it is backed up elsewhere.
What about automatic background sync without ever having to open the app on mobile? Does that work or do you have to open the app regularly for it to sync properly?
This doesn't work properly on Nextcloud (it sometimes gets out of sync and then I'm screwed because I have to reset the app on my family member's phone and have them resync for hours).
Wouldn't recommend. When I wanted to move from Google Photos to iCloud, there was no way to simply get all my photos. I had to use a JS script that would keep scrolling the page and download photos one by one.
Lesson learnt.
These days there is Data Transfer Project that supports transferring pictures from Google Photos to iCloud and the other way around. https://portmap.dtinit.org/articles/photos7.md/
Google Takeout?
It technically gives you the data, but it’s not in a format that’s very easy to use
You can back up to Immich using various methods, including dumb file copy into a dropbox folder. For a while, I was using PhotoSync that uploaded photos to my NAS with Immich using WebDAV.
Immich also has an app that can upload photos to your server automatically. You can store them there indefinitely. There are galleries, timelines, maps for geotagged photos, etc.
The app also allows you to browse your galleries from your phone, without downloading full-resolution pictures. It's wickedly fast, especially in your home network.
> Does it preserve all this additional data Android cameras add, like HDR, video fragments before photos, does it handle photospheres well, etc?
It preserves the information from sidecar files and the original RAW files. The RAW processing is a bit limited right now, and it doesn't support HDR properly. However, the information is not lost, and once they polish the HDR support, you'll just need to regenerate the thumbnails.
Immich is a Google Photos clone, and when they say "self-hosting", they mean SELF-HOSTING. You need to be a web dev or a sys admin to be able to wrangle that thing. Nightmare upgrades, tons of weird bugs related to syncing.
If your solution to an issue is "just reset the Redis cache", this is when I am done.
Immich solves the wrong problem. I just want the household to share photos - I don't want to host a Google Photos for others.
Not my experience hosting immich for close to two years now. There was only one "breaking change" a long time ago where you would have to manually change a docker image in the compose file, but since then things have been smooth for me.
Immich may not be the pinnacle of all software development, but with the alternative being Google photos:
- Uploading too many photos won't clog my email and vice versa
- I'm not afraid of getting locked out of my photo account for unclear reasons and being unable to reach anyone to regain access
- If I upload family photos from the beach, then my account won't get automatically flagged/disabled for whatever
- Backups are trivially easy compared to Google takeout
- The devs are reachable and responsive. Encounter a problem? You'll at least reach a human being instead of getting stranded with a useless non-support forum
I would instead say that my (and my family's) photos are too important to me to pass their hosting on to a company known for its arbitrary decisions and then being an impenetrable labyrinth if there is an issue.
So you do pay some price, but it is an illusion to think that the price of Google photos (be that in cash, your data or your effort) is much lower.
Things that did break during this time: my hacky remote filesystem, and the network connectivity of a too-cheap server. But those were on me and my stinginess.
> Immich solves the wrong problem. I just want the household to share photos
That is a totally reasonable view. But others have different preferences. I, for example, do not want to share all my photos with Google, govvies and anyone else they leak them to.
So I self host, back up and share my files with the family. I can always dump what I want to insta, etc. but it is my choice what to share, picture by picture, with default "off". And have no dark patterns trying to catch a finger accidentally hitting a "back up to cloud" for the full album.
That, to me, is a big deal, worth dealing with occasional IT hassles for. Which is just a personal preference.
>> Immich solves the wrong problem. I just want the household to share photos
Pixelfed may be what the parent wants then. I don't like that it is PHP, but as long as they adhere to the ActivityPub protocol, we can roll our own in whatever flavor.
Or perhaps Photoview? https://github.com/photoview/photoview
Actually, I set up a Proxmox server last week that runs a couple of self-hosted applications. I have Nextcloud running and it was fairly easy to set up. The next item on my list WAS Immich. I decided against trying to deploy it.
The reason is simple: they essentially force the use of Docker, which I won't touch at all. Either a native Proxmox container (which is just LXC) or a proper VM, but I keep those in reserve as they can be heavy. I'm not asking them to create a native package for Debian or a container image; a simple install script that bootstraps the application (checks for and installs itself and its dependencies), bootstraps the database and basic config (data directory, URL & ports, admin password) is more than enough. The same script should be usable to update the application if possible, or there should be an updater in the admin panel to update the application without manual steps or data migrations. AdGuard Home does all of this perfectly, in my opinion.
I know Immich thinks they are making things "easier" by just dumping everything into a Docker container, but some of us won't touch it at all. Same reason I avoid any project that heavily relies on the nodejs/npm ecosystem.
I really don't understand this take. A script that installs all required dependencies is fine if and only if you are dedicating a machine to Immich. It probably requires some version of Node, possibly with hidden dependencies on some Python; it uses ffmpeg, so all related libraries and executables need to be there. You then have a couple of separate DBs, all communicating together. Let's not talk about updates! What if you're skipping versions? Now your "simple install script" becomes a fragile behemoth. I would NOT consider this if it were not docker-native. Plus, I don't have a server with enough resources for a lot of VMs, with all of their overhead and complications, just to have one per service. Nowadays there are many ways to run a container, not just the original docker.com software, and you can do that on pretty much any platform. Even Android now!
I've never understood it either. I still deploy some things into their own respective manual deployments but for lots of things having a pre-made docker compose means I can throw it on my general app VM and it'll take 5 seconds to spin up and auto get HTTPS certs and DNS. Then I don't lose hours when I get two days into using something and realize it's not for me.
Also have you read some of the setup instructions for some of these things? I'd be churning out 1000 lines of ansible crap.
Either way, since Proxmox 9.1 added at least initial support for Docker-based containers, the whole argument's out the window anyway.
Me neither. Docker is the platform-agnostic way to deploy stuff, and if I maintained software it would be ideal: I can ship my environment to your environment. Reproducing that yourself will take ages, or alternatively I also need to maintain a lot of complex scripts long-term that may break in weird ways.
https://community-scripts.github.io/ProxmoxVE/scripts?id=imm...
These things are a proxmox home lab user's lifeline. My only complaint is that you have to change your default host shell to bash to run them. You only have to do that for the initial container creation though.
What are some arguments against using docker?
I think it's the best of every world. Self contained, with an install script. Can bring up every dependent service needed all in one command. Even your example of "a simple script" has 5 different expectations.
Just because it doesn't solve your problem doesn't mean it solves the wrong problem.
I've been waiting what feels like years for immich stable to be released for this reason. Luckily it finally happened about a month ago. I'm about to go through swapping out the main OS SSD on my server. If I'm able to see the immich backups after reinstalling TrueNAS I'm going to call it resilient enough for me.
I have to agree. I tried using it for a few months and it left me convinced I'll be paying for iCloud photos for the rest of my life.
Such a weird take. Of course "self hosting" means "self hosting".
Sure it could be easier/safer to manage, everything can be better.
Over the last couple of years hosting it, I had a single issue with an upgrade, but that was because I simply ignored the upgrade instructions and YOLOed the docker compose update.
Again, is it perfect? No. Would I expect a non-tech-savvy user to manage their own instance? Again, no.
What? It is literally just start the container and forget. When upgrading it is change the version tag and restart the container.
Upgrades are frequent but no hassle.
I have been running this for half a year. It might have been more work earlier?
My household is using this for our shared photos repository and everyone can use it. Even the kids.
There is both direct web access and an iPhone app.
I've run Immich for more than two years, and there was an upgrade to 1.33, I think around spring 2024, that required special instructions for editing the docker compose file because they changed the vector database. I think there was also a database migration the same year where - if you did not update the version regularly - you would need to run a two-step upgrade. They always provided plenty of documentation. A while ago sync was quite wonky, but they improved that a lot lately.
Idk maintaining the PG vector extensions has been kind of a pain in the ass, at least from an automation perspective
I never had to meddle with that
Huh? What are you maintaining? The PostgreSQL db and extensions are provided in the container image. You do not have to use your own external PostgreSQL.
Of course, you may have reasons to do that. But then you also own the maintenance.
I have never had to maintain any PG extensions. Whatever they put in the image, I just run. And so far it has just worked. Upgrades are frequent and nothing has broken on upgrade - yet at least
these are all cases of PEBCAK
There is an Android app, too.
Couldn't Immich be used by a paid service provider to offer Google-free photo hosting?
I am pretty sure they already offer that service for a price.
> You need to be a web dev or a sys admin to be able to wrangle that thing.
I totally disagree. You do need a tiny bit of command line experience to install and update it (nothing more than using a text editor and running `docker compose up`), but that's really it. All administration happens from the web UI after that. I've been using Immich for at least 2 years and I've never had to manually do something other than an update.
> Immich solves the wrong problem. I just want the household to share photos - I don't want to host a Google Photos for others.
Honestly, I can't understand what exactly you're expecting. If Google Photos suits your needs for sharing photos with others, that's great! As for Immich, have you read how it started[0]? I think it's solved the problem amazingly well and it still stays true to its initial ambitions.
[0]: https://v1.142.1.archive.immich.app/docs/overview/welcome
Every time I go the self-hosting route, everything goes smoothly for a while, and then decides to break 6 months down the line, and I have to waste a Saturday figuring it all out and upgrading things. Not what I want to do with my weekend, when I'm already doing software dev and maintenance for work. This happens even with super dependable, well-written self-hosted software.
On the other hand, maybe AI can help remove some of that pain for me now. Just have Claude figure out what's wrong. (Until it decides to hallucinate something, and makes things worse)
I was just telling a nonprofit the other day, who in the name of “self hosting” was running their business on a 73 plugin WordPress site:
Move to Shopify and LearnWorlds. Integrate the two. Stop self hosting. (They’re not large enough to do it well; and it already caused them a two week outage.)
> Move to Shopify and LearnWorlds.
Having seen a lot of companies and startups do exactly that, more or less everyone regrets it. Either you end up with so much traffic through these vendors that you regret it financially, or you want to change some specific part of your page or your purchase process, which Shopify doesn't let you change, and you'll end up needing to switch or be sad, or, as I regularly have to (because we don't get the resources and time to switch): trying to manipulate the site through weird hacky JavaScript snippets that modify the DOM after it loads.
It's literally always the same. They get you running in no time, and in no time you're locked into their ecosystem: No customization if they don't want it; pricing won't scale and just randomly changes without any justification; if you do something they don't like they'll just shut you down.
> Stop self hosting.
Worst mantra of the century. It leads to huge dependencies, vendor lock-in, monopolies, and price gouging. This is only a good idea for a prototype, and only as long as you're not going to run the prototype indefinitely but will eventually replace it. And maybe for one-person companies who just want to get going and don't have the resources for this.
Let me empathize but say, to put it bluntly, they do not have qualified IT Staff. They have 1 or 2 people who understand only basic web server stuff and nothing else. Thus the two week outage.
Paying LearnWorlds + Shopify $30K a year, if it were even that extreme, is cheaper than an engineer and certainly cheaper than an outage over Giving Tuesday, as they found out the hard way. They got hacked and were down for the most high-traffic nonprofit donor day of the year in their effort to save a few bucks. It wasn’t even the plugins, but the instance underlying the shared hosting.
> It's literally always the same. They get you running in no time, and in no time you're locked into their ecosystem: No customization if they don't want it; pricing won't scale and just randomly changes without any justification; if you do something they don't like they'll just shut you down.
You’re also locked into an ecosystem. It’s called Stripe or PayPal. Almost all of that applies anyway. Don’t forget that significant amount of customizations are restricted to streamline PCI compliance, you can do illegal things very easily. Install an analytics script that accidentally captures their credit card numbers, and suddenly you’re in hot water.
> Leading to huge dependencies, vendor lock ins, monopolies, price gauging
Have you analyzed how many dependencies are in your self hosted projects? What happens to them if maintainers retire? How long did it take your self hosted projects to resolve the 10/10 CVE in NextJS? And as for price gouging, if it’s cheaper than an engineer to properly support a self-hosted solution, I’ll still make that trade as even $80K for software is cheaper than $120K to support it. If you’re at the scale where you don’t have a proper engineer to manage it, do not self host. Business downtime is always more expensive than software (in this case, 5 salaries for 2 weeks to do absolutely nothing + lost donations + reputational damage + customer damages, because “self hosting is easy and cheaper”).
If you need 73 plugins for wordpress, then Wordpress is a poor technology choice for your usecase.
disagree. as the sister comment mentions, wordpress may have been the wrong choice, but self hosting is never wrong, especially for a non profit who may not have the resources to deal with a situation if a hosting service decides to shut them out.
If they don't have the resources to switch to a different hosting provider, why do you assume they will have the resources to fix things when their self-host solution shits the bed?
You're comparing apples to oranges.
Switching the ecosystem from something like Shopify to some other shop software requires a lot of manual work, and some of the stuff won't even be transferable 1:1.
Fixing some issue with your WordPress installation will require a person who can google and knows a little stuff about webservers, and maybe containers, and will usually go pretty fast, as WordPress is open source and runs almost half the internet, and almost every problem that will come up will have been solved in some StackOverflow thread or GitHub issue.
Usually though, if you run WordPress and you're not doing a lot of hacky stuff, you will not encounter problems. Vendors shutting you down, increasing their pricing, or shutting down vital features in their software, happens regularly though. And if it happens, shit hits the fan.
Of course it’s sometimes the wrong choice. Not everyone should self-host their own DNS and other things if their needs are already met.
I’ve experimented with both Immich and Ente over the last year and run Immich in parallel with Google Photos right now for my family. Once they add a few more features to support things like smart albums, I’ll be able to drop Google Photos entirely.
I love that the consumer space is getting this kind of attention. It’s one of the biggest opportunities for big tech to lock people into their ecosystem, as photos are something everyone cherishes. You can extort people with ever increasing subscription fees because over time they reach a scale with their own photos that makes it inconvenient to manage themselves. It’s nice to have multiple options that are not Google or Apple.
https://docs.immich.app/install/requirements
> RAM: Minimum 4GB, recommended 6GB
Wow. When factoring in the OS, that's an entire system's worth of RAM dedicated to just hosting files!
What does it use all this for? Or is this just for when it occasionally (upon uploading new pictures) loads the image recognition neural net?
I'd have to stop Immich whenever I want to do some other RAM-heavy task. All my other services (including a database, several web servers with a bunch of web services, a Windows VM, a git server, email, redis...) plus the host OS and any redundancy caused by using containers use 4.6GB combined, peaking at 6GB on occasion.
> CPU: Minimum 2 cores, recommended 4 cores
Would be good to know how fast those cores should be. My hardware is a mobile platform from 2012, and I've noticed each core is faster than a modern Pi as well as e.g. the "dedicated cores" you get from DigitalOcean. It really depends what you run it on, not how many of them you have
It has modern features that require relatively heavy processing, such as facial recognition, finding similar images, OCR, transcoding videos, etc. I think it only needs those computing resources when you upload new images/videos.
Immich is wonderful in a Docker setup with the GPU passed through for ML, which works pretty well, and the amazing new OCR feature does miracles: I'm able to find notes that I photographed for that purpose and then forgot, I'm able to find memories just by remembering the name of the place and searching for it, and everything runs locally!
Docker + Immich + Tailscale is the killer replacement for Google & Apple Photos, it's simply that simple
I don't get the appeal of Tailscale for simple homelab use. I have OpenVPN and it's trivial. Hit the toggle and I'm connected, no fuss.
Tailscale (and similar services) is an abstraction on top of Wireguard. This gives you a few benefits:
1. You get a mesh network out of the box without having to keep track of Wireguard peers. It saves a bunch of work once you’re beyond the ~5 node range.
2. You can quickly share access to your network with others - think family & friends.
3. You have the ability to easily define fine grained connectivity policies. For example, machines in the “untrusted” group cannot reach machines in the “trusted” group.
4. It “just works”. No need to worry about NAT or port forwarding, especially when dealing with devices in your home network.
Also it has a very rich ACL system. The Immich node can be locked out from accessing any other node in the network, but other nodes can be allowed to access it.
Tailscale uses WireGuard, which is better in a lot of ways compared to OpenVPN. It's far more flexible, secure, configurable and efficient. That said, you probably won't notice a significant difference.
OpenVPN is far from "no fuss", especially when compared to Tailscale.
I like to self-host things, so I also self-host Headscale (a private tailnet control server) and private DERP proxy nodes (DERP is like TURN). Since DERP uses HTTPS and can run on 443 using SNI, I get access to my network even at hotels and other shady places where most UDP and TCP traffic is blocked.
Tailscale's ACLs are also great; achieving the same result with OpenVPN requires more work.
And Tailscale creates a wireguard mesh which is great since not everything goes through the central server.
You should give it a try.
Why not just use wireguard directly? The configuration is fairly trivial
WireGuard is great; I have personally donated to it and used WireGuard for years before it became stable. And I still use it on devices (routers) where Tailscale is not supported. But as Jason stated, it is quite basic and is meant to be built into other tools, and that is what we are seeing with solutions like Tailscale.
Tailscale makes it simple for the user - no need to set up and maintain complex configurations; just install it, sign in with your SSO, and it does everything for you. Amazing!
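For concreteness, joining a Linux server to a tailnet is roughly this (a sketch; the install script is the documented path, but your distro may also package Tailscale directly):

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up        # prints an auth URL / opens a browser login
    tailscale ip -4          # show this node's tailnet address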
With Tailscale you don't have to learn anything, you just install apps and click.
One value of Tailscale for a ton of simple use-cases is that people don't have time / don't want to learn.
Even more trivial with Tailscale, so why wouldn’t I use Tailscale to configure wireguard for me?
Tailscale is much more reliable in my experience. OpenVPN isn't very reliable in my experience as a network admin. And IPsec is an abomination.
So, I wanted to use tailscale for a few local services in my home, but I run a few of them on the same device, and have a simple reverse proxy that switches based on hostname.
AFAICT I can't use a tailnet address to talk to that (or is it MagicDNS I'm thinking about? It's been a while since I dug in). I suppose I could have a different device be an exit node on my internal network, but at that point I figure I may as well just keep using my WireGuard VPN into my home network. I'm not sure if Tailscale wins me anything.
Do other people have a solution for this? (I definitely don't want to use tailscale funnel or anything. I still want all this traffic to be restricted like a vpn.)
I want to love Tailscale on mobile, but it conflicts with Adguard and regularly disconnects.
I keep Tailscale but switched over to Pangolin for accessing most of my self-hosted services.
Any reason you didn't just set Tailscale DNS to AdGuard? I have set it to ControlD.
With Pangolin you are exposing it outside your private network, right? It's a public website. That might be undesirable for security.
Can you elaborate? What role does Tailscale play? I selfhost and have heard about Tailscale but couldn't figure out how it's used.
Not GP. My guess is that they’re self hosting this at home (not on a server that’s on the internet), and Tailscale easily and securely allows them to access this when they’re elsewhere.
I host at home and can access the things at home just fine by having the server as DMZ in the router, or whatever it is called these days. This doesn't really answer what Tailscale does more than port forwarding. If it punches NAT, that sounds like it actually makes you rely on a third party to host your STUN, i.e. you're not self hosting the Tailscale server?
Even if you are self hosting in the cloud or on a rented box, Tailscale is still really nice from a security perspective. No need to expose anything to the internet, and you can easily mix and match remotely hosted and home servers since they all are on the same Tailnet.
Tailscale routes my mobile device's DNS through my pile back at home. I have nginx set up with easy-to-remember domains (photos.my domain.com) that work when I'm away as well, without exposing anything to the open internet.
Why not call it VPN if that's what it is? In your case, it sounds like configuring your "pile" (is that a DNS server, short for pihole maybe?) on your phone would do the same thing, but if the goal is to not expose anything to the open internet, a VPN would be the thing that does that
In my words, I use Tailscale at home but not for this (yet). Tailscale is a simple mesh network that joins my home computers and phones while on separate networks. Like a VPN, but only the phone to PC traffic flows on that virtual private network.
Tailscale can give you domains + ssl for local services with basically no effort.
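Roughly what that looks like for Immich (which listens on 2283 by default); the serve syntax has changed between Tailscale versions, so treat this as a sketch:

    # proxy the tailnet HTTPS name to the local Immich port, in the background
    tailscale serve --bg 2283
    # the service then shows up at something like https://<machine>.<tailnet>.ts.net
    tailscale serve status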
With tailscale on your server and endpoints you can access the server from anywhere without even having to open any ports. It is like magic.
If you don't open ports, how can it reach your internal services to allow you access to them?
Tailscale gives me access to my home network when I'm not at home. I can be on a train, in another country even, and watch shows streamed off the Raspberry Pi in my home office.
That's called a VPN
Is this like "Band-Aid" that used to be a brand name but now people just use it generically?
I'm using it with Dokploy, which takes care of Docker+Tailscale for me, it's quite convenient
I have been experimenting with Immich off and on for over a year, first in docker-compose and now in podman. It is slick and seamless in a lot of ways, but the portability and upgradability are questionable, as others have highlighted.
For example, when they moved between Postgres container versions, it required a manual edit to the compose file to adjust the image. Even if you managed to get it set up initially in Docker, it's these sorts of concepts that are way beyond what the vast majority of people who might be interested in self-hosting can comfortably handle.
For a hobbyist self-hoster it’s cool and fun, but not something at this point I’d trust my photos to alone. I have considered Ente for that but today it’s still iCloud Photos.
I gave it a try a few months ago. Unfortunately, my experience was not that great. I was hosting it on Synology through Docker and found that the iOS client was a bit buggy and quite slow. Synology Photos completed the initial sync in a few hours, while Immich took several days. After a few months, I switched back to Synology Photos. I might try Immich again in the future.
I started looking for alternatives after Synology became more restrictive with their hardware. I'm curious if anyone else has had a similar experience.
Long time synology user. Switched 3 weeks ago to ugreen. They rolled back their fiasco decision about drives (synology), but I wanted some good hardware in 2025. Everything that synology offers is outdated and slow.
Got myself a 6800 pro. It chewed through 98k photos, many of which are raw, within 24h AFAIK. Then came face recognition, text recognition etc. Within 2-3 days all was done.
The performance is night and day. Photos and movies load instantly. Finally can watch home movies on my TV without stuttering (4k footage straight from a nikon).
The photos app is similar to the Synology one. Face recognition was better for me: I compared the number of photos tagged for a few people, and UGREEN found 15% more. I've seen photos of my grandma that I hadn't seen in years!
There's much more positive I could say. For the negatives: no native drive app (Nextcloud, which supposedly was an alternative, doesn't sync folders on Android), and no native security cam app.
I am now running 10 Docker containers without breaking a sweat. My DS920+ was so slow that I gave up on Docker entirely after a few attempts.
The photos app has some nice features which synology didn't have. Conditional albums. Baby albums.
My guess would be that Synology is an expensive but weak computer, bare minimum for NAS.
Immich does require some CPU and also GPU for video transcoding and vector search embedding generation.
I had Immich (and many other containers) running successfully on AMD Ryzen 2400G for years. And recently I upgraded to 5700G since it was a cheap upgrade.
I think running it on Synology may be a lot of your problem there
I'll throw in another "+1, quite satisfied with immich" comment, because I'm honestly that impressed.
The project as a whole feels competent.
Stuff that should be fast is fast. E.g. upload a few tens of thousands of photos (saturates my wifi just fine), wait for indexing and thumbnailing to finish, and then jump a few years in the scroll bar - odds are very good that it'll have the thumbnails fully rendered in like a quarter of a second, and fuzzy ones practically instantly. It's transparently fast.
And the image folder structure is very nearly your full data, with metadata files alongside the images, so 99% backups and "Immich is gone, now what" failure modes are quite easy. And if you change the organization, it'll restructure the whole folder for you to match the new setup, quietly and correctly.
Image content searching is not perfect (is it ever?), but I can turn it on in a couple clicks, search for the breed of my dog, and get hundreds of correct matches before the first mistake. That's more than good enough to be useful, and dramatically better than anything self-hosted that I've tried before, and didn't take an hour of reading to enable.
It's "this is like actually decent" levels that I haven't seen much in self-hosted stuff. Usually it's kinda janky but still technically functional in some core areas, or abysmally slow and weird like nextcloud, but nope. Just solid all around. Highly recommended.
> the image folder structure is very nearly your full data, with metadata files along side the images
Wait, other comments were saying that one of Immich's weak points is backups. Someone else replied that the postgres structure is sane so you can run sql queries to get your data out if needed. Now you're saying it's plain old files. I'm confused
Some minor data is in postgres, but to test it I just fed it a previous install's library folder (images and metadata files). Worked fine, restored all my albums and tags, though perhaps not "people" iirc. And not e.g. ML image content search, of course, you need to re-generate that. And the metadata files were more than obvious enough to satisfy my "I can do this by hand if I really need to" bar, and recreating the accounts by hand is trivial.
The main "weak point" is probably that it doesn't have S3 integration, which is entirely fair. But for my purposes, rcloning the library folder (or e.g. rsync to a btrfs for free deduplication if you reorganize) is more than good enough, because that folder provides enough data for it to restore everything I care about.
For DB backups for keeping everything, there are configurable auto-backups, but it's only a snapshot to a local filesystem. So you'd need to mirror that out somehow, but syncthing/rclone/etc exist and there are plenty of options.
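So a backup can be as dumb as mirroring two directories off-site, e.g. with rclone (paths, remote name and the exact location of the automatic DB snapshots are placeholders here; check your own UPLOAD_LOCATION):

    rclone sync /srv/immich/library  b2:immich-backup/library
    rclone sync /srv/immich/backups  b2:immich-backup/db-snapshots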
How does the mobile syncing work?
Really looking for a system where I can install the app on my parents' iPhones and it backs up their photos to my server without them having to even know about the app. They won't open it, ever.
Nextcloud fails at that.
It's pretty easy to set it up to upload automatically, if that's the question. No need to launch it. I've only got my server on my home network so I can't sync while away, and I occasionally check to make sure it's working - no problems at all yet, a few months in.
This 100x.
As someone who loves Immich: a VM is overkill, but also, without vGPUs or extra configuration, you lose access to some of the best features - local AI search akin to what Google and Apple Photos offer.
It's not perfect, but it's great to be able to just search for things in a photo and find any matches across dozens of TBs of raws, without having some third-party cloud AI nonsense do all the work.
The only thing I wish they could get integrated is support for JXL-compressed raws, which requires them to compile libraw with Adobe's SDK.
It seems like you are saying the AI features don't work if you don't have a GPU, if I understood correctly, but I have my install on a server with no GPU and the object search and facial recognition features work fine. Probably slower to generate the embeddings, but I don't have any comparison to make.
> As someone who loves immich, a VM is overkill but also without vGPUs or external configuration loses you access to some the best features, local AI searching akin to what google and apple photos offer.
I installed immich in a VM. And the VM is using GPU passthrough. I don't see how it's overkill: immich is a kitchen sink with hundreds if not thousands of dependencies and hardly a month goes by without yet another massive exploit affecting package managers.
I'm not saying VM escape exploits aren't a thing, but this greatly raises the bar.
When one installs a kitchen sink as gigantic as Immich, anything that can help contain exploits is most welcome.
So: immich in a VM and if you want a GPU, just do GPU passthrough.
That said, I agree that the facial recognition search in Immich is nice.
How do people handle backups with Immich? Ideally I’d like all my images to be uploaded to object storage if I’m self-hosting.
Unfortunately Immich doesn't (yet) support object storage natively, which IMHO would make things way easier in a lot of ways.
You can still mount an object storage bucket to the filesystem, but it's not officially supported by Immich, and you also have additional delay caused by the fact that your device reaches out to your server, and your server reaches out to the bucket.
It would be amazing (and I've been working on that) to have an Immich that supports natively S3 and does everything with S3.
This, together with the performance issues of Immich, is what pushed me to create immich-go-backend (https://github.com/denysvitali/immich-go-backend) - a complete rewrite of Immich's backend in Go.
The project is not mature enough yet, but the goal is to reach feature parity + native S3 integration.
I think Ente uses MinIO for storage, I could see them supporting the ability to self-host in S3 at some point
Hey, you can connect any S3 compliant service to Ente.
Our quickstart.sh[1] bundles Minio, but you can configure Ente to use RustFS[2] or Garage[3] instead.
[1]: https://ente.io/help/self-hosting/#quickstart
[2]: https://github.com/rustfs/rustfs
[3]: https://garagehq.deuxfleurs.fr/
Thanks for the links - it's been a while since I looked at the self-hosting documentation.
I have the main volume for images in a zpool with two SSDs in a raid-1 configuration. I also have a daily cronjob that makes an encrypted off-site backup with Borg. I've also got healthchecks.io jobs setup so that if the zpool becomes unhealthy, the backups fail, or anything stops, then both me and my partner get alerted.
My partner isn't very technical, but having an Immich server we are both invested in has gotten her much more interested in self hosting and the skills to do it.
My setup has Immich in a Docker container, which is itself in a Proxmox LXC container.
I then have Proxmox back it up to Proxmox Backup Server running in a VM, and it has a cron job that uploads the whole backup of everything to Backblaze B2.
The backup script to B2 is a bit awful at the moment because it re-uploads the whole thing every night... I plan on switching to something better like Kopia at some point when I get the time.
Something like https://github.com/yandex-cloud/geesefs I'd guess.
I'm using restic to backup the Immich photo directories as well as automatically generated Immich database dumps to an external drive and a Hetzner Storage Box.
I create local and remote restic backups (using Backrest). I just point to the docker mount points and run database export as a pre-hook.
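Without Backrest, the same idea is a short cron script; the container name, DB user, paths and repo below are assumptions, so adapt them to your compose setup:

    #!/bin/sh
    # 1) dump the Immich database, 2) back up the dump + photo library with restic
    # (RESTIC_PASSWORD / repo credential handling omitted)
    docker exec -t immich_postgres pg_dumpall --clean --if-exists -U postgres \
        | gzip > /srv/immich/db-dump.sql.gz
    restic -r sftp:u123456@u123456.your-storagebox.de:immich-repo \
        backup /srv/immich/library /srv/immich/db-dump.sql.gz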
Personally I do a daily sync from the underlying gocryptfs to Backblaze B2. It's also on btrfs so I can do snapshots, etc.
Incremental borg backups uploaded to cloud storage... Have a cron job and get notified about every backup
Shameless plug: https://github.com/Barre/ZeroFS
Immich is great, but I like Ente more because of the E2E encryption. I can't trust that my hardware won't someday get stolen and all my photos end up in someone else's possession.
I'm much more worried about the Ente install getting broken for some reason and my pictures being locked and lost, than a burglar stealing a hard disk in my basement.
That's why I like how Photoprism just uses my files as they are without touching them (I think immich can do that as well now, but it wasn't so in the past). I can manage the filesystem encryption myself if I want to.
I like Ente's E2EE for hosting on a remote server.
In my case I want to host on my personal server at home, so it feels actually nicer to not have E2EE. I basically would like to have the photos of all my family members on a hard disk that they could all access if needed (by plugging it into their computer).
Ente looks interesting and worth looking into, thanks for mentioning it.
In the context of having a phone stolen, it's possible to at least limit the damage and revoke accesses via the Tailscale control server. Then the files on device are still vulnerable, but not everything in Immich (or whatever other service is running).
Why would you need Tailscale for revoking an access token in an unrelated service? Just kick the device out of the sessions list in Immich, or change your password if that's stored directly on the device
Why not encrypt your server? Or store the photos on an encrypted partition?
I have this dilemma.
> Why not encrypt your server?
I’d like to provide the service to my semi-extended family — not just me and my partner, but also my parents and siblings. And I respect their privacy, so I want to eliminate even the possibility of me, system administrator, accessing their photos.
I'm running Immich on NanoPi R6C (arm64, even lower idle power usage, still plenty fast for running Immich).
I use a Cloudflare tunnel to make it available outside the home network. I've set up two DNS names – one for accessing it directly on the local network, and a second one that goes through the tunnel. The Immich mobile app supports internal/external connection settings – it uses the direct connection when connected to home wifi, and the tunnel when out and about.
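The tunnel side is only a few commands (hostnames are placeholders; for more than one service you'd put ingress rules in cloudflared's config file instead of using --url):

    cloudflared tunnel login
    cloudflared tunnel create immich
    cloudflared tunnel route dns immich photos.example.com
    cloudflared tunnel run --url http://localhost:2283 immich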
For uploading photos taken with a camera I either use immich-go (https://github.com/simulot/immich-go) or upload them through the web UI. There's a "publish to Immich" plugin for Adobe Lightroom which was handy, but I've moved away from using Lightroom.
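A camera-card import with immich-go looks roughly like this (server URL and API key are placeholders, and the flags differ a bit between immich-go versions):

    immich-go upload from-folder \
      --server=https://photos.example.com \
      --api-key=XXXX \
      ~/Pictures/2024-camera-dump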
Are you also facing the 100MB upload limit when using a Cloudflare tunnel? Sometimes I want to upload a video from my phone while away from home, but I can't and need to VPN in.
You have to disable Cloudflare proxy which is not an option with tunnels. It's technically against TOS to proxy non-HTML media anyway. I just ended up exposing my public IP.
> I just ended up exposing my public IP.
I considered doing that too. My main problem with it is privacy. Let's say I set up some sort of dynamic DNS to point foo.bar.example.org to my home IP. Then, after some family event, I share an album link (https://foo.bar.example.org/share/long-base64-string) with friends and family. The album link gets shared on, and ends up on the public internet. Once somebody figures out foo.bar.example.org points to my home IP, they can look up my home IP at all times.
Surprised that neither the article nor the comments mention Photoprism, from what I can see. I've been hosting Photoprism and syncing my photos to it with PhotoSync from my iPhone for a while now. I would consider switching to another solution if it had in-browser basic editing (cropping, contrast / white balance adjustment, etc).
https://www.photoprism.app/
So am I seeing this right:
Immich, Ente and Photoprism all compete in a similar space?
Immich seems to have the most polished web page, but which solution will become the next cloud for photos remains to be seen. Surely it's not Nextcloud anymore, considering the comments here.
> Surely it's not Nextcloud anymore, considering the comments here.
I have been testing Nextcloud for backing up photos from my family members' phones. Wouldn't recommend.
The sync on iOS works well for a while, then it stops working, then some files are "locked" and error messages appear, or it just stops syncing, and the only way I've found to recover is to essentially restart the sync from scratch. It will then re-upload EVERYTHING for hours, even though 95% of the images are already on the server.
Note that in my use-case, the user never opens the app. It has to work in the background, always, and the user should not have to know about it.
Same here. I also have an ente account, but only to check if they made relevant progress. So far, I don't understand why Ente has so much traction when PhotoPrism has the better feature set, in my opinion.
I've switched from Photoprism to Immich. Immich is a much more active project, bugs are fixed, face recognition is an order of magnitude better; it's just an overall more solid experience. If you are choosing, I wouldn't hesitate for a second to go with Immich.
Immich struggles to act as a true unifying solution for users with large, existing archival collections (DSLRs, scanned film, etc.), since those "archival assets" are often decades old, already organized into complex, user-defined file structures (e.g., 1998/DATE_PLACE_PROJECT/PLACE_PROJECT_DATE.jpg), and frequently contain incomplete or inconsistent metadata (missing dates, no GPS, different file formats).
Immich's current integration solutions (like "External Libraries") treat the archive as a read-only view, which leads to a fragmented user experience:
- Changes, facial recognition, or tagging remain only within Immich's database, failing to write metadata back to the archival files in their original directory structure (last time I checked; it might be better now).
- My established, meaningful directory structure is ignored or flattened in the Immich view, forcing the user to rely entirely on Immich’s internal date/AI-based organization.
My goal (am I the only one?) of having one app view all photos while maintaining the integrity and organizational schema of the archival files on disk is not yet fully met.
Immich needs a robust, bi-directional import/sync layer that respects and enhances existing directory structures, rather than just importing files into its own schema.
This is where I'm at, really. I have my own filing hierarchy and storage templates can't really deal with it (and I don't get why they would be needed when all I want is for it to handle an "uploads" directory and re-scan the file tree after I file things)
I also run NixOS (btw) but opted for the container. My Docker compose setup has moved from Arch to Ubuntu to NixOS now, so I like the flexibility of that setup.
I also use Tailscale, with Cloudflare as the nameserver and Caddy in front of Immich to get a nice URL and HTTPS. For DNS redirects I use AdGuard on the tailnet, but (mostly for family) I also set some redirects in my Mikrotik hEX (E50UG). This way Immich is reachable from anywhere without being exposed to the internet. Unfortunately it looks like the Immich app caches the IP address somewhere, because it always reports as disconnected whenever Tailscale turns off when I get home (or the other way around), and it takes some time/attempts/restarts to get going again. It's been pretty flaky that way...
Other than that: Best selfhosted app ever. It has reminded me that video > photos, for family moments. Regularly I go back through the years for that day, love that feature.
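For anyone curious about the Caddy part of a setup like this, it's only a couple of lines; the domain is a placeholder and DNS/certificate details depend on your setup:

    # write a minimal Caddyfile reverse-proxying to Immich's default port
    printf '%s\n' 'photos.example.com {' '    reverse_proxy localhost:2283' '}' \
        | sudo tee /etc/caddy/Caddyfile
    sudo systemctl reload caddy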
Immich is neat, but I tire of fiddling around with computers more than necessary, so I pay for iCloud for the family because I don't want to be on-call 24/7/365. I do self-host Home Assistant, sadly, just because certain things I want to do are simply not possible with SmartThings. I'm planning on moving to their hosted solution for that eventually too, though.
I actually did the math earlier, and the iCloud 12TB plan for a family is way cheaper than the equivalent S3 storage assuming frequent access, even assuming a 50% discount. So that's nice.
> because I don't want to be Oncall 24/7/365
Yes, I don't recommend doing that. My experience is that people understand you are human because they know you. They don't expect nine nines of availability, and if they somehow do, that can be clarified from the start: "I'm hosting this free of charge for family members because (insert your reasons here; it's important to clarify WHY it's different, because Apple and Big Tech in general somehow still have a ton of goodwill), but as you know I also have a job and our family life to attend to. Consequently there will sometimes be downtime, e.g. an electricity outage or me having to update the server. Do not panic when this happens, as the files are always safe (backup details if you want), but please do be patient. Typically it might take (insert your realistic expectation, do NOT be too optimistic) a day per month for updates. If you do have better solutions, please do contribute."
... or something of the kind. I've been doing that for years and people are surprisingly understanding. IMHO it stems from the why.
The "way cheaper than the equivalent" argument reminds me of, and apologies I know it's a bit rough, Russian foreign minister days ago who criticize the EU for its plan to decouple with their oil & gas saying something like "Well if they do that they will pay a lot more elsewhere" and he's right. The point isn't the money though, the point is agency and sovereignty.
One option is to use Immich just to browse photos. I back my photos up to various places, one of which is my NAS. You can set up Immich to browse but not modify photos, so you can still use it as a "front end".
Anyone used https://lycheeorg.dev for a comparison?
I'm curious to know which one would suit me best.
> Albums within albums
I didn't know about Lychee prior to your comment, but given that it supports what should be a basic feature of photo management software (unlike Immich), I'll give it a try.
Thanks for the suggestion!
I use lychee. It’s been great. Uploads could be a bit rough a few versions back but they’ve been seamless for a while
This is great timing, I'm just setting up a homelab and planning to run Immich on a mini PC server connected to a NAS. I did find icloudpd, which seems like a pretty reliable syncing tool for people in Apple ecosystem. https://github.com/icloud-photos-downloader/icloud_photos_do...
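A sketch of what an icloudpd sync looks like (the username and target directory are placeholders; the first run prompts for Apple 2FA interactively):

    icloudpd --directory /mnt/nas/icloud-mirror \
             --username someone@example.com \
             --watch-with-interval 3600   # re-check roughly hourly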
I just sync from my mac and iphone to immich. Works well
On iOS, does it keep syncing if you "swipe away" the app and never open it for e.g. a couple months?
I really would like something like this.
As far as I understand iOS’s behaviour, there’s no way to do what you’re asking unless you’re Apple Inc.
The Nextcloud app kind of does it, it seems. The fact that it stops working seems unrelated: starting the app doesn't make it recover, so it just seems buggy.
Nextcloud uses the location permission for some reason, presumably to wake up the app in the background once in a while? At least it can be closed (and "swiped away") for 2 months and keep syncing. Until it breaks and stops working entirely.
That’s surprising to hear! I’d love to know how they managed it. (I’m not willing to familiarise myself with the codebase, of course.)
I had Immich running great for a while, maybe for months. It would seamlessly sync photos from my phone to my local home server. I was going to set up a nightly outbound sync too (1 is none, 2 is some).
I updated the container for usual appliance maintenance. The entire thing is toast: metadata files can't be read or mounted, permission issues, and more. It's been four months since.
> permission issues
as an un-solicited drive-by suggestion: see if they're owned by root? you may have sudo'd the original run.
since you're at least a few months behind though, do check for breaking changes: https://github.com/immich-app/immich/discussions?discussions... they've pretty consistently had instructions, but you unfortunately mostly have to know to look for it. not sure why the upgrade notification doesn't make it super incredibly painfully obvious.
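To check the root-ownership theory quickly (paths and uid below are placeholders for whatever your compose setup actually uses):

    ls -ld /srv/immich/library /srv/immich/postgres
    # if a sudo'd first run left them owned by root, hand them back to the
    # user/uid that runs the stack now:
    sudo chown -R 1000:1000 /srv/immich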
This is why I just can't deal with self hosting... I'm already burnt out on this kind of stuff in my day job. And something like this will ALWAYS happen eventually.
That is why you should never update stuff that works :)
I personally prefer Photoprism.
A much more responsive and clearer UI and the Go backend are its two main (subjective) advantages.
I've been using my Immich instance locally as photo and video storage for 7-8 months now. It's really amazing. Reading the comments here, I'd say it's medium-difficulty tier to set up. I have it on my EliteDesk 800 G3 mini, in Docker Compose, with hardware encoding, and it's more than sufficient to handle this work. Never had a bug.
As a counter-anecdote: literally just `docker compose up` on a spare laptop for me, and it's working great (though it's only available on the local network). There might be stuff to tune (e.g. I'm pretty sure it's not using my GPU), but it's almost totally unnecessary for just one household of people's use - the initial huge google-photos-takeout took an hour or three to finish indexing with all the features enabled, but all new stuff is done within seconds. The most I've done is to swap the actual photo storage to an external drive, which is just a "move that folder, custom mapping in the docker command" change.
On hardware that doesn't have docker, or is significantly more resource constrained somehow: yea, I completely believe it. I haven't tried that, but given the features it makes total sense that it'd be harder.
I fail to get the problems with self-hosting Immich. I mean, obviously you've got to have a server ready, and at least some knowledge about self-hosting and Docker. But apart from that, installing Immich was a breeze. It was actually not more than pasting some lines into a docker-compose file and running it.
I have been using it for about 1.5 years, and I have not had a single problem, which is quite incredible for software that basically has all the features that Google Photos has.
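"Some lines into a docker-compose file" is roughly the documented quick start; this is a sketch from memory, so check the current install docs before copying:

    mkdir immich-app && cd immich-app
    wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
    wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
    # edit .env (UPLOAD_LOCATION, DB_PASSWORD), then:
    docker compose up -d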
I've had nothing but trouble with Immich. It's a CPU hog if you enable any kind of AI/ML (face detection is a notable culprit) or when preprocessing even small phone videos, I can't get it to import an existing photo tree from a filesystem, and the iOS app can't seem to sync reliably...
Very nice the author uses tailscale serve! It's an underrated, and unfortunately under documented, way to host a web service directly to Tailscale. With that you can run a docker compose stack with one extra tailscale container, and then it's immediately a self contained and reasonably portable web server in your tailnet.
Immich really is fantastic software, and their roadmap is promising. I hope they have enough funding to keep going.
I use Ente, which is much the same: a bit tricky to set up, but the app looks great.
Not self-hosted, but I pay for Ente for me and my family. Covers 5 of us for £10 a month. Not the cheapest, but Ente works amazingly well.
You can self host it. It used to be a fairly complicated procedure but I think they simplified it recently, but I haven’t tried it again since then
I prefer Photoview’s simplicity over Immich. Immich leans too much toward mimicking Google Photos for my taste.
I would disagree there. I've tried lots of photo managers, and for organizing thousands of photos, I think Google Photos has it pretty much nailed. When choosing a photo/video manager, "works pretty much exactly like Google Photos but without all the AI bullshit and privacy issues" is a major selling point for me. Ideally it would even have the same shortcuts so that my muscle memory still works.
The only thing that's really missing is a feature on the mobile app to delete local copies of uploaded assets ... Something like Google Photos "Free up space" feature.
It has that. Select the media you want to delete, tap & hold, then scroll to the right in the menu and select Delete from device. At least on Android this is the way.
Love Immich. Runs smoothly on an AMD 4700U ($200) with minimal CPU/RAM usage.
I agree; a simple $200 new PC does this task just fine for me.
I found the iOS sync client gets stuck, unfortunately, meaning I cannot use this.
One thing I really like is the performance... it's smooth and fluid. The API is really useful as well: I wrote a small job to auto-add descriptions and tags to the images.
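That kind of job is just a few HTTP calls. As a hedged sketch (the asset id, server URL and key are placeholders, and endpoint paths have shifted between Immich versions, so check the API docs for yours):

    curl -X PUT "https://photos.example.com/api/assets/<asset-id>" \
      -H "x-api-key: XXXX" \
      -H "Content-Type: application/json" \
      -d '{"description": "Receipt - washing machine warranty"}'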
I honestly thought there was some app - even one from Google - that let you just run on your Windows or Mac and downloaded all of your Google Photos to your computer.
That’s all the author is trying to do. He isn’t trying to avoid or replace Google Photos - just have a local backup.
Even Apple has a Windows app that does that for iCloud Photos
Immich started at the same time, and with the same backstory/reasoning, as my (failed) project.
I love the immich success story but it seems like it's missing a crucial use case in my view: I don't actually want a majority of the photos on my phone. I want something like a shared album that me and my wife both have access to, and so we can share photos specifically to that album (quickly and without hassle), so we can do it in the moment and both have access.
I would estimate that probably 90% of my photos are junk, but I want to isolate and share the 10% that are really special.
My app failed, but I'm thinking about reviving it as an alternative front-end to immich, to build upon that.. But I feel like I'm the only one who wants this. Everyone else seems fine with bulk photo backup for everything.
I want something with a simpler backend than Immich. I don't really want to host it because it needs lots of stuff to run. I would love one that uses SQLite and is a single-binary Go (or Rust) program.
Mine is that: https://photofield.dev/ (but has fewer features)
Syncthing?
I have a homegrown app too. It's too tinkery for anyone else. I throw whole iOS device backups at it so it can pluck out media from texts. Then the frontend has an efficient bulk sorting workflow with vi keys to navigate a grid of photos and tag with a few different tags or delete. I feel like this is not the same use case as immich, it's maybe a curation step before exporting a refined set of media.
just disable auto-upload and then manually upload the ones you want to. There is a setting to share your immich library with someone else. Between those two features, you should get something close to what you want.
For me one of the killer things would be to click "share" on a photo I took, and then have the immich albums show up so I can put them in that specific place as like a 3 click process. That's basically what I was building my whole app around
Just have the Immich app sync only a certain album, and add photos to that album? Seems like a solved feature.
I don't think he's here looking for a solution.
You can pick which albums on your phone to upload to Immich. You and your wife could have separate users on the server too if you want that. I think you can probably share a user account, or share albums between users, but the syncing might get confusing if you both have an album with the same name. The only reason I can think of to not upload everything on your phone and try to share one or two albums is that it might get hard to search through many pictures, even with the AI.
As for not wanting most of your photos, Immich also includes AI search and facial recognition which both work really well. I can't remember if it detects near-duplicates, but I thought it did. I think you should play around with it before you leap into the giant project of making your own app.
Given how many times I've read praises about Immich here, I tried it a few weeks ago and was quite disappointed.
The fact that they don't support sub-albums makes it an absolute no-go for me.
This will be my Christmas project. Thanks, author.
that PC is overkill for that
Learnt a lot from this! Thanks.
> For every cloud service I use, I want to have a local copy of my data for backup purposes and independence.
I know how much Adobe is hated in any creative circle, but to be fair, I find that Lightroom CC does this pretty well. Adobe has a well-done, simple helper app that does just that: it downloads your entire library locally, with all pictures, all edits, everything. For backup purposes it's perfect. Lightroom might be expensive for amateurs, but if you do even just a couple of photo jobs per year, it's worth every cent.
I've been running Immich on my Kubernetes cluster for a few months now. It was one of the harder things to install. I didn't use the "official" Helm chart because I didn't like it, instead just set it up myself. I use Cloud Native Postgres for DBs so I have backups already configured. I had to use a special image with vectorchord in it. It auto updates with flux and has been fine. The only time it wasn't fine was when I needed to manually upgrade vectorchord in the db.
The Android app is good but does quite often fail to open, just getting stuck on the splash screen indefinitely. Means I have to have another app for viewing photos on my phone.
One of the main reasons I wanted to install it is because my partner runs out of space on her iPhone and I don't want to pay Apple exorbitant amounts for piffling storage. Unfortunately it doesn't quite work for that; I can't find an option to delete local copies after upload.
this is super cool.