I think that proposing Immich for every use case out there is not the correct answer.
As much as I like Immich, this is not a good use case… IMHO.
All that? Well, I understand your point, but honestly I have more fun learning something new, and it was really little work.
Anyway… It's an option too.
No, you don't need two: in fact I have only Unbound set up, doing everything with one piece of software.
Better or worse? No idea, but it works and it's one less piece that might fail.
I have quite a rich self-hosted stack, and DNS is indeed part of it.
For such a critical piece of infrastructure I didn't need a container; I just installed Unbound and did some setup for ad blocking and internal DNS rules.
Here's my setup: https://wiki.gardiol.org/doku.php?id=router:dhcp-dns
You could go with an independent Pi-hole maybe, but that would double the chances of a hardware failure…
Using one device for everything might seem risky, but it actually has a lower chance of failure ;)
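For reference, a minimal sketch of the kind of Unbound config I mean; the interface, zone names and addresses below are made-up examples, the real thing is in the wiki link:

```
# Hypothetical snippet for unbound.conf: serve the LAN, block ads, resolve internal names
server:
    interface: 192.168.1.1
    access-control: 192.168.1.0/24 allow
    # Ad blocking: answer NXDOMAIN for unwanted domains
    local-zone: "ads.example.com." always_nxdomain
    # Internal DNS rules for LAN hosts
    local-zone: "lan." static
    local-data: "nas.lan. IN A 192.168.1.10"
```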
That's not the point. Maybe you can, but for how long? You will never stop asking that question with Docker…
I think you wrote it backwards: transitioned from Docker to Podman?
Yeah, Podman should use Quadlets, not Compose, but it still works just fine with docker compose and the Podman socket!
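If anyone wants to try it, this is roughly the trick (rootless setup with a systemd user session assumed):

```shell
# Enable the user-level Podman API socket (Docker-compatible)
systemctl --user enable --now podman.socket

# Point the docker CLI / docker compose at the Podman socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Plain docker compose now talks to Podman
docker compose up -d
```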
Yes, you need both 80 and 443 for Certbot to work. Anyway, having 80 redirect to 443 is common and not a security risk.
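The usual pattern looks roughly like this in nginx (hypothetical domain and webroot path, assuming Certbot's webroot mode):

```nginx
# Port 80: answer ACME HTTP-01 challenges, redirect everything else to HTTPS
server {
    listen 80;
    server_name example.com;  # hypothetical domain

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;  # hypothetical webroot for certbot --webroot
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```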
Podman, guys… Podman all the way…
There is no “write and forget” solution. There never has been.
Do you think we have originals of Greek or Roman written texts? No, we only have those that have been copied over and over through the centuries. Historians know this all too well. And 90% of anything ever written by humans in all of history has been lost, all of it written on more durable media than ours.
The future will hold only those memories of us that our descendants will take the time to copy over and over. Nothing that we will do today to preserve our media will last 1000 years in any case.
(Will we as a species survive 1000 more years?)
Still, it's our duty to preserve as much as we can for the future. If today's historians are any guide, the most important bits will be those least valued today: the ones nobody will care to actually preserve.
Citing Alessandro Barbero, a top-notch contemporary Italian historian: he would kill to know what a common peasant had for breakfast in the tenth century. We know nothing about that, while we know a tiny bit more about kings.
Fellow Gentoo user! Kudos.
Well, here is the relevant part then, sorry if it was not clear:
TLDR: proxy auth doesn't work with Jellyfin, OIDC does, and it bypasses the proxy, so in both cases the proxy will not be involved.
This is my jellyfin nginx setup: https://wiki.gardiol.org/doku.php?id=services:jellyfin#reverse-proxy_configuration
Currently I don't use any proxy-related authentication because I need to find the time to work with the plugins in Jellyfin. I don't have any Chromecast, but I do regularly use the Android Jellyfin app just fine.
I expect that, using the OIDC plugin in Jellyfin, Jellyfin will still manage the login via Authelia itself, so I do not expect many changes in the NGINX config (except, maybe, adding the endpoints).
Never found a service that doesn't work with an nginx reverse proxy.
My Jellyfin does.
Don't run PhotoPrism though…
You might use LDAP, but it's total overkill.
I have not yet wired Jellyfin up with Authelia, but it's more or less the last piece and I don't really care so far if it's left out.
A good reverse proxy with HTTPS is mandatory, so start with that one. I mean from all points of view, not just login.
I have all my services behind nginx, with Authelia linked to nginx. Some stuff works only with basic auth. Most works with headers anyway, so natively with Authelia. Some bitches don't, so I disable Authelia for them. Annoying, but I have only four users so there is not much to keep in sync.
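For the header-auth services, the nginx wiring is roughly this (a sketch with made-up upstream names, assuming the auth_request module and an Authelia instance reachable at authelia:9091):

```nginx
# Protect an upstream service with Authelia via the auth_request module
location / {
    auth_request /authelia;                        # subrequest to Authelia's verify endpoint
    auth_request_set $user $upstream_http_remote_user;
    proxy_set_header Remote-User $user;            # forward the authenticated user as a header
    proxy_pass http://myservice:8080;              # hypothetical upstream
}

location = /authelia {
    internal;
    proxy_pass http://authelia:9091/api/verify;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
}
```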
They actually do; I went down the same path recently and installing Authelia was the best choice I made. Still working on it.
But most services support either basic auth, header auth, OIDC or similar approaches. Very few don't.
Ok, I have a web browser on a locked-down device and nothing else: how do I print a PDF or a photo using IPP?
I have: a camera, a browser, and a file manager (kind of; think of an iPhone or some stock Android business device), and I need to print a photo taken with the camera or a PDF file sent to me via email or WhatsApp.
The device is connected to the guest WiFi network with limited internet access (if any), and the only available service is a server with port 443 open (a reverse proxy on that, captive portal and such).
In my experience, there is no way to print via CUPS in this configuration. Maybe I am wrong?
It still requires the device to be capable of printing…
And the user has to find the printer, select it and so on. And it means exposing more ports on the network besides 443…
So, indeed CUPS is a great solution, but not to the problem I want to solve.
I do in fact use CUPS for the trusted part of the network: driverless printing for Windows and Linux. Android doesn't even need CUPS since it picks up the printer directly from the printer itself (AirPrint or whatever that's called).
I know CUPS can share printers and queues.
What is unclear?
I don't want to pull drivers or install CUPS on devices. I want to print from anywhere just by uploading a file to a web page.
If I have lots of devices or just want to let somebody print from his phone/tablet without installing or configuring anything…
With CUPS I still need to touch the system or the device somehow to let it print.
Yes, this is what I am afraid of… There is nothing out there for this task.
Hope to find something, or maybe try to create something using lpr in the background… But this is the last hope, as I have little time.
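On the lpr idea: a rough stdlib-only sketch of the kind of tool I mean, a tiny web page that accepts an upload and spools it to a printer via lpr. The printer name, port and extension whitelist are made-up examples, and there is no auth or error handling, so treat it as a starting point, not a finished service:

```python
import subprocess
import tempfile
from email.parser import BytesParser
from email.policy import default
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED = {".pdf", ".jpg", ".jpeg", ".png"}  # hypothetical whitelist

def allowed_file(name: str) -> bool:
    """Accept only file types the print queue can handle."""
    return any(name.lower().endswith(ext) for ext in ALLOWED)

def send_to_printer(data: bytes, printer: str = "office") -> None:
    """Spool raw file data to CUPS via lpr (printer name is an example)."""
    with tempfile.NamedTemporaryFile(suffix=".upload") as tmp:
        tmp.write(data)
        tmp.flush()
        subprocess.run(["lpr", "-P", printer, tmp.name], check=True)

class UploadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Bare-bones upload form
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(
            b'<form method="post" enctype="multipart/form-data">'
            b'<input type="file" name="f"><input type="submit" value="Print">'
            b"</form>"
        )

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        # Reuse the stdlib email parser for the multipart/form-data body
        msg = BytesParser(policy=default).parsebytes(
            b"Content-Type: " + self.headers["Content-Type"].encode() + b"\r\n\r\n" + body
        )
        for part in msg.iter_parts():
            name = part.get_filename() or ""
            if allowed_file(name):
                send_to_printer(part.get_payload(decode=True))
        self.send_response(303)
        self.send_header("Location", "/")
        self.end_headers()

def main():
    # Arbitrary high port, hypothetical choice; call main() to actually serve
    HTTPServer(("", 8631), UploadHandler).serve_forever()
```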
Nope!
Just wasted 3 days debugging an IP assigned to two devices… Not fun, don’t do it…
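For the record, duplicate IPs can be caught quickly with duplicate address detection, for example with iputils' arping (interface and address below are just examples):

```shell
# -D: duplicate address detection mode; exits non-zero if another
# host on the segment already answers for this IP
arping -D -I eth0 -c 2 192.168.1.10
```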