Just an explorer in the threadiverse.

  • 0 Posts
  • 2 Comments
Joined 1 year ago
Cake day: June 4th, 2023

  • I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down and am currently using podman, and I don’t think I’d go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners, even though I’ve settled on podman myself).

    1. K8s itself is quite resource-hungry, especially on RAM. My homelab is built on old/junk hardware from retired workstations, and I don’t want the kubelet itself sucking up half my RAM. Things like k3s help with this considerably, but that’s not precisely k8s either. If I’m going to start trimming off the parts of k8s I don’t need, I end up going all the way to single-node podman/docker… not the halfway point that is k3s.
    2. If you don’t use hostNetwork, the k8s networking model, where traffic routes only within the cluster except at explicit egress points, is pure overhead. It’s totally necessary when you have a thousand engineers slinging services around your cluster, but there’s no benefit to this level of rigor in service management in a homelab. Here again, the networking in podman/docker is more straightforward and maps better to the stuff I want to do in my homelab.
    3. Podman accepts a subset of k8s resource YAML as a docker-compose-like config interface, which lets me reuse my familiarity with k8s configs in my podman setup (see the sketch below).
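
    As a concrete illustration, here’s a minimal sketch of that workflow. `podman kube play` and `podman kube down` are real subcommands, but the Pod spec, names, and image below are example values I picked for illustration:

```
# whoami-pod.yaml: a k8s-style Pod spec that podman can run directly
# (names and image are example values)
cat > whoami-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: whoami
spec:
  containers:
    - name: whoami
      image: docker.io/traefik/whoami:latest
      ports:
        - containerPort: 80
          hostPort: 8080   # published straight to the host; no Service/Ingress layer
EOF

# podman understands this Pod subset of k8s YAML:
podman kube play whoami-pod.yaml

# ...and tears it down the same way:
podman kube down whoami-pod.yaml
```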

    Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraint that k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively… but I don’t have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s’ solutions to them is annoying there.


  • What’s the network flow like? I’m posting this to the /asklemmy community on lemmy.ml, but I’m composing it on the sh.itjust.works interface. I’m assuming sh.itjust.works hands this over to lemmy.ml. How does my browsing work? Is all of my traffic routed through sh.itjust.works?

    • You register your account on sh.itjust.works; that’s where all the info you care about resides. Your list of subscribed communities lives there. When you read a post, it gets fetched out of the db on sh.itjust.works, irrespective of where the home instance for that post’s community is, and when you comment on a post, that comment gets written to the db on your home instance. Your home instance is a standalone, fully functioning thing.
    • When you subscribe to a remote community like this one, you tell your home instance “keep up to date with posts and comments for this community and let me know about them.” Your home instance asynchronously pulls all those updates while you’re asleep or whatever, so it can show them to you out of its local database when you come back. If more users on sh.itjust.works subscribe to the same community… there’s no incremental overhead: y’all’s instance is ALREADY subscribed to that community, so other users on your instance can sub to it for free; it’s already in the instance’s database.
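
    To make that flow concrete, here’s a rough sketch using Lemmy’s HTTP API. These endpoints exist in Lemmy’s v3 API, but treat the exact parameter shapes as assumptions on my part:

```
# All of your reads go through YOUR home instance, even for remote communities.
# Ask sh.itjust.works about a community homed on lemmy.ml:
curl -s 'https://sh.itjust.works/api/v3/community?name=asklemmy@lemmy.ml'

# Posts for that community are likewise served out of your instance's local db:
curl -s 'https://sh.itjust.works/api/v3/post/list?community_name=asklemmy@lemmy.ml&sort=New&limit=5'
```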

    Assuming there’s a mass influx of redditors, what does it look like as things fail?

    • If lemmy.ml (where this community is homed) falls over from being overloaded, or is just broken for whatever reason, your instance is unaffected. You can still read posts and make comments. This community, however… is affected. New posts and comments for this community might come through intermittently or not at all for you (and everyone in the lemmyverse), because the community’s home server isn’t working well enough to reliably deliver them over federated replication. You can still read older posts and comments that have already been synced to your home instance, but new ones might not arrive. You might also see weird stuff, like new comments from other sh.itjust.works users on this community, since those get written to your db before getting federated back to the community’s home server. But mostly, updates from other instances stop or get unreliable.
    • If sh.itjust.works falls over for some reason… well… that sucks for you. You can’t log in or browse anything on it. You can still visit this sub at https://lemmy.ml/c/asklemmy/ as long as lemmy.ml is working and you’ll be able to see the posts and comments that other accounts make. But you’ll be an anonymous read-only browser, you won’t be able to post or comment until sh.itjust.works comes back online (or you make a new account elsewhere and lose all your comment history and subscription list).

    Are there easy mechanisms to allow me to grab my post history?

    There’s a github issue for this, but it’s not done yet: https://github.com/LemmyNet/lemmy/issues/506.
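
    Until that lands, one hypothetical stopgap is paging through your own history with the API (/api/v3/user is the endpoint for a person’s details, including their posts and comments; the exact parameters here are assumptions):

```
# Save the first few pages of a user's post/comment history as JSON.
# YOUR_NAME and the page count are placeholders:
for page in 1 2 3; do
  curl -s "https://sh.itjust.works/api/v3/user?username=YOUR_NAME&sort=New&page=${page}&limit=50" \
    > "my-history-page-${page}.json"
done
```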

    I’m assuming most (all?) Lemmy servers are hosted in home labs?

    I don’t think that’s a good assumption. lemmy.ml is hosted on OVH, a cloud provider. My home instance, lemmy.world, is hosted by admins that run something like a 32-CPU Mastodon instance. Most instances with over 100 users are running on some kind of probably modest but “real” cloud instance. The admins are volunteers, but often smart technical folks paying for small but real compute infrastructure.

    The idea of Lemmy excites me, but the growth pain that could be coming scares me. Anybody using a CDN in front of their servers? That could be good, but with unconstrained growth, that could be costly, which is very bad.

    Anticipating growing pains isn’t wrong; they’re probably gonna happen. But the devs are gonna find and work on the biggest performance problems so that people can viably run bigger instances, and instance admins are gonna run bigger hardware and ask for donations or run patreons to cover the cost. In my opinion, the bigger worry is that Lemmy will fizzle… not that it will spectacularly explode. As long as people join and contribute and are interested, we’ll find a way to improve scalability and performance. The death knell would be if people get bored and leave, but compute capacity won’t be the problem in that scenario.