Context

Having started out in the world of Napster & LimeWire, I’ve always relied on public sources. It wasn’t until the early '10s that I lucked into a Gazelle-based tracker started by some fellow community members. Unfortunately, I wasn’t paying enough attention when they closed shop and didn’t know where to move next. Combined with some life circumstances, I gave up the pursuit for a while.

It wasn’t until recently that a friend was kind enough to help me get back in and introduce me to the current state of automation. Over the course of a few months, I’ve built up the attached system. I’ve been having an absolute blast learning and am very impressed with all of the contributions!

After all of the updates from Black Friday deals, I put together the attached diagram, as it was getting too complex to keep all of the interactions in my head. 😅

Setup

  • All of the services run in Docker containers.
  • Each service has its own Compose file, managed by a systemd unit.
  • The system itself is in a VM running on my home server (both Arch, btw).
  • Tailscale is used for remote access to the local network.
  • ProtonVPN is managed by Gluetun, which uses a separate network for isolating services (rough sketch below).
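
As a rough sketch of how the Gluetun piece fits together (assuming the qmcgaw/gluetun image with ProtonVPN over WireGuard; the network name and the key are placeholders rather than my exact config):

```yaml
# gluetun/docker-compose.yml (illustrative sketch)
services:
  gluetun:
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<redacted>   # placeholder
    networks:
      - vpn          # dedicated Docker network; only services that need the tunnel join it
    restart: unless-stopped

networks:
  vpn:
    external: true   # created once and shared across the per-service Compose files
```

Each of these Compose files is wrapped in a small systemd unit that brings its stack up and down, so everything starts on boot and can be managed with systemctl.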

Questions

  • What am I missing or can be improved?
  • Is there a better way to document this setup?
  • What do you do differently that might be beneficial?

Thoughts

  • I had Calibre set up at one point, but I really don’t like how it tracks files by renaming them. I’ve been considering automating it with the CLI instead, but haven’t gotten around to it yet.
  • I’ve been toying with the idea of creating a file-arr for analyzing disk usage, performing common operations, and exposing a web-based upload/download client so I don’t have to mount the volume everywhere.
  • Similarly, I’m interested in a way to aggregate logs/notifications/metrics. I’m aware of Notifiarr, but would prefer a self-hosted version.
  • I just set up Last.fm scrobbling, so I don’t have any data yet. I’m hoping to use that for discovery and, if possible, playlist syncing or auto-generation.

Notes

  • Diagram was made using D2lang.
  • Some of the connections have been simplified to improve readability / routing.
  • Some services have been redacted out of an abundance of caution.
  • I know a VPN isn’t necessary for Usenet, but it’s easier to keep everything consistent.

Also, thanks for the recommendations to check out deemix/Deezer. That worked really well! 😀

Edit: HQ version of diagram

  • alin742@lemmus.org

    Is Tailscale private and safe? I would also like to use it for my home server.

    • Xyre@lemmus.org (OP)

      It’s based on WireGuard with some added benefits, and it’s free for up to 3 users. I’ve had no issues with it and even use it for corporate networks. An alternative is ZeroTier; I haven’t used it, but I hear a lot of people recommend it too.

  • db0@lemmy.dbzer0.com (mod)

    Very nice. Can you share a docker-compose.yml for others to replicate this? Also your diagram could be a bit higher quality.

    • Xyre@lemmus.org (OP)

      Each service is a separate docker-compose.yml, but they’re more or less the same as the example configs provided by each service. I did it this way, as opposed to a single file, to make it easier to add and remove services that follow this pattern.
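
      To give an idea of the pattern, one of the per-service files looks roughly like this (a sketch based on the linuxserver example configs; the paths and IDs here are illustrative, not my exact values):

      ```yaml
      # sonarr/docker-compose.yml (illustrative sketch)
      services:
        sonarr:
          image: lscr.io/linuxserver/sonarr:latest
          environment:
            - PUID=1000              # placeholder user/group IDs
            - PGID=1000
            - TZ=Etc/UTC
          volumes:
            - ./config:/config       # service state lives next to the compose file
            - /data/media/tv:/tv     # illustrative media path
            - /data/downloads:/downloads
          ports:
            - "8989:8989"            # web UI on the LAN (reached remotely via Tailscale)
          restart: unless-stopped
      ```

      Adding or removing a service is then just a matter of adding or deleting a directory and its systemd unit.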

      I do have a higher quality version of the diagram, but had to downsize it a lot to get pictrs to accept it…

      • db0@lemmy.dbzer0.com (mod)

        Ah, your instance must be limiting the size. lemmy.dbzer0.com allows you to upload anything and just downscales it to a 1024px max dimension. You can also just host it on Imgur, etc.

    • Xyre@lemmus.org (OP)

      I get what they’re saying, and it may be ‘technically correct’, but the issue is more nuanced than that. In my experience, some trackers have strict requirements or restricted auth tokens (e.g. you can’t browse & download from different IPs). Proxying may be the solution, but I’d have to look at how it decides what traffic gets routed where.
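
      If I go that route, Gluetun has a built-in HTTP proxy that can be switched on, and apps like Prowlarr have a proxy setting they can point at it, so only the indexer traffic would exit via the VPN IP. Roughly (a sketch extending the Gluetun file from my Setup section; HTTPPROXY and the :8888 default are as I understand Gluetun’s docs):

      ```yaml
      # sketch: enable Gluetun's built-in HTTP proxy for per-app opt-in routing
      services:
        gluetun:
          image: qmcgaw/gluetun:latest
          cap_add:
            - NET_ADMIN
          environment:
            - VPN_SERVICE_PROVIDER=protonvpn
            - VPN_TYPE=wireguard
            - WIREGUARD_PRIVATE_KEY=<redacted>  # placeholder
            - HTTPPROXY=on                      # listens on :8888 by default
          networks:
            - vpn   # apps on this network set their proxy to http://gluetun:8888

      networks:
        vpn:
          external: true
      ```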

    • Xyre@lemmus.org (OP)

      The problem I’ve found is that the services query the indexers, and not all of the trackers allow you to use multiple IPs. That’s why I found it easier to route all outbound requests through the VPN so I don’t get in trouble. It’s also why I have the Firefox container set up inside the VPN network and exposed over the local network as a VNC session, so I can browse the sites while maintaining a single IP.

      I do have qBittorrent set up with a kill switch on the VPN interface, managed by Gluetun.
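
      Roughly, that corner of the stack looks like this (a sketch, assuming the linuxserver qBittorrent image and the jlesage/firefox image, which serves the browser over VNC/noVNC; ports, paths, and the key are placeholders). In my setup these live in separate Compose files, but I’ve collapsed them here for readability:

      ```yaml
      # VPN-bound clients (illustrative sketch): both containers share Gluetun's
      # network stack, so all their traffic exits ProtonVPN from a single IP and
      # stops if the tunnel drops (Gluetun's firewall acts as the kill switch).
      services:
        gluetun:
          image: qmcgaw/gluetun:latest
          cap_add:
            - NET_ADMIN
          environment:
            - VPN_SERVICE_PROVIDER=protonvpn
            - VPN_TYPE=wireguard
            - WIREGUARD_PRIVATE_KEY=<redacted>   # placeholder
          ports:
            # published on Gluetun because the attached containers have no network of their own
            - "8080:8080"   # qBittorrent web UI
            - "5800:5800"   # Firefox noVNC session on the LAN
          restart: unless-stopped

        qbittorrent:
          image: lscr.io/linuxserver/qbittorrent:latest
          network_mode: "service:gluetun"   # share Gluetun's network namespace
          environment:
            - PUID=1000
            - PGID=1000
            - TZ=Etc/UTC
          volumes:
            - ./qbittorrent:/config
            - /data/downloads:/downloads    # illustrative path
          depends_on:
            - gluetun
          restart: unless-stopped

        firefox:
          image: jlesage/firefox:latest     # browser exposed over VNC/noVNC
          network_mode: "service:gluetun"
          volumes:
            - ./firefox:/config
          depends_on:
            - gluetun
          restart: unless-stopped
      ```

      When the services are split across files, the same effect comes from attaching with network_mode: "container:gluetun" instead of "service:gluetun".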

  • navigatron@beehaw.org

    You’re running Docker inside a VM? Why?

    The first thing I would do is learn the 5-layer OSI model for networking (the 7-layer version is more common, but wrong). Start thinking of things in terms of services and layers. Make a diagram for each layer, or at least the important ones (layers 3 and up).

    If you can stomach it, learn network namespaces. It lets you partition services between network stacks without container overhead.

    Using a VM or Docker for isolation is perfectly fine, but don’t use both. Either throw Docker on your host or run them all as systemd services in a VM.

    • Xyre@lemmus.org (OP)

      The server itself runs nothing but the hypervisor. I have a few VMs running on it, which makes it easy to provision isolated environments. Additionally, it’s made it easy to snapshot a VM before performing maintenance in case I need to roll back. The containers provide isolation from the environment itself in the event of a service going awry.

      Coming from cloud environments where everything is a VM, I’m not sure what issues you’re referring to. The performance penalty is almost non-existent, while the benefits are plenty.

      • 1337@1337lemmy.com

        I recently rebuilt my home server using containers instead of (QEMU/KVM) VMs, and I notice a performance benefit in some areas. I just use systemd-nspawn containers rather than Docker, though, as I don’t really see the need to install third-party software for a feature already built into my OS.

        I handle snapshots with btrfs. Works great.

    • Xyre@lemmus.org (OP)

      For a long time, that was the case. Then the greed nation attacked. Now they’ve reproduced the cable model on the web, and more than half of the services have terrible clients / infrastructure.

      If I could pay for a single service that operated similarly to this setup:

      • Let me tell it what I’d like to watch while also surfacing similar content for discovery.
      • Track my progress in every show (without forgetting!).
      • Not lose content I’ve been watching because it’s now in ‘another castle’.
      • Give me a single place to view all tracked shows rather than loading each service individually.

      I probably would sign up for it, as that’s what made Netflix so successful until all of the studios thought they could do better. And now the consumer has to suffer the consequences.