Every community I care about is dead

  • 4 Posts
  • 40 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • Everyone is fully missing the point here. This is the banner image for [email protected] (that’s not where we are right now, for the record), and it has a normal JPEG size of 7.7MB. When it’s served as WebP it’s 3.8MB. OP is correct that this is very stupid and wasteful for a web content image. It’s a triple-monitor 1440p wallpaper being used verbatim, and it should instead be compressed down to something bandwidth-friendly. I was able to get it to 1.4MB at JPEG quality 80, and when swapping it out in dev tools and doing A/B testing I can’t tell the difference. This should be brought to the attention of a mod on that community so it can stop sucking up people’s data for no reason.
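
    If it helps, here’s a minimal sketch of the kind of recompression I mean, using Python with Pillow; the filenames and the quality value are placeholders, not the actual banner.

```python
# Re-encode an oversized web image as JPEG at quality 80 (hypothetical filenames).
import os
from PIL import Image

img = Image.open("banner_original.jpg").convert("RGB")
img.save("banner_q80.jpg", format="JPEG", quality=80, optimize=True, progressive=True)

# Compare before/after file sizes in MB.
for path in ("banner_original.jpg", "banner_q80.jpg"):
    print(path, round(os.path.getsize(path) / 1_000_000, 1), "MB")
```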


  • You’re right, and I suppose I was half-thinking along the lines of “we have all the pieces to solve this, but we don’t because we’re frozen in place by greed” instead of “this is something we could do with infrastructure today”. If everyone could collectively let go and re-distribute wealth and materials efficiently everyone would be much better off for it, but instead we’re stuck in some game theory hell where the optimal personal choice results in one of the worst outcomes.



  • JXL is the best image codec we have so far and it’s not even close. I did a breakdown on some of its benefits here. JXL can losslessly recompress existing PNG, JPG, and GIF files into itself, and can losslessly convert them back the other way too (there’s a small sketch of the JPEG round-trip below). The main downside is that Google has been blocking its adoption by keeping support out of Chromium in favor of pushing AVIF, which created a chicken-and-egg problem where no one wants to use it until everyone else does. If you want to be an early adopter, feel free to use JXL; just know that third-party software support is still maturing.

    Something you might find interesting is that the original JPEG is such a badass format that the JXL team have taken a lot of their findings and used them to build a new JPEG encoder named jpegli. Oddly, jpegli-produced JPEGs can’t yet be losslessly recompressed into JXL files, per this issue; hopefully that will be fixed at some point.
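
    For the curious, here’s a rough sketch of what the lossless JPEG round-trip mentioned above looks like. It shells out to the cjxl/djxl tools from libjxl and assumes they’re on your PATH; the filenames are made up.

```python
# Round-trip a JPEG through JXL losslessly and check that the original
# bytes come back. Assumes the cjxl/djxl CLI tools from libjxl are
# installed; "photo.jpg" is a placeholder filename.
import hashlib
import subprocess

subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)     # JPEG -> JXL (lossless recompression)
subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)  # JXL -> reconstructed JPEG

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Should print True: the reconstructed JPEG is bit-identical to the input.
print(sha256("photo.jpg") == sha256("restored.jpg"))
```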


  • Arch should be fine for university stuff. The main problem with Arch is not Arch itself, but all the software it tracks being very fresh. You’ll be pulling updates as they come down the line, and that may result in temporary bugs or day-to-day workflow changes - caused by the software developers themselves. I don’t think an Arch system is unusually unstable or prone to breaking, but last year they did brick everyone’s GRUB loaders by pushing an update too early (post-mortem here). It’s up to you, but if you want to err on the side of system/software stability I would go for Mint/OpenSUSE Tumbleweed/Debian.

    I don’t have any practical experience with EndeavourOS but TMK it’s just preconfigured Arch and it uses the default repos, so that sounds good to me. Vanilla Arch is not inherently better or worse, it’s just a more minimal starting point.


  • Conduit is also licensed under Apache 2.0, so it could also be taken closed source at any point in time. The reason this wouldn’t impact Conduit as much is that there’re other contributors, whilst Synapse and Dendrite are almost exclusively developed by Element.

    Right. The current perspective is based on the idea that if Synapse/Dendrite went closed-source right now, an open source version would be as good as dead. Element is responsible for 95% of Synapse/Dendrite, and I’m sure a community fork would have to play a lot of catch-up to figure out how to keep it going. If the community were more involved in Synapse/Dendrite development (and if Element let them be), there would be less cause for alarm, as closing the source would just mean an immediate community fork and putting Element on ignore. Also, to reiterate: the Matrix Foundation is not going along with Element on this move, and even if Element pulled something shady, the Matrix Core Spec etc. would still remain open and under the Foundation’s control, so the most we have to lose is Synapse/Dendrite and all of Element’s developers.

    As for the rest, I agree, and I do actually trust that Element is simply playing their only card here. These maneuvers are all required for Element to survive as a company at all, but they also unfortunately leave this backdoor open as a consequence. Matthew has pinky-promised over and over that they are only acting in good faith and would never use the backdoor, but it’s understandable that its mere presence is making everyone uneasy. Best case, we take this as a warning sign that if Element drops dead tomorrow, Matrix is dead too. If people don’t want Matrix to be effectively owned by Element, then we should diversify and prepare escape plans.


  • This is actually quite a controversial change, mainly because of their switch to a CLA. A CLA indirectly gives them the ability to relicense the project as closed source whenever they feel like it in the future. Semi-controversially, they are also primarily making this AGPL change in order to start selling dual licenses to companies. The Matrix Foundation itself does not support this change from Element, though Element is within its rights to make it.

    You can read some more thoughts on this from the pessimistic folks at Hacker News. My main takeaway is that I don’t trust Element, because I don’t trust anyone. I’m sure they’re doing this in good faith, but I don’t like the power they hold at the moment. I hope this is the push that’s needed to start focusing efforts on alternative homeserver implementations like Conduit.


  • Mirrored vdevs allow growth by adding a pair at a time, yes. Healing works with mirrors because both disks in a mirror are supposed to hold identical data. When a read or scrub happens and there’s a checksum failure, ZFS will replace the bad block on Disk1 with Disk2’s copy of that block.

    Many ZFS’ers swear by mirrored vdevs because they give you the best performance, they’re more flexible, and resilvering a mirror after a disk failure is an order of magnitude faster than resilvering a failed RAIDZ, which leaves a smaller window for a second disk to fail in. The big downside is that they eat 50% of your disk capacity. I personally run mirrored vdevs because they’re more flexible for a small home NAS, and I make up for some of the disk inefficiency by buying any-size disks on sale and throwing them in whenever I see a good price.
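
    To make the “grow a pair at a time” and self-healing parts concrete, here’s a rough sketch of the zpool commands involved, wrapped in Python only so there’s something runnable; the pool name and device paths are made up, and you’d run this as root on a real system.

```python
# Sketch: grow a mirrored pool by one vdev (a pair of disks), then scrub it
# so ZFS re-reads everything, verifies checksums, and repairs bad blocks
# from the healthy mirror copy. "tank" and the /dev/sdX names are made up.
import subprocess

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

zpool("add", "tank", "mirror", "/dev/sdc", "/dev/sdd")  # add a new mirror pair to grow capacity
zpool("scrub", "tank")                                  # kick off a scrub (self-heal pass)
zpool("status", "-v", "tank")                           # check scrub/resilver progress and errors
```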