ActivityPub is a W3C standard, which IMO is a big plus over Nostr, which doesn’t have an established independent steward.
Also, isn’t there the thing where users can’t really be banned on Nostr? I’m not sure where I read that, but that’s going to kill any mass adoption if that’s the case.
Sounds like somebody gave you some incorrect information re: banning.
You don’t need a W3C standard to have a protocol that is open source and used globally; that’s just one way to go about it. You can have standards made through some other governance body, or standards that simply evolve from a bunch of different devs trying different versions of things until one main way floats to the top because everybody prefers it. Nostr has the NIP (Nostr Improvement Proposal) process, which has been used to make standards for everything from video streaming to calendar events/invites.
Relays on Nostr, which are the equivalent of instances in ActivityPub/Mastodon/Lemmy, can set their own moderation policies, defederate from other relays, etc., all the same as in ActivityPub. The moderation abilities are the same: relays can choose what content they allow, ban users/topics/content from other relays, and so on. The key difference is that you are by default connected to multiple relays. So if one relay blocks a user you really want to follow, you can keep following that user and see them in your feed; they just don’t show up for other users on that relay. If a relay blocks you, you can’t post content to that relay. So you get the best of both worlds: relays offer curated, moderated public squares with trending hashtags and posts, without reducing your ability to choose who to follow and who can follow you.
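To make that concrete, here’s a minimal sketch in Python (using the third-party websockets package) of a client fanning a single follow out to several relays. The relay URLs and pubkey are placeholders, not real endpoints; the point is just that a ban on one relay only removes that one relay from the fan-out:

```python
import asyncio
import json

import websockets  # pip install websockets

# Placeholder relays and pubkey, for illustration only.
RELAYS = ["wss://relay.example-a.com", "wss://relay.example-b.com"]
FOLLOWED_PUBKEY = "hex-pubkey-of-someone-you-follow"

async def subscribe(relay_url: str, feed: asyncio.Queue) -> None:
    """Ask one relay for a user's notes (NIP-01 REQ, kind 1 = text note)."""
    try:
        async with websockets.connect(relay_url) as ws:
            await ws.send(json.dumps(
                ["REQ", "my-feed", {"authors": [FOLLOWED_PUBKEY], "kinds": [1]}]
            ))
            async for raw in ws:
                msg = json.loads(raw)
                if msg[0] == "EVENT":          # ["EVENT", sub_id, event]
                    await feed.put((relay_url, msg[2]))
    except Exception:
        # A relay that bans us or goes down simply drops out of the
        # fan-out; the same subscription keeps flowing from the rest.
        pass

async def main() -> None:
    feed: asyncio.Queue = asyncio.Queue()
    subs = [asyncio.create_task(subscribe(r, feed)) for r in RELAYS]
    while True:
        relay, event = await feed.get()
        print(f"{relay}: {event['content']}")

asyncio.run(main())
```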
Identity portability is another key feature: if your instance goes down, you don’t lose all your DMs, followers, etc.
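That portability falls out of the data model: your identity is a keypair rather than a row in some server’s database, and every event is self-contained and signed, so any relay can verify and serve it. Here’s a sketch of the NIP-01 event-id computation using only the stdlib (the final BIP-340 Schnorr signature over this id is omitted to avoid a crypto dependency):

```python
import hashlib
import json
import time

def event_id(pubkey: str, created_at: int, kind: int,
             tags: list, content: str) -> str:
    """NIP-01: sha256 over the canonical JSON array serialization."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode()).hexdigest()

# "ab" * 32 is a placeholder pubkey; kind 1 is a plain text note.
note = {
    "pubkey": "ab" * 32,
    "created_at": int(time.time()),
    "kind": 1,
    "tags": [],
    "content": "hello from whichever relay will have me",
}
note["id"] = event_id(**note)
# Signing the id with your private key produces the "sig" field, and the
# event is then valid on any relay that will accept it.
```

Your contact list is itself just another signed event (kind 3, per NIP-02), so if a relay disappears you republish it elsewhere and carry on.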
I see what you’re saying about it not needing a standards body, and of course that can work fine, but for me it’s an advantage that AP is maintained by a body independent of any specific implementation. An equivalent would be if the AP spec were defined by the Mastodon devs and community; not a bad thing, just not as good in my mind.
The relays thing, I think, is where the “can’t really be banned” idea comes from. Are there moderation tools to propagate bans across relays quickly? Does Nostr have the same issues as with Lemmy instances, where an admin abandons the relay and it gets overrun with shit? Some users need to be booted off the network entirely and swiftly sometimes; we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (timezone differences for moderators or whatever innocuous reason) and these users would achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.
The user portability aspect is the main advantage I can see, and it looks like a pretty clever solution to the issue. Though personally speaking, I only really care about my subscription list, which I already sync between two accounts using my Lemmy client. I understand some people might care more about the other stuff, though (particularly on microblog platforms).
Before we get into the weeds here, let’s start with an important basic premise: moderation ability at a protocol level, from an instance/relay admin perspective, is identical in Nostr and AP.
Are there moderation tools to propagate bans across relays quickly?
Relay operators can share ban lists, like they do in AP, but they can only directly control their own relay, not other relays. I don’t know the ins and outs of how the interface looks on the admin side, but at a protocol level, AP and Nostr offer the same abilities.
Some users need to be booted off the network entirely and swiftly sometimes; we’ve seen several cases of this in Lemmy already with users posting horrendous shit. I’d be concerned that one of my relays would lag on banning (timezone differences for moderators or whatever innocuous reason) and these users would achieve their goal of more people seeing the shit they post. For some people this might trigger PTSD, which is why I say it would be a huge barrier to mass adoption until that issue is resolved.
Relays sharing ban lists can help solve this problem. I would argue that we don’t want to give that power (to ban a user from the entire network) to a single relay admin, or even a couple of relay admins (since anybody can be a relay admin), so some form of broad consensus needs to exist, OR sets of relays can form their own little networks of trust where they automatically trust a ban from other admins in that network. A relay admin doesn’t need to be able to ban somebody from the entire network if they simply disagree with that user’s post; they can just ban the user on their own relay. There is value in having public squares with varying degrees of moderation, among other reasons because laws about what kinds of speech are acceptable vary country by country. There is value in having mainstream platforms which refuse to host some kinds of content, and having that be a different moderation policy than the one used by the government, for example. Remember that legality and morality are not the same, and what is legal vs. illegal differs between jurisdictions. We don’t want the legal standards of Russia or China to be the legal standards the entire network has to follow.
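As a rough illustration of that “network of trust” idea (nothing here is mandated by either protocol; fetch_ban_list() and the quorum policy are hypothetical):

```python
from collections import Counter
from typing import Iterable

# Peer relays whose admins this admin has chosen to trust (placeholders).
TRUSTED_PEERS = [
    "wss://relay.friend-a.example",
    "wss://relay.friend-b.example",
    "wss://relay.friend-c.example",
]
QUORUM = 2  # bans are only mirrored once this many peers agree

def fetch_ban_list(peer: str) -> set[str]:
    """Hypothetical: fetch the set of pubkeys a peer relay has banned."""
    raise NotImplementedError

def merged_bans(peers: Iterable[str], quorum: int) -> set[str]:
    """Auto-ban a pubkey only when enough independent admins banned it."""
    votes: Counter = Counter()
    for peer in peers:
        for pubkey in fetch_ban_list(peer):
            votes[pubkey] += 1
    # No single admin gets network-wide power: one rogue or troll admin
    # padding their ban list can never reach the quorum alone.
    return {pk for pk, n in votes.items() if n >= quorum}
```

A quorum like this keeps one abandoned or hostile relay from poisoning everyone else’s lists, while still letting bans propagate in minutes rather than days.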
If the user is doing something which is very illegal, which I believe you are referring to, that is a job for law enforcement. Neutral networks like the internet are traditionally policed “at the edges”. We don’t have Gmail proactively filtering for objectionable or illegal content because of the consequences that come with it: the privacy invasion, false positives, additional computational load, reduced reliability of sending/receiving between email providers, etc. Comcast is not inspecting packets as they fly through its network at the speed of light, delaying them, and determining whether they should be passed or not. It’s the internet; they just pass them through. Instead, we say “this is an open, neutral network, and if you break the law, LEO will deal with it”.
Fair play regarding the tooling being there, then; I had the impression it wasn’t even possible currently. I guess I’d now wonder how ubiquitous its usage is.
My concern with your second part is that law enforcement would not be able to deal with the issue quickly, and in the case of an abandoned relay it could take a fair few days or weeks before any action is taken. The problem with such illegal content is that in many places even unwittingly having it in your browser cache would put you massively at risk; it needs to be removed, and the user prevented from continuing, as quickly as possible. Anything else puts the people using the network at risk. If such a risk exists, it’s going to put most people off (and entirely understandably). I know I avoided browsing Lemmy for a fair while when the problem here was still being figured out, and I thankfully never saw anything, but I’m still wary of browsing on my lunch break at work, for example.
Also FWIW, I think Google does scan email and Drive for this stuff, and I think all US-based social networks have an obligation to do so as well, IIRC, but I might not be 100% correct on that.
There is no “delete a user from the internet” button. It doesn’t exist. Even if a single admin could ban a user from the entire network, which would give an immense amount of power to any admin, all that user has to do is make a new account to get around it. That’s true for Nostr, AP, Twitter, Facebook, email, etc. This is why spam exists and will always exist. AP or Nostr or whoever isn’t going to solve spam or abuse of online services; the best we can do is mitigate the bulk of it. Relays and instances can share ban lists in Nostr or AP, that sharing can be automated, and that is the way to mitigate the problem. There is, however, a “delete a person from society” button we can press, and that is LEO’s job. That, conveniently, also deletes them from the internet. It’s just not a button we trust anybody but the government to press. We do have a “delete a user from most of AP/Nostr” button in the form of shared blocklists.
As we add stronger and stronger anti-spam/anti-abuse measures, we make it harder and harder to join and participate in networks like the internet. This isn’t actually a problem for spammers: they have a financial incentive, so they can pay people to fill out captchas, do SMS verifications, and whatever else they need to do. All we do by increasing the cost to spam is change which kinds of spam are profitable to send. Other abuse of services that isn’t spam has its own intrinsic motivations, which may outweigh the cost of making new accounts. At a certain level of anti-spam mitigation, you end up hurting end users more than spammers.

A captcha and email verification block something like 90% of spam attempts and are a very small barrier for users, but even that has accessibility implications. Requiring them to receive an SMS? An additional 10%, but now you’ve excluded people who don’t have their own cell phone or who use a VoIP provider. You’ve made it more dangerous for people to use your service to seek help for things like addiction or domestic abuse, as their partner or a family member may share the same phone. You’ve made it harder to engage in dissent against the government in authoritarian regimes. You’ve also made it much more difficult to run a relay, since running a relay now requires access to an SMS service, payment for that SMS service, etc. Require them to receive a letter in the mail? An additional 10%, but now you’ve excluded people who don’t have a stable address or mail access, and it takes a week to sign up for your website, and that’s before even getting into apartment numbers and the complications you’d face there. For a listing to be placed on Google Maps, maybe a letter in the mail is a reasonable hurdle; after all, Google only wants to list businesses which have a physical address. For posting to Twitter? It’s pretty ludicrous.
I generally trust relay admins to make moderation decisions; otherwise I wouldn’t be on their instance or relay in the first place. And that trust extends to the other admins they work with and share ban lists with. And that’s fine. But remember that any person, with any set of motivations, can be a relay or instance admin. That person could be the very troll we are trying to stop with these anti-spam and anti-abuse measures. What I don’t trust is any random person on the internet being able to make moderation decisions for the entire internet. Which means that any approach to bans needs to be federated and built on mutual trust between operators.