Formerly known as arc@lemm.ee / server shuts down end June 25

  • 0 Posts
  • 39 Comments
Joined 2 months ago
Cake day: June 10th, 2025

  • I don’t really buy the “small incompatibilities” argument. The project strives for total compatibility, even down to the most esoteric parameter that nobody has ever heard of. And even that seems like overkill to me - there are alternative implementations of core commands on Linux and other *nix systems like BSD, Solaris etc. where the compatibility is far worse. For example, busybox is used in embedded Linux and in containerized images like Alpine Linux.

    It also seems a bit rich to complain that uutils might get extended. GNU coreutils came into being because of dissatisfaction with the commands that came with the default *nix. Same for bash (vs sh), GNU cc (vs cc), GNU emacs (vs emacs) and so on. Was there somebody back then complaining about devs “spamming commits” that extended functionality?

    And the claim that other Rust applications will only work with uutils is absurd. They’ll test the capabilities of the OS they’re built to run on, either at build time with feature flags or at runtime by probing commands - just like any other high-level application.
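
    For example, runtime probing can be as simple as checking whether a command exists and whether it accepts a given flag. A minimal sketch in Python - the probed flag is just an illustration, not something uutils prescribes:

```python
import shutil
import subprocess

def has_command(name: str) -> bool:
    """Return True if the command exists anywhere on PATH."""
    return shutil.which(name) is not None

def supports_flag(cmd: str, flag: str) -> bool:
    """Probe whether a coreutils-style command accepts a flag.

    GNU-style tools exit non-zero on an unknown option, so a harmless
    invocation combining the flag with --version distinguishes
    implementations without touching any real files.
    """
    if not has_command(cmd):
        return False
    result = subprocess.run(
        [cmd, flag, "--version"], capture_output=True, text=True
    )
    return result.returncode == 0

# e.g. check whether the local ls supports a GNU/uutils extension:
# supports_flag("ls", "--group-directories-first")
```

    An application would run a probe like this once at startup (or at build time) and fall back to portable behaviour when the extension is missing.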

    As for the license, MIT is used for plenty of other things in a typical Linux distro, e.g. X11.

    The biggest point of concern for a Rust rewrite is dependency integrity. Rust uses cargo to manage dependencies, and absolutely everything in the Cargo.toml/Cargo.lock files has to be reviewed. The crates.io repository is beginning to support package signing and The Update Framework initiative, but every single dependency of uutils would need to be carefully reviewed and signature-validated for it to be considered trustworthy. Basically everything needs to get locked down and, wherever possible, dependencies expunged altogether.

  • No, YOU don’t understand end to end encryption, and you don’t understand browsers. You say you could “write down a base64 encoded binary blob on a website”. Yes you could, and how do you decrypt it? The answer is with a key (asymmetric or symmetric) that must be in the memory of the receiving software - the browser that the filter has already intercepted and compromised. So “moar layers” is no protection, since the filter could inject any JS it likes to reveal the inner key and/or conversation. It could do this ad nauseam, and the only limit is how determined the filter is.

    But this is also a nonsense argument on a practical level. The problem is kids connecting to adult websites, or websites with some adult content. The filter doesn’t need to do much - either block a domain outright, or do some DPI to determine from the path which part of the website the browser is calling. The government thinks it reasonable that every single website that potentially hosts adult content should capture proof of identity from adults. I contend that the real issue is kids having access to those websites at all, and that proxies can and would be a far more effective way to control the issue without imposing on adults. No solution is perfect, but a filter is far more effective than entrusting some random website with personal information. Only this week somebody found an app that was storing IDs in a public S3 bucket, compromising all of its users. Multiply that by hundreds or thousands of websites all needing verification, and this will not be the last compromise by any means.

  • Actually it can be done and is being done. Software like FortiGate firewalls can do deep packet inspection on encrypted connections by replacing certs with their own and doing man-in-the-middle inspection. It only requires that the proxy’s root CA cert is installed in the browser’s trust store, so the certs the proxy issues are trusted. Filtering software could install that root cert when onboarding a new device.

    And if FortiGate can do it then any filtering software can too. E.g. a kid uses their filtered device to go to reddit.com; the filter software substitutes its own cert for reddit’s and proxies the connection. Then it looks at the paths to see if the kid is visiting an innocuous group or an 18+ group. So basic filtering rules could be:

    1. If domain is entirely blocked, just block it.
    2. If domain hosts mixed content, do deep packet inspection and block if necessary.
    3. If domain is innocuous, allow it through.
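
    Those three rules reduce to a simple decision function. A minimal sketch in Python - the domain sets and the adult path prefix are illustrative placeholders, not real categorization data:

```python
# Illustrative placeholder rule data -- a real filter would use
# maintained categorization feeds, not hard-coded sets.
BLOCKED_DOMAINS = {"adult-only.example"}
MIXED_DOMAINS = {"reddit.com"}
ADULT_PATH_PREFIXES = ("/r/over18",)  # hypothetical path pattern

def filter_decision(domain: str, path: str) -> str:
    """Apply the three rules above to one decrypted request."""
    if domain in BLOCKED_DOMAINS:
        return "block"      # rule 1: domain blocked outright
    if domain in MIXED_DOMAINS:
        # rule 2: deep packet inspection -- here just the request path,
        # visible because the proxy terminated TLS with its own cert
        if any(path.startswith(p) for p in ADULT_PATH_PREFIXES):
            return "block"
        return "allow"
    return "allow"          # rule 3: innocuous domain

print(filter_decision("reddit.com", "/r/over18/something"))  # block
```

    The path check only works for domains where the proxy has substituted the cert; for everything else the filter sees just the domain, which is exactly what rules 1 and 3 need.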

    This is eminently possible for an ISP to implement, and to do so in a way that it ONLY happens when a user opts into it on a registered device, while leaving everything open for those who don’t.

    And like I said, this is an ISP problem to figure out. The government could have set the rules and walked away. And as a solution it would be far simpler than requiring every website to implement age verification.

  • That’s a problem for ISPs and content providers to figure out. I don’t see why the government has to care beyond laying out the ground rules - you must offer and implement a parental filter, free of charge, for people who want it as part of your service. If ISPs have to do deep packet inspection and proxy certs for protected devices / accounts, then that’s what they’ll have to do.

    As far as the government is concerned it’s not their problem. They’ve said what should happen, providing the choice without being assholes to people over 18 who are exercising their right to use the internet as they see fit.