Do you apply the same reasoning to software that uses JavaScript, the JVM, the CLR, or some other kind of VM?
These are server CPUs, not something you wanna put in your laptop or desktop.
write only medium
I guess you meant “write once”?
Anyway, this won’t prevent attacks that somehow swap the CD being read, or that tamper with the backend logic deciding where to read the data from.
You cited Git as an example, but in Git it’s possible to e.g. force-push a branch, and if someone later fetches it with no previous knowledge they won’t get the original version.
The problem is the “with no previous knowledge” part, and it’s the reason this isn’t a storage issue. The way you would solve this in Git would be to fetch a specific commit, i.e. you need to already know the hash of the data you want.
For the Wayback Machine this could be as simple as embedding that hash in the URL. That way, when someone tries to fetch that URL in the future, they know what to expect and can verify the website data matches the hash.
This won’t work, however, if you don’t already have such a hash or you don’t trust its source, and I don’t think anything will ever work in those cases.
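Just to sketch the idea (this is my own example, not anything the Wayback Machine actually does): assume a made-up URL format where the expected SHA-256 digest rides along as a query parameter. The sha2 and hex crates are real; the URL scheme and helper names are invented for illustration.

```rust
// Assumed dependencies in Cargo.toml: sha2 = "0.10", hex = "0.4"
use sha2::{Digest, Sha256};

/// Pull the expected digest out of a hypothetical URL like
/// https://archive.example/snapshot?sha256=ab34... (the query format is an assumption).
fn expected_digest(url: &str) -> Option<String> {
    url.split("sha256=").nth(1).map(|h| h.to_lowercase())
}

/// Check that the bytes actually served match the digest embedded in the URL.
fn verify(url: &str, body: &[u8]) -> bool {
    match expected_digest(url) {
        Some(expected) => hex::encode(Sha256::digest(body)) == expected,
        None => false, // no digest in the URL, nothing to verify against
    }
}

fn main() {
    let body = b"<html>archived snapshot</html>";
    let digest = hex::encode(Sha256::digest(body));
    let url = format!("https://archive.example/snapshot?sha256={digest}");

    assert!(verify(&url, body));                                  // untouched content passes
    assert!(!verify(&url, b"<html>tampered snapshot</html>"));    // swapped content fails
    println!("digest check passed");
}
```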
If you think this is normal, then imagine what other people think of the Linux community!
But here’s the issue: the parent comment didn’t even provide reasons why they think Windows sucks or examples/episodes where this was a problem for them. It adds nothing to the discussion, just free hate.
Lots of major companies like Microsoft and IBM also contribute to Linux; it doesn’t make them saints, nor does it necessarily compare to what they get from using the volunteer dev work inside Linux.
Most of those companies actually contribute to the kernel or to foundational software used on servers, but few contribute to the userspace for desktop consumers on the level that Valve does.
Zig is “c”, but modern and safe.
Zig is safer than C, but not on a level comparable to Rust, so it lacks Rust’s biggest selling point. Unfortunately, just being a more modern language is not enough to sell it.
So imagine if trying to fit in a C-like cousin failed
C++ was not added to Linux because Linus Torvalds thought it was a horrible language, not because it was impossible to integrate into the kernel.
CEO bonuses should be awarded 10 years after their mandate
It also seems to require a GC though…
newxml is GC only, for simplicity’s sake.
Pointers are not guaranteed to be safe
So I guess they are forbidden in @safe mode?
but it’s being replaced by something else instead
Do you know what the replacement is? I tried looking up DIP1000 but it only says “superseded” without mentioning by what.
This makes me wonder how ready D is for someone that wants to extensively use @safe though.
For local variables, one should use pointers, otherwise ref does references that are guaranteed to be valid to their lifetime, and thus have said limitations.
Should I take this to mean that pointers instead are not guaranteed to be valid, and thus are not memory safe?
Note that Rust does not “solve” memory management for you, it just checks whether yours is memory safe. Initially you might rely on the borrow checker for those checks, but as you become more used to Rust you’ll start to anticipate it and write code that already satisfies it. So ultimately you’ll still learn how to safely deal with memory management, just in a different way.
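A tiny illustration of what I mean (my own example, nothing to do with the kernel work): the borrow checker doesn’t manage memory for you, it just refuses code whose lifetimes don’t line up, and after a while you write the accepted shape out of habit.

```rust
// Rejected by the borrow checker: the reference would outlive the local it points to.
// fn dangling() -> &String {
//     let s = String::from("local");
//     &s // rustc refuses this: `s` is dropped at the end of the function
// }

// The shape you end up writing instead: return ownership, so the compiler
// can prove the value lives as long as anyone uses it.
fn not_dangling() -> String {
    String::from("local")
}

fn main() {
    let owned = not_dangling();
    println!("{owned}");
}
```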
Rust for Linux used to be developed in parallel with mainline Linux before Linus Torvalds merged support into the main tree.
“safe by default” can be done by starting your files with @safe:
Last time I heard about that, it was much more limited than Rust; for example, it even disallowed taking references to local variables. Has something changed since then?
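For comparison (my own snippet, and assuming the D restriction is the one I remember), safe Rust is fine with references to locals as long as the borrow checker can see the reference never outlives the variable:

```rust
fn main() {
    let x = 42;         // a local variable
    let r = &x;         // borrowing it is allowed in safe Rust...
    println!("{}", *r); // ...because the borrow provably ends before `x` is dropped
}
```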
But the one time I looked at a rust git repo I couldn’t even find where the code to do a thing was.
IMO that says more about how the project was organized and how it names things than about the language used.
So I think probably, the best way IS to go the way linus did. Just go ahead and write a very basic working kernel in rust. If the project is popular it will gain momentum.
As the other commenter pointed out, there’s Redox. The issue is that this completely disregards an incremental approach: you have to rewrite everything before it becomes usable, you can’t do it piece by piece. Currently the approach of Rust for Linux is not even to rewrite things, but to allow writing new drivers in Rust.
Trying to slowly adapt parts of the kernel to rust and then complain when long term C developers don’t want to learn a new language in order to help isn’t going to make many friends on that team.
Have you seen the conference video? That’s not just refusal to learn a new language, it’s open hostility. And it’s not the only instance: for example, Asahi Lina also reported unreasonable behaviour from some maintainers just because she wrote Rust code, even when Rust was not involved.
The reputation loss is probably worse than whatever fine they end up paying
Time to pull a Meta/X and change the name
Unfortunately, some things will IMO always remain natural monopolies. For example, good luck trying to convince developers to write their apps for all those different operating systems.
That doesn’t really excuse its behavior in the video though.
Luckily Apple strictly controls the App Store and will never allow apps to abuse this, right? Right?
Did you mean patents?