It's about time. Critical infrastructure still written in C - particularly code that parses data from untrusted sources - is technical debt that is only going to get worse over time. It's not as if Rust is that much more difficult to write than C. Rust is explicitly designed to be what you'd get if you were to re-create C knowing what we know now about language design and code safety.
If 32-bit x86 support can be dropped for pragmatic reasons, so can these architectures. If people really, really want to preserve these architectures as ongoing platforms for the future, they need to step up and create a backend for the Rust toolchain that supports them.
Right now, there are approximately five languages that are presumed to be acceptable for core applications in the base system: C, C++, Shell (which probably means specifically bash), Perl, and Python. The most recent language to be added to that list is Python, about 20 years ago. That's not to say that everybody likes those languages (indeed, there's quite a few commenters here who I think would be surprised to learn that not only is C++ on this list, but that it's been on it for at least 25 years).
There are other languages that are considered acceptable, even desirable, for writing applications (e.g., Java, PHP, Go), but Rust is really the first language to compete closely enough with C for people to contemplate adding it to the base-system-languages list. I'd say only Go has ever come close to approaching that threshold, but I've never seen it contemplated for something like systemd.
Interestingly, I wonder if the debates over the addition of C++, Python, and Perl to the base system language set were this acrimonious.
> Interestingly, I wonder if the debates over the addition of C++, Python, and Perl to the base system language set were this acrimonious.
I think any projects that are run by people that see themselves as "X-people" (like Python-people, Perl-people) always have a bit of an "ick" reaction to new languages being added to projects they might see as part of a language's community.
So say you're a C++ developer, contributed to APT over the years, see all of it linked to the C++ community which you are part of too, and someone wants to start migrating parts of it to Rust/$NewLang. For these people I think it can sometimes affect more than just the code; it might even be "attacking" (strong word perhaps) their sense of identity, for better or worse.
I was (and am still) a C++ person. But I somehow became also a Rust person. I am sure some people felt attacked, but imho Rust has most of the characteristics that made C++ a good language and that's why I "embraced" Rust rather than felt attacked.
Yeah, Rust is not C++, but it really complements C++ well. Each has its niche and tbh I see them converging into the two kings of the systems space rather than actually competing with each other.
If anyone sees that horrible mess of hacks around pre-STL C++'s lack of namespaces, in combination with the latest C++ features, as part of the C++ community, I'd be very surprised :D
If APT were a hardcore C++ project surely we'd have like adopted namespaces everywhere by now.
> I think any projects that are run by people that see themselves as "X-people" (like Python-people, Perl-people) always have a bit of an "ick" reaction to new languages being added to projects they might see as part of a language's community.
I would say that Pythonistas are quite accustomed to "(other) languages being added" to the Python ecosystem. After all, NumPy relies on Fortran, as well as C.
Asserting that kind of "ownership" over code seems rather distasteful to me. Maybe there would be less acrimony if developers got paid for it somehow.
Yeah, maybe Python was a bad example. I didn't really think specifically about the languages I picked; they were just examples, in the hope of making it easier to understand what exactly I was talking about.
Some communities indeed are better at embracing multiple languages; Python, JavaScript and Java/the JVM come to mind, where it isn't uncommon to call out to other languages.
> So say you're a C++ developer, contributed to APT over the years, see all of it linked to the C++ community which you are part of too, and someone wants to start migrating parts of it to Rust/$NewLang. For these people I think it can sometimes affect more than just the code; it might even be "attacking" (strong word perhaps) their sense of identity, for better or worse.
How is language relevant here? If someone just rewrote it in the same language instead of a different one, do you feel the reaction would be significantly better?
This is really a Rust-specific thing, since so many C++ projects were translated 1-to-1 to Rust, only changing the licence.
Rust has been the tool of choice for stealing GPL3 open source projects that some people have spent all their free time on at some point in their life.
> there are approximately five languages that are presumed to be acceptable for core applications in the base system: [...] Python
I don't know if you've tried to get someone else's Python running recently, but it has devolved into a disaster effectively requiring containers to accurately replicate the exact environment it was written in.
Core system applications should be binaries that run with absolutely minimal dependencies outside of default system-wide libraries. Heck, I would go as far as to say applications in the critical path to repairing a system (like apt) should be statically linked since we no longer live in a storage constrained world.
> I don't know if you've tried to get someone else's Python running recently, but it has devolved into a disaster effectively requiring containers to accurately replicate the exact environment it was written in.
Please show me a project where you believe you "effectively require containers" just to run the code, and I will do my best to refute that.
> since we no longer live in a storage constrained world.
I think you do care about the storage use if you're complaining about containers.
And I definitely care, on principle. It adds up.
For reasons I can only assume have to do with poorly configured CI, pip gets downloaded billions of times annually (https://pypistats.org/packages/pip), and I assume those files get unpacked and copied all the time since there would be no good reason to use uv to install pip. That's dozens of petabytes of disk I/O.
> "containers" broadly, including things like pipx, venv, or uv.
This statement makes no sense. First off, those are three separate tools, which do entirely different things.
The sort of "container" you seem to have in mind is a virtual environment. The standard library `venv` module provides the base-line support to create them. But there is really hardly anything to them. The required components are literally a symlink to Python, a brief folder hierarchy, and a five-or-so-line config file. Pipx and uv are (among other things) managers for these environments (which manage them for different use cases; pipx is essentially an end-user tool).
Virtual environments are nowhere near a proper "container" in terms of either complexity or overhead. There are people out there effectively simulating a whole new OS installation (and more) just to run some code (granted this is often important for security reasons, since some of the code running might not be fully trusted). A virtual environment is... just a place to install dependencies (and they do after all have to go somewhere), and a scheme for selecting which of the dependencies on local storage should be visible to the current process (and for allowing the process to find them).
It is such a non-problem that it forced them to hack a "fuck you, this Python is owned by the distribution, not you" message into pip, requiring you to agree to "breaking your system" to use it.
Of all the languages, python in the base system has been an unmitigated garbage fire.
It was not their action, nor is it hacked, nor is the message contained within pip.
The system works by pip voluntarily recognizing a marker file, the meaning of which was defined by https://peps.python.org/pep-0668/ — which was the joint effort of people representing multiple Linux distros, pip, and Python itself. (Many other tools ignore the system Python environment entirely, as mine will by default.)
Further, none of this causes containers to be necessary for installing ordinary projects.
Further, it is not a problem unique to Python. The distro simply can't package all the Python software out there available for download; it's completely fair that people who use the Python-native packaging system should be expected not to interfere with a system package manager that doesn't understand that system. Especially when the distro wants to create its tools in Python.
You only notice it with Python because distros aren't coming with JavaScript, Ruby etc. pre-installed in order to support the system.
Well the essential system Python should be in /usr/sbin and read-only (insofar as Python allows that with its __pycache__ spam).
The fact that users have to keep up with multiple PEPs, error messages, --single-version-externally-managed, --break-system-packages, config files everywhere, stealth packages in .local and uv to paper over all of this shows that Python packaging is completely broken.
> the essential system Python should be in /usr/sbin
There's still quite a bit you can do with the "system Python". Mine includes NumPy, bindings for GTK, QT5 and QT6, Freetype, PIL....
> insofar as Python allows that with its __pycache__ spam
This is, to my understanding, precisely why the standard library is pre-compiled during installation (when the process already has sudo rights, and can therefore create the `__pycache__` folders in those locations). This leverages the standard library `compileall` module, invoked from the Makefile.
> The fact that users have to keep up with multiple PEPs, error messages, --single-version-externally-managed, --break-system-packages, config files everywhere, stealth packages in .local and uv to paper over all of this shows that Python packaging is completely broken.
Please do not spread FUD.
They don't have to do any of that. All they have to do is make a virtual environment, which can have any name, and the creation of which is explicitly supported by the standard library. Further, reading the PEPs is completely irrelevant to end users. They only describe the motivation for changes like --break-system-packages. Developers may care about PEPs, but they can get a better summary of the necessary information from https://packaging.python.org ; and none of the problems there have anything to do with Linux system Python environments. The config files that developers care about are at the project root.
Today, on any Debian system, you can install an up-to-date user-level copy of yt-dlp (for example) like so, among many other options:
sudo apt install pipx
pipx install yt-dlp
You only have to know how one of many options works, in order to get a working system.
> All they have to do is make a virtual environment
Okay so to create a five line script I have to make a virtual environment. Then I have to activate and deactivate it whenever using it. And I have to remember to update the dependencies regularly. For my five line script.
Seems to me the companies managing mloc-codebases pushed their tradeoffs on everyone else.
> Okay so to create a five line script... For my five line script.
I can guarantee that your "five line script" simply does not have the mess of dependencies you imagine it to have. I've had projects run thousands of lines using nothing but the standard library before.
> Then I have to activate and deactivate it whenever using it.
No, you do not. Activation scripts exist as an optional convenience because the original author of the third-party `virtualenv` liked that design. They just manipulate some environment variables, and normally the only relevant one is PATH. Which is to say, "activation" works by putting the environment's path to binaries at the front of the list. You can equally well just give the path to them explicitly. Or symlink them from somewhere more convenient for you (like pipx already does for you automatically).
> And I have to remember to update the dependencies regularly.
No, you do not in general. No more so than for any other software.
Programs do not stop working because of the time elapsed since they were written. They stop working because the world around them changes. For many projects this is not a real concern. (Did you know there is tons of software out there that doesn't require an Internet connection to run? So it is automatically invulnerable to web sites changing their APIs, for example.) You don't have to remember to keep on top of that; when it stops working, you check if an update resolves the problem.
If your concern is with getting security updates (for free, applying to libraries you also got for free, all purely on the basis of the good will of others) for your dependencies, that is ultimately a consequence of your choice to have those dependencies. That's the same in every language that offers a "package ecosystem".
This also, er, has nothing to do with virtual environments.
> Seems to me the companies managing mloc-codebases pushed their tradeoffs on everyone else.
Not at all. They are the ones running into the biggest problems. They are the ones who have created, or leveraged, massive automation systems for containers, virtualization etc. — and probably some of it is grossly unnecessary, but they aren't putting in the time to think about the problem clearly.
And now we have a world where pip gets downloaded from PyPI literally billions of times a year.
People are too harsh on this. It's not hard to install a version manager and set your primary python to that. Which is just good hygiene.
My understanding of the reasoning is that python-based system packages having dependencies managed through pip/whatever present a system stability risk. So they chose this more conservative route, as is their MO.
Honestly if there is one distribution to expect those kinds of shenanigans on it would be Debian. I don't know how anybody chooses to use that distro without adding a bunch of APT sources and a language version manager.
Yes, because then you're starting to use non-distro Python packages. If you want to do that, use a virtualenv; there is no other safe way (even if there were no Python in the base system).
There's a good reason for this. The average user has no idea and doesn't care what language some random distro-packaged program is written in. They want to be able to run ubxtool or gdal_calc or virt-manager or whatever without setting up a virtual environment. Python developers on the other hand should be adept at such things, should they choose to use a non-distro packaged version of something.
The tricky part is when "users" start using pip to install something because someone told them to.
And storage is a big deal for Debian! It is basically the other player in embedded Linux, other than rolling your own, where a 4 GB or smaller eMMC is pretty common.
Is X11/KDE considered part of the "base system"? If yes, then:
> indeed, there's quite a few commenters here who I think would be surprised to learn that not only is C++ on this list, but that it's been on it for at least 25 years
The Debian base system is much, much smaller. I'm surprised that people consider Python to be part of it. However, APT depends on Perl and the C++ run-time library, so those two languages have been part of the base system for a very long time.
You can of course add your own "apt" binary in /usr/local/bin/apt which can be written in any language you like, say COBOL, Java, Common Lisp or Python.
Ah, my apologies. I must admit I am not entirely clear on where the boundaries are between the different tools and other elements of the debian packaging process.
> In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages
> Critical infrastructure still written in C - particularly code that parses data from untrusted sources - is technical debt that is only going to get worse over time.
But hasn't all that foundational code been stable and wrung out already over the last 30+ years? The .tar and .ar file formats are both from the 70s; what new benefits will users or developers gain from that thoroughly battle-tested code being thrown out and rewritten in a new language with a whole new set of compatibility issues and bugs?
I wish, but I get new security bugs in those components like every year or so. Not all are tracked with security updates, to be fair; for some we say it's your own fault if you use the library to parse untrusted code.
After all, the library wasn't designed around safety; we assumed the .debs you pass to it are trusted in some way - because you publish them to your repository, or you are about to install them so they have root maintainer scripts anyway.
But as stuff like hosting sites and PPAs came up, we have operators publishing debs for untrusted users, and hence suddenly there was a security boundary of sorts and these bugs became problematic.
Of course memory safety here is only one concern, if you have say one process publishing repos for multiple users, panics can also cause a denial of service, but it's a step forward from potential code execution exploits.
I anticipate the rewrites to be as close to 1-to-1 as possible to avoid introducing bugs, but then adding actual unit tests to them.
> But hasn't all that foundational code been stable and wrung out already over the last 30+ years?
Not necessarily. The "HTTP signature verification code" sounds like it's invoking cryptography, and the sense I've had from watching the people who maintain cryptographic libraries is that the "foundational code" is the sort of stuff you should run away screaming from. In general, it seems to me to be the cryptography folks who have beat the drum hardest for moving to Rust.
As for other kind of parsing code, the various archive file formats aren't exactly evolving, so there's little reason to update them. On the other hand, this is exactly the kind of space where there's critical infrastructure that has probably had very little investment in adversarial testing either in the past or present, and so it's not clear that their age has actually led to security-critical bugs being shaken out. Much as how OpenSSL had a trivially-exploitable, high criticality exploit for two years before anybody noticed.
For actual cryptography code, the best path is formally verified implementations of the crypto algorithms, with parsers for wrapper formats like OpenPGP or PKCS#7 implemented in a memory-safe language.
You don't want the core cryptography implemented in Rust for Rust's sake when there's a formally verified assembly version next to it. Formally verified _always_ beats anything else.
I should have clarified that I was primarily referring to the stuff dealing with all the wrapper formats (like PKIX certificate verification), not the core cryptographic algorithms themselves.
The core cryptographic algorithms, IMHO, should be written in a dedicated language for writing cryptographic algorithms so that they can get formally-verified constant-time assembly out of it without having to complain to us compiler writers that we keep figuring out how to deobfuscate their branches.
Sure. But assembly implementations by definition are not portable. And I don't know what it takes to write a formally verified library like this, but I bet it's very expensive.
In contrast, a Rust implementation can be compiled for many architectures easily, and is intrinsically safer than a C version.
Plus cryptography and PKI is constantly evolving. So it can’t benefit from the decades old trusted implementations.
Formally verified in an obscure language where it's difficult to find maintainers does not beat something written in a more "popular" language, even if it hasn't been formally verified (yet?).
And these days I would (unfortunately) consider assembly as an "obscure language".
(At any rate, I assume Rust versions of cryptographic primitives will still have some inline assembly to optimize for different platforms, or, at the very least, make use of compiler intrinsics, which are safer than assembly, but still not fully safe.)
It's insanely complex, particularly if you want _verified_ crypto. Last year (or two years ago?) I had to fix a tiny typo in OpenSSL's ARM assembly for example, it was breaking APT and Postgres left and right, but only got triggered on AWS :D
You don't want to write the whole thing in assembly, just the parts that need to be constant time. Even those are better written as subroutines called from the main implementation.
Take BLAKE3 as an example. There's asm for the critical bits, but the structural parts that are going to be read most often are written in rust like the reference impl.
I would like a special purpose language to exist precisely for writing cryptographic code where you always want the constant-time algorithm. In this niche language "We found a 20% speed-up for Blemvich-Smith, oops, it actually isn't constant time on the Arrow Lake micro-code version 18 through 46" wouldn't even get into a nightly, let alone be released for use.
It seems that for reasons I don't understand this idea isn't popular and people really like hand rolling assembly.
I do think this is pretty much the one use case for a true "portable assembler", where it basically is assembly except the compiler will do the register allocation and instruction selection for you (so you don't have to deal with, e.g., the case that add32 y, x, 0xabcdef isn't an encodable instruction because the immediate is too large).
If you mean GnuPG, that is what Snowden used. It could be better than new software that may have new bugs. Memory safety is a very small part of cryptographic safety.
(New cryptographic software can also be developed by all sorts of people. In this case I'm not familiar, but we do know that GnuPG worked for the highest profile case imaginable.)
GPG works great if you use it to encrypt and decrypt emails manually as the authors intended. The PGP/GPG algorithms were never intended for use in APIs or web interfaces.
Ironically, it was the urge not to roll your own cryptography that got people caught in GPG-related security vulnerabilities.
Isn't it also funny that all of these things are done by the same person?
In all seriousness though, let me assure you that I plan to take a very considerate approach to Rust in APT. A significant benefit of doing Rust in APT rather than rewriting APT from scratch in Rust means that we can avoid redoing all our past mistakes because we can look at our own code and translate it directly.
Honestly having seen trainwreck after trainwreck after trainwreck come out of Canonical for the last decade, I'm sure I'm not the only one that has strong doubts about anyone associated being able to "avoid redoing past mistakes" or to make things not suck.
The Fil-C project ( https://fil-c.org/ ) seems like a more pragmatic way to deal with C security holes in old, well-loved userspace code. It effectively turns C into a managed language rather than a bare metal one, seems to remove a lot of the impetus to rewrite.
I really like the idea of Fil-C for derisking legacy code, but
- It's not an option for debian core infrastructure until it supports at least the same platforms debian does (arm, riscv, etc) and it currently only supports x86_64.
- It doesn't turn C into a modern language; since it looks like there's active development here, getting the productivity benefits of moving away from C is likely still worth it.
If C isn't going away yet, just doing Fil-C for x86_64 might still be a big win, as it could catch a lot of stuff at runtime for x86_64 users that can then be fixed for everyone.
Given that a few languages from a similar timeframe are still around, even if with fewer users, it is guaranteed to stay around as long as current computing models stay relevant.
If you're single-platform (Fil-C is x86-64 only), if the program is finished (Fil-C doesn't magically make maintaining a C project any easier to handle), and if performance isn't relevant (Fil-C is, and despite its originator's confidence always will be, bigger and slower than what you have today), then I agree.
The author's benchmarks suggest 10× would be a pathological case!
But even so - what price correct & secure software? We all lost a tonne of performance overnight when we applied the first Meltdown and Spectre workarounds. This doesn't seem much different.
We have an alternative that isn't 10x slower, and comes with many other benefits (Rust). The only cost is losing hardware support for some very obsolete and very unpopular platforms. (Nevermind that Fil-C's hardware support is narrower than Rust's.)
Rust doesn't automatically add memory safety to all existing C code, which will need to be maintained for decades; Fil-C nearly does, and it's still early days.
> We have an alternative that isn't 10x slower, and comes with many other benefits
Anyone involved with development around a fruity company would say Swift ;)
It is not only about memory safety. The C community is aging fast and young developers choose different languages. We started to rewrite all C and C++ code in my team because it is really hard to find people willing to maintain it. From my experience, the typical C or C++ programmer is around 40 and not willing to switch jobs.
Inviting rank amateurs to established projects while expecting them to operate as free labor in the hopes of future relevance for employment has a distinctly different feel. Missives like the OP feel like preying on a desperate and young generation when paired with the commentary.
If all the entry-level jobs are C or C++, do you think companies would have a hard time filling them? Would the unemployed new graduates really shun gainful employment if Rust wasn't part of the equation?
Meanwhile, hiring managers left and right are reporting that within hours of a job being posted, they are flooded with hundreds of applications. And you can't find a single person because of the programming language of your stack? And to remedy this, you're going to rewrite your stack in an unproven language? Have you considered that if you can't find anyone that it might not be a programming language or tech stack problem?
My experience in working in C++ was that the salary wasn’t as good as what I could get elsewhere. That was a big part of why I left, I didn’t hate the job.
Wow, I've never considered this aspect of it but you're right. If you want widespread access to incoming developers that can contribute to your project, that really does mean Rust by default at this point if you want a low level language regardless of what you prefer.
Is the pay good at your job? Rust jobs are few and far between, so I think Rustafarians are willing to take lower pay to work in their dream language. Companies will of course abuse this.
I agree that new software should be written in Rust or another, safer language. But I don't agree that it's wise to start retrofitting old software in this way. New code is almost always worse in quality than old code, and I do not believe that the safety gains from Rust are so advantageous that they will offset that factor.
Would this logically extend to include C-reliant languages like Python and Ruby (the latter being mostly a grammar underpinned by C) as technical debt as well?
Yes, which is why in 2025 it is a good idea to use Rust with python bindings for your performance sensitive code.
A lot of the C code used in python is calling out to old, battle tested and niche libraries so it is unlikely that someone is going to replace those any time soon but Rust is definitely increasing as time goes on for greenfield work.
Most Python libraries that rely on C are numerical stuff.
From experience with this type of code you typically end up with a load of functions that take in a numpy array and its length/dimensions to a C function that works on that array in place or an output array that was also supplied. In terms of getting this wrong, it’s usually a crash caused by out of bounds memory access which would still be a runtime crash in Rust. So I’m not sure there’s a massive benefit for these types of code other than the fun of learning Rust. Other than that, you’re typically writing C/C++ to interface with C and Fortran libraries that are really battle tested, and for which it will take decades for Rust to have equivalents. So moving to Rust will just cause you to have lots of unsafe statements - not a bad thing necessarily if you are doing a lot of work at the C level in existing code but less of a benefit if you are doing a straight wrap of a library.
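To make the comparison concrete, here is a toy sketch (not anyone's real binding code): a Rust slice carries its own length, so the separate length/dimension arguments a C function would take are implicit, and an out-of-bounds access is a deterministic panic rather than silent memory corruption - though, as noted above, it is still a runtime failure.

```
// Toy example: the slice knows its own length, so no separate
// length/dimension arguments are needed, and indexing is bounds-checked.
fn scale_in_place(data: &mut [f64], factor: f64) {
    for x in data.iter_mut() {
        *x *= factor;
    }
}

fn main() {
    let mut v = vec![1.0, 2.0, 3.0];
    scale_in_place(&mut v, 2.0);
    println!("{:?}", v); // [2.0, 4.0, 6.0]
    // An out-of-range access like v[10] would panic with an index error
    // instead of silently reading or writing past the allocation.
}
```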
On the flip side, things on the web side of Python like uWSGI which is written in C are important for the security aspect but they’re a very small part of the Python ecosystem.
All (current) languages eventually have a compiler/runtime that is memory unsafe. This is basically fine because it's a tiny amount of surface area (relative to the amount of code that uses it) and it exists in a way that the input to is relatively benign so there's enough eyes/time/... to find bugs.
There's also nothing stopping you from re-implementing python/ruby/... in a safer way once that becomes the low hanging fruit to improve computer reliability.
Are you counting ones that involve running malicious code in a sandbox and not just trusted code on untrusted input? Because then I'd agree, but that's a much harder and different problem.
My impression is that for the trusted code untrusted input case it hasn't been that many, but I could be wrong.
Sandboxes are difficult independent of language, see all the recent speculation vulnerabilities for instance. Sure, worse languages make it even harder, but I think we're straying from the original topic of "python/ruby" by considering sandboxes at all.
> Rust is explicitly designed to be what you'd get if you were to re-create C knowing what we know now about language design and code safety.
I don't know about that. Look at the code for the COSMIC desktop environment's clock widget (the cosmic-applet-time directory under <https://github.com/pop-os/cosmic-applets>), for example. It's pretty much unreadable compared to a C code base of similar complexity (GNU coreutils, for example: <https://savannah.gnu.org/projects/coreutils/>).
as in that "isn't the style of code you are used too"
I don't think "how well people not familiar with you language can read it" is a relevant metric for most languages.
Also, IMHO, while C feels readable, it isn't when it matters, because it very often just doesn't include information you need when reading. Looking at a function header doesn't tell you if a ptr is nullable, or if a mut ptr is a changeable input value or instead an out ptr that is supposed to point to uninitialized memory, or, if there is an error, how that affects the validity of any mutable ptrs passed in. To name just a few examples (let's not even get started on preprocessor macros pretending to be C functions). In conclusion, while C seems nice to read, it is IMHO often a painful experience to "properly" read it, e.g. in the context of a code review.
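To make that concrete, a minimal hypothetical sketch (the function and type names are invented for illustration) of how a Rust signature carries information that a C prototype such as `int parse_config(char *path, config_t *out)` leaves implicit:

```
// - `path` may legitimately be absent: that's an Option, not a maybe-NULL pointer.
// - the caller gets either a fully initialized Config or an error; there is no
//   out-pointer whose validity depends on inspecting a return code.
#[derive(Debug)]
struct Config { verbose: bool }
#[derive(Debug)]
struct ParseError(String);

fn parse_config(path: Option<&str>) -> Result<Config, ParseError> {
    match path {
        Some(_p) => Ok(Config { verbose: false }),
        None => Err(ParseError("no path given".into())),
    }
}

fn main() {
    println!("{:?}", parse_config(Some("/etc/example.conf")));
    println!("{:?}", parse_config(None));
}
```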
As a side note: The seemingly verbose syntax of e.g. `chrono::DateTime` comes from there being 2 DateTime types in use in the module, one from the internationalization library (icu) and one from a generic time library (chrono). Same for Sender, etc. That isn't a super common issue, but happens sometimes.
I think the Rust example’s biggest readability sin is using the full names of things like foo::bar::Baz instead of just Baz, but I get why they did that. When you import a lot of things into a file the latter way, it’s easy to get lost in “was that a foo Baz or a wiz Baz?” Sometimes it’s easier just to use the long names everywhere to be explicit.
If I wanted to tweak the Rust project, I’d feel pretty confident I was calling the right things with the right params.
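For what it's worth, Rust also lets you keep the short names and disambiguate with a rename at the import site; a tiny illustrative sketch (module names made up):

```
// Two types sharing a short name can be renamed on import, keeping
// call sites short while staying explicit about which Baz is which.
mod foo { pub struct Baz(pub u32); }
mod wiz { pub struct Baz(pub &'static str); }

use crate::foo::Baz as FooBaz;
use crate::wiz::Baz as WizBaz;

fn main() {
    let a = FooBaz(1);
    let b = WizBaz("hello");
    println!("{} {}", a.0, b.0);
}
```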
That's a style choice that I think comes from former C++ devs.
Java can potentially have the same problem. But because everyone uses an IDE and because it's rarely really an issue, everyone will simply import `Baz` rather than worry about the Foo::Baz and Bat::Baz collision. It does happen in java code, but I can't stress how infrequently it's actually a problem.
I don’t think that’s quite right. I haven’t written C++ since the 90s, and I use IDEs (Emacs and Zed), but I still sometimes reach a mental threshold where I look at my screen and see way too many names to have to hold in my mental buffer, then decide to make them more explicit.
IDK what the state of Emacs/Zed is in terms of type information (I'm sure it depends on the language in question). For Jetbrains/Eclipse/Netbeans, if there's a question about a type you can ctrl+click on the type and immediately pull up all information about it.
In java, I rarely pay attention to the `import` section (and I know most devs at my company).
You can look up `using namespace std;` in google and you'll find a lot of articles saying it's a bad practice in C++. Everyone recommends writing the full `std::cout` rather than `cout`.
All modern editors do that pretty well with language servers now. Specifically, Emacs and Zed do this perfectly with Rust.
I do think it’s down to personal preference. With the fully qualified names, I can look at the screen and follow the flow without having to mouse over the various names in play. For that matter, I could print it out if I wanted to and still have all the information.
I don’t think you’re objectively wrong. It’s more that we have different approaches to managing the complexity when it gets hairy.
I disagree. Both seem perfectly readable, assuming you know their preferred coding styles. As a non-C programmer, I absolutely despise running into #ifndef SOME_OBSCURE_NAME and `while (n) { if (g) {` but C (and in the latter case Go) programmers seem to love that style.
Comparing a bunch of small, barely integrated command line programs to a UI + calendar widget doesn't seem "of similar complexity" to me. Looking at a C clock widget (https://gitlab.freedesktop.org/xorg/app/xclock/-/blob/master...) the difference seems pretty minimal to me. Of course, the XClock code doesn't deal with calendars, so you have to imagine the extra UI code for that too.
but we do... 4 years after Rust, we got a first glimpse at Zig. Even today, there's a lot of people that believe that Zig would have been a better choice for the Linux kernel than Rust.
And the answer to "why now" is quite simple - Because of the whole Rust in kernel debate, people started scrutinizing the situation.
Then Zig people should be the focus of “this kids and their new shiny” criticism, not Rust.
People who become aware of something only when it’s being used by something huge also aren’t early adopters either. Rust has already been in the Windows kernel for years at this point, with none of this consternation.
That makes no sense. It was much longer than 10 years before people considered C to be tech debt for example. Idk if it will be 10 years exactly, but we are seeing better languages emerging (Swift 6, Mojo, probably others) that provide the same safety guarantees and performance/use case profiles as Rust, but are vastly more ergonomic and lovely to use. I fear Linux was hasty integrating Rust because it will likely prevent them from integrating something better in the near future.
Right, that’s when Rust was new and shiny. People who chase the latest new shiny thing don’t consider ten year old technology new and shiny, they’d have moved on years ago.
Memory safety is mostly an issue of the past. Clearly, there are new code bases with memory issues too. But we have tools to prevent that.
The new security issues are supply chain attacks. And Cargo is exactly the way to get those.
Can you provide some evidence to support this? There’s a large body of evidence to the contrary, e.g. from Chrome[1].
> But we have tools to prevent that. The new security issues are supply chain attacks.
Speaking as a “supply chain security” person, this doesn’t really hold water. Supply chain attacks include the risk of memory unsafety lurking in complex dependency trees; it’s not an either-or.
I think it's mostly the fact that C dependencies are much rarer and much harder to add and maintain.
The average C project has at most a handful of other C dependencies. The average Rust, Go or NodeJS project? A couple hundred.
Ironically, because dependency management is so easy in modern languages, people started adding a lot of dependencies everywhere.
Need a leftpad? Just add one line in some yaml file or an "Alt-Enter" in an IDE. Done.
In C? That is a lot more work. If you do that, you do it for advanced stuff you absolutely need for your project. Because it is not easy.
In all likelihood you write that stuff yourself.
CVE-2024-3094, is it? You can argue that in C it is much easier to obfuscate your exploit. Implementing something in C is also a lot more work, so you might also be inclined to use a 3rd-party library.
I never found it hard to add a C library to my projects using pkg-config. And yes, when the package came from Debian I have some trust that it is not a huge supply chain risk.
I think the problem started with the idea of language-level package managers that are just GitHub collections instead of curated distribution-level package managers. So my response to "C has no good package manager" is: it should not have a package manager, and Cargo, npm, and the countless Python managers should all not exist either.
Usually the hard bit with C libraries is having dependencies with dependencies all of which use their own complex build systems, a mix of Make, CMake, Autotools, Ninja, etc.
Then within that, e.g., a mix of projects using normal standard names for build parameters versus things like PROJECTNAME_COMPILER instead of CMAKE_C_COMPILER.
The package manager takes care of the dependencies. And one does not need to compile the libraries one uses, so how complicated this is does not matter. I install the -dev package and I am done. This works beautifully and where it does not the right move would be to fix this.
I think in most of my projects, many of the C++ packages I used for work (lots of computer vision, video codecs etc) I had to compile and host myself.
The latest and greatest of OpenCV, dlib or e.g. gstreamer weren't available on the distros I was using (Ubuntu, Fedora, Centos). They'd lag a year or more behind sometimes.
Some stuff was outright not available at all via package manager - in any version.
So, yes, you do have to figure out how to build and package these things by yourself very often.
There are also no "leftpad" or similar packages in C. If you don't want to write something yourself.
In contrast - virtually every software package of any version is available to you in cargo or npm.
Virtually every package is in cargo and npm because there is no curation. This is exactly why it is a supply-chain risk. The fix is to have a curated list of packages, but this is what Linux distribution are. There is no easy way out.
> Critical infrastructure still written in C ... is technical debt that is only going to get worse over time.
No. Rust is not magic, it just forces a discipline in which certain safety checks can be made automatically (or are obviated entirely). In other languages like C, the programmer needs to perform those checks; and it's technical debt if the C code is not coded carefully and reviewed for such issues. If coding is careful and the code is reviewed - there is no technical debt, or perhaps I should say no more than the unsafe parts of a rust codebase or the standard libraries. And the safety of critical infra code written in C gets _better_ over time, as such technical debt is repaid.
> Rust is explicitly designed to be what you'd get if you were to re-create C knowing what we know now about language design and code safety.
That's not true. First, it's not a well-defined statement, since "what we know now" about language design is, as it has always been, a matter of debate and a variety of opinions. But even regardless of that - C was a language with certain design choices and aesthetics. Rust does not at _all_ share those choices - even if you tack on "and it must be safe". For example: Rust is a much richer language - in syntax, primitive types, and standard library - than C was intended to be.
> If coding is careful and the code is reviewed - there is no technical debt, or perhaps I should say no more than the unsafe parts of a rust codebase or the standard libraries. And the safety of critical infra code written in C gets _better_ over time, as such technical debt is repaid.
How many decades have we tried this? How many more to see that it just hasn't panned out like you describe?
> If coding is careful and the code is reviewed - there is no technical debt, or perhaps I should say no more than the unsafe parts of a rust codebase or the standard libraries.
History shows again and again that this statement is impossible.
Name a large C application that's widely used, and I'll show you at least one CVE in the project that's caused by a memory safety bug.
There is a pure-Rust compiler backend in the works, but that's going to take a long time to mature so it's just pragmatic to use LLVM in the meantime. Especially since the exploitation potential is pretty contrived in this case - if you compile compromised code then you're probably owned anyway, regardless of the backend's memory safety.
Thankfully the “trust” you need out of a compiler is very very different. It would be closer to claiming you need to compile it on a Rust OS too because you’re trusting a large C/C++ app.
Separation of concerns solves this, because the backend's implementation language has minimal impact on the trustedness of the code the Rust compiler generates. Indeed, one would expect that all the ways the LLVM backend fails are ways any Rust implementation would fail too - by generating the wrong code, which is rarely if ever due to memory safety or thread safety issues. There may be other reasons to write the compiler backend in Rust, but I wouldn't put the trust of compiled Rust code anywhere near the top of the reasons to do that.
They need to do this carefully and with adversarial testing. There are safety measures in e.g. gnu tar that really should be replicated. But they are not to do with parsing, but the semantics.
> There is no standard. It's accidentally designed.
Are you really claiming that you can't design a language without an official standard? Not to mention that C itself was designed long before its first ISO standard. Finally, the idea that a standards committee is a precondition for good language design is rather bold, I have to say. The phrase "design by committee" isn't typically used as a compliment...
> You've solved one class of bugs outside of "unsafe {}".
It's "only" the single most important class of bugs for system safety.
This kind of deflection and denialism isn't helping. And I'm saying this as someone who really likes C++.
> that you can't design a language without an official standard?
No, just that it's not 1968 anymore, and if you want to claim your language has learned lessons from the past, then this is one that clearly got missed.
> The phrase "design by committee" isn't typically used as a compliment...
While the phrase "emergent incompatibilities" is only known as a detriment.
> It's "only" the single most important class of bugs for system safety.
Again, I ask for a reference, "according to what?" I understand this is the zeitgeist. Is it actually true? It seems to me this great experiment is actually proving it probably isn't.
> This kind of deflection and denialism isn't helping.
Once again, I asked for proof that the claim was true, you've brought nothing, and instead have projected your shortcomings onto my argument.
> And I'm saying this as someone who really likes C++.
Have you ever pushed for C++ to replace C programs because you assume they would be "better" according to some ill defined and never measured metrics?
Ah, now I realize that you didn't necessarily mean something like an ISO standard. I definitely agree that a programming language should have some kind of normative specification and it's not sufficient to say "the spec is the behavior of the compiler".
> Again, I ask for a reference, "according to what?" I understand this is the zeitgeist.
I think that at this point it is pretty well-established that the majority of security CVEs in C or C++ applications are caused by memory safety bugs. For sources see https://www.cisa.gov/news-events/news/urgent-need-memory-saf.... As a C++ dev this totally makes sense. (I just happen to work in a domain where security doesn't really matter :)
To be clear: I definitely don't think that all C or C++ code should be rewritten in Rust. But for components that are exposed to the public internet or accept untrusted user input it totally makes sense.
I have been seeing hatred towards Rust on this forum for a long time. Initially it didn't make any kind of sense. Only after actually trying to learn it did I understand the backlash.
It actually is so difficult that most people might never be able to become proficient in it, even if they tried. Especially coming from the world of memory-managed languages. This creates pushback against any and every use or promotion of Rust. The underlying fear seems to be that they will be left behind if it takes off.
I completed my battles with Rust. I don't even use it anymore (because of lack of opportunities). But I love Rust. It is here to stay and expand. Thanks to the LLMs and the demand for verifiability.
I find Rust much easier to write than C. Its types let me be reasonably sure I’ve written appropriate code before I even get to the point of running tests, and I don’t have to memorize the flow of the whole program to have that assurance.
```
12 |     hover(altitude2);
   |     ----- ^^^^^^^^^ expected `Meters`, found `Feet`
   |     |
   |     arguments to this function are incorrect
```
Guaranteeing that I’ve never mixed units means I don’t have to worry about parking my spacecraft at 1/3 the expected altitude. Now I can concentrate on the rest of the logic. The language has my back on the types so I never have to waste brain cycles on the bookkeeping parts.
That’s one example. It’s not unique to Rust by a long shot. But it’s still a vast improvement over C, where that same signed 32 bit data type is the number of eggs in a basket, the offset of bytes into a struct, the index of an array, a UTF-8 code point, or whatever else.
This really shows up at refactoring time. Move some Rust code around and it’ll loudly let you know exactly what you need to fix before it’s ready. C? Not so much.
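For context, a minimal sketch of the kind of newtype wrappers that produce that error (the names are illustrative; a real project might reach for a units crate instead):

```
// Hand-rolled unit newtypes: mixing them up is a compile error,
// and any conversion has to be spelled out explicitly.
#[derive(Clone, Copy, Debug)]
struct Meters(f64);
#[derive(Clone, Copy, Debug)]
struct Feet(f64);

impl Feet {
    fn to_meters(self) -> Meters {
        Meters(self.0 * 0.3048)
    }
}

fn hover(altitude: Meters) {
    println!("hovering at {:?}", altitude);
}

fn main() {
    let altitude1 = Meters(100.0);
    let altitude2 = Feet(328.0);
    hover(altitude1);
    // hover(altitude2);        // error[E0308]: expected `Meters`, found `Feet`
    hover(altitude2.to_meters()); // the conversion must be explicit
}
```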
```
error: passing 'Feet' to parameter of incompatible type 'Meters'
20 | hover(altitude2);
```
Coming from a dynamically typed language (Python, etc), this might seem like a revelation, but it's old news since the dawn of programming computers. A C language server will pick this up before compile time, just like `rust-analyzer` does: `argument of type "Feet" is incompatible with parameter of type "Meters"`.
Did you not know this? I feel like a lot of people on message boards criticizing C don't know that this would fail to compile and the IDE would tell you in advance...
> An investigation attributed the failure to a measurement mismatch between two measurement systems: SI units (metric) by NASA and US customary units by spacecraft builder Lockheed Martin.[3]
That was only the proximate cause, the ultimate cause was cultural. As complex systems and efforts run into problems, it is trivial to blame the unit conversion when they had been ignoring people for months who had concerns [0]
> ... ground controllers ignored a string of indications that something was seriously wrong with the craft's trajectory, over a period of weeks if not months. But managers demanded that worriers and doubters "prove something was wrong," even though classic and fundamental principles of mission safety should have demanded that they themselves, in the presence of significant doubts, properly "prove all is right" with the flight
Dropping units on the NASA side also was problematic but really culture was the cause of the actual crash.
That's technically true, but if NASA's API accepted arguments in Meters instead of int32_t (or whatever the equivalent was in the language they used), then it would've been instantly obvious that the controller code that Lockheed Martin wrote was using the wrong datatype.
Do we know how the code was called - was it linked in, or was it via IPC? (The latter seems most likely to me, and then the question is whether the IPC framework supports a rich type system.)
That was exactly what I was thinking of when I wrote that.
But also think of how many libc functions take multiple ints or multiple chars in various orders. You can get carried away with typing, i.e. by having a separate type for everything*. Still, imagine you’re writing, say, a hypothetical IDE device driver and had separate types for BlockNumber and ByteInBlock so that it’s impossible to transpose read(byte_offset,block) instead of read(block,byte_offset), even if those are really the same kind of numbers.
That kind of thing makes a gazillion bugs just vanish into the ether.
I sometimes think about a related issue: suppose you have a function whose n parameters have n different types. Should the programmer be required to provide those parameters in a specific order? There's no ambiguity.
There appears to be some tension between different conveniences you might afford yourself. If you have read(offset: offsetTypeForRead, address: addressTypeForRead), you can catch when someone accidentally passes an address where the offset should be and an offset where the address should be.
Or, you can say "hey, I'm always adding the offset to the address; it doesn't matter which one gets passed first" and relieve the programmer of needing to know the order in which two semantically distinct variables get passed to `read`.
But if you do provide that convenience -- and it's not unintuitive at all; there really is only one valid interpretation of a combination of an address and an offset, regardless of the order you mention them in -- you lose some of the safety that you wanted from the types. If your variables are declared correctly, everything is fine. If there's a mistake in declaring them, you'll wave through incorrect calls to `read` that would have been caught before.
Huh, that’s an interesting point, and I’d have to think on that. There are still plenty of cases where ordering would matter, like subtract(a,b), unless you go whole hog and define that like
fn sub(a:LeftOp, b:RightOp)
but that seems redundant. There are still plenty of other cases where I could see your idea being useful. Like I always forget whether (in Python) it's json.dump(file, data) or dump(data, file). Ultimately, should it matter? I'm passing a file handle and an object, and it's unambiguous how those two args relate to the task at hand.
How does this scheme of yours work with m/s and seconds?
IIUC, Rust would NOT let you do a type-checked m/s * s => m, so using the type system for these kinds of games is silly and dangerous (I presume you would have to do the dumb thing and type-convert to the same type -- e.g.
(m) (speed * ((m/s) seconds))
to do multiplication, which means you're inserting unscientific and reader-confusing type conversions all over the place)
That's no problem in Rust. You can have a Speed type and a Time type, and define the multiplication operator over them to return a Length value. In fact, there's already a library which does exactly that, with tons of predefined SI units: https://docs.rs/uom/latest/uom/
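To show the mechanism, a minimal hand-rolled sketch (invented type names; uom's real API is far richer, with generic storage types and full SI dimensions):

```
use std::ops::Mul;

#[derive(Clone, Copy, Debug)]
struct MetersPerSecond(f64);
#[derive(Clone, Copy, Debug)]
struct Seconds(f64);
#[derive(Clone, Copy, Debug)]
struct Meters(f64);

// Speed * Time = Length, enforced by the type checker.
impl Mul<Seconds> for MetersPerSecond {
    type Output = Meters;
    fn mul(self, rhs: Seconds) -> Meters {
        Meters(self.0 * rhs.0)
    }
}

fn main() {
    let distance = MetersPerSecond(3.0) * Seconds(10.0);
    println!("{:?}", distance); // Meters(30.0)
    // MetersPerSecond(3.0) * Meters(10.0) would not compile:
    // there is no `Mul<Meters>` impl for `MetersPerSecond`.
}
```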
A related library is Sguaba [1] from Helsing AI, written by Jon Gjengset; it allows you to define coordinate systems at the type level, with safe conversion and calculation between them.
I love that sort of thing. It so much easier to get up and running with plausible results when it’s all but impossible to misuse the API. “Darn it, it’s making me cast Celsius to Meters before I call this function. Hey, wait, that can’t be right…”
I'm very confused, explain how this is not the case with C?
I haven't written Rust, but my impression is the benefit is more about deeper introspection of things like lifetimes than basic type safety, which already exists in C/C++ (and is likewise occasionally bypassed for convenience, so I wonder how often the same is done for Rust).
Nah, you’re just not good enough. I for example would have never made that mistake when calling hover(int32_t) in C. And on the off chance I did, my reviewers would have caught such a simple mistake because they too are excellent C developers.
> Especially coming from the world of memory managed languages.
If people from that world complain about Rust, I surely wouldn't want them around a C codebase.
There's nothing wrong about memory-managed languages, if you don't need to care about memory. But being unfamiliar with memory and complaining about the thing that helps you avoid shooting yourself in the foot isn't something that inspires trust.
The hardship associated with learning Rust isn't going to go away if they do C instead. What's going to happen instead is that buggy code will be written, and they will learn to associate the hardship with the underlying problem: managing memory.
The problem here is that C is too basic, dated, with inadequate higher-level abstractions, which makes writing robust and secure software extra difficult and laborious. "Learning underlying hardware" doesn't solve that at all.
Debian supports dozens of architectures, so it needs to abstract away architecture-specific details.
Rust gives you as much control as C for optimizing software, but at the same time neither Rust nor C really expose actual underlying hardware (on purpose). They target an abstract machine with Undefined Behaviors that don't behave like the hardware. Their optimisers will stab you in the back if you assume you can just do what the hardware does. And even if you could write directly for every logic gate in your hardware, that still wouldn't help with the fragility and tedium of writing secure parsers and correct package validation logic.
Assuming you're right about the language being too hard for most people, the outcome I'd expect given the history of computing is that the language will fail or be stuck in a niche
Time and time again, theoretically worse solutions that are easily accessible win
Debian just cut i386, Wikipedia says the i386 was discontinued in 2007. These systems are all of the same vintage, so it does not seem a huge leap to be culled from the support list.
The actual Intel 80386 processor was discontinued in 2007, but the i386 architecture -- ia32, 32-bit x86 -- lived on for longer in the semi-mainstream (there were 32-bit Intel Atoms introduced to the market as late as 2012, AMD Geodes were sold until at least 2019, and I believe some VIA C3/C7 derivatives made it into the 2010s as well) and is in fact still in production to this day for the embedded and industrial markets (Vortex86, etc).
i386 (32-bit only) processors were discontinued, but 64-bit processors can operate in 32-bit mode, so the toolchain was still widely available and there was still demand for an i386 OS that would run on modern hardware in i386 mode for some ancient software.
Yeah, and you can still run i386 binaries on widely available amd64 CPUs. So this is an even stronger argument for killing these other obsolete platforms.
> Who is still using these machines? Genuine question, not trolling.
Either legacy systems (which are most certainly not running the current bleeding-edge Debian) or retro computing enthusiast.
These platforms are long obsolete and there are no practical reasons to run them besides "I have a box in the corner that's running untouched for the last 20 years" and "for fun". I can get a more powerful and power efficient computer (than any of these systems) from my local e-waste recycling facility for free.
It’s usually a loud minority of trolls or hobbyists. It just takes one to spark a doubt.
Here is one famous example of a dude who’s managed to get PRs merged in dozens of packages, just to make them compatible with ancient versions of nodejs https://news.ycombinator.com/item?id=44831811
Wow that was an interesting read. I find it amusing that nobody seems to really know who he is or what his motives are, yet his code is run on millions of machines every day.
Sure, but almost nobody uses or wants modern linuxes on those machines. It's almost always described (in their own words) as "some ancient crusty version of Debian"
> Nobody wants to play games on Linux given the small userbase compared to Windows.
According to the last Steam survey, 3% of players use Linux. Steam has 130 million active players, so that means there are 4 million people playing on Linux. Definitely not "nobody", and way bigger than the whole retrocomputing community.
By the way, I am also one of those retrocomputing guys, I have a Pentium 2 running Windows 98 right here. IMHO, trying to shoehorn modern software on old hardware is ridiculous, the whole point of retro hardware is using retro software.
> Who is still using these machines? Genuine question, not trolling.
Well, there are so many things where you could argue about the relevance of a userbase.
If the size of a userbase were the only argument, Valve could just drop support for the Linux userbase, which is just 2-3% of their overall userbase.
Not your point, but Linux compatibility is Valve protecting themselves from the existential risk that is a capricious Microsoft. At one point, it seemed Microsoft was trying to make the Microsoft Store be the mechanism for distributing all software. Linux being viable for gaming is/was their safety net to avoid being locked out from the ecosystem.
popcon.debian.org reports 3 alpha installations out of 261750 total installations. Assuming comparable opt-in rates, fewer than 0.002% of users are using alpha.
The other mentioned architectures hppa, m68k and sh4 are at a similar level.
they might run Debian but not upstream Debian/stable
you mainly find that with systems needing certification
these are the kind of situations where having a C language spec isn't enough; you instead need a spec for the specific compiler version
similarly, they tend to run the same checkout of the OS with project-specific security updates back-ported to it, instead of doing generic system updates (because every single update needs to be re-certified)
but that is such a huge effort that companies don't want to run a full OS at all. Just the kernel and the most minimal choice of packages you really need, and not one more binary than that.
and they might have picked Debian as an initial source for their packages, kernel, etc., but it isn't really Debian anymore
If we are talking about embedded control systems, no, you don't want new software, you want your machine to do what it is supposed to do. At my workplace we have some old VME crates running VxWorks, and nobody is gonna update those to the latest Linux distro.
This is incorrect. Internet connections and 3rd party integrations have changed this view of “the software doesn’t need to change once it leaves the factory”.
John Deere, Caterpillar, etc are leaning heavily into the “connected industrial equipment” world. GE engines on airplanes have updatable software and relay telemetry back to GE from flights.
The embedded world changed. You just might have missed it if your view is what shipped out before 2010.
My experience is in big scientific experiments like particle accelerators, I guess other fields are different. Still, my experience is that:
1) The control network is air gapped, any kind of direct Internet connection is very much forbidden.
2) Embedded real-time stuff usually runs on VxWorks or RTEMS, not Linux. If it is Linux, it is a specialized distro like NI Linux.
3) Anything designed in the last 15 years uses ARM. Older systems use PowerPC. Nobody has used Alpha, HPPA, SH4 or m68k in ages. So if you really want to run Debian on it, just go ahead and use Armbian.
Here’s Debian’s “Supported Architectures”: https://wiki.debian.org/SupportedArchitectures. These platforms are all in ‘unofficial’ status (like, they work, but are not officially supported by the core Debian project).
Who is actually _running_ Debian Trixie on these platforms now?
It is counter-intuitive to me that these platforms are still unofficially supported, but 32-bit x86 [edit: and all MIPS architectures!] are not!
I am emotionally sad to see them fall by the wayside (and weirdly motivated to dig out a 68k Amiga or ‘very old Macintosh’ and try running Trixie…) but, even from a community standpoint, I find it hard to understand where and how these ports are actually used.
Can you actually install Debian on an Amiga - or other 68k system - now? I’ve been searching around the web and I haven’t found much evidence that you can.
This is not intended to bash you or anyone else who’s working on it - I think it’s a cool project (I have in the recent past got an 86duino ZERO to run Gentoo, just to see if an obscure old-ish piece of hardware can be useful with modern Linux on it - and it can). I do understand the reason a project like Debian might not want to have to spend resources even just to make it easier to do though.
I didn't find what Debian version they tried but I think it's implied it's a recent version. They ran into memory issues. They had only 48MB while the recommendations are to use 64MB. It did boot though until it threw errors because of memory constraints.
They got a working system by trying Debian 3.1 though.
I was running Debian unstable on my 64-bit SPARC up until I moved a few months ago and haven't set the system back up again while unpacking.
It's been somewhat useful for finding weird edge cases in software where for whatever reason, it doesn't reproduce easily on AArch64 or x86, but does there. (Or vice-versa, sometimes.)
I don't know that I'd say that's sufficient reason to motivate dozens of people to maintain support, but it's not purely academic entertainment or nostalgia, for that.
m68k has an LLVM port already, so Rust can be implemented for that platform.[0] It would be nice to have LLVM backends for alpha, hppa and sh4 - these older architectures tend to be quite simple, so a working LLVM backend has plenty of value as a reference and for educational use.
(LLVM even used to have an in-tree DEC Alpha backend, though that was back in 2011 and not relevant to any version of Rust.)
LLVM is desirable for other reasons (LLVMpipe for example), so investing into an LLVM port is probably a better use of limited resources than improving and maintaining a retargeted rustc for GCC.
no, as all conditionally-compiled platform-specific code is missing.
So using it with #[no_core] should work (assuming the WIP part of the backend isn't a problem). But beyond that you have to first port libcore (should be doable) and then libstd (quite a bunch of work).
As far as I understand it, the m68k LLVM port is not ABI-compatible with GCC on Linux because the alignment is wrong: https://wiki.debian.org/M68k/Alignment (page says that the LLVM packages are FTBFS because of this)
The principled way to address this would be to define a new ABI suffix for the m68k-unknown-linux-gnu target triple, replacing the existing 'gnu' which would retain GCC compatibility with 2-byte alignment for int.
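As a purely illustrative aside (not from the thread), the practical effect of such an alignment mismatch can be sketched in a few lines of Rust: if GCC gives int 2-byte alignment on m68k while the LLVM side assumes stricter alignment, the two compilers disagree about field offsets and struct sizes, so anything passed across the FFI boundary gets misread.

    // Illustration only: struct layout depends on the alignment the ABI assigns to i32.
    #[repr(C)]
    struct Pair {
        a: u8,
        b: i32, // offset 2 under 2-byte int alignment, offset 4 under 4-byte alignment
    }

    fn main() {
        // Prints the layout the compiling toolchain assumes; per the wiki page above,
        // GCC and LLVM-based compilers currently disagree about this on m68k.
        println!(
            "size = {}, align = {}",
            std::mem::size_of::<Pair>(),
            std::mem::align_of::<Pair>()
        );
    }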
sure, but most frequent/core contributors for Linux (kernel) and most big distros are paid by some company with commercial interests, which mostly don't involve spending time on maintaining any of these old architectures
sure, some are also paid by a foundation. Which is also paid by companies, but with a degree of decoupling of influence.
and some pay themselves, i.e. fully voluntary work, but most devs can't afford to do so in a long-term, high-time-commitment manner. So a lot of major changes and contributions end up coming from people directly or indirectly "paid" by some company.
and that's pretty common across most "older, larger, sustainable and still developed OSS"
I think that's unkind absent any proof otherwise which you haven't posted.
In reality, a lot of the heavily used parts of Linux (and open source generally) are going to have some commercial involvement. Or are we suggesting that no one should be paid by any of the companies backing and using Linux? Because to me that sounds worse.
Whether you like it or not, Linux/Open Source hasn't been entirely the preserve of unpaid people doing it for fun for a long time.
My whole point, which you've missed, is that the _interest_ is what we should be looking at. I don't care if he says he does or doesn't act on it. He has the interest.
The whole thread started someone making the wrong point that:
>They are of no commercial interest to Ubuntu.
Which is plainly false.
Julian can believe whatever he wants, in fact the more naively idealistic he is the better for Canonical.
The fact of the matter is that corporate Linux benefits from impossible-to-run setups. That's the whole business model. The more convoluted the setup is, the more money they make. Rust fits into this business model perfectly.
You'd think people would have learned this after the Nth rug pull from supposed champions of free software like RedHat.
more relevantly, they are all "unofficially" supported architectures, and not in a "might get official support" sense, so they have been of very limited interest to Debian as a whole for quite a while
Those all seem to be completely obsolete so I guess they can just stay on the latest version of Debian that supports them, or make their own distro. (Or add Rust support I guess but that's probably not realistic.)
Most don't need to be rebranded. Alpha and hppa are retro computing and haven't been available to buy for 18 and 15 years respectively. SH4 died this year. Only m68k is still holding on, but it is a rounding error in the number of users.
Aren’t m68k computers only a few models from the 1990s and 1980s, and some more recent hobby projects? That’s squarely in the retro computing enthusiasts category.
I’m not in the Debian world, but those do seem to me like the types of systems that could use their own specialized distros rather than being a burden to the mass market ones. It’s not as if you could run a stock configuration of any desktop environment on them anyway.
m68k is used in a number of embedded systems today. It is unknown (to me) how many of those run linux (as opposed to some other embedded OS), but I would guess at least some do. I also don't know how many run (or want to run) debian vs something else (a custom Yocto distribution is my first guess), but that might be non-zero. It is possible someone is running a non-debian distribution and using debian packages to provide their updates.
All I find searching for “embedded m68k Linux distro” is people looking for, or coming up with, alternatives, as Debian was already “too big” fifteen years ago.
I don’t get the fuss around the “retro computing” verbiage. I doubt anyone is actually running Debian on these devices out of necessity; someone who plays baroque music on reconstructed period instruments won’t balk at being called an “early music” enthusiast.
Well, we are on a mission to create The Universal Operating System. So maybe.
But I'm not sure. I think the new Rust dependencies are good. In an ideal world, the people who care about niche systems step up to help Rust target those systems.
> In an ideal world, the people who care about niche systems step up to help Rust target those systems.
I’m actually the person who added the m68k target to the Rust compiler and was also one of the driving forces of getting the backend into LLVM.
Generally speaking, getting a new backend into the Rust compiler is not trivial as it depends on LLVM support at the moment which is why asking someone to just do it is a bit arrogant.
Luckily, both rustc_codegen_gcc and gccrs are being worked on, so this problem will be resolved in the future.
Sorry, I didn't mean to insinuate that there's anything minor about it, or that nobody is doing the work. I should have phrased myself differently.
I'll try to rephrase: if we never want to give up support for a platform we've supported in the past, then I think we only have two options: (1) never adopt new technology where support for said platforms doesn't come for free, or (2) leave it up to those who care about the niches to ensure support.
Neither is pain-free, but the first seems like a recipe for stagnation.
It's lovely to see the two alternative compiler paths for Rust moving forward though! Thank you!
If anyone has a problem with the language used in the email, I would remind you that this is the same person who is maintainer for debian's keepassxc packages.
To be honest I don't really read insults either in this e-mail or in the thread you linked. If I'm seeing it right, there's only one comment by the guy in that thread, right? That comment is direct and uses language that may be considered unprofessional ("crap"/"crappy"), but it's not insulting the users (they are not referred to as crappy). Same for the e-mail.
I don’t think the language is unprofessional, it’s direct and it states his opinion.
The one demanding it is the maintainer of keepassxc; it would’ve been better to just close the issue, saying this is a Debian-only problem and he should install it like that.
mainly, people have issues with clear, precise and concise language about intent of action instead of, say, a request for discussion
now this is separate from being open to discussion if someone has good arguments (which aren't "you break something which isn't supported and is only niche-used"), and some claim he isn't open to arguments
and tbh, if someone exposes users to an actually relevant security risk (1) because the change adds a bit of defense-in-depth security (2), and then implicitly denounces them for "wanting crap", this raises a lot of red flags IMHO.
(1): Copy-pasting passwords is a very bad idea; the problem is phishing attacks with "look-alike" domains. Your password manager won't fill them out; your copy-paste is prone to falling for it. In addition there are other smaller issues related to clipboard safety and similar (hence why KC clears the clipboard after a short time).
(2): Removing unneeded functionality which could have vulnerabilities. Except we're speaking about code from the same source which, if not enabled/set up, does pretty much nothing (it might still pull in some dependencies, though).
The level of knee-jerk reaction to anything Rust into traditionally C projects borders on the pathological. That email is about as polite as it gets without being coddling.
Do keep in mind that a lot of the people involved in these sorts of things are neurodiverse in some ways, and may have significant trouble dealing with change.
As teh64 helpfully pointed out in https://news.ycombinator.com/item?id=45784445 some hours ago, 4ish years ago my position on this was a total 180 and I'd have had the same reaction to now-me's proposal.
All these changes require work. Because of this, other priorities will get less attention. It would be ironic if bad security flaws were missed or introduced because of all the work switching to Rust. It's also very likely that all the new code written in Rust will be far less mature than the existing source bases. So the outcome might be (very probably, actually) a lot of work to worsen security.
Most of the academic research into these sorts of type-safe languages usually returns the null result (if you don't agree, it means you haven't read the research on this topic). That's researcher-speak for "it didn't work and you shouldn't be using these techniques". Security is a process, not a silver bullet, and 'just switch to Rust' is very much a silver bullet.
It's not like I'm in a hurry to switch to Rust and will spend full steam on it. It's amongst the lowest priority items.
A lot of the Rust rewrites suffer a crucial issue: they want a different license than what they are rewriting and hence rewrite from scratch because they can't look at the code.
But here we're saying: Hey we have this crucial code, there may be bugs hidden in it (segfaults in it are a recurring source of joy), and we'll copy that code over from .cc to .rs and whack it as little as possible so it compiles there.
The problem is much more with, for example, the configuration parser, which in a sense desperately needs a clean rewrite, as it's way too sloppy, and that's making it hard to integrate.
In an optimal world I'd add annotations to my C++ code and have a tool that does the transliteration to Rust at the end; like when the Go compiler got translated from C to Go. It was glorious.
What a horrible mindset. I'll never understand this "security" argument.
It is our responsibility to our users to provide them the most secure option possible as the default.
Removing features is not the most secure option possible. Go all the way then and remove everything. Only when your computer cannot do anything it will be 100% secure.
> Removing features is not the most secure option possible.
If I have a program that encrypts and decrypts passwords, then the surface area is way smaller than if it also has browser integrations and a bunch of other features. Every feature has the potential to make this list longer: https://keepass.info/help/kb/sec_issues.html which applies to any other piece of software.
At the same time, people can make the argument that software that's secure but has no useful features also isn't very worthwhile. From that whole discussion, the idea of having a minimal package and a full package makes a lot of sense - I'd use the minimal version because I don't use that additional functionality, but someone else might benefit a bunch from the full version.
A password program that integrates with your browser reduces a lot of attack surface. If you can't directly talk to the browser, that implies the clipboard, which in turn means other programs on your system can see the password.
That doesn't sound right to me; it's a legitimate concern that for a package whose core use-case is X, the package has an obscure feature Y, and the mere existence of Y can cause security issues for a user even when the user never intended to use it.
Very concrete example, the whole Log4j vulnerability issue was basically just a direct implication of a feature that allowed for arbitrary code execution. Nearly no user of Log4j intentionally used that feature, they were all vulnerable because Log4j had that feature.
The fix to the CVE was effectively to remove the feature. If someone had the foresight to try to reduce Log4j to only the features that ~everyone actually used, and publish a separate Log4j-maximal for the fringe users that intentionally use that feature, it would have prevented what was arguably the worst vulnerability that has ever happened.
In the case this thread is about, no one seems to deny that there should be 'minimal' and 'full' versions and that the 'minimal' version is going to be more secure. The entire flame war seems to be over whether it's better to take a preexisting package name and have it be the minimal one or the full one.
That is simply a tradeoff between "make preexisting users who don't use ancillary features be as secure as possible by default going forward" or "make preexisting users who do use ancillary features not broken by upgrades".
> That doesn't sound right to me; it's a legitimate concern that for a package whose core use-case is X, the package has an obscure feature Y, and the mere existence of Y can cause security issues for a user even when the user never intended to use it.
In this case it is not clear at all whether the feature is obscure. For most people it could be actually essential and the primary requirement for the whole software.
But many users were relying on these features. Hence the bug report.
This is literally the same as helping a relative to make their computer more secure by turning it off. Problem solved I guess?
If you made a mistake by shipping insecure defaults you could fix it e.g. by including a banner to use the minimal version to users that don't use the extra features. But simply rug-pulling everybody for "security" and doubling down by insulting the affected users? I really do not understand people that act like this.
Just annoys me that he calls features "crap" just because he likely doesn't use them personally and ends that post with a random sentence claiming such a version "increases the risk of drive-by attacks" with zero evidence.
The developer explains the features aren't plugins and aren't even enabled by default.
Arrogance from maintainers like this from within Debian is what will hurt it far more than any external entity.
Exactly, this rude and insulting behavior is why many people shy away from open source. Not everybody has the time and mental capacity to engage in ideological battles about software architecture.
We should really hold more value to keeping existing user setups working. Breakages are incredibly damaging and might very well have a bigger impact than insecure defaults.
> he calls features "crap" just because he likely doesn't use them personally
"All of these features are superfluous and do not really belong in a local password database manager" seems to me like a pretty clear explanation of what is "crap" about them, and it seems pretty clearly not to be about personal taste.
Unfortunately, this kind of culture where you joyfully screw over your real users to chase the approval and benefit of some spherical user in a vacuum that you would like to cater to has become endemic in the free software world. It probably started with GNOME 3 (deliberately strip-mined of functionality, actively hostile to customisability, wasteful of screen space, all for the sake of chasing some HCI-prophesied transition to mobile touch devices which never came for desktop Linux), but was perfected by Mozilla in the name of security.
I'm not sure if it's an insecurity thing or an immaturity thing, but when all these stories pop up, I always wonder why rust enthusiasts don't just prove their point by making their own "modern" and non-"retro" tech. If you can make something better, just do it already, and people will switch to it when they see the benefits. This parasitic "you must accept rust in your long-standing project" model is so off-putting, as is always evident by the complaints it causes. I love projects like Redox that try to do their own thing... why doesn't the rust community rally around projects like that and turn them into cve-free masterpieces that people will want to use?
This email is from a Debian maintainer, about Debian introducing a new hard dependency on Rust. It's not some random Rust advocate telling Debian folks that they should use Rust against their will.
Yes there are absolutely some obnoxious "you should rewrite this in Rust" folks out there, but this is not a case of that.
There are like 1000 Debian maintainers, right? This person doesn't speak for the project as a whole, and as far as I can tell he is telling Debian folks they will be accepting rust whether they want it or not, and whether their preferred architecture is supported or not. Maybe there was some organizational vote on this, but if so it isn't referenced in the thread. It says "I plan", not "Debian decided to".
And regardless, my point is it would be more sensible to say "I'm going to introduce an oxidized fork of apt and a method to use it as your system apt if you prefer" and then over the next year or so he could say "look at all these great benefits!" (if there are any). At that point, the community could decide that the rust version should become the default because it is so much better/safer/"modern"/whatever.
You seem to think of "rust enthusiasts" as some organized group with a goal of writing Rust for the sake of it. Rust is long past such extremely early adopter phase.
What you're seeing now is developers who are interested in writing a better version of whatever they're already working on, and they're choosing Rust to do it. It's not a group "Rust enthusiasts" ninjas infiltrating projects. It's more and more developers everywhere adopting Rust as a tool to get their job done, not to play language wars.
Nah, I called out redox and another commenter pointed out ripgrep as an even better example of what I’d prefer to see, and those are also by what I would call rust enthusiasts. I don’t think of them as a monolithic group.
Where we disagree is I would not call injecting rust into an established project “writing a better version”. I would love it if they did write a better version, so we could witness its advantages before switching to it.
> my point is it would be more sensible to say "I'm going to introduce an oxidized fork of apt and a method to use it as your system apt if you prefer" and then over the next year or so he could say "look at all these great benefits!" (if there are any). At that point, the community could decide that the rust version should become the default because it is so much better/safer/"modern"/whatever.
That's not how open source software development works.
I wasn't asked by Linus whether ipchains should become the default over ipfirewall nor whether iptables should become over ipchains.
I wasn't asked whether GCC should use C++ instead of C as the language to build GCC itself.
I can go on with lots of examples.
Why should APT be different and require the maintainers to fork their own project to introduce changes? Why should an undefined "community" (who is that? apparently not the APT developers...) decide? Does this have to be done for every code change in APT?
Exactly! People love ripgrep because of its clear advantages, and had the developers been interested in making posix mode for it, I'm certain at least some distros would have made it the default by now.
I really like to write programs in Rust. But my stance has changed a bit over the years, ever since other languages caught up a bit. On top of that, I'm very skeptical whether the rewrite of an ancient tool brings more security. I don't know the apt source code or how it actually works behind the CLI interface, so I leave this judgement to the pros. But there seems to be a very strong move to rewrite all core systems in Rust. My issue with that is the fact that these tools don't even invent anything new, or change / improve the status quo. I understand that it's hard to introduce a new system without breaking other stuff. But our systems are still based on decisions from the telegraph age. Layers on top of layers on top of layers.
I've heard two arguments for these rewrites that don't always come up in these discussions. There are fair counterpoints to both of these but I think they add valuable dimensions to the conversation, or perhaps may explain why a rewrite may not seem justified without them.
* It's becoming increasingly difficult to find new contributors who want to work with very old code bases in languages like C or C++. Some open source projects have said they rewrote to Rust just to attract new devs.
* Reliability can be proven through years in use but security is less of a direct correlation. Reliability is a statistical distribution centered around the 'happy path' of expected use and the more times your software is used the more robust it will become or just be proven to be. But security issues are almost by definition the edgiest edge cases and aren't pruned by normal use but by direct attacks and pen testing. It's much harder to say that old software has been attacked in every possible way than that it's been used in every possible way. The consequences of CVEs may also be much higher than edge case reliability bugs, making the justification for proactive security hardening much stronger.
Yeah, I get the point about attracting young blood. But I wonder if the core utils which have been rewritten got rewritten by the original maintainers? And again the question: why not simply write something new, with a modern architecture etc., rather than drop-in replacements?
On your second part: I wonder how the aviation, space and car industries do it. They rely heavily on tested / proven concepts. What do they do when introducing a new type of material to replace another one, or when a complete assembly workflow gets updated?
> And again the question why not simply write something new.
The world isn't black or white. Some people write Rust programs with the intent to be drop-in compatible programs of some other program. (And, by the way, that "some other program" might itself be a rewrite of an even older program.)
Yet others, such as myself, write Rust programs that may be similar to older programs (or not at all), but definitely not drop-in compatible programs. For example, ripgrep, xsv, fd, bat, hyperfine and more.
I don't know why you insist on a world in which Rust programs are only drop-in compatible rewrites. Embrace the grey and nuanced complexity of the real world.
> And again the question why not simply write something new.
There is a ton of new stuff getting written in Rust. But we don't have threads like this on HN when someone announces a new piece of infra written in Rust, only when there's a full or partial rewrite.
Re automotive and other legacy industries, there's heavy process around both safety and security. Performing HARAs and TARAs, assigning threat or safety levels to specific components and functions, deep system analysis, adding redundancy for safety, coding standards like MISRA, etc. You don't get a lot of assurances for "free" based on time-proven code. But in defense there's already a massive push towards memory safe languages to reduce the attack surface.
Because of backwards compatibility. You don’t rewrite Linux from scratch to fix old mistakes, that’s making a new system altogether. And I’m pretty sure there are some people doing just that. But still, there’s value in rewriting the things we have now in a future-proof language, so we have a better but working system until the new one is ready.
Sorry, I will answer this because I feel people got a bit hung up on the “new” thing. Might be a language barrier. I really understand the reasons, backwards compatibility etc. The point I tried to make is that we really spend tons of time either maintaining software that was written or “born” 50 or so years ago, or rewriting things in the same spirit.
I mixed my comments with the security aspect, which might have muddled a lot of what I tried to say with the “new” part.
One sees this also on HN. I love the UNIX philosophy and also the idea of POSIX. But it’s treated as if it is the holy grail of OS design and, in the case of POSIX, the only true cross-platform schema.
Look also at the boot steps a CPU has to run through to boot up: pretending to be a 40-year-old variant and then starting up features piece by piece.
Well, I hope that clears up my point :)
Writing tools that are POSIX compatible doesn't mean one puts it on the pedestal of the "holy grail of OS design." I've certainly used POSIX to guide design aspects of things I build. Not because I think POSIX is the best. In fact, I think it's fucking awful and I very much dislike how some people use it as a hammer to whinge about portability. But POSIX is ubiquitous. So if you want your users to have less friction, you can't really ignore it.
And by the way, Rust didn't invent this "rewrite old software" idea. GNU did it long before Rust programmers did.
uutils/coreutils is MIT-licensed and primarily hosted on GitHub (with issues and PRs there) whereas GNU coreutils is GPL-licensed and hosted on gnu.org (with mailing lists).
EDIT: I'm not expressing a personal opinion, just stating how things are. The license change may indeed be of interest to some companies.
The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.
Using GitHub is unacceptable as it is banning many countries from using it. You are excluding devs around the world from contributing. Plus it is owned by Microsoft.
So we replaced a strong copyleft license and a solid decentralized workflow with a centralized repo that depends on the whims of Microsoft and the US government and that is somehow a good thing?
> The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.
That is not at all true. If someone were to change the license of a project from MIT to something proprietary, the original will still exist and be just as available to users. No freedom is lost.
I mean, sadly, even though I hate the bans, the exclusion is really insignificant in the grand scheme of things, and given the benefits GitHub brings, most consider it an acceptable tradeoff. I am sadly one of those people: I am fairly young (25) and my introduction to git happened with GitHub, so I am really used to it. Though I am also developing a code forge as a hobby project, and maybe something serious in the long term.
There is also another crowd that completely aligns with US foreign policy and also has the same animosity towards those countries' citizens (I've seen a considerable number of examples of this).
For the license part, I really don't get the argument: how can a coreutils rewrite get rug-pulled? This is not a hosted service where a minio [1] [2] like situation can happen, and there is always the original utils if something like that were to happen.
2 GNU coreutils maintainers, including myself, monitor the issues and PRs on a GitHub mirror that we have [1]. Generally the mailing list is preferred though, since more people follow it.
People have to learn on some project. Why not something that’s simple to test against? You know what it should do, so let’s rewrite it!
Whether the rewrite should be adopted to replace the original is certainly a big discussion. But simply writing a replacement isn’t really worth complaining about.
I think polyglot causes more problems than it solves. It is gross how many different toolchains and package managers it now takes to build a distro. One person wants python, another wants node, another wants go, and now this. with node we traded buffer overflows for supply chain attacks. If they don’t want C, it would be better to start fresh. Robert Morris re-wrote enough of Linux in golang to be usable, and the overhead was something like 5-15% slower than C. If the goal is Rust everywhere, contribute to Redox. They are further along that road.
There needs to be a limit for each project. Debian is a large project so it needs to have more options than smaller projects. Rust is getting popular enough it is reasonable for Debian to say it is an approved option.
Note that I'm not saying Debian should, I'm saying it is reasonable that they would. I am not a Debian maintainer and so I should not have an opinion on what tools they use, only that adding Rust isn't unreasonable. It may be reasonable to take away a different tool to get Rust in - again this is something I should not have an opinion on but Debian maintainers should.
Unfortunately, the world is a complicated place, and each of these languages has its own benefits and tradeoffs that suit one problem domain or another (ask an ML scientist to switch to raw C), leading to all of these languages having a valid place in the pantheon of software (except maybe JS). Since Debian is a pragmatic OS, it needs to adapt to solve the real problem of being generally usable, and thus support all of these languages.
Rewriting everything in one language would be a massive pain and likely a massive waste of time, and supporting an OS with less reputation and stable footing like Redox would be almost as counterproductive, if not more so, than rewriting everything in Debian from scratch (it's a bit hyperbolic to state the goal is to Rewrite Everything in Rust), so supporting the gradual replacement of some mission-critical components like the apt parser or whatever they're talking about is likely more realistic. Although an OS definitely shouldn't "move fast and break things" (especially not one like Debian), I don't think it's too ridiculous to drop support for architectures that can't support a language that was released almost a decade ago. Having a proven language (I think it's safe to say Rust is proven by now, right?) that is much less prone to self-combustion on modification than C, yet maintains a directly compiled nature as well as interfacing relatively well with normal C libraries in standard applications, is a pretty good value proposition in my opinion.
Applications vs Infrastructure: When stand-alone applications are in completely different languages, that is normal and reasonable and fine. When it takes 5-10 different programming languages just to build and manage the base system, that is an engineering failure and a mess.
Wouldn't it make sense to wait for (or support) one of the rust-for-GCC ports to become viable? As far as I understand, rust in the kernel won't become mandatory either until it's supported by GCC, and as a boon, with multiple implementations you can be more certain that the language won't move as fast and break things anymore. There's already upstream rust support in GCC, so I don't reckon it's that far off from being usable, at least for projects choosing to target it specifically.
Furthermore, if these architectures are removed from further debian updates now, is there any indication that, once there's a rust toolchain supporting them, getting them back into modern debian wouldn't be a bureaucratic nightmare?
> Furthermore, if these architectures are removed from further debian updates now, is there any indication that, once there's a rust toolchain supporting them, getting them back into modern debian wouldn't be a bureaucratic nightmare?
These architectures aren't being removed from Debian proper now, they already were removed more than a decade ago. This does not change anything about their status nor their ability to get back into Debian proper, which had already practically vanished.
none of the listed architectures have official Debian support anymore
i.e. they are only still around because they haven't caused any major issues and someone bothered to fix them up from time to time in their own free time
so yes, you probably won't get them back in once they are out, as long as a company doesn't shoulder the (work time) bill for it (and by that I mean long-term maintenance more than the cost of getting them in)
but for the same reason they have little to no relevance when it comes to any future changes which might happen to get them kicked out (as long as no company steps up and shoulders the (work time) bill for keeping them maintained)
> There's already upstream rust support in GCC, so I don't reckon it's that far off from being usable, at least for projects choosing to target it specifically.
The GCCRS project can't even build libcore right now, let alone libstd. In addition, it is currently targeting Rust 1.50's feature set, with some additions that the Linux kernel needs. I don't see it being a useful general purpose compiler for years.
What's more likely is that rustc_codegen_gcc, which I believe can currently build libcore and libstd, will be stabilised first.
If only a reason were given. This is the original:
> Rust is a security nightmare. We'd need to add over 130 packages to main for sequoia, and then we'd need to rebuild them all each time one of them needs a security update.
What has changed? Why is 130 packages for a crypto application acceptable?
It depends if the original opinion was a reasoned one or just based on personal feelings.
The dependency explosion is still a problem and I’m not aware of any real solution. It would have been interesting to see why their opinion changed… I’m guessing it’s as simple as the perceived benefits overriding any concerns and no major supply-chain attacks being known so far.
I would much rather work with someone that other people find abrasive but who communicates much like myself and is clearly open-minded, than the opposite.
This is the same maintainer who broke KeePass on Debian and then flipped off everyone in the thread. Someone needs to pull him aside and let him know the world does not revolve around him and the problems he chooses to manufacture to justify his paycheck.
I'm happy for all developers programming in their favorite programming languages. Programming for over 30 years I have seen entire ecosystems come and go.
What I don't get is the burning need for Rust developers to insult others. Kind of the same vibes that we get from systemd folks and LP. Does it mean they have psychological issues and deep down in their heart they know they need to compensate?
I remember C vs Pascal flame back in the day but that wasn't serious. Like, at all. C/C++ developers today don't have any need to prove anything to anyone. It would be weird for a C developer to walk around and insult Rust devs, but the opposite is prevalent somehow.
IMHO, Rust is proof that many programmers prefer over-engineering and unnecessary complexity with obtuse syntax over sound language design. My personal theory is that they subconsciously like to keep their craft esoteric and "magic." The importance of readability, simplicity, and KISS just isn't taught enough nowadays.
The general problem is that there’s a lot of activists online nowadays. I used to call myself an activist, but now it’s a dirty word which I associate with obsessive behaviour, harassment, cancellation attempts and being generally obnoxious with the purpose of achieving some goal.
I think it’s a combination of religion decreasing in importance and social media driving people mildly nuts. Many undertakings are collecting “true believers”, turning into their religion and social media is how they evangelize.
Rust is a pretty mild case, but it still attracts missionaries.
So, the people are different, Western society’s different and social media’s giving everyone a voice while bringing out the worst in them.
It makes me uncomfortable that this mandate is coming from a Canonical employee. After all, if this switch was a good idea on merit alone, it would happen organically without requiring this kind of combative communication.
Apt has just 3 listed maintainers, and judging by the git history this guy does 90% of the work. Him making the decision _is_ it happening organically.
Open source fundamentally is a do-ocracy (it's in literally all of the licenses). Those who do, decide; and more and more often those who do are just one or two people for a tool used by millions.
It's hard to imagine there is some malicious financial incentive in choosing a different language to write the package manager with...
The obvious potential motivations are things like making a more reliable product, or making their employees more productive by giving them access to modern tools... I guess I could imagine preparing for some sort of compliance/legal/regulatory battle where it's important to move towards memory safe tooling but even there I rather imagine that microsoft is better placed to say that they are and any move on canonical's part would be defensive.
Is this the end of Debian as GNU/Linux? The main Rust toolchain isn't GNU, gccrs is still incomplete and most Rust rewrites of existing GNU libraries and tools use MIT or other non GPL licenses.
The main python and perl toolchains were never maintained by GNU either. Python has never been distributed under a GPL license. I'm not 100% sure of the licensing history of perl but I think it's always been available under a non-GPL license (as well as being under a GPL license - at least recently - not sure if that was always the case).
This doesn't seem like a noteworthy change to the degree to which GNU/Linux is an accurate name... though there are lots of things I'd put more importance on than GNU in describing debian (systemd, for instance).
Edit: Looks like Perl 1.0 was under the following non-commercial license, so definitely not always GPL though that now leaves the question of licensing when debian adopted it, if you really care.
> You may copy the perl kit in whole or in part as long as you don't try to make money off it, or pretend that you wrote it.
One major point of heartburn with Rust is that it comparatively lacks the diversity of ISA targets that C broadly supports. I know some of this is because C is both relatively simple to write a basic compiler for that more or less just works (in comparison to something crazy like C++), and because it's been around for a long time, but why isn't there more of a push to add at least all of the supported Debian ISAs to the Rust compiler?
Most people don't write a basic compiler for C either, "relatively simple" or no. Most people would rather add a new target to an existing compiler, which is much easier.
It's also "relatively easy" to add a new backend to Rust.
The language used is tough love, and I think it's important despite what the first respondent has said.
Much of the language used seems to stem from nauseating interactions that have occurred in the kernel world around Rust usage.
I'm not a big fan of rust for reasons that were not brought up during the kernel discussions, but I'm also not an opponent of moving forward. I don't quite understand the pushback against memory safe languages and defensiveness against adopting modern tooling/languages
I think the spin that Rust is necessarily the way forward is what is wrong. IMHO Rust has severe problems, and what is considered "modern" is mostly taste. We have seen the same thing in the past with a push towards C++, Java, and managed languages.
What is new is that the free software movement is now controlled so much by corporate interests that some of these changes are pushed through aggressively against the interests of other parts of the community. In the past, if you wanted something changed and there was no agreement, you created a fork, and if it was truly better it was eventually adopted by the majority. Nowadays, the companies which fund most of the development aggressively pursue their interests, and the part of the community that disagrees is forced out. This is justified with suitable propaganda: "not willing to adapt", etc. The whole point of free software should be that I do not have to adapt to some company's idea of what is modern if I do not want to. This is why I fled from Microsoft.
> I think the spin that Rust is necessarily the way forward is what is wrong.
I haven't seen this from Rust. Obviously lots of us think that Rust is the way forward for us but I think the problem you're talking about is that nobody offered any alternatives you liked better and that's not on Rust.
If Bob is ordering pizza for everybody who wants one, it is not the case that "Pizza is necessarily the way forward", and it's not Bob's fault that you can't have sliders, I think if you want sliders you're going to need to order them yourself and "Pizza is the way forward" is merely the default when you don't and people are hungry.
Dave Abrahams' Hylo is an example of somebody offering to order sushi in this analogy. It's not yet clear whether Dave knows a sushi place that delivers here, or how much the sushi would cost, but that's what having another way forward could look like.
In C++ they've got profiles, which is, generously, "Concepts of a plan" for a way forward and in C... I mean, it's not your focus, but nobody is looking at this right? Maybe Fil-C is your future? I note that Fil-C doesn't work on these obsolete targets either.
The disagreement is the difference between "there's a group [the Rust community] going around pushing/bullying projects to use Rust" and "project maintainers want to start using Rust". Those two things get conflated a ton, particularly in this forum by people who have an axe to grind against Rust.
This whole it used to be different thing is looking back with rose tinted glasses. It’s always been the case that project maintainers were able to make choices that the community didn’t necessarily agree with, corporate backed contributors or not, and it’s still a possibility to fork and try to prove out that the other stance is better.
Nobody is being forced out of the community, you can fork and not adopt the changes if you want. Thats the real point of free software, that you have the freedom to make that choice. The whole point of free software was never that the direction of the software should be free from corporate control in some way, the maintainers of a project have always had the authority to make decisions about their own project, whether individual or corporate or a mix.
The point of freedom in software is certainly that I can create my own fork. And in individual projects a maintainer can certainly do what he wants. But it is still worrying if, in community projects such as Debian, decisions that come with a cost to some part of the community are pushed through without full consensus. It would certainly not be the first time. systemd was similar and for similar reasons (commercial interests of some key stakeholders), and I would argue that Debian suffered a lot from how badly that was handled. I do not think the community ever got back to being as healthy and vibrant as it was before. So it would be sad if this continues.
> ...it is still worrying if, in community projects such as Debian, decisions that come with a cost to some part of the community are pushed through without full consensus.
What are some concrete cases you can point to where a decision was made with full consensus? Literally everyone agreed? All the users?
I'm not sure many projects have ever been run that way. I'm sure we've all heard of the Benevolent Dictator for Life (BDfL). I'm sure Linus has made an executive decision once in a while.
> I think the spin that Rust is necessarily the way forward is what is wrong.
Well, what's the alternative? The memory safety problem is real, I don't think there is any doubt about that.
C/C++ is a dead end: the community has thoroughly rejected technical solutions like the Circle compiler, and "profiles" are nothing more than a mirage. They are yet again trying to make a magical compiler which rejects all the bad code and accepts all the good code without making any code changes, which of course isn't going to happen.
Garbage collection is a huge dealbreaker for the people still on C/C++. This immediately rules out the vast majority of memory-safe languages. What is left is pretty much only Zig and Rust. Both have their pros and cons, but Rust seems to be more mature and has better community adoption.
The way I see it, the pro-memory-safety crowd is saying "There's a giant hole in our ship, let's use Rust to patch it", and the anti-Rust crowd yells back "I don't like the color of it, we shouldn't repair the hole until someone invents the perfect solution". Meanwhile, the ship is sinking. Do we let the few vocal Rust haters sink the ship, or do we tell them to shut up or show up with a better alternative?
No it is not. We have a lot of amazing and rock solid software written in C and C++. Stuff mostly works great.
Sure, things could be better, but there is no reason why we need to act right now. This is a long-term decision that doesn't need to be rushed.
> What is left is pretty much only Zig and Rust.
We had Ada long before Rust and it is a pretty amazing language. Turns out security isn't that important for many people and C++ is good enough for many projects apparently.
There is also D, Nim, Odin and so on.
> Garbage collection is a huge dealbreaker
It isn't. We had Lisp Machines in the 80s and automatic garbage collection has vastly improved these days. So I wouldn't rule those out either.
In short, no, the ship is not sinking. There are many options to improve things. The problem is that once you depend on Rust it will be hard to remove, so it is better to think things through rather than rushing to adopt it.
Basically correct, but Zig is not a memory safe language. It may be an improvement wrt. syntax over C, and its standard library facilities may be genuinely better than Rust's wrt. writing unsafe code, but it's simply not interesting from a safety perspective. I'm sure that even the most rabid Zig advocates would readily acknowledge this point.
> Garbage collection is a huge dealbreaker for the people still on C/C++.
The problem is not so much GC itself, but more like pervasive garbage collection as the only memory management strategy throughout the program. Tracing GC is a legit memory management strategy for some programs or parts of a program.
> it's simply not interesting from a safety perspective
The reason memory safety is interesting in the first place (for practical, not theoretical reasons) is that it is a common cause of security vulnerabilities. But spatial memory safety is a bigger problem than temporal memory safety, and Zig does offer spatial memory safety. So if Rust's memory safety is interesting, then so is the memory safety Zig offers.
I'm a rabid software correctness advocate, and I think that people should acknowledge that correctness, safety (and the reasons behind them) are much more complex than the binary question of which behaviours are soundly disallowed by a language (ATS advocates would say that, from their vantage point, Rust is just about as unsafe as C, and so is completely uninteresting from that perspective).
The complexity doesn't end with spatial vs temporal safety. For example, code review has been found to be one of the most effective correctness measures, so if a language made code reviews easier, it would be very interesting from a correctness/security perspective.
I am very much inclined to agree with you, but do you have any sourcing for a claim that spatial is a bigger problem with regards to security vulnerabilities? Every time I feel like posting similar sentiments I just know that a reply linking to an article about how Microsoft and/or Google claim 70% of bugs are memory safety issues will be posted. Both of the ‘studies’ (code surveys) seem to claim use-after-free make up the majority of those bugs.
Zig only does bounds checking by default in Debug and ReleaseSafe builds. If you build with ReleaseFast or ReleaseSmall it will happily do an out of bounds read: https://godbolt.org/z/733PxPEPY
That's a matter of how policy is set. You can set it to on or off for a particular function, too. The point is that language offers sound spatial safety just as much as Rust does (and both allow you to turn it on or off in particular pieces of code).
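For comparison, here is a minimal Rust sketch (mine, not from the thread) of the same per-site policy idea: indexing is bounds-checked by default, and the check can be opted out of locally where the surrounding code has already established the invariant.

    fn sum(xs: &[u64]) -> u64 {
        let mut total = 0;
        for i in 0..xs.len() {
            // Default: checked indexing, which panics safely if `i` were out of bounds.
            total += xs[i];
            // Local opt-out, sound only because the loop bound guarantees i < xs.len():
            // total += unsafe { *xs.get_unchecked(i) };
        }
        total
    }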
Rust technically isn’t a memory safe language the second you use “unsafe”. Rust advocates tend to pretend they can have their cake and eat it too when comparing it to other low level languages. No, just because you have the word unsafe next to the scary parts doesn’t make it okay.
I’ve written a good chunk of low level/bare metal rust—unsafe was everywhere and extremely unergonomic. The safety guarantees of Rust are also much weaker in such situations so that’s why I find Zig very interesting.
No oob access, no wacky type coercion, no nullptrs solves such a huge portion of my issues with C. All I have to do is prove my code doesn’t have UAF (or not if the program isn’t critical) and I’m basically on par with Rust with much less complexity.
The point of unsafe is that you have small bubbles of unsafe which you can verify rigorously, or check with tools like Miri to make sure the invariants are upheld, and you build safe abstractions on top of that unergonomic part. Looking at embedded-hal, and even, at an extreme, embassy, you can see the value of it. If you don't do any abstraction, I definitely agree Rust is not fun to write at all.
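A hedged sketch of that "small bubble" pattern (example mine, function name hypothetical): the unsafe part is a couple of lines whose precondition is established directly above it, and callers only ever see the safe wrapper.

    /// Returns the first and last elements, or None for an empty slice.
    pub fn first_and_last<T>(xs: &[T]) -> Option<(&T, &T)> {
        if xs.is_empty() {
            return None;
        }
        // SAFETY: the emptiness check above guarantees both indices are in bounds.
        unsafe { Some((xs.get_unchecked(0), xs.get_unchecked(xs.len() - 1))) }
    }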
The safety guarantees of Rust the language around unsafe are just as good as C or Zig if you use the appropriate facilities (raw pointers, MaybeUninit, UnsafeCell/Cell, Option for nullability, Pin<> etc). Sometimes this is made unnecessarily difficult by standard library code that expects all the guarantees of ordinary Safe Rust instead of accepting more lenient input (e.g. freely aliasable &Cell<T>'s), but such cases can be addressed as they're found.
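To illustrate one of the facilities mentioned (a toy example of mine, not anyone's real API): NonNull<T> inside Option expresses "possibly-null pointer" in the type, so the null case is handled before any unsafe dereference.

    use std::ptr::NonNull;

    // Assumes (for the sake of the example) that a `Some` pointer is valid to read.
    fn first_byte(p: Option<NonNull<u8>>) -> Option<u8> {
        // SAFETY: per the stated assumption, `nn` points to a readable byte.
        p.map(|nn| unsafe { *nn.as_ptr() })
    }

    fn main() {
        let x = 42u8;
        let p = NonNull::new(&x as *const u8 as *mut u8);
        assert_eq!(first_byte(p), Some(42));
        assert_eq!(first_byte(None), None);
    }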
My point is that it’s easier to write correct Zig code than correct unsafe Rust. Raw pointers can be null in Rust, so you should use NonNull<T>, but there are aliasing rules that are easy to mess up. And difficulty with the stdlib, as you mentioned.
I don’t actually mind Rust when I was able to write in safe user land, but for embedded projects I’ve had a much better time with Zig.
While memory safety is important, I do not believe it is suddenly so important that it needs to be solved quickly at all cost. There are far more pressing issues. I also do not think at all that C/C++ are a dead end. In fact, I think it would be far more cost effective to continue to improve the safety of C and C++ code via gradual improvement than to add another language into the mix. The reason is that complexity and the burden of long-term maintenance are the main issues we have in free software, and Rust makes this worse rather than better. Case in point: the fact that I have limited security updates because of some Rust video codec in the dependency chain hurts my security far, far more than the fact that it might prevent some memory safety issue: https://www.debian.org/releases/trixie/release-notes/issues.... I think this is what people are missing. They over-exaggerate the importance of memory safety over rather mundane issues which are actually far more important. I have never known anybody who was directly affected by a security issue caused by memory safety; I know plenty who were affected because their software was not up-to-date.
What are realistic options for "gradual improvement" of memory safety in C and C++?
My first thought is that it is kind of like talking about gradually improving manual memory allocation in Java. C and C++ are fundamentally memory unsafe; it's part of their design, to offer complete control over memory in a straightforward, direct way.
> The Debian infrastructure currently has problems with rebuilding packages of types that systematically use static linking. With the growth of the Go and Rust ecosystems it means that these packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably.
> IMHO Rust has severe problems and what is considered "modern" is mostly taste.
Really? As opposed to e.g. C or C++ (as the most important languages which Rust is competing with)? Sure, taste plays into everything, but I think a lot of people work with Rust since it's genuinely a better tool.
I hear you on free software being controlled by corporate interests, but that's imo a separate discussion from how good Rust is as a language.
Ada and SPARK fulfilled the promise of a safe systems language decades ago without making most of the mistakes Rust does. Rust has its strong sides, sure, but it's far from the only shop in town. The GCC happens to include an Ada compiler as well.
It seems like Ada more or less has to have memory safety bolted on -- that is what SPARK does -- and it's not clear that Ada's bias towards OO is better than Rust's bias towards functional programming.
Are you talking about features like type inference (so the Rust code could be less clear, since types are not always written out)?
The problem is they forgot about making the language approachable, so it lives in its bubble of safety-critical usage (whose lunch Rust is kinda starting to eat with the certified Rust fork).
If you’re referring to Ferrocene with the certified Rust fork, then I’d like to make the minor correction that we don’t consider Ferrocene a true fork, but rather a downstream distribution of the Rust project's compiler. There are very few changes to the compiler itself. Most relevant changes are documentation, build process and different test coverage - we do test architectures that upstream does not.
For starters, it looks completely alien. My real introduction to Ada was this article comparing it to Rust on solving Advent of Code [1], but it gives me the feeling I get when I try to read Haskell code (not that extreme). I did not actually give it a real chance though, and its user base is even more niche than Rust's, so I don't know. It has really cool properties (being able to define n-bit datatypes is nice), but it also leans more to the imperative side, so that does not interest me either.
That is a 'subtle whataboutism' reply, actually...
You see, GP did not speak in relative terms, but absolutely: they believe Rust has problems. They did not suggest that problems with programming languages are basically all fungible, that we should sum up all problems, compare different languages, and see which ones come out on top.
No need to sully CL with this kind of elitism. Any language you need to be a genius to use is a bad language. That's one of the fundamental issues with C. We're all imperfect idiots some of the time and one instance of undefined behavior breaks any guarantees the language gives you.
I find that languages with a high intellectual barrier to entry are much more pleasant places to be since people like the OP can't understand them and we never have people try to bully us into doing things _the right way_.
This is someone who says things like
> It's important for the project as whole to be able to
> move forward and rely on modern tools and technologies
> and not be held back by trying to shoehorn modern software
> on retro computing devices.
Elitism is its own form of bullying and needs to be treated as such.
I don't particularly like large swaths of humanity, but I also try hard not to be elitist towards them either. I'm not always successful, but I make a strong effort as my family raised me to be respectful to everyone, even if you don't personally like them.
Name me another language that offers the memory safety and deterministic runtime guarantees that Rust does, eliminating entire classes of bugs, the same integration with existing C code that Rust does, and enjoys the same community and pool of eager contributors that Rust does.
The pushback comes from the idea of rewriting all the old tools in another language just because you can. Instead of creating new projects in the new language, it feels like most Rust projects are rewrites of old projects.
And most of the projects you have read about on Hacker News in the last year, 'I made xy, but in rust', are already abandoned. It's just a trend to rewrite something that already exists in Rust, just to learn the language, and then release it for productive use.
For me it actually is the language. While a little pushy at times, I think the arguments for rewriting certain things in a safer language are well founded. Whether the apt toolchain is one of those places I'll leave for the Debian developers to determine, but for decompression tools I can see a benefit.
If Rust should be the language of choice, preferably not. The syntax is awful, the language is complicated and Rust programs seem to collect dependencies at the same rate as JavaScript. Where I might agree with you is that Rust seems to attract a certain type of people. They write absolutely brilliant software, but like the Rust compiler, they are rather particular about what input they'll accept.
In the end I don't really care what apt is written in, I'm not the one writing the code. I just use the tool. It would be sad if some platforms are left behind, because the Rust developers don't care about them and not because they're no longer useful.
Java and C# are memory safe languages, as are common interpreted languages like Python and Ruby. Even JavaScript is memory safe, barring the possibility of subtle JIT bugs that may practically impact such safety.
But op means memory and data safe, without a GC nor a runtime, so it can be used as a systems programming language. For "some reason" people only talk about Rust in this space!
There are currently no widely-used programming languages other than Rust that offer the memory safety guarantees Rust has without a garbage collector. I think this is bad, and I wish more people would develop memory-safe systems languages that explore other parts of the design space like Rust does. But in the meantime, Rust is pretty good, certainly preferable to C or C++.
That doesn't really make sense since memory safety is a property of a language. You can have code that is unsafe (read unsound), but that is a separate issue.
For a language to be memory safe it means there must be no way to mishandle a function or use some object wrong that would result in an "unsafe" operation (for Rust, that means undefined behavior).
That is to say the default is safe, and you are given an escape hatch. While in something like C/C++ the default is unsafe.
I'd also like to add that program correctness is another concept separate from language safety and code safety, since you could be using an unsafe language, writing unsafe UB code, and still have a correct binary.
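A minimal sketch of that "safe default, explicit escape hatch" point in Rust (purely illustrative):

fn main() {
    let data = [10u32, 20, 30];
    let i = 2;

    // Safe default: indexing is bounds-checked in debug and release builds,
    // and data[7] would panic instead of silently reading out of bounds.
    println!("checked: {}", data[i]);
    println!("fallible: {:?}", data.get(7)); // prints "None"

    // Explicit escape hatch: the compiler stops checking, the author takes
    // over the proof obligation, and reviewers can grep for the keyword.
    // SAFETY: i is 2 and the array has 3 elements, so this access is in bounds.
    let x = unsafe { *data.get_unchecked(i) };
    println!("unchecked: {x}");
}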
Yep, I noticed that under a lot of videos mentioning Rust in the kernel, or Rust in general, there's a high chance that the comment section will just be straight up lifted from 4chan pol or a similar place.
That is a crude and politically inept way of putting it, but do you not think there is some grain of truth to it?
If you opt into something with as high a barrier to entry and necessary time commitment as a programming language, you naturally also opt into the existing community around that language, because that will be where the potential contributors, people to help you solve issues, and people you have to talk to if you need the language or ecosystem to move in some direction will hail from. In turn, the community will naturally get to impose its own values and aesthetic preferences onto you, whether by proactively using the position of relative power they have over you, or simply by osmosis. As it happens, the community surrounding Rust does largely consist of American progressives, which should not be surprising - after all, the language was created by an American company whose staff famously threatened mutiny when its own CEO turned out to offend progressive sensibilities.
As such, it is natural that bringing Rust into your project would over time result in it becoming more "woke", just like using Ruby would make it more likely that you attract Japanese contributors, or targeting Baikal CPUs would result in you getting pulled into the Russian orbit. The "woke" side themselves recognises this effect quite well, which is why they were so disturbed when Framework pushed Omarchy as a Linux distribution.
Of course, one needs to ask whether it is fair to insinuate premeditation by calling a mere expected effect an "agenda". Considering the endlessly navel-gazing nature of the culture wars, I would find it surprising if there weren't at least some people out there who make the same observation as above, and do think along the lines that driving Rust adoption is [also] a good thing because of it. Thus, Rust adoption does become, in a sense, part of the "woke agenda", just as Rust rejection becomes, perhaps even more clearly so, part of the "chud agenda".
> As such, it is natural that bringing Rust into your project would over time result in it becoming more "woke", just like using Ruby would make it more likely that you attract Japanese contributors, or targeting Baikal CPUs would result in you getting pulled into the Russian orbit. The "woke" side themselves recognises this effect quite well, which is why they were so disturbed when Framework pushed Omarchy as a Linux distribution.
I think this analysis is basically accurate - there's no conspiracy or even deliberate agenda going on, it's just that the community surrounding Rust happens to have (at the moment, anyway) a relatively high number of American progressives, many of whom are openly interested in imposing American progressive ideological norms in spaces they care about (which is basically what we mean by the term "woke").
I think Rust is a good software tool and I would like to see it be as widely adopted and politically-neutral as C is, and used in all sorts of projects run by all sorts of people with all sorts of other agendas, political or otherwise. Consequently, I would like to see people and projects who do not agree with American progressive norms adopt the language and become active users of it, which will help dilute the amount of Rust users who are progressives. I myself am not an American political progressive and I have lots of issues with the stated politics of many well-known Rust developers.
Is that stuff still going on? People were rather intense about certain politics during COVID, but outside of some furries on the Rust discord I haven't noticed anything outwardly political?
The general temperature of politics in FOSS, I think, is not obviously lower than before: just in terms of things that made it onto HN, in the past month or so alone we have seen the aforementioned kerfuffle about dhh (the leader? founder? of Ruby on Rails), his projects and their detractors, and the wrestling over control between NixOS's board and its community moderators who were known for prosecuting political purges and wanted to assert formal authority over the former.
We need more effective ways to prevent ideological capture of our institutions especially when there is no actual connection between an institution's mission (e.g., to support the use of Rust) and religion, politics, sexuality or morals.
People are (understandably) sick of the fact that for whatever reason, the biggest proponents of Rust are insufferable.
Personally, I'm simply bothered by the fact that (one of?) the most famous figure of Rust on Linux and Rust Forever consumes and advocates for pornography that's illegal in my country, without being held accountable by the community.
From what I could piece together, the only group who ever cried wolf about this is a forum full of contemptuous little angry men who spend weeks researching people they hate on the internet. No one seems to want to touch the subject for fear of being associated with them.
I'm genuinely not sure who you're talking about or whether this is an accurate characterization of their views. For that matter, I'm not sure what country you're in and whether I myself agree with that country's laws about whatever kind of pornography this is. Certainly plenty of countries I don't live in and have no ties to have laws I disagree with or violate routinely.
I'm pretty suspicious of demands for communities to hold people accountable, especially when the community in question is a loose group of people who mostly communicate online and are united by their shared use of a specific programming technology; and who probably disagree on all sorts of other issues, including contentious ones.
> advocates for pornography that's illegal in my country, without being held accountable by the community
If some form of speech is illegal in your country, it does not automatically mean it should be illegal for the whole world, or that it is wrong, or that the world-wide community should adhere to standards specific to your country. Even if that country is the USA.
In other words, nobody should give a flying f about open source developers porn preferences.
> Personally, I'm simply bothered by the fact that (one of?) the most famous figure of Rust on Linux and Rust Forever consumes and advocates for pornography that's illegal in my country, without being held accountable by the community.
Your abhorrent personal opinion of another individual has no place in a technical discussion.
The (retired?) head of a project trying to port Linux to the Mac. I avoid naming names, not to be cryptic but because a lot of people like to stumble on these kinds of posts via search. Or moderate them. I'm not agreeing, but I can see why it's easier.
Acolytes being the people talking positively about their experience using a language and the strengths they think it has. So the people with positive opinions should say nothing at all, and the people with negative opinions should be free to share. And somehow, you think this will lead to faster adoption.
That’s an interesting thought. It would run counter to everything we know about human nature, but interesting nevertheless.
Rust is already pretty successful adoption wise. It’s powering significant parts of the internet, it’s been introduced in 3 major operating systems (Windows, Linux, Android), many successful companies in a variety of domains have written their entire tech stack in it. Adoption as measured by crates.io downloads has doubled every year for the last 10 years.
Now I’m imagining how much more widely Rust would be used if they had adopted your visionary approach of never saying anything positive about it.
> Acolytes being the people talking positively about their experience using a language and the strengths they think it has.
No, it's the people who have given rise to the multiple Rust memes over the years.
I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community. Scala? Kotlin? Swift? Zig? None of those languages have built such poor reputations for their communities.
After all, for quite a few years every thread on forums that mentioned C or C++ was derailed by Rust proponents. I didn't see C++ users jumping into Rust threads posting attacks, but there are many examples of Rust users jumping into C++ or C threads, posting attacks.
> That’s an interesting thought. It would run counter to everything we know about human nature, but interesting nevertheless.
Well, the fact that Rust is an outlier in this sample should tell you everything you need to know; other up-and-coming languages have not, in the past, gotten such a reputation.
> I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community.
Because you’re young or you weren't around in 2010 when Go was gaining adoption. Same shit back then. People said “I like the language, it’s quite useful” followed by tirades from people who thought it was the end of human civilisation. It had exactly the reputation you speak of. (“DAE generics???”)
Eventually the haters moved on to hating something else. That’s what the Rust haters will do as well. When Zig reaches 1.0 and gains more adoption, the haters will be out in full force.
> Because you’re young or you weren't around in 2010 when Go was gaining adoption.
I've been working as a programmer since the mid-90s
>> I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community.
> People said “I like the language, it’s quite useful” followed by tirades from people who thought it was the end of human civilisation.
And? That's not the same as having a hostile community. I never saw Go proponents enter C# or Java discussions to make attacks against the programmers using C# or Java like I saw constantly with Rust proponents entering C or C++ discussions and calling the developers dinosaurs, incompetent, etc.
Hostile according to who? According to the haters, maybe. I’m sure the Go community was called “hostile” by haters back in the day.
Look at the drama created by Linux maintainers who were being insanely hostile, coming up with spurious objections, being absolute asshats - to the point where even Linus said enough was enough. The Rust for Linux members conducted themselves with dignity throughout. The Linux subsystem maintainers acted like kindergarteners.
But of course, haters will read the same emails and confirmation bias will tell them they’re right and Rust is the problem.
My friend, the OP in this very thread has, in multiple posts, made outright slanderous comments about C programmers. The reputation of the Rust community is very much the making of the Rust promoters. If you are seeing pushback, that's just the consequence of such behavior.
I also notice that these language debates are very much generational. That has a few consequences. First is that older devs have thicker skin. Second, older devs are more wary of the big promises made by Rust. Whether you like it or not, the push for Rust very much comes across as naivete as much as anything to older, more experienced devs who have seen this type of thing before.
You can't write a device driver without manipulating memory directly. An OS kernel has to manipulate memory directly by definition. Most academic research into memory safe languages is mixed, with a high number of null results (meaning it doesn't work). Yet the Rust folks push it as the 'one true way'. Meanwhile, most Rust open-source projects are currently abandoned.
It's not hate, it's pointing out a track record and avoiding repeating past mistakes due to painful experiences in our youth. Your determination to repeat past mistakes doesn't come across as enlightenment like you think it does.
Here, find the “null result” in this study by the Android team - Eliminating Memory Safety Vulnerabilities at the Source (https://security.googleblog.com/2024/09/eliminating-memory-s...). They stopped adding new memory unsafe code and they saw a dramatic drop in the number of memory safety vulnerabilities. They only write new code in Kotlin or Rust now.
The Android team shipped a more secure operating system to billions of people. Their lives are better because of choosing more Rust and Kotlin and less C++.
> You can't write a device driver without manipulating memory directly.
This is a successful kernel driver that powers all IPC in Android. This is the most load bearing component in Android, especially because it is constantly being attacked by malware. It manipulates memory just fine actually.
In your hurry to dismiss Rust, you haven’t done a technical evaluation of it. If you had you wouldn’t conflate memory safety with a lack of memory manipulation. You’ve taken the intellectually lazy shortcut of dismissing a new thing because no new thing can be as good as the old things.
I write all this not to convince you to change your mind. I don’t think that’s possible. I write it so anyone else reading this will avoid your thought process. I don’t need to convince you, because this industry is moving ahead regardless.
> I’m sure the Go community was called “hostile” by haters back in the day.
I was there, and no it wasn't. The Go community didn't jump into every programming discussion throwing around accusations of dinosaur, insecurity, etc.
From the outside looking in, most of the pushback against Rust seems to be of the "I am too old, never make mistakes, and don't want to learn a new thing late in my career" variety.
I have rarely seen an argument that pushes back against Rust with actual alternative solutions to the problems the rust proponents are trying to solve. It is mostly a bunch of old people letting the perfect be the enemy of the good.
I think you got your last sentence backwards. Because reading it literally, you mean what we have now is perfect. If that's the case, switching to Rust is a waste of time.
No, it makes sense to me. Perfect is not what we have now (as is evident to everybody outside of the "skill issue" denialists), but how C/C++ or a new language could get up to par with Rust with some enhancements, and without the latter's idiosyncrasies.
> Scala? Kotlin? Swift? Zig? None of those languages have built such poor reputations for their communities.
> I didn't see C++ users jumping into Rust threads posting attacks, but there are many examples of Rust users jumping into C++ or C threads, posting attacks.
I've already seen this with Zig. And even outside language communities. Look at this whole thread. Look into the mirror. Regularly, when Rust is mentioned on HN, the anti-Rust cult comes to complain that there is Rust.
Even if someone just posts "I have made this with Rust" - then this cult comes and complains "why do you need to mention Rust?!". Like, look at yourself. Who hurt you?
I think you'd need to give answer to your own questioning here... why did you take "Rust haters" as "Rust-language haters", and not as "Rust-community haters"?
> I think you'd need to give answer to your own questioning here... why did you take "Rust haters" as "Rust-language haters", and not as "Rust-community haters"?
Because it literally says "Rust haters"; not "Rust community haters".
Are you saying that when someone refers to "Rust", they mean the community and not the language?
> If you could separate the language from the acolytes it would have seen much faster adoption.
Good news: you can. And that's why it has had fast adoption.
(those advocating for Rust in "meme-like" ways are not generally the same people actually developing the Rust compiler or the core parts of its ecosystem)
I think this is the wrong way to promote Rust. For me Rust is just hype. I know nobody that programs or even thinks about Rust. I’m from the embedded world and there C is still king. I understand that some will see Rust as a good alternative, but as long as the real money is made in C it is not ready.
> I know nobody that programs or even thinks about Rust.
I think it isn’t reasonable to infer that nobody uses something because you don’t know anybody who uses it in your niche. I know lots of embedded programmers who use Rust.
Rust slaps on embedded too; I think that's one of its core competencies. But you have to do a lot of leg work for each piece of hardware because manufacturer support isn't there, and the OSS libs are usually not great. If your requirement is "Use only the most popular language in this domain", that's fine, but there's no point in evaluating or discussing other languages if so; the outcome is predetermined.
I think the linked requirement, the hype you see, and rust's own material is misleading: It's not a memory-safety one-trick lang; it's a nice overall lang and tool set.
The unfortunate reality is that you must write almost all of your drivers from scratch if you want to use Rust in embedded. There is no OEM driver support, and as you said the open source drivers are all crap and written for arduino-level hobby projects.
Lack of drivers is prohibitive if you are a small/medium team or are using a lot of complicated peripherals or SoCs. Compare to C, where any MCU or embedded SoC or moderately complex peripheral normally comes with C driver code.
I'm not following: Rust excels at C interop, so why wouldn't you use the OEM drivers/SDK and bind them through Rust for your own code? That's what I've always done when I need to interact with a C library in Rust.
So the main reason that sometimes doesn't work is how you are using Rust.
For instance right now I'm leaning into the Rust Embassy ecosystem which is async based, the drivers need relatively deep integration with the embedded-hal-async layer which is far from trivial to do with C bindings.
In practice I end up rewriting drivers. Which sounds daunting, but oftentimes it's much easier than folks think, and the resulting code is usually a quarter the size of the original C code or smaller. If you only implement what you need, sometimes drivers can be less than 100 lines of Rust.
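As a rough idea of why such rewrites can stay small, here is a sketch of a purpose-built driver that implements only one register read; the bus trait and the sensor are made-up stand-ins for an embedded-hal-style interface, not real crates:

// Hypothetical stand-in for an embedded-hal-style I2C trait.
pub trait I2cBus {
    type Error;
    fn write_read(&mut self, addr: u8, tx: &[u8], rx: &mut [u8]) -> Result<(), Self::Error>;
}

// Minimal driver for an imaginary temperature sensor: one register, one method.
pub struct TempSensor<B> {
    bus: B,
    addr: u8,
}

impl<B: I2cBus> TempSensor<B> {
    pub fn new(bus: B, addr: u8) -> Self {
        Self { bus, addr }
    }

    // Read the 16-bit raw temperature register (register 0x00, big-endian).
    pub fn read_raw(&mut self) -> Result<u16, B::Error> {
        let mut buf = [0u8; 2];
        self.bus.write_read(self.addr, &[0x00], &mut buf)?;
        Ok(u16::from_be_bytes(buf))
    }
}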
Hmm. You raise a good point. I've been doing a bit of C FFI on std-rust applications, but hadn't thought to apply this to embedded much. I have wrapped CMSIS-DSP (ARM-official DSP toolkit; filters etc) with Rust embedded, and it works out well! Increases compile times, but worth it. Perhaps we should apply this approach more broadly.
One problem: It's tedious going from the pointer-level API bindgen gives you to a high-level rust API that has references, arrays etc. In that you have to do some boilerplate for each bit of functionality you want. Not a big deal for a specific application, but not ideal if making a general library. And C libs tend to be sloppy with integer types, which works, but is not really idiomatic for rust. Maybe that could be automated with codegen or proc macros?
I believe the ESP-IDF rust lib is mostly FFI (?); maybe that's a good example. We've been re-inventing the wheel re STM-32 and Nordic support.
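For the boilerplate point, this is roughly what wrapping one pointer-level binding by hand looks like; the C function here is hypothetical (real CMSIS-DSP names and signatures differ), so treat it as a shape rather than a recipe:

// Hypothetical bindgen-style declaration for a C routine.
// (Newer Rust editions may want this written as `unsafe extern "C"`.)
extern "C" {
    fn dsp_process_f32(input: *const f32, output: *mut f32, len: u32) -> i32;
}

// The safe, slice-based wrapper you end up writing per function.
pub fn process(input: &[f32], output: &mut [f32]) -> Result<(), i32> {
    assert_eq!(input.len(), output.len(), "buffers must have the same length");
    let len = u32::try_from(input.len()).expect("buffer too large for a u32 length");
    // SAFETY: both pointers are valid for `len` elements and do not overlap,
    // because they come from two distinct Rust slices of exactly that length.
    let rc = unsafe { dsp_process_f32(input.as_ptr(), output.as_mut_ptr(), len) };
    if rc == 0 { Ok(()) } else { Err(rc) }
}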
Zig is an example of excelling at C interop--not Rust.
And Cargo is an impediment in the embedded ecosystem rather than a bonus.
Part of why we're getting Rewrite-it-in-Rust everywhere is precisely because the C interop is sufficiently weak that you can't do things easily in a piecemeal fashion.
And let's not talk about Rust compile times, looking at Rust code in a debugger, and just how bad Rust code is in debug mode ...
Well, at Microsoft, plenty of people have thought about and discussed C#, with some products/features using it. Not that it's gone away, but it has not won the hearts and minds of the general (software-developing) public.
AWS is very heavy on Rust internally for core services.
EC2 (lots of embedded work on servers), IAM, DynamoDB, and parts of S3 all heavily use Rust for quite a few years now already.
We can move really fast with Rust as compared to C, while still saving loads of compute and memory compared to other languages. The biggest issue we've hit is the binary size, which matters in the embedded world.
Linux has added support for Rust now. I don't think Rust's future supremacy over C is doubtful at this point.
AWS might honestly be the biggest on Rust out of all the FAANGs based on what I've heard too. We employ loads of Rust core developers (incl Niko, who is a Sr PE here) and have great internal Rust support at this point :). People still use the JVM where performance doesn't matter, but anywhere where performance matters, I don't see anyone being okay-ed to use C over Rust internally at this point.
I'm from embedded too. We tried to use Rust in one of our projects and came to the conclusion that it makes no sense to convert our team from experienced C++ developers to beginner-level Rust developers. Additionally, it was nearly impossible to control the amount of packages that come in with Cargo. We had, for a small tool, three versions of the same library as dependencies in our binary.
On top of that, part of the team didn't have fun writing code in Rust.
We trashed the whole tool, which was a massive loss of time for the project.
Any recommended pointers to examples of how to use rustc without cargo?
I dislike the tone of the evangelism and the anti-C attitude, but I'm not anti-Rust. I purchased a computer with an oversized amount of RAM in part so I could experiment with Rust. But determining how to write, edit and compile small programs, from the ground up, without cargo appears exceedingly difficult, and feels like going against the tide.
It stands to reason that the embedded programmer commenting was unable to determine how to avoid using cargo and pulling in unnecessary dependencies. Otherwise he would not have encountered this problem
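For what it's worth, a single-file program that sticks to the standard library needs nothing beyond rustc itself; a minimal sketch (file name and flags chosen for illustration):

// hello.rs - no Cargo.toml, no crates.io dependencies, only the standard library.
// Build and run with plain rustc, e.g.:
//     rustc --edition 2021 -O hello.rs -o hello && ./hello
fn main() {
    let args: Vec<String> = std::env::args().skip(1).collect();
    println!("hello from plain rustc; args: {args:?}");
}

Cargo only becomes hard to avoid once crates.io dependencies enter the picture, which is exactly the thing the parent comment wants to stay in control of.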
e.g. Chrome & Fuchsia both build included Rust bits using their existing build system.
Bazel and Buck2 both work well with it, relatively.
One can also just be really disciplined with Cargo and not add superfluous deps and be careful about the ones you do include to monitor their transitive dependencies.
IMHO this is more about crates.io than Cargo, and it is the biggest weakness of the language community. The bulk of developers unfortunately, I think, come from an NPM-using background and so aren't philosophically ... attuned ... to see the problem here.
This 'language requires an eco-system' is something that I really dislike. The linker is where it should come together, not in your choices about importing external dependencies, module packaging, source code hosting options, debugger and so on.
This is entirely the wrong lens. This is someone who wants to use Rust for a particular purpose, not some sort of publicity stunt.
> I know nobody that programs or even thinks about Rust. I’m from the embedded world and there C is still king.
Now’s a good time to look outside of your bubble instead of pretending that your bubble is the world.
> as long as the real money is made in c it is not ready
Arguably, the real money has been made in JavaScript and Python for the last decade. Embedded roles generally have fewer postings with lower pay than webdev. Until C catches back up, is it also not ready?
Telling people they need to take their ball and go home if they're incapable or unable to maintain an entire compiler back-end seems like a, shall we say, 'interesting' lens for a major distro such as Debian.
I program mostly Python, C, C++, Javascript and Rust. Including on embedded (that goes for C, C++ and Rust)
Most people nowadays who criticize Rust do so on a cultural basis of "there are people who want this so and it changes things therefore it is bad". But never on the merits.
Rust is a good language that contains in its design some of the lessons the best C programmers have internalized. If you are a stellar C programmer you will manually enforce a lot of the same rules that Rust enforces automatically. That doesn't mean Rust is a cage. You can always opt for unsafe if you feel like it.
But I know if my life depended on it I would rather write that program in Rust than in C, especially if it involves concurrency or multiprocessing.
Practically, on embedded the issue is that most existing libraries are written in C or C++. That can be a reason not to choose it in daily life. But it is not a rational reason for which a programming language sucks. Every programming language once had only one user. Every programming language once had no dependencies written in it. Rust is excellent at letting you combine it with other languages. The tooling is good. The compiler error messages made other languages realize how shitty their errors were.
Even if nobody programmed in Rust, the good bits of that language lift the quality in the other languages.
> a cultural basis of "there are people who want this so and it changes things therefore it is bad". But never on the merits.
In this mindset, arguing against change is an argument on the merits. Because everything you spend time on has the opportunity cost of everything else you could spend time on.
Yes sure, but you've got to engage with the arguments on the other side as well. The argument on the other side is that a rewrite is worth it because it prevents entire classes of memory-safety bugs that are still, to this day, leading the exploitable CVE lists.
We could now pretend their position is: "Oh, we got this shiny new language that magically makes everything 100% safe and thus we need to rewrite everything." But that is not the position. Most of them are aware that a rewrite is always a trade-off. You could reintroduce old bugs etc.
As I said, I program in languages on both sides of the divide, and if I had to write and maintain secure software that my life depended on I would certainly prefer to write it in Rust. Memory safety would be just a tiny part of that. The other factors would be the strict type system (it can be used to enforce certain guarantees that contributors cannot easily fuck up) and the tooling (the built-in testing is stellar).
The future of tooling is going to be written in the language people of the coming generations like to write. There was a time when C++ or even C was the new shiny thing. Why can't we just write all software in assembly like in the good old days? Because there were some actual tangible benefits to doing it in C and that's the language people with the ability of doing the job chose.
I am not saying a Rust rewrite makes sense in every case, but if you check the past decade of CVEs on a project and half of them would have been prevented by the vanilla Rust compiler maybe that's the rational thing?
I believe m68k already has a working Rust compiler of sorts, though it's not part of the default Rust chain. I think shaping that fork into something that will let it run and compile like normal is feasible.
For other architectures currently unsupported by Rust, I doubt it'll happen. The CPU architectures themselves are long dead and often only used for industrial applications, so the probability of hobbyists getting their hands on them is pretty slim.
People still using these old architectures for anything but enthusiast hacking will probably not be using Debian Trixie, and if they do, they can probably find a workaround. It's not like the .deb format itself is changing, so old versions of apt and dpkg will keep working for quite a while.
I'm sure if any of the large corporations depending on legacy hardware would get together and pay people to make the necessary forks, 6 months would be feasible. Practically, they won't, though.
I see the deadline more as a "expect breakages in weird unofficial Debian downstreams that were never supported in the first place" or "ask your weird Debian downstream maintainer if this is going to cause problems now". It's not that Debian is banning unofficial downstreams or semi-proprietary forks, but it's not going to let itself be limited by them either.
And who knows, maybe there are weird Debian downstreams that I don't know of that do have a working Rust compiler. Projects like Raspbian are probably already set but Debian forks for specific boards may need to tweak a few compiler settings to make compilers emit the right instructions for their ARM/MIPS CPUs to work.
I only find the message passive-aggressive or insulting if you're of the opinion you're entitled to Debian never releasing software that doesn't work on the Commodore64.
IMHO, Rust is not mature until it decides on a stable ABI, and starts being able to use non-static linking, and therefore able to produce dynamically linked binaries.
Fil-C is amazing but is much more problematic than Rust at this point since it only supports amd64 at this time and is maintained by a single genius.
It also doesn't help you to attract new contributors. With the changes we made over in Ubuntu to switch to rust-coreutils and sudo-rs, we have seen an incredible uptake in community contributions amongst other things, and it's very interesting to me to try to push APT more into the community space.
At this time, most of the work on APT is done by me staying awake late, or during weekends and my 2-week Christmas break; the second largest chunk is the work I do during working hours, but that's less cool and exciting stuff :D
Adding Rust into APT is one aspect; the other, possibly even more pressing need is rewriting all the APT documentation.
Currently the APT manual pages are split into apt-get and apt-cache and so on, with a summary in apt(8) - we should split them across apt install(8), apt upgrade(8) and so on. At the same time, DocBook XML is not very attractive to contributors, and switching to reStructuredText with Sphinx hopefully attracts more people to contribute to it.
Sorry to double-reply, but this is actually a super important point in favor of Fil-C.
If you adopted Fil-C for apt, then you could adopt it optionally - only on ports that had a Fil-C compiler. Your apt code would work just as well in Fil-C as in Yolo-C. It's not hard to do that. I think about half the software I "ported" to Fil-C worked out of the box, and in those cases where I had to make changes, they're the sort of changes you could upstream and maintain the software for both Fil-C and Yolo-C.
So, with Fil-C, there would be no need to ruffle feathers by telling port maintainers to support a new toolchain!
As far as I understand, Fil-C changes the ABI of the system, therefore it requires a new architecture in Debian terminology, e.g. amd64fil. And then you'd need to use multi-arch to pull in amd64fil binaries where that works.
We'll have to see how this plays out but it's not super plug and play.
> > since it only supports amd64 at this time and is maintained by a single genius.
> That's easily fixable.
as easily as fixing Rust to work on the remaining 4 architectures?
> > It also doesn't help you to attract new contributors.
> I don't understand this point.
C++ doesn't attract a lot of developers, Rust attracts many more. I want more community, particularly _young_ community. I don't wanna work on this alone all the time :D
He got a lot of contributors because those contributors wanted to participate in a rewrite. I.e. the opportunity to "move fast and break things". Not exactly the kind of contributions you should be looking for in a package manager that so many of us rely on.
If he was asking for C/C++ contributors, he'd be asking for help maintaining a mature project. That's less fun. It's mature, grown-up work for serious people. Those serious people probably already have serious jobs. So, fewer people will show up.
Focus on a language that isn't a moving target, sir.
And this argument about "young" contributors is the same nonsense that came from your senior management. But you're independent.
Aren't the experienced engineers supposed to be leading the next generation? If you really want to get the young folks on board, drop Ubuntu and call it Gyatt. Instead of LTS, call it Rizz. Just think of all the young who will want to work on Skibidi 26.04!
Rust attracts hype and hype artists. Ask me how I know. Do you want drive-by people or do you want long-term community members? There are many young folk interested in learning C and looking for adequate mentorship along with a project to work on. Wouldn't that be a better use of energy? Have you even put out any outreach to attract others to these projects where you say you're alone?
You are making a mistake and falling on the sword for your bosses at the same time. Tough days are here but maybe hold on for better employment than this.
> It's important for the project as whole to be able to
> move forward and rely on modern tools and technologies
> and not be held back by trying to shoehorn modern software
> on retro computing devices.
... This is Debian we're talking about here?
... What distros are recommended for those who intend to continue trying to squeeze utility out of "retro computing devices"?
... And what sort of minimum specifications are we talking about, here?
I don't care whether kernel developers want to use C or Rust or whatever. I judge the quality by using it in production. If it works well then I don't care how it is built.
How can you judge the security qualities of software by using it in production? You're surely not using it in the way someone looking for exploits would use it.
Or I guess if you interpret this as a societal scale: we've collectively used C in production a lot, and look at all the security problems. Judgment completed. Quality is low.
It's certainly what we aim for in APT. We do have one exception, of course, since we need to copy uninitialized data around: the cache file is allocated as a whole and written out at the end, but not all parts of it are used, and that triggers warnings.
We don't want to introduce complex code to copy only the parts that are actually reachable; that would be silly and introduce bugs.
But keep in mind valgrind is super buggy and we spend quite a bunch of time working around valgrind false positives (outside of amd64)
TBH most “false positives” that I investigate are wishful thinking or the result of ignorance of what is really happening. It looks like you are using Debian. That probably doesn’t help. Here is a typical Debian “bug” report:
10 years old. It never was a false positive. It was fixed a good few years ago. The fix did not involve suppressing the error.
Valgrind does need a lot of work, especially for missing CPU features and for Darwin. I’m not aware of many memcheck bugs that aren’t relatively obscure corner cases.
Like the last one was/is the inability to comprehend safety of large buffers on ppc64el because the stack clash protector code generated by gcc isn't understood. The one before that was more problems of that sort on armhf where it also didn't understand the clash protector - in more cases.
It's quite surprising and it takes days to weeks to debug each of these, going down to the assembler level and verifying that by hand.
The language is incredibly frank, and I agree with it completely. The retro-computing hobby doesn't need the ability to run contemporary operating systems.
It's insane that x86 Debian is still compiling all software targeting Pentium Pro (from 1995!).
x64 Debian is a bit more modern, and you must splurge for a CPU from 2005 (Prescott) to get the plethora of features it requires
> It's insane that x86 Debian is still compiling all software targeting Pentium Pro (from 1995!).
Debian 13 raised the x86 requirement to Pentium 4 because LLVM required SSE2 and Rust required LLVM.
The target before was not Pentium Pro in my understanding. It was Pentium Pro equivalent embedded CPUs. Servers and desktops since 2005 could use x86-64 Debian.
Is it just the "retro-computing hobby"? There could still be businesses who might need support for old machines, especially in developing countries. I don't know the actual situation though, I'm open to the idea that my suggestion is insane.
No, it’s a valid question, and one that I’m sure will get some answers in the coming days and weeks as the discussion on adding this requirement continues, but in some sense, it’s beside the point.
Supporting this old hardware for businesses or hobbyists isn't free. The parties that feel strongly that new software should continue to be released supporting a particular platform have options here, ranging from getting support for those architectures into LLVM and Rust, pushing GCC frontends for Rust forward, maintaining their own fork of apt, etc.
It's much more common to find businesses running on very old hardware in developed countries, not in developing ones. Developing nations basically didn't use computers 20-30 years ago, there's no random remnants from that era beyond some extreme tail end. And, given how the PC & server market evolved in the 2000s and 2010s, it was cheaper to buy a then-current x86 than to import some ancient Alpha system from wherever. Especially so since software licenses didn't really exist in those days in developing countries - even government institutions often ran pirated software without a second thought.
There's a non-negligible amount of "handed-down" refurbished hardware from developed to developing countries. PCs and servers that are already 5+ years old and out of the market at installation.
Those businesses usually have very purpose-built machines. Like USG machines running windows xp in hospitals. You normally don't touch them, just isolate. You definitely don't update them to the latest OS version.
These architectures were for high-end machines at the time; universities and government agencies would indeed have been the only ones buying these, and they probably migrated to commodity hardware long ago. If migrating to a PC-compatible architecture had been off the table for them, then so was keeping these things running after their EOL.
(In my second-tier university in my developing country, the Sun workstation hadn’t been turned on in years by the late 2000s, and the minicomputer they bought in the 1980s was furniture at the school.)
Edit: As for big businesses, they have support plans from IBM or HP for their mainframes, nothing relevant to Debian.
and scroll down for other categories of x86 chip manufacturers. These have plenty of uses. Maybe in another 30 years' time they will mostly be a hobby, but we are very far from that time.
If I was a blackhat, I'd surely value all pre-IntelME / AMD PSP *retro-computing* hardware that is still capable of running more or less modern software without a built-in hardware backdoor higher than its weight in gold.
Wow, those are exactly the same targets I use for releasing x86 and x64 (Windows) builds, but even I think it's a little over the top for Debian to support Pentium Pro.
I'll first say that 32-bit CPUs, including x86-based ones, are not retro computing. They still carry the load of all sorts of important computing systems, today. They are still being produced (IIANM, also by Intel and AMD). Sure, with much more limited use cases, and it's definitely not the mainstream, but it's there. Not a hobby and not for a 'retro' experience.
But you are also completely ignoring limited-capabilities hardware, like embedded systems and micro-controllers. That includes newer offerings from ST Microelectronics, Espressif, Microchip Technology etc. (and even renewed 'oldies' like eZ80's which are compatible with Zilog's 8-bit Z80 from the 1970s - still used in products sold to consumers today). The larger ones are quite capable pieces of hardware, and I would not be surprised if some of them use Debian-based OS distributions.
> This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
By Sequoia, are they talking about replacing GnuPG with https://sequoia-pgp.org/ for signature verification?
I really hope they don't replace the audited and battle-tested GnuPG parts with some new-fangled project like that just because it is written in "memory-safe" rust.
Sequoia-PGP is 8 years old at this point, their 1.0 happened half a decade ago.
Meanwhile, GnuPG is well regarded for its code maturity. But it is a C codebase with nearly no tests, no CI pipeline(!!), an architecture that is basically a state machine with side effects, and over 200 flags. In my experience, only people who haven't experienced the codebase speak positively of it.
It's rather that GnuPG is ill-regarded for its code immaturity tbh. You don't even need to read the code base, just try to use it in a script:
It can exit 0 when the verification failed and 1 when it passed, and you have to ignore the exit code entirely and parse the output of the status fd to find the truth.
It provides options to enforce various algorithmic constraints but they only work in some modes and are silently ignored in others.
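For illustration, this is the kind of thing scripts (or a small helper like the sketch below) end up doing: ignoring the exit code and grepping the machine-readable status output instead. It is deliberately oversimplified - checking only for GOODSIG is not an adequate verification policy - and the function name is made up.

use std::process::Command;

// Hedged sketch: run `gpg --verify` and trust only the --status-fd output.
fn signature_looks_good(sig_path: &str, data_path: &str) -> std::io::Result<bool> {
    let out = Command::new("gpg")
        .args(["--status-fd", "1", "--verify", sig_path, data_path])
        .output()?;
    let status = String::from_utf8_lossy(&out.stdout);
    // GOODSIG is emitted for a good signature; BADSIG/ERRSIG show up on failure.
    Ok(status.lines().any(|line| line.starts_with("[GNUPG:] GOODSIG")))
}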
> I find this particular wording rather unpleasant and very unusual to what I'm used to from Debian in the past. I have to admit that I'm a bit disappointed that such a confrontational approach has been chosen.
> It's important for the project as whole to be able to
> move forward and rely on modern tools and technologies
> and not be held back by trying to shoehorn modern software
> on retro computing devices.
Loved this statement on the state of modern software built on the backbone of C (in Linux and elsewhere).
My main objection to Rust is how ugly it looks. Why did they have to change things such as how types and functions are defined? I really hate keywords such as def, fn, and other "explicit" function declarations. Also all the :: and <> from C++. Language-wise Java and C# did a much better job at introducing the features they needed without breaking the readability and familiarity of C.
The "spiral" type declaration syntax from C is hard to parse, both for humans and machines. That's probably why even C++ is moving away from it:
C:          int foo[5]
modern C++: array<int,5> foo
It's easy to criticize simple examples like the one above, since the C++ (or Rust) version is longer than the C declaration, but consider something like this:
char *(*(**foo[][8])())[];
and the idiomatic Rust equivalent:
let foo: Vec<[Option<fn() -> Vec<String>>; 8]> = Vec::new();
The latter can be parsed quite trivially by descending into the type declaration. It's also visible at a glance that the top-level type is a Vec, and you can easily spot the function type and its signature.
Another ergonomic aspect of the Rust syntax is that you can easily copy the raw type, without the variable name:
Vec<[Option<fn() -> Vec<String>>; 8]>
While the standalone C type looks like this:
char *(*(**[][8])())[]
which is quite a mess to untangle ;)
Also, I think C# is generally closer to Rust than to C when it comes to the type syntax. A rough equivalent to the previous example would be:
var foo = new List<Func<List<string>>?[]>();
I can't deny that "?" is more ergonomic than Rust's "Option<T>", but C# has also a way less expressive type system than Rust or C++, so pick your poison.
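To check the claim that the Rust declaration really does read top-down, here is a small self-contained program using that exact type (illustrative only; the stored function is made up):

fn greetings() -> Vec<String> {
    vec!["hello".to_string(), "world".to_string()]
}

fn main() {
    // Read top-down: a Vec of fixed-size arrays of 8 optional function
    // pointers, each returning a Vec<String>.
    let mut foo: Vec<[Option<fn() -> Vec<String>>; 8]> = Vec::new();

    let mut row: [Option<fn() -> Vec<String>>; 8] = [None; 8];
    row[0] = Some(greetings);
    foo.push(row);

    if let Some(f) = foo[0][0] {
        println!("{:?}", f());
    }
}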
That seems like a bad idea to me: Dependencies will be added, for very basic system utilities, on (parts of) a software ecosystem which is still a "moving target", not standardized, and IIANM itself has further dependencies. I wonder whether platform compatibility won't be jeopardized, either.
I would be worried if even C++ dependencies were added for basic system utilities, let alone something like Rust.
Now, granted, I'm not an expert on distro management, bootstrapping etc. so maybe I'm over-reacting, but I am definitely experiencing some fear, uncertainty and doubt here. :-(
> Dependencies will be added, for very basic system utilities, on (parts of) a software ecosystem which is still a "moving target", not standardized,
This is the status quo and always has been. gcc has plenty of extensions that are not part of a language standard that are used in core tools. Perl has never had a standard and is used all over the place.
If you're designing an OS distribution, you would have your base system written adhering strictly to language standards and without relying on flakey extensions (not that GCC C extensions are flakey, I'm guessing most/all of them are stable since the 1990s), and minimizing reliance on additional tools.
For example, IIUC, you can build a perl interpreter using a C compiler and GNU Make. And if you can't - GCC is quite bootstrappable; see here for the x86 / x86_64 procedure:
It feels like perhaps you’ve conflated the issue in this thread, which is about using Rust in apt, which is much, much later in the distribution bringup than this bootstrapping, and using Rust in something like the Linux kernel, which is more relevant to those types of bootstrapping discussions you posted.
apt is so late in the process that these bootstrapping discussions aren’t quite so relevant. My point was that at the same layer of the OS, there are many, many components that don't meet the same criteria posted, including perl.
The procedure to produce GCC you cited was 13 steps. Many of the tools were made after distributions required GCC. And a similar procedure could produce a Rust compiler.
Keep in mind APT is using the GNU dialect of C++, currently C++17, soon C++23. And it always exploits the latest C++ features while at the same time still having workarounds in the code for the lack of namespaces in C++ compilers prior to the C++ standardization...
But doesn't this actually strengthen my point? Debian transitioned from a tool written in a more demanding language to a tool written in a less-demanding one.
> It's important for the project as whole to be able to
> move forward and rely on modern tools and technologies
> and not be held back by trying to shoehorn modern software
> on retro computing devices.
Rust is the present and the future and it's quite logical that it becomes a key requirement in Linux distributions, but I'm really not convinced by the wording here… This last sentence feels needlessly antagonistic.
Right. I do have some nostalgia for installing Linux on a brand new PC which had less total RAM than my computer today has cache, but we need to be clear eyed about what makes sense for a maintained piece of software. I also have feelings about steam trains, but burning coal is not a sensible way to power a train in 2025.
A nostalgia-fuelled Linux distro, maybe using a deliberately slimmed down or retro kernel and chosen software, could make a lot more sense than continuing to try to squeeze Debian onto hardware that was already obsolete at the turn of the century while also promoting Debian as a viable choice for a brand new laptop.
But the message is not just "we will require Rust and here's the timeline (and some things may break)", it's also "I know what I'm going to break and it's your problem to fix it" (via the CC line) and ends in a passive-aggressive way (the phrase "Thank you for your understanding." is basically a polite way of saying "and fuck you too."). This gets people off-side, and makes it more likely they're going to throw up barriers rather than either do what you ask or find some other way to collaborate/achieve the ideal outcome.
Or those annoying nagging "well, what if I don't have an X86_64 CPU that was made in the last five years?", to which obviously our response should be: "get different hardware LOL, closedwontfix"
I'm not gonna waste effort on a fool's errand. So far, neither claim has been backed up whatsoever.
It's obviously more likely it's just fans of the language with a knee-jerk reaction of "ackshully you're totally definetely wrong, but uh... don't ask me how, you just are" than legitimate talking points.
No, supporting 5 year old mainstream hardware is a very reasonable thing to do. Supporting 20 year old hardware that barely anyone used even when it was new is not.
Indeed. Four targets are identified as potentially affected:
alpha, hppa, m68k and sh4
To be fair, lots of people did use Motorola 68xxx CPUs when those were new, it's just that it was 40+ years ago in products like the Commodore Amiga. The SH4 is most popularly connected to the Dreamcast, Sega's video game console from back when Sega made video game consoles.
The Alpha and PA-RISC were seen in relatively recent and more conventional hardware, but in much tinier numbers, and when I say relatively I mean early this century; these are not products anybody bought five years ago, and when they were on sale they were niche products for a niche which in practical terms was eaten by Microsoft.
> Be careful. Rust does not support some platforms well.[0] Anything
> that is not Tier 1 is not guaranteed to actually work. And
> architectures like m68k and powerpc are Tier 3.
>
> [0] <https://doc.rust-lang.org/beta/rustc/platform-support.html>.
Thank you for your message.
Rust is already a hard requirement on all Debian release
architectures and ports except for alpha, hppa, m68k, and
sh4 (which do not provide sqv).
Create a plan to add support for {alpha, hppa, m68k, and
sh4,} targets to the Rust compiler
> [ rustc_codegen_gcc, libcore atomics for each target (m68k does not have support for 64-bit atomics and will need patching to libgcc helper functions), ..., libc, liballoc and libstd (fix std::thread, std::fs, std::net, std::sync), and then compiletest will find thousands of bugs ]
So, CI build hours on those ISAs - on actual hardware eventually, but emulated at first?
Rust is a great language for devs. They love it and how developer centric everything about it is.
But for end users on Debian, trying to compile Rust stuff is a nightmare. They make breaking changes in the compiler (rustc) every 3 months. This is not a joke or exaggeration. It's entirely inappropriate to use such a rapidly changing language in anything that matters, because users on a non-rolling distro, LIKE DEBIAN, will NOT be able to compile software written for its constantly moving bleeding edge.
This is an anti-user move to ease developer experience. Very par for the course for modern software.
Rust has stability guarantees. As long as you don't change the edition of your binary/library, new compiler versions can still compile it, and you can even interlink crates using different editions, so I really don't understand this argument. C++ had a proposal for something similar in modules, but it did not pass. It would also have fixed so many of the horrible backward-compatibility issues that haunt the language as a whole.
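To make the edition point concrete, here is a minimal sketch (the crate and function names are made up for illustration): an identifier that later became a keyword keeps compiling as long as the crate declares the old edition, and newer-edition code can still call into it.

    // In a crate whose Cargo.toml declares `edition = "2015"`, `async` was not
    // yet a keyword, so this still compiles with a current rustc:
    pub fn async(x: u32) -> u32 {
        x + 1
    }

    // A 2021-edition crate depending on it can still call the function,
    // using the raw-identifier syntax:
    //     let y = old_crate::r#async(41);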
First, Debian is not a distro where users have to compile their software. The packages contain binaries, the compilation is already done. The instability of Rust would not affect users in any way.
And second, as a developer, I have never worked with a more unpleasant language than Rust. The borrow checker back then was abysmal. Rust is not about developer happiness - Ruby is - but its memory safety makes it a useful option in specific situations. But you can be sure that many developers will avoid it like the plague - and together with the breakage and long compile times, that's probably why moves like the one dictated here are so controversial.
> The instability of Rust would not affect users in any way.
Sure it would. Suppose a rust-based package has a security bug. Upstream has fixed it, but that fix depends on some new rust language feature that the frozen version of rust in Debian doesn't have yet.
Then the responsible Debian maintainer would backport that fix, as they have done in other languages for decades. Really, that's not user facing. It's a possible hassle for the maintainers and developers, which might be bad enough, but not a problem for users.
Yes, let’s introduce a hard dependency on a language which has no specification, only one compiler and supports a pitiful number of architectures. That’s what true progress looks like.
Ferrous Systems donated their language specification ("Ferrocene") to the Rust Foundation[0], which is working on integrating it, but that takes time, obviously.
Do you think that it was made up from whole cloth in the abstract machine and implemented later? No, it was based on the available implementations of its time.
On top of that, languages like Python do not have a specification and yet have multiple implementations.
That's literally what the C++ specification does. The reason for all the UB and implementation defined stuff is because 90s compilers disagreed on sizeof(int).
I don’t think it’s a fair argument to criticize a spec because it’s derived from an implementation. If anything it shows how the spec can be practically applied.
I don’t think this is a good argument, given that there are manifestly successful languages that don’t have a formal specification. Python comes to mind.
(Plus, architecture quantity isn’t exactly the thing that matters. Quality is what matters, and Rust’s decision to conservatively stabilize on the subset of LLVM backends they can reliably test on seems very reasonable to me.)
Considering the number of provers and static analysers, and given that C is the only mainstream language with a formally verified compiler, I would say fairly well, thank you.
Honestly, I am not even opposed to Rust. It has cool ideas. I do think it should care a lot more about being portable and properly defined and should have done so a lot earlier and I do deeply disagree with the opinion of some core team members that specification is meaningless.
C obviously always was a questionable choice for a tool like apt but Rust seems even worse to me. Apt has absolutely no need to be written in a low level language. At least you could argue that C was chosen because it’s portable but I don’t see what Rust has going for it.
The only spec you need meet to get Perl or Python running on a new platform is the C spec. A compiler meeting that spec is enough to port and compile the interpreter for a new platform.
A large share of C programs also (accidentally) depend on holes in the specification, a.k.a. undefined behavior, putting them at the mercy of the compiler to remain consistent.
The complaint about architecture support is somewhat valid (though: the world largely runs on a small handful of architectures). The other complaints are completely bogus, and repeatedly bringing them up just suggests bad faith.
Can you please make your substantive points thoughtfully instead of posting in the flamewar style? We're trying for the one and not the other here: https://news.ycombinator.com/newsguidelines.html.
Check the list of supported architectures https://wiki.debian.org/SupportedArchitectures, it's pretty short. Official support extends to exactly 5 architectures. So Debian was never going to be your go-to for anything not mainstream.
Flagged for deliberately spreading disinformation under the name StopDisinfo. As this user is aware, rust has a specification https://github.com/rust-lang/fls, which they have previously commented on: https://news.ycombinator.com/item?id=44927141 (and which I probably wouldn't have remembered if not for the rather remarkable name they are saying false things under).
Rust also has multiple compilers (rustc, mrustc, and gccrs) though only one is production ready at this time.
The Rust specification you link is performative and only intended to satisfy requirements of certification processes. No one is actually using it to implement the language, as far as I am aware.
There is other work on specifying Rust (e.g. the Unsafe Code Guidelines Working Group), but nothing approaching a real spec for the whole language. Honestly, it is probably impossible at this point; Rust has many layers of implementation-defined hidden complexities.
“There’s a standard, but it’s performative” is a different argument than the top level one.
But even if we accept that, it doesn’t seem like a good comparative argument: anybody who has written a nontrivial amount of C or C++ has dealt with compiler-defined behavior or compiler language extensions. These would suggest that the C and C++ standards are “performative” in the same sense, but repeated claims about the virtues of standardization don’t seem compatible with accepting that.
The original purpose of the C standard was to solve the problems created by the diversity of increasingly divergent implementations of C. They studied existing behavior across systems, proposed new language constructs, and it was generally a success (look at the proliferation of C in the 90s across many different systems and architectures).
The actual informal semantics in the standard and its successors is written in an axiomatic (as opposed to operational or denotational) style, and is subject to the usual problem of axiomatic semantics: one rule you forgot to read can completely change the meaning of the other rules you did read. There are a number of areas known to be ill-specified in the standard, with the worst probably being the implications of the typed memory model. There have since been formalized semantics of C, which are generally less general than the informal version in the standard and make some additional assumptions.
C++ tried to follow the same model, but C++ is orders of magnitude more complex than C, and thus the C++ standard is overall less well specified than the C standard (e.g. there is still no normative list of all the undefined behavior in C++). It is likely practically impossible to write a formal specification for C++. Still, essentially all of the work on memory models for low-level programming languages originates in the context of C++ (and was then ported back to C and Rust).
Well, the memory ordering model was developed for C++ and is used in C and Rust. But e.g. C++ does not have a pointer provenance model, which is arguably almost as important in this context. It turns out one of the things we really care about with multi-processing, and thus memory models, is linked list hacks, and those only work if you have provenance rules, which, uh, C++ just has a shrug emoji where the provenance rules would go, so that's not great. C has an ISO document, although it's not part of the ISO C standard, it's just another document so far, but Rust has specified provenance.
Also, the C++ ordering model is defective in the sense that while it offers the orders we actually use it also offers an order nobody knows how to implement, so it's basically just wishful thinking. For years now the C++ standard has labelled this order "temporarily discouraged" as experts tried to repair the definition and C++ 26 is slated to just deprecate it instead. Rust doesn't copy that defect.
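For reference, the orderings Rust exposes are the C++-derived ones minus consume; a minimal sketch of the usual release/acquire pairing (nothing APT-specific here):

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn publish() {
        DATA.store(42, Ordering::Relaxed);
        // Release pairs with the Acquire load below; Rust's Ordering enum
        // simply has no consume variant to deprecate.
        READY.store(true, Ordering::Release);
    }

    fn read_back() -> Option<u32> {
        if READY.load(Ordering::Acquire) {
            Some(DATA.load(Ordering::Relaxed))
        } else {
            None
        }
    }

    fn main() {
        publish();
        assert_eq!(read_back(), Some(42));
    }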
To be clear, my argument doesn't hinge on whether FLS is a normative specification of Rust or not. The argument is that being "specified" is neither necessary nor sufficient for language maturity or quality.
Can you link what you're referring to? As I understand it, this announcement is about Debian tooling, and you're referencing Linux kernel development discussions.
I don't get the need for Rust since I happily compile common lisp to machine code when I need fast binaries.
But the people who use the language have an amazing talent to make people on the fence hate them within half a dozen sentences.
They remind me of Christian missionaries trying to convert the savages from their barbarous religions with human sacrifice to the civilised religion with burning heretics.
Many programmers feel the same way about Lispers. It's best to set aside your gut feelings about the community and think primarily about the technical and organizational merits and disadvantages of the technology.
One of the most iconic Lisp sayings is literally about how every other language is supposedly inferior to Lisp, so I don't think it's a particularly good example here.
Not a Rust or even a systems language guy but it’s not “for some reason”. The reason is actually incredibly clear and about removing the single largest surface area of security problems in the entire history of Linux.
Is it the largest though? Based on https://owasp.org/www-project-top-ten/, it would be part of #6 or #8. While one can always walk and chew gum at the same time (and those rewriting system software are likely not the same people who need to design systems with less flawed access control), replacing stable software is not without risk (e.g. https://lwn.net/Articles/1043103/, which is an instance of #6 caused by rust). Would you trust a rewrite of OpenSSH in rust (absent any information about its authors)?
Owasp deals with web security only. That whole area is biased towards php/ruby/js/python/beam where those issues don't exist outside of native modules.
https://www.cvedetails.com/vulnerabilities-by-types.php is a bit more clear. It's xss, SQL, then memory. The first two are not possible to enforce a fix on - you can always make a decision to do something bad with no visible annotation. Even then, rich types like in rust make safe interfaces easier to produce. But rust tackles the next class of issues - one that you can verify to be safe or require an explicit "unsafe" around it.
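As a rough sketch of that last point (illustrative only, not from any actual codebase): the raw-pointer part has to sit inside an `unsafe` block, so the risky lines are the ones you can grep for and review.

    /// Safe wrapper: callers cannot cause an out-of-bounds read through it.
    fn first_byte(buf: &[u8]) -> Option<u8> {
        if buf.is_empty() {
            return None;
        }
        // The only raw-pointer dereference is confined to this block.
        let b = unsafe { *buf.as_ptr() };
        Some(b)
    }

    fn main() {
        assert_eq!(first_byte(b"deb"), Some(b'd'));
        assert_eq!(first_byte(b""), None);
    }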
But how many breaches get CVEs (or are caused by them)? Given how much users (and their data) interact via the web vs any other platform, reducing the breaches of services is going to improve users lives more than reducing the number of CVEs (does mongodb defaulting to not have username/password to access have a CVE).
As for preventing XSS and SQL injections, that's what good web frameworks do. If your framework encourages you to write raw unescaped SQL, or doesn't provide sensible defaults around content policies, then no matter what language it's in, there are going to be issues (and maybe if we called these frameworks "unsafe" then we'd get somewhere with fixing them).
In which code though? What counts as a security vulnerability (does the design of the play store such that searching for an app and the top result not being that app count)? I like everyone else want a secure browser, but my secure browser isn't going to prevent my email (or passwords) from appearing on https://haveibeenpwned.com/. I like rust, and want to write more rust, but if I were to port openssh to rust, I guarantee my rust code would have more CVEs than openssh.
My feeling is that in the specific instance of using Rust in apt, this is most likely a good thing (though I hope existing well-tested Rust libraries are used rather than NIHing them and introducing new bugs), but so far Ubuntu's rustification has not gone smoothly, so I'm more wary of these changes than of, e.g., the improvements to Firefox via Rust.
I don't agree with the OpenSSH take, but yeah, we don't need to migrate everything to Rust, and a migration doesn't always go smoothly. That said, I have been using sudo-rs on my Arch install for over a year and haven't had any issues yet.
There is no guarantee that other bugs do not flourish in the Rust ecosystem.
There are no publicly known code-quality checks of Rust programs beyond a big "trust us" (see Firefox with all its CVEs, despite "rust"). And combined with the Cargo ecosystem, where every malicious actor can inject malware, that is a big warning sign.
AFAIK Linux is using rustc directly, without cargo.
And just an anecdote: the Asahi Linux devs said that Rust made it very easy (maybe relative to working with C) to write the drivers for the Apple M1 and M2 series, so it seems that the language has its merits, even without the Cargo ecosystem.
Also, Rust will only minimize certain kinds of bugs; eliminating all of them is impossible. A few years ago someone (I believe it was Microsoft) said that 70% of the bugs they found were memory related [0], which means that Rust would have prevented most of those.
Maybe Rust is not the best answer, but as of now it is the most proven answer for this particular problem; who knows if Zig or another language will replace both C and Rust in the future.
I might be misunderstanding here but... what you're saying is that Rust programs can still have bugs? Isn't that the same as other programs except Rust prevents the most disastrous and common bugs that lead to most CVEs?
If I got that right, how is "it's still not perfect" an argument?
If perfect isn't needed, we might as well stick with the 20+ year old mature codebase; there's no need to move to another language. I mean, the "code to parse .deb, .ar, .tar" is done, and whatever memory bugs there were should have already been fixed.
Use Rust for evergreen projects by all means, just leave mature tested systems alone, please.
I have similar feelings about the go community. I like go on the whole, but certain aspects are atrocious. The 'go community' is strangely cult-like and complaining about the language is generally poorly received. As a result I very rarely interact with them and just do my own thing.
Rust isn't 'delivering value', people are just drinking the koolaid. CL has its niches, and it still fills them well: it's never been that successful as a general-purpose language.
As a GC language Common Lisp is slow as molasses. However, in the realm of languages which are slow as molasses it's got quite acceptable performance. I suppose it's in part because it's alien technology from beyond the bounds of the multiverse, but also in part due to the fact that we had really interesting tech back in the '70s and '80s that kinda got lost during the so-called AI winter.
To add on to that, with declarations the programmer can tell the Lisp compiler that (for example) a variable can be stack allocated to help improve performance. The fact that Lisp code is just data is another benefit towards performance as it means macros are relatively easy to write so some computation can be done at compile time. There are also various useful utilities in the spec which can be used to help profile execution of a program to aid in optimization, such as time and trace.
News flash, but Lisp compilers have gotten better since the 80's. In general, normal, unoptimized Lisp is probably on-par with Java, while optimized Lisp with a good compiler can get on par with C++. SBCL is a very good compiler, and GC technology has come a long way.
SBCL is fantastic, but the resources being poured into GCC and LLVM, combined with CPU manufacturers including architectural features to help C and related procedural languages work better, make it a massive uphill battle. I suppose that with proper use of things like sb-simd and arena allocation you can get in the ballpark of C, but with some tweaking C, C++, and Rust can get insanely fast (of course Fortran's better, but nothing compares to it, so let's leave that aside for now).
> Christian missionaries trying to convert the savages
Fast forward five centuries, and it turns out they were in fact pretty successful, as South America and central Africa are the places where Catholicism is most active today, far more than in Europe.
Rust evangelists are tiresome. It's not gonna fix the tech debt problem, no matter how much Rust crack you smoke. Disciplined use of C, with modern tools like Valgrind, will give you safe code without having to lobotomize yourself into fighting the borrow checker for everything, even manifestly simple code.
It would be nice (speaking as a Valgrind developer) if Valgrind could guarantee safe code. Unfortunately it doesn’t. Firstly, it does not detect all kinds of errors (and indeed no tool does). Secondly, it is unlikely that the test coverage is perfect.
Delusional overconfidence that developer “skill” is all that is needed to overcome the many shortcomings of C is not a solution to the problem of guaranteeing security and safety.
I find it surprising hearing statements like this from a developer of a tool for, well, C programmers mostly I guess? "Skill is all that is needed to prevent bugs and produce bug-free software" is a phrase I've never heard from an actual C programmer, but have heard plenty of times from detractors.
The C programmers I know are certainly not deluded or overconfident. They don't even think "their" language is a perfect one, or even a very good one. They just avoid black-and-white thinking. They take a practical approach about memory issues, seeing them more like any other kind of bug. It's a different aesthetics than you would maybe see from many Rust folks. They want to be productive and want to be in control and want to understand what their code does. In turn, they accept that in some cases, bugs (possibly memory bugs) creep in, some of which could go unnoticed for some time. They tend to not see that as a huge issue, at least in general, because an issue that has gone unnoticed (or didn't manifest) is often less of a problem than one that is immediately obvious. (In case of data corruption, it _can_ be a huge issue, and you have to add safeguards to prevent it, and have to be accepting some residual risk).
They understand that everything is a trade off and that with experience and practice, good architecture, good tooling etc. you can prevent many bugs early, and detect them early. They have tried many approaches to prevent bugs, including fancy languages and constructs, and have concluded that in many cases, perfect safety is not possible, in particular it's not possible without seriously hurting other requirements, such as productivity.
As to Valgrind, I can say that it was a bit of a mixed bag for me. It did help me find bugs a number of times, but I also had to configure it a bit because it was producing a lot of noise for some external libraries (such as libc). I don't really understand the underlying issues.
And you took "safe" to be "100% safe, never breaks" instead of just "relatively safe" or "improving safety a lot". They even hinted at the cost that comes with more heavy-handed approaches, which by the way aren't perfectly safe either.
Btw. kindly look at the other issue that I overconfidently waved away as "probably a false positive"?
I've been trying to build a debian package recently. I didn't have any crashes but I couldn't work out how to do it especially with the unbelievably contradictory and confusing documentation. I'm so glad I mainly use makepkg on Artix which is MUCH easier.
I struggle to believe that this is really about a call to improve quality when there seem to be some other huge juicy targets.
Are you sure you're not conflating documentation with random people's writings on the web? Because that there seems to be a helluva lot of cargo culting on this topic.
I was doing this months ago and have forgotten every twisty road I went down but I wanted to produce a binary package for a particular version of Ubuntu (and or Debian) and put it in a PPA so that people could use my code easily. It seemed like the rules file could be anything and I wouldn't have to implement a lot of targets that are either irrelevant or hard to understand the purpose of. So I used a script. Mistake - makefiles now seem to be the thing.
I struggled over how to layout the directories in my GIT repo. The fact that I want to build from the git repo is another layer of confusion - as opposed to building from a tarfile. I'm making something unstable for other developers right now, rather than a stable releasable item.
The next bit of extreme confusion is .... where should my package's install target put the binary plugins I built. I'm not going to try to go back and check over this in detail but as far as I remember the docs were very unspecific about that as if it could be anywhere and different user docs on the net seemed to show different things.
I got to the point where I could appear to build the thing on my machine but that's not good enough - the PPA has to be able to do it and then you've got to upload, wait and hope the log explains what's wrong well enough.
I tried looking at other packages - I'm building plugins for GNU make so I tried that - but it was using the build from tar approach (IIRC) and was way overcomplicated for my very simple package which is just a few .so files in a directory.
It took me a couple of weeks of messing around with every possible option to get this far and I just ran out of energy and time. I am not a beginner at programming - only at packaging - so IMO there is a great deal that could be done for the user experience. Don't get me wrong - I'm not picking on .deb. RPM is another incredibly horrible packaging system where every tiny mistake can force a long long long rebuild.
They're obviously complicated because they're trying to offer a lot and e.g. Artix doesn't use selinux so there's one misery avoided straight away but it has a consequence.
IMO the core docs just don't prevent any of this confusion. They seem like a reference for people who already know what they're doing and enough tutorial for a very specific simple case that wasn't mine. People wouldn't bother to write their own tutorials if the docs filled the need.
This all seems to be criticism of how Debian chooses to build packages. You may well be right in some of it. But I fail to see where contradicting documentation comes in.
It seems that there is a history of ways to build packages and the docs probably need to explain now versus then so that you can beware of what other people have written in the past. They also need to be comprehensive enough that you're not looking for other people's answers that may be out of date.
Back to my original point - I don't think Rust is going to fix this.
Asserting that kind of "ownership" over code seems rather distasteful to me. Maybe there would be less acrimony if developers got paid for it somehow.
Yeah, maybe Python was a bad example, didn't really think specifically about the languages I made as examples, they were just examples with the hope of making it easier to understand what exactly I was talking about.
Some communities indeed are better at embracing multiple languages, Python, JavaScript and Java/JVM comes to mind, where it isn't uncommon to call out to other languages.
> So say you're a C++ developer, contributed to APT over the years, see all of it linked to the C++ community which you are part of too, and someone wants to start migrating parts of it to Rust/$NewLang. I think it might sometimes affect more for these people than just the code, might even be "attacking" (strong word perhaps) their sense of identity, for better or worse.
How is language relevant here? If someone just rewrote it in the same language instead of a different one, do you feel the reaction would be significantly better?
This is really a Rust specific thing, since so many C++ projects were 1-on-1 translated to Rust only changing the licence.
Rust has been the tool of choice for stealing GPL3 open source projects where some people have spent all their free time on at some point in their life.
> Shell (which probably means specifically bash)
Debian has ongoing efforts to make many shell scripts (like postinst scripts in packages etc.) non-bash-specific.
A minimal Debian installation doesn't contain bash, but rather dash, which doesn't support bash extensions.
> A minimal Debian installation doesn't contain bash, but rather dash, which doesn't support bash extensions.
Please don't make up wrong facts that would be trivial to check first.
All minimal Debian installations include bash as it is an essential package. Where essential is used in the sense of https://www.debian.org/doc/debian-policy/ch-binary.html#esse...
Whether with a base install via the installer, or debootstrap, I've never seen bash missing.
For clarity, 'sh' is what is softlinked to dash. Not bash.
> there are approximately five languages that are presumed to be acceptable for core applications in the base system: [...] Python
I don't know if you've tried to get someone else's Python running recently, but it has devolved into a disaster effectively requiring containers to accurately replicate the exact environment it was written in.
Core system applications should be binaries that run with absolutely minimal dependencies outside of default system-wide libraries. Heck, I would go as far as to say applications in the critical path to repairing a system (like apt) should be statically linked since we no longer live in a storage constrained world.
> I don't know if you've tried to get someone else's Python running recently, but it has devolved into a disaster effectively requiring containers to accurately replicate the exact environment it was written in.
Please show me a project where you believe you "effectively require containers" just to run the code, and I will do my best to refute that.
> since we no longer live in a storage constrained world.
I think you do care about the storage use if you're complaining about containers.
And I definitely care, on principle. It adds up.
For reasons I can only assume have to do with poorly configured CI, pip gets downloaded billions of times annually (https://pypistats.org/packages/pip), and I assume those files get unpacked and copied all the time since there would be no good reason to use uv to install pip. That's dozens of petabytes of disk I/O.
> Please show me a project where you believe you "effectively require containers" just to run the code
I guess GP meant "containers" broadly, including things like pipx, venv, or uv. Those are, effectively, required since PEP 668:
https://stackoverflow.com/questions/75608323/how-do-i-solve-...
> "containers" broadly, including things like pipx, venv, or uv.
This statement makes no sense. First off, those are three separate tools, which do entirely different things.
The sort of "container" you seem to have in mind is a virtual environment. The standard library `venv` module provides the base-line support to create them. But there is really hardly anything to them. The required components are literally a symlink to Python, a brief folder hierarchy, and a five-or-so-line config file. Pipx and uv are (among other things) managers for these environments (which manage them for different use cases; pipx is essentially an end-user tool).
Virtual environments are nowhere near a proper "container" in terms of either complexity or overhead. There are people out there effectively simulating a whole new OS installation (and more) just to run some code (granted this is often important for security reasons, since some of the code running might not be fully trusted). A virtual environment is... just a place to install dependencies (and they do after all have to go somewhere), and a scheme for selecting which of the dependencies on local storage should be visible to the current process (and for allowing the process to find them).
this is not normally a problem for a distro where the environment is well known
It is such a non-problem that it forced them to hack a "fuck you, this Python is owned by the distribution, not you" message into pip, requiring you to agree to "breaking your system" to use it.
Of all the languages, python in the base system has been an unmitigated garbage fire.
> it forced them to hack a
It was not their action, nor is it hacked, nor is the message contained within pip.
The system works by pip voluntarily recognizing a marker file, the meaning of which was defined by https://peps.python.org/pep-0668/ — which was the joint effort of people representing multiple Linux distros, pip, and Python itself. (Many other tools ignore the system Python environment entirely, as mine will by default.)
Further, none of this causes containers to be necessary for installing ordinary projects.
Further, it is not a problem unique to Python. The distro simply can't package all the Python software out there available for download; it's completely fair that people who use the Python-native packaging system should be expected not to interfere with a system package manager that doesn't understand that system. Especially when the distro wants to create its tools in Python.
You only notice it with Python because distros aren't coming with JavaScript, Ruby etc. pre-installed in order to support the system.
Well, the essential system Python should be in /usr/sbin and read-only (insofar as Python allows that with its __pycache__ spam).
The fact that users have to keep up with multiple PEPs, error messages, --single-version-externally-managed, --break-system-packages, config files everywhere, stealth packages in .local and uv to paper over all of this shows that Python packaging is completely broken.
> the essential system Python should be in /usr/sbin
There's still quite a bit you can do with the "system Python". Mine includes NumPy, bindings for GTK, QT5 and QT6, Freetype, PIL....
> insofar Python allows that with its __pycache__ spam
This is, to my understanding, precisely why the standard library is pre-compiled during installation (when the process already has sudo rights, and can therefore create the `__pycache__` folders in those locations). This leverages the standard library `compileall` module — from the Makefile:
> The fact that users have to keep up with multiple PEPs, error messages, --single-version-externally-managed, --break-system-packages, config files everywhere, stealth packages in .local and uv to paper over all of this shows that Python packaging is completely broken.
Please do not spread FUD.
They don't have to do any of that. All they have to do is make a virtual environment, which can have any name, and the creation of which is explicitly supported by the standard library. Further, reading the PEPs is completely irrelevant to end users. They only describe the motivation for changes like --break-system-packages. Developers may care about PEPs, but they can get a better summary of the necessary information from https://packaging.python.org ; and none of the problems there have anything to do with Linux system Python environments. The config files that developers care about are at the project root.
Today, on any Debian system, you can install an up-to-date user-level copy of yt-dlp (for example) like so, among many other options:
You only have to know how one of many options works, in order to get a working system.
> All they have to do is make a virtual environment
Okay so to create a five line script I have to make a virtual environment. Then I have to activate and deactivate it whenever using it. And I have to remember to update the dependenceis regularly. For my five line script.
Seems to me the companies managing mloc-codebases pushed their tradeoffs on everyone else.
You, too: please do not spread FUD.
> Okay so to create a five line script... For my five line script.
I can guarantee that your "five line script" simply does not have the mess of dependencies you imagine it to have. I've had projects run thousands of lines using nothing but the standard library before.
> Then I have to activate and deactivate it whenever using it.
No, you do not. Activation scripts exist as an optional convenience because the original author of the third-party `virtualenv` liked that design. They just manipulate some environment variables, and normally the only relevant one is PATH. Which is to say, "activation" works by putting the environment's path to binaries at the front of the list. You can equally well just give the path to them explicitly. Or symlink them from somewhere more convenient for you (like pipx already does for you automatically).
> And I have to remember to update the dependenceis regularly.
No, you do not in general. No more so than for any other software.
Programs do not stop working because of the time elapsed since they were written. They stop working because the world around them changes. For many projects this is not a real concern. (Did you know there is tons of software out there that doesn't require an Internet connection to run? So it is automatically invulnerable to web sites changing their APIs, for example.) You don't have to remember to keep on top of that; when it stops working, you check if an update resolves the problem.
If your concern is with getting security updates (for free, applying to libraries you also got for free, all purely on the basis of the good will of others) for your dependencies, that is ultimately a consequence of your choice to have those dependencies. That's the same in every language that offers a "package ecosystem".
This also, er, has nothing to do with virtual environments.
> Seems to me the companies managing mloc-codebases pushed their tradeoffs on everyone else.
Not at all. They are the ones running into the biggest problems. They are the ones who have created, or leveraged, massive automation systems for containers, virtualization etc. — and probably some of it is grossly unnecessary, but they aren't putting in the time to think about the problem clearly.
And now we have a world where pip gets downloaded from PyPI literally billions of times a year.
Thank you! Exactly what I wanted to explain.
Yet, if I write a dockerfile, and need to use perl, system perl is fine.
If I need a python script, I have to arrange for all the RUN lines to live inside a virtual environment inside the container.
People are too harsh on this. It's not hard to install a version manager and set your primary python to that. Which is just good hygiene.
My understanding of the reasoning is that python-based system packages having dependencies managed through pip/whatever present a system stability risk. So they chose this more conservative route, as is their MO.
Honestly if there is one distribution to expect those kinds of shennanigans on it would be Debian. I don't know how anybody chooses to use that distro without adding a bunch of APT sources and a language version manager.
Yes, because then you're starting to use non-distro Python packages. If you want to do that, use a virtualenv; there is no other safe way (even if there were no Python in the base system).
Yes, the distro people are strong believers in virtual environments as best practice - for you, not them.
There's a good reason for this. The average user has no idea and doesn't care what language some random distro-packaged program is written in. They want to be able to run ubxtool or gdal_calc or virt-manager or whatever without setting up a virtual environment. Python developers on the other hand should be adept at such things, should they choose to use a non-distro packaged version of something.
The tricky part is when "users" start using pip to install something because someone told them to.
This should become the official error message!
Can you expand? I'm honestly curious how it is not a problem in Pythons case
That asks GP to prove a negative. What problems have you encountered, and why do you believe them to be common?
And storage is a big deal for Debian! It is basically the other player in embedded Linux, other than rolling your own, where a 4 GB or smaller eMMC is pretty common.
It really, really isn't. I wish this would stop being repeated so much.
Is X11/KDE considered part of the "base system"? If yes, then:
... isn't so surprising.
The Debian base system is much, much smaller. I'm surprised that people consider Python to be part of it. However, APT depends on Perl and the C++ run-time library, so those two languages have been part of the base system for a very long time.
> I'm surprised that people consider Python to be part of it. However, APT depends on Perl
Pardon?
Debian doesn't ship files in /usr/local.
You can of course add your own "apt" binary in /usr/local/bin/apt which can be written in any language you like, say COBOL, Java, Common Lisp or Python.
Which apt is that?
? file `which apt`
/usr/bin/apt: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=157631f2617f73dee730273c7c598fd4d17b7284, for GNU/Linux 3.2.0, stripped
I wonder how much of that Perl support is simply regex and other parsing utilities.
I've noticed a lot of that in base OS systems.
It's a curiosity more than anything though.
A substantial fraction of apt is written in perl. It's actually still pretty core to debian, even if applications are using it less.
There is no Perl code in APT itself, nor in dpkg on the installing side; Perl is used by dpkg-dev, i.e. when building packages.
Ah, my apologies. I must admit I am not entirely clear on where the boundaries are between the different tools and other elements of the debian packaging process.
I'm sure the Debian base system doesn't include GUI stuff at all.
> In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages
> Critical infrastructure still written in C - particularly code that parses data from untrusted sources - is technical debt that is only going to get worse over time.
But hasn't all that foundational code been stable and wrung out already over the last 30+ years? The .tar and .ar file formats are both from the 70s; what new benefits will users or developers gain from that thoroughly battle-tested code being thrown out and rewritten in a new language with a whole new set of compatibility issues and bugs?
I wish, but I get new security bugs in those components like every year or so. Not all are tracked with security updates, to be fair; for some we say it's your own fault if you use the library to parse untrusted code.
After all the library wasn't designed around safety, we assumed the .debs you pass to it are trusted in some way - because you publish them to your repository or you are about to install them so they have root maintainer scripts anyway.
But as stuff like hosting sites and PPAs came up, we have operators publishing debs for untrusted users, and hence suddenly there was a security boundary of sorts and these bugs became problematic.
Of course memory safety here is only one concern, if you have say one process publishing repos for multiple users, panics can also cause a denial of service, but it's a step forward from potential code execution exploits.
I anticipate the rewrites to be 1 to 1 as close as possible to avoid introducing bugs, but then adding actual unit tests to them.
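Purely as a hypothetical sketch of what such a test might look like (the function and its behaviour here are illustrative, not APT's actual code): tar headers store sizes as padded octal text, which is exactly the kind of fiddly parsing that benefits from unit tests.

    /// Parse a NUL/space-terminated octal field, as used for sizes in tar headers.
    fn parse_octal(field: &[u8]) -> Option<u64> {
        let mut value: u64 = 0;
        let mut seen_digit = false;
        for &b in field {
            match b {
                b'0'..=b'7' => {
                    value = value.checked_mul(8)?.checked_add(u64::from(b - b'0'))?;
                    seen_digit = true;
                }
                b' ' | 0 => break, // padding terminates the field
                _ => return None,  // reject anything else
            }
        }
        if seen_digit { Some(value) } else { None }
    }

    #[cfg(test)]
    mod tests {
        use super::parse_octal;

        #[test]
        fn parses_padded_size_fields() {
            assert_eq!(parse_octal(b"0000644\0"), Some(0o644));
            assert_eq!(parse_octal(b"1750\0\0\0\0"), Some(0o1750));
        }

        #[test]
        fn rejects_garbage() {
            assert_eq!(parse_octal(b"12x4"), None);
            assert_eq!(parse_octal(b"\0\0\0\0"), None);
        }
    }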
"The best laid plans of mice and men often go awry."
> But hasn't all that foundational code been stable and wrung out already over the last 30+ years?
Not necessarily. The "HTTP signature verification code" sounds like it's invoking cryptography, and the sense I've had from watching the people who maintain cryptographic libraries is that the "foundational code" is the sort of stuff you should run away screaming from. In general, it seems to me to be the cryptography folks who have beat the drum hardest for moving to Rust.
As for other kind of parsing code, the various archive file formats aren't exactly evolving, so there's little reason to update them. On the other hand, this is exactly the kind of space where there's critical infrastructure that has probably had very little investment in adversarial testing either in the past or present, and so it's not clear that their age has actually led to security-critical bugs being shaken out. Much as how OpenSSL had a trivially-exploitable, high criticality exploit for two years before anybody noticed.
For actual cryptography code, the best path is formally verified implementations of the crypto algorithms, with parsers for wrapper formats like OpenPGP or PKCS#7 implemented in a memory safe language.
You don't want the core cryptography implemented in Rust for Rust's sake when there's a formally verified Assembler version next to it. Formally verified _always_ beats anything else.
I should have clarified that I was primarily referring to the stuff dealing with all the wrapper formats (like PKIX certificate verification), not the core cryptographic algorithms themselves.
The core cryptographic algorithms, IMHO, should be written in a dedicated language for writing cryptographic algorithms so that they can get formally-verified constant-time assembly out of it without having to complain to us compiler writers that we keep figuring out how to deobfuscate their branches.
Sure. But assembly implementations by definition are not portable. And I don’t know what it takes to write a formally verified library line this, but I bet it’s very expensive.
In contrast, a Rust implementation can be compiled for many architectures easily, and is intrinsically safer than a C version.
Plus cryptography and PKI is constantly evolving. So it can’t benefit from the decades old trusted implementations.
> Formally verified _always_ beats anything else.
Formally verified in an obscure language where it's difficult to find maintainers does not beat something written in a more "popular" language, even if it hasn't been formally verified (yet?).
And these days I would (unfortunately) consider assembly as an "obscure language".
(At any rate, I assume Rust versions of cryptographic primitives will still have some inline assembly to optimize for different platforms, or, at the very least, make use of compiler intrinsics, which are safer than assembly, but still not fully safe.)
With crypto, you really want to just write the assembly, due to timing issues that higher level languages simply cannot guarantee.
It's insanely complex, particularly you want _verified_ crypto. Last year (or two years ago?) I had to fix a tiny typo in OpenSSL's ARM assembly for example, it was breaking APT and Postgres left and right, but only got triggered on AWS :D
You don't want to write the whole thing in assembly, just the parts that need to be constant time. Even those are better written as subroutines called from the main implementation.
Take BLAKE3 as an example. There's asm for the critical bits, but the structural parts that are going to be read most often are written in rust like the reference impl.
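For illustration, the kind of routine being discussed looks something like the sketch below: a branch-free comparison of equal-length secrets. Part of the reason people reach for assembly or verified implementations is that nothing in the source stops an optimizer from turning this back into an early-exit loop.

    /// Compare two equal-length byte strings without data-dependent branches.
    /// Note: the compiler is still free to rewrite this, which is exactly why
    /// production crypto pins the hot paths down in assembly or vetted code.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        assert_eq!(a.len(), b.len());
        let mut diff: u8 = 0;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y; // accumulate differences instead of returning early
        }
        diff == 0
    }

    fn main() {
        assert!(ct_eq(b"correct horse", b"correct horse"));
        assert!(!ct_eq(b"correct horse", b"correct forse"));
    }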
Yes, for sure.
I would like a special-purpose language to exist precisely for writing cryptographic code where you always want the constant-time algorithm. In this niche language, "We found a 20% speed-up for Blemvich-Smith, oops, it actually isn't constant time on Arrow Lake micro-code versions 18 through 46" wouldn't even get into a nightly, let alone be released for use.
It seems that for reasons I don't understand this idea isn't popular and people really like hand rolling assembly.
There's been plenty, like RobustIsoCrypt or FaCT:
https://github.com/PLSysSec/FaCT
They struggle to guarantee constant time for subroutines within a non-constant time application, which is how most people want to use cryptography.
I do think this is pretty much the one use case for a true "portable assembler", where it basically is assembly except the compiler will do the register allocation and instruction selection for you (so you don't have to deal with, e.g., the case that add32 y, x, 0xabcdef isn't an encodable instruction because the immediate is too large).
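Rust's inline assembly goes partway in that direction: you still pick the instructions, but register allocation is left to the compiler. A minimal x86-64 sketch:

    use std::arch::asm;

    fn add_one(v: u64) -> u64 {
        let mut out = v;
        // No concrete register is named: `reg` lets the compiler allocate one,
        // though instruction selection is still entirely up to the programmer.
        unsafe {
            asm!("add {0}, 1", inout(reg) out);
        }
        out
    }

    fn main() {
        assert_eq!(add_one(41), 42);
    }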
You can't avoid those with NASA Power of 10 sorts of restrictions?
If you mean GnuPG, that is what Snowden used. It could be better than new software that may have new bugs. Memory safety is a very small part of cryptographic safety.
(New cryptographic software can also be developed by all sorts of people. In this case I'm not familiar, but we do know that GnuPG worked for the highest profile case imaginable.)
GPG works great if you use it to encrypt and decrypt emails manually as the authors intended. The PGP/GPG algorithms were never intended for use in APIs or web interfaces.
Ironically, it was the urge not to roll your own cryptography that got people caught in GPG-related security vulnerabilities.
There are none. This is a canonical employee trying to force Ubuntu's decisions (rust coretools) on the wider Debian community.
Additionally, the fact that this comes across as so abrasive and off-putting is on brand for online Rust evangelicalism.
> But hasn't all that foundational code been stable and wrung out already over the last 30+ years?
No: a little less than 5 years ago there was CVE-2020-27350, a memory safety bug in the tar/ar implementations.
But just this year there was CVE-2025-62518 in tokio-tar.
Every software is stable and wrung out until someone finds an exploit.
Recently the rust coreutils had a bug and this essentially disabled auto-updates on Ubuntu. :)
Seeing this tone-deaf message from an Ubuntu employee would be funny if I didn’t actually use Ubuntu. Looks like I have to correct that…
Isn't it also funny that all of these things are done by the same person?
In all seriousness though, let me assure you that I plan to take a very considerate approach to Rust in APT. A significant benefit of doing Rust in APT rather than rewriting APT from scratch in Rust means that we can avoid redoing all our past mistakes because we can look at our own code and translate it directly.
You have never been skilled at being considerate:
https://github.com/keepassxreboot/keepassxc/issues/10725#iss...
Christ that was handled badly.
Honestly having seen trainwreck after trainwreck after trainwreck come out of Canonical for the last decade, I'm sure I'm not the only one that has strong doubts about anyone associated being able to "avoid redoing past mistakes" or to make things not suck.
Seems reasonable. I wish you would have written that in your original message. Good luck…
The Fil-C project ( https://fil-c.org/ ) seems like a more pragmatic way to deal with C security holes in old, well-loved userspace code. It effectively turns C into a managed language rather than a bare metal one, seems to remove a lot of the impetus to rewrite.
I really like the idea of Fil-C for derisking legacy code, but
- It's not an option for debian core infrastructure until it supports at least the same platforms debian does (arm, riscv, etc) and it currently only supports x86_64.
- It doesn't turn C into a modern language; since it looks like there's active development here, getting the productivity benefits of moving away from C is likely still worth it.
If C isn't going away yet, just doing Fil-C for x86_64 might still be a big win, as it could catch a lot of stuff at runtime for x86_64 users that can then be fixed for everyone.
Given that a few languages from a similar timeframe are still around, even if with fewer users, C is guaranteed to stay around as long as current computing models stay relevant.
If you're single platform (Fil-C is x86-64 only), if the program is finished (Fil-C doesn't magically make maintaining a C project any easier to handle) and if performance isn't relevant (Fil-C is and despite its originator's confidence always will be bigger and slower than what you have today) then I agree.
Making core package infrastructure 10x slower doesn't seem especially pragmatic.
The author's benchmarks suggest 10× would be a pathological case!
But even so - what price correct & secure software? We all lost a tonne of performance overnight when we applied the first Meltdown and Spectre workarounds. This doesn't seem much different.
We have an alternative that isn't 10x slower, and comes with many other benefits (Rust). The only cost is losing hardware support for some very obsolete and very unpopular platforms. (Nevermind that Fil-C's hardware support is narrower than Rust's.)
Rust doesn't automatically add memory safety to all existing C code, which will need to be maintained for decades; Fil-C nearly does, and it's still early days.
> We have an alternative that isn't 10x slower, and comes with many other benefits
Anyone involved with development around a fruity company would say Swift ;)
I don't get the impression the Swift team is especially interested in use cases outside of OS X and iOS. (The context here is Debian.)
There seems to be a push to get Swift outside of just Apple's platforms, namely Linux and Android support and backend microservice development.
It is not only about memory safety. The C community is aging fast and young developers choose different languages. We started to rewrite all C and C++ code in my team because it is really hard to find people willing to maintain it. From my experience, the typical C or C++ programmer is around 40 and not willing to switch jobs.
Inviting rank amateurs to established projects while expecting them to operate as free labor in the hopes of future relevance for employment has a distinctly different feel. Missives like the OP feel like preying on a desperate and young generation when paired with the commentary.
If all the entry-level jobs are C or C++, do you think companies would have a hard time filling them? Would the unemployed new graduates really shun gainful employment if Rust wasn't part of the equation?
Meanwhile, hiring managers left and right are reporting that within hours of a job being posted, they are flooded with hundreds of applications. And you can't find a single person because of the programming language of your stack? And to remedy this, you're going to rewrite your stack in an unproven language? Have you considered that if you can't find anyone that it might not be a programming language or tech stack problem?
My experience in working in C++ was that the salary wasn’t as good as what I could get elsewhere. That was a big part of why I left, I didn’t hate the job.
Wow, I've never considered this aspect of it but you're right. If you want widespread access to incoming developers that can contribute to your project, that really does mean Rust by default at this point if you want a low level language regardless of what you prefer.
A pity the banks didn't do that with COBOL....
They did, Java and .NET.
Is the pay good at your job? Rust jobs are few and far between, so I think Rustafarians are willing to take lower pay to work in their dream language. Companies will of course abuse this.
I agree that new software should be written in Rust or another, safer language. But I don't agree that it's wise to start retrofitting old software in this way. New code is almost always worse in quality than old code, and I do not believe that the safety gains from Rust are so advantageous that they will offset that factor.
Would this logically extend to also include C-reliant languages like Python and Ruby (the latter being mostly a grammar underpinned by C) as technical debt also?
Yes, which is why in 2025 it is a good idea to use Rust with python bindings for your performance sensitive code.
A lot of the C code used in python is calling out to old, battle tested and niche libraries so it is unlikely that someone is going to replace those any time soon but Rust is definitely increasing as time goes on for greenfield work.
Most Python libraries that rely on C are numerical stuff.
From experience with this type of code, you typically end up with a load of functions that pass a numpy array and its length/dimensions to a C function that works on that array in place, or on an output array that was also supplied. When you get this wrong, it's usually a crash caused by out-of-bounds memory access, which would still be a runtime crash in Rust. So I'm not sure there's a massive benefit for these types of code other than the fun of learning Rust. Other than that, you're typically writing C/C++ to interface with C and Fortran libraries that are really battle tested, and for which it will take decades for Rust to have equivalents. So moving to Rust will just cause you to have lots of unsafe statements - not a bad thing necessarily if you are doing a lot of work at the C level in existing code, but less of a benefit if you are doing a straight wrap of a library.
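For comparison, the Rust version of that pattern tends to look like the sketch below (names made up): the exported function still takes a raw pointer and a length for the C/NumPy side, but everything past the single `unsafe` line operates on a bounds-checked slice.

    /// C-ABI entry point, callable from Python via ctypes/cffi by passing a
    /// NumPy array's data pointer and element count. Illustrative only.
    #[no_mangle]
    pub extern "C" fn scale_in_place(data: *mut f64, len: usize, factor: f64) {
        if data.is_null() {
            return;
        }
        // Safety: the caller promises `data` points to `len` valid f64 values.
        let values = unsafe { std::slice::from_raw_parts_mut(data, len) };
        for x in values.iter_mut() {
            *x *= factor; // everything here is ordinary, bounds-checked Rust
        }
    }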
On the flip side, things on the web side of Python like uWSGI which is written in C are important for the security aspect but they’re a very small part of the Python ecosystem.
Not really.
All (current) languages eventually have a compiler/runtime that is memory unsafe. This is basically fine because it's a tiny amount of surface area (relative to the amount of code that uses it) and the input to it is relatively benign, so there are enough eyes/time/... to find bugs.
There's also nothing stopping you from re-implementing python/ruby/... in a safer way once that becomes the low hanging fruit to improve computer reliability.
> basically fine
How many type confusion 0 days and memory safety issues have we had in dynamic language engines again? I've really lost count.
How many ways to cause a segmentation fault in CPython, that don't start with deliberate corruption of the bytecode, are you aware of?
How is "type confusion" a security issue?
Are you counting ones that involve running malicious code in a sandbox and not just trusted code on untrusted input? Because then I'd agree, but that's a much harder and different problem.
My impression is that for the trusted code untrusted input case it hasn't been that many, but I could be wrong.
It depends, what language was the sandbox written in?
Sandboxes are difficult independent of language, see all the recent speculation vulnerabilities for instance. Sure, worse languages make it even harder, but I think we're straying from the original topic of "python/ruby" by considering sandboxes at all.
> Rust is explicitly designed to be what you'd get if you were to re-create C knowing what we know now about language design and code safety.
I don't know about that. Look at the code for the COSMIC desktop environment's clock widget (the cosmic-applet-time directory under <https://github.com/pop-os/cosmic-applets>), for example. It's pretty much unreadable compared to a C code base of similar complexity (GNU coreutils, for example: <https://savannah.gnu.org/projects/coreutils/>).
I think this is a you problem
as in that "isn't the style of code you are used to"
I don't think "how well people not familiar with your language can read it" is a relevant metric for most languages.
Also IMHO while C feels readable, it isn't when it matters, because it very often just doesn't include information you need when reading. Looking at a function header doesn't tell you if a ptr is nullable, or if a mut ptr is a changeable input value or instead an out ptr which is supposed to point to uninitialized memory, or, if there is an error, how that affects the validity of any mutable ptrs passed in. To name some examples (let's not even get started on preprocessor macros pretending to be C functions). In conclusion, while C seems nice to read, it is IMHO often a painful experience to "properly" read it, e.g. in the context of a code review.
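To illustrate the point about how much of that information can live in the signature itself, here is a hedged sketch; `parse_record`, `Record`, and `ParseError` are invented for the example:

```
struct Record {
    id: u32,
}

enum ParseError {
    Empty,
}

// The signature alone answers the questions raised above:
// - `input` cannot be null and will not be mutated,
// - `scratch` is explicitly optional and explicitly mutable,
// - on error the caller gets `Err(...)` and no half-initialized out-pointer,
//   because the parsed value is returned by value.
fn parse_record(input: &str, scratch: Option<&mut Vec<u8>>) -> Result<Record, ParseError> {
    if input.is_empty() {
        return Err(ParseError::Empty);
    }
    if let Some(buf) = scratch {
        buf.clear();
        buf.extend_from_slice(input.as_bytes());
    }
    Ok(Record { id: input.len() as u32 })
}
```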
As a side note: the seemingly verbose syntax of e.g. `chrono::DateTime` comes from there being two DateTime types in use in the module, one from the internationalization library (icu) and one from a generic time library (chrono). Same for Sender, etc. That isn't a super common issue, but it happens sometimes.
I think the Rust example’s biggest readability sin is using the full names of things like foo::bar::Baz instead of just Baz, but I get why they did that. When you import a lot of things into a file the latter way, it’s easy to get lost in “was that a foo Baz or a wiz Baz?” Sometimes it’s easier just to use the long names everywhere to be explicit.
If I wanted to tweak the Rust project, I’d feel pretty confident I was calling the right things with the right params.
That's a style choice that I think comes from former C++ devs.
Java can potentially have the same problem. But because everyone uses an IDE and because it's rarely really an issue, everyone will simply import `Baz` rather than worry about the Foo::Baz and Bat::Baz collision. It does happen in java code, but I can't stress how infrequently it's actually a problem.
I don’t think that’s quite right. I haven’t written C++ since the 90s, and I use IDEs (Emacs and Zed), but I still sometimes reach a mental threshold where I look at my screen and see way too many names to have to hold in my mental buffer, then decide to make them more explicit.
IDK what the state of Emacs/Zed is in terms of type information (I'm sure it depends on the language in question). For Jetbrains/Eclipse/Netbeans, if there's a question about a type you can ctrl+click on it and immediately pull up all information about it.
In java, I rarely pay attention to the `import` section (and I know most devs at my company).
You can look up `using namespace std;` in google and you'll find a lot of articles saying it's a bad practice in C++. Everyone recommends writing the full `std::cout` rather than `cout`.
All modern editors do that pretty well with language servers now. Specifically, Emacs and Zed do this perfectly with Rust.
I do think it’s down to personal preference. With the fully qualified names, I can look at the screen and follow the flow without having to mouse over the various names in play. For that matter, I could print it out if I wanted to and still have all the information.
I don’t think you’re objectively wrong. It’s more that we have different approaches to managing the complexity when it gets hairy.
I just import them with unique names if there is a collision - Wiz_Baz and Foo_Baz
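In Rust that is a one-line rename at the import site; a tiny sketch with made-up module names:

```
mod foo { pub struct Baz; }
mod wiz { pub struct Baz; }

// Rename on import so call sites stay short but unambiguous.
use crate::foo::Baz as FooBaz;
use crate::wiz::Baz as WizBaz;

fn main() {
    let _a = FooBaz;
    let _b = WizBaz;
}
```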
That's an apples to oranges comparison.
Most of the code in that module is dedicated to the gui maintenance. The parts that do deal with time are perfectly legible.
> pretty much unreadable
I disagree. Both seem perfectly readable, assuming you know their preferred coding styles. As a non-C programmer, I absolutely despise running into #ifndef SOME_OBSCURE_NAME and `while (n) { if (g) {` but C (and in the latter case Go) programmers seem to love that style.
Comparing a bunch of small, barely integrated command line programs to a UI + calendar widget doesn't seem "of similar complexity" to me. Looking at a C clock widget (https://gitlab.freedesktop.org/xorg/app/xclock/-/blob/master...) the difference seems pretty minimal to me. Of course, the XClock code doesn't deal with calendars, so you have to imagine the extra UI code for that too.
https://cgit.git.savannah.gnu.org/cgit/coreutils.git/tree/sr...
I beg to differ.
A lot of the complexity is to handle localized date formats on systems that support them. Most other implementations of 'date' do not do this.
The easiest way to see this is in US locales, which use 12-hour clocks in GNU 'date' but not other implementations:
I added a test case for that recently, since it is a nice usability feature [1].

[1] https://github.com/coreutils/coreutils/commit/1066d442c2c023...
To be fair GUI code is going to be harder to read than a non-interactive utility, in any two languages
rust is kinda ugly. i think i like zig better.
Actually I agree. I wish Rust kept the basic C syntax for function etc. I really hate def, fn, and other such keywords.
Oh please, in a decade Rust will also be technical debt and people will be wanting to write it in Brust or whatever is the trendy new language.
It’s been ten years since Rust 1.0, if that were to happen, we’d be seeing it now. But we don’t.
but we do... 4 years after Rust, we got a first glimpse at Zig. Even today, there's a lot of people that believe that Zig would have been a better choice for the Linux kernel than Rust.
And the answer to "why now" is quite simple - Because of the whole Rust in kernel debate, people started scrutinizing the situation.
Then Zig people should be the focus of “this kids and their new shiny” criticism, not Rust.
People who become aware of something only when it’s being used by something huge also aren’t early adopters either. Rust has already been in the Windows kernel for years at this point, with none of this consternation.
That makes no sense. It was much longer than 10 years before people considered C to be tech debt for example. Idk if it will be 10 years exactly, but we are seeing better languages emerging (Swift 6, Mojo, probably others) that provide the same safety guarantees and performance/use case profiles as Rust, but are vastly more ergonomic and lovely to use. I fear Linux was hasty integrating Rust because it will likely prevent them from integrating something better in the near future.
You’re the one that said ten years.
You're the one who said "ten years since Rust 1.0"
Right, that’s when Rust was new and shiny. People who chase the latest new shiny thing don’t consider ten year old technology new and shiny, they’d have moved on years ago.
The person who they replied to stated a decade. This whole thing is pretty clear cut and dry.
I said ten years from now...
C's been around for over 50 years. Yes, I would hope that something does replace Rust after 50 years.
Memory safety is mostly an issue of the past. Clearly, there are new code bases with memory issues too. But we have tools to prevent that. The new security issues are supply chain attacks. And Cargo is exactly the way to get those.
> Memory safety is mostly an issue of the past.
Can you provide some evidence to support this? There’s a large body of evidence to the contrary, e.g. from Chrome[1].
> But we have tools to prevent that. The new security issues are supply chain attacks.
Speaking as a “supply chain security” person, this doesn’t really hold water. Supply chain attacks include the risk of memory unsafety lurking in complex dependency trees; it’s not an either-or.
[1]: https://www.chromium.org/Home/chromium-security/memory-safet...
What part of C package management defends against supply chain attacks?
Does it audit third-party code for you?
I think it's mostly the fact that C dependencies are much rarer and much harder to add and maintain.
The average C project has at most a handful of other C dependencies. The average Rust, Go or NodeJS project? A couple hundred.
Ironically, because dependency management is so easy in modern languages, people started adding a lot of dependencies everywhere. Need a leftpad? Just add one line in some yaml file or an "Alt-Enter" in an IDE. Done.
In C? That is a lot more work. If you do that, you do it for stuff you absolutely need for your project, because it is not easy. In all likelihood you write that stuff yourself.
CVE-2024-3094, was it? You can argue that in C it is much easier to obfuscate your exploit. Implementing something in C is also a lot more work, so you might also be inclined to use a 3rd party library.
I never found it hard to add a C library to my projects using pkg-config. And yes, when the package came from Debian I have some trust that it is not a huge supply chain risk.
I think the problem started with the idea of language-level package managers that are just GitHub collections instead of curated distribution-level package managers. So my response to "C has no good package manager" is: it should not have a package manager, and Cargo or npm or the countless Python managers should all not exist either.
pkg-config isn’t the hard bit though, is it?
Usually the hard bit with C libraries is having dependencies with dependencies all of which use their own complex build systems, a mix of Make, CMake, Autotools, Ninja, etc.
Then within that, there's a mix of projects using the normal standard names for build parameters and projects that don't, e.g. PROJECTNAME_COMPILER instead of CMAKE_C_COMPILER.
The package manager takes care of the dependencies. And one does not need to compile the libraries one uses, so how complicated this is does not matter. I install the -dev package and I am done. This works beautifully and where it does not the right move would be to fix this.
I think in most of my projects, many of the C++ packages I used for work (lots of computer vision, video codecs etc) I had to compile and host myself. The latest and greatest of OpenCV, dlib or e.g. gstreamer weren't available on the distros I was using (Ubuntu, Fedora, Centos). They'd lag a year or more behind sometimes. Some stuff was outright not available at all via package manager - in any version.
So, yes, you do have to figure out how to build and package these things by yourself very often. There are also no "leftpad" or similar packages in C, if you don't want to write something yourself.
In contrast - virtually every software package of any version is available to you in cargo or npm.
Virtually every package is in cargo and npm because there is no curation. This is exactly why it is a supply-chain risk. The fix is to have a curated list of packages, but this is what Linux distribution are. There is no easy way out.
As GvR famously reminded everyone in 2011, a language and its packaging system are separate things.
But Rust, you know, has one.
> Critical infrastructure still written in C ... is technical debt that is only going to get worse over time.
No. Rust is not magic, it just forces a discipline in which certain safety checks can be made automatically (or are obviated entirely). In other languages like C, the programmer needs to perform those checks; and it's technical debt if the C code is not coded carefully and reviewed for such issues. If coding is careful and the code is reviewed - there is no technical debt, or perhaps I should say no more than the unsafe parts of a rust codebase or the standard libraries. And the safety of critical infra code written in C gets _better_ over time, as such technical debt is repaid.
> Rust is explicitly designed to be what you'd get if you were to re-create C knowing what we know now about language design and code safety.
That's not true. First, it's not a well-defined statement, since "what we know now" about language design is, as it has always been, a matter of debate and a variety of opinions. But even regardless of that - C was a language with certain design choices and aesthetics. Rust does not at _all_ share those choices - even if you tack on "and it must be safe". For example: Rust is a much richer language - in syntax, primitive types, and standard library - than C was intended to be.
> If coding is careful and the code is reviewed - there is no technical debt, or perhaps I should say no more than the unsafe parts of a rust codebase or the standard libraries. And the safety of critical infra code written in C gets _better_ over time, as such technical debt is repaid.
How many decades have we tried this? How many more to see that it just hasn't panned out like you describe?
> If coding is careful and the code is reviewed - there is no technical debt, or perhaps I should say no more than the unsafe parts of a rust codebase or the standard libraries.
History shows again and again that this is impossible.
Name a large C application that’s widely used, and I’ll show you at least one CVE that’s caused by a memory leak from the project
[flagged]
While I get your view, I think examples would help to move the conversation along in a more constructive manner
> it's factually just better, am I right?
so cool to see people getting it!
If people really, really want to have all infra written in Rust, they should step-up and stop using LLVM.
There is a pure-Rust compiler backend in the works, but that's going to take a long time to mature, so it's just pragmatic to use LLVM in the meantime. Especially since the exploitation potential is pretty contrived in this case - if you compile compromised code then you're probably owned anyway, regardless of the backend's memory safety.
What is the concern with LLVM? I'm asking because I genuinely don't know.
I think the issue he's pointing at is that LLVM is itself written in C++ - so the entire "trusted" Rust toolchain depends on trusting a huge C++ app.
Thankfully the “trust” you need out of a compiler is very very different. It would be closer to claiming you need to compile it on a Rust OS too because you’re trusting a large C/C++ app.
Separation of concerns solves this because the compiler has minimal impact on the trustedness of the code the Rust compiler generates. Indeed, one would expect that all the ways that the LLVM compiler fails are ways any Rust implementation would fail too - by generating the wrong code which is rarely if ever due to memory safety or thread safety issues. There may be other reasons to write the compiler backend in Rust but I wouldn’t put the trust of compiled Rust code as anywhere near the top of reasons to do that.
I would assume that because it is written in unsafe C++, it creates technical debt that should be addressed rather soon.
They need to do this carefully and with adversarial testing. There are safety measures in e.g. gnu tar that really should be replicated. But they are not to do with parsing, but the semantics.
IOW, what's your specification?
> It's not as if Rust is that much more difficult to write than C
According to what?
> Rust is explicitly designed
There is no standard. It's accidentally designed.
> knowing what we know now about language design and code safety.
You've solved one class of bugs outside of "unsafe {}". The rest are still present.
> There is no standard. It's accidentally designed.
Are you really claiming that you can't design a language without an official standard? Not to mention that C itself was designed long before its first ISO standard. Finally, the idea that a standards committee is a precondition for good language design is rather bold, I have to say. The phrase "design by committee" isn't typically used as a compliment...
> You've solved one class of bugs outside of "unsafe {}".
It's "only" the single most important class of bugs for system safety.
This kind of deflection and denialism isn't helping. And I'm saying this as someone who really likes C++.
> that you can't design a language without an official standard?
No, just that it's not 1968 anymore, and if you want to claim your language has learned lessons from the past, then this is one that clearly got missed.
> The phrase "design by committee" isn't typically used as a compliment...
While the phrase "emergent incompatibilities" is only known as a detriment.
> It's "only" the single most important class of bugs for system safety.
Again, I ask for a reference, "according to what?" I understand this is the zeitgeist. Is it actually true? It seems to me this great experiment is actually proving it probably isn't.
> This kind of deflection and denialism isn't helping.
Once again, I asked for proof that the claim was true, you've brought nothing, and instead have projected your shortcomings onto my argument.
> And I'm saying this as someone who really likes C++.
Have you ever pushed for C++ to replace C programs because you assume they would be "better" according to some ill defined and never measured metrics?
Ah, now I realize that you didn't necessarily mean something like an ISO standard. I definitely agree that a programming language should have some kind of normative specification and that it's not sufficient to say "the spec is the behavior of the compiler".
> Again, I ask for a reference, "according to what?" I understand this is the zeitgeist.
I think that at this point it is pretty well-established that the majority of security CVEs in C or C++ applications are caused by memory safety bugs. For sources see https://www.cisa.gov/news-events/news/urgent-need-memory-saf.... As a C++ dev this totally makes sense. (I just happen to work in a domain where security doesn't really matter :)
To be clear: I definitely don't think that all C or C++ code should be rewritten in Rust. But for components that are exposed to the public internet or accept untrusted user input it totally makes sense.
Can I ask what that domain is?
Audio and multimedia art.
thanks
Sadly most people don't agree with this
I have been seeing hatred on this forum towards Rust for a long time. Initially it didn't make any kind of sense. Only after actually trying to learn it did I understand the backlash.
It actually is so difficult that most people might never be able to become proficient in it, even if they tried. Especially coming from the world of memory-managed languages. This creates pushback against any and every use or promotion of Rust. The unspoken fear seems to be that they will be left behind if it takes off.
I completed my battles with Rust. I don't even use it anymore (because of lack of opportunities). But I love Rust. It is here to stay and expand. Thanks to the LLMs and the demand for verifiability.
I find Rust much easier to write than C. Its types let me be reasonably sure I’ve written appropriate code before I even get to the point of running tests, and I don’t have to memorize the flow of the whole program to have that assurance.
For instance,
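A minimal sketch of the kind of newtype wrappers being described (the `Feet`/`Meters`/`hover` names match the C rewrite quoted in the reply below):

```
struct Feet(i32);
struct Meters(i32);

fn hover(altitude: Meters) {
    println!("At {} meters", altitude.0);
}

fn main() {
    hover(Meters(16)); // fine
    hover(Feet(16));   // rejected at compile time: expected `Meters`, found `Feet`
}
```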
This fails at build time with a mismatched-types error. Guaranteeing that I’ve never mixed units means I don’t have to worry about parking my spacecraft at 1/3 the expected altitude. Now I can concentrate on the rest of the logic. The language has my back on the types so I never have to waste brain cycles on the bookkeeping parts.

That’s one example. It’s not unique to Rust by a long shot. But it’s still a vast improvement over C, where that same signed 32-bit data type is the number of eggs in a basket, the offset of bytes into a struct, the index of an array, a UTF-8 code point, or whatever else.
This really shows up at refactoring time. Move some Rust code around and it’ll loudly let you know exactly what you need to fix before it’s ready. C? Not so much.
All you're doing is passing an argument of the incorrect type to your function. The exact same thing fails to compile in C:
```
#include <stdio.h>

typedef struct { int value; } Feet;
typedef struct { int value; } Meters;

void hover(Meters altitude) {
    printf("At %i meters\n", altitude.value);
}

int main() {
    Meters altitude1 = {.value = 16};
    hover(altitude1);

    Feet altitude2 = {.value = 16};
    hover(altitude2);
}
```

```
error: passing 'Feet' to parameter of incompatible type 'Meters'
   20 |     hover(altitude2);
```
Coming from a dynamically typed language (Python, etc), this might seem like a revelation, but its old news since the dawn of programming computers. A C language server will pick this up before compile time, just like `rust-analyzer` does: `argument of type "Feet" is incompatible with parameter of type "Meters"`.
Did you not know this? I feel like a lot of people on message boards criticizing C don't know that this would fail to compile and the IDE would tell you in advance...
People use header libraries as they treat languages like C and C++ as if they were scripting languages.
right and in C++ you have amazing zero-overhead units libraries
That’s not a good example. Wrapping an integral value into a struct works exactly the same in C and C++ (https://stackoverflow.com/questions/40629989/pros-and-cons-o...)
In C++ you can even add overloaded operators to make using math on such structs ergonomical.
Compilers know of the idiom, and will optimize the struct away.
It would've saved the Mars Climate Orbiter: https://en.wikipedia.org/wiki/Mars_Climate_Orbiter
> An investigation attributed the failure to a measurement mismatch between two measurement systems: SI units (metric) by NASA and US customary units by spacecraft builder Lockheed Martin.[3]
That was only the proximate cause, the ultimate cause was cultural. As complex systems and efforts run into problems, it is trivial to blame the unit conversion when they had been ignoring people for months who had concerns [0]
> ... ground controllers ignored a string of indications that something was seriously wrong with the craft's trajectory, over a period of weeks if not months. But managers demanded that worriers and doubters "prove something was wrong," even though classic and fundamental principles of mission safety should have demanded that they themselves, in the presence of significant doubts, properly "prove all is right" with the flight
Dropping units on the NASA side also was problematic but really culture was the cause of the actual crash.
[0] https://spectrum.ieee.org/why-the-mars-probe-went-off-course
That's technically true, but if NASA's API accepted arguments in Meters instead of int32_t (or whatever the equivalent was in the language they used), then it would've been instantly obvious that the controller code that Lockheed Martin wrote was using the wrong datatype.
Do we know how the code was called, was it linked in or was it via IPC (the latter seems most likely to me, and then the question is does the IPC framework support a rich type system)?
That was exactly what I was thinking of when I wrote that.
But also think of how many libc functions take multiple ints or multiple chars in various orders. You can get carried away with typing, i.e. by having a separate type for everything*. Still, imagine you’re writing, say, a hypothetical IDE device driver and had separate types for BlockNumber and ByteInBlock so that it’s impossible to transpose read(byte_offset,block) instead of read(block,byte_offset), even if those are really the same kind of numbers.
That kind of thing makes a gazillion bugs just vanish into the ether.
A bit clunky but more foolproof is something like:
ide_read ( &(struct ide_loc) { .byte_offset = 10, .block = 20 } )
I sometimes think about a related issue: suppose you have a function whose n parameters have n different types. Should the programmer be required to provide those parameters in a specific order? There's no ambiguity.
There appears to be some tension between different conveniences you might afford yourself. If you have read(offset: offsetTypeForRead, address: addressTypeForRead), you can catch when someone accidentally passes an address where the offset should be and an offset where the address should be.
Or, you can say "hey, I'm always adding the offset to the address; it doesn't matter which one gets passed first" and relieve the programmer of needing to know the order in which two semantically distinct variables get passed to `read`.
But if you do provide that convenience -- and it's not unintuitive at all; there really is only one valid interpretation of a combination of an address and an offset, regardless of the order you mention them in -- you lose some of the safety that you wanted from the types. If your variables are declared correctly, everything is fine. If there's a mistake in declaring them, you'll wave through incorrect calls to `read` that would have been caught before.
Huh, that’s an interesting point, and I’d have to think on that. There are still plenty of cases where ordering would matter, like subtract(a, b), unless you go whole hog and give each operand its own type, but that seems redundant. There are still plenty of other cases where I could see your idea being useful. Like I always forget whether (in Python) it’s json.dump(file, data) or dump(data, file). Ultimately, should it matter? I’m passing a file handle and an object, and it’s unambiguous how those two args relate to the task at hand.

How does this scheme of yours work with m/s and seconds? IIUC, Rust would NOT let you do a type-checked m/s * s => m, so using the type system for these kinds of games is silly and dangerous. I presume you would have to do the dumb thing and type-convert everything to the same type to do the multiplication, which means you're inserting unscientific and reader-confusing type conversions all over the place.

That's no problem in Rust. You can have a Speed type and a Time type, and define the multiplication operator over them to return a Length value. In fact, there's already a library which does exactly that, with tons of predefined SI units: https://docs.rs/uom/latest/uom/
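A hand-rolled sketch of that idea without pulling in uom (the wrapper types are invented for the example):

```
use std::ops::Mul;

#[derive(Clone, Copy, Debug)]
struct Speed(f64);  // metres per second

#[derive(Clone, Copy, Debug)]
struct Time(f64);   // seconds

#[derive(Clone, Copy, Debug)]
struct Length(f64); // metres

impl Mul<Time> for Speed {
    type Output = Length;
    fn mul(self, rhs: Time) -> Length {
        Length(self.0 * rhs.0)
    }
}

fn main() {
    let d = Speed(3.0) * Time(4.0);
    println!("{d:?}"); // Length(12.0)
    // Speed(3.0) * Speed(4.0) does not compile: no such impl exists.
}
```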
And the exact same thing can be done in C++.
Related: the Sguaba library [1] from Helsing AI, written by Jon Gjengset. It allows you to define coordinate systems at the type level and safely convert and calculate with them.
[1] https://github.com/helsing-ai/sguaba
I love that sort of thing. It so much easier to get up and running with plausible results when it’s all but impossible to misuse the API. “Darn it, it’s making me cast Celsius to Meters before I call this function. Hey, wait, that can’t be right…”
I'm very confused, explain how this is not the case with C?
I haven't written rust, but my impression is the benefit is more about deeper introspection of things like lifetime than basic typesafety, which already exists in C/C++ (and is likewise occasionally bypassed for convenience, so I wonder how often the same is done for Rust)
This would get an error message in C, what are you talking about?
Structs in C are type safe as well.
Nah, you’re just not good enough. I for example would have never made that mistake when calling hover(int32_t) in C. And on the off chance I did, my reviewers would have caught such a simple mistake because they too are excellent C developers.
/s
My jaw reflexively clenched.
> Especially coming from the world of memory managed languages.
If people from that world complain about Rust, I surely wouldn't want them around a C codebase.
There's nothing wrong with memory-managed languages, if you don't need to care about memory. But being unfamiliar with memory and complaining about the thing that helps you avoid shooting your foot isn't something that inspires trust.
The hardship associated with learning rust isn't going to go away if they do C instead. What's going to happen instead is that bugged code will be written, and they will learn to associate the hardship with the underlying problem: managing memory.
> It actually is so difficult, that most people might never be able to be proficient in it. Even if they tried
I think this is more true of C than it is of Rust if the bar is "code of sufficient quality to be included in debian"
Anyone capable of programming and willing to invest enough time will be able to learn Rust, in my opinion.
It might take some people months rather than days, but I think that is a desirable outcome.
Important low level software should be written by competent developers willing to invest the effort.
The effort should rather be invested into learning how underlying hardware works and introducing redundancy and observability into the system.
That's non-sequitur.
The problem here is that C is too basic, dated, with inadequate higher-level abstractions, which makes writing robust and secure software extra difficult and laborious. "Learning underlying hardware" doesn't solve that at all.
Debian supports dozens of architectures, so it needs to abstract away architecture-specific details.
Rust gives you as much control as C for optimizing software, but at the same time neither Rust nor C really expose actual underlying hardware (on purpose). They target an abstract machine with Undefined Behaviors that don't behave like the hardware. Their optimisers will stab you in the back if you assume you can just do what the hardware does. And even if you could write directly for every logic gate in your hardware, that still wouldn't help with the fragility and tedium of writing secure parsers and correct package validation logic.
> It actually is so difficult, that most people might never be able to be proficient in it. Even if they tried.
That could also be applied to C and C++ …
Assuming you're right about the language being too hard for most people, the outcome I'd expect given the history of computing is that the language will fail or be stuck in a niche
Time and time again, theoretically worse solutions that are easily accessible win
One of the follow up messages is interesting: https://lists.debian.org/debian-devel/2025/10/msg00288.html
> Rust is already a hard requirement on all Debian release architectures and ports except for alpha, hppa, m68k, and sh4 (which do not provide sqv).
Wonder what this means for those architectures then?
Who is still using these machines? Genuine question, not trolling.
It looks like the last machines of each architecture were released:
Alpha in 2007
HP-PA in 2008
m68k in pre-2000 though derivatives are used in embedded systems
sh4 in 1998 (though possible usage via "J2 core" using expired patents)
This means that most are nearly 20 years old or older.
Rust target triples exist for:
m68k: https://doc.rust-lang.org/nightly/rustc/platform-support/m68... and https://doc.rust-lang.org/nightly/rustc/platform-support/m68... both at Tier 3.
(Did not find target triples for the others.)
If you are using these machines, what are you using them for? (Again, genuinely curious)
Debian just cut i386, Wikipedia says the i386 was discontinued in 2007. These systems are all of the same vintage, so it does not seem a huge leap to be culled from the support list.
[0] https://en.wikipedia.org/wiki/I386
The actual Intel 80386 processor was discontinued in 2007, but the i386 architecture -- ia32, 32-bit x86 -- lived on for longer in the semi-mainstream (there were 32-bit Intel Atoms introduced to the market as late as 2012, AMD Geodes were sold until at least 2019, and I believe some VIA C3/C7 derivatives made it into the 2010s as well) and is in fact still in production to this day for the embedded and industrial markets (Vortex86, etc).
All of those are i586 or i686 though right? AMD Geode LX is the only i586 CPU produced in the last 15 years.
Everything else is at least i686 and Rust has perfectly adequate i686 support.
Is there any major distro left with pre i686 support?
Debian's i386 is actually i686 I believe.
i386 (32-bit) only processors were discontinued, but 64-bit processors can operate in 32-bit mode, so the toolchain was still widely available and there was still demand for an i386 OS that would run on modern hardware in i386 mode for some ancient software.
Yeah, and you can still run i386 binaries on widely available amd64 CPUs. So this is an even stronger argument for killing these other obsolete platforms.
You can only run the binaries if you have the libc to run them.
You say that like it is a higher or even comparable barrier to having alpha/m68k/sh4 hardware; it isn't.
> Who is still using these machines? Genuine question, not trolling.
Either legacy systems (which are most certainly not running the current bleeding-edge Debian) or retro computing enthusiast.
These platforms are long obsolete and there are no practical reasons to run them besides "I have a box in the corner that's running untouched for the last 20 years" and "for fun". I can get a more powerful and power efficient computer (than any of these systems) from my local e-waste recycling facility for free.
It’s usually a loud minority of trolls or hobbyists. It just takes one to spark a doubt.
Here is one famous example of a dude who’s managed to get PRs merged in dozens of packages, just to make them compatible with ancient versions of nodejs https://news.ycombinator.com/item?id=44831811
Wow that was an interesting read. I find it amusing that nobody seems to really know who he is or what his motives are, yet his code is run on millions of machines every day.
There’s hobbyists using m68k Macs, Amigas, and Atari STs. Definitely a small niche, and those running Linux on those machines more so.
Sure, but almost nobody uses or wants modern linuxes on those machines. It's almost always described (in their own words) as "some ancient crusty version of Debian"
Nobody wants to play games on Linux given the small userbase compared to Windows. Yet people work on it.
You could make this argument for so many usecases but apparently people just enjoy bashing retrocomputing here.
> Nobody wants to play games on Linux given the small userbase compared to Windows.
According to the last Steam survey, 3% of players use Linux. Steam has 130 million active players, so that means there are 4 million people playing on Linux. Definitely not "nobody", and way bigger than the whole retrocomputing community.
By the way, I am also one of those retrocomputing guys, I have a Pentium 2 running Windows 98 right here. IMHO, trying to shoehorn modern software on old hardware is ridiculous, the whole point of retro hardware is using retro software.
> Who is still using these machines? Genuine question, not trolling.
Well, there are so many things were you could argue about the relevance of a userbase.
If the size of a userbase would be the only argument, Valve could just drop support for the Linux userbase which is just 2-3% of their overall userbase.
Not your point, but Linux compatibility is Valve protecting themselves from the existential risk that is a capricious Microsoft. At one point, it seemed Microsoft was trying to make the Microsoft Store be the mechanism for distributing all software. Linux being viable for gaming is/was their safety net to avoid being locked out from the ecosystem.
Are there even close to 2% of Debian users using a DEC Alpha?
popcon.debian.org reports 3 alpha installations out of 261750 installations total. Assuming comparable opt-in rates, fewer than 0.002% of users are using alpha.
The other mentioned architectures hppa, m68k and sh4 are at a similar level.
people enjoy running vintage stuff, and running modern stuff on some vintage thing is kinda cool.
But yeah, those can figure out how to keep their own port
Think about any time a computer is used in something designed to last 30+ years.
Cars, airplanes, construction equipment, etc.
I am pretty sure that those machines are not running Debian.
And almost certainly not whatever the next stable release of Debian is.
Why not? How do you know that? Debian is used pretty widely
they might run Debian but not upstream Debian/stable
you mainly find that with systems needing certification
these are the kind of situations where having a C language spec isn't enough; you instead need a compiler-version-specific spec of the compiler
similarly, they tend to run the same checkout of the OS with project-specific security updates back-ported to it, instead of doing generic system updates (because every single update needs to be re-certified)
but that is such a huge effort that companies don't want to run a full OS at all. Just the kernel and the most minimal choice of packages you really need and not one more binary than that.
and they might have picked Debian as an initial source for their packages, kernel etc. but it isn't really Debian anymore
You'd be surprised.
Even if they are, they are not updating to latest Debian stable.
I mean they can't in the first place, because ports only have unstable available
They try if they are internet connected
You would be wrong. People want new software
If we are talking about embedded control systems no, you don't want new software, you want your machine to do what is supposed to do. At my workplace we have some old VME crates running VxWorks, and nobody is gonna update those to the latest Linux distro.
This is incorrect. Internet connections and 3rd party integrations have changed this view of “the software doesn’t need to change once it leaves the factory”.
John Deere, Caterpillar, etc are leaning heavily into the “connected industrial equipment” world. GE engines on airplanes have updatable software and relay telemetry back to GE from flights.
The embedded world changed. You just might have missed it if your view is what shipped out before 2010.
My experience is in big scientific experiments like particle accelerators, I guess other fields are different. Still, my experience is that:
1) The control network is air gapped, any kind of direct Internet connection is very much forbidden.
2) Embedded real-time stuff usually runs on VxWorks or RTEMS, not Linux. If it is Linux, it is a specialized distro like NI Linux.
3) Anything designed in the last 15 years uses ARM. Older systems use PowerPC. Nobody has used Alpha, HPPA, SH4 or m68k in ages. So if you really want to run Debian on it, just go ahead and use Armbian.
I don't think those systems can/should be updated using apt though.
Here’s Debian’s “Supported Architectures”: https://wiki.debian.org/SupportedArchitectures. These platforms are all in ‘unofficial’ status (like, they work, but are not officially supported by the core Debian project).
Who is actually _running_ Debian Trixie on these platforms now?
It is counter-intuitive to me that these platforms are still unofficially supported, but 32-bit x86 [edit: and all MIPS architectures!] are not!
I am emotionally sad to see them fall by the wayside (and weirdly motivated to dig out a 68k Amiga or ‘very old Macintosh’ and try running Trixie…) but, even from a community standpoint, I find it hard to understand where and how these ports are actually used.
They’re not going to fall to the wayside, we’re eventually getting Rust support on these.
It’s just a bit annoying that Rust proponents are being so pushy in some cases as if Rust was the solution to everything.
Can you actually install Debian on an Amiga - or other 68k system - now? I’ve been searching around the web and I haven’t found much evidence that you can.
This is not intended to bash you or anyone else who’s working on it - I think it’s a cool project (I have in the recent past got an 86duino ZERO to run Gentoo, just to see if an obscure old-ish piece of hardware can be useful with modern Linux on it - and it can). I do understand the reason a project like Debian might not want to have to spend resources even just to make it easier to do though.
Here is a recent blog post where somebody tried to install Debian on an Amiga 4000 with an 68040 running at 25 MHz.
https://sandervanderburg.blogspot.com/2025/01/running-linux-...
I didn't find what Debian version they tried but I think it's implied it's a recent version. They ran into memory issues. They had only 48MB while the recommendations are to use 64MB. It did boot though until it threw errors because of memory constraints.
They got a working system by trying Debian 3.1 though.
They're >20 years old and nobody seriously uses them for modern software. Come the fuck on. Let them go.
I was running Debian unstable on my 64-bit SPARC up until I moved a few months ago and haven't set the system back up again while unpacking.
It's been somewhat useful for finding weird edge cases in software where for whatever reason, it doesn't reproduce easily on AArch64 or x86, but does there. (Or vice-versa, sometimes.)
I don't know that I'd say that's sufficient reason to motivate dozens of people to maintain support, but it's not purely academic entertainment or nostalgia, for that.
m68k has a LLVM port already, so Rust can be implemented for that platform.[0] It would be nice to have LLVM backends for alpha, hppa and sh4 - these older architectures tend to be quite simple so a working LLVM has plenty of value as a reference and for educational use.
(LLVM even used to have an in-tree DEC Alpha backend, though that was back in 2011 and not relevant to any version of Rust.)
[0] Looks like there is basic initial support but no 'core' or 'std' builds yet. https://doc.rust-lang.org/rustc/platform-support/m68k-unknow... This should potentially be fixable.
Shouldn't the codegen_gcc project[1] theoretically allow Rust to target backends that only GCC supports?
[1] https://github.com/rust-lang/rustc_codegen_gcc
https://lists.debian.org/debian-devel/2025/11/msg00010.html
LLVM is desirable for other reasons (LLVMpipe for example), so investing into an LLVM port is probably a better use of limited resources than improving and maintaining a retargeted rustc for GCC.
yes and no
yes, from a pure code generation aspect
no, as all conditional-compiled platform specific code is missing.
So using it with #[no_core] should work (assuming the WIP part of the backend isn't a problem). But beyond that you have to first port libcore (should be doable) and then libstd (quite a bunch of work).
Yes, that is being worked on.
As far as I understand it, the m68k LLVM port is not ABI-compatible with GCC on Linux because the alignment is wrong: https://wiki.debian.org/M68k/Alignment (page says that the LLVM packages are FTBFS because of this)
The principled way to address this would be to define a new ABI suffix for the m68k-unknown-linux-gnu target triple, replacing the existing 'gnu' which would retain GCC compatibility with 2-byte alignment for int.
They are of no commercial interest to Ubuntu.
While that seems like it would be true, is that really relevant to Debian? :)
The person making the post is getting paid by Ubuntu.
sure, but most frequent/core contributors for Linux (kernel) and most big distros are paid by some company with commercial interests, which mostly doesn't involve spending time on maintaining any of these old architectures
sure, some are also paid by a foundation. Which is also paid by companies, but with a degree of decoupling of influence.
and some pay themselves, i.e. fully voluntary work, but most devs can't afford to do so in a long-term, high-time-commitment manner. So a lot of major changes and contributions end up coming from people directly or indirectly "paid" by some company.
and that's pretty common across most "older, larger, sustainable and still developed OSS"
Needs a perhaps with question mark or some proof.
You could just read his signature in the mailing list.
https://mastodon.social/@juliank
>Senior Engineer at Canonical.
Yes that's true and there's synergies but keep in mind I also have a personal mind
[flagged]
I think that's unkind absent any proof otherwise which you haven't posted.
In reality a lot of the heavily used parts of Linux (and open source generally) are going to have some commercial involvement or are we suggesting that no one should be paid by any of the companies backing Linux who use Linux because to me that sounds worse.
Whether you like it or not, Linux/Open Source hasn't been entirely the preserve of unpaid people doing it for fun for a long time.
My whole point, which you've missed, is that the _interest_ is what we should be looking at. I don't care if he says he does or doesn't act on it. He has the interest.
https://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog
I don't understand how people keep giving the benefit of the doubt to corporations.
They aren't people. They feel no guilt and have no shame.
Julian Klode is not a corporation, though, so we should give him the benefit of the doubt.
The whole thread started someone making the wrong point that:
>They are of no commercial interest to Ubuntu.
Which is plainly false.
Julian can believe whatever he wants, in fact the more naively idealistic he is the better for Canonical.
The fact of the matter is that corporate Linux benefits from impossible-to-run setups. That's the whole business model. The more convoluted the setup is, the more money they make. Rust fits into this business model perfectly.
You'd think people would have learned this after the Nth rug pull from supposed champions of free software like RedHat.
Instead of insinuating, can you say what exactly the impropriety here would be, in your opinion?
https://news.ycombinator.com/newsguidelines.html
> Assume good faith.
[flagged]
Let me evolve that for you.
You should not only avoid appearance of impropriety, but also impropriety.
Are you just being clever by mocking someone with a job, or do you think there is something wrong with this technical decision?
[flagged]
That's obnoxious. Do you only promote the interests of your employer? Do you expect people to assume that?
It's not even as if Canonical has a particularly bad reputation for that kind of thing.
more relevant: they are all "unofficially" supported architectures, and none in a "might get official support" context, so they have all been of very limited interest to Debian as a whole for quite a while
Those all seem to be completely obsolete so I guess they can just stay on the latest version of Debian that supports them, or make their own distro. (Or add Rust support I guess but that's probably not realistic.)
> Wonder what this means for those architectures then?
They will be rebranded as "retro computing devices"
Most don't need to be rebranded. Alpha and hppa are retro computing and haven't been available to buy for 18 and 15 years respectively. SH4 died this year. Only m68k is still holding on, but it is a rounding error in the number of users.
Aren’t m68k computers only a few models from the 1990s and 1980s, and some more recent hobby projects? That’s squarely in the retro computing enthusiasts category.
I’m not in the Debian world, but those do seem to me like the types of systems that could use their own specialized distros rather than being a burden to the mass market ones. It’s not as if you could run a stock configuration of any desktop environment on them anyway.
m68k is used in a number of embedded systems today. It is unknown (to me) how many of those run Linux (as opposed to some other embedded OS), but I would guess at least some do. I also don't know how many run (or want to run) Debian vs something else (a custom Yocto distribution is my first guess), but that might be non-zero. It is possible someone is running a non-Debian distribution and using Debian packages to provide their updates.
Does or should debian care? I don't know.
All I find searching for “embedded m68k Linux distro” is people looking for, or coming up with, alternatives, as Debian was already “too big” fifteen years ago.
I don’t get the fuss around the “retro computing” verbiage. I doubt anyone is actually running Debian on these devices out of necessity; someone who plays baroque music on reconstructed period instruments won’t balk at being called an “early music” enthusiast.
Well, we are on a mission to create The Universal Operating System. So maybe.
But I'm not sure. I think the new Rust dependencies are good. In an ideal world, the people who care about niche systems step up to help Rust target those systems.
> In an ideal world, the people who care about niche systems step up to help Rust target those systems.
I’m actually the person who added the m68k target to the Rust compiler and was also one of the driving forces of getting the backend into LLVM.
Generally speaking, getting a new backend into the Rust compiler is not trivial as it depends on LLVM support at the moment which is why asking someone to just do it is a bit arrogant.
Luckily, both rustc_codegen_gcc and gccrs are being worked on, so this problem will be resolved in the future.
Sorry, I didn't mean to insinuate that there's anything minor about it, or that nobody is doing the work. I should have phrased myself differently.
I'll try to rephrase: if we never want to give up support for a platform we've supported in the past, then I think we only have two options: (1) never adopt new technology where support for said platforms doesn't come for free, or (2) leave it up to those who care about the niches to ensure support.
Neither is pain-free, but the first seems like a recipe for stagnation.
It's lovely to see the two alternative compiler paths for Rust moving forward though! Thank you!
Interesting bit about SH-4. I thought that Renesas had previously promised parts availability until 2029?
Sh-4 is on the Product Longevity Program https://www.renesas.com/en/support/product-longevity-program... but what it actually means, I really cannot easily figure out. It's marked as "Last Time Buy" right now.
If anyone has a problem with the language used in the email, I would remind you that this is the same person who is maintainer for debian's keepassxc packages.
Here's a thread of them insulting upstream developers & users of the Debian packages. https://github.com/keepassxreboot/keepassxc/issues/10725
To be honest I don't really read insults either in this e-mail or in the thread you linked. If I'm seeing it right, there's only one comment by the guy in that thread, right? That comment is direct and uses language that may be considered unprofessional ("crap"/"crappy"), but it's not insulting the users (they are not referred to as crappy). Same for the e-mail.
Unnecessary drama as usual...
Browser integration is a completely basic feature I expect from any password manager. It is absolutely useless for most people without it.
In fact not having it encourages copy and paste which reduces security.
Whats next? Strip javascript support from browsers to reduce the attack surface?
I don't get how this is even a discussion. Either he is paid by canonical to be a corporate saboteur or he is completely insane.
FWIW, I've used KeePass for years and have never used its browser integration...
I don’t think the language is unprofessional, it’s direct and it states his opinion.
The one demanding it is the maintainer of keepassxc; it would’ve been better to just say this is a Debian-only problem, that he should install it like that, and close the issue.
mainly people have issues with clear, precise and concise language about intent of action instead of, idk, a request for discussion
now this is separate from being open for discussion if someone has some good arguments (which aren't "you break something which isn't supported and is only niche-used"), and some claim he isn't open to arguments
and tbh, if someone exposes users to an actually relevant security risk (1) because the change adds a bit of defense-in-depth security (2), and then implicitly denounces them for "wanting crap", that raises a lot of red flags IMHO.
(1): Copy-pasting passwords is a very bad idea; the problem is phishing attacks with "look-alike" domains. Your password manager won't fill them out, but your copy-paste is prone to falling for it. In addition there are other smaller issues related to clipboard safety and similar (hence why KC clears the clipboard after a short time).
(2): Removing unneeded functionality which could have vulnerabilities. Except we're speaking about code from the same source which, if not enabled/set up, does pretty much nothing (it might still pull in some dependencies, though).
but yes very unnecessary drama
The HN post doesn't seem very confrontational to me, but some folks see it so, weird.
The level of knee-jerk reaction to anything Rust into traditionally C projects borders on the pathological. That email is about as polite as it gets without being coddling.
Do keep in mind that a lot of the people involved in these sorts of things are neurodiverse in some ways, and may have significant trouble dealing with change.
As teh64 helpfully pointed out in https://news.ycombinator.com/item?id=45784445 some hours ago, 4ish years ago my position on this was a total 360 and I'd have had the same reaction to now-me's proposal.
All these changes require work. Because of this, other priorities will get less attention. It would be ironic if bad security flaws are missed/introduced because of all the work of switching to Rust. It's also very likely that all the new code written in Rust will be far less mature than the existing code bases. So the outcome might be (very probably, actually) a lot of work to worsen security.
Most of the academic research into these sorts of typesafe languages usually returns the null result (if you don't agree, it means you haven't read the research on this topic). That's researcher-speak for "it didn't work and you shouldn't be using these techniques". Security is a process, not a silver bullet, and 'just switch to Rust' is very much a silver bullet.
It's not like I'm in a hurry to switch to Rust and will spend full steam on it. It's amongst the lowest priority items.
A lot of the Rust rewrites suffer a crucial issue: they want a different license than what they are rewriting and hence rewrite from scratch because they can't look at the code.
But here we're saying: Hey we have this crucial code, there may be bugs hidden in it (segfaults in it are a recurring source of joy), and we'll copy that code over from .cc to .rs and whack it as little as possible so it compiles there.
The problem is much more with the configuration parser, for example, which does in a sense desperately need a clean rewrite, as it's way too sloppy, and it's making it hard to integrate.
In an optimal world I'd add annotations to my C++ code and have a tool that does the transliteration to Rust at the end; like when the Go compiler got translated from C to Go. It was glorious.
*180, for other people confused by this.
/me hides in shame
You’re right, this guy is a dick; as soon as I read this email I went to check who it was and laughed because I remembered that keepassxc thread.
What a horrible mindset. I'll never understand this "security" argument.
Removing features is not the most secure option possible. Go all the way then and remove everything. Only when your computer cannot do anything will it be 100% secure.

> Removing features is not the most secure option possible.
If I have a program that encrypts and decrypts passwords, then the surface area is way smaller than if it also has browser integrations and a bunch of other features. Every feature has the potential to make this list longer: https://keepass.info/help/kb/sec_issues.html which applies to any other piece of software.
At the same time, people can make the argument that software that's secure but has no useful features also isn't very worthwhile. From that whole discussion, the idea of having a minimal package and a full package makes a lot of sense - I'd use the minimal version because I don't use that additional functionality, but someone else might benefit a bunch from the full version.
A password program that integrates with your browser reduces a lot of attack surface. If you can't directly talk to the browser, that implies using the clipboard, which in turn means other programs on your system can see the password.
Some people don't have an actual understanding of what security means.
Security is there to keep the features usable without interruptions or risks.
E.g. unplugging the computer from the network is not a security measure if the service needs to be accessible.
That doesn't sound right to me; it's a legitimate concern that a package whose core use-case is X has some obscure feature Y, and the mere existence of Y can cause security issues for a user even when the user never intended to use it.
As a very concrete example: the whole Log4j vulnerability was basically a direct consequence of a feature that allowed for arbitrary code execution. Nearly no user of Log4j intentionally used that feature, yet they were all vulnerable because Log4j had it.
The fix to the CVE was effectively to remove the feature. If someone had had the foresight to reduce Log4j to only the features that ~everyone actually used, and to publish a separate Log4j-maximal for the fringe users who intentionally use that feature, it would have prevented what was arguably the worst vulnerability that has ever happened.
In the case this thread is about, no one seems to deny that there should be 'minimal' and 'full' versions and that the 'minimal' version is going to be more secure. The entire flame war seems to be over whether it's better to have the preexisting package name point to the minimal version or the full one.
That is simply a tradeoff between "make preexisting users who don't use ancillary features be as secure as possible by default going forward" or "make preexisting users who do use ancillary features not broken by upgrades".
> That doesn't sound right to me; it's a legitimate concern that a package whose core use-case is X has some obscure feature Y, and the mere existence of Y can cause security issues for a user even when the user never intended to use it.
In this case it is not clear at all whether the feature is obscure. For most people it could actually be essential, even the primary requirement for the whole piece of software.
But many users were relying on these features. Hence the bug report.
This is literally the same as helping a relative to make their computer more secure by turning it off. Problem solved I guess?
If you made a mistake by shipping insecure defaults, you could fix it e.g. by showing a banner suggesting the minimal version to users who don't use the extra features. But simply rug-pulling everybody for "security" and doubling down by insulting the affected users? I really do not understand people who act like this.
It’s called openBSD :) (don’t get me wrong, openBSD is awesome, but the default install which is secure out of the box is a bit spartan heheh)
Just annoys me that he calls features "crap" just because he likely doesn't use them personally and ends that post with a random sentence claiming such a version "increases the risk of drive-by attacks" with zero evidence. The developer explains the features aren't plugins and aren't even enabled by default. Arrogance from maintainers like this from within Debian is what will hurt it far more than any external entity.
Exactly, this rude and insulting behavior is why many people shy away from open source. Not everybody has the time and mental capacity to engage in ideological battles about software architecture.
We should really hold more value to keeping existing user setups working. Breakages are incredibly damaging and might very well have a bigger impact than insecure defaults.
> he calls features "crap" just because he likely doesn't use them personally
"All of these features are superfluous and do not really belong in a local password database manager" seems to me like a pretty clear explanation of what is "crap" about them, and it seems pretty clearly not to be about personal taste.
Some people care about modularity.
You misunderstand, see; these people are corporate saboteurs.
Unfortunately, this kind of culture where you joyfully screw over your real users to chase the approval and benefit of some spherical user in a vacuum that you would like to cater to has become endemic in the free software world. It probably started with GNOME 3 (deliberately strip-mined of functionality, actively hostile to customisability, wasteful of screen space, all for the sake of chasing some HCI-prophesied transition to mobile touch devices which never came for desktop Linux), but was perfected by Mozilla in the name of security.
I'm not sure if it's an insecurity thing or an immaturity thing, but when all these stories pop up, I always wonder why rust enthusiasts don't just prove their point by making their own "modern" and non-"retro" tech. If you can make something better, just do it already, and people will switch to it when they see the benefits. This parasitic "you must accept rust in your long-standing project" model is so off-putting, as is always evident by the complaints it causes. I love projects like Redox that try to do their own thing... why doesn't the rust community rally around projects like that and turn them into cve-free masterpieces that people will want to use?
This email is from a Debian maintainer, about Debian introducing a new hard dependency on Rust. It's not some random Rust advocate telling Debian folks that they should use Rust against their will.
Yes there are absolutely some obnoxious "you should rewrite this in Rust" folks out there, but this is not a case of that.
There are like 1000 Debian maintainers, right? This person doesn't speak for the project as a whole, and as far as I can tell he is telling Debian folks they will be accepting rust whether they want it or not, and whether their preferred architecture is supported or not. Maybe there was some organizational vote on this, but if so it isn't referenced in the thread. It says "I plan", not "Debian decided to".
And regardless, my point is it would be more sensible to say "I'm going to introduce an oxidized fork of apt and a method to use it as your system apt if you prefer" and then over the next year or so he could say "look at all these great benefits!" (if there are any). At that point, the community could decide that the rust version should become the default because it is so much better/safer/"modern"/whatever.
You seem to think of "rust enthusiasts" as some organized group with a goal of writing Rust for the sake of it. Rust is long past that extremely-early-adopter phase.
What you're seeing now is developers who are interested in writing a better version of whatever they're already working on, and they're choosing Rust to do it. It's not a group of "Rust enthusiast" ninjas infiltrating projects. It's more and more developers everywhere adopting Rust as a tool to get their job done, not to play language wars.
Nah, I called out redox and another commenter pointed out ripgrep as an even better example of what I’d prefer to see, and those are also by what I would call rust enthusiasts. I don’t think of them as a monolithic group.
Where we disagree is I would not call injecting rust into an established project “writing a better version”. I would love it if they did write a better version, so we could witness its advantages before switching to it.
> my point is it would be more sensible to say "I'm going to introduce an oxidized fork of apt and a method to use it as your system apt if you prefer" and then over the next year or so he could say "look at all these great benefits!" (if there are any). At that point, the community could decide that the rust version should become the default because it is so much better/safer/"modern"/whatever.
That's not how open source software development works.
I wasn't asked by Linus whether ipchains should become the default over ipfirewall, nor whether iptables should become the default over ipchains.
I wasn't asked whether GCC should use C++ instead of C as the language to build GCC itself.
I can go on with lots of examples.
Why should APT be different and require the maintainers to fork their own project to introduce changes? Why should an undefined "community" (who is that? apparently not the APT developers...) decide? Does this have to be done for every code change in APT?
That's exactly what happened with ripgrep. People seem to like it.
Exactly! People love ripgrep because of its clear advantages, and had the developers been interested in making posix mode for it, I'm certain at least some distros would have made it the default by now.
This was done with uutils. I'm daily driving them.
I really like to write programs in Rust. But my stance has changed a bit over the years, ever since other languages caught up a bit. On top of that, I'm very skeptical about whether the rewrite of an ancient tool really brings more security. I don't know the apt source code or how it actually works behind the CLI interface, so I leave this judgement to the pros. But there seems to be a very strong move to rewrite all core systems in Rust. My issue with that is the fact that these tools don't even invent anything new. Or change / improve the status quo. I understand that it's hard to introduce a new system without breaking other stuff. But our systems are still based on decisions from the telegraph age. Layers on top of layers on top of layers.
I've heard two arguments for these rewrites that don't always come up in these discussions. There are fair counterpoints to both of these but I think they add valuable dimensions to the conversation, or perhaps may explain why a rewrite may not seem justified without them.
* It's becoming increasingly difficult to find new contributors who want to work with very old code bases in languages like C or C++. Some open source projects have said they rewrote to Rust just to attract new devs.
* Reliability can be proven through years in use but security is less of a direct correlation. Reliability is a statistical distribution centered around the 'happy path' of expected use and the more times your software is used the more robust it will become or just be proven to be. But security issues are almost by definition the edgiest edge cases and aren't pruned by normal use but by direct attacks and pen testing. It's much harder to say that old software has been attacked in every possible way than that it's been used in every possible way. The consequences of CVEs may also be much higher than edge case reliability bugs, making the justification for proactive security hardening much stronger.
Yeah, I get the point about attracting young blood. But I wonder whether the coreutils that have been rewritten were rewritten by the original maintainers? And again the question why not simply write something new. With a modern architecture etc., rather than drop-in replacements.
On your second point: I wonder how the aviation, space, and car industries do it. They rely heavily on tested/proven concepts. What do they do when introducing a new type of material to replace another one, or when a complete assembly workflow gets updated?
> And again the question why not simply write something new.
The world isn't black or white. Some people write Rust programs with the intent to be drop-in compatible programs of some other program. (And, by the way, that "some other program" might itself be a rewrite of an even older program.)
Yet others, such as myself, write Rust programs that may be similar to older programs (or not at all), but definitely not drop-in compatible programs. For example, ripgrep, xsv, fd, bat, hyperfine and more.
I don't know why you insist on a world in which Rust programs are only drop-in compatible rewrites. Embrace the grey and nuanced complexity of the real world.
> And again the question why not simply write something new.
There is a ton of new stuff getting written in Rust. But we don't have threads like this on HN when someone announces a new piece of infra written in Rust, only when there's a full or partial rewrite.
Re automotive and other legacy industries, there's heavy process around both safety and security. Performing HARAs and TARAs, assigning threat or safety levels to specific components and functions, deep system analysis, adding redundancy for safety, coding standards like MISRA, etc. You don't get a lot of assurances for "free" based on time-proven code. But in defense there's already a massive push towards memory safe languages to reduce the attack surface.
> why not simply write something new.
Because of backwards compatibility. You don’t rewrite Linux from scratch to fix old mistakes, that’s making a new system altogether. And I’m pretty sure there are some people doing just that. But still, there’s value in rewriting the things we have now in a future-proof language, so we have a better but working system until the new one is ready.
Sorry, I will answer this because I feel people got a bit hung up on the "new" thing. Might be a language barrier. I really understand the reasons, with backwards compatibility etc. The point I tried to make is that we really spend tons of time either maintaining software that was written or "born" 50 or so years ago, or rewriting things in the same spirit. I mixed my comments with the security aspect, which muddled a lot of what I tried to say with the "new" part. One sees this also on HN. I love the UNIX philosophy and also the idea of POSIX. But it's treated as if it were the holy grail of OS design and, in the case of POSIX, the only true cross-platform schema. Look also at the boot steps a CPU has to run through to boot up: it pretends to be a 40-year-old variant and then starts up features piece by piece. Well, I hope I've made my point clearer :)
Writing tools that are POSIX compatible doesn't mean one puts it on the pedestal of the "holy grail of OS design." I've certainly used POSIX to guide design aspects of things I build. Not because I think POSIX is the best. In fact, I think it's fucking awful and I very much dislike how some people use it as a hammer to whinge about portability. But POSIX is ubiquitous. So if you want your users to have less friction, you can't really ignore it.
And by the way, Rust didn't invent this "rewrite old software" idea. GNU did it long before Rust programmers did.
Inviting inexperienced amateurs to wide-reaching projects does not seem to be a prudent recipe. Nay, it is a recipe for disaster.
> Or change / improve the status quo
uutils/coreutils is MIT-licensed and primarily hosted on GitHub (with issues and PRs there) whereas GNU coreutils is GPL-licensed and hosted on gnu.org (with mailing lists).
EDIT: I'm not expressing a personal opinion, just stating how things are. The license change may indeed be of interest to some companies.
So a change for the worse.
The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.
Using GitHub is unacceptable, as it bans users in many countries from using it. You are excluding devs around the world from contributing. Plus it is owned by Microsoft.
So we replaced a strong copyleft license and a solid decentralized workflow with a centralized repo that depends on the whims of Microsoft and the US government and that is somehow a good thing?
> The GPL protects the freedom of the users while MIT-licensed software can be easily rug-pulled or be co-opted by the big tech monopolists.
That is not at all true. If someone were to change the license of a project from MIT to something proprietary, the original will still exist and be just as available to users. No freedom is lost.
With GPL I can compile my own copy and use it with their software. They have to allow that. They also have to give me their sources, changes included.
MIT is a big joke at the expense of the open-source community.
I mean, sadly, even though I hate the bans, the exclusion is really insignificant in the grand scheme of things, and most consider the benefits GitHub brings acceptable for the tradeoff. I am sadly one of those people: I am fairly young (25) and my introduction to git happened with GitHub, so I am really used to it. Though I am also developing a code forge as a hobby project, and maybe something serious in the long term.
There is also another crowd that completely aligns with US foreign policy and also has the same animosity towards those countries' citizens (I've seen a considerable number of examples of this).
As for the license part, I really don't get the argument: how can a coreutils rewrite get rug-pulled? This is not a hosted service where a MinIO-like situation [1] [2] can happen, and there are always the original utils if something like that were to happen.
[1] http://news.ycombinator.com/item?id=45665452 [2] https://news.ycombinator.com/item?id=44136108
Two GNU coreutils maintainers, including myself, monitor the issues and PRs on a GitHub mirror that we have [1]. Generally the mailing list is preferred though, since more people follow it.
[1] https://github.com/coreutils/coreutils
People have to learn on some project. Why not something that’s simple to test against? You know what it should do, so let’s rewrite it!
Whether the rewrite should be adopted to replace the original is certainly a big discussion. But simply writing a replacement isn’t really worth complaining about.
I think going polyglot causes more problems than it solves. It is gross how many different toolchains and package managers it now takes to build a distro. One person wants Python, another wants Node, another wants Go, and now this. With Node we traded buffer overflows for supply chain attacks. If they don't want C, it would be better to start fresh. Robert Morris re-wrote enough of Linux in Go to be usable, and the overhead was something like 5-15% slower than C. If the goal is Rust everywhere, contribute to Redox. They are further along that road.
There needs to be a limit for each project. Debian is a large project so it needs to have more options than smaller projects. Rust is getting popular enough it is reasonable for Debian to say it is an approved option.
Note that I'm not saying Debian should, I'm saying it is reasonable that they would. I am not a Debian maintainer and so I should not have an opinion on what tools they use, only that adding Rust isn't unreasonable. It may be reasonable to take away a different tool to get Rust in - again this is something I should not have an opinion on but Debian maintainers should.
Unfortunately, the world is a complicated place, and each of these languages has its own benefits and tradeoffs that suit it to one kind of problem or another (ask an ML scientist to switch to raw C), so all of them have a valid place in the pantheon of software (except maybe JS). Since Debian is a pragmatic OS, it needs to adapt to the real problem of being generally usable, and thus supports all of these languages. Rewriting everything in one language would be a massive pain and likely a massive waste of time, and backing an OS with less reputation and stable footing, like Redox, would be almost as counterproductive as rewriting everything in Debian from scratch (and it's a bit hyperbolic to state the goal is to rewrite everything in Rust), so supporting the gradual replacement of some mission-critical components like the apt parser, or whatever they're talking about, is likely more realistic. Although an OS definitely shouldn't "move fast and break things" (especially not one like Debian), I don't think it's too ridiculous to drop support for architectures that can't support a language released almost a decade ago. Having a proven language (I think it's safe to say Rust is proven by now, right?) that is much less prone to self-combustion on modification than C, yet stays directly compiled and interfaces relatively well with normal C libraries in standard applications, is a pretty good value proposition in my opinion.
Applications vs Infrastructure: When stand-alone applications are in completely different languages, that is normal and reasonable and fine. When it takes 5-10 different programming languages just to build and manage the base system, that is an engineering failure and a mess.
Linux gave up the fight against complexity a couple of decades ago.
Wouldn't it make sense to wait for (or support) one of the rust-for-GCC ports to become viable? As far as I understand, rust in the kernel won't become mandatory either until it's supported by GCC, and as a boon, with multiple implementations you can be more certain that the language won't move as fast and break things anymore. There's already upstream rust support in GCC, so I don't reckon it's that far off from being usable, at least for projects choosing to target it specifically.
Furthermore, if these architectures are removed from further debian updates now, is there any indication that, once there's a rust toolchain supporting them, getting them back into modern debian wouldn't be a bureaucratic nightmare?
> Furthermore, if these architectures are removed from further debian updates now, is there any indication that, once there's a rust toolchain supporting them, getting them back into modern debian wouldn't be a bureaucratic nightmare?
These architectures aren't being removed from Debian proper now, they already were removed more than a decade ago. This does not change anything about their status nor their ability to get back into Debian proper, which had already practically vanished.
None of the listed architectures have official Debian support anymore
i.e. they are only still around because they haven't caused any major issues and someone has bothered to fix them up from time to time in their own free time
so yes, you probably won't get them back in once they are out, as long as no company shoulders the (work time) bill for it (and by that I mean long-term maintenance more than the cost of getting them in)
but for the same reason they have little to no relevance when it comes to any future changes which might get them kicked out (as long as no company steps up and shoulders the (work time) bill for keeping them maintained)
Ports are not part of Debian and particularly don't release with Debian, they only ship unstable.
changed the wording a little, thanks
> There's already upstream rust support in GCC, so I don't reckon it's that far off from being usable, at least for projects choosing to target it specifically.
The GCCRS project can't even build libcore right now, let alone libstd. In addition, it is currently targeting Rust 1.50's feature set, with some additions that the Linux kernel needs. I don't see it being a useful general purpose compiler for years.
What's more likely is that rustc_codegen_gcc, which I believe can currently build libcore and libstd, will be stabilised first.
Interesting how a person's opinion can change: https://news.ycombinator.com/item?id=27594688
If only more people were willing to let their opinions be changed over time like that, rather than clinging onto them.
If only a reason were given. This is the original:
> Rust is a security nightmare. We'd need to add over 130 packages to main for sequoia, and then we'd need to rebuild them all each time one of them needs a security update.
What has changed? Why is 130 packages for a crypto application acceptable?
Probably because 120 (*) have been added in the intervening 4 years.
(*) random number
It depends if the original opinion was a reasoned one or just based on personal feelings.
The dependency explosion is still a problem and I’m not aware of any real solution. It would have been interesting to see why their opinion changed… I’m guessing it’s as simple as the perceived benefits overriding any concerns and no major supply-chain attacks being known so far.
I assume it was a management decision to adopt Rust in APT similar to the decision to switch to the Rust version of coreutils.
Let me assure you it was my own decision. The final paragraph is my paraphrasing of a fellow Debian developer and CTTE member's stated opinion.
I would much rather work with someone that other people find abrasive but who communicates much like myself and is clearly open-minded, than the opposite.
This is the same maintainer who broke KeePass on Debian and then flipped off everyone in the thread. Someone needs to pull him aside and let him know the world does not revolve around him and the problems he chooses to manufacture to justify his paycheck.
https://github.com/keepassxreboot/keepassxc/issues/10725#iss...
I'm happy for all developers programming in their favorite programming languages. Programming for over 30 years I have seen entire ecosystems come and go.
What I don't get is the burning need for Rust developers to insult others. Kind of the same vibes that we get from systemd folks and LP. Does it mean they have psychological issues and deep down in their heart they know they need to compensate?
I remember C vs Pascal flame back in the day but that wasn't serious. Like, at all. C/C++ developers today don't have any need to prove anything to anyone. It would be weird for a C developer to walk around and insult Rust devs, but the opposite is prevalent somehow.
Who is insulting others and where?
Based on this, and many other similar threads, it's the anti-Rust zealots insulting Rust users.
> What I don't get is the burning need for Rust developers to insult others
... where?
Rust developers have corporate backing and therefore feel superior even though the language is an ugly OCaml knockoff.
They have the backing to do the right thing, because their language is memory safe and that always means absolutely secure. Irony off.
IMHO, Rust is proof that many programmers prefer over-engineering and unnecessary complexity with obtuse syntax over sound language design. My personal theory is that they subconsciously like to keep their craft esoteric and "magic." The importance of readability, simplicity, and KISS just isn't taught enough nowadays.
The general problem is that there’s a lot of activists online nowadays. I used to call myself an activist, but now it’s a dirty word which I associate with obsessive behaviour, harassment, cancellation attempts and being generally obnoxious with the purpose of achieving some goal.
I think it’s a combination of religion decreasing in importance and social media driving people mildly nuts. Many undertakings are collecting “true believers”, turning into their religion and social media is how they evangelize.
Rust is a pretty mild case, but it still attracts missionaries.
So, the people are different, Western society’s different and social media’s giving everyone a voice while bringing out the worst in them.
It makes me uncomfortable that this mandate is coming from a Canonical employee. After all, if this switch was a good idea on merit alone, it would happen organically without requiring this kind of combative communication.
What's the long-term play for Canonical here?
Apt has just 3 listed maintainers, and judging by the git history this guy does 90% of the work. Him making the decision _is_ it happening organically.
Open source fundamentally is a do-ocracy (it's in literally all of the licenses). Those who do, decide; and more and more often those who do are just one or two people for a tool used by millions.
It's hard to imagine there is some malicious financial incentive to choosing a different language to write the package manager with...
The obvious potential motivations are things like making a more reliable product, or making their employees more productive by giving them access to modern tools... I guess I could imagine preparing for some sort of compliance/legal/regulatory battle where it's important to move towards memory safe tooling but even there I rather imagine that microsoft is better placed to say that they are and any move on canonical's part would be defensive.
"What's the long-term play for Canonical here?"
Presumably it's rewriting critical parsing code in APT to a memory-safe language.
The long term play is to drive out community participation and bring in corporate control in the apt/Debian ecosystem.
Is this the end of Debian as GNU/Linux? The main Rust toolchain isn't GNU, gccrs is still incomplete and most Rust rewrites of existing GNU libraries and tools use MIT or other non GPL licenses.
The main python and perl toolchains were never maintained by GNU either. Python has never been distributed under a GPL license. I'm not 100% sure of the licensing history of perl but I think it's always been available under a non-GPL license (as well as being under a GPL license - at least recently - not sure if that was always the case).
This doesn't seem like a noteworthy change to the degree to which GNU/Linux is an accurate name... though there are lots of things I'd put more importance on than GNU in describing debian (systemd, for instance).
Edit: Looks like Perl 1.0 was under the following non-commercial license, so definitely not always GPL though that now leaves the question of licensing when debian adopted it, if you really care.
> You may copy the perl kit in whole or in part as long as you don't try to make money off it, or pretend that you wrote it.
https://github.com/AnaTofuZ/Perl-1.0/blob/master/README.orig
GNU/Linux as a term was kind of a credit-grab by GNU anyway. They never were entirely responsible for the userspace.
But, there are now a lot more replacements for GNU's contributions under non-copyleft licenses, for sure.
It is hard to see it as anything else.
Time for the scheduled monthly rust drama I see.
More seriously I think Linux in general could benefit from a bit more pruning legacy stuff and embracing new so I count this as a plus
One major point of heartburn with Rust is that it comparatively lacks the diversity of ISA targets that C broadly enjoys. I know some of this is because C is both relatively simple to write a basic compiler for that more or less just works (in comparison to something crazy like C++), and because it's been around for a long time, but why isn't there more of a push to add at least all of the supported Debian ISAs to the Rust compiler?
There's no push to add Debian's officially supported platforms to Rust because Rust already supports those platforms.
Most people don't write a basic compiler for C either, "relatively simple" or no. Most people would rather add a new target to an existing compiler, which is much easier.
It's also "relatively easy" to add a new backend to Rust.
There's a policy document for Rust here: https://doc.rust-lang.org/rustc/target-tier-policy.html
There are a lot of things that can go wrong. You want to be able to test. Being able to test requires that someone has test hardware.
The language is tough love, and I think it's important despite what the first respondent has said.
Much of the language used seems to stem from nauseating interactions that have occurred in the kernel world around Rust usage.
I'm not a big fan of rust for reasons that were not brought up during the kernel discussions, but I'm also not an opponent of moving forward. I don't quite understand the pushback against memory safe languages and defensiveness against adopting modern tooling/languages
I think the spin that Rust is necessarily the way forward is what is wrong. IMHO Rust has severe problems and what is considered "modern" is mostly taste. We have seen the same thing in the past with a push towards C++, Java, managed languages. What is new is that the free software movement is now controlled so much by corporate interests that some of these changes are pushed through aggressively against the interests of other parts of the community. In the past, if you wanted something changed and there was no agreement, you created a fork and if it was truly better it was eventually adopted by the majority. Nowadays, the companies which fund most of the development aggressively pursue their interests and the part of the community that disagrees is forced out. This is justified with suitable propaganda: "not willing to adapt", etc. The whole point of free software should be that I do not have to adapt to some company's idea of what is modern if I do not want to. This is why I fled from Microsoft.
> I think the spin that Rust is necessarily the way forward is what is wrong.
I haven't seen this from Rust. Obviously lots of us think that Rust is the way forward for us but I think the problem you're talking about is that nobody offered any alternatives you liked better and that's not on Rust.
If Bob is ordering pizza for everybody who wants one, it is not the case that "Pizza is necessarily the way forward", and it's not Bob's fault that you can't have sliders, I think if you want sliders you're going to need to order them yourself and "Pizza is the way forward" is merely the default when you don't and people are hungry.
Dave Abraham's Hylo is an example of somebody offering to order sushi in this analogy. It's not yet clear whether Dave knows a Sushi place that delivers here, or how much Sushi would be but that's what having another way forward could look like.
In C++ they've got profiles, which is, generously, "Concepts of a plan" for a way forward and in C... I mean, it's not your focus, but nobody is looking at this right? Maybe Fil-C is your future? I note that Fil-C doesn't work on these obsolete targets either.
[flagged]
The disagreement is the difference between "there's a group [the Rust community] going around pushing/bullying projects to use Rust" and "project maintainers want to start using Rust". Those two things get conflated a ton, particularly in this forum by people who have an axe to grind against Rust.
The word you omitted is key: Necessarily.
This whole it used to be different thing is looking back with rose tinted glasses. It’s always been the case that project maintainers were able to make choices that the community didn’t necessarily agree with, corporate backed contributors or not, and it’s still a possibility to fork and try to prove out that the other stance is better.
Nobody is being forced out of the community, you can fork and not adopt the changes if you want. Thats the real point of free software, that you have the freedom to make that choice. The whole point of free software was never that the direction of the software should be free from corporate control in some way, the maintainers of a project have always had the authority to make decisions about their own project, whether individual or corporate or a mix.
The point of freedom in software is certainly that I can create my own fork. And in individual projects a maintainer can certainly do what he wants. But it is still worrying when, in community projects such as Debian, decisions that come with a cost to some part of the community are pushed through without full consensus. It would certainly not be the first time. systemd was similar, and for similar reasons (commercial interests by some key stakeholders), and I would argue that Debian suffered a lot from how badly this was handled. I do not think the community ever got as healthy and vibrant as it was before this. So it would be sad if this continues.
...it is still worrying when, in community projects such as Debian, decisions that come with a cost to some part of the community are pushed through without full consensus.
What are some concrete cases you can point to where a decision was made with full consensus? Literally everyone agreed? All the users?
I'm not sure many projects have ever been run that way. I'm sure we've all heard of the Benevolent Dictator for Life (BDfL). I'm sure Linus has made an executive decision once in a while.
> pushed through without full consensus
Requiring full consensus for decisions is a great way to make no decisions.
> are pushed through without full consensus
You describe it that way, but that's not how the world in general works in practice. You do things based on majority.
No, this is not how you do things in a functioning community. You do things based on societal contracts that also protect the interests of minorities.
I cannot fathom using the rest of my Saturday attempting to break down the level of spin you’re trying to play at here.
> systemd was similar and for similar reasons (commercial interests by some key stakeholders)
False claims don't really make the claims about the evils of Rust more believable.
> if you wanted something changed and there was no agreement, you created a fork and if it was truly better it was eventually adopted by the majority.
This assumes there wasn't agreement.
And if so, what would 'eventually adopted by the majority' mean. Is this announcement not that?
> The whole point of free software should be that I do not have to adapt to some company's idea of what is modern if I do not want to.
This hasn’t changed.
> I think the spin that Rust is necessarily the way forward is what is wrong.
Well, what's the alternative? The memory safety problem is real, I don't think there is any doubt about that.
C/C++ is a dead end: the community has thoroughly rejected technical solutions like the Circle compiler, and "profiles" are nothing more than a mirage. They are yet again trying to make a magical compiler which rejects all the bad code and accepts all the good code without making any code changes, which of course isn't going to happen.
Garbage collection is a huge dealbreaker for the people still on C/C++. This immediately rules out the vast majority of memory-safe languages. What is left is pretty much only Zig and Rust. Both have their pros and cons, but Rust seems to be more mature and has better community adoption.
The way I see it, the pro-memory-safety crowd is saying "There's a giant hole in our ship, let's use Rust to patch it", and the anti-Rust crowd yells back "I don't like the color of it, we shouldn't repair the hole until someone invents the perfect solution". Meanwhile, the ship is sinking. Do we let the few vocal Rust haters sink the ship, or do we tell them to shut up or show up with a better alternative?
> Meanwhile, the ship is sinking.
No it is not. We have a lot of amazing and rock solid software written in C and C++. Stuff mostly works great.
Sure, things could be better, but there is no reason why we need to act right now. This is a long-term decision that doesn't need to be rushed.
> What is left is pretty much only Zig and Rust.
We had Ada long before Rust and it is a pretty amazing language. Turns out security isn't that important for many people and C++ is good enough for many projects apparently.
There is also D, Nim, Odin and so on.
> Garbage collection is a huge dealbreaker
It isn't. We had Lisp Machines in the 80s and automatic garbage collection has vastly improved these days. So I wouldn't rule those out either.
In short, no, the ship is not sinking. There are many options to improve things. The problem is that once you depend on Rust it will be hard to remove, so it is better to think things through rather than rushing to adopt it.
Basically correct, but Zig is not a memory safe language. It may be an improvement wrt. syntax over C, and its standard library facilities may be genuinely better than Rust's wrt. writing unsafe code, but it's simply not interesting from a safety perspective. I'm sure that even the most rabid Zig advocates would readily acknowledge this point.
> Garbage collection is a huge dealbreaker for the people still on C/C++.
The problem is not so much GC itself, but more like pervasive garbage collection as the only memory management strategy throughout the program. Tracing GC is a legit memory management strategy for some programs or parts of a program.
> it's simply not interesting from a safety perspective
The reason memory safety is interesting in the first place (for practical, not theoretical reasons) is that it is a common cause of security vulnerabilities. But spatial memory safety is a bigger problem than temporal memory safety, and Zig does offer spatial memory safety. So if Rust's memory safety is interesting, then so is the memory safety Zig offers.
I'm a rabid software correctness advocate, and I think that people should acknowledge that correctness, safety (and the reasons behind them) are much more complex than the binary question of which behaviours are soundly disallowed by a language (ATS advocates could say that from their vantage point, Rust is just about as unsafe as C, and so is completely uninteresting from that perspective).
The complexity doesn't end with spatial vs temporal safety. For example, code review has been found to be one of the most effective correctness measures, so if a language made code reviews easier, it would be very interesting from a correctness/security perspective.
I am very much inclined to agree with you, but do you have any sourcing for the claim that spatial safety is the bigger problem with regards to security vulnerabilities? Every time I feel like posting similar sentiments I just know that a reply will appear linking to an article about how Microsoft and/or Google claim 70% of bugs are memory safety issues. Both of the ‘studies’ (code surveys) seem to claim use-after-free bugs make up the majority of those.
Mitre place spatial memory safety higher on the list: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html (also >3x KEV)
Zig only does bounds checking by default in Debug and ReleaseSafe builds. If you build with ReleaseFast or ReleaseSmall it will happily do an out of bounds read: https://godbolt.org/z/733PxPEPY
That's a matter of how policy is set. You can set it to on or off for a particular function, too. The point is that language offers sound spatial safety just as much as Rust does (and both allow you to turn it on or off in particular pieces of code).
Defaults and ecosystem approach matter a lot, though.
The whole Rust ecosystem is heavily biased towards prioritising memory safety and "safe by construction".
This is evident in the standard library, in how crates approach API design, what the compilation defaults are, ...
In 6+ years of using Rust the only time I had to deal with segfaults was when working on low level wrappers around C code or JIT compilation.
Zig has some very interesting features, but the way they approach language and API design leaves a lot of surface area that makes mistakes easy.
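To make the "defaults" point concrete, here's a minimal sketch of my own (purely illustrative, not code from any project discussed here): slice indexing in Rust stays bounds-checked even in release builds, and skipping the check is an explicit, local opt-in to `unsafe`:

    fn main() {
        let xs = [10u32, 20, 30];
        let i = 7usize;

        // Default behaviour: indexing is bounds-checked in debug *and* release
        // builds, so `xs[i]` here would panic instead of reading out of bounds.
        // let _ = xs[i];

        // Checked access that returns None rather than panicking.
        assert_eq!(xs.get(i), None);

        // The opt-out is explicit: skipping the check means writing `unsafe`.
        let first = unsafe { *xs.get_unchecked(0) };
        assert_eq!(first, 10);
    }

The build profile doesn't change that default; only the spelled-out `unsafe` call skips the check.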
Rust technically isn't a memory safe language the second you use "unsafe". Rust advocates tend to pretend they can have their cake and eat it too when comparing it to other low level languages. No, just because you have the word unsafe next to the scary parts doesn't make it okay.
I’ve written a good chunk of low level/bare metal rust—unsafe was everywhere and extremely unergonomic. The safety guarantees of Rust are also much weaker in such situations so that’s why I find Zig very interesting.
No oob access, no wacky type coercion, no nullptrs solves such a huge portion of my issues with C. All I have to do is prove my code doesn’t have UAF (or not if the program isn’t critical) and I’m basically on par with Rust with much less complexity.
The point of unsafe is that you have small bubbles of unsafe which you can verify rigorously, or check with tools like Miri to make sure their invariants are upheld, and then you build a safe abstraction on top of that unergonomic part. Looking at embedded-hal, or at the extreme end embassy, you can see the value of it. If you don't do any abstraction, I definitely agree Rust is not fun to write at all.
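As a rough sketch of that pattern (my own example, not code from apt, embedded-hal, or embassy): the unsafe bubble lives inside one function whose bounds check justifies it, and callers only ever see the safe signature:

    // Hand out the first element and the rest of the slice as two simultaneous
    // mutable borrows; the unsafe block is the only part that needs careful
    // review (or a run under Miri).
    fn split_first_mut(slice: &mut [i32]) -> Option<(&mut i32, &mut [i32])> {
        if slice.is_empty() {
            return None;
        }
        let len = slice.len();
        let ptr = slice.as_mut_ptr();
        // SAFETY: element 0 and the range [1, len) do not overlap and both lie
        // within the original slice, so handing out two &mut at once is sound.
        unsafe {
            Some((
                &mut *ptr,
                std::slice::from_raw_parts_mut(ptr.add(1), len - 1),
            ))
        }
    }

    fn main() {
        let mut xs = [1, 2, 3];
        if let Some((head, rest)) = split_first_mut(&mut xs) {
            *head = 10;
            rest[0] = 20;
        }
        assert_eq!(xs, [10, 20, 3]);
    }

Everything built on top of the safe function inherits the guarantee without ever writing `unsafe` itself.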
The safety guarantees of Rust the language around unsafe are just as good as C or Zig if you use the appropriate facilities (raw pointers, MaybeUninit, UnsafeCell/Cell, Option for nullability, Pin<> etc). Sometimes this is made unnecessarily difficult by standard library code that expects all the guarantees of ordinary Safe Rust instead of accepting more lenient input (e.g. freely aliasable &Cell<T>'s), but such cases can be addressed as they're found.
My point is that it’s easier to write correct Zig code than correct unsafe Rust. Raw pointers can be null in rust so you should use NonNull<T> but there’s aliasing rules that are easy to mess up. And difficultly with the stdlib as you mentioned.
I don’t actually mind Rust when I was able to write in safe user land, but for embedded projects I’ve had a much better time with Zig.
While memory safety is important, I do not believe it is suddenly so important that it now needs to be solved quickly at all costs. There are far more pressing issues. I also do not think at all that C/C++ are a dead end. In fact, I think it would be far more cost-effective to continue to improve the safety of C and C++ code via gradual improvement than to add another language into the mix. The reason is that complexity and the burden of long-term maintenance are the main issues we have in free software, and Rust makes this worse rather than better. Case in point: the fact that I have limited security updates because of some Rust video codec in the dependency chain hurts my security far, far more than the fact that it might prevent some memory safety issue: https://www.debian.org/releases/trixie/release-notes/issues.... I think this is what people are missing. They over-exaggerate the importance of memory safety over rather mundane issues which are actually far more important. I have never known anybody who was directly affected by a security issue caused by memory unsafety, but I know plenty who were affected because their software was not up-to-date.
What are realistic options for "gradual improvement" of memory safety in C and C++?
My first thought is that it is kind of like talking about gradually improving manual memory allocation in Java. C and C++ are fundamentally memory unsafe; it's part of their design, to offer complete control over memory in a straightforward, direct way.
English link: https://www.debian.org/releases/trixie/release-notes/issues....
> The Debian infrastructure currently has problems with rebuilding packages of types that systematically use static linking. With the growth of the Go and Rust ecosystems it means that these packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably.
> IMHO Rust has severe problems and what is considered "modern" is mostly taste.
Really? As opposed to e.g. C or C++ (as the most important languages which Rust is competing with)? Sure, taste plays into everything, but I think a lot of people work with Rust since it's genuinely a better tool.
I hear you on free software being controlled by corporate interests, but that's imo a separate discussion from how good Rust is as a language.
Ada and SPARK fulfilled the promise of a safe systems language decades ago without making most of the mistakes Rust does. Rust has its strong sides, sure, but it's far from the only shop in town. The GCC happens to include an Ada compiler as well.
What are those mistakes?
It seems like Ada more or less has to have memory safety bolted on -- that is what SPARK does -- and it's not clear that Ada's bias towards OO is better than Rust's bias towards functional programming.
Are you talking about features like type inference (so the Rust code could be less clear, since types are not always written out)?
The problem is they forgot about making the language approachable, so it lives in its bubble of safety-critical usage (whose lunch Rust is kinda starting to eat with the certified Rust fork).
If you’re referring to Ferrocene with the certified Rust fork, then I’d like to make the minor correction that we don’t consider Ferrocene a true fork, but rather a downstream distribution of the Rust project’s compiler. There are very few changes to the compiler itself. Most relevant changes are documentation, build process and different test coverage - we do test architectures that upstream does not.
Yeah fork is a bad word for it. Sorry about that.
What do you find unapproachable about Ada?
For starters, it looks completely alien. My real introduction to Ada was with this article comparing it to Rust on solving Advent of Code [1], but it gives me the same feeling I get when I try to read Haskell code (not that extreme). I did not actually give it a real chance though, and its user base is even more niche than Rust's, so idk. It has really cool properties (being able to define n-bit datatypes is nice) but it also leans more to the imperative side, so that also does not interest me.
[1] https://github.com/johnperry-math/AoC2023/blob/master/More_D...
And just recently Modula-2.
That is a 'subtle whataboutism' reply, actually...
you see, GP did not speak in relative terms, but absolutely: They believe Rust has problems. They did not suggest that problems with programming languages are basically all fungible, that we should sum up all problems, compare different languages, and see which ones come out on top.
I'm very happy with common lisp for fast code.
Of course most people aren't smart enough for the language so they have to use inferior algol languages like rust.
No need to sully CL with this kind of elitism. Any language you need to be a genius to use is a bad language. That's one of the fundamental issues with C. We're all imperfect idiots some of the time and one instance of undefined behavior breaks any guarantees the language gives you.
I find that languages with a high intellectual barrier to entry are much more pleasant places to be since people like the OP can't understand them and we never have people try to bully us into doing things _the right way_.
This is someone who says things like
>It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.
While on company time.
> since people like the OP can't understand them and we never have people try to bully us
Yes well, glad to hear there’s no one bullying people there!
I don't normally upvote snark, but... Bravo.
Elitism is its own form of bullying and needs to be treated as such.
I don't particularly like large swaths of humanity, but I also try hard not to be elitist towards them either. I'm not always successful, but I make a strong effort as my family raised me to be respectful to everyone, even if you don't personally like them.
I'm glad you understand how self defense works.
Name me another language that offers the memory safety and deterministic runtime guarantees that Rust does, eliminating entire classes of bugs, the same integration with existing C code that Rust does, and enjoys the same community and pool of eager contributors that Rust does.
I'll wait.
The pushback comes from the idea of rewriting all old tools in another language just because you can. Instead of creating new projects in the new language, it feels like most Rust projects are rewrites of old projects. And most of the projects you have read about on Hacker News in the last year, 'I made xy, but in Rust', are already abandoned. It's just a trend to rewrite something that already exists in Rust just to learn the language and then release it for productive use.
[flagged]
For me it actually is the language. While a little pushy at times, I think the arguments for rewriting certain things in a safer language are well founded. Whether the apt toolchain is one of those places I'll leave for the Debian developers to determine, but for decompression tools I can see a benefit.
Should Rust be the language of choice? Preferably not. The syntax is awful, the language is complicated, and Rust programs seem to collect dependencies at the same rate as JavaScript. Where I might agree with you is that Rust seems to attract a certain type of people. They write absolutely brilliant software, but like the Rust compiler, they are rather particular about what input they'll accept.
In the end I don't really care what apt is written in, I'm not the one writing the code. I just use the tool. It would be sad if some platforms are left behind, because the Rust developers don't care about them and not because they're no longer useful.
> While a little pushy at times, I think the arguments for rewriting certain things in a safer language are well founded.
Yes. It is. Just write the code and show us that it is good.
Ironically, the people hating on it (and usually without any technical arguments) act way more cultish.
At least it looks that way to my not-rust-using self
Tell me you haven't used Rust without telling me you haven't used Rust.
> I don't quite understand the pushback against memory safe languages
As far as I read on HN, the only memory-safe language discussed here is Rust, and mostly with childish pro arguments.
Java and C# are memory safe languages, as are common interpreted languages like Python and Ruby. Even JavaScript is memory safe, barring the possibility of subtle JIT bugs that may practically impact such safety.
But op means memory and data safe, without a GC nor a runtime, so it can be used as a systems programming language. For "some reason" people only talk about Rust in this space!
There are currently no widely-used programming languages other than Rust that offer the memory safety guarantees Rust has without a garbage collector. I think this is bad, and I wish more people would develop memory-safe systems languages that explore other parts of the design space like Rust does. But in the meantime, Rust is pretty good, certainly preferable to C or C++.
Swift is also memory safe in that sense.
It has a gc in the general sense of that term.
Swift uses Automatic Reference Counting (ARC) which is a form of garbage collection.
Now I'm curious, how does it GC cycles, if at all?
EDIT: from a brief search: it doesn't.
Not sure why you're downvoted, RC is definitely a form of GC even if not a tracing GC.
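The practical difference shows up with cycles. A minimal sketch in Rust terms (using Rc purely as a stand-in for ARC, since the counting idea is the same):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        // Strong reference to the next node; a cycle of strong refs never drops.
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        // Close the cycle: a -> b -> a.
        *a.next.borrow_mut() = Some(Rc::clone(&b));

        assert!(b.next.borrow().is_some());
        assert_eq!(Rc::strong_count(&a), 2);
        assert_eq!(Rc::strong_count(&b), 2);
        // When a and b go out of scope the counts drop to 1, never 0, so the
        // nodes leak. A tracing GC would reclaim them; plain reference counting
        // cannot, which is why Rust has Weak and Swift has weak/unowned refs.
    }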
Even modern C++ is memory safe when used properly.
By that definition, all languages are "memory safe when used properly".
Well. Yes. Lol
That doesn't really make sense since memory safety is a property of a language. You can have code that is unsafe (read unsound), but that is a separate issue.
For a language to be memory safe it means there must be no way to mishandle a function or use some object wrong that would result in an "unsafe" operation (for Rust, that means undefined behavior).
That is to say the default is safe, and you are given an escape hatch. While in something like c/c++ the default is unsafe.
I'd also like to add that program correctness is another separate concept from language safety and code safety, since you could be using an unsafe language writing unsafe ub code and still have a correct binary.
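To make the "safe default, explicit escape hatch" point concrete, a small sketch of my own (not the commenter's): the dangling reference is rejected at compile time, and the only way to even smuggle something dangling out of a scope is as a raw pointer, which cannot be read outside `unsafe`:

    fn dangling_ref<'a>() -> &'a i32 {
        let x = 42;
        // &x  // rejected at compile time (E0515): cannot return reference to local `x`
        let _ = x;
        &0 // a promoted constant lives long enough, so this is accepted
    }

    fn dangling_raw() -> *const i32 {
        let x = 42;
        &x as *const i32 // allowed to exist, but reading it later requires `unsafe`
    }

    fn main() {
        println!("{}", dangling_ref());
        let p = dangling_raw();
        // Dereferencing `p` would need an `unsafe` block and would be UB, so safe
        // code never observes the dangling pointer.
        let _ = p;
    }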
I haven't either, until I read comments on Rust in Linux on social media outside HN.
Apparently, Rust is part of the "woke agenda"
Yep, I noticed that under a lot of videos mentioning rust in kernel, or rust in general there's a high chance that the comment section will just be straight up lifted from 4chan pol or a similar place
Is there any particular reason for this? Do they not agree with the code of conduct more than normal?
Complete disagreement with codes of conduct and the typical political orientation of their proponents. Also, pure contrarian spirit.
That is a crude and politically inept way of putting it, but do you not think there is some grain of truth to it?
If you opt into something with as high a barrier to entry and necessary time commitment as a programming language, you naturally also opt into the existing community around that language, because that will be where the potential contributors, people to help you solve issues, and people you have to talk to if you need the language or ecosystem to move in some direction will hail from. In turn, the community will naturally get to impose its own values and aesthetic preferences onto you, whether by proactively using the position of relative power they have over you, or simply by osmosis. As it happens, the community surrounding Rust does largely consist of American progressives, which should not be surprising - after all, the language was created by an American company whose staff famously threatened mutiny when its own CEO turned out to offend progressive sensibilities.
As such, it is natural that bringing Rust into your project would over time result in it becoming more "woke", just like using Ruby would make it more likely that you attract Japanese contributors, or targeting Baikal CPUs would result in you getting pulled into the Russian orbit. The "woke" side themselves recognises this effect quite well, which is why they were so disturbed when Framework pushed Omarchy as a Linux distribution.
Of course, one needs to ask whether it is fair to insinuate premeditation by calling a mere expected effect an "agenda". Considering the endlessly navel-gazing nature of the culture wars, I would find it surprising if there weren't at least some people out there who make the same observation as above, and do think along the lines that driving Rust adoption is [also] a good thing because of it. Thus, Rust adoption does become, in a sense, part of the "woke agenda", just as Rust rejection becomes, perhaps even more clearly so, part of the "chud agenda".
> As such, it is natural that bringing Rust into your project would over time result in it becoming more "woke", just like using Ruby would make it more likely that you attract Japanese contributors, or targeting Baikal CPUs would result in you getting pulled into the Russian orbit. The "woke" side themselves recognises this effect quite well, which is why they were so disturbed when Framework pushed Omarchy as a Linux distribution.
I think this analysis is basically accurate - there's no conspiracy or even deliberate agenda going on, it's just that the community surrounding Rust happens to have (at the moment, anyway) a relatively high number of American progressives, many of whom are openly interested in imposing American progressive ideological norms in spaces they care about (which is basically what we mean by the term "woke").
I think Rust is a good software tool and I would like to see it be as widely adopted and politically-neutral as C is, and used in all sorts of projects run by all sorts of people with all sorts of other agendas, political or otherwise. Consequently, I would like to see people and projects who do not agree with American progressive norms adopt the language and become active users of it, which will help dilute the amount of Rust users who are progressives. I myself am not an American political progressive and I have lots of issues with the stated politics of many well-known Rust developers.
Is that stuff still going on? People were rather intense about certain politics during COVID, but outside of some furries on the Rust discord I haven't noticed anything outwardly political?
I don't know how you would measure it, though I would assume that there is no particular reason for the people who use the language or their preferences to change? A quick search brought up that they apparently renamed their master branch to "main" very recently (https://blog.rust-lang.org/inside-rust/2025/10/16/renaming-t...) (context: https://sfconservancy.org/news/2020/jun/23/gitbranchname/), to the point I was more surprised that it hadn't already happened earlier.
The general temperature of politics in FOSS, I think, is not obviously lower than before: just in terms of things that made it onto HN, in the past month or so alone we have seen the aforementioned kerfuffle about dhh (the leader? founder? of Ruby on Rails), his projects and their detractors, and the wrestling over control between NixOS's board and its community moderators who were known for prosecuting political purges and wanted to assert formal authority over the former.
We need more effective ways to prevent ideological capture of our institutions especially when there is no actual connection between an institution's mission (e.g., to support the use of Rust) and religion, politics, sexuality or morals.
Reading loaded adjectives applied to programming languages is a waste of time. I would skip that kind of social media.
People are (understandably) sick of the fact that for whatever reason, the biggest proponents of Rust are insufferable.
Personally, I'm simply bothered by the fact that (one of?) the most famous figure of Rust on Linux and Rust Forever consumes and advocates for pornography that's illegal in my country, without being held accountable by the community.
From what I could piece together, the only group who ever cried wolf about this is a forum full of contemptuous little angry men who spend weeks researching people they hate on the internet. No one seems to want to touch the subject for fear of being associated with them.
I'll give it to you, this is not a great time.
I'm genuinely not sure who you're talking about or whether this is an accurate characterization of their views. For that matter, I'm not sure what country you're in and whether I myself agree with that country's laws about whatever kind of pornography this is. Certainly plenty of countries I don't live in and have no ties to have laws I disagree with or violate routinely.
I'm pretty suspicious of demands for communities to hold people accountable, especially when the community in question is a loose group of people who mostly communicate online and are united by their shared use of a specific programming technology; and who probably disagree on all sorts of other issues, including contentious ones.
> advocates for pornography that's illegal in my country, without being held accountable by the community
If some form of speech is illegal in your country it does automatically mean it should be illegal for the whole world or that it is wrong or that the world-wide community should adhere to standards specific to your country. Even if that country is USA.
In other words, nobody should give a flying f about open source developers porn preferences.
> Personally, I'm simply bothered by the fact that (one of?) the most famous figure of Rust on Linux and Rust Forever consumes and advocates for pornography that's illegal in my country, without being held accountable by the community.
Your abhorrent personal opinion of another individual has no place in a technical discussion.
Who are you talking about? Asking in good faith, I never heard of that controversy before
The (retired?) head of a project trying to port Linux to Macs. I avoid naming names, not to be cryptic but because a lot of people like to stumble on these kinds of posts via search. Or moderate them. I'm not agreeing, but I can see why it's easier.
Please, don't use loaded, provocative, and most importantly, irrelevant terms in these discussions. It helps nothing and serves nobody.
The pushback is against the acolytes not the language.
If you could separate the language from the acolytes it would have seen much faster adoption.
Acolytes being the people talking positively about their experience using a language and the strengths they think it has. So the people with positive opinions should say nothing at all, and the people with negative opinions should be free to share. And somehow, you think this will lead to faster adoption.
That’s an interesting thought. It would run counter to everything we know about human nature, but interesting nevertheless.
Rust is already pretty successful adoption wise. It’s powering significant parts of the internet, it’s been introduced in 3 major operating systems (Windows, Linux, Android), many successful companies in a variety of domains have written their entire tech stack in it. Adoption as measured by crates.io downloads has doubled every year for the last 10 years.
Now I’m imagining how much more widely Rust would be used if they had adopted your visionary approach of never saying anything positive about it.
> Acolytes being the people talking positively about their experience using a language and the strengths they think it has.
No, it's the people who have given rise to the multiple Rust memes over the years.
I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community. Scala? Kotlin? Swift? Zig? None of those languages have built such poor reputations for their communities.
After all, for quite a few years every thread on forums that mentioned C or C++ was derailed by Rust proponents. I didn't see C++ users jumping into Rust threads posting attacks, but there are many examples of Rust users jumping into C++ or C threads, posting attacks.
> That’s an interesting thought. It would run counter to everything we know about human nature, but interesting nevertheless.
Well, the fact that Rust is an outlier in this sample should tell you everything you need to know; other up-and-coming languages have not, in the past, gotten such a reputation.
> I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community.
Because you’re young or you weren't around in 2010 when Go was gaining adoption. Same shit back then. People said “I like the language, it’s quite useful” followed by tirades from people who thought it was the end of human civilisation. It had exactly the reputation you speak of. (“DAE generics???”)
Eventually the haters moved on to hating something else. That’s what the Rust haters will do as well. When Zig reaches 1.0 and gains more adoption, the haters will be out in full force.
> Because you’re young or you weren't around in 2010 when Go was gaining adoption.
I've been working as a programmer since the mid-90s
>> I'm battling to think of any other about-to-go-mainstream language that had the reputation of a hostile community.
> People said “I like the language, it’s quite useful” followed by tirades from people who thought it was the end of human civilisation.
And? That's not the same as having a hostile community. I never saw Go proponents enter C# or Java discussions to make attacks against the programmers using C# or Java, like I saw constantly with Rust proponents entering C or C++ discussions and calling the developers dinosaurs, incompetent, etc.
> not the same as having a hostile community
Hostile according to who? According to the haters, maybe. I’m sure the Go community was called “hostile” by haters back in the day.
Look at the drama created by Linux maintainers who were being insanely hostile, coming up with spurious objections, being absolute asshats - to the point where even Linus said enough was enough. The Rust for Linux members conducted themselves with dignity throughout. The Linux subsystem maintainers acted like kindergarteners.
But of course, haters will read the same emails and confirmation bias will tell them they’re right and Rust is the problem.
Keep hating.
My friend, the OP in this very thread has in multiple posts, made outright slanderous comments about C programmers. The reputation of the Rust community is very much the making of the Rust promoters. If you are seeing pushback, that's just the consequences of such behavior.
I also notice that these language debates are very much generational. That has a few consequences. First is that older devs have thicker skin. Second, older devs are more wary of the big promises made by Rust. Whether you like it or not, the push for Rust very much comes across as naivete as much as anything to older, more experienced devs who have seen this type of thing before.
You can't write a device driver without manipulating memory directly. An OS kernel has to manipulate memory directly by definition. Most academic research into memory-safe languages is mixed, with a high number of null results (meaning it doesn't work). Yet the Rust folks push it as the 'one true way'. Meanwhile, most Rust open-source projects are currently abandoned.
It's not hate; it's pointing out a track record and avoiding repeating past mistakes, thanks to painful experiences in our youth. Your determination to repeat past mistakes doesn't come across as enlightenment like you think it does.
Here, find the “null result” in this study by the Android team - Eliminating Memory Safety Vulnerabilities at the Source (https://security.googleblog.com/2024/09/eliminating-memory-s...). They stopped adding new memory unsafe code and they saw a dramatic drop in the number of memory safety vulnerabilities. They only write new code in Kotlin or Rust now.
The Android team shipped a more secure operating system to billions of people. Their lives are better because of choosing more Rust and Kotlin and less C++.
> You can't write a device driver without manipulating memory directly.
This isn’t the gotcha you think it is. Check out this upstreamed driver - https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
This is a successful kernel driver that powers all IPC in Android. This is the most load bearing component in Android, especially because it is constantly being attacked by malware. It manipulates memory just fine actually.
In your hurry to dismiss Rust, you haven’t done a technical evaluation of it. If you had you wouldn’t conflate memory safety with a lack of memory manipulation. You’ve taken the intellectually lazy shortcut of dismissing a new thing because no new thing can be as good as the old things.
I write all this not to convince you to change your mind. I don’t think that’s possible. I write it so anyone else reading this will avoid your thought process. I don’t need to convince you, because this industry is moving ahead regardless.
> I’m sure the Go community was called “hostile” by haters back in the day.
I was there, and no it wasn't. The Go community didn't jump into every programming discussion throwing around accusations of dinosaur, insecurity, etc.
> I didn't see C++ users jumping into Rust threads posting attacks
There absolutely are, and have been. You could say it's a reaction. I don't want to argue about who started it.
I agree with you that if the Rust community has gained such a peculiar reputation, it's also due to valid reasons.
From the outside looking in most of the push back against rust seems to be of the "I am too old, never make mistakes, and don't want to learn a new thing late in my career" variety.
I have rarely seen an argument that pushes back against Rust with actual alternative solutions to the problems the rust proponents are trying to solve. It is mostly a bunch of old people letting the perfect be the enemy of the good.
I think you got your last sentence backwards. Because reading it literally, you mean what we have now is perfect. If that's the case, switching to Rust is a waste of time.
No, it makes sense to me. Perfect is not what we have now (as evident to everybody outside of the "skill issue" denialists) but how C/C++ or a new language could get up to par with Rust with some enhancement, and without the latter's idiosyncrasies.
> Scala? Kotlin? Swift? Zig? None of those languages have built such poor reputations for their communities.
> I didn't see C++ users jumping into Rust threads posting attacks, but there are many examples of Rust users jumping into C++ or C threads, posting attacks.
I've already seen this with Zig. And even without language communities. Look at this whole thread. Look into the mirror. Regularly, when Rust is mentioned on HN, the anti-Rust cult comes to complain that there is Rust.
Even if someone just posts "I have made this with Rust", this cult comes and complains "why do you need to mention Rust?!". Like, look at yourself. Who hurt you?
> Look in to the mirror.
Pointing out that the Rust community has gained such a poor reputation while other communities have not requires "looking into the mirror"?
In my experience from these threads, there are more people polluting the discussion by complaining about Rust "acolytes" than actual acolytes.
Rust haters seem strangely obsessed.
> Rust haters seem strangely obsessed.
Well, this is a great example. People complaining about the community are labeled as people complaining about the language.
Do you not see the problem here?
I think you'd need to give answer to your own questioning here... why did you take "Rust haters" as "Rust-language haters", and not as "Rust-community haters"?
> I think you'd need to give answer to your own questioning here... why did you take "Rust haters" as "Rust-language haters", and not as "Rust-community haters"?
Because it literally says "Rust haters"; not "Rust community haters".
Are you saying that when someone refers to "Rust", they mean the community and not the language?
Yes, you're half way there.
If you change it to "Rust community haters seem strangely obsessed.", it is still valid.
> If you change it to "Rust community haters seem strangely obsessed.", it is still valid.
Maybe. What does that have to do with the Rust community having such a poor reputation compared to other communities?
The Rust language and the Rust community are intertwined. It's a design decision from the language creators.
> If you could separate the language from the acolytes it would have seen much faster adoption.
Good news: you can. And that's why it has had fast adoption.
(those advocating for Rust in "meme-like" ways are not generally the same people actually developing the Rust compiler or the core parts of it's ecosystem)
I think this is the wrong way to promote Rust. For me Rust is just hype. I know nobody that programs or even thinks about Rust. I'm from the embedded world and there C is still king. I understand that some will see Rust as a good alternative, but as long as the real money is made in C it is not ready.
> I know nobody that programs or even thinks about Rust.
I think it isn’t reasonable to infer that nobody uses something because you don’t know anybody who uses it in your niche. I know lots of embedded programmers who use Rust.
Rust slaps on embedded too; I think that's one of its core competencies. But you have to do a lot of leg work for each piece of hardware because manufacturer support isn't there, and the OSS libs are usually not great. If your requirement is "Use only the most popular language in this domain", that's fine, but there's no point in evaluating or discussing other languages if so; the outcome is predetermined.
I think the linked requirement, the hype you see, and rust's own material is misleading: It's not a memory-safety one-trick lang; it's a nice overall lang and tool set.
The unfortunate reality is that you must write almost all of your drivers from scratch if you want to use Rust in embedded. There is no OEM driver support, and as you said, the open-source drivers are all crap, written for Arduino-level hobby projects.
Lack of drivers is prohibitive if you are a small/medium team or are using a lot of complicated peripherals or SoCs. Compare to C, where any MCU or embedded SoC or moderately complex peripheral normally comes with C driver code.
I'm not following: Rust excels at C interop, so why wouldn't you use the OEM drivers/SDK and bind them through Rust for your own code? That's what I've always done when I need to interact with a C library in Rust.
So the main reason that doesn't always work is how you are using Rust. For instance, right now I'm leaning into the Rust Embassy ecosystem, which is async-based; the drivers need relatively deep integration with the embedded-hal-async layer, which is far from trivial to do with C bindings.
In practice I end up rewriting drivers. That sounds daunting, but oftentimes it's much easier than folks think, and the resulting code is usually a quarter the size of the original C code or smaller. If you only implement what you need, sometimes drivers can be less than 100 lines of Rust.
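As a rough illustration of what such a from-scratch driver can look like, here is a sketch of a hypothetical I2C temperature sensor driver, assuming the embedded-hal-async 1.0 traits as a dependency (the sensor and its register layout are made up):

    use embedded_hal_async::i2c::I2c;

    pub struct TempSensor<I2C> {
        i2c: I2C,
        addr: u8,
    }

    impl<I2C: I2c> TempSensor<I2C> {
        pub fn new(i2c: I2C, addr: u8) -> Self {
            Self { i2c, addr }
        }

        // Read a (hypothetical) 16-bit temperature register at 0x00.
        // The driver is generic over any HAL that implements the async
        // I2c trait, which is where the deep integration mentioned above
        // comes from.
        pub async fn read_raw(&mut self) -> Result<u16, I2C::Error> {
            let mut buf = [0u8; 2];
            self.i2c.write_read(self.addr, &[0x00], &mut buf).await?;
            Ok(u16::from_be_bytes(buf))
        }
    }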
That's a scenario I hadn't considered, thanks!
Hmm. You raise a good point. I've been doing a bit of C FFI on std-rust applications, but hadn't thought to apply this to embedded much. I have wrapped CMSIS-DSP (ARM-official DSP toolkit; filters etc) with Rust embedded, and it works out well! Increases compile times, but worth it. Perhaps we should apply this approach more broadly.
One problem: it's tedious going from the pointer-level API bindgen gives you to a high-level Rust API that has references, arrays, etc., in that you have to write some boilerplate for each bit of functionality you want. Not a big deal for a specific application, but not ideal if making a general library. And C libs tend to be sloppy with integer types, which works but is not really idiomatic Rust. Maybe that could be automated with codegen or proc macros?
I believe the ESP-IDF rust lib is mostly FFI (?); maybe that's a good example. We've been re-inventing the wheel re STM-32 and Nordic support.
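The boilerplate being described is usually a thin safe wrapper per entry point, something like this sketch (the C function and its signature are hypothetical, standing in for whatever bindgen emits):

    mod ffi {
        // What a bindgen-style binding roughly looks like: raw pointers
        // and explicit lengths, nothing the borrow checker can reason about.
        extern "C" {
            pub fn dsp_fir_f32(input: *const f32, output: *mut f32, len: u32);
        }
    }

    // The hand-written wrapper restores slices and lifetimes, so the rest
    // of the Rust code never touches raw pointers.
    pub fn fir_f32(input: &[f32], output: &mut [f32]) {
        assert_eq!(input.len(), output.len());
        unsafe {
            ffi::dsp_fir_f32(input.as_ptr(), output.as_mut_ptr(), input.len() as u32);
        }
    }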
> I'm not following: Rust excels at C interop
Zig is an example of excelling at C interop--not Rust.
And Cargo is an impediment in the embedded ecosystem rather than a bonus.
Part of why we're getting Rewrite-it-in-Rust everywhere is precisely because the C interop is sufficiently weak that you can't do things easily in a piecemeal fashion.
And let's not talk about Rust compile times, looking at Rust code in a debugger, and just how bad Rust code is in debug mode ...
I don't have strong opinions in either direction about Zig. But there's clear empirical evidence that Rust has a successful C interop story[1][2][3].
And note: browsers are the pathological case, in terms of build system integrations, global state assumptions, C++, etc.
(Your other complaints have a place, and don't seem unreasonable to me. But they're empirically not impediments to Rust's interop story.)
[1]: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
[2]: https://firefox-source-docs.mozilla.org/build/buildsystem/ru...
[3]: https://www.memorysafety.org/blog/rustls-nginx-compatibility...
https://github.com/avr-rust
https://github.com/esp-rs
https://github.com/rust-embedded/cortex-m
Even the embedded world is slowly changing.
> I know nobody that programs or even thinks about Rust.
That's you. At companies like Microsoft and Google, plenty of people think about and discuss Rust, with some products/features already using Rust.
Well, at Microsoft, plenty of people have thought about and discussed C#, with some products/features using it. Not that it's gone away, but it has not won the hearts and minds of the general (software-developing) public.
What does this have to do with my comment? Your response has no logic in it.
> I know nobody that programs or even thinks about Rust
This is your bias alone. I know tons of people and companies that do. Rust most likely runs on your device.
I think about Rust, I program in Rust. Now you know someone who does those things.
AWS is very heavy on Rust internally for core services.
EC2 (lots of embedded work on servers), IAM, DynamoDB, and parts of S3 all heavily use Rust for quite a few years now already.
We can move really fast with Rust as compared to C, while still saving loads of compute and memory compared to other languages. The biggest issue we've hit is the binary size which matters in embedded world.
Linux has added support for Rust now. I don't think Rust's future supremacy over C is doubtful at this point.
AWS might honestly be the biggest on Rust out of all the FAANGs based on what I've heard too. We employ loads of Rust core developers (incl. Niko, who is a Sr PE here) and have great internal Rust support at this point :). People still use the JVM where performance doesn't matter, but anywhere performance matters, I don't see anyone being OK'd to use C over Rust internally at this point.
I'm from embedded too. We tried to use Rust in one of our projects and came to the conclusion that it makes no sense to convert our team from experienced C++ developers into beginner-level Rust developers. Additionally, it was nearly impossible to control the amount of packages that come in with Cargo. For a small tool, we had three versions of the same library as dependencies in our binary.
On top of that, part of the team didn't have fun writing code in Rust.
We trashed the whole tool, which was a massive loss of time for the project.
You can absolutely use rustc without Cargo and there's frankly plenty of arguments for doing so in the context of embedded.
Any recommended pointers to examples of how to use rustc without cargo
I dislike the tone of the evangelism and the anti-C attitude but I'm not anti-rust. I purchased a computer with an oversized amount or RAM in part so I could experiment with rust. But determining how to write, edit and compile small programs, from the ground up, without cargo appears exceedingly difficult, and feels like going against the tide
It stands to reason that the embedded programmer commenting was unable to determine how to avoid using cargo and pulling in unnecessary dependencies. Otherwise he would not have encountered this problem
Definitely going against the tide; but possible
e.g. Chrome & Fuchsia both build included Rust bits using their existing build system.
Bazel and Buck2 both work well with it, relatively.
One can also just be really disciplined with Cargo and not add superfluous deps and be careful about the ones you do include to monitor their transitive dependencies.
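For what it's worth, a minimal sketch of the no-Cargo workflow; the rustc flags shown in the comments are standard, while the file and crate names are made up:

    // hello.rs -- build directly with rustc, no Cargo involved:
    //
    //     rustc --edition 2021 -O hello.rs -o hello
    //
    // A dependency can still be used without Cargo by compiling it to an
    // rlib first and passing it explicitly:
    //
    //     rustc --edition 2021 --crate-type rlib foo.rs      (produces libfoo.rlib)
    //     rustc --edition 2021 --extern foo=libfoo.rlib hello.rs
    fn main() {
        println!("built without cargo");
    }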
IMHO this is more about crates.io than Cargo, and is the biggest weakness of the language community. A bulk of developers unfortunately I think come from an NPM-using background and so aren't philosophically ... attuned... to see the problem here.
Large parts of the tooling depend on Cargo; for example, the LSP server (rust-analyzer), etc.
Once you take out Cargo, Rust's development environment becomes quite poor.
This is also just a matter of work, for example, the Buck folks have put in the work so that rust-analyzer works well with projects based on it.
This 'the language requires an ecosystem' attitude is something that I really dislike. The linker is where it should come together, not in your choices about importing external dependencies, module packaging, source code hosting options, debugger and so on.
Unfortunately, that is the way rust set up their toolchain. In many ways, and for most if not all ecosystems, I do agree with you.
This isn't promotion. There is a feature. It is written. They have decided to ship it.
Secondly the argument that because you don't use it in your area no one should use it in OS development is nonsensical.
> I think this is the wrong way to promote rust
This is entirely the wrong lens. This is someone who wants to use Rust for a particular purpose, not some sort of publicity stunt.
> I know nobody that programs or even thinks about Rust. I'm from the embedded world and there C is still king.
Now’s a good time to look outside of your bubble instead of pretending that your bubble is the world.
> as long as the real money is made in C it is not ready
Arguably, the real money is made in JavaScript and Python for the last decade. Embedded roles generally have fewer postings with lower pay than webdev. Until C catches back up, is it also not ready?
> This is entirely the wrong lens.
Telling people they need to take their ball and go home if they're incapable or unable to maintain an entire compiler back-end seems like a, shall we say, 'interesting' lens for a major distro such as Debian.
Just to parse some files?
Just to parse some files that there are already tools and libraries for, for added security, without specifying a threat model?
I program mostly Python, C, C++, Javascript and Rust. Including on embedded (that goes for C, C++ and Rust)
Most people nowadays who criticize Rust do so on a cultural basis of "there are people who want this so and it changes things therefore it is bad". But never on the merits.
Rust is a good language that contains in its language design some of the lessons the best C programmers have internalized. If you are a stellar C programmer you will manually enforce a lot of the same rules that Rust enforces automatically. That doesn't mean Rust is a cage. You can always opt for unsafe if you feel like it.
But I know if my life depended on it I would rather write that program in Rust than in C, especially if it involves concurrency or multiprocessing.
Practically, on embedded the issue is that most existing libraries are written in C or C++. That can be a reason not to choose it in daily life. But it is not a rational reason for which a programming language sucks. Every programming language once had only one user. Every programming language once had no dependencies written in it. Rust is excellent at letting you combine it with other languages. The tooling is good. The compiler error messages made other languages realize how shitty their errors were.
Even if nobody programmed in Rust, the good bits of that language lift the quality in the other languages.
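A tiny sketch of what "rules good C programmers enforce by hand" means in practice, e.g. never keep a pointer into a buffer across an operation that may reallocate it; Rust rejects the pattern mechanically:

    fn main() {
        let mut buf = vec![1, 2, 3];
        let first = &buf[0]; // shared borrow into the buffer
        // buf.push(4);      // rejected if uncommented: cannot borrow `buf`
        //                   // as mutable while `first` is still alive; the
        //                   // same pattern is a silent use-after-realloc
        //                   // bug in C.
        println!("{first}");
    }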
> a cultural basis of "there are people who want this so and it changes things therefore it is bad". But never on the merits.
In this mindset, arguing against change is an argument on the merits. Because everything you spend time on has the opportunity cost of everything else you could spend time on.
Yes sure, but you've got to engage with the arguments on the other side as well. The argument on the other side is that a rewrite is worth it because it prevents entire classes of memory-safety bugs that to this day still lead the exploitable CVE lists.
We could now pretend their position is: "Oh, we got this shiny new language that magically makes everything 100% safe and thus we need to rewrite everything." But that is not the position. Most of them are aware that a rewrite is always a trade-off. You could reintroduce old bugs etc.
As I said, I program in languages on both sides of the divide, and if I had to write and maintain secure software that my life depended on, I would certainly prefer to write it in Rust. Memory safety would be just a tiny part of that. The other factors would be the strict type system (it can be used to enforce certain guarantees that contributors cannot easily fuck up) and the tooling (the built-in testing is stellar).
The future of tooling is going to be written in the language people of the coming generations like to write. There was a time when C++ or even C was the new shiny thing. Why can't we just write all software in assembly like in the good old days? Because there were some actual tangible benefits to doing it in C and that's the language people with the ability of doing the job chose.
I am not saying a Rust rewrite makes sense in every case, but if you check the past decade of CVEs on a project and half of them would have been prevented by the vanilla Rust compiler maybe that's the rational thing?
> but as long as the real money is made in C it is not ready
People selling slop does not imply much about anything other than the people making the slop
when rust passes MISRA (or has its own MISRA rules), that may change. Call that a nice goal.
(I similarly have yet to see a single convincing argument to try to fight past the awkward, verbose and frustrating language that is rust).
Never tried to port LLVM. Is 6 months a reasonable timeframe to bring LLVM to a new architecture to production quality?
Can you use Rust without LLVM by using the Cranelift backend?
I meant that the email literally asks the fellow developer to either finish the Rust port or sunset the Debian port in 6 months.
I am asking if the former option is a practical one
I believe m68k already has a working Rust compiler of sorts, though it's not part of the default Rust toolchain. I think shaping that fork into something that will let it run and compile like normal is feasible.
For other architectures currently unsupported by Rust, I doubt it'll happen. The CPU architectures themselves are long dead and often only used for industrial applications, so the probability of hobbyists getting their hands on them is pretty slim.
People still using these old architectures for anything but enthusiast hacking will probably not be using Debian Trixie, and if they do, they can probably find a workaround. It's not like the .deb format itself is changing, so old versions of apt and dpkg will keep working for quite a while.
In that case, the "6 months" deadline for non-m64k is just a false option.
I would consider that passive-aggressive, or an insult.
I'm sure if any of the large corporations depending on legacy hardware would get together and pay people to make the necessary forks, 6 months would be feasible. Practically, they won't, though.
I see the deadline more as a "expect breakages in weird unofficial Debian downstreams that were never supported in the first place" or "ask your weird Debian downstream maintainer if this is going to cause problems now". It's not that Debian is banning unofficial downstreams or semi-proprietary forks, but it's not going to let itself be limited by them either.
And who knows, maybe there are weird Debian downstreams that I don't know of that do have a working Rust compiler. Projects like Raspbian are probably already set but Debian forks for specific boards may need to tweak a few compiler settings to make compilers emit the right instructions for their ARM/MIPS CPUs to work.
I only find the message passive-aggressive or insulting if you're of the opinion you're entitled to Debian never releasing software that doesn't work on the Commodore64.
How much bigger will this make an embedded Debian Linux image?
Probably not much larger at all, because the image doesn't need to contain the Rust toolchain.
I don't know if the rust compiler produces bigger binaries, but for a single program, it'll not make a big difference.
IMHO, Rust is not mature until it decides on a stable ABI and supports non-static linking, and is therefore able to produce dynamically linked binaries.
They’d be better off just compiling the package manager with Fil-C
No changes required. Bringing up the fil-C toolchain on weird ports is probably less work than bringing up the Rust toolchain
Fil-C is amazing but is much more problematic than Rust at this point since it only supports amd64 at this time and is maintained by a single genius.
It also doesn't help you to attract new contributors. With the changes we made over in Ubuntu to switch to rust-coreutils and sudo-rs, we have seen an incredible uptake in community contributions amongst other things, and it's very interesting to me to try to push APT more into the community space.
At this time, most of the work on APT is done by me staying awake late, or during weekends and my 2-week Christmas break; the second largest chunk is the work I do during working hours, but that's the less cool and exciting stuff :D
Adding Rust into APT is one aspect; the other, possibly even more pressing need is rewriting all the APT documentation.
Currently the APT manual pages are split into apt-get and apt-cache and so on, with a summary in apt(8) - we should split them across apt install(8), apt upgrade(8) and so on. At the same time, DocBook XML is not very attractive to contributors, and switching to reStructuredText with Sphinx hopefully attracts more people to contribute to it.
> since it only supports amd64
Sorry to double-reply, but this is actually a super important point in favor of Fil-C.
If you adopted Fil-C for apt, then you could adopt it optionally - only on ports that had a Fil-C compiler. Your apt code would work just as well in Fil-C as in Yolo-C. It's not hard to do that. I think about half the software I "ported" to Fil-C worked out of the box, and in those cases where I had to make changes, they're the sort of changes you could upstream and maintain the software for both Fil-C and Yolo-C.
So, with Fil-C, there would be no need to ruffle feathers by telling port maintainers to support a new toolchain!
As far as I understand, Fil-C changes the ABI of the system, therefore it requires a new architecture in Debian terminology, e.g. amd64fil. And then you'd need to use multi-arch to pull in amd64fil binaries where that works.
We'll have to see how this plays out but it's not super plug and play.
Exactly something like that, yeah.
Some notes about that here: https://cr.yp.to/2025/fil-c.html
> since it only supports amd64 at this time and is maintained by a single genius.
That's easily fixable.
> It also doesn't help you to attract new contributors.
I don't understand this point.
> > since it only supports amd64 at this time and is maintained by a single genius.
> That's easily fixable.
as easily as fixing Rust to work on the remaining 4 architectures?
> > It also doesn't help you to attract new contributors.
> I don't understand this point.
C++ doesn't attract a lot of developers, Rust attracts many more. I want more community, particularly _young_ community. I don't wanna work on this alone all the time :D
> as easily as fixing Rust to work on the remaining 4 architectures?
Easier, because you won't have to port Fil-C to all of the architectures in order to use it on amd64.
> C++ doesn't attract a lot of developers, Rust attracts many more.
C is #2 on TIOBE.
C++ is #3 on TIOBE.
Rust is #16 on TIOBE.
So I don't know what you're talking about
he just said from experience that switching projects to rust got them many new contributors.
i guess it's cool for c(++) to have nice tiobe rankings but if they're not contributing how is that relevant?
He got a lot of contributors because those contributors wanted to participate in a rewrite. I.e. the opportunity to "move fast and break things". Not exactly the kind of contributions you should be looking for in a package manager that so many of us rely on.
If he was asking for C/C++ contributors, he'd be asking for help maintaining a mature project. That's less fun. It's mature, grown-up work for serious people. Those serious people probably already have serious jobs. So, fewer people will show up.
Focus on a language that isn't a moving target, sir.
And this argument about "young" contributors is the same nonsense that came from your senior management. But you're independent.
Aren't the experienced engineers supposed to be leading the next generation? If you really want to get the young folks on board, drop Ubuntu and call it Gyatt. Instead of LTS, call it Rizz. Just think of all the young who will want to work on Skibidi 26.04!
Rust attracts hype and hype artists. Ask me how I know. Do you want drive-by people or do you want long-term community members? There are many young folk interested in learning C and looking for adequate mentorship along with a project to work on. Wouldn't that be a better use of energy? Have you even put out any outreach to attract others to these projects where you say you're alone?
You are making a mistake and falling on the sword for your bosses at the same time. Tough days are here but maybe hold on for better employment than this.
I agree with this. Fil-C is really impressive and Rust can panic, too (if panics/aborts are a concern).
> It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.
... This is Debian we're talking about here?
... What distros are recommended for those who intend to continue trying to squeeze utility out of "retro computing devices"?
... And what sort of minimum specifications are we talking about, here?
I guess this is fine if you are using the rust standard library, I just don't want to see cargo pulling 500 packages to build this deb parsing code.
I don't care whether kernel developers want to use C or Rust or whatever. I judge the quality by using it in production. If it works well then I don't care how it is built.
How can you judge the security qualities of software by using it in production? You're surely not using it in the way someone looking for exploits would use it.
Or I guess if you interpret this as a societal scale: we've collectively used C in production a lot, and look at all the security problems. Judgment completed. Quality is low.
I'm pretty sure many other companies are going to use it in production before mine does. I'll just ask around...
Can we please also have the hard requirement that code should run without warnings under Valgrind?
Because that saves a lot of headaches down the line.
It's certainly what we aim for in APT. We do have an override of course, since we need to copy uninitialized data around: the cache file is allocated as a whole and written at the end, and not all parts of it are used, but copying it triggers warnings.
Introducing complex code to copy only the parts that are actually reachable would be silly and would introduce bugs.
But keep in mind valgrind is super buggy and we spend quite a bunch of time working around valgrind false positives (outside of amd64)
TBH most “false positives” that I investigate are wishful thinking or the result of ignorance of what is really happening. It looks like you are using Debian. That probably doesn’t help. Here is a typical Debian “bug” report:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=802778
10 years old. It never was a false positive. It was fixed a good few years ago. The fix did not involve suppressing the error.
Valgrind does need a lot of work, especially for missing CPU features and for Darwin. I’m not aware of many memcheck bugs that aren’t relatively obscure corner cases.
If you have encountered bugs please report them to https://bugs.kde.org.
Like the last one was/is the inability to comprehend safety of large buffers on ppc64el because the stack clash protector code generated by gcc isn't understood. The one before that was more problems of that sort on armhf where it also didn't understand the clash protector - in more cases.
It's quite surprising and it takes days to weeks to debug each of these, going down to the assembler level and verifying that by hand.
Would be good for memory safety and wouldn't need a rewrite of all software in a hyped language. No, we shouldn't do that ;)
Motivated people seem to prefer rewriting using a 13-year-old programming language. Crazy.
In the end, only NetBSD will be standing in the breach after anything not x64 or ARMv8+ is declared "retro computing".
The language is incredibly frank, and I agree with it completely. The retro-computing hobby doesn't need the ability to run contemporary operating systems.
It's insane that x86 Debian is still compiling all software targeting Pentium Pro (from 1995!).
x64 Debian is a bit more modern, and you must splurge for a CPU from 2005 (Prescott) to get the plethora of features it requires
> It's insane that x86 Debian is still compiling all software targeting Pentium Pro (from 1995!).
Note that Debian no longer supports x86 as of Debian 13.
> It's insane that x86 Debian is still compiling all software targeting Pentium Pro (from 1995!).
Debian 13 raised the x86 requirement to Pentium 4 because LLVM required SSE2 and Rust required LLVM.
The target before was not Pentium Pro in my understanding. It was Pentium Pro equivalent embedded CPUs. Servers and desktops since 2005 could use x86-64 Debian.
Is it just the "retro-computing hobby"? There could still be businesses who might need support for old machines, especially in developing countries. I don't know the actual situation though, I'm open to the idea that my suggestion is insane.
No, it’s a valid question, and one that I’m sure will get some answers in the coming days and weeks as the discussion on adding this requirement continues, but in some sense, it’s beside the point.
The cost of supporting this old hardware for businesses or hobbyists isn’t free. The parties that feel strongly that new software continue to be released supporting a particular platform have options here, ranging from getting support for those architectures in LLVM and Rust, pushing GCC frontends for rust forward, maintaining their own fork of apt, etc.
It's much more common to find businesses running on very old hardware in developed countries, not in developing ones. Developing nations basically didn't use computers 20-30 years ago, there's no random remnants from that era beyond some extreme tail end. And, given how the PC & server market evolved in the 2000s and 2010s, it was cheaper to buy a then-current x86 than to import some ancient Alpha system from wherever. Especially so since software licenses didn't really exist in those days in developing countries - even government institutions often ran pirated software without a second thought.
Are you speaking from experience?
There's a non-negligible amount of "handed-down" refurbished hardware from developed to developing countries. PCs and servers that are already 5+ years old and out of the market at installation.
Those businesses usually have very purpose-built machines, like ultrasound (USG) machines running Windows XP in hospitals. You normally don't touch them, just isolate them. You definitely don't update them to the latest OS version.
These architectures were for high-end machines at the time; universities and government agencies would indeed have been the only ones buying these, and they probably migrated to commodity hardware long ago. If migrating to a PC-compatible architecture had been off the table for them, so would have been keeping these things running after their EOL.
(In my second-tier university in my developing country, the Sun workstation hadn't been turned on in years by the late 2000s, and the minicomputer they bought in the 1980s was furniture at the school.)
Edit: As for big businesses, they have support plans from IBM or HP for their mainframes, nothing relevant to Debian.
It is not retro-computing. New 32-bit x86 CPUs are produced, sold, and used today.
See a (relatively recent) list of manufacturers here:
https://en.wikipedia.org/wiki/List_of_x86_manufacturers
and scroll down for other categories of x86 chip manufacturers. These have plenty of uses. Maybe in another 30 years' time they will mostly be a hobby, but we are very far from that time.
Which ones of those don’t have MMX / SSE? Debian is not enabling any x86 instruction extensions that didn’t ship on the Pentium Pro
This is not accurate, the Trixie 32-bit release requires a Pentium 4.
Isn’t this the specific reason things like the raspberry pi were developed to solve?
If I was a blackhat, I'd surely value all pre-IntelME / AMD PSP *retro-computing* hardware that is still capable of running more or less modern software without a built-in hardware backdoor higher than its weight in gold.
> targeting Pentium Pro (from 1995!).
BTW, today is the Pentium Pro's 30th anniversary.
>retro-computing hobby doesn't need the ability to run contemporary operating systems
why not? I still want to run modern software on older machines for security and feature reasons
Wow, those are exactly the same targets I use for releasing x86 and x64 (Windows) builds, but even I think it's a little over the top for Debian to support Pentium Pro.
We're really talking about alpha, hppa, m68k and sh4
I'll first say that 32-bit CPUs, including x86-based ones, are not retro computing. They still carry the load of all sorts of important computing systems, today. They are still being produced (IIANM, also by Intel and AMD). Sure, with much more limited use cases, and it's definitely not the mainstream, but it's there. Not a hobby and not for a 'retro' experience.
But you are also completely ignoring limited-capabilities hardware, like embedded systems and micro-controllers. That includes newer offerings from ST Microelectronics, Espressif, Microchip Technology etc. (and even renewed 'oldies' like eZ80's which are compatible with Zilog's 8-bit Z80 from the 1970s - still used in products sold to consumers today). The larger ones are quite capable pieces of hardware, and I would not be surprised if some of them use Debian-based OS distributions.
32-bit might not be, but i686 definitely is. Last pure i686 chips without MMX shipped sometime around 1998-1999
> This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
By Sequoia, are they talking about replacing GnuPG with https://sequoia-pgp.org/ for signature verification?
I really hope they don't replace the audited and battle-tested GnuPG parts with some new-fangled project like that just because it is written in "memory-safe" rust.
Sequoia-PGP is 8 years old at this point, their 1.0 happened half a decade ago.
Meanwhile, GnuPG is well regarded for its code maturity. But it is a C codebase with nearly no tests, no CI pipeline(!!), an architecture that is basically a state machine with side effects, and over 200 flags. In my experience, only people who haven't experienced the codebase speak positively of it.
It's rather that GnuPG is ill-regarded for its code immaturity tbh. You don't even need to read the code base, just try to use it in a script:
It exits 0 when the verification failed, it exits 1 when it passed, and you have to ignore it all and parse the output of the status fd to find the truth.
It provides options to enforce various algorithmic constraints but they only work in some modes and are silently ignored in others.
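A sketch of the scripting dance being described, here via std::process; gpgv and --status-fd are real, while the "good enough" check is deliberately naive:

    use std::process::Command;

    // Verify a detached signature the way scripts end up doing it: ignore
    // the exit code's subtleties and parse the machine-readable status
    // lines requested with --status-fd instead.
    fn signature_looks_good(sig: &str, data: &str) -> std::io::Result<bool> {
        let out = Command::new("gpgv")
            .args(["--status-fd", "1", sig, data])
            .output()?;
        let status = String::from_utf8_lossy(&out.stdout);
        // GOODSIG alone is not sufficient in general (expired keys, weak
        // digests, ...), which is exactly the complaint above.
        Ok(status.lines().any(|l| l.starts_with("[GNUPG:] GOODSIG")))
    }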
GnuPG has protected Snowden and he speaks positively of it.
Does Sequoia-PGP have similar credentials and who funds it?
We have replaced GnuPG by Sequoia in the previous Debian release.
When was GnuPG audited and by whom?
Or did Rust just raise its Series A?
What are the remaining use cases for GnuPG that aren't done better by specialized tools?
The follow-up is solid gold:
Ref: https://lists.debian.org/debian-devel/2025/10/msg00286.html
Definitely the adult in the room.
>It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.
Loved this statement on the state of modern software using the backbone of C (in linux and elsewhere)
Why do these matters often become such personal and ideological debates?
The writer Gwern has a good essay attempting to answer this question: https://gwern.net/holy-war
My main objection to Rust is how ugly it looks. Why did they have to change things such as how types and functions are defined? I really hate keywords such as def, fn, and other "explicit" function declarations. Also all the :: and <> from C++. Language-wise Java and C# did a much better job at introducing the features they needed without breaking the readability and familiarity of C.
The "spiral" type declaration syntax from C is hard to parse, both for humans and machines. That's probably why even C++ is moving away from it:
It's easy to criticize simple examples like the one above, since the C++ (or Rust) version is longer than the C declaration, but consider a more involved declaration, say a container of callbacks, and its idiomatic Rust equivalent. The latter can be parsed quite trivially by descending into the type declaration. It's also visible at a glance that the top-level type is a Vec, and you can easily spot the lambda and its signature. Another ergonomic aspect of the Rust syntax is that you can easily copy the raw type without the variable name, while the standalone C type is quite a mess to untangle ;)
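For illustration, a sketch of the kind of comparison being made, with an array of callbacks in C versus a Vec of boxed closures in Rust (the specific declarations are made up):

    // C (shown as a comment): an array of three pointers to functions
    // taking a const char* and returning size_t; the variable name is
    // buried in the middle of the declaration:
    //
    //     size_t (*callbacks[3])(const char *);
    //
    // Rust: the type reads left to right, the top-level container (Vec)
    // comes first, and the closure signature is visible at a glance.
    fn main() {
        let callbacks: Vec<Box<dyn Fn(&str) -> usize>> = vec![
            Box::new(|s: &str| s.len()),
            Box::new(|s: &str| s.chars().count()),
        ];

        // The type can also be copied out on its own, without the
        // variable name: Vec<Box<dyn Fn(&str) -> usize>>
        for cb in &callbacks {
            println!("{}", cb("hello"));
        }
    }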
I can't deny that "?" is more ergonomic than Rust's "Option<T>", but C# has also a way less expressive type system than Rust or C++, so pick your poison.Sounds like business as usual.
That seems like a bad idea to me: Dependencies will be added, for very basic system utilities, on (parts of) a software ecosystem which is still a "moving target", not standardized, and IIANM itself has further dependencies. I wonder whether platform compatibility won't be jeopardized, either.
I would be worried if even C++ dependencies were added for basic system utilities, let alone something like Rust.
Now, granted, I'm not an expert on distro management, bootstrapping etc. so maybe I'm over-reacting, but I am definitely experiencing some fear, uncertainty and doubt here. :-(
> Dependencies will be added, for very basic system utilities, on (parts of) a software ecosystem which is still a "moving target", not standardized,
This is the status quo and always has been. gcc has plenty of extensions that are not part of a language standard that are used in core tools. Perl has never had a standard and is used all over the place.
If you're designing an OS distribution, you would have your base system written adhering strictly to language standards and without relying on flaky extensions (not that GCC C extensions are flaky; I'm guessing most or all of them have been stable since the 1990s), and minimizing reliance on additional tools.
For example, IIUC, you can build a perl interpreter using a C compiler and GNU Make. And if you can't - GCC is quite bootstrappable; see here for the x86 / x86_64 procedure:
https://stackoverflow.com/a/65708958/1593077
and you can get into that on other platforms anywhere along the bootstrapping chain. And then you can again easily build perl; see:
https://codereflections.com/2023/12/24/bootstrapping-perl-wi...
It feels like perhaps you've conflated the issue in this thread, which is about using Rust in apt, which is much, much later in the distribution bring-up than this bootstrapping, with using Rust in something like the Linux kernel, which is more relevant to the types of bootstrapping discussions you posted.
apt is so late in the process that these bootstrapping discussions aren’t quite so relevant. My point was that at the same layer of the OS, there are many, many components that don't meet the same criteria posted, including perl.
The procedure to produce GCC you cited was 13 steps. Many of the tools were made after distributions required GCC. And a similar procedure could produce a Rust compiler.
Keep in mind APT is using the GNU dialect of C++, particularly C++ 17, shortly C++ 23. And it always exploits the latest C++ features while at the same time still having workarounds in the code for a lack of namespaces in C++ compilers prior to the C++ standardization...
Before apt, the main user interface for dpkg was dselect, which was written in C++
But doesn't this actually strengthen my point? Debian transitioned from a tool written in a more demanding language to a tool written in a less-demanding one.
> It's important for the project as whole to be able to
> move forward and rely on modern tools and technologies
> and not be held back by trying to shoehorn modern software
> on retro computing devices.
Rust is the present and the future and it's quite logical that it becomes a key requirement in Linux distributions, but I'm really not convinced by the wording here… This last sentence feels needlessly antagonistic.
I suspect if this mailing list post doesn't go too under the radar, that last sentence will be a source of major regret.
And if it is it absolutely is indicative of their opinion and absolutely deserved.
Feels accurate to me. He's clearly anticipating the "but how will I run Debian on my PDP-11??" naysayers that always try to derail things.
Right. I do have some nostalgia for installing Linux on a brand new PC which had less total RAM than my computer today has cache, but we need to be clear eyed about what makes sense for a maintained piece of software. I also have feelings about steam trains, but burning coal is not a sensible way to power a train in 2025.
A nostalgia-fuelled Linux distro, maybe using a deliberately slimmed-down or retro kernel and chosen software, could make a lot more sense than to keep trying to squeeze Debian onto hardware that was already obsolete at the turn of the century while also promoting Debian as a viable choice for a brand-new laptop.
> I also have feelings about steam trains, but burning coal is not a sensible way to power a train in 2025.
Solved problem:
United States Patent Application 3127321 Date of Patent March 31, 1964 NUCLEAR REACTOR FOR A RAILWAY VEHICLE
Very Fallout
But the message is not just "we will require Rust and here's the timeline (and some things may break)", it's also "I know what I'm going to break and it's your problem to fix it" (via the CC line) and ends in a passive-aggressive way (the phrase "Thank you for your understanding." is basically a polite way of saying "and fuck you too."). This gets people off-side, and makes it more likely they're going to throw up barriers rather than either do what you ask or find some other way to collaborate/achieve the ideal outcome.
Or those annoying nagging "well, what if I don't have an X86_64 CPU that was made in the last five years?", to which obviously our response should be: "get different hardware LOL, closedwontfix"
Is your point that rust doesn't run on a computer built in 2020?
No, but that if it did not, I am not so sure that would even be seen as a problem.
Oh, that sounds like a personal ignorance problem. You are in the right place to be educated though!
Feel free to enlighten me. Nothing I have seen thus far indicates anything to the contrary.
Spend more effort on looking before posting then!
I'm not gonna waste effort on a fool's errand. So far, neither claim has been backed up whatsoever.
It's obviously more likely it's just fans of the language with a knee-jerk reaction of "ackshully you're totally definetely wrong, but uh... don't ask me how, you just are" than legitimate talking points.
stay ignant then, keep making stupid claims and asking people to correct them lol
No, supporting 5 year old mainstream hardware is a very reasonable thing to do. Supporting 20 year old hardware that barely anyone used even when it was new is not.
Indeed. Four targets are identified as potentially affected:
alpha, hppa, m68k and sh4
To be fair, lots of people did use Motorola 68xxx CPUs when those were new, it's just that it was 40+ years ago in products like the Commodore Amiga. The SH4 is most popularly connected to the Dreamcast, Sega's video game console from back when Sega made video game consoles.
The Alpha and PA-RISC were seen in relatively recent and more conventional hardware, but in much tinier numbers. And when I say relatively recent, I mean early this century: these are not products anybody bought five years ago, and when they were on sale they were niche products for a niche which, in practical terms, was eaten by Microsoft.
Alpha and PA-RISC were the predecessors of Itanium, fwiw, if you want to feel old in an easy-to-understand way.
Has Linux dropped support for older x86 CPUs?
Yeah, in... late 2012 https://lkml.org/lkml/2012/12/12/292
Rust hasn't either AFAIK.
From the mailing list on this: https://lists.debian.org/debian-devel/2025/10/msg00288.html :
[ The rustc book > Platform Support: https://doc.rust-lang.org/beta/rustc/platform-support.html ][ The rustc book > Target Tier Policy: https://doc.rust-lang.org/beta/rustc/target-tier-policy.html... ]
Create a plan to add support for {alpha, hppa, m68k, sh4} targets to the Rust compiler - 2.5pro: "Rust Compiler Target Porting Plan" https://gemini.google.com/share/b36065507d9d :
> [ rustc_codegen_gcc, libcore atomics for each target (m68k does not have support for 64-bit atomics and will need patching to libgcc helper functions), ..., libc, liballoc and libstd (fix std::thread, std::fs, std::net, std::sync), and then compiletest will find thousands of bugs ]
So, CI build hours on those ISAs, emulated at first and eventually on actual hardware?
"Google porting all internal workloads to ARM, with help from GenAI" (2025) https://news.ycombinator.com/item?id=45691519
"AI-Driven Software Porting to RISC-V" (2025) https://news.ycombinator.com/item?id=45315314
"The Unreasonable Effectiveness of Fuzzing for Porting Programs" (2025) https://news.ycombinator.com/item?id=44311241 :
> A simple strategy of having LLMs write fuzz tests and build up a port in topological order seems effective at automating porting from C to Rust.
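To make the 64-bit-atomics point from the quoted porting plan concrete, here is a minimal, hypothetical Rust sketch (not actual libcore code) of the kind of conditional support a port ends up providing; target_has_atomic is a real cfg predicate, everything else is illustrative:

    use std::sync::atomic::{AtomicU32, Ordering};
    #[cfg(target_has_atomic = "64")]
    use std::sync::atomic::AtomicU64;

    // On targets with native 64-bit atomics, count in an AtomicU64.
    #[cfg(target_has_atomic = "64")]
    static COUNTER: AtomicU64 = AtomicU64::new(0);

    // On targets without them (the plan flags m68k), fall back to a
    // 32-bit counter here; a real port would instead route through
    // compiler-provided helper routines such as those in libgcc.
    #[cfg(not(target_has_atomic = "64"))]
    static COUNTER: AtomicU32 = AtomicU32::new(0);

    fn bump() -> u64 {
        COUNTER.fetch_add(1, Ordering::Relaxed) as u64
    }

    fn main() {
        println!("counter was {}", bump());
    }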
Rust is a great language for devs. They love it and how developer-centric everything about it is.
But for end users on Debian, trying to compile Rust stuff is a nightmare. They do breaking changes in the compiler (rustc) every 3 months. This is not a joke or exaggeration. It's entirely inappropriate to use such a rapidly changing language in anything that matters, because users on a non-rolling distro, LIKE DEBIAN, will NOT be able to compile software written for its constantly moving bleeding edge.
This is an anti-user move to ease developer experience. Very par for the course for modern software.
> They do breaking changes in the compiler (rustc) every 3 months. This is not a joke or exaggeration.
That is, in fact, a gross exaggeration. Breaking changes to rustc are extremely rare.
Rust has stability guarantees. As long as you don't bump the edition of your binary/library, new compiler versions can still compile it, and you can even link crates from different editions together, so I really don't understand this argument. C++ modules had a proposal for something similar but it did not pass; it would also have fixed so many of the horrible backward-compatibility constraints that haunt the language as a whole.
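As a minimal sketch of what the edition mechanism means in practice (assuming a crate built with edition 2015, e.g. rustc --edition 2015 main.rs), code written against an older edition keeps compiling on new toolchains:

    // `async` only became a keyword in the 2018 edition; in a 2015-edition
    // crate it is still an ordinary identifier, and crates on different
    // editions can be linked together in the same build.
    fn async(n: u32) -> u32 {
        n + 1
    }

    fn main() {
        println!("{}", async(41));
    }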
Since when do Debian users need to compile APT themselves?
Debian still has its policies which means your use-case shouldn't be affected by this.
The rustc version will be fixed for compatibility at every release and all Rust dependencies must be ported to apt's.
In the debian context, the burden imposed by rust churn and "cargo hell" falls on debian package maintainers.
I wouldn't see it that way.
First, Debian is not a distro where users have to compile their software. The packages contain binaries, the compilation is already done. The instability of Rust would not affect users in any way.
And second, as a developer, I have never worked with a more unpleasant language than Rust. The borrow checker back then was abysmal. Rust is not about developer happiness - Ruby is - but its memory safety makes it a useful option in specific situations. But you can be sure that many developers will avoid it like the plague - and together with the breakage and long compile times, that's probably why moves like the one dictated here are so controversial.
> The instability of Rust would not affect users in any way.
Sure it would. Suppose a rust-based package has a security bug. Upstream has fixed it, but that fix depends on some new rust language feature that the frozen version of rust in Debian doesn't have yet.
Then the responsible Debian maintainer would backport that fix, as they have done in other languages for decades. Really, that's not user facing. It's a possible hassle for the maintainers and developers, which might be bad enough, but not a problem for users.
rustc stable is continually updated yes. But surely any given release of debian targets a specific version of the toolchain. What's the issue?
[dead]
[dead]
Yes, let’s introduce a hard dependency on a language which has no specification, only one compiler and supports a pitiful number of architectures. That’s what true progress looks like.
Ferrous Systems donated their language specification ("Ferrocene") to the Rust foundation[0] who is working on integrating it but that takes time, obviously.
0: https://rustfoundation.org/media/ferrous-systems-donates-fer...
Their spec just happens to describe whatever the compiler decided to implement, it's not a source of truth.
How do you think that C got specified?
Do you think that it was made up from whole cloth in the abstract machine and implemented later? No, it was based on the available implementations of its time.
On top of that, languages like Python do not have a specification and yet have multiple implementations.
That's literally what the C++ specification does. The reason for all the UB and implementation defined stuff is because 90s compilers disagreed on sizeof(int).
I don’t think it’s a fair argument to criticize a spec because it’s derived from an implementation. If anything it shows how the spec can be practically applied.
And? What did you think it was going to be? An abstract description that completely ignores the existing compiler?
I don’t think this is a good argument, given that there are manifestly successful languages that don’t have a formal specification. Python comes to mind.
(Plus, architecture quantity isn’t exactly the thing that matters. Quality is what matters, and Rust’s decision to conservatively stabilize on the subset of LLVM backends they can reliably test on seems very reasonable to me.)
How has a language specification and multiple viable compilers helped C developers write security-critical code?
Considering the number of provers and static analysers, and given that C is the only mainstream language with a formally verified compiler, I would say fairly well, thank you.
Honestly, I am not even opposed to Rust. It has cool ideas. I do think it should care a lot more about being portable and properly defined, and should have done so a lot earlier, and I deeply disagree with the opinion of some core team members that a specification is meaningless.
C obviously always was a questionable choice for a tool like apt but Rust seems even worse to me. Apt has absolutely no need to be written in a low level language. At least you could argue that C was chosen because it’s portable but I don’t see what Rust has going for it.
All those things didn’t stop Heartbleed or numerous other CVEs from happening, though, did it?
Doesn't Java also have a formally verified compiler?
Perl has been a hard dependency of the base system since forever and it doesn't have a specification nor more than one interpreter.
The only spec you need to meet to get Perl or Python running on a new platform is the C spec. A compiler meeting that spec is enough to port and compile the interpreter for a new platform.
A large share of C programs also (accidentally) depend on holes in the specification, aka undefined behavior, putting them at the mercy of the compiler to remain consistent.
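For contrast, a small Rust sketch of the classic example of such a hole: signed integer overflow is undefined behavior in C, but has a defined, observable outcome in Rust, so the compiler cannot silently reinterpret it:

    fn main() {
        let x: i32 = i32::MAX;

        // Explicitly wrapping arithmetic: a defined two's-complement result.
        assert_eq!(x.wrapping_add(1), i32::MIN);

        // Checked arithmetic: overflow becomes a None you can handle.
        assert_eq!(x.checked_add(1), None);

        // A plain `x + 1` would panic in a debug build (and wrap, by a
        // documented rule, in release builds) rather than being undefined.
    }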
Because there's so many PIC, MIP, and PS/2 systems out there running debian?
The war is over. ARM and x86 won.
RISC-V is well on its way to be a challenger to both, with (eventually) full support for Rust.
I'll be happy if it does, but I very much have a "I'll believe it when I see it" view of new archs like RISC-V.
The complaint about architecture support is somewhat valid (though: the world largely runs on a small handful of architectures). The other complaints are completely bogus, and repeatedly bringing them up just suggests bad faith.
Can you please make your substantive points thoughtfully instead of posting in the flamewar style? We're trying for the one and not the other here: https://news.ycombinator.com/newsguidelines.html.
(We detached this subthread from https://news.ycombinator.com/item?id=45782109.)
Check the list of supported architectures https://wiki.debian.org/SupportedArchitectures, it's pretty short. Official support extends to exactly 5 architectures. So Debian was never going to be your go to for anything not mainstream.
gccrs exists?
Flagged for deliberately spreading disinformation under the name StopDisinfo. As this user is aware, rust has a specification https://github.com/rust-lang/fls, which they have previously commented on: https://news.ycombinator.com/item?id=44927141 (and which I probably wouldn't have remembered if not for the rather remarkable name they are saying false things under).
Rust also has multiple compilers (rustc, mrustc, and gccrs) though only one is production ready at this time.
Don't feed egregious comments by replying; flag them instead. If you flag, please don't also comment that you did.
https://news.ycombinator.com/newsguidelines.html
The Rust specification you link is performative and only intended to satisfy requirements of certification processes. No one is actually using it to implement the language, as far as I am aware.
There is other work on specifying Rust (e.g. the Unsafe Code Guidelines Working Group), but nothing approaching a real spec for the whole language. Honestly, it is probably impossible at this point; Rust has many layers of implementation-defined hidden complexities.
“There’s a standard, but it’s performative” is a different argument than the top level one.
But even if we accept that, it doesn’t seem like a good comparative argument: anybody who has written a nontrivial amount of C or C++ has dealt with compiler-defined behavior or compiler language extensions. These would suggest that the C and C++ standards are “performative” in the same sense, but repeated claims about the virtues of standardization don’t seem compatible with accepting that.
The original purpose of the C standard was to solve the problems created by the diversity of increasingly divergent implementations of C. They studied existing behavior across systems, proposed new language constructs, and it was generally a success (look at the proliferation of C in the 90s across many different systems and architectures).
The actual informal semantics in the standard and its successors is written in an axiomatic (as opposed to operational or denotational) style, and is subject to the usual problem of axiomatic semantics: one rule you forgot to read can completely change the meaning of the other rules you did read. There are a number of areas known to be ill-specified in the standard, with the worst probably being the implications of the typed memory model. There have since been formalized semantics of C, which are generally less general than the informal version in the standard and make some additional assumptions.
C++ tried to follow the same model, but C++ is orders of magnitude more complex than C, and thus the C++ standard is overall less well specified than the C one (e.g. there is still no normative list of all the undefined behavior in C++). It is likely practically impossible to write a formal specification for C++. Still, essentially all of the work on memory models for low-level programming languages originates in the context of C++ (and was then ported back to C and Rust).
Well, the memory ordering model was developed for C++ and is used in C and Rust. But e.g. C++ does not have a pointer provenance model, which is arguably almost as important in this context. It turns out one of the things we really care about with multi-processing, and thus memory models, is linked-list hacks, and those only work if you have provenance rules; C++ just has a shrug emoji where the provenance rules would go, so that's not great. C has an ISO document, although it's not part of the ISO C standard - it's just another document so far - but Rust has specified provenance.
Also, the C++ ordering model is defective in the sense that while it offers the orders we actually use it also offers an order nobody knows how to implement, so it's basically just wishful thinking. For years now the C++ standard has labelled this order "temporarily discouraged" as experts tried to repair the definition and C++ 26 is slated to just deprecate it instead. Rust doesn't copy that defect.
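For what that looks like in practice, a minimal sketch: Rust exposes Relaxed, Acquire, Release, AcqRel and SeqCst from the C++11 model and never adopted the ordering the parent comment refers to (consume), so the usual publish/observe pattern is written as:

    use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
    use std::thread;

    static DATA: AtomicU32 = AtomicU32::new(0);
    static READY: AtomicBool = AtomicBool::new(false);

    fn main() {
        let writer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            READY.store(true, Ordering::Release); // publish
        });
        let reader = thread::spawn(|| {
            while !READY.load(Ordering::Acquire) {} // observe the flag
            // The Release/Acquire pair guarantees the write to DATA is visible.
            assert_eq!(DATA.load(Ordering::Relaxed), 42);
        });
        writer.join().unwrap();
        reader.join().unwrap();
    }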
They have a document that they sometimes describe using the word "specification", but its README clarifies that it's not actually a specification:
> The FLS is not intended to be used as the normative specification of the Rust language
To be clear, my argument doesn't hinge on whether FLS is a normative specification of Rust or not. The argument is that being "specified" is neither necessary nor sufficient for language maturity or quality.
[flagged]
[flagged]
Where exactly is the arrogance? It seems like a rather plain and simple announcement to me.
[flagged]
Is Debian linked to Canonical? Isn't that Ubuntu?
Ubuntu is built on top of Debian
[flagged]
[flagged]
[flagged]
[flagged]
can you link what you're referring to? As I understand, this announcement is about Debian tooling, & you're referencing Linux kernel development discussions
What if the person promising the first thing is not the person doing the second thing?
I don't get the need for Rust since I happily compile common lisp to machine code when I need fast binaries.
But the people who use the language have an amazing talent to make people on the fence hate them within half a dozen sentences.
They remind me of Christian missionaries trying to convert the savages from their barbarous religions with human sacrifice to the civilised religion with burning heretics.
Many programmers feel the same way about Lispers. It's best to set aside your gut feelings about the community and think primarily about the technical and organizational merits and disadvantages of the technology.
Yes, but we're not making a push to make everything a bilingual c/lisp code base.
Rust people for some reason are.
One of the most iconic Lisp sayings is literally about how every other language is supposedly inferior to Lisp, so I don't think it's a particularly good example here.
You mean the one that all languages eventually converge to Lisp? That's a fact though :)
Yes, the blind stumble downhill eventually. But we let them do it in their own time.
Not a Rust or even a systems language guy but it’s not “for some reason”. The reason is actually incredibly clear and about removing the single largest surface area of security problems in the entire history of Linux.
Is it the largest though? Based on https://owasp.org/www-project-top-ten/, it would be part of #6 or #8. While one can always walk and chew gum at the same time (and those rewriting system software are likely not the same people who need to design systems with less flawed access control), replacing stable software is not without risk (e.g. https://lwn.net/Articles/1043103/, which is an instance of #6 caused by rust). Would you trust a rewrite of OpenSSH in rust (absent any information about its authors)?
Owasp deals with web security only. That whole area is biased towards php/ruby/js/python/beam where those issues don't exist outside of native modules.
https://www.cvedetails.com/vulnerabilities-by-types.php is a bit more clear. It's xss, SQL, then memory. The first two are not possible to enforce a fix on - you can always make a decision to do something bad with no visible annotation. Even then, rich types like in rust make safe interfaces easier to produce. But rust tackles the next class of issues - one that you can verify to be safe or require an explicit "unsafe" around it.
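A toy sketch of the "rich types" point (the names and the escaping are hypothetical and deliberately naive, not a real SQL library): by giving sanitized fragments their own type, an API can turn "forgot to escape" from a runtime vulnerability into a compile error:

    // A raw user string and an escaped fragment get different types.
    struct Escaped(String);

    fn escape(raw: &str) -> Escaped {
        // Toy escaping, for illustration only.
        Escaped(raw.replace('\'', "''"))
    }

    fn run_query(name: &Escaped) {
        println!("SELECT * FROM users WHERE name = '{}'", name.0);
    }

    fn main() {
        let user_input = "O'Brien";
        // run_query(user_input);       // does not compile: &str is not Escaped
        run_query(&escape(user_input)); // the type system forces the safe path
    }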
But how many breaches get CVEs (or are caused by them)? Given how much users (and their data) interact via the web vs any other platform, reducing breaches of services is going to improve users' lives more than reducing the number of CVEs (does mongodb defaulting to no username/password for access have a CVE?).
As for preventing XSS and SQL injections, that's what good web frameworks do. If your framework encourages you to write raw unescaped SQL, or doesn't provide sensible defaults around content policies, then no matter what language it's in, there are going to be issues (and maybe if we called these frameworks "unsafe" then we'd get somewhere with fixing them).
Pretty consistently [1] [2] [3] [4] it comes out that nearly 80% of security vulnerabilities come from memory safety bugs. I would consider that the largest. Microsoft's post is especially telling: that ratio has stayed consistent since 2006, so no amount of external tooling and training solves this. [1] https://langui.sh/2019/07/23/apple-memory-safety/ (albeit Apple is solving this with another language and a really nice CPU extension) [2] https://www.microsoft.com/en-us/msrc/blog/2019/07/we-need-a-... [3] https://security.googleblog.com/2019/05/queue-hardening-enha... [4] https://x.com/LazyFishBarrel/status/1129000965741404160
In which code, though? And what counts as a security vulnerability (does the design of the Play Store, where searching for an app doesn't necessarily return that app as the top result, count)? I, like everyone else, want a secure browser, but my secure browser isn't going to prevent my email (or passwords) from appearing on https://haveibeenpwned.com/. I like Rust, and want to write more Rust, but if I were to port openssh to Rust, I guarantee my Rust code would have more CVEs than openssh.
My feeling is that in the specific instance of using Rust in apt, this is most likely a good thing (though I hope existing, well-tested Rust libraries are used rather than NIHing them and introducing new bugs), but so far Ubuntu's rustification has not gone smoothly, so I'm more wary of changes like these than of, e.g., improvements to Firefox via Rust.
I don't agree with the openssh take, but yeah, we don't need to migrate everything to Rust, and a migration doesn't always go smoothly. I've been using sudo-rs on my Arch install for over a year and haven't had any issues yet.
> The reason is actually incredibly clear
There is no guarantee that other bugs do not flourish in the Rust ecosystem. There are no publicly known quality code checks of Rust programs except a big "trust us" (see Firefox with all its CVEs, despite "rust"). And combined with the Cargo ecosystem, where every malicious actor can inject malware, that is a big warning sign.
AFAIK Linux is using rustc directly, without cargo.
And just an anecdote: Asahi Linux devs said that Rust made it very easy (at least relative to working with C) to write the drivers for the Apple M1 and M2 series, so it seems that the language has its merits, even without the cargo ecosystem.
Also, Rust will only eliminate certain kinds of bugs; preventing the others is impossible. A few years ago a report (I believe from Microsoft) said that 70% of the bugs found were memory related [0], which means Rust would have prevented most of those.
Maybe Rust is not the best answer, but as of now it is the most proven answer for this particular problem; who knows if Zig or another language will replace both C and Rust in the future.
[0] https://www.zdnet.com/article/i-ditched-linux-for-windows-11...
I might be misunderstanding here but... what you're saying is that Rust programs can still have bugs? Isn't that the same as other programs except Rust prevents the most disastrous and common bugs that lead to most CVEs?
If I got that right, how is "it's still not perfect" an argument?
Agree with the Cargo objection.
If perfect isn't needed, we might as well stick with the 20+ year old mature codebase; there's no need to move to another language. I mean, the code to parse .deb, .ar and .tar is done, and whatever memory bugs existed should have already been fixed.
Use Rust for evergreen projects by all means, just leave mature tested systems alone, please.
I don't follow how from "better but not perfect is better than worse" you conclude that.
OK? So, is your position that Debian is only for software that's old enough to vote?
Or maybe Debian should never rely on any software written after 2015?
Firefox is 29% JavaScript, 28% C++, 22% HTML, 10% C, 3% Python, 2.6% Kotlin and 5% other
> There is no guarantee that other bugs do not flourish in the Rust ecosystem.
well, less likely than in C, thanks to an advanced type system, e.g. allowing authors of abstractions to make their APIs much more fool-proof.
> where every malicious actor can inject malware is a big warning sign.
Very much doubt that is the case...
There are guarantees that many types of bugs won't happen in Rust code. That alone is a great progress.
Are there guarantees that "other bugs" do not flourish in the C ecosystem?
Firefox is not even close to 100% Rust.
This is a wildly misinformed comment.
A bunch of major projects have conclusively shown that moving to memory-safe languages results, without any doubt whatsoever, in more secure software.
I have similar feelings about the go community. I like go on the whole, but certain aspects are atrocious. The 'go community' is strangely cult-like and complaining about the language is generally poorly received. As a result I very rarely interact with them and just do my own thing.
> I don't get the need for Rust since I happily compile common lisp to machine code when I need fast binaries.
Not sure how that’s relevant when CL is basically dead and no one wants to work with it, while Rust is flourishing and delivering value
If the value proposition is internet drama, then it certainly does.
Rust isn't 'delivering value', people are just drinking the koolaid. CL has its niches, and it still fills them well: it's never been that successful as a general-purpose language.
> Rust isn't 'delivering value'
Citation needed.
Or, what can be asserted without evidence can be dismissed by pointing to ripgrep.
Oh wow a replacement for a tool I already have. It's slightly faster, even if I seldom ever worry about its speed. Who cares?
57000 people who have starred its repository on Github care.
More evidence than you have provided for your claim "Rust isn't delivering value", what did you use to come to that conclusion?
How can lisp be fast if it doesn't have static typing and uses GC?
As a GC language Common Lisp is slow as molasses. However, in the realm of languages which are slow as molasses it's got quite acceptable performance. I suppose it's in part because it's alien technology from beyond the bounds of the multiverse, but also in part due to the fact that we had really interesting tech back in the '70s and '80s that kinda got lost during the so-called AI winter.
To add on to that, with declarations the programmer can tell the Lisp compiler that (for example) a variable can be stack allocated to help improve performance. The fact that Lisp code is just data is another benefit towards performance as it means macros are relatively easy to write so some computation can be done at compile time. There are also various useful utilities in the spec which can be used to help profile execution of a program to aid in optimization, such as time and trace.
News flash, but Lisp compilers have gotten better since the 80's. In general, normal, unoptimized Lisp is probably on-par with Java, while optimized Lisp with a good compiler can get on par with C++. SBCL is a very good compiler, and GC technology has come a long way.
SBCL is fantastic, but the resources being poured into GCC and LLVM, combined with CPU manufacturers including architectural features to help C and related procedural languages work better, make it a massive uphill battle. I suppose that with proper use of things like sb-simd and arena allocation you can get in the ballpark of C, but with some tweaking C, C++, and Rust can get insanely fast (of course Fortran's better, but nothing compares to it, so let's leave that aside for now).
"why use C when I could simply link to node?"
[flagged]
Well, so far this thread has 0 people shilling Rust, a couple shilling Common Lisp and a bunch complaining about Rust shills (you included).
Makes you think, huh?
[flagged]
[flagged]
> Christian missionaries trying to convert the savages
Fast forward 5 centuries and it turns out they were in fact pretty successful, as South America and central Africa are the places where Catholicism is most active today, far more than in Europe.
Yes, by doing a lot of killing in those places.
Not from religious people, by a very large margin.
Less killing, considerably more buggery: https://www.abc.net.au/news/2025-01-29/former-bishop-broome-...
The Christian Brothers missions were hell holes across the undeveloped regions.
* https://www.theguardian.com/uk-news/2017/mar/02/child-migran...
* https://www.childabuseroyalcommission.gov.au/case-studies/ca...
Rust evangelists are tiresome. It's not gonna fix the tech debt problem, no matter how much rust crack you smoke. Disciplined use of c, with modern tools like valgrind, will give you safe code without having to lobotomize yourself into fighting the borrow checker for everything, even manifestly simple code.
It would be nice (speaking as a Valgrind developer) if Valgrind could guarantee safe code. Unfortunately it doesn’t. Firstly, it does not detect all kinds of errors (and indeed no tool does). Secondly, it is unlikely that the test coverage is perfect.
Delusional overconfidence that developer “skill” is all that is needed to overcome the many shortcomings of C is not a solution to the problem of guaranteeing security and safety.
I find it surprising hearing statements like this from a developer of a tool for, well, C programmers mostly I guess? "Skill is all that is needed to prevent bugs and produce bug-free software" is a phrase I've never heard from an actual C programmer, but have heard plenty of times from detractors.
The C programmers I know are certainly not deluded or overconfident. They don't even think "their" language is a perfect one, or even a very good one. They just avoid black-and-white thinking. They take a practical approach about memory issues, seeing them more like any other kind of bug. It's a different aesthetics than you would maybe see from many Rust folks. They want to be productive and want to be in control and want to understand what their code does. In turn, they accept that in some cases, bugs (possibly memory bugs) creep in, some of which could go unnoticed for some time. They tend to not see that as a huge issue, at least in general, because an issue that has gone unnoticed (or didn't manifest) is often less of a problem than one that is immediately obvious. (In case of data corruption, it _can_ be a huge issue, and you have to add safeguards to prevent it, and have to be accepting some residual risk).
They understand that everything is a trade off and that with experience and practice, good architecture, good tooling etc. you can prevent many bugs early, and detect them early. They have tried many approaches to prevent bugs, including fancy languages and constructs, and have concluded that in many cases, perfect safety is not possible, in particular it's not possible without seriously hurting other requirements, such as productivity.
As to valgrind, I can say that it was a bit of a mixed bag for me. It did help me finding bugs a number of times, but I also had to configure it a bit because it was producing a lot of noise for some external libraries (such as libc). I don't really understand the underlying issues.
You didn't look very hard. The person that I replied to said " Disciplined use of c, with modern tools like valgrind, will give you safe code".
And you took "safe" to be "100% safe, never breaks" instead of just "relatively safe" or "improving safety a lot". They even hinted at the cost that comes with more heavy-handed approaches, which by the way aren't perfectly safe either.
Btw. kindly look at the other issue that I overconfidently waved away as "probably a false positive"?
Love Julian's email style. Polite, but firm and decisive.
Didn't they call Rust software "unpackageable" just a couple of months ago? IIRC they were talking about bcachefs-tools.
Debian has packaged rustc and Rust based programs for the better part of a decade now.
[flagged]
I think anyone who objects to Rust in userland as part of the system should also object to Perl, Python, Ruby, etc.
What would really be scary would be a distro that won't even boot unless a variety of LLM's are installed.
Boo!
I've been trying to build a debian package recently. I didn't have any crashes but I couldn't work out how to do it especially with the unbelievably contradictory and confusing documentation. I'm so glad I mainly use makepkg on Artix which is MUCH easier.
I struggle to believe that this is really about a call to improve quality when there seem to be some other huge juicy targets.
Are you sure you're not conflating documentation with random people's writings on the web? Because that there seems to be a helluva lot of cargo culting on this topic.
When the primary documentation is of no use one looks for anything else that can possibly help and a lot of that is out of date.
What's wrong with the primary documentation?
I was doing this months ago and have forgotten every twisty road I went down but I wanted to produce a binary package for a particular version of Ubuntu (and or Debian) and put it in a PPA so that people could use my code easily. It seemed like the rules file could be anything and I wouldn't have to implement a lot of targets that are either irrelevant or hard to understand the purpose of. So I used a script. Mistake - makefiles now seem to be the thing.
I struggled over how to lay out the directories in my Git repo. The fact that I want to build from the git repo is another layer of confusion - as opposed to building from a tarfile. I'm making something unstable for other developers right now, rather than a stable releasable item.
The next bit of extreme confusion: where should my package's install target put the binary plugins I built? I'm not going to try to go back and check over this in detail, but as far as I remember the docs were very unspecific about that, as if it could be anywhere, and different user docs on the net seemed to show different things.
I got to the point where I could appear to build the thing on my machine but that's not good enough - the PPA has to be able to do it and then you've got to upload, wait and hope the log explains what's wrong well enough.
I tried looking at other packages - I'm building plugins for GNU make so I tried that - but it was using the build from tar approach (IIRC) and was way overcomplicated for my very simple package which is just a few .so files in a directory.
It took me a couple of weeks of messing around with every possible option to get this far and I just ran out of energy and time. I am not a beginner at programming - only at packaging - so IMO there is a great deal that could be done for the user experience. Don't get me wrong - I'm not picking on .deb. RPM is another incredibly horrible packaging system where every tiny mistake can force a long long long rebuild.
They're obviously complicated because they're trying to offer a lot; e.g. Artix doesn't use SELinux, so there's one misery avoided straight away, but that has consequences.
IMO the core docs just don't prevent any of this confusion. They seem like a reference for people who already know what they're doing and enough tutorial for a very specific simple case that wasn't mine. People wouldn't bother to write their own tutorials if the docs filled the need.
This all seems to be criticism of how Debian chooses to build packages. You may well be right in some of it. But I fail to see where contradicting documentation comes in.
It seems that there is a history of ways to build packages and the docs probably need to explain now versus then so that you can beware of what other people have written in the past. They also need to be comprehensive enough that you're not looking for other people's answers that may be out of date.
Back to my original point - I don't think Rust is going to fix this.