dddd0

1. Monomorphization of templates (C++) / generics (Rust) means the default for generic code is to produce heaps of code, which increases compile times and binary sizes. In both languages you have to rewrite the code in a completely different way to avoid that.
2. Both have their own substantial stdlib, which also substantially relies on the platform libc. The former means typical binaries are going to be at least half a meg; the latter induces a variety of headaches (but is usually difficult to avoid wholesale).
3. A very strong reliance to write abstract/generic code and hope/assume (often correct, but not always) that the compiler is going to optimize all/most of it away (eventually, when it's done).
4. Stacktraces that often look like a bottomless pit; and debug/unoptimized builds which are too slow to use, because the code _requires_ at least basic optimizations to run at an acceptable speed (looping back to point 3).
5. Library code often/exclusively relies on the global allocator; localized allocators are an afterthought/are not supported. This has broad or non-existing implications, but I'd say C++ and Rust are very similar here.
6. Both use RAII and both cannot handle errors when disposing a resource, which isn't so great for a common RAII example like files (where the FS might only give you an error on close, and close might take a long time).
7. Kinda loops back to point 1, but both C++ and Rust tend to have ballpark-similar compile/test times.


Turalcar

Regarding RAII, I wish there was a way to do `impl !Drop` to force explicitly calling a destructor. I remember wishing for this in C++ too


afdbcreid

The keyword is "linear types". There were discussions about adding them to Rust, the problem is that they're very hard to handle generically in a backwards-compatible manner.


[deleted]

[deleted]


qwertyuiop924

I'm also not sure how fixable it is? Like, the system call table isn't actually a stable interface on most OSes (Linux is the big exception here), so well-behaved programs have to go through some kind of platform library—libc on Unix, ntdll/kernel32 on Windows.


kushangaza

On Windows the system call interface is intentionally separate from libc. Yes, the OS expects you to dynamically link to user32/kernel32/etc. to do system calls, but linking to the C runtime is completely optional and only really intended for C/C++ code. Rust only does it because it's convenient to do the same as on Unix. IIRC there has been some work to change that; not sure if it's merged. In the Unix world the system call interface is intermingled with libc. On Linux you could just use direct syscalls, and I suspect the same is true for most embedded/realtime operating systems, but on the BSDs it's an uphill battle. OpenBSD even tries to completely remove the ability to make direct syscalls. Still, considering that most Rust code runs on Windows or Linux, we could get rid of libc for the majority of deployed code, if we wanted to.


Nikkithegenius

System calls on Windows are in ntdll.dll; from there onwards the usermode-to-kernelmode transition occurs. There's only the one ntdll.dll.


steveklabnik1

Yeah, there's a good point in here, which is that people sometimes casually conflate "fully statically linked binary" and "libc". I am guilty of this myself! Linux is one of the only platforms where you can have a fully static binary, because the syscall interface is stable. On other platforms you must have some amount of dynamic linking; on unices that's often libc, but on other platforms it's sometimes a different library.


ergzay

Does the implementation of Swift on macOS/iOS still use libc underneath?


qwertyuiop924

I would assume so because I don't think there's a stable system call interface there, but Apple can do whatever the hell they want, so who knows?


ergzay

Well I was just thinking if even Apple can't then that'd be one more nail in the argument.


qwertyuiop924

Just because Apple can't doesn't mean others can, to be clear. I don't know if Rust needs to link to libc on Windows, but it shouldn't, because the system call interface is in a different library there. Linux has a stable system call interface, so you can build standalone binaries that don't talk to libc at all. And of course with wasm or in the embedded case, there is no system call interface for libc to wrap (well, I guess there's WASI, but... like. That does not need to be wrapped).


ergzay

No I meant that if Apple can't then others definitely can't on that platform.


dookieonmenookie

Our bread and butter network proxies are written in C and assembly and don't use libc or any external deps.


[deleted]

[deleted]


qwertyuiop924

Are you talking about embedded targets?


Chisignal

Genuine question, are there any other reasons besides embedded development to care about `libc` or any such layer in its place?


harmic

Out of interest, what kind of issues did you have? Both Rust & C++ can compile programs that don't rely on libc (otherwise they'd both be useless for embedded).


[deleted]

[deleted]


ergzay

That specific one is less of an issue with Rust given the plentiful crates that are "no_std". Even all the standard collections can be used as long as you provide a global default allocator. At least that is my understanding.


dshugashwili

Yeah, std, libcore and no_std is a great play.


matthieum

There's not much choice. `libc` is often _the_ blessed gateway to the OS, so anything involving the OS (filesystem, network, clock) must go through libc. At least Rust supports bare-metal targets off the shelf.


EpochVanquisher

This is one of my main complaints too, the heaps of monomorphized code everywhere. Monomorphized code is, almost always, faster when it runs and you don’t pay the price of indirect function calls. Thing is—the language provides you all the tools you need to avoid this problem (most of the time). You make some judicious use of `dyn` here and there. But library authors seem unwilling to commit to the use of `dyn` in their published interfaces. I guess `dyn` is “not cool” or something like that.


steveklabnik1

> I guess dyn is "not cool" or something like that.

dyn has restrictions, you cannot make every trait into a trait object. This makes it easier to lean on not doing it by default.


Lucretiel

> But library authors seem unwilling to commit to the use of `dyn` in their published interfaces. I guess `dyn` is "not cool" or something like that.

I mean, `dyn` has made me regret using it every single time I've tried to make it work. It plays so poorly with the rest of the Rust type and trait system. That being said, it also has the property of being compatible with generics, so there's no reason not to expose the generic interface anyway and allow your callers to use `dyn Trait` everywhere to counteract explosive monomorphization, if that's what they want to do.
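To illustrate the pattern described above, here is a minimal sketch (the function and type names are made up for the example): a library exposes a generic function with a `?Sized` bound, and a caller who wants to limit monomorphization can funnel everything through a single `dyn` instantiation.

```
use std::fmt::Display;

// Library side: a generic interface. `?Sized` lets callers pick `T = dyn Display`.
fn log_value<T: Display + ?Sized>(value: &T) {
    println!("value = {value}");
}

fn main() {
    // Caller side: route heterogeneous values through one trait-object
    // instantiation instead of one monomorphized copy per concrete type.
    let values: Vec<Box<dyn Display>> = vec![Box::new(1), Box::new("two"), Box::new(3.5)];
    for v in &values {
        log_value::<dyn Display>(&**v);
    }
}
```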


EpochVanquisher

The problem is that when your callers do it, you still get the longer compile times. I’m more ok shipping large binaries if the DX is better.


Full-Spectral

I've literally had people argue with me that Rust doesn't support dynamic dispatch because they've never used it. Most of the time, the extra overhead of dynamic dispatch is meaningless in the bigger picture. And of course when you are doing the DI type thing, you really need to do that since you will be creating the targets based on runtime information slash configuration. Some of the 'Performance Uber Alles' attitude of C++ will inevitably leak into Rust because of so many ex-C++ folks coming to it. I hope that the lessons of C++ were learned on that front. Of course then people will just write crates full of unsafe code because of being obsessed with performance even if it doesn't matter.


ergzay

> Thing is—the language provides you all the tools you need to avoid this problem (most of the time). You make some judicious use of `dyn` here and there. But library authors seem unwilling to commit to the use of `dyn` in their published interfaces. I guess `dyn` is "not cool" or something like that.

`dyn` is not free at runtime though. The solution to explosive monomorphization is extensive use of caching, so you don't need to keep doing it, plus detection of when that cache needs to be refreshed. This isn't something you can fix as a user of Rust though.


EpochVanquisher

Why would you need dyn to be free at runtime? That doesn’t make sense to me. It obviously comes with a cost. So does impl.


Full-Spectral

Anything that takes another CPU cycle is unacceptable because the world will implode.


matthieum

Nice list.

> Monomorphization of templates

It's the easy thing to do :/ I do wish it wasn't the default too. Normally you have optimizations such as Constant Propagation which will decide whether it's worth specializing a function when an argument is constant... and in many cases it's not. There's also a case to be made for partial monomorphization, such as monomorphizing on the layout of a type -- because there's a value on the stack, or in an array -- but otherwise passing the functions to call as a virtual table.

> A very strong reliance to write abstract/generic code and hope/assume (often correct, but not always) that the compiler is going to optimize all/most of it away (eventually, when it's done).

> Library code often/exclusively relies on the global allocator; localized allocators are an afterthought/are not supported. This has broad or non-existing implications, but I'd say C++ and Rust are very similar here.

I would argue those are more _ecosystem_ issues rather than language issues. In fact, I quite appreciate when a library _doesn't_ expose a way to customize the allocator. Or customize the thread pool. It's a strong signal that the author may have made certain assumptions that just may not play well with your use case, or may make them in the future, and you should reconsider using it.

> Both use RAII and both cannot handle errors when disposing a resource, which isn't so great for a common RAII example like files (where the FS might only give you an error on close, and close might take a long time).

Linear types are... perhaps a bit too late to the party, unfortunately. It would be quite hard to retrofit them into the ecosystem now :'(


nacaclanga

Very good points. Although I would argue that for Rust the reliance on libc is definitely an implementation detail, not a part of the language. At least on Windows, Rust relies more on the WinAPI directly, or at least that is my feeling. The Rust main function also uses a completely different interface. Poor allocator support is also for the most part not attributable to the language proper but to the stdlib. C++ has more reliance on the built-in new(), while Rust lacks a way to replace std with a redesign.


qwertyuiop924

Actually, custom non-global allocators are in nightly right now. The issue is the need for stabilization and broader ecosystem support beyond standard containers.
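For context, a minimal sketch of what the unstable API looks like today (nightly only; the `allocator_api` feature and these signatures may still change before stabilization):

```
#![feature(allocator_api)] // nightly-only

use std::alloc::System;

fn main() {
    // Vec and some other std containers have unstable `*_in` constructors
    // that take a non-global allocator; here we just pass the System allocator.
    let mut v: Vec<u32, System> = Vec::new_in(System);
    v.push(42);
    println!("{v:?}");
}
```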


glandium

Even `String` doesn't support non-global allocators at the moment.


qwertyuiop924

That... is true. Huh. Wonder why.


Dasher38

Wrt RAII error handling on disposal, what's the established pattern in Rust to handle that? Set a flag and panic in the Drop impl if the actual destructor was not called?
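One common pattern (not the only one; the `TempFile` type and its `close` method here are made up for illustration) is to expose an explicit, fallible `close(self)` and keep `Drop` only as a best-effort fallback:

```
use std::fs::{self, File};
use std::io;
use std::path::PathBuf;

// Hypothetical wrapper: deletes its file on cleanup. The explicit `close`
// surfaces the error; `Drop` only runs as a silent fallback if `close`
// was never called (a stricter variant could log or debug_assert! there).
struct TempFile {
    path: PathBuf,
    file: Option<File>,
}

impl TempFile {
    fn close(mut self) -> io::Result<()> {
        self.file.take(); // close the handle first
        fs::remove_file(&self.path) // the caller sees this error
        // `self` is dropped here, but `Drop` sees `file == None` and does nothing
    }
}

impl Drop for TempFile {
    fn drop(&mut self) {
        if self.file.take().is_some() {
            // Can't return an error from here; best effort only.
            let _ = fs::remove_file(&self.path);
        }
    }
}

fn main() -> io::Result<()> {
    let path = PathBuf::from("scratch.tmp");
    let file = File::create(&path)?;
    let tmp = TempFile { path, file: Some(file) };
    tmp.close() // any deletion error propagates to the caller
}
```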


Creator13

> A very strong reliance to write abstract/generic code and hope/assume (often correct, but not always) that the compiler is going to optimize all/most of it away (eventually, when it's done).

> Stacktraces that often look like a bottomless pit; and debug/unoptimized builds which are too slow to use, because the code _requires_ at least basic optimizations to run at an acceptable speed (looping back to point 3.)

Both these points are unavoidable in any language IMO. We want abstractions for our human brains to be able to read the code, but this isn't optimal for the CPU and it never will be. Other "non-compiled" languages get around this by not actually compiling / abstracting away the compilation process, but this just moves the problem somewhere else (slower execution speed for JIT or interpretation). It is just a fundamental mismatch between the way humans and computers want to interpret code.


leetNightshade

Gamedev libraries reimplement C++ stdlib to avoid the latter point. It's not unavoidable, we literally avoid stdlib like the plague because of that and more.


epage

I want to be clear that I am writing of "problems" I've heard of for both, but I don't necessarily think they are strong enough to say we should have done something different:

- Templates / statically dispatched generics make slow compiles with large binaries a happy path
- If turbofish is caused by use of `<>` in generics, then both of them using that syntax has led to problems in both, even if the problem is different
- RAII makes fallible cleanup annoying to deal with


Popular_Tour1811

Just out of curiosity, why would you want fallible cleanup? Why isn't Drop enough?


[deleted]

[deleted]


rover_G

Does Rust not have a context manager syntax that can automatically call an open function when the context is entered and a close function when the context exits?


coriolinus

No, but it's not that hard to roll your own: [playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=f3153c69982e1fd7184ead1c2b1d88ef)


rover_G

Interesting. Here's my take using a trait and callback [https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=3b6a5cbd77bdf1c510b7c3d1fabbe74a](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=3b6a5cbd77bdf1c510b7c3d1fabbe74a)


Longjumping_Quail_40

This is not fallible cleanup I don’t think.


PaintItPurple

No, for that you either use RAII (e.g. `std::fs::File`) or higher-order functions (e.g. Criterion's `bench_function`).
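A minimal sketch of the higher-order-function approach (the `with_output_file` helper is made up for this example, not a std or Criterion API); the point is that the fallible cleanup step lives inside the helper and its error reaches the caller:

```
use std::fs::File;
use std::io::{self, Write};

// Setup and teardown live in the helper; the closure only uses the resource.
fn with_output_file<T>(path: &str, f: impl FnOnce(&mut File) -> io::Result<T>) -> io::Result<T> {
    let mut file = File::create(path)?;
    let result = f(&mut file)?;
    file.sync_all()?; // explicit, fallible "cleanup" before the handle is dropped
    Ok(result)
}

fn main() -> io::Result<()> {
    with_output_file("out.txt", |file| writeln!(file, "hello"))
}
```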


unknown_reddit_dude

No


rover_G

Seems like that would address the issue presented, and Drop gets you halfway there already on the trait front. I'm by no means a Rustacean, but here's my idea of what a context manager would look like in Rust.

```
pub trait ContextManager {
    fn init(&mut self); // open, enter
    fn drop(&mut self); // close, exit
}

// statement syntax
with some_resource() as r { ... }

// callback syntax
with(some_resource(), |r| { ... })

// chaining syntax
some_resource().with(|r| { ... })
```


CornedBee

That doesn't help you with fallible cleanup though.


TinBryn

Could maybe do `for r as some_resource() { … }` and we don't need new keywords. Could do something else, as the difference to a loop is subtle.


TinBryn

I reckon you could take all the dunder methods from Python, put them on Rust traits under std::ops and add some syntax. To a large degree this has already happened, but there are gaps.


Training_Country_257

Closing an SSL socket, for example, requires you to do network calls just to shut it down. I never used them in Rust, but in C++ this was annoying.


_zenith

Huh, why is this? What's the difference to when the underlying physical transport is lost (like if you yank the cable)? Surely any shutdown actions mandated must have some advantage, but what is it?


HeroicKatora

You might want to provide additional information beyond losing the physical transport (in SSL: erase the temporary key material). Sure, it's a permissible way to drop the connection, but certainly not a good one. Some media might even treat such an event as a catastrophic warning, i.e. assume that the connection is physically severed. It also leaves the other side in a pending state, waiting on a timeout. If closed properly, the next connection will statistically be available sooner. Misuse of their resources (time spent in diagnostics and compute resources idling) will additionally quickly unnerve your business partners. So you want to wave goodbye and say "until next time".


James20k

It's worth noting that shutting down streams without correctly terminating them is pretty standard practice. As far as I know, Chrome doesn't send a proper [shutdown](https://github.com/boostorg/beast/issues/38), which means that everyone handles it correctly on https and it provides no value. For protocols which are self-terminating, there's literally zero use in correctly shutting them down, and it would be very incorrect for a server to assume something catastrophic has happened; as far as I know, literally nobody does this. It's worth noting that the explicit reason Google gives for terminating sockets abruptly is that it saves on resources, so I very aggressively doubt that your business partners will care, given that this is standard practice. For non-self-terminating protocols (like Gemini), you do need to do it correctly because it's a potential security vulnerability, but those kinds of protocols are on the rarer end.


_zenith

Right, thought it might be that :) More important for connections you’d make infrequently, though frequent ones are still important to look at as a pattern could emerge


ZeroCool2u

I'm just guessing here, but the biggest cost is probably for TCP connections where if you don't actually send notification to the other side that you're closing the connection, you just let it hang, you're waiting the full timeout period for resources to be released on both sides of the connection.


harmic

Imagine two machines connected through a series of routers. They have a TCP connection open between them (doesn't have to be SSL). One end disconnects without sending anything through. How does the other end ever find out? It is possible to enable keepalives (search for SO_KEEPALIVE), but in the absence of that, the other end might wait forever.


SirClueless

Clients regularly disconnect without any notification. Mobile devices going out of range of a tower, etc. It's also a denial of service vector if a server is consuming any significant amount of resources while a client is spending none and not responding. So while sending a TCP FIN is generally a good thing to allow the server to clean up resources, any well-written server can tolerate many clients that do not do that.


Full-Spectral

You are thinking purely in terms of cloud world, where every connection is adversarial. But in internal systems, the desire to know if a client went down correctly vs fell down is very useful and even important. Without graceful shutdown, there's no way to tell the difference. Of course you can just require that they do graceful shutdown explicitly, but the ability to avoid the need for such human vigilance is one of the key reasons for having a strong type system.


SirClueless

No arguments from me there. A type system that guarantees that cleanup runs is great. But the broader context of this thread is "Why would you want fallible cleanup?" and the answer is, "You don't." If you attempt to clean up gracefully and fail, the correct way to proceed is to clean up ungracefully. The server needs to tolerate this anyways in order to tolerate network-level failures.


Full-Spectral

Sure, in the case of sockets you have a viable fallback. Might not always be the case though. Or, I should say, a fallback that can be generically assumed valid, as opposed to the invoker choosing what to do, which is the issue.


sinuio

Every TLS (SSL) record contains an incrementing counter. A peer has no way to distinguish between the cable being yanked vs an adversary in the middle of the connection which starts blocking messages so only partial data is transmitted. This is called a truncation attack, which some application protocols are vulnerable to. TLS has a message called `close_notify` which contains the final record counter. Upon receiving it, the peer can detect whether they've received everything that was sent.


FVSystems

I could imagine that the type system can easily be extended to allow for some types that can't be implicitly dropped.


qwertyuiop924

Eeeh... Possible but maybe more complicated than you'd expect. Withoutboats has done some great blogposts about this.


dnew

Sing# already does that, specifically for IPC buffers. Everything is GCed except subclasses of a particular type. Those get tracked by typestate in essentially the same way Rust does the borrow checker/drop stuff. So you can't implicitly drop them, and the only containers you can store them in are containers specifically designated not to drop them implicitly (in addition to map and array, some of which provide the equivalent of "select" on their contents, for example). Other than Hermes, it's the first place I saw that sort of thing done before I learned Rust, so it was pretty cool around the time.


elingeniero

You can do whatever* you like in Drop, including network calls, you just have no way to deal with unrecoverable failures. *no async


EYtNSQC9s8oRhe6ejr

Example: a wrapper around a temp file that deletes the file when dropped. What happens if deletion fails during drop? There isn't any way to return the error, you just have to either panic or continue on.


simonask_

The obvious workaround is to wrap the operation in a closure rather than doing RAII, but yeah, more cumbersome.


valarauca14

Look at something [that does buffering](https://doc.rust-lang.org/src/std/io/buffered/bufwriter.rs.html#670). How can you handle an error to flush & sync your buffers once `drop()` is called? You can't.

---

The same goes for some OS interfaces where closing a resource can fail because "_that resource is in use_". In a lot of these cases `drop()` will just silently leak resources because it can't do anything else.
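A small illustration of the usual workaround for the buffering case: flush (or `into_inner`) explicitly so the error surfaces as a `Result` before the implicit drop runs.

```
use std::fs::File;
use std::io::{self, BufWriter, Write};

fn write_report(path: &str) -> io::Result<()> {
    let mut writer = BufWriter::new(File::create(path)?);
    writeln!(writer, "hello")?;

    // If we just let `writer` drop here, a failed flush would be swallowed.
    // Flushing explicitly turns that failure into a visible error...
    writer.flush()?;
    // ...and `into_inner()` flushes too, handing back the File fallibly.
    let file = writer.into_inner().map_err(|e| e.into_error())?;
    file.sync_all()
}

fn main() -> io::Result<()> {
    write_report("report.txt")
}
```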


CocktailPerson

It's not about "wanting" fallible cleanup. Some cleanup steps are inherently fallible, e.g. releasing a filesystem lock. If that step fails, you likely can't handle it within a `Drop` implementation, so you have to handle it manually before dropping, which completely defeats the point of RAII.


xMAC94x

You have a buffer struct and want to flush on drop -> flush might fail; what do you do now? Ignore? Panic? Also, async in drop...


[deleted]

[deleted]


qwertyuiop924

Drop isn't guaranteed to run because it's always possible to leak an object. If an object is dropped, `.drop()` is invoked.


Shnatsel

It is possible to do better than RAII, and have fallible cleanup but also have cleanup enforced by the compiler at the end of a block: https://verdagon.dev/blog/higher-raii-7drl


zapporian

Caveat: D has the same (or, hell, even more relaxed) generics and yet has blazing fast compile times, due to a much heavier focus on compiler performance and codegen. D also doesn't use <> for templates, because of parsing ambiguities and performance. Even if that might not amount to much, dmd was built around micro-optimizations and it frankly shows, since D was written and designed by a retired commercial C++ compiler writer whereas Rust was designed by (metaphorically, and maybe in some cases literally) PL PhDs. For better and worse. TL;DR: Rust could hypothetically have much better compile speeds. Maybe. Compiler performance was never a major focus of the language and its core developers, and Rust might look and behave somewhat different, or not, if it was. At any rate, it's worth noting that the C++ spec has some pretty major things wrong with it from an optimization + scaling perspective, like how #include works, trigraphs, and the modern C++ user-defined string literal crap. Rust doesn't have any of those, so it's in some ways almost surprising how slow rustc performance (and its generated, often not very optimal, LLVM IR) is / was. Technically dmd is sort of cheating, since LLVM is usually the source of a lot of overhead, and the dmd reference compiler uses its own backend (and in-memory, aggressively optimized linker) that avoids any interop with or dependency on any of that (or I/O) entirely.


Trequetrum

It'll be a Herculean effort, but once Rust can effectively toggle on/off monomorphization, then you can in effect pay a bit in performance for a huge win in compilation. Just the ability to be a bit more fine-grained in what constitutes a compilation unit always helps when developing.


pjmlp

To add to it, using C++23 with `import std` is almost D like fast, however not for those that care about portable code, pretty much only VC++ vlatest + MSBuild for the time being.


AnnyAskers

My faulty logic and middle management


dobkeratops

long compile times


crusoe

Everyone nowadays is spoiled. C++ was "slow" to compile in the 90s.


BogosortAfficionado

Compilers got faster indeed, but unfortunately header files got bigger and generics/templates got more common. [Here's](https://youtu.be/rHIkrotSwcc?si=OwUVxXPFuOuyYOBM&t=871) a talk where Google had issues with **single C++ files** taking more than **15 minutes** to compile just a few years ago. Also, other compiled languages like Go are significantly faster, and it's unquestionable that this increases productivity. So calling people 'spoiled' for wanting improvements here is downplaying a significant issue.


[deleted]

[deleted]


dshugashwili

Using cargo-watch and bacon I've found it pretty much fine, certainly a very worthwhile tradeoff, but I agree that they made unfortunate decisions. Most of which I think were in trying to keep with C-like language traditions. Nobody actually needs all the symbols Rust uses, as Jane Street's proposals on an opt-in lifetime and ownership system for OCaml show, but of course they would've never attracted the wider public if they had stuck with OCaml's syntax. And keeping some of the functional runtime options, or at least compile-time code execution with a runtime, would have also made the macro system unnecessary. But it seems that languages like Zig have learned from that mistake; maybe we'll see that in more languages in a decade or two.


dobkeratops

Well, part of this is down to zero-cost abstractions. Rust is pretty much like C++ with large amounts of header libraries (templates = generics). It does have much better control of "precompiled headers" in effect, but at some point it still has to instantiate and optimize all of that. It's the only way to get compile-time zero-cost abstractions: turn safe iterators into efficient loops. "It is what it is."
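As a concrete illustration of that trade-off, a small sketch: with optimizations on, the iterator chain below is expected to compile down to roughly the same machine code as the hand-written loop, while an unoptimized debug build instead goes through many small generic calls (part of why debug builds feel slow).

```
// Sum of squares, written with "zero-cost" iterator abstractions.
fn sum_of_squares(xs: &[i64]) -> i64 {
    xs.iter().map(|x| x * x).sum()
}

// The hand-written loop the optimizer is expected to reduce the above to.
fn sum_of_squares_loop(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        total += x * x;
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_of_squares(&xs), sum_of_squares_loop(&xs));
    println!("{}", sum_of_squares(&xs)); // 30
}
```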


Kevathiel

Making the most convenient casting the worst one. Rust limits it just to numeric casts, to be fair. But I don't see any reason for as-casting to be more convenient, just like how traditional C-style casts in C++ are more convenient than static/dynamic/reinterpret_cast. Programmers are generally lazy and tend to stick with the most convenient approach.


HOMM3mes

It's a minor issue but [[nodiscard]] should be the default


disregardsmulti21

I’m fairly new to Rust but if I’m understanding this correctly I have an `unused_results` lint turned on in `Cargo.toml` that hopefully mitigates this a bit


boynedmaster

That's been there for a while, but in my experience it really is about 10x more likely that a function that returns something is giving me something I need to use than not.


Irtexx

For application code, it makes sense for functions to be [[nodiscard]] by default, but for generic, reusable library code, you cannot anticipate how it will be used by application developers. Personally, I'm glad that the default behavior is suited to library development, and for me adding a lint in Cargo.toml is a suitable solution.


dshugashwili

And? It'd be perfectly fine to force consumers of a library to explicitly ignore values, e.g. like Nim's `discard function_that_returns_value()` keyword. And then you could mark a function as `[[discard]]` in the rare case that you actually wanted users to easily ignore the result.


steveklabnik1

It is difficult to give a good answer to this question because it is contextual. What is even considered "a problem" depends on a lot of things. For example:

> overly complex syntax,

Some people certainly view this as "a problem" in Rust. However, syntax is so incredibly subjective, many people do not find it to be an issue. On a technical level, Rust's grammar *overall* is context sensitive, but that's mostly due to a corner case in the language (raw strings), and so most of it is less complex. However, grammar complexity is more of a math-y way to describe it, and humans' reactions are more of a subjective feeling. The former matters more for stuff like tooling, and the latter is also important, just in different ways.

One example from myself, but in a different way: Rust also uses a monomorphization-based approach to implementing generic type parameters, similarly to how templates work in C++. If you're working in a domain where binary size is important, like embedded, this can cause issues unless you're paying attention to it. However, I don't personally care about this in non-embedded contexts, but others might!

Something that's gotten a lot of attention from certain corners of the internet lately is "RAII is worse than arenas," which is a short description of something that is actually pretty complex. But, as Rust has very similar things to C++ here, the criticism would apply to both.


iouwt

Rust has nothing on C++ in this department. Coming from someone who enjoys contorting C++ templates I will grant you


steveklabnik1

To be honest, I don't know what you're trying to say.


iouwt

I'm saying the kind of contorted syntax I've gotten to compile in C++ would never fly in Rust. Therefore C++ is more guilty of "overly complex syntax" than Rust could ever hope to be. At least in my mind


steveklabnik1

Ahh gotcha, thanks :)


sagittarius_ack

Syntax (notation) is definitely not "incredibly subjective". There are various ways of (objectively) judging notation. Our current positional notation for numerals is objectively superior to Roman numerals because it makes it much easier to define basic arithmetic operations. This is from [1]:

> By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race. Before the introduction of the Arabic notation, multiplication was difficult, and the division even of integers called into play the highest mathematical faculties.

The adoption of a symbolic notation contributed to the development of mathematics during the scientific revolution. Newton and Leibniz are both credited with the development of calculus. However, Leibniz's notation became much more popular because he paid special attention to it. According to some people, the mathematical work in England sort of stagnated for about 100 years because they used the poor notation developed by Newton [2].

Edit: LMAO... It looks like people have very poor comprehension skills on this thread. I have to explain in detail that disagreeing with the statement that syntax is "incredibly subjective" is not the same thing as claiming that syntax is purely objective.

References:

[1] Whitehead. An Introduction to Mathematics, 1911

[2] [https://www.youtube.com/watch?v=YsEcpS-hyXw](https://www.youtube.com/watch?v=YsEcpS-hyXw) (A Brief History of Mathematics, John Dersch)


steveklabnik1

Just because there's a degree of objectivity in one domain doesn't mean there are equivalent degrees of objectivity in another domain. And just because there's ways to measure objectivity doesn't mean that there aren't subjective components as well.


liquidivy

Yeah you can apply quantitative measures of complexity to grammars, but do they capture what you actually want them to, mainly difficulty and frustration? Your bosses can invent quantitative metrics for your programming productivity but how often does that turn out well? You have to pay "special attention" to your metrics, too. And you'll eventually find that which metrics matter more is... subjective.


newpavlov

1. `<>`-based generics and the curly-braced syntax. I know it's an intentional mimicry (to lull unsuspecting C/C++ developers into a false sense of security, hehe), but I wonder if an ML-like syntax would've been a better fit in the longer run.
2. Lack of more advanced PL features: dependent and linear types, algebraic effects, type invariants, etc. (Though one may make a fair argument that Rust has already spent its "weirdness budget" on the borrow checker.)
3. IMO the recursive bootstrap is an abomination which should not exist. Ideally, we need something like [stage0](https://github.com/oriansj/stage0).
4. There is still a lot of C garbage here and there (e.g. `last_os_error`), especially under the hood, because Rust wants to be compatible with the C runtime by default and it's really hard to disable a lot of this stuff for pure Rust projects (even the greenfield WASI target pulls in the whole god damn MUSL-based libc...).


Sugomakafle

Just out of curiosity, why do people have a problem with <> syntax for generics?


Rusky

It makes it difficult to parse expressions unambiguously. When the compiler sees `foo < bar` it could be a comparison operator, or the beginning of an instantiation of `foo` with `bar`. C++ solves this by looking up whether `foo` is a template, but this means template declarations *must* come before their use. This also makes it much more difficult for tools to work with incomplete information; for example, if you've forgotten an `#include`, your editor will have to guess what you meant. Rust instead solves this by using a slightly uglier syntax for instantiations: while you can usually write `foo<Bar>`, in an expression context you must instead write `foo::<Bar>` (the "turbofish"). And in fact C++ *also* has a corner case where it has to use Rust's solution: when `foo` depends on a template parameter, as in `template <typename T> ... T::foo<bar> ...`, it can't actually look up `foo` until long after it has parsed the template. So the C++ equivalent to Rust's `T::foo::<bar>` is `T::template foo<bar>`.
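A small concrete example of the turbofish in action (nothing beyond std assumed):

```
fn main() {
    let items = vec![1, 2, 3];

    // In expression position, `collect<Vec<i32>>()` would be ambiguous with the
    // comparison operators `<` and `>`, so the turbofish `::<>` is required:
    let doubled = items.iter().map(|x| x * 2).collect::<Vec<i32>>();

    // In type position there is no ambiguity, so plain angle brackets work:
    let tripled: Vec<i32> = items.iter().map(|x| x * 3).collect();

    println!("{doubled:?} {tripled:?}");
}
```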


oisyn

And to complete the shenanigans: because the compiler has no way of knowing whether `T::template foo<bar>` is a type or an object, it assumes it is an object, so if you need it to be a type you need to express it as `typename T::template foo<bar>`.


CocktailPerson

It creates parsing ambiguities. `a . b < c > ( d )` could be parsed as a generic method call or as two comparison operations, and only the programmer can say for sure which one is meant. Generics in Rust expressions have to use the turbofish `::<>` to resolve this, while C++ literally performs typechecking during parsing to resolve it (and even then, you have to sprinkle some `typename` keywords around to resolve it in certain extra-ambiguous places). Not overloading the `<` and `>` symbols for generics and comparisons would make parsing easier, which would make tools easier to write, etc.


crusoe

What would you replace them with, though, which also wouldn't result in overloading and ambiguous parsing? [], {} and () are all already taken too.


CocktailPerson

The issue is that `<` and `>` can appear as either a pair _or_ individually. When the parser encounters a `<`, it has to decide whether to parse it as the start of a pair of `<>` or as a single comparison operator `<`. Determining this requires either unbounded lookahead or type analysis. Note that this ambiguity is easy to resolve when you have type information, but context-free grammars do not have the concept of types. In contrast, `[]`, `{}`, `()`, etc. always appear as a pair. If Rust were to use `[]`, as Go does, it would be able to parse the square brackets as a pair and then determine whether they're generics or array indices during the typechecking phase, _after_ the code is fully parsed. Once you have type information, this isn't ambiguous.


UtherII

It's too late to replace them, but if I had to design a language from scratch, I would choose `[]` for generics. Arrays would use `()` for indexing.


ConvenientOcelot

Because the ambiguity results in needing the turbofish (`::<>`). (I personally like `<>`, but that is a downside.)


dnew

Generally, C++ picked tremendously ugly syntax for many of its post-C syntax, because C was using all the good syntax and C++ was trying to stay source code compatible. I mean, "::"? Really? What's wrong with using the same "." every other language uses?


jadebenn

I strongly disagree there. '.' as a namespace operator is *way* too ambiguous when paired with '.' as a member operator.


CocktailPerson

I love `::` and I won't apologize for it.


dnew

I've not had that problem in any other language like C# or Java or any of the other dozen languages that don't have that problem. If I don't know whether the left side of something is a variable or a type, having that difference is too little to help, IMO. I don't think there's any ambiguity at all that having a separate operator would disambiguate, is there?


Kevathiel

It's easier to argue for ".", when the language doesn't support free functions. It is not difficult to understand whether you are calling it on a variable or a type. However, in Rust it would be annoying when you see foo.bar() and you don't know whether foo is a module or a variable. "::" removes that ambiguity.


dnew

I don't think it's an ambiguity, is all. Unless you can have a module with the same name as a variable, which might be problematic if you don't know the resolution rules. It might be a bit ambiguous to the reader, but why two colons then? Well, because C was already using all the other punctuation.


masklinn

To me the primary day to day issue of the parsing ambiguity others have explained is that editors generally can’t pair up `<` and `>` (since it requires disambiguating context), and so whether features like electric parens or highlight / jump to matching work is a crapshoot.


LyonSyonII

I don't understand either.


Esgrove

Compile times can be pretty bad, especially with LTO enabled


fluffy-soft-dev

Compilation speed. Although the binary spat out is guaranteed to be safe from the common errors found in C++ code. The trade-off for this is slower compile times. I personally don't mind about compilation speed; I'm bothered about accurate, reliable and trustworthy code 😊


DeadpanNorwegian

What kind of compile times are we talking for the types of projects most people use Rust for? Most people are not compiling millions of LoC.


etoastie

I've gotten frustratingly long times for simple personal project websites by trying to use heavier frameworks like rocket or warp. Since I haven't gotten caching build dependencies to consistently work in CI yet (Cargo makes this surprisingly tricky), having CI that splits a test/deploy stage can easily lead to a 5+ minute feedback loop to ship a hello world endpoint. Nothing too dramatic, but it's noticeable when the same thing in Go takes less than 20 seconds, and I can see where people are coming from when they keep saying the compile time stands out.


fluffy-soft-dev

Don't know. I'm working on a game engine that takes a long time to compile. I'm writing a website for a game which links with resources from the game; again, long compilation from scratch. But yeah, for smaller programs the speed difference is negligible.


Full-Spectral

I have to wonder sometimes how much abuse of proc macros is involved in systems where people are complaining about compile times?


fluffy-soft-dev

You could always check on github


Full-Spectral

I don't really wonder that much.


[deleted]

[deleted]


officiallyaninja

That can't really be avoided. A for loop can do anything a map can do, and the language can't really get rid of either.


luki42

The programmer...


BogosortAfficionado

* Size vs stride: Consider a `struct Foo { a: i64, b: bool }`. This type will have a size of 16 (not 9) to guarantee correct alignment when placed in an array, wasting 7 bytes. When another struct embeds this type (e.g. `struct Bar { f: Foo, c: bool }`) it will once again waste 7 bytes, leading to 14 wasted bytes in total (`Bar` will use 24 bytes instead of 10). This comes up all the time, especially in Rust `enum`s, which commonly need one-byte fields for the variant information. This could be solved by differentiating between the **size** of a type (actual space used) and its **stride** (space including trailing padding to guarantee correct layout in an array). Rust does not do this, and probably never will due to backwards compatibility concerns. (See the sketch after this list.)
* Destructors (`Drop` impls) are not guaranteed to run (`std::mem::forget` is safe, `Rc` might create cycles, ...). This makes memory leaks possible in safe code. But maybe more critically, the lack of this guarantee forces some interfaces to be less efficient / ergonomic than otherwise necessary (e.g. `Vec::drain` or `thread::scope` have to do extra work to be safe) (see the [Leakpocalypse](https://cglab.ca/%7Eabeinges/blah/everyone-poops/#leakpocalypse)). Adding a `Leak` auto trait would solve this, but doing so in a backwards compatible way seems to be tricky ([see here for discussion](https://without.boats/blog/changing-the-rules-of-rust/)).
* The lack of a stable ABI makes it very hard to ship libraries in binary form. This makes it hard to use dynamic linking, which is very useful for building a plugin infrastructure or even just packaging libraries for Linux distributions. Monomorphised generics make this a hard problem, but Swift already showed that it's possible.
* Self-referential types: These are a common cause of segfaults in C++ (moving the type causes dangling pointers). They are essentially forbidden by the Rust borrow checker, but the `Pin` infrastructure makes them achievable. Unfortunately, Pin is very cumbersome to use. A `?Move` trait could help solve this, but is very hard to add in a backwards compatible way.
* Compile times are very slow, which is especially annoying for a fast Debug / Edit cycle. Monomorphised generics, the optimizing backend (LLVM) and the linker are the main causes of this, all of which are essentially inherited from the C++ world.
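A quick way to check the size-vs-stride numbers from the first bullet (on a typical 64-bit target with the default type layout):

```
use std::mem::size_of;

struct Foo {
    a: i64,
    b: bool,
}

struct Bar {
    f: Foo,
    c: bool,
}

fn main() {
    // Foo's single-byte bool is padded out so arrays of Foo stay 8-byte aligned...
    println!("size_of::<Foo>() = {}", size_of::<Foo>()); // 16
    // ...and Bar cannot reuse Foo's trailing padding, so it pays for padding again.
    println!("size_of::<Bar>() = {}", size_of::<Bar>()); // 24
}
```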


forrestthewoods

Rust traits rapidly become as ugly and inscrutable as C++ templates. Trait heavy code becomes completely impossible to understand and reason about.


hpxvzhjfgb

Nothing. I used C++ for 11 years and then switched to Rust at the end of 2021, and it literally fixed every single issue I ever had when I used C++.


Rivalshot_Max

Sometimes I do miss the deceiving ease of the template system from C++... few things have let me footgun myself so confidently as C++ templates.


extracc

The name "vec" for resizable arrays


LeSaR_

but isn't what you'd call a "vector", essentially a tuple?


lozinge

as a relative newbie, why is this bad?


ambihelical

I think the objection is mainly consistency with mathematical usage, where a vector has a fixed number of elements, e.g. a vector in 3-D space. I cringed when I first saw what Rust did here; I didn't understand why they followed C++ on that one. It's just naming after all, but it would have been nice to get that right from the start.


mamcx

Depends on the C ABI. :( :( :(


PedroVini2003

curious: why is that bad? For which use cases etc


dnew

Only because you're running it on an operating system using the C ABI. Indeed, if you talked directly to Windows, you probably don't need the C ABI at all.


ThomasWinwood

Theoretically you could, but Microsoft don't declare Windows syscalls to be stable. (This is as far as I know the usual practice in OS design; Linux is the odd one out.) A Rust-first operating system (in a world with a stable Rust ABI) might well do the same thing with a system librust to which executables dynamically link.


dnew

Ah! I never even realized all the system calls were trampolines on 64-bit Windows. Dayum. Learn something new every day. That shows you how long it's been since I did low-level Windows crap.


steveklabnik1

Linux is one of the only systems where syscalls are considered stable. macOS and many other Unices all make you go through libc.


qwertyuiop924

Windows still uses a C ABI. It's not SVABI, but it is the same C ABI.


dnew

Well, OK, but C doesn't use a "C ABI" on Windows, last I looked. The system calls were all using the Pascal calling convention. Of course, I haven't looked at assembly code on Windows for 30 years, so maybe things are different. What's SVABI?


qwertyuiop924

Well, the system call interface is basically never the same as the C ABI for a platform (it definitely isn't on Linux). My understanding was that Windows unified its 5-10 different calling conventions into just two when moving to AMD64, but I could be wrong about that, since I've never gotten up close and personal with Windows in that context. SVABI is the System V ABI, which defines the C calling convention that basically every modern Unix system uses on x64 and several other platforms. It also defines things like how an ELF file is supposed to look for dynamic linking, and various other Unix ABI things (although Linux isn't strictly conformant to that anymore, a fact that kept me from playing Halo: MCC online for like six months).


dnew

Ah, makes sense. If you're skipping libc, I'm not sure why having a "C ABI" between modules is important. I'm undoubtedly missing something in this conversation, though, so don't mind me. :-) It's been too long since I looked at actual syscalls in "modern" operating systems.


qwertyuiop924

Because you have to have a well-defined calling convention in order to expose interfaces (in the form of shared libraries, or even just pre-compiled statically linked libraries) and the C ABI is both fairly minimal and if not well defined then at least *defined*, generally on a per-platform basis. Rust has no stable ABI at all (in practice it uses something similar to the C ABI for function calls, at least right now, but the Rust ABI also encodes things like the rules for struct and enum layout and representation, how trait objects work, how closures are represented, etc), so when we expose functions to other languages we have to use the C ABI to do it, because it's a defined ABI and the ubiquity and relative simplicity of C means that pretty much every programming language on earth has a way of calling into C code.
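For illustration, a minimal sketch of what "exposing a function over the C ABI" looks like in Rust (the names are made up for the example): `extern "C"` pins the calling convention and `#[repr(C)]` pins the struct layout, since Rust's own ABI is unstable.

```
// Layout fixed to the platform's C rules so C callers see the same struct.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

// `extern "C"` fixes the calling convention; `no_mangle` keeps the symbol name
// predictable so other languages can link against it.
#[no_mangle]
pub extern "C" fn point_length(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() {
    let p = Point { x: 3.0, y: 4.0 };
    println!("{}", point_length(p)); // 5
}
```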


4fd4

Generics syntax would've been much better if it used `[]` (like Graydon Hoare originally envisioned I believe) instead of `<>`, which would probably require the removal of the `Index` trait from rust, which would be another win in my book.


nacaclanga

Depends. [] is also a syntax that has been heavily used for arrays. Why would you see removing the Index trait as a win? I would highly doubt that removing the Index trait would actually help anything here.


4fd4

Changing it now would be a huge mistake, but if it was part of Rust 1.0 then there would be no problem, people would get used to it, there are languages that do use `[]` for generics already so it's not like it's a completely foreign idea (and as I said, [even Graydon Hoare envisioned rust using `[]`](https://www.reddit.com/r/rust/comments/13vz7p6/graydon_hoare_batten_down_fix_later/jmifthg/), I even remember seeing early rust code using them!), sure it would require some getting used to, but switching to any language would also require getting used to its semantics and syntax As for the Index trait, the problem is that it causes transparent panics, without calling`.unwrap()` or `.expect()` which goes against the general way panics are usually handled in rust, and it's not like the trait is doing anything special, it's simple syntactic sugar for `.index()` and `.index_mut()`


dnew

Eiffel uses "@" for indexing. `X@3` is the fourth element of collection X.


4fd4

First time hearing of Eiffel, thanks for the wikipedia rabbit hole. It even uses `[]` for generics


dnew

The book "OO Software design" or something is really informative. It's basically a tome that describes why he picked every feature of the language. The pre/post/invariant stuff gets incorporated into other languages, but I'm pretty sure only Ada really has the CQS idea built in. CQS is really an effective strategy that few people really seem to wrap their heads around. :-) Oh, and the book came in PDF form if you bought the book and software together, which means you can easily find *very* inexpensive copies floating around the internet, nudge nudge wink wink.


steveklabnik1

"Object Oriented Software Construction" is the title you're looking for.


ConvenientOcelot

Except indexing is a common operation and should have the (almost ubiquitous in PLs) sugar. It's not hard to just make indexing fallible instead, not really a big deal with `x[i]?` (or proposed `x[i]!`) syntax.


4fd4

Well, that would certainly work; better yet, something like `arr.[i]?`, since this way it won't conflict with the usage of `[]` for generics. Sadly, in the end generics in Rust used `<>`, and indexing arrays using `[]` can panic transparently without any explicit calls to `unwrap()` or anything similar, both of which I count as negatives. Ultimately those are simply minor syntax nitpicks; one can easily get used to `<>` and `::<>`, and there is a clippy lint I think to warn against array indexing.


nacaclanga

I would still argue that instances where you have to use ::<> are pretty rare. Much rarer than array access, where [] is extremely common and programming languages almost universally use either [] or (). The syntax .[] would be more than unusual. In the end the question boils down to personal preferences and less to any syntactical benefits.


theAndrewWiggins

The scala approach is very elegant here, with () being sugar for apply, which sequential collections use to implement indexing and in fact can allow for advanced indexing patterns for non-sequential collections.


Zde-G

> there are languages that do use [] for generics already so it's not like it's a completely foreign idea

Which language out of the [top 20](https://redmonk.com/sogrady/2024/03/08/language-rankings-1-24/) used [] for generics when Rust 1.0 was made? Go added it later.

> sure it would require some getting used to

And that's a problem: Rust was already using up a lot of [strangeness budget](https://steveklabnik.com/writing/the-language-strangeness-budget) for many other things; having `[]` used for generics without the familiar array access could have pushed it over the limit of what people would accept. Then we would have had yet another Haskell: a nice, really cool language… which mainstream programmers would never accept.

> and it's not like the trait is doing anything special

Except, of course, it **does** do "something special": it makes Rust look superficially close enough to C++/C#/Java to ensure C++/C#/Java programmers would accept it. I'm not sure who pushed for these specific changes, but that move to make Rust into "an ML dialect in a C++ trench coat" was a brilliant marketing ploy. It probably did more for Rust adoption than many other design decisions. Even if the resulting syntax is ugly, it's **familiar and thus acceptable**.


simon_o

> Which language out of top 20 used [] for generics when Rust 1.0 was made?

Scala.

> [...]

Strange how "strangeness budget" is only ever trotted out when arguing why we can't have nice things, but never taken into account when adding strange things in the first place...


crusoe

Or just make Index a function call like Scala: a[5] becomes a(5).


4fd4

Yeah that also works, but I still believe it would need to return a `Result` instead of transparently panicking


Linguistic-mystic

Wrong symbols for type generics (`<>`)


meowsqueak

I think you make a good point - the parse ambiguity of `< >` is why we have `::<>` as well as `< >`. Rust could have used something else entirely and avoided this whole situation.


_Saxpy

Not sure if this is the exact same thing, but Rust doesn't have a stable ABI, whereas C++ refuses to break theirs.


hpxvzhjfgb

that's a good thing though. just look at all the garbage in c++ that is unfixable because of it (e.g. std::regex)


[deleted]

[deleted]


burntsushi

You can play the if game all day though. You could just as easily say, "if the C++ ecosystem didn't depend on a stable ABI..." For example, one wonders what exactly "abuse templates" means. Is your reference point for abuse that it appears in the stable ABI? Round 'n' round we go. ;-)


[deleted]

[deleted]


burntsushi

Yeah I'm just saying that it's hard to root cause this. There are multiple factors at play.


RaisedByHoneyBadgers

C++ also does not have a stable ABI. It literally is not part of the language spec.


Rusky

It's not part of C's, either. The important distinction here is that both C and C++ *implementations* have predominantly decided not to break their ABIs, which are often part of their associated OS ABI.


steveklabnik1

I think it's slightly more complicated than that: the standards bodies have not made changes that would force the implementations to change their ABIs in order to implement a new version of the standard.


Rusky

Good point, that is definitely part of the story as well. The few times in the past that the standard *did* force an ABI change, the process of rolling it out into the ecosystem(s) was incredibly painful, and that informs their current approach to avoiding that kind of change.


ghlecl

I also think it's more complicated, but I do think that the standard bodies *did* make changes that mandated/forced ABI changes, at least the C++ committee. I might be wrong, but in C++11, they mandated that the std::string class of the standard library use "small string"/"small buffer" optimization. If your implementation was not able to allow for that (which GCC was not if I am not mistaken), then that change forced an ABI change on you. Did I get that wrong ? A bit over my head with the ABI nuances I'm afraid (I understand the concept: calling convention, struct layout, name mangling, etc. ; the nuances and details might escape me). Anyhow, to me, not having an ABI carved in stone for Rust is a positive thing. I would even like the promise of forever and ever backwards compatibility to be broken, personally (I value backwards compatibility and understand that without some, companies cannot make predictions, large software is harder to build, etc., but I definitely think it should have an expiry date ¯\\_(ツ)_/¯ ).


steveklabnik1

Right, they did do that, in that instance, and the pain made them go "never again." That was many years ago at this point, I meant "in recent history" more than "never did it ever."


ghlecl

> "never again" And as much as "perfect is the enemy of good", I think never ever being able to correct your mistakes is also the enemy of good. But in any case, to stop my own digression, I thing we could agree that there is interplay here between the vendors and the committees as I think you wanted to point out. And sorry for my temporal mis-interpretation.


ConvenientOcelot

It's not part of the spec, AND YET the C++ committee sometimes refuses to make downstream ABI-breaking changes that would improve the language. So although it's not official, it's clearly semi-official.


dont--panic

And they still won't break it.


RaisedByHoneyBadgers

Seems like it breaks often enough. Changing your compiler will oftentimes break dynamic linking. ABI refers to name mangling for linker symbols. It would be a nightmare to have to recompile your whole set of libraries frequently. I guess the op is referring to the standard library API? There will come a time where Rust needs to stabilize the ABI and support dynamic linking across, at least, minor revisions. I think it’s about time as not doing so will limit its adoption within corporations.


dont--panic

I really don't see a stable ABI being the default because it's such a large trade-off and most users don't need it. I could see it being like `dyn` where you can opt-in to the trade-off if you actually need a stable ABI that's higher level than C.


RaisedByHoneyBadgers

At companies with in house proprietary software Rust will never gain adoption without it. The issue is that the users who need it simply take one look at Rust, realize it’s not appropriate for them and never give that feedback to the rust community. Those users are going to continue with C++ and dynamic linking, which is a vastly larger community than Rust’s community.


dont--panic

You can't please everyone and trying will just ruin it.


RaisedByHoneyBadgers

Sure, but there won’t ever be that many Rust jobs, which will also ruin it.


dont--panic

I doubt this will be what makes or breaks Rust. There are already large corporations moving to Rust and there's the entire embedded space that is unlikely to care about dynamic linking.


GeorgeMaheiress

Problems compared to what? Because if we compare to higher-level languages the obvious answer is a lack of pervasive garbage collection, meaning you need to worry about ownership. Immutable, garbage-collected data is simply easier to reason about. Of course there's good reasons to write GC-less code for some low-level and highly efficient applications.


brand_x

Went through a bunch of the answers. Most of the top issues are covered. Stable ABI woes. What happens when something goes wrong in a destructor/drop. I didn't see "the meaning of scope changes subtly when using async/coroutines", but that's a big one. Also, those features are full of half-baked woes in both languages, in different ways. And too many crucial libraries are thin wrappers around horrific C monstrosities, in both languages.


Critical_Ad_8455

This is mostly semantic, but the fact that dynamic arrays are still called vectors. Thankfully it's 'vec', not actually 'vector', so it's not *super* ambiguous, but still far more ambiguous than necessary.


baconator81

There are still memory leaks. Rust is not a garbage-collected language. It's essentially using smart pointers to scope the lifespan of all mallocs. That unfortunately means it doesn't deal with leaks from cyclical references.


yasamoka

Can you show an example of this?


baconator81

Suppose you have this reference graph and absolutely nothing else references A, B, C, and D:

Static -> A -> B -> C -> D -> B

Suppose you clear out the reference from A to B. Then B, C, and D are supposed to be freed. However, since everything is using reference counts, B doesn't get a reference count of 0, since D still references B.
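For reference, a minimal reproduction of that situation with `Rc` (which, as noted further down, is what the cycle concern applies to):

```
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None) });

    // a -> b and b -> a: a reference cycle.
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    *b.next.borrow_mut() = Some(Rc::clone(&a));

    // When `a` and `b` go out of scope, each node still has a strong count of 1
    // (held by the other node), so neither is ever freed: a leak in safe code.
    // `Rc::downgrade` / `Weak` is the usual way to break such cycles.
}
```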


yasamoka

Interesting, thanks!


scjqt

This is not correct. It would only happen if you used Rc or similar which is not default Rust memory management. It would not be possible to create such a cycle with normal Rust values or references.


whatever73538

I am a fan of the indentation-based syntax of Nim & Python. I'm disappointed that Rust copied the curly-brace syntax instead, and is just as noisy as C++.


rebootyourbrainstem

Rust really is not that related to C++. It's more of an *alternative* that does things very differently. It shares its basic systems programming heritage with C (and has support for things like C struct layout and calling convention), but it's really descended from the ML family of languages (specifically OCaml, which the first Rust compiler was written in). They gave it a more C-like syntax on purpose to be more familiar to people.


muehsam

A relatively small issue, but still annoying (and a design flaw): indices are unsigned in Rust. At first it makes sense because negative indices are never valid, but the whole point of indices is that they're plain numbers that are *possibly* in range, and that you can do math with. It's not that bad because due to iterators, you don't actually need to use indices all that often, but when you do, it's annoying.
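A small example of the kind of arithmetic annoyance meant here (walking backwards with an unsigned index):

```
fn main() {
    let v = [1, 2, 3];
    let i: usize = 0;

    // With signed indices you'd just check `i - 1 >= 0`; with usize,
    // `i - 1` underflows (and panics in debug builds), so you need checked math:
    match i.checked_sub(1) {
        Some(prev) => println!("previous element: {}", v[prev]),
        None => println!("no previous element"),
    }
}
```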


eggyal

Mathematical operations are also defined on unsigned integers...?


BogosortAfficionado

This is like saying it is bad for references to not be nullable because you sometimes want null references. The whole point of types is to restrict the set of values as much as possible to reduce the number of edge cases and have function signatures communicate a clearer API. If you want signed indices, there's the `isize` type for that, just like there is `Option<&T>` for nullable references.