Honestly, sounds great! Looking forward to the results. I do think Linux compile times matter, personally, and the development time saved by the compiler doing checks isn’t a perfect one-to-one trade-off for this project, because people like myself compile the kernel far more often than we develop for it, adding and removing stuff to trim it down for various platforms.
While you’re compiling, it would be interesting if you could find some Rust flags that disable checks to speed things up. Maybe there’s a config option that skips the checks downstream users can assume the actual devs already ran?
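Something in this vein is what I’m imagining (these are real rustc codegen flags, but whether they help much here is a guess on my part; the type and borrow checks themselves can’t be switched off):

```sh
# Skip runtime checks and debug info via rustc codegen flags; the
# type/borrow checking still runs, so this only trims part of the work.
export RUSTFLAGS="-C debug-assertions=off -C overflow-checks=off -C debuginfo=0"

# Use every core for the build.
export CARGO_BUILD_JOBS="$(nproc)"

cargo build --release
```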
CC @biscuitswalrus@aussie.zone
K, I spent way more time on this than I wanted to, but it’s here. There’s a lot to read, but here’s the graph:
The README explains everything I did to try to make it reasonably fair.
Note that the graph’s X-axis is logarithmic; the median compile time for Rust packages is an order of magnitude longer than for C or Go.
The sample size is pretty close to 50 packages per language, and I made my best attempt to ensure each included package used only one compiled language. Without a lot more work, there wasn’t much more I could do to get an apples-to-apples comparison.
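(If you’re wondering how a package gets bucketed by language: roughly, from its build dependencies. The snippet below illustrates that kind of heuristic; it is not the script’s exact logic, and the ripgrep path is a made-up example:)

```sh
# Illustrative heuristic only, not the script's actual logic: classify
# an AUR package by the makedepends lines in its .SRCINFO.
classify() {
  if grep -qE 'makedepends = (rust|cargo)' "$1/.SRCINFO"; then
    echo rust
  elif grep -qE 'makedepends = go$' "$1/.SRCINFO"; then
    echo go
  else
    echo other
  fi
}

classify ~/aur/ripgrep   # hypothetical package dir; prints "rust"
```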
One thing to note is that I downloaded all package sources outside of the timing step. Rust, especially with cargo packages, downloads many dependencies in the build() phase, whereas with C they’re mostly already downloaded. So a significant amount of Rust build time is actually spent downloading and compiling dependencies, which it has to do for each virgin build. Whether that makes this an unfair comparison is debatable; I will point out that Go, however, does exactly the same thing: library dependencies are downloaded and compiled at build time, same as Rust. This makes the Go median even more impressive, but has no bearing on the Rust vs. C discussion.
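If you want to see the download-versus-compile split yourself on a single project, the rough pattern (stock cargo and go subcommands, nothing from my script) is:

```sh
# Rust: fetch dependencies first, then time a network-free build.
cargo fetch --locked
time cargo build --release --offline

# Go: same idea using the module cache.
go mod download
time go build ./...
```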
A final note: entirely unintentionally, I apparently have no from-source Zig programs installed (via the AUR). I don’t know what to make of that. Is Zig really that far behind in popularity?

Anyway, all of the source and laborious explanation is there; if you’re running Arch, you can perform the same analysis, and most of the work is already done for you. You just need 4 pieces of non-standard software, two of which are probably installed on your machine anyway. Be aware, however: on my desktop it took 12 hours to re-download and clean-build the 276 qualifying AUR packages on my system, so it’s a long metric to run.
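For a taste of what running it involves, the core timing idea boils down to something like this stripped-down sketch (not the actual script, and the package list is a stand-in):

```sh
#!/usr/bin/env zsh
# Stripped-down sketch of the timing loop; the real script also handles
# language detection, source pre-downloading, and CSV output details.
zmodload zsh/datetime          # provides EPOCHSECONDS

PKGS=(ripgrep fzf jq)          # stand-ins; the real run covered 276 packages

for pkg in $PKGS; do
  cd ~/aur/$pkg || continue
  start=$EPOCHSECONDS
  makepkg --force --noconfirm >/dev/null 2>&1
  print -r -- "$pkg,$((EPOCHSECONDS - start))" >> ~/compile-times.csv
done
```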
Wow, I read through the blog post, and though I’m not a developer, I’ve compiled and built Linux packages and operating systems in the past, so now I want to fly home and give your script a go myself.
I enjoyed your write-up. I can’t comment on the programming, but I enjoy a good journey and story.
My final takeaway is your image. I’ll keep it in mind. Interesting!
You read through all that? Wow. Good on you! Even I didn’t re-read it, so there are probably typos all over.
Yeah, the code isn’t interesting. It’s just a bunch of zsh hacked together; I wouldn’t be surprised if you encounter issues running it. The only thing I’m pretty sure of is that it won’t break anything.
Good luck. If you do run it and get a graph, please post it; I’m interested to see results from other systems. Note that the script generates an SVG, so you’ll need to convert it to PNG to post it, or just edit the csvtk graph command and change the svg suffix to png and it’ll create a PNG for you.
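(For the conversion, any of the usual tools handle SVG to PNG; graph.svg below stands in for whatever your run names the output:)

```sh
# Either one converts the generated SVG into a postable PNG.
rsvg-convert -o graph.png graph.svg    # librsvg
magick graph.svg graph.png             # ImageMagick 7
```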
Also, I meant to do this in the README: a huge shout-out to the author of csvtk. It’s a fantastic tool, and I only just discovered the graph command, which does so much. It has a built-in, simple pivot-table function (a group argument) that replaced a whole other tool and process step. Seriously nice piece of software for working with CSV.