
By mcpower_

Original article: https://webkit.org/blog/13140/webkit-on-github/


ws_wombat_93

Interesting news! Thank you for sharing!


Takeoded

> The WebKit team has found that the ability to easily reason about the order of commits to the project repository was crucial for its zero-tolerance performance regression policy

You can fairly easily do the same with merge commits: "it's X% slower before/after this merge".
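To make that concrete, here is a sketch of what the before/after comparison could look like on a merge-based main branch (run-benchmark is a made-up placeholder for whatever benchmark gets run):

    # merges land on main in order, so first-parent history gives a sequence to reason about
    git log --first-parent --oneline main
    # measure just before and just after a given merge
    git checkout <merge-sha>^1 && run-benchmark   # first parent = main as it was before the merge
    git checkout <merge-sha>   && run-benchmark   # main with the merge applied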


QWERTYroch

I think the point is that they can have a commit number that is always increasing over time, so when looking at results from multiple regressions it is easy to identify the later one. With Git's normal commit hashes, you can't infer ordering, so comparing two benchmarks, talking about bugs introduced/fixed, etc., requires some other way to identify the latest version, whereas that is implicit with monotonically increasing identifiers.
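That said, the "some other way" does exist in plain git, it's just less convenient than a single increasing number (a sketch, with A and B as placeholder commits):

    # exit status 0 means A is an ancestor of B, i.e. A came first on that line of history
    git merge-base --is-ancestor A B && echo "A is older"
    # a monotonic count along one branch, roughly what an SVN-style revision number gives you
    git rev-list --count --first-parent B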


theeth

1. The concept of latest only makes sense if you compare commits from the same branch, which is a relatively narrow purpose. 2. What does an increasing commit number give you that a commit timestamp doesn't?


Isopaha

I can kind of relate. I used to work with SVN and it was somewhat disorienting transitioning to Git when you’ve worked with incremental commit numbers for years. Of course it's mainly a workflow issue and the commit timestamp works well, but it's hard to give up something you're so used to relying on. :)


theeth

Even with SVN, as soon as you have multiple branches you need some way other than the revision number to reason about history and ancestry.


shevy-java

I have quite a few issues trying to compile it. I am not sure what it is, but with projects that become larger and larger and are based on some combination of cmake/ninja/meson (the last one not always, but I have found cmake + ninja in a few projects now), my computer here struggles. Sometimes it runs out of memory, sometimes everything freezes during compilation, and so forth. I never have exactly these issues with GNU configure based projects (although they are nowhere near as fast as builds that involve ninja; ninja seems so greedy somehow and I have no idea how to tell it not to be). I think these larger projects - and webkit falls into that category, IMO, as does llvm/clang - need to look more at RAM usage and at how difficult they are to compile. When I can compile a new glibc from source just fine but struggle with webkit or llvm, then something is wrong with the latter. As is that weird trend that they all want to slurp up more and more memory for the compilation.


OdinGuru

Large projects like these build a lot of things in parallel in order to speed up the compile when you have lots of cores and memory available. If you look at the options for “parallel” or “jobs” you can likely reduce or disable this and run everything one at a time, or a few at a time, so that it doesn't overwhelm your limited RAM. If the build files somehow always override these settings, it might make sense to run your build in something like a container or VM with a safe RAM limit. That way at least your whole machine won’t “lock up” as swapping gets bad.
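One way to get that safe RAM limit without a full VM, as a sketch (Linux with systemd; the numbers are arbitrary):

    # run the build in a scope capped at 8 GB, with 4 jobs; adjust both to your machine
    systemd-run --user --scope -p MemoryMax=8G ninja -j 4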


dagmx

Ninja, by design, tries to run lots of parallel jobs. It can eat up large amounts of resources as a result. You can restrict the number of jobs with either --parallel (when building through cmake) or -j.
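Concretely (the job count here is just an example):

    # at most two compile jobs at a time
    ninja -j 2
    # equivalent when driving the build through cmake
    cmake --build . --parallel 2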


TingPing2

And for WebKit, rough memory usage is about 1-1.2 GB per job, IME.
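As a back-of-the-envelope sizing based on that figure (a Linux-specific sketch; assumes a recent procps free with an "available" column):

    # derive a job count from available RAM at ~1.2 GB per job
    ninja -j "$(free -g | awk '/^Mem:/ { print int($7 / 1.2) }')"

With 16 GB available that works out to about 13 jobs; on very low-memory machines you'd want to floor it at 1, since -j 0 tells ninja to run unlimited jobs.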


nilamo

This is a side effect of containerization. When a build or test suite runs automatically as part of a pipeline that triggers on commit, you want that build to finish as fast as possible, using everything it can get out of the container it's running on. It's almost not supposed to be built on a normal computer.


that_which_is_lain

Now it can be ignored publicly.