Add support for suspendable jobs #16657
Conversation
---
Could you provide the motivation for this commit, and in particular why each lock type was chosen within the confines of this commit?
---
Sorry, I should have made that clear in the commit message.
The issue I am trying to solve is to avoid holding the `MutexGuard` while waiting for the file lock.
This is an issue since we can only hold a single lock at a time. (I am a bit surprised I overlooked this previously.)
For the lock manager, we only insert into the hashmap during the single-threaded prepare-fingerprint phase, so we take `RwLock::write()`. During the multithreaded `Job` section, we take `RwLock::read()` on the hashmap.
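As a rough illustration of that split (this is a hedged sketch, not Cargo's actual code; `LockManager`, `register`, and `lookup` are hypothetical names): the write lock is only taken during the single-threaded setup phase, and the job threads afterwards only ever take the read lock, so they never block each other.

```rust
use std::collections::HashMap;
use std::sync::RwLock;
use std::thread;

// Hypothetical stand-in for the lock-manager state: unit id -> lock path.
struct LockManager {
    locks: RwLock<HashMap<u32, String>>,
}

impl LockManager {
    // Single-threaded prepare-fingerprint phase: take RwLock::write().
    fn register(&self, unit: u32, path: &str) {
        self.locks.write().unwrap().insert(unit, path.to_string());
    }

    // Multithreaded Job phase: RwLock::read() allows concurrent lookups.
    fn lookup(&self, unit: u32) -> Option<String> {
        self.locks.read().unwrap().get(&unit).cloned()
    }
}

fn main() {
    let mgr = LockManager { locks: RwLock::new(HashMap::new()) };
    // Setup happens before any job threads exist.
    mgr.register(1, "target/debug/.lock-1");
    mgr.register(2, "target/debug/.lock-2");

    // Job threads only read; no writer can starve them here.
    thread::scope(|s| {
        let mgr = &mgr;
        for unit in [1, 2] {
            s.spawn(move || {
                assert!(mgr.lookup(unit).is_some());
            });
        }
    });
    println!("ok");
}
```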
---
Sounds like this is a fix independent of the rest of the PR?
Can we split it out and update the title to focus on the user impact?
---
I worry about the maintainability of having side-band communication going on.
Is it possible for us to manage this within the existing flow? For example, could we use `Poll` on `Job::run` to return early if we couldn't acquire the lock?
---
Hmmm, I think it might be tricky to use `Poll` with the generic structure of `Job` (e.g. how to resume the job, since internally it's a closure).
Though I am happy to look into that to see what is possible.
---
Wait, if we know a priori what jobs will actually build (see #16659 (comment)), then why can't we do all of the lock processing before the build, grabbing our exclusive locks and blocking there, rather than waiting?
---
Oh, that's an interesting idea.
So basically the `DrainState` could try to lock before calling `DrainState::run` (running the job).
This is single-threaded, so it will change the way we take the locks.
I am imagining spawning a thread to take each lock, so as to avoid blocking the main job queue loop.
I think this could be encapsulated in the `LockManager` (and use `try_lock()` as an optimization to avoid spawning a new thread).
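A minimal sketch of that idea, with a `Mutex` standing in for the file lock and hypothetical names (`acquire_async`, `Message::Acquired`): try the fast path first, and only spawn a helper thread when the lock is contended, so the single-threaded queue loop is never blocked.

```rust
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex, TryLockError};
use std::thread;
use std::time::Duration;

// Message the lock helper sends back to the job queue loop.
enum Message {
    Acquired(u32), // unit id whose lock is now held
}

// try_lock() fast path; spawn a thread only on contention.
fn acquire_async(unit: u32, lock: Arc<Mutex<()>>, tx: Sender<Message>) {
    let fast = match lock.try_lock() {
        Ok(_guard) => true, // a real impl would hand the guard to the job
        Err(TryLockError::WouldBlock) => false,
        Err(TryLockError::Poisoned(e)) => panic!("poisoned: {e}"),
    };
    if fast {
        tx.send(Message::Acquired(unit)).unwrap();
    } else {
        thread::spawn(move || {
            let _guard = lock.lock().unwrap(); // blocks this helper thread only
            tx.send(Message::Acquired(unit)).unwrap();
        });
    }
}

fn main() {
    let lock = Arc::new(Mutex::new(()));
    let (tx, rx) = channel();

    // Simulate another process holding the lock briefly.
    let held = lock.clone();
    let holder = thread::spawn(move || {
        let _g = held.lock().unwrap();
        thread::sleep(Duration::from_millis(50));
    });
    thread::sleep(Duration::from_millis(10));

    // Returns immediately either way; the queue loop keeps running.
    acquire_async(1, lock, tx);
    let Message::Acquired(unit) = rx.recv().unwrap();
    assert_eq!(unit, 1);
    holder.join().unwrap();
    println!("unit {unit} acquired");
}
```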
---
I'm saying that we acquire the locks in `compile` and downgrade after the build.
---
Hmmm, well if you try to take the locks in `compile` (assuming you are referring to `BuildRunner::compile`), that would block all jobs, since that is single-threaded. Unless I am missing something?
My rationale for trying to put them in `DrainState` is that it is what is doing the job scheduling, so it has the power to reschedule another job that is not blocked.
---
That's `super::compile`, which currently coordinates locks.
How much does grabbing as we go benefit in practice, when the build we block on would hold a shared lock until it's done, blocking our job and all dependents anyway? We can build some jobs that don't overlap, so a little?
---
> We can build some jobs that don't overlap, so a little?
The benefit we get is when there are build units that do not overlap.
From my perspective, this is the main benefit of fine-grained locking over what we currently have.
If we were to wait for everything, I feel like the benefits would be limited to a few more niche scenarios?
---
The goal we are working towards is non-blocking. If there is overlap, then we are blocking.
So the question then is: does the benefit from some overlap justify a more complex architecture (both for this and for blocking messages)?
---
I did some thinking on this, and I think we might be able to get away with simply locking before kicking off jobs.
The primary use case for fine-grained locking is to keep rust-analyzer's `cargo check` from blocking the user's `cargo build`, which slows down iteration during development.
If the user is making small incremental changes to their project, full rebuilds should be uncommon. So we will likely take shared locks on all of the dependencies, and we don't need an exclusive lock for those. We just need to take an exclusive lock on the things that changed (dirty units). Build scripts are still a problem with this, but I think those will not be dirty unless the user is editing the build script itself.
I think the big question for whether this is worth the extra complexity of suspending jobs is whether we care about build script units (or any other units that are shared between build and check) being blocking.
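A toy sketch of that policy, under stated assumptions (the names `LockMode` and `lock_mode` are hypothetical, and a plain map stands in for fingerprint results): clean dependencies get a shared lock, while dirty units, and anything we have no fingerprint for, get an exclusive lock.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum LockMode {
    Shared,    // clean dependency: other builds may read it concurrently
    Exclusive, // dirty unit: we will rewrite its artifacts
}

// Decide the lock mode per unit; unknown units are treated as dirty.
fn lock_mode(unit: &str, dirty: &HashMap<&str, bool>) -> LockMode {
    if *dirty.get(unit).unwrap_or(&true) {
        LockMode::Exclusive
    } else {
        LockMode::Shared
    }
}

fn main() {
    // Small incremental change: only the leaf crate is dirty.
    let dirty = HashMap::from([("serde", false), ("my-app", true)]);
    assert_eq!(lock_mode("serde", &dirty), LockMode::Shared);
    assert_eq!(lock_mode("my-app", &dirty), LockMode::Exclusive);
    println!("ok");
}
```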
### What does this PR try to resolve?

Split out of #16657

Currently we are holding a `MutexGuard` while waiting for the file lock, preventing multiple locks from being taken at once. This means that a single blocking unit can prevent other units from running.

For the lock manager, we only insert into the hashmap during the single-threaded prepare-fingerprint phase, so we take `RwLock::write()`. During the multithreaded `Job` section, we take `RwLock::read()` on the hashmap.

### How to test and review this PR?

See the "How to test and review this PR" section of #16657. However, unlike that PR, `--jobs 1` will not work with the changes in that PR. We would need to use `--jobs 2` and make sure that the first build unit is blocked, causing the second one to block. Though I think this is a bit difficult to set up a test for without the changes in #16657.

r? @epage
### What does this PR try to resolve?

This PR fixes a bug in `-Zfine-grain-locking` that prevents other jobs from running when a job is waiting on a lock. Cargo's job queue has previously never supported the ability to suspend a job that is already executing.
This PR adds support for suspending and resuming jobs being executed.
This also paves the way to add better blocking messages to the user (originally attempted in #16463).

Tracking issue: #4282
### Flow

1. Call `.try_lock()` to see if we will block.
2. If we will block, send `Message::Blocked`, letting the job queue reclaim the token for another job to run while we are blocked.
3. If `try_lock` succeeds, we have the lock and never blocked, so we simply start compiling.
4. Once the lock is acquired, send `Message::Unblocked`, letting the job queue know we are ready to continue.
5. If we are at the `--jobs` limit, wait for the job queue to reschedule us by calling `JobState.resume.recv()`, which blocks the current thread.
6. When a token is available, the job queue calls `ActiveJob.resume.send()`, which will let the job resume execution.

### Design

- Jobs use the `Queue<Message>` in `JobState` to notify the job queue that we are going to block.
- The job queue resumes a suspended job via a `std::sync::mpsc::channel`.

```mermaid
flowchart LR
    JQ["Job Queue (DrainState)"]
    J["Jobs"]
    J -- "Queue<Message>" --> JQ
    JQ -- "resume mpsc::channel" --> J
```

### How to test and review this PR?
To test this, I found the easiest way to do it synthetically: inject a `std::thread::sleep` in the lock manager for a given lock, along with passing `--jobs 1` to make a blocked job block the entire build.

Test script
Doing this on `master` will result in a blocked build that never completes, while the changes in this PR will finish the build.

r? @epage
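The suspend/resume handshake from the Flow and Design sections can be sketched with plain std channels. This is a heavily simplified, hedged sketch: the real `DrainState`, `JobState`, and `ActiveJob` carry far more state, and `run_job` here is a hypothetical stand-in for a job closure that hit a contended lock.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Messages jobs send to the queue (stand-in for Cargo's Queue<Message>).
enum Message {
    Blocked(u32),   // job is waiting on a lock; its token can be reclaimed
    Unblocked(u32), // lock acquired; job wants a token to continue
}

// Per-job resume channel held by the queue (stand-in for ActiveJob.resume).
struct ActiveJob {
    resume: Sender<()>,
}

fn run_job(id: u32, to_queue: Sender<Message>, resume: Receiver<()>, contended: bool) {
    if contended {
        // try_lock failed: tell the queue and park until rescheduled.
        to_queue.send(Message::Blocked(id)).unwrap();
        // ... the lock is eventually acquired here ...
        to_queue.send(Message::Unblocked(id)).unwrap();
        resume.recv().unwrap(); // JobState.resume.recv(): blocks this thread
    }
    // ... compile ...
}

fn main() {
    let (to_queue, from_jobs) = channel();
    let (resume_tx, resume_rx) = channel();
    let job = ActiveJob { resume: resume_tx };

    let t = thread::spawn(move || run_job(1, to_queue, resume_rx, true));

    // Job queue loop (DrainState stand-in).
    loop {
        match from_jobs.recv().unwrap() {
            Message::Blocked(id) => println!("job {id} blocked; token reclaimed"),
            Message::Unblocked(id) => {
                println!("job {id} unblocked; granting token");
                job.resume.send(()).unwrap(); // ActiveJob.resume.send()
                break;
            }
        }
    }
    t.join().unwrap();
}
```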