Fix pathological performance in trait solver cycles with errors #155355

erickt wants to merge 1 commit into rust-lang:main
Conversation
rustbot has assigned @dingxiangfei2009.
Fuchsia's Starnix system has had a multi-year bug where occasionally a typo could cause the Rust compiler to take 10+ hours to report an error (see #136516 and #150907). This was particularly hard to track down since Starnix's codebase is massive, over 384 thousand lines as of writing. With the help of treereduce, cargo-minimize, and rustmerge, after about a month of running we reduced it down to a couple [lines of code], which take only about 35 seconds to report an error on my machine. The bug also appears with `-Z next-solver=no` and `-Z next-solver=coherence`, but does not occur with `-Z next-solver` or `-Z next-solver=globally`.

I used Gemini to help diagnose the problem and propose the solution (which is the one in this patch):

1. The trait solver gets stuck in an exponential loop evaluating auto-trait bounds (like `Send` and `Sync`) on cyclic types that contain compilation errors (`TyKind::Error`).
2. Normally, if the solver detects a cycle, it prevents the result from being stored in the global cache, because the result depends on the current evaluation stack. However, when an error is involved, the depth tracking gets pinned to a low value, forcing the solver to rely on the short-lived provisional cache. Since the provisional cache is cleared between high-level iterations of the fulfillment loop, the solver ends up re-discovering and re-evaluating the same large cycle thousands of times.
3. The fix: allow global caching of results even if they appear stack-dependent, provided that the inference context is already "tainted by errors" (`self.infcx.tainted_by_errors().is_some()`). This violates the strict invariant that global cache entries must not depend on the stack, but it is safe because compilation is already guaranteed to fail due to the presence of errors. Prioritizing compiler responsiveness and termination over perfect correctness in error states is the right trade-off here.

I added the reduction as the test case for this. However, I don't see an easy way to catch this bug if it comes back. Should we add some way to time the test out if it takes longer than 10 seconds to compile? That could be a source of flakes, though.

I don't have any experience with the trait solver code, but I did try to review the code to the best of my ability. This approach seems like a bit of a band-aid, but I don't see a better solution. We could try to teach the solver not to clear the provisional cache in this circumstance, but I suspect that would be a pretty invasive change.

I'm guessing that if this does cause problems, it might report an incorrect error, but I (and Gemini) were unable to come up with an example that reported a different error with and without this fix.

Resolves #150907

[lines of code]: https://gist.github.com/erickt/255bc4006292cac88de906bd6bd9220a
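To illustrate the kind of obligation involved (an illustrative sketch, not the actual reduction, which is in the linked gist): proving an auto-trait bound like `Send` for a self-referential type forces the solver to traverse a cycle, because the obligation `Node: Send` depends on itself through the type's own fields.

```rust
use std::sync::{Arc, Mutex, Weak};

// Illustrative only -- not the actual Starnix reduction. A cyclic type:
// proving `Node: Send` requires `Weak<Mutex<Node>>: Send` and
// `Vec<Arc<Mutex<Node>>>: Send`, both of which depend on `Node: Send`
// again, so the trait solver must detect and resolve a cycle.
struct Node {
    parent: Weak<Mutex<Node>>,
    children: Vec<Arc<Mutex<Node>>>,
}

fn assert_send<T: Send>() {}

fn main() {
    // Auto-trait cycles like this are resolved coinductively and succeed;
    // the pathological case in this PR arises when an error type appears
    // somewhere inside such a cycle.
    assert_send::<Node>();
    println!("Node: Send holds");
}
```

Cycles of this shape compile fine on their own; the hang only manifests once a `TyKind::Error` ends up inside the cycle and the results can no longer be cached globally.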
Force-pushed from de41321 to cca98a2.
Thanks rustbot, I removed the issue links from the commit message.
r? lcnr
```diff
  let reached_depth = stack.reached_depth.get();
- if reached_depth >= stack.depth {
+ if reached_depth >= stack.depth || self.infcx.tainted_by_errors().is_some() {
```
this seems like a very general fix, does limiting this to goals which contain an error type themselves also fix the hang?
I agree that this change won't be unsound and at worst causes confusing errors. I am slightly worried as it does break an - albeit already broken - invariant, which can be a pain when trying to add assertions later on, or if there's weird behavior we don't quite get. Given that it's limited to the old solver, which will be removed this year, that is fine.
Please add a comment explaining why we're doing this, and maybe even make `let Some(_guar) = self.infcx.tainted_by_errors()` into a separate branch.
cc @rust-lang/types. I don't think this needs an FCP, as reverting it won't be a breaking change and it seems trivial enough; still want to make sure nobody is strongly opposed here :3
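A toy model (hypothetical, not rustc code; `Solver`, `evaluate`, and the goal graph are invented for illustration) of the caching behavior under discussion: if results only ever land in a cache that is cleared between fulfillment-loop iterations, the solver re-walks the entire goal graph every iteration, while promoting results to a persistent global cache, as this patch does in the tainted-by-errors case, makes the repeated work disappear.

```rust
use std::collections::HashMap;

// Toy model of the two caches: goal `n` depends on goals `n-1` and `n-2`.
// The "provisional" cache is cleared between fulfillment iterations;
// the "global" cache persists across them.
struct Solver {
    global: HashMap<u64, bool>,
    provisional: HashMap<u64, bool>,
    evaluations: u64,
    use_global: bool,
}

impl Solver {
    fn evaluate(&mut self, goal: u64) -> bool {
        if let Some(&r) = self.global.get(&goal) {
            return r;
        }
        if let Some(&r) = self.provisional.get(&goal) {
            return r;
        }
        self.evaluations += 1;
        let result = if goal < 2 {
            true
        } else {
            self.evaluate(goal - 1) && self.evaluate(goal - 2)
        };
        if self.use_global {
            self.global.insert(goal, result);
        } else {
            self.provisional.insert(goal, result);
        }
        result
    }
}

fn main() {
    for use_global in [false, true] {
        let mut s = Solver {
            global: HashMap::new(),
            provisional: HashMap::new(),
            evaluations: 0,
            use_global,
        };
        // Simulate five fulfillment-loop iterations: the provisional cache
        // is cleared before each one, the global cache is not.
        for _ in 0..5 {
            s.provisional.clear();
            s.evaluate(25);
        }
        println!("use_global={use_global}: {} evaluations", s.evaluations);
    }
}
```

Within a single iteration both variants memoize, so each goal is evaluated at most once per iteration; the difference is the repeated work across iterations, which in the real pathology gets multiplied by the exponential cost of re-discovering the un-cached cycle.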