Remove `WithCachedTypeInfo::stable_hash` #155329
nnethercote wants to merge 1 commit into rust-lang:main
@bors try @rust-timer queue
…, r=<try> Remove `WithCachedTypeInfo::stable_hash`.
Finished benchmarking commit (aef2982): comparison URL.

Overall result: ❌✅ regressions and improvements - please read.

Benchmarking this pull request means it may be perf-sensitive, so it is automatically marked as not fit for rolling up. You can override this, but we strongly advise not to, due to the risk of changing compiler perf. Next, please: if you can, justify the regressions found in this try perf run in writing, along with @bors rollup=never.

Instruction count: our most reliable metric, used to determine the overall result above. However, even this metric can be noisy.

Max RSS (memory usage): results (primary -1.5%, secondary -1.9%). A less reliable metric; may be of interest, but not used to determine the overall result above.

Cycles: results (secondary -3.8%). A less reliable metric; may be of interest, but not used to determine the overall result above.

Binary size: this perf run didn't have relevant results for this metric.

Bootstrap: 495.503s -> 489.042s (-1.30%)
Pull request author cannot be assigned as reviewer. Please choose another assignee.
Force-pushed from 0b026be to 27f66fe
Perf is pretty interesting here.

I think this is one of those cases where instruction counts are misleading, and overall the perf here is slightly positive. Combine that with it being a code simplification, and I'm inclined to pursue this.

@rustbot label: +perf-regression-triaged
The bootstrap time might be noise again. The base commit had a similarly big regression, and the next one on main is down by the same amount again. I guess this might also make the wall-time numbers tricky to analyze: it depends on whether the source of noise is something on the machine or something in the artifact. If it's on the machine, then those wall-time improvements might also be noise.
error: demangling(issue_60925::foo::Foo<issue_60925::llvm::Foo>::foo::h4b3099ec5dc5d306)
error: demangling(issue_60925::foo::Foo<issue_60925::llvm::Foo>::foo::h91bf17e30395b22a)
Why do these hashes change? Is there some impurity in the hashing?
It's because `impl<T: HashStable<Hcx>, Hcx> HashStable<Hcx> for WithCachedTypeInfo<T>` has changed from a two-step hash to a one-step hash. (Search for "hash that hash" in the diff.) The old code would hash the `T` to create a `Fingerprint`, and then hash that `Fingerprint`. The new code just hashes `T` directly.

(To double check this, I just tried reinstating the two-step hash and the symbol names reverted to their previous values.)
Force-pushed from 27f66fe to 515530e
We store a stable hash value in the most common interned values (e.g. types, predicates, regions). This is 16 bytes of data.
r? @oli-obk