[5.6] Faster implementation for Collection::mapToDictionary #22774
Conversation
The old one had problems when eager loading multiple thousands of rows.
Building arrays with reduce is quadratic, so this isn't surprising. 👍 I'd suggest using a foreach loop if performance is the ultimate goal here; on a million items it ranges between 4x and 15x faster. In the typical case, foreach in mapToDictionary on a million items runs in 0.8s–4.1s for me on PHP 7.1 and 7.2. (All benchmarks on Homestead finished in < 1s for a million items with foreach; on macOS it's ~4s.)
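To make the asymptotics concrete, here is a standalone PHP sketch (hypothetical helper names, not the framework code) contrasting the two patterns: the accumulator array passed by value into the reduce callback is copied on write every iteration, while the foreach variant appends in place.

```php
<?php

// Quadratic: $groups is passed by value, so writing to it inside the
// callback forces PHP to copy the (growing) array on every iteration.
function buildWithReduce(array $items): array
{
    return array_reduce($items, function ($groups, $item) {
        $groups[$item % 10][] = $item; // copy-on-write copies $groups here
        return $groups;
    }, []);
}

// Linear: the dictionary is modified in place, no per-iteration copy.
function buildWithForeach(array $items): array
{
    $groups = [];
    foreach ($items as $item) {
        $groups[$item % 10][] = $item;
    }
    return $groups;
}

$items = range(1, 20000); // kept modest: the reduce version degrades fast

$start = microtime(true);
buildWithReduce($items);
printf("reduce:  %.3fs\n", microtime(true) - $start);

$start = microtime(true);
buildWithForeach($items);
printf("foreach: %.3fs\n", microtime(true) - $start);
```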
Would like to see this too. @Lapayo can you benchmark with plain foreach?
I just did, it's indeed a lot faster, depending on the use case. (y)
@Lapayo very much appreciated! I can't comment on the likelihood, but I would definitely favor the plain foreach. Collections are by definition "things of many", and thus almost any operation has to loop in some way. It's a hot code path: even the tiniest slowdown accumulates on larger sets, as you've demonstrated.
I wouldn't go so far as to say that the second level of improvement (plain foreach over each) would be that important. Imagine 100k items of an actual Eloquent record: you'd need a lot of memory available to avoid memory exhaustion, which would lead you to chunk instead.
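For illustration, chunking such a workload could look like this; a minimal sketch assuming a hypothetical User model with a posts relation:

```php
use App\User; // hypothetical Eloquent model, for illustration only

// Hydrate and process 1,000 rows at a time instead of loading all
// ~100k records into memory at once.
User::with('posts')->chunk(1000, function ($users) {
    foreach ($users as $user) {
        // process each user...
    }
});
```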
Used foreach. Thanks.
I see where you're coming from, but I think you have to look beyond that. This is a PR for the base Collection class. My take is: having identified a hot code path and found a solution which a) is 100% backwards compatible and b) gives so much better performance characteristics is nothing to disagree with; on the contrary.
Why not 5.5?
+1, would love to see this merged into 5.5 as well. |
I just ran into huge performance problems when eager loading many thousands of rows (~20k in my case).
I tracked it down to the Collection::mapToDictionary method, which is very slow when dealing with this amount of data.
I rewrote the method without map-reduce and it is much faster now. Let me know what you think!
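For reference, a minimal sketch of what a foreach-based mapToDictionary can look like (the exact code in this PR may differ); it assumes the callback returns a single-element [key => value] pair, which is the method's contract:

```php
/**
 * Run a dictionary map over the items, grouping values under
 * the key returned by the callback.
 *
 * @param  callable  $callback
 * @return static
 */
public function mapToDictionary(callable $callback)
{
    $dictionary = [];

    foreach ($this->items as $key => $item) {
        $pair = $callback($item, $key); // expected shape: [$groupKey => $value]

        $dictionary[key($pair)][] = reset($pair);
    }

    return new static($dictionary);
}
```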
The following benchmark was taken on an i7-6700 machine. (Benchmark script attached).
benchmap.zip