The slowdown is not going to be large in typical smallish repos.
And it does not seem to matter if the assistant reacts a little more
slowly in situations involving the expensive scan, since:
a) Those situations typically involve getting back in sync after something
has changed on a remote, often after a disconnect of some duration.
So taking a few seconds more is not noticeable.
b) If the scan finds things that it needs to do, it will start
blocking anyway once 10 transfers are queued (due to the use of
queueTransferWhenSmall). So only the speed of finding the first 10
transfers will be impacted by this change; a sketch of that blocking
behavior follows.
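A minimal sketch of that back-pressure, assuming a bounded STM queue;
the name mirrors git-annex's queueTransferWhenSmall, but this is not
its actual implementation:

    import Control.Concurrent.STM

    newtype Transfer = Transfer FilePath
      deriving Show

    -- Adding a transfer blocks (retries) once the queue already holds
    -- its bound of 10, so a slow scan only delays finding the first 10.
    queueTransferWhenSmall :: TBQueue Transfer -> Transfer -> IO ()
    queueTransferWhenSmall q t = atomically (writeTBQueue q t)

    main :: IO ()
    main = do
      q <- newTBQueueIO 10
      queueTransferWhenSmall q (Transfer "example")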
This commit was sponsored by Jochen Bartl on Patreon.
It was relying on segmentPaths working correctly, so when it didn't,
a file that did not exist sometimes got matched up with a non-null
list of results. Fixed by always checking that each parameter exists,
along the lines of the sketch below.
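A hedged sketch of that existence check, using hypothetical names
(checkParamsExist is not git-annex's actual code):

    import Control.Monad (unless)
    import System.Directory (doesPathExist)
    import System.Exit (die)

    -- Verify each input parameter exists before pairing it with
    -- segmentPaths' results, so a nonexistent file can no longer be
    -- matched up with another parameter's results.
    checkParamsExist :: [FilePath] -> IO ()
    checkParamsExist = mapM_ $ \p -> do
      exists <- doesPathExist p
      unless exists $ die (p ++ " not found")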
There are two reasons segmentPaths might not work correctly.
For one, it assumes that when the original list of paths
has more than 100 paths, it's not worth paying the CPU cost to
preserve input order.
For another, it fails when a directory such as "." or ".." or
/path/to/repo is in the input list, and the list of found paths
does not start with that same string. It should probably not be using
dirContains, but something else; the sketch below shows the failure
mode.
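A naive prefix-based containment check in the spirit of dirContains
(not its actual definition), showing why "." misbehaves:

    import Data.List (isPrefixOf)
    import System.FilePath (addTrailingPathSeparator)

    -- Treats d as containing f when f starts with d plus a path
    -- separator. This fails for ".": "subdir/1" does not literally
    -- start with "./", even though "." contains it.
    naiveDirContains :: FilePath -> FilePath -> Bool
    naiveDirContains d f = addTrailingPathSeparator d `isPrefixOf` f

    -- naiveDirContains "subdir" "subdir/1" == True
    -- naiveDirContains "."      "subdir/1" == False  -- the failure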
But it's not clear how to handle this fully. Consider
when [".", "subdir"] has been expanded by git ls-files to
["subdir/1", "subdir/2"]
-- Both of the inputs contained those results, so there's
no one right answer for segmentPaths. All these would be equally valid:
[["subdir/1", "subdir/2"], []]
[[], ["subdir/1", "subdir/2"]]
[["subdir/1"], [""subdir/2"]]
So I've not tried to improve segmentPaths.
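To make the ambiguity concrete, here is a hedged consistency check
(validSegmentation is illustrative, not part of git-annex); all three
candidate segmentations above satisfy it:

    import Data.List (isPrefixOf)

    -- A segmentation is consistent when it has one segment per input,
    -- the segments concatenate back to the found paths, and every path
    -- in a segment is contained in its input ("." contains everything).
    validSegmentation :: [FilePath] -> [FilePath] -> [[FilePath]] -> Bool
    validSegmentation inputs found segs =
      length segs == length inputs
        && concat segs == found
        && and (zipWith (\i -> all (contains i)) inputs segs)
      where
        contains "." _ = True
        contains d f = (d ++ "/") `isPrefixOf` f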