blob: Fix race when discarding already-consumed blobs
When reading objects via the gitpipe package, the CatfileObject
pipeline step needs to wait before reading the next object until the
current one has been fully consumed by the caller. To do so, we
synchronize on seeing EOF via a synchronizingObject wrapper: as soon
as we see that EOF, we close the object's done channel and go on to
read the next object. As soon as that happens, the old object is
closed for reading by the catfile reading queue.
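
For illustration, the following is a minimal sketch of such a wrapper.
The names and structure are assumptions for the example only and do not
mirror Gitaly's actual synchronizingObject implementation:

```go
import (
	"io"
	"sync"
)

// synchronizedReader is a hypothetical stand-in for the wrapper
// described above: it forwards reads to the underlying object and
// closes the done channel exactly once, as soon as the caller has
// seen EOF.
type synchronizedReader struct {
	reader io.Reader
	done   chan struct{}
	once   sync.Once
}

func (r *synchronizedReader) Read(p []byte) (int, error) {
	n, err := r.reader.Read(p)
	if err == io.EOF {
		// The caller has fully consumed the object: signal the
		// pipeline that it may read the next object. The underlying
		// object may be closed by the catfile reading queue after
		// this point.
		r.once.Do(func() { close(r.done) })
	}
	return n, err
}
```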
The logical consequence is that we must not read from an object after
having seen EOF, or we race against it being closed. That is exactly
what we do when processing blobs, though: we discard excess blob data
we didn't want to send to the RPC caller even if we've already seen
EOF. As a result, we may occasionally see an error when trying to
discard data of an object that has already been closed.
Fix this bug by only discarding blob data if we haven't seen EOF yet.
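
The following sketch shows the fixed logic under stated assumptions; it
is illustrative only and not Gitaly's actual code. The key point is
that the excess data is only drained while the object hasn't been fully
consumed yet:

```go
import (
	"errors"
	"io"
)

// streamBlob sends at most limit bytes of the blob to out and then
// drains any excess data, but only if the object hasn't been fully
// consumed yet. Function name and signature are hypothetical.
func streamBlob(object io.Reader, limit int64, out io.Writer) error {
	_, err := io.CopyN(out, object, limit)
	// CopyN returns io.EOF if the blob was shorter than the limit,
	// i.e. we have already fully consumed the object.
	seenEOF := errors.Is(err, io.EOF)
	if err != nil && !seenEOF {
		return err
	}

	// Only discard leftover data if we haven't seen EOF yet: after
	// EOF the synchronizing wrapper releases the object, so it may
	// already be closed and reading from it again would race.
	if !seenEOF {
		if _, err := io.Copy(io.Discard, object); err != nil {
			return err
		}
	}

	return nil
}
```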
Changelog: fixed
Fixes #3904