Centralize logging strategy to avoid individual files
The following discussion from !2734 (merged) should be addressed:
@pks-t started a discussion: @stanhu So if I get this right, we're bypassing the usual logging mechanisms and writing directly into a file. I fear this opens up quite a few problems, especially as I expect the LFS smudge filter to be a high-volume executable in some repos. This raises several issues:
- No log rotation, so we keep on adding to the same file. As a result, this file may grow huge.
- No discarding of old logfiles, again causing the logs to grow indefinitely.
- An additional logfile in a seemingly random place that the admin has to know exists.
- It's not connected to any monitoring solutions by default and thus not observable.
I'm definitely all for improving the visibility of this command, but I'm not sure whether writing to a separate logfile is the right way to do it.
We had some discussions around this topic last week in the context of gitaly-git2go, but I don't think they actually resulted in an issue. A few ideas were floated that would improve the situation:
- Pass an additional file descriptor to executables. The executable may then write e.g. JSON-formatted log messages to that FD. Gitaly would consume all messages from it and forward them to the actual logging mechanism.
- Implement a Gitaly "service" that always listens on a local socket, accepting log messages from other components. Executables would simply connect to that socket and write to it. This doesn't necessarily have to be a gRPC service; it could also be a plain Unix domain socket or similar.
These mechanisms are definitely more complex to implement, but the nice thing is that they'd solve the problem once across all of Gitaly. Also, they'd be able to integrate with our usual logging methods.
Maybe @pokstad1 has additional input; I think he also took part in that discussion.