Fluentd memory leak in Kubernetes with "buffer_type file"

[Fluentd version] v1.2.5

[Environment] Kubernetes

[Configuration] https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-gcp/fluentd-gcp-configmap.yaml

[Problems] We are seeing Fluentd memory leaks with Kubernetes versions >= v1.10. After some investigation, we realized the log rotation mechanism changed between Kubernetes v1.9 and v1.10, which might be triggering the memory leak. Our guess is that the logs are now rotated in a way that Fluentd does not support or handle well.
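For context, here is a minimal sketch of the two parts of the linked configuration that interact with log rotation. The paths and parameters are assumptions based on the fluentd-gcp addon layout, not a verbatim copy of the configmap:

```
# Minimal sketch, assuming the fluentd-gcp addon layout; paths and
# parameters are illustrative rather than copied from the configmap.
<source>
  @type tail
  # Tails every container log file; in Kubernetes >= v1.10 the kubelet
  # itself rotates these files.
  path /var/log/containers/*.log
  pos_file /var/log/gcp-containers.log.pos
  read_from_head true
  tag reform.*
  format json
</source>

<match **>
  @type google_cloud
  # The file buffer also writes its chunk files under /var/log, and
  # each chunk file gets a ".log" suffix by default.
  buffer_type file
  buffer_path /var/log/fluentd-buffers/kubernetes.containers.buffer
</match>
```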

Some experiments with exactly the same Fluentd version and output plugin version, but different Kubernetes versions, are shown below: [image: experiment results]

Is there any known limitation of the log rotation mechanism that might trigger this? We reported a similar issue before, but I've verified that we are using a Fluentd version that includes that fix.

[The log rotation manager] In case it helps, the log rotation mechanism is implemented in https://github.com/kubernetes/kubernetes/blob/a3ccea9d8743f2ff82e41b6c2af6dc2c41dc7b10/pkg/kubelet/logs/container_log_manager.go#L150.


For now, we will move the buffer file path out of /var/log in GKE. Changing the buffer file suffix might still help prevent similar cases on other systems (e.g. for other Kubernetes users), though.
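A hedged sketch of that mitigation for other setups; the exact target path is an assumption, and anywhere outside /var/log should work:

```
<match **>
  @type google_cloud
  buffer_type file
  # Keep buffer chunks out of /var/log so they can never be confused
  # with rotated container logs; /var/lib is one assumed alternative.
  buffer_path /var/lib/fluentd-buffers/kubernetes.containers.buffer
</match>
```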