[0/5] Process connector bug fixes & enhancements

Message ID 20230309031953.2350213-1-anjali.k.kulkarni@oracle.com

Anjali Kulkarni March 9, 2023, 3:19 a.m. UTC
  From: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>

In this series, we add back filtering to the proc connector module. This
is required to fix some bugs and will also enable the addition of
event-based filtering, which improves performance for anyone interested
in a subset of process events, compared to the current approach of
sending all event notifications.

Thus, a client can register to listen for only exit or fork events, some
mix, or all of the events. This greatly enhances performance: currently,
a client must listen to all events, and there are 9 different event types.
For example, handling 3 event types (8K forks + 8K exits + 8K execs) takes
200ms, whereas handling 2 types (8K forks + 8K exits) takes about 150ms,
and handling just one type (8K exits) takes about 70ms.

The reason we need the above changes, as well as the new event type
PROC_EVENT_NONZERO_EXIT - which is sent by the kernel to a listening
application only when an exiting process has a non-zero exit status - is:

Oracle DB runs at large scale with hundreds of thousands of short-lived
processes, starting up and exiting quickly. A process-monitoring DB daemon,
which tracks and cleans up after processes that have died without a proper
exit, needs notifications only when a process dies with a non-zero exit
code (which should be rare).

This change will give Oracle DB substantial performance savings: it takes
50ms to scan about 8K PIDs in /proc, and about 500ms for 100K PIDs. The DB
does this check every 3 seconds, so over an hour we save 10 seconds for
100K PIDs.

Measuring the time taken using pidfds to monitor 8K process exits took 4
times longer - 200ms, compared to 70ms using only the exit notifications
of the proc connector. Hence, we cannot use pidfds for our use case.

This kind of new event could also be useful to other applications, such as
Google's lmkd daemon, which needs a killed process's exit notification.

This patch series is organized as follows -

Patch 1    : Needed for patches 2 & 3 to work.
Patches 2-3: Fix some bugs in proc connector; details in the patches.
Patch 4    : Allow non-root users access to proc connector events.
Patch 5    : Adds event based filtering for performance enhancements.

Anjali Kulkarni (5):
  netlink: Reverse the patch which removed filtering
  connector/cn_proc: Add filtering to fix some bugs
  connector/cn_proc: Test code for proc connector
  connector/cn_proc: Allow non-root users access
  connector/cn_proc: Performance improvements

 drivers/connector/cn_proc.c     | 103 +++++++++--
 drivers/connector/connector.c   |  13 +-
 drivers/w1/w1_netlink.c         |   6 +-
 include/linux/connector.h       |   6 +-
 include/linux/netlink.h         |   5 +
 include/uapi/linux/cn_proc.h    |  62 +++++--
 net/netlink/af_netlink.c        |  35 +++-
 samples/connector/proc_filter.c | 299 ++++++++++++++++++++++++++++++++
 8 files changed, 485 insertions(+), 44 deletions(-)
 create mode 100644 samples/connector/proc_filter.c
  

Comments

Christian Brauner March 9, 2023, 5:05 p.m. UTC | #1
On Wed, Mar 08, 2023 at 07:19:48PM -0800, Anjali Kulkarni wrote:
> [...]
> Measuring the time using pidfds for monitoring 8K process exits took 4
> times longer - 200ms, as compared to 70ms using only exit notifications
> of proc connector. Hence, we cannot use pidfd for our use case.

Just out of curiosity, what's the reason this took so much longer?

> 
> This kind of a new event could also be useful to other applications like
> Google's lmkd daemon, which needs a killed process's exit notification.

Fwiw - independent of this thing here - I think we might need to also
think about making the exit status of a process readable from a pidfd.
Even after the process has exited + been reaped... I have a _rough_ idea
how I thought this could work:

* introduce struct pidfd_info
* allocate one struct pidfd_info per struct pid, _lazily_ when the first pidfd is created
* stash struct pidfd_info in pidfd_file->private_data
* add .exit_status field to struct pidfd_info
* when the process exits, stash the exit status in struct pidfd_info
* add either a new system call or an ioctl() on the pidfd which returns EAGAIN
  or similar until the process has exited, and then becomes readable

Thought needs to be put into finding struct pidfd_info based on struct pid...