From patchwork Tue Feb 21 21:11:33 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 60255
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
 dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
 akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
 tglx@linutronix.de
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v8 01/11] tracing/user_events: Split header into uapi and kernel
Date: Tue, 21 Feb 2023 13:11:33 -0800
Message-Id: <20230221211143.574-2-beaub@linux.microsoft.com>
In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>

The UAPI parts need to be split out from the kernel parts of user_events,
now that other parts of the kernel will reference it. Do so by moving the
existing include/linux/user_events.h into include/uapi/linux/user_events.h.
Signed-off-by: Beau Belgrave
---
 include/linux/user_events.h      | 52 ++++----------------------------
 include/uapi/linux/user_events.h | 48 +++++++++++++++++++++++++++++
 kernel/trace/trace_events_user.c |  5 ---
 3 files changed, 54 insertions(+), 51 deletions(-)
 create mode 100644 include/uapi/linux/user_events.h

diff --git a/include/linux/user_events.h b/include/linux/user_events.h
index 592a3fbed98e..13689589d36e 100644
--- a/include/linux/user_events.h
+++ b/include/linux/user_events.h
@@ -1,54 +1,14 @@
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
- * Copyright (c) 2021, Microsoft Corporation.
+ * Copyright (c) 2022, Microsoft Corporation.
  *
  * Authors:
  *   Beau Belgrave
  */
-#ifndef _UAPI_LINUX_USER_EVENTS_H
-#define _UAPI_LINUX_USER_EVENTS_H
-#include
-#include
+#ifndef _LINUX_USER_EVENTS_H
+#define _LINUX_USER_EVENTS_H
 
-#ifdef __KERNEL__
-#include
-#else
-#include
-#endif
+#include
 
-#define USER_EVENTS_SYSTEM "user_events"
-#define USER_EVENTS_PREFIX "u:"
-
-/* Create dynamic location entry within a 32-bit value */
-#define DYN_LOC(offset, size) ((size) << 16 | (offset))
-
-/*
- * Describes an event registration and stores the results of the registration.
- * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum
- * must set the size and name_args before invocation.
- */
-struct user_reg {
-
-	/* Input: Size of the user_reg structure being used */
-	__u32 size;
-
-	/* Input: Pointer to string with event name, description and flags */
-	__u64 name_args;
-
-	/* Output: Bitwise index of the event within the status page */
-	__u32 status_bit;
-
-	/* Output: Index of the event to use when writing data */
-	__u32 write_index;
-} __attribute__((__packed__));
-
-#define DIAG_IOC_MAGIC '*'
-
-/* Requests to register a user_event */
-#define DIAG_IOCSREG _IOWR(DIAG_IOC_MAGIC, 0, struct user_reg*)
-
-/* Requests to delete a user_event */
-#define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char*)
-
-#endif /* _UAPI_LINUX_USER_EVENTS_H */
+#endif /* _LINUX_USER_EVENTS_H */
diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h
new file mode 100644
index 000000000000..03f92366068d
--- /dev/null
+++ b/include/uapi/linux/user_events.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (c) 2021-2022, Microsoft Corporation.
+ *
+ * Authors:
+ *   Beau Belgrave
+ */
+#ifndef _UAPI_LINUX_USER_EVENTS_H
+#define _UAPI_LINUX_USER_EVENTS_H
+
+#include
+#include
+
+#define USER_EVENTS_SYSTEM "user_events"
+#define USER_EVENTS_PREFIX "u:"
+
+/* Create dynamic location entry within a 32-bit value */
+#define DYN_LOC(offset, size) ((size) << 16 | (offset))
+
+/*
+ * Describes an event registration and stores the results of the registration.
+ * This structure is passed to the DIAG_IOCSREG ioctl, callers at a minimum
+ * must set the size and name_args before invocation.
+ */
+struct user_reg {
+
+	/* Input: Size of the user_reg structure being used */
+	__u32 size;
+
+	/* Input: Pointer to string with event name, description and flags */
+	__u64 name_args;
+
+	/* Output: Bitwise index of the event within the status page */
+	__u32 status_bit;
+
+	/* Output: Index of the event to use when writing data */
+	__u32 write_index;
+} __attribute__((__packed__));
+
+#define DIAG_IOC_MAGIC '*'
+
+/* Request to register a user_event */
+#define DIAG_IOCSREG _IOWR(DIAG_IOC_MAGIC, 0, struct user_reg *)
+
+/* Request to delete a user_event */
+#define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char *)
+
+#endif /* _UAPI_LINUX_USER_EVENTS_H */
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 908e8a13c675..070551480747 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -19,12 +19,7 @@
 #include
 #include
 #include
-/* Reminder to move to uapi when everything works */
-#ifdef CONFIG_COMPILE_TEST
 #include
-#else
-#include
-#endif
 #include "trace.h"
 #include "trace_dynevent.h"
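For reference, a minimal user-space sketch of registering an event against the
ABI in the uapi header added above (struct user_reg and DIAG_IOCSREG). The
tracefs path and the "test u32 count" event format are illustrative assumptions
and not part of this patch; note also that later patches in this series replace
status_bit with a user-registered enable address.

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/user_events.h>

	int main(void)
	{
		struct user_reg reg;
		int data_fd;

		/* Assumed tracefs location of the user_events data file */
		data_fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

		if (data_fd < 0)
			return 1;

		memset(&reg, 0, sizeof(reg));
		reg.size = sizeof(reg);
		/* Hypothetical event: name "test" with a single u32 payload */
		reg.name_args = (__u64)(unsigned long)"test u32 count";

		if (ioctl(data_fd, DIAG_IOCSREG, &reg) < 0)
			return 1;

		/* Outputs filled in by the kernel */
		printf("write_index=%u status_bit=%u\n",
		       reg.write_index, reg.status_bit);

		close(data_fd);
		return 0;
	}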
From patchwork Tue Feb 21 21:11:34 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 60251
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
 dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
 akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
 tglx@linutronix.de
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v8 02/11] tracing/user_events: Track fork/exec/exit for mm lifetime
Date: Tue, 21 Feb 2023 13:11:34 -0800
Message-Id: <20230221211143.574-3-beaub@linux.microsoft.com>
In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>

During tracefs discussions it was decided that, instead of requiring a mapping
within a user process to track the lifetime of memory descriptors, we should
hook the appropriate calls. Do this by adding the minimal stubs required for
task fork, exec, and exit. Currently these are just NOPs; future patches will
implement these calls fully.
Suggested-by: Mathieu Desnoyers
Signed-off-by: Beau Belgrave
---
 fs/exec.c                   |  2 ++
 include/linux/sched.h       |  5 +++++
 include/linux/user_events.h | 18 ++++++++++++++++++
 kernel/exit.c               |  2 ++
 kernel/fork.c               |  2 ++
 5 files changed, 29 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index ab913243a367..d1c83e0dbae5 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -65,6 +65,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -1856,6 +1857,7 @@ static int bprm_execve(struct linux_binprm *bprm,
 	current->fs->in_exec = 0;
 	current->in_execve = 0;
 	rseq_execve(current);
+	user_events_execve(current);
 	acct_update_integrals(current);
 	task_numa_free(current, false);
 	return retval;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 853d08f7562b..a8e683b4291c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -69,6 +69,7 @@ struct sighand_struct;
 struct signal_struct;
 struct task_delay_info;
 struct task_group;
+struct user_event_mm;
 
 /*
  * Task state bitmask. NOTE! These bits are also
@@ -1522,6 +1523,10 @@ struct task_struct {
 	union rv_task_monitor		rv[RV_PER_TASK_MONITORS];
 #endif
 
+#ifdef CONFIG_USER_EVENTS
+	struct user_event_mm		*user_event_mm;
+#endif
+
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
diff --git a/include/linux/user_events.h b/include/linux/user_events.h
index 13689589d36e..3d747c45d2fa 100644
--- a/include/linux/user_events.h
+++ b/include/linux/user_events.h
@@ -11,4 +11,22 @@
 
 #include
 
+#ifdef CONFIG_USER_EVENTS
+struct user_event_mm {
+};
+#endif
+
+static inline void user_events_fork(struct task_struct *t,
+				    unsigned long clone_flags)
+{
+}
+
+static inline void user_events_execve(struct task_struct *t)
+{
+}
+
+static inline void user_events_exit(struct task_struct *t)
+{
+}
+
 #endif /* _LINUX_USER_EVENTS_H */
diff --git a/kernel/exit.c b/kernel/exit.c
index 15dc2ec80c46..e2aaaa81b281 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -68,6 +68,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -816,6 +817,7 @@ void __noreturn do_exit(long code)
 
 	coredump_task_exit(tsk);
 	ptrace_event(PTRACE_EVENT_EXIT, code);
+	user_events_exit(tsk);
 
 	validate_creds_for_do_exit(tsk);
diff --git a/kernel/fork.c b/kernel/fork.c
index 9f7fe3541897..180f6d86fbad 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -97,6 +97,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -2502,6 +2503,7 @@ static __latent_entropy struct task_struct *copy_process(
 
 	trace_task_newtask(p, clone_flags);
 	uprobe_copy_process(p, clone_flags);
+	user_events_fork(p, clone_flags);
 
 	copy_oom_score_adj(clone_flags, p);
From patchwork Tue Feb 21 21:11:35 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 60261
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
 dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
 akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
 tglx@linutronix.de
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v8 03/11] tracing/user_events: Use remote writes for event enablement
Date: Tue, 21 Feb 2023 13:11:35 -0800
Message-Id: <20230221211143.574-4-beaub@linux.microsoft.com>
In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>
As part of the discussions for aligning user_events with user-space tracers,
it was determined that user programs should register an aligned value to set
or clear a bit when an event becomes enabled. Currently a shared page is used,
which requires mmap(). Remove the shared page implementation and move to a
user-registered address implementation.

In this new model, three new values are specified during event registration
from user programs. The first is the address to update when the event is
either enabled or disabled. The second is the bit to set/clear to reflect the
event being enabled. The third is the size of the value at the specified
address. This allows a local 32/64-bit value in user programs to support both
kernel and user tracers. As an example, setting bit 31 for kernel tracers when
the event becomes enabled allows user tracers to use the other bits for ref
counts or other flags. The kernel side updates the bit atomically; user
programs need to update these values atomically as well.

User-provided addresses must be aligned on a natural boundary. This allows for
single-page checking and prevents odd behaviors such as an enable value
straddling two pages instead of a single page.

Currently page faults are only logged; future patches will handle them.
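For reference, a minimal user-space sketch of the registration model described
above, using the updated struct user_reg fields from this patch (enable_addr,
enable_bit, enable_size). The tracefs path, the "test u32 count" event format,
and the writev() payload layout (derived from the write_index description) are
illustrative assumptions rather than definitions made by this patch.

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/uio.h>
	#include <unistd.h>
	#include <linux/user_events.h>

	/* Must stay valid and naturally aligned; the kernel writes bit 31 here */
	static __u32 enabled;

	int main(void)
	{
		struct user_reg reg;
		int data_fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

		if (data_fd < 0)
			return 1;

		memset(&reg, 0, sizeof(reg));
		reg.size = sizeof(reg);
		reg.enable_addr = (__u64)(unsigned long)&enabled;
		reg.enable_bit = 31;               /* Bit the kernel sets/clears */
		reg.enable_size = sizeof(enabled); /* 4 or 8 bytes, naturally aligned */
		/* Hypothetical event: name "test" with a single u32 payload */
		reg.name_args = (__u64)(unsigned long)"test u32 count";

		if (ioctl(data_fd, DIAG_IOCSREG, &reg) < 0)
			return 1;

		/* Emit the payload only while a tracer has the event enabled */
		if (__atomic_load_n(&enabled, __ATOMIC_ACQUIRE) & (1U << 31)) {
			__u32 count = 42;
			struct iovec io[2] = {
				{ &reg.write_index, sizeof(reg.write_index) },
				{ &count, sizeof(count) },
			};

			writev(data_fd, io, 2);
		}

		close(data_fd);
		return 0;
	}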
Suggested-by: Mathieu Desnoyers Signed-off-by: Beau Belgrave --- include/linux/user_events.h | 53 ++- include/uapi/linux/user_events.h | 15 +- kernel/trace/Kconfig | 5 +- kernel/trace/trace_events_user.c | 586 ++++++++++++++++++++++++------- 4 files changed, 517 insertions(+), 142 deletions(-) diff --git a/include/linux/user_events.h b/include/linux/user_events.h index 3d747c45d2fa..0120b3dd5b03 100644 --- a/include/linux/user_events.h +++ b/include/linux/user_events.h @@ -9,13 +9,63 @@ #ifndef _LINUX_USER_EVENTS_H #define _LINUX_USER_EVENTS_H +#include +#include +#include +#include #include #ifdef CONFIG_USER_EVENTS struct user_event_mm { + struct list_head link; + struct list_head enablers; + struct mm_struct *mm; + struct user_event_mm *next; + refcount_t refcnt; + refcount_t tasks; + struct rcu_work put_rwork; }; -#endif +extern void user_event_mm_dup(struct task_struct *t, + struct user_event_mm *old_mm); + +extern void user_event_mm_remove(struct task_struct *t); + +static inline void user_events_fork(struct task_struct *t, + unsigned long clone_flags) +{ + struct user_event_mm *old_mm; + + if (!t || !current->user_event_mm) + return; + + old_mm = current->user_event_mm; + + if (clone_flags & CLONE_VM) { + t->user_event_mm = old_mm; + refcount_inc(&old_mm->tasks); + return; + } + + user_event_mm_dup(t, old_mm); +} + +static inline void user_events_execve(struct task_struct *t) +{ + if (!t || !t->user_event_mm) + return; + + user_event_mm_remove(t); +} + +static inline void user_events_exit(struct task_struct *t) +{ + if (!t || !t->user_event_mm) + return; + + user_event_mm_remove(t); +} +#else static inline void user_events_fork(struct task_struct *t, unsigned long clone_flags) { @@ -28,5 +78,6 @@ static inline void user_events_execve(struct task_struct *t) static inline void user_events_exit(struct task_struct *t) { } +#endif /* CONFIG_USER_EVENTS */ #endif /* _LINUX_USER_EVENTS_H */ diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h index 03f92366068d..22521bc622db 100644 --- a/include/uapi/linux/user_events.h +++ b/include/uapi/linux/user_events.h @@ -27,12 +27,21 @@ struct user_reg { /* Input: Size of the user_reg structure being used */ __u32 size; + /* Input: Bit in enable address to use */ + __u8 enable_bit; + + /* Input: Enable size in bytes at address */ + __u8 enable_size; + + /* Input: Flags for future use, set to 0 */ + __u16 flags; + + /* Input: Address to update when enabled */ + __u64 enable_addr; + /* Input: Pointer to string with event name, description and flags */ __u64 name_args; - /* Output: Bitwise index of the event within the status page */ - __u32 status_bit; - /* Output: Index of the event to use when writing data */ __u32 write_index; } __attribute__((__packed__)); diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig index d7043043f59c..b61a1bfbfc22 100644 --- a/kernel/trace/Kconfig +++ b/kernel/trace/Kconfig @@ -791,9 +791,10 @@ config USER_EVENTS can be used like an existing kernel trace event. User trace events are generated by writing to a tracefs file. User processes can determine if their tracing events should be - generated by memory mapping a tracefs file and checking for - an associated byte being non-zero. + generated by registering a value and bit with the kernel + that reflects when it is enabled or not. + See Documentation/trace/user_events.rst. If in doubt, say N. 
config HIST_TRIGGERS diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 070551480747..553a82ee7aeb 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include "trace.h" #include "trace_dynevent.h" @@ -29,34 +30,11 @@ #define FIELD_DEPTH_NAME 1 #define FIELD_DEPTH_SIZE 2 -/* - * Limits how many trace_event calls user processes can create: - * Must be a power of two of PAGE_SIZE. - */ -#define MAX_PAGE_ORDER 0 -#define MAX_PAGES (1 << MAX_PAGE_ORDER) -#define MAX_BYTES (MAX_PAGES * PAGE_SIZE) -#define MAX_EVENTS (MAX_BYTES * 8) - /* Limit how long of an event name plus args within the subsystem. */ #define MAX_EVENT_DESC 512 #define EVENT_NAME(user_event) ((user_event)->tracepoint.name) #define MAX_FIELD_ARRAY_SIZE 1024 -/* - * The MAP_STATUS_* macros are used for taking a index and determining the - * appropriate byte and the bit in the byte to set/reset for an event. - * - * The lower 3 bits of the index decide which bit to set. - * The remaining upper bits of the index decide which byte to use for the bit. - * - * This is used when an event has a probe attached/removed to reflect live - * status of the event wanting tracing or not to user-programs via shared - * memory maps. - */ -#define MAP_STATUS_BYTE(index) ((index) >> 3) -#define MAP_STATUS_MASK(index) BIT((index) & 7) - /* * Internal bits (kernel side only) to keep track of connected probes: * These are used when status is requested in text form about an event. These @@ -70,20 +48,14 @@ #define EVENT_STATUS_OTHER BIT(7) /* - * Stores the pages, tables, and locks for a group of events. - * Each logical grouping of events has its own group, with a - * matching page for status checks within user programs. This - * allows for isolation of events to user programs by various - * means. + * Stores the system name, tables, and locks for a group of events. This + * allows isolation for events by various means. */ struct user_event_group { - struct page *pages; - char *register_page_data; char *system_name; struct hlist_node node; struct mutex reg_mutex; DECLARE_HASHTABLE(register_table, 8); - DECLARE_BITMAP(page_bitmap, MAX_EVENTS); }; /* Group for init_user_ns mapping, top-most group */ @@ -106,12 +78,34 @@ struct user_event { struct list_head fields; struct list_head validators; refcount_t refcnt; - int index; - int flags; int min_size; char status; }; +/* + * Stores per-mm/event properties that enable an address to be + * updated properly for each task. As tasks are forked, we use + * these to track enablement sites that are tied to an event. + */ +struct user_event_enabler { + struct list_head link; + struct user_event *event; + unsigned long addr; + + /* Track enable bit, flags, etc. Aligned for bitops. */ + unsigned int values; +}; + +/* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ +#define ENABLE_VAL_BIT_MASK 0x3F + +/* Only duplicate the bit value */ +#define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK + +/* Global list of memory descriptors using user_events */ +static LIST_HEAD(user_event_mms); +static DEFINE_SPINLOCK(user_event_mms_lock); + /* * Stores per-file events references, as users register events * within a file this structure is modified and freed via RCU. 
@@ -145,33 +139,17 @@ static int user_event_parse(struct user_event_group *group, char *name, char *args, char *flags, struct user_event **newuser); +static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm); +static struct user_event_mm *user_event_mm_get_all(struct user_event *user); +static void user_event_mm_put(struct user_event_mm *mm); + static u32 user_event_key(char *name) { return jhash(name, strlen(name), 0); } -static void set_page_reservations(char *pages, bool set) -{ - int page; - - for (page = 0; page < MAX_PAGES; ++page) { - void *addr = pages + (PAGE_SIZE * page); - - if (set) - SetPageReserved(virt_to_page(addr)); - else - ClearPageReserved(virt_to_page(addr)); - } -} - static void user_event_group_destroy(struct user_event_group *group) { - if (group->register_page_data) - set_page_reservations(group->register_page_data, false); - - if (group->pages) - __free_pages(group->pages, MAX_PAGE_ORDER); - kfree(group->system_name); kfree(group); } @@ -242,19 +220,6 @@ static struct user_event_group if (!group->system_name) goto error; - group->pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, MAX_PAGE_ORDER); - - if (!group->pages) - goto error; - - group->register_page_data = page_address(group->pages); - - set_page_reservations(group->register_page_data, true); - - /* Zero all bits beside 0 (which is reserved for failures) */ - bitmap_zero(group->page_bitmap, MAX_EVENTS); - set_bit(0, group->page_bitmap); - mutex_init(&group->reg_mutex); hash_init(group->register_table); @@ -266,20 +231,367 @@ static struct user_event_group return NULL; }; -static __always_inline -void user_event_register_set(struct user_event *user) +static void user_event_enabler_destroy(struct user_event_enabler *enabler) +{ + list_del_rcu(&enabler->link); + + /* No longer tracking the event via the enabler */ + refcount_dec(&enabler->event->refcnt); + + kfree(enabler); +} + +static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr) +{ + bool unlocked; + int ret; + + mmap_read_lock(mm->mm); + + /* Ensure MM has tasks, cannot use after exit_mm() */ + if (refcount_read(&mm->tasks) == 0) { + ret = -ENOENT; + goto out; + } + + ret = fixup_user_fault(mm->mm, uaddr, FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE, + &unlocked); +out: + mmap_read_unlock(mm->mm); + + return ret; +} + +static int user_event_enabler_write(struct user_event_mm *mm, + struct user_event_enabler *enabler) +{ + unsigned long uaddr = enabler->addr; + unsigned long *ptr; + struct page *page; + void *kaddr; + int ret; + + lockdep_assert_held(&event_mutex); + mmap_assert_locked(mm->mm); + + /* Ensure MM has tasks, cannot use after exit_mm() */ + if (refcount_read(&mm->tasks) == 0) + return -ENOENT; + + ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, + &page, NULL, NULL); + + if (ret <= 0) { + pr_warn("user_events: Enable write failed\n"); + return -EFAULT; + } + + kaddr = kmap_local_page(page); + ptr = kaddr + (uaddr & ~PAGE_MASK); + + /* Update bit atomically, user tracers must be atomic as well */ + if (enabler->event && enabler->event->status) + set_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); + else + clear_bit(enabler->values & ENABLE_VAL_BIT_MASK, ptr); + + kunmap_local(kaddr); + unpin_user_pages_dirty_lock(&page, 1, true); + + return 0; +} + +static void user_event_enabler_update(struct user_event *user) +{ + struct user_event_enabler *enabler; + struct user_event_mm *mm = user_event_mm_get_all(user); + struct user_event_mm *next; + + while (mm) { + next = mm->next; + 
mmap_read_lock(mm->mm); + rcu_read_lock(); + + list_for_each_entry_rcu(enabler, &mm->enablers, link) + if (enabler->event == user) + user_event_enabler_write(mm, enabler); + + rcu_read_unlock(); + mmap_read_unlock(mm->mm); + user_event_mm_put(mm); + mm = next; + } +} + +static bool user_event_enabler_dup(struct user_event_enabler *orig, + struct user_event_mm *mm) +{ + struct user_event_enabler *enabler; + + enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT); + + if (!enabler) + return false; + + enabler->event = orig->event; + enabler->addr = orig->addr; + + /* Only dup part of value (ignore future flags, etc) */ + enabler->values = orig->values & ENABLE_VAL_DUP_MASK; + + refcount_inc(&enabler->event->refcnt); + list_add_rcu(&enabler->link, &mm->enablers); + + return true; +} + +static struct user_event_mm *user_event_mm_get(struct user_event_mm *mm) +{ + refcount_inc(&mm->refcnt); + + return mm; +} + +static struct user_event_mm *user_event_mm_get_all(struct user_event *user) +{ + struct user_event_mm *found = NULL; + struct user_event_enabler *enabler; + struct user_event_mm *mm; + + /* + * We do not want to block fork/exec while enablements are being + * updated, so we use RCU to walk the current tasks that have used + * user_events ABI for 1 or more events. Each enabler found in each + * task that matches the event being updated has a write to reflect + * the kernel state back into the process. Waits/faults must not occur + * during this. So we scan the list under RCU for all the mm that have + * the event within it. This is needed because mm_read_lock() can wait. + * Each user mm returned has a ref inc to handle remove RCU races. + */ + rcu_read_lock(); + + list_for_each_entry_rcu(mm, &user_event_mms, link) + list_for_each_entry_rcu(enabler, &mm->enablers, link) + if (enabler->event == user) { + mm->next = found; + found = user_event_mm_get(mm); + break; + } + + rcu_read_unlock(); + + return found; +} + +static struct user_event_mm *user_event_mm_create(struct task_struct *t) +{ + struct user_event_mm *user_mm; + unsigned long flags; + + user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL); + + if (!user_mm) + return NULL; + + user_mm->mm = t->mm; + INIT_LIST_HEAD(&user_mm->enablers); + refcount_set(&user_mm->refcnt, 1); + refcount_set(&user_mm->tasks, 1); + + spin_lock_irqsave(&user_event_mms_lock, flags); + list_add_rcu(&user_mm->link, &user_event_mms); + spin_unlock_irqrestore(&user_event_mms_lock, flags); + + t->user_event_mm = user_mm; + + /* + * The lifetime of the memory descriptor can slightly outlast + * the task lifetime if a ref to the user_event_mm is taken + * between list_del_rcu() and call_rcu(). Therefore we need + * to take a reference to it to ensure it can live this long + * under this corner case. This can also occur in clones that + * outlast the parent. 
+ */ + mmgrab(user_mm->mm); + + return user_mm; +} + +static struct user_event_mm *current_user_event_mm(void) +{ + struct user_event_mm *user_mm = current->user_event_mm; + + if (user_mm) + goto inc; + + user_mm = user_event_mm_create(current); + + if (!user_mm) + goto error; +inc: + refcount_inc(&user_mm->refcnt); +error: + return user_mm; +} + +static void user_event_mm_destroy(struct user_event_mm *mm) +{ + struct user_event_enabler *enabler, *next; + + list_for_each_entry_safe(enabler, next, &mm->enablers, link) + user_event_enabler_destroy(enabler); + + mmdrop(mm->mm); + kfree(mm); +} + +static void user_event_mm_put(struct user_event_mm *mm) +{ + if (mm && refcount_dec_and_test(&mm->refcnt)) + user_event_mm_destroy(mm); +} + +static void delayed_user_event_mm_put(struct work_struct *work) +{ + struct user_event_mm *mm; + + mm = container_of(to_rcu_work(work), struct user_event_mm, put_rwork); + user_event_mm_put(mm); +} + +void user_event_mm_remove(struct task_struct *t) { - int i = user->index; + struct user_event_mm *mm; + unsigned long flags; + + might_sleep(); + + mm = t->user_event_mm; + t->user_event_mm = NULL; + + /* Clone will increment the tasks, only remove if last clone */ + if (!refcount_dec_and_test(&mm->tasks)) + return; + + /* Remove the mm from the list, so it can no longer be enabled */ + spin_lock_irqsave(&user_event_mms_lock, flags); + list_del_rcu(&mm->link); + spin_unlock_irqrestore(&user_event_mms_lock, flags); + + /* + * We need to wait for currently occurring writes to stop within + * the mm. This is required since exit_mm() snaps the current rss + * stats and clears them. On the final mmdrop(), check_mm() will + * report a bug if these increment. + * + * All writes/pins are done under mmap_read lock, take the write + * lock to ensure in-progress faults have completed. Faults that + * are pending but yet to run will check the task count and skip + * the fault since the mm is going away. + */ + mmap_write_lock(mm->mm); + mmap_write_unlock(mm->mm); - user->group->register_page_data[MAP_STATUS_BYTE(i)] |= MAP_STATUS_MASK(i); + /* + * Put for mm must be done after RCU delay to handle new refs in + * between the list_del_rcu() and now. This ensures any get refs + * during rcu_read_lock() are accounted for during list removal. + * + * CPU A | CPU B + * --------------------------------------------------------------- + * user_event_mm_remove() | rcu_read_lock(); + * list_del_rcu() | list_for_each_entry_rcu(); + * call_rcu() | refcount_inc(); + * . | rcu_read_unlock(); + * schedule_work() | . + * user_event_mm_put() | . + * + * mmdrop() cannot be called in the softirq context of call_rcu() + * so we use a work queue after call_rcu() to run within. 
+ */ + INIT_RCU_WORK(&mm->put_rwork, delayed_user_event_mm_put); + queue_rcu_work(system_wq, &mm->put_rwork); } -static __always_inline -void user_event_register_clear(struct user_event *user) +void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm) { - int i = user->index; + struct user_event_mm *mm = user_event_mm_create(t); + struct user_event_enabler *enabler; - user->group->register_page_data[MAP_STATUS_BYTE(i)] &= ~MAP_STATUS_MASK(i); + if (!mm) + return; + + rcu_read_lock(); + + list_for_each_entry_rcu(enabler, &old_mm->enablers, link) + if (!user_event_enabler_dup(enabler, mm)) + goto error; + + rcu_read_unlock(); + + return; +error: + rcu_read_unlock(); + user_event_mm_remove(t); +} + +static struct user_event_enabler +*user_event_enabler_create(struct user_reg *reg, struct user_event *user, + int *write_result) +{ + struct user_event_enabler *enabler; + struct user_event_mm *user_mm; + unsigned long uaddr = (unsigned long)reg->enable_addr; + + user_mm = current_user_event_mm(); + + if (!user_mm) + return NULL; + + enabler = kzalloc(sizeof(*enabler), GFP_KERNEL); + + if (!enabler) + goto out; + + enabler->event = user; + enabler->addr = uaddr; + enabler->values = reg->enable_bit; +retry: + /* Prevents state changes from racing with new enablers */ + mutex_lock(&event_mutex); + + /* Attempt to reflect the current state within the process */ + mmap_read_lock(user_mm->mm); + *write_result = user_event_enabler_write(user_mm, enabler); + mmap_read_unlock(user_mm->mm); + + /* + * If the write works, then we will track the enabler. A ref to the + * underlying user_event is held by the enabler to prevent it going + * away while the enabler is still in use by a process. The ref is + * removed when the enabler is destroyed. This means a event cannot + * be forcefully deleted from the system until all tasks using it + * exit or run exec(), which includes forks and clones. + */ + if (!*write_result) { + refcount_inc(&enabler->event->refcnt); + list_add_rcu(&enabler->link, &user_mm->enablers); + } + + mutex_unlock(&event_mutex); + + if (*write_result) { + /* Attempt to fault-in and retry if it worked */ + if (!user_event_mm_fault_in(user_mm, uaddr)) + goto retry; + + kfree(enabler); + enabler = NULL; + } +out: + user_event_mm_put(user_mm); + + return enabler; } static __always_inline __must_check @@ -824,9 +1136,6 @@ static int destroy_user_event(struct user_event *user) return ret; dyn_event_remove(&user->devent); - - user_event_register_clear(user); - clear_bit(user->index, user->group->page_bitmap); hash_del(&user->node); user_event_destroy_validators(user); @@ -972,9 +1281,9 @@ static void user_event_perf(struct user_event *user, struct iov_iter *i, #endif /* - * Update the register page that is shared between user processes. + * Update the enabled bit among all user processes. 
*/ -static void update_reg_page_for(struct user_event *user) +static void update_enable_bit_for(struct user_event *user) { struct tracepoint *tp = &user->tracepoint; char status = 0; @@ -1005,12 +1314,9 @@ static void update_reg_page_for(struct user_event *user) rcu_read_unlock_sched(); } - if (status) - user_event_register_set(user); - else - user_event_register_clear(user); - user->status = status; + + user_event_enabler_update(user); } /* @@ -1067,10 +1373,10 @@ static int user_event_reg(struct trace_event_call *call, return ret; inc: refcount_inc(&user->refcnt); - update_reg_page_for(user); + update_enable_bit_for(user); return 0; dec: - update_reg_page_for(user); + update_enable_bit_for(user); refcount_dec(&user->refcnt); return 0; } @@ -1266,7 +1572,6 @@ static int user_event_parse(struct user_event_group *group, char *name, struct user_event **newuser) { int ret; - int index; u32 key; struct user_event *user; @@ -1285,11 +1590,6 @@ static int user_event_parse(struct user_event_group *group, char *name, return 0; } - index = find_first_zero_bit(group->page_bitmap, MAX_EVENTS); - - if (index == MAX_EVENTS) - return -EMFILE; - user = kzalloc(sizeof(*user), GFP_KERNEL); if (!user) @@ -1335,14 +1635,11 @@ static int user_event_parse(struct user_event_group *group, char *name, if (ret) goto put_user_lock; - user->index = index; - /* Ensure we track self ref and caller ref (2) */ refcount_set(&user->refcnt, 2); dyn_event_init(&user->devent, &user_event_dops); dyn_event_add(&user->devent, &user->call); - set_bit(user->index, group->page_bitmap); hash_add(group->register_table, &user->node, key); mutex_unlock(&event_mutex); @@ -1559,6 +1856,37 @@ static long user_reg_get(struct user_reg __user *ureg, struct user_reg *kreg) if (ret) return ret; + /* Ensure no flags, since we don't support any yet */ + if (kreg->flags != 0) + return -EINVAL; + + /* Ensure supported size */ + switch (kreg->enable_size) { + case 4: + /* 32-bit */ + break; +#if BITS_PER_LONG >= 64 + case 8: + /* 64-bit */ + break; +#endif + default: + return -EINVAL; + } + + /* Ensure natural alignment */ + if (kreg->enable_addr % kreg->enable_size) + return -EINVAL; + + /* Ensure bit range for size */ + if (kreg->enable_bit > (kreg->enable_size * BITS_PER_BYTE) - 1) + return -EINVAL; + + /* Ensure accessible */ + if (!access_ok((const void __user *)(uintptr_t)kreg->enable_addr, + kreg->enable_size)) + return -EFAULT; + kreg->size = size; return 0; @@ -1573,8 +1901,10 @@ static long user_events_ioctl_reg(struct user_event_file_info *info, struct user_reg __user *ureg = (struct user_reg __user *)uarg; struct user_reg reg; struct user_event *user; + struct user_event_enabler *enabler; char *name; long ret; + int write_result; ret = user_reg_get(ureg, ®); @@ -1605,8 +1935,28 @@ static long user_events_ioctl_reg(struct user_event_file_info *info, if (ret < 0) return ret; + /* + * user_events_ref_add succeeded: + * At this point we have a user_event, it's lifetime is bound by the + * reference count, not this file. If anything fails, the user_event + * still has a reference until the file is released. During release + * any remaining references (from user_events_ref_add) are decremented. + * + * Attempt to create an enabler, which too has a lifetime tied in the + * same way for the event. Once the task that caused the enabler to be + * created exits or issues exec() then the enablers it has created + * will be destroyed and the ref to the event will be decremented. 
+ */ + enabler = user_event_enabler_create(&reg, user, &write_result); + + if (!enabler) + return -ENOMEM; + + /* Write failed/faulted, give error back to caller */ + if (write_result) + return write_result; + put_user((u32)ret, &ureg->write_index); - put_user(user->index, &ureg->status_bit); return 0; } @@ -1720,38 +2070,6 @@ static const struct file_operations user_data_fops = { .release = user_events_release, }; -static struct user_event_group *user_status_group(struct file *file) -{ - struct seq_file *m = file->private_data; - - if (!m) - return NULL; - - return m->private; -} - -/* - * Maps the shared page into the user process for checking if event is enabled. - */ -static int user_status_mmap(struct file *file, struct vm_area_struct *vma) -{ - char *pages; - struct user_event_group *group = user_status_group(file); - unsigned long size = vma->vm_end - vma->vm_start; - - if (size != MAX_BYTES) - return -EINVAL; - - if (!group) - return -EINVAL; - - pages = group->register_page_data; - - return remap_pfn_range(vma, vma->vm_start, - virt_to_phys(pages) >> PAGE_SHIFT, - size, vm_get_page_prot(VM_READ)); -} - static void *user_seq_start(struct seq_file *m, loff_t *pos) { if (*pos) @@ -1775,7 +2093,7 @@ static int user_seq_show(struct seq_file *m, void *p) struct user_event_group *group = m->private; struct user_event *user; char status; - int i, active = 0, busy = 0, flags; + int i, active = 0, busy = 0; if (!group) return -EINVAL; @@ -1784,11 +2102,10 @@ static int user_seq_show(struct seq_file *m, void *p) hash_for_each(group->register_table, i, user, node) { status = user->status; - flags = user->flags; - seq_printf(m, "%d:%s", user->index, EVENT_NAME(user)); + seq_printf(m, "%s", EVENT_NAME(user)); - if (flags != 0 || status != 0) + if (status != 0) seq_puts(m, " #"); if (status != 0) { @@ -1811,7 +2128,6 @@ static int user_seq_show(struct seq_file *m, void *p) seq_puts(m, "\n"); seq_printf(m, "Active: %d\n", active); seq_printf(m, "Busy: %d\n", busy); - seq_printf(m, "Max: %ld\n", MAX_EVENTS); return 0; } @@ -1847,7 +2163,6 @@ static int user_status_open(struct inode *node, struct file *file) static const struct file_operations user_status_fops = { .open = user_status_open, - .mmap = user_status_mmap, .read = seq_read, .llseek = seq_lseek, .release = seq_release, @@ -1868,8 +2183,7 @@ static int create_user_tracefs(void) goto err; } - /* mmap with MAP_SHARED requires writable fd */ - emmap = tracefs_create_file("user_events_status", TRACE_MODE_WRITE, + emmap = tracefs_create_file("user_events_status", TRACE_MODE_READ, NULL, NULL, &user_status_fops); if (!emmap) {
From patchwork Tue Feb 21 21:11:36 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 60252
From: Beau Belgrave
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
 dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
 akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
 tglx@linutronix.de
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v8 04/11] tracing/user_events: Fixup enable faults asyncly
Date: Tue, 21 Feb 2023 13:11:36 -0800
Message-Id: <20230221211143.574-5-beaub@linux.microsoft.com>
In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>

When events are enabled within the various tracing facilities, such as
ftrace/perf, the event_mutex is held. As events are enabled, pages are
accessed. We do not want page faults to occur under this lock. Instead, queue
the fault to a workqueue so it can be handled in a process-context-safe way
without the lock.

The enable address is marked as faulting while the async fault-in occurs. This
ensures that we don't attempt to fault in more than is necessary. Once the
page has been faulted in, an address write is re-attempted. If the page
couldn't be faulted in, we wait until the next time the event is enabled to
prevent any potential infinite loops.

Signed-off-by: Beau Belgrave
---
 kernel/trace/trace_events_user.c | 120 +++++++++++++++++++++++++++++--
 1 file changed, 114 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 553a82ee7aeb..86bda1660536 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -99,9 +99,23 @@ struct user_event_enabler { /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */ #define ENABLE_VAL_BIT_MASK 0x3F +/* Bit 6 is for faulting status of enablement */ +#define ENABLE_VAL_FAULTING_BIT 6 + /* Only duplicate the bit value */ #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK +#define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values) + +/* Used for asynchronous faulting in of pages */ +struct user_event_enabler_fault { + struct work_struct work; + struct user_event_mm *mm; + struct user_event_enabler *enabler; +}; + +static struct kmem_cache *fault_cache; + /* Global list of memory descriptors using user_events */ static LIST_HEAD(user_event_mms); static DEFINE_SPINLOCK(user_event_mms_lock); @@ -263,7 +277,85 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr) } static int user_event_enabler_write(struct user_event_mm *mm, - struct user_event_enabler *enabler) + struct user_event_enabler *enabler, + bool fixup_fault); + +static void user_event_enabler_fault_fixup(struct work_struct *work) +{ + struct user_event_enabler_fault *fault = container_of( + work, struct user_event_enabler_fault, work); + struct user_event_enabler *enabler = fault->enabler; + struct user_event_mm *mm = fault->mm; + unsigned long uaddr = enabler->addr; + int ret; + + ret = user_event_mm_fault_in(mm, uaddr); + + if (ret && ret != -ENOENT) { + struct user_event *user = enabler->event; + + pr_warn("user_events: Fault for mm: 0x%pK @ 0x%llx event: %s\n", + mm->mm, (unsigned long long)uaddr, EVENT_NAME(user)); + } + + /* Prevent state changes from racing */ + mutex_lock(&event_mutex); + + /* + * If we managed to get the page, re-issue the write.
We do not + * want to get into a possible infinite loop, which is why we only + * attempt again directly if the page came in. If we couldn't get + * the page here, then we will try again the next time the event is + * enabled/disabled. + */ + clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)); + + if (!ret) { + mmap_read_lock(mm->mm); + user_event_enabler_write(mm, enabler, true); + mmap_read_unlock(mm->mm); + } + + mutex_unlock(&event_mutex); + + /* In all cases we no longer need the mm or fault */ + user_event_mm_put(mm); + kmem_cache_free(fault_cache, fault); +} + +static bool user_event_enabler_queue_fault(struct user_event_mm *mm, + struct user_event_enabler *enabler) +{ + struct user_event_enabler_fault *fault; + + fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT | __GFP_NOWARN); + + if (!fault) + return false; + + INIT_WORK(&fault->work, user_event_enabler_fault_fixup); + fault->mm = user_event_mm_get(mm); + fault->enabler = enabler; + + /* Don't try to queue in again while we have a pending fault */ + set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)); + + if (!schedule_work(&fault->work)) { + /* Allow another attempt later */ + clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)); + + user_event_mm_put(mm); + kmem_cache_free(fault_cache, fault); + + return false; + } + + return true; +} + +static int user_event_enabler_write(struct user_event_mm *mm, + struct user_event_enabler *enabler, + bool fixup_fault) { unsigned long uaddr = enabler->addr; unsigned long *ptr; @@ -278,11 +370,19 @@ static int user_event_enabler_write(struct user_event_mm *mm, if (refcount_read(&mm->tasks) == 0) return -ENOENT; + if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))) + return -EBUSY; + ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, &page, NULL, NULL); - if (ret <= 0) { - pr_warn("user_events: Enable write failed\n"); + if (unlikely(ret <= 0)) { + if (!fixup_fault) + return -EFAULT; + + if (!user_event_enabler_queue_fault(mm, enabler)) + pr_warn("user_events: Unable to queue fault handler\n"); + return -EFAULT; } @@ -314,7 +414,7 @@ static void user_event_enabler_update(struct user_event *user) list_for_each_entry_rcu(enabler, &mm->enablers, link) if (enabler->event == user) - user_event_enabler_write(mm, enabler); + user_event_enabler_write(mm, enabler, true); rcu_read_unlock(); mmap_read_unlock(mm->mm); @@ -562,7 +662,7 @@ static struct user_event_enabler /* Attempt to reflect the current state within the process */ mmap_read_lock(user_mm->mm); - *write_result = user_event_enabler_write(user_mm, enabler); + *write_result = user_event_enabler_write(user_mm, enabler, false); mmap_read_unlock(user_mm->mm); /* @@ -2201,16 +2301,24 @@ static int __init trace_events_user_init(void) { int ret; + fault_cache = KMEM_CACHE(user_event_enabler_fault, 0); + + if (!fault_cache) + return -ENOMEM; + init_group = user_event_group_create(&init_user_ns); - if (!init_group) + if (!init_group) { + kmem_cache_destroy(fault_cache); return -ENOMEM; + } ret = create_user_tracefs(); if (ret) { pr_warn("user_events could not register with tracefs\n"); user_event_group_destroy(init_group); + kmem_cache_destroy(fault_cache); init_group = NULL; return ret; } From patchwork Tue Feb 21 21:11:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Beau Belgrave X-Patchwork-Id: 60253 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp225122wrd; 
Tue, 21 Feb 2023 13:13:18 -0800 (PST) X-Google-Smtp-Source: AK7set/VNiggCu6zSYWN+6iZ73pCEIEPoyKcjt6MZ7/OTKMDPtZklTGJQIIzT6nN1DQqUsHWUt9C X-Received: by 2002:a17:903:1c6:b0:196:2ade:6e21 with SMTP id e6-20020a17090301c600b001962ade6e21mr8677857plh.14.1677013998287; Tue, 21 Feb 2023 13:13:18 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677013998; cv=none; d=google.com; s=arc-20160816; b=JZ5MwodQHsK5TaN2+PKOnk5XZj3M03IDP5Apy8Frc1cdbWgBIou3MRdZ9anqUYZra9 rGv3lHhbO9cYLwr3in+n8wg4xomWdfefLt6zpdzcMI7O6KAuD+5FzYp/qiFkXmG2QrDk kZJezsvgIHSlhClywL0zHHv5UkbPGbvuDyEYQqe+c5n25xkzfPInp3PQVmLGKmArAlfn fpp6Cp3avq4uwD6YRb05YoVUAWTnXIHTpzWmAR65/7HFTd8BriQIsNqHVRSeatDVtyw1 PBf54bmdYKSiaBDIywpMzNuvJweBajSuR8NVPKl8yAAF3pxW3UWI5ZRXhuItF4WId3kH c7+g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-filter; bh=hPint7oilCioEigkIIslQUbcyG4C0cJ57YmYs/t8wm4=; b=JeHeX3twpoAf1I2sm67ZOy7QeWLJGJLCV5McUMHAEnc4Wxn/P8JB5Vv4Encz/AEUUF lIQ0aI4LuFtFrccc7iwesOsFW5aop+jjBJZUm9BFvRWOXr3EmfsfSwZNGdF3ZsowFCb5 EIbagaYqVjuevSsImFOT55qhh6rnX+cEbS1UylS/nMyZvfTVSZ7LVI0bbX5F1eT4OlTh nJtdzTAt4clm7EGTOUs+q7H7xjl9A+EwK+EvGIYyHqpPZ8dqOOfZcWTZjEn9rhrfmQ8/ u3EzG2r/wyRsR5FMvDmJqfJ5uQpSNEvGkspVl6vePllt/DOVT3I/2hB2qrZ8VXZmjt9j 0Vwg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=jv3UDZtG; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id r185-20020a632bc2000000b004fb3c246af5si9326229pgr.274.2023.02.21.13.13.05; Tue, 21 Feb 2023 13:13:18 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=jv3UDZtG; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229716AbjBUVMD (ORCPT + 99 others); Tue, 21 Feb 2023 16:12:03 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43402 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229865AbjBUVLx (ORCPT ); Tue, 21 Feb 2023 16:11:53 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 203F0305F7; Tue, 21 Feb 2023 13:11:51 -0800 (PST) Received: from W11-BEAU-MD.localdomain (unknown [76.135.27.212]) by linux.microsoft.com (Postfix) with ESMTPSA id 4A74C20B9C3E; Tue, 21 Feb 2023 13:11:50 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 4A74C20B9C3E DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1677013910; bh=hPint7oilCioEigkIIslQUbcyG4C0cJ57YmYs/t8wm4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jv3UDZtGn4MCJzppxz97f4vtpUvEDWVn9grdnd5DHfIYBdQYH0tWC57TDrXDygH9r /ejC3JrJSkat9FDpQJUtBt8htUhJrxgVNZbHqwuNxLwyLoOWndWR8GVj0jnfBZx/pS FrZdvmgb+hmbVHo4FmBegwQVXLNo5/hH4qGK9irg= From: Beau Belgrave To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org, akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org, tglx@linutronix.de Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v8 05/11] tracing/user_events: Add ioctl for disabling addresses Date: Tue, 21 Feb 2023 13:11:37 -0800 Message-Id: <20230221211143.574-6-beaub@linux.microsoft.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com> References: <20230221211143.574-1-beaub@linux.microsoft.com> MIME-Version: 1.0 X-Spam-Status: No, score=-19.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1758476630492206249?= X-GMAIL-MSGID: =?utf-8?q?1758476630492206249?= Enablements are now tracked by the lifetime of the task/mm. User processes need to be able to disable their addresses if tracing is requested to be turned off. Before unmapping the page would suffice. However, we now need a stronger contract. Add an ioctl to enable this. 
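For illustration, a user process is expected to invoke the new ioctl roughly as follows (a minimal sketch; struct user_unreg and DIAG_IOCSUNREG are the definitions added to the uapi header in this patch, while data_fd, the enable word and the bit are assumed to come from a prior DIAG_IOCSREG registration):

#include <sys/ioctl.h>
#include <linux/user_events.h>

/*
 * Sketch only: data_fd is an open fd to user_events_data; enabled/bit are
 * the same address/bit pair that was supplied as enable_addr/enable_bit at
 * registration time.
 */
static int unregister_enabler(int data_fd, int *enabled, int bit)
{
	struct user_unreg unreg = {0};

	unreg.size = sizeof(unreg);		/* allows forward/backward compat */
	unreg.disable_bit = bit;		/* same bit as enable_bit */
	unreg.disable_addr = (__u64)enabled;	/* same address as enable_addr */

	return ioctl(data_fd, DIAG_IOCSUNREG, &unreg);
}
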
A new flag bit is added, freeing, to user_event_enabler to ensure that if the event is attempted to be removed while a fault is being handled that the remove is delayed until after the fault is reattempted. Signed-off-by: Beau Belgrave --- include/uapi/linux/user_events.h | 24 +++++++++ kernel/trace/trace_events_user.c | 93 +++++++++++++++++++++++++++++++- 2 files changed, 115 insertions(+), 2 deletions(-) diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h index 22521bc622db..3e7275e3234a 100644 --- a/include/uapi/linux/user_events.h +++ b/include/uapi/linux/user_events.h @@ -46,6 +46,27 @@ struct user_reg { __u32 write_index; } __attribute__((__packed__)); +/* + * Describes an event unregister, callers must set the size, address and bit. + * This structure is passed to the DIAG_IOCSUNREG ioctl to disable bit updates. + */ +struct user_unreg { + /* Input: Size of the user_unreg structure being used */ + __u32 size; + + /* Input: Bit to unregister */ + __u8 disable_bit; + + /* Input: Reserved, set to 0 */ + __u8 __reserved; + + /* Input: Reserved, set to 0 */ + __u16 __reserved2; + + /* Input: Address to unregister */ + __u64 disable_addr; +} __attribute__((__packed__)); + #define DIAG_IOC_MAGIC '*' /* Request to register a user_event */ @@ -54,4 +75,7 @@ struct user_reg { /* Request to delete a user_event */ #define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char *) +/* Requests to unregister a user_event */ +#define DIAG_IOCSUNREG _IOW(DIAG_IOC_MAGIC, 2, struct user_unreg*) + #endif /* _UAPI_LINUX_USER_EVENTS_H */ diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c index 86bda1660536..e4ee25d16f3b 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -102,6 +102,9 @@ struct user_event_enabler { /* Bit 6 is for faulting status of enablement */ #define ENABLE_VAL_FAULTING_BIT 6 +/* Bit 7 is for freeing status of enablement */ +#define ENABLE_VAL_FREEING_BIT 7 + /* Only duplicate the bit value */ #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK @@ -301,6 +304,12 @@ static void user_event_enabler_fault_fixup(struct work_struct *work) /* Prevent state changes from racing */ mutex_lock(&event_mutex); + /* User asked for enabler to be removed during fault */ + if (test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))) { + user_event_enabler_destroy(enabler); + goto out; + } + /* * If we managed to get the page, re-issue the write. 
We do not * want to get into a possible infinite loop, which is why we only @@ -315,7 +324,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work) user_event_enabler_write(mm, enabler, true); mmap_read_unlock(mm->mm); } - +out: mutex_unlock(&event_mutex); /* In all cases we no longer need the mm or fault */ @@ -370,7 +379,8 @@ static int user_event_enabler_write(struct user_event_mm *mm, if (refcount_read(&mm->tasks) == 0) return -ENOENT; - if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))) + if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)) || + test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler)))) return -EBUSY; ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT, @@ -428,6 +438,10 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig, { struct user_event_enabler *enabler; + /* Skip pending frees */ + if (unlikely(test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(orig)))) + return true; + enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT); if (!enabler) @@ -2086,6 +2100,75 @@ static long user_events_ioctl_del(struct user_event_file_info *info, return ret; } +static long user_unreg_get(struct user_unreg __user *ureg, + struct user_unreg *kreg) +{ + u32 size; + long ret; + + ret = get_user(size, &ureg->size); + + if (ret) + return ret; + + if (size > PAGE_SIZE) + return -E2BIG; + + if (size < offsetofend(struct user_unreg, disable_addr)) + return -EINVAL; + + ret = copy_struct_from_user(kreg, sizeof(*kreg), ureg, size); + + return ret; +} + +/* + * Unregisters an enablement address/bit within a task/user mm. + */ +static long user_events_ioctl_unreg(unsigned long uarg) +{ + struct user_unreg __user *ureg = (struct user_unreg __user *)uarg; + struct user_event_mm *mm = current->user_event_mm; + struct user_event_enabler *enabler, *next; + struct user_unreg reg; + long ret; + + ret = user_unreg_get(ureg, ®); + + if (ret) + return ret; + + if (!mm) + return -ENOENT; + + ret = -ENOENT; + + /* + * Flags freeing and faulting are used to indicate if the enabler is in + * use at all. When faulting is set a page-fault is occurring asyncly. + * During async fault if freeing is set, the enabler will be destroyed. + * If no async fault is happening, we can destroy it now since we hold + * the event_mutex during these checks. + */ + mutex_lock(&event_mutex); + + list_for_each_entry_safe(enabler, next, &mm->enablers, link) + if (enabler->addr == reg.disable_addr && + (enabler->values & ENABLE_VAL_BIT_MASK) == reg.disable_bit) { + set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler)); + + if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))) + user_event_enabler_destroy(enabler); + + /* Removed at least one */ + ret = 0; + } + + mutex_unlock(&event_mutex); + + return ret; +} + /* * Handles the ioctl from user mode to register or alter operations. 
*/ @@ -2108,6 +2191,12 @@ static long user_events_ioctl(struct file *file, unsigned int cmd, ret = user_events_ioctl_del(info, uarg); mutex_unlock(&group->reg_mutex); break; + + case DIAG_IOCSUNREG: + mutex_lock(&group->reg_mutex); + ret = user_events_ioctl_unreg(uarg); + mutex_unlock(&group->reg_mutex); + break; } return ret; From patchwork Tue Feb 21 21:11:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Beau Belgrave X-Patchwork-Id: 60260 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp225547wrd; Tue, 21 Feb 2023 13:14:17 -0800 (PST) X-Google-Smtp-Source: AK7set+jPa0QITqWFwnUfsFKOzVNtad+HPeQYVtAX8Um+UYwyaVNBfHDs7ouDgXCxzuK3+IT0Zd6 X-Received: by 2002:a17:903:32c3:b0:19a:aab3:c97c with SMTP id i3-20020a17090332c300b0019aaab3c97cmr6230187plr.0.1677014057248; Tue, 21 Feb 2023 13:14:17 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677014057; cv=none; d=google.com; s=arc-20160816; b=nIyr+Y8E2lKi0jGetMlCR0koU8fzJ0vvJO7XcXEvRVtE869re++gfWAOYlSkG77Mps +16XJfrX1lOPvMIzZrSKPIS4YHq0T3G4yhFBL53Zkuf20GepLLzvSAgE/XUIF1yebzmJ SIp8uuRm+eBHpHTLivX3K9W7c3UZB8JcqoO9K7m3c0g9b1J9L8WgWMo7Kt5qsc87e7kL kN8KZIo2vXcu5k2LRolpqytfjIMtRKj64d2cCeD4Fq2dOyaFoGt+EXClE4j8Dr8KFmUx WjrndHEHv862hckORtTkmI7KgF17tBW7iKAiE6TGj8xJUDNsHeM9ZesYlFFOSWVJrvd9 xK+Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-filter; bh=4Tw0x0REBVNQbPmE7YKz+0Y5T7GJmSyJL4LF57PSOTg=; b=k7ANuGEWLb3gJL0vuGyjOMsfgsRVWOWC1vfJ2j80OirUIN2DXsrmqi7qmj8xkflVmj UUNLv4hOaevmVbiAxTMV3Gv9pImpOFGm14Llx6M/8oREI2pEhrRj1S41OuGPND0/f5Nq jU5K5N+IdR4uKoxwyZ1XQny0FLgTIYe0R9erol09ms5aVW+qAiNnJX/Y5Q7aYWFPKq64 hu8hhq2QPy9sUNWcNdy5Q6kCNkNnddelIwgEwXquIH6j3HaCg3fLpljy30Y5SBnqJqea PwEqmKSuX3n1uTgAR0/o+FXfX5mshid7ALGmZXO1e9cR8TknHF7+FyMyRmcNzsgK4a6S 5rng== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=aesw7t9X; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id 77-20020a630150000000b004fb171df693si8226636pgb.352.2023.02.21.13.14.04; Tue, 21 Feb 2023 13:14:17 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=aesw7t9X; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230115AbjBUVMK (ORCPT + 99 others); Tue, 21 Feb 2023 16:12:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43428 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229969AbjBUVLx (ORCPT ); Tue, 21 Feb 2023 16:11:53 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 6A8D1302A7; Tue, 21 Feb 2023 13:11:51 -0800 (PST) Received: from W11-BEAU-MD.localdomain (unknown [76.135.27.212]) by linux.microsoft.com (Postfix) with ESMTPSA id A3EA420B9C3F; Tue, 21 Feb 2023 13:11:50 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com A3EA420B9C3F DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1677013911; bh=4Tw0x0REBVNQbPmE7YKz+0Y5T7GJmSyJL4LF57PSOTg=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aesw7t9XQP/X5GQ8EiFfClorrUh6vVIIT2JPSMqWmamU/kxVxPj/z1e2ulC52oiIq z3PZ+4yeKsnEDUhPArL6QiFDuNKf7UaF9FzAshfM3yDYkfgEHU3tG+l7YUtUewIuXh c+LDOOEtb/Fo3dHw4yUvVtHGrfdsCyBqBmRE0Phs= From: Beau Belgrave To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org, akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org, tglx@linutronix.de Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v8 06/11] tracing/user_events: Update self-tests to write ABI Date: Tue, 21 Feb 2023 13:11:38 -0800 Message-Id: <20230221211143.574-7-beaub@linux.microsoft.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com> References: <20230221211143.574-1-beaub@linux.microsoft.com> MIME-Version: 1.0 X-Spam-Status: No, score=-19.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1758476692028691455?= X-GMAIL-MSGID: =?utf-8?q?1758476692028691455?= ABI has been changed to remote writes, update existing test cases to use this new ABI to ensure existing functionality continues to work. 
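Concretely, the tests move from mmap()ing user_events_status and testing reg.status_bit to registering a local word and asserting on its value directly. The common pattern looks roughly like this (sketch only, using an assumed data_fd and check word; the real changes are in the diff below):

	int check = 0;
	struct user_reg reg = {0};

	reg.size = sizeof(reg);
	reg.name_args = (__u64)"__test_event u32 field1; u32 field2";
	reg.enable_bit = 31;			/* bit the kernel updates remotely */
	reg.enable_addr = (__u64)&check;	/* word owned by the test process */
	reg.enable_size = sizeof(check);

	ASSERT_EQ(0, ioctl(data_fd, DIAG_IOCSREG, &reg));

	/* After echoing 1 to the event's enable file: */
	ASSERT_EQ(1 << reg.enable_bit, check);

	/* After echoing 0: */
	ASSERT_EQ(0, check);
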
Signed-off-by: Beau Belgrave --- .../testing/selftests/user_events/dyn_test.c | 2 +- .../selftests/user_events/ftrace_test.c | 162 ++++++++++-------- .../testing/selftests/user_events/perf_test.c | 39 ++--- 3 files changed, 105 insertions(+), 98 deletions(-) diff --git a/tools/testing/selftests/user_events/dyn_test.c b/tools/testing/selftests/user_events/dyn_test.c index d6265d14cd51..8879a7b04c6a 100644 --- a/tools/testing/selftests/user_events/dyn_test.c +++ b/tools/testing/selftests/user_events/dyn_test.c @@ -16,7 +16,7 @@ #include "../kselftest_harness.h" -const char *dyn_file = "/sys/kernel/debug/tracing/dynamic_events"; +const char *dyn_file = "/sys/kernel/tracing/dynamic_events"; const char *clear = "!u:__test_event"; static int Append(const char *value) diff --git a/tools/testing/selftests/user_events/ftrace_test.c b/tools/testing/selftests/user_events/ftrace_test.c index 404a2713dcae..aceafacfb126 100644 --- a/tools/testing/selftests/user_events/ftrace_test.c +++ b/tools/testing/selftests/user_events/ftrace_test.c @@ -12,20 +12,16 @@ #include #include #include +#include #include #include "../kselftest_harness.h" -const char *data_file = "/sys/kernel/debug/tracing/user_events_data"; -const char *status_file = "/sys/kernel/debug/tracing/user_events_status"; -const char *enable_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/enable"; -const char *trace_file = "/sys/kernel/debug/tracing/trace"; -const char *fmt_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/format"; - -static inline int status_check(char *status_page, int status_bit) -{ - return status_page[status_bit >> 3] & (1 << (status_bit & 7)); -} +const char *data_file = "/sys/kernel/tracing/user_events_data"; +const char *status_file = "/sys/kernel/tracing/user_events_status"; +const char *enable_file = "/sys/kernel/tracing/events/user_events/__test_event/enable"; +const char *trace_file = "/sys/kernel/tracing/trace"; +const char *fmt_file = "/sys/kernel/tracing/events/user_events/__test_event/format"; static int trace_bytes(void) { @@ -106,13 +102,23 @@ static int get_print_fmt(char *buffer, int len) return -1; } -static int clear(void) +static int clear(int *check) { + struct user_unreg unreg = {0}; + + unreg.size = sizeof(unreg); + unreg.disable_bit = 31; + unreg.disable_addr = (__u64)check; + int fd = open(data_file, O_RDWR); if (fd == -1) return -1; + if (ioctl(fd, DIAG_IOCSUNREG, &unreg) == -1) + if (errno != ENOENT) + return -1; + if (ioctl(fd, DIAG_IOCSDEL, "__test_event") == -1) if (errno != ENOENT) return -1; @@ -122,7 +128,7 @@ static int clear(void) return 0; } -static int check_print_fmt(const char *event, const char *expected) +static int check_print_fmt(const char *event, const char *expected, int *check) { struct user_reg reg = {0}; char print_fmt[256]; @@ -130,7 +136,7 @@ static int check_print_fmt(const char *event, const char *expected) int fd; /* Ensure cleared */ - ret = clear(); + ret = clear(check); if (ret != 0) return ret; @@ -142,14 +148,19 @@ static int check_print_fmt(const char *event, const char *expected) reg.size = sizeof(reg); reg.name_args = (__u64)event; + reg.enable_bit = 31; + reg.enable_addr = (__u64)check; + reg.enable_size = sizeof(*check); /* Register should work */ ret = ioctl(fd, DIAG_IOCSREG, ®); close(fd); - if (ret != 0) + if (ret != 0) { + printf("Reg failed in fmt\n"); return ret; + } /* Ensure correct print_fmt */ ret = get_print_fmt(print_fmt, sizeof(print_fmt)); @@ -164,6 +175,7 @@ FIXTURE(user) { int status_fd; int data_fd; int enable_fd; 
+ int check; }; FIXTURE_SETUP(user) { @@ -185,59 +197,56 @@ FIXTURE_TEARDOWN(user) { close(self->enable_fd); } - ASSERT_EQ(0, clear()); + if (clear(&self->check) != 0) + printf("WARNING: Clear didn't work!\n"); } TEST_F(user, register_events) { struct user_reg reg = {0}; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; + struct user_unreg unreg = {0}; reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u32 field1; u32 field2"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); + unreg.size = sizeof(unreg); + unreg.disable_bit = 31; + unreg.disable_addr = (__u64)&self->check; /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); /* Multiple registers should result in same index */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); /* Ensure disabled */ self->enable_fd = open(enable_file, O_RDWR); ASSERT_NE(-1, self->enable_fd); ASSERT_NE(-1, write(self->enable_fd, "0", sizeof("0"))) - /* MMAP should work and be zero'd */ - ASSERT_NE(MAP_FAILED, status_page); - ASSERT_NE(NULL, status_page); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); - /* Enable event and ensure bits updated in status */ ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1"))) - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(1 << reg.enable_bit, self->check); /* Disable event and ensure bits updated in status */ ASSERT_NE(-1, write(self->enable_fd, "0", sizeof("0"))) - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); /* File still open should return -EBUSY for delete */ ASSERT_EQ(-1, ioctl(self->data_fd, DIAG_IOCSDEL, "__test_event")); ASSERT_EQ(EBUSY, errno); - /* Delete should work only after close */ + /* Unregister */ + ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSUNREG, &unreg)); + + /* Delete should work only after close and unregister */ close(self->data_fd); self->data_fd = open(data_file, O_RDWR); ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSDEL, "__test_event")); - - /* Unmap should work */ - ASSERT_EQ(0, munmap(status_page, page_size)); } TEST_F(user, write_events) { @@ -245,11 +254,12 @@ TEST_F(user, write_events) { struct iovec io[3]; __u32 field1, field2; int before = 0, after = 0; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u32 field1; u32 field2"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); field1 = 1; field2 = 2; @@ -261,18 +271,10 @@ TEST_F(user, write_events) { io[2].iov_base = &field2; io[2].iov_len = sizeof(field2); - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); - /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); - - /* MMAP should work and be zero'd */ - ASSERT_NE(MAP_FAILED, status_page); - ASSERT_NE(NULL, status_page); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); /* Write should fail on invalid slot with ENOENT */ io[0].iov_base = &field2; @@ -287,7 +289,7 @@ TEST_F(user, write_events) { ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1"))) /* Event should now be enabled */ - ASSERT_NE(0, status_check(status_page, 
reg.status_bit)); + ASSERT_NE(1 << reg.enable_bit, self->check); /* Write should make it out to ftrace buffers */ before = trace_bytes(); @@ -304,6 +306,9 @@ TEST_F(user, write_fault) { reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u64 anon"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); anon = mmap(NULL, l, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); ASSERT_NE(MAP_FAILED, anon); @@ -316,7 +321,6 @@ TEST_F(user, write_fault) { /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); /* Write should work normally */ ASSERT_NE(-1, writev(self->data_fd, (const struct iovec *)io, 2)); @@ -333,24 +337,17 @@ TEST_F(user, write_validator) { int loc, bytes; char data[8]; int before = 0, after = 0; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; - - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event __rel_loc char[] data"; + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); /* Register should work */ ASSERT_EQ(0, ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); - - /* MMAP should work and be zero'd */ - ASSERT_NE(MAP_FAILED, status_page); - ASSERT_NE(NULL, status_page); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); io[0].iov_base = ®.write_index; io[0].iov_len = sizeof(reg.write_index); @@ -369,7 +366,7 @@ TEST_F(user, write_validator) { ASSERT_NE(-1, write(self->enable_fd, "1", sizeof("1"))) /* Event should now be enabled */ - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(1 << reg.enable_bit, self->check); /* Full in-bounds write should work */ before = trace_bytes(); @@ -409,71 +406,88 @@ TEST_F(user, print_fmt) { int ret; ret = check_print_fmt("__test_event __rel_loc char[] data", - "print fmt: \"data=%s\", __get_rel_str(data)"); + "print fmt: \"data=%s\", __get_rel_str(data)", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event __data_loc char[] data", - "print fmt: \"data=%s\", __get_str(data)"); + "print fmt: \"data=%s\", __get_str(data)", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s64 data", - "print fmt: \"data=%lld\", REC->data"); + "print fmt: \"data=%lld\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u64 data", - "print fmt: \"data=%llu\", REC->data"); + "print fmt: \"data=%llu\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s32 data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u32 data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event int data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event unsigned int data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s16 data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event 
u16 data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event short data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event unsigned short data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event s8 data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event u8 data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event char data", - "print fmt: \"data=%d\", REC->data"); + "print fmt: \"data=%d\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event unsigned char data", - "print fmt: \"data=%u\", REC->data"); + "print fmt: \"data=%u\", REC->data", + &self->check); ASSERT_EQ(0, ret); ret = check_print_fmt("__test_event char[4] data", - "print fmt: \"data=%s\", REC->data"); + "print fmt: \"data=%s\", REC->data", + &self->check); ASSERT_EQ(0, ret); } diff --git a/tools/testing/selftests/user_events/perf_test.c b/tools/testing/selftests/user_events/perf_test.c index 8b4c7879d5a7..a070258d4449 100644 --- a/tools/testing/selftests/user_events/perf_test.c +++ b/tools/testing/selftests/user_events/perf_test.c @@ -18,10 +18,9 @@ #include "../kselftest_harness.h" -const char *data_file = "/sys/kernel/debug/tracing/user_events_data"; -const char *status_file = "/sys/kernel/debug/tracing/user_events_status"; -const char *id_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/id"; -const char *fmt_file = "/sys/kernel/debug/tracing/events/user_events/__test_event/format"; +const char *data_file = "/sys/kernel/tracing/user_events_data"; +const char *id_file = "/sys/kernel/tracing/events/user_events/__test_event/id"; +const char *fmt_file = "/sys/kernel/tracing/events/user_events/__test_event/format"; struct event { __u32 index; @@ -35,11 +34,6 @@ static long perf_event_open(struct perf_event_attr *pe, pid_t pid, return syscall(__NR_perf_event_open, pe, pid, cpu, group_fd, flags); } -static inline int status_check(char *status_page, int status_bit) -{ - return status_page[status_bit >> 3] & (1 << (status_bit & 7)); -} - static int get_id(void) { FILE *fp = fopen(id_file, "r"); @@ -88,45 +82,38 @@ static int get_offset(void) } FIXTURE(user) { - int status_fd; int data_fd; + int check; }; FIXTURE_SETUP(user) { - self->status_fd = open(status_file, O_RDONLY); - ASSERT_NE(-1, self->status_fd); - self->data_fd = open(data_file, O_RDWR); ASSERT_NE(-1, self->data_fd); } FIXTURE_TEARDOWN(user) { - close(self->status_fd); close(self->data_fd); } TEST_F(user, perf_write) { struct perf_event_attr pe = {0}; struct user_reg reg = {0}; - int page_size = sysconf(_SC_PAGESIZE); - char *status_page; struct event event; struct perf_event_mmap_page *perf_page; + int page_size = sysconf(_SC_PAGESIZE); int id, fd, offset; __u32 *val; reg.size = sizeof(reg); reg.name_args = (__u64)"__test_event u32 field1; u32 field2"; - - status_page = mmap(NULL, page_size, PROT_READ, MAP_SHARED, - self->status_fd, 0); - ASSERT_NE(MAP_FAILED, status_page); + reg.enable_bit = 31; + reg.enable_addr = (__u64)&self->check; + reg.enable_size = sizeof(self->check); /* Register should work */ ASSERT_EQ(0, 
ioctl(self->data_fd, DIAG_IOCSREG, ®)); ASSERT_EQ(0, reg.write_index); - ASSERT_NE(0, reg.status_bit); - ASSERT_EQ(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(0, self->check); /* Id should be there */ id = get_id(); @@ -149,7 +136,7 @@ TEST_F(user, perf_write) { ASSERT_NE(MAP_FAILED, perf_page); /* Status should be updated */ - ASSERT_NE(0, status_check(status_page, reg.status_bit)); + ASSERT_EQ(1 << reg.enable_bit, self->check); event.index = reg.write_index; event.field1 = 0xc001; @@ -165,6 +152,12 @@ TEST_F(user, perf_write) { /* Ensure correct */ ASSERT_EQ(event.field1, *val++); ASSERT_EQ(event.field2, *val++); + + munmap(perf_page, page_size * 2); + close(fd); + + /* Status should be updated */ + ASSERT_EQ(0, self->check); } int main(int argc, char **argv) From patchwork Tue Feb 21 21:11:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Beau Belgrave X-Patchwork-Id: 60258 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp225544wrd; Tue, 21 Feb 2023 13:14:17 -0800 (PST) X-Google-Smtp-Source: AK7set9IsY83RpmPT8j97xsORToz5EvE3W9C+DLRGN9y7/Obmh8dqv0rKOeVQ9fAUiLivGmBtgoK X-Received: by 2002:a17:902:e812:b0:19b:da8:1ce6 with SMTP id u18-20020a170902e81200b0019b0da81ce6mr7689094plg.55.1677014056842; Tue, 21 Feb 2023 13:14:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677014056; cv=none; d=google.com; s=arc-20160816; b=SomArWuh/xhPR3UfCIxXh2D+6rgiaiBjgr8053KO+r9jNFOSl+Q1SZwnf+1eohETmn pNDOBOyW59XLoj111O5qDM+7BIc9FX0yO/BnNte/RhW0MZTGIPi2vU60T1XEdm5wrKT4 yQ00dvbApwwP0rFqo6kSJGKM7VqojD2tvD1xe2zhR2e0MgrHMvfPATUZbIeS6XW7l+Mc WtKDkfTnbIGQsyZdQmHnFiXn/kxUdbAN0QpGhc/VDE+XGxUCS05KS8KUAjSm+/6T/fcz tmNepXLkoTicNd3E6Hcg/a05aKLkhPTkL0IJZZntW+0gTdAo7mziH1P4IJtOWOHSLmKt 7W5Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-filter; bh=yLsV99bpcauClh1MlrmzrNiAoYsutXdpDe7QcYfID8o=; b=JidxSxmhjDxuGMMqzWHOvlTT+QvRFVofypI/yawUya8rzJ8N+K8S4dgdHxxyqOVNBP 7wlmD8ssnBJwRAoJKjuzeZfGbpZw8Tq2TvLAL/sphuX937gh72V0vIRATHKKCVckTtCD 9U5h5wvW+21HJKTAdwgdcIjIrnBYtE9F4wog0hmoXlAIkBCIoqq6l4nnVo+IPaGpmsLb PhrV853WpOOkrIbxYRyJcgDHLNqBt4KJRU3K7Cb2TLfgp35pJxkod1JUDGmIPqDOBbrX JsnoWXjNNiRTXJGWMV9fBSNwcpULrKG7IvqzITkZeN1/xklyx6x/xzgTUjojdUwhqU3E sUjA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=p+SgSI7b; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id b11-20020a1709027e0b00b0019ca0b70039si1196437plm.516.2023.02.21.13.14.04; Tue, 21 Feb 2023 13:14:16 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=p+SgSI7b; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230192AbjBUVMI (ORCPT + 99 others); Tue, 21 Feb 2023 16:12:08 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43414 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229491AbjBUVLx (ORCPT ); Tue, 21 Feb 2023 16:11:53 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 42B3630B10; Tue, 21 Feb 2023 13:11:52 -0800 (PST) Received: from W11-BEAU-MD.localdomain (unknown [76.135.27.212]) by linux.microsoft.com (Postfix) with ESMTPSA id 161AB20B4713; Tue, 21 Feb 2023 13:11:51 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 161AB20B4713 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1677013911; bh=yLsV99bpcauClh1MlrmzrNiAoYsutXdpDe7QcYfID8o=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=p+SgSI7bm2mgXtau5qLZXy/kmce2IeIeB96UI7pYJj38XQ6a29dGVXZT0fvdCKx+k /oZheuPRGoF0oc5x37iEulC+2+TpqV83O9djvyYFXLydVIHWoOdmOhANC53Ms8XxNw aSB5LNiMIO7qn/k9OqG/b0kTbEk8p6vdaAFPNJc4= From: Beau Belgrave To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org, akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org, tglx@linutronix.de Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v8 07/11] tracing/user_events: Add ABI self-test Date: Tue, 21 Feb 2023 13:11:39 -0800 Message-Id: <20230221211143.574-8-beaub@linux.microsoft.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com> References: <20230221211143.574-1-beaub@linux.microsoft.com> MIME-Version: 1.0 X-Spam-Status: No, score=-19.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1758476691624500717?= X-GMAIL-MSGID: =?utf-8?q?1758476691624500717?= Add ABI specific self-test to ensure enablements work in various scenarios such as fork, VM_CLONE, and basic event enable/disable. Ensure ABI contracts/limits are also being upheld, such as bit limits and data size limits. 
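The contract being exercised can be summarized as follows, using the reg_enable() helper that abi_test.c below defines (sketch only; check is the test's local enable word and the precise error code returned is left to the kernel-side validation):

	/* enable_size may only be 4 bytes, or 8 bytes on 64-bit kernels */
	reg_enable(&check, sizeof(int), 0);	/* ok, bits 0-31 valid for 4 bytes */
	reg_enable(&check, sizeof(int), 31);	/* ok */
	reg_enable(&check, sizeof(int), 32);	/* fails, bit out of range */
	reg_enable(&check, sizeof(long), 63);	/* ok on 64-bit kernels only */
	reg_enable(&check, 3, 0);		/* fails, unsupported size */
	reg_enable(&check, 128, 0);		/* fails, unsupported size */
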
Signed-off-by: Beau Belgrave --- tools/testing/selftests/user_events/Makefile | 2 +- .../testing/selftests/user_events/abi_test.c | 226 ++++++++++++++++++ 2 files changed, 227 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/user_events/abi_test.c diff --git a/tools/testing/selftests/user_events/Makefile b/tools/testing/selftests/user_events/Makefile index c765d8635d9a..d5f64ef93197 100644 --- a/tools/testing/selftests/user_events/Makefile +++ b/tools/testing/selftests/user_events/Makefile @@ -2,7 +2,7 @@ CFLAGS += -Wl,-no-as-needed -Wall -I../../../../usr/include LDLIBS += -lrt -lpthread -lm -TEST_GEN_PROGS = ftrace_test dyn_test perf_test +TEST_GEN_PROGS = ftrace_test dyn_test perf_test abi_test TEST_FILES := settings diff --git a/tools/testing/selftests/user_events/abi_test.c b/tools/testing/selftests/user_events/abi_test.c new file mode 100644 index 000000000000..e0323d3777a7 --- /dev/null +++ b/tools/testing/selftests/user_events/abi_test.c @@ -0,0 +1,226 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * User Events ABI Test Program + * + * Copyright (c) 2022 Beau Belgrave + */ + +#define _GNU_SOURCE +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../kselftest_harness.h" + +const char *data_file = "/sys/kernel/tracing/user_events_data"; +const char *enable_file = "/sys/kernel/tracing/events/user_events/__abi_event/enable"; + +static int change_event(bool enable) +{ + int fd = open(enable_file, O_RDWR); + int ret; + + if (fd < 0) + return -1; + + if (enable) + ret = write(fd, "1", 1); + else + ret = write(fd, "0", 1); + + close(fd); + + if (ret == 1) + ret = 0; + else + ret = -1; + + return ret; +} + +static int reg_enable(long *enable, int size, int bit) +{ + struct user_reg reg = {0}; + int fd = open(data_file, O_RDWR); + int ret; + + if (fd < 0) + return -1; + + reg.size = sizeof(reg); + reg.name_args = (__u64)"__abi_event"; + reg.enable_bit = bit; + reg.enable_addr = (__u64)enable; + reg.enable_size = size; + + ret = ioctl(fd, DIAG_IOCSREG, ®); + + close(fd); + + return ret; +} + +static int reg_disable(long *enable, int bit) +{ + struct user_unreg reg = {0}; + int fd = open(data_file, O_RDWR); + int ret; + + if (fd < 0) + return -1; + + reg.size = sizeof(reg); + reg.disable_bit = bit; + reg.disable_addr = (__u64)enable; + + ret = ioctl(fd, DIAG_IOCSUNREG, ®); + + close(fd); + + return ret; +} + +FIXTURE(user) { + long check; +}; + +FIXTURE_SETUP(user) { + change_event(false); + self->check = 0; +} + +FIXTURE_TEARDOWN(user) { +} + +TEST_F(user, enablement) { + /* Changes should be reflected immediately */ + ASSERT_EQ(0, self->check); + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, change_event(true)); + ASSERT_EQ(1, self->check); + ASSERT_EQ(0, change_event(false)); + ASSERT_EQ(0, self->check); + + /* Should not change after disable */ + ASSERT_EQ(0, change_event(true)); + ASSERT_EQ(1, self->check); + ASSERT_EQ(0, reg_disable(&self->check, 0)); + ASSERT_EQ(0, change_event(false)); + ASSERT_EQ(1, self->check); + self->check = 0; +} + +TEST_F(user, bit_sizes) { + /* Allow 0-31 bits for 32-bit */ + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 31)); + ASSERT_NE(0, reg_enable(&self->check, sizeof(int), 32)); + ASSERT_EQ(0, reg_disable(&self->check, 0)); + ASSERT_EQ(0, reg_disable(&self->check, 31)); + +#if BITS_PER_LONG == 8 + /* Allow 0-64 bits for 64-bit */ + ASSERT_EQ(0, reg_enable(&self->check, sizeof(long), 
63)); + ASSERT_NE(0, reg_enable(&self->check, sizeof(long), 64)); + ASSERT_EQ(0, reg_disable(&self->check, 63)); +#endif + + /* Disallowed sizes (everything beside 4 and 8) */ + ASSERT_NE(0, reg_enable(&self->check, 1, 0)); + ASSERT_NE(0, reg_enable(&self->check, 2, 0)); + ASSERT_NE(0, reg_enable(&self->check, 3, 0)); + ASSERT_NE(0, reg_enable(&self->check, 5, 0)); + ASSERT_NE(0, reg_enable(&self->check, 6, 0)); + ASSERT_NE(0, reg_enable(&self->check, 7, 0)); + ASSERT_NE(0, reg_enable(&self->check, 9, 0)); + ASSERT_NE(0, reg_enable(&self->check, 128, 0)); +} + +TEST_F(user, forks) { + int i; + + /* Ensure COW pages get updated after fork */ + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, self->check); + + if (fork() == 0) { + /* Force COW */ + self->check = 0; + + /* Up to 1 sec for enablement */ + for (i = 0; i < 10; ++i) { + usleep(100000); + + if (self->check) + exit(0); + } + + exit(1); + } + + /* Allow generous time for COW, then enable */ + usleep(100000); + ASSERT_EQ(0, change_event(true)); + + ASSERT_NE(-1, wait(&i)); + ASSERT_EQ(0, WEXITSTATUS(i)); + + /* Ensure child doesn't disable parent */ + if (fork() == 0) + exit(reg_disable(&self->check, 0)); + + ASSERT_NE(-1, wait(&i)); + ASSERT_EQ(0, WEXITSTATUS(i)); + ASSERT_EQ(1, self->check); + ASSERT_EQ(0, change_event(false)); + ASSERT_EQ(0, self->check); +} + +/* Waits up to 1 sec for enablement */ +static int clone_check(void *check) +{ + int i; + + for (i = 0; i < 10; ++i) { + usleep(100000); + + if (*(long *)check) + return 0; + } + + return 1; +} + +TEST_F(user, clones) { + int i, stack_size = 4096; + void *stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, + -1, 0); + + ASSERT_NE(MAP_FAILED, stack); + ASSERT_EQ(0, reg_enable(&self->check, sizeof(int), 0)); + ASSERT_EQ(0, self->check); + + /* Shared VM should see enablements */ + ASSERT_NE(-1, clone(&clone_check, stack + stack_size, + CLONE_VM | SIGCHLD, &self->check)); + + ASSERT_EQ(0, change_event(true)); + ASSERT_NE(-1, wait(&i)); + ASSERT_EQ(0, WEXITSTATUS(i)); + munmap(stack, stack_size); + ASSERT_EQ(0, change_event(false)); +} + +int main(int argc, char **argv) +{ + return test_harness_run(argc, argv); +} From patchwork Tue Feb 21 21:11:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Beau Belgrave X-Patchwork-Id: 60256 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp225385wrd; Tue, 21 Feb 2023 13:13:58 -0800 (PST) X-Google-Smtp-Source: AK7set/k56AyW0jG7AfjwmWmHRH04dwhkNfxQoQ0NVsyoT9fVRQ2hs+efvYMMJ6TSDCjWjnT57ab X-Received: by 2002:a05:6a20:3c92:b0:be:b49e:a634 with SMTP id b18-20020a056a203c9200b000beb49ea634mr6938550pzj.23.1677014037837; Tue, 21 Feb 2023 13:13:57 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677014037; cv=none; d=google.com; s=arc-20160816; b=WuSYOXHOsFZsBYcTK1dIFYi7YElSsWrvaLj03NyvJnhB148h7Hb+pQFbYWPhNnuQc9 izOqufCRRYhjHqiJO6+hXgBX2LbLF1MiOq2EnOpE1sAkaupTadl78hnQpALz+SvZQN9t WzHy1+Z5qdjEUmYxmJIiUKRdN/rsEiFSylmU9S6upsm59LfYOpyj5qWXLpGNMxdZQiOi reptX81Zmq+O3a8dD7zyRG7tAhQz2Z0rn6LlxIf8Hjpc2/a1qB9Wc92MFZL0+5b3hj1D DstdoMTUm74CeUZlLyCEs4UXZhKhu0XGTH6Mvw1QYQwtF5WOqo5CjJaMXBz2ITFkNaot labw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-filter; 
bh=dmkEXknb1A0Qhw2AWPTXryudmpuAFo4IqxhDoe6dNiw=; b=BsKb5z1PV0k03VFYpFOw6VievwILgGZ1JAsaqQ+Wg2n6BJlE4NPk7CfqwFpHVKDobj Q+9hX1r+6ZIyFqX5tgF7+OLnxQYZp3wgwHQRZ4BnMegoXDa2brWRevskgzr6vILGwmo8 p7QUzrtJuN8qXhy0Dmr0pl56KTj31+R48240iQrmBCoJuSpfc8q+8DmVFknn2+cTA8K3 qkR014kZrQk6yU2x2N3jAhy5L4rbYISXLNQ+ELqbXsUojNS3DEfPNuog5g5Q+QZpde2k AF7E8pgWIVwyqsFUGjEt/L8itD6JyzRyvcAlUN8uv9QwyfQZ7XV29+0Zg6fbUTzJMWPL urDQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=lx3feqfY; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id e12-20020a65478c000000b004fcc0088e49si28171pgs.142.2023.02.21.13.13.44; Tue, 21 Feb 2023 13:13:57 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=lx3feqfY; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230006AbjBUVMF (ORCPT + 99 others); Tue, 21 Feb 2023 16:12:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43404 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229840AbjBUVLx (ORCPT ); Tue, 21 Feb 2023 16:11:53 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 4283130B0C; Tue, 21 Feb 2023 13:11:52 -0800 (PST) Received: from W11-BEAU-MD.localdomain (unknown [76.135.27.212]) by linux.microsoft.com (Postfix) with ESMTPSA id 8134B209A93F; Tue, 21 Feb 2023 13:11:51 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 8134B209A93F DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1677013911; bh=dmkEXknb1A0Qhw2AWPTXryudmpuAFo4IqxhDoe6dNiw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=lx3feqfYF+aEhN1mP3tQCcQD1e60V/NolIzXLKvQb3OD0NFi/tqVHTvtuYYX9E39b Q3k7ob2SXXoxxt8Wf/FzpCxyAqgo0vutRwwN79bYJPy3AyBVTE4RhOpUOfXAddi0EH /hci1B8YQnPMz+s1a4T+A6LYfh2NggSBhkYNaUwg= From: Beau Belgrave To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org, akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org, tglx@linutronix.de Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v8 08/11] tracing/user_events: Use write ABI in example Date: Tue, 21 Feb 2023 13:11:40 -0800 Message-Id: <20230221211143.574-9-beaub@linux.microsoft.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com> References: <20230221211143.574-1-beaub@linux.microsoft.com> MIME-Version: 1.0 X-Spam-Status: No, score=-19.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham 
autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1758476671676550460?= X-GMAIL-MSGID: =?utf-8?q?1758476671676550460?= The ABI has changed to use a remote write approach. Update the example to show the expected use of this new ABI. Also remove debugfs path and use tracefs to ensure example works in more environments. Signed-off-by: Beau Belgrave --- samples/user_events/example.c | 47 +++++++---------------------------- 1 file changed, 9 insertions(+), 38 deletions(-) diff --git a/samples/user_events/example.c b/samples/user_events/example.c index d06dc24156ec..28165a096697 100644 --- a/samples/user_events/example.c +++ b/samples/user_events/example.c @@ -9,51 +9,28 @@ #include #include #include +#include #include #include #include -#include -#include #include -#if __BITS_PER_LONG == 64 -#define endian_swap(x) htole64(x) -#else -#define endian_swap(x) htole32(x) -#endif +const char *data_file = "/sys/kernel/tracing/user_events_data"; +int enabled = 0; -/* Assumes debugfs is mounted */ -const char *data_file = "/sys/kernel/debug/tracing/user_events_data"; -const char *status_file = "/sys/kernel/debug/tracing/user_events_status"; - -static int event_status(long **status) -{ - int fd = open(status_file, O_RDONLY); - - *status = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, - MAP_SHARED, fd, 0); - - close(fd); - - if (*status == MAP_FAILED) - return -1; - - return 0; -} - -static int event_reg(int fd, const char *command, long *index, long *mask, - int *write) +static int event_reg(int fd, const char *command, int *write, int *enabled) { struct user_reg reg = {0}; reg.size = sizeof(reg); + reg.enable_bit = 31; + reg.enable_size = sizeof(*enabled); + reg.enable_addr = (__u64)enabled; reg.name_args = (__u64)command; if (ioctl(fd, DIAG_IOCSREG, ®) == -1) return -1; - *index = reg.status_bit / __BITS_PER_LONG; - *mask = endian_swap(1L << (reg.status_bit % __BITS_PER_LONG)); *write = reg.write_index; return 0; @@ -62,17 +39,12 @@ static int event_reg(int fd, const char *command, long *index, long *mask, int main(int argc, char **argv) { int data_fd, write; - long index, mask; - long *status_page; struct iovec io[2]; __u32 count = 0; - if (event_status(&status_page) == -1) - return errno; - data_fd = open(data_file, O_RDWR); - if (event_reg(data_fd, "test u32 count", &index, &mask, &write) == -1) + if (event_reg(data_fd, "test u32 count", &write, &enabled) == -1) return errno; /* Setup iovec */ @@ -80,13 +52,12 @@ int main(int argc, char **argv) io[0].iov_len = sizeof(write); io[1].iov_base = &count; io[1].iov_len = sizeof(count); - ask: printf("Press enter to check status...\n"); getchar(); /* Check if anyone is listening */ - if (status_page[index] & mask) { + if (enabled) { /* Yep, trace out our data */ writev(data_fd, (const struct iovec *)io, 2); From patchwork Tue Feb 21 21:11:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Beau Belgrave X-Patchwork-Id: 60257 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:5915:0:0:0:0:0 with SMTP id v21csp225402wrd; Tue, 21 Feb 2023 13:13:59 -0800 (PST) X-Google-Smtp-Source: AK7set+d65tQUknDeN/yPrDA5d0zSeKHFs1t17lN+Z8jFY5dBsCpFqGmG9gL8+Xy1OXkNhasDlfn X-Received: by 2002:a05:6a21:7896:b0:b8:42b0:1215 with SMTP id 
bf22-20020a056a21789600b000b842b01215mr19249616pzc.5.1677014039356; Tue, 21 Feb 2023 13:13:59 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1677014039; cv=none; d=google.com; s=arc-20160816; b=QGJ4ZFYqpnWxX3OXBpCXnK/wLg8IlQN7l3FoVEZYQBoYYpeEQ8M45VDX/OC/zUZd2+ 6eSSEKlWXycRuvm2CtUduIlFHmHOsFiRNNsZRjvg4gXTDCFtA4m6oPa6Th6uoHVwx+rD 7Kv7g8LWvQ/LNXr1GxCsZ3R6H+BD+zvTz+Iwj2a/bI328u2PbLvVBvMXei1jCGHpDlp7 8IaHIIJPZm3XNz0Jm/Gii/C8WvJqyEBKJ9r/Y0eHY11wP55ak8Q7rDS61Cbmlr8A1QuU d46U9evk6051EFxfbR8TQPko0QemsC0LrbXxIDW4paSDpcTHEqwY0BuWHpu5nKN/sDLd LTbg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature:dkim-filter; bh=gZQIOerORAHQ2Mn7LD3nOn2GElU/ypPYbhlbPDEbsno=; b=0jOAFsALGYEuhlg/eKoWhdcXn+G6m7Jk3xuajSCwbKhO0/sBe/Uk/8B1qEl9WKsEBp aoGaqakJODcKbS+I5LYrnfFvehbXm/myE7ZeTCfAJDYDEnKxiBbq2iUk+YHGiXvuEUSa T62rJUgBEtwT3RdjF5ZqSkMT7bwCSWJ1svXvdj+BBjb/voFyB6IQNqLJ/pliKpMOR7TS 86zOHSkzNHmAong7X1iu/ikgFVhuO9PZfIjhGPQSfHSzBqO+purm8BkOPENXoJSziKyt E9CaY0cXW8tRrpM9uxWcrVDg89tf+9bvNebGNKcXI2+ltQcUt205EmoEGMOihBgeW0tK 9YcA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=BUdTvGEK; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id t2-20020a654082000000b004fbf6b6908asi6510920pgp.480.2023.02.21.13.13.47; Tue, 21 Feb 2023 13:13:59 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@linux.microsoft.com header.s=default header.b=BUdTvGEK; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linux.microsoft.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230086AbjBUVMX (ORCPT + 99 others); Tue, 21 Feb 2023 16:12:23 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43428 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230020AbjBUVLy (ORCPT ); Tue, 21 Feb 2023 16:11:54 -0500 Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182]) by lindbergh.monkeyblade.net (Postfix) with ESMTP id 9CB14301B9; Tue, 21 Feb 2023 13:11:52 -0800 (PST) Received: from W11-BEAU-MD.localdomain (unknown [76.135.27.212]) by linux.microsoft.com (Postfix) with ESMTPSA id DD925209A941; Tue, 21 Feb 2023 13:11:51 -0800 (PST) DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com DD925209A941 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com; s=default; t=1677013912; bh=gZQIOerORAHQ2Mn7LD3nOn2GElU/ypPYbhlbPDEbsno=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=BUdTvGEKFTxG53UcqiOSURDvw98xB+iyfYf9dF7ScN9+7MEOiD8QDyVROM6socop4 pBIogVpF1trRimejMkyNUnuRrfKuJtxESPX3a0I/JczzPZcQ4L/HAL2u/umMaulFYv 9fRFAzMV3r9vnlrAdnF2TyU6LKgM8FJMFct39tn4= From: Beau Belgrave To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com, dcook@linux.microsoft.com, 
alanau@linux.microsoft.com, brauner@kernel.org, akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org, tglx@linutronix.de Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v8 09/11] tracing/user_events: Update documentation for ABI Date: Tue, 21 Feb 2023 13:11:41 -0800 Message-Id: <20230221211143.574-10-beaub@linux.microsoft.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com> References: <20230221211143.574-1-beaub@linux.microsoft.com> MIME-Version: 1.0 X-Spam-Status: No, score=-19.8 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,ENV_AND_HDR_SPF_MATCH,RCVD_IN_DNSWL_MED, SPF_HELO_PASS,SPF_PASS,USER_IN_DEF_DKIM_WL,USER_IN_DEF_SPF_WL autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1758476673276795998?= X-GMAIL-MSGID: =?utf-8?q?1758476673276795998?= The ABI for user_events has changed from mmap() based to remote writes. Update the documentation to reflect these changes, add new section for unregistering events since lifetime is now tied to tasks instead of files. Signed-off-by: Beau Belgrave --- Documentation/trace/user_events.rst | 177 ++++++++++++++++------------ 1 file changed, 102 insertions(+), 75 deletions(-) diff --git a/Documentation/trace/user_events.rst b/Documentation/trace/user_events.rst index 9f181f342a70..0180714f10e3 100644 --- a/Documentation/trace/user_events.rst +++ b/Documentation/trace/user_events.rst @@ -11,20 +11,19 @@ that can be viewed via existing tools, such as ftrace and perf. To enable this feature, build your kernel with CONFIG_USER_EVENTS=y. Programs can view status of the events via -/sys/kernel/debug/tracing/user_events_status and can both register and write -data out via /sys/kernel/debug/tracing/user_events_data. +/sys/kernel/tracing/user_events_status and can both register and write +data out via /sys/kernel/tracing/user_events_data. -Programs can also use /sys/kernel/debug/tracing/dynamic_events to register and +Programs can also use /sys/kernel/tracing/dynamic_events to register and delete user based events via the u: prefix. The format of the command to dynamic_events is the same as the ioctl with the u: prefix applied. Typically programs will register a set of events that they wish to expose to tools that can read trace_events (such as ftrace and perf). The registration -process gives back two ints to the program for each event. The first int is -the status bit. This describes which bit in little-endian format in the -/sys/kernel/debug/tracing/user_events_status file represents this event. The -second int is the write index which describes the data when a write() or -writev() is called on the /sys/kernel/debug/tracing/user_events_data file. +process tells the kernel which address and bit to reflect if any tool has +enabled the event and data should be written. The registration will give back +a write index which describes the data when a write() or writev() is called +on the /sys/kernel/tracing/user_events_data file. The structures referenced in this document are contained within the /include/uapi/linux/user_events.h file in the source tree. 
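As a rough end-to-end sketch of the flow described above (the event name and values here are made up for illustration; the individual struct fields are documented in the sections that follow):

	/* Assumes <fcntl.h>, <sys/ioctl.h>, <sys/uio.h>, <linux/user_events.h> */
	int enabled = 0;	/* word the kernel updates when a tool enables the event */
	struct user_reg reg = {0};
	struct iovec io[2];
	__u32 value = 42;
	int data_fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

	reg.size = sizeof(reg);
	reg.enable_bit = 0;
	reg.enable_size = sizeof(enabled);
	reg.enable_addr = (__u64)&enabled;
	reg.name_args = (__u64)"example_event u32 value";

	ioctl(data_fd, DIAG_IOCSREG, &reg);

	io[0].iov_base = &reg.write_index;	/* data writes always lead with the index */
	io[0].iov_len = sizeof(reg.write_index);
	io[1].iov_base = &value;
	io[1].iov_len = sizeof(value);

	if (enabled)				/* only pay the write cost when enabled */
		writev(data_fd, io, 2);
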
@@ -35,29 +34,70 @@ filesystem and may be mounted at different paths than above.*
 
 Registering
 -----------
 Registering within a user process is done via ioctl() out to the
-/sys/kernel/debug/tracing/user_events_data file. The command to issue is
+/sys/kernel/tracing/user_events_data file. The command to issue is
 DIAG_IOCSREG.
 
 This command takes a packed struct user_reg as an argument::
 
   struct user_reg {
-        u32 size;
-        u64 name_args;
-        u32 status_bit;
-        u32 write_index;
-  };
+        /* Input: Size of the user_reg structure being used */
+        __u32 size;
+
+        /* Input: Bit in enable address to use */
+        __u8 enable_bit;
+
+        /* Input: Enable size in bytes at address */
+        __u8 enable_size;
+
+        /* Input: Flags for future use, set to 0 */
+        __u16 flags;
+
+        /* Input: Address to update when enabled */
+        __u64 enable_addr;
+
+        /* Input: Pointer to string with event name, description and flags */
+        __u64 name_args;
+
+        /* Output: Index of the event to use when writing data */
+        __u32 write_index;
+  } __attribute__((__packed__));
+
+The struct user_reg requires all the above inputs to be set appropriately.
 
-The struct user_reg requires two inputs, the first is the size of the structure
-to ensure forward and backward compatibility. The second is the command string
-to issue for registering. Upon success two outputs are set, the status bit
-and the write index.
++ size: This must be set to sizeof(struct user_reg).
+
++ enable_bit: The bit to reflect the event status at the address specified by
+  enable_addr.
+
++ enable_size: The size of the value specified by enable_addr.
+  This must be 4 (32-bit) or 8 (64-bit). 64-bit values are only allowed to be
+  used on 64-bit kernels, however, 32-bit can be used on all kernels.
+
++ flags: The flags to use, if any. For the initial version this must be 0.
+  Callers should first attempt to use flags and retry without flags to ensure
+  support for lower versions of the kernel. If a flag is not supported -EINVAL
+  is returned.
+
++ enable_addr: The address of the value to use to reflect event status. This
+  must be naturally aligned and write accessible within the user program.
+
++ name_args: The name and arguments to describe the event, see command format
+  for details.
+
+Upon successful registration the following is set.
+
++ write_index: The index to use for this file descriptor that represents this
+  event when writing out data. The index is unique to this instance of the file
+  descriptor that was used for the registration. See writing data for details.
 
 User based events show up under tracefs like any other event under the
 subsystem named "user_events". This means tools that wish to attach to the
-events need to use /sys/kernel/debug/tracing/events/user_events/[name]/enable
+events need to use /sys/kernel/tracing/events/user_events/[name]/enable
 or perf record -e user_events:[name] when attaching/recording.
 
-**NOTE:** *The write_index returned is only valid for the FD that was used*
+**NOTE:** The event subsystem name by default is "user_events". Callers should
+not assume it will always be "user_events". Operators reserve the right in the
+future to change the subsystem name per-process to accommodate event isolation.
 
 Command Format
 ^^^^^^^^^^^^^^
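As an aside for readers of this patch, a minimal user-space sketch of the
registration flow described above may help. It is not part of the patch: the
"test u32 count" command string, the choice of bit 31, and the omission of
error handling are illustrative only, and it assumes the uapi header from
earlier in this series is installed as <linux/user_events.h>::

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  #include <linux/user_events.h>

  /* Value the kernel updates (bit 31 here) while a tool has the event enabled. */
  static unsigned int enabled;

  int main(void)
  {
          struct user_reg reg = {0};
          int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

          if (fd < 0)
                  return 1;

          reg.size = sizeof(reg);                        /* flags stays 0 */
          reg.enable_bit = 31;
          reg.enable_size = sizeof(enabled);             /* 4-byte value */
          reg.enable_addr = (__u64)(uintptr_t)&enabled;
          reg.name_args = (__u64)(uintptr_t)"test u32 count";

          if (ioctl(fd, DIAG_IOCSREG, &reg) < 0)
                  return 1;

          /* On success the kernel fills in reg.write_index for later writes. */
          printf("write_index=%u\n", reg.write_index);

          close(fd);
          return 0;
  }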
@@ -94,9 +134,9 @@ Would be represented by the following field::
 
   struct mytype myname 20
 
 Deleting
------------
+--------
 Deleting an event from within a user process is done via ioctl() out to the
-/sys/kernel/debug/tracing/user_events_data file. The command to issue is
+/sys/kernel/tracing/user_events_data file. The command to issue is
 DIAG_IOCSDEL.
 
 This command only requires a single string specifying the event to delete by
@@ -104,92 +144,79 @@ its name. Delete will only succeed if there are no references left to the
 event (in both user and kernel space). User programs should use a separate file
 to request deletes than the one used for registration due to this.
 
-Status
-------
-When tools attach/record user based events the status of the event is updated
-in realtime. This allows user programs to only incur the cost of the write() or
-writev() calls when something is actively attached to the event.
-
-User programs call mmap() on /sys/kernel/debug/tracing/user_events_status to
-check the status for each event that is registered. The bit to check in the
-file is given back after the register ioctl() via user_reg.status_bit. The bit
-is always in little-endian format. Programs can check if the bit is set either
-using a byte-wise index with a mask or a long-wise index with a little-endian
-mask.
-
-Currently the size of user_events_status is a single page, however, custom
-kernel configurations can change this size to allow more user based events. In
-all cases the size of the file is a multiple of a page size.
+Unregistering
+-------------
+If after registering an event it is no longer wanted to be updated then it can
+be disabled via ioctl() out to the /sys/kernel/tracing/user_events_data file.
+The command to issue is DIAG_IOCSUNREG. This is different than deleting, where
+deleting actually removes the event from the system. Unregistering simply tells
+the kernel your process is no longer interested in updates to the event.
 
-For example, if the register ioctl() gives back a status_bit of 3 you would
-check byte 0 (3 / 8) of the returned mmap data and then AND the result with 8
-(1 << (3 % 8)) to see if anything is attached to that event.
+This command takes a packed struct user_unreg as an argument::
 
-A byte-wise index check is performed as follows::
+  struct user_unreg {
+        /* Input: Size of the user_unreg structure being used */
+        __u32 size;
 
-        int index, mask;
-        char *status_page;
+        /* Input: Bit to unregister */
+        __u8 disable_bit;
 
-        index = status_bit / 8;
-        mask = 1 << (status_bit % 8);
+        /* Input: Reserved, set to 0 */
+        __u8 __reserved;
 
-        ...
+        /* Input: Reserved, set to 0 */
+        __u16 __reserved2;
 
-        if (status_page[index] & mask) {
-                /* Enabled */
-        }
+        /* Input: Address to unregister */
+        __u64 disable_addr;
+  } __attribute__((__packed__));
 
-A long-wise index check is performed as follows::
+The struct user_unreg requires all the above inputs to be set appropriately.
 
-        #include <asm/bitsperlong.h>
-        #include <endian.h>
++ size: This must be set to sizeof(struct user_unreg).
 
-        #if __BITS_PER_LONG == 64
-        #define endian_swap(x) htole64(x)
-        #else
-        #define endian_swap(x) htole32(x)
-        #endif
++ disable_bit: This must be set to the bit to disable (same bit that was
+  previously registered via enable_bit).
 
-        long index, mask, *status_page;
++ disable_addr: This must be set to the address to disable (same address that was
+  previously registered via enable_addr).
 
-        index = status_bit / __BITS_PER_LONG;
-        mask = 1L << (status_bit % __BITS_PER_LONG);
-        mask = endian_swap(mask);
+**NOTE:** Events are automatically unregistered when execve() is invoked. During
+fork() the registered events will be retained and must be unregistered manually
+in each process if wanted.
 
-        ...
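Continuing the illustrative sketch from the registering section (same
hypothetical fd and enabled variable, error handling omitted), unregistering
the address/bit pair before the backing memory goes away might look like::

  struct user_unreg unreg = {0};

  unreg.size = sizeof(unreg);
  unreg.disable_bit = 31;                          /* same value as enable_bit */
  unreg.disable_addr = (__u64)(uintptr_t)&enabled; /* same value as enable_addr */

  /* Ask the kernel to stop updating 'enabled' for this event. */
  if (ioctl(fd, DIAG_IOCSUNREG, &unreg) < 0)
          perror("DIAG_IOCSUNREG");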
+Status
+------
+When tools attach/record user based events the status of the event is updated
+in realtime. This allows user programs to only incur the cost of the write() or
+writev() calls when something is actively attached to the event.
 
-        if (status_page[index] & mask) {
-                /* Enabled */
-        }
+The kernel will update the specified bit that was registered for the event as
+tools attach/detach from the event. User programs simply check if the bit is set
+to see if something is attached or not.
 
 Administrators can easily check the status of all registered events by reading
 the user_events_status file directly via a terminal. The output is
 as follows::
 
- Byte:Name [# Comments]
+ Name [# Comments]
  ...
  Active: ActiveCount
  Busy: BusyCount
- Max: MaxCount
 
 For example, on a system that has a single event the output looks like this::
 
- 1:test
+ test
  Active: 1
  Busy: 0
- Max: 32768
 
 If a user enables the user event via ftrace, the output would change to this::
 
- 1:test # Used by ftrace
+ test # Used by ftrace
  Active: 1
  Busy: 1
- Max: 32768
-
-**NOTE:** *A status bit of 0 will never be returned. This allows user programs
-to have a bit that can be used on error cases.*
 
 Writing Data
 ------------
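To round out the ABI change above, here is a sketch of the fast path the new
Status section describes, continuing the same hypothetical example (it also
needs <sys/uio.h> for writev()): the program checks the registered value before
paying for a write, and the writev() payload leads with the write_index
returned at registration, as the existing Writing Data section documents::

  struct iovec io[2];
  __u32 count = 1;                     /* payload for the "u32 count" field */

  if (enabled & (1U << 31)) {          /* kernel sets bit 31 while a tool is attached */
          io[0].iov_base = &reg.write_index;
          io[0].iov_len = sizeof(reg.write_index);
          io[1].iov_base = &count;
          io[1].iov_len = sizeof(count);

          if (writev(fd, io, 2) < 0)
                  perror("writev");
  }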
From patchwork Tue Feb 21 21:11:42 2023
X-Patchwork-Submitter: Beau Belgrave <beaub@linux.microsoft.com>
X-Patchwork-Id: 60259
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
    akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
    tglx@linutronix.de
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v8 10/11] tracing/user_events: Charge event allocs to cgroups
Date: Tue, 21 Feb 2023 13:11:42 -0800
Message-Id: <20230221211143.574-11-beaub@linux.microsoft.com>
In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>

Operators need a way to limit how much memory cgroups use. User events need
to be included in that accounting. Fix this by using GFP_KERNEL_ACCOUNT for
allocations generated by user programs for user_event tracing.
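For context on the change below (not part of the patch): GFP_KERNEL_ACCOUNT is
simply GFP_KERNEL with __GFP_ACCOUNT added, the flag that makes the allocator
charge the allocation to the current task's memory cgroup; the GFP_NOWAIT call
site gets the same effect by OR-ing in __GFP_ACCOUNT directly::

  /* From the kernel's gfp headers, shown here for reference: */
  #define GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)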
Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e4ee25d16f3b..222f2eb59c7c 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -442,7 +442,7 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig,
 	if (unlikely(test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(orig))))
 		return true;
 
-	enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT);
+	enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT | __GFP_ACCOUNT);
 
 	if (!enabler)
 		return false;
@@ -502,7 +502,7 @@ static struct user_event_mm *user_event_mm_create(struct task_struct *t)
 	struct user_event_mm *user_mm;
 	unsigned long flags;
 
-	user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL);
+	user_mm = kzalloc(sizeof(*user_mm), GFP_KERNEL_ACCOUNT);
 
 	if (!user_mm)
 		return NULL;
@@ -662,7 +662,7 @@ static struct user_event_enabler
 	if (!user_mm)
 		return NULL;
 
-	enabler = kzalloc(sizeof(*enabler), GFP_KERNEL);
+	enabler = kzalloc(sizeof(*enabler), GFP_KERNEL_ACCOUNT);
 
 	if (!enabler)
 		goto out;
@@ -870,7 +870,7 @@ static int user_event_add_field(struct user_event *user, const char *type,
 	struct ftrace_event_field *field;
 	int validator_flags = 0;
 
-	field = kmalloc(sizeof(*field), GFP_KERNEL);
+	field = kmalloc(sizeof(*field), GFP_KERNEL_ACCOUNT);
 
 	if (!field)
 		return -ENOMEM;
@@ -889,7 +889,7 @@ static int user_event_add_field(struct user_event *user, const char *type,
 	if (strstr(type, "char") != NULL)
 		validator_flags |= VALIDATOR_ENSURE_NULL;
 
-	validator = kmalloc(sizeof(*validator), GFP_KERNEL);
+	validator = kmalloc(sizeof(*validator), GFP_KERNEL_ACCOUNT);
 
 	if (!validator) {
 		kfree(field);
@@ -1175,7 +1175,7 @@ static int user_event_create_print_fmt(struct user_event *user)
 
 	len = user_event_set_print_fmt(user, NULL, 0);
 
-	print_fmt = kmalloc(len, GFP_KERNEL);
+	print_fmt = kmalloc(len, GFP_KERNEL_ACCOUNT);
 
 	if (!print_fmt)
 		return -ENOMEM;
@@ -1508,7 +1508,7 @@ static int user_event_create(const char *raw_command)
 	raw_command += USER_EVENTS_PREFIX_LEN;
 	raw_command = skip_spaces(raw_command);
 
-	name = kstrdup(raw_command, GFP_KERNEL);
+	name = kstrdup(raw_command, GFP_KERNEL_ACCOUNT);
 
 	if (!name)
 		return -ENOMEM;
@@ -1704,7 +1704,7 @@ static int user_event_parse(struct user_event_group *group, char *name,
 		return 0;
 	}
 
-	user = kzalloc(sizeof(*user), GFP_KERNEL);
+	user = kzalloc(sizeof(*user), GFP_KERNEL_ACCOUNT);
 
 	if (!user)
 		return -ENOMEM;
@@ -1874,7 +1874,7 @@ static int user_events_open(struct inode *node, struct file *file)
 	if (!group)
 		return -ENOENT;
 
-	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	info = kzalloc(sizeof(*info), GFP_KERNEL_ACCOUNT);
 
 	if (!info)
 		return -ENOMEM;
@@ -1927,7 +1927,7 @@ static int user_events_ref_add(struct user_event_file_info *info,
 
 	size = struct_size(refs, events, count + 1);
 
-	new_refs = kzalloc(size, GFP_KERNEL);
+	new_refs = kzalloc(size, GFP_KERNEL_ACCOUNT);
 
 	if (!new_refs)
 		return -ENOMEM;
From patchwork Tue Feb 21 21:11:43 2023
X-Patchwork-Submitter: Beau Belgrave <beaub@linux.microsoft.com>
X-Patchwork-Id: 60254
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
    akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
    tglx@linutronix.de
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v8 11/11] tracing/user_events: Limit global user_event count
Date: Tue, 21 Feb 2023 13:11:43 -0800
Message-Id: <20230221211143.574-12-beaub@linux.microsoft.com>
In-Reply-To: <20230221211143.574-1-beaub@linux.microsoft.com>
References: <20230221211143.574-1-beaub@linux.microsoft.com>

Operators want to be able to ensure enough tracepoints exist on the system for
kernel components as well as for user components. Since there are only up to
64K events, by default allow up to half to be used by user events.

Add a boot parameter (user_events_max=%d) and a kernel sysctl parameter
(kernel.user_events_max) to set a global limit that is honored among all groups
on the system. This ensures hard limits can be set up to prevent user processes
from consuming all event IDs on the system.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 59 ++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 222f2eb59c7c..6a5ebe243999 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include <linux/sysctl.h>
 #include
 #include "trace.h"
 #include "trace_dynevent.h"
@@ -61,6 +62,12 @@ struct user_event_group {
 /* Group for init_user_ns mapping, top-most group */
 static struct user_event_group *init_group;
 
+/* Max allowed events for the whole system */
+static unsigned int max_user_events = 32768;
+
+/* Current number of events on the whole system */
+static unsigned int current_user_events;
+
 /*
  * Stores per-event properties, as users register events
  * within a file a user_event might be created if it does not
@@ -1241,6 +1248,8 @@ static int destroy_user_event(struct user_event *user)
 {
 	int ret = 0;
 
+	lockdep_assert_held(&event_mutex);
+
 	/* Must destroy fields before call removal */
 	user_event_destroy_fields(user);
 
@@ -1257,6 +1266,11 @@ static int destroy_user_event(struct user_event *user)
 	kfree(EVENT_NAME(user));
 	kfree(user);
 
+	if (current_user_events > 0)
+		current_user_events--;
+	else
+		pr_alert("BUG: Bad current_user_events\n");
+
 	return ret;
 }
 
@@ -1744,6 +1758,11 @@ static int user_event_parse(struct user_event_group *group, char *name,
 
 	mutex_lock(&event_mutex);
 
+	if (current_user_events >= max_user_events) {
+		ret = -EMFILE;
+		goto put_user_lock;
+	}
+
 	ret = user_event_trace_register(user);
 
 	if (ret)
@@ -1755,6 +1774,7 @@ static int user_event_parse(struct user_event_group *group, char *name,
 	dyn_event_init(&user->devent, &user_event_dops);
 	dyn_event_add(&user->devent, &user->call);
 	hash_add(group->register_table, &user->node, key);
+	current_user_events++;
 
 	mutex_unlock(&event_mutex);
@@ -2386,6 +2406,43 @@ static int create_user_tracefs(void)
 	return -ENODEV;
 }
 
+static int __init set_max_user_events(char *str)
+{
+	if (!str)
+		return 0;
+
+	if (kstrtouint(str, 0, &max_user_events))
+		return 0;
+
+	return 1;
+}
+__setup("user_events_max=", set_max_user_events);
+
+static int set_max_user_events_sysctl(struct ctl_table *table, int write,
+				      void *buffer, size_t *lenp, loff_t *ppos)
+{
+	int ret;
+
+	mutex_lock(&event_mutex);
+
+	ret = proc_douintvec(table, write, buffer, lenp, ppos);
+
+	mutex_unlock(&event_mutex);
+
+	return ret;
+}
+
+static struct ctl_table user_event_sysctls[] = {
+	{
+		.procname = "user_events_max",
+		.data = &max_user_events,
+		.maxlen = sizeof(unsigned int),
+		.mode = 0644,
+		.proc_handler = set_max_user_events_sysctl,
+	},
+	{}
+};
+
 static int __init trace_events_user_init(void)
 {
 	int ret;
@@ -2415,6 +2472,8 @@ static int __init trace_events_user_init(void)
 	if (dyn_event_register(&user_event_dops))
 		pr_warn("user_events could not register with dyn_events\n");
 
+	register_sysctl_init("kernel", user_event_sysctls);
+
 	return 0;
 }
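As a usage note (not part of the patch), the limit introduced here can be read
or lowered at runtime through the usual sysctl interface, or set at boot with
user_events_max=<n> on the kernel command line. A hypothetical sketch in C,
assuming procfs is mounted at /proc and treating 4096 purely as an example
value::

  #include <stdio.h>

  int main(void)
  {
          unsigned int max = 0;
          FILE *f = fopen("/proc/sys/kernel/user_events_max", "r");

          if (f) {
                  if (fscanf(f, "%u", &max) == 1)
                          printf("kernel.user_events_max = %u\n", max);
                  fclose(f);
          }

          /* Lowering the limit requires privileges (the sysctl is mode 0644). */
          f = fopen("/proc/sys/kernel/user_events_max", "w");

          if (f) {
                  fprintf(f, "4096\n");
                  fclose(f);
          }

          return 0;
  }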