From patchwork Fri Mar 24 22:30:22 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 74781
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
    akpm@linux-foundation.org, ebiederm@xmission.com, keescook@chromium.org,
    tglx@linutronix.de
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-trace-kernel@vger.kernel.org
Subject: [PATCH v9 05/11] tracing/user_events: Add ioctl for disabling addresses
Date: Fri, 24 Mar 2023 15:30:22 -0700
Message-Id: <20230324223028.172-6-beaub@linux.microsoft.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230324223028.172-1-beaub@linux.microsoft.com>
References: <20230324223028.172-1-beaub@linux.microsoft.com>

Enablements are now tracked by the lifetime of the task/mm. User
processes need to be able to disable their addresses if tracing is
requested to be turned off. Before, unmapping the page would suffice;
however, we now need a stronger contract. Add an ioctl to enable this.
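Below is a minimal sketch of the expected user mode call, assuming the
updated uapi header is available and that an enablement was previously
registered with the same address/bit; the descriptor name and helper
are illustrative only:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

/*
 * Illustrative only: data_fd is assumed to be an already-open
 * descriptor for user_events_data, and addr/bit are the values
 * used when the enablement was registered.
 */
static int unregister_enablement(int data_fd, __u64 addr, __u8 bit)
{
	struct user_unreg unreg;

	memset(&unreg, 0, sizeof(unreg));
	unreg.size = sizeof(unreg);	/* Checked by user_unreg_get() */
	unreg.disable_bit = bit;	/* Bit given at registration */
	unreg.disable_addr = addr;	/* Address given at registration */

	/* 0 if at least one enabler matched, -1 with errno otherwise */
	return ioctl(data_fd, DIAG_IOCSUNREG, &unreg);
}

On success the kernel stops updating the enablement bit at that address
for the calling process.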
A new flag bit, freeing, is added to user_event_enabler to ensure that
if removal of the event is attempted while a fault is being handled,
the removal is delayed until after the fault is reattempted.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 include/uapi/linux/user_events.h | 24 +++++++++
 kernel/trace/trace_events_user.c | 93 +++++++++++++++++++++++++++++++-
 2 files changed, 115 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/user_events.h b/include/uapi/linux/user_events.h
index 22521bc622db..3e7275e3234a 100644
--- a/include/uapi/linux/user_events.h
+++ b/include/uapi/linux/user_events.h
@@ -46,6 +46,27 @@ struct user_reg {
 	__u32 write_index;
 } __attribute__((__packed__));
 
+/*
+ * Describes an event unregister, callers must set the size, address and bit.
+ * This structure is passed to the DIAG_IOCSUNREG ioctl to disable bit updates.
+ */
+struct user_unreg {
+	/* Input: Size of the user_unreg structure being used */
+	__u32 size;
+
+	/* Input: Bit to unregister */
+	__u8 disable_bit;
+
+	/* Input: Reserved, set to 0 */
+	__u8 __reserved;
+
+	/* Input: Reserved, set to 0 */
+	__u16 __reserved2;
+
+	/* Input: Address to unregister */
+	__u64 disable_addr;
+} __attribute__((__packed__));
+
 #define DIAG_IOC_MAGIC '*'
 
 /* Request to register a user_event */
@@ -54,4 +75,7 @@ struct user_reg {
 /* Request to delete a user_event */
 #define DIAG_IOCSDEL _IOW(DIAG_IOC_MAGIC, 1, char *)
 
+/* Requests to unregister a user_event */
+#define DIAG_IOCSUNREG _IOW(DIAG_IOC_MAGIC, 2, struct user_unreg*)
+
 #endif /* _UAPI_LINUX_USER_EVENTS_H */
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 86bda1660536..e4ee25d16f3b 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -102,6 +102,9 @@ struct user_event_enabler {
 /* Bit 6 is for faulting status of enablement */
 #define ENABLE_VAL_FAULTING_BIT 6
 
+/* Bit 7 is for freeing status of enablement */
+#define ENABLE_VAL_FREEING_BIT 7
+
 /* Only duplicate the bit value */
 #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK
 
@@ -301,6 +304,12 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 	/* Prevent state changes from racing */
 	mutex_lock(&event_mutex);
 
+	/* User asked for enabler to be removed during fault */
+	if (test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))) {
+		user_event_enabler_destroy(enabler);
+		goto out;
+	}
+
 	/*
 	 * If we managed to get the page, re-issue the write. We do not
 	 * want to get into a possible infinite loop, which is why we only
@@ -315,7 +324,7 @@ static void user_event_enabler_fault_fixup(struct work_struct *work)
 		user_event_enabler_write(mm, enabler, true);
 		mmap_read_unlock(mm->mm);
 	}
-
+out:
 	mutex_unlock(&event_mutex);
 
 	/* In all cases we no longer need the mm or fault */
@@ -370,7 +379,8 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	if (refcount_read(&mm->tasks) == 0)
 		return -ENOENT;
 
-	if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))))
+	if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)) ||
+		     test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler))))
 		return -EBUSY;
 
 	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
@@ -428,6 +438,10 @@ static bool user_event_enabler_dup(struct user_event_enabler *orig,
 {
 	struct user_event_enabler *enabler;
 
+	/* Skip pending frees */
+	if (unlikely(test_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(orig))))
+		return true;
+
 	enabler = kzalloc(sizeof(*enabler), GFP_NOWAIT);
 
 	if (!enabler)
@@ -2086,6 +2100,75 @@ static long user_events_ioctl_del(struct user_event_file_info *info,
 	return ret;
 }
 
+static long user_unreg_get(struct user_unreg __user *ureg,
+			   struct user_unreg *kreg)
+{
+	u32 size;
+	long ret;
+
+	ret = get_user(size, &ureg->size);
+
+	if (ret)
+		return ret;
+
+	if (size > PAGE_SIZE)
+		return -E2BIG;
+
+	if (size < offsetofend(struct user_unreg, disable_addr))
+		return -EINVAL;
+
+	ret = copy_struct_from_user(kreg, sizeof(*kreg), ureg, size);
+
+	return ret;
+}
+
+/*
+ * Unregisters an enablement address/bit within a task/user mm.
+ */
+static long user_events_ioctl_unreg(unsigned long uarg)
+{
+	struct user_unreg __user *ureg = (struct user_unreg __user *)uarg;
+	struct user_event_mm *mm = current->user_event_mm;
+	struct user_event_enabler *enabler, *next;
+	struct user_unreg reg;
+	long ret;
+
+	ret = user_unreg_get(ureg, &reg);
+
+	if (ret)
+		return ret;
+
+	if (!mm)
+		return -ENOENT;
+
+	ret = -ENOENT;
+
+	/*
+	 * Flags freeing and faulting are used to indicate if the enabler is in
+	 * use at all. When faulting is set a page-fault is occurring asyncly.
+	 * During async fault if freeing is set, the enabler will be destroyed.
+	 * If no async fault is happening, we can destroy it now since we hold
+	 * the event_mutex during these checks.
+	 */
+	mutex_lock(&event_mutex);
+
+	list_for_each_entry_safe(enabler, next, &mm->enablers, link)
+		if (enabler->addr == reg.disable_addr &&
+		    (enabler->values & ENABLE_VAL_BIT_MASK) == reg.disable_bit) {
+			set_bit(ENABLE_VAL_FREEING_BIT, ENABLE_BITOPS(enabler));
+
+			if (!test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler)))
+				user_event_enabler_destroy(enabler);
+
+			/* Removed at least one */
+			ret = 0;
+		}
+
+	mutex_unlock(&event_mutex);
+
+	return ret;
+}
+
 /*
  * Handles the ioctl from user mode to register or alter operations.
  */
@@ -2108,6 +2191,12 @@ static long user_events_ioctl(struct file *file, unsigned int cmd,
 		ret = user_events_ioctl_del(info, uarg);
 		mutex_unlock(&group->reg_mutex);
 		break;
+
+	case DIAG_IOCSUNREG:
+		mutex_lock(&group->reg_mutex);
+		ret = user_events_ioctl_unreg(uarg);
+		mutex_unlock(&group->reg_mutex);
+		break;
 	}
 
 	return ret;
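For reference, a rough sketch of how a caller might interpret the error
contract implemented above; the wrapper name and error handling are
illustrative assumptions:

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/user_events.h>

/* Illustrative wrapper: maps the ioctl result to the cases handled above */
static int try_unreg(int data_fd, struct user_unreg *unreg)
{
	if (ioctl(data_fd, DIAG_IOCSUNREG, unreg) == 0)
		return 0;		/* At least one enabler was removed */

	switch (errno) {
	case ENOENT:
		/* No mm is tracked or no enabler matched the address/bit */
		break;
	case EINVAL:
		/* unreg->size smaller than the required fields */
		break;
	case E2BIG:
		/* unreg->size larger than PAGE_SIZE */
		break;
	}

	return -1;
}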