From patchwork Wed Jan 4 21:42:23 2023
X-Patchwork-Submitter: Beau Belgrave
X-Patchwork-Id: 39125
From: Beau Belgrave <beaub@linux.microsoft.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
    dcook@linux.microsoft.com, alanau@linux.microsoft.com, brauner@kernel.org,
    akpm@linux-foundation.org
Cc: linux-trace-devel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v6 04/11] tracing/user_events: Fixup enable faults asyncly
Date: Wed, 4 Jan 2023 13:42:23 -0800
Message-Id: <20230104214230.26349-5-beaub@linux.microsoft.com>
In-Reply-To: <20230104214230.26349-1-beaub@linux.microsoft.com>
References: <20230104214230.26349-1-beaub@linux.microsoft.com>

When events are enabled within the various tracing facilities, such as
ftrace/perf, the event_mutex is held. As events are enabled, pages are
accessed. We do not want page faults to occur under this lock. Instead,
queue the fault to a workqueue so it can be handled in process context,
without the lock held.

The enable address is marked faulting while the async fault-in occurs,
which ensures we don't attempt to fault-in more than is necessary. Once
the page has been faulted in, the address write is re-attempted.
If the page couldn't be faulted in, we wait until the next time the
event is enabled to prevent any potential infinite loop.

Signed-off-by: Beau Belgrave <beaub@linux.microsoft.com>
---
 kernel/trace/trace_events_user.c | 120 +++++++++++++++++++++++++++++--
 1 file changed, 114 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index e2b3e42d256d..83f6832f891d 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -99,9 +99,23 @@ struct user_event_enabler {
 /* Bits 0-5 are for the bit to update upon enable/disable (0-63 allowed) */
 #define ENABLE_VAL_BIT_MASK 0x3F
 
+/* Bit 6 is for faulting status of enablement */
+#define ENABLE_VAL_FAULTING_BIT 6
+
 /* Only duplicate the bit value */
 #define ENABLE_VAL_DUP_MASK ENABLE_VAL_BIT_MASK
 
+#define ENABLE_BITOPS(e) ((unsigned long *)&(e)->values)
+
+/* Used for asynchronous faulting in of pages */
+struct user_event_enabler_fault {
+	struct work_struct		work;
+	struct user_event_mm		*mm;
+	struct user_event_enabler	*enabler;
+};
+
+static struct kmem_cache *fault_cache;
+
 /* Global list of memory descriptors using user_events */
 static LIST_HEAD(user_event_mms);
 static DEFINE_SPINLOCK(user_event_mms_lock);
@@ -263,7 +277,85 @@ static int user_event_mm_fault_in(struct user_event_mm *mm, unsigned long uaddr)
 }
 
 static int user_event_enabler_write(struct user_event_mm *mm,
-				    struct user_event_enabler *enabler)
+				    struct user_event_enabler *enabler,
+				    bool fixup_fault);
+
+static void user_event_enabler_fault_fixup(struct work_struct *work)
+{
+	struct user_event_enabler_fault *fault = container_of(
+		work, struct user_event_enabler_fault, work);
+	struct user_event_enabler *enabler = fault->enabler;
+	struct user_event_mm *mm = fault->mm;
+	unsigned long uaddr = enabler->addr;
+	int ret;
+
+	ret = user_event_mm_fault_in(mm, uaddr);
+
+	if (ret && ret != -ENOENT) {
+		struct user_event *user = enabler->event;
+
+		pr_warn("user_events: Fault for mm: 0x%pK @ 0x%llx event: %s\n",
+			mm->mm, (unsigned long long)uaddr, EVENT_NAME(user));
+	}
+
+	/* Prevent state changes from racing */
+	mutex_lock(&event_mutex);
+
+	/*
+	 * If we managed to get the page, re-issue the write. We do not
+	 * want to get into a possible infinite loop, which is why we only
+	 * attempt again directly if the page came in. If we couldn't get
+	 * the page here, then we will try again the next time the event is
+	 * enabled/disabled.
+	 */
+	clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
+
+	if (!ret) {
+		mmap_read_lock(mm->mm);
+		user_event_enabler_write(mm, enabler, true);
+		mmap_read_unlock(mm->mm);
+	}
+
+	mutex_unlock(&event_mutex);
+
+	/* In all cases we no longer need the mm or fault */
+	user_event_mm_put(mm);
+	kmem_cache_free(fault_cache, fault);
+}
+
+static bool user_event_enabler_queue_fault(struct user_event_mm *mm,
+					   struct user_event_enabler *enabler)
+{
+	struct user_event_enabler_fault *fault;
+
+	fault = kmem_cache_zalloc(fault_cache, GFP_NOWAIT | __GFP_NOWARN);
+
+	if (!fault)
+		return false;
+
+	INIT_WORK(&fault->work, user_event_enabler_fault_fixup);
+	fault->mm = user_event_mm_get(mm);
+	fault->enabler = enabler;
+
+	/* Don't try to queue in again while we have a pending fault */
+	set_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
+
+	if (!schedule_work(&fault->work)) {
+		/* Allow another attempt later */
+		clear_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler));
+
+		user_event_mm_put(mm);
+		kmem_cache_free(fault_cache, fault);
+
+		return false;
+	}
+
+	return true;
+}
+
+static int user_event_enabler_write(struct user_event_mm *mm,
+				    struct user_event_enabler *enabler,
+				    bool fixup_fault)
 {
 	unsigned long uaddr = enabler->addr;
 	unsigned long *ptr;
@@ -278,11 +370,19 @@ static int user_event_enabler_write(struct user_event_mm *mm,
 	if (refcount_read(&mm->tasks) == 0)
 		return -ENOENT;
 
+	if (unlikely(test_bit(ENABLE_VAL_FAULTING_BIT, ENABLE_BITOPS(enabler))))
+		return -EBUSY;
+
 	ret = pin_user_pages_remote(mm->mm, uaddr, 1, FOLL_WRITE | FOLL_NOFAULT,
 				    &page, NULL, NULL);
 
-	if (ret <= 0) {
-		pr_warn("user_events: Enable write failed\n");
+	if (unlikely(ret <= 0)) {
+		if (!fixup_fault)
+			return -EFAULT;
+
+		if (!user_event_enabler_queue_fault(mm, enabler))
+			pr_warn("user_events: Unable to queue fault handler\n");
+
 		return -EFAULT;
 	}
 
@@ -314,7 +414,7 @@ static void user_event_enabler_update(struct user_event *user)
 
 		list_for_each_entry_rcu(enabler, &mm->enablers, link)
 			if (enabler->event == user)
-				user_event_enabler_write(mm, enabler);
+				user_event_enabler_write(mm, enabler, true);
 
 		rcu_read_unlock();
 		mmap_read_unlock(mm->mm);
@@ -562,7 +662,7 @@ static struct user_event_enabler
 
 	/* Attempt to reflect the current state within the process */
 	mmap_read_lock(user_mm->mm);
-	*write_result = user_event_enabler_write(user_mm, enabler);
+	*write_result = user_event_enabler_write(user_mm, enabler, false);
 	mmap_read_unlock(user_mm->mm);
 
 	/*
@@ -2197,16 +2297,24 @@ static int __init trace_events_user_init(void)
 {
 	int ret;
 
+	fault_cache = KMEM_CACHE(user_event_enabler_fault, 0);
+
+	if (!fault_cache)
+		return -ENOMEM;
+
 	init_group = user_event_group_create(&init_user_ns);
 
-	if (!init_group)
+	if (!init_group) {
+		kmem_cache_destroy(fault_cache);
 		return -ENOMEM;
+	}
 
 	ret = create_user_tracefs();
 
 	if (ret) {
 		pr_warn("user_events could not register with tracefs\n");
 		user_event_group_destroy(init_group);
+		kmem_cache_destroy(fault_cache);
 		init_group = NULL;
 		return ret;
 	}
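
---

For context (not part of the patch): the sketch below shows roughly what the
userspace side being serviced here looks like. It assumes the struct user_reg
layout with enable_bit/enable_size/enable_addr and the DIAG_IOCSREG ioctl
described in this series' documentation; exact field names may differ in this
v6 revision, and "test_event" plus its payload are made up for illustration.

/*
 * Hypothetical example, not part of the patch: register an enable address
 * with user_events, assuming the remote-write ABI proposed by this series.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/user_events.h>

static int enabled; /* Kernel sets/clears bit 0 of this word remotely */

int main(void)
{
	struct user_reg reg;
	int fd = open("/sys/kernel/tracing/user_events_data", O_RDWR);

	if (fd < 0)
		return errno;

	memset(&reg, 0, sizeof(reg));
	reg.size = sizeof(reg);
	reg.enable_bit = 0;                    /* which bit to flip */
	reg.enable_size = sizeof(enabled);     /* 4-byte enable word */
	reg.enable_addr = (__u64)&enabled;     /* address the kernel updates */
	reg.name_args = (__u64)"test_event u32 count";

	if (ioctl(fd, DIAG_IOCSREG, &reg) < 0)
		return errno;

	/*
	 * From here on the kernel toggles 'enabled' asynchronously. A
	 * non-resident page at this address is exactly the case the new
	 * user_event_enabler_queue_fault() work above handles.
	 */
	while (!(enabled & (1 << reg.enable_bit)))
		usleep(100 * 1000);

	printf("test_event enabled\n");
	close(fd);
	return 0;
}

With something like this running, toggling the event (for example, writing 1
to /sys/kernel/tracing/events/user_events/test_event/enable) drives
user_event_enabler_write(); if the page backing 'enabled' is not resident,
pin_user_pages_remote() with FOLL_NOFAULT returns no page and the queued
fault handler performs the fault-in and retries the write.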