From patchwork Thu Dec 7 02:37:58 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174884
Message-ID: <20231207023820.217771011@goodmis.org>
User-Agent: quilt/0.67
Date: Wed, 06 Dec 2023 21:37:58 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 stable@vger.kernel.org, Petr Pavlu
Subject: [for-linus][PATCH 6/8] tracing: Fix a possible race when disabling buffered events
References: <20231207023752.712829638@goodmis.org>

From: Petr Pavlu

Function trace_buffered_event_disable() is responsible for freeing the pages
backing buffered events, and this process can run concurrently with
trace_event_buffer_lock_reserve(). The following race is currently possible:

* Function trace_buffered_event_disable() is called on CPU 0. It increments
  trace_buffered_event_cnt on each CPU and waits via synchronize_rcu() for
  each user of trace_buffered_event to complete.

* After synchronize_rcu() is finished, function trace_buffered_event_disable()
  has exclusive access to trace_buffered_event. All counters
  trace_buffered_event_cnt are at 1 and all pointers trace_buffered_event are
  still valid.

* At this point, on a different CPU 1, the execution reaches
  trace_event_buffer_lock_reserve(). The function calls
  preempt_disable_notrace() and only now enters an RCU read-side critical
  section. The function proceeds and reads a still valid pointer from
  trace_buffered_event[CPU1] into the local variable "entry". However, it
  does not yet read trace_buffered_event_cnt[CPU1]; that happens later.

* Function trace_buffered_event_disable() continues. It frees
  trace_buffered_event[CPU1] and decrements trace_buffered_event_cnt[CPU1]
  back to 0.

* Function trace_event_buffer_lock_reserve() continues. It reads and
  increments trace_buffered_event_cnt[CPU1] from 0 to 1. This makes it
  believe that it can use the "entry" it already obtained, but the pointer is
  now invalid and any access results in a use-after-free.
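[Editor's note: to make the interleaving above easier to follow, here is a
minimal single-threaded replay of it as a user-space C program. This is purely
illustrative, not kernel code: the per-CPU variables are modeled as plain
globals, RCU and preemption are left out, and the variable names are invented
for the model.]

	/*
	 * Illustrative replay of the race described above, in the
	 * problematic order. Per-CPU state is modeled with plain globals.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	static void *trace_buffered_event_cpu1;   /* models trace_buffered_event[CPU1] */
	static int trace_buffered_event_cnt_cpu1; /* models trace_buffered_event_cnt[CPU1] */

	int main(void)
	{
		/* Initial state: buffered events are enabled for CPU 1. */
		trace_buffered_event_cpu1 = malloc(4096);

		/* CPU 0, trace_buffered_event_disable(): increment the counter and
		 * wait in the first synchronize_rcu() for existing users to finish. */
		trace_buffered_event_cnt_cpu1++;          /* cnt: 0 -> 1 */

		/* CPU 1, trace_event_buffer_lock_reserve(): enter the RCU read-side
		 * section and read the still valid pointer into "entry"... */
		void *entry = trace_buffered_event_cpu1;

		/* ...but the counter has not been read yet. Meanwhile CPU 0 frees
		 * the page, clears the pointer and drops the counter back to 0. */
		free(trace_buffered_event_cpu1);
		trace_buffered_event_cpu1 = NULL;
		trace_buffered_event_cnt_cpu1--;          /* cnt: 1 -> 0 */

		/* CPU 1 resumes: it increments the counter from 0 to 1 and concludes
		 * that "entry" may be used, although it now points to freed memory.
		 * Any dereference of "entry" at this point is a use-after-free. */
		trace_buffered_event_cnt_cpu1++;          /* cnt: 0 -> 1 */
		(void)entry;
		printf("cnt = %d: the reader would now write into a freed page\n",
		       trace_buffered_event_cnt_cpu1);

		return 0;
	}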
Fix the problem by making a second synchronize_rcu() call after all
trace_buffered_event values are set to NULL. This waits on all potential
users in trace_event_buffer_lock_reserve() that still read a previous
pointer from trace_buffered_event.

Link: https://lore.kernel.org/all/20231127151248.7232-2-petr.pavlu@suse.com/
Link: https://lkml.kernel.org/r/20231205161736.19663-4-petr.pavlu@suse.com

Cc: stable@vger.kernel.org
Fixes: 0fc1b09ff1ff ("tracing: Use temp buffer when filtering events")
Signed-off-by: Petr Pavlu
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ef72354f61ce..fbcd3bafb93e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2791,13 +2791,17 @@ void trace_buffered_event_disable(void)
 		free_page((unsigned long)per_cpu(trace_buffered_event, cpu));
 		per_cpu(trace_buffered_event, cpu) = NULL;
 	}
+
 	/*
-	 * Make sure trace_buffered_event is NULL before clearing
-	 * trace_buffered_event_cnt.
+	 * Wait for all CPUs that potentially started checking if they can use
+	 * their event buffer only after the previous synchronize_rcu() call and
+	 * they still read a valid pointer from trace_buffered_event. It must be
+	 * ensured they don't see cleared trace_buffered_event_cnt else they
+	 * could wrongly decide to use the pointed-to buffer which is now freed.
 	 */
-	smp_wmb();
+	synchronize_rcu();
 
-	/* Do the work on each cpu */
+	/* For each CPU, relinquish the buffer */
 	on_each_cpu_mask(tracing_buffer_mask, enable_trace_buffered_event,
 			 NULL, true);
 }
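[Editor's note: for comparison, here is a matching replay with this patch
applied. It assumes, as the changelog implies, that the reader only uses the
buffered event when its increment brings the counter to exactly 1, and
otherwise undoes the increment and falls back to the regular ring buffer.
Because the final decrement on CPU 0 is now deferred until after the second
synchronize_rcu(), i.e. until the reader's read-side section has ended, the
reader sees the counter go from 1 to 2 and never touches the freed page.
Again, this is an illustrative model, not the kernel source.]

	/*
	 * Illustrative replay of the same interleaving with the fix applied.
	 * The disabler's final counter decrement (done via on_each_cpu_mask()
	 * in the kernel) now happens only after the second synchronize_rcu(),
	 * i.e. after the reader's read-side section is over.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	static void *trace_buffered_event_cpu1;
	static int trace_buffered_event_cnt_cpu1;

	int main(void)
	{
		trace_buffered_event_cpu1 = malloc(4096);

		/* CPU 0: increment the counter, first synchronize_rcu(). */
		trace_buffered_event_cnt_cpu1++;          /* cnt: 0 -> 1 */

		/* CPU 1: enter the read-side section, snapshot the pointer. */
		void *entry = trace_buffered_event_cpu1;

		/* CPU 0: free the page and clear the pointer as before, but do NOT
		 * decrement yet; the second synchronize_rcu() orders the decrement
		 * after the reader's read-side section. */
		free(trace_buffered_event_cpu1);
		trace_buffered_event_cpu1 = NULL;

		/* CPU 1: the increment now yields 2, not 1, so the reader undoes it
		 * and falls back to the regular ring buffer; "entry" is never used. */
		int val = ++trace_buffered_event_cnt_cpu1; /* cnt: 1 -> 2 */
		(void)entry;
		if (val != 1) {
			trace_buffered_event_cnt_cpu1--;   /* cnt: 2 -> 1 */
			printf("val = %d: stale entry ignored, ring buffer used\n", val);
		}

		/* CPU 0: second synchronize_rcu() has returned; relinquish the
		 * buffer by dropping the counter back to 0. */
		trace_buffered_event_cnt_cpu1--;          /* cnt: 1 -> 0 */

		return 0;
	}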