From patchwork Sat Oct 22 07:21:37 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 7263
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Hao Luo, Hou Tao, Martin KaFai Lau, Sasha Levin
Subject: [PATCH 5.19 218/717] bpf: Disable preemption when increasing per-cpu map_locked
Date: Sat, 22 Oct 2022 09:21:37 +0200
Message-Id: <20221022072453.805093302@linuxfoundation.org>
In-Reply-To: <20221022072415.034382448@linuxfoundation.org>
References: <20221022072415.034382448@linuxfoundation.org>
User-Agent: quilt/0.67

From: Hou Tao

[ Upstream commit 2775da21628738ce073a3a6a806adcbaada0f091 ]

Per-cpu htab->map_locked is used to prohibit concurrent accesses from both NMI and non-NMI contexts. But since commit 74d862b682f5 ("sched: Make migrate_disable/enable() independent of RT"), migrate_disable() is preemptible under CONFIG_PREEMPT, so map_locked now also disallows concurrent updates from normal contexts (e.g.
userspace processes) unexpectedly, as shown below:

    process A                       process B

    htab_map_update_elem()
      htab_lock_bucket()
        migrate_disable()
        /* return 1 */
        __this_cpu_inc_return()
        /* preempted by B */

                                    htab_map_update_elem()
                                      /* the same bucket as A */
                                      htab_lock_bucket()
                                        migrate_disable()
                                        /* return 2, so lock fails */
                                        __this_cpu_inc_return()
                                        return -EBUSY

A fix that seems feasible is to use in_nmi() in htab_lock_bucket() and check the value of map_locked only in NMI context. But that would re-introduce a deadlock on the bucket lock if htab_lock_bucket() is re-entered through a non-tracing program (e.g. an fentry program).

preempt_disable() cannot be used unconditionally either: when htab_use_raw_lock() is false, the bucket lock is a sleepable spinlock, which must not be taken with preemption disabled. Therefore, disable preemption only in the raw-spinlock case, keep migrate_disable() for the spinlock case, and defer the fix for concurrent updates until the kernel has its own BPF memory allocator.

Fixes: 74d862b682f5 ("sched: Make migrate_disable/enable() independent of RT")
Reviewed-by: Hao Luo
Signed-off-by: Hou Tao
Link: https://lore.kernel.org/r/20220831042629.130006-2-houtao@huaweicloud.com
Signed-off-by: Martin KaFai Lau
Signed-off-by: Sasha Levin
---
 kernel/bpf/hashtab.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 4dd5e0005afa..717f85973443 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -162,17 +162,25 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 				   unsigned long *pflags)
 {
 	unsigned long flags;
+	bool use_raw_lock;
 
 	hash = hash & HASHTAB_MAP_LOCK_MASK;
 
-	migrate_disable();
+	use_raw_lock = htab_use_raw_lock(htab);
+	if (use_raw_lock)
+		preempt_disable();
+	else
+		migrate_disable();
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
 		__this_cpu_dec(*(htab->map_locked[hash]));
-		migrate_enable();
+		if (use_raw_lock)
+			preempt_enable();
+		else
+			migrate_enable();
 		return -EBUSY;
 	}
 
-	if (htab_use_raw_lock(htab))
+	if (use_raw_lock)
 		raw_spin_lock_irqsave(&b->raw_lock, flags);
 	else
 		spin_lock_irqsave(&b->lock, flags);
@@ -185,13 +193,18 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      struct bucket *b, u32 hash,
 				      unsigned long flags)
 {
+	bool use_raw_lock = htab_use_raw_lock(htab);
+
 	hash = hash & HASHTAB_MAP_LOCK_MASK;
-	if (htab_use_raw_lock(htab))
+	if (use_raw_lock)
 		raw_spin_unlock_irqrestore(&b->raw_lock, flags);
 	else
 		spin_unlock_irqrestore(&b->lock, flags);
 	__this_cpu_dec(*(htab->map_locked[hash]));
-	migrate_enable();
+	if (use_raw_lock)
+		preempt_enable();
+	else
+		migrate_enable();
 }
 
 static bool htab_lru_map_delete_node(void *arg, struct bpf_lru_node *node);