From patchwork Sat Oct 14 20:40:40 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 152958
From: "Kirill A. Shutemov"
To: Borislav Petkov, Andy Lutomirski, Dave Hansen, Sean Christopherson,
    Andrew Morton, Joerg Roedel, Ard Biesheuvel
Cc: Andi Kleen, Kuppuswamy Sathyanarayanan, David Rientjes,
    Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
    Paolo Bonzini, Ingo Molnar, Dario Faggioli, Mike Rapoport,
    David Hildenbrand, Mel Gorman, marcelo.cerri@canonical.com,
    tim.gardner@canonical.com, khalid.elmously@canonical.com,
    philip.cox@canonical.com, aarcange@redhat.com, peterx@redhat.com,
    x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
    linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov", stable@kernel.org
Subject: [PATCH] efi/unaccepted: Fix soft lockups caused by parallel memory acceptance
Date: Sat, 14 Oct 2023 23:40:40 +0300
Message-ID: <20231014204040.28765-1-kirill.shutemov@linux.intel.com>
X-Mailer: git-send-email 2.41.0

Michael reported soft lockups on a system that has unaccepted memory.
This occurs when a user attempts to allocate and accept memory on
multiple CPUs simultaneously.

The root cause of the issue is that memory acceptance is serialized
with a spinlock, allowing only one CPU to accept memory at a time. The
other CPUs spin and wait for their turn, leading to starvation and
soft lockup reports.

To address this, the code has been modified to release the spinlock
while accepting memory. This allows for parallel memory acceptance on
multiple CPUs.

A newly introduced "accepting_list" keeps track of which memory is
currently being accepted. This is necessary to prevent parallel
acceptance of the same memory block. If a collision occurs, the lock
is released and the process is retried. Such collisions should rarely
occur.

The main path for memory acceptance is the page allocator, which
accepts memory in MAX_ORDER chunks. As long as MAX_ORDER is equal to
or larger than the unit_size, collisions will never occur because the
caller fully owns the memory block being accepted.
Aside from the page allocator, only memblock and deferred_free_range()
accept memory, but this only happens during boot.

The code has been tested with unit_size == 128MiB to trigger
collisions and validate the retry codepath.

Signed-off-by: Kirill A. Shutemov
Reported-by: Michael Roth
Reviewed-by: Nikolay Borisov
---
 drivers/firmware/efi/unaccepted_memory.c | 55 ++++++++++++++++++++++--
 1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/drivers/firmware/efi/unaccepted_memory.c b/drivers/firmware/efi/unaccepted_memory.c
index 853f7dc3c21d..8af0306c8e5c 100644
--- a/drivers/firmware/efi/unaccepted_memory.c
+++ b/drivers/firmware/efi/unaccepted_memory.c
@@ -5,9 +5,17 @@
 #include <linux/efi.h>
 #include <linux/memblock.h>
 
-/* Protects unaccepted memory bitmap */
+/* Protects unaccepted memory bitmap and accepting_list */
 static DEFINE_SPINLOCK(unaccepted_memory_lock);
 
+struct accept_range {
+	struct list_head list;
+	unsigned long start;
+	unsigned long end;
+};
+
+static LIST_HEAD(accepting_list);
+
 /*
  * accept_memory() -- Consult bitmap and accept the memory if needed.
  *
@@ -24,6 +32,7 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 {
 	struct efi_unaccepted_memory *unaccepted;
 	unsigned long range_start, range_end;
+	struct accept_range range, *entry;
 	unsigned long flags;
 	u64 unit_size;
 
@@ -78,20 +87,58 @@ void accept_memory(phys_addr_t start, phys_addr_t end)
 	if (end > unaccepted->size * unit_size * BITS_PER_BYTE)
 		end = unaccepted->size * unit_size * BITS_PER_BYTE;
 
-	range_start = start / unit_size;
-
+	range.start = start / unit_size;
+	range.end = DIV_ROUND_UP(end, unit_size);
+retry:
 	spin_lock_irqsave(&unaccepted_memory_lock, flags);
+
+	/*
+	 * Check if anybody works on accepting the same range of the memory.
+	 *
+	 * The check is done with unit_size granularity. It is crucial to
+	 * catch all accept requests to the same unit_size block, even if
+	 * they don't overlap on physical address level.
+	 */
+	list_for_each_entry(entry, &accepting_list, list) {
+		if (entry->end < range.start)
+			continue;
+		if (entry->start >= range.end)
+			continue;
+
+		/*
+		 * Somebody else accepting the range. Or at least part of it.
+		 *
+		 * Drop the lock and retry until it is complete.
+		 */
+		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+		cond_resched();
+		goto retry;
+	}
+
+	/*
+	 * Register that the range is about to be accepted.
+	 * Make sure nobody else will accept it.
+	 */
+	list_add(&range.list, &accepting_list);
+
+	range_start = range.start;
 	for_each_set_bitrange_from(range_start, range_end, unaccepted->bitmap,
-				   DIV_ROUND_UP(end, unit_size)) {
+				   range.end) {
 		unsigned long phys_start, phys_end;
 		unsigned long len = range_end - range_start;
 
 		phys_start = range_start * unit_size + unaccepted->phys_base;
 		phys_end = range_end * unit_size + unaccepted->phys_base;
 
+		spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+		arch_accept_memory(phys_start, phys_end);
+
+		spin_lock_irqsave(&unaccepted_memory_lock, flags);
 		bitmap_clear(unaccepted->bitmap, range_start, len);
 	}
+
+	list_del(&range.list);
 	spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
 }