From patchwork Tue Oct 3 16:30:45 2023
X-Patchwork-Submitter: Yajun Deng
X-Patchwork-Id: 148000
From: Yajun Deng
To: rppt@kernel.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yajun Deng
Subject: [PATCH v3] memblock: don't run loop in memblock_add_range() twice
Date: Wed, 4 Oct 2023 00:30:45 +0800
Message-Id: <20231003163045.191184-1-yajun.deng@linux.dev>
MIME-Version: 1.0

memblock_add_range() currently runs its main loop twice. The first
round counts the number of regions needed to accommodate the new area;
the second actually inserts them. The first round isn't really needed:
we only have to check the region count before each insertion.

Check the count before calling memblock_insert_region(). If the count
is equal to the maximum, the array needs to be resized; otherwise,
insert directly. Also, there is a nested call here: if slab is
unavailable, we need to reserve the current array immediately.
Signed-off-by: Yajun Deng
---
v3: reserve the current array immediately if slab is unavailable.
v2: remove the changes of memblock_double_array.
v1: https://lore.kernel.org/all/20230927013752.2515238-1-yajun.deng@linux.dev/
---
 mm/memblock.c | 93 +++++++++++++++++++++++----------------------------
 1 file changed, 41 insertions(+), 52 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 5a88d6d24d79..71449c0b8bc8 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -588,11 +588,12 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 				   phys_addr_t base, phys_addr_t size,
 				   int nid, enum memblock_flags flags)
 {
-	bool insert = false;
 	phys_addr_t obase = base;
 	phys_addr_t end = base + memblock_cap_size(base, &size);
-	int idx, nr_new, start_rgn = -1, end_rgn;
+	int idx, start_rgn = -1, end_rgn;
 	struct memblock_region *rgn;
+	int use_slab = slab_is_available();
+	unsigned long ocnt = type->cnt;
 
 	if (!size)
 		return 0;
@@ -608,25 +609,6 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 		return 0;
 	}
 
-	/*
-	 * The worst case is when new range overlaps all existing regions,
-	 * then we'll need type->cnt + 1 empty regions in @type. So if
-	 * type->cnt * 2 + 1 is less than or equal to type->max, we know
-	 * that there is enough empty regions in @type, and we can insert
-	 * regions directly.
-	 */
-	if (type->cnt * 2 + 1 <= type->max)
-		insert = true;
-
-repeat:
-	/*
-	 * The following is executed twice. Once with %false @insert and
-	 * then with %true. The first counts the number of regions needed
-	 * to accommodate the new area. The second actually inserts them.
-	 */
-	base = obase;
-	nr_new = 0;
-
 	for_each_memblock_type(idx, type, rgn) {
 		phys_addr_t rbase = rgn->base;
 		phys_addr_t rend = rbase + rgn->size;
@@ -644,15 +626,30 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 			WARN_ON(nid != memblock_get_region_node(rgn));
 #endif
 			WARN_ON(flags != rgn->flags);
-			nr_new++;
-			if (insert) {
-				if (start_rgn == -1)
-					start_rgn = idx;
-				end_rgn = idx + 1;
-				memblock_insert_region(type, idx++, base,
-						       rbase - base, nid,
-						       flags);
+
+			/*
+			 * If type->cnt is equal to type->max, it means there's
+			 * not enough empty region and the array needs to be
+			 * resized. Otherwise, insert it directly.
+			 *
+			 * If slab is unavailable, it means a new array was reserved
+			 * in memblock_double_array. There is a nested call here, we
+			 * need to reserve the current array now if its type is
+			 * reserved.
+			 */
+			if (type->cnt == type->max) {
+				if (memblock_double_array(type, obase, size))
+					return -ENOMEM;
+				else if (!use_slab && type == &memblock.reserved)
+					return memblock_reserve(obase, size);
 			}
+
+			if (start_rgn == -1)
+				start_rgn = idx;
+			end_rgn = idx + 1;
+			memblock_insert_region(type, idx++, base,
+					       rbase - base, nid,
+					       flags);
 		}
 		/* area below @rend is dealt with, forget about it */
 		base = min(rend, end);
@@ -660,33 +657,25 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 
 	/* insert the remaining portion */
 	if (base < end) {
-		nr_new++;
-		if (insert) {
-			if (start_rgn == -1)
-				start_rgn = idx;
-			end_rgn = idx + 1;
-			memblock_insert_region(type, idx, base, end - base,
-					       nid, flags);
+
+		if (type->cnt == type->max) {
+			if (memblock_double_array(type, obase, size))
+				return -ENOMEM;
+			else if (!use_slab && type == &memblock.reserved)
+				return memblock_reserve(obase, size);
 		}
-	}
 
-	if (!nr_new)
-		return 0;
+		if (start_rgn == -1)
+			start_rgn = idx;
+		end_rgn = idx + 1;
+		memblock_insert_region(type, idx, base, end - base,
+				       nid, flags);
+	}
 
-	/*
-	 * If this was the first round, resize array and repeat for actual
-	 * insertions; otherwise, merge and return.
-	 */
-	if (!insert) {
-		while (type->cnt + nr_new > type->max)
-			if (memblock_double_array(type, obase, size) < 0)
-				return -ENOMEM;
-		insert = true;
-		goto repeat;
-	} else {
+	if (ocnt != type->cnt)
 		memblock_merge_regions(type, start_rgn, end_rgn);
-		return 0;
-	}
+
+	return 0;
 }
 
 /**