Message ID | 20230803114051.637709-1-linmiaohe@huawei.com |
---|---|
State | New |
Series | mm/mm_init: use helper macro BITS_PER_LONG |
Commit Message
Miaohe Lin
Aug. 3, 2023, 11:40 a.m. UTC
It's more readable to use helper macro BITS_PER_LONG. No functional
change intended.
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/mm_init.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
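For context, BITS_PER_LONG comes from include/asm-generic/bitsperlong.h and expands to the word width of the build target (64 under CONFIG_64BIT, 32 otherwise), so it equals 8 * sizeof(unsigned long) by construction. A minimal userspace sketch of the identity the patch relies on (the macro below is a local stand-in, defined here only so the check compiles outside the kernel):

#include <assert.h>
#include <limits.h>
#include <stdio.h>

/* Local stand-in for the kernel's BITS_PER_LONG. */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

int main(void)
{
	/* The open-coded form the patch removes ... */
	size_t open_coded = 8 * sizeof(unsigned long);

	/* ... and the macro that replaces it; they must agree. */
	assert(open_coded == BITS_PER_LONG);
	printf("BITS_PER_LONG = %zu\n", (size_t)BITS_PER_LONG);
	return 0;
}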
Comments
On 03.08.23 13:40, Miaohe Lin wrote:
> It's more readable to use helper macro BITS_PER_LONG. No functional
> change intended.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 66aca3f6accd..2f37dbb5ff9a 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -79,7 +79,7 @@ void __init mminit_verify_pageflags_layout(void)
>  	int shift, width;
>  	unsigned long or_mask, add_mask;
>
> -	shift = 8 * sizeof(unsigned long);
> +	shift = BITS_PER_LONG;
>  	width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH
>  		- LAST_CPUPID_SHIFT - KASAN_TAG_WIDTH - LRU_GEN_WIDTH - LRU_REFS_WIDTH;
>  	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths",
> @@ -1431,7 +1431,7 @@ static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned l
>  	usemapsize = roundup(zonesize, pageblock_nr_pages);
>  	usemapsize = usemapsize >> pageblock_order;
>  	usemapsize *= NR_PAGEBLOCK_BITS;
> -	usemapsize = roundup(usemapsize, 8 * sizeof(unsigned long));
> +	usemapsize = roundup(usemapsize, BITS_PER_LONG);
>
>  	return usemapsize / 8;
>  }

Reviewed-by: David Hildenbrand <david@redhat.com>
On Thu, Aug 03, 2023 at 07:40:51PM +0800, Miaohe Lin wrote:
> It's more readable to use helper macro BITS_PER_LONG. No functional
> change intended.
>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/mm_init.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 66aca3f6accd..2f37dbb5ff9a 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -79,7 +79,7 @@ void __init mminit_verify_pageflags_layout(void)
>  	int shift, width;
>  	unsigned long or_mask, add_mask;
>
> -	shift = 8 * sizeof(unsigned long);
> +	shift = BITS_PER_LONG;
>  	width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH
>  		- LAST_CPUPID_SHIFT - KASAN_TAG_WIDTH - LRU_GEN_WIDTH - LRU_REFS_WIDTH;
>  	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths",
> @@ -1431,7 +1431,7 @@ static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned l
>  	usemapsize = roundup(zonesize, pageblock_nr_pages);
>  	usemapsize = usemapsize >> pageblock_order;
>  	usemapsize *= NR_PAGEBLOCK_BITS;
> -	usemapsize = roundup(usemapsize, 8 * sizeof(unsigned long));
> +	usemapsize = roundup(usemapsize, BITS_PER_LONG);
>
>  	return usemapsize / 8;

BITS_PER_BYTE instead of 8 here?

> }
On 2023/8/3 21:33, Mike Rapoport wrote:
> On Thu, Aug 03, 2023 at 07:40:51PM +0800, Miaohe Lin wrote:
>> It's more readable to use helper macro BITS_PER_LONG. No functional
>> change intended.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>>  mm/mm_init.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index 66aca3f6accd..2f37dbb5ff9a 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -79,7 +79,7 @@ void __init mminit_verify_pageflags_layout(void)
>>  	int shift, width;
>>  	unsigned long or_mask, add_mask;
>>
>> -	shift = 8 * sizeof(unsigned long);
>> +	shift = BITS_PER_LONG;
>>  	width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH
>>  		- LAST_CPUPID_SHIFT - KASAN_TAG_WIDTH - LRU_GEN_WIDTH - LRU_REFS_WIDTH;
>>  	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths",
>> @@ -1431,7 +1431,7 @@ static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned l
>>  	usemapsize = roundup(zonesize, pageblock_nr_pages);
>>  	usemapsize = usemapsize >> pageblock_order;
>>  	usemapsize *= NR_PAGEBLOCK_BITS;
>> -	usemapsize = roundup(usemapsize, 8 * sizeof(unsigned long));
>> +	usemapsize = roundup(usemapsize, BITS_PER_LONG);
>>
>>  	return usemapsize / 8;
>
> BITS_PER_BYTE instead of 8 here?

Sure, this is even better. Will do. Thanks.
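To make the usemap_size() arithmetic under discussion concrete, here is a userspace mirror of the patched function. The pageblock_order = 9 and NR_PAGEBLOCK_BITS = 4 values are illustrative assumptions (both are configuration-dependent in the kernel and neither is stated in this thread), and roundup() reproduces the semantics of the include/linux/math.h helper:

#include <limits.h>
#include <stdio.h>

/* Illustrative, config-dependent values -- not taken from the thread. */
#define PAGEBLOCK_ORDER		9
#define PAGEBLOCK_NR_PAGES	(1UL << PAGEBLOCK_ORDER)
#define NR_PAGEBLOCK_BITS	4
#define BITS_PER_LONG		(CHAR_BIT * sizeof(unsigned long))

/* Same result as the kernel's roundup() for positive operands. */
#define roundup(x, y)	((((x) + (y) - 1) / (y)) * (y))

/* Userspace mirror of the patched usemap_size() body. */
static unsigned long usemap_size(unsigned long zonesize)
{
	unsigned long usemapsize;

	usemapsize = roundup(zonesize, PAGEBLOCK_NR_PAGES);
	usemapsize >>= PAGEBLOCK_ORDER;		/* pageblocks in the zone */
	usemapsize *= NR_PAGEBLOCK_BITS;	/* bits of state needed */
	usemapsize = roundup(usemapsize, BITS_PER_LONG);
	return usemapsize / 8;			/* bytes */
}

int main(void)
{
	/* A 1 GiB zone of 4 KiB pages (262144 pages) needs 256 bytes. */
	printf("%lu\n", usemap_size(262144UL));
	return 0;
}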
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 66aca3f6accd..2f37dbb5ff9a 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -79,7 +79,7 @@ void __init mminit_verify_pageflags_layout(void)
 	int shift, width;
 	unsigned long or_mask, add_mask;
 
-	shift = 8 * sizeof(unsigned long);
+	shift = BITS_PER_LONG;
 	width = shift - SECTIONS_WIDTH - NODES_WIDTH - ZONES_WIDTH
 		- LAST_CPUPID_SHIFT - KASAN_TAG_WIDTH - LRU_GEN_WIDTH - LRU_REFS_WIDTH;
 	mminit_dprintk(MMINIT_TRACE, "pageflags_layout_widths",
@@ -1431,7 +1431,7 @@ static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned l
 	usemapsize = roundup(zonesize, pageblock_nr_pages);
 	usemapsize = usemapsize >> pageblock_order;
 	usemapsize *= NR_PAGEBLOCK_BITS;
-	usemapsize = roundup(usemapsize, 8 * sizeof(unsigned long));
+	usemapsize = roundup(usemapsize, BITS_PER_LONG);
 
 	return usemapsize / 8;
 }
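Following Mike's review comment and Miaohe's reply above, a v2 would presumably also convert the remaining literal 8. A sketch of that extra hunk (hypothetical, not posted in this thread; BITS_PER_BYTE is defined as 8 in include/linux/bits.h):

--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ ... @@ static unsigned long __init usemap_size(unsigned long zone_start_pfn, unsigned l
-	return usemapsize / 8;
+	return usemapsize / BITS_PER_BYTE;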