Message ID | 20221019034945.93081-3-wangkefeng.wang@huawei.com |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org>
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>
Cc: Dinh Nguyen <dinguyen@kernel.org>, Jarkko Sakkinen <jarkko@kernel.org>, Dave Hansen <dave.hansen@linux.intel.com>, linux-sgx@vger.kernel.org, amd-gfx@lists.freedesktop.org, linux-mm@kvack.org, Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH 2/5] x86/sgx: use VM_ACCESS_FLAGS
Date: Wed, 19 Oct 2022 11:49:42 +0800
Message-ID: <20221019034945.93081-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20221019034945.93081-1-wangkefeng.wang@huawei.com>
References: <20221019034945.93081-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
List-ID: <linux-kernel.vger.kernel.org> |
Series | mm: cleanup with VM_ACCESS_FLAGS | |
Commit Message
Kefeng Wang
Oct. 19, 2022, 3:49 a.m. UTC
Simplify VM_READ|VM_WRITE|VM_EXEC with VM_ACCESS_FLAGS.
Cc: Jarkko Sakkinen <jarkko@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
arch/x86/kernel/cpu/sgx/encl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Comments
On Wed, Oct 19, 2022 at 11:49:42AM +0800, Kefeng Wang wrote:
> Simplify VM_READ|VM_WRITE|VM_EXEC with VM_ACCESS_FLAGS.
>
> Cc: Jarkko Sakkinen <jarkko@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  arch/x86/kernel/cpu/sgx/encl.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 1ec20807de1e..6225c525372d 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -268,7 +268,7 @@ static struct sgx_encl_page *sgx_encl_load_page_in_vma(struct sgx_encl *encl,
>  						  unsigned long addr,
>  						  unsigned long vm_flags)
>  {
> -	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
> +	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
>  	struct sgx_encl_page *entry;
>
>  	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
> @@ -502,7 +502,7 @@ static void sgx_vma_open(struct vm_area_struct *vma)
>  int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
>  		     unsigned long end, unsigned long vm_flags)
>  {
> -	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
> +	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
>  	struct sgx_encl_page *page;
>  	unsigned long count = 0;
>  	int ret = 0;
> --
> 2.35.3
>

Why?

BR, Jarkko
On Sun, Oct 23, 2022 at 11:07:47PM +0300, Jarkko Sakkinen wrote:
> On Wed, Oct 19, 2022 at 11:49:42AM +0800, Kefeng Wang wrote:
> > Simplify VM_READ|VM_WRITE|VM_EXEC with VM_ACCESS_FLAGS.
> >
> > Cc: Jarkko Sakkinen <jarkko@kernel.org>
> > Cc: Dave Hansen <dave.hansen@linux.intel.com>
> > Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> > ---
> >  arch/x86/kernel/cpu/sgx/encl.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> > index 1ec20807de1e..6225c525372d 100644
> > --- a/arch/x86/kernel/cpu/sgx/encl.c
> > +++ b/arch/x86/kernel/cpu/sgx/encl.c
> > @@ -268,7 +268,7 @@ static struct sgx_encl_page *sgx_encl_load_page_in_vma(struct sgx_encl *encl,
> >  						  unsigned long addr,
> >  						  unsigned long vm_flags)
> >  {
> > -	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
> > +	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
> >  	struct sgx_encl_page *entry;
> >
> >  	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
> > @@ -502,7 +502,7 @@ static void sgx_vma_open(struct vm_area_struct *vma)
> >  int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
> >  		     unsigned long end, unsigned long vm_flags)
> >  {
> > -	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
> > +	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
> >  	struct sgx_encl_page *page;
> >  	unsigned long count = 0;
> >  	int ret = 0;
> > --
> > 2.35.3
> >
>
> Why?

Only benefit I see is a downside: you have xref VM_ACCESS_FLAGS, which is counter-productive. Zero gain.

BR, Jarkko
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 1ec20807de1e..6225c525372d 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -268,7 +268,7 @@ static struct sgx_encl_page *sgx_encl_load_page_in_vma(struct sgx_encl *encl,
 						  unsigned long addr,
 						  unsigned long vm_flags)
 {
-	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
 	struct sgx_encl_page *entry;
 
 	entry = xa_load(&encl->page_array, PFN_DOWN(addr));
@@ -502,7 +502,7 @@ static void sgx_vma_open(struct vm_area_struct *vma)
 int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 		     unsigned long end, unsigned long vm_flags)
 {
-	unsigned long vm_prot_bits = vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
+	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
 	struct sgx_encl_page *page;
 	unsigned long count = 0;
 	int ret = 0;