From patchwork Fri Jan 27 11:29:31 2023
X-Patchwork-Submitter: Steven Price
X-Patchwork-Id: 49355
From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
    Oliver Upton, Suzuki K Poulose, Zenghui Yu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
    linux-coco@lists.linux.dev
Subject: [RFC PATCH 27/28] arm64: RME: Always use 4k pages for realms
Date: Fri, 27 Jan 2023 11:29:31 +0000
Message-Id: <20230127112932.38045-28-steven.price@arm.com>
In-Reply-To: <20230127112932.38045-1-steven.price@arm.com>
References: <20230127112248.136810-1-suzuki.poulose@arm.com>
 <20230127112932.38045-1-steven.price@arm.com>

Always split up huge pages to avoid problems managing huge pages. There
are two issues currently:

1. The uABI for the VMM allows populating memory on 4k boundaries even
   if the underlying allocator (e.g. hugetlbfs) is using a larger page
   size. Using a memfd for private allocations will push this issue
   onto the VMM as it will need to respect the granularity of the
   allocator.

2. The guest is able to request arbitrary ranges to be remapped as
   shared. Again with a memfd approach it will be up to the VMM to deal
   with the complexity and either overmap (need the huge mapping and
   add an additional 'overlapping' shared mapping) or reject the
   request as invalid due to the use of a huge page allocator.

For now just break everything down to 4k pages in the RMM controlled
stage 2.
Signed-off-by: Steven Price
---
 arch/arm64/kvm/mmu.c | 4 ++++
 arch/arm64/kvm/rme.c | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 5417c273861b..b5fc8d8f7049 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1278,6 +1278,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
+	} else if (kvm_is_realm(kvm)) {
+		// Force PTE level mappings for realms
+		force_pte = true;
+		vma_shift = PAGE_SHIFT;
 	} else {
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
diff --git a/arch/arm64/kvm/rme.c b/arch/arm64/kvm/rme.c
index 6ae7871aa6ed..1eb76cbee267 100644
--- a/arch/arm64/kvm/rme.c
+++ b/arch/arm64/kvm/rme.c
@@ -730,7 +730,9 @@ static int populate_par_region(struct kvm *kvm,
 			break;
 	}

-	if (is_vm_hugetlb_page(vma))
+	// FIXME: To avoid the overmapping issue (see below comment)
+	// force the use of 4k pages
+	if (is_vm_hugetlb_page(vma) && 0)
 		vma_shift = huge_page_shift(hstate_vma(vma));
 	else
 		vma_shift = PAGE_SHIFT;