From patchwork Sat Feb  3 00:09:10 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 196105
Date: Fri, 2 Feb 2024 16:09:10 -0800
Reply-To: Sean Christopherson
In-Reply-To: <20240203000917.376631-1-seanjc@google.com>
References: <20240203000917.376631-1-seanjc@google.com>
Message-ID: <20240203000917.376631-5-seanjc@google.com>
X-Mailer: git-send-email 2.43.0.594.gd9cf4e227d-goog
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH v8 04/10] KVM: selftests: Add support for allocating/managing
 protected guest memory
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Anup Patel, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, Christian Borntraeger, Janosch Frank,
 Claudio Imbrenda, Sean Christopherson
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 Vishal Annapurve, Ackerley Tng, Andrew Jones, Tom Lendacky,
 Michael Roth, Peter Gonda

From: Peter Gonda

Add support for differentiating between protected (a.k.a. private, a.k.a.
encrypted) memory and normal (a.k.a. shared) memory for VMs that support
protected guest memory, e.g. x86's SEV.  Provide and manage a common
bitmap for tracking whether a given physical page resides in protected
memory, as support for protected memory isn't x86 specific, i.e. adding
an arch hook would be a net negative now, and in the future.
Cc: Paolo Bonzini
Cc: Sean Christopherson
Cc: Vishal Annapurve
Cc: Ackerley Tng
Cc: Andrew Jones
Cc: Tom Lendacky
Cc: Michael Roth
Originally-by: Michael Roth
Signed-off-by: Peter Gonda
Co-developed-by: Sean Christopherson
Signed-off-by: Sean Christopherson
Reviewed-by: Itaru Kitayama
---
 .../selftests/kvm/include/kvm_util_base.h  | 25 +++++++++++++++++--
 tools/testing/selftests/kvm/lib/kvm_util.c | 22 +++++++++++++---
 2 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index d9dc31af2f96..a82149305349 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -46,6 +46,7 @@ typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
 struct userspace_mem_region {
 	struct kvm_userspace_memory_region2 region;
 	struct sparsebit *unused_phy_pages;
+	struct sparsebit *protected_phy_pages;
 	int fd;
 	off_t offset;
 	enum vm_mem_backing_src_type backing_src_type;
@@ -573,6 +574,13 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 		uint64_t guest_paddr, uint32_t slot, uint64_t npages,
 		uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
 
+#ifndef vm_arch_has_protected_memory
+static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
+{
+	return false;
+}
+#endif
+
 void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
 void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
 void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
@@ -836,10 +844,23 @@ const char *exit_reason_str(unsigned int exit_reason);
 
 vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
 			     uint32_t memslot);
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot);
+vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+				vm_paddr_t paddr_min, uint32_t memslot,
+				bool protected);
 vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
 
+static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+					    vm_paddr_t paddr_min, uint32_t memslot)
+{
+	/*
+	 * By default, allocate memory as protected for VMs that support
+	 * protected memory, as the majority of memory for such VMs is
+	 * protected, i.e. using shared memory is effectively opt-in.
+	 */
+	return __vm_phy_pages_alloc(vm, num, paddr_min, memslot,
+				    vm_arch_has_protected_memory(vm));
+}
+
 /*
  * ____vm_create() does KVM_CREATE_VM and little else.  __vm_create() also
  * loads the test binary into guest memory and creates an IRQ chip (x86 only).
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index a53caf81eb87..ea677aa019ef 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -717,6 +717,7 @@ static void __vm_mem_region_delete(struct kvm_vm *vm,
 		vm_ioctl(vm, KVM_SET_USER_MEMORY_REGION2, &region->region);
 
 	sparsebit_free(&region->unused_phy_pages);
+	sparsebit_free(&region->protected_phy_pages);
 	ret = munmap(region->mmap_start, region->mmap_size);
 	TEST_ASSERT(!ret, __KVM_SYSCALL_ERROR("munmap()", ret));
 	if (region->fd >= 0) {
@@ -1098,6 +1099,8 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	}
 
 	region->unused_phy_pages = sparsebit_alloc();
+	if (vm_arch_has_protected_memory(vm))
+		region->protected_phy_pages = sparsebit_alloc();
 	sparsebit_set_num(region->unused_phy_pages,
 			  guest_paddr >> vm->page_shift, npages);
 	region->region.slot = slot;
@@ -1924,6 +1927,10 @@ void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
 			region->host_mem);
 		fprintf(stream, "%*sunused_phy_pages: ", indent + 2, "");
 		sparsebit_dump(stream, region->unused_phy_pages, 0);
+		if (region->protected_phy_pages) {
+			fprintf(stream, "%*sprotected_phy_pages: ", indent + 2, "");
+			sparsebit_dump(stream, region->protected_phy_pages, 0);
+		}
 	}
 	fprintf(stream, "%*sMapped Virtual Pages:\n", indent, "");
 	sparsebit_dump(stream, vm->vpages_mapped, indent + 2);
@@ -2025,6 +2032,7 @@ const char *exit_reason_str(unsigned int exit_reason)
  *   num - number of pages
  *   paddr_min - Physical address minimum
  *   memslot - Memory region to allocate page from
+ *   protected - True if the pages will be used as protected/private memory
  *
  * Output Args: None
  *
@@ -2036,8 +2044,9 @@ const char *exit_reason_str(unsigned int exit_reason)
  * and their base address is returned. A TEST_ASSERT failure occurs if
  * not enough pages are available at or above paddr_min.
  */
-vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
-			      vm_paddr_t paddr_min, uint32_t memslot)
+vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+				vm_paddr_t paddr_min, uint32_t memslot,
+				bool protected)
 {
 	struct userspace_mem_region *region;
 	sparsebit_idx_t pg, base;
@@ -2050,8 +2059,10 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		paddr_min, vm->page_size);
 
 	region = memslot2region(vm, memslot);
+	TEST_ASSERT(!protected || region->protected_phy_pages,
+		    "Region doesn't support protected memory");
+
 	base = pg = paddr_min >> vm->page_shift;
-
 	do {
 		for (; pg < base + num; ++pg) {
 			if (!sparsebit_is_set(region->unused_phy_pages, pg)) {
@@ -2070,8 +2081,11 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
 		abort();
 	}
 
-	for (pg = base; pg < base + num; ++pg)
+	for (pg = base; pg < base + num; ++pg) {
 		sparsebit_clear(region->unused_phy_pages, pg);
+		if (protected)
+			sparsebit_set(region->protected_phy_pages, pg);
+	}
 
 	return base * vm->page_size;
 }
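
[Editor's note: the following standalone sketch is not part of the patch.]
For readers unfamiliar with the accounting scheme above, here is a minimal,
self-contained model of the two-bitmap bookkeeping and the "protected by
default" allocation policy. All names are hypothetical, and a plain 64-bit
word stands in for the kernel selftests' struct sparsebit; it only
illustrates the logic of __vm_phy_pages_alloc()/vm_phy_pages_alloc(), not
the real API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Toy model of a userspace_mem_region: one bitmap of free (unused)
 * physical pages, one bitmap of pages handed out as protected memory.
 * The protected bitmap is only "allocated" (meaningful) when the
 * architecture supports protected guest memory, mirroring
 * vm_arch_has_protected_memory() in the patch.
 */
#define MODEL_NPAGES 64

struct region_model {
	uint64_t unused_phy_pages;    /* bit i set => page i is free */
	uint64_t protected_phy_pages; /* bit i set => page i is protected */
	bool has_protected;           /* arch supports protected memory? */
};

/* Models __vm_phy_pages_alloc(): find `num` consecutive free pages. */
int64_t model_phy_pages_alloc(struct region_model *r, size_t num,
			      bool protected)
{
	/* Mirrors the TEST_ASSERT: protected needs a protected bitmap. */
	if (protected && !r->has_protected)
		return -1;

	for (size_t base = 0; base + num <= MODEL_NPAGES; base++) {
		bool ok = true;

		for (size_t pg = base; pg < base + num; pg++) {
			if (!(r->unused_phy_pages & (1ull << pg))) {
				ok = false;
				break;
			}
		}
		if (!ok)
			continue;

		/* Claim the run: clear "unused", optionally mark protected. */
		for (size_t pg = base; pg < base + num; pg++) {
			r->unused_phy_pages &= ~(1ull << pg);
			if (protected)
				r->protected_phy_pages |= 1ull << pg;
		}
		return (int64_t)base;
	}
	return -1; /* the real code TEST_ASSERTs/aborts instead */
}

/*
 * Models vm_phy_pages_alloc(): default to protected when the VM
 * supports it, so shared memory is effectively opt-in.
 */
int64_t model_phy_pages_alloc_default(struct region_model *r, size_t num)
{
	return model_phy_pages_alloc(r, num, r->has_protected);
}
```

Usage follows the patch's intent: a test that needs shared pages in a
protected VM must explicitly pass `protected = false` to the double-underscore
variant, while the common wrapper silently does the right thing for both
protected and non-protected VMs.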