From patchwork Tue Oct 25 15:13:37 2022
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 10863
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create restricted user memory Date: Tue, 25 Oct 2022 23:13:37 +0800 Message-Id: <20221025151344.3784230-2-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-7.5 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_HI, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747673474937103426?= X-GMAIL-MSGID: =?utf-8?q?1747673474937103426?= From: "Kirill A. Shutemov" Introduce 'memfd_restricted' system call with the ability to create memory areas that are restricted from userspace access through ordinary MMU operations (e.g. read/write/mmap). The memory content is expected to be used through a new in-kernel interface by a third kernel module. memfd_restricted() is useful for scenarios where a file descriptor(fd) can be used as an interface into mm but want to restrict userspace's ability on the fd. Initially it is designed to provide protections for KVM encrypted guest memory. Normally KVM uses memfd memory via mmapping the memfd into KVM userspace (e.g. QEMU) and then using the mmaped virtual address to setup the mapping in the KVM secondary page table (e.g. EPT). With confidential computing technologies like Intel TDX, the memfd memory may be encrypted with special key for special software domain (e.g. KVM guest) and is not expected to be directly accessed by userspace. Precisely, userspace access to such encrypted memory may lead to host crash so should be prevented. memfd_restricted() provides semantics required for KVM guest encrypted memory support that a fd created with memfd_restricted() is going to be used as the source of guest memory in confidential computing environment and KVM can directly interact with core-mm without the need to expose the memoy content into KVM userspace. KVM userspace is still in charge of the lifecycle of the fd. It should pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to obtain the physical memory page and then uses it to populate the KVM secondary page table entries. The userspace restricted memfd can be fallocate-ed or hole-punched from userspace. When these operations happen, KVM can get notified through restrictedmem_notifier, it then gets chance to remove any mapped entries of the range in the secondary page tables. memfd_restricted() itself is implemented as a shim layer on top of real memory file systems (currently tmpfs). Pages in restrictedmem are marked as unmovable and unevictable, this is required for current confidential usage. But in future this might be changed. By default memfd_restricted() prevents userspace read, write and mmap. By defining new bit in the 'flags', it can be extended to support other restricted semantics in the future. 
The system call is currently wired up for x86 arch. Signed-off-by: Kirill A. Shutemov Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba --- arch/x86/entry/syscalls/syscall_32.tbl | 1 + arch/x86/entry/syscalls/syscall_64.tbl | 1 + include/linux/restrictedmem.h | 62 ++++++ include/linux/syscalls.h | 1 + include/uapi/asm-generic/unistd.h | 5 +- include/uapi/linux/magic.h | 1 + kernel/sys_ni.c | 3 + mm/Kconfig | 4 + mm/Makefile | 1 + mm/restrictedmem.c | 250 +++++++++++++++++++++++++ 10 files changed, 328 insertions(+), 1 deletion(-) create mode 100644 include/linux/restrictedmem.h create mode 100644 mm/restrictedmem.c diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl index 320480a8db4f..dc70ba90247e 100644 --- a/arch/x86/entry/syscalls/syscall_32.tbl +++ b/arch/x86/entry/syscalls/syscall_32.tbl @@ -455,3 +455,4 @@ 448 i386 process_mrelease sys_process_mrelease 449 i386 futex_waitv sys_futex_waitv 450 i386 set_mempolicy_home_node sys_set_mempolicy_home_node +451 i386 memfd_restricted sys_memfd_restricted diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl index c84d12608cd2..06516abc8318 100644 --- a/arch/x86/entry/syscalls/syscall_64.tbl +++ b/arch/x86/entry/syscalls/syscall_64.tbl @@ -372,6 +372,7 @@ 448 common process_mrelease sys_process_mrelease 449 common futex_waitv sys_futex_waitv 450 common set_mempolicy_home_node sys_set_mempolicy_home_node +451 common memfd_restricted sys_memfd_restricted # # Due to a historical design error, certain syscalls are numbered differently diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h new file mode 100644 index 000000000000..9c37c3ea3180 --- /dev/null +++ b/include/linux/restrictedmem.h @@ -0,0 +1,62 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +#ifndef _LINUX_RESTRICTEDMEM_H + +#include +#include +#include + +struct restrictedmem_notifier; + +struct restrictedmem_notifier_ops { + void (*invalidate_start)(struct restrictedmem_notifier *notifier, + pgoff_t start, pgoff_t end); + void (*invalidate_end)(struct restrictedmem_notifier *notifier, + pgoff_t start, pgoff_t end); +}; + +struct restrictedmem_notifier { + struct list_head list; + const struct restrictedmem_notifier_ops *ops; +}; + +#ifdef CONFIG_RESTRICTEDMEM + +void restrictedmem_register_notifier(struct file *file, + struct restrictedmem_notifier *notifier); +void restrictedmem_unregister_notifier(struct file *file, + struct restrictedmem_notifier *notifier); + +int restrictedmem_get_page(struct file *file, pgoff_t offset, + struct page **pagep, int *order); + +static inline bool file_is_restrictedmem(struct file *file) +{ + return file->f_inode->i_sb->s_magic == RESTRICTEDMEM_MAGIC; +} + +#else + +static inline void restrictedmem_register_notifier(struct file *file, + struct restrictedmem_notifier *notifier) +{ +} + +static inline void restrictedmem_unregister_notifier(struct file *file, + struct restrictedmem_notifier *notifier) +{ +} + +static inline int restrictedmem_get_page(struct file *file, pgoff_t offset, + struct page **pagep, int *order) +{ + return -1; +} + +static inline bool file_is_restrictedmem(struct file *file) +{ + return false; +} + +#endif /* CONFIG_RESTRICTEDMEM */ + +#endif /* _LINUX_RESTRICTEDMEM_H */ diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index a34b0f9a9972..f9e9e0c820c5 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -1056,6 +1056,7 @@ asmlinkage long sys_memfd_secret(unsigned int flags); 
asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len, unsigned long home_node, unsigned long flags); +asmlinkage long sys_memfd_restricted(unsigned int flags); /* * Architecture-specific system calls diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 45fa180cc56a..e93cd35e46d0 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv) #define __NR_set_mempolicy_home_node 450 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node) +#define __NR_memfd_restricted 451 +__SYSCALL(__NR_memfd_restricted, sys_memfd_restricted) + #undef __NR_syscalls -#define __NR_syscalls 451 +#define __NR_syscalls 452 /* * 32 bit systems traditionally used different diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index 6325d1d0e90f..8aa38324b90a 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -101,5 +101,6 @@ #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */ #define DEVMEM_MAGIC 0x454d444d /* "DMEM" */ #define SECRETMEM_MAGIC 0x5345434d /* "SECM" */ +#define RESTRICTEDMEM_MAGIC 0x5245534d /* "RESM" */ #endif /* __LINUX_MAGIC_H__ */ diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index 860b2dcf3ac4..7c4a32cbd2e7 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free); /* memfd_secret */ COND_SYSCALL(memfd_secret); +/* memfd_restricted */ +COND_SYSCALL(memfd_restricted); + /* * Architecture specific weak syscall entries. */ diff --git a/mm/Kconfig b/mm/Kconfig index 0331f1461f81..0177d53676c7 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1076,6 +1076,10 @@ config IO_MAPPING config SECRETMEM def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED +config RESTRICTEDMEM + bool + depends on TMPFS + config ANON_VMA_NAME bool "Anonymous VMA name support" depends on PROC_FS && ADVISE_SYSCALLS && MMU diff --git a/mm/Makefile b/mm/Makefile index 9a564f836403..6cb6403ffd40 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -117,6 +117,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o obj-$(CONFIG_SECRETMEM) += secretmem.o +obj-$(CONFIG_RESTRICTEDMEM) += restrictedmem.o obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o obj-$(CONFIG_USERFAULTFD) += userfaultfd.o obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c new file mode 100644 index 000000000000..e5bf8907e0f8 --- /dev/null +++ b/mm/restrictedmem.c @@ -0,0 +1,250 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "linux/sbitmap.h" +#include +#include +#include +#include +#include +#include +#include + +struct restrictedmem_data { + struct mutex lock; + struct file *memfd; + struct list_head notifiers; +}; + +static void restrictedmem_notifier_invalidate(struct restrictedmem_data *data, + pgoff_t start, pgoff_t end, bool notify_start) +{ + struct restrictedmem_notifier *notifier; + + mutex_lock(&data->lock); + list_for_each_entry(notifier, &data->notifiers, list) { + if (notify_start) + notifier->ops->invalidate_start(notifier, start, end); + else + notifier->ops->invalidate_end(notifier, start, end); + } + mutex_unlock(&data->lock); +} + +static int restrictedmem_release(struct inode *inode, struct file *file) +{ + struct restrictedmem_data *data = inode->i_mapping->private_data; + + fput(data->memfd); + kfree(data); + return 0; +} + +static long restrictedmem_fallocate(struct file *file, int mode, + loff_t 
offset, loff_t len) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + struct file *memfd = data->memfd; + int ret; + + if (mode & FALLOC_FL_PUNCH_HOLE) { + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len)) + return -EINVAL; + } + + restrictedmem_notifier_invalidate(data, offset, offset + len, true); + ret = memfd->f_op->fallocate(memfd, mode, offset, len); + restrictedmem_notifier_invalidate(data, offset, offset + len, false); + return ret; +} + +static const struct file_operations restrictedmem_fops = { + .release = restrictedmem_release, + .fallocate = restrictedmem_fallocate, +}; + +static int restrictedmem_getattr(struct user_namespace *mnt_userns, + const struct path *path, struct kstat *stat, + u32 request_mask, unsigned int query_flags) +{ + struct inode *inode = d_inode(path->dentry); + struct restrictedmem_data *data = inode->i_mapping->private_data; + struct file *memfd = data->memfd; + + return memfd->f_inode->i_op->getattr(mnt_userns, path, stat, + request_mask, query_flags); +} + +static int restrictedmem_setattr(struct user_namespace *mnt_userns, + struct dentry *dentry, struct iattr *attr) +{ + struct inode *inode = d_inode(dentry); + struct restrictedmem_data *data = inode->i_mapping->private_data; + struct file *memfd = data->memfd; + int ret; + + if (attr->ia_valid & ATTR_SIZE) { + if (memfd->f_inode->i_size) + return -EPERM; + + if (!PAGE_ALIGNED(attr->ia_size)) + return -EINVAL; + } + + ret = memfd->f_inode->i_op->setattr(mnt_userns, + file_dentry(memfd), attr); + return ret; +} + +static const struct inode_operations restrictedmem_iops = { + .getattr = restrictedmem_getattr, + .setattr = restrictedmem_setattr, +}; + +static int restrictedmem_init_fs_context(struct fs_context *fc) +{ + if (!init_pseudo(fc, RESTRICTEDMEM_MAGIC)) + return -ENOMEM; + + fc->s_iflags |= SB_I_NOEXEC; + return 0; +} + +static struct file_system_type restrictedmem_fs = { + .owner = THIS_MODULE, + .name = "memfd:restrictedmem", + .init_fs_context = restrictedmem_init_fs_context, + .kill_sb = kill_anon_super, +}; + +static struct vfsmount *restrictedmem_mnt; + +static __init int restrictedmem_init(void) +{ + restrictedmem_mnt = kern_mount(&restrictedmem_fs); + if (IS_ERR(restrictedmem_mnt)) + return PTR_ERR(restrictedmem_mnt); + return 0; +} +fs_initcall(restrictedmem_init); + +static struct file *restrictedmem_file_create(struct file *memfd) +{ + struct restrictedmem_data *data; + struct address_space *mapping; + struct inode *inode; + struct file *file; + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return ERR_PTR(-ENOMEM); + + data->memfd = memfd; + mutex_init(&data->lock); + INIT_LIST_HEAD(&data->notifiers); + + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); + if (IS_ERR(inode)) { + kfree(data); + return ERR_CAST(inode); + } + + inode->i_mode |= S_IFREG; + inode->i_op = &restrictedmem_iops; + inode->i_mapping->private_data = data; + + file = alloc_file_pseudo(inode, restrictedmem_mnt, + "restrictedmem", O_RDWR, + &restrictedmem_fops); + if (IS_ERR(file)) { + iput(inode); + kfree(data); + return ERR_CAST(file); + } + + file->f_flags |= O_LARGEFILE; + + mapping = memfd->f_mapping; + mapping_set_unevictable(mapping); + mapping_set_gfp_mask(mapping, + mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); + + return file; +} + +SYSCALL_DEFINE1(memfd_restricted, unsigned int, flags) +{ + struct file *file, *restricted_file; + int fd, err; + + if (flags) + return -EINVAL; + + fd = get_unused_fd_flags(0); + if (fd < 0) + return fd; + + file = 
shmem_file_setup("memfd:restrictedmem", 0, VM_NORESERVE); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_fd; + } + file->f_mode |= FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE; + file->f_flags |= O_LARGEFILE; + + restricted_file = restrictedmem_file_create(file); + if (IS_ERR(restricted_file)) { + err = PTR_ERR(restricted_file); + fput(file); + goto err_fd; + } + + fd_install(fd, restricted_file); + return fd; +err_fd: + put_unused_fd(fd); + return err; +} + +void restrictedmem_register_notifier(struct file *file, + struct restrictedmem_notifier *notifier) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + + mutex_lock(&data->lock); + list_add(¬ifier->list, &data->notifiers); + mutex_unlock(&data->lock); +} +EXPORT_SYMBOL_GPL(restrictedmem_register_notifier); + +void restrictedmem_unregister_notifier(struct file *file, + struct restrictedmem_notifier *notifier) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + + mutex_lock(&data->lock); + list_del(¬ifier->list); + mutex_unlock(&data->lock); +} +EXPORT_SYMBOL_GPL(restrictedmem_unregister_notifier); + +int restrictedmem_get_page(struct file *file, pgoff_t offset, + struct page **pagep, int *order) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + struct file *memfd = data->memfd; + struct page *page; + int ret; + + ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE); + if (ret) + return ret; + + *pagep = page; + if (order) + *order = thp_order(compound_head(page)); + + SetPageUptodate(page); + unlock_page(page); + + return 0; +} +EXPORT_SYMBOL_GPL(restrictedmem_get_page); From patchwork Tue Oct 25 15:13:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10864 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1065952wru; Tue, 25 Oct 2022 08:22:02 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6LQJOC3KuwmKIKvDRlsKqOzDF+RXg17vKVnMn9awCQcgokhkL1aP3QWB845u3tdxPNM0HC X-Received: by 2002:a17:906:328c:b0:780:7574:ced2 with SMTP id 12-20020a170906328c00b007807574ced2mr33173603ejw.634.1666711321937; Tue, 25 Oct 2022 08:22:01 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711321; cv=none; d=google.com; s=arc-20160816; b=LyF+55Ylno2Tqi9lwqAwUliluj1r1clH7RBYJ4imnue+haZSiZL695o9ee3Wlkz9FG bNbJVBXnN3OEFNhODuTrjb6f8UBI/kUR7ZglMTfEVspeOdBtFX4i95Y5rJE/EbCTDd9Q VoTs2gz+O7wbuE8E3uaMDtyEHx9fLrE10rB5aCj8nDqsHvyTxfLQEFamxO5ymWhA56j5 BCP6ZEmUoPN33fWDgsuBLMItLwGjYP8Sq86+gfNQ5bzdAK0VN4hXsueAs+bbVI58Wy8L TaKlNVGtIFkKd3Gs5M+3rbsoYo6p2SheaAAuIHfmwVcBwrAxlhw7Ohi1YR3sWBhL3Wxt VwTg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=EiedROpUv8rfOn97xaJYxmo4YEh2NMfpIZ7wMB0ZcwU=; b=SmRdGQu10keUHWVARs4EB0oqHKWeq5gTkNNaVbfJxTSstiyrelB6Oiisym+TR6xzp0 M3PQCVK2j8TIxoe05TcrH/06W5JVdp1s2bY6/ZwhK56G96NvukNvfPRVDx8NCilx0XVL eoxHlznTFIsOxVxLib40//wRrqXyNziS5tvey8TSNPqRCAh5UOq6HiIriBlMayG1g9ai QS9Qneg7EsLz0lcTfAgg6m9AJl4Uqf42A2atnk8C1x0ZymnSCxCYu4pd/w7gdIHdXxY+ hY0b/d96V/y3ZYo1gpKGzJynoi7PzkysiMxdW1/f2DymudSuzizTKEZFJB6qFuKAvzYR 1NCQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=TVp9KQtF; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 2/8] KVM: Extend the memslot to support fd-based private memory Date: Tue, 25 Oct 2022 23:13:38 +0800 Message-Id: <20221025151344.3784230-3-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747673491081872288?= X-GMAIL-MSGID: =?utf-8?q?1747673491081872288?= In memory encryption usage, guest memory may be encrypted with special key and can be accessed only by the guest itself. We call such memory private memory. It's valueless and sometimes can cause problem to allow userspace to access guest private memory. This new KVM memslot extension allows guest private memory being provided though a restrictedmem backed file descriptor(fd) and userspace is restricted to access the bookmarked memory in the fd. This new extension, indicated by the new flag KVM_MEM_PRIVATE, adds two additional KVM memslot fields restricted_fd/restricted_offset to allow userspace to instruct KVM to provide guest memory through restricted_fd. 'guest_phys_addr' is mapped at the restricted_offset of restricted_fd and the size is 'memory_size'. The extended memslot can still have the userspace_addr(hva). When use, a single memslot can maintain both private memory through restricted_fd and shared memory through userspace_addr. Whether the private or shared part is visible to guest is maintained by other KVM code. A restrictedmem_notifier field is also added to the memslot structure to allow the restricted_fd's backing store to notify KVM the memory change, KVM then can invalidate its page table entries. Together with the change, a new config HAVE_KVM_RESTRICTED_MEM is added and right now it is selected on X86_64 only. A KVM_CAP_PRIVATE_MEM is also introduced to indicate KVM support for KVM_MEM_PRIVATE. To make code maintenance easy, internally we use a binary compatible alias struct kvm_user_mem_region to handle both the normal and the '_ext' variants. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 48 ++++++++++++++++++++++++++++----- arch/x86/kvm/Kconfig | 2 ++ arch/x86/kvm/x86.c | 2 +- include/linux/kvm_host.h | 13 +++++++-- include/uapi/linux/kvm.h | 29 ++++++++++++++++++++ virt/kvm/Kconfig | 3 +++ virt/kvm/kvm_main.c | 49 ++++++++++++++++++++++++++++------ 7 files changed, 128 insertions(+), 18 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index eee9f857a986..f3fa75649a78 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -1319,7 +1319,7 @@ yet and must be cleared on entry. 
:Capability: KVM_CAP_USER_MEMORY :Architectures: all :Type: vm ioctl -:Parameters: struct kvm_userspace_memory_region (in) +:Parameters: struct kvm_userspace_memory_region(_ext) (in) :Returns: 0 on success, -1 on error :: @@ -1332,9 +1332,18 @@ yet and must be cleared on entry. __u64 userspace_addr; /* start of the userspace allocated memory */ }; + struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; + }; + /* for kvm_memory_region::flags */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) + #define KVM_MEM_PRIVATE (1UL << 2) This ioctl allows the user to create, modify or delete a guest physical memory slot. Bits 0-15 of "slot" specify the slot id and this value @@ -1365,12 +1374,27 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr be identical. This allows large pages in the guest to be backed by large pages in the host. -The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and -KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of -writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to -use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it, -to make a new slot read-only. In this case, writes to this memory will be -posted to userspace as KVM_EXIT_MMIO exits. +kvm_userspace_memory_region_ext struct includes all fields of +kvm_userspace_memory_region struct, while also adds additional fields for some +other features. See below description of flags field for more information. +It's recommended to use kvm_userspace_memory_region_ext in new userspace code. + +The flags field supports following flags: + +- KVM_MEM_LOG_DIRTY_PAGES to instruct KVM to keep track of writes to memory + within the slot. For more details, see KVM_GET_DIRTY_LOG ioctl. + +- KVM_MEM_READONLY, if KVM_CAP_READONLY_MEM allows, to make a new slot + read-only. In this case, writes to this memory will be posted to userspace as + KVM_EXIT_MMIO exits. + +- KVM_MEM_PRIVATE, if KVM_CAP_PRIVATE_MEM allows, to indicate a new slot has + private memory backed by a file descriptor(fd) and userspace access to the + fd may be restricted. Userspace should use restricted_fd/restricted_offset in + kvm_userspace_memory_region_ext to instruct KVM to provide private memory + to guest. Userspace should guarantee not to map the same pfn indicated by + restricted_fd/restricted_offset to different gfns with multiple memslots. + Failed to do this may result undefined behavior. When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory region are automatically reflected into the guest. For example, an @@ -8215,6 +8239,16 @@ structure. When getting the Modified Change Topology Report value, the attr->addr must point to a byte where the value will be stored or retrieved from. +8.36 KVM_CAP_PRIVATE_MEM +------------------------ + +:Architectures: x86 + +This capability indicates that private memory is supported and userspace can +set KVM_MEM_PRIVATE flag for KVM_SET_USER_MEMORY_REGION ioctl. See +KVM_SET_USER_MEMORY_REGION for details on the usage of KVM_MEM_PRIVATE and +kvm_userspace_memory_region_ext fields. + 9. 
Known KVM API problems ========================= diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 67be7f217e37..8d2bd455c0cd 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -49,6 +49,8 @@ config KVM select SRCU select INTERVAL_TREE select HAVE_KVM_PM_NOTIFIER if PM + select HAVE_KVM_RESTRICTED_MEM if X86_64 + select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 4bd5f8a751de..02ad31f46dd7 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12425,7 +12425,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, } for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { - struct kvm_userspace_memory_region m; + struct kvm_user_mem_region m; m.slot = id | (i << 16); m.flags = 0; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 32f259fa5801..739a7562a1f3 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -44,6 +44,7 @@ #include #include +#include #ifndef KVM_MAX_VCPU_IDS #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS @@ -575,8 +576,16 @@ struct kvm_memory_slot { u32 flags; short id; u16 as_id; + struct file *restricted_file; + loff_t restricted_offset; + struct restrictedmem_notifier notifier; }; +static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) +{ + return slot && (slot->flags & KVM_MEM_PRIVATE); +} + static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) { return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; @@ -1103,9 +1112,9 @@ enum kvm_mr_change { }; int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen); int kvm_arch_prepare_memory_region(struct kvm *kvm, diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 0d5d4419139a..f1ae45c10c94 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -103,6 +103,33 @@ struct kvm_userspace_memory_region { __u64 userspace_addr; /* start of the userspace allocated memory */ }; +struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; +}; + +#ifdef __KERNEL__ +/* + * kvm_user_mem_region is a kernel-only alias of kvm_userspace_memory_region_ext + * that "unpacks" kvm_userspace_memory_region so that KVM can directly access + * all fields from the top-level "extended" region. 
+ */ +struct kvm_user_mem_region { + __u32 slot; + __u32 flags; + __u64 guest_phys_addr; + __u64 memory_size; + __u64 userspace_addr; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; +}; +#endif + /* * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace, * other bits are reserved for kvm internal use which are defined in @@ -110,6 +137,7 @@ struct kvm_userspace_memory_region { */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) +#define KVM_MEM_PRIVATE (1UL << 2) /* for KVM_IRQ_LINE */ struct kvm_irq_level { @@ -1178,6 +1206,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_ZPCI_OP 221 #define KVM_CAP_S390_CPU_TOPOLOGY 222 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 +#define KVM_CAP_PRIVATE_MEM 224 #ifdef KVM_CAP_IRQ_ROUTING diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 800f9470e36b..9ff164c7e0cc 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -86,3 +86,6 @@ config KVM_XFER_TO_GUEST_WORK config HAVE_KVM_PM_NOTIFIER bool + +config HAVE_KVM_RESTRICTED_MEM + bool diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index e30f1b4ecfa5..8dace78a0278 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1526,7 +1526,7 @@ static void kvm_replace_memslot(struct kvm *kvm, } } -static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem) +static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -1920,7 +1920,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id, * Must be called holding kvm->slots_lock for write. */ int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { struct kvm_memory_slot *old, *new; struct kvm_memslots *slots; @@ -2024,7 +2024,7 @@ int __kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(__kvm_set_memory_region); int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { int r; @@ -2036,7 +2036,7 @@ int kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(kvm_set_memory_region); static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm, - struct kvm_userspace_memory_region *mem) + struct kvm_user_mem_region *mem) { if ((u16)mem->slot >= KVM_USER_MEM_SLOTS) return -EINVAL; @@ -4627,6 +4627,33 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm) return fd; } +#define SANITY_CHECK_MEM_REGION_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region, field)); \ +} while (0) + +#define SANITY_CHECK_MEM_REGION_EXT_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region_ext, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region_ext, field)); \ +} while (0) + +static void kvm_sanity_check_user_mem_region_alias(void) +{ + SANITY_CHECK_MEM_REGION_FIELD(slot); + SANITY_CHECK_MEM_REGION_FIELD(flags); + SANITY_CHECK_MEM_REGION_FIELD(guest_phys_addr); + SANITY_CHECK_MEM_REGION_FIELD(memory_size); + SANITY_CHECK_MEM_REGION_FIELD(userspace_addr); + SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_offset); + 
SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_fd); +} + static long kvm_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -4650,14 +4677,20 @@ static long kvm_vm_ioctl(struct file *filp, break; } case KVM_SET_USER_MEMORY_REGION: { - struct kvm_userspace_memory_region kvm_userspace_mem; + struct kvm_user_mem_region mem; + unsigned long size = sizeof(struct kvm_userspace_memory_region); + + kvm_sanity_check_user_mem_region_alias(); r = -EFAULT; - if (copy_from_user(&kvm_userspace_mem, argp, - sizeof(kvm_userspace_mem))) + if (copy_from_user(&mem, argp, size)) + goto out; + + r = -EINVAL; + if (mem.flags & KVM_MEM_PRIVATE) goto out; - r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem); + r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } case KVM_GET_DIRTY_LOG: { From patchwork Tue Oct 25 15:13:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10865 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1065970wru; Tue, 25 Oct 2022 08:22:05 -0700 (PDT) X-Google-Smtp-Source: AMsMyM7jcs5Zdmkr9363SG4oIw1aXTop34N1bmFKXzcKJa5Zjk/rrMTPlyzpOIqGQzgZ7rt1hojS X-Received: by 2002:a17:906:cc0f:b0:7ad:2da5:4711 with SMTP id ml15-20020a170906cc0f00b007ad2da54711mr490148ejb.628.1666711324984; Tue, 25 Oct 2022 08:22:04 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711324; cv=none; d=google.com; s=arc-20160816; b=X7f0D+9eDBMhOsLuVDgVRJcmGRo63jCZQ29gJNTJq/H18XhuKQE/XzKf+qPUzeD2vq vZh3NEINa8xTKa4b4CDFj4CKVqAVNjIeuqR81XBVvmmgD026E+3mOMj43jLKVdi4jZLe huDUOnIWUxjD7uhFUI56vqg0eru85Y8/w72CuyKBWBvcLCJNMZDrQCx1/TP4Sa97GbzH EFmvdK/lUTbZwLw/yxEiVy5T6BfvxxU4UuFWx8NNzISNDUa3Rs5yXpgCEw2lvDZG+Y+1 skEQQvXNeSHHXsH5zVSFttr5o4Hes8cdExmfx8Cw17/gxwS1wJhapp1RqQTdvwm4dZ9+ cuQA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=fknS4Ck8zD0qFZp1qLGKTaX9bAxoV+hj2N6xgz0n5M0=; b=pPWYMeSeq/6xe7u1DiPxKt5ZPsvm3JhLnZjvSXHb4PLHpD1d+WtccVZGtY7kadmJsh IEsgjeNFLS2Qxt2SfIcLo+MxSMGsPszFNKvIyp7OaEUZI6oFAyqONHnH8OP1AYH+GF5U ppt7aKqn0BeW9NLaZdT2DrqHLizDiZIFhOW4D+EwU5BufQy1uDGtGuDRd9OIaqOBgIqQ mUIbT/m8sn+VsqzZzDeS4ds7x0bqGyZMOPSWS1TxP+/Hk4kNSXkN3vcczcf8NxATBDvE uELn8By8wZ0qzJB/88cHm6lJPenSaQ/9IhAlSvF98uq/lKLMUDekauEBnLP77cUEEnDR /3Vw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=kFK9dAVM; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 3/8] KVM: Add KVM_EXIT_MEMORY_FAULT exit Date: Tue, 25 Oct 2022 23:13:39 +0800 Message-Id: <20221025151344.3784230-4-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-7.5 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_HI,SPF_HELO_NONE, SPF_NONE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747673494300872350?= X-GMAIL-MSGID: =?utf-8?q?1747673494300872350?= This new KVM exit allows userspace to handle memory-related errors. It indicates an error happens in KVM at guest memory range [gpa, gpa+size). The flags includes additional information for userspace to handle the error. Currently bit 0 is defined as 'private memory' where '1' indicates error happens due to private memory access and '0' indicates error happens due to shared memory access. When private memory is enabled, this new exit will be used for KVM to exit to userspace for shared <-> private memory conversion in memory encryption usage. In such usage, typically there are two kind of memory conversions: - explicit conversion: happens when guest explicitly calls into KVM to map a range (as private or shared), KVM then exits to userspace to perform the map/unmap operations. - implicit conversion: happens in KVM page fault handler where KVM exits to userspace for an implicit conversion when the page is in a different state than requested (private or shared). Suggested-by: Sean Christopherson Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 23 +++++++++++++++++++++++ include/uapi/linux/kvm.h | 9 +++++++++ 2 files changed, 32 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index f3fa75649a78..975688912b8c 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6537,6 +6537,29 @@ array field represents return values. The userspace should update the return values of SBI call before resuming the VCPU. For more details on RISC-V SBI spec refer, https://github.com/riscv/riscv-sbi-doc. +:: + + /* KVM_EXIT_MEMORY_FAULT */ + struct { + #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1 << 0) + __u32 flags; + __u32 padding; + __u64 gpa; + __u64 size; + } memory; + +If exit reason is KVM_EXIT_MEMORY_FAULT then it indicates that the VCPU has +encountered a memory error which is not handled by KVM kernel module and +userspace may choose to handle it. The 'flags' field indicates the memory +properties of the exit. + + - KVM_MEMORY_EXIT_FLAG_PRIVATE - indicates the memory error is caused by + private memory access when the bit is set. Otherwise the memory error is + caused by shared memory access when the bit is clear. + +'gpa' and 'size' indicate the memory range the error occurs at. 
The userspace +may handle the error and return to KVM to retry the previous memory access. + :: /* KVM_EXIT_NOTIFY */ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index f1ae45c10c94..fa60b032a405 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -300,6 +300,7 @@ struct kvm_xen_exit { #define KVM_EXIT_RISCV_SBI 35 #define KVM_EXIT_RISCV_CSR 36 #define KVM_EXIT_NOTIFY 37 +#define KVM_EXIT_MEMORY_FAULT 38 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -538,6 +539,14 @@ struct kvm_run { #define KVM_NOTIFY_CONTEXT_INVALID (1 << 0) __u32 flags; } notify; + /* KVM_EXIT_MEMORY_FAULT */ + struct { +#define KVM_MEMORY_EXIT_FLAG_PRIVATE (1 << 0) + __u32 flags; + __u32 padding; + __u64 gpa; + __u64 size; + } memory; /* Fix the size of the union. */ char padding[256]; }; From patchwork Tue Oct 25 15:13:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10867 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1066092wru; Tue, 25 Oct 2022 08:22:24 -0700 (PDT) X-Google-Smtp-Source: AMsMyM4Zgqf03HOq0N+RHrN6sz1Dd/y+fY8U+gq+AND4QA09YRdBWCPMXgtgzBSi21cdP0WaZdK8 X-Received: by 2002:a17:907:1b22:b0:741:8809:b4e6 with SMTP id mp34-20020a1709071b2200b007418809b4e6mr33352920ejc.84.1666711343818; Tue, 25 Oct 2022 08:22:23 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711343; cv=none; d=google.com; s=arc-20160816; b=tNF/SjaeFHdRfOQn1uQ6T97MZ47RiM7KUo73IYUKcF2BCgam4xFBTaCuFrzUvSK5Q2 +PhdmHyoFJvl/WwJ+oVCcwqzXl6Mxc7DKT2fVkaolH5hbw5z+7X/Z2Ix+YzqlG/aP9E8 g+DI13g4W9Ua9PNWAXO1dNoNeVIHY8BF9NJLnuSL2EoGrGyjsZEqbw4t/aD/ng/Wniak s9qWdS11JSuLSzGxPg7uqD0wUeXzQG5HlLK2/fwiWEv1GgAjkaRHd5eGE34l76KNLsLq 6xQr1BxU9i7DH6oti7YadHqw/BwtAiulp+BmbO6Irb+izi6/vyUECgaKbvnb6Ly4sTTW Em1w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=ZeCX6+XYJ88tSgL5hc/072IEYNeLVQ1q2js914cezdE=; b=MZame2CzeRBSVkWxngykXQpt2lDkFqCeRgkrws34TuVX4GQXAq0dRImbbm8wh1FIIK SdguEs9hTDiAyfBcVzCPYVwoxAIlEk84ygQQHBktEgxYRpyOrGCo0yLw3iVMUKWqKyL3 7ZZXUr+owPdxd6XiiaIUMm9SDfQ+sluzla4RniGP/WkyxZK4Doz93SpEVtjLin11ASAF J7YOVXz/7e8cd4OLjAZB/dMYjaPiLiIPxeWoz8z4l7e3LYpzStRcr8aCkulwIgn6WO2R V8PYVLC3w/t9Zu1vN8q0P5wVIOKrORLDFebxLG1Kdf8UzXUFfTp3a489cyQ4g68M5CbQ o9/A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="VyLGvL/w"; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 4/8] KVM: Use gfn instead of hva for mmu_notifier_retry Date: Tue, 25 Oct 2022 23:13:40 +0800 Message-Id: <20221025151344.3784230-5-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-7.5 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_HI, RCVD_IN_MSPIKE_H3,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747673513916970101?= X-GMAIL-MSGID: =?utf-8?q?1747673513916970101?= Currently in mmu_notifier validate path, hva range is recorded and then checked against in the mmu_notifier_retry_hva() of the page fault path. However, for the to be introduced private memory, a page fault may not have a hva associated, checking gfn(gpa) makes more sense. For existing non private memory case, gfn is expected to continue to work. The only downside is when aliasing multiple gfns to a single hva, the current algorithm of checking multiple ranges could result in a much larger range being rejected. Such aliasing should be uncommon, so the impact is expected small. It also fixes a bug in kvm_zap_gfn_range() which has already been using gfn when calling kvm_mmu_invalidate_begin/end() while these functions accept hva in current code. 
Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 2 +- include/linux/kvm_host.h | 18 +++++++--------- virt/kvm/kvm_main.c | 45 ++++++++++++++++++++++++++-------------- 3 files changed, 39 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6f81539061d6..33b1aec44fb8 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4217,7 +4217,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, return true; return fault->slot && - mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva); + mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn); } static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 739a7562a1f3..79e5cbc35fcf 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -775,8 +775,8 @@ struct kvm { struct mmu_notifier mmu_notifier; unsigned long mmu_invalidate_seq; long mmu_invalidate_in_progress; - unsigned long mmu_invalidate_range_start; - unsigned long mmu_invalidate_range_end; + gfn_t mmu_invalidate_range_start; + gfn_t mmu_invalidate_range_end; #endif struct list_head devices; u64 manual_dirty_log_protect; @@ -1365,10 +1365,8 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc); void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); #endif -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end); -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end); +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end); +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end); long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg); @@ -1937,9 +1935,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq) return 0; } -static inline int mmu_invalidate_retry_hva(struct kvm *kvm, +static inline int mmu_invalidate_retry_gfn(struct kvm *kvm, unsigned long mmu_seq, - unsigned long hva) + gfn_t gfn) { lockdep_assert_held(&kvm->mmu_lock); /* @@ -1949,8 +1947,8 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm, * positives, due to shortcuts when handing concurrent invalidations. 
*/ if (unlikely(kvm->mmu_invalidate_in_progress) && - hva >= kvm->mmu_invalidate_range_start && - hva < kvm->mmu_invalidate_range_end) + gfn >= kvm->mmu_invalidate_range_start && + gfn < kvm->mmu_invalidate_range_end) return 1; if (kvm->mmu_invalidate_seq != mmu_seq) return 1; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 8dace78a0278..09c9cdeb773c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -540,8 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn, typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range); -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start, - unsigned long end); +typedef void (*on_lock_fn_t)(struct kvm *kvm, gfn_t start, gfn_t end); typedef void (*on_unlock_fn_t)(struct kvm *kvm); @@ -628,7 +627,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm, locked = true; KVM_MMU_LOCK(kvm); if (!IS_KVM_NULL_FN(range->on_lock)) - range->on_lock(kvm, range->start, range->end); + range->on_lock(kvm, gfn_range.start, + gfn_range.end); if (IS_KVM_NULL_FN(range->handler)) break; } @@ -715,15 +715,9 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end) +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, + gfn_t end) { - /* - * The count increase must become visible at unlock time as no - * spte can be established without taking the mmu_lock and - * count is also read inside the mmu_lock critical section. - */ - kvm->mmu_invalidate_in_progress++; if (likely(kvm->mmu_invalidate_in_progress == 1)) { kvm->mmu_invalidate_range_start = start; kvm->mmu_invalidate_range_end = end; @@ -744,6 +738,28 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, } } +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * The count increase must become visible at unlock time as no + * spte can be established without taking the mmu_lock and + * count is also read inside the mmu_lock critical section. 
+ */ + kvm->mmu_invalidate_in_progress++; +} + +static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + update_invalidate_range(kvm, range->start, range->end); + return kvm_unmap_gfn_range(kvm, range); +} + +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) +{ + mark_invalidate_in_progress(kvm, start, end); + update_invalidate_range(kvm, start, end); +} + static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -752,8 +768,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, .start = range->start, .end = range->end, .pte = __pte(0), - .handler = kvm_unmap_gfn_range, - .on_lock = kvm_mmu_invalidate_begin, + .handler = kvm_mmu_handle_gfn_range, + .on_lock = mark_invalidate_in_progress, .on_unlock = kvm_arch_guest_memory_reclaimed, .flush_on_ret = true, .may_block = mmu_notifier_range_blockable(range), @@ -791,8 +807,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end) +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) { /* * This sequence increase will notify the kvm page fault that From patchwork Tue Oct 25 15:13:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10869 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1066285wru; Tue, 25 Oct 2022 08:22:53 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6Q6fL7aOVKJqCZnTPz3Sg+1vJBvopoy8WXPOx9GC0LcSJE/tzlUX/RwROFiI7vAjUgE0ax X-Received: by 2002:a17:907:75e6:b0:7a1:848:20cb with SMTP id jz6-20020a17090775e600b007a1084820cbmr15708730ejc.745.1666711373000; Tue, 25 Oct 2022 08:22:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711372; cv=none; d=google.com; s=arc-20160816; b=MQi6jfhEY38sV9kGncJH9emykCwNV2wdHZ9XoCjXeb6TBJ1M3ZgNNtgkXtydK/crYr 3EihDRL/3jHU1OI9z2t0YbWLBcZkqwfDtrxZGDapYgv8yye3wvS6fzm8poZL7AKOml6C bOwFSWyK56Fc4p27KIg3Wo90K6SNKWoL3LnmcM1sn34oA1GNwCYZ7hMt1MeibJ7OcptY QKZ9+s49/txAhTqfn2phrBW1MacG5dyMNqDgC+evmKfUzmYzaDgLFyfPmq/CD9RmSxDs gBJ2eHuNxDlFCHAW0X5nnukw//IAeW49Hqq+RXjrimpPVK8XzYDGcUVVpOQs6Pfzls0Q Di+g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=FwTLoCwh/MiCXurkwROzZt91PSoc9QqKgtUzt7vqWZA=; b=C1t3Q0em2NCMNApLRyxEk8a994FqM7BW5JC2EdcMNR5OlNaDSosHt15dt0eSW5K969 zRCrbwFKCGFGvBUA7gCXdiGYtu0WwgWMII91LX+rngtoB3ToXH+HREY/D/KY7ALwFy2J p4fG+lt5dRH7rTJjE14a0mzD7fsta79xGbn/pbOhoJ3VmFkcqqT47MDeUdL377FsGh/G /rdJBlHN1wuOV7Hno6cuHfvOj9uZdR3GvTWCfcWzRfMSTpmKxwM8ccZbM20uz8z8k1l/ R+a9ZYPZDw4O0p7EImz3EFEd7wtfj7+L37hCdZzgtr52dkz8rMm971IeCGDsG+XM+BK4 8yrA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=KbXg8KFT; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id du21-20020a17090772d500b0078dcddc1b8csi3285861ejc.788.2022.10.25.08.22.23; Tue, 25 Oct 2022 08:22:52 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=KbXg8KFT; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233155AbiJYPUk (ORCPT + 99 others); Tue, 25 Oct 2022 11:20:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42244 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233109AbiJYPTW (ORCPT ); Tue, 25 Oct 2022 11:19:22 -0400 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6B961192D9E; Tue, 25 Oct 2022 08:19:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711161; x=1698247161; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=aLtIISiTrDngZitiG9gcuAA16c7d+c3InY1rPwfxPPk=; b=KbXg8KFT39OqEcoPhq/qu0XhGwGQXebywRwy9Fyu9xtMcY+0p41d2Eq1 yLlEb4sfqqNzuAAB3pKCHcxYv6nOEp+Q02MevkIXshatyfZoUF04g6ZAI rzHim3CelO1ZWNEqYwWf6Az7dWWZf5j9Wv0/F7jekSZmNzIkyXl8n5D7Z oxUnCX8dlNT6Lkqw+Na2VYVd2uV7dZEtCSv0yR+eZi0GQx3D4w3mjm09n vy4kVO0LUXRTsnz/uutIrGM+BxGfkbXSqRmP9wTVAN7vWSnS8Kd0+O+mS /HPfVQwWNsyG+BokzxSd0GNTyrzcQBrcOV51G7kziPpGro4rITXMoPSzP Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="305320988" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="305320988" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865649" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865649" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:10 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . 
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 5/8] KVM: Register/unregister the guest private memory regions Date: Tue, 25 Oct 2022 23:13:41 +0800 Message-Id: <20221025151344.3784230-6-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-7.5 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_HI,SPF_HELO_NONE, SPF_NONE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747673544920496599?= X-GMAIL-MSGID: =?utf-8?q?1747673544920496599?= Introduce generic private memory register/unregister by reusing existing SEV ioctls KVM_MEMORY_ENCRYPT_{UN,}REG_REGION. It differs from SEV case by treating address in the region as gpa instead of hva. Which cases should these ioctls go is determined by the kvm_arch_has_private_mem(). Architecture which supports KVM_PRIVATE_MEM should override this function. KVM internally defaults all guest memory as private memory and maintain the shared memory in 'mem_attr_array'. The above ioctls operate on this field and unmap existing mappings if any. Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 17 ++- arch/x86/kvm/Kconfig | 1 + include/linux/kvm_host.h | 10 +- virt/kvm/Kconfig | 4 + virt/kvm/kvm_main.c | 227 +++++++++++++++++++++++++-------- 5 files changed, 198 insertions(+), 61 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 975688912b8c..08253cf498d1 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -4717,10 +4717,19 @@ Documentation/virt/kvm/x86/amd-memory-encryption.rst. This ioctl can be used to register a guest memory region which may contain encrypted data (e.g. guest RAM, SMRAM etc). -It is used in the SEV-enabled guest. When encryption is enabled, a guest -memory region may contain encrypted data. The SEV memory encryption -engine uses a tweak such that two identical plaintext pages, each at -different locations will have differing ciphertexts. So swapping or +Currently this ioctl supports registering memory regions for two usages: +private memory and SEV-encrypted memory. + +When private memory is enabled, this ioctl is used to register guest private +memory region and the addr/size of kvm_enc_region represents guest physical +address (GPA). In this usage, this ioctl zaps the existing guest memory +mappings in KVM that fallen into the region. + +When SEV-encrypted memory is enabled, this ioctl is used to register guest +memory region which may contain encrypted data for a SEV-enabled guest. The +addr/size of kvm_enc_region represents userspace address (HVA). The SEV +memory encryption engine uses a tweak such that two identical plaintext pages, +each at different locations will have differing ciphertexts. So swapping or moving ciphertext of those pages will not result in plaintext being swapped. 
So relocating (or migrating) physical backing pages for the SEV guest will require some additional steps. diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 8d2bd455c0cd..73fdfa429b20 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -51,6 +51,7 @@ config KVM select HAVE_KVM_PM_NOTIFIER if PM select HAVE_KVM_RESTRICTED_MEM if X86_64 select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM + select KVM_GENERIC_PRIVATE_MEM if HAVE_KVM_RESTRICTED_MEM help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 79e5cbc35fcf..4ce98fa0153c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -245,7 +245,8 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu); #endif -#ifdef KVM_ARCH_WANT_MMU_NOTIFIER + +#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_KVM_GENERIC_PRIVATE_MEM) struct kvm_gfn_range { struct kvm_memory_slot *slot; gfn_t start; @@ -254,6 +255,9 @@ struct kvm_gfn_range { bool may_block; }; bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range); +#endif + +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range); @@ -794,6 +798,9 @@ struct kvm { struct notifier_block pm_notifier; #endif char stats_id[KVM_STATS_NAME_SIZE]; +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + struct xarray mem_attr_array; +#endif }; #define kvm_err(fmt, ...) \ @@ -1453,6 +1460,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu); int kvm_arch_post_init_vm(struct kvm *kvm); void kvm_arch_pre_destroy_vm(struct kvm *kvm); int kvm_arch_create_vm_debugfs(struct kvm *kvm); +bool kvm_arch_has_private_mem(struct kvm *kvm); #ifndef __KVM_HAVE_ARCH_VM_ALLOC /* diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 9ff164c7e0cc..69ca59e82149 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -89,3 +89,7 @@ config HAVE_KVM_PM_NOTIFIER config HAVE_KVM_RESTRICTED_MEM bool + +config KVM_GENERIC_PRIVATE_MEM + bool + depends on HAVE_KVM_RESTRICTED_MEM diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 09c9cdeb773c..fc3835826ace 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -520,6 +520,62 @@ void kvm_destroy_vcpus(struct kvm *kvm) } EXPORT_SYMBOL_GPL(kvm_destroy_vcpus); +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, + gfn_t end) +{ + if (likely(kvm->mmu_invalidate_in_progress == 1)) { + kvm->mmu_invalidate_range_start = start; + kvm->mmu_invalidate_range_end = end; + } else { + /* + * Fully tracking multiple concurrent ranges has diminishing + * returns. Keep things simple and just find the minimal range + * which includes the current and new ranges. As there won't be + * enough information to subtract a range after its invalidate + * completes, any ranges invalidated concurrently will + * accumulate and persist until all outstanding invalidates + * complete. 
+ */ + kvm->mmu_invalidate_range_start = + min(kvm->mmu_invalidate_range_start, start); + kvm->mmu_invalidate_range_end = + max(kvm->mmu_invalidate_range_end, end); + } +} + +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * The count increase must become visible at unlock time as no + * spte can be established without taking the mmu_lock and + * count is also read inside the mmu_lock critical section. + */ + kvm->mmu_invalidate_in_progress++; +} + +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) +{ + mark_invalidate_in_progress(kvm, start, end); + update_invalidate_range(kvm, start, end); +} + +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * This sequence increase will notify the kvm page fault that + * the page that is going to be mapped in the spte could have + * been freed. + */ + kvm->mmu_invalidate_seq++; + smp_wmb(); + /* + * The above sequence increase must be visible before the + * below count decrease, which is ensured by the smp_wmb above + * in conjunction with the smp_rmb in mmu_invalidate_retry(). + */ + kvm->mmu_invalidate_in_progress--; +} + #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER) static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn) { @@ -715,51 +771,12 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, - gfn_t end) -{ - if (likely(kvm->mmu_invalidate_in_progress == 1)) { - kvm->mmu_invalidate_range_start = start; - kvm->mmu_invalidate_range_end = end; - } else { - /* - * Fully tracking multiple concurrent ranges has diminishing - * returns. Keep things simple and just find the minimal range - * which includes the current and new ranges. As there won't be - * enough information to subtract a range after its invalidate - * completes, any ranges invalidated concurrently will - * accumulate and persist until all outstanding invalidates - * complete. - */ - kvm->mmu_invalidate_range_start = - min(kvm->mmu_invalidate_range_start, start); - kvm->mmu_invalidate_range_end = - max(kvm->mmu_invalidate_range_end, end); - } -} - -static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) -{ - /* - * The count increase must become visible at unlock time as no - * spte can be established without taking the mmu_lock and - * count is also read inside the mmu_lock critical section. - */ - kvm->mmu_invalidate_in_progress++; -} - static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) { update_invalidate_range(kvm, range->start, range->end); return kvm_unmap_gfn_range(kvm, range); } -void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) -{ - mark_invalidate_in_progress(kvm, start, end); - update_invalidate_range(kvm, start, end); -} - static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -807,23 +824,6 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) -{ - /* - * This sequence increase will notify the kvm page fault that - * the page that is going to be mapped in the spte could have - * been freed. 
- */ - kvm->mmu_invalidate_seq++; - smp_wmb(); - /* - * The above sequence increase must be visible before the - * below count decrease, which is ensured by the smp_wmb above - * in conjunction with the smp_rmb in mmu_invalidate_retry(). - */ - kvm->mmu_invalidate_in_progress--; -} - static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -937,6 +937,89 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) +{ + struct kvm_gfn_range gfn_range; + struct kvm_memory_slot *slot; + struct kvm_memslots *slots; + struct kvm_memslot_iter iter; + int i; + int r = 0; + + gfn_range.pte = __pte(0); + gfn_range.may_block = true; + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + + kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) { + slot = iter.slot; + gfn_range.start = max(start, slot->base_gfn); + gfn_range.end = min(end, slot->base_gfn + slot->npages); + if (gfn_range.start >= gfn_range.end) + continue; + gfn_range.slot = slot; + + r |= kvm_unmap_gfn_range(kvm, &gfn_range); + } + } + + if (r) + kvm_flush_remote_tlbs(kvm); +} + +#define KVM_MEM_ATTR_SHARED 0x0001 +static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, + bool is_private) +{ + gfn_t start, end; + unsigned long i; + void *entry; + int idx; + int r = 0; + + if (size == 0 || gpa + size < gpa) + return -EINVAL; + if (gpa & (PAGE_SIZE - 1) || size & (PAGE_SIZE - 1)) + return -EINVAL; + + start = gpa >> PAGE_SHIFT; + end = (gpa + size - 1 + PAGE_SIZE) >> PAGE_SHIFT; + + /* + * Guest memory defaults to private, kvm->mem_attr_array only stores + * shared memory. + */ + entry = is_private ? 
NULL : xa_mk_value(KVM_MEM_ATTR_SHARED); + + idx = srcu_read_lock(&kvm->srcu); + KVM_MMU_LOCK(kvm); + kvm_mmu_invalidate_begin(kvm, start, end); + + for (i = start; i < end; i++) { + r = xa_err(xa_store(&kvm->mem_attr_array, i, entry, + GFP_KERNEL_ACCOUNT)); + if (r) + goto err; + } + + kvm_unmap_mem_range(kvm, start, end); + + goto ret; +err: + for (; i > start; i--) + xa_erase(&kvm->mem_attr_array, i); +ret: + kvm_mmu_invalidate_end(kvm, start, end); + KVM_MMU_UNLOCK(kvm); + srcu_read_unlock(&kvm->srcu, idx); + + return r; +} +#endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ + #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER static int kvm_pm_notifier_call(struct notifier_block *bl, unsigned long state, @@ -1165,6 +1248,9 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); xa_init(&kvm->vcpu_array); +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + xa_init(&kvm->mem_attr_array); +#endif INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -1338,6 +1424,9 @@ static void kvm_destroy_vm(struct kvm *kvm) kvm_free_memslots(kvm, &kvm->__memslots[i][0]); kvm_free_memslots(kvm, &kvm->__memslots[i][1]); } +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + xa_destroy(&kvm->mem_attr_array); +#endif cleanup_srcu_struct(&kvm->irq_srcu); cleanup_srcu_struct(&kvm->srcu); kvm_arch_free_vm(kvm); @@ -1541,6 +1630,11 @@ static void kvm_replace_memslot(struct kvm *kvm, } } +bool __weak kvm_arch_has_private_mem(struct kvm *kvm) +{ + return false; +} + static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -4708,6 +4802,24 @@ static long kvm_vm_ioctl(struct file *filp, r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + case KVM_MEMORY_ENCRYPT_REG_REGION: + case KVM_MEMORY_ENCRYPT_UNREG_REGION: { + struct kvm_enc_region region; + bool set = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION; + + if (!kvm_arch_has_private_mem(kvm)) + goto arch_vm_ioctl; + + r = -EFAULT; + if (copy_from_user(®ion, argp, sizeof(region))) + goto out; + + r = kvm_vm_ioctl_set_mem_attr(kvm, region.addr, + region.size, set); + break; + } +#endif case KVM_GET_DIRTY_LOG: { struct kvm_dirty_log log; @@ -4861,6 +4973,9 @@ static long kvm_vm_ioctl(struct file *filp, r = kvm_vm_ioctl_get_stats_fd(kvm); break; default: +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM +arch_vm_ioctl: +#endif r = kvm_arch_vm_ioctl(filp, ioctl, arg); } out: From patchwork Tue Oct 25 15:13:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10873 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1068144wru; Tue, 25 Oct 2022 08:26:43 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6SEr2u0yRi8UgEQ5IkIFUoF6tbfe3uW+lP6rOdn3x0QOHEY+iOEqZl2+he0Sl6Fs0ov8Ec X-Received: by 2002:a17:907:3f94:b0:78d:9d2f:3002 with SMTP id hr20-20020a1709073f9400b0078d9d2f3002mr32739680ejc.40.1666711603584; Tue, 25 Oct 2022 08:26:43 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711603; cv=none; d=google.com; s=arc-20160816; b=dhMZRRLkMiZm2i4xjNeJ8iLqQA6Y4yjSc2Btr8Izq57aCEg1qGPl2fNH+1qD9l21qu x1XWtrRvMeUTXxnB2WqaSfKqWyMLdmw+bVoZeRnA9S5gZW+e6jwu5phA9stec1XCLZSq /YISGr7PdTRNl1fN3zLHr6rGP6atsoX5X/1U47JwiHah/h4PJkxMvBvnGdBUCPE5IHNr 4y2K7W2fwg7I7yIwkAgB3XhqRbbRRgZ3LNb7/qOSDOyaa/K6rjh0TkaL7DRGS8zfWssj OPMoHduGYCrlHn6rjuAwRxYhq4IXsJpOEkNOCFNsHZ8R1DgJkmMCBkOvdjYLt3w+lB+d vINQ== 
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=dmMeA0i+XC1aI8CmAVcc6CO7d+pGCGu2hdfG6xQJpR0=; b=Z3g3mOa+iyRAvz9wf3kk1KgHXME7UPv3pWbx5nGjtGu/uu3PkS9JMC3vUqT43LXh48 7EwSjZ39rNskrRTGmjSqz9sTTm9YMzzwPIfFdHIOchByD/uoPFFA+IS35sK9F7is5l74 0nJ/AYkKBVGugDP7Qvff7pkuNgIvyvaQKNpHX+J4Ix6BuWqilzEXo7vrfYq3OqWCiXRl AgrT5ihJUCa0jMeSaeTUS7yga/C7sQX3+pcbJ9aHf3Gxnv7GFKJIjwfHu2+jd10Mj+le mVCyWtjy3whg6y2mmk/CGWiud5XLXMJdtZ5jsZtNLSSvK2JlToMED1xIk8KdpUTqDZfM PrXw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=mqvnKObf; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id qq17-20020a17090720d100b007879bc7eacfsi2732461ejb.93.2022.10.25.08.26.19; Tue, 25 Oct 2022 08:26:43 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=mqvnKObf; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233258AbiJYPVj (ORCPT + 99 others); Tue, 25 Oct 2022 11:21:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42060 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233256AbiJYPVG (ORCPT ); Tue, 25 Oct 2022 11:21:06 -0400 Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7385A15A17; Tue, 25 Oct 2022 08:19:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711194; x=1698247194; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0kS2wF3HRGDR5BE6p747FISp0sobPFWYTxerbWnA2RQ=; b=mqvnKObfbsE2dXlhe1LXnddPYScwtEC585SujEcj7FrEyNjf5pUD9fIc 0SumgN9umwochEJu4zdsmFL/ipc84nYaecYApwWfFrnTQWxQSd5UoBBE2 FmKXl+YOqKECGwoGj1ceSBbg4HtgiGL35zdyfUxYWAzoSF3fyCKFjhyFw ZRuy+RrpKLisE8M9eWvGnKeXxd8EYIltHDq+nGsMNm5MXMIZcRrJWUM6D SpWn9G/jsWLTwDK5bkPgSckvhTL3g1DH2xtcfBTb8baslprhY90sN0vy3 LrWS3pF9rOjcYKin0YqwcERKwSh8lfbm6OF2R5g3cwpg6jz9obp47HPGo w==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="308799868" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="308799868" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865704" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865704" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:21 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, 
linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com
Subject: [PATCH v9 6/8] KVM: Update lpage info when private/shared memory are mixed
Date: Tue, 25 Oct 2022 23:13:42 +0800
Message-Id: <20221025151344.3784230-7-chao.p.peng@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
MIME-Version: 1.0

When private and shared memory are mixed within a large page, the lpage_info may not be accurate and should be updated to reflect the mix. A large page containing mixed pages can't really be mapped as a large page, since its private and shared pages come from different physical memory. Update lpage_info when the private/shared memory attribute is changed: if both private and shared pages fall within a large page region, it can't be mapped as a large page. It is a bit challenging to track the mixed info in a 'count'-like variable, so this patch instead reserves a bit in 'disallow_lpage' to indicate that a large page contains mixed private/shared pages.

Signed-off-by: Chao Peng --- arch/x86/include/asm/kvm_host.h | 8 +++ arch/x86/kvm/mmu/mmu.c | 112 +++++++++++++++++++++++++++++++- arch/x86/kvm/x86.c | 2 + include/linux/kvm_host.h | 19 ++++++ virt/kvm/kvm_main.c | 16 +++-- 5 files changed, 152 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 7551b6f9c31c..db811a54e3fd 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -37,6 +37,7 @@ #include #define __KVM_HAVE_ARCH_VCPU_DEBUGFS +#define __KVM_HAVE_ARCH_UPDATE_MEM_ATTR #define KVM_MAX_VCPUS 1024 @@ -952,6 +953,13 @@ struct kvm_vcpu_arch { #endif }; +/* + * Use a bit in disallow_lpage to indicate private/shared pages mixed at the + * level. The remaining bits are used as a reference count.
+ */ +#define KVM_LPAGE_PRIVATE_SHARED_MIXED (1U << 31) +#define KVM_LPAGE_COUNT_MAX ((1U << 31) - 1) + struct kvm_lpage_info { int disallow_lpage; }; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 33b1aec44fb8..67a9823a8c35 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -762,11 +762,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot, { struct kvm_lpage_info *linfo; int i; + int disallow_count; for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { linfo = lpage_info_slot(gfn, slot, i); + + disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX; + WARN_ON(disallow_count + count < 0 || + disallow_count > KVM_LPAGE_COUNT_MAX - count); + linfo->disallow_lpage += count; - WARN_ON(linfo->disallow_lpage < 0); } } @@ -6910,3 +6915,108 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm) if (kvm->arch.nx_lpage_recovery_thread) kthread_stop(kvm->arch.nx_lpage_recovery_thread); } + +static inline bool linfo_is_mixed(struct kvm_lpage_info *linfo) +{ + return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static inline void linfo_update_mixed(struct kvm_lpage_info *linfo, bool mixed) +{ + if (mixed) + linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED; + else + linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static bool mem_attr_is_mixed_2m(struct kvm *kvm, unsigned int attr, + gfn_t start, gfn_t end) +{ + XA_STATE(xas, &kvm->mem_attr_array, start); + gfn_t gfn = start; + void *entry; + bool shared = attr == KVM_MEM_ATTR_SHARED; + bool mixed = false; + + rcu_read_lock(); + entry = xas_load(&xas); + while (gfn < end) { + if (xas_retry(&xas, entry)) + continue; + + KVM_BUG_ON(gfn != xas.xa_index, kvm); + + if ((entry && !shared) || (!entry && shared)) { + mixed = true; + goto out; + } + + entry = xas_next(&xas); + gfn++; + } +out: + rcu_read_unlock(); + return mixed; +} + +static bool mem_attr_is_mixed(struct kvm *kvm, struct kvm_memory_slot *slot, + int level, unsigned int attr, + gfn_t start, gfn_t end) +{ + unsigned long gfn; + void *entry; + + if (level == PG_LEVEL_2M) + return mem_attr_is_mixed_2m(kvm, attr, start, end); + + entry = xa_load(&kvm->mem_attr_array, start); + for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) { + if (linfo_is_mixed(lpage_info_slot(gfn, slot, level - 1))) + return true; + if (xa_load(&kvm->mem_attr_array, gfn) != entry) + return true; + } + return false; +} + +void kvm_arch_update_mem_attr(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned int attr, gfn_t start, gfn_t end) +{ + + unsigned long lpage_start, lpage_end; + unsigned long gfn, pages, mask; + int level; + + WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)), + "Unsupported mem attribute.\n"); + + /* + * The sequence matters here: we update the higher level basing on the + * lower level's scanning result. + */ + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { + pages = KVM_PAGES_PER_HPAGE(level); + mask = ~(pages - 1); + lpage_start = max(start & mask, slot->base_gfn); + lpage_end = (end - 1) & mask; + + /* + * We only need to scan the head and tail page, for middle pages + * we know they are not mixed. 
+ */ + linfo_update_mixed(lpage_info_slot(lpage_start, slot, level), + mem_attr_is_mixed(kvm, slot, level, attr, + lpage_start, start)); + + if (lpage_start == lpage_end) + return; + + for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) + linfo_update_mixed(lpage_info_slot(gfn, slot, level), + false); + + linfo_update_mixed(lpage_info_slot(lpage_end, slot, level), + mem_attr_is_mixed(kvm, slot, level, attr, + end, lpage_end + pages)); + } +} diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 02ad31f46dd7..4276ca73bd7b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12563,6 +12563,8 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm, if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1)) linfo[lpages - 1].disallow_lpage = 1; ugfn = slot->userspace_addr >> PAGE_SHIFT; + if (kvm_slot_can_be_private(slot)) + ugfn |= slot->restricted_offset >> PAGE_SHIFT; /* * If the gfn and userspace address are not aligned wrt each * other, disable large page support for this slot. diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4ce98fa0153c..6ce36065532c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2284,4 +2284,23 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + +#define KVM_MEM_ATTR_SHARED 0x0001 +#define KVM_MEM_ATTR_PRIVATE 0x0002 + +#ifdef __KVM_HAVE_ARCH_UPDATE_MEM_ATTR +void kvm_arch_update_mem_attr(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned int attr, gfn_t start, gfn_t end); +#else +static inline void kvm_arch_update_mem_attr(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned int attr, + gfn_t start, gfn_t end) +{ +} +#endif + +#endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index fc3835826ace..13a37b4d9e97 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -939,7 +939,8 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) #ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM -static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end, + unsigned int attr) { struct kvm_gfn_range gfn_range; struct kvm_memory_slot *slot; @@ -963,6 +964,7 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) gfn_range.slot = slot; r |= kvm_unmap_gfn_range(kvm, &gfn_range); + kvm_arch_update_mem_attr(kvm, slot, attr, start, end); } } @@ -970,7 +972,6 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) kvm_flush_remote_tlbs(kvm); } -#define KVM_MEM_ATTR_SHARED 0x0001 static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, bool is_private) { @@ -979,6 +980,7 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, void *entry; int idx; int r = 0; + unsigned int attr; if (size == 0 || gpa + size < gpa) return -EINVAL; @@ -992,7 +994,13 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, * Guest memory defaults to private, kvm->mem_attr_array only stores * shared memory. */ - entry = is_private ? 
NULL : xa_mk_value(KVM_MEM_ATTR_SHARED); + if (is_private) { + attr = KVM_MEM_ATTR_PRIVATE; + entry = NULL; + } else { + attr = KVM_MEM_ATTR_SHARED; + entry = xa_mk_value(KVM_MEM_ATTR_SHARED); + } idx = srcu_read_lock(&kvm->srcu); KVM_MMU_LOCK(kvm); @@ -1005,7 +1013,7 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, goto err; } - kvm_unmap_mem_range(kvm, start, end); + kvm_unmap_mem_range(kvm, start, end, attr); goto ret; err: From patchwork Tue Oct 25 15:13:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10871 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1067643wru; Tue, 25 Oct 2022 08:25:31 -0700 (PDT) X-Google-Smtp-Source: AMsMyM6VFeamuEQoIxh7aRnrzGBykNfU9mx6eCGlPnl5Fv4CIwg0LKIn7kqPPNEd5C5T6/Pvm68W X-Received: by 2002:a05:6402:4029:b0:45b:d50c:b9b0 with SMTP id d41-20020a056402402900b0045bd50cb9b0mr36103900eda.126.1666711531564; Tue, 25 Oct 2022 08:25:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711531; cv=none; d=google.com; s=arc-20160816; b=irV9nRnq/xfw/3r9vKAXSx5OYsOydeyWvAp97N2PwuS0cBb8mk18rxtPJGFSsxWs2j I61NtBc/bmZZ5ILCP/rQQTCkHss0+J94uwAxpUxKtu2a23X8/cMwkBX5VZkKRGItnF7k DYvXJoF8maqHBDACaeK565wK1sVGAIzwi5cI2n7aHabNJBgGvIyeL6DfwM5Goc01RS6C hCQLBkSOq3+5zGl+tMfXfJgyrgtS408YkHDxEa5HJO3SPFpTf2NZAkKX/WxwCfPVUcFQ JpIDP6Ro7WAWYSeASxoo3rb5gSF4kuWAMrkgF4m7zbl70AIPpEo7BJV3vDOZWykXsTIp mUkg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=1CT8ZXgp0AeheY8gksB2XQZwbbyPtszMCd40r0AHKkY=; b=DhCDIg66noWiLcvVT82YnnjaZAvPnt8wbGXt8oUAC3nzH3oqnmo72+2xZNgFVs5pET dGk5+m9MehjheKhoNgSEp2eKxVkB5wl0KdKq6c/zkS95DCyeqWiOLmYefwyhdBaY/+BB eNR/b48Goc957xS+dlvVII+CYqN890RCfTuf+bd2MKIG8+IGeNDRlr7tTnNTV9wUkxNf TtpPadqN47NCMOH1hW5TQXmMAJvcuuv4CnTPE9kvVLgWamMTmxii3VkESoT2XfGhljVQ 2V+93UK5yXYp8uy641TeWRuj6yRtocmKv3gpIsDNptT/IQddJf05KjHaCgH+3EjNrTLO bOxA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=G8FJPcFB; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id x27-20020a50d61b000000b004615ccd71a0si3270447edi.162.2022.10.25.08.25.03; Tue, 25 Oct 2022 08:25:31 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=G8FJPcFB; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233306AbiJYPWA (ORCPT + 99 others); Tue, 25 Oct 2022 11:22:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41824 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233066AbiJYPVK (ORCPT ); Tue, 25 Oct 2022 11:21:10 -0400 Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 85FF219C33; Tue, 25 Oct 2022 08:19:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711193; x=1698247193; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4Npg8YeluRyzE+wi0wB8FNymqEhtTKsC/t0Ec1nfRz0=; b=G8FJPcFB2N/IRaWapAdfIezBzvfEH77ePUyQiKhX62SrrrevISZISH5J xZNR1lKwIXdOCG1v0Lfb62FG2VU4uaB512ps58TiKNGxr5Ba8pxsJFIgv J4nief4E3/bX9UtkpZcXrBlLg/B/2sv56uRcH0OFCdQCdUD2x8cJrlyDX TwjHBead7ggo5fk+54ArvLVFICVJtQ+ZgUgBE2SnkuWmY5egbvI50GgFV mKkzpi0tSzLez4y/0EKim2dpxH5r+bn34JfgkcmrISXkfWXGSn5dgzjlS 49TotV7MCA1YXxgC1C9NVVRevzXZipJNtPpl4w16BNGJkRfiXMi29iRqt A==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="369772308" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="369772308" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865800" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865800" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:32 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . 
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 7/8] KVM: Handle page fault for private memory Date: Tue, 25 Oct 2022 23:13:43 +0800 Message-Id: <20221025151344.3784230-8-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Spam-Status: No, score=-4.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_EF,RCVD_IN_DNSWL_MED,SPF_HELO_NONE, SPF_NONE,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1747673711235997079?= X-GMAIL-MSGID: =?utf-8?q?1747673711235997079?= A memslot with KVM_MEM_PRIVATE being set can include both fd-based private memory and hva-based shared memory. Architecture code (like TDX code) can tell whether the on-going fault is private or not. This patch adds a 'is_private' field to kvm_page_fault to indicate this and architecture code is expected to set it. To handle page fault for such memslot, the handling logic is different depending on whether the fault is private or shared. KVM checks if 'is_private' matches the host's view of the page (maintained in mem_attr_array). - For a successful match, private pfn is obtained with restrictedmem_get_page () from private fd and shared pfn is obtained with existing get_user_pages(). - For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to userspace. Userspace then can convert memory between private/shared in host's view and retry the fault. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- arch/x86/kvm/mmu/mmu.c | 56 +++++++++++++++++++++++++++++++-- arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++- arch/x86/kvm/mmu/mmutrace.h | 1 + arch/x86/kvm/mmu/spte.h | 6 ++++ arch/x86/kvm/mmu/tdp_mmu.c | 3 +- include/linux/kvm_host.h | 28 +++++++++++++++++ 6 files changed, 103 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 67a9823a8c35..10017a9f26ee 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3030,7 +3030,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level) + int max_level, bool is_private) { struct kvm_lpage_info *linfo; int host_level; @@ -3042,6 +3042,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, break; } + if (is_private) + return max_level; + if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; @@ -3070,7 +3073,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * level, which will be used to do precise, accurate accounting. 
*/ fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, - fault->gfn, fault->max_level); + fault->gfn, fault->max_level, + fault->is_private); if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed) return; @@ -4141,6 +4145,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); } +static inline u8 order_to_level(int order) +{ + BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G); + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G)) + return PG_LEVEL_1G; + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M)) + return PG_LEVEL_2M; + + return PG_LEVEL_4K; +} + +static int kvm_faultin_pfn_private(struct kvm_page_fault *fault) +{ + int order; + struct kvm_memory_slot *slot = fault->slot; + + if (kvm_restricted_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order)) + return RET_PF_RETRY; + + fault->max_level = min(order_to_level(order), fault->max_level); + fault->map_writable = !(slot->flags & KVM_MEM_READONLY); + return RET_PF_CONTINUE; +} + static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; @@ -4173,6 +4203,22 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return RET_PF_EMULATE; } + if (kvm_slot_can_be_private(slot) && + fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) { + vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT; + if (fault->is_private) + vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE; + else + vcpu->run->memory.flags = 0; + vcpu->run->memory.padding = 0; + vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT; + vcpu->run->memory.size = PAGE_SIZE; + return RET_PF_USER; + } + + if (fault->is_private) + return kvm_faultin_pfn_private(fault); + async = false; fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async, fault->write, &fault->map_writable, @@ -5557,6 +5603,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err return -EIO; } + if (r == RET_PF_USER) + return 0; + if (r < 0) return r; if (r != RET_PF_EMULATE) @@ -6408,7 +6457,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, */ if (sp->role.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, - PG_LEVEL_NUM)) { + PG_LEVEL_NUM, + false)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); if (kvm_available_flush_tlb_with_range()) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 582def531d4d..5cdff5ca546c 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -188,6 +188,7 @@ struct kvm_page_fault { /* Derived from mmu and global state. */ const bool is_tdp; + const bool is_private; const bool nx_huge_page_workaround_enabled; /* @@ -236,6 +237,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); * RET_PF_RETRY: let CPU fault again on the address. * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. + * RET_PF_USER: need to exit to userspace to handle this fault. * RET_PF_FIXED: The faulting entry has been fixed. * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. 
* @@ -252,6 +254,7 @@ enum { RET_PF_RETRY, RET_PF_EMULATE, RET_PF_INVALID, + RET_PF_USER, RET_PF_FIXED, RET_PF_SPURIOUS, }; @@ -309,7 +312,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level); + int max_level, bool is_private); void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level); @@ -318,4 +321,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +#ifndef CONFIG_HAVE_KVM_RESTRICTED_MEM +static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + WARN_ON_ONCE(1); + return -EOPNOTSUPP; +} +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ae86820cef69..2d7555381955 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -58,6 +58,7 @@ TRACE_DEFINE_ENUM(RET_PF_CONTINUE); TRACE_DEFINE_ENUM(RET_PF_RETRY); TRACE_DEFINE_ENUM(RET_PF_EMULATE); TRACE_DEFINE_ENUM(RET_PF_INVALID); +TRACE_DEFINE_ENUM(RET_PF_USER); TRACE_DEFINE_ENUM(RET_PF_FIXED); TRACE_DEFINE_ENUM(RET_PF_SPURIOUS); diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 7670c13ce251..9acdf72537ce 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -315,6 +315,12 @@ static inline bool is_dirty_spte(u64 spte) return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK; } +static inline bool is_private_spte(u64 spte) +{ + /* FIXME: Query C-bit/S-bit for SEV/TDX. 
*/ + return false; +} + static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte, int level) { diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 672f0432d777..9f97aac90606 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1768,7 +1768,8 @@ static void zap_collapsible_spte_range(struct kvm *kvm, continue; max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot, - iter.gfn, PG_LEVEL_NUM); + iter.gfn, PG_LEVEL_NUM, + is_private_spte(iter.old_spte)); if (max_mapping_level < iter.level) continue; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6ce36065532c..69300fc6d572 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2301,6 +2301,34 @@ static inline void kvm_arch_update_mem_attr(struct kvm *kvm, } #endif +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) +{ + return !xa_load(&kvm->mem_attr_array, gfn); +} + +#else /* !CONFIG_KVM_GENERIC_PRIVATE_MEM */ + +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) +{ + return false; +} + #endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM +static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + int ret; + struct page *page; + pgoff_t index = gfn - slot->base_gfn + + (slot->restricted_offset >> PAGE_SHIFT); + + ret = restrictedmem_get_page(slot->restricted_file, index, + &page, order); + *pfn = page_to_pfn(page); + return ret; +} +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #endif From patchwork Tue Oct 25 15:13:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 10872 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp1067943wru; Tue, 25 Oct 2022 08:26:12 -0700 (PDT) X-Google-Smtp-Source: AMsMyM7fnbCKoqEKNkDvHl2lw8tscvTa0q8HiGQEFPuGj7VTb5nisox24bZvWqmy81/7N3KyGeuS X-Received: by 2002:a05:6402:1348:b0:461:c056:bf65 with SMTP id y8-20020a056402134800b00461c056bf65mr10673139edw.414.1666711572344; Tue, 25 Oct 2022 08:26:12 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1666711572; cv=none; d=google.com; s=arc-20160816; b=X/ZpWBHkTO27j656CJtBzmpnrU5/lR0z2bGwWbOZ4miZXTxdJxui5KpGTYqU2ELav9 xfNbJSsUFNQZGFm70b3Fkr537AZF04NDTXrTprzsZARi+CK2Zi4oQuD7iRvKOrzouP+n EjkHmnxsO2neSEkSheL4c1lwPYh5bkSc7EmqhMlVaVP7WczrN+zv7cp9FW7+LzsLNTWU Dk7ut03sHrAoqqiOdR80mZ2LTnsYiAtVBa1PVJG/BuQ0Y/4Rl+B6+oUohjgFG9TEPiLo OPuXkkE0TGOA1/IhAfYkxInq/tmW6DCddoKO7KzowY62nz54w/iTj7vpLTagWqAn0l/X F9+w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=IB/8SLeaHlLmiP60umsYqWiy5FLt7B8Dbecuyp2aI28=; b=bA2jhDSxkqUIz0jiyRvPgcdtUk3Ay+Je8PAx7e4xjWYawvdaMI08b5im+u7NQrUttO HbONYjbM0NqfLLuuijchs8Mc+jwNNN6OEklF/8kVAVIIe4JifsSDLZr9UB6wpNWU6PkZ IL/OU+HY30MOKcWBSvJJfFkp85AWaewQlL27ZadoFqQXA4alE4iZqG7CtfjZraXCi+wy aD6WrNKXqf2gtokiMAHxmOQzHob047qOS8mbTcC9ArP0bRpLL23cqqY2lfue+UCzNXDl ShPU7RntTnAKOqxeDdqmGxbjPsgi0N/PSsd5Wilfxw4cdt4qtVtChCR+CTXIr8qlDpH/ 4w5A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=lgez4p6M; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass 
(p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id wu13-20020a170906eecd00b0078dcc87b1c4si3417187ejb.923.2022.10.25.08.25.47; Tue, 25 Oct 2022 08:26:12 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=lgez4p6M; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233330AbiJYPWY (ORCPT + 99 others); Tue, 25 Oct 2022 11:22:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233206AbiJYPVP (ORCPT ); Tue, 25 Oct 2022 11:21:15 -0400 Received: from mga05.intel.com (mga05.intel.com [192.55.52.43]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DF1F4233B9; Tue, 25 Oct 2022 08:19:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711198; x=1698247198; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=gSDGYt1SO9GcrVjFFDdyGRTv5gtWiyJtwnCYk8FJbTc=; b=lgez4p6MWIjuq9XDnGl5vrkhF+5Xy/J1q4ZLwWAH5b1eCsp9lnYLX1TG +l7xfN1wFlOfywqYLWwkDDPxg29FA4EY2VUMya+qndBU5XlQWsWnQxkP7 /DssHSr1u4fQIpu0wB51TkrFFOUBtAjC7fPRotkCOtTxFbS9mfO4ZPHAB 8PZL8O9hoMPidByXOnFJrLv3A3S0trMWR+jGWoR6aTXkLflNsWq2CzG6r K6Kp1dOFxH1ZWM40jG5or44Pchrg6Dqk0x0kz+6a5XRFTONcGuAIVhsRI DrppYJOt9JKCL2SIVD8+Q/dO9CHd3r4eO0THS0W71voP87dcLwdeo88mY g==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="394019202" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="394019202" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865843" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865843" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:46 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . 
Subject: [PATCH v9 8/8] KVM: Enable and expose KVM_MEM_PRIVATE
Date: Tue, 25 Oct 2022 23:13:44 +0800
Message-Id: <20221025151344.3784230-9-chao.p.peng@linux.intel.com>
In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>

Expose KVM_MEM_PRIVATE and the memslot fields restricted_fd/offset to
userspace. KVM registers/unregisters a private memslot with the fd-based
memory backing store and responds to invalidation events from
restrictedmem_notifier by zapping the existing memory mappings in the
secondary page table.

Whether KVM_MEM_PRIVATE is actually exposed to userspace is determined by
architecture code, which can turn it on by overriding the default
kvm_arch_has_private_mem().

A 'kvm' reference is added to the memslot structure because the
restrictedmem_notifier callback can only obtain a memslot reference, while
'kvm' is needed to do the zapping.

Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
---
 include/linux/kvm_host.h |   3 +-
 virt/kvm/kvm_main.c      | 174 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 171 insertions(+), 6 deletions(-)
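Not part of the diff below, but as a rough usage sketch: a VMM could create a
private slot along these lines, assuming the memfd_restricted() backing fd and
the kvm_userspace_memory_region_ext layout introduced earlier in this series
(the wrapper name, exact field spelling, and slot geometry here are
illustrative, not authoritative):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>	/* needs headers carrying this series' definitions */

static int add_private_memslot(int vm_fd, uint32_t slot, uint64_t gpa,
			       uint64_t size, void *shared_hva, int restricted_fd)
{
	/* Only attempt this when the kernel advertises private memory support. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PRIVATE_MEM) != 1)
		return -1;

	struct kvm_userspace_memory_region_ext ext = {
		.region = {	/* assumed embedded legacy region */
			.slot = slot,
			.flags = KVM_MEM_PRIVATE,
			.guest_phys_addr = gpa,
			.memory_size = size,
			.userspace_addr = (uintptr_t)shared_hva,
		},
		.restricted_fd = restricted_fd,
		.restricted_offset = 0,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &ext);
}

Once created, such a slot can only be deleted (memory_size = 0), since private
memslots are immutable, as the hunk in __kvm_set_memory_region below enforces.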
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 69300fc6d572..e27d62c30484 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -246,7 +246,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
-#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_KVM_GENERIC_PRIVATE_MEM)
+#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_HAVE_KVM_RESTRICTED_MEM)
 struct kvm_gfn_range {
 	struct kvm_memory_slot *slot;
 	gfn_t start;
@@ -583,6 +583,7 @@ struct kvm_memory_slot {
 	struct file *restricted_file;
 	loff_t restricted_offset;
 	struct restrictedmem_notifier notifier;
+	struct kvm *kvm;
 };
 
 static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13a37b4d9e97..dae6a2c196ad 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1028,6 +1028,111 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size,
 }
 #endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */
 
+#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM
+static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
+					 pgoff_t start, pgoff_t end,
+					 gfn_t *gfn_start, gfn_t *gfn_end)
+{
+	unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;
+
+	if (start > base_pgoff)
+		*gfn_start = slot->base_gfn + start - base_pgoff;
+	else
+		*gfn_start = slot->base_gfn;
+
+	if (end < base_pgoff + slot->npages)
+		*gfn_end = slot->base_gfn + end - base_pgoff;
+	else
+		*gfn_end = slot->base_gfn + slot->npages;
+
+	if (*gfn_start >= *gfn_end)
+		return false;
+
+	return true;
+}
+
+static void kvm_restrictedmem_invalidate_begin(struct restrictedmem_notifier *notifier,
+					       pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	struct kvm *kvm = slot->kvm;
+	gfn_t gfn_start, gfn_end;
+	struct kvm_gfn_range gfn_range;
+	int idx;
+
+	if (!restrictedmem_range_is_valid(slot, start, end,
+					  &gfn_start, &gfn_end))
+		return;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	KVM_MMU_LOCK(kvm);
+
+	kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
+
+	gfn_range.start = gfn_start;
+	gfn_range.end = gfn_end;
+	gfn_range.slot = slot;
+	gfn_range.pte = __pte(0);
+	gfn_range.may_block = true;
+
+	if (kvm_unmap_gfn_range(kvm, &gfn_range))
+		kvm_flush_remote_tlbs(kvm);
+
+	KVM_MMU_UNLOCK(kvm);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
+static void kvm_restrictedmem_invalidate_end(struct restrictedmem_notifier *notifier,
+					     pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	struct kvm *kvm = slot->kvm;
+	gfn_t gfn_start, gfn_end;
+
+	if (!restrictedmem_range_is_valid(slot, start, end,
+					  &gfn_start, &gfn_end))
+		return;
+
+	KVM_MMU_LOCK(kvm);
+	kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
+	KVM_MMU_UNLOCK(kvm);
+}
+
+static struct restrictedmem_notifier_ops kvm_restrictedmem_notifier_ops = {
+	.invalidate_start = kvm_restrictedmem_invalidate_begin,
+	.invalidate_end = kvm_restrictedmem_invalidate_end,
+};
+
+static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot)
+{
+	slot->notifier.ops = &kvm_restrictedmem_notifier_ops;
+	restrictedmem_register_notifier(slot->restricted_file, &slot->notifier);
+}
+
+static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot)
+{
+	restrictedmem_unregister_notifier(slot->restricted_file,
+					  &slot->notifier);
+}
+
+#else /* !CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
+static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot)
+{
+	WARN_ON_ONCE(1);
+}
+
+static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot)
+{
+	WARN_ON_ONCE(1);
+}
+
+#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 static int kvm_pm_notifier_call(struct notifier_block *bl,
 				unsigned long state,
@@ -1072,6 +1177,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 /* This does not remove the slot from struct kvm_memslots data structures */
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
+	if (slot->flags & KVM_MEM_PRIVATE) {
+		kvm_restrictedmem_unregister(slot);
+		fput(slot->restricted_file);
+	}
+
 	kvm_destroy_dirty_bitmap(slot);
 	kvm_arch_free_memslot(kvm, slot);
 
@@ -1643,10 +1753,16 @@ bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
 	return false;
 }
 
-static int check_memory_region_flags(const struct kvm_user_mem_region *mem)
+static int check_memory_region_flags(struct kvm *kvm,
+				     const struct kvm_user_mem_region *mem)
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
 
+#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+	if (kvm_arch_has_private_mem(kvm))
+		valid_flags |= KVM_MEM_PRIVATE;
+#endif
+
 #ifdef __KVM_HAVE_READONLY_MEM
 	valid_flags |= KVM_MEM_READONLY;
 #endif
@@ -1722,6 +1838,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 {
 	int r;
 
+	if (change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+		kvm_restrictedmem_register(new);
+
 	/*
 	 * If dirty logging is disabled, nullify the bitmap; the old bitmap
 	 * will be freed on "commit".  If logging is enabled in both old and
@@ -1750,6 +1869,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
 		kvm_destroy_dirty_bitmap(new);
 
+	if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+		kvm_restrictedmem_unregister(new);
+
 	return r;
 }
 
@@ -2047,7 +2169,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	int as_id, id;
 	int r;
 
-	r = check_memory_region_flags(mem);
+	r = check_memory_region_flags(kvm, mem);
 	if (r)
 		return r;
 
@@ -2066,6 +2188,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    !access_ok((void __user *)(unsigned long)mem->userspace_addr,
 			mem->memory_size))
 		return -EINVAL;
+	if (mem->flags & KVM_MEM_PRIVATE &&
+	    (mem->restricted_offset & (PAGE_SIZE - 1) ||
+	     mem->restricted_offset > U64_MAX - mem->memory_size))
+		return -EINVAL;
 	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
@@ -2104,6 +2230,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
 			return -EINVAL;
 	} else { /* Modify an existing slot. */
+		/* Private memslots are immutable, they can only be deleted. */
+		if (mem->flags & KVM_MEM_PRIVATE)
+			return -EINVAL;
 		if ((mem->userspace_addr != old->userspace_addr) ||
 		    (npages != old->npages) ||
 		    ((mem->flags ^ old->flags) & KVM_MEM_READONLY))
@@ -2132,10 +2261,28 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	new->npages = npages;
 	new->flags = mem->flags;
 	new->userspace_addr = mem->userspace_addr;
+	if (mem->flags & KVM_MEM_PRIVATE) {
+		new->restricted_file = fget(mem->restricted_fd);
+		if (!new->restricted_file ||
+		    !file_is_restrictedmem(new->restricted_file)) {
+			r = -EINVAL;
+			goto out;
+		}
+		new->restricted_offset = mem->restricted_offset;
+	}
+
+	new->kvm = kvm;
 
 	r = kvm_set_memslot(kvm, old, new, change);
 	if (r)
-		kfree(new);
+		goto out;
+
+	return 0;
+
+out:
+	if (new->restricted_file)
+		fput(new->restricted_file);
+	kfree(new);
 	return r;
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
@@ -4604,6 +4751,11 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_BINARY_STATS_FD:
 	case KVM_CAP_SYSTEM_EVENT_DATA:
 		return 1;
+#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+	case KVM_CAP_PRIVATE_MEM:
+		return 1;
+#endif
+
 	default:
 		break;
 	}
@@ -4795,16 +4947,28 @@ static long kvm_vm_ioctl(struct file *filp,
 	}
 	case KVM_SET_USER_MEMORY_REGION: {
 		struct kvm_user_mem_region mem;
-		unsigned long size = sizeof(struct kvm_userspace_memory_region);
+		unsigned int flags_offset = offsetof(typeof(mem), flags);
+		unsigned long size;
+		u32 flags;
 
 		kvm_sanity_check_user_mem_region_alias();
 
+		memset(&mem, 0, sizeof(mem));
+
 		r = -EFAULT;
+		if (get_user(flags, (u32 __user *)(argp + flags_offset)))
+			goto out;
+
+		if (flags & KVM_MEM_PRIVATE)
+			size = sizeof(struct kvm_userspace_memory_region_ext);
+		else
+			size = sizeof(struct kvm_userspace_memory_region);
+
 		if (copy_from_user(&mem, argp, size))
 			goto out;
 
 		r = -EINVAL;
-		if (mem.flags & KVM_MEM_PRIVATE)
+		if ((flags ^ mem.flags) & KVM_MEM_PRIVATE)
 			goto out;
 
 		r = kvm_vm_ioctl_set_memory_region(kvm, &mem);