From patchwork Fri Oct 14 07:19:09 2022
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2541
From: Gavin Shan
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, ajones@ventanamicro.com,
    pbonzini@redhat.com, maz@kernel.org, shuah@kernel.org,
    oliver.upton@linux.dev, seanjc@google.com, peterx@redhat.com,
    maciej.szmigiero@oracle.com, ricarkol@google.com,
    zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH 1/6] KVM: selftests: memslot_perf_test: Use data->nslots in prepare_vm()
Date: Fri, 14 Oct 2022 15:19:09 +0800
Message-Id: <20221014071914.227134-2-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>
In prepare_vm(), 'data->nslots' is assigned 'max_mem_slots - 1' at the
beginning, meaning the two are interchangeable. Use 'data->nslots' instead
of 'max_mem_slots - 1'. With this, it becomes easier to move the logic
that probes the number of slots into the upper layer in subsequent patches.

No functional change intended.

Signed-off-by: Gavin Shan
---
 tools/testing/selftests/kvm/memslot_perf_test.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 44995446d942..231cc8449c2e 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -280,14 +280,14 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	ucall_init(data->vm, NULL);
 
 	pr_info_v("Adding slots 1..%i, each slot with %"PRIu64" pages + %"PRIu64" extra pages last\n",
-		  max_mem_slots - 1, data->pages_per_slot, rempages);
+		  data->nslots, data->pages_per_slot, rempages);
 
 	clock_gettime(CLOCK_MONOTONIC, &tstart);
-	for (slot = 1, guest_addr = MEM_GPA; slot < max_mem_slots; slot++) {
+	for (slot = 1, guest_addr = MEM_GPA; slot <= data->nslots; slot++) {
 		uint64_t npages;
 
 		npages = data->pages_per_slot;
-		if (slot == max_mem_slots - 1)
+		if (slot == data->nslots)
 			npages += rempages;
 
 		vm_userspace_mem_region_add(data->vm, VM_MEM_SRC_ANONYMOUS,
@@ -297,12 +297,12 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	}
 	*slot_runtime = timespec_elapsed(tstart);
 
-	for (slot = 0, guest_addr = MEM_GPA; slot < max_mem_slots - 1; slot++) {
+	for (slot = 0, guest_addr = MEM_GPA; slot < data->nslots; slot++) {
 		uint64_t npages;
 		uint64_t gpa;
 
 		npages = data->pages_per_slot;
-		if (slot == max_mem_slots - 2)
+		if (slot == data->nslots - 1)
 			npages += rempages;
 
 		gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr,
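For readers following the arithmetic, a minimal standalone sketch (not the
selftest code; the slot count and page count below are made-up inputs) of why
the two loop bounds are equivalent once data->nslots is set to
max_mem_slots - 1: iterating slots 1..nslots visits exactly the same slots as
1..max_mem_slots-1, with the remainder pages folded into the last slot.

/*
 * Standalone sketch: with nslots == max_mem_slots - 1, the bound
 * "slot <= nslots" covers the same range as "slot < max_mem_slots",
 * and "slot == nslots" matches the old "slot == max_mem_slots - 1".
 * The numbers below are example inputs, not values from the test.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t max_mem_slots = 9;		/* pretend KVM_CAP_NR_MEMSLOTS */
	uint32_t nslots = max_mem_slots - 1;	/* memory slot 0 is reserved */
	uint64_t mempages = 100;
	uint64_t pages_per_slot = mempages / nslots;
	uint64_t rempages = mempages % nslots;

	for (uint32_t slot = 1; slot <= nslots; slot++) {
		uint64_t npages = pages_per_slot;

		if (slot == nslots)	/* last slot absorbs the remainder */
			npages += rempages;
		printf("slot %u: %llu pages\n", slot, (unsigned long long)npages);
	}

	return 0;
}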
From patchwork Fri Oct 14 07:19:10 2022
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2540
From: Gavin Shan
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, ajones@ventanamicro.com,
    pbonzini@redhat.com, maz@kernel.org, shuah@kernel.org,
    oliver.upton@linux.dev, seanjc@google.com, peterx@redhat.com,
    maciej.szmigiero@oracle.com, ricarkol@google.com,
    zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH 2/6] KVM: selftests: memslot_perf_test: Consolidate loop conditions in prepare_vm()
Date: Fri, 14 Oct 2022 15:19:10 +0800
Message-Id: <20221014071914.227134-3-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>

There are two loops in prepare_vm() with different conditions: 'slot' is
treated as the memory slot index in the first loop, but as the index into
the host virtual address array in the second loop. This makes the code a
bit hard to follow. Change the usage of 'slot' in the second loop so that
it is treated as the memory slot index there as well.

No functional change intended.

Signed-off-by: Gavin Shan
---
 tools/testing/selftests/kvm/memslot_perf_test.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 231cc8449c2e..dcb492b3f27b 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -297,21 +297,20 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	}
 	*slot_runtime = timespec_elapsed(tstart);
 
-	for (slot = 0, guest_addr = MEM_GPA; slot < data->nslots; slot++) {
+	for (slot = 1, guest_addr = MEM_GPA; slot <= data->nslots; slot++) {
 		uint64_t npages;
 		uint64_t gpa;
 
 		npages = data->pages_per_slot;
-		if (slot == data->nslots - 1)
+		if (slot == data->nslots)
 			npages += rempages;
 
-		gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr,
-					 slot + 1);
+		gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr, slot);
 		TEST_ASSERT(gpa == guest_addr,
 			    "vm_phy_pages_alloc() failed\n");
 
-		data->hva_slots[slot] = addr_gpa2hva(data->vm, guest_addr);
-		memset(data->hva_slots[slot], 0, npages * 4096);
+		data->hva_slots[slot - 1] = addr_gpa2hva(data->vm, guest_addr);
+		memset(data->hva_slots[slot - 1], 0, npages * 4096);
 
 		guest_addr += npages * 4096;
 	}
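To make the resulting convention concrete, a minimal standalone sketch (not
the selftest code; the array and its contents are placeholders) of the
indexing this patch settles on: the loop variable is the 1-based memory slot
number, and the host-virtual-address lookup array is indexed with slot - 1.

/*
 * Standalone sketch of the indexing convention after this change:
 * 'slot' is the 1-based memory slot number everywhere, while the
 * host-VA array is indexed with slot - 1. NSLOTS and the backing
 * array are placeholders, not values taken from the test.
 */
#include <stdint.h>
#include <stdio.h>

#define NSLOTS 4

int main(void)
{
	char backing[NSLOTS];		/* stand-in for per-slot host mappings */
	void *hva_slots[NSLOTS];

	for (uint32_t slot = 1; slot <= NSLOTS; slot++)
		hva_slots[slot - 1] = &backing[slot - 1];

	for (uint32_t slot = 1; slot <= NSLOTS; slot++)
		printf("memory slot %u -> hva_slots[%u] = %p\n",
		       slot, slot - 1, hva_slots[slot - 1]);

	return 0;
}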
From patchwork Fri Oct 14 07:19:11 2022
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2545
From: Gavin Shan
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, ajones@ventanamicro.com,
    pbonzini@redhat.com, maz@kernel.org, shuah@kernel.org,
    oliver.upton@linux.dev, seanjc@google.com, peterx@redhat.com,
    maciej.szmigiero@oracle.com, ricarkol@google.com,
    zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH 3/6] KVM: selftests: memslot_perf_test: Probe memory slots for once
Date: Fri, 14 Oct 2022 15:19:11 +0800
Message-Id: <20221014071914.227134-4-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>

prepare_vm() is called in every iteration and run, so the allowed number
of memory slots (KVM_CAP_NR_MEMSLOTS) is probed multiple times. That is
neither free nor necessary. Move the probing logic for the allowed memory
slots into parse_args(), the upper layer of prepare_vm(), so that it is
done only once.

No functional change intended.

Signed-off-by: Gavin Shan
---
 .../testing/selftests/kvm/memslot_perf_test.c | 29 ++++++++++---------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index dcb492b3f27b..d5aa9148f96f 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -245,27 +245,17 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 		       void *guest_code, uint64_t mempages,
 		       struct timespec *slot_runtime)
 {
-	uint32_t max_mem_slots;
 	uint64_t rempages;
 	uint64_t guest_addr;
 	uint32_t slot;
 	struct timespec tstart;
 	struct sync_area *sync;
 
-	max_mem_slots = kvm_check_cap(KVM_CAP_NR_MEMSLOTS);
-	TEST_ASSERT(max_mem_slots > 1,
-		    "KVM_CAP_NR_MEMSLOTS should be greater than 1");
-	TEST_ASSERT(nslots > 1 || nslots == -1,
-		    "Slot count cap should be greater than 1");
-	if (nslots != -1)
-		max_mem_slots = min(max_mem_slots, (uint32_t)nslots);
-	pr_info_v("Allowed number of memory slots: %"PRIu32"\n", max_mem_slots);
-
 	TEST_ASSERT(mempages > 1,
 		    "Can't test without any memory");
 
 	data->npages = mempages;
-	data->nslots = max_mem_slots - 1;
+	data->nslots = nslots;
 	data->pages_per_slot = mempages / data->nslots;
 	if (!data->pages_per_slot) {
 		*maxslots = mempages + 1;
@@ -885,8 +875,8 @@ static bool parse_args(int argc, char *argv[],
 			break;
 		case 's':
 			targs->nslots = atoi(optarg);
-			if (targs->nslots <= 0 && targs->nslots != -1) {
-				pr_info("Slot count cap has to be positive or -1 for no cap\n");
+			if (targs->nslots <= 1 && targs->nslots != -1) {
+				pr_info("Slot count cap must be larger than 1 or -1 for no cap\n");
 				return false;
 			}
 			break;
@@ -932,6 +922,19 @@ static bool parse_args(int argc, char *argv[],
 		return false;
 	}
 
+	/* Memory slot 0 is reserved */
+	if (targs->nslots == -1) {
+		targs->nslots = kvm_check_cap(KVM_CAP_NR_MEMSLOTS) - 1;
+		if (targs->nslots < 1) {
+			pr_info("KVM_CAP_NR_MEMSLOTS should be greater than 1\n");
+			return false;
+		}
+	} else {
+		targs->nslots--;
+	}
+
+	pr_info_v("Number of memory slots: %d\n", targs->nslots);
+
 	return true;
 }
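As an aside, the probe-once pattern that this patch moves into parse_args()
can be sketched in isolation as below. The sketch is not the selftest code:
probe_nr_memslots() is a hypothetical stand-in for
kvm_check_cap(KVM_CAP_NR_MEMSLOTS), and the value it returns is made up.

/*
 * Standalone sketch: resolve "-1 = no cap" into a concrete slot count a
 * single time during argument parsing, instead of probing in prepare_vm()
 * on every iteration.
 */
#include <stdbool.h>
#include <stdio.h>

static int probe_nr_memslots(void)
{
	return 32764;	/* made-up value for KVM_CAP_NR_MEMSLOTS */
}

static bool resolve_nslots(int *nslots)
{
	if (*nslots == -1) {
		/* Memory slot 0 is reserved, so one slot is not usable. */
		*nslots = probe_nr_memslots() - 1;
		if (*nslots < 1) {
			fprintf(stderr, "KVM_CAP_NR_MEMSLOTS should be greater than 1\n");
			return false;
		}
	} else {
		(*nslots)--;	/* a user-supplied cap also loses the reserved slot */
	}

	return true;
}

int main(void)
{
	int nslots = -1;	/* as if "-s" was not given on the command line */

	if (resolve_nslots(&nslots))
		printf("Number of memory slots: %d\n", nslots);

	return 0;
}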
From patchwork Fri Oct 14 07:19:12 2022
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2542
From: Gavin Shan
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, ajones@ventanamicro.com,
    pbonzini@redhat.com, maz@kernel.org, shuah@kernel.org,
    oliver.upton@linux.dev, seanjc@google.com, peterx@redhat.com,
    maciej.szmigiero@oracle.com, ricarkol@google.com,
    zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH 4/6] KVM: selftests: memslot_perf_test: Support variable guest page size
Date: Fri, 14 Oct 2022 15:19:12 +0800
Message-Id: <20221014071914.227134-5-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>
The test case is broken on aarch64 because guest page sizes other than 4KB
are supported there: the guest page size on aarch64 can be 4KB, 16KB or
64KB. Add support for a variable guest page size, mostly for aarch64.

- The host determines the guest page size when the virtual machine is
  created. The value is also passed to the guest through the
  synchronization area.

- The number of guest pages is unknown until the virtual machine is about
  to be created, so all the related macros are dropped. Instead, their
  values are calculated dynamically from the guest page size.

- The static checks on memory sizes and pages become dependent on the
  guest page size, which is unknown until the virtual machine is about to
  be created, so they are converted to dynamic checks, done in
  check_memory_sizes().

- As the addresses passed to madvise() should be aligned to the host page
  size, the size of the page chunk is selected automatically instead of
  always being one page.

- All other changes in this patch mechanically replace '4096' with
  'guest_page_size'.

Signed-off-by: Gavin Shan
---
 .../testing/selftests/kvm/memslot_perf_test.c | 191 +++++++++++-------
 1 file changed, 115 insertions(+), 76 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c index d5aa9148f96f..d587bd952ff9 100644 --- a/tools/testing/selftests/kvm/memslot_perf_test.c +++ b/tools/testing/selftests/kvm/memslot_perf_test.c @@ -26,14 +26,11 @@ #include #define MEM_SIZE ((512U << 20) + 4096) -#define MEM_SIZE_PAGES (MEM_SIZE / 4096) #define MEM_GPA 0x10000000UL #define MEM_AUX_GPA MEM_GPA #define MEM_SYNC_GPA MEM_AUX_GPA #define MEM_TEST_GPA (MEM_AUX_GPA + 4096) #define MEM_TEST_SIZE (MEM_SIZE - 4096) -static_assert(MEM_SIZE % 4096 == 0, "invalid mem size"); -static_assert(MEM_TEST_SIZE % 4096 == 0, "invalid mem test size"); /* * 32 MiB is max size that gets well over 100 iterations on 509 slots. @@ -42,29 +39,16 @@ static_assert(MEM_TEST_SIZE % 4096 == 0, "invalid mem test size"); * limited resolution).
*/ #define MEM_SIZE_MAP ((32U << 20) + 4096) -#define MEM_SIZE_MAP_PAGES (MEM_SIZE_MAP / 4096) #define MEM_TEST_MAP_SIZE (MEM_SIZE_MAP - 4096) -#define MEM_TEST_MAP_SIZE_PAGES (MEM_TEST_MAP_SIZE / 4096) -static_assert(MEM_SIZE_MAP % 4096 == 0, "invalid map test region size"); -static_assert(MEM_TEST_MAP_SIZE % 4096 == 0, "invalid map test region size"); -static_assert(MEM_TEST_MAP_SIZE_PAGES % 2 == 0, "invalid map test region size"); -static_assert(MEM_TEST_MAP_SIZE_PAGES > 2, "invalid map test region size"); /* * 128 MiB is min size that fills 32k slots with at least one page in each * while at the same time gets 100+ iterations in such test + * + * 2 MiB chunk size like a typical huge page */ #define MEM_TEST_UNMAP_SIZE (128U << 20) -#define MEM_TEST_UNMAP_SIZE_PAGES (MEM_TEST_UNMAP_SIZE / 4096) -/* 2 MiB chunk size like a typical huge page */ -#define MEM_TEST_UNMAP_CHUNK_PAGES (2U << (20 - 12)) -static_assert(MEM_TEST_UNMAP_SIZE <= MEM_TEST_SIZE, - "invalid unmap test region size"); -static_assert(MEM_TEST_UNMAP_SIZE % 4096 == 0, - "invalid unmap test region size"); -static_assert(MEM_TEST_UNMAP_SIZE_PAGES % - (2 * MEM_TEST_UNMAP_CHUNK_PAGES) == 0, - "invalid unmap test region size"); +#define MEM_TEST_UNMAP_CHUNK_SIZE (2U << 20) /* * For the move active test the middle of the test area is placed on @@ -77,8 +61,7 @@ static_assert(MEM_TEST_UNMAP_SIZE_PAGES % * for the total size of 25 pages. * Hence, the maximum size here is 50 pages. */ -#define MEM_TEST_MOVE_SIZE_PAGES (50) -#define MEM_TEST_MOVE_SIZE (MEM_TEST_MOVE_SIZE_PAGES * 4096) +#define MEM_TEST_MOVE_SIZE 0x32000 #define MEM_TEST_MOVE_GPA_DEST (MEM_GPA + MEM_SIZE) static_assert(MEM_TEST_MOVE_SIZE <= MEM_TEST_SIZE, "invalid move test region size"); @@ -100,6 +83,7 @@ struct vm_data { }; struct sync_area { + uint32_t guest_page_size; atomic_bool start_flag; atomic_bool exit_flag; atomic_bool sync_flag; @@ -192,14 +176,15 @@ static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages) uint64_t gpage, pgoffs; uint32_t slot, slotoffs; void *base; + uint32_t guest_page_size = data->vm->page_size; TEST_ASSERT(gpa >= MEM_GPA, "Too low gpa to translate"); - TEST_ASSERT(gpa < MEM_GPA + data->npages * 4096, + TEST_ASSERT(gpa < MEM_GPA + data->npages * guest_page_size, "Too high gpa to translate"); gpa -= MEM_GPA; - gpage = gpa / 4096; - pgoffs = gpa % 4096; + gpage = gpa / guest_page_size; + pgoffs = gpa % guest_page_size; slot = min(gpage / data->pages_per_slot, (uint64_t)data->nslots - 1); slotoffs = gpage - (slot * data->pages_per_slot); @@ -217,14 +202,16 @@ static void *vm_gpa2hva(struct vm_data *data, uint64_t gpa, uint64_t *rempages) } base = data->hva_slots[slot]; - return (uint8_t *)base + slotoffs * 4096 + pgoffs; + return (uint8_t *)base + slotoffs * guest_page_size + pgoffs; } static uint64_t vm_slot2gpa(struct vm_data *data, uint32_t slot) { + uint32_t guest_page_size = data->vm->page_size; + TEST_ASSERT(slot < data->nslots, "Too high slot number"); - return MEM_GPA + slot * data->pages_per_slot * 4096; + return MEM_GPA + slot * data->pages_per_slot * guest_page_size; } static struct vm_data *alloc_vm(void) @@ -242,33 +229,34 @@ static struct vm_data *alloc_vm(void) } static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots, - void *guest_code, uint64_t mempages, + void *guest_code, uint64_t mem_size, struct timespec *slot_runtime) { - uint64_t rempages; + uint64_t mempages, rempages; uint64_t guest_addr; - uint32_t slot; + uint32_t slot, guest_page_size; struct timespec tstart; struct 
sync_area *sync; - TEST_ASSERT(mempages > 1, - "Can't test without any memory"); + guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size; + mempages = mem_size / guest_page_size; + + data->vm = __vm_create_with_one_vcpu(&data->vcpu, mempages, guest_code); + ucall_init(data->vm, NULL); data->npages = mempages; + TEST_ASSERT(data->npages > 1, "Can't test without any memory"); data->nslots = nslots; - data->pages_per_slot = mempages / data->nslots; + data->pages_per_slot = data->npages / data->nslots; if (!data->pages_per_slot) { - *maxslots = mempages + 1; + *maxslots = data->npages + 1; return false; } - rempages = mempages % data->nslots; + rempages = data->npages % data->nslots; data->hva_slots = malloc(sizeof(*data->hva_slots) * data->nslots); TEST_ASSERT(data->hva_slots, "malloc() fail"); - data->vm = __vm_create_with_one_vcpu(&data->vcpu, mempages, guest_code); - ucall_init(data->vm, NULL); - pr_info_v("Adding slots 1..%i, each slot with %"PRIu64" pages + %"PRIu64" extra pages last\n", data->nslots, data->pages_per_slot, rempages); @@ -283,7 +271,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots, vm_userspace_mem_region_add(data->vm, VM_MEM_SRC_ANONYMOUS, guest_addr, slot, npages, 0); - guest_addr += npages * 4096; + guest_addr += npages * guest_page_size; } *slot_runtime = timespec_elapsed(tstart); @@ -300,12 +288,12 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots, "vm_phy_pages_alloc() failed\n"); data->hva_slots[slot - 1] = addr_gpa2hva(data->vm, guest_addr); - memset(data->hva_slots[slot - 1], 0, npages * 4096); + memset(data->hva_slots[slot - 1], 0, npages * guest_page_size); - guest_addr += npages * 4096; + guest_addr += npages * guest_page_size; } - virt_map(data->vm, MEM_GPA, MEM_GPA, mempages); + virt_map(data->vm, MEM_GPA, MEM_GPA, data->npages); sync = (typeof(sync))vm_gpa2hva(data, MEM_SYNC_GPA, NULL); atomic_init(&sync->start_flag, false); @@ -404,6 +392,7 @@ static bool guest_perform_sync(void) static void guest_code_test_memslot_move(void) { struct sync_area *sync = (typeof(sync))MEM_SYNC_GPA; + uint32_t page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size); uintptr_t base = (typeof(base))READ_ONCE(sync->move_area_ptr); GUEST_SYNC(0); @@ -414,7 +403,7 @@ static void guest_code_test_memslot_move(void) uintptr_t ptr; for (ptr = base; ptr < base + MEM_TEST_MOVE_SIZE; - ptr += 4096) + ptr += page_size) *(uint64_t *)ptr = MEM_TEST_VAL_1; /* @@ -432,6 +421,7 @@ static void guest_code_test_memslot_move(void) static void guest_code_test_memslot_map(void) { struct sync_area *sync = (typeof(sync))MEM_SYNC_GPA; + uint32_t page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size); GUEST_SYNC(0); @@ -441,14 +431,16 @@ static void guest_code_test_memslot_map(void) uintptr_t ptr; for (ptr = MEM_TEST_GPA; - ptr < MEM_TEST_GPA + MEM_TEST_MAP_SIZE / 2; ptr += 4096) + ptr < MEM_TEST_GPA + MEM_TEST_MAP_SIZE / 2; + ptr += page_size) *(uint64_t *)ptr = MEM_TEST_VAL_1; if (!guest_perform_sync()) break; for (ptr = MEM_TEST_GPA + MEM_TEST_MAP_SIZE / 2; - ptr < MEM_TEST_GPA + MEM_TEST_MAP_SIZE; ptr += 4096) + ptr < MEM_TEST_GPA + MEM_TEST_MAP_SIZE; + ptr += page_size) *(uint64_t *)ptr = MEM_TEST_VAL_2; if (!guest_perform_sync()) @@ -495,6 +487,9 @@ static void guest_code_test_memslot_unmap(void) static void guest_code_test_memslot_rw(void) { + struct sync_area *sync = (typeof(sync))MEM_SYNC_GPA; + uint32_t page_size = (typeof(page_size))READ_ONCE(sync->guest_page_size); + GUEST_SYNC(0); 
guest_spin_until_start(); @@ -503,14 +498,14 @@ static void guest_code_test_memslot_rw(void) uintptr_t ptr; for (ptr = MEM_TEST_GPA; - ptr < MEM_TEST_GPA + MEM_TEST_SIZE; ptr += 4096) + ptr < MEM_TEST_GPA + MEM_TEST_SIZE; ptr += page_size) *(uint64_t *)ptr = MEM_TEST_VAL_1; if (!guest_perform_sync()) break; - for (ptr = MEM_TEST_GPA + 4096 / 2; - ptr < MEM_TEST_GPA + MEM_TEST_SIZE; ptr += 4096) { + for (ptr = MEM_TEST_GPA + page_size / 2; + ptr < MEM_TEST_GPA + MEM_TEST_SIZE; ptr += page_size) { uint64_t val = *(uint64_t *)ptr; GUEST_ASSERT_1(val == MEM_TEST_VAL_2, val); @@ -528,6 +523,8 @@ static bool test_memslot_move_prepare(struct vm_data *data, struct sync_area *sync, uint64_t *maxslots, bool isactive) { + uint32_t guest_page_size = data->vm->page_size; + uint64_t move_pages = MEM_TEST_MOVE_SIZE / guest_page_size; uint64_t movesrcgpa, movetestgpa; movesrcgpa = vm_slot2gpa(data, data->nslots - 1); @@ -536,7 +533,7 @@ static bool test_memslot_move_prepare(struct vm_data *data, uint64_t lastpages; vm_gpa2hva(data, movesrcgpa, &lastpages); - if (lastpages < MEM_TEST_MOVE_SIZE_PAGES / 2) { + if (lastpages < move_pages / 2) { *maxslots = 0; return false; } @@ -582,8 +579,9 @@ static void test_memslot_do_unmap(struct vm_data *data, uint64_t offsp, uint64_t count) { uint64_t gpa, ctr; + uint32_t guest_page_size = data->vm->page_size; - for (gpa = MEM_TEST_GPA + offsp * 4096, ctr = 0; ctr < count; ) { + for (gpa = MEM_TEST_GPA + offsp * guest_page_size, ctr = 0; ctr < count; ) { uint64_t npages; void *hva; int ret; @@ -591,12 +589,12 @@ static void test_memslot_do_unmap(struct vm_data *data, hva = vm_gpa2hva(data, gpa, &npages); TEST_ASSERT(npages, "Empty memory slot at gptr 0x%"PRIx64, gpa); npages = min(npages, count - ctr); - ret = madvise(hva, npages * 4096, MADV_DONTNEED); + ret = madvise(hva, npages * guest_page_size, MADV_DONTNEED); TEST_ASSERT(!ret, "madvise(%p, MADV_DONTNEED) on VM memory should not fail for gptr 0x%"PRIx64, hva, gpa); ctr += npages; - gpa += npages * 4096; + gpa += npages * guest_page_size; } TEST_ASSERT(ctr == count, "madvise(MADV_DONTNEED) should exactly cover all of the requested area"); @@ -607,11 +605,12 @@ static void test_memslot_map_unmap_check(struct vm_data *data, { uint64_t gpa; uint64_t *val; + uint32_t guest_page_size = data->vm->page_size; if (!map_unmap_verify) return; - gpa = MEM_TEST_GPA + offsp * 4096; + gpa = MEM_TEST_GPA + offsp * guest_page_size; val = (typeof(val))vm_gpa2hva(data, gpa, NULL); TEST_ASSERT(*val == valexp, "Guest written values should read back correctly before unmap (%"PRIu64" vs %"PRIu64" @ %"PRIx64")", @@ -621,12 +620,14 @@ static void test_memslot_map_unmap_check(struct vm_data *data, static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync) { + uint32_t guest_page_size = data->vm->page_size; + uint64_t guest_pages = MEM_TEST_MAP_SIZE / guest_page_size; + /* * Unmap the second half of the test area while guest writes to (maps) * the first half. 
*/ - test_memslot_do_unmap(data, MEM_TEST_MAP_SIZE_PAGES / 2, - MEM_TEST_MAP_SIZE_PAGES / 2); + test_memslot_do_unmap(data, guest_pages / 2, guest_pages / 2); /* * Wait for the guest to finish writing the first half of the test @@ -637,10 +638,8 @@ static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync) */ host_perform_sync(sync); test_memslot_map_unmap_check(data, 0, MEM_TEST_VAL_1); - test_memslot_map_unmap_check(data, - MEM_TEST_MAP_SIZE_PAGES / 2 - 1, - MEM_TEST_VAL_1); - test_memslot_do_unmap(data, 0, MEM_TEST_MAP_SIZE_PAGES / 2); + test_memslot_map_unmap_check(data, guest_pages / 2 - 1, MEM_TEST_VAL_1); + test_memslot_do_unmap(data, 0, guest_pages / 2); /* @@ -653,16 +652,16 @@ static void test_memslot_map_loop(struct vm_data *data, struct sync_area *sync) * the test area. */ host_perform_sync(sync); - test_memslot_map_unmap_check(data, MEM_TEST_MAP_SIZE_PAGES / 2, - MEM_TEST_VAL_2); - test_memslot_map_unmap_check(data, MEM_TEST_MAP_SIZE_PAGES - 1, - MEM_TEST_VAL_2); + test_memslot_map_unmap_check(data, guest_pages / 2, MEM_TEST_VAL_2); + test_memslot_map_unmap_check(data, guest_pages - 1, MEM_TEST_VAL_2); } static void test_memslot_unmap_loop_common(struct vm_data *data, struct sync_area *sync, uint64_t chunk) { + uint32_t guest_page_size = data->vm->page_size; + uint64_t guest_pages = MEM_TEST_UNMAP_SIZE / guest_page_size; uint64_t ctr; /* @@ -674,42 +673,49 @@ static void test_memslot_unmap_loop_common(struct vm_data *data, */ host_perform_sync(sync); test_memslot_map_unmap_check(data, 0, MEM_TEST_VAL_1); - for (ctr = 0; ctr < MEM_TEST_UNMAP_SIZE_PAGES / 2; ctr += chunk) + for (ctr = 0; ctr < guest_pages / 2; ctr += chunk) test_memslot_do_unmap(data, ctr, chunk); /* Likewise, but for the opposite host / guest areas */ host_perform_sync(sync); - test_memslot_map_unmap_check(data, MEM_TEST_UNMAP_SIZE_PAGES / 2, - MEM_TEST_VAL_2); - for (ctr = MEM_TEST_UNMAP_SIZE_PAGES / 2; - ctr < MEM_TEST_UNMAP_SIZE_PAGES; ctr += chunk) + test_memslot_map_unmap_check(data, guest_pages / 2, MEM_TEST_VAL_2); + for (ctr = guest_pages / 2; ctr < guest_pages; ctr += chunk) test_memslot_do_unmap(data, ctr, chunk); } static void test_memslot_unmap_loop(struct vm_data *data, struct sync_area *sync) { - test_memslot_unmap_loop_common(data, sync, 1); + uint32_t host_page_size = getpagesize(); + uint32_t guest_page_size = data->vm->page_size; + uint64_t guest_chunk_pages = guest_page_size >= host_page_size ? 
+ 1 : host_page_size / guest_page_size; + + test_memslot_unmap_loop_common(data, sync, guest_chunk_pages); } static void test_memslot_unmap_loop_chunked(struct vm_data *data, struct sync_area *sync) { - test_memslot_unmap_loop_common(data, sync, MEM_TEST_UNMAP_CHUNK_PAGES); + uint32_t guest_page_size = data->vm->page_size; + uint64_t guest_chunk_pages = MEM_TEST_UNMAP_CHUNK_SIZE / guest_page_size; + + test_memslot_unmap_loop_common(data, sync, guest_chunk_pages); } static void test_memslot_rw_loop(struct vm_data *data, struct sync_area *sync) { uint64_t gptr; + uint32_t guest_page_size = data->vm->page_size; - for (gptr = MEM_TEST_GPA + 4096 / 2; - gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += 4096) + for (gptr = MEM_TEST_GPA + guest_page_size / 2; + gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += guest_page_size) *(uint64_t *)vm_gpa2hva(data, gptr, NULL) = MEM_TEST_VAL_2; host_perform_sync(sync); for (gptr = MEM_TEST_GPA; - gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += 4096) { + gptr < MEM_TEST_GPA + MEM_TEST_SIZE; gptr += guest_page_size) { uint64_t *vptr = (typeof(vptr))vm_gpa2hva(data, gptr, NULL); uint64_t val = *vptr; @@ -738,7 +744,7 @@ static bool test_execute(int nslots, uint64_t *maxslots, struct timespec *slot_runtime, struct timespec *guest_runtime) { - uint64_t mem_size = tdata->mem_size ? : MEM_SIZE_PAGES; + uint64_t mem_size = tdata->mem_size ? : MEM_SIZE; struct vm_data *data; struct sync_area *sync; struct timespec tstart; @@ -753,6 +759,7 @@ static bool test_execute(int nslots, uint64_t *maxslots, sync = (typeof(sync))vm_gpa2hva(data, MEM_SYNC_GPA, NULL); + sync->guest_page_size = data->vm->page_size; if (tdata->prepare && !tdata->prepare(data, sync, maxslots)) { ret = false; @@ -786,19 +793,19 @@ static bool test_execute(int nslots, uint64_t *maxslots, static const struct test_data tests[] = { { .name = "map", - .mem_size = MEM_SIZE_MAP_PAGES, + .mem_size = MEM_SIZE_MAP, .guest_code = guest_code_test_memslot_map, .loop = test_memslot_map_loop, }, { .name = "unmap", - .mem_size = MEM_TEST_UNMAP_SIZE_PAGES + 1, + .mem_size = MEM_TEST_UNMAP_SIZE + 4096, .guest_code = guest_code_test_memslot_unmap, .loop = test_memslot_unmap_loop, }, { .name = "unmap chunked", - .mem_size = MEM_TEST_UNMAP_SIZE_PAGES + 1, + .mem_size = MEM_TEST_UNMAP_SIZE + 4096, .guest_code = guest_code_test_memslot_unmap, .loop = test_memslot_unmap_loop_chunked, }, @@ -856,6 +863,35 @@ static void help(char *name, struct test_args *targs) pr_info("%d: %s\n", ctr, tests[ctr].name); } +static bool check_memory_sizes(void) +{ + uint32_t guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size; + + if (MEM_SIZE % guest_page_size || + MEM_TEST_SIZE % guest_page_size) { + pr_info("invalid MEM_SIZE or MEM_TEST_SIZE\n"); + return false; + } + + if (MEM_SIZE_MAP % guest_page_size || + MEM_TEST_MAP_SIZE % guest_page_size || + (MEM_TEST_MAP_SIZE / guest_page_size) <= 2 || + (MEM_TEST_MAP_SIZE / guest_page_size) % 2) { + pr_info("invalid MEM_SIZE_MAP or MEM_TEST_MAP_SIZE\n"); + return false; + } + + if (MEM_TEST_UNMAP_SIZE > MEM_TEST_SIZE || + MEM_TEST_UNMAP_SIZE % guest_page_size || + (MEM_TEST_UNMAP_SIZE / guest_page_size) % + (MEM_TEST_UNMAP_CHUNK_SIZE / guest_page_size)) { + pr_info("invalid MEM_TEST_UNMAP_SIZE or MEM_TEST_UNMAP_CHUNK_SIZE\n"); + return false; + } + + return true; +} + static bool parse_args(int argc, char *argv[], struct test_args *targs) { @@ -1012,6 +1048,9 @@ int main(int argc, char *argv[]) /* Tell stdout not to buffer its content */ setbuf(stdout, NULL); + if (!check_memory_sizes()) + 
return -1; + if (!parse_args(argc, argv, &targs)) return -1;
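The central conversion in patch 4, keeping region sizes in bytes and deriving
page counts at run time from the guest page size, can be illustrated with a
small standalone sketch. It is not the selftest code; the two size macros
mirror the test's constants, and the page sizes are simply the three values
possible on aarch64.

/*
 * Standalone sketch of the byte-size to page-count conversion this patch
 * switches to: sizes stay in bytes and the page counts are derived at run
 * time from the guest page size, so the same constants work for 4KB, 16KB
 * and 64KB guests.
 */
#include <stdint.h>
#include <stdio.h>

#define MEM_TEST_UNMAP_SIZE		(128U << 20)	/* 128 MiB */
#define MEM_TEST_UNMAP_CHUNK_SIZE	(2U << 20)	/* 2 MiB, like a typical huge page */

static void show(uint32_t guest_page_size)
{
	uint64_t test_pages = MEM_TEST_UNMAP_SIZE / guest_page_size;
	uint64_t chunk_pages = MEM_TEST_UNMAP_CHUNK_SIZE / guest_page_size;

	printf("guest page size %6u: %8llu test pages, %4llu pages per chunk\n",
	       guest_page_size,
	       (unsigned long long)test_pages,
	       (unsigned long long)chunk_pages);
}

int main(void)
{
	show(4096);	/* 4KB  */
	show(16384);	/* 16KB */
	show(65536);	/* 64KB */

	return 0;
}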
From patchwork Fri Oct 14 07:19:13 2022
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2548
From: Gavin Shan
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, ajones@ventanamicro.com,
    pbonzini@redhat.com, maz@kernel.org, shuah@kernel.org,
    oliver.upton@linux.dev, seanjc@google.com, peterx@redhat.com,
    maciej.szmigiero@oracle.com, ricarkol@google.com,
    zhenyzha@redhat.com, shan.gavin@gmail.com
Subject: [PATCH 5/6] KVM: selftests: memslot_perf_test: Consolidate memory sizes
Date: Fri, 14 Oct 2022 15:19:13 +0800
Message-Id: <20221014071914.227134-6-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>
The addresses and sizes passed to madvise() and
vm_userspace_mem_region_add() should be aligned to the host page size,
which can be 64KB on aarch64. So it is wrong to pass an additional fixed
4KB memory area to the various tests. Fix it by passing an additional
fixed 64KB memory area instead. With this applied, the following command
works fine on a 64KB-page-size host with a 4KB-page-size guest.

  # ./memslot_perf_test -v -s 512

Signed-off-by: Gavin Shan
---
 .../testing/selftests/kvm/memslot_perf_test.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index d587bd952ff9..e6d34744b45d 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -25,12 +25,14 @@
 #include
 #include
 
-#define MEM_SIZE ((512U << 20) + 4096)
-#define MEM_GPA 0x10000000UL
+#define MEM_EXTRA_SIZE 0x10000
+
+#define MEM_SIZE ((512U << 20) + MEM_EXTRA_SIZE)
+#define MEM_GPA 0x10000000UL
 #define MEM_AUX_GPA MEM_GPA
 #define MEM_SYNC_GPA MEM_AUX_GPA
-#define MEM_TEST_GPA (MEM_AUX_GPA + 4096)
-#define MEM_TEST_SIZE (MEM_SIZE - 4096)
+#define MEM_TEST_GPA (MEM_AUX_GPA + MEM_EXTRA_SIZE)
+#define MEM_TEST_SIZE (MEM_SIZE - MEM_EXTRA_SIZE)
 
 /*
  * 32 MiB is max size that gets well over 100 iterations on 509 slots.
@@ -38,8 +40,8 @@
  * 8194 slots in use can then be tested (although with slightly
  * limited resolution).
  */
-#define MEM_SIZE_MAP ((32U << 20) + 4096)
-#define MEM_TEST_MAP_SIZE (MEM_SIZE_MAP - 4096)
+#define MEM_SIZE_MAP ((32U << 20) + MEM_EXTRA_SIZE)
+#define MEM_TEST_MAP_SIZE (MEM_SIZE_MAP - MEM_EXTRA_SIZE)
 
 /*
  * 128 MiB is min size that fills 32k slots with at least one page in each
@@ -799,13 +801,13 @@ static const struct test_data tests[] = {
 	},
 	{
 		.name = "unmap",
-		.mem_size = MEM_TEST_UNMAP_SIZE + 4096,
+		.mem_size = MEM_TEST_UNMAP_SIZE + MEM_EXTRA_SIZE,
 		.guest_code = guest_code_test_memslot_unmap,
 		.loop = test_memslot_unmap_loop,
 	},
 	{
 		.name = "unmap chunked",
-		.mem_size = MEM_TEST_UNMAP_SIZE + 4096,
+		.mem_size = MEM_TEST_UNMAP_SIZE + MEM_EXTRA_SIZE,
 		.guest_code = guest_code_test_memslot_unmap,
 		.loop = test_memslot_unmap_loop_chunked,
 	},
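The alignment rule behind MEM_EXTRA_SIZE can be checked with a tiny
standalone sketch, not taken from the selftest: the pad placed in front of
the test area must be a multiple of the host page size, so a fixed 4KB pad
breaks on a 64KB-page host while a fixed 64KB pad satisfies both.

/*
 * Standalone sketch of the alignment constraint behind MEM_EXTRA_SIZE:
 * the pad in front of the test area (and hence the addresses later handed
 * to madvise()) must be a multiple of the host page size. A fixed 4KB pad
 * is misaligned on a 64KB-page host; a fixed 64KB pad works on both.
 */
#include <stdint.h>
#include <stdio.h>

static const char *pad_status(uint32_t pad, uint32_t host_page_size)
{
	return (pad % host_page_size) ? "misaligned" : "ok";
}

int main(void)
{
	uint32_t host_page_sizes[] = { 4096, 65536 };

	for (unsigned int i = 0; i < 2; i++) {
		uint32_t hps = host_page_sizes[i];

		printf("host page size %6u: 4KB pad %s, 64KB pad %s\n", hps,
		       pad_status(0x1000, hps), pad_status(0x10000, hps));
	}

	return 0;
}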
From patchwork Fri Oct 14 07:19:14 2022
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2547
From patchwork Fri Oct 14 07:19:14 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gavin Shan
X-Patchwork-Id: 2547
From: Gavin Shan
To: kvmarm@lists.linux.dev
Cc: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
 ajones@ventanamicro.com, pbonzini@redhat.com, maz@kernel.org, shuah@kernel.org,
 oliver.upton@linux.dev, seanjc@google.com, peterx@redhat.com,
 maciej.szmigiero@oracle.com, ricarkol@google.com, zhenyzha@redhat.com,
 shan.gavin@gmail.com
Subject: [PATCH 6/6] KVM: selftests: memslot_perf_test: Report optimal memory slots
Date: Fri, 14 Oct 2022 15:19:14 +0800
Message-Id: <20221014071914.227134-7-gshan@redhat.com>
In-Reply-To: <20221014071914.227134-1-gshan@redhat.com>
References: <20221014071914.227134-1-gshan@redhat.com>
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

The memory area in each slot should be aligned to the host page size.
Otherwise the test fails. For example, the following command fails with
the messages below on a 64KB-page-size host with a 4KB-page-size guest,
which isn't user friendly. Let's report the optimal number of memory
slots instead of failing the test.

  # ./memslot_perf_test -v -s 1000
  Number of memory slots: 999
  Testing map performance with 1 runs, 5 seconds each
  Adding slots 1..999, each slot with 8 pages + 216 extra pages last
  ==== Test Assertion Failure ====
    lib/kvm_util.c:824: vm_adjust_num_guest_pages(vm->mode, npages) == npages
    pid=19872 tid=19872 errno=0 - Success
       1  0x00000000004065b3: vm_userspace_mem_region_add at kvm_util.c:822
       2  0x0000000000401d6b: prepare_vm at memslot_perf_test.c:273
       3   (inlined by) test_execute at memslot_perf_test.c:756
       4   (inlined by) test_loop at memslot_perf_test.c:994
       5   (inlined by) main at memslot_perf_test.c:1073
       6  0x0000ffff7ebb4383: ?? ??:0
       7  0x00000000004021ff: _start at :?
    Number of guest pages is not compatible with the host. Try npages=16

Report the optimal number of memory slots instead of failing the test when
the memory area in each slot isn't aligned to the host page size. With this
applied, the optimal number of memory slots is reported:
  # ./memslot_perf_test -v -s 1000
  Number of memory slots: 999
  Testing map performance with 1 runs, 5 seconds each
  Memslot count too high for this test, decrease the cap (max is 514)

Signed-off-by: Gavin Shan
---
 .../testing/selftests/kvm/memslot_perf_test.c | 45 +++++++++++++++++--
 1 file changed, 41 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index e6d34744b45d..bec65803f220 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -230,16 +230,52 @@ static struct vm_data *alloc_vm(void)
 	return data;
 }
 
+static bool check_slot_pages(uint32_t host_page_size, uint32_t guest_page_size,
+			     uint64_t pages_per_slot, uint64_t rempages)
+{
+	if (!pages_per_slot)
+		return false;
+
+	if ((pages_per_slot * guest_page_size) % host_page_size)
+		return false;
+
+	if ((rempages * guest_page_size) % host_page_size)
+		return false;
+
+	return true;
+}
+
+
+static uint64_t get_max_slots(struct vm_data *data, uint32_t host_page_size)
+{
+	uint32_t guest_page_size = data->vm->page_size;
+	uint64_t mempages, pages_per_slot, rempages;
+	uint64_t slots;
+
+	mempages = data->npages;
+	slots = data->nslots;
+	while (--slots > 1) {
+		pages_per_slot = mempages / slots;
+		rempages = mempages % pages_per_slot;
+		if (check_slot_pages(host_page_size, guest_page_size,
+				     pages_per_slot, rempages))
+			return slots + 1;	/* slot 0 is reserved */
+	}
+
+	return 0;
+}
+
 static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 		       void *guest_code, uint64_t mem_size,
 		       struct timespec *slot_runtime)
 {
 	uint64_t mempages, rempages;
 	uint64_t guest_addr;
-	uint32_t slot, guest_page_size;
+	uint32_t slot, host_page_size, guest_page_size;
 	struct timespec tstart;
 	struct sync_area *sync;
 
+	host_page_size = getpagesize();
 	guest_page_size = vm_guest_mode_params[VM_MODE_DEFAULT].page_size;
 	mempages = mem_size / guest_page_size;
 
@@ -250,12 +286,13 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
 	TEST_ASSERT(data->npages > 1, "Can't test without any memory");
 	data->nslots = nslots;
 	data->pages_per_slot = data->npages / data->nslots;
-	if (!data->pages_per_slot) {
-		*maxslots = data->npages + 1;
+	rempages = data->npages % data->nslots;
+	if (!check_slot_pages(host_page_size, guest_page_size,
+			      data->pages_per_slot, rempages)) {
+		*maxslots = get_max_slots(data, host_page_size);
 		return false;
 	}
 
-	rempages = data->npages % data->nslots;
 	data->hva_slots = malloc(sizeof(*data->hva_slots) * data->nslots);
 	TEST_ASSERT(data->hva_slots, "malloc() fail");
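The slot-count search can also be tried outside the selftest. Below is a small standalone sketch, not the test code itself: the helper names are made up, and the 4KB-guest/64KB-host page sizes and the 32MB + 64KB map-test size are assumptions chosen to match the example above. It applies the same divisibility check prepare_vm() now makes and walks the requested slot count downward until the layout fits, arriving at a cap such as the "max is 514" shown above:

/*
 * Standalone illustration only: mirrors the per-slot alignment check and
 * the downward search over slot counts.  Helper names and the sample
 * sizes below are assumptions, not code taken from the patch.
 */
#include <stdio.h>
#include <stdint.h>

/* Same divisibility conditions prepare_vm() checks for one candidate layout. */
static int slot_layout_ok(uint64_t host_page_size, uint64_t guest_page_size,
			  uint64_t pages_per_slot, uint64_t rempages)
{
	if (!pages_per_slot)
		return 0;
	if ((pages_per_slot * guest_page_size) % host_page_size)
		return 0;
	if ((rempages * guest_page_size) % host_page_size)
		return 0;
	return 1;
}

/* Largest usable memslot count (including reserved slot 0), or 0 if none fits. */
static uint64_t max_usable_slots(uint64_t mempages, uint64_t nslots,
				 uint64_t host_page_size, uint64_t guest_page_size)
{
	uint64_t slots;

	for (slots = nslots; slots > 1; slots--) {
		uint64_t pages_per_slot = mempages / slots;
		uint64_t rempages = mempages % slots;

		if (slot_layout_ok(host_page_size, guest_page_size,
				   pages_per_slot, rempages))
			return slots + 1;	/* slot 0 is reserved */
	}

	return 0;
}

int main(void)
{
	/* Assumed setup: 4KB guest pages, 64KB host pages, 32MB + 64KB map-test area. */
	uint64_t guest_page_size = 4096;
	uint64_t host_page_size = 65536;
	uint64_t mempages = ((32ULL << 20) + 0x10000) / guest_page_size;

	/* 999 test slots corresponds to "-s 1000", since slot 0 is reserved. */
	printf("max usable memslots: %llu\n",
	       (unsigned long long)max_usable_slots(mempages, 999,
						    host_page_size,
						    guest_page_size));
	return 0;
}

Under these assumptions it prints 514, the same cap the patched test reports above for "-s 1000".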