Message ID | 20230330160752.3107283-1-peterx@redhat.com |
---|---|
State | New |
Headers |
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrea Arcangeli <aarcange@redhat.com>, peterx@redhat.com, David Hildenbrand <david@redhat.com>, Andrew Morton <akpm@linux-foundation.org>, Nadav Amit <nadav.amit@gmail.com>, Mike Kravetz <mike.kravetz@oracle.com>, Axel Rasmussen <axelrasmussen@google.com>, Leonardo Bras Soares Passos <lsoaresp@redhat.com>, Mike Rapoport <rppt@linux.vnet.ibm.com>
Subject: [PATCH 16/29] selftests/mm: UFFDIO_API test
Date: Thu, 30 Mar 2023 12:07:52 -0400
Message-Id: <20230330160752.3107283-1-peterx@redhat.com>
In-Reply-To: <20230330155707.3106228-1-peterx@redhat.com>
References: <20230330155707.3106228-1-peterx@redhat.com> |
Series | selftests/mm: Split / Refactor userfault test |
Commit Message
Peter Xu
March 30, 2023, 4:07 p.m. UTC
Add one simple test for UFFDIO_API. With that, I also added a bunch of
small but handy helpers along the way.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
tools/testing/selftests/mm/uffd-unit-tests.c | 111 ++++++++++++++++++-
tools/testing/selftests/mm/vm_util.c | 10 ++
2 files changed, 120 insertions(+), 1 deletion(-)
Comments
On 30.03.23 18:07, Peter Xu wrote:
> Add one simple test for UFFDIO_API. With that, I also added a bunch of
> small but handy helpers along the way.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  tools/testing/selftests/mm/uffd-unit-tests.c | 111 ++++++++++++++++++-
>  tools/testing/selftests/mm/vm_util.c         |  10 ++
>  2 files changed, 120 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
> index 6857388783be..dfb44ffad5f5 100644
> --- a/tools/testing/selftests/mm/uffd-unit-tests.c
> +++ b/tools/testing/selftests/mm/uffd-unit-tests.c
> @@ -9,9 +9,118 @@
>
>  #ifdef __NR_userfaultfd
>
> +struct {
> +        unsigned int pass, skip, fail, total;
> +} uffd_test_acct;
> +
> +static void uffd_test_report(void)
> +{
> +        printf("Userfaults unit tests: pass=%u, skip=%u, fail=%u (total=%u)\n",
> +               uffd_test_acct.pass,
> +               uffd_test_acct.skip,
> +               uffd_test_acct.fail,
> +               uffd_test_acct.total);
> +}
> +
> +static void uffd_test_pass(void)
> +{
> +        printf("done\n");
> +        uffd_test_acct.pass++;
> +}
> +
> +#define uffd_test_start(...) do {                \
> +                printf(__VA_ARGS__);             \
> +                printf("... ");                  \
> +                uffd_test_acct.total++;          \
> +        } while (0)
> +
> +#define uffd_test_fail(...) do {                 \
> +                printf("failed [reason: ");      \
> +                printf(__VA_ARGS__);             \
> +                printf("]\n");                   \
> +                uffd_test_acct.fail++;           \
> +        } while (0)
> +
> +#define uffd_test_skip(...) do {                 \
> +                printf("skipped [reason: ");     \
> +                printf(__VA_ARGS__);             \
> +                printf("]\n");                   \
> +                uffd_test_acct.skip++;           \
> +        } while (0)
> +

There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we
maybe want to convert properly to ksft while already at it?
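For reference, a minimal sketch of what such a conversion could look like for the reporting side, assuming the ksft_* helpers declared in tools/testing/selftests/kselftest.h (illustrative only, not part of the posted series):

/* Sketch only: the patch's pass/skip/fail reporting expressed via kselftest.h. */
#include <stdbool.h>
#include "../kselftest.h"    /* relative to tools/testing/selftests/mm/ */

static void uffd_api_result(int uffd, bool use_dev)
{
        const char *name = use_dev ? "/dev/userfaultfd" : "syscall";

        if (uffd < 0)
                ksft_test_result_skip("UFFDIO_API (with %s): cannot open handle\n", name);
        else
                ksft_test_result_pass("UFFDIO_API (with %s)\n", name);
        /* A failing check would call ksft_test_result_fail(...) instead. */
}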
On Mon, Apr 03, 2023 at 09:59:50AM +0200, David Hildenbrand wrote: > There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we > maybe want to convert properly to ksft while already at it? Yes, I started with trying to use that but found that there're not a lot of things that I can leverage. Starting with ksft_set_plan() - I think this is something we call first. I want the current unit test to skip everything if UFFD API test failed here, then I need to feed in a dynamic number of "plan" into ksft_set_plan(). But I never know after I ran the 1st test.. I can call ksft_set_plan() later than this, but it misses a few tests which also looks weird. It also seems to not really help anything at all and not obvious to use. E.g. ksft_finished() will reference ksft_plan then it'll trigger ksft_exit_fail() but here I want to make it SKIP if the 1st test failed simply because the kernel probably doesn't have CONFIG_USERFAULTFD. Another example: I never figured what does x{fail|pass|skip} meant in the header.. e.g. ksft_inc_xfail_cnt() is used nowhere so I cannot reference either. Then I don't know when I should increase them. In short, to make the unit test behave as expected, I figured I'll just write these few helpers and that's good enough for this unit test. That takes perhaps 5 min anyway and isn't hugely bad for an unit test. Then I keep the exit code matching kselftests (KSFT_SKIP, etc.). What I can do here, though, is at least reuse the counters, e.g: ksft_inc_pass_cnt() / ksft_inc_fail_cnt() There's no ksft_inc_skip_cnt() so, maybe, I can just reuse ksft_inc_xskip_cnt() assuming that counts "skip"s? Let me know if you have better ideas, I'll be happy to switch in that case. Thanks,
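For illustration, the counter reuse mentioned above might look roughly like the following, keeping the patch's printf-style output while letting the kselftest.h counters named in the mail (ksft_inc_fail_cnt(), ksft_inc_xskip_cnt()) do the accounting; this is a sketch, not the posted code:

/* Sketch only: reuse kselftest.h counters underneath the existing output format. */
#include <stdio.h>
#include "../kselftest.h"

#define uffd_test_fail(...) do {                 \
                printf("failed [reason: ");      \
                printf(__VA_ARGS__);             \
                printf("]\n");                   \
                ksft_inc_fail_cnt();             \
        } while (0)

/* There is no ksft_inc_skip_cnt(); xskip is the closest counter. */
#define uffd_test_skip(...) do {                 \
                printf("skipped [reason: ");     \
                printf(__VA_ARGS__);             \
                printf("]\n");                   \
                ksft_inc_xskip_cnt();            \
        } while (0)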
On 03.04.23 18:43, Peter Xu wrote: > On Mon, Apr 03, 2023 at 09:59:50AM +0200, David Hildenbrand wrote: >> There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we >> maybe want to convert properly to ksft while already at it? > > Yes, I started with trying to use that but found that there're not a lot of > things that I can leverage. > > Starting with ksft_set_plan() - I think this is something we call first. I > want the current unit test to skip everything if UFFD API test failed here, > then I need to feed in a dynamic number of "plan" into ksft_set_plan(). > But I never know after I ran the 1st test.. In cow.c I did that. Getting the number of tests right can be challenging indeed. Basic "feature availability" checks would go first (is uffd even around?), and depending on that you can set the plan. For everything else, you can skip instead of test, so it will still be accounted towards the plan. > > I can call ksft_set_plan() later than this, but it misses a few tests which > also looks weird. Yeah, it would be nice to simply make ksft_set_plan() optional. For example, make ksft_print_cnts() skip the comparison if ksft_plan == 0. At least ksft_exit_skip() handles that already in a descend way (below). > > It also seems to not really help anything at all and not obvious to use. > E.g. ksft_finished() will reference ksft_plan then it'll trigger > ksft_exit_fail() but here I want to make it SKIP if the 1st test failed > simply because the kernel probably doesn't have CONFIG_USERFAULTFD. You'd simply do that availability check first and then use ksft_exit_skip() in case not available I guess. > > Another example: I never figured what does x{fail|pass|skip} meant in the > header.. e.g. ksft_inc_xfail_cnt() is used nowhere so I cannot reference > either. Then I don't know when I should increase them. In cow.c I have the following flow: ksft_print_header(); ksft_set_plan(); ... tests ... err = ksft_get_fail_cnt(); if (err) ksft_exit_fail_msg(); return ksft_exit_pass(); That gives me: # [INFO] detected THP size: 2048 KiB # [INFO] detected hugetlb size: 2048 KiB # [INFO] detected hugetlb size: 1048576 KiB # [INFO] huge zeropage is enabled TAP version 13 1..190 ... # Totals: pass:87 fail:0 xfail:0 xpass:0 skip:103 error:0 I didn't use xfail or xpass so far, but what I understood is that these are "expected failures" and "expected passes". fail/pass/skip are straight forward. ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip() are used to set them. You'd do availability checks before ksft_set_plan() and fail with a ksft_exit_skip() if the kernel doesn't support it. Then, you'd just use ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip(). > > In short, to make the unit test behave as expected, I figured I'll just > write these few helpers and that's good enough for this unit test. That > takes perhaps 5 min anyway and isn't hugely bad for an unit test. > > Then I keep the exit code matching kselftests (KSFT_SKIP, etc.). > > What I can do here, though, is at least reuse the counters, e.g: > > ksft_inc_pass_cnt() / ksft_inc_fail_cnt() > > There's no ksft_inc_skip_cnt() so, maybe, I can just reuse > ksft_inc_xskip_cnt() assuming that counts "skip"s? > > Let me know if you have better ideas, I'll be happy to switch in that case. I guess once you start manually increasing/decreasing the cnt, you might be abusing the ksft framework indeed and are better off handling it differently :D
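Formatted, the cow.c flow quoted above is roughly the following (sketch assuming the kselftest.h helpers; the individual test bodies are elided):

/* cow.c-style top-level flow (sketch; test bodies elided). */
#include "../kselftest.h"

int main(void)
{
        int err;

        ksft_print_header();
        ksft_set_plan(190);     /* number of test results we intend to report */

        /* ... run tests; each reports exactly once via ksft_test_result_*() ... */

        err = ksft_get_fail_cnt();
        if (err)
                ksft_exit_fail_msg("%d out of %d tests failed\n",
                                   err, ksft_test_num());
        return ksft_exit_pass();
}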
On Mon, Apr 03, 2023 at 09:06:26PM +0200, David Hildenbrand wrote: > On 03.04.23 18:43, Peter Xu wrote: > > On Mon, Apr 03, 2023 at 09:59:50AM +0200, David Hildenbrand wrote: > > > There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we > > > maybe want to convert properly to ksft while already at it? > > > > Yes, I started with trying to use that but found that there're not a lot of > > things that I can leverage. > > > > Starting with ksft_set_plan() - I think this is something we call first. I > > want the current unit test to skip everything if UFFD API test failed here, > > then I need to feed in a dynamic number of "plan" into ksft_set_plan(). > > But I never know after I ran the 1st test.. > > In cow.c I did that. Getting the number of tests right can be challenging > indeed. IMHO the major thing is not about not easy to set, it's about there's merely no benefit I can see of having that calculated at the start of a test. There's one thing it can do, that is when calling ksft_finished() it can be used to know whether all tests are run, but sadly here we're calculating everything just to make it match.. so it loses its last purpose.. IMHO. > > Basic "feature availability" checks would go first (is uffd even around?), > and depending on that you can set the plan. > > For everything else, you can skip instead of test, so it will still be > accounted towards the plan. > > > > > I can call ksft_set_plan() later than this, but it misses a few tests which > > also looks weird. > > Yeah, it would be nice to simply make ksft_set_plan() optional. For example, > make ksft_print_cnts() skip the comparison if ksft_plan == 0. At least > ksft_exit_skip() handles that already in a descend way (below). > > > > > It also seems to not really help anything at all and not obvious to use. > > E.g. ksft_finished() will reference ksft_plan then it'll trigger > > ksft_exit_fail() but here I want to make it SKIP if the 1st test failed > > simply because the kernel probably doesn't have CONFIG_USERFAULTFD. > > You'd simply do that availability check first and then use ksft_exit_skip() > in case not available I guess. > > > > > Another example: I never figured what does x{fail|pass|skip} meant in the > > header.. e.g. ksft_inc_xfail_cnt() is used nowhere so I cannot reference > > either. Then I don't know when I should increase them. > > In cow.c I have the following flow: > > ksft_print_header(); > ksft_set_plan(); > ... tests ... > err = ksft_get_fail_cnt(); > if (err) > ksft_exit_fail_msg(); > return ksft_exit_pass(); > > That gives me: > > # [INFO] detected THP size: 2048 KiB > # [INFO] detected hugetlb size: 2048 KiB > # [INFO] detected hugetlb size: 1048576 KiB > # [INFO] huge zeropage is enabled > TAP version 13 > 1..190 > ... > # Totals: pass:87 fail:0 xfail:0 xpass:0 skip:103 error:0 > > > I didn't use xfail or xpass so far, but what I understood is that these are > "expected failures" and "expected passes". fail/pass/skip are straight Yes, xfail can be expressed that way, but maybe not xpass? Otherwise it's hard to identify what's the difference between xpass and pass, because IIUC pass also means "expected to pass". > forward. > ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip() are > used to set them. > > You'd do availability checks before ksft_set_plan() and fail with a > ksft_exit_skip() if the kernel doesn't support it. Then, you'd just use > ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip(). 
> > > > > In short, to make the unit test behave as expected, I figured I'll just > > write these few helpers and that's good enough for this unit test. That > > takes perhaps 5 min anyway and isn't hugely bad for an unit test. > > > > Then I keep the exit code matching kselftests (KSFT_SKIP, etc.). > > > > What I can do here, though, is at least reuse the counters, e.g: > > > > ksft_inc_pass_cnt() / ksft_inc_fail_cnt() > > > > There's no ksft_inc_skip_cnt() so, maybe, I can just reuse > > ksft_inc_xskip_cnt() assuming that counts "skip"s? > > > > Let me know if you have better ideas, I'll be happy to switch in that case. > > I guess once you start manually increasing/decreasing the cnt, you might be > abusing the ksft framework indeed and are better off handling it differently > :D I'm serious considering that to address your comment here, to show that I'm trying my best to use whatever can help in this test case. :) Here reusing it would avoid a few bytes in the bss, which is still beneficial so I can do. At least I'm sure xskip is for skipping now, so I know what to use. PS: one other reason I didn't use the header is also because I prefer the current output of uffd-self-tests.c. I didn't say anything because I think it's stingy.. So let's keep this in a trivial small PS. After all, the print format is half of what the header provides functional-wise. I'll see what I can come up with at last in the next version, thanks David.
On 03.04.23 22:24, Peter Xu wrote: > On Mon, Apr 03, 2023 at 09:06:26PM +0200, David Hildenbrand wrote: >> On 03.04.23 18:43, Peter Xu wrote: >>> On Mon, Apr 03, 2023 at 09:59:50AM +0200, David Hildenbrand wrote: >>>> There is ksft_print_msg, ksft_test_result, ksft_test_result_fail, ... do we >>>> maybe want to convert properly to ksft while already at it? >>> >>> Yes, I started with trying to use that but found that there're not a lot of >>> things that I can leverage. >>> >>> Starting with ksft_set_plan() - I think this is something we call first. I >>> want the current unit test to skip everything if UFFD API test failed here, >>> then I need to feed in a dynamic number of "plan" into ksft_set_plan(). >>> But I never know after I ran the 1st test.. >> >> In cow.c I did that. Getting the number of tests right can be challenging >> indeed. > > IMHO the major thing is not about not easy to set, it's about there's > merely no benefit I can see of having that calculated at the start of a > test. Thinking about it, I believe I figured out why it makes sense. By specifying upfront how many tests you intend to run, the framework can check if you have the right number of pass/fail/skip during that test case execution. This requires a different way of writing tests: each test case is supposed to trigger the exact same number of pass/fail/skip on each possible path. If the numbers don't add up, it could indicate a possible bug in your tests. For example, not triggering a fail on some exit path. While your test execution might indicate "success", there is actually a hidden issue in your test. I started using the framework for all new tests, because I think it's quite nice and at least things will be a bit consistent and possibly tests easier to maintain. Having that said, I can understand why one might not want to use it. And that there are some things in there that might be improved. > > There's one thing it can do, that is when calling ksft_finished() it can be > used to know whether all tests are run, but sadly here we're calculating > everything just to make it match.. so it loses its last purpose.. IMHO. > >> >> Basic "feature availability" checks would go first (is uffd even around?), >> and depending on that you can set the plan. >> >> For everything else, you can skip instead of test, so it will still be >> accounted towards the plan. >> >>> >>> I can call ksft_set_plan() later than this, but it misses a few tests which >>> also looks weird. >> >> Yeah, it would be nice to simply make ksft_set_plan() optional. For example, >> make ksft_print_cnts() skip the comparison if ksft_plan == 0. At least >> ksft_exit_skip() handles that already in a descend way (below). >> >>> >>> It also seems to not really help anything at all and not obvious to use. >>> E.g. ksft_finished() will reference ksft_plan then it'll trigger >>> ksft_exit_fail() but here I want to make it SKIP if the 1st test failed >>> simply because the kernel probably doesn't have CONFIG_USERFAULTFD. >> >> You'd simply do that availability check first and then use ksft_exit_skip() >> in case not available I guess. >> >>> >>> Another example: I never figured what does x{fail|pass|skip} meant in the >>> header.. e.g. ksft_inc_xfail_cnt() is used nowhere so I cannot reference >>> either. Then I don't know when I should increase them. >> >> In cow.c I have the following flow: >> >> ksft_print_header(); >> ksft_set_plan(); >> ... tests ... 
>> err = ksft_get_fail_cnt(); >> if (err) >> ksft_exit_fail_msg(); >> return ksft_exit_pass(); >> >> That gives me: >> >> # [INFO] detected THP size: 2048 KiB >> # [INFO] detected hugetlb size: 2048 KiB >> # [INFO] detected hugetlb size: 1048576 KiB >> # [INFO] huge zeropage is enabled >> TAP version 13 >> 1..190 >> ... >> # Totals: pass:87 fail:0 xfail:0 xpass:0 skip:103 error:0 >> >> >> I didn't use xfail or xpass so far, but what I understood is that these are >> "expected failures" and "expected passes". fail/pass/skip are straight > > Yes, xfail can be expressed that way, but maybe not xpass? Otherwise it's > hard to identify what's the difference between xpass and pass, because IIUC > pass also means "expected to pass". > I'm a simple man, I use pass/fail/skip in my tests. But let's try figuring that out; a quick internet search (no idea how trustworthy) tells me that I was wrong about xpass: xfail: expected failure xpass: unexpected pass it essentially is: xfail -> pass xpass -> fail but with a slight semantic difference when thinking about a test case: ret = mmap(0, 0 ...); if (ret == MAP_FAILED) { XFAIL(); } else { XPASS(); } vs. ret = mmap(0, 0 ...); if (ret == MAP_FAILED) { PASS(); } else { FAIL(); } It's all inherited from other testing frameworks I think. And xpass seems to be completely unused and xfail mostly unused. So I wouldn't worry about that and simply use pass/fail/skip. >> forward. >> ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip() are >> used to set them. >> >> You'd do availability checks before ksft_set_plan() and fail with a >> ksft_exit_skip() if the kernel doesn't support it. Then, you'd just use >> ksft_test_result_fail()/ksft_test_result_pass()/ksft_test_result_skip(). >> >>> >>> In short, to make the unit test behave as expected, I figured I'll just >>> write these few helpers and that's good enough for this unit test. That >>> takes perhaps 5 min anyway and isn't hugely bad for an unit test. >>> >>> Then I keep the exit code matching kselftests (KSFT_SKIP, etc.). >>> >>> What I can do here, though, is at least reuse the counters, e.g: >>> >>> ksft_inc_pass_cnt() / ksft_inc_fail_cnt() >>> >>> There's no ksft_inc_skip_cnt() so, maybe, I can just reuse >>> ksft_inc_xskip_cnt() assuming that counts "skip"s? >>> >>> Let me know if you have better ideas, I'll be happy to switch in that case. >> >> I guess once you start manually increasing/decreasing the cnt, you might be >> abusing the ksft framework indeed and are better off handling it differently >> :D > > I'm serious considering that to address your comment here, to show that I'm > trying my best to use whatever can help in this test case. :) Here reusing :) appreciated, but don't let my comments distract you. If you don't think ksft is a good fit (or any good), then don't use it.
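The xfail/xpass comparison sketched above, reformatted as a compilable example; XFAIL/XPASS/PASS/FAIL are trivial stand-in macros here (the real reporting calls depend on the framework in use):

#include <stdio.h>
#include <sys/mman.h>

/* Stand-ins so the sketch compiles; a real test would call its framework's
 * reporting helpers here instead. */
#define XFAIL() printf("xfail (expected failure)\n")
#define XPASS() printf("xpass (unexpected pass)\n")
#define PASS()  printf("pass\n")
#define FAIL()  printf("fail\n")

int main(void)
{
        void *ret = mmap(NULL, 0, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Reading 1: the failure itself is the expected, "interesting" outcome. */
        if (ret == MAP_FAILED)
                XFAIL();
        else
                XPASS();

        /* Reading 2: the same check phrased as a plain pass/fail. */
        if (ret == MAP_FAILED)
                PASS();
        else
                FAIL();

        return 0;
}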
On Thu, Mar 30, 2023 at 12:07:52PM -0400, Peter Xu wrote: > +int uffd_open(unsigned int flags) > +{ > + int uffd = uffd_open_sys(flags); > + > + if (uffd < 0) > + uffd = uffd_open_dev(flags); > + > + return uffd; > +} A (benign) accident when rebasing here.. I'll move this function into "selftests/mm: Add framework for uffd-unit-test" which is its first usage.
diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
index 6857388783be..dfb44ffad5f5 100644
--- a/tools/testing/selftests/mm/uffd-unit-tests.c
+++ b/tools/testing/selftests/mm/uffd-unit-tests.c
@@ -9,9 +9,118 @@
 
 #ifdef __NR_userfaultfd
 
+struct {
+        unsigned int pass, skip, fail, total;
+} uffd_test_acct;
+
+static void uffd_test_report(void)
+{
+        printf("Userfaults unit tests: pass=%u, skip=%u, fail=%u (total=%u)\n",
+               uffd_test_acct.pass,
+               uffd_test_acct.skip,
+               uffd_test_acct.fail,
+               uffd_test_acct.total);
+}
+
+static void uffd_test_pass(void)
+{
+        printf("done\n");
+        uffd_test_acct.pass++;
+}
+
+#define uffd_test_start(...) do {                \
+                printf(__VA_ARGS__);             \
+                printf("... ");                  \
+                uffd_test_acct.total++;          \
+        } while (0)
+
+#define uffd_test_fail(...) do {                 \
+                printf("failed [reason: ");      \
+                printf(__VA_ARGS__);             \
+                printf("]\n");                   \
+                uffd_test_acct.fail++;           \
+        } while (0)
+
+#define uffd_test_skip(...) do {                 \
+                printf("skipped [reason: ");     \
+                printf(__VA_ARGS__);             \
+                printf("]\n");                   \
+                uffd_test_acct.skip++;           \
+        } while (0)
+
+/*
+ * Returns 1 if specific userfaultfd supported, 0 otherwise.  Note, we'll
+ * return 1 even if some test failed as long as uffd supported, because in
+ * that case we still want to proceed with the rest uffd unit tests.
+ */
+static int test_uffd_api(bool use_dev)
+{
+        struct uffdio_api uffdio_api;
+        int uffd;
+
+        uffd_test_start("UFFDIO_API (with %s)",
+                        use_dev ? "/dev/userfaultfd" : "syscall");
+
+        if (use_dev)
+                uffd = uffd_open_dev(UFFD_FLAGS);
+        else
+                uffd = uffd_open_sys(UFFD_FLAGS);
+        if (uffd < 0) {
+                uffd_test_skip("cannot open userfaultfd handle");
+                return 0;
+        }
+
+        /* Test wrong UFFD_API */
+        uffdio_api.api = 0xab;
+        uffdio_api.features = 0;
+        if (ioctl(uffd, UFFDIO_API, &uffdio_api) == 0) {
+                uffd_test_fail("UFFDIO_API should fail with wrong api but didn't");
+                goto out;
+        }
+
+        /* Test wrong feature bit */
+        uffdio_api.api = UFFD_API;
+        uffdio_api.features = BIT_ULL(63);
+        if (ioctl(uffd, UFFDIO_API, &uffdio_api) == 0) {
+                uffd_test_fail("UFFDIO_API should fail with wrong feature but didn't");
+                goto out;
+        }
+
+        /* Test normal UFFDIO_API */
+        uffdio_api.api = UFFD_API;
+        uffdio_api.features = 0;
+        if (ioctl(uffd, UFFDIO_API, &uffdio_api)) {
+                uffd_test_fail("UFFDIO_API should succeed but failed");
+                goto out;
+        }
+
+        /* Test double requests of UFFDIO_API with a random feature set */
+        uffdio_api.features = BIT_ULL(0);
+        if (ioctl(uffd, UFFDIO_API, &uffdio_api) == 0) {
+                uffd_test_fail("UFFDIO_API should reject initialized uffd");
+                goto out;
+        }
+
+        uffd_test_pass();
+out:
+        close(uffd);
+        /* We have a valid uffd handle */
+        return 1;
+}
+
 int main(int argc, char *argv[])
 {
-        return KSFT_PASS;
+        int has_uffd;
+
+        has_uffd = test_uffd_api(false);
+        has_uffd |= test_uffd_api(true);
+
+        if (!has_uffd) {
+                printf("Userfaultfd not supported or unprivileged, skip all tests\n");
+                exit(KSFT_SKIP);
+        }
+        uffd_test_report();
+        return uffd_test_acct.fail ? KSFT_FAIL : KSFT_PASS;
 }
 
 #else /* __NR_userfaultfd */
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c
index 7c2bf88d6393..62fcf039d6b7 100644
--- a/tools/testing/selftests/mm/vm_util.c
+++ b/tools/testing/selftests/mm/vm_util.c
@@ -254,3 +254,13 @@ int uffd_open_sys(unsigned int flags)
         return -1;
 #endif
 }
+
+int uffd_open(unsigned int flags)
+{
+        int uffd = uffd_open_sys(flags);
+
+        if (uffd < 0)
+                uffd = uffd_open_dev(flags);
+
+        return uffd;
+}
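For readers without the rest of the series at hand: uffd_open_sys() and uffd_open_dev() are defined earlier in vm_util.c. As a rough sketch, the /dev/userfaultfd path amounts to something like the following, assuming the USERFAULTFD_IOC_NEW ioctl from include/uapi/linux/userfaultfd.h (the helper actually shipped in the series may differ in detail):

/* Sketch of the /dev/userfaultfd open path; illustrative only. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

static int uffd_open_dev_sketch(unsigned int flags)
{
        int fd, uffd;

        fd = open("/dev/userfaultfd", O_RDWR | O_CLOEXEC);
        if (fd < 0)
                return fd;

        /* USERFAULTFD_IOC_NEW returns a fresh userfaultfd initialized with @flags. */
        uffd = ioctl(fd, USERFAULTFD_IOC_NEW, flags);
        close(fd);

        return uffd;
}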