[v2,1/4] lib: test copy_{to,from}_user()
Commit Message
Historically, implementations of raw_copy_{to,from}_user() haven't
correctly handled the requirements set out in <linux/uaccess.h>, e.g.
as Al spotted on arm64:
https://lore.kernel.org/all/YNSyZaZtPTmTa5P8@zeniv-ca.linux.org.uk/
Similarly, we've had other issues where incorrect fault handling logic
led to memory corruption, e.g.
https://lore.kernel.org/linux-arm-kernel/Y44gVm7IEMXqilef@FVFF77S0Q05N.cambridge.arm.com/
These issues are easy to introduce and hard to spot as we don't have any
systematic tests for the faulting paths.
This patch adds KUnit tests for raw_copy_{to,from}_user() which
systematically test the faulting paths, allowing us to detect and avoid
problems such as those above. The test checks copy sizes of 0 to 128
bytes at every possible alignment relative to leading/trailing page
boundaries to exhaustively check faulting and success paths.
I've given this a spin on a few architectures using the KUnit QEMU
harness, and it looks like most get *something* wrong. From those
initial runs:
* arm64's copy_to_user() under-reports the number of bytes copied in
some cases, e.g.
| post-destination bytes modified (dst_page[4082]=0x1, offset=4081, size=16, ret=15)
* arm's copy_to_user() under-reports the number of bytes copied in some
cases, and both copy_to_user() and copy_from_user() don't guarantee
that at least a single byte is copied when a partial copy is possible,
e.g.
| post-destination bytes modified (dst_page[4068]=0x1, offset=4067, size=33, ret=32)
| too few bytes consumed (offset=4068, size=32, ret=32)
* i386's copy_from_user() does not guarantee that at least a single byte
is copied when a partial copy is possible, e.g.
| too few bytes consumed (offset=4093, size=8, ret=8)
* riscv's copy_to_user() and copy_from_user() don't guarantee that at
least a single byte is copied when a partial copy is possible, e.g.
| too few bytes consumed (offset=4095, size=2, ret=2)
* s390 passes all tests
* sparc's copy_from_user() over-reports the number of bytes copied in
some cases, e.g.
| copied bytes incorrect (dst_page[0+0]=0x0, src_page[4095+0]=0xff, offset=4095, size=2, ret=1)
* x86_64 passes all tests
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arch@vger.kernel.org
---
lib/Kconfig.debug | 9 +
lib/Makefile | 1 +
lib/usercopy_kunit.c | 431 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 441 insertions(+)
create mode 100644 lib/usercopy_kunit.c
Comments
On Tue, Mar 21, 2023 at 12:25:11PM +0000, Mark Rutland wrote:
> +static void assert_size_valid(struct kunit *test,
> + const struct usercopy_params *params,
> + unsigned long ret)
> +{
> + const unsigned long size = params->size;
> + const unsigned long offset = params->offset;
I think you should drop the 'unsigned' for 'offset'; it better matches
the usercopy_params structure and the 'offset < 0' test.
> +
> + if (ret > size) {
> + KUNIT_ASSERT_FAILURE(test,
> + "return value is impossibly large (offset=%ld, size=%lu, ret=%lu)",
> + offset, size, ret);
> + }
> +
> + /*
> + * When the user buffer starts within a faulting page, no bytes can be
> + * copied, so ret must equal size.
> + */
> + if (offset < 0 || offset >= PAGE_SIZE) {
> + if (ret == size)
> + return;
> +
> + KUNIT_ASSERT_FAILURE(test,
> + "impossible copy did not fail (offset=%ld, size=%lu, ret=%lu)",
> + offset, size, ret);
> + }
> +
> + /*
> + * When the user buffer fits entirely within a non-faulting page, all
> + * bytes must be copied, so ret must equal 0.
> + */
> + if (offset + size <= PAGE_SIZE) {
> + if (ret == 0)
> + return;
> +
> + KUNIT_ASSERT_FAILURE(test,
> + "completely possible copy failed (offset=%ld, size=%lu, ret=%lu)",
> + offset, size, ret);
> + }
> +
> + /*
> + * The buffer starts in a non-faulting page, but continues into a
> + * faulting page. At least one byte must be copied, and at most all the
> + * non-faulting bytes may be copied.
> + */
> + if (ret == size) {
> + KUNIT_ASSERT_FAILURE(test,
> + "too few bytes consumed (offset=%ld, size=%lu, ret=%lu)",
> + offset, size, ret);
> + }
> +
> + if (ret < (offset + size) - PAGE_SIZE) {
> + KUNIT_ASSERT_FAILURE(test,
> + "too many bytes consumed (offset=%ld, size=%lu, ret=%lu)",
> + offset, size, ret);
> + }
> +}
Nitpick: slightly less indentation probably if we write the above as:
KUNIT_ASSERT_TRUE_MSG(test,
ret >= (offset + size) - PAGE_SIZE,
"too many bytes consumed (offset=%ld, size=%lu, ret=%lu)",
offset, size, ret);
Not sure this works for the early return cases above.
> +static void assert_src_valid(struct kunit *test,
> + const struct usercopy_params *params,
> + const char *src, long src_offset,
> + unsigned long ret)
> +{
> + const unsigned long size = params->size;
> + const unsigned long offset = params->offset;
The unsigned offset here doesn't matter much but I'd drop it as well.
> + /*
> + * A usercopy MUST NOT modify the source buffer.
> + */
> + for (int i = 0; i < PAGE_SIZE; i++) {
> + char val = src[i];
> +
> + if (val == buf_pattern(i))
> + continue;
> +
> + KUNIT_ASSERT_FAILURE(test,
> + "source bytes modified (src[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
> + i, val, offset, size, ret);
> + }
> +}
> +
> +static void assert_dst_valid(struct kunit *test,
> + const struct usercopy_params *params,
> + const char *dst, long dst_offset,
> + unsigned long ret)
> +{
> + const unsigned long size = params->size;
> + const unsigned long offset = params->offset;
> +
> + /*
> + * A usercopy MUST NOT modify any bytes before the destination buffer.
> + */
> + for (int i = 0; i < dst_offset; i++) {
> + char val = dst[i];
> +
> + if (val == 0)
> + continue;
> +
> + KUNIT_ASSERT_FAILURE(test,
> + "pre-destination bytes modified (dst_page[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
> + i, val, offset, size, ret);
> + }
> +
> + /*
> + * A usercopy MUST NOT modify any bytes after the end of the destination
> + * buffer.
> + */
Without looking at the arm64 fixes in this series, I think such a test can
fail for us currently given the right offset.
> + for (int i = dst_offset + size - ret; i < PAGE_SIZE; i++) {
> + char val = dst[i];
> +
> + if (val == 0)
> + continue;
> +
> + KUNIT_ASSERT_FAILURE(test,
> + "post-destination bytes modified (dst_page[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
> + i, val, offset, size, ret);
> + }
> +}
[...]
> +/*
> + * Generate the size and offset combinations to test.
> + *
> + * Usercopies may have different paths for small/medium/large copies, but
> + * typically max out at 64 byte chunks. We test sizes 0 to 128 bytes to check
> + * all combinations of leading/trailing chunks and bulk copies.
> + *
> + * For each size, we test every offset relative to a leading and trailing page
> + * boundary (i.e. [size, 0] and [PAGE_SIZE - size, PAGE_SIZE]) to check every
> + * possible faulting boundary.
> + */
> +#define for_each_size_offset(size, offset) \
> + for (unsigned long size = 0; size <= 128; size++) \
> + for (long offset = -size; \
> + offset <= (long)PAGE_SIZE; \
> + offset = (offset ? offset + 1 : (PAGE_SIZE - size)))
> +
> +static void test_copy_to_user(struct kunit *test)
> +{
> + const struct usercopy_env *env = test->priv;
> +
> + for_each_size_offset(size, offset) {
> + const struct usercopy_params params = {
> + .size = size,
> + .offset = offset,
> + };
> + unsigned long ret;
> +
> + buf_init_zero(env->ubuf);
> + buf_init_pattern(env->kbuf);
> +
> + ret = do_copy_to_user(env, &params);
> +
> + assert_size_valid(test, &params, ret);
> + assert_src_valid(test, &params, env->kbuf, 0, ret);
> + assert_dst_valid(test, &params, env->ubuf, params.offset, ret);
> + assert_copy_valid(test, &params,
> + env->ubuf, params.offset,
> + env->kbuf, 0,
> + ret);
> + }
> +}
IIUC, in such tests you only vary the destination offset. Our copy
routines in general try to align the source and leave the destination
unaligned for performance. It would be interesting to add some variation
on the source offset as well to spot potential issues with that part of
the memcpy routines.
On Tue, Mar 21, 2023 at 5:25 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> * arm64's copy_to_user() under-reports the number of bytes copied in
> some cases, e.g.
So I think this is the ok case.
> * arm's copy_to_user() under-reports the number of bytes copied in some
> cases, and both copy_to_user() and copy_from_user() don't guarantee
> that at least a single byte is copied when a partial copy is possible,
Again, this is ok historically.
> * i386's copy_from_user() does not guarantee that at least a single byte
> is copied when a partial copy is possible, e.g.
>
> | too few bytes consumed (offset=4093, size=8, ret=8)
And here's the real example of "we've always done this optimization".
The exact details have differed, but the i386 case is the really
really traditional one: it does word-at-a-time copies, and does *not*
try to fall back to byte-wise copies. Never has.
> * riscv's copy_to_user() and copy_from_user() don't guarantee that at
> least a single byte is copied when a partial copy is possible, e.g.
>
> | too few bytes consumed (offset=4095, size=2, ret=2)
Yup. This is all the same "we've never forced byte-at-a-time copies"
> * s390 passes all tests
>
> * sparc's copy_from_user() over-reports the number of bytes copied in
> some cases, e.g.
So this case I think is wrong, and an outright bug. That can
cause people to think that uninitialized data is initialized, and leak
sensitive information.
> * x86_64 passes all tests
I suspect your testing is flawed due to being too limited, and x86-64
having multiple different copying routines.
Yes, at some point we made everything be quite careful with
"handle_tail" etc, but we end up still having things that fail early,
and fail hard.
At a minimum, at least unsafe_copy_to_user() will fault and not do the
"fill to the very last byte" case. Of course, that doesn't return a
partial length (it only has a "fail" case), but it's an example of
this whole thing where we haven't really been byte-exact when doing
copies.
So again, I get the feeling that these rules may make sense from a
validation standpoint, but I'm not 100% sure we should generally have
to be this careful.
Linus
On Tue, Mar 21, 2023 at 11:04:26AM -0700, Linus Torvalds wrote:
> On Tue, Mar 21, 2023 at 5:25 AM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > * arm64's copy_to_user() under-reports the number of bytes copied in
> > some cases, e.g.
>
> So I think this is the ok case.
>
> > * arm's copy_to_user() under-reports the number of bytes copied in some
> > cases, and both copy_to_user() and copy_from_user() don't guarantee
> > that at least a single byte is copied when a partial copy is possible,
>
> Again, this is ok historically.
>
> > * i386's copy_from_user() does not guarantee that at least a single byte
> > is copied when a partial copy is possible, e.g.
> >
> > | too few bytes consumed (offset=4093, size=8, ret=8)
>
> And here's the real example of "we've always done this optimization".
> The exact details have differed, but the i386 case is the really
> really traditional one: it does word-at-a-time copies, and does *not*
> try to fall back to byte-wise copies. Never has.
Sure; I understand that. The reason for pointing this out is that Al was very
specific that implementations *must* guarantee this back in:
https://lore.kernel.org/all/YNSyZaZtPTmTa5P8@zeniv-ca.linux.org.uk/
... and that this could be done by having the fixup handler try to copy a byte.
I had assumed that *something* depended upon that, but I don't know what that
something actually is.
I'm not wedded to the semantic either way; if that's not required I can drop it
from the tests.
> > * s390 passes all tests
> >
> > * sparc's copy_from_user() over-reports the number of bytes copied in
> > some cases, e.g.
>
> So this case I think is wrong, and an outright bug. That can
> cause people to think that uninitialized data is initialized, and leak
> sensitive information.
Agreed.
> > * x86_64 passes all tests
>
> I suspect your testing is flawed due to being too limited, and x86-64
> having multiple different copying routines.
Sorry; I should've called that out explicitly. I'm aware I'm not testing all
the variants (I'd be happy to); I just wanted to check that I wasn't going off
into the weeds with the semantics first.
I probably should've sent this as an RFC...
> Yes, at some point we made everything be quite careful with
> "handle_tail" etc, but we end up still having things that fail early,
> and fail hard.
>
> At a minimum, at least unsafe_copy_to_user() will fault and not do the
> "fill to the very last byte" case. Of course, that doesn't return a
> partial length (it only has a "fail" case), but it's an example of
> this whole thing where we haven't really been byte-exact when doing
> copies.
Sure; that does seem to be different structurally too, so it'd need to be
plumbed into the harness differently.
I'll note that's more like {get,put}_user() which similarly just have a fail
case (and a put_user() could do a partial write then fault).
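
For reference, my understanding of the usual unsafe_* pattern is roughly
the sketch below (my paraphrase, not code from this series; the function
name is made up). Any fault just jumps to the error label, with no
partial count reported:

	static int copy_all_or_nothing(void __user *uptr, const void *kbuf,
				       unsigned long size)
	{
		if (!user_access_begin(uptr, size))
			return -EFAULT;
		unsafe_copy_to_user(uptr, kbuf, size, efault);
		user_access_end();
		return 0;
	efault:
		user_access_end();
		return -EFAULT;
	}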
> So again, I get the feeling that these rules may make sense from a
> validation standpoint, but I'm not 100% sure we should generally have
> to be this careful.
I'm more than happy to relax the tests (and the docs); I just need to know
where the boundary is between what we must guarantee and what's a nice-to-have.
Thanks,
Mark.
On Tue, Mar 21, 2023 at 05:09:48PM +0000, Catalin Marinas wrote:
> On Tue, Mar 21, 2023 at 12:25:11PM +0000, Mark Rutland wrote:
> > +static void assert_size_valid(struct kunit *test,
> > + const struct usercopy_params *params,
> > + unsigned long ret)
> > +{
> > + const unsigned long size = params->size;
> > + const unsigned long offset = params->offset;
>
> I think you should drop the 'unsigned' for 'offset'; it better matches
> the usercopy_params structure and the 'offset < 0' test.
Agreed. I'll go fix all offset values to use long.
[...]
> > + if (ret < (offset + size) - PAGE_SIZE) {
> > + KUNIT_ASSERT_FAILURE(test,
> > + "too many bytes consumed (offset=%ld, size=%lu, ret=%lu)",
> > + offset, size, ret);
> > + }
> > +}
>
> Nitpick: slightly less indentation probably if we write the above as:
>
> KUNIT_ASSERT_TRUE_MSG(test,
> ret >= (offset + size) - PAGE_SIZE,
> "too many bytes consumed (offset=%ld, size=%lu, ret=%lu)",
> offset, size, ret);
>
> Not sure this works for the early return cases above.
I had originally used KUNIT_ASSERT_*_MSG(), but found those produced a lot of
unhelpful output; lemme go check what KUNIT_ASSERT_TRUE_MSG() produces.
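
For the early-return cases, I suppose they could become guarded asserts
instead, e.g. something like this (untested sketch):

	if (offset < 0 || offset >= PAGE_SIZE) {
		KUNIT_ASSERT_TRUE_MSG(test, ret == size,
				      "impossible copy did not fail (offset=%ld, size=%lu, ret=%lu)",
				      offset, size, ret);
		return;
	}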
[...]
> > + /*
> > + * A usercopy MUST NOT modify any bytes before the destination buffer.
> > + */
> > + for (int i = 0; i < dst_offset; i++) {
> > + char val = dst[i];
> > +
> > + if (val == 0)
> > + continue;
> > +
> > + KUNIT_ASSERT_FAILURE(test,
> > + "pre-destination bytes modified (dst_page[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
> > + i, val, offset, size, ret);
> > + }
> > +
> > + /*
> > + * A usercopy MUST NOT modify any bytes after the end of the destination
> > + * buffer.
> > + */
>
> Without looking at the arm64 fixes in this series, I think such a test can
> fail for us currently given the right offset.
Yes, it can.
The test matches the documented semantic, so in that sense it's correctly
detecting that arm64 doesn't match the documentation.
Whether the documentation is right is clearly the key issue here. :)
> > +/*
> > + * Generate the size and offset combinations to test.
> > + *
> > + * Usercopies may have different paths for small/medium/large copies, but
> > + * typically max out at 64 byte chunks. We test sizes 0 to 128 bytes to check
> > + * all combinations of leading/trailing chunks and bulk copies.
> > + *
> > + * For each size, we test every offset relative to a leading and trailing page
> > + * boundary (i.e. [size, 0] and [PAGE_SIZE - size, PAGE_SIZE]) to check every
> > + * possible faulting boundary.
> > + */
> > +#define for_each_size_offset(size, offset) \
> > + for (unsigned long size = 0; size <= 128; size++) \
> > + for (long offset = -size; \
> > + offset <= (long)PAGE_SIZE; \
> > + offset = (offset ? offset + 1 : (PAGE_SIZE - size)))
> > +
> > +static void test_copy_to_user(struct kunit *test)
> > +{
> > + const struct usercopy_env *env = test->priv;
> > +
> > + for_each_size_offset(size, offset) {
> > + const struct usercopy_params params = {
> > + .size = size,
> > + .offset = offset,
> > + };
> > + unsigned long ret;
> > +
> > + buf_init_zero(env->ubuf);
> > + buf_init_pattern(env->kbuf);
> > +
> > + ret = do_copy_to_user(env, &params);
> > +
> > + assert_size_valid(test, &params, ret);
> > + assert_src_valid(test, &params, env->kbuf, 0, ret);
> > + assert_dst_valid(test, &params, env->ubuf, params.offset, ret);
> > + assert_copy_valid(test, &params,
> > + env->ubuf, params.offset,
> > + env->kbuf, 0,
> > + ret);
> > + }
> > +}
>
> IIUC, in such tests you only vary the destination offset. Our copy
> routines in general try to align the source and leave the destination
> unaligned for performance. It would be interesting to add some variation
> on the source offset as well to spot potential issues with that part of
> the memcpy routines.
I have that on my TODO list; I had intended to drop that into the
usercopy_params. The only problem is that the cross product of size,
src_offset, and dst_offset gets quite large.
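
Roughly what I have in mind is the sketch below (the src_offset field is
hypothetical and not part of this patch):

	struct usercopy_params {
		long src_offset;	/* hypothetical: source buffer offset */
		long dst_offset;	/* the existing 'offset', renamed */
		unsigned long size;
	};

To keep the runtime manageable I'd probably step src_offset through a
small set of alignments (say 0 to 15) rather than the full cross product.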
Thanks,
Mark.
From: Mark Rutland
> Sent: 22 March 2023 14:05
....
> > IIUC, in such tests you only vary the destination offset. Our copy
> > routines in general try to align the source and leave the destination
> > unaligned for performance. It would be interesting to add some variation
> > on the source offset as well to spot potential issues with that part of
> > the memcpy routines.
>
> I have that on my TODO list; I had intended to drop that into the
> usercopy_params. The only problem is that the cross product of size,
> src_offset, and dst_offset gets quite large.
I thought that it was better to align the writes and do misaligned reads.
Although maybe copy_to/from_user() would be best aligning the user address
(to avoid page faults part way through a misaligned access).
OTOH, on x86, is it even worth bothering at all?
I have measured a performance drop for misaligned reads, but it
was less than 1 clock per cache line in a test that was doing
2 misaligned reads in at least some of the clock cycles.
I think the memory read path can do two AVX reads each clock.
So doing two misaligned 64bit reads isn't stressing it.
David
On Thu, Mar 23, 2023 at 10:16:12PM +0000, David Laight wrote:
> From: Mark Rutland
> > Sent: 22 March 2023 14:05
> ....
> > > IIUC, in such tests you only vary the destination offset. Our copy
> > > routines in general try to align the source and leave the destination
> > > unaligned for performance. It would be interesting to add some variation
> > > on the source offset as well to spot potential issues with that part of
> > > the memcpy routines.
> >
> > I have that on my TODO list; I had intended to drop that into the
> > usercopy_params. The only problem is that the cross product of size,
> > src_offset, and dst_offset gets quite large.
>
> I thought that it was better to align the writes and do misaligned reads.
We inherited the memcpy/memset routines from the optimised cortex
strings library (fine-tuned by the toolchain people for various Arm
microarchitectures). For some CPUs with less aggressive prefetching it's
probably marginally faster to align the reads instead of writes (as
multiple unaligned writes are usually combined in the write buffer
somewhere).
Also, IIRC for some small copies (less than 16 bytes), our routines
don't bother with any alignment at all.
> Although maybe copy_to/from_user() would be best aligning the user address
> (to avoid page faults part way through a misaligned access).
In theory only copy_to_user() needs the write aligned if we want strict
guarantees of what was written. For copy_from_user() we can work around
by falling back to a byte read.
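
E.g. something along the lines of this sketch in the fixup path
(illustrative only, not the actual arm64 code; 'to', 'from' and 'n' are
the usual copy_from_user() arguments):

	/* On fault, retry the remaining bytes one at a time. */
	while (n) {
		char c;

		if (__get_user(c, (const char __user *)from))
			break;
		*(char *)to = c;
		to++;
		from++;
		n--;
	}
	/* n is now a byte-accurate count of the bytes not copied. */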
> OTOH, on x86, is it even worth bothering at all?
> I have measured a performance drop for misaligned reads, but it
> was less than 1 clock per cache line in a test that was doing
> 2 misaligned reads in at least some of the clock cycles.
> I think the memory read path can do two AVX reads each clock.
> So doing two misaligned 64bit reads isn't stressing it.
I think that's what Mark found as well in his testing, though I'm sure
one can build a very specific benchmark that shows a small degradation.
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
@@ -2647,6 +2647,15 @@ config SIPHASH_KUNIT_TEST
This is intended to help people writing architecture-specific
optimized versions. If unsure, say N.
+config USERCOPY_KUNIT_TEST
+ bool "KUnit tests for usercopy functions" if !KUNIT_ALL_TESTS
+ depends on KUNIT=y
+ default KUNIT_ALL_TESTS
+ help
+ Tests for faulting behaviour of copy_{to,from}_user().
+
+ If unsure, say N.
+
config TEST_UDELAY
tristate "udelay test driver"
help
diff --git a/lib/Makefile b/lib/Makefile
@@ -391,6 +391,7 @@ CFLAGS_fortify_kunit.o += $(DISABLE_STRUCTLEAK_PLUGIN)
obj-$(CONFIG_FORTIFY_KUNIT_TEST) += fortify_kunit.o
obj-$(CONFIG_STRSCPY_KUNIT_TEST) += strscpy_kunit.o
obj-$(CONFIG_SIPHASH_KUNIT_TEST) += siphash_kunit.o
+obj-$(CONFIG_USERCOPY_KUNIT_TEST) += usercopy_kunit.o
obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o
diff --git a/lib/usercopy_kunit.c b/lib/usercopy_kunit.c
new file mode 100644
@@ -0,0 +1,431 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test cases for usercopy faulting behaviour
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <kunit/test.h>
+
+#include <linux/highmem.h>
+#include <linux/kthread.h>
+#include <linux/mm.h>
+#include <linux/sched/mm.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+
+/*
+ * Arbitrarily chosen user address for the test page.
+ */
+#define UBUF_ADDR_BASE SZ_2M
+
+struct usercopy_env {
+ struct mm_struct *mm;
+ void *kbuf;
+ struct page *ubuf_page;
+ void *ubuf;
+};
+
+struct usercopy_params {
+ long offset;
+ unsigned long size;
+};
+
+static void *usercopy_env_alloc(void)
+{
+ struct usercopy_env *env = kzalloc(sizeof(*env), GFP_KERNEL);
+ if (!env)
+ return NULL;
+
+ env->kbuf = vmalloc(PAGE_SIZE);
+ if (!env->kbuf)
+ goto out_free_env;
+
+ return env;
+
+out_free_env:
+ kfree(env);
+ return NULL;
+}
+
+static void usercopy_env_free(struct usercopy_env *env)
+{
+ vfree(env->kbuf);
+ kfree(env);
+}
+
+static void *usercopy_mm_alloc(struct usercopy_env *env)
+{
+ struct mm_struct *mm;
+ struct vm_area_struct *vma;
+ mm = mm_alloc();
+ if (!mm)
+ return NULL;
+
+ if (mmap_write_lock_killable(mm))
+ goto out_free;
+
+ vma = vm_area_alloc(mm);
+ if (!vma)
+ goto out_unlock;
+
+ vma_set_anonymous(vma);
+ vma->vm_start = UBUF_ADDR_BASE;
+ vma->vm_end = UBUF_ADDR_BASE + PAGE_SIZE;
+ vm_flags_init(vma, VM_READ | VM_MAYREAD | VM_WRITE | VM_MAYWRITE);
+ vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
+
+ if (insert_vm_struct(mm, vma))
+ goto out_free_vma;
+
+ mmap_write_unlock(mm);
+ return mm;
+
+out_free_vma:
+ vm_area_free(vma);
+out_unlock:
+ mmap_write_unlock(mm);
+out_free:
+ mmput(mm);
+ return NULL;
+}
+
+static void usercopy_mm_free(struct mm_struct *mm)
+{
+ mmput(mm);
+}
+
+static struct page *usercopy_ubuf_pin(struct usercopy_env *env)
+{
+ struct page *p = NULL;
+
+ kthread_use_mm(env->mm);
+ pin_user_pages_unlocked(UBUF_ADDR_BASE, 1, &p, FOLL_LONGTERM);
+ kthread_unuse_mm(env->mm);
+
+ return p;
+}
+
+static void usercopy_ubuf_unpin(struct usercopy_env *env)
+{
+ unpin_user_page(env->ubuf_page);
+}
+
+static int usercopy_test_init(struct kunit *test)
+{
+ struct usercopy_env *env;
+
+ env = usercopy_env_alloc();
+ if (!env)
+ return -ENOMEM;
+
+ env->mm = usercopy_mm_alloc(env);
+ if (!env->mm)
+ goto out_free_env;
+
+ env->ubuf_page = usercopy_ubuf_pin(env);
+ if (!env->ubuf_page)
+ goto out_free_mm;
+
+ env->ubuf = kmap(env->ubuf_page);
+ if (!env->ubuf)
+ goto out_unpin_ubuf;
+
+ test->priv = env;
+
+ return 0;
+
+out_unpin_ubuf:
+ usercopy_ubuf_unpin(env);
+out_free_mm:
+ usercopy_mm_free(env->mm);
+out_free_env:
+ usercopy_env_free(env);
+ return -ENOMEM;
+}
+
+static void usercopy_test_exit(struct kunit *test)
+{
+ struct usercopy_env *env = test->priv;
+
+ kunmap(env->ubuf_page);
+
+ usercopy_ubuf_unpin(env);
+ usercopy_mm_free(env->mm);
+ usercopy_env_free(env);
+}
+
+static char buf_pattern(int offset)
+{
+ return offset & 0xff;
+}
+
+static void buf_init_pattern(char *buf)
+{
+ for (int i = 0; i < PAGE_SIZE; i++)
+ buf[i] = buf_pattern(i);
+}
+
+static void buf_init_zero(char *buf)
+{
+ memset(buf, 0, PAGE_SIZE);
+}
+
+static void assert_size_valid(struct kunit *test,
+ const struct usercopy_params *params,
+ unsigned long ret)
+{
+ const unsigned long size = params->size;
+ const unsigned long offset = params->offset;
+
+ if (ret > size) {
+ KUNIT_ASSERT_FAILURE(test,
+ "return value is impossibly large (offset=%ld, size=%lu, ret=%lu)",
+ offset, size, ret);
+ }
+
+ /*
+ * When the user buffer starts within a faulting page, no bytes can be
+ * copied, so ret must equal size.
+ */
+ if (offset < 0 || offset >= PAGE_SIZE) {
+ if (ret == size)
+ return;
+
+ KUNIT_ASSERT_FAILURE(test,
+ "impossible copy did not fail (offset=%ld, size=%lu, ret=%lu)",
+ offset, size, ret);
+ }
+
+ /*
+ * When the user buffer fits entirely within a non-faulting page, all
+ * bytes must be copied, so ret must equal 0.
+ */
+ if (offset + size <= PAGE_SIZE) {
+ if (ret == 0)
+ return;
+
+ KUNIT_ASSERT_FAILURE(test,
+ "completely possible copy failed (offset=%ld, size=%lu, ret=%lu)",
+ offset, size, ret);
+ }
+
+ /*
+ * The buffer starts in a non-faulting page, but continues into a
+ * faulting page. At least one byte must be copied, and at most all the
+ * non-faulting bytes may be copied.
+ */
+ if (ret == size) {
+ KUNIT_ASSERT_FAILURE(test,
+ "too few bytes consumed (offset=%ld, size=%lu, ret=%lu)",
+ offset, size, ret);
+ }
+
+ if (ret < (offset + size) - PAGE_SIZE) {
+ KUNIT_ASSERT_FAILURE(test,
+ "too many bytes consumed (offset=%ld, size=%lu, ret=%lu)",
+ offset, size, ret);
+ }
+}
+
+static void assert_src_valid(struct kunit *test,
+ const struct usercopy_params *params,
+ const char *src, long src_offset,
+ unsigned long ret)
+{
+ const unsigned long size = params->size;
+ const unsigned long offset = params->offset;
+
+ /*
+ * A usercopy MUST NOT modify the source buffer.
+ */
+ for (int i = 0; i < PAGE_SIZE; i++) {
+ char val = src[i];
+
+ if (val == buf_pattern(i))
+ continue;
+
+ KUNIT_ASSERT_FAILURE(test,
+ "source bytes modified (src[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
+ i, val, offset, size, ret);
+ }
+}
+
+static void assert_dst_valid(struct kunit *test,
+ const struct usercopy_params *params,
+ const char *dst, long dst_offset,
+ unsigned long ret)
+{
+ const unsigned long size = params->size;
+ const unsigned long offset = params->offset;
+
+ /*
+ * A usercopy MUST NOT modify any bytes before the destination buffer.
+ */
+ for (int i = 0; i < dst_offset; i++) {
+ char val = dst[i];
+
+ if (val == 0)
+ continue;
+
+ KUNIT_ASSERT_FAILURE(test,
+ "pre-destination bytes modified (dst_page[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
+ i, val, offset, size, ret);
+ }
+
+ /*
+ * A usercopy MUST NOT modify any bytes after the end of the destination
+ * buffer.
+ */
+ for (int i = dst_offset + size - ret; i < PAGE_SIZE; i++) {
+ char val = dst[i];
+
+ if (val == 0)
+ continue;
+
+ KUNIT_ASSERT_FAILURE(test,
+ "post-destination bytes modified (dst_page[%d]=0x%x, offset=%ld, size=%lu, ret=%lu)",
+ i, val, offset, size, ret);
+ }
+}
+
+static void assert_copy_valid(struct kunit *test,
+ const struct usercopy_params *params,
+ const char *dst, long dst_offset,
+ const char *src, long src_offset,
+ unsigned long ret)
+{
+ const unsigned long size = params->size;
+ const unsigned long offset = params->offset;
+
+ /*
+ * Have we actually copied the bytes we expected to?
+ */
+ for (int i = 0; i < params->size - ret; i++) {
+ char dst_val = dst[dst_offset + i];
+ char src_val = src[src_offset + i];
+
+ if (dst_val == src_val)
+ continue;
+
+ KUNIT_ASSERT_FAILURE(test,
+ "copied bytes incorrect (dst_page[%ld+%d]=0x%x, src_page[%ld+%d]=0x%x, offset=%ld, size=%lu, ret=%lu",
+ dst_offset, i, dst_val,
+ src_offset, i, src_val,
+ offset, size, ret);
+ }
+}
+
+static unsigned long do_copy_to_user(const struct usercopy_env *env,
+ const struct usercopy_params *params)
+{
+ void __user *uptr = (void __user *)UBUF_ADDR_BASE + params->offset;
+ void *kptr = env->kbuf;
+ unsigned long ret;
+
+ kthread_use_mm(env->mm);
+ ret = raw_copy_to_user(uptr, kptr, params->size);
+ kthread_unuse_mm(env->mm);
+
+ return ret;
+}
+
+static unsigned long do_copy_from_user(const struct usercopy_env *env,
+ const struct usercopy_params *params)
+{
+ void __user *uptr = (void __user *)UBUF_ADDR_BASE + params->offset;
+ void *kptr = env->kbuf;
+ unsigned long ret;
+
+ kthread_use_mm(env->mm);
+ ret = raw_copy_from_user(kptr, uptr, params->size);
+ kthread_unuse_mm(env->mm);
+
+ return ret;
+}
+
+/*
+ * Generate the size and offset combinations to test.
+ *
+ * Usercopies may have different paths for small/medium/large copies, but
+ * typically max out at 64 byte chunks. We test sizes 0 to 128 bytes to check
+ * all combinations of leading/trailing chunks and bulk copies.
+ *
+ * For each size, we test every offset relative to a leading and trailing page
+ * boundary (i.e. [size, 0] and [PAGE_SIZE - size, PAGE_SIZE]) to check every
+ * possible faulting boundary.
+ */
+#define for_each_size_offset(size, offset) \
+ for (unsigned long size = 0; size <= 128; size++) \
+ for (long offset = -size; \
+ offset <= (long)PAGE_SIZE; \
+ offset = (offset ? offset + 1 : (PAGE_SIZE - size)))
+
+static void test_copy_to_user(struct kunit *test)
+{
+ const struct usercopy_env *env = test->priv;
+
+ for_each_size_offset(size, offset) {
+ const struct usercopy_params params = {
+ .size = size,
+ .offset = offset,
+ };
+ unsigned long ret;
+
+ buf_init_zero(env->ubuf);
+ buf_init_pattern(env->kbuf);
+
+ ret = do_copy_to_user(env, &params);
+
+ assert_size_valid(test, &params, ret);
+ assert_src_valid(test, &params, env->kbuf, 0, ret);
+ assert_dst_valid(test, &params, env->ubuf, params.offset, ret);
+ assert_copy_valid(test, &params,
+ env->ubuf, params.offset,
+ env->kbuf, 0,
+ ret);
+ }
+}
+
+static void test_copy_from_user(struct kunit *test)
+{
+ const struct usercopy_env *env = test->priv;
+
+ for_each_size_offset(size, offset) {
+ const struct usercopy_params params = {
+ .size = size,
+ .offset = offset,
+ };
+ unsigned long ret;
+
+ buf_init_zero(env->kbuf);
+ buf_init_pattern(env->ubuf);
+
+ ret = do_copy_from_user(env, &params);
+
+ assert_size_valid(test, &params, ret);
+ assert_src_valid(test, &params, env->ubuf, params.offset, ret);
+ assert_dst_valid(test, &params, env->kbuf, 0, ret);
+ assert_copy_valid(test, &params,
+ env->kbuf, 0,
+ env->ubuf, params.offset,
+ ret);
+ }
+}
+
+static struct kunit_case usercopy_cases[] = {
+ KUNIT_CASE(test_copy_to_user),
+ KUNIT_CASE(test_copy_from_user),
+ { /* sentinel */ }
+};
+
+static struct kunit_suite usercopy_suite = {
+ .name = "usercopy",
+ .init = usercopy_test_init,
+ .exit = usercopy_test_exit,
+ .test_cases = usercopy_cases,
+};
+kunit_test_suites(&usercopy_suite);
+
+MODULE_AUTHOR("Mark Rutland <mark.rutland@arm.com>");
+MODULE_LICENSE("GPL v2");