Message ID: <20230731215021.70911-1-lstoakes@gmail.com>
State: New
Headers:
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>
Cc: Baoquan He <bhe@redhat.com>, Uladzislau Rezki <urezki@gmail.com>, linux-fsdevel@vger.kernel.org, Jiri Olsa <olsajiri@gmail.com>, Will Deacon <will@kernel.org>, Mike Galbraith <efault@gmx.de>, Mark Rutland <mark.rutland@arm.com>, wangkefeng.wang@huawei.com, catalin.marinas@arm.com, ardb@kernel.org, David Hildenbrand <david@redhat.com>, Linux regression tracking <regressions@leemhuis.info>, regressions@lists.linux.dev, Matthew Wilcox <willy@infradead.org>, Liu Shixin <liushixin2@huawei.com>, Jens Axboe <axboe@kernel.dk>, Alexander Viro <viro@zeniv.linux.org.uk>, Lorenzo Stoakes <lstoakes@gmail.com>, stable@vger.kernel.org
Subject: [PATCH] fs/proc/kcore: reinstate bounce buffer for KCORE_TEXT regions
Date: Mon, 31 Jul 2023 22:50:21 +0100
Message-ID: <20230731215021.70911-1-lstoakes@gmail.com>
X-Mailer: git-send-email 2.41.0
Series: fs/proc/kcore: reinstate bounce buffer for KCORE_TEXT regions
Commit Message
Lorenzo Stoakes
July 31, 2023, 9:50 p.m. UTC
Some architectures do not populate the entire range categorised by
KCORE_TEXT, so we must ensure that the kernel address we read from is
valid.

Unfortunately, there is currently no way to do this with a purely
iterator-based approach, so reinstate the bounce buffer in this instance
so that we can use copy_from_kernel_nofault() and thereby avoid page
faults when regions are unmapped.
This change partly reverts commit 2e1c0170771e ("fs/proc/kcore: avoid
bounce buffer for ktext data"), reinstating the bounce buffer, but adapts
the code to continue to use an iterator.
Fixes: 2e1c0170771e ("fs/proc/kcore: avoid bounce buffer for ktext data")
Reported-by: Jiri Olsa <olsajiri@gmail.com>
Closes: https://lore.kernel.org/all/ZHc2fm+9daF6cgCE@krava
Cc: stable@vger.kernel.org
Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
fs/proc/kcore.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
Comments
On Mon, Jul 31, 2023 at 10:50:21PM +0100, Lorenzo Stoakes wrote:
> Some architectures do not populate the entire range categorised by
> KCORE_TEXT, so we must ensure that the kernel address we read from is
> valid.
> [...]
> Fixes: 2e1c0170771e ("fs/proc/kcore: avoid bounce buffer for ktext data")
> Reported-by: Jiri Olsa <olsajiri@gmail.com>

it fixed my issue, thanks

Tested-by: Jiri Olsa <jolsa@kernel.org>

jirka

> Closes: https://lore.kernel.org/all/ZHc2fm+9daF6cgCE@krava
> Cc: stable@vger.kernel.org
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
> ---
>  fs/proc/kcore.c | 26 +++++++++++++++++++++++++-
>  1 file changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
> index 9cb32e1a78a0..3bc689038232 100644
> --- a/fs/proc/kcore.c
> +++ b/fs/proc/kcore.c
> @@ -309,6 +309,8 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
>
>  static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
>  {
> +	struct file *file = iocb->ki_filp;
> +	char *buf = file->private_data;
>  	loff_t *fpos = &iocb->ki_pos;
>  	size_t phdrs_offset, notes_offset, data_offset;
>  	size_t page_offline_frozen = 1;
> @@ -554,11 +556,22 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
>  		fallthrough;
>  	case KCORE_VMEMMAP:
>  	case KCORE_TEXT:
> +		/*
> +		 * Sadly we must use a bounce buffer here to be able to
> +		 * make use of copy_from_kernel_nofault(), as these
> +		 * memory regions might not always be mapped on all
> +		 * architectures.
> +		 */
> +		if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
> +			if (iov_iter_zero(tsz, iter) != tsz) {
> +				ret = -EFAULT;
> +				goto out;
> +			}
>  		/*
>  		 * We use _copy_to_iter() to bypass usermode hardening
>  		 * which would otherwise prevent this operation.
>  		 */
> -		if (_copy_to_iter((char *)start, tsz, iter) != tsz) {
> +		} else if (_copy_to_iter(buf, tsz, iter) != tsz) {
>  			ret = -EFAULT;
>  			goto out;
>  		}
> @@ -595,6 +608,10 @@ static int open_kcore(struct inode *inode, struct file *filp)
>  	if (ret)
>  		return ret;
>
> +	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
> +	if (!filp->private_data)
> +		return -ENOMEM;
> +
>  	if (kcore_need_update)
>  		kcore_update_ram();
>  	if (i_size_read(inode) != proc_root_kcore->size) {
> @@ -605,9 +622,16 @@ static int open_kcore(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>
> +static int release_kcore(struct inode *inode, struct file *file)
> +{
> +	kfree(file->private_data);
> +	return 0;
> +}
> +
>  static const struct proc_ops kcore_proc_ops = {
>  	.proc_read_iter	= read_kcore_iter,
>  	.proc_open	= open_kcore,
> +	.proc_release	= release_kcore,
>  	.proc_lseek	= default_llseek,
>  };
>
> --
> 2.41.0
On Mon, Jul 31, 2023 at 10:50:21PM +0100, Lorenzo Stoakes wrote:
> Some architectures do not populate the entire range categorised by
> KCORE_TEXT, so we must ensure that the kernel address we read from is
> valid.
> [...]

Tested-by: Will Deacon <will@kernel.org>

I can confirm this fixes the arm64 issue reported by Mike over at [1].

Cheers,

Will

[1] https://lore.kernel.org/r/b39c62d29a431b023e98959578ba87e96af0e030.camel@gmx.de
On 31.07.23 23:50, Lorenzo Stoakes wrote:
> [...]
> +		if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
> +			if (iov_iter_zero(tsz, iter) != tsz) {
> +				ret = -EFAULT;
> +				goto out;
> +			}
>  		/*
>  		 * We use _copy_to_iter() to bypass usermode hardening
>  		 * which would otherwise prevent this operation.
>  		 */

Having a comment at this indentation level for the else case looks
kind of weird.

(does that comment still apply?)
On 07/31/23 at 10:50pm, Lorenzo Stoakes wrote:
> Some architectures do not populate the entire range categorised by
> KCORE_TEXT, so we must ensure that the kernel address we read from is
> valid.
> [...]

On 6.5-rc4, the failures can be reproduced stably on an arm64 machine.
With the patch applied, both the makedumpfile and objdump test cases passed.

And the code change looks good to me, thanks.

Tested-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Baoquan He <bhe@redhat.com>

===============================================
[root@ ~]# makedumpfile --mem-usage /proc/kcore
The kernel version is not supported.
The makedumpfile operation may be incomplete.

TYPE            PAGES      EXCLUDABLE  DESCRIPTION
----------------------------------------------------------------------
ZERO            76234      yes         Pages filled with zero
NON_PRI_CACHE   147613     yes         Cache pages without private flag
PRI_CACHE       3847       yes         Cache pages with private flag
USER            15276      yes         User process pages
FREE            15809884   yes         Free pages
KERN_DATA       459950     no          Dumpable kernel data

page size:              4096
Total pages on system:  16512804
Total size on system:   67636445184 Byte

[root@ ~]# objdump -d --start-address=0x^C
[root@ ~]# cat /proc/kallsyms | grep ksys_read
ffffab3be77229d8 T ksys_readahead
ffffab3be782a700 T ksys_read
[root@ ~]# objdump -d --start-address=0xffffab3be782a700 --stop-address=0xffffab3be782a710 /proc/kcore

/proc/kcore:     file format elf64-littleaarch64


Disassembly of section load1:

ffffab3be782a700 <load1+0x41a700>:
ffffab3be782a700:	aa1e03e9 	mov	x9, x30
ffffab3be782a704:	d503201f 	nop
ffffab3be782a708:	d503233f 	paciasp
ffffab3be782a70c:	a9bc7bfd 	stp	x29, x30, [sp, #-64]!
objdump: error: /proc/kcore(load2) is too large (0x7bff70000000 bytes)
objdump: Reading section load2 failed because: memory exhausted
On 08/01/23 at 11:57pm, Baoquan He wrote:
> On 07/31/23 at 10:50pm, Lorenzo Stoakes wrote:
> > [...]
>
> On 6.5-rc4, the failures can be reproduced stably on an arm64 machine.
> With the patch applied, both the makedumpfile and objdump test cases
> passed.
> [...]
> objdump: error: /proc/kcore(load2) is too large (0x7bff70000000 bytes)
> objdump: Reading section load2 failed because: memory exhausted

By the way, I can still see the objdump error saying kcore is too large
as above, and at the same time there's console printing as below. I
haven't checked whether it's objdump's issue or the kernel's.

[ 6631.575800] __vm_enough_memory: pid: 5321, comm: objdump, not enough memory for the allocation
[ 6631.584469] __vm_enough_memory: pid: 5321, comm: objdump, not enough memory for the allocation
On Wed, Aug 02, 2023 at 12:01:16AM +0800, Baoquan He wrote:
> On 08/01/23 at 11:57pm, Baoquan He wrote:
> > On 07/31/23 at 10:50pm, Lorenzo Stoakes wrote:
> > > [...]
> >
> > On 6.5-rc4, the failures can be reproduced stably on an arm64 machine.
> > With the patch applied, both the makedumpfile and objdump test cases
> > passed.
> >
> > And the code change looks good to me, thanks.
> >
> > Tested-by: Baoquan He <bhe@redhat.com>
> > Reviewed-by: Baoquan He <bhe@redhat.com>

Thanks!

> > [...]
> > objdump: error: /proc/kcore(load2) is too large (0x7bff70000000 bytes)
> > objdump: Reading section load2 failed because: memory exhausted
>
> By the way, I can still see the objdump error saying kcore is too large
> as above, at the same time there's console printing as below. Haven't
> checked it's objdump's issue or kernel's.
>
> [ 6631.575800] __vm_enough_memory: pid: 5321, comm: objdump, not enough memory for the allocation
> [ 6631.584469] __vm_enough_memory: pid: 5321, comm: objdump, not enough memory for the allocation

Yeah, this issue existed before this patch was applied on arm64; it is
apparently an ancient objdump bug according to the other thread [0]. I
confirmed it exists on the v6.0 kernel, for instance.

[0]: https://lore.kernel.org/all/7b94619ad89c9e308c7aedef2cacfa10b8666e69.camel@gmx.de/
On Tue, Aug 01, 2023 at 11:05:40AM +0200, David Hildenbrand wrote:
> On 31.07.23 23:50, Lorenzo Stoakes wrote:
> > Some architectures do not populate the entire range categorised by
> > KCORE_TEXT, so we must ensure that the kernel address we read from is
> > valid.
> >
> > Unfortunately there is no solution currently available to do so with a
> > purely iterator solution so reinstate the bounce buffer in this instance so
> > we can use copy_from_kernel_nofault() in order to avoid page faults when
> > regions are unmapped.
> >
> > This change partly reverts commit 2e1c0170771e ("fs/proc/kcore: avoid
> > bounce buffer for ktext data"), reinstating the bounce buffer, but adapts
> > the code to continue to use an iterator.
> >
> > Fixes: 2e1c0170771e ("fs/proc/kcore: avoid bounce buffer for ktext data")
> > Reported-by: Jiri Olsa <olsajiri@gmail.com>
> > Closes: https://lore.kernel.org/all/ZHc2fm+9daF6cgCE@krava
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
> > ---
> >   fs/proc/kcore.c | 26 +++++++++++++++++++++++++-
> >   1 file changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
> > index 9cb32e1a78a0..3bc689038232 100644
> > --- a/fs/proc/kcore.c
> > +++ b/fs/proc/kcore.c
> > @@ -309,6 +309,8 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
> >   static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
> >   {
> > +	struct file *file = iocb->ki_filp;
> > +	char *buf = file->private_data;
> >   	loff_t *fpos = &iocb->ki_pos;
> >   	size_t phdrs_offset, notes_offset, data_offset;
> >   	size_t page_offline_frozen = 1;
> > @@ -554,11 +556,22 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
> >   		fallthrough;
> >   	case KCORE_VMEMMAP:
> >   	case KCORE_TEXT:
> > +		/*
> > +		 * Sadly we must use a bounce buffer here to be able to
> > +		 * make use of copy_from_kernel_nofault(), as these
> > +		 * memory regions might not always be mapped on all
> > +		 * architectures.
> > +		 */
> > +		if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
> > +			if (iov_iter_zero(tsz, iter) != tsz) {
> > +				ret = -EFAULT;
> > +				goto out;
> > +			}
> >   		/*
> >   		 * We use _copy_to_iter() to bypass usermode hardening
> >   		 * which would otherwise prevent this operation.
> >   		 */
>
> Having a comment at this indentation level for the else case looks
> kind of weird.

Yeah, but having it indented again would be weird and seem like it doesn't
apply to the block below, there's really no good spot for it and
checkpatch.pl doesn't mind so I think this is ok :)

>
> (does that comment still apply?)

Hm good point, actually, now we're using the bounce buffer we don't need to
avoid usermode hardening any more.

However since we've established a bounce buffer ourselves it's still
appropriate to use _copy_to_iter() as we know the source region is good to
copy from.

To make life easy I'll just respin with an updated comment :)

> --
> Cheers,
>
> David / dhildenb
>
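The fallback being discussed, reading through a bounce buffer with copy_from_kernel_nofault() and zero-filling the reader's buffer when the source faults, can be sketched as a userspace analogue. This is only an illustration of the control flow in the patch, not kernel code: `try_read_region()` is a hypothetical stand-in for copy_from_kernel_nofault() (returns non-zero on "fault"), `memset(0)` stands in for iov_iter_zero(), and the final `memcpy()` stands in for _copy_to_iter():

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical stand-in for copy_from_kernel_nofault(): returns 0 on
 * success, non-zero if the source "region" is unreadable (modelled
 * here as a NULL pointer standing in for an unmapped ktext range).
 */
static int try_read_region(char *dst, const char *src, size_t len)
{
	if (!src)
		return -1;
	memcpy(dst, src, len);
	return 0;
}

/*
 * Mirror of the patch's KCORE_TEXT path: copy the region into a local
 * bounce buffer first; if that "faults", hand the reader zeroes instead
 * so the read still makes progress rather than oopsing.
 */
static void read_via_bounce(char *out, const char *region, size_t len)
{
	char buf[64];	/* the bounce buffer (PAGE_SIZE in the patch) */

	if (try_read_region(buf, region, len))
		memset(out, 0, len);		/* iov_iter_zero() analogue */
	else
		memcpy(out, buf, len);		/* _copy_to_iter() analogue */
}
```

Calling `read_via_bounce(out, mapped, n)` copies the bytes through; calling it with an unreadable region (`NULL` here) yields `n` zero bytes, which is exactly the user-visible behaviour the patch restores for unmapped KCORE_TEXT ranges.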
On 01.08.23 18:33, Lorenzo Stoakes wrote:
> On Tue, Aug 01, 2023 at 11:05:40AM +0200, David Hildenbrand wrote:
>> On 31.07.23 23:50, Lorenzo Stoakes wrote:
>>> Some architectures do not populate the entire range categorised by
>>> KCORE_TEXT, so we must ensure that the kernel address we read from is
>>> valid.

[snip quoted patch]

>>
>> Having a comment at this indentation level for the else case looks
>> kind of weird.
>
> Yeah, but having it indented again would be weird and seem like it doesn't
> apply to the block below, there's really no good spot for it and
> checkpatch.pl doesn't mind so I think this is ok :)
>
>>
>> (does that comment still apply?)
>
> Hm good point, actually, now we're using the bounce buffer we don't need to
> avoid usermode hardening any more.
>
> However since we've established a bounce buffer ourselves it's still
> appropriate to use _copy_to_iter() as we know the source region is good to
> copy from.
>
> To make life easy I'll just respin with an updated comment :)

I'm not too picky this time, no need to resend if everybody else is fine :P
On Tue, Aug 01, 2023 at 06:34:26PM +0200, David Hildenbrand wrote:
> On 01.08.23 18:33, Lorenzo Stoakes wrote:
> > On Tue, Aug 01, 2023 at 11:05:40AM +0200, David Hildenbrand wrote:
> > > On 31.07.23 23:50, Lorenzo Stoakes wrote:

[snip quoted patch and review discussion]

> > Hm good point, actually, now we're using the bounce buffer we don't need to
> > avoid usermode hardening any more.
> >
> > However since we've established a bounce buffer ourselves it's still
> > appropriate to use _copy_to_iter() as we know the source region is good to
> > copy from.
> >
> > To make life easy I'll just respin with an updated comment :)
>
> I'm not too picky this time, no need to resend if everybody else is fine :P
>

Haha you know the classic Lorenzo respin spiral and want to avoid it I see ;)

The comment is actually inaccurate now, so to avoid noise + make life easy
(maybe) for Andrew here's a fix patch that just corrects the comment:-

----8<----

From d2b8fb271f21b79048e5630699133f77a93d0481 Mon Sep 17 00:00:00 2001
From: Lorenzo Stoakes <lstoakes@gmail.com>
Date: Tue, 1 Aug 2023 17:36:08 +0100
Subject: [PATCH] fs/proc/kcore: correct comment

Correct comment to be strictly correct about reasoning.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 fs/proc/kcore.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 3bc689038232..23fc24d16b31 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -568,8 +568,8 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 				goto out;
 			}
 		/*
-		 * We use _copy_to_iter() to bypass usermode hardening
-		 * which would otherwise prevent this operation.
+		 * We know the bounce buffer is safe to copy from, so
+		 * use _copy_to_iter() directly.
 		 */
 		} else if (_copy_to_iter(buf, tsz, iter) != tsz) {
 			ret = -EFAULT;
--
2.41.0
>>> Hm good point, actually, now we're using the bounce buffer we don't need to
>>> avoid usermode hardening any more.
>>>
>>> However since we've established a bounce buffer ourselves it's still
>>> appropriate to use _copy_to_iter() as we know the source region is good to
>>> copy from.
>>>
>>> To make life easy I'll just respin with an updated comment :)
>>
>> I'm not too picky this time, no need to resend if everybody else is fine :P
>>
>
> Haha you know the classic Lorenzo respin spiral and want to avoid it I see ;)

Don't want to make your apparently stressful week more stressful. Not this
time ;)

> The comment is actually inaccurate now, so to avoid noise + make life easy
> (maybe) for Andrew here's a fix patch that just corrects the comment:-
>
> ----8<----
>
> From d2b8fb271f21b79048e5630699133f77a93d0481 Mon Sep 17 00:00:00 2001
> From: Lorenzo Stoakes <lstoakes@gmail.com>
> Date: Tue, 1 Aug 2023 17:36:08 +0100
> Subject: [PATCH] fs/proc/kcore: correct comment

[snip quoted fix patch]

Thanks!

Acked-by: David Hildenbrand <david@redhat.com>
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 9cb32e1a78a0..3bc689038232 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -309,6 +309,8 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 
 static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
+	struct file *file = iocb->ki_filp;
+	char *buf = file->private_data;
 	loff_t *fpos = &iocb->ki_pos;
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
@@ -554,11 +556,22 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 		fallthrough;
 	case KCORE_VMEMMAP:
 	case KCORE_TEXT:
+		/*
+		 * Sadly we must use a bounce buffer here to be able to
+		 * make use of copy_from_kernel_nofault(), as these
+		 * memory regions might not always be mapped on all
+		 * architectures.
+		 */
+		if (copy_from_kernel_nofault(buf, (void *)start, tsz)) {
+			if (iov_iter_zero(tsz, iter) != tsz) {
+				ret = -EFAULT;
+				goto out;
+			}
 		/*
 		 * We use _copy_to_iter() to bypass usermode hardening
 		 * which would otherwise prevent this operation.
 		 */
-		if (_copy_to_iter((char *)start, tsz, iter) != tsz) {
+		} else if (_copy_to_iter(buf, tsz, iter) != tsz) {
 			ret = -EFAULT;
 			goto out;
 		}
@@ -595,6 +608,10 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
+	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	if (!filp->private_data)
+		return -ENOMEM;
+
 	if (kcore_need_update)
 		kcore_update_ram();
 	if (i_size_read(inode) != proc_root_kcore->size) {
@@ -605,9 +622,16 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+static int release_kcore(struct inode *inode, struct file *file)
+{
+	kfree(file->private_data);
+	return 0;
+}
+
 static const struct proc_ops kcore_proc_ops = {
 	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
+	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
 };
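Besides the copy path, the patch allocates the bounce buffer per open file (stored in `private_data` by `open_kcore()`) and frees it in the new `release_kcore()`. That pairing can be illustrated with a small userspace analogue; `struct pseudo_file`, `pseudo_open()`, and `pseudo_release()` are hypothetical names invented for this sketch, with `malloc()`/`free()` standing in for `kmalloc()`/`kfree()`:

```c
#include <stdlib.h>

#define BUF_SIZE 4096	/* stands in for PAGE_SIZE */

/* Minimal analogue of struct file's private_data slot. */
struct pseudo_file {
	void *private_data;
};

/* Like the patch's open_kcore(): allocate the per-open bounce buffer. */
static int pseudo_open(struct pseudo_file *f)
{
	f->private_data = malloc(BUF_SIZE);
	if (!f->private_data)
		return -1;	/* -ENOMEM in the kernel */
	return 0;
}

/* Like release_kcore(): free the buffer when the file is closed. */
static void pseudo_release(struct pseudo_file *f)
{
	free(f->private_data);
	f->private_data = NULL;
}
```

Tying the buffer's lifetime to the open file description is what lets concurrent readers of /proc/kcore each have their own bounce buffer without locking, and guarantees the allocation is released exactly once no matter how many reads the descriptor serviced.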