Message ID | 20240131125907.1006760-1-liuyongqiang13@huawei.com |
---|---|
State | New |
Headers |
From: Yongqiang Liu <liuyongqiang13@huawei.com>
To: <linux-arm-kernel@lists.infradead.org>
Cc: <yanaijie@huawei.com>, <zhangxiaoxu5@huawei.com>, <wangkefeng.wang@huawei.com>, <sunnanyong@huawei.com>, <linux@armlinux.org.uk>, <rppt@linux.ibm.com>, <linux-kernel@vger.kernel.org>, <keescook@chromium.org>, <arnd@arndb.de>, <m.szyprowski@samsung.com>, <willy@infradead.org>, <liuyongqiang13@huawei.com>
Subject: [PATCH] arm: flush: don't abuse pfn_valid() to check if pfn is in RAM
Date: Wed, 31 Jan 2024 20:59:07 +0800
Message-ID: <20240131125907.1006760-1-liuyongqiang13@huawei.com> |
Series |
arm: flush: don't abuse pfn_valid() to check if pfn is in RAM
|
|
Commit Message
Yongqiang Liu
Jan. 31, 2024, 12:59 p.m. UTC
Commit a4d5613c4dc6 ("arm: extend pfn_valid to take into account
freed memory map alignment") changed the semantics of pfn_valid() to check
for the presence of the memory map for a PFN, so __sync_icache_dcache()
should use memblock_is_map_memory() instead of pfn_valid() to check whether
a PFN is in RAM. In some UIO cases we get a crash on a system with the
following memory layout:

node 0: [mem 0x00000000c0a00000-0x00000000cc8fffff]
node 0: [mem 0x00000000d0000000-0x00000000da1fffff]

the UIO layout is: 0xc0900000, 0x100000

the crash backtrace looks like:
Unable to handle kernel paging request at virtual address bff00000
[...]
CPU: 1 PID: 465 Comm: startapp.bin Tainted: G O 5.10.0 #1
Hardware name: Generic DT based system
PC is at b15_flush_kern_dcache_area+0x24/0x3c
LR is at __sync_icache_dcache+0x6c/0x98
[...]
(b15_flush_kern_dcache_area) from (__sync_icache_dcache+0x6c/0x98)
(__sync_icache_dcache) from (set_pte_at+0x28/0x54)
(set_pte_at) from (remap_pfn_range+0x1a0/0x274)
(remap_pfn_range) from (uio_mmap+0x184/0x1b8 [uio])
(uio_mmap [uio]) from (__mmap_region+0x264/0x5f4)
(__mmap_region) from (__do_mmap_mm+0x3ec/0x440)
(__do_mmap_mm) from (do_mmap+0x50/0x58)
(do_mmap) from (vm_mmap_pgoff+0xfc/0x188)
(vm_mmap_pgoff) from (ksys_mmap_pgoff+0xac/0xc4)
(ksys_mmap_pgoff) from (ret_fast_syscall+0x0/0x5c)
Code: e0801001 e2423001 e1c00003 f57ff04f (ee070f3e)
---[ end trace 09cf0734c3805d52 ]---
Kernel panic - not syncing: Fatal exception
Fixes: a4d5613c4dc6 ("arm: extend pfn_valid to take into account freed memory map alignment")
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
---
arch/arm/mm/flush.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Comments
On 1/31/2024 4:59 AM, Yongqiang Liu wrote:
> Since commit a4d5613c4dc6 ("arm: extend pfn_valid to take into account
> freed memory map alignment") changes the semantics of pfn_valid() to check
> presence of the memory map for a PFN. __sync_icache_dcache() should use
> memblock_is_map_memory() instead of pfn_valid() to check if a PFN is in
> RAM or not.In Some uio case we will get a crash on a system with the
> following memory layout:
>
> node 0: [mem 0x00000000c0a00000-0x00000000cc8fffff]
> node 0: [mem 0x00000000d0000000-0x00000000da1fffff]
> the uio layout is:0xc0900000, 0x100000
>
> the crash backtrace like:
>
> Unable to handle kernel paging request at virtual address bff00000
> [...]
> CPU: 1 PID: 465 Comm: startapp.bin Tainted: G O 5.10.0 #1
> Hardware name: Generic DT based system
> PC is at b15_flush_kern_dcache_area+0x24/0x3c

Humm, what Broadcom platform using a Brahma-B15 CPU are you using out of
curiosity?

> LR is at __sync_icache_dcache+0x6c/0x98
> [...]
> (b15_flush_kern_dcache_area) from (__sync_icache_dcache+0x6c/0x98)
> (__sync_icache_dcache) from (set_pte_at+0x28/0x54)
> (set_pte_at) from (remap_pfn_range+0x1a0/0x274)
> (remap_pfn_range) from (uio_mmap+0x184/0x1b8 [uio])
> (uio_mmap [uio]) from (__mmap_region+0x264/0x5f4)
> (__mmap_region) from (__do_mmap_mm+0x3ec/0x440)
> (__do_mmap_mm) from (do_mmap+0x50/0x58)
> (do_mmap) from (vm_mmap_pgoff+0xfc/0x188)
> (vm_mmap_pgoff) from (ksys_mmap_pgoff+0xac/0xc4)
> (ksys_mmap_pgoff) from (ret_fast_syscall+0x0/0x5c)
> Code: e0801001 e2423001 e1c00003 f57ff04f (ee070f3e)
> ---[ end trace 09cf0734c3805d52 ]---
> Kernel panic - not syncing: Fatal exception
>
> Fixes: a4d5613c4dc6 ("arm: extend pfn_valid to take into account freed memory map alignment")
> Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
> ---
>  arch/arm/mm/flush.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
> index d19d140a10c7..11ec6c5ff5fc 100644
> --- a/arch/arm/mm/flush.c
> +++ b/arch/arm/mm/flush.c
> @@ -15,6 +15,7 @@
>  #include <asm/smp_plat.h>
>  #include <asm/tlbflush.h>
>  #include <linux/hugetlb.h>
> +#include <linux/memblock.h>
>
>  #include "mm.h"
>
> @@ -292,7 +293,7 @@ void __sync_icache_dcache(pte_t pteval)
>  		/* only flush non-aliasing VIPT caches for exec mappings */
>  		return;
>  	pfn = pte_pfn(pteval);
> -	if (!pfn_valid(pfn))
> +	if (!memblock_is_map_memory(PFN_PHYS(pfn)))
>  		return;
>
>  	folio = page_folio(pfn_to_page(pfn));
Hi,

Please don't top-post to Linux mailing lists.

On Thu, Feb 01, 2024 at 04:00:04PM +0800, Yongqiang Liu wrote:
> Very appreciate it for extra explanation. Notice that commit 024591f9a6e0
> ("arm: ioremap: don't abuse pfn_valid() to check if pfn is in RAM") use
> memblock_is_map_memory() instead of pfn_valid() to check if a PFN is in
> RAM or not, so I wrote the patch to solve this case. Otherwise, when we
> use pageblock align(4M) address of memory or uio, like:
>
> node 0: [mem 0x00000000c0c00000-0x00000000cc8fffff]
> node 0: [mem 0x00000000d0000000-0x00000000da1fffff]
>
> or uio address set like:
>
> 0xc0400000, 0x100000
>
> the pfn_valid will return false as memblock_is_map_memory.

pfn_valid() should return false if and only if there is no struct page for
that pfn. My understanding is that struct pages exist for the range of UIO
addresses, and hopefully they have PG_reserved bit set, so a better fix IMO
would be to check if the folio is !reserved.

> On 2024/2/1 5:20, Robin Murphy wrote:
> > On 2024-01-31 7:00 pm, Russell King (Oracle) wrote:
> > > On Wed, Jan 31, 2024 at 06:39:31PM +0000, Robin Murphy wrote:
> > > > On 31/01/2024 12:59 pm, Yongqiang Liu wrote:
> > > > > @@ -292,7 +293,7 @@ void __sync_icache_dcache(pte_t pteval)
> > > > >  		/* only flush non-aliasing VIPT caches for exec mappings */
> > > > >  		return;
> > > > >  	pfn = pte_pfn(pteval);
> > > > > -	if (!pfn_valid(pfn))
> > > > > +	if (!memblock_is_map_memory(PFN_PHYS(pfn)))
> > > > >  		return;
> > > > >  	folio = page_folio(pfn_to_page(pfn));
> > > >
> > > > Hmm, it's a bit odd in context, since pfn_valid() obviously pairs
> > > > with this pfn_to_page(), whereas it's not necessarily clear that
> > > > memblock_is_map_memory() implies pfn_valid().
> > > >
> > > > However, in this case we're starting from a PTE - rather than going
> > > > off to do a slow scan of memblock to determine whether a round-trip
> > > > through page_address() is going to give back a mapped VA, can we not
> > > > trivially identify that from whether the PTE itself is valid?
> > >
> > > Depends what you mean by "valid". If you're referring to pte_valid()
> > > and L_PTE_VALID then no.
> > >
> > > On 32-bit non-LPAE, the valid bit is the same as the present bit, and
> > > needs to be set for the PTE to not fault. Any PTE that is mapping
> > > something will be "valid" whether it is memory or not, whether it is
> > > backed by a page or not.
> > >
> > > pfn_valid() should be telling us whether the PFN is suitable to be
> > > passed to pfn_to_page(), and if we have a situation where pfn_valid()
> > > returns true, but pfn_to_page() returns an invalid page, then that in
> > > itself is a bug that needs to be fixed and probably has far reaching
> > > implications for the stability of the kernel.
> >
> > Right, the problem here seems to be the opposite one, wherein we *do*
> > often have a valid struct page for an address which is reserved and thus
> > not mapped by the kernel, but seemingly we then take it down a path
> > which assumes anything !PageHighmem() is lowmem and dereferences
> > page_address() without looking.
> >
> > However I realise I should have looked closer at the caller, and my idea
> > is futile since the PTE here is for a userspace mapping, not a kernel
> > VA, and is already pte_valid_user() && !pte_special(). Plus the fact
> > that the stack trace indicates an mmap() path suggests it most likely is
> > a legitimate mapping of some no-map carveout or MMIO region. Oh well. My
> > first point still stands, though - I think at least a comment to clarify
> > that assumption would be warranted.
> >
> > Thanks,
> > Robin.
> .
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index d19d140a10c7..11ec6c5ff5fc 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -15,6 +15,7 @@
 #include <asm/smp_plat.h>
 #include <asm/tlbflush.h>
 #include <linux/hugetlb.h>
+#include <linux/memblock.h>
 
 #include "mm.h"
 
@@ -292,7 +293,7 @@ void __sync_icache_dcache(pte_t pteval)
 		/* only flush non-aliasing VIPT caches for exec mappings */
 		return;
 	pfn = pte_pfn(pteval);
-	if (!pfn_valid(pfn))
+	if (!memblock_is_map_memory(PFN_PHYS(pfn)))
 		return;
 
 	folio = page_folio(pfn_to_page(pfn));