[v6,06/19] mm/ioremap: add slab availability checking in ioremap_prot

Message ID 20230609075528.9390-7-bhe@redhat.com
State New
Series mm: ioremap: Convert architectures to take GENERIC_IOREMAP way

Commit Message

Baoquan He June 9, 2023, 7:55 a.m. UTC
  Several architectures have their own check of whether slab is available
in ioremap_prot(). In fact, the check should be done in the generic
ioremap_prot(), since on any architecture the slab allocator must be
available before get_vm_area_caller() and vunmap() can be used.

Add the check to generic_ioremap_prot().
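
For illustration, the per-arch pattern that this makes redundant looked
roughly like the sketch below (not copied from any particular
architecture; the wrapper shape and prot conversion are assumed):

	void __iomem *ioremap_prot(phys_addr_t phys_addr, size_t size,
				   unsigned long prot)
	{
		/* Arch-local copy of the check, now done in the generic helper */
		if (!slab_is_available())
			return NULL;

		return generic_ioremap_prot(phys_addr, size, __pgprot(prot));
	}

With the check centralized, such wrappers can drop the slab_is_available()
test and rely on generic_ioremap_prot() returning NULL (with a one-time
warning) when called too early.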

Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
v5-v6:
  Add WARN_ON_ONCE to aid debugging - Christoph.

 mm/ioremap.c | 4 ++++
 1 file changed, 4 insertions(+)
  

Patch

diff --git a/mm/ioremap.c b/mm/ioremap.c
index 9f34a8f90b58..86b82ec27d2b 100644
--- a/mm/ioremap.c
+++ b/mm/ioremap.c
@@ -18,6 +18,10 @@  void __iomem *generic_ioremap_prot(phys_addr_t phys_addr, size_t size,
 	phys_addr_t last_addr;
 	struct vm_struct *area;
 
+	/* An early platform driver might end up here */
+	if (WARN_ON_ONCE(!slab_is_available()))
+		return NULL;
+
 	/* Disallow wrap-around or zero size */
 	last_addr = phys_addr + size - 1;
 	if (!size || last_addr < phys_addr)