[v2] swiotlb: check alloc_size before the allocation of a new memory pool

Message ID 20240109024547.3541529-1-zhangpeng362@huawei.com
State New
Series [v2] swiotlb: check alloc_size before the allocation of a new memory pool

Commit Message

zhangpeng (AS) Jan. 9, 2024, 2:45 a.m. UTC
  From: ZhangPeng <zhangpeng362@huawei.com>

A swiotlb allocation request for contiguous memory larger than 128*2KB
can never be fulfilled, because it exceeds the maximum contiguous
memory limit. Yet when the requested size is larger than 128*2KB,
swiotlb_find_slots() still schedules the allocation of a new memory
pool, which needlessly increases memory overhead.

Fix this by checking that alloc_size does not exceed 128*2KB before
scheduling the allocation of a new memory pool in swiotlb_find_slots().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
---
v1->v2:
- Rebased on for-next branch per Christoph Hellwig
- Added RB from Petr Tesarik

 kernel/dma/swiotlb.c | 3 +++
 1 file changed, 3 insertions(+)
  

Comments

Christoph Hellwig Jan. 11, 2024, 8:35 a.m. UTC | #1
Thanks, applied.
  

Patch

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 97c298b210bc..b079a9a8e087 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1136,6 +1136,9 @@  static int swiotlb_find_slots(struct device *dev, phys_addr_t orig_addr,
 	int cpu, i;
 	int index;
 
+	if (alloc_size > IO_TLB_SEGSIZE * IO_TLB_SIZE)
+		return -1;
+
 	cpu = raw_smp_processor_id();
 	for (i = 0; i < default_nareas; ++i) {
 		index = swiotlb_search_area(dev, cpu, i, orig_addr, alloc_size,