Message ID | 20221217015435.73889-4-bhe@redhat.com |
---|---|
State | New |
Series | mm/vmalloc.c: allow vread() to read out vm_map_ram areas |
Commit Message
Baoquan He
Dec. 17, 2022, 1:54 a.m. UTC
Currently, vread() can read out vmalloc areas that are associated with
a vm_struct. However, this doesn't work for areas created by the
vm_map_ram() interface, because they have no associated vm_struct, so
vread() skips them.

Here, add a new function, vb_vread(), to read out areas managed by
vmap_block specifically. Then recognize vm_map_ram areas via vmap->flags
and handle them accordingly.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
mm/vmalloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 59 insertions(+), 7 deletions(-)
Comments
Hi Baoquan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221217-095615
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20221217015435.73889-4-bhe%40redhat.com
patch subject: [PATCH v2 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
config: powerpc-randconfig-r031-20221216
compiler: clang version 16.0.0 (https://github.com/llvm/llvm-project 98b13979fb05f3ed288a900deb843e7b27589e58)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc cross compiling tool for clang build
        # apt-get install binutils-powerpc-linux-gnu
        # https://github.com/intel-lab-lkp/linux/commit/368cd65be8fedd1642e53393dc3f28ff8726122d
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221217-095615
        git checkout 368cd65be8fedd1642e53393dc3f28ff8726122d
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=powerpc olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=powerpc SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> mm/vmalloc.c:3563:35: warning: operator '<<' has lower precedence than '-'; '-' will be evaluated first [-Wshift-op-parentheses]
                   n = (re - rs + 1) << PAGE_SHIFT - offset;
                                     ~~ ~~~~~~~~~~~^~~~~~~~
   mm/vmalloc.c:3563:35: note: place parentheses around the '-' expression to silence this warning
                   n = (re - rs + 1) << PAGE_SHIFT - offset;
                                        ~~~~~~~~~~~^~~~~~~~
   1 warning generated.

vim +3563 mm/vmalloc.c

  3533
  3534  static void vb_vread(char *buf, char *addr, int count)
  3535  {
  3536          char *start;
  3537          struct vmap_block *vb;
  3538          unsigned long offset;
  3539          unsigned int rs, re, n;
  3540
  3541          vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
  3542
  3543          spin_lock(&vb->lock);
  3544          if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
  3545                  spin_unlock(&vb->lock);
  3546                  memset(buf, 0, count);
  3547                  return;
  3548          }
  3549          for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
  3550                  if (!count)
  3551                          break;
  3552                  start = vmap_block_vaddr(vb->va->va_start, rs);
  3553                  if (addr < start) {
  3554                          if (count == 0)
  3555                                  break;
  3556                          *buf = '\0';
  3557                          buf++;
  3558                          addr++;
  3559                          count--;
  3560                  }
  3561                  /*it could start reading from the middle of used region*/
  3562                  offset = offset_in_page(addr);
> 3563                  n = (re - rs + 1) << PAGE_SHIFT - offset;
  3564                  if (n > count)
  3565                          n = count;
  3566                  aligned_vread(buf, start+offset, n);
  3567
  3568                  buf += n;
  3569                  addr += n;
  3570                  count -= n;
  3571          }
  3572          spin_unlock(&vb->lock);
  3573
  3574          /* zero-fill the left dirty or free regions */
  3575          if (count)
  3576                  memset(buf, 0, count);
  3577  }
  3578
Hi Baoquan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221217-095615
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20221217015435.73889-4-bhe%40redhat.com
patch subject: [PATCH v2 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
config: loongarch-randconfig-r006-20221216
compiler: loongarch64-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/368cd65be8fedd1642e53393dc3f28ff8726122d
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221217-095615
        git checkout 368cd65be8fedd1642e53393dc3f28ff8726122d
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/vmalloc.c: In function 'vb_vread':
>> mm/vmalloc.c:3563:49: warning: suggest parentheses around '-' inside '<<' [-Wparentheses]
    3563 |                 n = (re - rs + 1) << PAGE_SHIFT - offset;

vim +3563 mm/vmalloc.c

  3533
  3534  static void vb_vread(char *buf, char *addr, int count)
  3535  {
  3536          char *start;
  3537          struct vmap_block *vb;
  3538          unsigned long offset;
  3539          unsigned int rs, re, n;
  3540
  3541          vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
  3542
  3543          spin_lock(&vb->lock);
  3544          if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
  3545                  spin_unlock(&vb->lock);
  3546                  memset(buf, 0, count);
  3547                  return;
  3548          }
  3549          for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
  3550                  if (!count)
  3551                          break;
  3552                  start = vmap_block_vaddr(vb->va->va_start, rs);
  3553                  if (addr < start) {
  3554                          if (count == 0)
  3555                                  break;
  3556                          *buf = '\0';
  3557                          buf++;
  3558                          addr++;
  3559                          count--;
  3560                  }
  3561                  /*it could start reading from the middle of used region*/
  3562                  offset = offset_in_page(addr);
> 3563                  n = (re - rs + 1) << PAGE_SHIFT - offset;
  3564                  if (n > count)
  3565                          n = count;
  3566                  aligned_vread(buf, start+offset, n);
  3567
  3568                  buf += n;
  3569                  addr += n;
  3570                  count -= n;
  3571          }
  3572          spin_unlock(&vb->lock);
  3573
  3574          /* zero-fill the left dirty or free regions */
  3575          if (count)
  3576                  memset(buf, 0, count);
  3577  }
  3578
On 12/17/22 at 02:41pm, kernel test robot wrote:
> Hi Baoquan,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on akpm-mm/mm-everything]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221217-095615
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20221217015435.73889-4-bhe%40redhat.com
> patch subject: [PATCH v2 3/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
> config: loongarch-randconfig-r006-20221216
> compiler: loongarch64-linux-gcc (GCC) 12.1.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://github.com/intel-lab-lkp/linux/commit/368cd65be8fedd1642e53393dc3f28ff8726122d
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Baoquan-He/mm-vmalloc-c-allow-vread-to-read-out-vm_map_ram-areas/20221217-095615
>         git checkout 368cd65be8fedd1642e53393dc3f28ff8726122d
>         # save the config file
>         mkdir build_dir && cp config build_dir/.config
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch olddefconfig
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=loongarch SHELL=/bin/bash
>
> If you fix the issue, kindly add following tag where applicable
> | Reported-by: kernel test robot <lkp@intel.com>
>
> All warnings (new ones prefixed by >>):
>
>    mm/vmalloc.c: In function 'vb_vread':
> >> mm/vmalloc.c:3563:49: warning: suggest parentheses around '-' inside '<<' [-Wparentheses]
>     3563 |                 n = (re - rs + 1) << PAGE_SHIFT - offset;

Thanks, below code change can fix the warning.

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bdaceda1b878..ec5665e70114 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3544,7 +3544,7 @@ static void vb_vread(char *buf, char *addr, int count)
 	}
 	/*it could start reading from the middle of used region*/
 	offset = offset_in_page(addr);
-	n = (re - rs + 1) << PAGE_SHIFT - offset;
+	n = ((re - rs + 1) << PAGE_SHIFT) - offset;
 	if (n > count)
 		n = count;
 	aligned_vread(buf, start+offset, n);
On Sat, Dec 17, 2022 at 09:54:31AM +0800, Baoquan He wrote:
> Currently, vread can read out vmalloc areas which is associated with
> a vm_struct. While this doesn't work for areas created by vm_map_ram()
> interface because it doesn't have an associated vm_struct. Then in vread(),
> these areas will be skipped.
>
> Here, add a new function vb_vread() to read out areas managed by
> vmap_block specifically. Then recognize vm_map_ram areas via vmap->flags
> and handle them respectively.
>
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  mm/vmalloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 59 insertions(+), 7 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 190f29bbaaa7..6612914459cf 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3515,6 +3515,51 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
>  	return copied;
>  }
>
> +static void vb_vread(char *buf, char *addr, int count)
> +{
> +	char *start;
> +	struct vmap_block *vb;
> +	unsigned long offset;
> +	unsigned int rs, re, n;
> +
> +	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
> +
> +	spin_lock(&vb->lock);
> +	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
> +		spin_unlock(&vb->lock);
> +		memset(buf, 0, count);
> +		return;
> +	}
> +	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
> +		if (!count)
> +			break;
> +		start = vmap_block_vaddr(vb->va->va_start, rs);
> +		if (addr < start) {
> +			if (count == 0)
> +				break;
> +			*buf = '\0';
> +			buf++;
> +			addr++;
> +			count--;
> +		}

I may be missing something here, but is this not essentially 'if the address is
below a used region, write a single null byte into the buffer and continue,
assuming we are now in a used area?'

This doesn't seem right, but I am happy to be corrected (perhaps we only expect
to be a single byte below a start region?)

> +		/*it could start reading from the middle of used region*/
> +		offset = offset_in_page(addr);
> +		n = (re - rs + 1) << PAGE_SHIFT - offset;

The kernel bot has already picked up on this paren issue :)

> +		if (n > count)
> +			n = count;
> +		aligned_vread(buf, start+offset, n);
> +
> +		buf += n;
> +		addr += n;
> +		count -= n;
> +	}
> +	spin_unlock(&vb->lock);
> +
> +	/* zero-fill the left dirty or free regions */
> +	if (count)
> +		memset(buf, 0, count);
> +}
> +
>  /**
>   * vread() - read vmalloc area in a safe way.
>   * @buf: buffer for reading data
> @@ -3545,7 +3590,7 @@ long vread(char *buf, char *addr, unsigned long count)
>  	struct vm_struct *vm;
>  	char *vaddr, *buf_start = buf;
>  	unsigned long buflen = count;
> -	unsigned long n;
> +	unsigned long n, size, flags;
>
>  	addr = kasan_reset_tag(addr);
>
> @@ -3566,12 +3611,16 @@ long vread(char *buf, char *addr, unsigned long count)
>  		if (!count)
>  			break;
>
> -		if (!va->vm)
> +		vm = va->vm;
> +		flags = va->flags & VMAP_FLAGS_MASK;
> +
> +		if (!vm && !flags)
>  			continue;
>

This seems very delicate now as going forward, vm _could_ be NULL. In fact, a
later patch in the series then goes on to use vm and assume it is not null (will
comment).

I feel we should be very explicit after here asserting that vm != NULL.

> -		vm = va->vm;
> -		vaddr = (char *) vm->addr;
> -		if (addr >= vaddr + get_vm_area_size(vm))
> +		vaddr = (char *) va->va_start;
> +		size = flags ? va_size(va) : get_vm_area_size(vm);

For example here, I feel that this ternary should be reversed and based on
whether vm is null, unless we expect vm to ever be non-null _and_ flags to be
set?

> +
> +		if (addr >= vaddr + size)
>  			continue;
>  		while (addr < vaddr) {
>  			if (count == 0)
> @@ -3581,10 +3630,13 @@ long vread(char *buf, char *addr, unsigned long count)
>  			addr++;
>  			count--;
>  		}
> -		n = vaddr + get_vm_area_size(vm) - addr;
> +		n = vaddr + size - addr;
>  		if (n > count)
>  			n = count;
> -		if (!(vm->flags & VM_IOREMAP))
> +
> +		if ((flags & (VMAP_RAM|VMAP_BLOCK)) == (VMAP_RAM|VMAP_BLOCK))
> +			vb_vread(buf, addr, n);
> +		else if ((flags & VMAP_RAM) || !(vm->flags & VM_IOREMAP))
>  			aligned_vread(buf, addr, n);
>  		else	/* IOREMAP area is treated as memory hole */
>  			memset(buf, 0, n);
> --
> 2.34.1
>
On 12/17/22 at 12:06pm, Lorenzo Stoakes wrote:
> On Sat, Dec 17, 2022 at 09:54:31AM +0800, Baoquan He wrote:
> > Currently, vread can read out vmalloc areas which is associated with
> > a vm_struct. While this doesn't work for areas created by vm_map_ram()
> > interface because it doesn't have an associated vm_struct. Then in vread(),
> > these areas will be skipped.
> >
> > Here, add a new function vb_vread() to read out areas managed by
> > vmap_block specifically. Then recognize vm_map_ram areas via vmap->flags
> > and handle them respectively.
> >
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  mm/vmalloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 59 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 190f29bbaaa7..6612914459cf 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -3515,6 +3515,51 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
> >  	return copied;
> >  }
> >
> > +static void vb_vread(char *buf, char *addr, int count)
> > +{
> > +	char *start;
> > +	struct vmap_block *vb;
> > +	unsigned long offset;
> > +	unsigned int rs, re, n;
> > +
> > +	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
> > +
> > +	spin_lock(&vb->lock);
> > +	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
> > +		spin_unlock(&vb->lock);
> > +		memset(buf, 0, count);
> > +		return;
> > +	}
> > +	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
> > +		if (!count)
> > +			break;
> > +		start = vmap_block_vaddr(vb->va->va_start, rs);
> > +		if (addr < start) {
> > +			if (count == 0)
> > +				break;
> > +			*buf = '\0';
> > +			buf++;
> > +			addr++;
> > +			count--;
> > +		}

Very sorry, Lorenzo, I just noticed this mail. It's very weird. Earlier,
Uladzislau's reply to patch 2/7 got to be seen in my mutt mail client 10
days later. I am not sure it's my mail client's problem, or a mail server
delivery issue.

>
> I may be missing something here, but is this not essentially 'if the address is
> below a used region, write a single null byte into the buffer and continue,
> assuming we are now in a used area?'

Not sure if I got you. for_each_set_bitrange only iterates the used
regions. So in the for loop, what we do is fill zero into the buffer
below the used region, then read out the used region. You said
'continue', I don't understand what it means.

Assume we have 3 used regions in one vmap block, see below diagram.
|_______|______________|________|_____________|_____|_____________|______|
|hole 0 |used region 0 |hole 1  |used region 1|hole2|used region2 |hole 3 |

hole 0,1,2 will be set zero when we iterate to the used region above
them. And the last hole 3 is set at the end of this function. Please
help point it out if I got it wrong.

> This doesn't seem right, but I am happy to be corrected (perhaps we only expect
> to be a single byte below a start region?)
>
> > +		/*it could start reading from the middle of used region*/
> > +		offset = offset_in_page(addr);
> > +		n = (re - rs + 1) << PAGE_SHIFT - offset;
>
> The kernel bot has already picked up on this paren issue :)

Right, has been handled. Thanks.

> > +		if (n > count)
> > +			n = count;
> > +		aligned_vread(buf, start+offset, n);
> > +
> > +		buf += n;
> > +		addr += n;
> > +		count -= n;
> > +	}
> > +	spin_unlock(&vb->lock);
> > +
> > +	/* zero-fill the left dirty or free regions */
> > +	if (count)
> > +		memset(buf, 0, count);
> > +}
> > +
> >  /**
> >   * vread() - read vmalloc area in a safe way.
> >   * @buf: buffer for reading data
> > @@ -3545,7 +3590,7 @@ long vread(char *buf, char *addr, unsigned long count)
> >  	struct vm_struct *vm;
> >  	char *vaddr, *buf_start = buf;
> >  	unsigned long buflen = count;
> > -	unsigned long n;
> > +	unsigned long n, size, flags;
> >
> >  	addr = kasan_reset_tag(addr);
> >
> > @@ -3566,12 +3611,16 @@ long vread(char *buf, char *addr, unsigned long count)
> >  		if (!count)
> >  			break;
> >
> > -		if (!va->vm)
> > +		vm = va->vm;
> > +		flags = va->flags & VMAP_FLAGS_MASK;
> > +
> > +		if (!vm && !flags)
> >  			continue;
> >
>
> This seems very delicate now as going forward, vm _could_ be NULL. In fact, a
> later patch in the series then goes on to use vm and assume it is not null (will
> comment).
>
> I feel we should be very explicit after here asserting that vm != NULL.
>
> > -		vm = va->vm;
> > -		vaddr = (char *) vm->addr;
> > -		if (addr >= vaddr + get_vm_area_size(vm))
> > +		vaddr = (char *) va->va_start;
> > +		size = flags ? va_size(va) : get_vm_area_size(vm);
>
> For example here, I feel that this ternary should be reversed and based on
> whether vm is null, unless we expect vm to ever be non-null _and_ flags to be
> set?

Now only vm_map_ram area sets flags, all other types has vm not null.
Since those temporary state, e.g vm==NULL, flags==0 case has been
filtered out. Is below you suggested?

size = (!vm&&flags)? va_size(va) : get_vm_area_size(vm);
or
size = (vm&&!flags)? get_vm_area_size(vm):va_size(va);

>
> > +
> > +		if (addr >= vaddr + size)
> >  			continue;
> >  		while (addr < vaddr) {
> >  			if (count == 0)
> > @@ -3581,10 +3630,13 @@ long vread(char *buf, char *addr, unsigned long count)
> >  			addr++;
> >  			count--;
> >  		}
> > -		n = vaddr + get_vm_area_size(vm) - addr;
> > +		n = vaddr + size - addr;
> >  		if (n > count)
> >  			n = count;
> > -		if (!(vm->flags & VM_IOREMAP))
> > +
> > +		if ((flags & (VMAP_RAM|VMAP_BLOCK)) == (VMAP_RAM|VMAP_BLOCK))
> > +			vb_vread(buf, addr, n);
> > +		else if ((flags & VMAP_RAM) || !(vm->flags & VM_IOREMAP))
> >  			aligned_vread(buf, addr, n);
> >  		else	/* IOREMAP area is treated as memory hole */
> >  			memset(buf, 0, n);
> > --
> > 2.34.1
> >
On Wed, Jan 04, 2023 at 04:01:36PM +0800, Baoquan He wrote:
> On 12/17/22 at 12:06pm, Lorenzo Stoakes wrote:
> > On Sat, Dec 17, 2022 at 09:54:31AM +0800, Baoquan He wrote:
> > > Currently, vread can read out vmalloc areas which is associated with
> > > a vm_struct. While this doesn't work for areas created by vm_map_ram()
> > > interface because it doesn't have an associated vm_struct. Then in vread(),
> > > these areas will be skipped.
> > >
> > > Here, add a new function vb_vread() to read out areas managed by
> > > vmap_block specifically. Then recognize vm_map_ram areas via vmap->flags
> > > and handle them respectively.
> > >
> > > Signed-off-by: Baoquan He <bhe@redhat.com>
> > > ---
> > >  mm/vmalloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
> > >  1 file changed, 59 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > > index 190f29bbaaa7..6612914459cf 100644
> > > --- a/mm/vmalloc.c
> > > +++ b/mm/vmalloc.c
> > > @@ -3515,6 +3515,51 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
> > >  	return copied;
> > >  }
> > >
> > > +static void vb_vread(char *buf, char *addr, int count)
> > > +{
> > > +	char *start;
> > > +	struct vmap_block *vb;
> > > +	unsigned long offset;
> > > +	unsigned int rs, re, n;
> > > +
> > > +	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
> > > +
> > > +	spin_lock(&vb->lock);
> > > +	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
> > > +		spin_unlock(&vb->lock);
> > > +		memset(buf, 0, count);
> > > +		return;
> > > +	}
> > > +	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
> > > +		if (!count)
> > > +			break;
> > > +		start = vmap_block_vaddr(vb->va->va_start, rs);
> > > +		if (addr < start) {
> > > +			if (count == 0)
> > > +				break;
> > > +			*buf = '\0';
> > > +			buf++;
> > > +			addr++;
> > > +			count--;
> > > +		}
>
> Very sorry, Lorenzo, I just noticed this mail. It's very weird. Earlier,
> Uladzislau's reply to patch 2/7 got to be seen in my mutt mail client 10
> days later. I am not sure it's my mail client's problem, or a mail server
> delivery issue.
>

Odd, maybe try lei with mutt I find that works well :)

> >
> > I may be missing something here, but is this not essentially 'if the address is
> > below a used region, write a single null byte into the buffer and continue,
> > assuming we are now in a used area?'
>
> Not sure if I got you. for_each_set_bitrange only iterates the used
> regions. So in the for loop, what we do is fill zero into the buffer
> below the used region, then read out the used region. You said
> 'continue', I don't understand what it means.
>
> Assume we have 3 used regions in one vmap block, see below diagram.
> |_______|______________|________|_____________|_____|_____________|______|
> |hole 0 |used region 0 |hole 1  |used region 1|hole2|used region2 |hole 3 |
>
> hole 0,1,2 will be set zero when we iterate to the used region above
> them. And the last hole 3 is set at the end of this function. Please
> help point it out if I got it wrong.

Maybe let me rephrase:-

- We want to read `count` bytes from `addr` into `buf`
- We iterate over _used_ blocks, placing the start/end of each block in `rs`, `re`
  respectively.
- If we hit a block whose start address is above the one in which we are interested then:-
  - Place a zero byte in the buffer
  - Increment `addr` by 1 byte
  - Decrement the `count` by 1 byte
  - Carry on

I am seriously confused as to why we do this? Surely we should be checking
whether the range [addr, addr + count) overlaps this block at all, and only then
copying the relevant region?

It's the fact that blocks are at base page granularity but then this condition
is at byte granularity that is confusing to me (again it's _very_ possible I am
just being dumb here and missing something, just really want to understand this
better :)

> > > -		vm = va->vm;
> > > -		vaddr = (char *) vm->addr;
> > > -		if (addr >= vaddr + get_vm_area_size(vm))
> > > +		vaddr = (char *) va->va_start;
> > > +		size = flags ? va_size(va) : get_vm_area_size(vm);
> >
> > For example here, I feel that this ternary should be reversed and based on
> > whether vm is null, unless we expect vm to ever be non-null _and_ flags to be
> > set?
>
> Now only vm_map_ram area sets flags, all other types has vm not null.
> Since those temporary state, e.g vm==NULL, flags==0 case has been
> filtered out. Is below you suggested?
>
> size = (!vm&&flags)? va_size(va) : get_vm_area_size(vm);
> or
> size = (vm&&!flags)? get_vm_area_size(vm):va_size(va);
>

Sorry I didn't phrase this very well, my point is that the key thing you're
relying on here is whether vm exists in order to use it so I simply meant:-

size = vm ? get_vm_area_size(vm) : va_size(va);

This just makes it really explicit that you need vm to be non-NULL, and you've
already done the flags check before so this should suffice.
On 01/04/23 at 08:20pm, Lorenzo Stoakes wrote: > On Wed, Jan 04, 2023 at 04:01:36PM +0800, Baoquan He wrote: > > On 12/17/22 at 12:06pm, Lorenzo Stoakes wrote: > > > On Sat, Dec 17, 2022 at 09:54:31AM +0800, Baoquan He wrote: > > > > Currently, vread can read out vmalloc areas which is associated with > > > > a vm_struct. While this doesn't work for areas created by vm_map_ram() > > > > interface because it doesn't have an associated vm_struct. Then in vread(), > > > > these areas will be skipped. > > > > > > > > Here, add a new function vb_vread() to read out areas managed by > > > > vmap_block specifically. Then recognize vm_map_ram areas via vmap->flags > > > > and handle them respectively. > > > > > > > > Signed-off-by: Baoquan He <bhe@redhat.com> > > > > --- > > > > mm/vmalloc.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++------ > > > > 1 file changed, 59 insertions(+), 7 deletions(-) > > > > > > > > diff --git a/mm/vmalloc.c b/mm/vmalloc.c > > > > index 190f29bbaaa7..6612914459cf 100644 > > > > --- a/mm/vmalloc.c > > > > +++ b/mm/vmalloc.c > > > > @@ -3515,6 +3515,51 @@ static int aligned_vread(char *buf, char *addr, unsigned long count) > > > > return copied; > > > > } > > > > > > > > +static void vb_vread(char *buf, char *addr, int count) > > > > +{ > > > > + char *start; > > > > + struct vmap_block *vb; > > > > + unsigned long offset; > > > > + unsigned int rs, re, n; > > > > + > > > > + vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr)); > > > > + > > > > + spin_lock(&vb->lock); > > > > + if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) { > > > > + spin_unlock(&vb->lock); > > > > + memset(buf, 0, count); > > > > + return; > > > > + } > > > > + for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) { > > > > + if (!count) > > > > + break; > > > > + start = vmap_block_vaddr(vb->va->va_start, rs); > > > > + if (addr < start) { > > > > + if (count == 0) > > > > + break; > > > > + *buf = '\0'; > > > > + buf++; > > > > + addr++; > > > 
> + count--; > > > > + } > > > > Very sorry, Lorenzo, I just noticed this mail. It's very weird. Earlier, > > Uladzislau's reply to patch 2/7 got to be seen in my mutt mail client 10 > > days later. I am not sure it's my mail client's problem, or a mail server > > delivery issue. > > > > Odd, maybe try lei with mutt I find that works well :) Sorry for late reply, just come back from vacation. Lei + mutt sounds like a good idea. I relied too much on mbsync in the past. > > > > > > > I may be missing something here, but is this not essentially 'if the address is > > > below a used region, write a single null byte into the buffer and continue, > > > assuming we are now in a used area?' > > > > Not sure if I got you. for_each_set_bitrange only iterates the used > > regions. So in the for loop, what we do is fill zero into the buffer > > below the used region, then read out the used region. You said > > 'continue', I don't understand what it means. > > > > Assume we have 3 used regions in one vmap block, see below diagram. > > |_______|______________|________|_____________|_____|_____________|______| > > |hole 0 |used region 0 |hole 1 |used region 1|hole2|used region2 |hole 3 | > > > > hole 0,1,2 will be set zero when we iterate to the used region above > > them. And the last hole 3 is set at the end of this function. Please > > help point it out if I got it wrong. > > Maybe let me rephrase:- > > - We want to read `count` bytes from `addr` into `buf` > - We iterate over _used_ blocks, placing the start/end of each block in `rs`, `re` > respectively. > - If we hit a block whose start address is above the one in which we are interested then:- > - Place a zero byte in the buffer > - Increment `addr` by 1 byte > - Decrement the `count` by 1 byte > - Carry on > > I am seriously confused as to why we do this? Surely we should be checking > whether the range [addr, addr + count) overlaps this block at all, and only then > copying the relevant region? 
I guessed this could be your concern, but not very sure. That
code block is copied from vread(), and my considerations are:

1) We could starting read from any position of kcore file. /proc/kcore
is a elf file logically, it's allowed to read from anywhere, right? We
don't have to read the entire file always. So the vmap_block reading is
not necessarily page aligned. It's very similar with the empty area
filling in vread().

2) memset() is doing the byte by byte reading. We can
change code as below. While we don't save the effort very much, and we
need introduce an extra local variable to store the value of
(start - end).

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b054081aa66b..dce4a843a9e8 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3576,6 +3576,15 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
+		if (addr < start) {
+			int num = min(count, (start - add));
+			memset(buf, 0, count);
+			count -= num;
+			if (count == 0)
+				break;
+			buf -= num;
+			addr -= num;
+		}
 		/*it could start reading from the middle of used region*/
 		offset = offset_in_page(addr);
 		n = ((re - rs + 1) << PAGE_SHIFT) - offset;

void *memset(void *s, int c, size_t count)
{
	char *xs = s;

	while (count--)
		*xs++ = c;
	return s;
}

> It's the fact that blocks are at base page granularity but then this condition
> is at byte granularity that is confusing to me (again it's _very_ possible I am
> just being dumb here and missing something, just really want to understand this
> better :)

I like this kind of reviewing with careful checking and deep thinking.
For above code block, I think it's a very great point. From my point of
view, I like the memset version better, it's easier to understand. If we
all agree, we can change it to take memset way. When I made patches,
several issues related to patches were hovering in my mind at the same
time, I did not consider this one so deeply.
> > > > -	vm = va->vm;
> > > > -	vaddr = (char *) vm->addr;
> > > > -	if (addr >= vaddr + get_vm_area_size(vm))
> > > > +	vaddr = (char *) va->va_start;
> > > > +	size = flags ? va_size(va) : get_vm_area_size(vm);
> > >
> > > For example here, I feel that this ternary should be reversed and based on
> > > whether vm is null, unles we expect vm to ever be non-null _and_ flags to be
> > > set?
> >
> > Now only vm_map_ram area sets flags, all other types has vm not null.
> > Since those temporary state, e.g vm==NULL, flags==0 case has been
> > filtered out. Is below you suggested?
> >
> > size = (!vm&&flags)? va_size(va) : get_vm_area_size(vm);
> > or
> > size = (vm&&!flags)? get_vm_area_size(vm):va_size(va);
> >
>
> Sorry I didn't phrase this very well, my point is that the key thing you're
> relying on here is whether vm exists in order to use it so I simply meant:-
>
> size = vm ? get_vm_area_size(vm) : va_size(va);
>
> This just makes it really explicit that you need vm to be non-NULL, and you've
> already done the flags check before so this should suffice.

Sounds reasonable, I will copy above line you pasted. Thanks a lot.
On Mon, Jan 09, 2023 at 12:35:04PM +0800, Baoquan He wrote:
> Sorry for late reply, just come back from vacation.

Hope you had a great time! :)

> Lei + mutt sounds like a good idea. I relied too much on mbsync in the
> past.
>

Yeah I'm finding it works well,
https://josefbacik.github.io/kernel/2021/10/18/lei-and-b4.html is a
handy guide!

[snip]

> > Maybe let me rephrase:-
> >
> > - We want to read `count` bytes from `addr` into `buf`
> > - We iterate over _used_ blocks, placing the start/end of each block in
> >   `rs`, `re` respectively.
> > - If we hit a block whose start address is above the one in which we are
> >   interested then:-
> >   - Place a zero byte in the buffer
> >   - Increment `addr` by 1 byte
> >   - Decrement the `count` by 1 byte
> >   - Carry on
> >
> > I am seriously confused as to why we do this? Surely we should be checking
> > whether the range [addr, addr + count) overlaps this block at all, and only then
> > copying the relevant region?
>
> I guessed this could be your concern, but not very sure. That
> code block is copied from vread(), and my considerations are:
> 1) We could starting read from any position of kcore file. /proc/kcore
> is a elf file logically, it's allowed to read from anywhere, right? We
> don't have to read the entire file always. So the vmap_block reading is
> not necessarily page aligned. It's very similar with the empty area
> filling in vread().
> 2) memset() is doing the byte by byte reading. We can
> change code as below. While we don't save the effort very much, and we
> need introduce an extra local variable to store the value of
> (start - end).
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b054081aa66b..dce4a843a9e8 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3576,6 +3576,15 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
> +		if (addr < start) {
> +			int num = min(count, (start - add));
> +			memset(buf, 0, count);
> +			count -= num;
> +			if (count == 0)
> +				break;
> +			buf -= num;
> +			addr -= num;
> +		}
>  		/*it could start reading from the middle of used region*/
>  		offset = offset_in_page(addr);
>  		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
>

The difference with vread() is that uses a while loop rather than an if clause
so operates over the whole region byte-by-byte, your original would only do this
for 1 byte so now things make a lot more sense!

This approach makes sense though I'd put the count == 0 check first and nit
'add' should be 'addr'.

I am happy with either this or a while loop instead of an if which it seems is
what the original issue was!

> void *memset(void *s, int c, size_t count)
> {
> 	char *xs = s;
>
> 	while (count--)
> 		*xs++ = c;
> 	return s;
> }
>
> > It's the fact that blocks are at base page granularity but then this condition
> > is at byte granularity that is confusing to me (again it's _very_ possible I am
> > just being dumb here and missing something, just really want to understand this
> > better :)
>
> I like this kind of reviewing with careful checking and deep thinking.
> For above code block, I think it's a very great point. From my point of
> view, I like the memset version better, it's easier to understand. If we
> all agree, we can change it to take memset way. When I made patches,
> several issues related to patches were hovering in my mind at the same
> time, I did not consider this one so deeply.
>

Thanks :) I have a particular interest in vmalloc so am happy to dive in with
reviews here!
> > > > >
> > > > > -	vm = va->vm;
> > > > > -	vaddr = (char *) vm->addr;
> > > > > -	if (addr >= vaddr + get_vm_area_size(vm))
> > > > > +	vaddr = (char *) va->va_start;
> > > > > +	size = flags ? va_size(va) : get_vm_area_size(vm);
> > > >
> > > > For example here, I feel that this ternary should be reversed and based on
> > > > whether vm is null, unles we expect vm to ever be non-null _and_ flags to be
> > > > set?
> > >
> > > Now only vm_map_ram area sets flags, all other types has vm not null.
> > > Since those temporary state, e.g vm==NULL, flags==0 case has been
> > > filtered out. Is below you suggested?
> > >
> > > size = (!vm&&flags)? va_size(va) : get_vm_area_size(vm);
> > > or
> > > size = (vm&&!flags)? get_vm_area_size(vm):va_size(va);
> > >
> >
> > Sorry I didn't phrase this very well, my point is that the key thing you're
> > relying on here is whether vm exists in order to use it so I simply meant:-
> >
> > size = vm ? get_vm_area_size(vm) : va_size(va);
> >
> > This just makes it really explicit that you need vm to be non-NULL, and you've
> > already done the flags check before so this should suffice.
>
> Sounds reasonable, I will copy above line you pasted. Thanks a lot.
>

Cheers!
On 01/09/23 at 07:12am, Lorenzo Stoakes wrote:
> On Mon, Jan 09, 2023 at 12:35:04PM +0800, Baoquan He wrote:
> > Sorry for late reply, just come back from vacation.
>
> Hope you had a great time! :)

Thanks.

>
> > Lei + mutt sounds like a good idea. I relied too much on mbsync in the
> > past.
> >
>
> Yeah I'm finding it works well,
> https://josefbacik.github.io/kernel/2021/10/18/lei-and-b4.html is a
> handy guide!

Very helpful, will try.

>
> [snip]
>
> > > Maybe let me rephrase:-
> > >
> > > - We want to read `count` bytes from `addr` into `buf`
> > > - We iterate over _used_ blocks, placing the start/end of each block in
> > >   `rs`, `re` respectively.
> > > - If we hit a block whose start address is above the one in which we are
> > >   interested then:-
> > >   - Place a zero byte in the buffer
> > >   - Increment `addr` by 1 byte
> > >   - Decrement the `count` by 1 byte
> > >   - Carry on
> > >
> > > I am seriously confused as to why we do this? Surely we should be checking
> > > whether the range [addr, addr + count) overlaps this block at all, and only then
> > > copying the relevant region?
> >
> > I guessed this could be your concern, but not very sure. That
> > code block is copied from vread(), and my considerations are:
> > 1) We could starting read from any position of kcore file. /proc/kcore
> > is a elf file logically, it's allowed to read from anywhere, right? We
> > don't have to read the entire file always. So the vmap_block reading is
> > not necessarily page aligned. It's very similar with the empty area
> > filling in vread().
> > 2) memset() is doing the byte by byte reading. We can
> > change code as below. While we don't save the effort very much, and we
> > need introduce an extra local variable to store the value of
> > (start - end).
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index b054081aa66b..dce4a843a9e8 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -3576,6 +3576,15 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
> > +		if (addr < start) {
> > +			int num = min(count, (start - add));
> > +			memset(buf, 0, count);
> > +			count -= num;
> > +			if (count == 0)
> > +				break;
> > +			buf -= num;
> > +			addr -= num;
> > +		}
> >  		/*it could start reading from the middle of used region*/
> >  		offset = offset_in_page(addr);
> >  		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
> >
>
> The difference with vread() is that uses a while loop rather than an if clause
> so operates over the whole region byte-by-byte, your original would only do this
> for 1 byte so now things make a lot more sense!

Oops, that 'if clause' is a code bug, I finally got your point until
now, my dumb head.

>
> This approach makes sense though I'd put the count == 0 check first and nit
> 'add' should be 'addr'.
>
> I am happy with either this or a while loop instead of an if which it seems is
> what the original issue was!

OK, I will think again which one is more appropriate.

>
> > void *memset(void *s, int c, size_t count)
> > {
> > 	char *xs = s;
> >
> > 	while (count--)
> > 		*xs++ = c;
> > 	return s;
> > }
> >
> > > It's the fact that blocks are at base page granularity but then this condition
> > > is at byte granularity that is confusing to me (again it's _very_ possible I am
> > > just being dumb here and missing something, just really want to understand this
> > > better :)
> >
> > I like this kind of reviewing with careful checking and deep thinking.
> > For above code block, I think it's a very great point. From my point of
> > view, I like the memset version better, it's easier to understand. If we
> > all agree, we can change it to take memset way.
> > When I made patches,
> > several issues related to patches were hovering in my mind at the same
> > time, I did not consider this one so deeply.
> >
>
> Thanks :) I have a particular interest in vmalloc so am happy to dive in with
> reviews here!
>
> > > > > >
> > > > > > -	vm = va->vm;
> > > > > > -	vaddr = (char *) vm->addr;
> > > > > > -	if (addr >= vaddr + get_vm_area_size(vm))
> > > > > > +	vaddr = (char *) va->va_start;
> > > > > > +	size = flags ? va_size(va) : get_vm_area_size(vm);
> > > > >
> > > > > For example here, I feel that this ternary should be reversed and based on
> > > > > whether vm is null, unles we expect vm to ever be non-null _and_ flags to be
> > > > > set?
> > > >
> > > > Now only vm_map_ram area sets flags, all other types has vm not null.
> > > > Since those temporary state, e.g vm==NULL, flags==0 case has been
> > > > filtered out. Is below you suggested?
> > > >
> > > > size = (!vm&&flags)? va_size(va) : get_vm_area_size(vm);
> > > > or
> > > > size = (vm&&!flags)? get_vm_area_size(vm):va_size(va);
> > > >
> > >
> > > Sorry I didn't phrase this very well, my point is that the key thing you're
> > > relying on here is whether vm exists in order to use it so I simply meant:-
> > >
> > > size = vm ? get_vm_area_size(vm) : va_size(va);
> > >
> > > This just makes it really explicit that you need vm to be non-NULL, and you've
> > > already done the flags check before so this should suffice.
> >
> > Sounds reasonable, I will copy above line you pasted. Thanks a lot.

Thanks again for careful reviewing and great suggestions and findings.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 190f29bbaaa7..6612914459cf 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3515,6 +3515,51 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
 	return copied;
 }
 
+static void vb_vread(char *buf, char *addr, int count)
+{
+	char *start;
+	struct vmap_block *vb;
+	unsigned long offset;
+	unsigned int rs, re, n;
+
+	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
+
+	spin_lock(&vb->lock);
+	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
+		spin_unlock(&vb->lock);
+		memset(buf, 0, count);
+		return;
+	}
+	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
+		if (!count)
+			break;
+		start = vmap_block_vaddr(vb->va->va_start, rs);
+		if (addr < start) {
+			if (count == 0)
+				break;
+			*buf = '\0';
+			buf++;
+			addr++;
+			count--;
+		}
+		/*it could start reading from the middle of used region*/
+		offset = offset_in_page(addr);
+		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
+		if (n > count)
+			n = count;
+		aligned_vread(buf, start+offset, n);
+
+		buf += n;
+		addr += n;
+		count -= n;
+	}
+	spin_unlock(&vb->lock);
+
+	/* zero-fill the left dirty or free regions */
+	if (count)
+		memset(buf, 0, count);
+}
+
 /**
  * vread() - read vmalloc area in a safe way.
  * @buf: buffer for reading data
@@ -3545,7 +3590,7 @@ long vread(char *buf, char *addr, unsigned long count)
 	struct vm_struct *vm;
 	char *vaddr, *buf_start = buf;
 	unsigned long buflen = count;
-	unsigned long n;
+	unsigned long n, size, flags;
 
 	addr = kasan_reset_tag(addr);
 
@@ -3566,12 +3611,16 @@ long vread(char *buf, char *addr, unsigned long count)
 		if (!count)
 			break;
 
-		if (!va->vm)
+		vm = va->vm;
+		flags = va->flags & VMAP_FLAGS_MASK;
+
+		if (!vm && !flags)
 			continue;
 
-		vm = va->vm;
-		vaddr = (char *) vm->addr;
-		if (addr >= vaddr + get_vm_area_size(vm))
+		vaddr = (char *) va->va_start;
+		size = flags ? va_size(va) : get_vm_area_size(vm);
+
+		if (addr >= vaddr + size)
 			continue;
 		while (addr < vaddr) {
 			if (count == 0)
@@ -3581,10 +3630,13 @@ long vread(char *buf, char *addr, unsigned long count)
 			addr++;
 			count--;
 		}
-		n = vaddr + get_vm_area_size(vm) - addr;
+		n = vaddr + size - addr;
 		if (n > count)
 			n = count;
-		if (!(vm->flags & VM_IOREMAP))
+
+		if ((flags & (VMAP_RAM|VMAP_BLOCK)) == (VMAP_RAM|VMAP_BLOCK))
+			vb_vread(buf, addr, n);
+		else if ((flags & VMAP_RAM) || !(vm->flags & VM_IOREMAP))
			aligned_vread(buf, addr, n);
 		else /* IOREMAP area is treated as memory hole */
 			memset(buf, 0, n);