From: Baoquan He <bhe@redhat.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, urezki@gmail.com, lstoakes@gmail.com,
    stephen.s.brennan@oracle.com, willy@infradead.org,
    akpm@linux-foundation.org, hch@infradead.org, Baoquan He <bhe@redhat.com>
Subject: [PATCH v3 0/7] mm/vmalloc.c: allow vread() to read out vm_map_ram areas
Date: Fri, 13 Jan 2023 11:19:14 +0800
Message-Id: <20230113031921.64716-1-bhe@redhat.com>

Problem:
***
Stephen reported that vread() skips vm_map_ram areas when reading out
/proc/kcore with the drgn utility. Please see the link below for more
details.

 /proc/kcore reads 0's for vmap_block
 https://lore.kernel.org/all/87ilk6gos2.fsf@oracle.com/T/#u

Root cause:
***
The normal vmalloc API uses struct vmap_area to manage the allocated
kernel virtual area, and associates a vm_struct with it to store more
information and pass it out. However, an area reserved through the
vm_map_ram() interface has no vm_struct associated with it. So the
current code in vread() skips vm_map_ram areas via the 'if (!va->vm)'
conditional check (a simplified sketch of that check is appended after
the diffstat at the end of this mail).

Solution:
***
To mark areas reserved through the vm_map_ram() interface, add a
'flags' field to struct vmap_area. Bit 0 indicates a vm_map_ram area
created through the vm_map_ram() interface; bit 1 marks the sub-type of
vm_map_ram area which uses vmap_block to manage split regions via
vb_alloc/free(). Also add a bitmap field 'used_map' to struct
vmap_block to mark the further subdivided regions that are in use, so
that they can be told apart from the dirty and free regions of the
vmap_block. An outline of these data structure additions is also
appended at the end of this mail.

With the help of the above vmap_area->flags and vmap_block->used_map,
we can recognize and handle vm_map_ram areas successfully. All of this
is done in patches 1~3.

Meanwhile, do some improvement on areas related to vm_map_ram areas in
patches 4 and 5. Also change the area flag from VM_ALLOC to VM_IOREMAP
in patches 6 and 7, because this shows them as 'ioremap' in
/proc/vmallocinfo and excludes them from /proc/kcore.

Testing
***
Only did basic testing on a kvm guest, running the command below to
access the kcore file and collect statistics:

 makedumpfile --mem-usage /proc/kcore

Changelog
***
v2->v3:
- Benefiting from find_unlink_vmap_area() introduced by Uladzislau, we
  no longer need to worry about va->vm and va->flags being reset during
  freeing.
- Change to identify vm_map_ram areas by checking VMAP_RAM in va->flags
  in vread().
- Rename the old vb_vread() to vmap_ram_vread().
- Handle the two kinds of vm_map_ram area reading in vmap_ram_vread()
  respectively.
- Fix a bug in the while loop code block of vmap_block reading, found
  by Lorenzo.
- Improve the conditional checks related to vm_map_ram areas, as
  suggested by Lorenzo.

v1->v2:
- Change alloc_vmap_area() to pass in va_flags so that we can pass and
  set vmap_area->flags for vm_map_ram areas. With this, no extra action
  needs to be added to acquire vmap_area_lock when setting
  vmap_area->flags. Uladzislau reviewed v1 and pointed out the issue.
- Improve vb_vread() to cover the case where reading starts from a
  dirty or free region.

RFC->v1:
- Add a new field 'flags' in vmap_area to mark vm_map_ram areas. It
  could be risky to reuse the vm union in vmap_area as done in the RFC.
  I will consider reusing the union in vmap_area to save memory later.
  For now, just take the simpler way so that we can focus on resolving
  the main problem.
- Add patches 4~7 for optimization.

Baoquan He (7):
  mm/vmalloc.c: add used_map into vmap_block to track space of
    vmap_block
  mm/vmalloc.c: add flags to mark vm_map_ram area
  mm/vmalloc.c: allow vread() to read out vm_map_ram areas
  mm/vmalloc: explicitly identify vm_map_ram area when shown in
    /proc/vmcoreinfo
  mm/vmalloc: skip the uninitilized vmalloc areas
  powerpc: mm: add VM_IOREMAP flag to the vmalloc area
  sh: mm: set VM_IOREMAP flag to the vmalloc area

 arch/powerpc/kernel/pci_64.c |   2 +-
 arch/sh/kernel/cpu/sh4/sq.c  |   2 +-
 include/linux/vmalloc.h      |   1 +
 mm/vmalloc.c                 | 114 ++++++++++++++++++++++++++++++-----
 4 files changed, 101 insertions(+), 18 deletions(-)
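
For reference, below is a simplified sketch of the pre-series vread()
loop in mm/vmalloc.c that causes the problem described above. It is
trimmed and paraphrased rather than the verbatim upstream code; the
point is the 'if (!va->vm)' check that makes vm_map_ram areas come back
as zeroes through /proc/kcore.

	/* Simplified sketch of the pre-series vread(); details trimmed. */
	long vread(char *buf, char *addr, unsigned long count)
	{
		struct vmap_area *va;
		struct vm_struct *vm;

		/* ... locking and lookup of the first vmap_area trimmed ... */

		list_for_each_entry_from(va, &vmap_area_list, list) {
			if (!count)
				break;

			/*
			 * vm_map_ram areas have no vm_struct attached, so this
			 * check skips them and the reader sees zero-filled
			 * memory instead of their contents.
			 */
			if (!va->vm)
				continue;

			vm = va->vm;
			/* ... copy up to get_vm_area_size(vm) bytes into buf ... */
		}

		/* ... unlock and return the number of bytes copied ... */
	}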
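
And below is an outline of the data structure additions summarized in
the Solution section. The field names 'flags' and 'used_map' and the
bit-0 macro VMAP_RAM are as referenced in this series; the bit-1 macro
name (VMAP_BLOCK) and the exact field placement are shown for
illustration only, so treat this as a sketch of the shape of the change
rather than the exact hunks.

	/* Bit 0: area reserved through vm_map_ram(). */
	#define VMAP_RAM	0x1
	/* Bit 1: vm_map_ram area managed through a vmap_block (vb_alloc/free). */
	#define VMAP_BLOCK	0x2

	struct vmap_area {
		unsigned long va_start;
		unsigned long va_end;

		struct rb_node rb_node;		/* address sorted rbtree */
		struct list_head list;		/* address sorted list */

		union {
			unsigned long subtree_max_size;	/* in "free" tree */
			struct vm_struct *vm;		/* in "busy" tree */
		};
		unsigned long flags;		/* new: marks vm_map_ram areas */
	};

	struct vmap_block {
		spinlock_t lock;
		struct vmap_area *va;
		unsigned long free, dirty;
		/*
		 * new: bitmap of regions handed out by vb_alloc(), so the
		 * used regions can be told apart from dirty and free ones
		 * when vread() walks a vmap_block.
		 */
		DECLARE_BITMAP(used_map, VMAP_BBMAP_BITS);
		unsigned long dirty_min, dirty_max;
		struct list_head free_list;
		struct rcu_head rcu_head;
		struct list_head purge;
	};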