Message ID: 20230721012932.190742-1-ying.huang@intel.com
Series: memory tiering: calculate abstract distance based on ACPI HMAT
Message
Huang, Ying
July 21, 2023, 1:29 a.m. UTC
We have the explicit memory tiers framework to manage systems with multiple types of memory, e.g., DRAM in DIMM slots and CXL memory devices. Where, same kind of memory devices will be grouped into memory types, then put into memory tiers. To describe the performance of a memory type, abstract distance is defined. Which is in direct proportion to the memory latency and inversely proportional to the memory bandwidth. To keep the code as simple as possible, fixed abstract distance is used in dax/kmem to describe slow memory such as Optane DCPMM.

To support more memory types, in this series, we added the abstract distance calculation algorithm management mechanism, provided a algorithm implementation based on ACPI HMAT, and used the general abstract distance calculation interface in dax/kmem driver. So, dax/kmem can support HBM (high bandwidth memory) in addition to the original Optane DCPMM.

Changelog:

V1 (from RFC):

- Added some comments per Aneesh's comments, Thanks!

Best Regards,
Huang, Ying
Comments
Thanks for this Huang, I had been hoping to take a look at it this week but have run out of time. I'm keen to do some testing with it as well. Hopefully next week...

Huang Ying <ying.huang@intel.com> writes:

> We have the explicit memory tiers framework to manage systems with
> multiple types of memory, e.g., DRAM in DIMM slots and CXL memory
> devices. Where, same kind of memory devices will be grouped into
> memory types, then put into memory tiers. To describe the performance
> of a memory type, abstract distance is defined. Which is in direct
> proportion to the memory latency and inversely proportional to the
> memory bandwidth. To keep the code as simple as possible, fixed
> abstract distance is used in dax/kmem to describe slow memory such as
> Optane DCPMM.
>
> To support more memory types, in this series, we added the abstract
> distance calculation algorithm management mechanism, provided a
> algorithm implementation based on ACPI HMAT, and used the general
> abstract distance calculation interface in dax/kmem driver. So,
> dax/kmem can support HBM (high bandwidth memory) in addition to the
> original Optane DCPMM.
>
> Changelog:
>
> V1 (from RFC):
>
> - Added some comments per Aneesh's comments, Thanks!
>
> Best Regards,
> Huang, Ying
On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:

> Thanks for this Huang, I had been hoping to take a look at it this week
> but have run out of time. I'm keen to do some testing with it as well.

Thanks. I'll queue this in mm-unstable for some testing. Detailed review and testing would be appreciated.

I made some adjustments to handle the renaming of destroy_memory_type() to put_memory_type() (https://lkml.kernel.org/r/20230706063905.543800-1-linmiaohe@huawei.com)
On 24-Jul-23 11:28 PM, Andrew Morton wrote:
> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:
>
>> Thanks for this Huang, I had been hoping to take a look at it this week
>> but have run out of time. I'm keen to do some testing with it as well.
>
> Thanks. I'll queue this in mm-unstable for some testing. Detailed
> review and testing would be appreciated.

I gave this series a try on a 2P system with 2 CXL cards. I don't trust the bandwidth and latency numbers reported by HMAT here, but FWIW, this patchset puts the CXL nodes on a lower tier than DRAM nodes.

Regards,
Bharata.
Hi, Rao,

Bharata B Rao <bharata@amd.com> writes:

> On 24-Jul-23 11:28 PM, Andrew Morton wrote:
>> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:
>>
>>> Thanks for this Huang, I had been hoping to take a look at it this week
>>> but have run out of time. I'm keen to do some testing with it as well.
>>
>> Thanks. I'll queue this in mm-unstable for some testing. Detailed
>> review and testing would be appreciated.
>
> I gave this series a try on a 2P system with 2 CXL cards. I don't trust the
> bandwidth and latency numbers reported by HMAT here, but FWIW, this patchset
> puts the CXL nodes on a lower tier than DRAM nodes.

Thank you very much!

Can I add your "Tested-by" for the series?

--
Best Regards,
Huang, Ying
On 11-Aug-23 11:56 AM, Huang, Ying wrote:
> Hi, Rao,
>
> Bharata B Rao <bharata@amd.com> writes:
>
>> On 24-Jul-23 11:28 PM, Andrew Morton wrote:
>>> On Fri, 21 Jul 2023 14:15:31 +1000 Alistair Popple <apopple@nvidia.com> wrote:
>>>
>>>> Thanks for this Huang, I had been hoping to take a look at it this week
>>>> but have run out of time. I'm keen to do some testing with it as well.
>>>
>>> Thanks. I'll queue this in mm-unstable for some testing. Detailed
>>> review and testing would be appreciated.
>>
>> I gave this series a try on a 2P system with 2 CXL cards. I don't trust the
>> bandwidth and latency numbers reported by HMAT here, but FWIW, this patchset
>> puts the CXL nodes on a lower tier than DRAM nodes.
>
> Thank you very much!
>
> Can I add your "Tested-by" for the series?

Yes if the above test qualifies for it, please go ahead.

Regards,
Bharata.