From patchwork Wed Oct 4 19:02:46 2023
X-Patchwork-Submitter: Stefan Roesch
X-Patchwork-Id: 148552
From: Stefan Roesch <shr@devkernel.io>
To: kernel-team@fb.com
Cc: shr@devkernel.io, akpm@linux-foundation.org, david@redhat.com,
    hannes@cmpxchg.org, riel@surriel.com, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH v1 1/4] mm/ksm: add ksm advisor
Date: Wed, 4 Oct 2023 12:02:46 -0700
Message-Id: <20231004190249.829015-2-shr@devkernel.io>
X-Mailer: git-send-email 2.39.3
In-Reply-To: <20231004190249.829015-1-shr@devkernel.io>
References: <20231004190249.829015-1-shr@devkernel.io>

This adds the ksm advisor. The ksm advisor automatically manages the
pages_to_scan setting to achieve a target scan time. The target scan
time defines how many seconds it should take to scan all the candidate
KSM pages. In other words, the advisor adjusts the pages_to_scan rate
to achieve the target scan time.

The algorithm has a max and min value to:
- guarantee responsiveness to changes
- avoid spending too much CPU

The respective parameters are:
- ksm_advisor_target_scan_time (how many seconds a scan should take)
- ksm_advisor_min_pages (minimum value for pages_to_scan per batch)
- ksm_advisor_max_pages (maximum value for pages_to_scan per batch)

The algorithm calculates the change value based on the target scan time
and the previous scan time. To avoid perturbations, an exponentially
weighted moving average is applied.

By default the advisor is disabled.
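For intuition, here is a small worked example of the smoothing step
(the numbers are hypothetical; EWMA_WEIGHT = 50 is the constant the
patch uses). If the previous change factor was 100% and the current
scan took 140% as long as the last one, then:

    new change = ((100 - 50) * 100 + 50 * 140) / 100 = 120

so the advisor moves only 20 points toward the new 140% reading
instead of jumping straight to it.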
Currently there are two advisors: none and scan_time.

Tests with various workloads have shown considerable CPU savings. Most
of the workloads I have investigated have more candidate pages during
startup; once the workload is stable in terms of memory, the number of
candidate pages is reduced. Without the advisor, pages_to_scan needs to
be sized for the maximum number of candidate pages, so having this
advisor definitely helps in reducing CPU consumption.

For the Instagram workload, the advisor achieves a 25% CPU reduction.
Once the memory is stable, the pages_to_scan parameter gets reduced to
about 40% of its max value.

Signed-off-by: Stefan Roesch <shr@devkernel.io>
---
 mm/ksm.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 131 insertions(+), 1 deletion(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 7efcc68ccc6e..c9edfb293024 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -248,6 +248,9 @@ static struct kmem_cache *rmap_item_cache;
 static struct kmem_cache *stable_node_cache;
 static struct kmem_cache *mm_slot_cache;
 
+/* Default number of pages to scan per batch */
+#define DEFAULT_PAGES_TO_SCAN 100
+
 /* The number of pages scanned */
 static unsigned long ksm_pages_scanned;
 
@@ -276,7 +279,7 @@ static unsigned int ksm_stable_node_chains_prune_millisecs = 2000;
 static int ksm_max_page_sharing = 256;
 
 /* Number of pages ksmd should scan in one batch */
-static unsigned int ksm_thread_pages_to_scan = 100;
+static unsigned int ksm_thread_pages_to_scan = DEFAULT_PAGES_TO_SCAN;
 
 /* Milliseconds ksmd should sleep between batches */
 static unsigned int ksm_thread_sleep_millisecs = 20;
@@ -297,6 +300,129 @@ unsigned long ksm_zero_pages;
 /* The number of pages that have been skipped due to "smart scanning" */
 static unsigned long ksm_pages_skipped;
 
+/* At least scan this many pages per batch. */
+static unsigned long ksm_advisor_min_pages = 500;
+
+/* Don't scan more than max pages per batch. */
+static unsigned long ksm_advisor_max_pages = 5000;
+
+/* Target scan time in seconds to analyze all KSM candidate pages. */
+static unsigned long ksm_advisor_target_scan_time = 200;
+
+/* Exponentially weighted moving average. */
+#define EWMA_WEIGHT 50
+
+/**
+ * struct advisor_ctx - metadata for KSM advisor
+ * @start_scan: start time of the current scan
+ * @scan_time: scan time of previous scan
+ * @change: change in percent to pages_to_scan parameter
+ */
+struct advisor_ctx {
+	ktime_t start_scan;
+	s64 scan_time;
+	unsigned long change;
+};
+static struct advisor_ctx advisor_ctx;
+
+/* Define different advisors */
+enum ksm_advisor_type {
+	KSM_ADVISOR_NONE,
+	KSM_ADVISOR_FIRST = KSM_ADVISOR_NONE,
+	KSM_ADVISOR_SCAN_TIME,
+	KSM_ADVISOR_LAST = KSM_ADVISOR_SCAN_TIME
+};
+static enum ksm_advisor_type ksm_advisor;
+
+static void init_advisor(void)
+{
+	advisor_ctx.start_scan = 0;
+	advisor_ctx.scan_time = 0;
+	advisor_ctx.change = 0;
+}
+
+/*
+ * Use previous scan time if available, otherwise use current scan time as an
+ * approximation for the previous scan time.
+ */
+static inline s64 prev_scan_time(struct advisor_ctx *ctx, s64 new_scan_time)
+{
+	return ctx->scan_time ? ctx->scan_time : new_scan_time;
+}
+
+/* Calculate exponential weighted moving average */
+static unsigned long ewma(unsigned long prev, unsigned long curr)
+{
+	return ((100 - EWMA_WEIGHT) * prev + EWMA_WEIGHT * curr) / 100;
+}
+
+/*
+ * The scan time advisor is based on the current scan rate and the target
+ * scan rate.
+ *
+ * new_pages_to_scan = pages_to_scan * (scan_time / target_scan_time)
+ *
+ * To avoid perturbations it calculates a change factor of previous changes.
+ * A new change factor is calculated for each iteration and it uses an
+ * exponentially weighted moving average. The new pages_to_scan value is
+ * multiplied by that change factor:
+ *
+ * new_pages_to_scan *= change factor
+ *
+ * In addition the new pages_to_scan value is capped by the max and min
+ * limits.
+ */
+static void scan_time_advisor(s64 scan_time)
+{
+	unsigned long pages;
+	unsigned long factor;
+	unsigned long change;
+	unsigned long last_scan_time;
+
+	pages = ksm_thread_pages_to_scan;
+	last_scan_time = prev_scan_time(&advisor_ctx, scan_time);
+
+	/* Calculate scan time as percentage of target scan time */
+	factor = ksm_advisor_target_scan_time * 100 / scan_time;
+	factor = factor ? factor : 1;
+
+	/*
+	 * Calculate scan time as percentage of last scan time and use
+	 * exponentially weighted average to smooth it
+	 */
+	change = scan_time * 100 / last_scan_time;
+	change = change ? change : 1;
+	change = ewma(advisor_ctx.change, change);
+
+	/* Calculate new scan rate based on target scan rate. */
+	pages = pages * 100 / factor;
+	/* Update pages_to_scan by weighted change percentage. */
+	pages = pages * change / 100;
+
+	/* Cap new pages_to_scan value */
+	pages = max(pages, ksm_advisor_min_pages);
+	pages = min(pages, ksm_advisor_max_pages);
+
+	/* Update advisor context */
+	advisor_ctx.change = change;
+	advisor_ctx.scan_time = scan_time;
+	ksm_thread_pages_to_scan = pages;
+}
+
+static void run_advisor(void)
+{
+	if (ksm_advisor == KSM_ADVISOR_SCAN_TIME) {
+		s64 scan_time;
+
+		/* Convert scan time to seconds */
+		scan_time = ktime_ms_delta(ktime_get(), advisor_ctx.start_scan);
+		scan_time /= MSEC_PER_SEC;
+		scan_time = scan_time ? scan_time : 1;
+
+		scan_time_advisor(scan_time);
+	}
+}
+
 #ifdef CONFIG_NUMA
 /* Zeroed when merging across nodes is not allowed */
 static unsigned int ksm_merge_across_nodes = 1;
@@ -2401,6 +2527,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
 	mm_slot = ksm_scan.mm_slot;
 	if (mm_slot == &ksm_mm_head) {
+		advisor_ctx.start_scan = ktime_get();
 		trace_ksm_start_scan(ksm_scan.seqnr, ksm_rmap_items);
 
 		/*
@@ -2558,6 +2685,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	if (mm_slot != &ksm_mm_head)
 		goto next_mm;
 
+	run_advisor();
+
 	trace_ksm_stop_scan(ksm_scan.seqnr, ksm_rmap_items);
 	ksm_scan.seqnr++;
 	return NULL;
@@ -3603,6 +3732,7 @@ static int __init ksm_init(void)
 	zero_checksum = calc_checksum(ZERO_PAGE(0));
 	/* Default to false for backwards compatibility */
 	ksm_use_zero_pages = false;
+	init_advisor();
 
 	err = ksm_slab_init();
 	if (err)
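
For anyone who wants to experiment with how the knobs interact before
running the kernel code, below is a minimal userspace sketch of the
same calculation. It is not part of the patch: the constants and
arithmetic are copied from the diff above, but the ktime bookkeeping is
replaced by a plain scan_time argument, kernel min()/max() by plain
ifs, and the globals (min_pages, advise(), etc.) are hypothetical
stand-ins for the kernel names.

/* Userspace sketch of scan_time_advisor(); mirrors the patch's
 * arithmetic, not the kernel types. scan_time must be >= 1, as the
 * patch guarantees via run_advisor().
 */
#include <stdio.h>

#define EWMA_WEIGHT 50

static unsigned long min_pages = 500;           /* ksm_advisor_min_pages */
static unsigned long max_pages = 5000;          /* ksm_advisor_max_pages */
static unsigned long target_scan_time = 200;    /* seconds */

static unsigned long pages_to_scan = 100;       /* DEFAULT_PAGES_TO_SCAN */
static unsigned long prev_change;               /* advisor_ctx.change */
static long long prev_scan_time;                /* advisor_ctx.scan_time */

static unsigned long ewma(unsigned long prev, unsigned long curr)
{
        return ((100 - EWMA_WEIGHT) * prev + EWMA_WEIGHT * curr) / 100;
}

static void advise(long long scan_time)
{
        unsigned long pages = pages_to_scan;
        long long last = prev_scan_time ? prev_scan_time : scan_time;
        unsigned long factor, change;

        /* Scan time as percentage of the target scan time */
        factor = target_scan_time * 100 / scan_time;
        factor = factor ? factor : 1;

        /* Smoothed change factor relative to the previous scan */
        change = scan_time * 100 / last;
        change = change ? change : 1;
        change = ewma(prev_change, change);

        pages = pages * 100 / factor;   /* scale toward the target rate */
        pages = pages * change / 100;   /* apply smoothed change factor */

        if (pages < min_pages)
                pages = min_pages;
        if (pages > max_pages)
                pages = max_pages;

        prev_change = change;
        prev_scan_time = scan_time;
        pages_to_scan = pages;
}

int main(void)
{
        /* Feed a few hypothetical scan times (seconds) and watch the knob. */
        long long times[] = { 400, 350, 300, 250, 200, 180 };

        for (int i = 0; i < 6; i++) {
                advise(times[i]);
                printf("scan_time=%llds -> pages_to_scan=%lu\n",
                       times[i], pages_to_scan);
        }
        return 0;
}

With these made-up inputs the knob ramps up while scans are slower than
the 200s target and settles back once they approach it, which is the
startup-versus-steady-state behavior the commit message describes.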