From patchwork Fri Feb 17 00:08:10 2023
X-Patchwork-Submitter: John Johansen
X-Patchwork-Id: 58302
Message-ID: <4595e7b4-ea31-5b01-f636-259e84737dfc@canonical.com>
Date: Thu, 16 Feb 2023 16:08:10 -0800
From: John Johansen
Subject: [PATCH v3] apparmor: global buffers spin lock may get contended
To: LKML
Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
 Tomasz Figa, linux-security-module@vger.kernel.org, Anil Altinay
Organization: Canonical

From f44dee132b0b55386b7ea31e68c80d367b073ee0 Mon Sep 17 00:00:00 2001
From: John Johansen
Date: Tue, 25 Oct 2022 01:18:41 -0700
Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
 contention

On a heavily loaded machine there can be lock contention on the global
buffers lock. Add a percpu list to cache buffers on when lock contention
is encountered.

When allocating buffers, attempt to use cached buffers first, before
taking the global buffers lock. When freeing buffers, try to put them
back on the global list, but if contention is encountered, put the
buffer on the percpu list instead.
The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention: the hold time is rapidly increased
and slowly ramped back down.

v3:
- limit the number of buffers that can be pushed onto the percpu list.
  This avoids a problem on some kernels where one percpu list can
  inherit buffers from another cpu after a reschedule, causing more
  kernel memory to be used than is necessary. Under normal conditions
  this should eventually return to normal, but under pathological
  conditions the extra memory consumption may have been unbounded.
v2:
- dynamically adjust the buffer hold time on the percpu list based on
  lock contention.
v1:
- cache buffers on the percpu list on lock contention.

Reported-by: Sergey Senozhatsky
Signed-off-by: John Johansen
---
 security/apparmor/lsm.c | 85 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 80 insertions(+), 5 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 25114735bc11..21f5ea20e715 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -49,12 +49,20 @@ union aa_buffer {
 	char buffer[1];
 };
 
+struct aa_local_cache {
+	unsigned int contention;
+	unsigned int hold;
+	unsigned int count;
+	struct list_head head;
+};
+
 #define RESERVE_COUNT 2
 static int reserve_count = RESERVE_COUNT;
 static int buffer_count;
 
 static LIST_HEAD(aa_global_buffers);
 static DEFINE_SPINLOCK(aa_buffers_lock);
+static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
 
 /*
  * LSM hook functions
@@ -1622,14 +1630,45 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
 	return 0;
 }
 
+static void update_contention(struct aa_local_cache *cache)
+{
+	cache->contention += 3;
+	if (cache->contention > 9)
+		cache->contention = 9;
+	cache->hold += 1 << cache->contention;	/* 8, 64, 512 */
+}
+
 char *aa_get_buffer(bool in_atomic)
 {
 	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
 	bool try_again = true;
 	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
 
+	/* use per cpu cached buffers first */
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!list_empty(&cache->head)) {
+		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
+		list_del(&aa_buf->list);
+		cache->hold--;
+		cache->count--;
+		put_cpu_ptr(&aa_local_buffers);
+		return &aa_buf->buffer[0];
+	}
+	put_cpu_ptr(&aa_local_buffers);
+
+	if (!spin_trylock(&aa_buffers_lock)) {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		put_cpu_ptr(&aa_local_buffers);
+		spin_lock(&aa_buffers_lock);
+	} else {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		if (cache->contention)
+			cache->contention--;
+		put_cpu_ptr(&aa_local_buffers);
+	}
 retry:
-	spin_lock(&aa_buffers_lock);
 	if (buffer_count > reserve_count ||
 	    (in_atomic && !list_empty(&aa_global_buffers))) {
 		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
@@ -1655,6 +1694,7 @@ char *aa_get_buffer(bool in_atomic)
 	if (!aa_buf) {
 		if (try_again) {
 			try_again = false;
+			spin_lock(&aa_buffers_lock);
 			goto retry;
 		}
 		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
@@ -1666,15 +1706,40 @@ char *aa_get_buffer(bool in_atomic)
 
 void aa_put_buffer(char *buf)
 {
 	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
 
 	if (!buf)
 		return;
 	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
 
-	spin_lock(&aa_buffers_lock);
-	list_add(&aa_buf->list, &aa_global_buffers);
-	buffer_count++;
-	spin_unlock(&aa_buffers_lock);
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!cache->hold || cache->count >= 2) {
+		put_cpu_ptr(&aa_local_buffers);
+		if (spin_trylock(&aa_buffers_lock)) {
+locked:
+			list_add(&aa_buf->list, &aa_global_buffers);
+			buffer_count++;
+			spin_unlock(&aa_buffers_lock);
+			cache = get_cpu_ptr(&aa_local_buffers);
+			if (cache->contention)
+				cache->contention--;
+			put_cpu_ptr(&aa_local_buffers);
+			return;
+		}
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		if (cache->count >= 2) {
+			put_cpu_ptr(&aa_local_buffers);
+			spin_lock(&aa_buffers_lock);
+			/* force putting the buffer to global */
+			goto locked;
+		}
+	}
+
+	/* cache in percpu list */
+	list_add(&aa_buf->list, &cache->head);
+	cache->count++;
+	put_cpu_ptr(&aa_local_buffers);
 }
 
 /*
@@ -1716,6 +1781,16 @@ static int __init alloc_buffers(void)
 	union aa_buffer *aa_buf;
 	int i, num;
 
+	/*
+	 * per cpu set of cached allocated buffers used to help reduce
+	 * lock contention
+	 */
+	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).contention = 0;
+		per_cpu(aa_local_buffers, i).hold = 0;
+		per_cpu(aa_local_buffers, i).count = 0;
+		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
+	}
 	/*
 	 * A function may require two buffers at once. Usually the buffers are
 	 * used for a short period of time and are shared. On UP kernel buffers