From patchwork Wed Jul 5 18:12:56 2023
X-Patchwork-Submitter: Valentin Schneider
X-Patchwork-Id: 116285
From: Valentin Schneider
To: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
    linux-doc@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    bpf@vger.kernel.org, x86@kernel.org
Cc: Steven Rostedt, Masami Hiramatsu, Jonathan Corbet, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, Dave Hansen, "H. Peter Anvin",
    Paolo Bonzini, Wanpeng Li, Vitaly Kuznetsov, Andy Lutomirski,
    Peter Zijlstra, Frederic Weisbecker, "Paul E. McKenney",
    Andrew Morton, Uladzislau Rezki, Christoph Hellwig, Lorenzo Stoakes,
    Josh Poimboeuf, Kees Cook, Sami Tolvanen, Ard Biesheuvel,
    Nicholas Piggin, Juerg Haefliger, Nicolas Saenz Julienne,
    "Kirill A. Shutemov", Nadav Amit, Dan Carpenter, Chuang Wang,
    Yang Jihong, Petr Mladek,
Donenfeld" , Song Liu , Julian Pidancet , Tom Lendacky , Dionna Glaze , =?utf-8?q?Thomas_Wei=C3=9Fschuh?= , Juri Lelli , Daniel Bristot de Oliveira , Marcelo Tosatti , Yair Podemsky Subject: [RFC PATCH 14/14] x86/mm, mm/vmalloc: Defer flush_tlb_kernel_range() targeting NOHZ_FULL CPUs Date: Wed, 5 Jul 2023 19:12:56 +0100 Message-Id: <20230705181256.3539027-15-vschneid@redhat.com> In-Reply-To: <20230705181256.3539027-1-vschneid@redhat.com> References: <20230705181256.3539027-1-vschneid@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H4,RCVD_IN_MSPIKE_WL,SPF_HELO_NONE,SPF_NONE, T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1770605741873280317?= X-GMAIL-MSGID: =?utf-8?q?1770605741873280317?= vunmap()'s issued from housekeeping CPUs are a relatively common source of interference for isolated NOHZ_FULL CPUs, as they are hit by the flush_tlb_kernel_range() IPIs. Given that CPUs executing in userspace do not access data in the vmalloc range, these IPIs could be deferred until their next kernel entry. This does require a guarantee that nothing in the vmalloc range can be accessed in early entry code. vmalloc'd kernel stacks (VMAP_STACK) are AFAICT a safe exception, as a task running in userspace needs to enter kernelspace to execute do_exit() before its stack can be vfree'd. XXX: Validation that nothing in the vmalloc range is accessed in .noinstr or somesuch? Blindly deferring any and all flush of the kernel mappings is a risky move, so introduce a variant of flush_tlb_kernel_range() that explicitly allows deferral. Use it for vunmap flushes. Note that while flush_tlb_kernel_range() may end up issuing a full flush (including user mappings), this only happens when reaching a invalidation range threshold where it is cheaper to do a full flush than to individually invalidate each page in the range via INVLPG. IOW, it doesn't *require* invalidating user mappings, and thus remains safe to defer until a later kernel entry. 
Signed-off-by: Valentin Schneider
---
 arch/x86/include/asm/tlbflush.h |  1 +
 arch/x86/mm/tlb.c               | 23 ++++++++++++++++++++---
 mm/vmalloc.c                    | 15 ++++++++++-----
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 9064aa0027d05..2b3605c09649c 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -255,6 +255,7 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+extern void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end);
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 631df9189ded4..bb18b35e61b4a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
@@ -1045,6 +1046,11 @@ static void do_flush_tlb_all(void *info)
 	__flush_tlb_all();
 }
 
+static bool do_kernel_flush_defer_cond(int cpu, void *info)
+{
+	return !ct_set_cpu_work(cpu, CONTEXT_WORK_TLBI);
+}
+
 void flush_tlb_all(void)
 {
 	count_vm_tlb_event(NR_TLB_REMOTE_FLUSH);
@@ -1061,12 +1067,13 @@ static void do_kernel_range_flush(void *info)
 		flush_tlb_one_kernel(addr);
 }
 
-void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+static inline void
+__flush_tlb_kernel_range(smp_cond_func_t cond_func, unsigned long start, unsigned long end)
 {
 	/* Balance as user space task's flush, a bit conservative */
 	if (end == TLB_FLUSH_ALL ||
 	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
+		on_each_cpu_cond(cond_func, do_flush_tlb_all, NULL, 1);
 	} else {
 		struct flush_tlb_info *info;
 
@@ -1074,13 +1081,23 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 		info = get_flush_tlb_info(NULL, start, end, 0, false,
 					  TLB_GENERATION_INVALID);
 
-		on_each_cpu(do_kernel_range_flush, info, 1);
+		on_each_cpu_cond(cond_func, do_kernel_range_flush, info, 1);
 
 		put_flush_tlb_info();
 		preempt_enable();
 	}
 }
 
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(NULL, start, end);
+}
+
+void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(do_kernel_flush_defer_cond, start, end);
+}
+
 /*
  * This can be used from process context to figure out what the value of
  * CR3 is without needing to do a (slow) __read_cr3().
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 1d13d71687d73..10f99224e12e0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -439,6 +439,11 @@ void vunmap_range_noflush(unsigned long start, unsigned long end)
 	__vunmap_range_noflush(start, end);
 }
 
+#ifndef flush_tlb_kernel_range_deferrable
+#define flush_tlb_kernel_range_deferrable(start, end) \
+	flush_tlb_kernel_range(start, end)
+#endif
+
 /**
  * vunmap_range - unmap kernel virtual addresses
  * @addr: start of the VM area to unmap
@@ -452,7 +457,7 @@ void vunmap_range(unsigned long addr, unsigned long end)
 {
 	flush_cache_vunmap(addr, end);
 	vunmap_range_noflush(addr, end);
-	flush_tlb_kernel_range(addr, end);
+	flush_tlb_kernel_range_deferrable(addr, end);
 }
 
 static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
@@ -1747,7 +1752,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 		list_last_entry(&local_purge_list,
 			struct vmap_area, list)->va_end);
 
-	flush_tlb_kernel_range(start, end);
+	flush_tlb_kernel_range_deferrable(start, end);
 	resched_threshold = lazy_max_pages() << 1;
 
 	spin_lock(&free_vmap_area_lock);
@@ -1849,7 +1854,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	flush_cache_vunmap(va->va_start, va->va_end);
 	vunmap_range_noflush(va->va_start, va->va_end);
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(va->va_start, va->va_end);
+		flush_tlb_kernel_range_deferrable(va->va_start, va->va_end);
 
 	free_vmap_area_noflush(va);
 }
@@ -2207,7 +2212,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	vunmap_range_noflush(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(addr, addr + size);
+		flush_tlb_kernel_range_deferrable(addr, addr + size);
 
 	spin_lock(&vb->lock);
 
@@ -2260,7 +2265,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 	mutex_lock(&vmap_purge_lock);
 	purge_fragmented_blocks_allcpus();
 	if (!__purge_vmap_area_lazy(start, end) && flush)
-		flush_tlb_kernel_range(start, end);
+		flush_tlb_kernel_range_deferrable(start, end);
 	mutex_unlock(&vmap_purge_lock);
 }
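
A note on the generic fallback added to mm/vmalloc.c above: it is keyed off the
preprocessor, so an architecture that provides its own deferrable variant
presumably also needs to make that override visible to the #ifndef. A minimal,
hypothetical sketch of that wiring, using the usual self-aliasing-#define
kernel convention (this is not part of the diff above):

  /* Hypothetical arch header wiring, e.g. in <asm/tlbflush.h> */
  extern void flush_tlb_kernel_range_deferrable(unsigned long start,
                                                unsigned long end);
  /*
   * Self-alias so that "#ifndef flush_tlb_kernel_range_deferrable" in
   * mm/vmalloc.c sees the architecture override and does not redirect
   * the calls back to plain flush_tlb_kernel_range().
   */
  #define flush_tlb_kernel_range_deferrable flush_tlb_kernel_range_deferrable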