From patchwork Mon Oct 17 14:54:08 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: tip-bot2 for Thomas Gleixner
X-Patchwork-Id: 3528
Date: Mon, 17 Oct 2022 14:54:08 -0000
From: "tip-bot2 for Thomas Gleixner"
Sender: tip-bot2@linutronix.de
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: x86/core] x86/percpu: Move preempt_count next to current_task
Cc: Thomas Gleixner, "Peter Zijlstra (Intel)", x86@kernel.org,
 linux-kernel@vger.kernel.org
In-Reply-To: <20220915111145.284170644@infradead.org>
References: <20220915111145.284170644@infradead.org>
Message-ID: <166601844863.401.9155951814716704738.tip-bot2@tip-bot2>
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the x86/core branch of tip:

Commit-ID:     64701838bf0575ef8acb1ad2db5934e864f3e6c3
Gitweb:        https://git.kernel.org/tip/64701838bf0575ef8acb1ad2db5934e864f3e6c3
Author:        Thomas Gleixner
AuthorDate:    Thu, 15 Sep 2022 13:11:02 +02:00
Committer:     Peter Zijlstra
CommitterDate: Mon, 17 Oct 2022 16:41:04 +02:00

x86/percpu: Move preempt_count next to current_task

Add preempt_count to pcpu_hot, since it is one of the most used
per-cpu variables.

Signed-off-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lore.kernel.org/r/20220915111145.284170644@infradead.org
---
 arch/x86/include/asm/current.h |  1 +
 arch/x86/include/asm/preempt.h | 27 ++++++++++++++-------------
 arch/x86/kernel/cpu/common.c   |  8 +-------
 3 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/current.h b/arch/x86/include/asm/current.h
index 63c42ac..0f4b462 100644
--- a/arch/x86/include/asm/current.h
+++ b/arch/x86/include/asm/current.h
@@ -15,6 +15,7 @@ struct pcpu_hot {
 	union {
 		struct {
 			struct task_struct	*current_task;
+			int			preempt_count;
 		};
 		u8	pad[64];
 	};
diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
index 5f6daea..2d13f25 100644
--- a/arch/x86/include/asm/preempt.h
+++ b/arch/x86/include/asm/preempt.h
@@ -4,11 +4,11 @@
 
 #include <asm/rmwcc.h>
 #include <asm/percpu.h>
+#include <asm/current.h>
+
 #include <linux/thread_info.h>
 #include <linux/static_call_types.h>
 
-DECLARE_PER_CPU(int, __preempt_count);
-
 /* We use the MSB mostly because its available */
 #define PREEMPT_NEED_RESCHED	0x80000000
 
@@ -24,7 +24,7 @@ DECLARE_PER_CPU(int, __preempt_count);
  */
 static __always_inline int preempt_count(void)
 {
-	return raw_cpu_read_4(__preempt_count) & ~PREEMPT_NEED_RESCHED;
+	return raw_cpu_read_4(pcpu_hot.preempt_count) & ~PREEMPT_NEED_RESCHED;
 }
 
 static __always_inline void preempt_count_set(int pc)
@@ -32,10 +32,10 @@ static __always_inline void preempt_count_set(int pc)
 	int old, new;
 
 	do {
-		old = raw_cpu_read_4(__preempt_count);
+		old = raw_cpu_read_4(pcpu_hot.preempt_count);
 		new = (old & PREEMPT_NEED_RESCHED) |
 			(pc & ~PREEMPT_NEED_RESCHED);
-	} while (raw_cpu_cmpxchg_4(__preempt_count, old, new) != old);
+	} while (raw_cpu_cmpxchg_4(pcpu_hot.preempt_count, old, new) != old);
 }
 
 /*
@@ -44,7 +44,7 @@ static __always_inline void preempt_count_set(int pc)
  */
 #define init_task_preempt_count(p) do { } while (0)
 #define init_idle_preempt_count(p, cpu) do { \
-	per_cpu(__preempt_count, (cpu)) = PREEMPT_DISABLED; \
+	per_cpu(pcpu_hot.preempt_count, (cpu)) = PREEMPT_DISABLED; \
 } while (0)
 
 /*
@@ -58,17 +58,17 @@ static __always_inline void preempt_count_set(int pc)
 
 static __always_inline void set_preempt_need_resched(void)
 {
-	raw_cpu_and_4(__preempt_count, ~PREEMPT_NEED_RESCHED);
+	raw_cpu_and_4(pcpu_hot.preempt_count, ~PREEMPT_NEED_RESCHED);
 }
 
 static __always_inline void clear_preempt_need_resched(void)
 {
-	raw_cpu_or_4(__preempt_count, PREEMPT_NEED_RESCHED);
+	raw_cpu_or_4(pcpu_hot.preempt_count, PREEMPT_NEED_RESCHED);
 }
 
 static __always_inline bool test_preempt_need_resched(void)
 {
-	return !(raw_cpu_read_4(__preempt_count) & PREEMPT_NEED_RESCHED);
+	return !(raw_cpu_read_4(pcpu_hot.preempt_count) & PREEMPT_NEED_RESCHED);
 }
 
 /*
@@ -77,12 +77,12 @@ static __always_inline bool test_preempt_need_resched(void)
 
 static __always_inline void __preempt_count_add(int val)
 {
-	raw_cpu_add_4(__preempt_count, val);
+	raw_cpu_add_4(pcpu_hot.preempt_count, val);
 }
 
 static __always_inline void __preempt_count_sub(int val)
 {
-	raw_cpu_add_4(__preempt_count, -val);
+	raw_cpu_add_4(pcpu_hot.preempt_count, -val);
 }
 
 /*
@@ -92,7 +92,8 @@ static __always_inline void __preempt_count_sub(int val)
  */
 static __always_inline bool __preempt_count_dec_and_test(void)
 {
-	return GEN_UNARY_RMWcc("decl", __preempt_count, e, __percpu_arg([var]));
+	return GEN_UNARY_RMWcc("decl", pcpu_hot.preempt_count, e,
+			       __percpu_arg([var]));
 }
 
 /*
@@ -100,7 +101,7 @@ static __always_inline bool __preempt_count_dec_and_test(void)
  */
 static __always_inline bool should_resched(int preempt_offset)
 {
-	return unlikely(raw_cpu_read_4(__preempt_count) == preempt_offset);
+	return unlikely(raw_cpu_read_4(pcpu_hot.preempt_count) == preempt_offset);
 }
 
 #ifdef CONFIG_PREEMPTION
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 5207153..cafb6bd 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2014,6 +2014,7 @@ __setup("clearcpuid=", setup_clearcpuid);
 
 DEFINE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot) = {
 	.current_task	= &init_task,
+	.preempt_count	= INIT_PREEMPT_COUNT,
 };
 EXPORT_PER_CPU_SYMBOL(pcpu_hot);
 
@@ -2022,13 +2023,9 @@ DEFINE_PER_CPU_FIRST(struct fixed_percpu_data,
 		     fixed_percpu_data) __aligned(PAGE_SIZE) __visible;
 EXPORT_PER_CPU_SYMBOL_GPL(fixed_percpu_data);
 
-
 DEFINE_PER_CPU(void *, hardirq_stack_ptr);
 DEFINE_PER_CPU(bool, hardirq_stack_inuse);
 
-DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
-EXPORT_PER_CPU_SYMBOL(__preempt_count);
-
 DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) = TOP_OF_INIT_STACK;
 
 static void wrmsrl_cstar(unsigned long val)
@@ -2081,9 +2078,6 @@ void syscall_init(void)
 
 #else	/* CONFIG_X86_64 */
 
-DEFINE_PER_CPU(int, __preempt_count) = INIT_PREEMPT_COUNT;
-EXPORT_PER_CPU_SYMBOL(__preempt_count);
-
 /*
  * On x86_32, vm86 modifies tss.sp0, so sp0 isn't a reliable way to find
  * the top of the kernel stack. Use an extra percpu variable to track the
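
The packing trick used by pcpu_hot above is ordinary C: the hot fields sit in
an anonymous struct, which is unioned with a 64-byte pad so the whole object
spans exactly one cache line, and the instance is cacheline-aligned. The
stand-alone user-space sketch below only illustrates that layout; the names
hot_data, sketch.c and the pad size are illustrative assumptions, not kernel
code or part of this patch.

/* sketch.c - illustrative only; build with: gcc -Wall -o sketch sketch.c */
#include <stdio.h>
#include <stddef.h>

struct hot_data {				/* stand-in for struct pcpu_hot */
	union {
		struct {			/* anonymous: fields accessed directly */
			void *current_task;	/* analogous to pcpu_hot.current_task */
			int   preempt_count;	/* analogous to the field added here  */
		};
		char pad[64];			/* reserve one full 64-byte cache line */
	};
} __attribute__((__aligned__(64)));		/* start the object on a cache line   */

int main(void)
{
	printf("size: %zu\n", sizeof(struct hot_data));
	printf("current_task at %zu, preempt_count at %zu\n",
	       offsetof(struct hot_data, current_task),
	       offsetof(struct hot_data, preempt_count));
	return 0;
}

On a typical LP64 build this prints a size of 64 and offsets 0 and 8, i.e.
both fields share one cache line, which is the point of the patch: a
preempt_count access now touches the same line the scheduler already keeps
hot for current_task.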