From patchwork Mon Dec 25 04:42:06 2023
X-Patchwork-Submitter: Jisheng Zhang
X-Patchwork-Id: 183120
From: Jisheng Zhang
To: Paul Walmsley, Palmer Dabbelt, Albert Ou
Cc: linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Eric Biggers, Conor Dooley, Qingfang DENG, Charlie Jenkins
Subject: [PATCH v4 1/2] riscv: introduce RISCV_EFFICIENT_UNALIGNED_ACCESS
Date: Mon, 25 Dec 2023 12:42:06 +0800
Message-Id: <20231225044207.3821-2-jszhang@kernel.org>
In-Reply-To: <20231225044207.3821-1-jszhang@kernel.org>
References: <20231225044207.3821-1-jszhang@kernel.org>

Some riscv implementations such as T-HEAD's C906, C908, C910 and C920
support efficient unaligned access; for performance reasons we want to
enable HAVE_EFFICIENT_UNALIGNED_ACCESS on these platforms. To avoid
performance regressions on platforms without efficient unaligned
access, HAVE_EFFICIENT_UNALIGNED_ACCESS can't be selected globally.
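As a rough illustration (a sketch, not part of this patch): in-kernel
code reaches potentially unaligned data through the get_unaligned*()
helpers, and HAVE_EFFICIENT_UNALIGNED_ACCESS is what determines how
cheap such an access is assumed to be. The function name here is made
up for the example:

    /* Sketch only, not from this patch: a typical unaligned read. */
    #include <linux/types.h>
    #include <asm/unaligned.h>

    static u32 read_hdr_field(const u8 *buf)
    {
            /*
             * On kernels built with HAVE_EFFICIENT_UNALIGNED_ACCESS
             * (and thus without -mstrict-align) this typically compiles
             * to a single load; otherwise the compiler assembles it
             * from byte accesses so no misaligned trap is taken.
             */
            return get_unaligned_be32(buf);
    }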
To solve this problem, runtime code patching based on the detected
access speed would be a good solution, but that's not easy: it involves
lots of work to modify various subsystems such as net, mm, lib and so
on. This can be done step by step. So let's take an easier solution:
add support for efficient unaligned access and hide it behind
NONPORTABLE.

This patch introduces RISCV_EFFICIENT_UNALIGNED_ACCESS, which depends
on NONPORTABLE. If users know at config time that the kernel will only
run on hw platforms with efficient unaligned access, they can enable
it. Obviously, a generic unified kernel Image shouldn't enable it.

Signed-off-by: Jisheng Zhang
Reviewed-by: Charlie Jenkins
Reviewed-by: Eric Biggers
---
 arch/riscv/Kconfig  | 13 +++++++++++++
 arch/riscv/Makefile |  2 ++
 2 files changed, 15 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 24c1799e2ec4..afcc5fdc16f7 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -651,6 +651,19 @@ config RISCV_MISALIGNED
 	  load/store for both kernel and userspace. When disable, misaligned
 	  accesses will generate SIGBUS in userspace and panic in kernel.
 
+config RISCV_EFFICIENT_UNALIGNED_ACCESS
+	bool "Assume the CPU supports fast unaligned memory accesses"
+	depends on NONPORTABLE
+	select HAVE_EFFICIENT_UNALIGNED_ACCESS
+	help
+	  Say Y here if you want the kernel to assume that the CPU supports
+	  efficient unaligned memory accesses. When enabled, this option
+	  improves the performance of the kernel on such CPUs. However, the
+	  kernel will run much more slowly, or will not be able to run at all,
+	  on CPUs that do not support efficient unaligned memory accesses.
+
+	  If unsure what to do here, say N.
+
 endmenu # "Platform type"
 
 menu "Kernel features"
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index a74be78678eb..ebbe02628a27 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -108,7 +108,9 @@ KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
 # unaligned accesses. While unaligned accesses are explicitly allowed in the
 # RISC-V ISA, they're emulated by machine mode traps on all extant
 # architectures. It's faster to have GCC emit only aligned accesses.
+ifneq ($(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS),y)
 KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
+endif
 
 ifeq ($(CONFIG_STACKPROTECTOR_PER_TASK),y)
 prepare: stack_protector_prepare
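For reference, a sketch (not part of the patch) of what dropping
-mstrict-align changes at the codegen level; the struct and function
below are made up for illustration:

    /* Sketch only: effect of -mstrict-align on a misaligned field. */
    struct hdr {
            unsigned char pad;
            unsigned int  val;      /* at offset 1 once packed: misaligned */
    } __attribute__((packed));

    unsigned int read_val(const struct hdr *h)
    {
            /*
             * With -mstrict-align, GCC expands this access into byte
             * loads plus shifts. Without the flag, as allowed here under
             * RISCV_EFFICIENT_UNALIGNED_ACCESS, GCC can emit a single lw
             * from the misaligned address, which is fast on cores like
             * C906/C908/C910/C920 but trapped and emulated elsewhere.
             */
            return h->val;
    }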