[3/4] vmlinux.lds.h: Fix alignment for __ksymtab*, __kcrctab_* and .pci_fixup sections
Message ID | 20231122221814.139916-4-deller@kernel.org |
---|---|
State | New |
Series | [1/4] linux/export: Fix alignment for 64-bit ksymtab entries |
Commit Message
Helge Deller
Nov. 22, 2023, 10:18 p.m. UTC
From: Helge Deller <deller@gmx.de>

On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
(e.g. ppc64, ppc64le, parisc, s390x, ...) the __KSYM_REF() macro stores
64-bit pointers into the __ksymtab* sections.
Make sure that the start of those sections is 64-bit aligned in the vmlinux
executable, otherwise unaligned memory accesses may happen at runtime.

The __kcrctab* sections store 32-bit entities, so make those sections
32-bit aligned.

The pci fixup routines want to be 64-bit aligned on 64-bit platforms
which don't define CONFIG_HAVE_ARCH_PREL32_RELOCATIONS. An alignment
of 8 bytes is sufficient to guarantee aligned accesses at runtime.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: <stable@vger.kernel.org> # v6.0+
---
 include/asm-generic/vmlinux.lds.h | 5 +++++
 1 file changed, 5 insertions(+)
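For context, the layout that drives these alignment requirements can be sketched roughly as below. This is modelled on include/linux/export.h and is an illustration only; field names may differ between kernel versions. Without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS each table entry holds native pointers, so on a 64-bit target the entries, and hence the section start, need 8-byte alignment; with the option enabled the entries are 32-bit place-relative offsets.

/* Rough sketch of an exported-symbol table entry (illustrative only). */
struct kernel_symbol {
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
	int value_offset;	/* 32-bit place-relative references */
	int name_offset;
	int namespace_offset;
#else
	unsigned long value;	/* native pointer: 8 bytes on 64-bit, */
	const char *name;	/* so entries must sit on 8-byte boundaries */
	const char *namespace;
#endif
};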
Comments
On Thu, Nov 23, 2023 at 7:18 AM <deller@kernel.org> wrote:
>
> From: Helge Deller <deller@gmx.de>
>
> On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
> (e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
> 64-bit pointers into the __ksymtab* sections.
> Make sure that the start of those sections is 64-bit aligned in the vmlinux
> executable, otherwise unaligned memory accesses may happen at runtime.

Are you solving a real problem?

1/4 already ensures the proper alignment of __ksymtab*, doesn't it?

I applied the following hack to attempt to break the alignment intentionally.

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bae0fe4d499b..e2b5c9acee97 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -482,7 +482,7 @@
 	TRACEDATA \
 	\
 	PRINTK_INDEX \
-	\
+	. = . + 1; \
 	/* Kernel symbol table: Normal symbols */ \
 	__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
 		__start___ksymtab = .; \

The __ksymtab section and __start___ksymtab symbol are still properly
aligned due to the '.balign' in <linux/export-internal.h>

So, my understanding is this patch is unneeded.
Or, does the behaviour depend on toolchains?

> The __kcrctab* sections store 32-bit entities, so make those sections
> 32-bit aligned.
>
> The pci fixup routines want to be 64-bit aligned on 64-bit platforms
> which don't define CONFIG_HAVE_ARCH_PREL32_RELOCATIONS. An alignment
> of 8 bytes is sufficient to guarantee aligned accesses at runtime.
>
> Signed-off-by: Helge Deller <deller@gmx.de>
> Cc: <stable@vger.kernel.org> # v6.0+
> ---
>  include/asm-generic/vmlinux.lds.h | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
> index bae0fe4d499b..fa4335346e7d 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -467,6 +467,7 @@
>  	} \
>  	\
>  	/* PCI quirks */ \
> +	. = ALIGN(8); \
>  	.pci_fixup : AT(ADDR(.pci_fixup) - LOAD_OFFSET) { \
>  		BOUNDED_SECTION_PRE_LABEL(.pci_fixup_early, _pci_fixups_early, __start, __end) \
>  		BOUNDED_SECTION_PRE_LABEL(.pci_fixup_header, _pci_fixups_header, __start, __end) \
> @@ -484,6 +485,7 @@
>  	PRINTK_INDEX \
>  	\
>  	/* Kernel symbol table: Normal symbols */ \
> +	. = ALIGN(8); \
>  	__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
>  		__start___ksymtab = .; \
>  		KEEP(*(SORT(___ksymtab+*))) \
> @@ -491,6 +493,7 @@
>  	} \
>  	\
>  	/* Kernel symbol table: GPL-only symbols */ \
> +	. = ALIGN(8); \
>  	__ksymtab_gpl : AT(ADDR(__ksymtab_gpl) - LOAD_OFFSET) { \
>  		__start___ksymtab_gpl = .; \
>  		KEEP(*(SORT(___ksymtab_gpl+*))) \
> @@ -498,6 +501,7 @@
>  	} \
>  	\
>  	/* Kernel symbol table: Normal symbols */ \
> +	. = ALIGN(4); \
>  	__kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
>  		__start___kcrctab = .; \
>  		KEEP(*(SORT(___kcrctab+*))) \
> @@ -505,6 +509,7 @@
>  	} \
>  	\
>  	/* Kernel symbol table: GPL-only symbols */ \
> +	. = ALIGN(4); \
>  	__kcrctab_gpl : AT(ADDR(__kcrctab_gpl) - LOAD_OFFSET) { \
>  		__start___kcrctab_gpl = .; \
>  		KEEP(*(SORT(___kcrctab_gpl+*))) \
> --
> 2.41.0
>
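For reference, the mechanism Masahiro points to can be sketched as follows. This is a simplified illustration of the approach taken by patch 1/4 in include/linux/export-internal.h, not the exact kernel macro: every per-symbol ___ksymtab* input section carries its own .balign 8, so the linker raises the output section's alignment to match and __start___ksymtab stays aligned even when the location counter is deliberately nudged to an odd address.

/* Illustrative macro (hypothetical name): a .balign inside the per-symbol
 * input section makes the alignment independent of the linker script.
 * The real entry also records the symbol name and namespace.
 */
#define ILLUSTRATIVE_KSYMTAB_ENTRY(sym)					\
	asm("	.section \"___ksymtab+" #sym "\", \"a\"	\n"		\
	    "	.balign 8				\n"		\
	    "__ksymtab_" #sym ":			\n"		\
	    "	.quad " #sym "				\n"		\
	    "	.previous				\n")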
On 12/21/23 14:07, Masahiro Yamada wrote:
> On Thu, Nov 23, 2023 at 7:18 AM <deller@kernel.org> wrote:
>>
>> From: Helge Deller <deller@gmx.de>
>>
>> On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
>> (e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
>> 64-bit pointers into the __ksymtab* sections.
>> Make sure that the start of those sections is 64-bit aligned in the vmlinux
>> executable, otherwise unaligned memory accesses may happen at runtime.
>
>
> Are you solving a real problem?

Not any longer.
I faced a problem on parisc when neither #1 and #3 were applied
because of a buggy unalignment exception handler. But this is
not something which I would count a "real generic problem".

> 1/4 already ensures the proper alignment of __ksymtab*, doesn't it?

Yes, it does.

>...
> So, my understanding is this patch is unneeded.

Yes, it's not required and I'm fine if we drop it.

But regarding __kcrctab:

>> @@ -498,6 +501,7 @@
>>  	} \
>>  	\
>>  	/* Kernel symbol table: Normal symbols */ \
>> +	. = ALIGN(4); \
>>  	__kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
>>  		__start___kcrctab = .; \
>>  		KEEP(*(SORT(___kcrctab+*))) \

I think this patch would be beneficial to get proper alignment:

diff --git a/include/linux/export-internal.h b/include/linux/export-internal.h
index cd253eb51d6c..d445705ac13c 100644
--- a/include/linux/export-internal.h
+++ b/include/linux/export-internal.h
@@ -64,6 +64,7 @@

 #define SYMBOL_CRC(sym, crc, sec) \
 	asm(".section \"___kcrctab" sec "+" #sym "\",\"a\"" "\n" \
+	    ".balign 4" "\n" \
 	    "__crc_" #sym ":" "\n" \
 	    ".long " #crc "\n" \
 	    ".previous" "\n")

Helge
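To illustrate why the 4-byte alignment of ___kcrctab entries matters at all, here is a hedged sketch, not the actual module version-checking code: the CRC table is walked as an array of 32-bit values, so on strict-alignment architectures such as parisc a misaligned entry turns every lookup into an unaligned load.

#include <stdint.h>

/* Hypothetical helper modelling how a CRC table entry is consumed: the
 * table is indexed as 32-bit values, so each entry must start on a
 * 4-byte boundary or the load traps on strict-alignment CPUs.
 */
static int crc_matches(const uint32_t *kcrctab, unsigned long idx,
		       uint32_t expected)
{
	uint32_t crc = kcrctab[idx];	/* 32-bit load of the stored CRC */

	return crc == expected;
}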
On Fri, Dec 22, 2023 at 6:02 PM Helge Deller <deller@gmx.de> wrote:
>
> On 12/21/23 14:07, Masahiro Yamada wrote:
> > On Thu, Nov 23, 2023 at 7:18 AM <deller@kernel.org> wrote:
> >>
> >> From: Helge Deller <deller@gmx.de>
> >>
> >> On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
> >> (e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
> >> 64-bit pointers into the __ksymtab* sections.
> >> Make sure that the start of those sections is 64-bit aligned in the vmlinux
> >> executable, otherwise unaligned memory accesses may happen at runtime.
> >
> >
> > Are you solving a real problem?
>
> Not any longer.
> I faced a problem on parisc when neither #1 and #3 were applied
> because of a buggy unalignment exception handler. But this is
> not something which I would count a "real generic problem".
>
> > 1/4 already ensures the proper alignment of __ksymtab*, doesn't it?
>
> Yes, it does.
>
> >...
> > So, my understanding is this patch is unneeded.
>
> Yes, it's not required and I'm fine if we drop it.
>
> But regarding __kcrctab:
>
> >> @@ -498,6 +501,7 @@
> >>  	} \
> >>  	\
> >>  	/* Kernel symbol table: Normal symbols */ \
> >> +	. = ALIGN(4); \
> >>  	__kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
> >>  		__start___kcrctab = .; \
> >>  		KEEP(*(SORT(___kcrctab+*))) \
>
> I think this patch would be beneficial to get proper alignment:
>
> diff --git a/include/linux/export-internal.h b/include/linux/export-internal.h
> index cd253eb51d6c..d445705ac13c 100644
> --- a/include/linux/export-internal.h
> +++ b/include/linux/export-internal.h
> @@ -64,6 +64,7 @@
>
>  #define SYMBOL_CRC(sym, crc, sec) \
>  	asm(".section \"___kcrctab" sec "+" #sym "\",\"a\"" "\n" \
> +	    ".balign 4" "\n" \
>  	    "__crc_" #sym ":" "\n" \
>  	    ".long " #crc "\n" \
>  	    ".previous" "\n")

Yes!

Please send a patch with this:

Fixes: f3304ecd7f06 ("linux/export: use inline assembler to populate symbol CRCs")
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bae0fe4d499b..fa4335346e7d 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -467,6 +467,7 @@
 	} \
 	\
 	/* PCI quirks */ \
+	. = ALIGN(8); \
 	.pci_fixup : AT(ADDR(.pci_fixup) - LOAD_OFFSET) { \
 		BOUNDED_SECTION_PRE_LABEL(.pci_fixup_early, _pci_fixups_early, __start, __end) \
 		BOUNDED_SECTION_PRE_LABEL(.pci_fixup_header, _pci_fixups_header, __start, __end) \
@@ -484,6 +485,7 @@
 	PRINTK_INDEX \
 	\
 	/* Kernel symbol table: Normal symbols */ \
+	. = ALIGN(8); \
 	__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
 		__start___ksymtab = .; \
 		KEEP(*(SORT(___ksymtab+*))) \
@@ -491,6 +493,7 @@
 	} \
 	\
 	/* Kernel symbol table: GPL-only symbols */ \
+	. = ALIGN(8); \
 	__ksymtab_gpl : AT(ADDR(__ksymtab_gpl) - LOAD_OFFSET) { \
 		__start___ksymtab_gpl = .; \
 		KEEP(*(SORT(___ksymtab_gpl+*))) \
@@ -498,6 +501,7 @@
 	} \
 	\
 	/* Kernel symbol table: Normal symbols */ \
+	. = ALIGN(4); \
 	__kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
 		__start___kcrctab = .; \
 		KEEP(*(SORT(___kcrctab+*))) \
@@ -505,6 +509,7 @@
 	} \
 	\
 	/* Kernel symbol table: GPL-only symbols */ \
+	. = ALIGN(4); \
 	__kcrctab_gpl : AT(ADDR(__kcrctab_gpl) - LOAD_OFFSET) { \
 		__start___kcrctab_gpl = .; \
 		KEEP(*(SORT(___kcrctab_gpl+*))) \