[V2,2/2] arm64: errata: Workaround possible Cortex-A715 [ESR|FAR]_ELx corruption
Commit Message
If a Cortex-A715 cpu sees a page mapping permission change from executable
to non-executable, it may corrupt the ESR_ELx and FAR_ELx registers on the
next instruction abort caused by a permission fault.

Only user space performs the executable to non-executable permission
transition, via the mprotect() system call, which calls the
ptep_modify_prot_start() and ptep_modify_prot_commit() helpers while
changing the page mapping. The platform code can override these helpers
via __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION.

Work around the problem by doing a break-before-make TLB invalidation for
all executable user space mappings that go through the mprotect() system
call. This overrides ptep_modify_prot_start() and ptep_modify_prot_commit()
by defining __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION on the platform, thus
giving an opportunity to intercept user space exec mappings and do the
necessary TLB invalidation. Similar interceptions are also implemented for
HugeTLB.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
Documentation/arm64/silicon-errata.rst | 2 ++
arch/arm64/Kconfig | 16 ++++++++++++++++
arch/arm64/include/asm/hugetlb.h | 9 +++++++++
arch/arm64/include/asm/pgtable.h | 9 +++++++++
arch/arm64/kernel/cpu_errata.c | 7 +++++++
arch/arm64/mm/hugetlbpage.c | 21 +++++++++++++++++++++
arch/arm64/mm/mmu.c | 21 +++++++++++++++++++++
arch/arm64/tools/cpucaps | 1 +
8 files changed, 86 insertions(+)
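For orientation, the helpers being overridden here are driven from the generic
mprotect() path. A minimal sketch of that sequence, loosely based on
mm/mprotect.c:change_pte_range() (NUMA-hint and dirty/soft-dirty handling
omitted, so this is not the literal upstream code):

	/* For each present pte in the range being mprotect()ed: */
	oldpte = ptep_modify_prot_start(vma, addr, pte);	/* "break": clear the old entry */
	ptent = pte_modify(oldpte, newprot);			/* compute entry with new permissions */
	ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);	/* "make": install the new entry */
	/* TLB invalidation is batched via mmu_gather and flushed at the end of the walk. */

The erratum workaround hooks ptep_modify_prot_start() so that the "break" step
also flushes the TLB for executable user mappings before the new entry is made
visible.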
Comments
On Sun, Nov 13, 2022 at 06:56:45AM +0530, Anshuman Khandual wrote:
> [...]
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9a7c38965154..c1fb0ce1473c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1702,3 +1702,24 @@ static int __init prevent_bootmem_remove_init(void)
> }
> early_initcall(prevent_bootmem_remove_init);
> #endif
> +
> +pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +{
> +	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
> +		pte_t pte = READ_ONCE(*ptep);
> +		/*
> +		 * Break-before-make (BBM) is required for all user space mappings
> +		 * when the permission changes from executable to non-executable
> +		 * in cases where the cpu is affected by erratum #2645198.
> +		 */
> +		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
> +			return ptep_clear_flush(vma, addr, ptep);
> +	}
> +	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
> +}
> +
> +void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +			     pte_t old_pte, pte_t pte)
> +{
> +	__set_pte_at(vma->vm_mm, addr, ptep, pte);
> +}
So these are really similar to the generic copies and, in looking at
change_pte_range(), it appears that we already invalidate the TLB, it just
happens _after_ writing the new version.
So with your change, I think we end up invalidating twice. Can we instead
change the generic code to invalidate the TLB before writing the new entry?
Will
On Tue, Nov 15, 2022 at 01:38:54PM +0000, Will Deacon wrote:
> [...]
>
> So these are really similar to the generic copies and, in looking at
> change_pte_range(), it appears that we already invalidate the TLB, it just
> happens _after_ writing the new version.
>
> So with your change, I think we end up invalidating twice. Can we instead
> change the generic code to invalidate the TLB before writing the new entry?
Bah, scratch that, the invalidations are all batched, aren't they?
It just seems silly that we have to add all this code just to do a TLB
invalidation.
Will
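For context, the batching referred to above comes from the mprotect() path
being wrapped in an mmu_gather, roughly as below (a sketch assuming the
mm/mprotect.c structure of this era; exact signatures may differ):

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, vma->vm_mm);
	change_protection(&tlb, vma, start, end, newprot, cp_flags);
	tlb_finish_mmu(&tlb);	/* one batched TLB flush covering the whole range */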
On Sun, Nov 13, 2022 at 06:56:45AM +0530, Anshuman Khandual wrote:
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index 35e9a468d13e..6552947ca7fa 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -559,3 +559,24 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
> {
> return __hugetlb_valid_size(size);
> }
> +
> +pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +{
> +	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
> +		pte_t pte = READ_ONCE(*ptep);
Not sure whether the generated code would be any different but we should
probably add the check for the CPU capability in the 'if' condition
above, before the READ_ONCE (which expands to some asm volatile):
	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) &&
	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
		pte_t pte = ...
		...
	}
> +		/*
> +		 * Break-before-make (BBM) is required for all user space mappings
> +		 * when the permission changes from executable to non-executable
> +		 * in cases where the cpu is affected by erratum #2645198.
> +		 */
> +		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
> +			return huge_ptep_clear_flush(vma, addr, ptep);
> +	}
> +	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
> +}
> +
> +void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +				  pte_t old_pte, pte_t pte)
> +{
> +	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
> +}
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 9a7c38965154..c1fb0ce1473c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1702,3 +1702,24 @@ static int __init prevent_bootmem_remove_init(void)
> }
> early_initcall(prevent_bootmem_remove_init);
> #endif
> +
> +pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +{
> +	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
> +		pte_t pte = READ_ONCE(*ptep);
> +		/*
> +		 * Break-before-make (BBM) is required for all user space mappings
> +		 * when the permission changes from executable to non-executable
> +		 * in cases where the cpu is affected by erratum #2645198.
> +		 */
> +		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
> +			return ptep_clear_flush(vma, addr, ptep);
> +	}
Same here.
> +	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
> +}
> +
> +void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +			     pte_t old_pte, pte_t pte)
> +{
> +	__set_pte_at(vma->vm_mm, addr, ptep, pte);
> +}
On 11/15/22 19:32, Catalin Marinas wrote:
> On Sun, Nov 13, 2022 at 06:56:45AM +0530, Anshuman Khandual wrote:
>> [...]
>
> Not sure whether the generated code would be any different but we should
> probably add the check for the CPU capability in the 'if' condition
> above, before the READ_ONCE (which expands to some asm volatile):
Sure, will do.
>
> 	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) &&
> 	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
> 		pte_t pte = ...
> 		...
> 	}
The local variable 'pte_t pte' can be dropped as well.
>
>> [...]
>
> Same here.
Sure, will do.
Planning to apply the following change after this patch.
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 6552947ca7fa..cd8d96e1fa1a 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -562,14 +562,14 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
-	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
-		pte_t pte = READ_ONCE(*ptep);
+	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) &&
+	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
 		/*
 		 * Break-before-make (BBM) is required for all user space mappings
 		 * when the permission changes from executable to non-executable
 		 * in cases where the cpu is affected by erratum #2645198.
 		 */
-		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
+		if (pte_user_exec(READ_ONCE(*ptep)))
 			return huge_ptep_clear_flush(vma, addr, ptep);
 	}
 	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index c1fb0ce1473c..ec305ea3942c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1705,14 +1705,14 @@ early_initcall(prevent_bootmem_remove_init);
 pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
-	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
-		pte_t pte = READ_ONCE(*ptep);
+	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) &&
+	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
 		/*
 		 * Break-before-make (BBM) is required for all user space mappings
 		 * when the permission changes from executable to non-executable
 		 * in cases where the cpu is affected by erratum #2645198.
 		 */
-		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
+		if (pte_user_exec(READ_ONCE(*ptep)))
 			return ptep_clear_flush(vma, addr, ptep);
 	}
 	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
On 11/13/22 06:56, Anshuman Khandual wrote:
> [...]
>
> +void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +			     pte_t old_pte, pte_t pte)
> +{
> +	__set_pte_at(vma->vm_mm, addr, ptep, pte);
> +}
Will change this __set_pte_at() to set_pte_at() instead, like the generic version,
which also ensures that page_table_check_pte_set() gets called along the way,
making it similar to the HugeTLB implementation huge_ptep_modify_prot_commit().
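That is, the commit helper would presumably then read:

	void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
				     pte_t old_pte, pte_t pte)
	{
		/* set_pte_at() also calls page_table_check_pte_set(), unlike __set_pte_at() */
		set_pte_at(vma->vm_mm, addr, ptep, pte);
	}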
On 11/15/22 19:12, Will Deacon wrote:
> On Tue, Nov 15, 2022 at 01:38:54PM +0000, Will Deacon wrote:
>> [...]
>>
>> So these are really similar to the generic copies and, in looking at
>> change_pte_range(), it appears that we already invalidate the TLB, it just
>> happens _after_ writing the new version.
>>
>> So with your change, I think we end up invalidating twice. Can we instead
>> change the generic code to invalidate the TLB before writing the new entry?
>
> Bah, scratch that, the invalidations are all batched, aren't they?
Right.
>
> It just seems silly that we have to add all this code just to do a TLB
> invalidation.
Right, but only when affected by this erratum. Otherwise it is just the same as
the existing generic definitions.
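For reference, the generic definitions being mirrored (from include/linux/pgtable.h,
lightly simplified; the real ones go through __ptep_modify_prot_start() and
__ptep_modify_prot_commit() wrappers) only apply when the architecture does not
define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION:

	#ifndef __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
	static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
						   unsigned long addr, pte_t *ptep)
	{
		return ptep_get_and_clear(vma->vm_mm, addr, ptep);
	}

	static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
						   unsigned long addr, pte_t *ptep,
						   pte_t old_pte, pte_t pte)
	{
		set_pte_at(vma->vm_mm, addr, ptep, pte);
	}
	#endif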
On Wed, Nov 16, 2022 at 10:12:34AM +0530, Anshuman Khandual wrote:
> Planning to apply the following change after this patch.
> [...]
It looks fine to me. Thanks.
@@ -120,6 +120,8 @@ stable kernels.
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A710 | #2224489 | ARM64_ERRATUM_2224489 |
+----------------+-----------------+-----------------+-----------------------------+
+| ARM | Cortex-A715 | #2645198 | ARM64_ERRATUM_2645198 |
++----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X2 | #2119858 | ARM64_ERRATUM_2119858 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-X2 | #2224489 | ARM64_ERRATUM_2224489 |
@@ -964,6 +964,22 @@ config ARM64_ERRATUM_2457168
If unsure, say Y.
+config ARM64_ERRATUM_2645198
+	bool "Cortex-A715: 2645198: Workaround possible [ESR|FAR]_ELx corruption"
+	default y
+	help
+	  This option adds the workaround for ARM Cortex-A715 erratum 2645198.
+
+	  If a Cortex-A715 cpu sees a page mapping permission change from executable
+	  to non-executable, it may corrupt the ESR_ELx and FAR_ELx registers on the
+	  next instruction abort caused by a permission fault.
+
+	  Only user space does the executable to non-executable permission transition,
+	  via the mprotect() system call. Work around the problem by doing a
+	  break-before-make TLB invalidation for all changes to executable user space
+	  mappings.
+
+	  If unsure, say Y.
+
config CAVIUM_ERRATUM_22375
bool "Cavium erratum 22375, 24313"
default y
@@ -49,6 +49,15 @@ extern pte_t huge_ptep_get(pte_t *ptep);
void __init arm64_hugetlb_cma_reserve(void);
+#define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
+extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
+					 unsigned long addr, pte_t *ptep);
+
+#define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
+extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+					 unsigned long addr, pte_t *ptep,
+					 pte_t old_pte, pte_t new_pte);
+
#include <asm-generic/hugetlb.h>
#endif /* __ASM_HUGETLB_H */
@@ -1096,6 +1096,15 @@ static inline bool pud_sect_supported(void)
}
+#define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
+#define ptep_modify_prot_start ptep_modify_prot_start
+extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
+				    unsigned long addr, pte_t *ptep);
+
+#define ptep_modify_prot_commit ptep_modify_prot_commit
+extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
+				    unsigned long addr, pte_t *ptep,
+				    pte_t old_pte, pte_t new_pte);
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_PGTABLE_H */
@@ -661,6 +661,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
 		CAP_MIDR_RANGE_LIST(trbe_write_out_of_range_cpus),
 	},
 #endif
+#ifdef CONFIG_ARM64_ERRATUM_2645198
+	{
+		.desc = "ARM erratum 2645198",
+		.capability = ARM64_WORKAROUND_2645198,
+		ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A715)
+	},
+#endif
 #ifdef CONFIG_ARM64_ERRATUM_2077057
 	{
 		.desc = "ARM erratum 2077057",
@@ -559,3 +559,24 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
{
return __hugetlb_valid_size(size);
}
+
+pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+{
+	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
+		pte_t pte = READ_ONCE(*ptep);
+		/*
+		 * Break-before-make (BBM) is required for all user space mappings
+		 * when the permission changes from executable to non-executable
+		 * in cases where the cpu is affected by erratum #2645198.
+		 */
+		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
+			return huge_ptep_clear_flush(vma, addr, ptep);
+	}
+	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+}
+
+void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+				  pte_t old_pte, pte_t pte)
+{
+	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
+}
@@ -1702,3 +1702,24 @@ static int __init prevent_bootmem_remove_init(void)
}
early_initcall(prevent_bootmem_remove_init);
#endif
+
+pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+{
+	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198)) {
+		pte_t pte = READ_ONCE(*ptep);
+		/*
+		 * Break-before-make (BBM) is required for all user space mappings
+		 * when the permission changes from executable to non-executable
+		 * in cases where the cpu is affected by erratum #2645198.
+		 */
+		if (pte_user_exec(pte) && cpus_have_const_cap(ARM64_WORKAROUND_2645198))
+			return ptep_clear_flush(vma, addr, ptep);
+	}
+	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
+}
+
+void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+			     pte_t old_pte, pte_t pte)
+{
+	__set_pte_at(vma->vm_mm, addr, ptep, pte);
+}
@@ -70,6 +70,7 @@ WORKAROUND_2038923
WORKAROUND_2064142
WORKAROUND_2077057
WORKAROUND_2457168
+WORKAROUND_2645198
WORKAROUND_2658417
WORKAROUND_TRBE_OVERWRITE_FILL_MODE
WORKAROUND_TSB_FLUSH_FAILURE
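Entries in arch/arm64/tools/cpucaps are converted at build time (by
arch/arm64/tools/gen-cpucaps.awk) into ARM64_* capability constants, which is
what lets the workaround sites test the capability cheaply, e.g.:

	if (IS_ENABLED(CONFIG_ARM64_WORKAROUND_2645198) &&
	    cpus_have_const_cap(ARM64_WORKAROUND_2645198)) {
		/* only reached on CPUs matching MIDR_CORTEX_A715 */
	}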