RISC-V: Fix incorrect condition of EEW = 64 mode

Message ID 20230407011143.46004-1-juzhe.zhong@rivai.ai
State Unresolved
Series RISC-V: Fix incorrect condition of EEW = 64 mode

Checks

Context                  Check    Description
snail/gcc-patch-check    warning  Git am fail log

Commit Message

juzhe.zhong@rivai.ai April 7, 2023, 1:11 a.m. UTC
  From: Juzhe-Zhong <juzhe.zhong@rivai.ai>

This patch should be merged before this patch:
https://gcc.gnu.org/pipermail/gcc-patches/2023-March/614935.html

According to the RVV ISA, EEW = 64 is enabled only when -march=*zve64*.
The current condition is incorrect, since -march=*zve32*_zvl64b would also
enable EEW = 64.
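
For illustration, below is a minimal standalone sketch of why the old guard
misfires. It models the two conditions with plain integers rather than the
actual GCC target macros (TARGET_MIN_VLEN, TARGET_VECTOR_ELEN_64); the example
-march strings and the VLEN/ELEN values attributed to them are assumptions
based on the RVV ISA naming convention (Zve32* implies ELEN = 32, Zve64* and V
imply ELEN = 64), not output of GCC's option handling.

/* Illustrative sketch only, not the real backend macros: compares the old
   guard (TARGET_MIN_VLEN > 32) with the new one (TARGET_VECTOR_ELEN_64)
   for a few example vector configurations.  */
#include <stdbool.h>
#include <stdio.h>

struct vconfig
{
  const char *march; /* example -march string (assumed, for illustration) */
  int min_vlen;      /* minimum VLEN implied by Zvl<N>b / Zve<N>*  */
  int elen;          /* largest supported element width (ELEN)  */
};

int
main (void)
{
  struct vconfig configs[] = {
    { "rv64gc_zve32x_zvl64b", 64, 32 }, /* the problematic case */
    { "rv64gc_zve64x", 64, 64 },
    { "rv64gcv", 128, 64 },
  };

  for (unsigned i = 0; i < sizeof (configs) / sizeof (configs[0]); i++)
    {
      /* Old guard used for the VNx2DI..VNx8DI entries.  */
      bool old_guard = configs[i].min_vlen > 32;
      /* New guard: EEW = 64 requires ELEN >= 64.  */
      bool new_guard = configs[i].elen >= 64;
      printf ("%-22s old guard: %d  new guard: %d\n",
              configs[i].march, old_guard, new_guard);
    }
  return 0;
}

For the zve32x_zvl64b configuration the old guard reports 1 while the new
guard reports 0, which is exactly the mismatch this patch addresses.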

gcc/ChangeLog:

        * config/riscv/riscv-vector-switch.def (ENTRY): Change to TARGET_VECTOR_ELEN_64.

---
 gcc/config/riscv/riscv-vector-switch.def | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
  

Comments

Jeff Law April 12, 2023, 11 p.m. UTC | #1
On 4/6/23 19:11, juzhe.zhong@rivai.ai wrote:
> From: Juzhe-Zhong <juzhe.zhong@rivai.ai>
> 
> This patch should be merged before this patch:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-March/614935.html
> 
> According to the RVV ISA, EEW = 64 is enabled only when -march=*zve64*.
> The current condition is incorrect, since -march=*zve32*_zvl64b would also
> enable EEW = 64.
> 
> gcc/ChangeLog:
> 
>          * config/riscv/riscv-vector-switch.def (ENTRY): Change to TARGET_VECTOR_ELEN_64.
Just to be clear, this was for gcc-14, right?  I don't see these modes 
in the current trunk.

jeff
  
juzhe.zhong@rivai.ai April 12, 2023, 11:12 p.m. UTC | #2
Yeah. But this patch is not appropriate now since it is conflict with the upstream GCC.
I am gonna re-check the current upstream GCC and the queue patch for GCC 14.
If there are some conflicts, I will resend them.

Thanks


juzhe.zhong@rivai.ai
 
  
Jeff Law April 25, 2023, 5:55 a.m. UTC | #3
On 4/6/23 19:11, juzhe.zhong@rivai.ai wrote:
> From: Juzhe-Zhong <juzhe.zhong@rivai.ai>
> 
> This patch should be merged before this patch:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-March/614935.html
> 
> According to the RVV ISA, EEW = 64 is enabled only when -march=*zve64*.
> The current condition is incorrect, since -march=*zve32*_zvl64b would also
> enable EEW = 64.
> 
> gcc/ChangeLog:
> 
>          * config/riscv/riscv-vector-switch.def (ENTRY): Change to TARGET_VECTOR_ELEN_64.
This is OK for the trunk.
jeff
  

Patch

diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/riscv-vector-switch.def
index bfb591773dc..b772c282769 100644
--- a/gcc/config/riscv/riscv-vector-switch.def
+++ b/gcc/config/riscv/riscv-vector-switch.def
@@ -187,11 +187,11 @@  ENTRY (VNx1SF, TARGET_VECTOR_FP32 && TARGET_MIN_VLEN < 128, LMUL_1, 32, LMUL_F2,
    For double-precision floating-point, we need TARGET_VECTOR_FP64 ==
    RVV_ENABLE.  */
 /* SEW = 64. Disable VNx1DImode/VNx1DFmode when TARGET_MIN_VLEN >= 128.  */
-ENTRY (VNx16DI, TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
-ENTRY (VNx8DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 8, LMUL_4, 16)
-ENTRY (VNx4DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
-ENTRY (VNx2DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
-ENTRY (VNx1DI, TARGET_MIN_VLEN > 32 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
+ENTRY (VNx16DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN >= 128, LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
+ENTRY (VNx8DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_8, 8, LMUL_4, 16)
+ENTRY (VNx4DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_4, 16, LMUL_2, 32)
+ENTRY (VNx2DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_2, 32, LMUL_1, 64)
+ENTRY (VNx1DI, TARGET_VECTOR_ELEN_64 && TARGET_MIN_VLEN < 128, LMUL_RESERVED, 0, LMUL_1, 64, LMUL_RESERVED, 0)
 
 ENTRY (VNx16DF, TARGET_VECTOR_FP64 && (TARGET_MIN_VLEN >= 128), LMUL_RESERVED, 0, LMUL_RESERVED, 0, LMUL_8, 8)
 ENTRY (VNx8DF, TARGET_VECTOR_FP64 && (TARGET_MIN_VLEN > 32), LMUL_RESERVED, 0,