From patchwork Sun Jun 11 12:01:30 2023
X-Patchwork-Submitter: Georg-Johann Lay
X-Patchwork-Id: 106079
Message-ID: <8e7a741d-b1c1-54b6-f20e-a4e84ffb075b@gjlay.de>
Date: Sun, 11 Jun 2023 14:01:30 +0200
To: gcc-patches@gcc.gnu.org
Cc: Denis Chertykov
From: Georg-Johann Lay
Subject: [avr,committed] Tidy code for inverted bit insertions

Applied this no-op change that tidies up the code for inverted bit
insertions.

Johann

---

Use canonical form for reversed single-bit insertions after reload.

We now split almost all insns after reload in order to add a clobber of
REG_CC.  If insns come from the insn combiner and there is no canonical
form for the respective arithmetic (like for reversed bit insertions),
there is no need to keep all these different representations after
reload: Instead of splitting such patterns to their clobber-REG_CC
analogue, we can split to a canonical representation, which is
insv_notbit for the present case.  This is a no-op change.

gcc/
	* config/avr/avr.md (adjust_len) [insv_notbit_0, insv_notbit_7]:
	Remove attribute values.
	(insv_notbit): New post-reload insn.
	(*insv.not-shiftrt_split, *insv.xor1-bit.0_split)
	(*insv.not-bit.0_split, *insv.not-bit.7_split)
	(*insv.xor-extract_split): Split to insv_notbit.
	(*insv.not-shiftrt, *insv.xor1-bit.0, *insv.not-bit.0, *insv.not-bit.7)
	(*insv.xor-extract): Remove post-reload insns.
	* config/avr/avr.cc (avr_out_insert_notbit) [bitno]: Remove parameter.
	(avr_adjust_insn_length): Adjust call of avr_out_insert_notbit.
	[ADJUST_LEN_INSV_NOTBIT_0, ADJUST_LEN_INSV_NOTBIT_7]: Remove cases.
	* config/avr/avr-protos.h (avr_out_insert_notbit): Adjust prototype.
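For context, here is a minimal C example (hypothetical, not part of this
patch) of the kind of reversed single-bit insertion, i.e. $0.$1 = ~$2.$3,
that the patterns touched below handle:

/* Hypothetical example: insert the complement of one source bit into a
   single destination bit.  With optimization, the avr insn combiner can
   recognize this as a reversed bit insertion, which after reload now
   splits to the canonical insv_notbit pattern (a BST / BLD sequence plus
   some instruction(s) to invert the bit).  */

struct bits
{
  unsigned char b0:1, b1:1, b2:1, b3:1, b4:1, b5:1, b6:1, b7:1;
};

void
copy_notbit (struct bits *dst, const struct bits *src)
{
  dst->b1 = !src->b4;
}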
+
 ;; Insert bit ~$2.$3 into $0.$1
 (define_insn_and_split "*insv.not-shiftrt_split"
   [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
@@ -9161,25 +9176,11 @@ (define_insn_and_split "*insv.not-shiftrt_split"
   ""
   "#"
   "&& reload_completed"
-  [(parallel [(set (zero_extract:QI (match_dup 0)
-                                    (const_int 1)
-                                    (match_dup 1))
-                   (not:QI (any_shiftrt:QI (match_dup 2)
-                                           (match_dup 3))))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*insv.not-shiftrt"
-  [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
-                         (const_int 1)
-                         (match_operand:QI 1 "const_0_to_7_operand" "n"))
-        (not:QI (any_shiftrt:QI (match_operand:QI 2 "register_operand" "r")
-                                (match_operand:QI 3 "const_0_to_7_operand" "n"))))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
+  [(scratch)]
   {
-    return avr_out_insert_notbit (insn, operands, NULL_RTX, NULL);
-  }
-  [(set_attr "adjust_len" "insv_notbit")])
+    emit (gen_insv_notbit (operands[0], operands[1], operands[2], operands[3]));
+    DONE;
+  })

 ;; Insert bit ~$2.0 into $0.$1
 (define_insn_and_split "*insv.xor1-bit.0_split"
@@ -9191,25 +9192,11 @@ (define_insn_and_split "*insv.xor1-bit.0_split"
   ""
   "#"
   "&& reload_completed"
-  [(parallel [(set (zero_extract:QI (match_dup 0)
-                                    (const_int 1)
-                                    (match_dup 1))
-                   (xor:QI (match_dup 2)
-                           (const_int 1)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*insv.xor1-bit.0"
-  [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
-                         (const_int 1)
-                         (match_operand:QI 1 "const_0_to_7_operand" "n"))
-        (xor:QI (match_operand:QI 2 "register_operand" "r")
-                (const_int 1)))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
+  [(scratch)]
   {
-    return avr_out_insert_notbit (insn, operands, const0_rtx, NULL);
-  }
-  [(set_attr "adjust_len" "insv_notbit_0")])
+    emit (gen_insv_notbit (operands[0], operands[1], operands[2], const0_rtx));
+    DONE;
+  })

 ;; Insert bit ~$2.0 into $0.$1
 (define_insn_and_split "*insv.not-bit.0_split"
@@ -9220,23 +9207,11 @@ (define_insn_and_split "*insv.not-bit.0_split"
   ""
   "#"
   "&& reload_completed"
-  [(parallel [(set (zero_extract:QI (match_dup 0)
-                                    (const_int 1)
-                                    (match_dup 1))
-                   (not:QI (match_dup 2)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*insv.not-bit.0"
-  [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
-                         (const_int 1)
-                         (match_operand:QI 1 "const_0_to_7_operand" "n"))
-        (not:QI (match_operand:QI 2 "register_operand" "r")))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
+  [(scratch)]
   {
-    return avr_out_insert_notbit (insn, operands, const0_rtx, NULL);
-  }
-  [(set_attr "adjust_len" "insv_notbit_0")])
+    emit (gen_insv_notbit (operands[0], operands[1], operands[2], const0_rtx));
+    DONE;
+  })

 ;; Insert bit ~$2.7 into $0.$1
 (define_insn_and_split "*insv.not-bit.7_split"
@@ -9248,25 +9223,11 @@ (define_insn_and_split "*insv.not-bit.7_split"
   ""
   "#"
   "&& reload_completed"
-  [(parallel [(set (zero_extract:QI (match_dup 0)
-                                    (const_int 1)
-                                    (match_dup 1))
-                   (ge:QI (match_dup 2)
-                          (const_int 0)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*insv.not-bit.7"
-  [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
-                         (const_int 1)
-                         (match_operand:QI 1 "const_0_to_7_operand" "n"))
-        (ge:QI (match_operand:QI 2 "register_operand" "r")
-               (const_int 0)))
-   (clobber (reg:CC REG_CC))]
-  "reload_completed"
+  [(scratch)]
   {
-    return avr_out_insert_notbit (insn, operands, GEN_INT (7), NULL);
-  }
-  [(set_attr "adjust_len" "insv_notbit_7")])
+    emit (gen_insv_notbit (operands[0], operands[1], operands[2], GEN_INT(7)));
+    DONE;
+  })

 ;; Insert bit ~$2.$3 into $0.$1
 (define_insn_and_split "*insv.xor-extract_split"
@@ -9280,31 +9241,13 @@ (define_insn_and_split "*insv.xor-extract_split"
   "INTVAL (operands[4]) & (1 << INTVAL (operands[3]))"
   "#"
   "&& reload_completed"
-  [(parallel [(set (zero_extract:QI (match_dup 0)
-                                    (const_int 1)
-                                    (match_dup 1))
-                   (any_extract:QI (xor:QI (match_dup 2)
-                                           (match_dup 4))
-                                   (const_int 1)
-                                   (match_dup 3)))
-              (clobber (reg:CC REG_CC))])])
-
-(define_insn "*insv.xor-extract"
-  [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
-                         (const_int 1)
-                         (match_operand:QI 1 "const_0_to_7_operand" "n"))
-        (any_extract:QI (xor:QI (match_operand:QI 2 "register_operand" "r")
-                                (match_operand:QI 4 "const_int_operand" "n"))
-                        (const_int 1)
-                        (match_operand:QI 3 "const_0_to_7_operand" "n")))
-   (clobber (reg:CC REG_CC))]
-  "INTVAL (operands[4]) & (1 << INTVAL (operands[3])) && reload_completed"
+  [(scratch)]
   {
-    return avr_out_insert_notbit (insn, operands, NULL_RTX, NULL);
-  }
-  [(set_attr "adjust_len" "insv_notbit")])
+    emit (gen_insv_notbit (operands[0], operands[1], operands[2], operands[3]));
+    DONE;
+  })
+
-
 
 ;; Some combine patterns that try to fix bad code when a value is composed
 ;; from byte parts like in PR27663.
 ;; The patterns give some release but the code still is not optimal,
diff --git a/gcc/config/avr/avr-protos.h b/gcc/config/avr/avr-protos.h
index a10d91d186f..5c1343f0df8 100644
--- a/gcc/config/avr/avr-protos.h
+++ b/gcc/config/avr/avr-protos.h
@@ -57,7 +57,7 @@ extern const char *avr_out_compare64 (rtx_insn *, rtx*, int*);
 extern const char *ret_cond_branch (rtx x, int len, int reverse);
 extern const char *avr_out_movpsi (rtx_insn *, rtx*, int*);
 extern const char *avr_out_sign_extend (rtx_insn *, rtx*, int*);
-extern const char *avr_out_insert_notbit (rtx_insn *, rtx*, rtx, int*);
+extern const char *avr_out_insert_notbit (rtx_insn *, rtx*, int*);
 extern const char *avr_out_extr (rtx_insn *, rtx*, int*);
 extern const char *avr_out_extr_not (rtx_insn *, rtx*, int*);
 extern const char *avr_out_plus_set_ZN (rtx*, int*);
diff --git a/gcc/config/avr/avr.cc b/gcc/config/avr/avr.cc
index b02fdddd5e2..ef6872a3f55 100644
--- a/gcc/config/avr/avr.cc
+++ b/gcc/config/avr/avr.cc
@@ -8995,20 +8995,15 @@ avr_out_addto_sp (rtx *op, int *plen)
 }
 
 
-/* Output instructions to insert an inverted bit into OPERANDS[0]:
-      $0.$1 = ~$2.$3      if XBITNO = NULL
-      $0.$1 = ~$2.XBITNO  if XBITNO != NULL.
+/* Output instructions to insert an inverted bit into OP[0]: $0.$1 = ~$2.$3.
    If PLEN = NULL then output the respective instruction sequence which
    is a combination of BST / BLD and some instruction(s) to invert the bit.
    If PLEN != NULL then store the length of the sequence (in words) in *PLEN.
    Return "".  */
 
 const char*
-avr_out_insert_notbit (rtx_insn *insn, rtx operands[], rtx xbitno, int *plen)
+avr_out_insert_notbit (rtx_insn *insn, rtx op[], int *plen)
 {
-  rtx op[4] = { operands[0], operands[1], operands[2],
-                xbitno == NULL_RTX ? operands [3] : xbitno };
-
   if (INTVAL (op[1]) == 7
       && test_hard_reg_class (LD_REGS, op[0]))
     {
@@ -10038,15 +10033,7 @@ avr_adjust_insn_length (rtx_insn *insn, int len)
     case ADJUST_LEN_INSERT_BITS: avr_out_insert_bits (op, &len); break;
     case ADJUST_LEN_ADD_SET_ZN: avr_out_plus_set_ZN (op, &len); break;
 
-    case ADJUST_LEN_INSV_NOTBIT:
-      avr_out_insert_notbit (insn, op, NULL_RTX, &len);
-      break;
-    case ADJUST_LEN_INSV_NOTBIT_0:
-      avr_out_insert_notbit (insn, op, const0_rtx, &len);
-      break;
-    case ADJUST_LEN_INSV_NOTBIT_7:
-      avr_out_insert_notbit (insn, op, GEN_INT (7), &len);
-      break;
+    case ADJUST_LEN_INSV_NOTBIT: avr_out_insert_notbit (insn, op, &len); break;
 
     default:
       gcc_unreachable();
diff --git a/gcc/config/avr/avr.md b/gcc/config/avr/avr.md
index eadc482da15..83dd15040b0 100644
--- a/gcc/config/avr/avr.md
+++ b/gcc/config/avr/avr.md
@@ -163,7 +163,7 @@ (define_attr "adjust_len"
    ashlhi, ashrhi, lshrhi,
    ashlsi, ashrsi, lshrsi,
    ashlpsi, ashrpsi, lshrpsi,
-   insert_bits, insv_notbit, insv_notbit_0, insv_notbit_7,
+   insert_bits, insv_notbit,
    add_set_ZN, cmp_uext, cmp_sext,
    no"
   (const_string "no"))
@@ -9151,6 +9151,21 @@ (define_insn "*insv.shiftrt"
   [(set_attr "length" "2")])
 
 ;; Same, but with a NOT inverting the source bit.
+;; Insert bit ~$2.$3 into $0.$1
+(define_insn "insv_notbit"
+  [(set (zero_extract:QI (match_operand:QI 0 "register_operand" "+r")
+                         (const_int 1)
+                         (match_operand:QI 1 "const_0_to_7_operand" "n"))
+        (not:QI (zero_extract:QI (match_operand:QI 2 "register_operand" "r")
+                                 (const_int 1)
+                                 (match_operand:QI 3 "const_0_to_7_operand" "n"))))
+   (clobber (reg:CC REG_CC))]
+  "reload_completed"
+  {
+    return avr_out_insert_notbit (insn, operands, NULL);
+  }
+  [(set_attr "adjust_len" "insv_notbit")])