From patchwork Sun Sep 25 16:28:55 2022
X-Patchwork-Submitter: Jeff Law
X-Patchwork-Id: 1435
Date: Sun, 25 Sep 2022 10:28:55 -0600
From: Jeff Law
Subject: [RFA] Minor improvement to coremark, avoid unconditional jump to return
To: gcc-patches@gcc.gnu.org

This is a minor improvement for the core_list_find routine in coremark.
On riscv, and likely other targets, we can end up with an unconditional
jump to a return statement.  This is the result of compensation code
created by bb-reorder, and no jump optimization pass runs after
bb-reorder to clean it up.

This patch utilizes preexisting code to identify suitable branch targets
as well as preexisting code to emit a suitable return, so it's pretty
simple.  Note that when we perform this optimization, the original
return block may become unreachable, so we conditionally call
delete_unreachable_blocks to fix that up.

This triggers ~160 times during an x86_64 bootstrap.  Naturally it
bootstraps and regression tests cleanly on x86_64.

I've also bootstrapped this on riscv64.  Regression testing with qemu
shows some regressions, but AFAICT they're actually qemu bugs with
signal handling/delivery -- qemu user-mode emulation does not
consistently call user-defined signal handlers.  Given the same binary,
sometimes the handler gets called and the test passes; other times it
isn't called and the test (of course) fails.  I'll probably spend some
time trying to chase this down for the sake of making testing easier.

OK for the trunk?

Jeff
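For readers who want the gist before the full diff: the heart of the change
is the new check in fixup_reorder_chain, sketched below.  This is a
condensed paraphrase of the cfgrtl.cc hunk that follows (same functions and
variables as in the patch); it is not standalone code and omits the
surrounding control flow.

    /* Condensed sketch of the new check in fixup_reorder_chain.  If the
       block we are about to jump to contains nothing but a return (and
       possibly a USE of the return value), emit that return directly
       instead of an unconditional jump to it.  */
    rtx_insn *ret, *use;
    basic_block dest = e_fall->dest;
    if (bb_is_just_return (e_fall->dest, &ret, &use)
        && (PATTERN (ret) == simple_return_rtx || PATTERN (ret) == ret_rtx))
      {
        /* Passing the return pattern as RET_LABEL and redirecting the edge
           to the exit block lets the preexisting code in
           force_nonfallthru_and_redirect emit a return rather than a jump.  */
        ret_label = PATTERN (ret);
        dest = EXIT_BLOCK_PTR_FOR_FN (cfun);
        /* The old return block may now be unreachable; it is removed by
           delete_unreachable_blocks at the end of the pass.  */
        remove_unreachable_blocks = true;
      }
    nb = force_nonfallthru_and_redirect (e_fall, dest, ret_label);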
commit f9a9119fa47f94348305a883fd88c23647fb1b07
Author: Jeff Law
Date:   Sun Sep 25 12:23:59 2022 -0400

    gcc/
            * cfgcleanup.cc (bb_is_just_return): No longer static.
            * cfgcleanup.h (bb_is_just_return): Add prototype.
            * cfgrtl.cc (fixup_reorder_chain): Do not create an unconditional
            jump to a return block.  Conditionally remove unreachable blocks.

    gcc/testsuite/gcc.target/riscv/
            * ret-1.c: New test.

diff --git a/gcc/cfgcleanup.cc b/gcc/cfgcleanup.cc
index a8b0139bb4d..a363e0b4da3 100644
--- a/gcc/cfgcleanup.cc
+++ b/gcc/cfgcleanup.cc
@@ -2599,7 +2599,7 @@ trivially_empty_bb_p (basic_block bb)
    return value.  Fill in *RET and *USE with the return and use insns
    if any found, otherwise NULL.  All CLOBBERs are ignored.  */
-static bool
+bool
 bb_is_just_return (basic_block bb, rtx_insn **ret, rtx_insn **use)
 {
   *ret = *use = NULL;
diff --git a/gcc/cfgcleanup.h b/gcc/cfgcleanup.h
index a6d882f98a4..f1021ca835f 100644
--- a/gcc/cfgcleanup.h
+++ b/gcc/cfgcleanup.h
@@ -30,5 +30,6 @@ extern int flow_find_head_matching_sequence (basic_block, basic_block,
 extern bool delete_unreachable_blocks (void);
 extern void delete_dead_jumptables (void);
 extern bool cleanup_cfg (int);
+extern bool bb_is_just_return (basic_block, rtx_insn **, rtx_insn **);
 
 #endif /* GCC_CFGCLEANUP_H */
diff --git a/gcc/cfgrtl.cc b/gcc/cfgrtl.cc
index a05c338a4c8..90cd6ee56a7 100644
--- a/gcc/cfgrtl.cc
+++ b/gcc/cfgrtl.cc
@@ -3901,6 +3901,7 @@ fixup_reorder_chain (void)
 
   /* Now add jumps and labels as needed to match the blocks new
      outgoing edges.  */
+  bool remove_unreachable_blocks = false;
   for (bb = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb; bb ; bb = (basic_block)
        bb->aux)
     {
@@ -4043,10 +4044,30 @@ fixup_reorder_chain (void)
 	  continue;
 	}
 
+      /* If E_FALL->dest is just a return block, then we can emit a
+	 return rather than a jump to the return block.  */
+      rtx_insn *ret, *use;
+      basic_block dest;
+      if (bb_is_just_return (e_fall->dest, &ret, &use)
+	  && (PATTERN (ret) == simple_return_rtx || PATTERN (ret) == ret_rtx))
+	{
+	  ret_label = PATTERN (ret);
+	  dest = EXIT_BLOCK_PTR_FOR_FN (cfun);
+
+	  /* E_FALL->dest might become unreachable as a result of
+	     replacing the jump with a return.  So arrange to remove
+	     unreachable blocks.  */
+	  remove_unreachable_blocks = true;
+	}
+      else
+	{
+	  dest = e_fall->dest;
+	}
+
       /* We got here if we need to add a new jump insn.
 	 Note force_nonfallthru can delete E_FALL and thus we have to
 	 save E_FALL->src prior to the call to force_nonfallthru.  */
-      nb = force_nonfallthru_and_redirect (e_fall, e_fall->dest, ret_label);
+      nb = force_nonfallthru_and_redirect (e_fall, dest, ret_label);
       if (nb)
 	{
 	  nb->aux = bb->aux;
@@ -4134,6 +4155,12 @@ fixup_reorder_chain (void)
 	      ei_next (&ei2);
 	    }
 	}
+
+  /* Replacing a jump with a return may have exposed an unreachable
+     block.  Conditionally remove them if such transformations were
+     made.  */
+  if (remove_unreachable_blocks)
+    delete_unreachable_blocks ();
 }
 
 /* Perform sanity checks on the insn chain.
diff --git a/gcc/testsuite/gcc.target/riscv/ret-1.c b/gcc/testsuite/gcc.target/riscv/ret-1.c
new file mode 100644
index 00000000000..28133aa4226
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/ret-1.c
@@ -0,0 +1,41 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -dp" } */
+/* This was extracted from coremark.  */
+
+
+typedef signed short ee_s16;
+typedef struct list_data_s
+{
+    ee_s16 data16;
+    ee_s16 idx;
+} list_data;
+
+typedef struct list_head_s
+{
+    struct list_head_s *next;
+    struct list_data_s *info;
+} list_head;
+
+
+list_head *
+core_list_find(list_head *list, list_data *info)
+{
+    if (info->idx >= 0)
+    {
+        while (list && (list->info->idx != info->idx))
+            list = list->next;
+        return list;
+    }
+    else
+    {
+        while (list && ((list->info->data16 & 0xff) != info->data16))
+            list = list->next;
+        return list;
+    }
+}
+
+/* There is only one legitimate unconditional jump, so test for that,
+   which will catch the case where bb-reorder leaves a jump to a ret
+   in the IL.  */
+/* { dg-final { scan-assembler-times "jump" 1 } } */