From patchwork Wed Dec 20 09:25:42 2023
X-Patchwork-Submitter: "Kewen.Lin"
X-Patchwork-Id: 181551
Message-ID: <3262ddae-efdf-b008-dc9e-342b283062af@linux.ibm.com>
Date: Wed, 20 Dec 2023 17:25:42 +0800
To: GCC Patches
Cc: Richard Biener, Richard Sandiford, Jeff Law, Alexander Monakov,
    Maxim Kuvyrkov, Peter Bergner, Segher Boessenkool, Michael Meissner,
    Vladimir Makarov
From: "Kewen.Lin"
Subject: [PATCH] sched: Don't skip empty block by removing no_real_insns_p [PR108273]
MIME-Version: 1.0

Hi,

This patch follows Richi's suggestion "scheduling shouldn't special case
empty blocks as they usually do not appear" in [1]; it removes the
function no_real_insns_p and all of its uses completely.
There is one case where a block initially has only a single INSN_P: while
scheduling some other block, that lone INSN_P can get moved away, leaving
the block empty, so its remaining NOTE_P insn gets counted at scheduling
time.  But since the block wasn't empty initially and NOTE_P insns are
skipped in normal blocks, the to-be-scheduled count doesn't include it,
which can cause the below assertion to fail:

  /* Sanity check: verify that all region insns were scheduled.  */
  gcc_assert (sched_rgn_n_insns == rgn_n_insns);

A bitmap rgn_init_empty_bb is proposed to detect such cases, by recording
whether each block is empty before the actual scheduling.  The other
changes mainly handle NOTEs, which weren't expected here before but now
have to be dealt with.

Bootstrapped and regress-tested on:
  - powerpc64{,le}-linux-gnu
  - x86_64-redhat-linux
  - aarch64-linux-gnu

Also tested this with superblock scheduling (sched2) turned on by default,
bootstrapped and regress-tested again on the above triples.  I also tried
to test with selective scheduling 1/2 enabled by default; that bootstrapped
and regress-tested on x86_64-redhat-linux, but both failed to build on
powerpc64{,le}-linux-gnu and aarch64-linux-gnu even without this patch (so
those failures are unrelated; I've filed two PRs for the failures observed
on Power separately).

[1] https://inbox.sourceware.org/gcc-patches/CAFiYyc2hMvbU_+A47yTNBXF0YrcYbwrHRU2jDcW5a0pX3+nqBg@mail.gmail.com/

Is it ok for trunk or next stage 1?

BR,
Kewen
-----
	PR rtl-optimization/108273

gcc/ChangeLog:

	* config/aarch64/aarch64.cc (aarch64_sched_adjust_priority): Early
	return for NOTE_P.
	* haifa-sched.cc (recompute_todo_spec): Likewise.
	(setup_insn_reg_pressure_info): Likewise.
	(schedule_insn): Handle NOTE_P specially as we don't skip empty
	blocks any more, and adopt NONDEBUG_INSN_P where appropriate.
	(commit_schedule): Likewise.
	(prune_ready_list): Likewise.
	(schedule_block): Likewise.
	(set_priorities): Likewise.
	(fix_tick_ready): Likewise.
	(no_real_insns_p): Remove.
	* rtl.h (SCHED_GROUP_P): Add NOTE consideration.
	* sched-ebb.cc (schedule_ebb): Skip leading labels as well as notes
	to ensure that we can't end up with a block holding only a label;
	remove the call to no_real_insns_p.
	* sched-int.h (no_real_insns_p): Remove declaration.
	* sched-rgn.cc (free_block_dependencies): Remove the call to
	no_real_insns_p.
	(compute_priorities): Likewise.
	(schedule_region): Remove the call to no_real_insns_p, check
	rgn_init_empty_bb and update rgn_n_insns if needed.
	(sched_rgn_local_init): Init rgn_init_empty_bb.
	(sched_rgn_local_free): Free rgn_init_empty_bb.
	(rgn_init_empty_bb): New static bitmap.
	* sel-sched.cc (sel_region_target_finish): Remove the call to
	no_real_insns_p.
---
 gcc/config/aarch64/aarch64.cc |   4 +
 gcc/haifa-sched.cc            | 180 ++++++++++++++++++----------------
 gcc/rtl.h                     |   4 +-
 gcc/sched-ebb.cc              |  10 +-
 gcc/sched-int.h               |   1 -
 gcc/sched-rgn.cc              |  43 ++++----
 gcc/sel-sched.cc              |   3 -
 7 files changed, 125 insertions(+), 120 deletions(-)

--
2.39.3
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 4fd8c2de43a..749eef7a7c5 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -24178,6 +24178,10 @@ aarch64_sched_fusion_priority (rtx_insn *insn, int max_pri,
 static int
 aarch64_sched_adjust_priority (rtx_insn *insn, int priority)
 {
+  /* Skip the NOTE in an empty block.  */
+  if (!INSN_P (insn))
+    return priority;
+
   rtx x = PATTERN (insn);
 
   if (GET_CODE (x) == SET)
diff --git a/gcc/haifa-sched.cc b/gcc/haifa-sched.cc
index 8e8add709b3..6e4724a79f8 100644
--- a/gcc/haifa-sched.cc
+++ b/gcc/haifa-sched.cc
@@ -1207,6 +1207,11 @@ recompute_todo_spec (rtx_insn *next, bool for_backtrack)
   int n_replace = 0;
   bool first_p = true;
 
+  /* Since we don't skip empty blocks any more, it's possible to
+     meet a NOTE insn now; early return if so.  */
+  if (NOTE_P (next))
+    return 0;
+
   if (sd_lists_empty_p (next, SD_LIST_BACK))
     /* NEXT has all its dependencies resolved.  */
     return 0;
@@ -1726,6 +1731,11 @@ setup_insn_reg_pressure_info (rtx_insn *insn)
   int *max_reg_pressure;
   static int death[N_REG_CLASSES];
 
+  /* Since we don't skip empty blocks any more, it's possible to
+     schedule a NOTE insn now; check for it first.  */
+  if (NOTE_P (insn))
+    return;
+
   gcc_checking_assert (!DEBUG_INSN_P (insn));
 
   excess_cost_change = 0;
@@ -4017,10 +4027,10 @@ schedule_insn (rtx_insn *insn)
 
   /* Scheduling instruction should have all its dependencies resolved and
      should have been removed from the ready list.  */
-  gcc_assert (sd_lists_empty_p (insn, SD_LIST_HARD_BACK));
+  gcc_assert (NOTE_P (insn) || sd_lists_empty_p (insn, SD_LIST_HARD_BACK));
 
   /* Reset debug insns invalidated by moving this insn.  */
-  if (MAY_HAVE_DEBUG_BIND_INSNS && !DEBUG_INSN_P (insn))
+  if (MAY_HAVE_DEBUG_BIND_INSNS && NONDEBUG_INSN_P (insn))
     for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
 	 sd_iterator_cond (&sd_it, &dep);)
       {
@@ -4106,61 +4116,66 @@ schedule_insn (rtx_insn *insn)
 
   check_clobbered_conditions (insn);
 
-  /* Update dependent instructions.  First, see if by scheduling this insn
-     now we broke a dependence in a way that requires us to change another
-     insn.  */
-  for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
-       sd_iterator_cond (&sd_it, &dep); sd_iterator_next (&sd_it))
+  /* Since we don't skip empty blocks any more, it's possible to
+     schedule a NOTE insn now; check for it first.  */
+  if (!NOTE_P (insn))
     {
-      struct dep_replacement *desc = DEP_REPLACE (dep);
-      rtx_insn *pro = DEP_PRO (dep);
-      if (QUEUE_INDEX (pro) != QUEUE_SCHEDULED
-	  && desc != NULL && desc->insn == pro)
-	apply_replacement (dep, false);
-    }
+      /* Update dependent instructions.  First, see if by scheduling this insn
	 now we broke a dependence in a way that requires us to change another
	 insn.  */
+      for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+	   sd_iterator_cond (&sd_it, &dep); sd_iterator_next (&sd_it))
+	{
+	  struct dep_replacement *desc = DEP_REPLACE (dep);
+	  rtx_insn *pro = DEP_PRO (dep);
+	  if (QUEUE_INDEX (pro) != QUEUE_SCHEDULED && desc != NULL
+	      && desc->insn == pro)
+	    apply_replacement (dep, false);
+	}
 
-  /* Go through and resolve forward dependencies.  */
-  for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
-       sd_iterator_cond (&sd_it, &dep);)
-    {
-      rtx_insn *next = DEP_CON (dep);
-      bool cancelled = (DEP_STATUS (dep) & DEP_CANCELLED) != 0;
+      /* Go through and resolve forward dependencies.  */
+      for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+	   sd_iterator_cond (&sd_it, &dep);)
+	{
+	  rtx_insn *next = DEP_CON (dep);
+	  bool cancelled = (DEP_STATUS (dep) & DEP_CANCELLED) != 0;
 
-      /* Resolve the dependence between INSN and NEXT.
-	 sd_resolve_dep () moves current dep to another list thus
-	 advancing the iterator.  */
-      sd_resolve_dep (sd_it);
+	  /* Resolve the dependence between INSN and NEXT.
+	     sd_resolve_dep () moves current dep to another list thus
+	     advancing the iterator.  */
+	  sd_resolve_dep (sd_it);
 
-      if (cancelled)
-	{
-	  if (must_restore_pattern_p (next, dep))
-	    restore_pattern (dep, false);
-	  continue;
-	}
+	  if (cancelled)
+	    {
+	      if (must_restore_pattern_p (next, dep))
+		restore_pattern (dep, false);
+	      continue;
+	    }
 
-      /* Don't bother trying to mark next as ready if insn is a debug
-	 insn.  If insn is the last hard dependency, it will have
-	 already been discounted.  */
-      if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
-	continue;
+	  /* Don't bother trying to mark next as ready if insn is a debug
+	     insn.  If insn is the last hard dependency, it will have
+	     already been discounted.  */
+	  if (DEBUG_INSN_P (insn) && !DEBUG_INSN_P (next))
+	    continue;
 
-      if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
-	{
-	  int effective_cost;
+	  if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
+	    {
+	      int effective_cost;
 
-	  effective_cost = try_ready (next);
+	      effective_cost = try_ready (next);
 
-	  if (effective_cost >= 0
-	      && SCHED_GROUP_P (next)
-	      && advance < effective_cost)
-	    advance = effective_cost;
-	}
-      else
-	/* Check always has only one forward dependence (to the first insn in
-	   the recovery block), therefore, this will be executed only once.  */
-	{
-	  gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
-	  fix_recovery_deps (RECOVERY_BLOCK (insn));
+	      if (effective_cost >= 0 && SCHED_GROUP_P (next)
+		  && advance < effective_cost)
+		advance = effective_cost;
+	    }
+	  else
+	    /* Check always has only one forward dependence (to the first insn
+	       in the recovery block), therefore, this will be executed only
+	       once.  */
+	    {
+	      gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
+	      fix_recovery_deps (RECOVERY_BLOCK (insn));
+	    }
 	}
     }
 
@@ -4170,9 +4185,9 @@ schedule_insn (rtx_insn *insn)
      may use this information to decide how the instruction should
      be aligned.  */
   if (issue_rate > 1
+      && NONDEBUG_INSN_P (insn)
       && GET_CODE (PATTERN (insn)) != USE
-      && GET_CODE (PATTERN (insn)) != CLOBBER
-      && !DEBUG_INSN_P (insn))
+      && GET_CODE (PATTERN (insn)) != CLOBBER)
     {
       if (reload_completed)
 	PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
@@ -5033,20 +5048,6 @@ get_ebb_head_tail (basic_block beg, basic_block end,
   *tailp = end_tail;
 }
 
-/* Return true if there are no real insns in the range [ HEAD, TAIL ].  */
-
-bool
-no_real_insns_p (const rtx_insn *head, const rtx_insn *tail)
-{
-  while (head != NEXT_INSN (tail))
-    {
-      if (!NOTE_P (head) && !LABEL_P (head))
-	return false;
-      head = NEXT_INSN (head);
-    }
-  return true;
-}
-
 /* Restore-other-notes: NOTE_LIST is the end of a chain of notes previously
    found among the insns.  Insert them just before HEAD.  */
 rtx_insn *
@@ -6224,8 +6225,12 @@ commit_schedule (rtx_insn *prev_head, rtx_insn *tail, basic_block *target_bb)
        scheduled_insns.iterate (i, &insn);
        i++)
     {
-      if (control_flow_insn_p (last_scheduled_insn)
-	  || current_sched_info->advance_target_bb (*target_bb, insn))
+      /* Since we don't skip empty blocks any more, it's possible to
+	 schedule a NOTE insn now; check for it here to avoid an
+	 unexpected target bb advance.  */
+      if ((control_flow_insn_p (last_scheduled_insn)
+	   || current_sched_info->advance_target_bb (*target_bb, insn))
+	  && !NOTE_P (insn))
 	{
 	  *target_bb = current_sched_info->advance_target_bb (*target_bb, 0);
@@ -6245,7 +6250,7 @@ commit_schedule (rtx_insn *prev_head, rtx_insn *tail, basic_block *target_bb)
       (*current_sched_info->begin_move_insn) (insn, last_scheduled_insn);
       move_insn (insn, last_scheduled_insn,
 		 current_sched_info->next_tail);
-      if (!DEBUG_INSN_P (insn))
+      if (NONDEBUG_INSN_P (insn))
 	reemit_notes (insn);
       last_scheduled_insn = insn;
     }
@@ -6296,7 +6301,7 @@ prune_ready_list (state_t temp_state, bool first_cycle_insn_p,
       int cost = 0;
       const char *reason = "resource conflict";
 
-      if (DEBUG_INSN_P (insn))
+      if (DEBUG_INSN_P (insn) || NOTE_P (insn))
 	continue;
 
       if (sched_group_found && !SCHED_GROUP_P (insn)
@@ -6504,7 +6509,7 @@ schedule_block (basic_block *target_bb, state_t init_state)
      and caused problems because schedule_block and
      compute_forward_dependences had different notions of what the
      "head" insn was.  */
-  gcc_assert (head != tail || INSN_P (head));
+  gcc_assert (head != tail || INSN_P (head) || NOTE_P (head));
 
   haifa_recovery_bb_recently_added_p = false;
 
@@ -6539,15 +6544,15 @@ schedule_block (basic_block *target_bb, state_t init_state)
   if (targetm.sched.init)
     targetm.sched.init (sched_dump, sched_verbose, ready.veclen);
 
+  gcc_assert (((NOTE_P (prev_head) || DEBUG_INSN_P (prev_head))
+	       && BLOCK_FOR_INSN (prev_head) == *target_bb)
+	      || (head == tail && NOTE_P (head)));
+
   /* We start inserting insns after PREV_HEAD.  */
   last_scheduled_insn = prev_head;
   last_nondebug_scheduled_insn = NULL;
   nonscheduled_insns_begin = NULL;
 
-  gcc_assert ((NOTE_P (last_scheduled_insn)
-	       || DEBUG_INSN_P (last_scheduled_insn))
-	      && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
-
   /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the
      queue.  */
   q_ptr = 0;
@@ -6725,15 +6730,16 @@ schedule_block (basic_block *target_bb, state_t init_state)
 	    }
 	}
 
-      /* We don't want md sched reorder to even see debug isns, so put
-	 them out right away.  */
-      if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
+      /* We don't want md sched reorder to even see debug and note insns,
+	 so put them out right away.  */
+      if (ready.n_ready
+	  && !NONDEBUG_INSN_P (ready_element (&ready, 0))
 	  && (*current_sched_info->schedule_more_p) ())
 	{
-	  while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+	  while (ready.n_ready && !NONDEBUG_INSN_P (ready_element (&ready, 0)))
 	    {
 	      rtx_insn *insn = ready_remove_first (&ready);
-	      gcc_assert (DEBUG_INSN_P (insn));
+	      gcc_assert (DEBUG_INSN_P (insn) || NOTE_P (insn));
 	      (*current_sched_info->begin_schedule_ready) (insn);
 	      scheduled_insns.safe_push (insn);
 	      last_scheduled_insn = insn;
@@ -7145,17 +7151,18 @@ schedule_block (basic_block *target_bb, state_t init_state)
 int
 set_priorities (rtx_insn *head, rtx_insn *tail)
 {
+  /* Since we don't skip empty blocks any more, it's possible to
+     meet a NOTE insn now; we don't need to compute priority for
+     such a block, so early return.  */
+  if (head == tail && !INSN_P (head))
+    return 1;
+
   rtx_insn *insn;
-  int n_insn;
+  int n_insn = 0;
   int sched_max_insns_priority =
 	current_sched_info->sched_max_insns_priority;
   rtx_insn *prev_head;
 
-  if (head == tail && ! INSN_P (head))
-    gcc_unreachable ();
-
-  n_insn = 0;
-
   prev_head = PREV_INSN (head);
   for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
     {
@@ -7688,7 +7695,8 @@ fix_tick_ready (rtx_insn *next)
 {
   int tick, delay;
 
-  if (!DEBUG_INSN_P (next) && !sd_lists_empty_p (next, SD_LIST_RES_BACK))
+  if (NONDEBUG_INSN_P (next)
+      && !sd_lists_empty_p (next, SD_LIST_RES_BACK))
     {
       int full_p;
       sd_iterator_def sd_it;
diff --git a/gcc/rtl.h b/gcc/rtl.h
index e4b6cc0dbb5..34b3f31d1ee 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2695,8 +2695,8 @@ do { \
 /* During sched, 1 if RTX is an insn that must be scheduled together
    with the preceding insn.  */
 #define SCHED_GROUP_P(RTX) \
-  (RTL_FLAG_CHECK4 ("SCHED_GROUP_P", (RTX), DEBUG_INSN, INSN, \
-		    JUMP_INSN, CALL_INSN)->in_struct)
+  (RTL_FLAG_CHECK5 ("SCHED_GROUP_P", (RTX), DEBUG_INSN, INSN, \
+		    JUMP_INSN, CALL_INSN, NOTE)->in_struct)
 
 /* For a SET rtx, SET_DEST is the place that is set
    and SET_SRC is the value it is set to.  */
diff --git a/gcc/sched-ebb.cc b/gcc/sched-ebb.cc
index 110fcdbca4d..1d0eeeada82 100644
--- a/gcc/sched-ebb.cc
+++ b/gcc/sched-ebb.cc
@@ -478,12 +478,10 @@ schedule_ebb (rtx_insn *head, rtx_insn *tail, bool modulo_scheduling)
      a note or two.  */
   while (head != tail)
     {
-      if (NOTE_P (head) || DEBUG_INSN_P (head))
+      if (LABEL_P (head) || NOTE_P (head) || DEBUG_INSN_P (head))
 	head = NEXT_INSN (head);
       else if (NOTE_P (tail) || DEBUG_INSN_P (tail))
 	tail = PREV_INSN (tail);
-      else if (LABEL_P (head))
-	head = NEXT_INSN (head);
       else
 	break;
     }
@@ -491,10 +489,8 @@ schedule_ebb (rtx_insn *head, rtx_insn *tail, bool modulo_scheduling)
   first_bb = BLOCK_FOR_INSN (head);
   last_bb = BLOCK_FOR_INSN (tail);
 
-  if (no_real_insns_p (head, tail))
-    return BLOCK_FOR_INSN (tail);
-
-  gcc_assert (INSN_P (head) && INSN_P (tail));
+  gcc_assert ((NOTE_P (head) && head == tail)
+	      || (INSN_P (head) && INSN_P (tail)));
 
   if (!bitmap_bit_p (&dont_calc_deps, first_bb->index))
diff --git a/gcc/sched-int.h b/gcc/sched-int.h
index 64a2f0bcff9..445308210a6 100644
--- a/gcc/sched-int.h
+++ b/gcc/sched-int.h
@@ -1397,7 +1397,6 @@ extern void free_global_sched_pressure_data (void);
 extern int haifa_classify_insn (const_rtx);
 extern void get_ebb_head_tail (basic_block, basic_block,
 			       rtx_insn **, rtx_insn **);
-extern bool no_real_insns_p (const rtx_insn *, const rtx_insn *);
 
 extern int insn_sched_cost (rtx_insn *);
 extern int dep_cost_1 (dep_t, dw_t);
diff --git a/gcc/sched-rgn.cc b/gcc/sched-rgn.cc
index e5964f54ead..bf967561fc9 100644
--- a/gcc/sched-rgn.cc
+++ b/gcc/sched-rgn.cc
@@ -228,6 +228,9 @@ static edgeset *pot_split;
 /* For every bb, a set of its ancestor edges.  */
 static edgeset *ancestor_edges;
 
+/* Indicate the bb is initially empty if set.  */
+static bitmap rgn_init_empty_bb;
+
 #define INSN_PROBABILITY(INSN) (SRC_PROB (BLOCK_TO_BB (BLOCK_NUM (INSN))))
 
 /* Speculative scheduling functions.  */
@@ -2757,10 +2760,6 @@ free_block_dependencies (int bb)
   rtx_insn *tail;
 
   get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
-
-  if (no_real_insns_p (head, tail))
-    return;
-
   sched_free_deps (head, tail, true);
 }
 
@@ -3024,9 +3023,6 @@ compute_priorities (void)
       gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
       get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
 
-      if (no_real_insns_p (head, tail))
-	continue;
-
       rgn_n_insns += set_priorities (head, tail);
     }
   current_sched_info->sched_max_insns_priority++;
@@ -3157,12 +3153,6 @@ schedule_region (int rgn)
 	  last_bb = EBB_LAST_BB (bb);
 
 	  get_ebb_head_tail (first_bb, last_bb, &head, &tail);
-
-	  if (no_real_insns_p (head, tail))
-	    {
-	      gcc_assert (first_bb == last_bb);
-	      continue;
-	    }
 	  sched_setup_bb_reg_pressure_info (first_bb, PREV_INSN (head));
 	}
     }
@@ -3178,13 +3168,6 @@ schedule_region (int rgn)
 
       get_ebb_head_tail (first_bb, last_bb, &head, &tail);
 
-      if (no_real_insns_p (head, tail))
-	{
-	  gcc_assert (first_bb == last_bb);
-	  save_state_for_fallthru_edge (last_bb, bb_state[first_bb->index]);
-	  continue;
-	}
-
       current_sched_info->prev_head = PREV_INSN (head);
       current_sched_info->next_tail = NEXT_INSN (tail);
 
@@ -3216,6 +3199,14 @@ schedule_region (int rgn)
       /* Clean up.  */
       if (current_nr_blocks > 1)
 	free_trg_info ();
+
+      /* If this now-empty block wasn't empty initially, its only NOTE
+	 was not counted when computing rgn_n_insns, so fix it up
+	 now.  */
+      if (head == tail
+	  && NOTE_P (head)
+	  && !bitmap_bit_p (rgn_init_empty_bb, bb))
+	rgn_n_insns++;
     }
 
   /* Sanity check: verify that all region insns were scheduled.  */
@@ -3448,7 +3439,16 @@ sched_rgn_local_init (int rgn)
 	    continue;
 	  FOR_EACH_EDGE (e, ei, block->succs)
 	    e->aux = NULL;
-        }
+	}
+    }
+
+  rgn_init_empty_bb = BITMAP_ALLOC (NULL);
+  for (bb = 0; bb < current_nr_blocks; bb++)
+    {
+      rtx_insn *head, *tail;
+      get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+      if (head == tail && NOTE_P (head))
+	bitmap_set_bit (rgn_init_empty_bb, bb);
     }
 }
@@ -3461,6 +3461,7 @@ sched_rgn_local_free (void)
   sbitmap_vector_free (pot_split);
   sbitmap_vector_free (ancestor_edges);
   free (rgn_edges);
+  BITMAP_FREE (rgn_init_empty_bb);
 }
 
 /* Free data computed for the finished region.  */
diff --git a/gcc/sel-sched.cc b/gcc/sel-sched.cc
index 1925f4a9461..927232bc9e7 100644
--- a/gcc/sel-sched.cc
+++ b/gcc/sel-sched.cc
@@ -7213,9 +7213,6 @@ sel_region_target_finish (bool reset_sched_cycles_p)
 
       find_ebb_boundaries (EBB_FIRST_BB (i), scheduled_blocks);
 
-      if (no_real_insns_p (current_sched_info->head, current_sched_info->tail))
-	continue;
-
       if (reset_sched_cycles_p)
 	reset_sched_cycles_in_current_ebb ();