From patchwork Fri Jan 27 22:02:50 2023
X-Patchwork-Submitter: Patrick Palka
X-Patchwork-Id: 49702
To: gcc-patches@gcc.gnu.org
Cc: jason@redhat.com, Patrick Palka <ppalka@redhat.com>
Subject: [PATCH 2/2] c++: speculative constexpr and is_constant_evaluated [PR108243]
Date: Fri, 27 Jan 2023 17:02:50 -0500
Message-Id: <20230127220250.1896137-2-ppalka@redhat.com>
In-Reply-To: <20230127220250.1896137-1-ppalka@redhat.com>
References: <20230127220250.1896137-1-ppalka@redhat.com>
From: Patrick Palka <ppalka@redhat.com>

This PR illustrates that __builtin_is_constant_evaluated currently acts
as an optimization barrier for our speculative constexpr evaluation,
since we don't want to prematurely fold the builtin to false if the
expression in question would be later manifestly constant evaluated (in
which case it must be folded to true).
This patch fixes this by permitting __builtin_is_constant_evaluated
to get folded as false during cp_fold_function, since at that point
we're sure we're doing manifestly constant evaluation.  To that end
we add a flags parameter to cp_fold that controls what mce_value the
CALL_EXPR case passes to maybe_constant_value.

Bootstrapped and regtested on x86_64-pc-linux-gnu, does this look OK
for trunk?

	PR c++/108243

gcc/cp/ChangeLog:

	* cp-gimplify.cc (enum fold_flags): Define.
	(cp_fold_data::genericize): Replace this data member with ...
	(cp_fold_data::fold_flags): ... this.
	(cp_fold_r): Adjust cp_fold_data use and cp_fold calls.
	(cp_fold_function): Likewise.
	(cp_fold_maybe_rvalue): Likewise.
	(cp_fully_fold_init): Likewise.
	(cp_fold): Add fold_flags parameter.  Don't cache if flags
	isn't empty.
	<CALL_EXPR>: Pass mce_false to maybe_constant_value
	if ff_genericize is set.

gcc/testsuite/ChangeLog:

	* g++.dg/opt/pr108243.C: New test.
---
 gcc/cp/cp-gimplify.cc               | 76 ++++++++++++++++++-----------
 gcc/testsuite/g++.dg/opt/pr108243.C | 29 +++++++++++
 2 files changed, 76 insertions(+), 29 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/opt/pr108243.C

diff --git a/gcc/cp/cp-gimplify.cc b/gcc/cp/cp-gimplify.cc
index a35cedd05cc..d023a63768f 100644
--- a/gcc/cp/cp-gimplify.cc
+++ b/gcc/cp/cp-gimplify.cc
@@ -43,12 +43,20 @@ along with GCC; see the file COPYING3.  If not see
 #include "omp-general.h"
 #include "opts.h"
 
+/* Flags for cp_fold and cp_fold_r.  */
+
+enum fold_flags {
+  ff_none = 0,
+  /* Whether we're being called from cp_fold_function.  */
+  ff_genericize = 1 << 0,
+};
+
 /* Forward declarations.  */
 
 static tree cp_genericize_r (tree *, int *, void *);
 static tree cp_fold_r (tree *, int *, void *);
 static void cp_genericize_tree (tree*, bool);
-static tree cp_fold (tree);
+static tree cp_fold (tree, fold_flags);
 
 /* Genericize a TRY_BLOCK.  */
 
@@ -996,9 +1004,8 @@ struct cp_genericize_data
 struct cp_fold_data
 {
   hash_set<tree> pset;
-  bool genericize; // called from cp_fold_function?
-
-  cp_fold_data (bool g): genericize (g) {}
+  fold_flags flags;
+
+  cp_fold_data (fold_flags flags): flags (flags) {}
 };
 
 static tree
@@ -1039,7 +1046,7 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
       break;
     }
 
-  *stmt_p = stmt = cp_fold (*stmt_p);
+  *stmt_p = stmt = cp_fold (*stmt_p, data->flags);
 
   if (data->pset.add (stmt))
     {
@@ -1119,12 +1126,12 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
	 here rather than in cp_genericize to avoid problems with the
	 invisible reference transition.  */
     case INIT_EXPR:
-      if (data->genericize)
+      if (data->flags & ff_genericize)
	cp_genericize_init_expr (stmt_p);
       break;
 
     case TARGET_EXPR:
-      if (data->genericize)
+      if (data->flags & ff_genericize)
	cp_genericize_target_expr (stmt_p);
 
       /* Folding might replace e.g. a COND_EXPR with a TARGET_EXPR; in
@@ -1157,7 +1164,7 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
 void
 cp_fold_function (tree fndecl)
 {
-  cp_fold_data data (/*genericize*/true);
+  cp_fold_data data (ff_genericize);
   cp_walk_tree (&DECL_SAVED_TREE (fndecl), cp_fold_r, &data, NULL);
 }
 
@@ -2375,7 +2382,7 @@ cp_fold_maybe_rvalue (tree x, bool rval)
 {
   while (true)
     {
-      x = cp_fold (x);
+      x = cp_fold (x, ff_none);
       if (rval)
	x = mark_rvalue_use (x);
       if (rval && DECL_P (x)
@@ -2434,7 +2441,7 @@ cp_fully_fold_init (tree x)
   if (processing_template_decl)
     return x;
   x = cp_fully_fold (x);
-  cp_fold_data data (/*genericize*/false);
+  cp_fold_data data (ff_none);
   cp_walk_tree (&x, cp_fold_r, &data, NULL);
   return x;
 }
@@ -2469,7 +2476,7 @@ clear_fold_cache (void)
    Function returns X or its folded variant.
  */
 
 static tree
-cp_fold (tree x)
+cp_fold (tree x, fold_flags flags)
 {
   tree op0, op1, op2, op3;
   tree org_x = x, r = NULL_TREE;
@@ -2490,8 +2497,11 @@ cp_fold (tree x)
   if (fold_cache == NULL)
     fold_cache = hash_map<tree, tree>::create_ggc (101);
 
-  if (tree *cached = fold_cache->get (x))
-    return *cached;
+  bool cache_p = (flags == ff_none);
+
+  if (cache_p)
+    if (tree *cached = fold_cache->get (x))
+      return *cached;
 
   uid_sensitive_constexpr_evaluation_checker c;
 
@@ -2526,7 +2536,7 @@ cp_fold (tree x)
	 Don't create a new tree if op0 != TREE_OPERAND (x, 0), the
	 folding of the operand should be in the caches and if in cp_fold_r
	 it will modify it in place.  */
-      op0 = cp_fold (TREE_OPERAND (x, 0));
+      op0 = cp_fold (TREE_OPERAND (x, 0), flags);
       if (op0 == error_mark_node)
	x = error_mark_node;
       break;
@@ -2571,7 +2581,7 @@ cp_fold (tree x)
	{
	  tree p = maybe_undo_parenthesized_ref (x);
	  if (p != x)
-	    return cp_fold (p);
+	    return cp_fold (p, flags);
	}
       goto unary;
 
@@ -2763,8 +2773,8 @@ cp_fold (tree x)
     case COND_EXPR:
       loc = EXPR_LOCATION (x);
       op0 = cp_fold_rvalue (TREE_OPERAND (x, 0));
-      op1 = cp_fold (TREE_OPERAND (x, 1));
-      op2 = cp_fold (TREE_OPERAND (x, 2));
+      op1 = cp_fold (TREE_OPERAND (x, 1), flags);
+      op2 = cp_fold (TREE_OPERAND (x, 2), flags);
 
       if (TREE_CODE (TREE_TYPE (x)) == BOOLEAN_TYPE)
	{
@@ -2854,7 +2864,7 @@ cp_fold (tree x)
	    {
	      if (!same_type_p (TREE_TYPE (x), TREE_TYPE (r)))
		r = build_nop (TREE_TYPE (x), r);
-	      x = cp_fold (r);
+	      x = cp_fold (r, flags);
	      break;
	    }
	}
@@ -2908,7 +2918,7 @@ cp_fold (tree x)
	int m = call_expr_nargs (x);
	for (int i = 0; i < m; i++)
	  {
-	    r = cp_fold (CALL_EXPR_ARG (x, i));
+	    r = cp_fold (CALL_EXPR_ARG (x, i), flags);
	    if (r != CALL_EXPR_ARG (x, i))
	      {
		if (r == error_mark_node)
@@ -2931,7 +2941,7 @@ cp_fold (tree x)
 
	if (TREE_CODE (r) != CALL_EXPR)
	  {
-	    x = cp_fold (r);
+	    x = cp_fold (r, flags);
	    break;
	  }
 
@@ -2944,7 +2954,15 @@ cp_fold (tree x)
	   constant, but the call followed by an INDIRECT_REF is.
  */
	if (callee && DECL_DECLARED_CONSTEXPR_P (callee)
	    && !flag_no_inline)
-	  r = maybe_constant_value (x);
+	  {
+	    mce_value manifestly_const_eval = mce_unknown;
+	    if (flags & ff_genericize)
+	      /* At genericization time it's safe to fold
+		 __builtin_is_constant_evaluated to false.  */
+	      manifestly_const_eval = mce_false;
+	    r = maybe_constant_value (x, /*decl=*/NULL_TREE,
+				      manifestly_const_eval);
+	  }
	optimize = sv;
 
	if (TREE_CODE (r) != CALL_EXPR)
@@ -2971,7 +2989,7 @@ cp_fold (tree x)
	vec<constructor_elt, va_gc> *nelts = NULL;
	FOR_EACH_VEC_SAFE_ELT (elts, i, p)
	  {
-	    tree op = cp_fold (p->value);
+	    tree op = cp_fold (p->value, flags);
	    if (op != p->value)
	      {
		if (op == error_mark_node)
@@ -3002,7 +3020,7 @@ cp_fold (tree x)
 
	for (int i = 0; i < n; i++)
	  {
-	    tree op = cp_fold (TREE_VEC_ELT (x, i));
+	    tree op = cp_fold (TREE_VEC_ELT (x, i), flags);
	    if (op != TREE_VEC_ELT (x, i))
	      {
		if (!changed)
@@ -3019,10 +3037,10 @@ cp_fold (tree x)
     case ARRAY_RANGE_REF:
 
       loc = EXPR_LOCATION (x);
-      op0 = cp_fold (TREE_OPERAND (x, 0));
-      op1 = cp_fold (TREE_OPERAND (x, 1));
-      op2 = cp_fold (TREE_OPERAND (x, 2));
-      op3 = cp_fold (TREE_OPERAND (x, 3));
+      op0 = cp_fold (TREE_OPERAND (x, 0), flags);
+      op1 = cp_fold (TREE_OPERAND (x, 1), flags);
+      op2 = cp_fold (TREE_OPERAND (x, 2), flags);
+      op3 = cp_fold (TREE_OPERAND (x, 3), flags);
 
       if (op0 != TREE_OPERAND (x, 0)
	  || op1 != TREE_OPERAND (x, 1)
@@ -3050,7 +3068,7 @@ cp_fold (tree x)
       /* A SAVE_EXPR might contain e.g. (0 * i) + (0 * j), which, after
	 folding, evaluates to an invariant.  In that case no need to wrap
	 this folded tree with a SAVE_EXPR.  */
-      r = cp_fold (TREE_OPERAND (x, 0));
+      r = cp_fold (TREE_OPERAND (x, 0), flags);
       if (tree_invariant_p (r))
	x = r;
       break;
@@ -3069,7 +3087,7 @@ cp_fold (tree x)
       copy_warning (x, org_x);
     }
 
-  if (!c.evaluation_restricted_p ())
+  if (cache_p && !c.evaluation_restricted_p ())
     {
       fold_cache->put (org_x, x);
       /* Prevent that we try to fold an already folded result again.
  */
diff --git a/gcc/testsuite/g++.dg/opt/pr108243.C b/gcc/testsuite/g++.dg/opt/pr108243.C
new file mode 100644
index 00000000000..4c45dbba13c
--- /dev/null
+++ b/gcc/testsuite/g++.dg/opt/pr108243.C
@@ -0,0 +1,29 @@
+// PR c++/108243
+// { dg-do compile { target c++11 } }
+// { dg-additional-options "-O -fdump-tree-original" }
+
+constexpr int foo() {
+  return __builtin_is_constant_evaluated() + 1;
+}
+
+#if __cpp_if_consteval
+constexpr int bar() {
+  if consteval {
+    return 5;
+  } else {
+    return 4;
+  }
+}
+#endif
+
+int p, q;
+
+int main() {
+  p = foo();
+#if __cpp_if_consteval
+  q = bar();
+#endif
+}
+
+// { dg-final { scan-tree-dump-not "= foo" "original" } }
+// { dg-final { scan-tree-dump-not "= bar" "original" } }