From patchwork Sat May 20 02:14:49 2023
X-Patchwork-Submitter: Andrew Pinski
X-Patchwork-Id: 96716
From: Andrew Pinski
To: gcc-patches@gcc.gnu.org
CC: Andrew Pinski
Subject: [PATCH 5/7] Simplify fold_single_bit_test with respect to code
Date: Fri, 19 May 2023 19:14:49 -0700
Message-ID: <20230520021451.1901275-6-apinski@marvell.com>
In-Reply-To: <20230520021451.1901275-1-apinski@marvell.com>
References: <20230520021451.1901275-1-apinski@marvell.com>
List-Id: Gcc-patches mailing list

Since fold_single_bit_test is now only ever passed NE_EXPR or EQ_EXPR,
we can simplify it: use a gcc_assert to assert that one of those two
codes is what gets passed, and drop the enclosing if so the fold is
performed unconditionally.

OK? Bootstrapped and tested on x86_64-linux.

gcc/ChangeLog:

	* expr.cc (fold_single_bit_test): Add an assert
	and simplify based on code being NE_EXPR or EQ_EXPR.
---
 gcc/expr.cc | 108 ++++++++++++++++++++++++++--------------------------
 1 file changed, 53 insertions(+), 55 deletions(-)

diff --git a/gcc/expr.cc b/gcc/expr.cc
index 67a9f82ca17..b5bc3fabb7e 100644
--- a/gcc/expr.cc
+++ b/gcc/expr.cc
@@ -12909,72 +12909,70 @@ fold_single_bit_test (location_t loc, enum tree_code code,
 		      tree inner, int bitnum,
 		      tree result_type)
 {
-  if ((code == NE_EXPR || code == EQ_EXPR))
-    {
-      tree type = TREE_TYPE (inner);
-      scalar_int_mode operand_mode = SCALAR_INT_TYPE_MODE (type);
-      int ops_unsigned;
-      tree signed_type, unsigned_type, intermediate_type;
-      tree one;
-      gimple *inner_def;
+  gcc_assert (code == NE_EXPR || code == EQ_EXPR);
 
-      /* First, see if we can fold the single bit test into a sign-bit
-	 test.  */
-      if (bitnum == TYPE_PRECISION (type) - 1
-	  && type_has_mode_precision_p (type))
-	{
-	  tree stype = signed_type_for (type);
-	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
-				  result_type,
-				  fold_convert_loc (loc, stype, inner),
-				  build_int_cst (stype, 0));
-	}
+  tree type = TREE_TYPE (inner);
+  scalar_int_mode operand_mode = SCALAR_INT_TYPE_MODE (type);
+  int ops_unsigned;
+  tree signed_type, unsigned_type, intermediate_type;
+  tree one;
+  gimple *inner_def;
 
-      /* Otherwise we have (A & C) != 0 where C is a single bit,
-	 convert that into ((A >> C2) & 1).  Where C2 = log2(C).
-	 Similarly for (A & C) == 0.  */
+  /* First, see if we can fold the single bit test into a sign-bit
+     test.  */
+  if (bitnum == TYPE_PRECISION (type) - 1
+      && type_has_mode_precision_p (type))
+    {
+      tree stype = signed_type_for (type);
+      return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
+			      result_type,
+			      fold_convert_loc (loc, stype, inner),
+			      build_int_cst (stype, 0));
+    }
 
-      /* If INNER is a right shift of a constant and it plus BITNUM does
-	 not overflow, adjust BITNUM and INNER.  */
-      if ((inner_def = get_def_for_expr (inner, RSHIFT_EXPR))
-	  && TREE_CODE (gimple_assign_rhs2 (inner_def)) == INTEGER_CST
-	  && bitnum < TYPE_PRECISION (type)
-	  && wi::ltu_p (wi::to_wide (gimple_assign_rhs2 (inner_def)),
-			TYPE_PRECISION (type) - bitnum))
-	{
-	  bitnum += tree_to_uhwi (gimple_assign_rhs2 (inner_def));
-	  inner = gimple_assign_rhs1 (inner_def);
-	}
+  /* Otherwise we have (A & C) != 0 where C is a single bit,
+     convert that into ((A >> C2) & 1).  Where C2 = log2(C).
+     Similarly for (A & C) == 0.  */
 
-      /* If we are going to be able to omit the AND below, we must do our
-	 operations as unsigned.  If we must use the AND, we have a choice.
-	 Normally unsigned is faster, but for some machines signed is.  */
-      ops_unsigned = (load_extend_op (operand_mode) == SIGN_EXTEND
-		      && !flag_syntax_only) ? 0 : 1;
+  /* If INNER is a right shift of a constant and it plus BITNUM does
+     not overflow, adjust BITNUM and INNER.  */
+  if ((inner_def = get_def_for_expr (inner, RSHIFT_EXPR))
+      && TREE_CODE (gimple_assign_rhs2 (inner_def)) == INTEGER_CST
+      && bitnum < TYPE_PRECISION (type)
+      && wi::ltu_p (wi::to_wide (gimple_assign_rhs2 (inner_def)),
+		    TYPE_PRECISION (type) - bitnum))
+    {
+      bitnum += tree_to_uhwi (gimple_assign_rhs2 (inner_def));
+      inner = gimple_assign_rhs1 (inner_def);
+    }
 
-      signed_type = lang_hooks.types.type_for_mode (operand_mode, 0);
-      unsigned_type = lang_hooks.types.type_for_mode (operand_mode, 1);
-      intermediate_type = ops_unsigned ? unsigned_type : signed_type;
-      inner = fold_convert_loc (loc, intermediate_type, inner);
+  /* If we are going to be able to omit the AND below, we must do our
+     operations as unsigned.  If we must use the AND, we have a choice.
+     Normally unsigned is faster, but for some machines signed is.  */
+  ops_unsigned = (load_extend_op (operand_mode) == SIGN_EXTEND
+		  && !flag_syntax_only) ? 0 : 1;
 
-      if (bitnum != 0)
-	inner = build2 (RSHIFT_EXPR, intermediate_type,
-			inner, size_int (bitnum));
+  signed_type = lang_hooks.types.type_for_mode (operand_mode, 0);
+  unsigned_type = lang_hooks.types.type_for_mode (operand_mode, 1);
+  intermediate_type = ops_unsigned ? unsigned_type : signed_type;
+  inner = fold_convert_loc (loc, intermediate_type, inner);
 
-      one = build_int_cst (intermediate_type, 1);
+  if (bitnum != 0)
+    inner = build2 (RSHIFT_EXPR, intermediate_type,
+		    inner, size_int (bitnum));
 
-      if (code == EQ_EXPR)
-	inner = fold_build2_loc (loc, BIT_XOR_EXPR, intermediate_type, inner, one);
+  one = build_int_cst (intermediate_type, 1);
 
-      /* Put the AND last so it can combine with more things.  */
-      inner = build2 (BIT_AND_EXPR, intermediate_type, inner, one);
+  if (code == EQ_EXPR)
+    inner = fold_build2_loc (loc, BIT_XOR_EXPR, intermediate_type, inner, one);
 
-      /* Make sure to return the proper type.  */
-      inner = fold_convert_loc (loc, result_type, inner);
+  /* Put the AND last so it can combine with more things.  */
+  inner = build2 (BIT_AND_EXPR, intermediate_type, inner, one);
 
-      return inner;
-    }
-  return NULL_TREE;
+  /* Make sure to return the proper type.  */
+  inner = fold_convert_loc (loc, result_type, inner);
+
+  return inner;
 }
 
 /* Generate code to calculate OPS, and exploded expression
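
For reference, below is a minimal stand-alone C sketch of the
source-level identities this fold relies on: (A & C) != 0 with C a
single bit becomes ((A >> C2) & 1), the == 0 form gets an XOR with 1
before the final AND (matching the "put the AND last" comment above),
and testing the top bit turns into a signed comparison against zero.
The helper names (bit_test_ne, bit_test_eq, sign_bit_test_ne) are
hypothetical and only illustrate the transform; they are not part of
GCC.

#include <assert.h>

/* (A & (1 << BITNUM)) != 0  ==>  (A >> BITNUM) & 1.  */
static unsigned
bit_test_ne (unsigned a, int bitnum)
{
  return (a >> bitnum) & 1u;
}

/* (A & (1 << BITNUM)) == 0  ==>  ((A >> BITNUM) ^ 1) & 1.
   XOR first, AND last, as in the patch.  */
static unsigned
bit_test_eq (unsigned a, int bitnum)
{
  return ((a >> bitnum) ^ 1u) & 1u;
}

/* Sign-bit special case: testing bit TYPE_PRECISION - 1 is just a
   signed compare against zero.  (The unsigned-to-int cast is
   implementation-defined in ISO C but well-defined for GCC.)  */
static int
sign_bit_test_ne (unsigned a)
{
  return (int) a < 0;
}

int
main (void)
{
  for (unsigned a = 0; a < 1024; a++)
    for (int b = 0; b < 10; b++)
      {
	assert (((a & (1u << b)) != 0) == bit_test_ne (a, b));
	assert (((a & (1u << b)) == 0) == bit_test_eq (a, b));
      }
  assert (((0x80000000u & 0x80000000u) != 0) == sign_bit_test_ne (0x80000000u));
  assert (((0x7fffffffu & 0x80000000u) != 0) == sign_bit_test_ne (0x7fffffffu));
  return 0;
}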