[2/7] Inline and simplify fold_single_bit_test_into_sign_test into fold_single_bit_test

Message ID 20230520021451.1901275-3-apinski@marvell.com
State Accepted
Series: Improve do_store_flag

Checks

Context                Check    Description
snail/gcc-patch-check  success  Github commit url

Commit Message

Andrew Pinski May 20, 2023, 2:14 a.m. UTC
Since the last use of fold_single_bit_test_into_sign_test is fold_single_bit_test,
we can inline it and even simplify the inlined version. This has
no behavior change.

OK? Bootstrapped and tested on x86_64-linux.

gcc/ChangeLog:

	* expr.cc (fold_single_bit_test_into_sign_test): Inline into ...
	(fold_single_bit_test): This and simplify.
---
 gcc/expr.cc | 51 ++++++++++-----------------------------------------
 1 file changed, 10 insertions(+), 41 deletions(-)
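
For context, the helper being removed rewrites a single-bit test of the sign
bit into a signed comparison against zero, and the inlined replacement keeps
that rewrite.  Below is a minimal standalone C sketch of the underlying
equivalence; it is not part of the patch, and the function names are made up
for illustration.

#include <assert.h>
#include <stdint.h>

/* Sketch only: testing the sign bit of a 32-bit value with a mask is
   equivalent to a signed comparison against zero, which is the rewrite
   the removed helper performed.  */

static int sign_bit_set_via_mask (int32_t a)
{
  return ((uint32_t) a & 0x80000000u) != 0;   /* (A & C) != 0, C = sign bit */
}

static int sign_bit_set_via_compare (int32_t a)
{
  return a < 0;                               /* A < 0 */
}

int main (void)
{
  int32_t vals[] = { 0, 1, -1, INT32_MIN, INT32_MAX };
  for (unsigned i = 0; i < sizeof vals / sizeof vals[0]; i++)
    assert (sign_bit_set_via_mask (vals[i])
	    == sign_bit_set_via_compare (vals[i]));
  return 0;
}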
  

Comments

Jeff Law May 20, 2023, 4:47 a.m. UTC | #1
On 5/19/23 20:14, Andrew Pinski via Gcc-patches wrote:
> Since the last use of fold_single_bit_test_into_sign_test is fold_single_bit_test,
> we can inline it and even simplify the inlined version. This has
> no behavior change.
> 
> OK? Bootstrapped and tested on x86_64-linux.
> 
> gcc/ChangeLog:
> 
> 	* expr.cc (fold_single_bit_test_into_sign_test): Inline into ...
> 	(fold_single_bit_test): This and simplify.
Going to trust the inlining and simplification is really NFC.  It's not 
really obvious from the patch.

jeff
  
Jeff Law May 20, 2023, 4:48 a.m. UTC | #2
On 5/19/23 20:14, Andrew Pinski via Gcc-patches wrote:
> Since the last use of fold_single_bit_test_into_sign_test is fold_single_bit_test,
> we can inline it and even simplify the inlined version. This has
> no behavior change.
> 
> OK? Bootstrapped and tested on x86_64-linux.
> 
> gcc/ChangeLog:
> 
> 	* expr.cc (fold_single_bit_test_into_sign_test): Inline into ...
> 	(fold_single_bit_test): This and simplify.
Just to be clear, based on the NFC assumption, this is OK for the trunk.
jeff
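
On the NFC question above: the removed helper only fired when sign_bit_p
confirmed the mask was the operand's sign bit, while the inlined code tests
bitnum == TYPE_PRECISION (type) - 1.  For a single-bit mask those conditions
coincide, because such a mask is the sign bit exactly when its log2 equals the
precision minus one.  A standalone C sketch of that equivalence follows; it is
illustrative only, not GCC code, and the helper name is made up.

#include <assert.h>
#include <stdint.h>

/* Sketch only: for a single-bit (power-of-two) mask C in a type of
   PRECISION bits, C is the sign-bit mask exactly when
   log2 (C) == PRECISION - 1, which is what the inlined check
   bitnum == TYPE_PRECISION (type) - 1 tests.  */

static int mask_is_sign_bit (uint32_t c, unsigned precision)
{
  unsigned bitnum = __builtin_ctz (c);   /* log2 of the single-bit mask */
  return bitnum == precision - 1;
}

int main (void)
{
  assert (mask_is_sign_bit (0x80000000u, 32));    /* sign bit of a 32-bit type */
  assert (!mask_is_sign_bit (0x00000001u, 32));   /* bit 0 is not the sign bit */
  assert (mask_is_sign_bit (0x8000u, 16));        /* sign bit of a 16-bit type */
  return 0;
}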
  

Patch

diff --git a/gcc/expr.cc b/gcc/expr.cc
index f999f81af4a..6221b6991c5 100644
--- a/gcc/expr.cc
+++ b/gcc/expr.cc
@@ -12899,42 +12899,6 @@  maybe_optimize_sub_cmp_0 (enum tree_code code, tree *arg0, tree *arg1)
 }
 
 
-
-/* If CODE with arguments ARG0 and ARG1 represents a single bit
-   equality/inequality test, then return a simplified form of the test
-   using a sign testing.  Otherwise return NULL.  TYPE is the desired
-   result type.  */
-
-static tree
-fold_single_bit_test_into_sign_test (location_t loc,
-				     enum tree_code code, tree arg0, tree arg1,
-				     tree result_type)
-{
-  /* If this is testing a single bit, we can optimize the test.  */
-  if ((code == NE_EXPR || code == EQ_EXPR)
-      && TREE_CODE (arg0) == BIT_AND_EXPR && integer_zerop (arg1)
-      && integer_pow2p (TREE_OPERAND (arg0, 1)))
-    {
-      /* If we have (A & C) != 0 where C is the sign bit of A, convert
-	 this into A < 0.  Similarly for (A & C) == 0 into A >= 0.  */
-      tree arg00 = sign_bit_p (TREE_OPERAND (arg0, 0), TREE_OPERAND (arg0, 1));
-
-      if (arg00 != NULL_TREE
-	  /* This is only a win if casting to a signed type is cheap,
-	     i.e. when arg00's type is not a partial mode.  */
-	  && type_has_mode_precision_p (TREE_TYPE (arg00)))
-	{
-	  tree stype = signed_type_for (TREE_TYPE (arg00));
-	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
-			      result_type,
-			      fold_convert_loc (loc, stype, arg00),
-			      build_int_cst (stype, 0));
-	}
-    }
-
-  return NULL_TREE;
-}
-
 /* If CODE with arguments ARG0 and ARG1 represents a single bit
    equality/inequality test, then return a simplified form of
    the test using shifts and logical operations.  Otherwise return
@@ -12955,14 +12919,19 @@  fold_single_bit_test (location_t loc, enum tree_code code,
       scalar_int_mode operand_mode = SCALAR_INT_TYPE_MODE (type);
       int ops_unsigned;
       tree signed_type, unsigned_type, intermediate_type;
-      tree tem, one;
+      tree one;
 
       /* First, see if we can fold the single bit test into a sign-bit
 	 test.  */
-      tem = fold_single_bit_test_into_sign_test (loc, code, arg0, arg1,
-						 result_type);
-      if (tem)
-	return tem;
+      if (bitnum == TYPE_PRECISION (type) - 1
+	  && type_has_mode_precision_p (type))
+	{
+	  tree stype = signed_type_for (type);
+	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
+				  result_type,
+				  fold_convert_loc (loc, stype, inner),
+				  build_int_cst (stype, 0));
+	}
 
       /* Otherwise we have (A & C) != 0 where C is a single bit,
 	 convert that into ((A >> C2) & 1).  Where C2 = log2(C).
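
The hunk above is cut off mid-comment; the text it ends on describes the
fallback path, which turns (A & C) != 0 with a single-bit mask C into
((A >> C2) & 1), where C2 = log2 (C).  A standalone C sketch of that rewrite
(illustrative only, not GCC code; the function names are made up):

#include <assert.h>
#include <stdint.h>

/* Sketch only: (A & C) != 0 with C a single-bit mask is the same as
   ((A >> C2) & 1) where C2 = log2 (C).  */

static unsigned bit_test_via_mask (uint32_t a, uint32_t c)
{
  return (a & c) != 0;
}

static unsigned bit_test_via_shift (uint32_t a, uint32_t c)
{
  unsigned c2 = __builtin_ctz (c);   /* C2 = log2 (C) */
  return (a >> c2) & 1;
}

int main (void)
{
  assert (bit_test_via_mask (0x34, 0x10) == bit_test_via_shift (0x34, 0x10));
  assert (bit_test_via_mask (0x24, 0x10) == bit_test_via_shift (0x24, 0x10));
  return 0;
}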