Rename nonzero_bits to known_zero_bits.

Message ID 20221021131426.308205-1-aldyh@redhat.com
State Accepted
Series Rename nonzero_bits to known_zero_bits.

Checks

Context Check Description
snail/gcc-patch-check success Github commit url

Commit Message

Aldy Hernandez Oct. 21, 2022, 1:14 p.m. UTC
  The name nonzero_bits is confusing.  We're not tracking nonzero bits.
We're tracking known-zero bits, or at worst "maybe nonzero bits".  The
only thing the "nonzero" bits tell us for sure is which bits are zero,
which are known to be 0.

I know we've been carrying around this name forever, but the fact that
both of the maintainers of the code *HATE* it should be telling.
We'd also like to track known-one bits in the irange, so it's best to
keep the nomenclature consistent.

Andrew, are you ok with this naming, or would you prefer something
else?

gcc/ChangeLog:

	* asan.cc (handle_builtin_alloca): Rename *nonzero* to *known_zero*.
	* fold-const.cc (expr_not_equal_to): Same.
	(tree_nonzero_bits): Same.
	* gimple-range-op.cc: Same.
	* ipa-cp.cc (ipcp_bits_lattice::get_value_and_mask): Same.
	* ipa-prop.cc (ipa_compute_jump_functions_for_edge): Same.
	(ipcp_update_bits): Same.
	* match.pd: Same.
	* range-op.cc (operator_lt::fold_range): Same.
	(operator_cast::fold_range): Same.
	(operator_bitwise_and::fold_range): Same.
	(set_nonzero_range_from_mask): Same.
	(set_known_zero_range_from_mask): Same.
	(operator_bitwise_and::simple_op1_range_solver): Same.
	(operator_bitwise_and::op1_range): Same.
	(range_op_cast_tests): Same.
	(range_op_bitwise_and_tests): Same.
	* tree-data-ref.cc (split_constant_offset): Same.
	* tree-ssa-ccp.cc (get_default_value): Same.
	(ccp_finalize): Same.
	(evaluate_stmt): Same.
	* tree-ssa-dom.cc
	(dom_opt_dom_walker::set_global_ranges_from_unreachable_edges): Same.
	* tree-ssa-reassoc.cc (optimize_range_tests_var_bound): Same.
	* tree-ssanames.cc (set_nonzero_bits): Same.
	(set_known_zero_bits): Same.
	(get_nonzero_bits): Same.
	(get_known_zero_bits): Same.
	(ssa_name_has_boolean_range): Same.
	* tree-ssanames.h (set_nonzero_bits): Same.
	(get_nonzero_bits): Same.
	(set_known_zero_bits): Same.
	(get_known_zero_bits): Same.
	* tree-vect-patterns.cc (vect_get_range_info): Same.
	* tree-vrp.cc (maybe_set_nonzero_bits): Same.
	(maybe_set_known_zero_bits): Same.
	(vrp_asserts::remove_range_assertions): Same.
	* tree-vrp.h (maybe_set_nonzero_bits): Same.
	(maybe_set_known_zero_bits): Same.
	* tree.cc (tree_ctz): Same.
	* value-range-pretty-print.cc
	(vrange_printer::print_irange_bitmasks): Same.
	* value-range-storage.cc (irange_storage_slot::set_irange): Same.
	(irange_storage_slot::get_irange): Same.
	(irange_storage_slot::dump): Same.
	* value-range-storage.h: Same.
	* value-range.cc (irange::operator=): Same.
	(irange::copy_to_legacy): Same.
	(irange::irange_set): Same.
	(irange::irange_set_anti_range): Same.
	(irange::set): Same.
	(irange::verify_range): Same.
	(irange::legacy_equal_p): Same.
	(irange::operator==): Same.
	(irange::contains_p): Same.
	(irange::irange_single_pair_union): Same.
	(irange::irange_union): Same.
	(irange::irange_intersect): Same.
	(irange::invert): Same.
	(irange::get_nonzero_bits_from_range): Same.
	(irange::get_known_zero_bits_from_range): Same.
	(irange::set_range_from_nonzero_bits): Same.
	(irange::set_range_from_known_zero_bits): Same.
	(irange::set_nonzero_bits): Same.
	(irange::set_known_zero_bits): Same.
	(irange::get_nonzero_bits): Same.
	(irange::get_known_zero_bits): Same.
	(irange::intersect_nonzero_bits): Same.
	(irange::intersect_known_zero_bits): Same.
	(irange::union_nonzero_bits): Same.
	(irange::union_known_zero_bits): Same.
	(range_tests_nonzero_bits): Same.
	* value-range.h (irange::varying_compatible_p): Same.
	(gt_ggc_mx): Same.
	(gt_pch_nx): Same.
	(irange::set_undefined): Same.
	(irange::set_varying): Same.
---
 gcc/asan.cc                     |   2 +-
 gcc/fold-const.cc               |   4 +-
 gcc/gimple-range-op.cc          |   2 +-
 gcc/ipa-cp.cc                   |   2 +-
 gcc/ipa-prop.cc                 |   4 +-
 gcc/match.pd                    |  14 +--
 gcc/range-op.cc                 |  28 +++---
 gcc/tree-data-ref.cc            |   2 +-
 gcc/tree-ssa-ccp.cc             |   8 +-
 gcc/tree-ssa-dom.cc             |   2 +-
 gcc/tree-ssa-reassoc.cc         |   4 +-
 gcc/tree-ssanames.cc            |  14 +--
 gcc/tree-ssanames.h             |   4 +-
 gcc/tree-vect-patterns.cc       |   2 +-
 gcc/tree-vrp.cc                 |   6 +-
 gcc/tree-vrp.h                  |   2 +-
 gcc/tree.cc                     |   2 +-
 gcc/value-range-pretty-print.cc |   2 +-
 gcc/value-range-storage.cc      |   6 +-
 gcc/value-range-storage.h       |   2 +-
 gcc/value-range.cc              | 148 ++++++++++++++++----------------
 gcc/value-range.h               |  36 ++++----
 22 files changed, 148 insertions(+), 148 deletions(-)
  

Comments

Segher Boessenkool Oct. 21, 2022, 4:45 p.m. UTC | #1
Hi!

On Fri, Oct 21, 2022 at 03:14:26PM +0200, Aldy Hernandez via Gcc-patches wrote:
> The name nonzero_bits is confusing.  We're not tracking nonzero bits.
> We're tracking known-zero bits, or at the worst we're tracking "maybe
> nonzero bits".  But really, the only thing we're sure about in the
> "nonzero" bits are the bits that are zero, which are known to be 0.
> We're not tracking nonzero bits.

Indeed.

> I know we've been carrying around this name forever, but the fact that
> both of the maintainers of the code *HATE* it, should be telling.
> Also, we'd also like to track known-one bits in the irange, so it's
> best to keep the nomenclature consistent.

And that as well.

However:

> 	* asan.cc (handle_builtin_alloca): Rename *nonzero* to *known_zero*.

Our "nonzero" means "not known to be zero", not "known to be zero", so
this renaming makes it worse than it was.  Rename it to
"not_known_zero", make that a thin wrapper around a new "known_zero",
and slowly get rid of not_known_zero?

> --- a/gcc/asan.cc
> +++ b/gcc/asan.cc
> @@ -816,7 +816,7 @@ handle_builtin_alloca (gcall *call, gimple_stmt_iterator *iter)
>    tree redzone_size = build_int_cst (size_type_node, ASAN_RED_ZONE_SIZE);
>  
>    /* Extract lower bits from old_size.  */
> -  wide_int size_nonzero_bits = get_nonzero_bits (old_size);
> +  wide_int size_nonzero_bits = get_known_zero_bits (old_size);

Such variables should also be renamed :-(


Segher
  
Jakub Jelinek Oct. 21, 2022, 4:51 p.m. UTC | #2
On Fri, Oct 21, 2022 at 11:45:33AM -0500, Segher Boessenkool wrote:
> On Fri, Oct 21, 2022 at 03:14:26PM +0200, Aldy Hernandez via Gcc-patches wrote:
> > The name nonzero_bits is confusing.  We're not tracking nonzero bits.
> > We're tracking known-zero bits, or at the worst we're tracking "maybe
> > nonzero bits".  But really, the only thing we're sure about in the
> > "nonzero" bits are the bits that are zero, which are known to be 0.
> > We're not tracking nonzero bits.
> 
> Indeed.
> 
> > I know we've been carrying around this name forever, but the fact that
> > both of the maintainers of the code *HATE* it, should be telling.
> > Also, we'd also like to track known-one bits in the irange, so it's
> > best to keep the nomenclature consistent.
> 
> And that as well.
> 
> However:
> 
> > 	* asan.cc (handle_builtin_alloca): Rename *nonzero* to *known_zero*.
> 
> Our "nonzero" means "not known to be zero", not "known to be zero", so
> this renaming makes it worse than it was.  Rename it to

Agreed.

I think maybe_nonzero_bits would be fine.

Anyway, the reason it is called this way is that we have similar APIs
on the RTL side, nonzero_bits* in rtlanal.cc.
So if we rename, it should be renamed consistently.

> "not_known_zero", make that a thin wrapper around a new "known_zero",
> and slowly get rid of not_known_zero?

	Jakub
  
Jakub Jelinek Oct. 21, 2022, 4:54 p.m. UTC | #3
On Fri, Oct 21, 2022 at 06:51:19PM +0200, Jakub Jelinek wrote:
> Agreed.
> 
> I think maybe_nonzero_bits would be fine.

Or yet another option is to change what we track: instead of having
just one bitmask, have two as tree-ssa-ccp.cc does -- one bitmask says
which bits are known to always be the same, and the other specifies
the values of those bits.
"For X with a CONSTANT lattice value X & ~mask == value & ~mask.  The
zero bits in the mask cover constant values.  The ones mean no
information."

	Jakub
  
Segher Boessenkool Oct. 21, 2022, 5:44 p.m. UTC | #4
On Fri, Oct 21, 2022 at 06:51:17PM +0200, Jakub Jelinek wrote:
> On Fri, Oct 21, 2022 at 11:45:33AM -0500, Segher Boessenkool wrote:
> > On Fri, Oct 21, 2022 at 03:14:26PM +0200, Aldy Hernandez via Gcc-patches wrote:
> > > 	* asan.cc (handle_builtin_alloca): Rename *nonzero* to *known_zero*.
> > 
> > Our "nonzero" means "not known to be zero", not "known to be zero", so
> > this renaming makes it worse than it was.  Rename it to
> 
> Agreed.
> 
> I think maybe_nonzero_bits would be fine.

Yes, but the shorter name known_zero is much better.  Converting to
that is a bit more work and cannot really be mechanical: code
simplifications are needed to make things better instead of adding
another layer of double negations, and variable names and comments
should be changed as well.

> Anyway, the reason it is called this way is that we have similar APIs
> on the RTL side, nonzero_bits* in rtlanal.cc.

I am well aware ;-)

> So if we rename, it should be renamed consistently.

Yes.


Segher
  
Segher Boessenkool Oct. 21, 2022, 6 p.m. UTC | #5
On Fri, Oct 21, 2022 at 06:54:32PM +0200, Jakub Jelinek wrote:
> On Fri, Oct 21, 2022 at 06:51:19PM +0200, Jakub Jelinek wrote:
> > Agreed.
> > 
> > I think maybe_nonzero_bits would be fine.
> 
> Or yet another option is to change what we track and instead of
> having just one bitmask have 2 as tree-ssa-ccp.cc does,
> one bitmask says which bits are known to be always the same
> and the other which specifies the values of those bits.
> "For X with a CONSTANT lattice value X & ~mask == value & ~mask.  The
> zero bits in the mask cover constant values.  The ones mean no
> information."

I am still working on making the RTL nonzero_bits use DF (and indeed I
do a known_zero instead :-) ).  This makes the special version in
combine unnecessary: instead of working better than the generic version
it is strictly weaker then.  This change then makes it possible to use
nonzero_bits in instruction conditions (without causing ICEs as now --
passes after combine return a subset of the nonzero_bits the version in
combine does, which can make insns no longer match in later passes).

My fear is tracking twice as many bits might become expensive.  OTOH
ideally we can get rid of combine's reg_stat completely at some point
in the future (which has all the same problems as combine's version of
nonzero_bits: the values it returns depend on the order combine tried
possible combinations).

Storage requirements are the same for known_zero_bits and known_one_bits
vs. known_bits and known_bit_values, but the latter is a bit more
costly to compute and, more importantly, usually a lot less convenient
in use.  (A third option is known_bits and known_zero_bits?)


Segher
  
Richard Biener Oct. 24, 2022, 7:21 a.m. UTC | #6
On Fri, Oct 21, 2022 at 3:15 PM Aldy Hernandez via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> The name nonzero_bits is confusing.  We're not tracking nonzero bits.
> We're tracking known-zero bits, or at the worst we're tracking "maybe
> nonzero bits".  But really, the only thing we're sure about in the
> "nonzero" bits are the bits that are zero, which are known to be 0.
> We're not tracking nonzero bits.
>
> I know we've been carrying around this name forever, but the fact that
> both of the maintainers of the code *HATE* it, should be telling.
> Also, we'd also like to track known-one bits in the irange, so it's
> best to keep the nomenclature consistent.
>
> Andrew, are you ok with this naming, or would you prefer something
> else?

But it's the same as on RTL.  And on release branches.  But yes,
it's maybe_nonzero_bits.  Ideally we'd track known/unknown_bits
(both zero and one) instead.  bit-CCP already computes that but throws
away the ones:

          unsigned int precision = TYPE_PRECISION (TREE_TYPE (val->value));
          wide_int nonzero_bits
            = (wide_int::from (val->mask, precision, UNSIGNED)
               | wi::to_wide (val->value));
          nonzero_bits &= get_nonzero_bits (name);
          set_nonzero_bits (name, nonzero_bits);

so I think instead of renaming can you see what it takes to also record known
set bits?  (yeah, needs two masks instead of one in the storage)

> [ChangeLog and diffstat snipped]
>
> diff --git a/gcc/asan.cc b/gcc/asan.cc
> index 8276f12cc69..9960803b99f 100644
> --- a/gcc/asan.cc
> +++ b/gcc/asan.cc
> @@ -816,7 +816,7 @@ handle_builtin_alloca (gcall *call, gimple_stmt_iterator *iter)
>    tree redzone_size = build_int_cst (size_type_node, ASAN_RED_ZONE_SIZE);
>
>    /* Extract lower bits from old_size.  */
> -  wide_int size_nonzero_bits = get_nonzero_bits (old_size);
> +  wide_int size_nonzero_bits = get_known_zero_bits (old_size);
>    wide_int rz_mask
>      = wi::uhwi (redzone_mask, wi::get_precision (size_nonzero_bits));
>    wide_int old_size_lower_bits = wi::bit_and (size_nonzero_bits, rz_mask);
> diff --git a/gcc/fold-const.cc b/gcc/fold-const.cc
> index 9f7beae14e5..c85231b4ca1 100644
> --- a/gcc/fold-const.cc
> +++ b/gcc/fold-const.cc
> @@ -10815,7 +10815,7 @@ expr_not_equal_to (tree t, const wide_int &w)
>         return true;
>        /* If T has some known zero bits and W has any of those bits set,
>          then T is known not to be equal to W.  */
> -      if (wi::ne_p (wi::zext (wi::bit_and_not (w, get_nonzero_bits (t)),
> +      if (wi::ne_p (wi::zext (wi::bit_and_not (w, get_known_zero_bits (t)),
>                               TYPE_PRECISION (TREE_TYPE (t))), 0))
>         return true;
>        return false;
> @@ -16508,7 +16508,7 @@ tree_nonzero_bits (const_tree t)
>      case INTEGER_CST:
>        return wi::to_wide (t);
>      case SSA_NAME:
> -      return get_nonzero_bits (t);
> +      return get_known_zero_bits (t);
>      case NON_LVALUE_EXPR:
>      case SAVE_EXPR:
>        return tree_nonzero_bits (TREE_OPERAND (t, 0));
> diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
> index 7764166d5fb..90c6f7b3fd9 100644
> --- a/gcc/gimple-range-op.cc
> +++ b/gcc/gimple-range-op.cc
> @@ -477,7 +477,7 @@ public:
>      if (lh.undefined_p ())
>        return false;
>      unsigned prec = TYPE_PRECISION (type);
> -    wide_int nz = lh.get_nonzero_bits ();
> +    wide_int nz = lh.get_known_zero_bits ();
>      wide_int pop = wi::shwi (wi::popcount (nz), prec);
>      // Calculating the popcount of a singleton is trivial.
>      if (lh.singleton_p ())
> diff --git a/gcc/ipa-cp.cc b/gcc/ipa-cp.cc
> index d2bcd5e5e69..4ba7ef878ba 100644
> --- a/gcc/ipa-cp.cc
> +++ b/gcc/ipa-cp.cc
> @@ -1119,7 +1119,7 @@ ipcp_bits_lattice::known_nonzero_p () const
>  void
>  ipcp_bits_lattice::get_value_and_mask (tree operand, widest_int *valuep, widest_int *maskp)
>  {
> -  wide_int get_nonzero_bits (const_tree);
> +  wide_int get_known_zero_bits (const_tree);
>
>    if (TREE_CODE (operand) == INTEGER_CST)
>      {
> diff --git a/gcc/ipa-prop.cc b/gcc/ipa-prop.cc
> index e6cf25591b3..e3cd5cf6415 100644
> --- a/gcc/ipa-prop.cc
> +++ b/gcc/ipa-prop.cc
> @@ -2331,7 +2331,7 @@ ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
>         {
>           if (TREE_CODE (arg) == SSA_NAME)
>             ipa_set_jfunc_bits (jfunc, 0,
> -                               widest_int::from (get_nonzero_bits (arg),
> +                               widest_int::from (get_known_zero_bits (arg),
>                                                   TYPE_SIGN (TREE_TYPE (arg))));
>           else
>             ipa_set_jfunc_bits (jfunc, wi::to_widest (arg), 0);
> @@ -5816,7 +5816,7 @@ ipcp_update_bits (struct cgraph_node *node)
>
>           wide_int nonzero_bits = wide_int::from (bits[i]->mask, prec, UNSIGNED)
>                                   | wide_int::from (bits[i]->value, prec, sgn);
> -         set_nonzero_bits (ddef, nonzero_bits);
> +         set_known_zero_bits (ddef, nonzero_bits);
>         }
>        else
>         {
> diff --git a/gcc/match.pd b/gcc/match.pd
> index 194ba8f5188..0f58f1ad2ae 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -1199,7 +1199,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (simplify
>   (bit_and (bit_not SSA_NAME@0) INTEGER_CST@1)
>   (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -      && wi::bit_and_not (get_nonzero_bits (@0), wi::to_wide (@1)) == 0)
> +      && wi::bit_and_not (get_known_zero_bits (@0), wi::to_wide (@1)) == 0)
>    (bit_xor @0 @1)))
>
>  /* For constants M and N, if M == (1LL << cst) - 1 && (N & M) == M,
> @@ -1317,7 +1317,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (simplify
>   (bit_and SSA_NAME@0 INTEGER_CST@1)
>   (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -      && wi::bit_and_not (get_nonzero_bits (@0), wi::to_wide (@1)) == 0)
> +      && wi::bit_and_not (get_known_zero_bits (@0), wi::to_wide (@1)) == 0)
>    @0))
>  #endif
>
> @@ -2286,7 +2286,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>    (if (TREE_INT_CST_LOW (@1) & 1)
>     { constant_boolean_node (cmp == NE_EXPR, type); })))
>
> -/* Arguments on which one can call get_nonzero_bits to get the bits
> +/* Arguments on which one can call get_known_zero_bits to get the bits
>     possibly set.  */
>  (match with_possible_nonzero_bits
>   INTEGER_CST@0)
> @@ -2300,7 +2300,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>   (bit_and:c with_possible_nonzero_bits@0 @2))
>
>  /* Same for bits that are known to be set, but we do not have
> -   an equivalent to get_nonzero_bits yet.  */
> +   an equivalent to get_known_zero_bits yet.  */
>  (match (with_certain_nonzero_bits2 @0)
>   INTEGER_CST@0)
>  (match (with_certain_nonzero_bits2 @0)
> @@ -2310,7 +2310,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (for cmp (eq ne)
>   (simplify
>    (cmp:c (with_possible_nonzero_bits2 @0) (with_certain_nonzero_bits2 @1))
> -  (if (wi::bit_and_not (wi::to_wide (@1), get_nonzero_bits (@0)) != 0)
> +  (if (wi::bit_and_not (wi::to_wide (@1), get_known_zero_bits (@0)) != 0)
>     { constant_boolean_node (cmp == NE_EXPR, type); })))
>
>  /* ((X inner_op C0) outer_op C1)
> @@ -2336,7 +2336,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>      wide_int cst_emit;
>
>      if (TREE_CODE (@2) == SSA_NAME)
> -      zero_mask_not = get_nonzero_bits (@2);
> +      zero_mask_not = get_known_zero_bits (@2);
>      else
>        fail = true;
>
> @@ -3562,7 +3562,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>        int width = ceil_log2 (element_precision (TREE_TYPE (@0)));
>        int prec = TYPE_PRECISION (TREE_TYPE (@1));
>       }
> -     (if ((get_nonzero_bits (@1) & wi::mask (width, false, prec)) == 0)
> +     (if ((get_known_zero_bits (@1) & wi::mask (width, false, prec)) == 0)
>        @0)))))
>  #endif
>
> diff --git a/gcc/range-op.cc b/gcc/range-op.cc
> index 49ee7be3d3b..7e5b3ad6aad 100644
> --- a/gcc/range-op.cc
> +++ b/gcc/range-op.cc
> @@ -804,8 +804,8 @@ operator_lt::fold_range (irange &r, tree type,
>      r = range_true (type);
>    else if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
>      r = range_false (type);
> -  // Use nonzero bits to determine if < 0 is false.
> -  else if (op2.zero_p () && !wi::neg_p (op1.get_nonzero_bits (), sign))
> +  // Use known-zero bits to determine if < 0 is false.
> +  else if (op2.zero_p () && !wi::neg_p (op1.get_known_zero_bits (), sign))
>      r = range_false (type);
>    else
>      r = range_true_and_false (type);
> @@ -2552,16 +2552,16 @@ operator_cast::fold_range (irange &r, tree type ATTRIBUTE_UNUSED,
>         return true;
>      }
>
> -  // Update the nonzero mask.  Truncating casts are problematic unless
> +  // Update the known-zero mask.  Truncating casts are problematic unless
>    // the conversion fits in the resulting outer type.
> -  wide_int nz = inner.get_nonzero_bits ();
> +  wide_int nz = inner.get_known_zero_bits ();
>    if (truncating_cast_p (inner, outer)
>        && wi::rshift (nz, wi::uhwi (TYPE_PRECISION (outer.type ()),
>                                    TYPE_PRECISION (inner.type ())),
>                      TYPE_SIGN (inner.type ())) != 0)
>      return true;
>    nz = wide_int::from (nz, TYPE_PRECISION (type), TYPE_SIGN (inner.type ()));
> -  r.set_nonzero_bits (nz);
> +  r.set_known_zero_bits (nz);
>
>    return true;
>  }
> @@ -2794,8 +2794,8 @@ operator_bitwise_and::fold_range (irange &r, tree type,
>    if (range_operator::fold_range (r, type, lh, rh))
>      {
>        if (!lh.undefined_p () && !rh.undefined_p ())
> -       r.set_nonzero_bits (wi::bit_and (lh.get_nonzero_bits (),
> -                                        rh.get_nonzero_bits ()));
> +       r.set_known_zero_bits (wi::bit_and (lh.get_known_zero_bits (),
> +                                        rh.get_known_zero_bits ()));
>        return true;
>      }
>    return false;
> @@ -2805,7 +2805,7 @@ operator_bitwise_and::fold_range (irange &r, tree type,
>  // Optimize BIT_AND_EXPR, BIT_IOR_EXPR and BIT_XOR_EXPR of signed types
>  // by considering the number of leading redundant sign bit copies.
>  // clrsb (X op Y) = min (clrsb (X), clrsb (Y)), so for example
> -// [-1, 0] op [-1, 0] is [-1, 0] (where nonzero_bits doesn't help).
> +// [-1, 0] op [-1, 0] is [-1, 0] (where known-zero bits doesn't help).
>  static bool
>  wi_optimize_signed_bitwise_op (irange &r, tree type,
>                                const wide_int &lh_lb, const wide_int &lh_ub,
> @@ -3046,7 +3046,7 @@ operator_bitwise_and::wi_fold (irange &r, tree type,
>  }
>
>  static void
> -set_nonzero_range_from_mask (irange &r, tree type, const irange &lhs)
> +set_known_zero_range_from_mask (irange &r, tree type, const irange &lhs)
>  {
>    if (!lhs.contains_p (build_zero_cst (type)))
>      r = range_nonzero (type);
> @@ -3064,7 +3064,7 @@ operator_bitwise_and::simple_op1_range_solver (irange &r, tree type,
>  {
>    if (!op2.singleton_p ())
>      {
> -      set_nonzero_range_from_mask (r, type, lhs);
> +      set_known_zero_range_from_mask (r, type, lhs);
>        return;
>      }
>    unsigned int nprec = TYPE_PRECISION (type);
> @@ -3157,14 +3157,14 @@ operator_bitwise_and::op1_range (irange &r, tree type,
>        r.union_ (res);
>      }
>    if (r.undefined_p ())
> -    set_nonzero_range_from_mask (r, type, lhs);
> +    set_known_zero_range_from_mask (r, type, lhs);
>
>    // For 0 = op1 & MASK, op1 is ~MASK.
>    if (lhs.zero_p () && op2.singleton_p ())
>      {
> -      wide_int nz = wi::bit_not (op2.get_nonzero_bits ());
> +      wide_int nz = wi::bit_not (op2.get_known_zero_bits ());
>        int_range<2> tmp (type);
> -      tmp.set_nonzero_bits (nz);
> +      tmp.set_known_zero_bits (nz);
>        r.intersect (tmp);
>      }
>    return true;
> @@ -4851,7 +4851,7 @@ range_op_bitwise_and_tests ()
>      int_range<2> mask = int_range<2> (INT (7), INT (7));
>      op_bitwise_and.op1_range (res, integer_type_node, zero, mask);
>      wide_int inv = wi::shwi (~7U, TYPE_PRECISION (integer_type_node));
> -    ASSERT_TRUE (res.get_nonzero_bits () == inv);
> +    ASSERT_TRUE (res.get_known_zero_bits () == inv);
>    }
>
>    // (NONZERO | X) is nonzero.
> diff --git a/gcc/tree-data-ref.cc b/gcc/tree-data-ref.cc
> index 978c3f002f7..1232c69174a 100644
> --- a/gcc/tree-data-ref.cc
> +++ b/gcc/tree-data-ref.cc
> @@ -1027,7 +1027,7 @@ split_constant_offset (tree exp, tree *var, tree *off, value_range *exp_range,
>           wide_int var_min = wi::to_wide (vr.min ());
>           wide_int var_max = wi::to_wide (vr.max ());
>           value_range_kind vr_kind = vr.kind ();
> -         wide_int var_nonzero = get_nonzero_bits (exp);
> +         wide_int var_nonzero = get_known_zero_bits (exp);
>           vr_kind = intersect_range_with_nonzero_bits (vr_kind,
>                                                        &var_min, &var_max,
>                                                        var_nonzero,
> diff --git a/gcc/tree-ssa-ccp.cc b/gcc/tree-ssa-ccp.cc
> index 9778e776cf2..94528f430d3 100644
> --- a/gcc/tree-ssa-ccp.cc
> +++ b/gcc/tree-ssa-ccp.cc
> @@ -297,7 +297,7 @@ get_default_value (tree var)
>           val.mask = -1;
>           if (flag_tree_bit_ccp)
>             {
> -             wide_int nonzero_bits = get_nonzero_bits (var);
> +             wide_int nonzero_bits = get_known_zero_bits (var);
>               tree value;
>               widest_int mask;
>
> @@ -1013,8 +1013,8 @@ ccp_finalize (bool nonzero_p)
>           wide_int nonzero_bits
>             = (wide_int::from (val->mask, precision, UNSIGNED)
>                | wi::to_wide (val->value));
> -         nonzero_bits &= get_nonzero_bits (name);
> -         set_nonzero_bits (name, nonzero_bits);
> +         nonzero_bits &= get_known_zero_bits (name);
> +         set_known_zero_bits (name, nonzero_bits);
>         }
>      }
>
> @@ -2438,7 +2438,7 @@ evaluate_stmt (gimple *stmt)
>        && TREE_CODE (gimple_get_lhs (stmt)) == SSA_NAME)
>      {
>        tree lhs = gimple_get_lhs (stmt);
> -      wide_int nonzero_bits = get_nonzero_bits (lhs);
> +      wide_int nonzero_bits = get_known_zero_bits (lhs);
>        if (nonzero_bits != -1)
>         {
>           if (!is_constant)
> diff --git a/gcc/tree-ssa-dom.cc b/gcc/tree-ssa-dom.cc
> index c7f095d79fc..b9b218f663a 100644
> --- a/gcc/tree-ssa-dom.cc
> +++ b/gcc/tree-ssa-dom.cc
> @@ -1380,7 +1380,7 @@ dom_opt_dom_walker::set_global_ranges_from_unreachable_edges (basic_block bb)
>             && !r.undefined_p ())
>           {
>             set_range_info (name, r);
> -           maybe_set_nonzero_bits (pred_e, name);
> +           maybe_set_known_zero_bits (pred_e, name);
>           }
>        }
>  }
> diff --git a/gcc/tree-ssa-reassoc.cc b/gcc/tree-ssa-reassoc.cc
> index b39c3c882c4..407a3b7ee1d 100644
> --- a/gcc/tree-ssa-reassoc.cc
> +++ b/gcc/tree-ssa-reassoc.cc
> @@ -3858,7 +3858,7 @@ optimize_range_tests_var_bound (enum tree_code opcode, int first, int length,
>        /* maybe_optimize_range_tests allows statements without side-effects
>          in the basic blocks as long as they are consumed in the same bb.
>          Make sure rhs2's def stmt is not among them, otherwise we can't
> -        use safely get_nonzero_bits on it.  E.g. in:
> +        use safely get_known_zero_bits on it.  E.g. in:
>           # RANGE [-83, 1] NONZERO 173
>           # k_32 = PHI <k_47(13), k_12(9)>
>          ...
> @@ -3925,7 +3925,7 @@ optimize_range_tests_var_bound (enum tree_code opcode, int first, int length,
>        if (rhs2 == NULL_TREE)
>         continue;
>
> -      wide_int nz = get_nonzero_bits (rhs2);
> +      wide_int nz = get_known_zero_bits (rhs2);
>        if (wi::neg_p (nz))
>         continue;
>
> diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
> index 5c5d0e346c4..a140a194024 100644
> --- a/gcc/tree-ssanames.cc
> +++ b/gcc/tree-ssanames.cc
> @@ -456,23 +456,23 @@ set_ptr_nonnull (tree name)
>    pi->pt.null = 0;
>  }
>
> -/* Update the non-zero bits bitmask of NAME.  */
> +/* Update the known-zero bits bitmask of NAME.  */
>
>  void
> -set_nonzero_bits (tree name, const wide_int_ref &mask)
> +set_known_zero_bits (tree name, const wide_int_ref &mask)
>  {
>    gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
>
>    int_range<2> r (TREE_TYPE (name));
> -  r.set_nonzero_bits (mask);
> +  r.set_known_zero_bits (mask);
>    set_range_info (name, r);
>  }
>
> -/* Return a widest_int with potentially non-zero bits in SSA_NAME
> +/* Return a widest_int with potentially known-zero bits in SSA_NAME
>     NAME, the constant for INTEGER_CST, or -1 if unknown.  */
>
>  wide_int
> -get_nonzero_bits (const_tree name)
> +get_known_zero_bits (const_tree name)
>  {
>    if (TREE_CODE (name) == INTEGER_CST)
>      return wi::to_wide (name);
> @@ -497,7 +497,7 @@ get_nonzero_bits (const_tree name)
>       through vrange_storage.  */
>    irange_storage_slot *ri
>      = static_cast <irange_storage_slot *> (SSA_NAME_RANGE_INFO (name));
> -  return ri->get_nonzero_bits ();
> +  return ri->get_known_zero_bits ();
>  }
>
>  /* Return TRUE is OP, an SSA_NAME has a range of values [0..1], false
> @@ -534,7 +534,7 @@ ssa_name_has_boolean_range (tree op)
>        if (get_range_query (cfun)->range_of_expr (r, op) && r == onezero)
>         return true;
>
> -      if (wi::eq_p (get_nonzero_bits (op), 1))
> +      if (wi::eq_p (get_known_zero_bits (op), 1))
>         return true;
>      }
>
> diff --git a/gcc/tree-ssanames.h b/gcc/tree-ssanames.h
> index ce10af9670a..f9cf6938269 100644
> --- a/gcc/tree-ssanames.h
> +++ b/gcc/tree-ssanames.h
> @@ -58,8 +58,8 @@ struct GTY(()) ptr_info_def
>
>  /* Sets the value range to SSA.  */
>  extern bool set_range_info (tree, const vrange &);
> -extern void set_nonzero_bits (tree, const wide_int_ref &);
> -extern wide_int get_nonzero_bits (const_tree);
> +extern void set_known_zero_bits (tree, const wide_int_ref &);
> +extern wide_int get_known_zero_bits (const_tree);
>  extern bool ssa_name_has_boolean_range (tree);
>  extern void init_ssanames (struct function *, int);
>  extern void fini_ssanames (struct function *);
> diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> index 777ba2f5903..54776003af3 100644
> --- a/gcc/tree-vect-patterns.cc
> +++ b/gcc/tree-vect-patterns.cc
> @@ -71,7 +71,7 @@ vect_get_range_info (tree var, wide_int *min_value, wide_int *max_value)
>    *min_value = wi::to_wide (vr.min ());
>    *max_value = wi::to_wide (vr.max ());
>    value_range_kind vr_type = vr.kind ();
> -  wide_int nonzero = get_nonzero_bits (var);
> +  wide_int nonzero = get_known_zero_bits (var);
>    signop sgn = TYPE_SIGN (TREE_TYPE (var));
>    if (intersect_range_with_nonzero_bits (vr_type, min_value, max_value,
>                                          nonzero, sgn) == VR_RANGE)
> diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
> index e5a292bb875..2b81a3dd168 100644
> --- a/gcc/tree-vrp.cc
> +++ b/gcc/tree-vrp.cc
> @@ -2242,7 +2242,7 @@ register_edge_assert_for (tree name, edge e,
>     from the non-zero bitmask.  */
>
>  void
> -maybe_set_nonzero_bits (edge e, tree var)
> +maybe_set_known_zero_bits (edge e, tree var)
>  {
>    basic_block cond_bb = e->src;
>    gimple *stmt = last_stmt (cond_bb);
> @@ -2276,7 +2276,7 @@ maybe_set_nonzero_bits (edge e, tree var)
>         return;
>      }
>    cst = gimple_assign_rhs2 (stmt);
> -  set_nonzero_bits (var, wi::bit_and_not (get_nonzero_bits (var),
> +  set_known_zero_bits (var, wi::bit_and_not (get_known_zero_bits (var),
>                                           wi::to_wide (cst)));
>  }
>
> @@ -3754,7 +3754,7 @@ vrp_asserts::remove_range_assertions ()
>                         SSA_NAME_RANGE_INFO (var) = NULL;
>                       }
>                     duplicate_ssa_name_range_info (var, lhs);
> -                   maybe_set_nonzero_bits (single_pred_edge (bb), var);
> +                   maybe_set_known_zero_bits (single_pred_edge (bb), var);
>                   }
>               }
>
> diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
> index b8644e9d0a7..1cfed8ea52c 100644
> --- a/gcc/tree-vrp.h
> +++ b/gcc/tree-vrp.h
> @@ -61,7 +61,7 @@ extern tree find_case_label_range (gswitch *, const irange *vr);
>  extern bool find_case_label_index (gswitch *, size_t, tree, size_t *);
>  extern bool overflow_comparison_p (tree_code, tree, tree, bool, tree *);
>  extern tree get_single_symbol (tree, bool *, tree *);
> -extern void maybe_set_nonzero_bits (edge, tree);
> +extern void maybe_set_known_zero_bits (edge, tree);
>  extern wide_int masked_increment (const wide_int &val_in, const wide_int &mask,
>                                   const wide_int &sgnbit, unsigned int prec);
>
> diff --git a/gcc/tree.cc b/gcc/tree.cc
> index 81a6ceaf181..921a9881b1e 100644
> --- a/gcc/tree.cc
> +++ b/gcc/tree.cc
> @@ -3025,7 +3025,7 @@ tree_ctz (const_tree expr)
>        ret1 = wi::ctz (wi::to_wide (expr));
>        return MIN (ret1, prec);
>      case SSA_NAME:
> -      ret1 = wi::ctz (get_nonzero_bits (expr));
> +      ret1 = wi::ctz (get_known_zero_bits (expr));
>        return MIN (ret1, prec);
>      case PLUS_EXPR:
>      case MINUS_EXPR:
> diff --git a/gcc/value-range-pretty-print.cc b/gcc/value-range-pretty-print.cc
> index 3a3b4b44cbd..0f95ad1e956 100644
> --- a/gcc/value-range-pretty-print.cc
> +++ b/gcc/value-range-pretty-print.cc
> @@ -107,7 +107,7 @@ vrange_printer::print_irange_bound (const wide_int &bound, tree type) const
>  void
>  vrange_printer::print_irange_bitmasks (const irange &r) const
>  {
> -  wide_int nz = r.get_nonzero_bits ();
> +  wide_int nz = r.get_known_zero_bits ();
>    if (nz == -1)
>      return;
>
> diff --git a/gcc/value-range-storage.cc b/gcc/value-range-storage.cc
> index 6e054622830..74aaa929c4c 100644
> --- a/gcc/value-range-storage.cc
> +++ b/gcc/value-range-storage.cc
> @@ -150,7 +150,7 @@ irange_storage_slot::set_irange (const irange &r)
>  {
>    gcc_checking_assert (fits_p (r));
>
> -  m_ints[0] = r.get_nonzero_bits ();
> +  m_ints[0] = r.get_known_zero_bits ();
>
>    unsigned pairs = r.num_pairs ();
>    for (unsigned i = 0; i < pairs; ++i)
> @@ -174,7 +174,7 @@ irange_storage_slot::get_irange (irange &r, tree type) const
>        int_range<2> tmp (type, m_ints[i], m_ints[i + 1]);
>        r.union_ (tmp);
>      }
> -  r.set_nonzero_bits (get_nonzero_bits ());
> +  r.set_known_zero_bits (get_known_zero_bits ());
>  }
>
>  // Return the size in bytes to allocate a slot that can hold R.
> @@ -220,7 +220,7 @@ irange_storage_slot::dump () const
>        m_ints[i + 1].dump ();
>      }
>    fprintf (stderr, "NONZERO ");
> -  wide_int nz = get_nonzero_bits ();
> +  wide_int nz = get_known_zero_bits ();
>    nz.dump ();
>  }
>
> diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
> index 0cf95ebf7c1..cfa15b48884 100644
> --- a/gcc/value-range-storage.h
> +++ b/gcc/value-range-storage.h
> @@ -70,7 +70,7 @@ public:
>    static irange_storage_slot *alloc_slot (vrange_allocator &, const irange &r);
>    void set_irange (const irange &r);
>    void get_irange (irange &r, tree type) const;
> -  wide_int get_nonzero_bits () const { return m_ints[0]; }
> +  wide_int get_known_zero_bits () const { return m_ints[0]; }
>    bool fits_p (const irange &r) const;
>    static size_t size (const irange &r);
>    void dump () const;
> diff --git a/gcc/value-range.cc b/gcc/value-range.cc
> index bcda4987307..05c43485cef 100644
> --- a/gcc/value-range.cc
> +++ b/gcc/value-range.cc
> @@ -837,7 +837,7 @@ irange::operator= (const irange &src)
>
>    m_num_ranges = lim;
>    m_kind = src.m_kind;
> -  m_nonzero_mask = src.m_nonzero_mask;
> +  m_known_zero_mask = src.m_known_zero_mask;
>    if (flag_checking)
>      verify_range ();
>    return *this;
> @@ -894,7 +894,7 @@ irange::copy_to_legacy (const irange &src)
>        m_base[0] = src.m_base[0];
>        m_base[1] = src.m_base[1];
>        m_kind = src.m_kind;
> -      m_nonzero_mask = src.m_nonzero_mask;
> +      m_known_zero_mask = src.m_known_zero_mask;
>        return;
>      }
>    // Copy multi-range to legacy.
> @@ -959,7 +959,7 @@ irange::irange_set (tree min, tree max)
>    m_base[1] = max;
>    m_num_ranges = 1;
>    m_kind = VR_RANGE;
> -  m_nonzero_mask = NULL;
> +  m_known_zero_mask = NULL;
>    normalize_kind ();
>
>    if (flag_checking)
> @@ -1033,7 +1033,7 @@ irange::irange_set_anti_range (tree min, tree max)
>      }
>
>    m_kind = VR_RANGE;
> -  m_nonzero_mask = NULL;
> +  m_known_zero_mask = NULL;
>    normalize_kind ();
>
>    if (flag_checking)
> @@ -1090,7 +1090,7 @@ irange::set (tree min, tree max, value_range_kind kind)
>        m_base[0] = min;
>        m_base[1] = max;
>        m_num_ranges = 1;
> -      m_nonzero_mask = NULL;
> +      m_known_zero_mask = NULL;
>        return;
>      }
>
> @@ -1140,7 +1140,7 @@ irange::set (tree min, tree max, value_range_kind kind)
>    m_base[0] = min;
>    m_base[1] = max;
>    m_num_ranges = 1;
> -  m_nonzero_mask = NULL;
> +  m_known_zero_mask = NULL;
>    normalize_kind ();
>    if (flag_checking)
>      verify_range ();
> @@ -1159,8 +1159,8 @@ irange::verify_range ()
>      }
>    if (m_kind == VR_VARYING)
>      {
> -      gcc_checking_assert (!m_nonzero_mask
> -                          || wi::to_wide (m_nonzero_mask) == -1);
> +      gcc_checking_assert (!m_known_zero_mask
> +                          || wi::to_wide (m_known_zero_mask) == -1);
>        gcc_checking_assert (m_num_ranges == 1);
>        gcc_checking_assert (varying_compatible_p ());
>        return;
> @@ -1255,7 +1255,7 @@ irange::legacy_equal_p (const irange &other) const
>                                other.tree_lower_bound (0))
>           && vrp_operand_equal_p (tree_upper_bound (0),
>                                   other.tree_upper_bound (0))
> -         && get_nonzero_bits () == other.get_nonzero_bits ());
> +         && get_known_zero_bits () == other.get_known_zero_bits ());
>  }
>
>  bool
> @@ -1290,7 +1290,7 @@ irange::operator== (const irange &other) const
>           || !operand_equal_p (ub, ub_other, 0))
>         return false;
>      }
> -  return get_nonzero_bits () == other.get_nonzero_bits ();
> +  return get_known_zero_bits () == other.get_known_zero_bits ();
>  }
>
>  /* Return TRUE if this is a symbolic range.  */
> @@ -1433,11 +1433,11 @@ irange::contains_p (tree cst) const
>
>    gcc_checking_assert (TREE_CODE (cst) == INTEGER_CST);
>
> -  // See if we can exclude CST based on the nonzero bits.
> -  if (m_nonzero_mask)
> +  // See if we can exclude CST based on the known-zero bits.
> +  if (m_known_zero_mask)
>      {
>        wide_int cstw = wi::to_wide (cst);
> -      if (cstw != 0 && wi::bit_and (wi::to_wide (m_nonzero_mask), cstw) == 0)
> +      if (cstw != 0 && wi::bit_and (wi::to_wide (m_known_zero_mask), cstw) == 0)
>         return false;
>      }
>
> @@ -2335,7 +2335,7 @@ irange::irange_single_pair_union (const irange &r)
>      {
>        // If current upper bound is new upper bound, we're done.
>        if (wi::le_p (wi::to_wide (r.m_base[1]), wi::to_wide (m_base[1]), sign))
> -       return union_nonzero_bits (r);
> +       return union_known_zero_bits (r);
>        // Otherwise R has the new upper bound.
>        // Check for overlap/touching ranges, or single target range.
>        if (m_max_ranges == 1
> @@ -2348,7 +2348,7 @@ irange::irange_single_pair_union (const irange &r)
>           m_base[3] = r.m_base[1];
>           m_num_ranges = 2;
>         }
> -      union_nonzero_bits (r);
> +      union_known_zero_bits (r);
>        return true;
>      }
>
> @@ -2371,7 +2371,7 @@ irange::irange_single_pair_union (const irange &r)
>        m_base[3] = m_base[1];
>        m_base[1] = r.m_base[1];
>      }
> -  union_nonzero_bits (r);
> +  union_known_zero_bits (r);
>    return true;
>  }
>
> @@ -2408,7 +2408,7 @@ irange::irange_union (const irange &r)
>
>    // If this ranges fully contains R, then we need do nothing.
>    if (irange_contains_p (r))
> -    return union_nonzero_bits (r);
> +    return union_known_zero_bits (r);
>
>    // Do not worry about merging and such by reserving twice as many
>    // pairs as needed, and then simply sort the 2 ranges into this
> @@ -2496,7 +2496,7 @@ irange::irange_union (const irange &r)
>    m_num_ranges = i / 2;
>
>    m_kind = VR_RANGE;
> -  union_nonzero_bits (r);
> +  union_known_zero_bits (r);
>    return true;
>  }
>
> @@ -2576,13 +2576,13 @@ irange::irange_intersect (const irange &r)
>        if (undefined_p ())
>         return true;
>
> -      res |= intersect_nonzero_bits (r);
> +      res |= intersect_known_zero_bits (r);
>        return res;
>      }
>
>    // If R fully contains this, then intersection will change nothing.
>    if (r.irange_contains_p (*this))
> -    return intersect_nonzero_bits (r);
> +    return intersect_known_zero_bits (r);
>
>    signop sign = TYPE_SIGN (TREE_TYPE(m_base[0]));
>    unsigned bld_pair = 0;
> @@ -2658,7 +2658,7 @@ irange::irange_intersect (const irange &r)
>      }
>
>    m_kind = VR_RANGE;
> -  intersect_nonzero_bits (r);
> +  intersect_known_zero_bits (r);
>    return true;
>  }
>
> @@ -2801,7 +2801,7 @@ irange::invert ()
>    signop sign = TYPE_SIGN (ttype);
>    wide_int type_min = wi::min_value (prec, sign);
>    wide_int type_max = wi::max_value (prec, sign);
> -  m_nonzero_mask = NULL;
> +  m_known_zero_mask = NULL;
>    if (m_num_ranges == m_max_ranges
>        && lower_bound () != type_min
>        && upper_bound () != type_max)
> @@ -2876,10 +2876,10 @@ irange::invert ()
>      verify_range ();
>  }
>
> -// Return the nonzero bits inherent in the range.
> +// Return the known-zero bits inherent in the range.
>
>  wide_int
> -irange::get_nonzero_bits_from_range () const
> +irange::get_known_zero_bits_from_range () const
>  {
>    // For legacy symbolics.
>    if (!constant_p ())
> @@ -2900,25 +2900,25 @@ irange::get_nonzero_bits_from_range () const
>  // so and return TRUE.
>
>  bool
> -irange::set_range_from_nonzero_bits ()
> +irange::set_range_from_known_zero_bits ()
>  {
>    gcc_checking_assert (!undefined_p ());
> -  if (!m_nonzero_mask)
> +  if (!m_known_zero_mask)
>      return false;
> -  unsigned popcount = wi::popcount (wi::to_wide (m_nonzero_mask));
> +  unsigned popcount = wi::popcount (wi::to_wide (m_known_zero_mask));
>
>    // If we have only one bit set in the mask, we can figure out the
>    // range immediately.
>    if (popcount == 1)
>      {
>        // Make sure we don't pessimize the range.
> -      if (!contains_p (m_nonzero_mask))
> +      if (!contains_p (m_known_zero_mask))
>         return false;
>
>        bool has_zero = contains_p (build_zero_cst (type ()));
> -      tree nz = m_nonzero_mask;
> +      tree nz = m_known_zero_mask;
>        set (nz, nz);
> -      m_nonzero_mask = nz;
> +      m_known_zero_mask = nz;
>        if (has_zero)
>         {
>           int_range<2> zero;
> @@ -2936,14 +2936,14 @@ irange::set_range_from_nonzero_bits ()
>  }
>
>  void
> -irange::set_nonzero_bits (const wide_int_ref &bits)
> +irange::set_known_zero_bits (const wide_int_ref &bits)
>  {
>    gcc_checking_assert (!undefined_p ());
>    unsigned prec = TYPE_PRECISION (type ());
>
>    if (bits == -1)
>      {
> -      m_nonzero_mask = NULL;
> +      m_known_zero_mask = NULL;
>        normalize_kind ();
>        if (flag_checking)
>         verify_range ();
> @@ -2955,8 +2955,8 @@ irange::set_nonzero_bits (const wide_int_ref &bits)
>      m_kind = VR_RANGE;
>
>    wide_int nz = wide_int::from (bits, prec, TYPE_SIGN (type ()));
> -  m_nonzero_mask = wide_int_to_tree (type (), nz);
> -  if (set_range_from_nonzero_bits ())
> +  m_known_zero_mask = wide_int_to_tree (type (), nz);
> +  if (set_range_from_known_zero_bits ())
>      return;
>
>    normalize_kind ();
> @@ -2964,11 +2964,11 @@ irange::set_nonzero_bits (const wide_int_ref &bits)
>      verify_range ();
>  }
>
> -// Return the nonzero bitmask.  This will return the nonzero bits plus
> -// the nonzero bits inherent in the range.
> +// Return the nonzero bitmask.  This will return the known-zero bits plus
> +// the known-zero bits inherent in the range.
>
>  wide_int
> -irange::get_nonzero_bits () const
> +irange::get_known_zero_bits () const
>  {
>    gcc_checking_assert (!undefined_p ());
>    // The nonzero mask inherent in the range is calculated on-demand.
> @@ -2979,10 +2979,10 @@ irange::get_nonzero_bits () const
>    // the mask precisely up to date at all times.  Instead, we default
>    // to -1 and set it when explicitly requested.  However, this
>    // function will always return the correct mask.
> -  if (m_nonzero_mask)
> -    return wi::to_wide (m_nonzero_mask) & get_nonzero_bits_from_range ();
> +  if (m_known_zero_mask)
> +    return wi::to_wide (m_known_zero_mask) & get_known_zero_bits_from_range ();
>    else
> -    return get_nonzero_bits_from_range ();
> +    return get_known_zero_bits_from_range ();
>  }
>
>  // Convert tree mask to wide_int.  Returns -1 for NULL masks.
> @@ -2996,15 +2996,15 @@ mask_to_wi (tree mask, tree type)
>      return wi::shwi (-1, TYPE_PRECISION (type));
>  }
>
> -// Intersect the nonzero bits in R into THIS and normalize the range.
> +// Intersect the known-zero bits in R into THIS and normalize the range.
>  // Return TRUE if the intersection changed anything.
>
>  bool
> -irange::intersect_nonzero_bits (const irange &r)
> +irange::intersect_known_zero_bits (const irange &r)
>  {
>    gcc_checking_assert (!undefined_p () && !r.undefined_p ());
>
> -  if (!m_nonzero_mask && !r.m_nonzero_mask)
> +  if (!m_known_zero_mask && !r.m_known_zero_mask)
>      {
>        normalize_kind ();
>        if (flag_checking)
> @@ -3014,11 +3014,11 @@ irange::intersect_nonzero_bits (const irange &r)
>
>    bool changed = false;
>    tree t = type ();
> -  if (mask_to_wi (m_nonzero_mask, t) != mask_to_wi (r.m_nonzero_mask, t))
> +  if (mask_to_wi (m_known_zero_mask, t) != mask_to_wi (r.m_known_zero_mask, t))
>      {
> -      wide_int nz = get_nonzero_bits () & r.get_nonzero_bits ();
> -      m_nonzero_mask = wide_int_to_tree (t, nz);
> -      if (set_range_from_nonzero_bits ())
> +      wide_int nz = get_known_zero_bits () & r.get_known_zero_bits ();
> +      m_known_zero_mask = wide_int_to_tree (t, nz);
> +      if (set_range_from_known_zero_bits ())
>         return true;
>        changed = true;
>      }
> @@ -3028,15 +3028,15 @@ irange::intersect_nonzero_bits (const irange &r)
>    return changed;
>  }
>
> -// Union the nonzero bits in R into THIS and normalize the range.
> +// Union the known-zero bits in R into THIS and normalize the range.
>  // Return TRUE if the union changed anything.
>
>  bool
> -irange::union_nonzero_bits (const irange &r)
> +irange::union_known_zero_bits (const irange &r)
>  {
>    gcc_checking_assert (!undefined_p () && !r.undefined_p ());
>
> -  if (!m_nonzero_mask && !r.m_nonzero_mask)
> +  if (!m_known_zero_mask && !r.m_known_zero_mask)
>      {
>        normalize_kind ();
>        if (flag_checking)
> @@ -3046,14 +3046,14 @@ irange::union_nonzero_bits (const irange &r)
>
>    bool changed = false;
>    tree t = type ();
> -  if (mask_to_wi (m_nonzero_mask, t) != mask_to_wi (r.m_nonzero_mask, t))
> +  if (mask_to_wi (m_known_zero_mask, t) != mask_to_wi (r.m_known_zero_mask, t))
>      {
> -      wide_int nz = get_nonzero_bits () | r.get_nonzero_bits ();
> -      m_nonzero_mask = wide_int_to_tree (t, nz);
> -      // No need to call set_range_from_nonzero_bits, because we'll
> +      wide_int nz = get_known_zero_bits () | r.get_known_zero_bits ();
> +      m_known_zero_mask = wide_int_to_tree (t, nz);
> +      // No need to call set_range_from_known_zero_bits, because we'll
>        // never narrow the range.  Besides, it would cause endless
>        // recursion because of the union_ in
> -      // set_range_from_nonzero_bits.
> +      // set_range_from_known_zero_bits.
>        changed = true;
>      }
>    normalize_kind ();
> @@ -3626,58 +3626,58 @@ range_tests_nonzero_bits ()
>  {
>    int_range<2> r0, r1;
>
> -  // Adding nonzero bits to a varying drops the varying.
> +  // Adding known-zero bits to a varying drops the varying.
>    r0.set_varying (integer_type_node);
> -  r0.set_nonzero_bits (255);
> +  r0.set_known_zero_bits (255);
>    ASSERT_TRUE (!r0.varying_p ());
> -  // Dropping the nonzero bits brings us back to varying.
> -  r0.set_nonzero_bits (-1);
> +  // Dropping the known-zero bits brings us back to varying.
> +  r0.set_known_zero_bits (-1);
>    ASSERT_TRUE (r0.varying_p ());
>
> -  // Test contains_p with nonzero bits.
> +  // Test contains_p with known-zero bits.
>    r0.set_zero (integer_type_node);
>    ASSERT_TRUE (r0.contains_p (INT (0)));
>    ASSERT_FALSE (r0.contains_p (INT (1)));
> -  r0.set_nonzero_bits (0xfe);
> +  r0.set_known_zero_bits (0xfe);
>    ASSERT_FALSE (r0.contains_p (INT (0x100)));
>    ASSERT_FALSE (r0.contains_p (INT (0x3)));
>
> -  // Union of nonzero bits.
> +  // Union of known-zero bits.
>    r0.set_varying (integer_type_node);
> -  r0.set_nonzero_bits (0xf0);
> +  r0.set_known_zero_bits (0xf0);
>    r1.set_varying (integer_type_node);
> -  r1.set_nonzero_bits (0xf);
> +  r1.set_known_zero_bits (0xf);
>    r0.union_ (r1);
> -  ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
> +  ASSERT_TRUE (r0.get_known_zero_bits () == 0xff);
>
> -  // Intersect of nonzero bits.
> +  // Intersect of known-zero bits.
>    r0.set (INT (0), INT (255));
> -  r0.set_nonzero_bits (0xfe);
> +  r0.set_known_zero_bits (0xfe);
>    r1.set_varying (integer_type_node);
> -  r1.set_nonzero_bits (0xf0);
> +  r1.set_known_zero_bits (0xf0);
>    r0.intersect (r1);
> -  ASSERT_TRUE (r0.get_nonzero_bits () == 0xf0);
> +  ASSERT_TRUE (r0.get_known_zero_bits () == 0xf0);
>
> -  // Intersect where the mask of nonzero bits is implicit from the range.
> +  // Intersect where the mask of known-zero bits is implicit from the range.
>    r0.set_varying (integer_type_node);
>    r1.set (INT (0), INT (255));
>    r0.intersect (r1);
> -  ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
> +  ASSERT_TRUE (r0.get_known_zero_bits () == 0xff);
>
>    // The union of a mask of 0xff..ffff00 with a mask of 0xff spans the
>    // entire domain, and makes the range a varying.
>    r0.set_varying (integer_type_node);
>    wide_int x = wi::shwi (0xff, TYPE_PRECISION (integer_type_node));
>    x = wi::bit_not (x);
> -  r0.set_nonzero_bits (x);     // 0xff..ff00
> +  r0.set_known_zero_bits (x);  // 0xff..ff00
>    r1.set_varying (integer_type_node);
> -  r1.set_nonzero_bits (0xff);
> +  r1.set_known_zero_bits (0xff);
>    r0.union_ (r1);
>    ASSERT_TRUE (r0.varying_p ());
>
>    // Test that setting a nonzero bit of 1 does not pessimize the range.
>    r0.set_zero (integer_type_node);
> -  r0.set_nonzero_bits (1);
> +  r0.set_known_zero_bits (1);
>    ASSERT_TRUE (r0.zero_p ());
>  }
>
> diff --git a/gcc/value-range.h b/gcc/value-range.h
> index b48542a68aa..444f357afdf 100644
> --- a/gcc/value-range.h
> +++ b/gcc/value-range.h
> @@ -156,9 +156,9 @@ public:
>    virtual bool fits_p (const vrange &r) const override;
>    virtual void accept (const vrange_visitor &v) const override;
>
> -  // Nonzero masks.
> -  wide_int get_nonzero_bits () const;
> -  void set_nonzero_bits (const wide_int_ref &bits);
> +  // Known bit masks.
> +  wide_int get_known_zero_bits () const;
> +  void set_known_zero_bits (const wide_int_ref &bits);
>
>    // Deprecated legacy public methods.
>    tree min () const;                           // DEPRECATED
> @@ -207,15 +207,15 @@ private:
>
>    void irange_set_1bit_anti_range (tree, tree);
>    bool varying_compatible_p () const;
> -  bool intersect_nonzero_bits (const irange &r);
> -  bool union_nonzero_bits (const irange &r);
> -  wide_int get_nonzero_bits_from_range () const;
> -  bool set_range_from_nonzero_bits ();
> +  bool intersect_known_zero_bits (const irange &r);
> +  bool union_known_zero_bits (const irange &r);
> +  wide_int get_known_zero_bits_from_range () const;
> +  bool set_range_from_known_zero_bits ();
>
>    bool intersect (const wide_int& lb, const wide_int& ub);
>    unsigned char m_num_ranges;
>    unsigned char m_max_ranges;
> -  tree m_nonzero_mask;
> +  tree m_known_zero_mask;
>    tree *m_base;
>  };
>
> @@ -687,11 +687,11 @@ irange::varying_compatible_p () const
>    if (INTEGRAL_TYPE_P (t))
>      return (wi::to_wide (l) == wi::min_value (prec, sign)
>             && wi::to_wide (u) == wi::max_value (prec, sign)
> -           && (!m_nonzero_mask || wi::to_wide (m_nonzero_mask) == -1));
> +           && (!m_known_zero_mask || wi::to_wide (m_known_zero_mask) == -1));
>    if (POINTER_TYPE_P (t))
>      return (wi::to_wide (l) == 0
>             && wi::to_wide (u) == wi::max_value (prec, sign)
> -           && (!m_nonzero_mask || wi::to_wide (m_nonzero_mask) == -1));
> +           && (!m_known_zero_mask || wi::to_wide (m_known_zero_mask) == -1));
>    return true;
>  }
>
> @@ -758,8 +758,8 @@ gt_ggc_mx (irange *x)
>        gt_ggc_mx (x->m_base[i * 2]);
>        gt_ggc_mx (x->m_base[i * 2 + 1]);
>      }
> -  if (x->m_nonzero_mask)
> -    gt_ggc_mx (x->m_nonzero_mask);
> +  if (x->m_known_zero_mask)
> +    gt_ggc_mx (x->m_known_zero_mask);
>  }
>
>  inline void
> @@ -770,8 +770,8 @@ gt_pch_nx (irange *x)
>        gt_pch_nx (x->m_base[i * 2]);
>        gt_pch_nx (x->m_base[i * 2 + 1]);
>      }
> -  if (x->m_nonzero_mask)
> -    gt_pch_nx (x->m_nonzero_mask);
> +  if (x->m_known_zero_mask)
> +    gt_pch_nx (x->m_known_zero_mask);
>  }
>
>  inline void
> @@ -782,8 +782,8 @@ gt_pch_nx (irange *x, gt_pointer_operator op, void *cookie)
>        op (&x->m_base[i * 2], NULL, cookie);
>        op (&x->m_base[i * 2 + 1], NULL, cookie);
>      }
> -  if (x->m_nonzero_mask)
> -    op (&x->m_nonzero_mask, NULL, cookie);
> +  if (x->m_known_zero_mask)
> +    op (&x->m_known_zero_mask, NULL, cookie);
>  }
>
>  template<unsigned N>
> @@ -878,7 +878,7 @@ irange::set_undefined ()
>  {
>    m_kind = VR_UNDEFINED;
>    m_num_ranges = 0;
> -  m_nonzero_mask = NULL;
> +  m_known_zero_mask = NULL;
>  }
>
>  inline void
> @@ -886,7 +886,7 @@ irange::set_varying (tree type)
>  {
>    m_kind = VR_VARYING;
>    m_num_ranges = 1;
> -  m_nonzero_mask = NULL;
> +  m_known_zero_mask = NULL;
>
>    if (INTEGRAL_TYPE_P (type))
>      {
> --
> 2.37.3
>
  
Aldy Hernandez Nov. 1, 2022, 4:33 p.m. UTC | #7
Folks.  I have decided to put this aside until the next release.  I
originally wanted a simple rename, and reimplementing things to align
with rtl, etc., is beyond what I want to tackle this late in the cycle.

I'll archive this away, and revisit it when we implement the
irange::known_ones mask.

Thanks for your input.
Aldy

On Fri, Oct 21, 2022 at 8:01 PM Segher Boessenkool
<segher@kernel.crashing.org> wrote:
>
> On Fri, Oct 21, 2022 at 06:54:32PM +0200, Jakub Jelinek wrote:
> > On Fri, Oct 21, 2022 at 06:51:19PM +0200, Jakub Jelinek wrote:
> > > Agreed.
> > >
> > > I think maybe_nonzero_bits would be fine.
> >
> > Or yet another option is to change what we track and instead of
> > having just one bitmask have 2 as tree-ssa-ccp.cc does,
> > one bitmask says which bits are known to be always the same
> > and the other which specifies the values of those bits.
> > "For X with a CONSTANT lattice value X & ~mask == value & ~mask.  The
> > zero bits in the mask cover constant values.  The ones mean no
> > information."
>
> I am still working on making the RTL nonzero_bits use DF (and indeed I
> do a known_zero instead :-) ).  This makes the special version in
> combine unnecessary: instead of working better than the generic version
> it is strictly weaker than it.  This change then makes it possible to use
> nonzero_bits in instruction conditions (without causing ICEs as now --
> passes after combine return a subset of the nonzero_bits the version in
> combine does, which can make insns no longer match in later passes).
>
> My fear is tracking twice as many bits might become expensive.  OTOH
> ideally we can get rid of combine's reg_stat completely at some point
> in the future (which has all the same problems as combine's version of
> nonzero_bits: the values it returns depend on the order combine tried
> possible combinations).
>
> Storage requirements are the same for known_zero_bits and known_one_bits
> vs. known_bits and known_bit_values, but the latter is a bit more
> costly to compute, and more importantly it is usually a lot less
> convenient in use.  (A third option is known_bits and known_zero_bits?)
>
>
> Segher
>
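The two encodings compared in the thread carry the same information: a (known_zero, known_one) pair of bitmasks versus tree-ssa-ccp's (mask, value) pair, where bits that are 0 in the mask are known and take their value from `value`.  A minimal sketch of the equivalence, in plain C++ with illustrative names (this is not GCC's actual API):

```cpp
#include <cassert>
#include <cstdint>

// CCP-style encoding: for every value X described by the lattice,
// X & ~mask == value & ~mask.  A 1 bit in `mask` means "no information";
// a 0 bit means the bit is known and given by `value`.
struct MaskValue {
  uint64_t mask;   // 1 = unknown bit, 0 = known bit
  uint64_t value;  // values of the known bits (don't-care where mask is 1)
};

// Known-bits encoding: a bit set in known_zero is certainly 0; a bit set
// in known_one is certainly 1.  The two sets must never overlap.
struct KnownBits {
  uint64_t known_zero;
  uint64_t known_one;
};

// The encodings convert losslessly in both directions.
static KnownBits to_known_bits (MaskValue mv)
{
  return { ~mv.mask & ~mv.value,   // known, and equal to 0
	   ~mv.mask & mv.value };  // known, and equal to 1
}

static MaskValue to_mask_value (KnownBits kb)
{
  assert ((kb.known_zero & kb.known_one) == 0);
  return { ~(kb.known_zero | kb.known_one),  // everything else is unknown
	   kb.known_one };
}
```

For example, a value whose low two bits are known zero (an aligned address) and whose remaining bits are unknown is mask = ~3, value = 0 in the CCP encoding, and known_zero = 3, known_one = 0 in the known-bits encoding, which illustrates Segher's point that the storage cost is identical and only the convenience of use differs.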
  

Patch

diff --git a/gcc/asan.cc b/gcc/asan.cc
index 8276f12cc69..9960803b99f 100644
--- a/gcc/asan.cc
+++ b/gcc/asan.cc
@@ -816,7 +816,7 @@  handle_builtin_alloca (gcall *call, gimple_stmt_iterator *iter)
   tree redzone_size = build_int_cst (size_type_node, ASAN_RED_ZONE_SIZE);
 
   /* Extract lower bits from old_size.  */
-  wide_int size_nonzero_bits = get_nonzero_bits (old_size);
+  wide_int size_nonzero_bits = get_known_zero_bits (old_size);
   wide_int rz_mask
     = wi::uhwi (redzone_mask, wi::get_precision (size_nonzero_bits));
   wide_int old_size_lower_bits = wi::bit_and (size_nonzero_bits, rz_mask);
diff --git a/gcc/fold-const.cc b/gcc/fold-const.cc
index 9f7beae14e5..c85231b4ca1 100644
--- a/gcc/fold-const.cc
+++ b/gcc/fold-const.cc
@@ -10815,7 +10815,7 @@  expr_not_equal_to (tree t, const wide_int &w)
 	return true;
       /* If T has some known zero bits and W has any of those bits set,
 	 then T is known not to be equal to W.  */
-      if (wi::ne_p (wi::zext (wi::bit_and_not (w, get_nonzero_bits (t)),
+      if (wi::ne_p (wi::zext (wi::bit_and_not (w, get_known_zero_bits (t)),
 			      TYPE_PRECISION (TREE_TYPE (t))), 0))
 	return true;
       return false;
@@ -16508,7 +16508,7 @@  tree_nonzero_bits (const_tree t)
     case INTEGER_CST:
       return wi::to_wide (t);
     case SSA_NAME:
-      return get_nonzero_bits (t);
+      return get_known_zero_bits (t);
     case NON_LVALUE_EXPR:
     case SAVE_EXPR:
       return tree_nonzero_bits (TREE_OPERAND (t, 0));
diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 7764166d5fb..90c6f7b3fd9 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -477,7 +477,7 @@  public:
     if (lh.undefined_p ())
       return false;
     unsigned prec = TYPE_PRECISION (type);
-    wide_int nz = lh.get_nonzero_bits ();
+    wide_int nz = lh.get_known_zero_bits ();
     wide_int pop = wi::shwi (wi::popcount (nz), prec);
     // Calculating the popcount of a singleton is trivial.
     if (lh.singleton_p ())
diff --git a/gcc/ipa-cp.cc b/gcc/ipa-cp.cc
index d2bcd5e5e69..4ba7ef878ba 100644
--- a/gcc/ipa-cp.cc
+++ b/gcc/ipa-cp.cc
@@ -1119,7 +1119,7 @@  ipcp_bits_lattice::known_nonzero_p () const
 void
 ipcp_bits_lattice::get_value_and_mask (tree operand, widest_int *valuep, widest_int *maskp)
 {
-  wide_int get_nonzero_bits (const_tree);
+  wide_int get_known_zero_bits (const_tree);
 
   if (TREE_CODE (operand) == INTEGER_CST)
     {
diff --git a/gcc/ipa-prop.cc b/gcc/ipa-prop.cc
index e6cf25591b3..e3cd5cf6415 100644
--- a/gcc/ipa-prop.cc
+++ b/gcc/ipa-prop.cc
@@ -2331,7 +2331,7 @@  ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
 	{
 	  if (TREE_CODE (arg) == SSA_NAME)
 	    ipa_set_jfunc_bits (jfunc, 0,
-				widest_int::from (get_nonzero_bits (arg),
+				widest_int::from (get_known_zero_bits (arg),
 						  TYPE_SIGN (TREE_TYPE (arg))));
 	  else
 	    ipa_set_jfunc_bits (jfunc, wi::to_widest (arg), 0);
@@ -5816,7 +5816,7 @@  ipcp_update_bits (struct cgraph_node *node)
 
 	  wide_int nonzero_bits = wide_int::from (bits[i]->mask, prec, UNSIGNED)
 				  | wide_int::from (bits[i]->value, prec, sgn);
-	  set_nonzero_bits (ddef, nonzero_bits);
+	  set_known_zero_bits (ddef, nonzero_bits);
 	}
       else
 	{
diff --git a/gcc/match.pd b/gcc/match.pd
index 194ba8f5188..0f58f1ad2ae 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -1199,7 +1199,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (simplify
  (bit_and (bit_not SSA_NAME@0) INTEGER_CST@1)
  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
-      && wi::bit_and_not (get_nonzero_bits (@0), wi::to_wide (@1)) == 0)
+      && wi::bit_and_not (get_known_zero_bits (@0), wi::to_wide (@1)) == 0)
   (bit_xor @0 @1)))
 
 /* For constants M and N, if M == (1LL << cst) - 1 && (N & M) == M,
@@ -1317,7 +1317,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (simplify
  (bit_and SSA_NAME@0 INTEGER_CST@1)
  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
-      && wi::bit_and_not (get_nonzero_bits (@0), wi::to_wide (@1)) == 0)
+      && wi::bit_and_not (get_known_zero_bits (@0), wi::to_wide (@1)) == 0)
   @0))
 #endif
 
@@ -2286,7 +2286,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
   (if (TREE_INT_CST_LOW (@1) & 1)
    { constant_boolean_node (cmp == NE_EXPR, type); })))
 
-/* Arguments on which one can call get_nonzero_bits to get the bits
+/* Arguments on which one can call get_known_zero_bits to get the bits
    possibly set.  */
 (match with_possible_nonzero_bits
  INTEGER_CST@0)
@@ -2300,7 +2300,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
  (bit_and:c with_possible_nonzero_bits@0 @2))
 
 /* Same for bits that are known to be set, but we do not have
-   an equivalent to get_nonzero_bits yet.  */
+   an equivalent to get_known_zero_bits yet.  */
 (match (with_certain_nonzero_bits2 @0)
  INTEGER_CST@0)
 (match (with_certain_nonzero_bits2 @0)
@@ -2310,7 +2310,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for cmp (eq ne)
  (simplify
   (cmp:c (with_possible_nonzero_bits2 @0) (with_certain_nonzero_bits2 @1))
-  (if (wi::bit_and_not (wi::to_wide (@1), get_nonzero_bits (@0)) != 0)
+  (if (wi::bit_and_not (wi::to_wide (@1), get_known_zero_bits (@0)) != 0)
    { constant_boolean_node (cmp == NE_EXPR, type); })))
 
 /* ((X inner_op C0) outer_op C1)
@@ -2336,7 +2336,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
     wide_int cst_emit;
 
     if (TREE_CODE (@2) == SSA_NAME)
-      zero_mask_not = get_nonzero_bits (@2);
+      zero_mask_not = get_known_zero_bits (@2);
     else
       fail = true;
 
@@ -3562,7 +3562,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
       int width = ceil_log2 (element_precision (TREE_TYPE (@0)));
       int prec = TYPE_PRECISION (TREE_TYPE (@1));
      }
-     (if ((get_nonzero_bits (@1) & wi::mask (width, false, prec)) == 0)
+     (if ((get_known_zero_bits (@1) & wi::mask (width, false, prec)) == 0)
       @0)))))
 #endif
 
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 49ee7be3d3b..7e5b3ad6aad 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -804,8 +804,8 @@  operator_lt::fold_range (irange &r, tree type,
     r = range_true (type);
   else if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
     r = range_false (type);
-  // Use nonzero bits to determine if < 0 is false.
-  else if (op2.zero_p () && !wi::neg_p (op1.get_nonzero_bits (), sign))
+  // Use known-zero bits to determine if < 0 is false.
+  else if (op2.zero_p () && !wi::neg_p (op1.get_known_zero_bits (), sign))
     r = range_false (type);
   else
     r = range_true_and_false (type);
@@ -2552,16 +2552,16 @@  operator_cast::fold_range (irange &r, tree type ATTRIBUTE_UNUSED,
 	return true;
     }
 
-  // Update the nonzero mask.  Truncating casts are problematic unless
+  // Update the known-zero mask.  Truncating casts are problematic unless
   // the conversion fits in the resulting outer type.
-  wide_int nz = inner.get_nonzero_bits ();
+  wide_int nz = inner.get_known_zero_bits ();
   if (truncating_cast_p (inner, outer)
       && wi::rshift (nz, wi::uhwi (TYPE_PRECISION (outer.type ()),
 				   TYPE_PRECISION (inner.type ())),
 		     TYPE_SIGN (inner.type ())) != 0)
     return true;
   nz = wide_int::from (nz, TYPE_PRECISION (type), TYPE_SIGN (inner.type ()));
-  r.set_nonzero_bits (nz);
+  r.set_known_zero_bits (nz);
 
   return true;
 }
@@ -2794,8 +2794,8 @@  operator_bitwise_and::fold_range (irange &r, tree type,
   if (range_operator::fold_range (r, type, lh, rh))
     {
       if (!lh.undefined_p () && !rh.undefined_p ())
-	r.set_nonzero_bits (wi::bit_and (lh.get_nonzero_bits (),
-					 rh.get_nonzero_bits ()));
+	r.set_known_zero_bits (wi::bit_and (lh.get_known_zero_bits (),
+					 rh.get_known_zero_bits ()));
       return true;
     }
   return false;
@@ -2805,7 +2805,7 @@  operator_bitwise_and::fold_range (irange &r, tree type,
 // Optimize BIT_AND_EXPR, BIT_IOR_EXPR and BIT_XOR_EXPR of signed types
 // by considering the number of leading redundant sign bit copies.
 // clrsb (X op Y) = min (clrsb (X), clrsb (Y)), so for example
-// [-1, 0] op [-1, 0] is [-1, 0] (where nonzero_bits doesn't help).
+// [-1, 0] op [-1, 0] is [-1, 0] (where the known-zero bits don't help).
 static bool
 wi_optimize_signed_bitwise_op (irange &r, tree type,
 			       const wide_int &lh_lb, const wide_int &lh_ub,
@@ -3046,7 +3046,7 @@  operator_bitwise_and::wi_fold (irange &r, tree type,
 }
 
 static void
-set_nonzero_range_from_mask (irange &r, tree type, const irange &lhs)
+set_known_zero_range_from_mask (irange &r, tree type, const irange &lhs)
 {
   if (!lhs.contains_p (build_zero_cst (type)))
     r = range_nonzero (type);
@@ -3064,7 +3064,7 @@  operator_bitwise_and::simple_op1_range_solver (irange &r, tree type,
 {
   if (!op2.singleton_p ())
     {
-      set_nonzero_range_from_mask (r, type, lhs);
+      set_known_zero_range_from_mask (r, type, lhs);
       return;
     }
   unsigned int nprec = TYPE_PRECISION (type);
@@ -3157,14 +3157,14 @@  operator_bitwise_and::op1_range (irange &r, tree type,
       r.union_ (res);
     }
   if (r.undefined_p ())
-    set_nonzero_range_from_mask (r, type, lhs);
+    set_known_zero_range_from_mask (r, type, lhs);
 
   // For 0 = op1 & MASK, op1 is ~MASK.
   if (lhs.zero_p () && op2.singleton_p ())
     {
-      wide_int nz = wi::bit_not (op2.get_nonzero_bits ());
+      wide_int nz = wi::bit_not (op2.get_known_zero_bits ());
       int_range<2> tmp (type);
-      tmp.set_nonzero_bits (nz);
+      tmp.set_known_zero_bits (nz);
       r.intersect (tmp);
     }
   return true;
@@ -4851,7 +4851,7 @@  range_op_bitwise_and_tests ()
     int_range<2> mask = int_range<2> (INT (7), INT (7));
     op_bitwise_and.op1_range (res, integer_type_node, zero, mask);
     wide_int inv = wi::shwi (~7U, TYPE_PRECISION (integer_type_node));
-    ASSERT_TRUE (res.get_nonzero_bits () == inv);
+    ASSERT_TRUE (res.get_known_zero_bits () == inv);
   }
 
   // (NONZERO | X) is nonzero.
diff --git a/gcc/tree-data-ref.cc b/gcc/tree-data-ref.cc
index 978c3f002f7..1232c69174a 100644
--- a/gcc/tree-data-ref.cc
+++ b/gcc/tree-data-ref.cc
@@ -1027,7 +1027,7 @@  split_constant_offset (tree exp, tree *var, tree *off, value_range *exp_range,
 	  wide_int var_min = wi::to_wide (vr.min ());
 	  wide_int var_max = wi::to_wide (vr.max ());
 	  value_range_kind vr_kind = vr.kind ();
-	  wide_int var_nonzero = get_nonzero_bits (exp);
+	  wide_int var_nonzero = get_known_zero_bits (exp);
 	  vr_kind = intersect_range_with_nonzero_bits (vr_kind,
 						       &var_min, &var_max,
 						       var_nonzero,
diff --git a/gcc/tree-ssa-ccp.cc b/gcc/tree-ssa-ccp.cc
index 9778e776cf2..94528f430d3 100644
--- a/gcc/tree-ssa-ccp.cc
+++ b/gcc/tree-ssa-ccp.cc
@@ -297,7 +297,7 @@  get_default_value (tree var)
 	  val.mask = -1;
 	  if (flag_tree_bit_ccp)
 	    {
-	      wide_int nonzero_bits = get_nonzero_bits (var);
+	      wide_int nonzero_bits = get_known_zero_bits (var);
 	      tree value;
 	      widest_int mask;
 
@@ -1013,8 +1013,8 @@  ccp_finalize (bool nonzero_p)
 	  wide_int nonzero_bits
 	    = (wide_int::from (val->mask, precision, UNSIGNED)
 	       | wi::to_wide (val->value));
-	  nonzero_bits &= get_nonzero_bits (name);
-	  set_nonzero_bits (name, nonzero_bits);
+	  nonzero_bits &= get_known_zero_bits (name);
+	  set_known_zero_bits (name, nonzero_bits);
 	}
     }
 
@@ -2438,7 +2438,7 @@  evaluate_stmt (gimple *stmt)
       && TREE_CODE (gimple_get_lhs (stmt)) == SSA_NAME)
     {
       tree lhs = gimple_get_lhs (stmt);
-      wide_int nonzero_bits = get_nonzero_bits (lhs);
+      wide_int nonzero_bits = get_known_zero_bits (lhs);
       if (nonzero_bits != -1)
 	{
 	  if (!is_constant)
diff --git a/gcc/tree-ssa-dom.cc b/gcc/tree-ssa-dom.cc
index c7f095d79fc..b9b218f663a 100644
--- a/gcc/tree-ssa-dom.cc
+++ b/gcc/tree-ssa-dom.cc
@@ -1380,7 +1380,7 @@  dom_opt_dom_walker::set_global_ranges_from_unreachable_edges (basic_block bb)
 	    && !r.undefined_p ())
 	  {
 	    set_range_info (name, r);
-	    maybe_set_nonzero_bits (pred_e, name);
+	    maybe_set_known_zero_bits (pred_e, name);
 	  }
       }
 }
diff --git a/gcc/tree-ssa-reassoc.cc b/gcc/tree-ssa-reassoc.cc
index b39c3c882c4..407a3b7ee1d 100644
--- a/gcc/tree-ssa-reassoc.cc
+++ b/gcc/tree-ssa-reassoc.cc
@@ -3858,7 +3858,7 @@  optimize_range_tests_var_bound (enum tree_code opcode, int first, int length,
       /* maybe_optimize_range_tests allows statements without side-effects
 	 in the basic blocks as long as they are consumed in the same bb.
 	 Make sure rhs2's def stmt is not among them, otherwise we can't
-	 use safely get_nonzero_bits on it.  E.g. in:
+	 safely use get_known_zero_bits on it.  E.g. in:
 	  # RANGE [-83, 1] NONZERO 173
 	  # k_32 = PHI <k_47(13), k_12(9)>
 	 ...
@@ -3925,7 +3925,7 @@  optimize_range_tests_var_bound (enum tree_code opcode, int first, int length,
       if (rhs2 == NULL_TREE)
 	continue;
 
-      wide_int nz = get_nonzero_bits (rhs2);
+      wide_int nz = get_known_zero_bits (rhs2);
       if (wi::neg_p (nz))
 	continue;
 
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index 5c5d0e346c4..a140a194024 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -456,23 +456,23 @@  set_ptr_nonnull (tree name)
   pi->pt.null = 0;
 }
 
-/* Update the non-zero bits bitmask of NAME.  */
+/* Update the known-zero bits bitmask of NAME.  */
 
 void
-set_nonzero_bits (tree name, const wide_int_ref &mask)
+set_known_zero_bits (tree name, const wide_int_ref &mask)
 {
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
 
   int_range<2> r (TREE_TYPE (name));
-  r.set_nonzero_bits (mask);
+  r.set_known_zero_bits (mask);
   set_range_info (name, r);
 }
 
-/* Return a widest_int with potentially non-zero bits in SSA_NAME
+/* Return a wide_int whose zero bits are known to be zero in SSA_NAME
    NAME, the constant for INTEGER_CST, or -1 if unknown.  */
 
 wide_int
-get_nonzero_bits (const_tree name)
+get_known_zero_bits (const_tree name)
 {
   if (TREE_CODE (name) == INTEGER_CST)
     return wi::to_wide (name);
@@ -497,7 +497,7 @@  get_nonzero_bits (const_tree name)
      through vrange_storage.  */
   irange_storage_slot *ri
     = static_cast <irange_storage_slot *> (SSA_NAME_RANGE_INFO (name));
-  return ri->get_nonzero_bits ();
+  return ri->get_known_zero_bits ();
 }
 
 /* Return TRUE is OP, an SSA_NAME has a range of values [0..1], false
@@ -534,7 +534,7 @@  ssa_name_has_boolean_range (tree op)
       if (get_range_query (cfun)->range_of_expr (r, op) && r == onezero)
 	return true;
 
-      if (wi::eq_p (get_nonzero_bits (op), 1))
+      if (wi::eq_p (get_known_zero_bits (op), 1))
 	return true;
     }
 
diff --git a/gcc/tree-ssanames.h b/gcc/tree-ssanames.h
index ce10af9670a..f9cf6938269 100644
--- a/gcc/tree-ssanames.h
+++ b/gcc/tree-ssanames.h
@@ -58,8 +58,8 @@  struct GTY(()) ptr_info_def
 
 /* Sets the value range to SSA.  */
 extern bool set_range_info (tree, const vrange &);
-extern void set_nonzero_bits (tree, const wide_int_ref &);
-extern wide_int get_nonzero_bits (const_tree);
+extern void set_known_zero_bits (tree, const wide_int_ref &);
+extern wide_int get_known_zero_bits (const_tree);
 extern bool ssa_name_has_boolean_range (tree);
 extern void init_ssanames (struct function *, int);
 extern void fini_ssanames (struct function *);
diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 777ba2f5903..54776003af3 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -71,7 +71,7 @@  vect_get_range_info (tree var, wide_int *min_value, wide_int *max_value)
   *min_value = wi::to_wide (vr.min ());
   *max_value = wi::to_wide (vr.max ());
   value_range_kind vr_type = vr.kind ();
-  wide_int nonzero = get_nonzero_bits (var);
+  wide_int nonzero = get_known_zero_bits (var);
   signop sgn = TYPE_SIGN (TREE_TYPE (var));
   if (intersect_range_with_nonzero_bits (vr_type, min_value, max_value,
 					 nonzero, sgn) == VR_RANGE)
diff --git a/gcc/tree-vrp.cc b/gcc/tree-vrp.cc
index e5a292bb875..2b81a3dd168 100644
--- a/gcc/tree-vrp.cc
+++ b/gcc/tree-vrp.cc
@@ -2242,7 +2242,7 @@  register_edge_assert_for (tree name, edge e,
    from the non-zero bitmask.  */
 
 void
-maybe_set_nonzero_bits (edge e, tree var)
+maybe_set_known_zero_bits (edge e, tree var)
 {
   basic_block cond_bb = e->src;
   gimple *stmt = last_stmt (cond_bb);
@@ -2276,7 +2276,7 @@  maybe_set_nonzero_bits (edge e, tree var)
 	return;
     }
   cst = gimple_assign_rhs2 (stmt);
-  set_nonzero_bits (var, wi::bit_and_not (get_nonzero_bits (var),
+  set_known_zero_bits (var, wi::bit_and_not (get_known_zero_bits (var),
 					  wi::to_wide (cst)));
 }
 
@@ -3754,7 +3754,7 @@  vrp_asserts::remove_range_assertions ()
 			SSA_NAME_RANGE_INFO (var) = NULL;
 		      }
 		    duplicate_ssa_name_range_info (var, lhs);
-		    maybe_set_nonzero_bits (single_pred_edge (bb), var);
+		    maybe_set_known_zero_bits (single_pred_edge (bb), var);
 		  }
 	      }
 
diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
index b8644e9d0a7..1cfed8ea52c 100644
--- a/gcc/tree-vrp.h
+++ b/gcc/tree-vrp.h
@@ -61,7 +61,7 @@  extern tree find_case_label_range (gswitch *, const irange *vr);
 extern bool find_case_label_index (gswitch *, size_t, tree, size_t *);
 extern bool overflow_comparison_p (tree_code, tree, tree, bool, tree *);
 extern tree get_single_symbol (tree, bool *, tree *);
-extern void maybe_set_nonzero_bits (edge, tree);
+extern void maybe_set_known_zero_bits (edge, tree);
 extern wide_int masked_increment (const wide_int &val_in, const wide_int &mask,
 				  const wide_int &sgnbit, unsigned int prec);
 
diff --git a/gcc/tree.cc b/gcc/tree.cc
index 81a6ceaf181..921a9881b1e 100644
--- a/gcc/tree.cc
+++ b/gcc/tree.cc
@@ -3025,7 +3025,7 @@  tree_ctz (const_tree expr)
       ret1 = wi::ctz (wi::to_wide (expr));
       return MIN (ret1, prec);
     case SSA_NAME:
-      ret1 = wi::ctz (get_nonzero_bits (expr));
+      ret1 = wi::ctz (get_known_zero_bits (expr));
       return MIN (ret1, prec);
     case PLUS_EXPR:
     case MINUS_EXPR:
diff --git a/gcc/value-range-pretty-print.cc b/gcc/value-range-pretty-print.cc
index 3a3b4b44cbd..0f95ad1e956 100644
--- a/gcc/value-range-pretty-print.cc
+++ b/gcc/value-range-pretty-print.cc
@@ -107,7 +107,7 @@  vrange_printer::print_irange_bound (const wide_int &bound, tree type) const
 void
 vrange_printer::print_irange_bitmasks (const irange &r) const
 {
-  wide_int nz = r.get_nonzero_bits ();
+  wide_int nz = r.get_known_zero_bits ();
   if (nz == -1)
     return;
 
diff --git a/gcc/value-range-storage.cc b/gcc/value-range-storage.cc
index 6e054622830..74aaa929c4c 100644
--- a/gcc/value-range-storage.cc
+++ b/gcc/value-range-storage.cc
@@ -150,7 +150,7 @@  irange_storage_slot::set_irange (const irange &r)
 {
   gcc_checking_assert (fits_p (r));
 
-  m_ints[0] = r.get_nonzero_bits ();
+  m_ints[0] = r.get_known_zero_bits ();
 
   unsigned pairs = r.num_pairs ();
   for (unsigned i = 0; i < pairs; ++i)
@@ -174,7 +174,7 @@  irange_storage_slot::get_irange (irange &r, tree type) const
       int_range<2> tmp (type, m_ints[i], m_ints[i + 1]);
       r.union_ (tmp);
     }
-  r.set_nonzero_bits (get_nonzero_bits ());
+  r.set_known_zero_bits (get_known_zero_bits ());
 }
 
 // Return the size in bytes to allocate a slot that can hold R.
@@ -220,7 +220,7 @@  irange_storage_slot::dump () const
       m_ints[i + 1].dump ();
     }
   fprintf (stderr, "NONZERO ");
-  wide_int nz = get_nonzero_bits ();
+  wide_int nz = get_known_zero_bits ();
   nz.dump ();
 }
 
diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
index 0cf95ebf7c1..cfa15b48884 100644
--- a/gcc/value-range-storage.h
+++ b/gcc/value-range-storage.h
@@ -70,7 +70,7 @@  public:
   static irange_storage_slot *alloc_slot (vrange_allocator &, const irange &r);
   void set_irange (const irange &r);
   void get_irange (irange &r, tree type) const;
-  wide_int get_nonzero_bits () const { return m_ints[0]; }
+  wide_int get_known_zero_bits () const { return m_ints[0]; }
   bool fits_p (const irange &r) const;
   static size_t size (const irange &r);
   void dump () const;
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index bcda4987307..05c43485cef 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -837,7 +837,7 @@  irange::operator= (const irange &src)
 
   m_num_ranges = lim;
   m_kind = src.m_kind;
-  m_nonzero_mask = src.m_nonzero_mask;
+  m_known_zero_mask = src.m_known_zero_mask;
   if (flag_checking)
     verify_range ();
   return *this;
@@ -894,7 +894,7 @@  irange::copy_to_legacy (const irange &src)
       m_base[0] = src.m_base[0];
       m_base[1] = src.m_base[1];
       m_kind = src.m_kind;
-      m_nonzero_mask = src.m_nonzero_mask;
+      m_known_zero_mask = src.m_known_zero_mask;
       return;
     }
   // Copy multi-range to legacy.
@@ -959,7 +959,7 @@  irange::irange_set (tree min, tree max)
   m_base[1] = max;
   m_num_ranges = 1;
   m_kind = VR_RANGE;
-  m_nonzero_mask = NULL;
+  m_known_zero_mask = NULL;
   normalize_kind ();
 
   if (flag_checking)
@@ -1033,7 +1033,7 @@  irange::irange_set_anti_range (tree min, tree max)
     }
 
   m_kind = VR_RANGE;
-  m_nonzero_mask = NULL;
+  m_known_zero_mask = NULL;
   normalize_kind ();
 
   if (flag_checking)
@@ -1090,7 +1090,7 @@  irange::set (tree min, tree max, value_range_kind kind)
       m_base[0] = min;
       m_base[1] = max;
       m_num_ranges = 1;
-      m_nonzero_mask = NULL;
+      m_known_zero_mask = NULL;
       return;
     }
 
@@ -1140,7 +1140,7 @@  irange::set (tree min, tree max, value_range_kind kind)
   m_base[0] = min;
   m_base[1] = max;
   m_num_ranges = 1;
-  m_nonzero_mask = NULL;
+  m_known_zero_mask = NULL;
   normalize_kind ();
   if (flag_checking)
     verify_range ();
@@ -1159,8 +1159,8 @@  irange::verify_range ()
     }
   if (m_kind == VR_VARYING)
     {
-      gcc_checking_assert (!m_nonzero_mask
-			   || wi::to_wide (m_nonzero_mask) == -1);
+      gcc_checking_assert (!m_known_zero_mask
+			   || wi::to_wide (m_known_zero_mask) == -1);
       gcc_checking_assert (m_num_ranges == 1);
       gcc_checking_assert (varying_compatible_p ());
       return;
@@ -1255,7 +1255,7 @@  irange::legacy_equal_p (const irange &other) const
 			       other.tree_lower_bound (0))
 	  && vrp_operand_equal_p (tree_upper_bound (0),
 				  other.tree_upper_bound (0))
-	  && get_nonzero_bits () == other.get_nonzero_bits ());
+	  && get_known_zero_bits () == other.get_known_zero_bits ());
 }
 
 bool
@@ -1290,7 +1290,7 @@  irange::operator== (const irange &other) const
 	  || !operand_equal_p (ub, ub_other, 0))
 	return false;
     }
-  return get_nonzero_bits () == other.get_nonzero_bits ();
+  return get_known_zero_bits () == other.get_known_zero_bits ();
 }
 
 /* Return TRUE if this is a symbolic range.  */
@@ -1433,11 +1433,11 @@  irange::contains_p (tree cst) const
 
   gcc_checking_assert (TREE_CODE (cst) == INTEGER_CST);
 
-  // See if we can exclude CST based on the nonzero bits.
-  if (m_nonzero_mask)
+  // See if we can exclude CST based on the known-zero bits.
+  if (m_known_zero_mask)
     {
       wide_int cstw = wi::to_wide (cst);
-      if (cstw != 0 && wi::bit_and (wi::to_wide (m_nonzero_mask), cstw) == 0)
+      if (cstw != 0 && wi::bit_and (wi::to_wide (m_known_zero_mask), cstw) == 0)
 	return false;
     }
 
@@ -2335,7 +2335,7 @@  irange::irange_single_pair_union (const irange &r)
     {
       // If current upper bound is new upper bound, we're done.
       if (wi::le_p (wi::to_wide (r.m_base[1]), wi::to_wide (m_base[1]), sign))
-	return union_nonzero_bits (r);
+	return union_known_zero_bits (r);
       // Otherwise R has the new upper bound.
       // Check for overlap/touching ranges, or single target range.
       if (m_max_ranges == 1
@@ -2348,7 +2348,7 @@  irange::irange_single_pair_union (const irange &r)
 	  m_base[3] = r.m_base[1];
 	  m_num_ranges = 2;
 	}
-      union_nonzero_bits (r);
+      union_known_zero_bits (r);
       return true;
     }
 
@@ -2371,7 +2371,7 @@  irange::irange_single_pair_union (const irange &r)
       m_base[3] = m_base[1];
       m_base[1] = r.m_base[1];
     }
-  union_nonzero_bits (r);
+  union_known_zero_bits (r);
   return true;
 }
 
@@ -2408,7 +2408,7 @@  irange::irange_union (const irange &r)
 
   // If this ranges fully contains R, then we need do nothing.
   if (irange_contains_p (r))
-    return union_nonzero_bits (r);
+    return union_known_zero_bits (r);
 
   // Do not worry about merging and such by reserving twice as many
   // pairs as needed, and then simply sort the 2 ranges into this
@@ -2496,7 +2496,7 @@  irange::irange_union (const irange &r)
   m_num_ranges = i / 2;
 
   m_kind = VR_RANGE;
-  union_nonzero_bits (r);
+  union_known_zero_bits (r);
   return true;
 }
 
@@ -2576,13 +2576,13 @@  irange::irange_intersect (const irange &r)
       if (undefined_p ())
 	return true;
 
-      res |= intersect_nonzero_bits (r);
+      res |= intersect_known_zero_bits (r);
       return res;
     }
 
   // If R fully contains this, then intersection will change nothing.
   if (r.irange_contains_p (*this))
-    return intersect_nonzero_bits (r);
+    return intersect_known_zero_bits (r);
 
   signop sign = TYPE_SIGN (TREE_TYPE(m_base[0]));
   unsigned bld_pair = 0;
@@ -2658,7 +2658,7 @@  irange::irange_intersect (const irange &r)
     }
 
   m_kind = VR_RANGE;
-  intersect_nonzero_bits (r);
+  intersect_known_zero_bits (r);
   return true;
 }
 
@@ -2801,7 +2801,7 @@  irange::invert ()
   signop sign = TYPE_SIGN (ttype);
   wide_int type_min = wi::min_value (prec, sign);
   wide_int type_max = wi::max_value (prec, sign);
-  m_nonzero_mask = NULL;
+  m_known_zero_mask = NULL;
   if (m_num_ranges == m_max_ranges
       && lower_bound () != type_min
       && upper_bound () != type_max)
@@ -2876,10 +2876,10 @@  irange::invert ()
     verify_range ();
 }
 
-// Return the nonzero bits inherent in the range.
+// Return the known-zero bits inherent in the range.
 
 wide_int
-irange::get_nonzero_bits_from_range () const
+irange::get_known_zero_bits_from_range () const
 {
   // For legacy symbolics.
   if (!constant_p ())
@@ -2900,25 +2900,25 @@  irange::get_nonzero_bits_from_range () const
 // so and return TRUE.
 
 bool
-irange::set_range_from_nonzero_bits ()
+irange::set_range_from_known_zero_bits ()
 {
   gcc_checking_assert (!undefined_p ());
-  if (!m_nonzero_mask)
+  if (!m_known_zero_mask)
     return false;
-  unsigned popcount = wi::popcount (wi::to_wide (m_nonzero_mask));
+  unsigned popcount = wi::popcount (wi::to_wide (m_known_zero_mask));
 
   // If we have only one bit set in the mask, we can figure out the
   // range immediately.
   if (popcount == 1)
     {
       // Make sure we don't pessimize the range.
-      if (!contains_p (m_nonzero_mask))
+      if (!contains_p (m_known_zero_mask))
 	return false;
 
       bool has_zero = contains_p (build_zero_cst (type ()));
-      tree nz = m_nonzero_mask;
+      tree nz = m_known_zero_mask;
       set (nz, nz);
-      m_nonzero_mask = nz;
+      m_known_zero_mask = nz;
       if (has_zero)
 	{
 	  int_range<2> zero;
@@ -2936,14 +2936,14 @@  irange::set_range_from_nonzero_bits ()
 }
 
 void
-irange::set_nonzero_bits (const wide_int_ref &bits)
+irange::set_known_zero_bits (const wide_int_ref &bits)
 {
   gcc_checking_assert (!undefined_p ());
   unsigned prec = TYPE_PRECISION (type ());
 
   if (bits == -1)
     {
-      m_nonzero_mask = NULL;
+      m_known_zero_mask = NULL;
       normalize_kind ();
       if (flag_checking)
 	verify_range ();
@@ -2955,8 +2955,8 @@  irange::set_nonzero_bits (const wide_int_ref &bits)
     m_kind = VR_RANGE;
 
   wide_int nz = wide_int::from (bits, prec, TYPE_SIGN (type ()));
-  m_nonzero_mask = wide_int_to_tree (type (), nz);
-  if (set_range_from_nonzero_bits ())
+  m_known_zero_mask = wide_int_to_tree (type (), nz);
+  if (set_range_from_known_zero_bits ())
     return;
 
   normalize_kind ();
@@ -2964,11 +2964,11 @@  irange::set_nonzero_bits (const wide_int_ref &bits)
     verify_range ();
 }
 
-// Return the nonzero bitmask.  This will return the nonzero bits plus
-// the nonzero bits inherent in the range.
+// Return the known-zero bitmask.  This combines the stored mask with
+// the known-zero bits inherent in the range.
 
 wide_int
-irange::get_nonzero_bits () const
+irange::get_known_zero_bits () const
 {
   gcc_checking_assert (!undefined_p ());
   // The nonzero mask inherent in the range is calculated on-demand.
@@ -2979,10 +2979,10 @@  irange::get_nonzero_bits () const
   // the mask precisely up to date at all times.  Instead, we default
   // to -1 and set it when explicitly requested.  However, this
   // function will always return the correct mask.
-  if (m_nonzero_mask)
-    return wi::to_wide (m_nonzero_mask) & get_nonzero_bits_from_range ();
+  if (m_known_zero_mask)
+    return wi::to_wide (m_known_zero_mask) & get_known_zero_bits_from_range ();
   else
-    return get_nonzero_bits_from_range ();
+    return get_known_zero_bits_from_range ();
 }
 
 // Convert tree mask to wide_int.  Returns -1 for NULL masks.
@@ -2996,15 +2996,15 @@  mask_to_wi (tree mask, tree type)
     return wi::shwi (-1, TYPE_PRECISION (type));
 }
 
-// Intersect the nonzero bits in R into THIS and normalize the range.
+// Intersect the known-zero bits in R into THIS and normalize the range.
 // Return TRUE if the intersection changed anything.
 
 bool
-irange::intersect_nonzero_bits (const irange &r)
+irange::intersect_known_zero_bits (const irange &r)
 {
   gcc_checking_assert (!undefined_p () && !r.undefined_p ());
 
-  if (!m_nonzero_mask && !r.m_nonzero_mask)
+  if (!m_known_zero_mask && !r.m_known_zero_mask)
     {
       normalize_kind ();
       if (flag_checking)
@@ -3014,11 +3014,11 @@  irange::intersect_nonzero_bits (const irange &r)
 
   bool changed = false;
   tree t = type ();
-  if (mask_to_wi (m_nonzero_mask, t) != mask_to_wi (r.m_nonzero_mask, t))
+  if (mask_to_wi (m_known_zero_mask, t) != mask_to_wi (r.m_known_zero_mask, t))
     {
-      wide_int nz = get_nonzero_bits () & r.get_nonzero_bits ();
-      m_nonzero_mask = wide_int_to_tree (t, nz);
-      if (set_range_from_nonzero_bits ())
+      wide_int nz = get_known_zero_bits () & r.get_known_zero_bits ();
+      m_known_zero_mask = wide_int_to_tree (t, nz);
+      if (set_range_from_known_zero_bits ())
 	return true;
       changed = true;
     }
@@ -3028,15 +3028,15 @@  irange::intersect_nonzero_bits (const irange &r)
   return changed;
 }
 
-// Union the nonzero bits in R into THIS and normalize the range.
+// Union the known-zero bits in R into THIS and normalize the range.
 // Return TRUE if the union changed anything.
 
 bool
-irange::union_nonzero_bits (const irange &r)
+irange::union_known_zero_bits (const irange &r)
 {
   gcc_checking_assert (!undefined_p () && !r.undefined_p ());
 
-  if (!m_nonzero_mask && !r.m_nonzero_mask)
+  if (!m_known_zero_mask && !r.m_known_zero_mask)
     {
       normalize_kind ();
       if (flag_checking)
@@ -3046,14 +3046,14 @@  irange::union_nonzero_bits (const irange &r)
 
   bool changed = false;
   tree t = type ();
-  if (mask_to_wi (m_nonzero_mask, t) != mask_to_wi (r.m_nonzero_mask, t))
+  if (mask_to_wi (m_known_zero_mask, t) != mask_to_wi (r.m_known_zero_mask, t))
     {
-      wide_int nz = get_nonzero_bits () | r.get_nonzero_bits ();
-      m_nonzero_mask = wide_int_to_tree (t, nz);
-      // No need to call set_range_from_nonzero_bits, because we'll
+      wide_int nz = get_known_zero_bits () | r.get_known_zero_bits ();
+      m_known_zero_mask = wide_int_to_tree (t, nz);
+      // No need to call set_range_from_known_zero_bits, because we'll
       // never narrow the range.  Besides, it would cause endless
       // recursion because of the union_ in
-      // set_range_from_nonzero_bits.
+      // set_range_from_known_zero_bits.
       changed = true;
     }
   normalize_kind ();
@@ -3626,58 +3626,58 @@  range_tests_nonzero_bits ()
 {
   int_range<2> r0, r1;
 
-  // Adding nonzero bits to a varying drops the varying.
+  // Adding known-zero bits to a varying drops the varying.
   r0.set_varying (integer_type_node);
-  r0.set_nonzero_bits (255);
+  r0.set_known_zero_bits (255);
   ASSERT_TRUE (!r0.varying_p ());
-  // Dropping the nonzero bits brings us back to varying.
-  r0.set_nonzero_bits (-1);
+  // Dropping the known-zero bits brings us back to varying.
+  r0.set_known_zero_bits (-1);
   ASSERT_TRUE (r0.varying_p ());
 
-  // Test contains_p with nonzero bits.
+  // Test contains_p with known-zero bits.
   r0.set_zero (integer_type_node);
   ASSERT_TRUE (r0.contains_p (INT (0)));
   ASSERT_FALSE (r0.contains_p (INT (1)));
-  r0.set_nonzero_bits (0xfe);
+  r0.set_known_zero_bits (0xfe);
   ASSERT_FALSE (r0.contains_p (INT (0x100)));
   ASSERT_FALSE (r0.contains_p (INT (0x3)));
 
-  // Union of nonzero bits.
+  // Union of known-zero bits.
   r0.set_varying (integer_type_node);
-  r0.set_nonzero_bits (0xf0);
+  r0.set_known_zero_bits (0xf0);
   r1.set_varying (integer_type_node);
-  r1.set_nonzero_bits (0xf);
+  r1.set_known_zero_bits (0xf);
   r0.union_ (r1);
-  ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
+  ASSERT_TRUE (r0.get_known_zero_bits () == 0xff);
 
-  // Intersect of nonzero bits.
+  // Intersect of known-zero bits.
   r0.set (INT (0), INT (255));
-  r0.set_nonzero_bits (0xfe);
+  r0.set_known_zero_bits (0xfe);
   r1.set_varying (integer_type_node);
-  r1.set_nonzero_bits (0xf0);
+  r1.set_known_zero_bits (0xf0);
   r0.intersect (r1);
-  ASSERT_TRUE (r0.get_nonzero_bits () == 0xf0);
+  ASSERT_TRUE (r0.get_known_zero_bits () == 0xf0);
 
-  // Intersect where the mask of nonzero bits is implicit from the range.
+  // Intersect where the mask of known-zero bits is implicit from the range.
   r0.set_varying (integer_type_node);
   r1.set (INT (0), INT (255));
   r0.intersect (r1);
-  ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
+  ASSERT_TRUE (r0.get_known_zero_bits () == 0xff);
 
   // The union of a mask of 0xff..ffff00 with a mask of 0xff spans the
   // entire domain, and makes the range a varying.
   r0.set_varying (integer_type_node);
   wide_int x = wi::shwi (0xff, TYPE_PRECISION (integer_type_node));
   x = wi::bit_not (x);
-  r0.set_nonzero_bits (x); 	// 0xff..ff00
+  r0.set_known_zero_bits (x); 	// 0xff..ff00
   r1.set_varying (integer_type_node);
-  r1.set_nonzero_bits (0xff);
+  r1.set_known_zero_bits (0xff);
   r0.union_ (r1);
   ASSERT_TRUE (r0.varying_p ());
 
   // Test that setting a nonzero bit of 1 does not pessimize the range.
   r0.set_zero (integer_type_node);
-  r0.set_nonzero_bits (1);
+  r0.set_known_zero_bits (1);
   ASSERT_TRUE (r0.zero_p ());
 }
 
diff --git a/gcc/value-range.h b/gcc/value-range.h
index b48542a68aa..444f357afdf 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -156,9 +156,9 @@  public:
   virtual bool fits_p (const vrange &r) const override;
   virtual void accept (const vrange_visitor &v) const override;
 
-  // Nonzero masks.
-  wide_int get_nonzero_bits () const;
-  void set_nonzero_bits (const wide_int_ref &bits);
+  // Known bit masks.
+  wide_int get_known_zero_bits () const;
+  void set_known_zero_bits (const wide_int_ref &bits);
 
   // Deprecated legacy public methods.
   tree min () const;				// DEPRECATED
@@ -207,15 +207,15 @@  private:
 
   void irange_set_1bit_anti_range (tree, tree);
   bool varying_compatible_p () const;
-  bool intersect_nonzero_bits (const irange &r);
-  bool union_nonzero_bits (const irange &r);
-  wide_int get_nonzero_bits_from_range () const;
-  bool set_range_from_nonzero_bits ();
+  bool intersect_known_zero_bits (const irange &r);
+  bool union_known_zero_bits (const irange &r);
+  wide_int get_known_zero_bits_from_range () const;
+  bool set_range_from_known_zero_bits ();
 
   bool intersect (const wide_int& lb, const wide_int& ub);
   unsigned char m_num_ranges;
   unsigned char m_max_ranges;
-  tree m_nonzero_mask;
+  tree m_known_zero_mask;
   tree *m_base;
 };
 
@@ -687,11 +687,11 @@  irange::varying_compatible_p () const
   if (INTEGRAL_TYPE_P (t))
     return (wi::to_wide (l) == wi::min_value (prec, sign)
 	    && wi::to_wide (u) == wi::max_value (prec, sign)
-	    && (!m_nonzero_mask || wi::to_wide (m_nonzero_mask) == -1));
+	    && (!m_known_zero_mask || wi::to_wide (m_known_zero_mask) == -1));
   if (POINTER_TYPE_P (t))
     return (wi::to_wide (l) == 0
 	    && wi::to_wide (u) == wi::max_value (prec, sign)
-	    && (!m_nonzero_mask || wi::to_wide (m_nonzero_mask) == -1));
+	    && (!m_known_zero_mask || wi::to_wide (m_known_zero_mask) == -1));
   return true;
 }
 
@@ -758,8 +758,8 @@  gt_ggc_mx (irange *x)
       gt_ggc_mx (x->m_base[i * 2]);
       gt_ggc_mx (x->m_base[i * 2 + 1]);
     }
-  if (x->m_nonzero_mask)
-    gt_ggc_mx (x->m_nonzero_mask);
+  if (x->m_known_zero_mask)
+    gt_ggc_mx (x->m_known_zero_mask);
 }
 
 inline void
@@ -770,8 +770,8 @@  gt_pch_nx (irange *x)
       gt_pch_nx (x->m_base[i * 2]);
       gt_pch_nx (x->m_base[i * 2 + 1]);
     }
-  if (x->m_nonzero_mask)
-    gt_pch_nx (x->m_nonzero_mask);
+  if (x->m_known_zero_mask)
+    gt_pch_nx (x->m_known_zero_mask);
 }
 
 inline void
@@ -782,8 +782,8 @@  gt_pch_nx (irange *x, gt_pointer_operator op, void *cookie)
       op (&x->m_base[i * 2], NULL, cookie);
       op (&x->m_base[i * 2 + 1], NULL, cookie);
     }
-  if (x->m_nonzero_mask)
-    op (&x->m_nonzero_mask, NULL, cookie);
+  if (x->m_known_zero_mask)
+    op (&x->m_known_zero_mask, NULL, cookie);
 }
 
 template<unsigned N>
@@ -878,7 +878,7 @@  irange::set_undefined ()
 {
   m_kind = VR_UNDEFINED;
   m_num_ranges = 0;
-  m_nonzero_mask = NULL;
+  m_known_zero_mask = NULL;
 }
 
 inline void
@@ -886,7 +886,7 @@  irange::set_varying (tree type)
 {
   m_kind = VR_VARYING;
   m_num_ranges = 1;
-  m_nonzero_mask = NULL;
+  m_known_zero_mask = NULL;
 
   if (INTEGRAL_TYPE_P (type))
     {