From patchwork Tue Oct 4 07:35:30 2022
From: Aldy Hernandez <aldyh@redhat.com>
To: GCC patches <gcc-patches@gcc.gnu.org>
Subject: [COMMITTED] Convert nonzero mask in irange to wide_int.
Date: Tue, 4 Oct 2022 09:35:30 +0200
Message-Id: <20221004073530.1461390-1-aldyh@redhat.com>

The reason the nonzero mask was kept in a tree was basically inertia,
as everything in irange is a tree.  However, there is no need to keep
it in a tree: the conversions to and from wide ints are annoying, and
special casing NULL masks to mean -1 is error prone.

I have not only rewritten all the uses to assume a wide int, but have
also corrected a few places where we were failing to propagate the
masks, or rather were pessimizing them to -1.  This will become more
important in upcoming patches that make better use of the masks.

Performance testing shows a trivial improvement in VRP, since things
like irange::contains_p() are still tied to a tree.  Ughh, can't wait
for trees in iranges to go away.

(Two short standalone sketches of the new mask semantics follow the
patch, for readers who want to see the computation in isolation.)

gcc/ChangeLog:

	* value-range-storage.cc (irange_storage_slot::set_irange):
	Remove special case.
	* value-range.cc (irange::irange_set): Adjust for nonzero mask
	being a wide int.
	(irange::irange_set_anti_range): Same.
	(irange::set): Same.
	(irange::verify_range): Same.
	(irange::legacy_equal_p): Same.
	(irange::operator==): Same.
	(irange::contains_p): Same.
	(irange::legacy_intersect): Same.
	(irange::legacy_union): Same.
	(irange::irange_single_pair_union): Call union_nonzero_bits.
	(irange::irange_union): Same.
	(irange::irange_intersect): Call intersect_nonzero_bits.
	(irange::intersect): Adjust for nonzero mask being a wide int.
	(irange::invert): Same.
	(irange::set_nonzero_bits): Same.
	(irange::get_nonzero_bits_from_range): New.
	(irange::set_range_from_nonzero_bits): New.
	(irange::get_nonzero_bits): Adjust for nonzero mask being a
	wide int.
	(irange::intersect_nonzero_bits): Same.
	(irange::union_nonzero_bits): Same.
	(range_tests_nonzero_bits): Remove test.
	* value-range.h (irange::varying_compatible_p): Adjust for
	nonzero mask being a wide int.
	(gt_ggc_mx): Same.
	(gt_pch_nx): Same.
	(irange::set_undefined): Same.
	(irange::set_varying): Same.
	(irange::normalize_kind): Same.
---
 gcc/value-range-storage.cc |   6 +-
 gcc/value-range.cc         | 270 ++++++++++++++++---------------------
 gcc/value-range.h          |  25 ++--
 3 files changed, 130 insertions(+), 171 deletions(-)

diff --git a/gcc/value-range-storage.cc b/gcc/value-range-storage.cc
index de7575ed48d..6e054622830 100644
--- a/gcc/value-range-storage.cc
+++ b/gcc/value-range-storage.cc
@@ -150,11 +150,7 @@ irange_storage_slot::set_irange (const irange &r)
 {
   gcc_checking_assert (fits_p (r));
 
-  // Avoid calling unsupported get_nonzero_bits on legacy.
-  if (r.legacy_mode_p ())
-    m_ints[0] = -1;
-  else
-    m_ints[0] = r.get_nonzero_bits ();
+  m_ints[0] = r.get_nonzero_bits ();
 
   unsigned pairs = r.num_pairs ();
   for (unsigned i = 0; i < pairs; ++i)
diff --git a/gcc/value-range.cc b/gcc/value-range.cc
index 6e196574de9..afb26a40083 100644
--- a/gcc/value-range.cc
+++ b/gcc/value-range.cc
@@ -940,7 +940,7 @@ irange::irange_set (tree min, tree max)
   m_base[1] = max;
   m_num_ranges = 1;
   m_kind = VR_RANGE;
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::shwi (-1, TYPE_PRECISION (TREE_TYPE (min)));
   normalize_kind ();
 
   if (flag_checking)
@@ -1014,7 +1014,7 @@ irange::irange_set_anti_range (tree min, tree max)
     }
 
   m_kind = VR_RANGE;
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::shwi (-1, TYPE_PRECISION (TREE_TYPE (min)));
   normalize_kind ();
 
   if (flag_checking)
@@ -1071,7 +1071,7 @@ irange::set (tree min, tree max, value_range_kind kind)
       m_base[0] = min;
       m_base[1] = max;
       m_num_ranges = 1;
-      m_nonzero_mask = NULL;
+      m_nonzero_mask = wi::shwi (-1, TYPE_PRECISION (TREE_TYPE (min)));
       return;
     }
@@ -1121,7 +1121,7 @@ irange::set (tree min, tree max, value_range_kind kind)
   m_base[0] = min;
   m_base[1] = max;
   m_num_ranges = 1;
-  m_nonzero_mask = NULL;
+  m_nonzero_mask = wi::shwi (-1, TYPE_PRECISION (TREE_TYPE (min)));
   normalize_kind ();
   if (flag_checking)
     verify_range ();
@@ -1136,14 +1136,11 @@ irange::verify_range ()
   if (m_kind == VR_UNDEFINED)
     {
       gcc_checking_assert (m_num_ranges == 0);
-      gcc_checking_assert (!m_nonzero_mask);
       return;
     }
-  if (m_nonzero_mask)
-    gcc_checking_assert (wi::to_wide (m_nonzero_mask) != -1);
   if (m_kind == VR_VARYING)
     {
-      gcc_checking_assert (!m_nonzero_mask);
+      gcc_checking_assert (m_nonzero_mask == -1);
       gcc_checking_assert (m_num_ranges == 1);
       gcc_checking_assert (varying_compatible_p ());
       return;
@@ -1238,7 +1235,7 @@ irange::legacy_equal_p (const irange &other) const
				       other.tree_lower_bound (0))
	  && vrp_operand_equal_p (tree_upper_bound (0),
				  other.tree_upper_bound (0))
-	  && vrp_operand_equal_p (m_nonzero_mask, other.m_nonzero_mask));
+	  && get_nonzero_bits () == other.get_nonzero_bits ());
 }
 
 bool
@@ -1273,7 +1270,7 @@ irange::operator== (const irange &other) const
	  || !operand_equal_p (ub, ub_other, 0))
	return false;
     }
-  return vrp_operand_equal_p (m_nonzero_mask, other.m_nonzero_mask);
+  return get_nonzero_bits () == other.get_nonzero_bits ();
 }
 
 /* Return TRUE if this is a symbolic range.  */
@@ -1416,10 +1413,11 @@ irange::contains_p (tree cst) const
 
   gcc_checking_assert (TREE_CODE (cst) == INTEGER_CST);
 
-  if (m_nonzero_mask)
+  // See if we can exclude CST based on the nonzero bits.
+  if (m_nonzero_mask != -1)
     {
       wide_int cstw = wi::to_wide (cst);
-      if (cstw != 0 && wi::bit_and (wi::to_wide (m_nonzero_mask), cstw) == 0)
+      if (cstw != 0 && wi::bit_and (m_nonzero_mask, cstw) == 0)
	return false;
     }
@@ -1878,9 +1876,6 @@ irange::legacy_intersect (irange *vr0, const irange *vr1)
   intersect_ranges (&vr0kind, &vr0min, &vr0max,
		    vr1->kind (), vr1->min (), vr1->max ());
 
-  // Pessimize nonzero masks, as we don't support them.
-  m_nonzero_mask = NULL;
-
   /* Make sure to canonicalize the result though as the inversion
      of a VR_RANGE can still be a VR_RANGE.  */
   if (vr0kind == VR_UNDEFINED)
@@ -2202,9 +2197,6 @@ irange::legacy_union (irange *vr0, const irange *vr1)
   union_ranges (&vr0kind, &vr0min, &vr0max,
		vr1->kind (), vr1->min (), vr1->max ());
 
-  // Pessimize nonzero masks, as we don't support them.
-  m_nonzero_mask = NULL;
-
   if (vr0kind == VR_UNDEFINED)
     vr0->set_undefined ();
   else if (vr0kind == VR_VARYING)
@@ -2310,8 +2302,6 @@ irange::legacy_verbose_intersect (const irange *other)
 
 // Perform an efficient union with R when both ranges have only a
 // single pair.  Excluded are VARYING and UNDEFINED ranges.
-//
-// NOTE: It is the caller's responsibility to set the nonzero mask.
 
 bool
 irange::irange_single_pair_union (const irange &r)
@@ -2325,7 +2315,7 @@ irange::irange_single_pair_union (const irange &r)
     {
       // If current upper bound is new upper bound, we're done.
       if (wi::le_p (wi::to_wide (r.m_base[1]), wi::to_wide (m_base[1]), sign))
-	return false;
+	return union_nonzero_bits (r);
       // Otherwise R has the new upper bound.
       // Check for overlap/touching ranges, or single target range.
       if (m_max_ranges == 1
@@ -2338,8 +2328,7 @@ irange::irange_single_pair_union (const irange &r)
	  m_base[3] = r.m_base[1];
	  m_num_ranges = 2;
	}
-      if (varying_compatible_p ())
-	m_kind = VR_VARYING;
+      union_nonzero_bits (r);
       return true;
     }
@@ -2353,11 +2342,7 @@ irange::irange_single_pair_union (const irange &r)
   // Check for overlapping ranges, or target limited to a single range.
   else if (m_max_ranges == 1
	   || wi::to_widest (r.m_base[1]) + 1 >= wi::to_widest (lb))
-    {
-      // This has the new upper bound, just check for varying.
-      if (varying_compatible_p ())
-	m_kind = VR_VARYING;
-    }
+    ;
   else
     {
       // Left with 2 pairs.
@@ -2366,8 +2351,7 @@ irange::irange_single_pair_union (const irange &r)
       m_base[3] = m_base[1];
       m_base[1] = r.m_base[1];
     }
-  if (varying_compatible_p ())
-    m_kind = VR_VARYING;
+  union_nonzero_bits (r);
   return true;
 }
@@ -2398,27 +2382,13 @@ irange::irange_union (const irange &r)
       return true;
     }
 
-  // Save the nonzero mask in case the set operations below clobber it.
-  bool ret_nz = union_nonzero_bits (r);
-  tree saved_nz = m_nonzero_mask;
-
-  // The union_nonzero_bits may have turned things into a varying.
-  if (varying_p ())
-    return true;
-
   // Special case one range union one range.
   if (m_num_ranges == 1 && r.m_num_ranges == 1)
-    {
-      bool ret = irange_single_pair_union (r);
-      set_nonzero_bits (saved_nz);
-      if (flag_checking)
-	verify_range ();
-      return ret || ret_nz;
-    }
+    return irange_single_pair_union (r);
 
   // If this ranges fully contains R, then we need do nothing.
   if (irange_contains_p (r))
-    return ret_nz;
+    return union_nonzero_bits (r);
 
   // Do not worry about merging and such by reserving twice as many
   // pairs as needed, and then simply sort the 2 ranges into this
@@ -2506,11 +2476,7 @@ irange::irange_union (const irange &r)
   m_num_ranges = i / 2;
 
   m_kind = VR_RANGE;
-  m_nonzero_mask = saved_nz;
-  normalize_kind ();
-
-  if (flag_checking)
-    verify_range ();
+  union_nonzero_bits (r);
   return true;
 }
@@ -2576,21 +2542,11 @@ irange::irange_intersect (const irange &r)
       set_undefined ();
       return true;
     }
-
-  // Save the nonzero mask in case the set operations below clobber it.
-  bool ret_nz = intersect_nonzero_bits (r);
-  tree saved_nz = m_nonzero_mask;
-
   if (r.varying_p ())
-    return ret_nz;
-
+    return false;
   if (varying_p ())
     {
       operator= (r);
-      if (saved_nz)
-	set_nonzero_bits (saved_nz);
-      if (flag_checking)
-	verify_range ();
       return true;
     }
 
       if (undefined_p ())
	return true;
 
-      set_nonzero_bits (saved_nz);
-      return res || saved_nz;
+      res |= intersect_nonzero_bits (r);
+      return res;
     }
 
   // If R fully contains this, then intersection will change nothing.
   if (r.irange_contains_p (*this))
-    return ret_nz;
+    return intersect_nonzero_bits (r);
 
   signop sign = TYPE_SIGN (TREE_TYPE(m_base[0]));
   unsigned bld_pair = 0;
@@ -2675,15 +2631,14 @@ irange::irange_intersect (const irange &r)
   // At the exit of this loop, it is one of 2 things:
   // ran out of r1, or r2, but either means we are done.
   m_num_ranges = bld_pair;
+  if (m_num_ranges == 0)
+    {
+      set_undefined ();
+      return true;
+    }
 
   m_kind = VR_RANGE;
-  if (!undefined_p ())
-    m_nonzero_mask = saved_nz;
-  normalize_kind ();
-
-  if (flag_checking)
-    verify_range ();
-
+  intersect_nonzero_bits (r);
   return true;
 }
@@ -2749,10 +2704,15 @@ irange::intersect (const wide_int& lb, const wide_int& ub)
     }
 
   m_num_ranges = bld_index;
+  if (m_num_ranges == 0)
+    {
+      set_undefined ();
+      return true;
+    }
 
   m_kind = VR_RANGE;
-  normalize_kind ();
-
+  // No need to call normalize_kind(), as the caller will do this
+  // while intersecting the nonzero mask.
   if (flag_checking)
     verify_range ();
   return true;
@@ -2801,7 +2761,6 @@ irange::invert ()
     }
 
   gcc_checking_assert (!undefined_p () && !varying_p ());
-  m_nonzero_mask = NULL;
 
   // We always need one more set of bounds to represent an inverse, so
   // if we're at the limit, we can't properly represent things.
@@ -2822,6 +2781,7 @@ irange::invert ()
   signop sign = TYPE_SIGN (ttype);
   wide_int type_min = wi::min_value (prec, sign);
   wide_int type_max = wi::max_value (prec, sign);
+  m_nonzero_mask = wi::shwi (-1, prec);
   if (m_num_ranges == m_max_ranges
       && lower_bound () != type_min
      && upper_bound () != type_max)
@@ -2896,127 +2856,140 @@ irange::invert ()
     verify_range ();
 }
 
-void
-irange::set_nonzero_bits (tree mask)
+// Return the nonzero bits inherent in the range.
+
+wide_int
+irange::get_nonzero_bits_from_range () const
 {
-  gcc_checking_assert (!undefined_p ());
+  // For legacy symbolics.
+  if (!constant_p ())
+    return wi::shwi (-1, TYPE_PRECISION (type ()));
 
-  if (!mask)
+  wide_int min = lower_bound ();
+  wide_int max = upper_bound ();
+  wide_int xorv = min ^ max;
+  if (xorv != 0)
     {
-      if (m_nonzero_mask)
-	{
-	  m_nonzero_mask = NULL;
-	  // Clearing the mask may have turned a range into VARYING.
-	  normalize_kind ();
-	}
-      return;
+      unsigned prec = TYPE_PRECISION (type ());
+      xorv = wi::mask (prec - wi::clz (xorv), false, prec);
     }
-  m_nonzero_mask = mask;
-  // Setting the mask may have turned a VARYING into a range.
-  if (m_kind == VR_VARYING)
-    m_kind = VR_RANGE;
-
-  if (flag_checking)
-    verify_range ();
+  return min | xorv;
 }
 
-void
-irange::set_nonzero_bits (const wide_int_ref &bits)
+// If the nonzero mask can be trivially converted to a range, do so
+// and return TRUE.
+
+bool
+irange::set_range_from_nonzero_bits ()
 {
   gcc_checking_assert (!undefined_p ());
+  unsigned popcount = wi::popcount (m_nonzero_mask);
 
-  if (bits == -1)
-    {
-      set_nonzero_bits (NULL);
-      return;
-    }
   // If we have only one bit set in the mask, we can figure out the
   // range immediately.
-  if (wi::popcount (bits) == 1)
+  if (popcount == 1)
     {
       // Make sure we don't pessimize the range.
-      tree tbits = wide_int_to_tree (type (), bits);
-      if (!contains_p (tbits))
-	{
-	  set_nonzero_bits (tbits);
-	  return;
-	}
+      if (!contains_p (wide_int_to_tree (type (), m_nonzero_mask)))
+	return false;
 
       bool has_zero = contains_p (build_zero_cst (type ()));
+      wide_int bits = m_nonzero_mask;
       set (type (), bits, bits);
+      m_nonzero_mask = bits;
       if (has_zero)
	{
	  int_range<2> zero;
	  zero.set_zero (type ());
	  union_ (zero);
	}
+      return true;
     }
-  set_nonzero_bits (wide_int_to_tree (type (), bits));
+  return false;
 }
 
-wide_int
-irange::get_nonzero_bits () const
+void
+irange::set_nonzero_bits (const wide_int_ref &bits)
 {
   gcc_checking_assert (!undefined_p ());
+  unsigned prec = TYPE_PRECISION (type ());
+  gcc_checking_assert (prec == bits.get_precision ());
 
-  // In case anyone in the legacy world queries us.
-  if (!constant_p ())
-    {
-      if (m_nonzero_mask)
-	return wi::to_wide (m_nonzero_mask);
-      return wi::shwi (-1, TYPE_PRECISION (type ()));
-    }
+  // Drop VARYINGs with a nonzero mask to a plain range.
+  if (m_kind == VR_VARYING && bits != -1)
+    m_kind = VR_RANGE;
 
-  // Calculate the nonzero bits inherent in the range.
-  wide_int min = lower_bound ();
-  wide_int max = upper_bound ();
-  wide_int xorv = min ^ max;
-  if (xorv != 0)
-    {
-      unsigned prec = TYPE_PRECISION (type ());
-      xorv = wi::mask (prec - wi::clz (xorv), false, prec);
-    }
-  wide_int mask = min | xorv;
+  m_nonzero_mask = wide_int::from (bits, prec, TYPE_SIGN (type ()));
+  if (set_range_from_nonzero_bits ())
+    return;
 
-  // Return the nonzero bits augmented by the range.
-  if (m_nonzero_mask)
-    return mask & wi::to_wide (m_nonzero_mask);
+  normalize_kind ();
+  if (flag_checking)
+    verify_range ();
+}
 
-  return mask;
+// Return the nonzero bitmask.  This will return the nonzero bits plus
+// the nonzero bits inherent in the range.
+
+wide_int
+irange::get_nonzero_bits () const
+{
+  gcc_checking_assert (!undefined_p ());
+  // The nonzero mask inherent in the range is calculated on-demand.
+  // For example, [0,255] does not have a 0xff nonzero mask by default
+  // (unless manually set).  This saves us considerable time, because
+  // setting it at creation incurs a large penalty for irange::set.
+  // At the time of writing there was a 5% slowdown in VRP if we kept
+  // the mask precisely up to date at all times.  Instead, we default
+  // to -1 and set it when explicitly requested.  However, this
+  // function will always return the correct mask.
+  return m_nonzero_mask & get_nonzero_bits_from_range ();
 }
 
-// Intersect the nonzero bits in R into THIS.
+// Intersect the nonzero bits in R into THIS and normalize the range.
+// Return TRUE if the intersection changed anything.
 
 bool
 irange::intersect_nonzero_bits (const irange &r)
 {
   gcc_checking_assert (!undefined_p () && !r.undefined_p ());
 
-  if (m_nonzero_mask || r.m_nonzero_mask)
+  bool changed = false;
+  if (m_nonzero_mask != r.m_nonzero_mask)
     {
-      wide_int nz = wi::bit_and (get_nonzero_bits (),
-				 r.get_nonzero_bits ());
-      set_nonzero_bits (nz);
-      return true;
+      m_nonzero_mask = get_nonzero_bits () & r.get_nonzero_bits ();
+      if (set_range_from_nonzero_bits ())
+	return true;
+      changed = true;
     }
-  return false;
+  normalize_kind ();
+  if (flag_checking)
+    verify_range ();
+  return changed;
 }
 
-// Union the nonzero bits in R into THIS.
+// Union the nonzero bits in R into THIS and normalize the range.
+// Return TRUE if the union changed anything.
 
 bool
 irange::union_nonzero_bits (const irange &r)
 {
   gcc_checking_assert (!undefined_p () && !r.undefined_p ());
 
-  if (m_nonzero_mask || r.m_nonzero_mask)
+  bool changed = false;
+  if (m_nonzero_mask != r.m_nonzero_mask)
     {
-      wide_int nz = wi::bit_or (get_nonzero_bits (),
-				r.get_nonzero_bits ());
-      set_nonzero_bits (nz);
-      return true;
+      m_nonzero_mask = get_nonzero_bits () | r.get_nonzero_bits ();
+      // No need to call set_range_from_nonzero_bits, because we'll
+      // never narrow the range.  Besides, it would cause endless
+      // recursion because of the union_ in
+      // set_range_from_nonzero_bits.
+      changed = true;
     }
-  return false;
+  normalize_kind ();
+  if (flag_checking)
+    verify_range ();
+  return changed;
 }
 
 void
@@ -3605,13 +3578,6 @@ range_tests_nonzero_bits ()
   r0.union_ (r1);
   ASSERT_TRUE (r0.get_nonzero_bits () == 0xff);
 
-  // Union where the mask of nonzero bits is implicit from the range.
-  r0.set_varying (integer_type_node);
-  r0.set_nonzero_bits (0xf00);
-  r1.set_zero (integer_type_node); // nonzero mask is implicitly 0
-  r0.union_ (r1);
-  ASSERT_TRUE (r0.get_nonzero_bits () == 0xf00);
-
   // Intersect of nonzero bits.
   r0.set (INT (0), INT (255));
   r0.set_nonzero_bits (0xfe);
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 556e31aece1..d1663620444 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -207,14 +207,15 @@ private:
   void irange_set_1bit_anti_range (tree, tree);
   bool varying_compatible_p () const;
-  void set_nonzero_bits (tree mask);
   bool intersect_nonzero_bits (const irange &r);
   bool union_nonzero_bits (const irange &r);
+  wide_int get_nonzero_bits_from_range () const;
+  bool set_range_from_nonzero_bits ();
 
   bool intersect (const wide_int& lb, const wide_int& ub);
   unsigned char m_num_ranges;
   unsigned char m_max_ranges;
-  tree m_nonzero_mask;
+  wide_int m_nonzero_mask;
   tree *m_base;
 };
@@ -682,10 +683,11 @@ irange::varying_compatible_p () const
   if (INTEGRAL_TYPE_P (t))
     return (wi::to_wide (l) == wi::min_value (prec, sign)
	    && wi::to_wide (u) == wi::max_value (prec, sign)
-	    && !m_nonzero_mask);
+	    && m_nonzero_mask == -1);
   if (POINTER_TYPE_P (t))
     return (wi::to_wide (l) == 0
-	    && wi::to_wide (u) == wi::max_value (prec, sign));
+	    && wi::to_wide (u) == wi::max_value (prec, sign)
+	    && m_nonzero_mask == -1);
 
   return true;
 }
@@ -752,8 +754,6 @@ gt_ggc_mx (irange *x)
       gt_ggc_mx (x->m_base[i * 2]);
       gt_ggc_mx (x->m_base[i * 2 + 1]);
     }
-  if (x->m_nonzero_mask)
-    gt_ggc_mx (x->m_nonzero_mask);
 }
 
 inline void
@@ -764,8 +764,6 @@ gt_pch_nx (irange *x)
       gt_pch_nx (x->m_base[i * 2]);
       gt_pch_nx (x->m_base[i * 2 + 1]);
     }
-  if (x->m_nonzero_mask)
-    gt_pch_nx (x->m_nonzero_mask);
 }
 
 inline void
@@ -776,8 +774,6 @@ gt_pch_nx (irange *x, gt_pointer_operator op, void *cookie)
       op (&x->m_base[i * 2], NULL, cookie);
       op (&x->m_base[i * 2 + 1], NULL, cookie);
     }
-  if (x->m_nonzero_mask)
-    op (&x->m_nonzero_mask, NULL, cookie);
 }
 
 template
@@ -872,7 +868,6 @@ irange::set_undefined ()
 {
   m_kind = VR_UNDEFINED;
   m_num_ranges = 0;
-  m_nonzero_mask = NULL;
 }
 
 inline void
@@ -880,7 +875,11 @@ irange::set_varying (tree type)
 {
   m_kind = VR_VARYING;
   m_num_ranges = 1;
-  m_nonzero_mask = NULL;
+
+  if (type == error_mark_node)
+    m_nonzero_mask = wi::shwi (-1, 1);
+  else
+    m_nonzero_mask = wi::shwi (-1, TYPE_PRECISION (type));
 
   if (INTEGRAL_TYPE_P (type))
     {
@@ -1002,8 +1001,6 @@ irange::normalize_kind ()
     m_kind = VR_VARYING;
   else if (m_kind == VR_ANTI_RANGE)
     set_undefined ();
-  else
-    gcc_unreachable ();
 }
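
A note for readers following along: the interesting piece above is
get_nonzero_bits_from_range (), which derives a nonzero-bits mask from
the bounds alone.  Bits above the highest bit in which the two bounds
differ are common to every value in the range, so only the lower
bound's copy of them can be set; every bit at or below the highest
differing bit may vary and must stay set in the mask.  Here is a
self-contained sketch of that computation on plain unsigned 64-bit
values (an illustration only, not GCC code: wide_int carries the
precision of the range's type, and wi::clz/wi::mask work at arbitrary
precision):

#include <cstdint>
#include <cassert>

// Model of irange::get_nonzero_bits_from_range () at a fixed 64-bit
// precision, for unsigned bounds lo <= hi.
static uint64_t
nonzero_bits_from_range (uint64_t lo, uint64_t hi)
{
  uint64_t xorv = lo ^ hi;	// bits that differ between the bounds
  if (xorv != 0)
    // Smear the highest differing bit downward: every bit at or
    // below it can vary within [lo, hi], so it stays in the mask.
    xorv = ~0ULL >> __builtin_clzll (xorv);
  return lo | xorv;
}

int
main ()
{
  assert (nonzero_bits_from_range (0, 255) == 0xff);  // low 8 bits vary
  assert (nonzero_bits_from_range (16, 31) == 0x1f);  // bit 4 always set, low 4 vary
  assert (nonzero_bits_from_range (5, 5) == 5);	      // singleton: its own bits
  return 0;
}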
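
The other design point worth illustrating is the lazy default visible
in set_varying and get_nonzero_bits: the stored mask is simply -1 (all
bits may be set) until someone narrows it, and every query ANDs the
stored mask with the mask implied by the bounds, so the result is
correct even when the stored mask is stale.  A hypothetical miniature
of that scheme, reusing nonzero_bits_from_range () from the previous
sketch (model_irange is an invented name, not GCC's API):

// A toy two-field range: [lo, hi] plus a lazily maintained mask.
struct model_irange
{
  uint64_t lo, hi;
  uint64_t mask = ~0ULL;	// default: any bit may be nonzero

  // Always correct even if 'mask' was never narrowed, because the
  // bits implied by the bounds are folded in on demand.
  uint64_t get_nonzero_bits () const
  { return mask & nonzero_bits_from_range (lo, hi); }

  // Intersecting two ranges ANDs their effective masks, mirroring
  // irange::intersect_nonzero_bits; union would OR them instead.
  void intersect_nonzero_bits (const model_irange &r)
  { mask = get_nonzero_bits () & r.get_nonzero_bits (); }
};

This is also why verify_range () can insist on m_nonzero_mask == -1
for VARYING ranges: correctness never depends on the stored mask
being eagerly narrowed, so VARYING keeps the trivial mask.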