use *_grow_cleared rather than *_grow on vec<bitmap_head>
Commit Message
On Thu, Sep 28, 2023 at 12:29:15PM +0200, Jakub Jelinek wrote:
> On Thu, Sep 28, 2023 at 09:29:31AM +0000, Richard Biener wrote:
> > > The following patch splits the bitmap_head class into a POD
> > > struct bitmap_head_pod and bitmap_head derived from it with non-trivial
> > > default constexpr constructor. Most code should keep using bitmap_head
> > > as before, bitmap_head_pod is there just for the cases where we want to
> > > embed the bitmap head into a vector which we want to e.g. {quick,safe}_grow
> > > and in a loop bitmap_initialize it afterwards (to avoid having to
> > > {quick,safe}_grow_cleared them just to overwrite with bitmap_initialize).
> > > The patch is larger than I hoped, because e.g. some code just used bitmap
> > > and bitmap_head * or const_bitmap and const bitmap_head * interchangeably.
> > >
> > > Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
> >
> > OK if there are no comments indicating otherwise.
>
> A counter-argument against this patch would be that it weakens the intent
> to catch uses of uninitialized bitmaps, in exchange for saving a few
> compile-time cycles.
> If one uses
> bitmap_head var;
> bitmap_initialize (&var, NULL);
> etc., we spend those extra cycles to initialize it, and nothing tells the
> compiler that bitmap_initialize overwrites the whole var without ever
> reading any of its elements, so DSE can't eliminate that.  And in the vec
> case which prompted this patch it was about
> vec<bitmap_head> a;
> a.create (n);
> a.safe_grow (n); // vs. a.safe_grow_cleared (n);
> for (int i = 0; i < n; ++i)
>   bitmap_initialize (&a[i], NULL);
> When using bitmap_head_pod, one needs to ensure the initialization without
> any help in catching such mistakes.
Here is the alternative patch, which pays the small extra price while not
undermining the checking.  I verified that in all of those places there is
a loop doing bitmap_initialize immediately afterwards, or at worst a few
lines later.

With the static_assert uncommented, the remaining failures are poly_int
related (supposedly gone with Richard S.'s poly_int patch) and the still
unresolved vect_unpromoted_value/ao_ref cases.
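For reference, the pattern all of these spots end up with is the same: grow
the vector cleared and then give every element a real bitmap right away.  A
minimal sketch (n is just a placeholder count; the obstack choice mirrors
what e.g. the compute_live_vars hunk does):

  vec<bitmap_head> heads;
  heads.create (n);
  /* Zero-fill the new elements so the checking never sees garbage heads.  */
  heads.quick_grow_cleared (n);
  /* ... and immediately turn each element into a real, empty bitmap.  */
  for (unsigned i = 0; i < n; ++i)
    bitmap_initialize (&heads[i], &bitmap_default_obstack);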
2023-09-28 Jakub Jelinek <jakub@redhat.com>
* tree-ssa-loop-im.cc (tree_ssa_lim_initialize): Use quick_grow_cleared
instead of quick_grow on vec<bitmap_head> members.
* cfganal.cc (control_dependences::control_dependences): Likewise.
* rtl-ssa/blocks.cc (function_info::build_info::build_info): Likewise.
(function_info::place_phis): Use safe_grow_cleared instead of safe_grow
on auto_vec<bitmap_head> vars.
* tree-ssa-live.cc (compute_live_vars): Use quick_grow_cleared instead
of quick_grow on vec<bitmap_head> var.
Jakub
Comments
On Thu, 28 Sep 2023, Jakub Jelinek wrote:
> On Thu, Sep 28, 2023 at 12:29:15PM +0200, Jakub Jelinek wrote:
> > On Thu, Sep 28, 2023 at 09:29:31AM +0000, Richard Biener wrote:
> > > > The following patch splits the bitmap_head class into a POD
> > > > struct bitmap_head_pod and bitmap_head derived from it with non-trivial
> > > > default constexpr constructor. Most code should keep using bitmap_head
> > > > as before, bitmap_head_pod is there just for the cases where we want to
> > > > embed the bitmap head into a vector which we want to e.g. {quick,safe}_grow
> > > > and in a loop bitmap_initialize it afterwards (to avoid having to
> > > > {quick,safe}_grow_cleared them just to overwrite with bitmap_initialize).
> > > > The patch is larger than I hoped, because e.g. some code just used bitmap
> > > > and bitmap_head * or const_bitmap and const bitmap_head * interchangeably.
> > > >
> > > > Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
> > >
> > > OK if there are no comments indicating otherwise.
> >
> > A counter-argument against this patch would be that it weakens the intent
> > to catch uses of uninitialized bitmaps, in exchange for saving a few
> > compile-time cycles.
> > If one uses
> > bitmap_head var;
> > bitmap_initialize (&var, NULL);
> > etc., we spend those extra cycles to initialize it, and nothing tells the
> > compiler that bitmap_initialize overwrites the whole var without ever
> > reading any of its elements, so DSE can't eliminate that.  And in the vec
> > case which prompted this patch it was about
> > vec<bitmap_head> a;
> > a.create (n);
> > a.safe_grow (n); // vs. a.safe_grow_cleared (n);
> > for (int i = 0; i < n; ++i)
> >   bitmap_initialize (&a[i], NULL);
> > When using bitmap_head_pod, one needs to ensure the initialization without
> > any help in catching such mistakes.
>
> Here is the alternative patch, which pays the small extra price while not
> undermining the checking.  I verified that in all of those places there is
> a loop doing bitmap_initialize immediately afterwards, or at worst a few
> lines later.
> 
> With the static_assert uncommented, the remaining failures are poly_int
> related (supposedly gone with Richard S.'s poly_int patch) and the still
> unresolved vect_unpromoted_value/ao_ref cases.
OK, I like this better - it's only when we'd use the vec<> sparsely that
it's worth delaying the initialization.
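To illustrate, the sparse case would be something like the sketch below,
where only a handful of elements ever get a real bitmap and clearing the
whole vector up front buys nothing (a hypothetical example, not code from
this patch; interesting_blocks is a made-up bitmap of the block indices we
actually care about):

  auto_vec<bitmap_head> per_bb;
  /* Grow without clearing; elements we never initialize are never read.  */
  per_bb.safe_grow (last_basic_block_for_fn (cfun));
  unsigned i;
  bitmap_iterator bi;
  /* Only the entries we are going to use get a properly set up bitmap.  */
  EXECUTE_IF_SET_IN_BITMAP (interesting_blocks, 0, i, bi)
    bitmap_initialize (&per_bb[i], &bitmap_default_obstack);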
Richard.
> 2023-09-28 Jakub Jelinek <jakub@redhat.com>
>
> * tree-ssa-loop-im.cc (tree_ssa_lim_initialize): Use quick_grow_cleared
> instead of quick_grow on vec<bitmap_head> members.
> * cfganal.cc (control_dependences::control_dependences): Likewise.
> * rtl-ssa/blocks.cc (function_info::build_info::build_info): Likewise.
> (function_info::place_phis): Use safe_grow_cleared instead of safe_grow
> on auto_vec<bitmap_head> vars.
> * tree-ssa-live.cc (compute_live_vars): Use quick_grow_cleared instead
> of quick_grow on vec<bitmap_head> var.
>
> --- gcc/tree-ssa-loop-im.cc.jj 2023-09-28 12:06:03.527974171 +0200
> +++ gcc/tree-ssa-loop-im.cc 2023-09-28 12:38:07.028966742 +0200
> @@ -3496,13 +3496,13 @@ tree_ssa_lim_initialize (bool store_moti
> (mem_ref_alloc (NULL, 0, UNANALYZABLE_MEM_ID));
>
> memory_accesses.refs_loaded_in_loop.create (number_of_loops (cfun));
> - memory_accesses.refs_loaded_in_loop.quick_grow (number_of_loops (cfun));
> + memory_accesses.refs_loaded_in_loop.quick_grow_cleared (number_of_loops (cfun));
> memory_accesses.refs_stored_in_loop.create (number_of_loops (cfun));
> - memory_accesses.refs_stored_in_loop.quick_grow (number_of_loops (cfun));
> + memory_accesses.refs_stored_in_loop.quick_grow_cleared (number_of_loops (cfun));
> if (store_motion)
> {
> memory_accesses.all_refs_stored_in_loop.create (number_of_loops (cfun));
> - memory_accesses.all_refs_stored_in_loop.quick_grow
> + memory_accesses.all_refs_stored_in_loop.quick_grow_cleared
> (number_of_loops (cfun));
> }
>
> --- gcc/cfganal.cc.jj 2023-09-28 11:31:45.013870771 +0200
> +++ gcc/cfganal.cc 2023-09-28 12:37:34.302425957 +0200
> @@ -468,7 +468,7 @@ control_dependences::control_dependences
>
> bitmap_obstack_initialize (&m_bitmaps);
> control_dependence_map.create (last_basic_block_for_fn (cfun));
> - control_dependence_map.quick_grow (last_basic_block_for_fn (cfun));
> + control_dependence_map.quick_grow_cleared (last_basic_block_for_fn (cfun));
> for (int i = 0; i < last_basic_block_for_fn (cfun); ++i)
> bitmap_initialize (&control_dependence_map[i], &m_bitmaps);
> for (int i = 0; i < num_edges; ++i)
> --- gcc/rtl-ssa/blocks.cc.jj 2023-09-28 11:31:45.413865158 +0200
> +++ gcc/rtl-ssa/blocks.cc 2023-09-28 12:41:28.063145949 +0200
> @@ -57,7 +57,7 @@ function_info::build_info::build_info (u
> // write to an entry before reading from it. But poison the contents
> // when checking, just to make sure we don't accidentally use an
> // uninitialized value.
> - bb_phis.quick_grow (num_bb_indices);
> + bb_phis.quick_grow_cleared (num_bb_indices);
> bb_mem_live_out.quick_grow (num_bb_indices);
> bb_to_rpo.quick_grow (num_bb_indices);
> if (flag_checking)
> @@ -614,7 +614,7 @@ function_info::place_phis (build_info &b
>
> // Calculate dominance frontiers.
> auto_vec<bitmap_head> frontiers;
> - frontiers.safe_grow (num_bb_indices);
> + frontiers.safe_grow_cleared (num_bb_indices);
> for (unsigned int i = 0; i < num_bb_indices; ++i)
> bitmap_initialize (&frontiers[i], &bitmap_default_obstack);
> compute_dominance_frontiers (frontiers.address ());
> @@ -626,7 +626,7 @@ function_info::place_phis (build_info &b
> // they are live on entry to the corresponding block, but do not need
> // phi nodes otherwise.
> auto_vec<bitmap_head> unfiltered;
> - unfiltered.safe_grow (num_bb_indices);
> + unfiltered.safe_grow_cleared (num_bb_indices);
> for (unsigned int i = 0; i < num_bb_indices; ++i)
> bitmap_initialize (&unfiltered[i], &bitmap_default_obstack);
>
> --- gcc/tree-ssa-live.cc.jj 2023-09-28 11:31:45.637862015 +0200
> +++ gcc/tree-ssa-live.cc 2023-09-28 12:38:25.590706289 +0200
> @@ -1361,7 +1361,7 @@ compute_live_vars (struct function *fn,
> We then do a mostly classical bitmap liveness algorithm. */
>
> active.create (last_basic_block_for_fn (fn));
> - active.quick_grow (last_basic_block_for_fn (fn));
> + active.quick_grow_cleared (last_basic_block_for_fn (fn));
> for (int i = 0; i < last_basic_block_for_fn (fn); i++)
> bitmap_initialize (&active[i], &bitmap_default_obstack);
>
>
>
> Jakub
>
>