[0/4] Add vector pair support to PowerPC attribute((vector_size(32)))

Message ID ZVreIppK5dO9j3oU@cowardly-lion.the-meissners.org
Series: Add vector pair support to PowerPC attribute((vector_size(32)))

Message

Michael Meissner Nov. 20, 2023, 4:18 a.m. UTC
  This is similar to the patches posted on November 10th:

    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636077.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636078.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636083.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636080.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636081.html

Those patches added a set of built-in functions that use the PowerPC
__vector_pair type to provide basic operations on vector pairs.

After I posted those patches, it was decided that it would be better to provide
a new type rather than a large set of new built-in functions.  Within the GCC
context, the best way to add this support is to extend the vector modes so that
V4DFmode, V8SFmode, V4DImode, V8SImode, V16HImode, and V32QImode are used.

These patches provide that new implementation.
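
For illustration, here is a minimal sketch (my own, not taken from the patches)
of what user code looks like with attribute((vector_size(32))); the type names
are arbitrary, and the 32-byte types map to the modes listed above:

    /* Minimal sketch: 32-byte vector types built with the GCC vector_size
       attribute; with these patches they map to V4DFmode / V8SFmode.  */
    typedef double vpair_d __attribute__ ((vector_size (32)));   /* 4 doubles */
    typedef float  vpair_f __attribute__ ((vector_size (32)));   /* 8 floats  */

    void
    fma_loop (vpair_d *out, const vpair_d *a, const vpair_d *b,
              const vpair_d *c, long n)
    {
      /* Each iteration handles 4 doubles; the goal is for the loads and
         stores to become lxvp/stxvp instructions.  */
      for (long i = 0; i < n; i++)
        out[i] = (a[i] * b[i]) + c[i];
    }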

While in theory you could add a whole new type that isn't a larger size vector,
my experience with IEEE 128-bit floating point is that GCC really doesn't like
two modes that are the same size but have different implementations (such as we
see with IEEE 128-bit floating point and IBM double-double 128-bit floating
point).  So I did not consider adding a new mode for use with vector pairs.

My original intention was to implement just V4DFmode and V8SFmode, since the
primary users asking for vector pair support are people implementing high-end
math libraries like Eigen and BLAS.

However, in implementing this code I discovered that we will need integer
vector pair support as well as floating point support.  The integer modes and
types are needed to properly implement byte shuffling and vector comparisons.

With the current patches, vector pair support is not enabled by default.  The
main reason is that I have not yet implemented the byte shuffling support that
various tests depend on.

I would also like to implement overloads for the vector built-in functions like
vec_add, vec_sum, etc., so that if you pass them a vector pair they handle it
just as they would a vector type.
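
Purely as a hypothetical illustration (these overloads are not part of this
patch set), the intent is roughly:

    /* Hypothetical sketch: vec_add does not accept 32-byte types today.  */
    typedef double vpair_d __attribute__ ((vector_size (32)));

    vpair_d
    add_pairs (vpair_d a, vpair_d b)
    {
      return a + b;              /* generic operator, enabled by these patches */
      /* return vec_add (a, b);    the desired future overload (not in this set) */
    }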

In addition, once the various bugs are addressed, I would then implement the
support so that automatic vectorization would consider using vector pairs
instead of vectors.

To measure performance, I wrote two micro-benchmarks (sketched after this list):

   1)	One benchmark is a saxpy-type loop: value[i] += (a[i] * b[i]).  That is
	a loop with 3 loads and a store per iteration.

   2)	Another benchmark produces a scalar sum of an entire vector.  This is a
	loop that has a single load and no store per iteration.
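
Roughly, the two loops look like this (a reconstruction for illustration,
assuming a 32-byte float type; not the exact benchmark source):

    typedef float vf32 __attribute__ ((vector_size (32)));

    /* 1) saxpy-style: three loads and one store per iteration.  */
    void
    saxpy32 (vf32 *value, const vf32 *a, const vf32 *b, long n)
    {
      for (long i = 0; i < n; i++)
        value[i] += a[i] * b[i];
    }

    /* 2) reduction: one load per iteration, no store.  */
    float
    sum32 (const vf32 *a, long n)
    {
      vf32 acc = { 0.0f };
      for (long i = 0; i < n; i++)
        acc += a[i];

      float total = 0.0f;
      for (int j = 0; j < 8; j++)   /* 8 floats per 32-byte vector */
        total += acc[j];
      return total;
    }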

For the saxpy type loop, I get the following general numbers for both float and
double:

   1)	The benchmarks that use attribute((vector_size(32))) are roughly 9-10%
	faster than using normal vector processing (both auto vectorization and
	explicit vector types).

   2)	The benchmarks that use attribute((vector_size(32))) are roughly 19-20%
	faster than if I write the loop using vector pair loads via the existing
	built-ins, manually split the values, and then do the arithmetic and
	single vector stores.

Unfortunately, for floating point, doing the sum of the whole vector in a
simple loop is slower with the new vector pair support (compared to using the
existing built-ins for disassembling vector pairs).  If I write more complex
loops that manually unroll the loop, then the floating point vector pair code
performs like the integer vector pair code.  So there is some amount of tuning
that will need to be done.
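
To make the tuning issue concrete, here is a sketch (my own reconstruction, not
the benchmark source) of a simple single-accumulator reduction versus a
manually unrolled version with independent accumulators:

    typedef double vd32 __attribute__ ((vector_size (32)));

    double
    sum_simple (const vd32 *a, long n)
    {
      vd32 acc = { 0.0 };
      for (long i = 0; i < n; i++)
        acc += a[i];                      /* each add depends on the previous */
      return acc[0] + acc[1] + acc[2] + acc[3];
    }

    double
    sum_unrolled (const vd32 *a, long n)  /* assumes n is a multiple of 2 */
    {
      vd32 acc0 = { 0.0 }, acc1 = { 0.0 };
      for (long i = 0; i < n; i += 2)
        {
          acc0 += a[i];                   /* two independent dependency chains */
          acc1 += a[i + 1];
        }
      vd32 acc = acc0 + acc1;
      return acc[0] + acc[1] + acc[2] + acc[3];
    }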

There are 4 patches in this set:

The first patch adds support for the types and their moves, and provides some
optimizations for extracting and setting an element.

The second patch implements the floating point arithmetic operations.

The third patch implements the integer operations.

The fourth patch provides new tests to test these features.
  

Comments

Michael Meissner Nov. 20, 2023, 4:20 a.m. UTC | #1
Add basic support for vector_size(32).

We have had several users ask us to implement ways of using the Power10 load
vector pair and store vector pair instructions to give their code a speed-up
due to reduced memory bandwidth requirements.

I had originally posted the following patches:

    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636077.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636078.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636083.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636080.html
    *	https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636081.html

Those patches added a set of built-in functions that use the PowerPC
__vector_pair type to provide basic operations on vector pairs.

After I posted those patches, it was decided that it would be better to provide
a new type rather than a large set of new built-in functions.  Within the GCC
context, the best way to add this support is to extend the vector modes so that
V4DFmode, V8SFmode, V4DImode, V8SImode, V16HImode, and V32QImode are used.

While in theory you could add a whole new type that isn't a larger size vector,
my experience with IEEE 128-bit floating point is that GCC really doesn't like
two modes that are the same size but have different implementations (such as we
see with IEEE 128-bit floating point and IBM double-double 128-bit floating
point).  So I did not consider adding a new mode for use with vector pairs.

My original intention was to implement just V4DFmode and V8SFmode, since the
primary users asking for vector pair support are people implementing high-end
math libraries like Eigen and BLAS.

However, in implementing this code I discovered that we will need integer
vector pair support as well as floating point support.  The integer modes and
types are needed to properly implement byte shuffling and vector comparisons.

With the current patches, vector pair support is not enabled by default.  The
main reason is that I have not yet implemented the byte shuffling support that
various tests depend on.

I would also like to implement overloads for the vector built-in functions like
vec_add, vec_sum, etc., so that if you pass them a vector pair they handle it
just as they would a vector type.

In addition, once the various bugs are addressed, I would then implement the
support so that automatic vectorization would consider using vector pairs
instead of vectors.

This is the first patch in the series.  It implements the basic modes and
allows for their initialization.  I've also added some optimizations for
extracting and setting elements within the vector pair.
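
As an illustration (my own code, not taken from the patch itself), the element
extract and set cases those optimizations target look like this in user code:

    typedef double vd32 __attribute__ ((vector_size (32)));

    double
    get_elem (vd32 v, int i)
    {
      return v[i];          /* variable or constant element extract */
    }

    vd32
    set_elem (vd32 v, double x)
    {
      v[2] = x;             /* constant element set */
      return v;
    }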

The second patch will implement the floating point vector pair support.

The third patch will implement the integer vector pair support.

The fourth patch will provide new tests to the test suite.

When I test a saxpy type loop (a[i] += (b[i] * c[i])), I generally see a 10%
improvement over either auto vectorization or just using the vector types.

I have tested these patches on a little endian power10 system.  With
-mvector-size-32 disabled by default, there are no regressions in the
test suite.
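
As a hedged usage sketch (my own, based on the __VECTOR_SIZE_32__ macro and
-mvector-size-32 option described in the ChangeLog below; the 16-byte fallback
is my own illustration), code could select the wider type conditionally:

    #ifdef __VECTOR_SIZE_32__
    typedef double vec_d __attribute__ ((vector_size (32)));   /* 4 doubles */
    #define VEC_D_LEN 4
    #else
    typedef double vec_d __attribute__ ((vector_size (16)));   /* 2 doubles */
    #define VEC_D_LEN 2
    #endif

    void
    scale (vec_d *a, double s, long n)
    {
      vec_d sv;
      for (int j = 0; j < VEC_D_LEN; j++)   /* splat the scalar */
        sv[j] = s;
      for (long i = 0; i < n; i++)
        a[i] *= sv;
    }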

I have also built and run the tests on both little endian power9 and big
endian power9 systems, and there are no regressions.  Can I check these
patches into the master branch?

2023-11-19  Michael Meissner  <meissner@linux.ibm.com>

gcc/

	* config/rs6000/constraints.md (eV): New constraint.
	* config/rs6000/predicates.md (const_0_to_31_operand): New predicate.
	(easy_vector_constant): Add support for vector pair constants.
	(easy_vector_pair_constant): New predicate.
	(mma_assemble_input_operand): Allow 16-byte vector modes other than
	V16QImode.
	* config/rs6000/rs6000-c.cc (rs6000_cpu_cpp_builtins): Define
	__VECTOR_SIZE_32__ if -mvector-size-32.
	* config/rs6000/rs6000-protos.h (vector_pair_to_vector_mode): New
	declaration.
	(split_vector_pair_constant): Likewise.
	(rs6000_expand_vector_pair_init): Likewise.
	* config/rs6000/rs6000.cc (rs6000_hard_regno_mode_ok_uncached): Use
	VECTOR_PAIR_MODE instead of comparing mode to OOmode.
	(rs6000_modes_tieable_p): Allow various vector pair modes to pair with
	each other.  Allow 16-byte vectors to pair with vector pair modes.
	(rs6000_setup_reg_addr_masks): Use VECTOR_PAIR_MODE instead of comparing
	mode to OOmode.
	(rs6000_init_hard_regno_mode_ok): Set up vector pair mode basic type
	information and reload handlers.
	(rs6000_option_override_internal): Warn if -mvector-size-32 is used
	without -mcpu=power10 or -mmma.
	(vector_pair_to_vector_mode): New function.
	(split_vector_pair_constant): Likewise.
	(rs6000_expand_vector_pair_init): Likewise.
	(reg_offset_addressing_ok_p): Add support for vector pair modes.
	(rs6000_emit_move): Likewise.
	(rs6000_preferred_reload_class): Likewise.
	(altivec_expand_vec_perm_le): Likewise.
	(rs6000_opt_vars): Add -mvector-size-32 switch.
	(rs6000_split_multireg_move): Add support for vector pair modes.
	* config/rs6000/rs6000.h (VECTOR_PAIR_MODE): New macro.
	* config/rs6000/rs6000.md (wd mode attribute): Add vector pair modes.
	(RELOAD mode iterator): Likewise.
	(toplevel): Include vector-pair.md.
	* config/rs6000/rs6000.opt (-mvector-size-32): New option.
	* config/rs6000/vector-pair.md: New file.
	* doc/md.texi (PowerPC constraints): Document the eV constraint.
  
Richard Biener Nov. 20, 2023, 7:24 a.m. UTC | #2
On Mon, Nov 20, 2023 at 5:19 AM Michael Meissner <meissner@linux.ibm.com> wrote:
>
> This is similar to the patches on November 10th.
>
>     *   https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636077.html
>     *   https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636078.html
>     *   https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636083.html
>     *   https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636080.html
>     *   https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636081.html
>
> to add a set of built-in functions that use the PowerPC __vector_pair type and
> that provide a set of functions to do basic operations on vector pairs.
>
> After I posted these patches, it was decided that it would be better to have a
> new type that is used rather than a bunch of new built-in functions.  Within
> the GCC context, the best way to add this support is to extend the vector modes
> so that V4DFmode, V8SFmode, V4DImode, V8SImode, V16HImode, and V32QImode are
> used.
>
> These patches are to provide this new implementation.
>
> While in theory you could add a whole new type that isn't a larger size vector,
> my experience with IEEE 128-bit floating point is that GCC really doesn't like
> 2 modes that are the same size but have different implementations (such as we
> see with IEEE 128-bit floating point and IBM double-double 128-bit floating
> point).  So I did not consider adding a new mode for using with vector pairs.
>
> My original intention was to just implement V4DFmode and V8SFmode, since the
> primary users asking for vector pair support are people implementing the high
> end math libraries like Eigen and Blas.
>
> However in implementing this code, I discovered that we will need integer
> vector pair support as well as floating point vector pair.  The integer modes
> and types are needed to properly implement byte shuffling and vector
> comparisons which need integer vector pairs.
>
> With the current patches, vector pair support is not enabled by default.  The
> main reason is I have not implemented the support for byte shuffling which
> various tests depend on.
>
> I would also like to implement overloads for the vector built-in functions like
> vec_add, vec_sum, etc. that if you give it a vector pair, it would handle it
> just like if you give a vector type.
>
> In addition, once the various bugs are addressed, I would then implement the
> support so that automatic vectorization would consider using vector pairs
> instead of vectors.
>
> In terms of benchmarks, I wrote two benchmarks:
>
>    1)   One benchmark is a saxpy type loop: value[i] += (a[i] * b[i]).  That is
>         a loop with 3 loads and a store per loop.
>
>    2)   Another benchmark produces a scalar sum of an entire vector.  This is a
>         loop that just has a single load and no store.
>
> For the saxpy type loop, I get the following general numbers for both float and
> double:
>
>    1)   The benchmarks that use attribute((vector_size(32))) are roughly 9-10%
>         faster than using normal vector processing (both auto vectorize and
>         using vector types).
>
>    2)   The benchmarks that use attribute((vector_size(32))) are roughly 19-20%
>         faster than if I write the loop using the vector pair loads using the
>         existing built-ins, and then manually split the values and do the
>         arithmetic and single vector stores.
>
> Unfortunately, for floating point, doing the sum of the whole vector is slower
> using the new vector pair built-in functions using a simple loop (compared to
> using the existing built-ins for disassembling vector pairs).  If I write more
> complex loops that manually unroll the loop, then the floating point vector
> pair built-in functions become like the integer vector pair integer built-in
> functions.  So there is some amount of tuning that will need to be done.
>
> There are 4 patches in this set:
>
> The first patch adds support for the types, and does moves, and provides some
> optimizations for extracting an element and setting an element.
>
> The second patch implements the floating point arithmetic operations.
>
> The third patch implements the integer operations.
>
> The fourth patch provides new tests to test these features.

I wouldn't expose the "fake" larger modes to the vectorizer but rather
adjust m_suggested_unroll_factor (which you already do to some extent).

> --
> Michael Meissner, IBM
> PO Box 98, Ayer, Massachusetts, USA, 01432
> email: meissner@linux.ibm.com
  
Michael Meissner Nov. 20, 2023, 8:56 a.m. UTC | #3
On Mon, Nov 20, 2023 at 08:24:35AM +0100, Richard Biener wrote:
> I wouldn't expose the "fake" larger modes to the vectorizer but rather
> adjust m_suggested_unroll_factor (which you already do to some extent).

Thanks.  I figure I first need to fix the shuffle bytes issue and get a clean
test run (with the flag enabled by default) before delving into the
vectorization issues.

But testing has shown that, at least in the loop I was looking at, using vector
pair instructions (either through the built-ins I had previously posted or with
these patches) is still faster, even if I turn off unrolling completely for the
vector pair case, than unrolling the loop 4 times with vector types (or auto
vectorization).  Of course, the margin is much smaller in this case.

vector double:           (a * b) + c, unroll 4         loop time: 0.55483
vector double:           (a * b) + c, unroll default   loop time: 0.55638
vector double:           (a * b) + c, unroll 0         loop time: 0.55686
vector double:           (a * b) + c, unroll 2         loop time: 0.55772

vector32, w/vector pair: (a * b) + c, unroll 4         loop time: 0.48257
vector32, w/vector pair: (a * b) + c, unroll 2         loop time: 0.50782
vector32, w/vector pair: (a * b) + c, unroll default   loop time: 0.50864
vector32, w/vector pair: (a * b) + c, unroll 0         loop time: 0.52224

Of course being micro-benchmarks, it doesn't mean that this translates to the
behavior on actual code.
  
Richard Biener Nov. 20, 2023, 9:08 a.m. UTC | #4
On Mon, Nov 20, 2023 at 9:56 AM Michael Meissner <meissner@linux.ibm.com> wrote:
>
> On Mon, Nov 20, 2023 at 08:24:35AM +0100, Richard Biener wrote:
> > I wouldn't expose the "fake" larger modes to the vectorizer but rather
> > adjust m_suggested_unroll_factor (which you already do to some extent).
>
> Thanks.  I figure I first need to fix the shuffle bytes issue first and get a
> clean test run (with the flag enabled by default), before delving into the
> vectorization issues.
>
> But testing has shown that at least in the loop I was looking at, that using
> vector pair instructions (either through the built-ins I had previously posted
> or with these patches), that even if I turn off unrolling completely for the
> vector pair case, it still is faster than unrolling the loop 4 times for using
> vector types (or auto vectorization).  Note, of course the margin is much
> smaller in this case.

But unrolling 2 times or doubling the vector mode size results in exactly
the same thing - using a larger vectorization factor.

>
> vector double:           (a * b) + c, unroll 4         loop time: 0.55483
> vector double:           (a * b) + c, unroll default   loop time: 0.55638
> vector double:           (a * b) + c, unroll 0         loop time: 0.55686
> vector double:           (a * b) + c, unroll 2         loop time: 0.55772
>
> vector32, w/vector pair: (a * b) + c, unroll 4         loop time: 0.48257
> vector32, w/vector pair: (a * b) + c, unroll 2         loop time: 0.50782
> vector32, w/vector pair: (a * b) + c, unroll default   loop time: 0.50864
> vector32, w/vector pair: (a * b) + c, unroll 0         loop time: 0.52224
>
> Of course being micro-benchmarks, it doesn't mean that this translates to the
> behavior on actual code.

I guess the difference is from how RTL handles the larger modes vs.
more instructions with the smaller mode (if you don't immediately
expose the smaller modes during RTL expansion).

I'd compare assembly of vector double with unroll 2 and vector32 with unroll 0.

Richard.

>
>
> --
> Michael Meissner, IBM
> PO Box 98, Ayer, Massachusetts, USA, 01432
> email: meissner@linux.ibm.com
  
Kewen.Lin Nov. 24, 2023, 9:41 a.m. UTC | #5
on 2023/11/20 16:56, Michael Meissner wrote:
> On Mon, Nov 20, 2023 at 08:24:35AM +0100, Richard Biener wrote:
>> I wouldn't expose the "fake" larger modes to the vectorizer but rather
>> adjust m_suggested_unroll_factor (which you already do to some extent).
> 
> Thanks.  I figure I first need to fix the shuffle bytes issue first and get a
> clean test run (with the flag enabled by default), before delving into the
> vectorization issues.
> 
> But testing has shown that at least in the loop I was looking at, that using
> vector pair instructions (either through the built-ins I had previously posted
> or with these patches), that even if I turn off unrolling completely for the
> vector pair case, it still is faster than unrolling the loop 4 times for using
> vector types (or auto vectorization).  Note, of course the margin is much
> smaller in this case.
> 
> vector double:           (a * b) + c, unroll 4         loop time: 0.55483
> vector double:           (a * b) + c, unroll default   loop time: 0.55638
> vector double:           (a * b) + c, unroll 0         loop time: 0.55686
> vector double:           (a * b) + c, unroll 2         loop time: 0.55772
> 
> vector32, w/vector pair: (a * b) + c, unroll 4         loop time: 0.48257
> vector32, w/vector pair: (a * b) + c, unroll 2         loop time: 0.50782
> vector32, w/vector pair: (a * b) + c, unroll default   loop time: 0.50864
> vector32, w/vector pair: (a * b) + c, unroll 0         loop time: 0.52224
> 
> Of course being micro-benchmarks, it doesn't mean that this translates to the
> behavior on actual code.
> 
> 

I noticed that Ajit posted a patch adding a new pass that replaces vector loads
(lxv) from contiguous addresses with lxvp:

https://inbox.sourceware.org/gcc-patches/ef0c54a5-c35c-3519-f062-9ac78ee66b81@linux.ibm.com/

How about making this kind of rs6000-specific pass pair both vector loads and
stores?  Users can request more unrolling with parameters, and the memory
accesses that come from unrolling should be regular, so I'd expect the pass
could easily detect and pair the candidates.
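
For concreteness, here is my own minimal sketch (not from Ajit's patch) of the
kind of code such a pass could target: two adjacent 16-byte vector accesses at
contiguous addresses that could be rewritten as lxvp/stxvp.

    typedef double v2df __attribute__ ((vector_size (16)));

    void
    copy_two (v2df *dst, const v2df *src)
    {
      v2df lo = src[0];   /* lxv  from base       */
      v2df hi = src[1];   /* lxv  from base + 16  */
      dst[0] = lo;        /* stxv to base         */
      dst[1] = hi;        /* stxv to base + 16    */
    }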

BR,
Kewen
  
Michael Meissner Nov. 28, 2023, 4:26 a.m. UTC | #6
On Fri, Nov 24, 2023 at 05:41:02PM +0800, Kewen.Lin wrote:
> on 2023/11/20 16:56, Michael Meissner wrote:
> > On Mon, Nov 20, 2023 at 08:24:35AM +0100, Richard Biener wrote:
> >> I wouldn't expose the "fake" larger modes to the vectorizer but rather
> >> adjust m_suggested_unroll_factor (which you already do to some extent).
> > 
> > Thanks.  I figure I first need to fix the shuffle bytes issue first and get a
> > clean test run (with the flag enabled by default), before delving into the
> > vectorization issues.
> > 
> > But testing has shown that at least in the loop I was looking at, that using
> > vector pair instructions (either through the built-ins I had previously posted
> > or with these patches), that even if I turn off unrolling completely for the
> > vector pair case, it still is faster than unrolling the loop 4 times for using
> > vector types (or auto vectorization).  Note, of course the margin is much
> > smaller in this case.
> > 
> > vector double:           (a * b) + c, unroll 4         loop time: 0.55483
> > vector double:           (a * b) + c, unroll default   loop time: 0.55638
> > vector double:           (a * b) + c, unroll 0         loop time: 0.55686
> > vector double:           (a * b) + c, unroll 2         loop time: 0.55772
> > 
> > vector32, w/vector pair: (a * b) + c, unroll 4         loop time: 0.48257
> > vector32, w/vector pair: (a * b) + c, unroll 2         loop time: 0.50782
> > vector32, w/vector pair: (a * b) + c, unroll default   loop time: 0.50864
> > vector32, w/vector pair: (a * b) + c, unroll 0         loop time: 0.52224
> > 
> > Of course being micro-benchmarks, it doesn't mean that this translates to the
> > behavior on actual code.
> > 
> > 
> 
> I noticed that Ajit posted a patch for adding one new pass to replace contiguous
> addresses vector load lxv with lxvp:
> 
> https://inbox.sourceware.org/gcc-patches/ef0c54a5-c35c-3519-f062-9ac78ee66b81@linux.ibm.com/
> 
> How about making this kind of rs6000 specific pass to pair both vector load and
> store?  Users can make more unrolling with parameters and those memory accesses
> from unrolling should be neat, I'd expect the pass can easily detect and pair the
> candidates.

Yes, I tend to think a combination of things will be needed.  In my tests with
a saxpy-type loop, I could not get the current built-ins that load/store vector
pairs to be fast enough.  The code Peter posted helped, but ultimately it was
still slower than adding vector_size(32).  I will try out the patch and
compare it to my patches.