From patchwork Sun Oct 9 21:51:34 2022
From: Julian Brown
To: gcc-patches@gcc.gnu.org
Cc: Jakub Jelinek, Tobias Burnus, fortran@gcc.gnu.org
Subject: [PATCH v4 1/4] OpenMP: Pointers and member mappings
Date: Sun, 9 Oct 2022 14:51:34 -0700

While implementing the "omp declare mapper" functionality, I noticed some cases
where the handling of derived-type members that are pointers doesn't seem to be
quite right.  At present, a type such as this:

  type T
    integer, pointer, dimension(:) :: arrptr
  end type T

  type(T) :: tvar
  [...]

  !$omp target map(tofrom: tvar%arrptr)

will be mapped using three mapping nodes:

  GOMP_MAP_TO              tvar%arrptr        (the descriptor)
  GOMP_MAP_TOFROM          *tvar%arrptr%data  (the actual array data)
  GOMP_MAP_ALWAYS_POINTER  tvar%arrptr%data   (a pointer to the array data)

This follows OpenMP 5.0, 2.19.7.1 (or OpenMP 5.2, 5.8.3), "map Clause":

  "If a list item in a map clause is an associated pointer and the pointer is
   not the base pointer of another list item in a map clause on the same
   construct, then it is treated as if its pointer target is implicitly mapped
   in the same clause.  For the purposes of the map clause, the mapped pointer
   target is treated as if its base pointer is the associated pointer."

However, we can also write this:

  map(to: tvar%arrptr) map(tofrom: tvar%arrptr(3:8))

and then instead we should follow (OpenMP 5.2, 5.8.3, "map Clause"):

  "For map clauses on map-entering constructs, if any list item has a base
   pointer for which a corresponding pointer exists in the data environment
   upon entry to the region and either a new list item or the corresponding
   pointer is created in the device data environment on entry to the region,
   then:

   1. [Fortran] The corresponding pointer variable is associated with a pointer
      target that has the same rank and bounds as the pointer target of the
      original pointer, such that the corresponding list item can be accessed
      through the pointer in a target region.

   2. The corresponding pointer variable becomes an attached pointer for the
      corresponding list item."

But that's not implemented quite right at the moment (and completely breaks
once we introduce declare mappers), because we still map the "to: tvar%arrptr"
as the descriptor and the entire array, and then we map the "tvar%arrptr(3:8)"
part using the descriptor (again!) and the array slice.  The solution is to
detect when we're mapping a smaller part of the array (or a subcomponent) on
the same directive, and only map the descriptor in that case.  So we get
mappings like this instead:

  map(to: tvar%arrptr)           --> GOMP_MAP_ALLOC           tvar%arrptr (the descriptor)
  map(tofrom: tvar%arrptr(3:8)) --> GOMP_MAP_TOFROM          tvar%arrptr%data(3) (size 8-3+1, etc.)
                                     GOMP_MAP_ALWAYS_POINTER  tvar%arrptr%data (bias 3, etc.)

This version of the patch alters the way that expressions are compared, so that
e.g. "tvar(i)%arrptr" is not considered to overlap/reference the same array as
"tvar(j)%arrptr(3:8)".  This is done using a new function,
gfc_omp_expr_prefix_same, which only considers element accesses equal if they
can be proven to be the same, i.e. if they use the same index variable.
Together with the "unordered struct" patch at the end of this series, that
allows certain types of mapping to work that currently do not.
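To make the above concrete, here is a minimal, self-contained illustration of
the clause combination being discussed.  This snippet is not part of the patch;
the program name, array size and the final check are invented for the example:

  program map_descriptor_and_slice
    implicit none
    type T
       integer, pointer, dimension(:) :: arrptr
    end type T
    type(T) :: tvar
    integer, dimension(10), target :: buf

    buf = 0
    tvar%arrptr => buf

    ! With this patch, the first clause maps only the descriptor, and the
    ! second maps (and attaches) just the tvar%arrptr(3:8) slice of the data.
    !$omp target map(to: tvar%arrptr) map(tofrom: tvar%arrptr(3:8))
    tvar%arrptr(3) = tvar%arrptr(3) + 1
    !$omp end target

    if (tvar%arrptr(3) /= 1) stop 1
  end program map_descriptor_and_slice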
2022-10-09  Julian Brown

gcc/fortran/
	* dependency.cc (gfc_omp_expr_prefix_same): New function.
	* dependency.h (gfc_omp_expr_prefix_same): Add prototype.
	* gfortran.h (gfc_omp_namelist): Add "duplicate_of" field to "u2"
	union.
	* trans-openmp.cc (dependency.h): Include.
	(gfc_trans_omp_array_section): Do not map descriptors here for
	OpenMP.
	(get_symbol_rooted_namelist): New function.
	(gfc_trans_omp_clauses): Check subcomponent and subarray/element
	accesses elsewhere in the clause list for pointers to derived types
	or array descriptors, and map just the pointer/descriptor if we
	have any.

libgomp/
	* testsuite/libgomp.fortran/map-subarray.f90: New test.
	* testsuite/libgomp.fortran/map-subarray-2.f90: New test.
	* testsuite/libgomp.fortran/map-subarray-4.f90: New test.
	* testsuite/libgomp.fortran/map-subarray-6.f90: New test.
* testsuite/libgomp.fortran/map-subcomponents.f90: New test. * testsuite/libgomp.fortran/struct-elem-map-1.f90: Adjust for descriptor-mapping changes. --- gcc/fortran/dependency.cc | 128 +++++++++++ gcc/fortran/dependency.h | 1 + gcc/fortran/gfortran.h | 1 + gcc/fortran/trans-openmp.cc | 204 ++++++++++++++++-- .../libgomp.fortran/map-subarray-2.f90 | 108 ++++++++++ .../libgomp.fortran/map-subarray-4.f90 | 38 ++++ .../libgomp.fortran/map-subarray-6.f90 | 26 +++ .../libgomp.fortran/map-subarray.f90 | 33 +++ .../libgomp.fortran/map-subcomponents.f90 | 35 +++ .../libgomp.fortran/struct-elem-map-1.f90 | 10 +- 10 files changed, 567 insertions(+), 17 deletions(-) create mode 100644 libgomp/testsuite/libgomp.fortran/map-subarray-2.f90 create mode 100644 libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 create mode 100644 libgomp/testsuite/libgomp.fortran/map-subarray-6.f90 create mode 100644 libgomp/testsuite/libgomp.fortran/map-subarray.f90 create mode 100644 libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 diff --git a/gcc/fortran/dependency.cc b/gcc/fortran/dependency.cc index ab3bd36f74e..1c98f933ff1 100644 --- a/gcc/fortran/dependency.cc +++ b/gcc/fortran/dependency.cc @@ -2334,3 +2334,131 @@ gfc_dep_resolver (gfc_ref *lref, gfc_ref *rref, gfc_reverse *reverse, return fin_dep == GFC_DEP_OVERLAP; } + +/* Check if two refs are equal, for the purposes of checking if one might be + the base of the other for OpenMP (target directives). Derived from + gfc_dep_resolver. This function is stricter, e.g. indices arr(i) and + arr(j) compare as non-equal. */ + +bool +gfc_omp_expr_prefix_same (gfc_expr *lexpr, gfc_expr *rexpr) +{ + gfc_ref *lref, *rref; + + if (lexpr->symtree && rexpr->symtree) + { + /* See are_identical_variables above. */ + if (lexpr->symtree->n.sym->attr.dummy + && rexpr->symtree->n.sym->attr.dummy) + { + /* Dummy arguments: Only check for equal names. 
*/ + if (lexpr->symtree->n.sym->name != rexpr->symtree->n.sym->name) + return false; + } + else + { + if (lexpr->symtree->n.sym != rexpr->symtree->n.sym) + return false; + } + } + else if (lexpr->base_expr && rexpr->base_expr) + { + if (gfc_dep_compare_expr (lexpr->base_expr, rexpr->base_expr) != 0) + return false; + } + else + return false; + + lref = lexpr->ref; + rref = rexpr->ref; + + while (lref && rref) + { + gfc_dependency fin_dep = GFC_DEP_EQUAL; + + if (lref && lref->type == REF_COMPONENT && lref->u.c.component + && strcmp (lref->u.c.component->name, "_data") == 0) + lref = lref->next; + + if (rref && rref->type == REF_COMPONENT && rref->u.c.component + && strcmp (rref->u.c.component->name, "_data") == 0) + rref = rref->next; + + gcc_assert (lref->type == rref->type); + + switch (lref->type) + { + case REF_COMPONENT: + if (lref->u.c.component != rref->u.c.component) + return false; + break; + + case REF_ARRAY: + if (ref_same_as_full_array (lref, rref)) + break; + if (ref_same_as_full_array (rref, lref)) + break; + + if (lref->u.ar.dimen != rref->u.ar.dimen) + { + if (lref->u.ar.type == AR_FULL + && gfc_full_array_ref_p (rref, NULL)) + break; + if (rref->u.ar.type == AR_FULL + && gfc_full_array_ref_p (lref, NULL)) + break; + return false; + } + + for (int n = 0; n < lref->u.ar.dimen; n++) + { + if (lref->u.ar.dimen_type[n] == DIMEN_VECTOR + && rref->u.ar.dimen_type[n] == DIMEN_VECTOR + && gfc_dep_compare_expr (lref->u.ar.start[n], + rref->u.ar.start[n]) == 0) + continue; + if (lref->u.ar.dimen_type[n] == DIMEN_RANGE + && rref->u.ar.dimen_type[n] == DIMEN_RANGE) + fin_dep = check_section_vs_section (&lref->u.ar, &rref->u.ar, + n); + else if (lref->u.ar.dimen_type[n] == DIMEN_ELEMENT + && rref->u.ar.dimen_type[n] == DIMEN_RANGE) + fin_dep = gfc_check_element_vs_section (lref, rref, n); + else if (rref->u.ar.dimen_type[n] == DIMEN_ELEMENT + && lref->u.ar.dimen_type[n] == DIMEN_RANGE) + fin_dep = gfc_check_element_vs_section (rref, lref, n); + else if (lref->u.ar.dimen_type[n] == DIMEN_ELEMENT + && rref->u.ar.dimen_type[n] == DIMEN_ELEMENT) + { + gfc_array_ref l_ar = lref->u.ar; + gfc_array_ref r_ar = rref->u.ar; + gfc_expr *l_start = l_ar.start[n]; + gfc_expr *r_start = r_ar.start[n]; + int i = gfc_dep_compare_expr (r_start, l_start); + if (i == 0) + fin_dep = GFC_DEP_EQUAL; + else + return false; + } + else + return false; + if (n + 1 < lref->u.ar.dimen + && fin_dep != GFC_DEP_EQUAL) + return false; + } + + if (fin_dep != GFC_DEP_EQUAL + && fin_dep != GFC_DEP_OVERLAP) + return false; + + break; + + default: + gcc_unreachable (); + } + lref = lref->next; + rref = rref->next; + } + + return true; +} diff --git a/gcc/fortran/dependency.h b/gcc/fortran/dependency.h index 339be76a8d0..ac94010f84c 100644 --- a/gcc/fortran/dependency.h +++ b/gcc/fortran/dependency.h @@ -40,5 +40,6 @@ int gfc_expr_is_one (gfc_expr *, int); int gfc_dep_resolver (gfc_ref *, gfc_ref *, gfc_reverse *, bool identical = false); int gfc_are_equivalenced_arrays (gfc_expr *, gfc_expr *); +bool gfc_omp_expr_prefix_same (gfc_expr *, gfc_expr *); gfc_expr * gfc_discard_nops (gfc_expr *); diff --git a/gcc/fortran/gfortran.h b/gcc/fortran/gfortran.h index 4babd77924b..fe8c4e131f3 100644 --- a/gcc/fortran/gfortran.h +++ b/gcc/fortran/gfortran.h @@ -1358,6 +1358,7 @@ typedef struct gfc_omp_namelist { struct gfc_omp_namelist_udr *udr; gfc_namespace *ns; + struct gfc_omp_namelist *duplicate_of; } u2; struct gfc_omp_namelist *next; locus where; diff --git a/gcc/fortran/trans-openmp.cc b/gcc/fortran/trans-openmp.cc index 
8e9d5346b05..cf12e032fbd 100644 --- a/gcc/fortran/trans-openmp.cc +++ b/gcc/fortran/trans-openmp.cc @@ -40,6 +40,7 @@ along with GCC; see the file COPYING3. If not see #include "omp-general.h" #include "omp-low.h" #include "memmodel.h" /* For MEMMODEL_ enums. */ +#include "dependency.h" #undef GCC_DIAG_STYLE #define GCC_DIAG_STYLE __gcc_tdiag__ @@ -2470,22 +2471,20 @@ gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n, } if (GFC_DESCRIPTOR_TYPE_P (TREE_TYPE (decl))) { - tree desc_node; tree type = TREE_TYPE (decl); ptr2 = gfc_conv_descriptor_data_get (decl); - desc_node = build_omp_clause (input_location, OMP_CLAUSE_MAP); - OMP_CLAUSE_DECL (desc_node) = decl; - OMP_CLAUSE_SIZE (desc_node) = TYPE_SIZE_UNIT (type); - if (ptr_kind == GOMP_MAP_ALWAYS_POINTER) + if (ptr_kind != GOMP_MAP_ALWAYS_POINTER) { - OMP_CLAUSE_SET_MAP_KIND (desc_node, GOMP_MAP_TO); - node2 = node; - node = desc_node; /* Needs to come first. */ - } - else - { - OMP_CLAUSE_SET_MAP_KIND (desc_node, GOMP_MAP_TO_PSET); - node2 = desc_node; + /* We only create a GOMP_MAP_TO_PSET mapping for derived-type + members here for OpenACC. + For OpenMP, the descriptor must be mapped with its own explicit + map clause (e.g. both "map(foo%arr)" and "map(foo%arr(:))" must + be present in the clause list if "foo%arr" is a pointer to an + array). */ + node2 = build_omp_clause (input_location, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (node2, GOMP_MAP_TO_PSET); + OMP_CLAUSE_DECL (node2) = decl; + OMP_CLAUSE_SIZE (node2) = TYPE_SIZE_UNIT (type); } node3 = build_omp_clause (input_location, OMP_CLAUSE_MAP); @@ -2592,6 +2591,74 @@ handle_iterator (gfc_namespace *ns, stmtblock_t *iter_block, tree block) return list; } +/* To alleviate quadratic behaviour in checking each entry of a + gfc_omp_namelist against every other entry, we build a hashtable indexed by + gfc_symbol pointer, which we can use in the (overwhelmingly common) case + that a map expression has a symbol as its root term. Return a namelist + based on the root symbol used by N, building a new table in SYM_ROOTED_NL + using the gfc_omp_namelist N2 (all clauses) if we haven't done so + already. */ + +static gfc_omp_namelist * +get_symbol_rooted_namelist (hash_map *&sym_rooted_nl, + gfc_omp_namelist *n, + gfc_omp_namelist *n2, bool *sym_based) +{ + /* Early-out if we have a NULL clause list (e.g. for OpenACC). */ + if (!n2) + return NULL; + + gfc_symbol *use_sym = NULL; + + /* We're only interested in cases where we have an expression, e.g. a + component access. 
*/ + if (n->expr && n->expr->expr_type == EXPR_VARIABLE && n->expr->symtree) + use_sym = n->expr->symtree->n.sym; + + *sym_based = false; + + if (!use_sym) + return n2; + + if (!sym_rooted_nl) + { + sym_rooted_nl = new hash_map (); + + for (; n2 != NULL; n2 = n2->next) + { + if (!n2->expr + || n2->expr->expr_type != EXPR_VARIABLE + || !n2->expr->symtree) + continue; + + gfc_omp_namelist *nl_copy = gfc_get_omp_namelist (); + memcpy (nl_copy, n2, sizeof *nl_copy); + nl_copy->u2.duplicate_of = n2; + nl_copy->next = NULL; + + gfc_symbol *idx_sym = n2->expr->symtree->n.sym; + + bool existed; + gfc_omp_namelist *&entry + = sym_rooted_nl->get_or_insert (idx_sym, &existed); + if (existed) + nl_copy->next = entry; + entry = nl_copy; + } + } + + gfc_omp_namelist **n2_sym = sym_rooted_nl->get (use_sym); + + if (n2_sym) + { + *sym_based = true; + return *n2_sym; + } + + return NULL; +} + static tree gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, locus where, bool declare_simd = false, @@ -2609,6 +2676,8 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, if (clauses == NULL) return NULL_TREE; + hash_map *sym_rooted_nl = NULL; + for (list = 0; list < OMP_LIST_NUM; list++) { gfc_omp_namelist *n = clauses->lists[list]; @@ -3448,6 +3517,54 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, { if (pointer || (openacc && allocatable)) { + gfc_omp_namelist *n2 + = openacc ? NULL : clauses->lists[OMP_LIST_MAP]; + + bool sym_based; + n2 = get_symbol_rooted_namelist (sym_rooted_nl, n, + n2, &sym_based); + + /* If the last reference is a pointer to a derived + type ("foo%dt_ptr"), check if any subcomponents + of the same derived type member are being mapped + elsewhere in the clause list ("foo%dt_ptr%x", + etc.). If we have such subcomponent mappings, + we only create an ALLOC node for the pointer + itself, and inhibit mapping the whole derived + type. */ + + for (; n2 != NULL; n2 = n2->next) + { + if ((!sym_based && n == n2) + || (sym_based && n == n2->u2.duplicate_of) + || !n2->expr) + continue; + + if (!gfc_omp_expr_prefix_same (n->expr, + n2->expr)) + continue; + + gfc_ref *ref1 = n->expr->ref; + gfc_ref *ref2 = n2->expr->ref; + + while (ref1->next && ref2->next) + { + ref1 = ref1->next; + ref2 = ref2->next; + } + + if (ref2->next) + { + inner = build_fold_addr_expr (inner); + OMP_CLAUSE_SET_MAP_KIND (node, + GOMP_MAP_ALLOC); + OMP_CLAUSE_DECL (node) = inner; + OMP_CLAUSE_SIZE (node) + = TYPE_SIZE_UNIT (TREE_TYPE (inner)); + goto finalize_map_clause; + } + } + tree data, size; if (lastref->u.c.component->ts.type == BT_CLASS) @@ -3549,8 +3666,52 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, node2 = desc_node; else { + gfc_omp_namelist *n2 + = clauses->lists[OMP_LIST_MAP]; node2 = node; node = desc_node; /* Put first. */ + + bool sym_based; + n2 = get_symbol_rooted_namelist (sym_rooted_nl, + n, n2, + &sym_based); + + for (; n2 != NULL; n2 = n2->next) + { + if ((!sym_based && n == n2) + || (sym_based && n == n2->u2.duplicate_of) + || !n2->expr) + continue; + + if (!gfc_omp_expr_prefix_same (n->expr, + n2->expr)) + continue; + + gfc_ref *ref1 = n->expr->ref; + gfc_ref *ref2 = n2->expr->ref; + + /* We know ref1 and ref2 overlap. We're + interested in whether ref2 describes a + smaller part of the array than ref1, which + we already know refers to the full + array. 
*/ + + while (ref1->next && ref2->next) + { + ref1 = ref1->next; + ref2 = ref2->next; + } + + if (ref2->next + || (ref2->type == REF_ARRAY + && (ref2->u.ar.type == AR_ELEMENT + || (ref2->u.ar.type + == AR_SECTION)))) + { + node2 = NULL_TREE; + goto finalize_map_clause; + } + } } node3 = build_omp_clause (input_location, OMP_CLAUSE_MAP); @@ -3702,6 +3863,23 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, } } + /* Free hashmap if we built it. */ + if (sym_rooted_nl) + { + typedef hash_map::iterator hti; + for (hti it = sym_rooted_nl->begin (); it != sym_rooted_nl->end (); ++it) + { + gfc_omp_namelist *&nl = (*it).second; + while (nl) + { + gfc_omp_namelist *next = nl->next; + free (nl); + nl = next; + } + } + delete sym_rooted_nl; + } + if (clauses->if_expr) { tree if_var; diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray-2.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray-2.f90 new file mode 100644 index 00000000000..02f08c52a8c --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subarray-2.f90 @@ -0,0 +1,108 @@ +! { dg-do run } + +program myprog +type u + integer, dimension (:), pointer :: tarr1 + integer, dimension (:), pointer :: tarr2 + integer, dimension (:), pointer :: tarr3 +end type u + +type(u) :: myu1, myu2, myu3 + +integer, dimension (12), target :: myarray1 +integer, dimension (12), target :: myarray2 +integer, dimension (12), target :: myarray3 +integer, dimension (12), target :: myarray4 +integer, dimension (12), target :: myarray5 +integer, dimension (12), target :: myarray6 +integer, dimension (12), target :: myarray7 +integer, dimension (12), target :: myarray8 +integer, dimension (12), target :: myarray9 + +myu1%tarr1 => myarray1 +myu1%tarr2 => myarray2 +myu1%tarr3 => myarray3 +myu2%tarr1 => myarray4 +myu2%tarr2 => myarray5 +myu2%tarr3 => myarray6 +myu3%tarr1 => myarray7 +myu3%tarr2 => myarray8 +myu3%tarr3 => myarray9 + +myu1%tarr1 = 0 +myu1%tarr2 = 0 +myu1%tarr3 = 0 +myu2%tarr1 = 0 +myu2%tarr2 = 0 +myu2%tarr3 = 0 +myu3%tarr1 = 0 +myu3%tarr2 = 0 +myu3%tarr3 = 0 + +!$omp target map(to:myu1%tarr1) map(tofrom:myu1%tarr1(:)) & +!$omp& map(to:myu1%tarr2) map(tofrom:myu1%tarr2(:)) & +!$omp& map(to:myu1%tarr3) map(tofrom:myu1%tarr3(:)) & +!$omp& map(to:myu2%tarr1) map(tofrom:myu2%tarr1(:)) & +!$omp& map(to:myu2%tarr2) map(tofrom:myu2%tarr2(:)) & +!$omp& map(to:myu2%tarr3) map(tofrom:myu2%tarr3(:)) & +!$omp& map(to:myu3%tarr1) map(tofrom:myu3%tarr1(:)) & +!$omp& map(to:myu3%tarr2) map(tofrom:myu3%tarr2(:)) & +!$omp& map(to:myu3%tarr3) map(tofrom:myu3%tarr3(:)) +myu1%tarr1(1) = myu1%tarr1(1) + 1 +myu2%tarr1(1) = myu2%tarr1(1) + 1 +myu3%tarr1(1) = myu3%tarr1(1) + 1 +!$omp end target + +!$omp target map(to:myu1%tarr1) map(tofrom:myu1%tarr1(1:2)) & +!$omp& map(to:myu1%tarr2) map(tofrom:myu1%tarr2(1:2)) & +!$omp& map(to:myu1%tarr3) map(tofrom:myu1%tarr3(1:2)) & +!$omp& map(to:myu2%tarr1) map(tofrom:myu2%tarr1(1:2)) & +!$omp& map(to:myu2%tarr2) map(tofrom:myu2%tarr2(1:2)) & +!$omp& map(to:myu2%tarr3) map(tofrom:myu2%tarr3(1:2)) & +!$omp& map(to:myu3%tarr1) map(tofrom:myu3%tarr1(1:2)) & +!$omp& map(to:myu3%tarr2) map(tofrom:myu3%tarr2(1:2)) & +!$omp& map(to:myu3%tarr3) map(tofrom:myu3%tarr3(1:2)) +myu1%tarr2(1) = myu1%tarr2(1) + 1 +myu2%tarr2(1) = myu2%tarr2(1) + 1 +myu3%tarr2(1) = myu3%tarr2(1) + 1 +!$omp end target + +!$omp target map(to:myu1%tarr1) map(tofrom:myu1%tarr1(1)) & +!$omp& map(to:myu1%tarr2) map(tofrom:myu1%tarr2(1)) & +!$omp& map(to:myu1%tarr3) map(tofrom:myu1%tarr3(1)) & +!$omp& map(to:myu2%tarr1) map(tofrom:myu2%tarr1(1)) & 
+!$omp& map(to:myu2%tarr2) map(tofrom:myu2%tarr2(1)) & +!$omp& map(to:myu2%tarr3) map(tofrom:myu2%tarr3(1)) & +!$omp& map(to:myu3%tarr1) map(tofrom:myu3%tarr1(1)) & +!$omp& map(to:myu3%tarr2) map(tofrom:myu3%tarr2(1)) & +!$omp& map(to:myu3%tarr3) map(tofrom:myu3%tarr3(1)) +myu1%tarr3(1) = myu1%tarr3(1) + 1 +myu2%tarr3(1) = myu2%tarr3(1) + 1 +myu3%tarr3(1) = myu3%tarr3(1) + 1 +!$omp end target + +!$omp target map(tofrom:myu1%tarr1) & +!$omp& map(tofrom:myu1%tarr2) & +!$omp& map(tofrom:myu1%tarr3) & +!$omp& map(tofrom:myu2%tarr1) & +!$omp& map(tofrom:myu2%tarr2) & +!$omp& map(tofrom:myu2%tarr3) & +!$omp& map(tofrom:myu3%tarr1) & +!$omp& map(tofrom:myu3%tarr2) & +!$omp& map(tofrom:myu3%tarr3) +myu1%tarr2(1) = myu1%tarr2(1) + 1 +myu2%tarr2(1) = myu2%tarr2(1) + 1 +myu3%tarr2(1) = myu3%tarr2(1) + 1 +!$omp end target + +if (myu1%tarr1(1).ne.1) stop 1 +if (myu2%tarr1(1).ne.1) stop 2 +if (myu3%tarr1(1).ne.1) stop 3 +if (myu1%tarr2(1).ne.2) stop 4 +if (myu2%tarr2(1).ne.2) stop 5 +if (myu3%tarr2(1).ne.2) stop 6 +if (myu1%tarr3(1).ne.1) stop 7 +if (myu2%tarr3(1).ne.1) stop 8 +if (myu3%tarr3(1).ne.1) stop 9 + +end program myprog diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 new file mode 100644 index 00000000000..14f18de8db5 --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 @@ -0,0 +1,38 @@ +! { dg-do run } + +type t + integer, pointer :: p(:) +end type t + +type(t) :: var(2) + +allocate (var(1)%p, source=[1,2,3,5]) +allocate (var(2)%p, source=[2,3,5]) + +!$omp target map(var(1)%p, var(2)%p) +var(1)%p(1) = 5 +var(2)%p(2) = 7 +!$omp end target + +!$omp target map(var(1)%p(1:3), var(1)%p, var(2)%p) +var(1)%p(1) = var(1)%p(1) + 1 +var(2)%p(2) = var(2)%p(2) + 1 +!$omp end target + +!$omp target map(var(1)%p, var(2)%p, var(2)%p(1:3)) +var(1)%p(1) = var(1)%p(1) + 1 +var(2)%p(2) = var(2)%p(2) + 1 +!$omp end target + +!$omp target map(var(1)%p, var(1)%p(1:3), var(2)%p, var(2)%p(2)) +var(1)%p(1) = var(1)%p(1) + 1 +var(2)%p(2) = var(2)%p(2) + 1 +!$omp end target + +if (var(1)%p(1).ne.8) stop 1 +if (var(2)%p(2).ne.10) stop 2 + +end + +! This is fixed by the address inspector/address tokenization patch. +! { dg-xfail-run-if TODO { offload_device_nonshared_as } } diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray-6.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray-6.f90 new file mode 100644 index 00000000000..9f0edf70890 --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subarray-6.f90 @@ -0,0 +1,26 @@ +! { dg-do run } + +type t + integer, pointer :: p(:) + integer, pointer :: p2(:) +end type t + +type(t) :: var +integer, target :: tgt(5), tgt2(1000) +var%p => tgt +var%p2 => tgt2 + +p = 0 +p2 = 0 + +!$omp target map(tgt, tgt2(4:6), var) + var%p(1) = 5 + var%p2(5) = 7 +!$omp end target + +if (var%p(1).ne.5) stop 1 +if (var%p2(5).ne.7) stop 2 + +end + +! { dg-shouldfail "" { offload_device_nonshared_as } } diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray.f90 new file mode 100644 index 00000000000..85f5af3a2a6 --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subarray.f90 @@ -0,0 +1,33 @@ +! 
{ dg-do run } + +program myprog +type u + integer, dimension (:), pointer :: tarr +end type u + +type(u) :: myu +integer, dimension (12), target :: myarray + +myu%tarr => myarray + +myu%tarr = 0 + +!$omp target map(to:myu%tarr) map(tofrom:myu%tarr(:)) +myu%tarr(1) = myu%tarr(1) + 1 +!$omp end target + +!$omp target map(to:myu%tarr) map(tofrom:myu%tarr(1:2)) +myu%tarr(1) = myu%tarr(1) + 1 +!$omp end target + +!$omp target map(to:myu%tarr) map(tofrom:myu%tarr(1)) +myu%tarr(1) = myu%tarr(1) + 1 +!$omp end target + +!$omp target map(tofrom:myu%tarr) +myu%tarr(1) = myu%tarr(1) + 1 +!$omp end target + +if (myu%tarr(1).ne.4) stop 1 + +end program myprog diff --git a/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 b/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 new file mode 100644 index 00000000000..4074a952dd1 --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 @@ -0,0 +1,35 @@ +! { dg-do run } + +module mymod +type F +integer :: a, b, c +integer, dimension(10) :: d +end type F + +type G +integer :: x, y +type(F), pointer :: myf +integer :: z +end type G +end module mymod + +program myprog +use mymod + +type(F), target :: ftmp +type(G) :: gvar + +gvar%myf => ftmp + +gvar%myf%d = 0 + +!$omp target map(to:gvar%myf) map(tofrom: gvar%myf%b, gvar%myf%d) +gvar%myf%d(1) = gvar%myf%d(1) + 1 +!$omp end target + +if (gvar%myf%d(1).ne.1) stop 1 + +end program myprog + +! This is fixed by the address inspector/address tokenization patch. +! { dg-xfail-run-if TODO { offload_device_nonshared_as } } diff --git a/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90 b/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90 index 58550c79d69..f128ebcffc1 100644 --- a/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90 +++ b/libgomp/testsuite/libgomp.fortran/struct-elem-map-1.f90 @@ -229,7 +229,8 @@ contains ! !$omp target map(tofrom: var%d(4:7), var%f(2:3), var%str2(2:3)) & ! !$omp& map(tofrom: var%str4(2:2), var%uni2(2:3), var%uni4(2:2)) - !$omp target map(tofrom: var%d(4:7), var%f(2:3), var%str2(2:3), var%uni2(2:3)) + !$omp target map(to: var%f) map(tofrom: var%d(4:7), var%f(2:3), & + !$omp& var%str2(2:3), var%uni2(2:3)) if (any (var%d(4:7) /= [(-3*i, i = 4, 7)])) stop 4 if (any (var%str2(2:3) /= ["67890", "ABCDE"])) stop 6 @@ -274,7 +275,7 @@ contains if (any (var%str2(2:3) /= ["67890", "ABCDE"])) stop 6 !$omp end target - !$omp target map(tofrom: var%f(2:3)) + !$omp target map(to: var%f) map(tofrom: var%f(2:3)) if (.not. associated (var%f)) stop 9 if (size (var%f) /= 4) stop 10 if (any (var%f(2:3) /= [33, 44])) stop 11 @@ -314,7 +315,8 @@ contains ! !$omp target map(tofrom: var%d(5), var%f(3), var%str2(3), & ! !$omp var%str4(2), var%uni2(3), var%uni4(2)) - !$omp target map(tofrom: var%d(5), var%f(3), var%str2(3), var%uni2(3)) + !$omp target map(to: var%f) map(tofrom: var%d(5), var%f(3), & + !$omp& var%str2(3), var%uni2(3)) if (var%d(5) /= -3*5) stop 4 if (var%str2(3) /= "ABCDE") stop 6 if (var%uni2(3) /= 4_"ABCDE") stop 7 @@ -362,7 +364,7 @@ contains if (any (var%uni2(2:3) /= [4_"67890", 4_"ABCDE"])) stop 7 !$omp end target - !$omp target map(tofrom: var%f(2:3)) + !$omp target map(to: var%f) map(tofrom: var%f(2:3)) if (.not. 
associated (var%f)) stop 9 if (size (var%f) /= 4) stop 10 if (any (var%f(2:3) /= [33, 44])) stop 11

From patchwork Sun Oct 9 21:51:35 2022
From: Julian Brown
To: gcc-patches@gcc.gnu.org
Cc: Jakub Jelinek, Tobias Burnus, fortran@gcc.gnu.org
Subject: [PATCH v4 2/4] OpenMP/OpenACC: Reindent TO/FROM/_CACHE_ stanza in {c_}finish_omp_clause
Date: Sun, 9 Oct 2022 14:51:35 -0700
Message-ID: <8f25b1d4aa40f4d76b864c9e5635f0bda6f6c3d2.1665351784.git.julian@codesourcery.com>

This patch trivially adds braces and reindents the
OMP_CLAUSE_TO/OMP_CLAUSE_FROM/OMP_CLAUSE__CACHE_ stanza in c_finish_omp_clauses
and finish_omp_clauses, in preparation for the following patch (to clarify the
diff a little).

2022-09-13  Julian Brown

gcc/c/
	* c-typeck.cc (c_finish_omp_clauses): Add braces and reindent
	OMP_CLAUSE_TO/OMP_CLAUSE_FROM/OMP_CLAUSE__CACHE_ stanza.
gcc/cp/ * semantics.cc (finish_omp_clause): Add braces and reindent OMP_CLAUSE_TO/OMP_CLAUSE_FROM/OMP_CLAUSE__CACHE_ stanza. --- gcc/c/c-typeck.cc | 615 +++++++++++++++++----------------- gcc/cp/semantics.cc | 786 ++++++++++++++++++++++---------------------- 2 files changed, 706 insertions(+), 695 deletions(-) diff --git a/gcc/c/c-typeck.cc b/gcc/c/c-typeck.cc index ac242b5ed13..f57365fb588 100644 --- a/gcc/c/c-typeck.cc +++ b/gcc/c/c-typeck.cc @@ -15013,321 +15013,326 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) case OMP_CLAUSE_TO: case OMP_CLAUSE_FROM: case OMP_CLAUSE__CACHE_: - t = OMP_CLAUSE_DECL (c); - if (TREE_CODE (t) == TREE_LIST) - { - grp_start_p = pc; - grp_sentinel = OMP_CLAUSE_CHAIN (c); + { + t = OMP_CLAUSE_DECL (c); + if (TREE_CODE (t) == TREE_LIST) + { + grp_start_p = pc; + grp_sentinel = OMP_CLAUSE_CHAIN (c); - if (handle_omp_array_sections (c, ort)) - remove = true; - else - { - t = OMP_CLAUSE_DECL (c); - if (!omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "array section does not have mappable type " - "in %qs clause", - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (TYPE_ATOMIC (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%<_Atomic%> %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - while (TREE_CODE (t) == ARRAY_REF) - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == COMPONENT_REF - && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE) - { - do - { - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - while (TREE_CODE (t) == COMPONENT_REF - || TREE_CODE (t) == ARRAY_REF); - - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_IMPLICIT (c) - && (bitmap_bit_p (&map_head, DECL_UID (t)) - || bitmap_bit_p (&map_field_head, DECL_UID (t)) - || bitmap_bit_p (&map_firstprivate_head, - DECL_UID (t)))) - { - remove = true; - break; - } - if (bitmap_bit_p (&map_field_head, DECL_UID (t))) - break; - if (bitmap_bit_p (&map_head, DECL_UID (t))) - { - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in motion " - "clauses", t); - else if (ort == C_ORT_ACC) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data " - "clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in map " - "clauses", t); - remove = true; - } - else - { - bitmap_set_bit (&map_head, DECL_UID (t)); - bitmap_set_bit (&map_field_head, DECL_UID (t)); - } - } - } - if (c_oacc_check_attachments (c)) - remove = true; - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH - || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) - /* In this case, we have a single array element which is a - pointer, and we already set OMP_CLAUSE_SIZE in - handle_omp_array_sections above. For attach/detach clauses, - reset the OMP_CLAUSE_SIZE (representing a bias) to zero - here. */ - OMP_CLAUSE_SIZE (c) = size_zero_node; - break; - } - if (t == error_mark_node) - { - remove = true; - break; - } - /* OpenACC attach / detach clauses must be pointers. 
*/ - if (c_oacc_check_attachments (c)) - { - remove = true; - break; - } - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH - || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) - /* For attach/detach clauses, set OMP_CLAUSE_SIZE (representing a - bias) to zero here, so it is not set erroneously to the pointer - size later on in gimplify.cc. */ - OMP_CLAUSE_SIZE (c) = size_zero_node; - while (TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == COMPOUND_EXPR) - { - t = TREE_OPERAND (t, 1); - STRIP_NOPS (t); - } - indir_component_ref_p = false; - if (TREE_CODE (t) == COMPONENT_REF - && (TREE_CODE (TREE_OPERAND (t, 0)) == MEM_REF - || TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF - || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF)) - { - t = TREE_OPERAND (TREE_OPERAND (t, 0), 0); - indir_component_ref_p = true; - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - - if (TREE_CODE (t) == COMPONENT_REF - && OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_) - { - if (DECL_BIT_FIELD (TREE_OPERAND (t, 1))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "bit-field %qE in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + if (handle_omp_array_sections (c, ort)) remove = true; - } - else if (!omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE does not have a mappable type in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (TYPE_ATOMIC (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%<_Atomic%> %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - while (TREE_CODE (t) == COMPONENT_REF) - { - if (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) - == UNION_TYPE) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is a member of a union", t); - remove = true; - break; - } - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF) - { - if (maybe_ne (mem_ref_offset (t), 0)) + else + { + t = OMP_CLAUSE_DECL (c); + if (!omp_mappable_type (TREE_TYPE (t))) + { error_at (OMP_CLAUSE_LOCATION (c), - "cannot dereference %qE in %qs clause", t, + "array section does not have mappable type " + "in %qs clause", omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - else - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { + remove = true; + } + else if (TYPE_ATOMIC (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%<_Atomic%> %qE in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + while (TREE_CODE (t) == ARRAY_REF) t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - if (remove) - break; - if (VAR_P (t) || TREE_CODE (t) == PARM_DECL) - { - if (bitmap_bit_p (&map_field_head, DECL_UID (t)) - || (ort != C_ORT_ACC - && bitmap_bit_p (&map_head, DECL_UID (t)))) - break; - } - } - if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is not a variable in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (VAR_P (t) && DECL_THREAD_LOCAL_P (t)) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD is threadprivate variable in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - 
else if ((OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP - || (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_FIRSTPRIVATE_POINTER)) - && !indir_component_ref_p - && !c_mark_addressable (t)) - remove = true; - else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER - || (OMP_CLAUSE_MAP_KIND (c) - == GOMP_MAP_FIRSTPRIVATE_POINTER) - || (OMP_CLAUSE_MAP_KIND (c) - == GOMP_MAP_FORCE_DEVICEPTR))) - && t == OMP_CLAUSE_DECL (c) - && !omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD does not have a mappable type in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (TREE_TYPE (t) == error_mark_node) - remove = true; - else if (TYPE_ATOMIC (strip_array_types (TREE_TYPE (t)))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%<_Atomic%> %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_IMPLICIT (c) - && (bitmap_bit_p (&map_head, DECL_UID (t)) - || bitmap_bit_p (&map_field_head, DECL_UID (t)) - || bitmap_bit_p (&map_firstprivate_head, DECL_UID (t)))) - remove = true; - else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER) - { - if (bitmap_bit_p (&generic_head, DECL_UID (t)) - || bitmap_bit_p (&firstprivate_head, DECL_UID (t)) - || bitmap_bit_p (&map_firstprivate_head, DECL_UID (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", t); + if (TREE_CODE (t) == COMPONENT_REF + && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE) + { + do + { + t = TREE_OPERAND (t, 0); + if (TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == INDIRECT_REF) + { + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + } + while (TREE_CODE (t) == COMPONENT_REF + || TREE_CODE (t) == ARRAY_REF); + + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_IMPLICIT (c) + && (bitmap_bit_p (&map_head, DECL_UID (t)) + || bitmap_bit_p (&map_field_head, DECL_UID (t)) + || bitmap_bit_p (&map_firstprivate_head, + DECL_UID (t)))) + { + remove = true; + break; + } + if (bitmap_bit_p (&map_field_head, DECL_UID (t))) + break; + if (bitmap_bit_p (&map_head, DECL_UID (t))) + { + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in motion " + "clauses", t); + else if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data " + "clauses", t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in map " + "clauses", t); + remove = true; + } + else + { + bitmap_set_bit (&map_head, DECL_UID (t)); + bitmap_set_bit (&map_field_head, DECL_UID (t)); + } + } + } + if (c_oacc_check_attachments (c)) remove = true; - } - else if (bitmap_bit_p (&map_head, DECL_UID (t)) - && !bitmap_bit_p (&map_field_head, DECL_UID (t))) - { - if (ort == C_ORT_ACC) + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) + /* In this case, we have a single array element which is a + pointer, and we already set OMP_CLAUSE_SIZE in + handle_omp_array_sections above. For attach/detach + clauses, reset the OMP_CLAUSE_SIZE (representing a bias) + to zero here. 
*/ + OMP_CLAUSE_SIZE (c) = size_zero_node; + break; + } + if (t == error_mark_node) + { + remove = true; + break; + } + /* OpenACC attach / detach clauses must be pointers. */ + if (c_oacc_check_attachments (c)) + { + remove = true; + break; + } + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) + /* For attach/detach clauses, set OMP_CLAUSE_SIZE (representing a + bias) to zero here, so it is not set erroneously to the pointer + size later on in gimplify.cc. */ + OMP_CLAUSE_SIZE (c) = size_zero_node; + while (TREE_CODE (t) == INDIRECT_REF + || TREE_CODE (t) == ARRAY_REF) + { + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + while (TREE_CODE (t) == COMPOUND_EXPR) + { + t = TREE_OPERAND (t, 1); + STRIP_NOPS (t); + } + indir_component_ref_p = false; + if (TREE_CODE (t) == COMPONENT_REF + && (TREE_CODE (TREE_OPERAND (t, 0)) == MEM_REF + || TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF + || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF)) + { + t = TREE_OPERAND (TREE_OPERAND (t, 0), 0); + indir_component_ref_p = true; + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + + if (TREE_CODE (t) == COMPONENT_REF + && OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_) + { + if (DECL_BIT_FIELD (TREE_OPERAND (t, 1))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "bit-field %qE in %qs clause", + t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (!omp_mappable_type (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qE does not have a mappable type in %qs clause", + t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (TYPE_ATOMIC (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%<_Atomic%> %qE in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + while (TREE_CODE (t) == COMPONENT_REF) + { + if (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) + == UNION_TYPE) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qE is a member of a union", t); + remove = true; + break; + } + t = TREE_OPERAND (t, 0); + if (TREE_CODE (t) == MEM_REF) + { + if (maybe_ne (mem_ref_offset (t), 0)) + error_at (OMP_CLAUSE_LOCATION (c), + "cannot dereference %qE in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + else + t = TREE_OPERAND (t, 0); + } + while (TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == INDIRECT_REF + || TREE_CODE (t) == ARRAY_REF) + { + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + } + if (remove) + break; + if (VAR_P (t) || TREE_CODE (t) == PARM_DECL) + { + if (bitmap_bit_p (&map_field_head, DECL_UID (t)) + || (ort != C_ORT_ACC + && bitmap_bit_p (&map_head, DECL_UID (t)))) + break; + } + } + if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qE is not a variable in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (VAR_P (t) && DECL_THREAD_LOCAL_P (t)) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qD is threadprivate variable in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if ((OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP + || (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_POINTER)) + && !indir_component_ref_p + && !c_mark_addressable (t)) + remove = true; + else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && 
(OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER + || (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_FIRSTPRIVATE_POINTER) + || (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_FORCE_DEVICEPTR))) + && t == OMP_CLAUSE_DECL (c) + && !omp_mappable_type (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qD does not have a mappable type in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (TREE_TYPE (t) == error_mark_node) + remove = true; + else if (TYPE_ATOMIC (strip_array_types (TREE_TYPE (t)))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%<_Atomic%> %qE in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_IMPLICIT (c) + && (bitmap_bit_p (&map_head, DECL_UID (t)) + || bitmap_bit_p (&map_field_head, DECL_UID (t)) + || bitmap_bit_p (&map_firstprivate_head, + DECL_UID (t)))) + remove = true; + else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_FIRSTPRIVATE_POINTER)) + { + if (bitmap_bit_p (&generic_head, DECL_UID (t)) + || bitmap_bit_p (&firstprivate_head, DECL_UID (t)) + || bitmap_bit_p (&map_firstprivate_head, DECL_UID (t))) + { error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in data clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears both in data and map clauses", t); - remove = true; - } - else - bitmap_set_bit (&map_firstprivate_head, DECL_UID (t)); - } - else if (bitmap_bit_p (&map_head, DECL_UID (t)) - && !bitmap_bit_p (&map_field_head, DECL_UID (t))) - { - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in motion clauses", t); - else if (ort == C_ORT_ACC) + remove = true; + } + else if (bitmap_bit_p (&map_head, DECL_UID (t)) + && !bitmap_bit_p (&map_field_head, DECL_UID (t))) + { + if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", + t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears both in data and map clauses", t); + remove = true; + } + else + bitmap_set_bit (&map_firstprivate_head, DECL_UID (t)); + } + else if (bitmap_bit_p (&map_head, DECL_UID (t)) + && !bitmap_bit_p (&map_field_head, DECL_UID (t))) + { + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in motion clauses", t); + else if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in map clauses", t); + remove = true; + } + else if (ort == C_ORT_ACC + && bitmap_bit_p (&generic_head, DECL_UID (t))) + { error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in data clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in map clauses", t); - remove = true; - } - else if (ort == C_ORT_ACC - && bitmap_bit_p (&generic_head, DECL_UID (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", t); - remove = true; - } - else if (bitmap_bit_p (&firstprivate_head, DECL_UID (t)) - || bitmap_bit_p (&is_on_device_head, DECL_UID (t))) - { - if (ort == C_ORT_ACC) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears both in data and map clauses", t); - remove = true; - } - else - { - bitmap_set_bit (&map_head, DECL_UID (t)); - if (t != 
OMP_CLAUSE_DECL (c) - && TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF) - bitmap_set_bit (&map_field_head, DECL_UID (t)); - } + remove = true; + } + else if (bitmap_bit_p (&firstprivate_head, DECL_UID (t)) + || bitmap_bit_p (&is_on_device_head, DECL_UID (t))) + { + if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears both in data and map clauses", t); + remove = true; + } + else + { + bitmap_set_bit (&map_head, DECL_UID (t)); + if (t != OMP_CLAUSE_DECL (c) + && TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF) + bitmap_set_bit (&map_field_head, DECL_UID (t)); + } + } break; case OMP_CLAUSE_ENTER: diff --git a/gcc/cp/semantics.cc b/gcc/cp/semantics.cc index 66ee2186a84..7aa81101c63 100644 --- a/gcc/cp/semantics.cc +++ b/gcc/cp/semantics.cc @@ -7982,408 +7982,414 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) case OMP_CLAUSE_TO: case OMP_CLAUSE_FROM: case OMP_CLAUSE__CACHE_: - t = OMP_CLAUSE_DECL (c); - if (TREE_CODE (t) == TREE_LIST) - { - grp_start_p = pc; - grp_sentinel = OMP_CLAUSE_CHAIN (c); + { + t = OMP_CLAUSE_DECL (c); + if (TREE_CODE (t) == TREE_LIST) + { + grp_start_p = pc; + grp_sentinel = OMP_CLAUSE_CHAIN (c); - if (handle_omp_array_sections (c, ort)) - remove = true; - else - { - t = OMP_CLAUSE_DECL (c); - if (TREE_CODE (t) != TREE_LIST - && !type_dependent_expression_p (t) - && !omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "array section does not have mappable type " - "in %qs clause", - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - if (TREE_TYPE (t) != error_mark_node - && !COMPLETE_TYPE_P (TREE_TYPE (t))) - cxx_incomplete_type_inform (TREE_TYPE (t)); - remove = true; - } - while (TREE_CODE (t) == ARRAY_REF) - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == COMPONENT_REF - && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE) - { - do - { - t = TREE_OPERAND (t, 0); - if (REFERENCE_REF_P (t)) - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - while (TREE_CODE (t) == COMPONENT_REF - || TREE_CODE (t) == ARRAY_REF); - - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_IMPLICIT (c) - && (bitmap_bit_p (&map_head, DECL_UID (t)) - || bitmap_bit_p (&map_field_head, DECL_UID (t)) - || bitmap_bit_p (&map_firstprivate_head, - DECL_UID (t)))) - { - remove = true; - break; - } - if (bitmap_bit_p (&map_field_head, DECL_UID (t))) - break; - if (bitmap_bit_p (&map_head, DECL_UID (t))) - { - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in motion" - " clauses", t); - else if (ort == C_ORT_ACC) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data" - " clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in map" - " clauses", t); - remove = true; - } - else - { - bitmap_set_bit (&map_head, DECL_UID (t)); - bitmap_set_bit (&map_field_head, DECL_UID (t)); - } - } - } - if (cp_oacc_check_attachments (c)) - remove = true; - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH - || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) - /* In this case, we have a single array element which is a - pointer, and we already set OMP_CLAUSE_SIZE in - handle_omp_array_sections above. 
For attach/detach clauses, - reset the OMP_CLAUSE_SIZE (representing a bias) to zero - here. */ - OMP_CLAUSE_SIZE (c) = size_zero_node; - break; - } - if (t == error_mark_node) - { - remove = true; - break; - } - /* OpenACC attach / detach clauses must be pointers. */ - if (cp_oacc_check_attachments (c)) - { - remove = true; - break; - } - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH - || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) - /* For attach/detach clauses, set OMP_CLAUSE_SIZE (representing a - bias) to zero here, so it is not set erroneously to the pointer - size later on in gimplify.cc. */ - OMP_CLAUSE_SIZE (c) = size_zero_node; - if (REFERENCE_REF_P (t) - && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF) - { - t = TREE_OPERAND (t, 0); - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH) - OMP_CLAUSE_DECL (c) = t; - } - while (TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == COMPOUND_EXPR) - { - t = TREE_OPERAND (t, 1); - STRIP_NOPS (t); - } - if (TREE_CODE (t) == COMPONENT_REF - && invalid_nonstatic_memfn_p (EXPR_LOCATION (t), t, - tf_warning_or_error)) - remove = true; - indir_component_ref_p = false; - if (TREE_CODE (t) == COMPONENT_REF - && (TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF - || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF)) - { - t = TREE_OPERAND (TREE_OPERAND (t, 0), 0); - indir_component_ref_p = true; - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - if (TREE_CODE (t) == COMPONENT_REF - && OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_) - { - if (type_dependent_expression_p (t)) - break; - if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL - && DECL_BIT_FIELD (TREE_OPERAND (t, 1))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "bit-field %qE in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + if (handle_omp_array_sections (c, ort)) remove = true; - } - else if (!omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE does not have a mappable type in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - if (TREE_TYPE (t) != error_mark_node - && !COMPLETE_TYPE_P (TREE_TYPE (t))) - cxx_incomplete_type_inform (TREE_TYPE (t)); - remove = true; - } - while (TREE_CODE (t) == COMPONENT_REF) - { - if (TREE_TYPE (TREE_OPERAND (t, 0)) - && (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) - == UNION_TYPE)) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is a member of a union", t); - remove = true; - break; - } - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF) - { - if (maybe_ne (mem_ref_offset (t), 0)) + else + { + t = OMP_CLAUSE_DECL (c); + if (TREE_CODE (t) != TREE_LIST + && !type_dependent_expression_p (t) + && !omp_mappable_type (TREE_TYPE (t))) + { error_at (OMP_CLAUSE_LOCATION (c), - "cannot dereference %qE in %qs clause", t, + "array section does not have mappable type " + "in %qs clause", omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - else - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { + if (TREE_TYPE (t) != error_mark_node + && !COMPLETE_TYPE_P (TREE_TYPE (t))) + cxx_incomplete_type_inform (TREE_TYPE (t)); + remove = true; + } + while (TREE_CODE (t) == ARRAY_REF) t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == 
POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - if (remove) - break; - if (REFERENCE_REF_P (t)) - t = TREE_OPERAND (t, 0); - if (VAR_P (t) || TREE_CODE (t) == PARM_DECL) - { - if (bitmap_bit_p (&map_field_head, DECL_UID (t)) - || (ort != C_ORT_ACC - && bitmap_bit_p (&map_head, DECL_UID (t)))) - goto handle_map_references; - } - } - if (!processing_template_decl - && TREE_CODE (t) == FIELD_DECL) - { - OMP_CLAUSE_DECL (c) = finish_non_static_data_member (t, NULL_TREE, - NULL_TREE); - break; - } - if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) - { - if (processing_template_decl && TREE_CODE (t) != OVERLOAD) - break; - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER - || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_POINTER - || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH_DETACH)) - break; - if (DECL_P (t)) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD is not a variable in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is not a variable in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (VAR_P (t) && CP_DECL_THREAD_LOCAL_P (t)) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD is threadprivate variable in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (!processing_template_decl - && !TYPE_REF_P (TREE_TYPE (t)) - && (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP - || (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_FIRSTPRIVATE_POINTER)) - && !indir_component_ref_p - && !cxx_mark_addressable (t)) - remove = true; - else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER - || (OMP_CLAUSE_MAP_KIND (c) - == GOMP_MAP_FIRSTPRIVATE_POINTER))) - && t == OMP_CLAUSE_DECL (c) - && !type_dependent_expression_p (t) - && !omp_mappable_type (TYPE_REF_P (TREE_TYPE (t)) - ? 
TREE_TYPE (TREE_TYPE (t)) - : TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD does not have a mappable type in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - if (TREE_TYPE (t) != error_mark_node - && !COMPLETE_TYPE_P (TREE_TYPE (t))) - cxx_incomplete_type_inform (TREE_TYPE (t)); - remove = true; - } - else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FORCE_DEVICEPTR - && !type_dependent_expression_p (t) - && !INDIRECT_TYPE_P (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD is not a pointer variable", t); - remove = true; - } - else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_IMPLICIT (c) - && (bitmap_bit_p (&map_head, DECL_UID (t)) - || bitmap_bit_p (&map_field_head, DECL_UID (t)) - || bitmap_bit_p (&map_firstprivate_head, - DECL_UID (t)))) - remove = true; - else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER) - { - if (bitmap_bit_p (&generic_head, DECL_UID (t)) - || bitmap_bit_p (&firstprivate_head, DECL_UID (t)) - || bitmap_bit_p (&map_firstprivate_head, DECL_UID (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", t); + if (TREE_CODE (t) == COMPONENT_REF + && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE) + { + do + { + t = TREE_OPERAND (t, 0); + if (REFERENCE_REF_P (t)) + t = TREE_OPERAND (t, 0); + if (TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == INDIRECT_REF) + { + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + } + while (TREE_CODE (t) == COMPONENT_REF + || TREE_CODE (t) == ARRAY_REF); + + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_IMPLICIT (c) + && (bitmap_bit_p (&map_head, DECL_UID (t)) + || bitmap_bit_p (&map_field_head, DECL_UID (t)) + || bitmap_bit_p (&map_firstprivate_head, + DECL_UID (t)))) + { + remove = true; + break; + } + if (bitmap_bit_p (&map_field_head, DECL_UID (t))) + break; + if (bitmap_bit_p (&map_head, DECL_UID (t))) + { + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in motion" + " clauses", t); + else if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data" + " clauses", t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in map" + " clauses", t); + remove = true; + } + else + { + bitmap_set_bit (&map_head, DECL_UID (t)); + bitmap_set_bit (&map_field_head, DECL_UID (t)); + } + } + } + if (cp_oacc_check_attachments (c)) remove = true; - } - else if (bitmap_bit_p (&map_head, DECL_UID (t)) - && !bitmap_bit_p (&map_field_head, DECL_UID (t))) - { - if (ort == C_ORT_ACC) + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) + /* In this case, we have a single array element which is a + pointer, and we already set OMP_CLAUSE_SIZE in + handle_omp_array_sections above. For attach/detach + clauses, reset the OMP_CLAUSE_SIZE (representing a bias) + to zero here. */ + OMP_CLAUSE_SIZE (c) = size_zero_node; + break; + } + if (t == error_mark_node) + { + remove = true; + break; + } + /* OpenACC attach / detach clauses must be pointers. 
*/ + if (cp_oacc_check_attachments (c)) + { + remove = true; + break; + } + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_DETACH)) + /* For attach/detach clauses, set OMP_CLAUSE_SIZE (representing a + bias) to zero here, so it is not set erroneously to the pointer + size later on in gimplify.cc. */ + OMP_CLAUSE_SIZE (c) = size_zero_node; + if (REFERENCE_REF_P (t) + && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF) + { + t = TREE_OPERAND (t, 0); + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH) + OMP_CLAUSE_DECL (c) = t; + } + while (TREE_CODE (t) == INDIRECT_REF + || TREE_CODE (t) == ARRAY_REF) + { + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + while (TREE_CODE (t) == COMPOUND_EXPR) + { + t = TREE_OPERAND (t, 1); + STRIP_NOPS (t); + } + if (TREE_CODE (t) == COMPONENT_REF + && invalid_nonstatic_memfn_p (EXPR_LOCATION (t), t, + tf_warning_or_error)) + remove = true; + indir_component_ref_p = false; + if (TREE_CODE (t) == COMPONENT_REF + && (TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF + || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF)) + { + t = TREE_OPERAND (TREE_OPERAND (t, 0), 0); + indir_component_ref_p = true; + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + if (TREE_CODE (t) == COMPONENT_REF + && OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_) + { + if (type_dependent_expression_p (t)) + break; + if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL + && DECL_BIT_FIELD (TREE_OPERAND (t, 1))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "bit-field %qE in %qs clause", + t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (!omp_mappable_type (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qE does not have a mappable type in %qs clause", + t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + if (TREE_TYPE (t) != error_mark_node + && !COMPLETE_TYPE_P (TREE_TYPE (t))) + cxx_incomplete_type_inform (TREE_TYPE (t)); + remove = true; + } + while (TREE_CODE (t) == COMPONENT_REF) + { + if (TREE_TYPE (TREE_OPERAND (t, 0)) + && (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) + == UNION_TYPE)) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qE is a member of a union", t); + remove = true; + break; + } + t = TREE_OPERAND (t, 0); + if (TREE_CODE (t) == MEM_REF) + { + if (maybe_ne (mem_ref_offset (t), 0)) + error_at (OMP_CLAUSE_LOCATION (c), + "cannot dereference %qE in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + else + t = TREE_OPERAND (t, 0); + } + while (TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == INDIRECT_REF + || TREE_CODE (t) == ARRAY_REF) + { + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); + } + } + if (remove) + break; + if (REFERENCE_REF_P (t)) + t = TREE_OPERAND (t, 0); + if (VAR_P (t) || TREE_CODE (t) == PARM_DECL) + { + if (bitmap_bit_p (&map_field_head, DECL_UID (t)) + || (ort != C_ORT_ACC + && bitmap_bit_p (&map_head, DECL_UID (t)))) + goto handle_map_references; + } + } + if (!processing_template_decl + && TREE_CODE (t) == FIELD_DECL) + { + OMP_CLAUSE_DECL (c) + = finish_non_static_data_member (t, NULL_TREE, NULL_TREE); + break; + } + if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) + { + if (processing_template_decl && TREE_CODE (t) != OVERLOAD) + break; + if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) == 
GOMP_MAP_POINTER + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_POINTER + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH_DETACH)) + break; + if (DECL_P (t)) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD is not a variable in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qE is not a variable in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (VAR_P (t) && CP_DECL_THREAD_LOCAL_P (t)) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qD is threadprivate variable in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + remove = true; + } + else if (!processing_template_decl + && !TYPE_REF_P (TREE_TYPE (t)) + && (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP + || (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_POINTER)) + && !indir_component_ref_p + && !cxx_mark_addressable (t)) + remove = true; + else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER + || (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_FIRSTPRIVATE_POINTER))) + && t == OMP_CLAUSE_DECL (c) + && !type_dependent_expression_p (t) + && !omp_mappable_type (TYPE_REF_P (TREE_TYPE (t)) + ? TREE_TYPE (TREE_TYPE (t)) + : TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qD does not have a mappable type in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (c)]); + if (TREE_TYPE (t) != error_mark_node + && !COMPLETE_TYPE_P (TREE_TYPE (t))) + cxx_incomplete_type_inform (TREE_TYPE (t)); + remove = true; + } + else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FORCE_DEVICEPTR + && !type_dependent_expression_p (t) + && !INDIRECT_TYPE_P (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (c), + "%qD is not a pointer variable", t); + remove = true; + } + else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_IMPLICIT (c) + && (bitmap_bit_p (&map_head, DECL_UID (t)) + || bitmap_bit_p (&map_field_head, DECL_UID (t)) + || bitmap_bit_p (&map_firstprivate_head, + DECL_UID (t)))) + remove = true; + else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_FIRSTPRIVATE_POINTER)) + { + if (bitmap_bit_p (&generic_head, DECL_UID (t)) + || bitmap_bit_p (&firstprivate_head, DECL_UID (t)) + || bitmap_bit_p (&map_firstprivate_head, DECL_UID (t))) + { error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in data clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears both in data and map clauses", t); - remove = true; - } - else - bitmap_set_bit (&map_firstprivate_head, DECL_UID (t)); - } - else if (bitmap_bit_p (&map_head, DECL_UID (t)) - && !bitmap_bit_p (&map_field_head, DECL_UID (t))) - { - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in motion clauses", t); - else if (ort == C_ORT_ACC) + remove = true; + } + else if (bitmap_bit_p (&map_head, DECL_UID (t)) + && !bitmap_bit_p (&map_field_head, DECL_UID (t))) + { + if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", + t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears both in data and map clauses", t); + remove = true; + } + else + bitmap_set_bit (&map_firstprivate_head, DECL_UID (t)); + } + else if (bitmap_bit_p (&map_head, DECL_UID (t)) + && !bitmap_bit_p (&map_field_head, DECL_UID (t))) + { + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in motion 
clauses", t); + else if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in map clauses", t); + remove = true; + } + else if (ort == C_ORT_ACC + && bitmap_bit_p (&generic_head, DECL_UID (t))) + { error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in data clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in map clauses", t); - remove = true; - } - else if (ort == C_ORT_ACC - && bitmap_bit_p (&generic_head, DECL_UID (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", t); - remove = true; - } - else if (bitmap_bit_p (&firstprivate_head, DECL_UID (t)) - || bitmap_bit_p (&is_on_device_head, DECL_UID (t))) - { - if (ort == C_ORT_ACC) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears both in data and map clauses", t); - remove = true; - } - else - { - bitmap_set_bit (&map_head, DECL_UID (t)); + remove = true; + } + else if (bitmap_bit_p (&firstprivate_head, DECL_UID (t)) + || bitmap_bit_p (&is_on_device_head, DECL_UID (t))) + { + if (ort == C_ORT_ACC) + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", t); + else + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears both in data and map clauses", t); + remove = true; + } + else + { + bitmap_set_bit (&map_head, DECL_UID (t)); - tree decl = OMP_CLAUSE_DECL (c); - if (t != decl - && (TREE_CODE (decl) == COMPONENT_REF - || (INDIRECT_REF_P (decl) - && TREE_CODE (TREE_OPERAND (decl, 0)) == COMPONENT_REF - && TYPE_REF_P (TREE_TYPE (TREE_OPERAND (decl, 0)))))) - bitmap_set_bit (&map_field_head, DECL_UID (t)); - } - handle_map_references: - if (!remove - && !processing_template_decl - && ort != C_ORT_DECLARE_SIMD - && TYPE_REF_P (TREE_TYPE (OMP_CLAUSE_DECL (c)))) - { - t = OMP_CLAUSE_DECL (c); - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) - { - OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t); - if (OMP_CLAUSE_SIZE (c) == NULL_TREE) - OMP_CLAUSE_SIZE (c) - = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t))); - } - else if (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_FIRSTPRIVATE_POINTER - && (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_FIRSTPRIVATE_REFERENCE) - && (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_ALWAYS_POINTER) - && (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_ATTACH_DETACH)) - { - grp_start_p = pc; - grp_sentinel = OMP_CLAUSE_CHAIN (c); + tree decl = OMP_CLAUSE_DECL (c); + if (t != decl + && (TREE_CODE (decl) == COMPONENT_REF + || (INDIRECT_REF_P (decl) + && (TREE_CODE (TREE_OPERAND (decl, 0)) + == COMPONENT_REF) + && TYPE_REF_P (TREE_TYPE (TREE_OPERAND (decl, + 0)))))) + bitmap_set_bit (&map_field_head, DECL_UID (t)); + } + handle_map_references: + if (!remove + && !processing_template_decl + && ort != C_ORT_DECLARE_SIMD + && TYPE_REF_P (TREE_TYPE (OMP_CLAUSE_DECL (c)))) + { + t = OMP_CLAUSE_DECL (c); + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) + { + OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t); + if (OMP_CLAUSE_SIZE (c) == NULL_TREE) + OMP_CLAUSE_SIZE (c) + = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t))); + } + else if (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_POINTER + && (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_REFERENCE) + && (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_ALWAYS_POINTER) + && (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_ATTACH_DETACH)) + { + grp_start_p = pc; + grp_sentinel = OMP_CLAUSE_CHAIN (c); 
- tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), - OMP_CLAUSE_MAP); - if (TREE_CODE (t) == COMPONENT_REF) - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER); - else - OMP_CLAUSE_SET_MAP_KIND (c2, - GOMP_MAP_FIRSTPRIVATE_REFERENCE); - OMP_CLAUSE_DECL (c2) = t; - OMP_CLAUSE_SIZE (c2) = size_zero_node; - OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); - OMP_CLAUSE_CHAIN (c) = c2; - OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t); - if (OMP_CLAUSE_SIZE (c) == NULL_TREE) - OMP_CLAUSE_SIZE (c) - = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t))); - c = c2; - } - } + tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), + OMP_CLAUSE_MAP); + if (TREE_CODE (t) == COMPONENT_REF) + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER); + else + OMP_CLAUSE_SET_MAP_KIND (c2, + GOMP_MAP_FIRSTPRIVATE_REFERENCE); + OMP_CLAUSE_DECL (c2) = t; + OMP_CLAUSE_SIZE (c2) = size_zero_node; + OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); + OMP_CLAUSE_CHAIN (c) = c2; + OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t); + if (OMP_CLAUSE_SIZE (c) == NULL_TREE) + OMP_CLAUSE_SIZE (c) + = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t))); + c = c2; + } + } + } break; case OMP_CLAUSE_ENTER:

From patchwork Sun Oct 9 21:51:36 2022
X-Patchwork-Submitter: Julian Brown
X-Patchwork-Id: 1843
From: Julian Brown
Subject: [PATCH v4 3/4] OpenMP/OpenACC: Rework clause expansion and nested struct handling
Date: Sun, 9 Oct 2022 14:51:36 -0700
Message-ID: <2cf61b61db094bb9f38c35828e53cd715878e384.1665351784.git.julian@codesourcery.com>
Cc: Jakub Jelinek, Tobias Burnus, fortran@gcc.gnu.org

(This one has already been approved. Some adjustments have been made to this version for other patches already committed.)
This patch is an extension and rewrite/rethink of the following two patches:

  "OpenMP/OpenACC: Add inspector class to unify mapped address analysis"
  https://gcc.gnu.org/pipermail/gcc-patches/2022-March/591977.html

  "OpenMP: Handle reference-typed struct members"
  https://gcc.gnu.org/pipermail/gcc-patches/2022-March/591978.html

The latter was reviewed here by Jakub:

  https://gcc.gnu.org/pipermail/gcc-patches/2022-May/595510.html

with the comment,

> Why isn't a reference to pointer handled that way too?

and that opened a whole can of worms... generally, C++ references were not handled very consistently after the clause-processing code had been extended several times already for both OpenACC and OpenMP, and many cases of using C++ (and Fortran) references were broken. Even some cases not involving references were being mapped incorrectly.

At present a single clause may be turned into several mapping nodes, or have its mapping type changed, in several places scattered through the front- and middle-end. The analysis relating to which particular transformations are needed for some given expression has become quite hard to follow. Briefly, we manipulate clause types in the following places:

1. During parsing, in c_omp_adjust_map_clauses. Depending on a set of rules, we may change a FIRSTPRIVATE_POINTER (etc.) mapping into ATTACH_DETACH, or mark the decl addressable.

2. In semantics.cc or c-typeck.cc, clauses are expanded in handle_omp_array_sections (called via {c_}finish_omp_clauses), or in finish_omp_clauses itself. The two cases are for processing array sections (the former), or non-array sections (the latter).

3. In gimplify.cc, we build sibling lists for struct accesses, which groups and sorts accesses along with their struct base, creating new ALLOC/RELEASE nodes for pointers.

4. In gimplify.cc:gimplify_adjust_omp_clauses, mapping nodes may be adjusted or created.

This patch doesn't completely disrupt this scheme, though clause types are no longer adjusted in c_omp_adjust_map_clauses (step 1). Clause expansion in step 2 (for C and C++) now uses a single, unified mechanism, parts of which are also reused for analysis in step 3.

Rather than the somewhat "ad-hoc" pattern matching on addresses currently used to expand clauses, a new method for analysing addresses is introduced. This does a recursive-descent tree walk on expression nodes, and emits a vector of tokens describing each "part" of the address. This tokenized address can then be translated directly into mapping nodes, with the assurance that no part of the expression has been inadvertently skipped or misinterpreted. In this way, all the variations of ways in which pointers, arrays, references and component accesses can be combined are teased apart into easily-understood cases - and we know we've "parsed" the whole address before we start analysis, so the right code paths can easily be selected.

For example, a simple access "arr[idx]" might parse as:

  base-decl access-indexed-array

or "mystruct->foo[x]" with a pointer "foo" component might parse as:

  base-decl access-pointer component-selector access-pointer

A key observation is that "array" bases, e.g. accesses whose root nodes are not structures but describe scalars or arrays, and also *one-level deep* structure accesses, have first-class support in gimplify and beyond. Expressions that use deeper struct accesses or e.g. multiple indirections were more problematic: some cases worked, but lots of cases didn't.
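To make that problematic shape concrete, here is a rough illustrative sketch: the struct and function names (S, T, f) are invented for exposition, and only the "mystruct->foo->bar[0:10]" expression form, discussed further below, is taken from this patch. A "deeper" access of the kind that used to fall outside the well-supported cases might appear in user code as:

  struct S { int *bar; };
  struct T { struct S *foo; };

  void
  f (struct T *mystruct)
  {
    /* The array section's base pointer is a pointer component that is
       itself reached through another pointer component.  */
  #pragma omp target map(tofrom: mystruct->foo->bar[0:10])
    for (int i = 0; i < 10; i++)
      mystruct->foo->bar[i]++;
  }

In the informal token terms used above, this is a base decl followed by a chain of pointer accesses and component selectors, i.e. exactly a multi-level struct/indirection access rather than a one-level one.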
This patch reimplements the support for those in gimplify.cc, again using the new "address tokenization" support. An expression like "mystruct->foo->bar[0:10]" used in a mapping node will translate the right-hand access directly in the front-end. The base for the access will be "mystruct->foo". This is handled recursively -- there may be several accesses of "mystruct"'s members on the same directive, so the sibling-list building machinery can be used again. (This was already being done for OpenACC, but the new implementation differs somewhat in details, and is more robust.) For OpenMP, in the case where the base pointer itself, i.e. "mystruct->foo" here, is NOT mapped on the same directive, we create a "fragile" mapping. This turns the "foo" component access into a zero-length allocation (which is a new feature for the runtime, so support has been added there too). A couple of changes have been made to how mapping clauses are turned into mapping nodes: The first change is based on the observation that it is probably never correct to use GOMP_MAP_ALWAYS_POINTER for component accesses (e.g. for references), because if the containing struct is already mapped on the target then the host version of the pointer in question will be corrupted if the struct is copied back from the target. This patch removes all such uses, across each of C, C++ and Fortran. The second change is to the way that GOMP_MAP_ATTACH_DETACH nodes are processed during sibling-list creation. For OpenMP, for pointer components, we must map the base pointer separately from an array section that uses the base pointer, so e.g. we must have both "map(mystruct.base)" and "map(mystruct.base[0:10])" mappings. These create nodes such as: GOMP_MAP_TOFROM mystruct.base G_M_TOFROM *mystruct.base [len: 10*elemsize] G_M_ATTACH_DETACH mystruct.base Instead of using the first of these directly when building the struct sibling list then skipping the group using GOMP_MAP_ATTACH_DETACH, leading to: GOMP_MAP_STRUCT mystruct [len: 1] GOMP_MAP_TOFROM mystruct.base we now introduce a new "mini-pass", omp_resolve_clause_dependencies, that drops the GOMP_MAP_TOFROM for the base pointer, marks the second group as having had a base-pointer mapping, then omp_build_struct_sibling_lists can create: GOMP_MAP_STRUCT mystruct [len: 1] GOMP_MAP_ALLOC mystruct.base [len: ptrsize] This ends up working better in many cases, particularly those involving references. (The "alloc" space is immediately overwritten by a pointer attachment, so this is mildly more efficient than a redundant TO mapping at runtime also.) There is support in the address tokenizer for "arbitrary" base expressions which aren't rooted at a decl, but that is not used as present because such addresses are disallowed at parse time. In the front-ends, the address tokenization machinery is mostly only used for clause expansion and not for diagnostics at present. It could be used for those too, which would allow more of my previous "address inspector" implementation to be removed. The new bits in gimplify.cc work with OpenACC also. 2022-10-09 Julian Brown gcc/c-family/ * c-common.h (omp_addr_token): Add forward declaration. (c_omp_address_inspector): New class. * c-omp.cc (c_omp_adjust_map_clauses): Mark decls addressable here, but do not change any mapping node types. 
(c_omp_address_inspector::unconverted_ref_origin, c_omp_address_inspector::component_access_p, c_omp_address_inspector::check_clause, c_omp_address_inspector::get_root_term, c_omp_address_inspector::map_supported_p, c_omp_address_inspector::get_origin, c_omp_address_inspector::maybe_unconvert_ref, c_omp_address_inspector::maybe_zero_length_array_section, c_omp_address_inspector::expand_array_base, c_omp_address_inspector::expand_component_selector, c_omp_address_inspector::expand_map_clause): New methods. (omp_expand_access_chain): New function. gcc/c/ * c-typeck.cc (handle_omp_array_sections_1, handle_omp_array_sections, c_finish_omp_clauses): Use c_omp_address_inspector class and OMP address tokenizer to analyze and expand map clause expressions. Fix some diagnostics. gcc/cp/ * semantics.cc (cp_omp_address_inspector): New class, derived from c_omp_address_inspector. (handle_omp_array_sections_1, handle_omp_array_sections, finish_omp_clauses): Use cp_omp_address_inspector class and OMP address tokenizer to analyze and expand OpenMP map clause expressions. Fix some diagnostics. gcc/fortran/ * trans-openmp.cc (gfc_trans_omp_array_section): Add OPENMP parameter. Use GOMP_MAP_ATTACH_DETACH instead of GOMP_MAP_ALWAYS_POINTER for derived type components. (gfc_trans_omp_clauses): Update calls to gfc_trans_omp_array_section. gcc/ * gimplify.cc (build_struct_comp_nodes): Don't process GOMP_MAP_ATTACH_DETACH "middle" nodes here. (omp_mapping_group): Add REPROCESS_STRUCT and FRAGILE booleans for nested struct handling. (omp_strip_components_and_deref, omp_strip_indirections): Remove functions. (omp_gather_mapping_groups_1): Initialise reprocess_struct and fragile fields. (omp_group_base): Handle GOMP_MAP_ATTACH_DETACH after GOMP_MAP_STRUCT. (omp_index_mapping_groups_1): Skip reprocess_struct groups. (omp_get_nonfirstprivate_group, omp_directive_maps_explicitly, omp_resolve_clause_dependencies, omp_expand_access_chain): New functions. (omp_accumulate_sibling_list): Add GROUP_MAP, ADDR_TOKENS, FRAGILE_P, REPROCESSING_STRUCT, ADDED_TAIL parameters. Use OMP address tokenizer to analyze addresses. Reimplement nested struct handling, and implement "fragile groups". (omp_build_struct_sibling_lists): Adjust for changes to omp_accumulate_sibling_list. Recalculate bias for ATTACH_DETACH nodes after GOMP_MAP_STRUCT nodes. (gimplify_scan_omp_clauses): Call omp_resolve_clause_dependencies. Use OMP address tokenizer. (gimplify_adjust_omp_clauses_1): Use build_fold_indirect_ref_loc instead of build_simple_mem_ref_loc. * omp-general.cc (omp-general.h, tree-pretty-print.h): Include. (omp_addr_tokenizer): New namespace. (omp_addr_tokenizer::omp_addr_token): New. (omp_addr_tokenizer::omp_parse_component_selector, omp_addr_tokenizer::omp_parse_ref, omp_addr_tokenizer::omp_parse_pointer, omp_addr_tokenizer::omp_parse_access_method, omp_addr_tokenizer::omp_parse_access_methods, omp_addr_tokenizer::omp_parse_structure_base, omp_addr_tokenizer::omp_parse_structured_expr, omp_addr_tokenizer::omp_parse_array_expr, omp_addr_tokenizer::omp_access_chain_p, omp_addr_tokenizer::omp_accessed_addr): New functions. (omp_parse_expr, debug_omp_tokenized_addr): New functions. * omp-general.h (omp_addr_tokenizer::access_method_kinds, omp_addr_tokenizer::structure_base_kinds, omp_addr_tokenizer::token_type, omp_addr_tokenizer::omp_addr_token, omp_addr_tokenizer::omp_access_chain_p, omp_addr_tokenizer::omp_accessed_addr): New. (omp_addr_token, omp_parse_expr): New. 
* omp-low.cc (scan_sharing_clauses): Skip error check for references to pointers. * tree.h (OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED): New macro. gcc/testsuite/ * c-c++-common/gomp/clauses-2.c: Fix error output. * c-c++-common/gomp/target-implicit-map-2.c: Adjust scan output. * c-c++-common/gomp/target-50.c: Adjust scan output. * g++.dg/gomp/static-component-1.C: New test. * gcc.dg/gomp/target-3.c: Adjust scan output. libgomp/ * target.c (gomp_map_fields_existing): Use gomp_map_0len_lookup. (gomp_attach_pointer): Allow attaching null pointers (or Fortran "unassociated" pointers). (gomp_map_vars_internal): Handle zero-sized struct members. Add diagnostic for unmapped struct pointer members. * testsuite/libgomp.c++/class-array-1.C: New test. * testsuite/libgomp.c-c++-common/baseptrs-1.c: New test. * testsuite/libgomp.c-c++-common/baseptrs-2.c: New test. * testsuite/libgomp.c++/baseptrs-3.C: New test. * testsuite/libgomp.c++/baseptrs-4.C: New test. * testsuite/libgomp.c++/baseptrs-5.C: New test. * testsuite/libgomp.c++/target-48.C: New test. * testsuite/libgomp.c++/target-49.C: New test. * testsuite/libgomp.c/target-22.c: Add necessary explicit base pointer mappings. * testsuite/libgomp.fortran/map-subcomponents.f90: Remove XFAIL. --- gcc/c-family/c-common.h | 70 + gcc/c-family/c-omp.cc | 766 +++- gcc/c/c-typeck.cc | 363 +- gcc/cp/semantics.cc | 585 ++- gcc/fortran/trans-openmp.cc | 36 +- gcc/gimplify.cc | 1034 +++++- gcc/omp-general.cc | 425 +++ gcc/omp-general.h | 69 + gcc/omp-low.cc | 7 +- gcc/testsuite/c-c++-common/gomp/clauses-2.c | 2 +- gcc/testsuite/c-c++-common/gomp/target-50.c | 2 +- .../c-c++-common/gomp/target-implicit-map-2.c | 2 +- .../g++.dg/gomp/static-component-1.C | 23 + gcc/testsuite/gcc.dg/gomp/target-3.c | 2 +- gcc/tree.h | 4 + libgomp/target.c | 31 +- libgomp/testsuite/libgomp.c++/baseptrs-3.C | 275 ++ libgomp/testsuite/libgomp.c++/baseptrs-4.C | 3154 +++++++++++++++++ libgomp/testsuite/libgomp.c++/baseptrs-5.C | 62 + libgomp/testsuite/libgomp.c++/class-array-1.C | 59 + libgomp/testsuite/libgomp.c++/target-48.C | 32 + libgomp/testsuite/libgomp.c++/target-49.C | 37 + .../libgomp.c-c++-common/baseptrs-1.c | 50 + .../libgomp.c-c++-common/baseptrs-2.c | 70 + libgomp/testsuite/libgomp.c/target-22.c | 3 +- .../libgomp.fortran/map-subarray-4.f90 | 3 - .../libgomp.fortran/map-subcomponents.f90 | 3 - 27 files changed, 6424 insertions(+), 745 deletions(-) create mode 100644 gcc/testsuite/g++.dg/gomp/static-component-1.C create mode 100644 libgomp/testsuite/libgomp.c++/baseptrs-3.C create mode 100644 libgomp/testsuite/libgomp.c++/baseptrs-4.C create mode 100644 libgomp/testsuite/libgomp.c++/baseptrs-5.C create mode 100644 libgomp/testsuite/libgomp.c++/class-array-1.C create mode 100644 libgomp/testsuite/libgomp.c++/target-48.C create mode 100644 libgomp/testsuite/libgomp.c++/target-49.C create mode 100644 libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c create mode 100644 libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c diff --git a/gcc/c-family/c-common.h b/gcc/c-family/c-common.h index 5f470d94f4a..86ab744e065 100644 --- a/gcc/c-family/c-common.h +++ b/gcc/c-family/c-common.h @@ -1255,6 +1255,76 @@ extern tree c_omp_check_context_selector (location_t, tree); extern void c_omp_mark_declare_variant (location_t, tree, tree); extern void c_omp_adjust_map_clauses (tree, bool); +namespace omp_addr_tokenizer { struct omp_addr_token; } +typedef omp_addr_tokenizer::omp_addr_token omp_addr_token; + +class c_omp_address_inspector +{ + location_t loc; + tree root_term; + bool indirections; 
+ int map_supported; + +protected: + tree orig; + +public: + c_omp_address_inspector (location_t loc, tree t) + : loc (loc), root_term (NULL_TREE), indirections (false), + map_supported (-1), orig (t) + { + } + + ~c_omp_address_inspector () + { + } + + virtual bool processing_template_decl_p () + { + return false; + } + + virtual void emit_unmappable_type_notes (tree) + { + } + + virtual tree convert_from_reference (tree) + { + gcc_unreachable (); + } + + virtual tree build_array_ref (location_t loc, tree arr, tree idx) + { + tree eltype = TREE_TYPE (TREE_TYPE (arr)); + return build4_loc (loc, ARRAY_REF, eltype, arr, idx, NULL_TREE, + NULL_TREE); + } + + virtual bool check_clause (tree); + tree get_root_term (bool); + + tree get_address () + { + return orig; + } + + tree unconverted_ref_origin (); + bool component_access_p (); + + bool map_supported_p (); + + static tree get_origin (tree); + static tree maybe_unconvert_ref (tree); + + bool maybe_zero_length_array_section (tree); + + tree expand_array_base (tree, vec &, tree, unsigned *, + bool, bool); + tree expand_component_selector (tree, vec &, tree, + unsigned *, bool); + tree expand_map_clause (tree, tree, vec &, bool); +}; + enum c_omp_directive_kind { C_OMP_DIR_STANDALONE, C_OMP_DIR_CONSTRUCT, diff --git a/gcc/c-family/c-omp.cc b/gcc/c-family/c-omp.cc index 7a97c40935a..d049996cc0b 100644 --- a/gcc/c-family/c-omp.cc +++ b/gcc/c-family/c-omp.cc @@ -3018,8 +3018,9 @@ struct map_clause decl_mapped (false), omp_declare_target (false) { } }; -/* Adjust map clauses after normal clause parsing, mainly to turn specific - base-pointer map cases into attach/detach and mark them addressable. */ +/* Adjust map clauses after normal clause parsing, mainly to mark specific + base-pointer map cases addressable that may be turned into attach/detach + operations during gimplification. */ void c_omp_adjust_map_clauses (tree clauses, bool is_target) { @@ -3035,7 +3036,6 @@ c_omp_adjust_map_clauses (tree clauses, bool is_target) && POINTER_TYPE_P (TREE_TYPE (OMP_CLAUSE_DECL (c)))) { tree ptr = OMP_CLAUSE_DECL (c); - OMP_CLAUSE_SET_MAP_KIND (c, GOMP_MAP_ATTACH_DETACH); c_common_mark_addressable_vec (ptr); } return; @@ -3048,7 +3048,7 @@ c_omp_adjust_map_clauses (tree clauses, bool is_target) && DECL_P (OMP_CLAUSE_DECL (c))) { /* If this is for a target construct, the firstprivate pointer - is changed to attach/detach if either is true: + is marked addressable if either is true: (1) the base-pointer is mapped in this same construct, or (2) the base-pointer is a variable place on the device by "declare target" directives. @@ -3090,11 +3090,765 @@ c_omp_adjust_map_clauses (tree clauses, bool is_target) if (mc.firstprivate_ptr_p && (mc.decl_mapped || mc.omp_declare_target)) + c_common_mark_addressable_vec (OMP_CLAUSE_DECL (mc.clause)); + } +} + +/* Maybe strip off an indirection from a "converted" reference, then find the + origin of a pointer (i.e. without any offset). */ + +tree +c_omp_address_inspector::unconverted_ref_origin () +{ + tree t = orig; + + /* We may have a reference-typed component access at the outermost level + that has had convert_from_reference called on it. Get the un-dereferenced + reference itself. */ + t = maybe_unconvert_ref (t); + + /* Find base pointer for POINTER_PLUS_EXPR, etc. */ + t = get_origin (t); + + return t; +} + +/* Return TRUE if the address is a component access. 
*/ + +bool +c_omp_address_inspector::component_access_p () +{ + tree t = maybe_unconvert_ref (orig); + + t = get_origin (t); + + return TREE_CODE (t) == COMPONENT_REF; +} + +/* Perform various checks on the address, as described by clause CLAUSE (we + only use its code and location here). */ + +bool +c_omp_address_inspector::check_clause (tree clause) +{ + tree t = unconverted_ref_origin (); + + if (TREE_CODE (t) != COMPONENT_REF) + return true; + + if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL + && DECL_BIT_FIELD (TREE_OPERAND (t, 1))) + { + error_at (OMP_CLAUSE_LOCATION (clause), + "bit-field %qE in %qs clause", + t, omp_clause_code_name[OMP_CLAUSE_CODE (clause)]); + return false; + } + else if (!processing_template_decl_p () + && !omp_mappable_type (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (clause), + "%qE does not have a mappable type in %qs clause", + t, omp_clause_code_name[OMP_CLAUSE_CODE (clause)]); + emit_unmappable_type_notes (TREE_TYPE (t)); + return false; + } + else if (TREE_TYPE (t) && TYPE_ATOMIC (TREE_TYPE (t))) + { + error_at (OMP_CLAUSE_LOCATION (clause), + "%<_Atomic%> %qE in %qs clause", t, + omp_clause_code_name[OMP_CLAUSE_CODE (clause)]); + return false; + } + + return true; +} + +/* Find the "root term" for the address. This is the innermost decl, etc. + of the access. */ + +tree +c_omp_address_inspector::get_root_term (bool checking) +{ + if (root_term && !checking) + return root_term; + + tree t = unconverted_ref_origin (); + + while (TREE_CODE (t) == COMPONENT_REF) + { + if (checking + && TREE_TYPE (TREE_OPERAND (t, 0)) + && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == UNION_TYPE) { - OMP_CLAUSE_SET_MAP_KIND (mc.clause, GOMP_MAP_ATTACH_DETACH); - c_common_mark_addressable_vec (OMP_CLAUSE_DECL (mc.clause)); + error_at (loc, "%qE is a member of a union", t); + return error_mark_node; + } + t = TREE_OPERAND (t, 0); + while (TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == INDIRECT_REF + || TREE_CODE (t) == ARRAY_REF) + { + if (TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == INDIRECT_REF) + indirections = true; + t = TREE_OPERAND (t, 0); + STRIP_NOPS (t); + if (TREE_CODE (t) == POINTER_PLUS_EXPR) + t = TREE_OPERAND (t, 0); } } + + root_term = t; + + return t; +} + +/* Return TRUE if the address is supported in mapping clauses. At present, + this means that the innermost expression is a DECL_P, but could be extended + to other types of expression in the future. */ + +bool +c_omp_address_inspector::map_supported_p () +{ + /* If we've already decided if the mapped address is supported, return + that. */ + if (map_supported != -1) + return map_supported; + + tree t = unconverted_ref_origin (); + + STRIP_NOPS (t); + + while (TREE_CODE (t) == INDIRECT_REF + || TREE_CODE (t) == MEM_REF + || TREE_CODE (t) == ARRAY_REF + || TREE_CODE (t) == COMPONENT_REF + || TREE_CODE (t) == COMPOUND_EXPR + || TREE_CODE (t) == SAVE_EXPR + || TREE_CODE (t) == POINTER_PLUS_EXPR + || TREE_CODE (t) == NON_LVALUE_EXPR + || TREE_CODE (t) == NOP_EXPR) + if (TREE_CODE (t) == COMPOUND_EXPR) + t = TREE_OPERAND (t, 1); + else + t = TREE_OPERAND (t, 0); + + STRIP_NOPS (t); + + map_supported = DECL_P (t); + + return map_supported; +} + +/* Get the origin of an address T, stripping off offsets and some other + bits. 
*/ + +tree +c_omp_address_inspector::get_origin (tree t) +{ + while (1) + { + if (TREE_CODE (t) == COMPOUND_EXPR) + { + t = TREE_OPERAND (t, 1); + STRIP_NOPS (t); + } + else if (TREE_CODE (t) == POINTER_PLUS_EXPR + || TREE_CODE (t) == SAVE_EXPR) + t = TREE_OPERAND (t, 0); + else if (TREE_CODE (t) == INDIRECT_REF + && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == REFERENCE_TYPE) + t = TREE_OPERAND (t, 0); + else + break; + } + STRIP_NOPS (t); + return t; +} + +/* For an address T that might be a reference that has had + "convert_from_reference" called on it, return the actual reference without + any indirection. */ + +tree +c_omp_address_inspector::maybe_unconvert_ref (tree t) +{ + if (TREE_CODE (t) == INDIRECT_REF + && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == REFERENCE_TYPE) + return TREE_OPERAND (t, 0); + + return t; +} + +/* Return TRUE if CLAUSE might describe a zero-length array section. */ + +bool +c_omp_address_inspector::maybe_zero_length_array_section (tree clause) +{ + switch (OMP_CLAUSE_MAP_KIND (clause)) + { + case GOMP_MAP_ALLOC: + case GOMP_MAP_IF_PRESENT: + case GOMP_MAP_TO: + case GOMP_MAP_FROM: + case GOMP_MAP_TOFROM: + case GOMP_MAP_ALWAYS_TO: + case GOMP_MAP_ALWAYS_FROM: + case GOMP_MAP_ALWAYS_TOFROM: + case GOMP_MAP_RELEASE: + case GOMP_MAP_DELETE: + case GOMP_MAP_FORCE_TO: + case GOMP_MAP_FORCE_FROM: + case GOMP_MAP_FORCE_TOFROM: + case GOMP_MAP_FORCE_PRESENT: + return true; + default: + return false; + } +} + +/* Expand a chained access. We only expect to see a quite limited range of + expression types here, because e.g. you can't have an array of + references. See also gimplify.cc:omp_expand_access_chain. */ + +static tree +omp_expand_access_chain (tree c, tree expr, vec &addr_tokens, + unsigned *idx) +{ + using namespace omp_addr_tokenizer; + location_t loc = OMP_CLAUSE_LOCATION (c); + unsigned i = *idx; + tree c2 = NULL_TREE; + + switch (addr_tokens[i]->u.access_kind) + { + case ACCESS_POINTER: + case ACCESS_POINTER_OFFSET: + { + tree virtual_origin + = fold_convert_loc (loc, ptrdiff_type_node, addr_tokens[i]->expr); + tree data_addr = omp_accessed_addr (addr_tokens, i, expr); + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (c2) = addr_tokens[i]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_INDEXED_ARRAY: + break; + + default: + return error_mark_node; + } + + if (c2) + { + OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); + OMP_CLAUSE_CHAIN (c) = c2; + c = c2; + } + + *idx = ++i; + + if (i < addr_tokens.length () + && addr_tokens[i]->type == ACCESS_METHOD) + return omp_expand_access_chain (c, expr, addr_tokens, idx); + + return c; +} + +/* Translate "array_base_decl access_method" to OMP mapping clauses. 
*/ + +tree +c_omp_address_inspector::expand_array_base (tree c, + vec &addr_tokens, + tree expr, unsigned *idx, + bool target, bool decl_p) +{ + using namespace omp_addr_tokenizer; + location_t loc = OMP_CLAUSE_LOCATION (c); + int i = *idx; + tree decl = addr_tokens[i + 1]->expr; + bool declare_target_p = (decl_p + && is_global_var (decl) + && lookup_attribute ("omp declare target", + DECL_ATTRIBUTES (decl))); + bool implicit_p = (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && OMP_CLAUSE_MAP_IMPLICIT (c)); + bool chain_p = omp_access_chain_p (addr_tokens, i + 1); + tree c2 = NULL_TREE, c3 = NULL_TREE; + unsigned consume_tokens = 2; + + gcc_assert (i == 0); + + switch (addr_tokens[i + 1]->u.access_kind) + { + case ACCESS_DIRECT: + if (decl_p && !target) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + break; + + case ACCESS_REF: + { + /* Copy the referenced object. */ + tree obj = convert_from_reference (addr_tokens[i + 1]->expr); + OMP_CLAUSE_DECL (c) = obj; + OMP_CLAUSE_SIZE (c) = TYPE_SIZE_UNIT (TREE_TYPE (obj)); + + /* If we have a reference to a pointer, avoid using + FIRSTPRIVATE_REFERENCE here in case the pointer is modified in the + offload region (we can only do that if the pointer does not point + to a mapped block). We could avoid doing this if we don't have a + FROM mapping... */ + bool ref_to_ptr = TREE_CODE (TREE_TYPE (obj)) == POINTER_TYPE; + + if (target) + { + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + if (target + && !ref_to_ptr + && !declare_target_p + && decl_p) + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_REFERENCE); + else + { + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + if (decl_p) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + } + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) = size_zero_node; + + if (ref_to_ptr) + { + c3 = c2; + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALLOC); + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (c2))); + } + } + } + break; + + case ACCESS_INDEXED_REF_TO_ARRAY: + { + tree virtual_origin + = convert_from_reference (addr_tokens[i + 1]->expr); + virtual_origin = build_fold_addr_expr (virtual_origin); + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + virtual_origin); + tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr); + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + if (decl_p && target && !declare_target_p) + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_REFERENCE); + else + { + if (decl_p) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + } + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_INDEXED_ARRAY: + { + /* The code handling "firstprivatize_array_bases" in gimplify.cc is + relevant here. What do we need to create for arrays at this + stage? (This condition doesn't feel quite right. FIXME?) 
*/ + if (!target + && (TREE_CODE (TREE_TYPE (addr_tokens[i + 1]->expr)) + == ARRAY_TYPE)) + break; + + tree virtual_origin + = build_fold_addr_expr (addr_tokens[i + 1]->expr); + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + virtual_origin); + tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr); + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + if (decl_p && target) + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER); + else + { + if (decl_p) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + } + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_POINTER: + case ACCESS_POINTER_OFFSET: + { + unsigned last_access = i + 1; + tree virtual_origin; + + if (chain_p + && addr_tokens[i + 2]->type == ACCESS_METHOD + && addr_tokens[i + 2]->u.access_kind == ACCESS_INDEXED_ARRAY) + { + /* !!! This seems wrong for ACCESS_POINTER_OFFSET. */ + consume_tokens = 3; + chain_p = omp_access_chain_p (addr_tokens, i + 2); + last_access = i + 2; + virtual_origin + = build_array_ref (loc, addr_tokens[last_access]->expr, + integer_zero_node); + virtual_origin = build_fold_addr_expr (virtual_origin); + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + virtual_origin); + } + else + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + addr_tokens[last_access]->expr); + tree data_addr = omp_accessed_addr (addr_tokens, last_access, expr); + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + if (decl_p && target && !chain_p && !declare_target_p) + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER); + else + { + if (decl_p) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + } + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_REF_TO_POINTER: + case ACCESS_REF_TO_POINTER_OFFSET: + { + unsigned last_access = i + 1; + tree virtual_origin; + + if (chain_p + && addr_tokens[i + 2]->type == ACCESS_METHOD + && addr_tokens[i + 2]->u.access_kind == ACCESS_INDEXED_ARRAY) + { + /* !!! This seems wrong for ACCESS_POINTER_OFFSET. 
*/ + consume_tokens = 3; + chain_p = omp_access_chain_p (addr_tokens, i + 2); + last_access = i + 2; + virtual_origin + = build_array_ref (loc, addr_tokens[last_access]->expr, + integer_zero_node); + virtual_origin = build_fold_addr_expr (virtual_origin); + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + virtual_origin); + } + else + { + virtual_origin + = convert_from_reference (addr_tokens[last_access]->expr); + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + virtual_origin); + } + + tree data_addr = omp_accessed_addr (addr_tokens, last_access, expr); + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + if (decl_p && target && !declare_target_p) + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_REFERENCE); + else + { + if (decl_p) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + } + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + default: + *idx = i + consume_tokens; + return error_mark_node; + } + + if (c3) + { + OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c); + OMP_CLAUSE_CHAIN (c2) = c3; + OMP_CLAUSE_CHAIN (c) = c2; + if (implicit_p) + { + OMP_CLAUSE_MAP_IMPLICIT (c2) = 1; + OMP_CLAUSE_MAP_IMPLICIT (c3) = 1; + } + c = c3; + } + else if (c2) + { + OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); + OMP_CLAUSE_CHAIN (c) = c2; + if (implicit_p) + OMP_CLAUSE_MAP_IMPLICIT (c2) = 1; + c = c2; + } + + i += consume_tokens; + *idx = i; + + if (target && chain_p) + return omp_expand_access_chain (c, expr, addr_tokens, idx); + else if (chain_p) + while (*idx < addr_tokens.length () + && addr_tokens[*idx]->type == ACCESS_METHOD) + (*idx)++; + + return c; +} + +/* Translate "component_selector access_method" to OMP mapping clauses. */ + +tree +c_omp_address_inspector::expand_component_selector (tree c, + vec + &addr_tokens, + tree expr, unsigned *idx, + bool target) +{ + using namespace omp_addr_tokenizer; + location_t loc = OMP_CLAUSE_LOCATION (c); + unsigned i = *idx; + tree c2 = NULL_TREE, c3 = NULL_TREE; + bool chain_p = omp_access_chain_p (addr_tokens, i + 1); + + switch (addr_tokens[i + 1]->u.access_kind) + { + case ACCESS_DIRECT: + case ACCESS_INDEXED_ARRAY: + break; + + case ACCESS_REF: + { + /* Copy the referenced object. 
*/ + tree obj = convert_from_reference (addr_tokens[i + 1]->expr); + OMP_CLAUSE_DECL (c) = obj; + OMP_CLAUSE_SIZE (c) = TYPE_SIZE_UNIT (TREE_TYPE (obj)); + + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) = size_zero_node; + } + break; + + case ACCESS_INDEXED_REF_TO_ARRAY: + { + tree virtual_origin + = convert_from_reference (addr_tokens[i + 1]->expr); + virtual_origin = build_fold_addr_expr (virtual_origin); + virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + virtual_origin); + tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr); + + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_POINTER: + case ACCESS_POINTER_OFFSET: + { + tree virtual_origin + = fold_convert_loc (loc, ptrdiff_type_node, + addr_tokens[i + 1]->expr); + tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr); + + c2 = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (c2) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_REF_TO_POINTER: + case ACCESS_REF_TO_POINTER_OFFSET: + { + tree ptr = convert_from_reference (addr_tokens[i + 1]->expr); + tree virtual_origin = fold_convert_loc (loc, ptrdiff_type_node, + ptr); + tree data_addr = omp_accessed_addr (addr_tokens, i + 1, expr); + + /* Attach the pointer... */ + c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (c2) = ptr; + OMP_CLAUSE_SIZE (c2) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + + /* ...and also the reference. */ + c3 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c3, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (c3) = addr_tokens[i + 1]->expr; + OMP_CLAUSE_SIZE (c3) = size_zero_node; + } + break; + + default: + *idx = i + 2; + return error_mark_node; + } + + if (c3) + { + OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c); + OMP_CLAUSE_CHAIN (c2) = c3; + OMP_CLAUSE_CHAIN (c) = c2; + c = c3; + } + else if (c2) + { + OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); + OMP_CLAUSE_CHAIN (c) = c2; + c = c2; + } + + i += 2; + *idx = i; + + if (target && chain_p) + return omp_expand_access_chain (c, expr, addr_tokens, idx); + else if (chain_p) + while (*idx < addr_tokens.length () + && addr_tokens[*idx]->type == ACCESS_METHOD) + (*idx)++; + + return c; +} + +/* Expand a map clause into a group of mapping clauses, creating nodes to + attach/detach pointers and so forth as necessary. 
*/ + +tree +c_omp_address_inspector::expand_map_clause (tree c, tree expr, + vec &addr_tokens, + bool target) +{ + using namespace omp_addr_tokenizer; + unsigned i, length = addr_tokens.length (); + + for (i = 0; i < length;) + { + int remaining = length - i; + + if (remaining >= 2 + && addr_tokens[i]->type == ARRAY_BASE + && addr_tokens[i]->u.structure_base_kind == BASE_DECL + && addr_tokens[i + 1]->type == ACCESS_METHOD) + { + c = expand_array_base (c, addr_tokens, expr, &i, target, true); + if (c == error_mark_node) + return error_mark_node; + } + else if (remaining >= 2 + && addr_tokens[i]->type == ARRAY_BASE + && addr_tokens[i]->u.structure_base_kind == BASE_ARBITRARY_EXPR + && addr_tokens[i + 1]->type == ACCESS_METHOD) + { + c = expand_array_base (c, addr_tokens, expr, &i, target, false); + if (c == error_mark_node) + return error_mark_node; + } + else if (remaining >= 2 + && addr_tokens[i]->type == STRUCTURE_BASE + && addr_tokens[i]->u.structure_base_kind == BASE_DECL + && addr_tokens[i + 1]->type == ACCESS_METHOD) + { + if (addr_tokens[i + 1]->u.access_kind == ACCESS_DIRECT) + c_common_mark_addressable_vec (addr_tokens[i + 1]->expr); + i += 2; + while (addr_tokens[i]->type == ACCESS_METHOD) + i++; + } + else if (remaining >= 2 + && addr_tokens[i]->type == STRUCTURE_BASE + && addr_tokens[i]->u.structure_base_kind == BASE_ARBITRARY_EXPR + && addr_tokens[i + 1]->type == ACCESS_METHOD) + { + switch (addr_tokens[i + 1]->u.access_kind) + { + case ACCESS_DIRECT: + case ACCESS_POINTER: + i += 2; + while (addr_tokens[i]->type == ACCESS_METHOD) + i++; + break; + default: + return error_mark_node; + } + } + else if (remaining >= 2 + && addr_tokens[i]->type == COMPONENT_SELECTOR + && addr_tokens[i + 1]->type == ACCESS_METHOD) + { + c = expand_component_selector (c, addr_tokens, expr, &i, target); + /* We used 'expr', so these must have been the last tokens. 
*/ + gcc_assert (i == length); + if (c == error_mark_node) + return error_mark_node; + } + else if (remaining >= 3 + && addr_tokens[i]->type == COMPONENT_SELECTOR + && addr_tokens[i + 1]->type == STRUCTURE_BASE + && (addr_tokens[i + 1]->u.structure_base_kind + == BASE_COMPONENT_EXPR) + && addr_tokens[i + 2]->type == ACCESS_METHOD) + { + i += 3; + while (addr_tokens[i]->type == ACCESS_METHOD) + i++; + } + else + break; + } + + if (i == length) + return c; + + return error_mark_node; } const struct c_omp_directive c_omp_directives[] = { diff --git a/gcc/c/c-typeck.cc b/gcc/c/c-typeck.cc index f57365fb588..b86fdef4656 100644 --- a/gcc/c/c-typeck.cc +++ b/gcc/c/c-typeck.cc @@ -13305,6 +13305,7 @@ handle_omp_array_sections_1 (tree c, tree t, vec &types, { if (error_operand_p (t)) return error_mark_node; + c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); ret = t; if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_AFFINITY && OMP_CLAUSE_CODE (c) != OMP_CLAUSE_DEPEND @@ -13314,59 +13315,17 @@ handle_omp_array_sections_1 (tree c, tree t, vec &types, t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); return error_mark_node; } - while (TREE_CODE (t) == INDIRECT_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == COMPOUND_EXPR) - { - t = TREE_OPERAND (t, 1); - STRIP_NOPS (t); - } - if (TREE_CODE (t) == COMPONENT_REF - && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO - || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM)) - { - if (DECL_BIT_FIELD (TREE_OPERAND (t, 1))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "bit-field %qE in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - return error_mark_node; - } - while (TREE_CODE (t) == COMPONENT_REF) - { - if (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == UNION_TYPE) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is a member of a union", t); - return error_mark_node; - } - t = TREE_OPERAND (t, 0); - while (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - if (ort == C_ORT_ACC && TREE_CODE (t) == MEM_REF) - { - if (maybe_ne (mem_ref_offset (t), 0)) - error_at (OMP_CLAUSE_LOCATION (c), - "cannot dereference %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - else - t = TREE_OPERAND (t, 0); - } - } - } + if (!ai.check_clause (c)) + return error_mark_node; + else if (ai.component_access_p () + && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO + || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM)) + t = ai.get_root_term (true); + else + t = ai.unconverted_ref_origin (); + if (t == error_mark_node) + return error_mark_node; if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) { if (DECL_P (t)) @@ -13898,55 +13857,27 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort) if (size) size = c_fully_fold (size, false, NULL); OMP_CLAUSE_SIZE (c) = size; - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP - || (TREE_CODE (t) == COMPONENT_REF - && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE)) + + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) return false; - gcc_assert (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FORCE_DEVICEPTR); - switch (OMP_CLAUSE_MAP_KIND (c)) + + auto_vec addr_tokens; + + if (!omp_parse_expr (addr_tokens, first)) + return true; + + c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + + tree nc = ai.expand_map_clause (c, 
first, addr_tokens, + (ort == C_ORT_OMP_TARGET + || ort == C_ORT_ACC)); + if (nc != error_mark_node) { - case GOMP_MAP_ALLOC: - case GOMP_MAP_IF_PRESENT: - case GOMP_MAP_TO: - case GOMP_MAP_FROM: - case GOMP_MAP_TOFROM: - case GOMP_MAP_ALWAYS_TO: - case GOMP_MAP_ALWAYS_FROM: - case GOMP_MAP_ALWAYS_TOFROM: - case GOMP_MAP_RELEASE: - case GOMP_MAP_DELETE: - case GOMP_MAP_FORCE_TO: - case GOMP_MAP_FORCE_FROM: - case GOMP_MAP_FORCE_TOFROM: - case GOMP_MAP_FORCE_PRESENT: - OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1; - break; - default: - break; + if (ai.maybe_zero_length_array_section (c)) + OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1; + + return false; } - tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP); - if (TREE_CODE (t) == COMPONENT_REF) - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); - else - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER); - OMP_CLAUSE_MAP_IMPLICIT (c2) = OMP_CLAUSE_MAP_IMPLICIT (c); - if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER - && !c_mark_addressable (t)) - return false; - OMP_CLAUSE_DECL (c2) = t; - t = build_fold_addr_expr (first); - t = fold_convert_loc (OMP_CLAUSE_LOCATION (c), ptrdiff_type_node, t); - tree ptr = OMP_CLAUSE_DECL (c2); - if (!POINTER_TYPE_P (TREE_TYPE (ptr))) - ptr = build_fold_addr_expr (ptr); - t = fold_build2_loc (OMP_CLAUSE_LOCATION (c), MINUS_EXPR, - ptrdiff_type_node, t, - fold_convert_loc (OMP_CLAUSE_LOCATION (c), - ptrdiff_type_node, ptr)); - t = c_fully_fold (t, false, NULL); - OMP_CLAUSE_SIZE (c2) = t; - OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); - OMP_CLAUSE_CHAIN (c) = c2; } return false; } @@ -14212,7 +14143,6 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) tree ordered_clause = NULL_TREE; tree schedule_clause = NULL_TREE; bool oacc_async = false; - bool indir_component_ref_p = false; tree last_iterators = NULL_TREE; bool last_iterators_remove = false; tree *nogroup_seen = NULL; @@ -14744,7 +14674,8 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) "%qE appears more than once in data clauses", t); remove = true; } - else if (bitmap_bit_p (&map_head, DECL_UID (t))) + else if (bitmap_bit_p (&map_head, DECL_UID (t)) + || bitmap_bit_p (&map_field_head, DECL_UID (t))) { if (ort == C_ORT_ACC) error_at (OMP_CLAUSE_LOCATION (c), @@ -15014,6 +14945,9 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) case OMP_CLAUSE_FROM: case OMP_CLAUSE__CACHE_: { + using namespace omp_addr_tokenizer; + auto_vec addr_tokens; + t = OMP_CLAUSE_DECL (c); if (TREE_CODE (t) == TREE_LIST) { @@ -15042,56 +14976,68 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) } while (TREE_CODE (t) == ARRAY_REF) t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == COMPONENT_REF - && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE) + + c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + + if (!omp_parse_expr (addr_tokens, t)) { - do - { - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - while (TREE_CODE (t) == COMPONENT_REF - || TREE_CODE (t) == ARRAY_REF); + sorry_at (OMP_CLAUSE_LOCATION (c), + "unsupported map expression %qE", + OMP_CLAUSE_DECL (c)); + remove = true; + break; + } + + /* This check is to determine if this will be the only map + clause created for this node. 
Otherwise, we'll check + the following FIRSTPRIVATE_POINTER or ATTACH_DETACH + node on the next iteration(s) of the loop. */ + if (addr_tokens.length () >= 4 + && addr_tokens[0]->type == STRUCTURE_BASE + && addr_tokens[0]->u.structure_base_kind == BASE_DECL + && addr_tokens[1]->type == ACCESS_METHOD + && addr_tokens[2]->type == COMPONENT_SELECTOR + && addr_tokens[3]->type == ACCESS_METHOD + && (addr_tokens[3]->u.access_kind == ACCESS_DIRECT + || (addr_tokens[3]->u.access_kind + == ACCESS_INDEXED_ARRAY))) + { + tree rt = addr_tokens[1]->expr; + + gcc_assert (DECL_P (rt)); if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP && OMP_CLAUSE_MAP_IMPLICIT (c) - && (bitmap_bit_p (&map_head, DECL_UID (t)) - || bitmap_bit_p (&map_field_head, DECL_UID (t)) + && (bitmap_bit_p (&map_head, DECL_UID (rt)) + || bitmap_bit_p (&map_field_head, DECL_UID (rt)) || bitmap_bit_p (&map_firstprivate_head, - DECL_UID (t)))) + DECL_UID (rt)))) { remove = true; break; } - if (bitmap_bit_p (&map_field_head, DECL_UID (t))) + if (bitmap_bit_p (&map_field_head, DECL_UID (rt))) break; - if (bitmap_bit_p (&map_head, DECL_UID (t))) + if (bitmap_bit_p (&map_head, DECL_UID (rt))) { if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in motion " - "clauses", t); + "clauses", rt); else if (ort == C_ORT_ACC) error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in data " - "clauses", t); + "clauses", rt); else error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in map " - "clauses", t); + "clauses", rt); remove = true; } else { - bitmap_set_bit (&map_head, DECL_UID (t)); - bitmap_set_bit (&map_field_head, DECL_UID (t)); + bitmap_set_bit (&map_head, DECL_UID (rt)); + bitmap_set_bit (&map_field_head, DECL_UID (rt)); } } } @@ -15108,6 +15054,14 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) OMP_CLAUSE_SIZE (c) = size_zero_node; break; } + else if (!omp_parse_expr (addr_tokens, t)) + { + sorry_at (OMP_CLAUSE_LOCATION (c), + "unsupported map expression %qE", + OMP_CLAUSE_DECL (c)); + remove = true; + break; + } if (t == error_mark_node) { remove = true; @@ -15126,96 +15080,42 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) bias) to zero here, so it is not set erroneously to the pointer size later on in gimplify.cc. 
*/ OMP_CLAUSE_SIZE (c) = size_zero_node; - while (TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) + + c_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + + if (!ai.check_clause (c)) { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == COMPOUND_EXPR) - { - t = TREE_OPERAND (t, 1); - STRIP_NOPS (t); - } - indir_component_ref_p = false; - if (TREE_CODE (t) == COMPONENT_REF - && (TREE_CODE (TREE_OPERAND (t, 0)) == MEM_REF - || TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF - || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF)) - { - t = TREE_OPERAND (TREE_OPERAND (t, 0), 0); - indir_component_ref_p = true; - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); + remove = true; + break; } - if (TREE_CODE (t) == COMPONENT_REF - && OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_) + if (!ai.map_supported_p ()) { - if (DECL_BIT_FIELD (TREE_OPERAND (t, 1))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "bit-field %qE in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (!omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE does not have a mappable type in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (TYPE_ATOMIC (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%<_Atomic%> %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - while (TREE_CODE (t) == COMPONENT_REF) - { - if (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) - == UNION_TYPE) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is a member of a union", t); - remove = true; - break; - } - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF) - { - if (maybe_ne (mem_ref_offset (t), 0)) - error_at (OMP_CLAUSE_LOCATION (c), - "cannot dereference %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - else - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - if (remove) - break; - if (VAR_P (t) || TREE_CODE (t) == PARM_DECL) - { - if (bitmap_bit_p (&map_field_head, DECL_UID (t)) - || (ort != C_ORT_ACC - && bitmap_bit_p (&map_head, DECL_UID (t)))) - break; - } + sorry_at (OMP_CLAUSE_LOCATION (c), + "unsupported map expression %qE", + OMP_CLAUSE_DECL (c)); + remove = true; + break; } + + gcc_assert ((addr_tokens[0]->type == ARRAY_BASE + || addr_tokens[0]->type == STRUCTURE_BASE) + && addr_tokens[1]->type == ACCESS_METHOD); + + t = addr_tokens[1]->expr; + + if (addr_tokens[0]->u.structure_base_kind != BASE_DECL) + goto skip_decl_checks; + + /* For OpenMP, we can access a struct "t" and "t.d" on the same + mapping. OpenACC allows multiple fields of the same structure + to be written. 
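   As a concrete (illustrative) OpenMP example:

       struct S { int a; int b; };
       struct S t;
       #pragma omp target map(tofrom: t) map(tofrom: t.a)
       t.a++;

   and, for OpenACC, something along the lines of
   '#pragma acc parallel copy(t.a, t.b)', which writes several fields of the
   same structure.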
*/ + if (addr_tokens[0]->type == STRUCTURE_BASE + && (bitmap_bit_p (&map_field_head, DECL_UID (t)) + || (ort != C_ORT_ACC + && bitmap_bit_p (&map_head, DECL_UID (t))))) + goto skip_decl_checks; + if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) { error_at (OMP_CLAUSE_LOCATION (c), @@ -15233,7 +15133,6 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) else if ((OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP || (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER)) - && !indir_component_ref_p && !c_mark_addressable (t)) remove = true; else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP @@ -15279,15 +15178,11 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) remove = true; } else if (bitmap_bit_p (&map_head, DECL_UID (t)) - && !bitmap_bit_p (&map_field_head, DECL_UID (t))) + && !bitmap_bit_p (&map_field_head, DECL_UID (t)) + && ort == C_ORT_ACC) { - if (ort == C_ORT_ACC) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", - t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears both in data and map clauses", t); + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", t); remove = true; } else @@ -15325,13 +15220,37 @@ c_finish_omp_clauses (tree clauses, enum c_omp_region_type ort) "%qD appears both in data and map clauses", t); remove = true; } - else + else if (!omp_access_chain_p (addr_tokens, 1)) { bitmap_set_bit (&map_head, DECL_UID (t)); if (t != OMP_CLAUSE_DECL (c) && TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF) bitmap_set_bit (&map_field_head, DECL_UID (t)); } + + skip_decl_checks: + /* If we call omp_expand_map_clause in handle_omp_array_sections, + the containing loop (here) iterates through the new nodes + created by that expansion. Avoid expanding those again (just + by checking the node type). */ + if (!remove + && ort != C_ORT_DECLARE_SIMD + && (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP + || (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER + && (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_REFERENCE) + && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ALWAYS_POINTER + && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH))) + { + grp_start_p = pc; + grp_sentinel = OMP_CLAUSE_CHAIN (c); + tree nc = ai.expand_map_clause (c, OMP_CLAUSE_DECL (c), + addr_tokens, + (ort == C_ORT_OMP_TARGET + || ort == C_ORT_ACC)); + if (nc != error_mark_node) + c = nc; + } } break; diff --git a/gcc/cp/semantics.cc b/gcc/cp/semantics.cc index 7aa81101c63..9ae7b2a43e5 100644 --- a/gcc/cp/semantics.cc +++ b/gcc/cp/semantics.cc @@ -5055,6 +5055,54 @@ omp_privatize_field (tree t, bool shared) return v; } +/* C++ specialisation of the c_omp_address_inspector class. 
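   Its overrides mostly defer to the C++-specific helpers
   (convert_from_reference, build_array_ref and so on), which matter for
   constructs with no C equivalent, e.g. (illustrative) mapping through a
   reference to a pointer:

       void g (int *&rp)
       {
       #pragma omp target map(tofrom: rp[0:10])
         rp[0] = 1;
       }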
*/ + +class cp_omp_address_inspector : public c_omp_address_inspector +{ +public: + cp_omp_address_inspector (location_t loc, tree t) + : c_omp_address_inspector (loc, t) + { + } + + ~cp_omp_address_inspector () + { + } + + bool processing_template_decl_p () + { + return processing_template_decl; + } + + void emit_unmappable_type_notes (tree t) + { + if (TREE_TYPE (t) != error_mark_node + && !COMPLETE_TYPE_P (TREE_TYPE (t))) + cxx_incomplete_type_inform (TREE_TYPE (t)); + } + + tree convert_from_reference (tree x) + { + return ::convert_from_reference (x); + } + + tree build_array_ref (location_t loc, tree arr, tree idx) + { + return ::build_array_ref (loc, arr, idx); + } + + bool check_clause (tree clause) + { + if (TREE_CODE (orig) == COMPONENT_REF + && invalid_nonstatic_memfn_p (EXPR_LOCATION (orig), orig, + tf_warning_or_error)) + return false; + if (!c_omp_address_inspector::check_clause (clause)) + return false; + return true; + } +}; + /* Helper function for handle_omp_array_sections. Called recursively to handle multiple array-section-subscripts. C is the clause, T current expression (initially OMP_CLAUSE_DECL), which is either @@ -5085,59 +5133,22 @@ handle_omp_array_sections_1 (tree c, tree t, vec &types, { if (error_operand_p (t)) return error_mark_node; - if (REFERENCE_REF_P (t) - && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF) - t = TREE_OPERAND (t, 0); - ret = t; - while (TREE_CODE (t) == INDIRECT_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == COMPOUND_EXPR) - { - t = TREE_OPERAND (t, 1); - STRIP_NOPS (t); - } - if (TREE_CODE (t) == COMPONENT_REF - && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO - || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM) - && !type_dependent_expression_p (t)) - { - if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL - && DECL_BIT_FIELD (TREE_OPERAND (t, 1))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "bit-field %qE in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - return error_mark_node; - } - while (TREE_CODE (t) == COMPONENT_REF) - { - if (TREE_TYPE (TREE_OPERAND (t, 0)) - && TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) == UNION_TYPE) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is a member of a union", t); - return error_mark_node; - } - t = TREE_OPERAND (t, 0); - while (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - if (REFERENCE_REF_P (t)) - t = TREE_OPERAND (t, 0); - } + + cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + tree t_refto = ai.maybe_unconvert_ref (t); + + if (!ai.check_clause (c)) + return error_mark_node; + else if (ai.component_access_p () + && (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO + || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_FROM)) + t = ai.get_root_term (true); + else + t = ai.unconverted_ref_origin (); + if (t == error_mark_node) + return error_mark_node; + ret = t_refto; if (TREE_CODE (t) == FIELD_DECL) ret = finish_non_static_data_member (t, NULL_TREE, NULL_TREE); else if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL) @@ -5471,7 +5482,7 @@ handle_omp_array_sections_1 (tree c, tree t, vec &types, /* Handle array sections for clause C. 
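   That is, sections written with the [lower-bound : length] syntax, for
   example (illustrative):

       void h (int *p)
       {
       #pragma omp target map(to: p[10:20])
         p[10]++;
       }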
*/ static bool -handle_omp_array_sections (tree c, enum c_omp_region_type ort) +handle_omp_array_sections (tree &c, enum c_omp_region_type ort) { bool maybe_zero_len = false; unsigned int first_non_one = 0; @@ -5682,111 +5693,72 @@ handle_omp_array_sections (tree c, enum c_omp_region_type ort) OMP_CLAUSE_SIZE (c) = size; if (TREE_CODE (t) == FIELD_DECL) t = finish_non_static_data_member (t, NULL_TREE, NULL_TREE); - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP - || (TREE_CODE (t) == COMPONENT_REF - && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE)) + + if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) return false; - switch (OMP_CLAUSE_MAP_KIND (c)) - { - case GOMP_MAP_ALLOC: - case GOMP_MAP_IF_PRESENT: - case GOMP_MAP_TO: - case GOMP_MAP_FROM: - case GOMP_MAP_TOFROM: - case GOMP_MAP_ALWAYS_TO: - case GOMP_MAP_ALWAYS_FROM: - case GOMP_MAP_ALWAYS_TOFROM: - case GOMP_MAP_RELEASE: - case GOMP_MAP_DELETE: - case GOMP_MAP_FORCE_TO: - case GOMP_MAP_FORCE_FROM: - case GOMP_MAP_FORCE_TOFROM: - case GOMP_MAP_FORCE_PRESENT: - OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1; - break; - default: - break; - } - bool reference_always_pointer = true; - tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), - OMP_CLAUSE_MAP); - if (TREE_CODE (t) == COMPONENT_REF) - { - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ATTACH_DETACH); - if ((ort & C_ORT_OMP_DECLARE_SIMD) == C_ORT_OMP - && TYPE_REF_P (TREE_TYPE (t))) + if (TREE_CODE (first) == INDIRECT_REF) + { + /* Detect and skip adding extra nodes for pointer-to-member + mappings. These are unsupported for now. */ + tree tmp = TREE_OPERAND (first, 0); + + if (TREE_CODE (tmp) == NON_LVALUE_EXPR) + tmp = TREE_OPERAND (tmp, 0); + + if (TREE_CODE (tmp) == INDIRECT_REF) + tmp = TREE_OPERAND (tmp, 0); + + if (TREE_CODE (tmp) == POINTER_PLUS_EXPR) { - if (TREE_CODE (TREE_TYPE (TREE_TYPE (t))) == ARRAY_TYPE) - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER); - else - t = convert_from_reference (t); - - reference_always_pointer = false; + tree offset = TREE_OPERAND (tmp, 1); + STRIP_NOPS (offset); + if (TYPE_PTRMEM_P (TREE_TYPE (offset))) + { + sorry_at (OMP_CLAUSE_LOCATION (c), + "pointer-to-member mapping %qE not supported", + OMP_CLAUSE_DECL (c)); + return true; + } } } - else if (REFERENCE_REF_P (t) - && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF) - { - gomp_map_kind k; - if ((ort & C_ORT_OMP_DECLARE_SIMD) == C_ORT_OMP - && TREE_CODE (TREE_TYPE (t)) == POINTER_TYPE) - k = GOMP_MAP_ATTACH_DETACH; - else - { - t = TREE_OPERAND (t, 0); - k = (ort == C_ORT_ACC - ? 
GOMP_MAP_ATTACH_DETACH : GOMP_MAP_ALWAYS_POINTER); - } - OMP_CLAUSE_SET_MAP_KIND (c2, k); - } - else - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER); - OMP_CLAUSE_MAP_IMPLICIT (c2) = OMP_CLAUSE_MAP_IMPLICIT (c); - if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER - && !cxx_mark_addressable (t)) - return false; - OMP_CLAUSE_DECL (c2) = t; - t = build_fold_addr_expr (first); - t = fold_convert_loc (OMP_CLAUSE_LOCATION (c), - ptrdiff_type_node, t); - tree ptr = OMP_CLAUSE_DECL (c2); - ptr = convert_from_reference (ptr); - if (!INDIRECT_TYPE_P (TREE_TYPE (ptr))) - ptr = build_fold_addr_expr (ptr); - t = fold_build2_loc (OMP_CLAUSE_LOCATION (c), MINUS_EXPR, - ptrdiff_type_node, t, - fold_convert_loc (OMP_CLAUSE_LOCATION (c), - ptrdiff_type_node, ptr)); - OMP_CLAUSE_SIZE (c2) = t; - OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); - OMP_CLAUSE_CHAIN (c) = c2; - ptr = OMP_CLAUSE_DECL (c2); - if (reference_always_pointer - && OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER - && TYPE_REF_P (TREE_TYPE (ptr)) - && INDIRECT_TYPE_P (TREE_TYPE (TREE_TYPE (ptr)))) + /* FIRST represents the first item of data that we are mapping. + E.g. if we're mapping an array, FIRST might resemble + "foo.bar.myarray[0]". */ + + auto_vec addr_tokens; + + if (!omp_parse_expr (addr_tokens, first)) + return true; + + cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + + tree nc = ai.expand_map_clause (c, first, addr_tokens, + (ort == C_ORT_OMP_TARGET + || ort == C_ORT_ACC)); + if (nc != error_mark_node) { - tree c3 = build_omp_clause (OMP_CLAUSE_LOCATION (c), - OMP_CLAUSE_MAP); - OMP_CLAUSE_SET_MAP_KIND (c3, OMP_CLAUSE_MAP_KIND (c2)); - OMP_CLAUSE_MAP_IMPLICIT (c2) = OMP_CLAUSE_MAP_IMPLICIT (c); - OMP_CLAUSE_DECL (c3) = ptr; - if (OMP_CLAUSE_MAP_KIND (c2) == GOMP_MAP_ALWAYS_POINTER - || OMP_CLAUSE_MAP_KIND (c2) == GOMP_MAP_ATTACH_DETACH) - { - OMP_CLAUSE_DECL (c2) = build_simple_mem_ref (ptr); - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER); - } - else - OMP_CLAUSE_DECL (c2) = convert_from_reference (ptr); - OMP_CLAUSE_SIZE (c3) = size_zero_node; - OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c2); - OMP_CLAUSE_CHAIN (c2) = c3; + using namespace omp_addr_tokenizer; + + if (ai.maybe_zero_length_array_section (c)) + OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (c) = 1; + + /* !!! If we're accessing a base decl via chained access + methods (e.g. multiple indirections), duplicate clause + detection won't work properly. Skip it in that case. 
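   A chained access in this sense reaches the base decl only through an
   intermediate dereference, e.g. (illustrative):

       int **pp;
       void k (void)
       {
       #pragma omp target map(tofrom: pp[0][0:10])
         pp[0][0] = 1;
       }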
*/ + if ((addr_tokens[0]->type == STRUCTURE_BASE + || addr_tokens[0]->type == ARRAY_BASE) + && addr_tokens[0]->u.structure_base_kind == BASE_DECL + && addr_tokens[1]->type == ACCESS_METHOD + && omp_access_chain_p (addr_tokens, 1)) + c = nc; + + return false; } } } + return false; } @@ -7162,7 +7134,8 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) "%qD appears more than once in data clauses", t); remove = true; } - else if (bitmap_bit_p (&map_head, DECL_UID (t))) + else if (bitmap_bit_p (&map_head, DECL_UID (t)) + || bitmap_bit_p (&map_field_head, DECL_UID (t))) { if (ort == C_ORT_ACC) error_at (OMP_CLAUSE_LOCATION (c), @@ -7983,6 +7956,9 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) case OMP_CLAUSE_FROM: case OMP_CLAUSE__CACHE_: { + using namespace omp_addr_tokenizer; + auto_vec addr_tokens; + t = OMP_CLAUSE_DECL (c); if (TREE_CODE (t) == TREE_LIST) { @@ -8009,58 +7985,73 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) } while (TREE_CODE (t) == ARRAY_REF) t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == COMPONENT_REF - && TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE) + + if (type_dependent_expression_p (t)) + break; + + cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + + if (!ai.map_supported_p () + || !omp_parse_expr (addr_tokens, t)) { - do - { - t = TREE_OPERAND (t, 0); - if (REFERENCE_REF_P (t)) - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - while (TREE_CODE (t) == COMPONENT_REF - || TREE_CODE (t) == ARRAY_REF); + sorry_at (OMP_CLAUSE_LOCATION (c), + "unsupported map expression %qE", + OMP_CLAUSE_DECL (c)); + remove = true; + break; + } + + /* This check is to determine if this will be the only map + clause created for this node. Otherwise, we'll check + the following FIRSTPRIVATE_POINTER, + FIRSTPRIVATE_REFERENCE or ATTACH_DETACH node on the next + iteration(s) of the loop. 
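   As an illustrative reading, an access such as:

       struct S { int arr[10]; int x; };
       struct S t;
       #pragma omp target map(tofrom: t.arr[3])
       t.arr[3]++;

   should tokenize roughly as STRUCTURE_BASE (BASE_DECL 't'), ACCESS_DIRECT,
   COMPONENT_SELECTOR ('arr'), ACCESS_INDEXED_ARRAY, matching the pattern
   tested below, so only a single map clause is created for it.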
*/ + if (addr_tokens.length () >= 4 + && addr_tokens[0]->type == STRUCTURE_BASE + && addr_tokens[0]->u.structure_base_kind == BASE_DECL + && addr_tokens[1]->type == ACCESS_METHOD + && addr_tokens[2]->type == COMPONENT_SELECTOR + && addr_tokens[3]->type == ACCESS_METHOD + && (addr_tokens[3]->u.access_kind == ACCESS_DIRECT + || (addr_tokens[3]->u.access_kind + == ACCESS_INDEXED_ARRAY))) + { + tree rt = addr_tokens[1]->expr; + + gcc_assert (DECL_P (rt)); if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP && OMP_CLAUSE_MAP_IMPLICIT (c) - && (bitmap_bit_p (&map_head, DECL_UID (t)) - || bitmap_bit_p (&map_field_head, DECL_UID (t)) + && (bitmap_bit_p (&map_head, DECL_UID (rt)) + || bitmap_bit_p (&map_field_head, DECL_UID (rt)) || bitmap_bit_p (&map_firstprivate_head, - DECL_UID (t)))) + DECL_UID (rt)))) { remove = true; break; } - if (bitmap_bit_p (&map_field_head, DECL_UID (t))) + if (bitmap_bit_p (&map_field_head, DECL_UID (rt))) break; - if (bitmap_bit_p (&map_head, DECL_UID (t))) + if (bitmap_bit_p (&map_head, DECL_UID (rt))) { if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in motion" - " clauses", t); + " clauses", rt); else if (ort == C_ORT_ACC) error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in data" - " clauses", t); + " clauses", rt); else error_at (OMP_CLAUSE_LOCATION (c), "%qD appears more than once in map" - " clauses", t); + " clauses", rt); remove = true; } else { - bitmap_set_bit (&map_head, DECL_UID (t)); - bitmap_set_bit (&map_field_head, DECL_UID (t)); + bitmap_set_bit (&map_head, DECL_UID (rt)); + bitmap_set_bit (&map_field_head, DECL_UID (rt)); } } } @@ -8077,6 +8068,16 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) OMP_CLAUSE_SIZE (c) = size_zero_node; break; } + else if (type_dependent_expression_p (t)) + break; + else if (!omp_parse_expr (addr_tokens, t)) + { + sorry_at (OMP_CLAUSE_LOCATION (c), + "unsupported map expression %qE", + OMP_CLAUSE_DECL (c)); + remove = true; + break; + } if (t == error_mark_node) { remove = true; @@ -8095,110 +8096,50 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) bias) to zero here, so it is not set erroneously to the pointer size later on in gimplify.cc. 
*/ OMP_CLAUSE_SIZE (c) = size_zero_node; - if (REFERENCE_REF_P (t) - && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF) + + cp_omp_address_inspector ai (OMP_CLAUSE_LOCATION (c), t); + + if (!ai.check_clause (c)) { - t = TREE_OPERAND (t, 0); - if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP - && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH) - OMP_CLAUSE_DECL (c) = t; + remove = true; + break; } - while (TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) + + if (!ai.map_supported_p ()) { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); + sorry_at (OMP_CLAUSE_LOCATION (c), + "unsupported map expression %qE", + OMP_CLAUSE_DECL (c)); + remove = true; + break; } - while (TREE_CODE (t) == COMPOUND_EXPR) - { - t = TREE_OPERAND (t, 1); - STRIP_NOPS (t); - } - if (TREE_CODE (t) == COMPONENT_REF - && invalid_nonstatic_memfn_p (EXPR_LOCATION (t), t, - tf_warning_or_error)) - remove = true; - indir_component_ref_p = false; - if (TREE_CODE (t) == COMPONENT_REF - && (TREE_CODE (TREE_OPERAND (t, 0)) == INDIRECT_REF - || TREE_CODE (TREE_OPERAND (t, 0)) == ARRAY_REF)) - { - t = TREE_OPERAND (TREE_OPERAND (t, 0), 0); - indir_component_ref_p = true; - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - if (TREE_CODE (t) == COMPONENT_REF - && OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_) - { - if (type_dependent_expression_p (t)) - break; - if (TREE_CODE (TREE_OPERAND (t, 1)) == FIELD_DECL - && DECL_BIT_FIELD (TREE_OPERAND (t, 1))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "bit-field %qE in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - remove = true; - } - else if (!omp_mappable_type (TREE_TYPE (t))) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE does not have a mappable type in %qs clause", - t, omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - if (TREE_TYPE (t) != error_mark_node - && !COMPLETE_TYPE_P (TREE_TYPE (t))) - cxx_incomplete_type_inform (TREE_TYPE (t)); - remove = true; - } - while (TREE_CODE (t) == COMPONENT_REF) - { - if (TREE_TYPE (TREE_OPERAND (t, 0)) - && (TREE_CODE (TREE_TYPE (TREE_OPERAND (t, 0))) - == UNION_TYPE)) - { - error_at (OMP_CLAUSE_LOCATION (c), - "%qE is a member of a union", t); - remove = true; - break; - } - t = TREE_OPERAND (t, 0); - if (TREE_CODE (t) == MEM_REF) - { - if (maybe_ne (mem_ref_offset (t), 0)) - error_at (OMP_CLAUSE_LOCATION (c), - "cannot dereference %qE in %qs clause", t, - omp_clause_code_name[OMP_CLAUSE_CODE (c)]); - else - t = TREE_OPERAND (t, 0); - } - while (TREE_CODE (t) == MEM_REF - || TREE_CODE (t) == INDIRECT_REF - || TREE_CODE (t) == ARRAY_REF) - { - t = TREE_OPERAND (t, 0); - STRIP_NOPS (t); - if (TREE_CODE (t) == POINTER_PLUS_EXPR) - t = TREE_OPERAND (t, 0); - } - } - if (remove) - break; - if (REFERENCE_REF_P (t)) - t = TREE_OPERAND (t, 0); - if (VAR_P (t) || TREE_CODE (t) == PARM_DECL) - { - if (bitmap_bit_p (&map_field_head, DECL_UID (t)) - || (ort != C_ORT_ACC - && bitmap_bit_p (&map_head, DECL_UID (t)))) - goto handle_map_references; - } - } - if (!processing_template_decl - && TREE_CODE (t) == FIELD_DECL) + + gcc_assert ((addr_tokens[0]->type == ARRAY_BASE + || addr_tokens[0]->type == STRUCTURE_BASE) + && addr_tokens[1]->type == ACCESS_METHOD); + + t = addr_tokens[1]->expr; + + /* This is used to prevent cxx_mark_addressable from being called + on 'this' for expressions like 'this->a', i.e. typical member + accesses. 
*/ + indir_component_ref_p + = (addr_tokens[0]->type == STRUCTURE_BASE + && addr_tokens[1]->u.access_kind != ACCESS_DIRECT); + + if (addr_tokens[0]->u.structure_base_kind != BASE_DECL) + goto skip_decl_checks; + + /* For OpenMP, we can access a struct "t" and "t.d" on the same + mapping. OpenACC allows multiple fields of the same structure + to be written. */ + if (addr_tokens[0]->type == STRUCTURE_BASE + && (bitmap_bit_p (&map_field_head, DECL_UID (t)) + || (ort != C_ORT_ACC + && bitmap_bit_p (&map_head, DECL_UID (t))))) + goto skip_decl_checks; + + if (!processing_template_decl && TREE_CODE (t) == FIELD_DECL) { OMP_CLAUSE_DECL (c) = finish_non_static_data_member (t, NULL_TREE, NULL_TREE); @@ -8236,12 +8177,17 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) || (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER)) && !indir_component_ref_p + && (t != current_class_ptr + || OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP + || OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH) && !cxx_mark_addressable (t)) remove = true; else if (!(OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER || (OMP_CLAUSE_MAP_KIND (c) - == GOMP_MAP_FIRSTPRIVATE_POINTER))) + == GOMP_MAP_FIRSTPRIVATE_POINTER) + || (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_ATTACH_DETACH))) && t == OMP_CLAUSE_DECL (c) && !type_dependent_expression_p (t) && !omp_mappable_type (TYPE_REF_P (TREE_TYPE (t)) @@ -8285,20 +8231,20 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) remove = true; } else if (bitmap_bit_p (&map_head, DECL_UID (t)) - && !bitmap_bit_p (&map_field_head, DECL_UID (t))) + && !bitmap_bit_p (&map_field_head, DECL_UID (t)) + && ort == C_ORT_ACC) { - if (ort == C_ORT_ACC) - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears more than once in data clauses", - t); - else - error_at (OMP_CLAUSE_LOCATION (c), - "%qD appears both in data and map clauses", t); + error_at (OMP_CLAUSE_LOCATION (c), + "%qD appears more than once in data clauses", t); remove = true; } else bitmap_set_bit (&map_firstprivate_head, DECL_UID (t)); } + else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP + && (OMP_CLAUSE_MAP_KIND (c) + == GOMP_MAP_FIRSTPRIVATE_REFERENCE)) + bitmap_set_bit (&map_firstprivate_head, DECL_UID (t)); else if (bitmap_bit_p (&map_head, DECL_UID (t)) && !bitmap_bit_p (&map_field_head, DECL_UID (t))) { @@ -8331,7 +8277,7 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) "%qD appears both in data and map clauses", t); remove = true; } - else + else if (!omp_access_chain_p (addr_tokens, 1)) { bitmap_set_bit (&map_head, DECL_UID (t)); @@ -8345,49 +8291,31 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) 0)))))) bitmap_set_bit (&map_field_head, DECL_UID (t)); } - handle_map_references: + + skip_decl_checks: + /* If we call omp_expand_map_clause in handle_omp_array_sections, + the containing loop (here) iterates through the new nodes + created by that expansion. Avoid expanding those again (just + by checking the node type). 
*/ if (!remove && !processing_template_decl && ort != C_ORT_DECLARE_SIMD - && TYPE_REF_P (TREE_TYPE (OMP_CLAUSE_DECL (c)))) + && (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP + || ((OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_POINTER) + && (OMP_CLAUSE_MAP_KIND (c) + != GOMP_MAP_FIRSTPRIVATE_REFERENCE) + && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ALWAYS_POINTER + && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH_DETACH))) { - t = OMP_CLAUSE_DECL (c); - if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP) - { - OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t); - if (OMP_CLAUSE_SIZE (c) == NULL_TREE) - OMP_CLAUSE_SIZE (c) - = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t))); - } - else if (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_FIRSTPRIVATE_POINTER - && (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_FIRSTPRIVATE_REFERENCE) - && (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_ALWAYS_POINTER) - && (OMP_CLAUSE_MAP_KIND (c) - != GOMP_MAP_ATTACH_DETACH)) - { - grp_start_p = pc; - grp_sentinel = OMP_CLAUSE_CHAIN (c); - - tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), - OMP_CLAUSE_MAP); - if (TREE_CODE (t) == COMPONENT_REF) - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER); - else - OMP_CLAUSE_SET_MAP_KIND (c2, - GOMP_MAP_FIRSTPRIVATE_REFERENCE); - OMP_CLAUSE_DECL (c2) = t; - OMP_CLAUSE_SIZE (c2) = size_zero_node; - OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c); - OMP_CLAUSE_CHAIN (c) = c2; - OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t); - if (OMP_CLAUSE_SIZE (c) == NULL_TREE) - OMP_CLAUSE_SIZE (c) - = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t))); - c = c2; - } + grp_start_p = pc; + grp_sentinel = OMP_CLAUSE_CHAIN (c); + tree nc = ai.expand_map_clause (c, OMP_CLAUSE_DECL (c), + addr_tokens, + (ort == C_ORT_OMP_TARGET + || ort == C_ORT_ACC)); + if (nc != error_mark_node) + c = nc; } } break; @@ -8786,7 +8714,8 @@ finish_omp_clauses (tree clauses, enum c_omp_region_type ort) if (grp_start_p) { /* If we found a clause to remove, we want to remove the whole - expanded group, otherwise gimplify can get confused. */ + expanded group, otherwise gimplify + (omp_resolve_clause_dependencies) can get confused. */ *grp_start_p = grp_sentinel; pc = grp_start_p; grp_start_p = NULL; diff --git a/gcc/fortran/trans-openmp.cc b/gcc/fortran/trans-openmp.cc index cf12e032fbd..967eb336a18 100644 --- a/gcc/fortran/trans-openmp.cc +++ b/gcc/fortran/trans-openmp.cc @@ -2405,8 +2405,9 @@ static vec *doacross_steps; static void gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n, - tree decl, bool element, gomp_map_kind ptr_kind, - tree &node, tree &node2, tree &node3, tree &node4) + tree decl, bool element, bool openmp, + gomp_map_kind ptr_kind, tree &node, tree &node2, + tree &node3, tree &node4) { gfc_se se; tree ptr, ptr2; @@ -2473,7 +2474,7 @@ gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n, { tree type = TREE_TYPE (decl); ptr2 = gfc_conv_descriptor_data_get (decl); - if (ptr_kind != GOMP_MAP_ALWAYS_POINTER) + if (ptr_kind != GOMP_MAP_ATTACH_DETACH || !openmp) { /* We only create a GOMP_MAP_TO_PSET mapping for derived-type members here for OpenACC. @@ -2496,7 +2497,7 @@ gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n, struct – and adding an 'alloc: for the 'desc.data' pointer, which would break as the 'desc' (the descriptor) is also mapped (see node4 above). 
*/ - if (ptr_kind == GOMP_MAP_ATTACH_DETACH) + if (ptr_kind == GOMP_MAP_ATTACH_DETACH && !openmp) STRIP_NOPS (OMP_CLAUSE_DECL (node3)); } else @@ -2514,7 +2515,7 @@ gfc_trans_omp_array_section (stmtblock_t *block, gfc_omp_namelist *n, decl, offset, NULL_TREE, NULL_TREE); OMP_CLAUSE_DECL (node) = offset; - if (ptr_kind == GOMP_MAP_ALWAYS_POINTER) + if (ptr_kind == GOMP_MAP_ATTACH_DETACH && openmp) return; } else @@ -3422,8 +3423,9 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, && !(POINTER_TYPE_P (type) && GFC_DESCRIPTOR_TYPE_P (TREE_TYPE (type)))) k = GOMP_MAP_FIRSTPRIVATE_POINTER; - gfc_trans_omp_array_section (block, n, decl, element, k, - node, node2, node3, node4); + gfc_trans_omp_array_section (block, n, decl, element, + !openacc, k, node, node2, + node3, node4); } else if (n->expr && n->expr->expr_type == EXPR_VARIABLE @@ -3449,10 +3451,7 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, { node2 = build_omp_clause (input_location, OMP_CLAUSE_MAP); - gomp_map_kind kind - = (openacc ? GOMP_MAP_ATTACH_DETACH - : GOMP_MAP_ALWAYS_POINTER); - OMP_CLAUSE_SET_MAP_KIND (node2, kind); + OMP_CLAUSE_SET_MAP_KIND (node2, GOMP_MAP_ATTACH_DETACH); OMP_CLAUSE_DECL (node2) = POINTER_TYPE_P (TREE_TYPE (se.expr)) ? se.expr @@ -3585,9 +3584,7 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, node2 = build_omp_clause (input_location, OMP_CLAUSE_MAP); OMP_CLAUSE_SET_MAP_KIND (node2, - openacc - ? GOMP_MAP_ATTACH_DETACH - : GOMP_MAP_ALWAYS_POINTER); + GOMP_MAP_ATTACH_DETACH); OMP_CLAUSE_DECL (node2) = build_fold_addr_expr (data); OMP_CLAUSE_SIZE (node2) = size_int (0); } @@ -3716,9 +3713,7 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, node3 = build_omp_clause (input_location, OMP_CLAUSE_MAP); OMP_CLAUSE_SET_MAP_KIND (node3, - openacc - ? GOMP_MAP_ATTACH_DETACH - : GOMP_MAP_ALWAYS_POINTER); + GOMP_MAP_ATTACH_DETACH); OMP_CLAUSE_DECL (node3) = gfc_conv_descriptor_data_get (inner); /* Similar to gfc_trans_omp_array_section (details @@ -3741,11 +3736,10 @@ gfc_trans_omp_clauses (stmtblock_t *block, gfc_omp_clauses *clauses, { /* An array element or section. */ bool element = lastref->u.ar.type == AR_ELEMENT; - gomp_map_kind kind = (openacc ? GOMP_MAP_ATTACH_DETACH - : GOMP_MAP_ALWAYS_POINTER); + gomp_map_kind kind = GOMP_MAP_ATTACH_DETACH; gfc_trans_omp_array_section (block, n, inner, element, - kind, node, node2, node3, - node4); + !openacc, kind, node, node2, + node3, node4); } else gcc_unreachable (); diff --git a/gcc/gimplify.cc b/gcc/gimplify.cc index 9e0e3429958..e245adfec3a 100644 --- a/gcc/gimplify.cc +++ b/gcc/gimplify.cc @@ -8835,8 +8835,7 @@ build_omp_struct_comp_nodes (enum tree_code code, tree grp_start, tree grp_end, if (grp_mid && OMP_CLAUSE_CODE (grp_mid) == OMP_CLAUSE_MAP - && (OMP_CLAUSE_MAP_KIND (grp_mid) == GOMP_MAP_ALWAYS_POINTER - || OMP_CLAUSE_MAP_KIND (grp_mid) == GOMP_MAP_ATTACH_DETACH)) + && OMP_CLAUSE_MAP_KIND (grp_mid) == GOMP_MAP_ALWAYS_POINTER) { tree c3 = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP); @@ -8916,6 +8915,12 @@ struct omp_mapping_group { /* If we've removed the group but need to reindex, mark the group as deleted. */ bool deleted; + /* The group points to an already-created "GOMP_MAP_STRUCT + GOMP_MAP_ATTACH_DETACH" pair. */ + bool reprocess_struct; + /* The group should use "zero-length" allocations for pointers that are not + mapped "to" on the same directive. 
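   For example (illustrative), with:

       struct S { int *ptr; };
       struct S s;
       void q (void)
       {
       #pragma omp target enter data map(to: s.ptr[0:10])
       }

   the pointer member 's.ptr' is not itself mapped 'to' by the directive, so
   any node created for it should have zero size rather than implicitly
   allocating space for the pointer.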
*/ + bool fragile; struct omp_mapping_group *sibling; struct omp_mapping_group *next; }; @@ -8957,38 +8962,6 @@ omp_get_base_pointer (tree expr) return NULL_TREE; } -/* Remove COMPONENT_REFS and indirections from EXPR. */ - -static tree -omp_strip_components_and_deref (tree expr) -{ - while (TREE_CODE (expr) == COMPONENT_REF - || TREE_CODE (expr) == INDIRECT_REF - || (TREE_CODE (expr) == MEM_REF - && integer_zerop (TREE_OPERAND (expr, 1))) - || TREE_CODE (expr) == POINTER_PLUS_EXPR - || TREE_CODE (expr) == COMPOUND_EXPR) - if (TREE_CODE (expr) == COMPOUND_EXPR) - expr = TREE_OPERAND (expr, 1); - else - expr = TREE_OPERAND (expr, 0); - - STRIP_NOPS (expr); - - return expr; -} - -static tree -omp_strip_indirections (tree expr) -{ - while (TREE_CODE (expr) == INDIRECT_REF - || (TREE_CODE (expr) == MEM_REF - && integer_zerop (TREE_OPERAND (expr, 1)))) - expr = TREE_OPERAND (expr, 0); - - return expr; -} - /* An attach or detach operation depends directly on the address being attached/detached. Return that address, or none if there are no attachments/detachments. */ @@ -9190,6 +9163,8 @@ omp_gather_mapping_groups_1 (tree *list_p, vec *groups, grp.mark = UNVISITED; grp.sibling = NULL; grp.deleted = false; + grp.reprocess_struct = false; + grp.fragile = false; grp.next = NULL; groups->safe_push (grp); @@ -9317,6 +9292,8 @@ omp_group_base (omp_mapping_group *grp, unsigned int *chained, *firstprivate = OMP_CLAUSE_DECL (node); node = OMP_CLAUSE_CHAIN (node); } + else if (OMP_CLAUSE_MAP_KIND (node) == GOMP_MAP_ATTACH_DETACH) + node = OMP_CLAUSE_CHAIN (node); *chained = num_mappings; return node; } @@ -9368,6 +9345,9 @@ omp_index_mapping_groups_1 (hash_mapreprocess_struct) + continue; + tree fpp; unsigned int chained; tree node = omp_group_base (grp, &chained, &fpp); @@ -9861,6 +9841,89 @@ omp_lastprivate_for_combined_outer_constructs (struct gimplify_omp_ctx *octx, omp_notice_variable (octx, decl, true); } +/* We might have indexed several groups for DECL, e.g. a "TO" mapping and also + a "FIRSTPRIVATE" mapping. Return the one that isn't firstprivate, etc. */ + +static omp_mapping_group * +omp_get_nonfirstprivate_group (hash_map *grpmap, + tree decl, bool allow_deleted = false) +{ + omp_mapping_group **to_group_p = grpmap->get (decl); + + if (!to_group_p) + return NULL; + + omp_mapping_group *to_group = *to_group_p; + + for (; to_group; to_group = to_group->sibling) + { + tree grp_end = to_group->grp_end; + switch (OMP_CLAUSE_MAP_KIND (grp_end)) + { + case GOMP_MAP_FIRSTPRIVATE_POINTER: + case GOMP_MAP_FIRSTPRIVATE_REFERENCE: + break; + + default: + if (allow_deleted || !to_group->deleted) + return to_group; + } + } + + return NULL; +} + +/* Return TRUE if the directive (whose clauses are described by the hash table + of mapping groups, GRPMAP) maps DECL explicitly. If TO_SPECIFICALLY is + true, only count TO mappings. If ALLOW_DELETED is true, ignore the + "deleted" flag for groups. If CONTAINED_IN_STRUCT is true, also return + TRUE if DECL is mapped as a member of a whole-struct mapping. */ + +static bool +omp_directive_maps_explicitly (hash_map *grpmap, + tree decl, omp_mapping_group **base_group, + bool to_specifically, bool allow_deleted, + bool contained_in_struct) +{ + omp_mapping_group *decl_group + = omp_get_nonfirstprivate_group (grpmap, decl, allow_deleted); + + *base_group = NULL; + + if (decl_group) + { + tree grp_first = *decl_group->grp_start; + /* We might be called during omp_build_struct_sibling_lists, when + GOMP_MAP_STRUCT might have been inserted at the start of the group. 
+ Skip over that, and also possibly the node after it. */ + if (OMP_CLAUSE_MAP_KIND (grp_first) == GOMP_MAP_STRUCT) + { + grp_first = OMP_CLAUSE_CHAIN (grp_first); + if (OMP_CLAUSE_MAP_KIND (grp_first) == GOMP_MAP_FIRSTPRIVATE_POINTER + || (OMP_CLAUSE_MAP_KIND (grp_first) + == GOMP_MAP_FIRSTPRIVATE_REFERENCE) + || OMP_CLAUSE_MAP_KIND (grp_first) == GOMP_MAP_ATTACH_DETACH) + grp_first = OMP_CLAUSE_CHAIN (grp_first); + } + enum gomp_map_kind first_kind = OMP_CLAUSE_MAP_KIND (grp_first); + if (!to_specifically + || GOMP_MAP_COPY_TO_P (first_kind) + || first_kind == GOMP_MAP_ALLOC) + { + *base_group = decl_group; + return true; + } + } + + if (contained_in_struct + && omp_mapped_by_containing_struct (grpmap, decl, base_group)) + return true; + + return false; +} + /* If we have mappings INNER and OUTER, where INNER is a component access and OUTER is a mapping of the whole containing struct, check that the mappings are compatible. We'll be deleting the inner mapping, so we need to make @@ -9927,6 +9990,257 @@ omp_check_mapping_compatibility (location_t loc, return false; } +/* This function handles several cases where clauses on a mapping directive + can interact with each other. + + If we have a FIRSTPRIVATE_POINTER node and we're also mapping the pointer + on the same directive, change the mapping of the first node to + ATTACH_DETACH. We should have detected that this will happen already in + c-omp.cc:c_omp_adjust_map_clauses and marked the appropriate decl + as addressable. (If we didn't, bail out.) + + If we have a FIRSTPRIVATE_REFERENCE (for a reference to pointer) and we're + mapping the base pointer also, we may need to change the mapping type to + ATTACH_DETACH and synthesize an alloc node for the reference itself. + + If we have an ATTACH_DETACH node, this is an array section with a pointer + base. If we're mapping the base on the same directive too, we can drop its + mapping. However, if we have a reference to pointer, make other appropriate + adjustments to the mapping nodes instead. + + If we have a component access but we're also mapping the whole of the + containing struct, drop the former access. + + If the expression is a component access, and we're also mapping a base + pointer used in that component access in the same expression, change the + mapping type of the latter to ALLOC (ready for processing by + omp_build_struct_sibling_lists). 
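   As one illustrative case of the first situation described above:

       void m (int *p)
       {
       #pragma omp target map(to: p) map(tofrom: p[0:10])
         p[0]++;
       }

   Here the FIRSTPRIVATE_POINTER node for the array section's base pointer can
   be turned into a GOMP_MAP_ATTACH_DETACH node, because 'p' itself is also
   mapped explicitly on the same directive.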
*/ + +void +omp_resolve_clause_dependencies (enum tree_code code, + vec *groups, + hash_map *grpmap) +{ + int i; + omp_mapping_group *grp; + bool repair_chain = false; + + FOR_EACH_VEC_ELT (*groups, i, grp) + { + tree grp_end = grp->grp_end; + tree decl = OMP_CLAUSE_DECL (grp_end); + + gcc_assert (OMP_CLAUSE_CODE (grp_end) == OMP_CLAUSE_MAP); + + switch (OMP_CLAUSE_MAP_KIND (grp_end)) + { + case GOMP_MAP_FIRSTPRIVATE_POINTER: + { + omp_mapping_group *to_group + = omp_get_nonfirstprivate_group (grpmap, decl); + + if (!to_group || to_group == grp) + continue; + + tree grp_first = *to_group->grp_start; + enum gomp_map_kind first_kind = OMP_CLAUSE_MAP_KIND (grp_first); + + if ((GOMP_MAP_COPY_TO_P (first_kind) + || first_kind == GOMP_MAP_ALLOC) + && (OMP_CLAUSE_MAP_KIND (to_group->grp_end) + != GOMP_MAP_FIRSTPRIVATE_POINTER)) + { + gcc_assert (TREE_ADDRESSABLE (OMP_CLAUSE_DECL (grp_end))); + OMP_CLAUSE_SET_MAP_KIND (grp_end, GOMP_MAP_ATTACH_DETACH); + } + } + break; + + case GOMP_MAP_FIRSTPRIVATE_REFERENCE: + { + tree ptr = build_fold_indirect_ref (decl); + + omp_mapping_group *to_group + = omp_get_nonfirstprivate_group (grpmap, ptr); + + if (!to_group || to_group == grp) + continue; + + tree grp_first = *to_group->grp_start; + enum gomp_map_kind first_kind = OMP_CLAUSE_MAP_KIND (grp_first); + + if (GOMP_MAP_COPY_TO_P (first_kind) + || first_kind == GOMP_MAP_ALLOC) + { + OMP_CLAUSE_SET_MAP_KIND (grp_end, GOMP_MAP_ATTACH_DETACH); + OMP_CLAUSE_DECL (grp_end) = ptr; + if ((OMP_CLAUSE_CHAIN (*to_group->grp_start) + == to_group->grp_end) + && (OMP_CLAUSE_MAP_KIND (to_group->grp_end) + == GOMP_MAP_FIRSTPRIVATE_REFERENCE)) + { + gcc_assert (TREE_ADDRESSABLE + (OMP_CLAUSE_DECL (to_group->grp_end))); + OMP_CLAUSE_SET_MAP_KIND (to_group->grp_end, + GOMP_MAP_ATTACH_DETACH); + + location_t loc = OMP_CLAUSE_LOCATION (to_group->grp_end); + tree alloc + = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (alloc, GOMP_MAP_ALLOC); + tree tmp = build_fold_addr_expr (OMP_CLAUSE_DECL + (to_group->grp_end)); + tree char_ptr_type = build_pointer_type (char_type_node); + OMP_CLAUSE_DECL (alloc) + = build2 (MEM_REF, char_type_node, + tmp, + build_int_cst (char_ptr_type, 0)); + OMP_CLAUSE_SIZE (alloc) = TYPE_SIZE_UNIT (TREE_TYPE (tmp)); + + OMP_CLAUSE_CHAIN (alloc) + = OMP_CLAUSE_CHAIN (*to_group->grp_start); + OMP_CLAUSE_CHAIN (*to_group->grp_start) = alloc; + } + } + } + break; + + case GOMP_MAP_ATTACH_DETACH: + case GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION: + { + tree base_ptr, referenced_ptr_node = NULL_TREE; + + while (TREE_CODE (decl) == ARRAY_REF) + decl = TREE_OPERAND (decl, 0); + + if (TREE_CODE (decl) == INDIRECT_REF) + decl = TREE_OPERAND (decl, 0); + + /* Only component accesses. */ + if (DECL_P (decl)) + continue; + + /* We want the pointer itself when checking if the base pointer is + mapped elsewhere in the same directive -- if we have a + reference to the pointer, don't use that. */ + + if (TREE_CODE (TREE_TYPE (decl)) == REFERENCE_TYPE + && TREE_CODE (TREE_TYPE (TREE_TYPE (decl))) == POINTER_TYPE) + { + referenced_ptr_node = OMP_CLAUSE_CHAIN (*grp->grp_start); + base_ptr = OMP_CLAUSE_DECL (referenced_ptr_node); + } + else + base_ptr = decl; + + gomp_map_kind zlas_kind + = (code == OACC_EXIT_DATA || code == OMP_TARGET_EXIT_DATA) + ? 
GOMP_MAP_DETACH : GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION; + + if (TREE_CODE (TREE_TYPE (base_ptr)) == POINTER_TYPE) + { + /* If we map the base TO, and we're doing an attachment, we can + skip the TO mapping altogether and create an ALLOC mapping + instead, since the attachment will overwrite the device + pointer in that location immediately anyway. Otherwise, + change our mapping to + GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION in case the + attachment target has not been copied to the device already + by some earlier directive. */ + + bool base_mapped_to = false; + + omp_mapping_group *base_group; + + if (omp_directive_maps_explicitly (grpmap, base_ptr, + &base_group, false, true, + false)) + { + if (referenced_ptr_node) + { + base_mapped_to = true; + if ((OMP_CLAUSE_MAP_KIND (base_group->grp_end) + == GOMP_MAP_ATTACH_DETACH) + && (OMP_CLAUSE_CHAIN (*base_group->grp_start) + == base_group->grp_end)) + { + OMP_CLAUSE_CHAIN (*base_group->grp_start) + = OMP_CLAUSE_CHAIN (base_group->grp_end); + base_group->grp_end = *base_group->grp_start; + repair_chain = true; + } + } + else + { + base_group->deleted = true; + OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED (grp_end) = 1; + } + } + + /* We're dealing with a reference to a pointer, and we are + attaching both the reference and the pointer. We know the + reference itself is on the target, because we are going to + create an ALLOC node for it in accumulate_sibling_list. The + pointer might be on the target already or it might not, but + if it isn't then it's not an error, so use + GOMP_MAP_ATTACH_ZLAS for it. */ + if (!base_mapped_to && referenced_ptr_node) + OMP_CLAUSE_SET_MAP_KIND (referenced_ptr_node, zlas_kind); + } + else if (TREE_CODE (TREE_TYPE (base_ptr)) == REFERENCE_TYPE + && (TREE_CODE (TREE_TYPE (TREE_TYPE (base_ptr))) + == ARRAY_TYPE) + && OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION + (*grp->grp_start)) + OMP_CLAUSE_SET_MAP_KIND (grp->grp_end, zlas_kind); + } + break; + + default: + { + omp_mapping_group *struct_group; + if (omp_mapped_by_containing_struct (grpmap, decl, &struct_group) + && *grp->grp_start == grp_end) + { + omp_check_mapping_compatibility (OMP_CLAUSE_LOCATION (grp_end), + struct_group, grp); + /* Remove the whole of this mapping -- redundant. */ + grp->deleted = true; + } + + tree base = decl; + while ((base = omp_get_base_pointer (base))) + { + omp_mapping_group *base_group; + + if (omp_directive_maps_explicitly (grpmap, base, &base_group, + true, true, false)) + { + tree grp_first = *base_group->grp_start; + OMP_CLAUSE_SET_MAP_KIND (grp_first, GOMP_MAP_ALLOC); + } + } + } + } + } + + if (repair_chain) + { + /* Group start pointers may have become detached from the + OMP_CLAUSE_CHAIN of previous groups if elements were removed from the + end of those groups. Fix that now. */ + tree *new_next = NULL; + FOR_EACH_VEC_ELT (*groups, i, grp) + { + if (new_next) + grp->grp_start = new_next; + + new_next = &OMP_CLAUSE_CHAIN (grp->grp_end); + } + } +} + /* Similar to omp_resolve_clause_dependencies, but for OpenACC. The only clause dependencies we handle for now are struct element mappings and whole-struct mappings on the same directive, and duplicate clause @@ -10144,6 +10458,59 @@ omp_siblist_move_concat_nodes_after (tree first_new, tree *last_new_tail, return continue_at; } +/* Expand a chained access. We only expect to see a quite limited range of + expression types here, because e.g. you can't have an array of + references. See also c-omp.cc:omp_expand_access_chain. 
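   A typical case (illustrative) is a component access reached through more
   than one pointer:

       struct T { int *data; };
       struct S { struct T *t; };
       void n (struct S *s)
       {
       #pragma omp target map(tofrom: s->t->data[0:10])
         s->t->data[0]++;
       }

   where extra nodes of the given map kind may be needed for the intermediate
   pointer accesses in the chain.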
*/ + +static void +omp_expand_access_chain (location_t loc, tree **list_pp, tree expr, + vec &addr_tokens, + unsigned *idx, gomp_map_kind kind) +{ + using namespace omp_addr_tokenizer; + unsigned i = *idx; + tree c = NULL_TREE; + + switch (addr_tokens[i]->u.access_kind) + { + case ACCESS_POINTER: + case ACCESS_POINTER_OFFSET: + { + tree virtual_origin + = fold_convert_loc (loc, ptrdiff_type_node, addr_tokens[i]->expr); + tree data_addr = omp_accessed_addr (addr_tokens, i, expr); + c = build_omp_clause (loc, OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (c, kind); + OMP_CLAUSE_DECL (c) = addr_tokens[i]->expr; + OMP_CLAUSE_SIZE (c) + = fold_build2_loc (loc, MINUS_EXPR, ptrdiff_type_node, + fold_convert_loc (loc, ptrdiff_type_node, + data_addr), + virtual_origin); + } + break; + + case ACCESS_INDEXED_ARRAY: + break; + + default: + return; + } + + if (c) + { + OMP_CLAUSE_CHAIN (c) = **list_pp; + **list_pp = c; + *list_pp = &OMP_CLAUSE_CHAIN (c); + } + + *idx = ++i; + + if (addr_tokens[i]->type == ACCESS_METHOD + && omp_access_chain_p (addr_tokens, i)) + omp_expand_access_chain (loc, list_pp, expr, addr_tokens, idx, kind); +} + /* Mapping struct members causes an additional set of nodes to be created, starting with GOMP_MAP_STRUCT followed by a number of mappings equal to the number of members being mapped, in order of ascending position (address or @@ -10185,9 +10552,15 @@ static tree * omp_accumulate_sibling_list (enum omp_region_type region_type, enum tree_code code, hash_map - *&struct_map_to_clause, tree *grp_start_p, - tree grp_end, tree *inner) + *&struct_map_to_clause, + hash_map + *group_map, + tree *grp_start_p, tree grp_end, + vec &addr_tokens, tree **inner, + bool *fragile_p, bool reprocessing_struct, + tree **added_tail) { + using namespace omp_addr_tokenizer; poly_offset_int coffset; poly_int64 cbitpos; tree ocd = OMP_CLAUSE_DECL (grp_end); @@ -10197,118 +10570,265 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, while (TREE_CODE (ocd) == ARRAY_REF) ocd = TREE_OPERAND (ocd, 0); - if (TREE_CODE (ocd) == INDIRECT_REF) - ocd = TREE_OPERAND (ocd, 0); + if (*fragile_p) + { + omp_mapping_group *to_group + = omp_get_nonfirstprivate_group (group_map, ocd, true); + + if (to_group) + return NULL; + } + + omp_addr_token *last_token = addr_tokens[addr_tokens.length () - 1]; + if (last_token->type == ACCESS_METHOD) + { + switch (last_token->u.access_kind) + { + case ACCESS_REF: + case ACCESS_REF_TO_POINTER: + case ACCESS_REF_TO_POINTER_OFFSET: + case ACCESS_INDEXED_REF_TO_ARRAY: + /* We may see either a bare reference or a dereferenced + "convert_from_reference"-like one here. Handle either way. */ + if (TREE_CODE (ocd) == INDIRECT_REF) + ocd = TREE_OPERAND (ocd, 0); + gcc_assert (TREE_CODE (TREE_TYPE (ocd)) == REFERENCE_TYPE); + break; + + default: + ; + } + } tree base = extract_base_bit_offset (ocd, &cbitpos, &coffset); + int base_token; + for (base_token = addr_tokens.length () - 1; base_token >= 0; base_token--) + { + if (addr_tokens[base_token]->type == ARRAY_BASE + || addr_tokens[base_token]->type == STRUCTURE_BASE) + break; + } + + /* The two expressions in the assertion below aren't quite the same: if we + have 'struct_base_decl access_indexed_array' for something like + "myvar[2].x" then base will be "myvar" and addr_tokens[base_token]->expr + will be "myvar[2]" -- the actual base of the structure. 
+ The former interpretation leads to a strange situation where we get + struct(myvar) alloc(myvar[2].ptr1) + That is, the array of structures is kind of treated as one big structure + for the purposes of gathering sibling lists, etc. */ + /* gcc_assert (base == addr_tokens[base_token]->expr); */ + bool ptr = (OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_ALWAYS_POINTER); bool attach_detach = ((OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_ATTACH_DETACH) || (OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION)); - bool attach = (OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_ATTACH - || OMP_CLAUSE_MAP_KIND (grp_end) == GOMP_MAP_DETACH); - - /* FIXME: If we're not mapping the base pointer in some other clause on this - directive, I think we want to create ALLOC/RELEASE here -- i.e. not - early-exit. */ - if (openmp && attach_detach) - return NULL; if (!struct_map_to_clause || struct_map_to_clause->get (base) == NULL) { tree l = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP); - gomp_map_kind k = attach ? GOMP_MAP_FORCE_PRESENT : GOMP_MAP_STRUCT; - - OMP_CLAUSE_SET_MAP_KIND (l, k); + OMP_CLAUSE_SET_MAP_KIND (l, GOMP_MAP_STRUCT); OMP_CLAUSE_DECL (l) = unshare_expr (base); + OMP_CLAUSE_SIZE (l) = size_int (1); - OMP_CLAUSE_SIZE (l) - = (!attach ? size_int (1) - : (DECL_P (OMP_CLAUSE_DECL (l)) - ? DECL_SIZE_UNIT (OMP_CLAUSE_DECL (l)) - : TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (l))))); if (struct_map_to_clause == NULL) struct_map_to_clause = new hash_map; struct_map_to_clause->put (base, l); + /* On first iterating through the clause list, we insert the struct node + just before the component access node that triggers the initial + accumulate_sibling_list call for a particular sibling list (and it + then forms the first entry in that list). When reprocessing struct + bases that are themselves component accesses, we insert the struct + node on an off-side list to avoid inserting the new GOMP_MAP_STRUCT + into the middle of the old one. */ + tree *insert_node_pos = reprocessing_struct ? *added_tail : grp_start_p; + if (ptr || attach_detach) { tree extra_node; tree alloc_node = build_omp_struct_comp_nodes (code, *grp_start_p, grp_end, &extra_node); + tree *tail; OMP_CLAUSE_CHAIN (l) = alloc_node; - tree *insert_node_pos = grp_start_p; - if (extra_node) { OMP_CLAUSE_CHAIN (extra_node) = *insert_node_pos; OMP_CLAUSE_CHAIN (alloc_node) = extra_node; + tail = &OMP_CLAUSE_CHAIN (extra_node); } else - OMP_CLAUSE_CHAIN (alloc_node) = *insert_node_pos; + { + OMP_CLAUSE_CHAIN (alloc_node) = *insert_node_pos; + tail = &OMP_CLAUSE_CHAIN (alloc_node); + } + + /* For OpenMP semantics, we don't want to implicitly allocate + space for the pointer here. A FRAGILE_P node is only being + created so that omp-low.cc is able to rewrite the struct + properly. + For references (to pointers), we want to actually allocate the + space for the reference itself in the sorted list following the + struct node. + For pointers, we want to allocate space if we had an explicit + mapping of the attachment point, but not otherwise. */ + if (*fragile_p + || (openmp + && attach_detach + && TREE_CODE (TREE_TYPE (ocd)) == POINTER_TYPE + && !OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED (grp_end))) + { + if (!lang_GNU_Fortran ()) + /* In Fortran, pointers are dereferenced automatically, but may + be unassociated. So we still want to allocate space for the + pointer (as the base for an attach operation that should be + present in the same directive's clause list also). 
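+		 For C/C++, by contrast, the pointer's slot is given zero
+		 size below: e.g. a mapping such as "map(a.ptr[0:N])" is
+		 expected to show up in the gimple dump roughly as
+		   map(struct:a [len: 1]) map(alloc:a.ptr [len: 0]) ...
+		 followed by the data mapping and the attach operation.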
*/ + OMP_CLAUSE_SIZE (alloc_node) = size_zero_node; + OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (alloc_node) = 1; + } *insert_node_pos = l; + + if (reprocessing_struct) + { + /* When reprocessing a struct node group used as the base of a + subcomponent access, if we have a reference-to-pointer base, + we will see: + struct(**ptr) attach(*ptr) + whereas for a non-reprocess-struct group, we see, e.g.: + tofrom(**ptr) attach(*ptr) attach(ptr) + and we create the "alloc" for the second "attach", i.e. + for the reference itself. When reprocessing a struct group we + thus change the pointer attachment into a reference attachment + by stripping the indirection. (The attachment of the + referenced pointer must happen elsewhere, either on the same + directive, or otherwise.) */ + tree adecl = OMP_CLAUSE_DECL (alloc_node); + + if ((TREE_CODE (adecl) == INDIRECT_REF + || (TREE_CODE (adecl) == MEM_REF + && integer_zerop (TREE_OPERAND (adecl, 1)))) + && (TREE_CODE (TREE_TYPE (TREE_OPERAND (adecl, 0))) + == REFERENCE_TYPE) + && (TREE_CODE (TREE_TYPE (TREE_TYPE + (TREE_OPERAND (adecl, 0)))) == POINTER_TYPE)) + OMP_CLAUSE_DECL (alloc_node) = TREE_OPERAND (adecl, 0); + + *added_tail = tail; + } } else { gcc_assert (*grp_start_p == grp_end); - grp_start_p = omp_siblist_insert_node_after (l, grp_start_p); + if (reprocessing_struct) + { + /* If we don't have an attach/detach node, this is a + "target data" directive or similar, not an offload region. + Synthesize an "alloc" node using just the initiating + GOMP_MAP_STRUCT decl. */ + gomp_map_kind k = (code == OMP_TARGET_EXIT_DATA + || code == OACC_EXIT_DATA) + ? GOMP_MAP_RELEASE : GOMP_MAP_ALLOC; + tree alloc_node + = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), + OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (alloc_node, k); + OMP_CLAUSE_DECL (alloc_node) = unshare_expr (last_token->expr); + OMP_CLAUSE_SIZE (alloc_node) + = TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (alloc_node))); + + OMP_CLAUSE_CHAIN (alloc_node) = OMP_CLAUSE_CHAIN (l); + OMP_CLAUSE_CHAIN (l) = alloc_node; + *insert_node_pos = l; + *added_tail = &OMP_CLAUSE_CHAIN (alloc_node); + } + else + grp_start_p = omp_siblist_insert_node_after (l, insert_node_pos); } - tree noind = omp_strip_indirections (base); + unsigned last_access = base_token + 1; - if (!openmp - && (region_type & ORT_TARGET) - && TREE_CODE (noind) == COMPONENT_REF) + while (last_access + 1 < addr_tokens.length () + && addr_tokens[last_access + 1]->type == ACCESS_METHOD) + last_access++; + + if ((region_type & ORT_TARGET) + && addr_tokens[base_token + 1]->type == ACCESS_METHOD) { - /* The base for this component access is a struct component access - itself. Insert a node to be processed on the next iteration of - our caller's loop, which will subsequently be turned into a new, - inner GOMP_MAP_STRUCT mapping. + bool base_ref = false; + access_method_kinds access_kind + = addr_tokens[last_access]->u.access_kind; - We need to do this else the non-DECL_P base won't be - rewritten correctly in the offloaded region. 
*/ + switch (access_kind) + { + case ACCESS_DIRECT: + case ACCESS_INDEXED_ARRAY: + return NULL; + + case ACCESS_REF: + case ACCESS_REF_TO_POINTER: + case ACCESS_REF_TO_POINTER_OFFSET: + case ACCESS_INDEXED_REF_TO_ARRAY: + base_ref = true; + break; + + default: + ; + } tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP); - OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FORCE_PRESENT); - OMP_CLAUSE_DECL (c2) = unshare_expr (noind); - OMP_CLAUSE_SIZE (c2) = TYPE_SIZE_UNIT (TREE_TYPE (noind)); - *inner = c2; - return NULL; - } + enum gomp_map_kind mkind; + omp_mapping_group *decl_group; + tree use_base; + switch (access_kind) + { + case ACCESS_POINTER: + case ACCESS_POINTER_OFFSET: + use_base = addr_tokens[last_access]->expr; + break; + case ACCESS_REF_TO_POINTER: + case ACCESS_REF_TO_POINTER_OFFSET: + use_base + = build_fold_indirect_ref (addr_tokens[last_access]->expr); + break; + default: + use_base = addr_tokens[base_token]->expr; + } + bool mapped_to_p + = omp_directive_maps_explicitly (group_map, use_base, &decl_group, + true, false, true); + if (addr_tokens[base_token]->type == STRUCTURE_BASE + && DECL_P (addr_tokens[last_access]->expr) + && !mapped_to_p) + mkind = base_ref ? GOMP_MAP_FIRSTPRIVATE_REFERENCE + : GOMP_MAP_FIRSTPRIVATE_POINTER; + else + mkind = GOMP_MAP_ATTACH_DETACH; - tree sdecl = omp_strip_components_and_deref (base); - - if (POINTER_TYPE_P (TREE_TYPE (sdecl)) && (region_type & ORT_TARGET)) - { - tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), - OMP_CLAUSE_MAP); - bool base_ref - = (TREE_CODE (base) == INDIRECT_REF - && ((TREE_CODE (TREE_TYPE (TREE_OPERAND (base, 0))) - == REFERENCE_TYPE) - || ((TREE_CODE (TREE_OPERAND (base, 0)) - == INDIRECT_REF) - && (TREE_CODE (TREE_TYPE (TREE_OPERAND - (TREE_OPERAND (base, 0), 0))) - == REFERENCE_TYPE)))); - enum gomp_map_kind mkind = base_ref ? GOMP_MAP_FIRSTPRIVATE_REFERENCE - : GOMP_MAP_FIRSTPRIVATE_POINTER; OMP_CLAUSE_SET_MAP_KIND (c2, mkind); - OMP_CLAUSE_DECL (c2) = sdecl; + /* If we have a reference to pointer base, we want to attach the + pointer here, not the reference. The reference attachment happens + elsewhere. */ + bool ref_to_ptr + = (access_kind == ACCESS_REF_TO_POINTER + || access_kind == ACCESS_REF_TO_POINTER_OFFSET); + tree sdecl = addr_tokens[last_access]->expr; + tree sdecl_ptr = ref_to_ptr ? build_fold_indirect_ref (sdecl) + : sdecl; + /* For the FIRSTPRIVATE_REFERENCE after the struct node, we + want to use the reference itself for the decl, but we + still want to use the pointer to calculate the bias. */ + OMP_CLAUSE_DECL (c2) = (mkind == GOMP_MAP_ATTACH_DETACH) + ? sdecl_ptr : sdecl; + sdecl = sdecl_ptr; tree baddr = build_fold_addr_expr (base); baddr = fold_convert_loc (OMP_CLAUSE_LOCATION (grp_end), ptrdiff_type_node, baddr); - /* This isn't going to be good enough when we add support for more - complicated lvalue expressions. FIXME. */ - if (TREE_CODE (TREE_TYPE (sdecl)) == REFERENCE_TYPE - && TREE_CODE (TREE_TYPE (TREE_TYPE (sdecl))) == POINTER_TYPE) - sdecl = build_simple_mem_ref (sdecl); tree decladdr = fold_convert_loc (OMP_CLAUSE_LOCATION (grp_end), ptrdiff_type_node, sdecl); OMP_CLAUSE_SIZE (c2) @@ -10317,24 +10837,46 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, /* Insert after struct node. 
*/ OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (l); OMP_CLAUSE_CHAIN (l) = c2; + + if (addr_tokens[base_token]->type == STRUCTURE_BASE + && (addr_tokens[base_token]->u.structure_base_kind + == BASE_COMPONENT_EXPR) + && mkind == GOMP_MAP_ATTACH_DETACH + && addr_tokens[last_access]->u.access_kind != ACCESS_REF) + { + *inner = insert_node_pos; + if (openmp) + *fragile_p = true; + return NULL; + } } + if (addr_tokens[base_token]->type == STRUCTURE_BASE + && (addr_tokens[base_token]->u.structure_base_kind + == BASE_COMPONENT_EXPR) + && addr_tokens[last_access]->u.access_kind == ACCESS_REF) + *inner = insert_node_pos; + return NULL; } else if (struct_map_to_clause) { tree *osc = struct_map_to_clause->get (base); tree *sc = NULL, *scp = NULL; + unsigned HOST_WIDE_INT i, elems = tree_to_uhwi (OMP_CLAUSE_SIZE (*osc)); sc = &OMP_CLAUSE_CHAIN (*osc); /* The struct mapping might be immediately followed by a - FIRSTPRIVATE_POINTER and/or FIRSTPRIVATE_REFERENCE -- if it's an - indirect access or a reference, or both. (This added node is removed - in omp-low.c after it has been processed there.) */ - if (*sc != grp_end - && (OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_POINTER - || OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_REFERENCE)) + FIRSTPRIVATE_POINTER, FIRSTPRIVATE_REFERENCE or an ATTACH_DETACH -- + if it's an indirect access or a reference, or if the structure base + is not a decl. The FIRSTPRIVATE_* nodes are removed in omp-low.c + after they have been processed there, and ATTACH_DETACH nodes are + recomputed and moved out of the GOMP_MAP_STRUCT construct once + sibling list building is complete. */ + if (OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_POINTER + || OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_FIRSTPRIVATE_REFERENCE + || OMP_CLAUSE_MAP_KIND (*sc) == GOMP_MAP_ATTACH_DETACH) sc = &OMP_CLAUSE_CHAIN (*sc); - for (; *sc != grp_end; sc = &OMP_CLAUSE_CHAIN (*sc)) + for (i = 0; i < elems; i++, sc = &OMP_CLAUSE_CHAIN (*sc)) if ((ptr || attach_detach) && sc == grp_start_p) break; else if (TREE_CODE (OMP_CLAUSE_DECL (*sc)) != COMPONENT_REF @@ -10366,6 +10908,27 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, break; if (scp) continue; + if ((region_type & ORT_ACC) != 0) + { + /* For OpenACC, allow (ignore) duplicate struct accesses in + the middle of a mapping clause, e.g. "mystruct->foo" in: + copy(mystruct->foo->bar) copy(mystruct->foo->qux). */ + if (reprocessing_struct + && known_eq (coffset, offset) + && known_eq (cbitpos, bitpos)) + return NULL; + } + else if (known_eq (coffset, offset) + && known_eq (cbitpos, bitpos)) + { + /* Having two struct members at the same offset doesn't work, + so make sure we don't. (We're allowed to ignore this. + Should we report the error?) */ + /*error_at (OMP_CLAUSE_LOCATION (grp_end), + "duplicate struct member %qE in map clauses", + OMP_CLAUSE_DECL (grp_end));*/ + return NULL; + } if (maybe_lt (coffset, offset) || (known_eq (coffset, offset) && maybe_lt (cbitpos, bitpos))) @@ -10377,9 +10940,48 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, } } - if (!attach) - OMP_CLAUSE_SIZE (*osc) - = size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc), size_one_node); + OMP_CLAUSE_SIZE (*osc) + = size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc), size_one_node); + + if (reprocessing_struct) + { + /* If we're reprocessing a struct node, we don't want to do most of + the list manipulation below. We only need to handle the (pointer + or reference) attach/detach case. 
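+	     (Reprocessing happens when a struct base is itself a component
+	     access -- e.g. the "mystruct->foo" base within a mapping of
+	     "mystruct->foo->bar".)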
*/ + tree extra_node, alloc_node; + if (attach_detach) + alloc_node = build_omp_struct_comp_nodes (code, *grp_start_p, + grp_end, &extra_node); + else + { + /* If we don't have an attach/detach node, this is a + "target data" directive or similar, not an offload region. + Synthesize an "alloc" node using just the initiating + GOMP_MAP_STRUCT decl. */ + gomp_map_kind k = (code == OMP_TARGET_EXIT_DATA + || code == OACC_EXIT_DATA) + ? GOMP_MAP_RELEASE : GOMP_MAP_ALLOC; + alloc_node + = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), + OMP_CLAUSE_MAP); + OMP_CLAUSE_SET_MAP_KIND (alloc_node, k); + OMP_CLAUSE_DECL (alloc_node) = unshare_expr (last_token->expr); + OMP_CLAUSE_SIZE (alloc_node) + = TYPE_SIZE_UNIT (TREE_TYPE (OMP_CLAUSE_DECL (alloc_node))); + } + + if (scp) + omp_siblist_insert_node_after (alloc_node, scp); + else + { + tree *new_end = omp_siblist_insert_node_after (alloc_node, sc); + if (sc == *added_tail) + *added_tail = new_end; + } + + return NULL; + } + if (ptr || attach_detach) { tree cl = NULL_TREE, extra_node; @@ -10387,6 +10989,17 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, grp_end, &extra_node); tree *tail_chain = NULL; + if (*fragile_p + || (openmp + && attach_detach + && TREE_CODE (TREE_TYPE (ocd)) == POINTER_TYPE + && !OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED (grp_end))) + { + if (!lang_GNU_Fortran ()) + OMP_CLAUSE_SIZE (alloc_node) = size_zero_node; + OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION (alloc_node) = 1; + } + /* Here, we have: grp_end : the last (or only) node in this group. @@ -10472,12 +11085,15 @@ omp_build_struct_sibling_lists (enum tree_code code, **grpmap, tree *list_p) { + using namespace omp_addr_tokenizer; unsigned i; omp_mapping_group *grp; hash_map *struct_map_to_clause = NULL; bool success = true; tree *new_next = NULL; tree *tail = &OMP_CLAUSE_CHAIN ((*groups)[groups->length () - 1].grp_end); + tree added_nodes = NULL_TREE; + tree *added_tail = &added_nodes; auto_vec pre_hwm_groups; FOR_EACH_VEC_ELT (*groups, i, grp) @@ -10485,9 +11101,10 @@ omp_build_struct_sibling_lists (enum tree_code code, tree c = grp->grp_end; tree decl = OMP_CLAUSE_DECL (c); tree grp_end = grp->grp_end; + auto_vec addr_tokens; tree sentinel = OMP_CLAUSE_CHAIN (grp_end); - if (new_next) + if (new_next && !grp->reprocess_struct) grp->grp_start = new_next; new_next = NULL; @@ -10498,7 +11115,7 @@ omp_build_struct_sibling_lists (enum tree_code code, continue; /* Skip groups we marked for deletion in - oacc_resolve_clause_dependencies. */ + {omp,oacc}_resolve_clause_dependencies. */ if (grp->deleted) continue; @@ -10515,6 +11132,38 @@ omp_build_struct_sibling_lists (enum tree_code code, continue; } + tree expr = decl; + + while (TREE_CODE (expr) == ARRAY_REF) + expr = TREE_OPERAND (expr, 0); + + if (!omp_parse_expr (addr_tokens, expr)) + continue; + + omp_addr_token *last_token = addr_tokens[addr_tokens.length () - 1]; + + /* A mapping of a reference to a pointer member that doesn't specify an + array section, etc., like this: + *mystruct.ref_to_ptr + should not be processed by the struct sibling-list handling code -- + it just transfers the referenced pointer. + + In contrast, the quite similar-looking construct: + *mystruct.ptr + which is equivalent to e.g. + mystruct.ptr[0] + *does* trigger sibling-list processing. + + An exception for the former case is for "fragile" groups where the + reference itself is not handled otherwise; this is subject to special + handling in omp_accumulate_sibling_list also. 
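+	 (In the examples above, "ref_to_ptr" stands for a member declared
+	 e.g. as "int *&p", and "ptr" for a plain pointer member such as
+	 "int *q".)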
*/ + + if (TREE_CODE (TREE_TYPE (decl)) == POINTER_TYPE + && last_token->type == ACCESS_METHOD + && last_token->u.access_kind == ACCESS_REF + && !grp->fragile) + continue; + tree d = decl; if (TREE_CODE (d) == ARRAY_REF) { @@ -10543,14 +11192,7 @@ omp_build_struct_sibling_lists (enum tree_code code, omp_mapping_group *wholestruct; if (omp_mapped_by_containing_struct (*grpmap, OMP_CLAUSE_DECL (c), &wholestruct)) - { - if (!(region_type & ORT_ACC) - && *grp_start_p == grp_end) - /* Remove the whole of this mapping -- redundant. */ - grp->deleted = true; - - continue; - } + continue; if (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_TO_PSET && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ATTACH @@ -10577,27 +11219,30 @@ omp_build_struct_sibling_lists (enum tree_code code, goto error_out; } - tree inner = NULL_TREE; + tree *inner = NULL; + bool fragile_p = grp->fragile; new_next = omp_accumulate_sibling_list (region_type, code, - struct_map_to_clause, grp_start_p, - grp_end, &inner); + struct_map_to_clause, *grpmap, + grp_start_p, grp_end, addr_tokens, + &inner, &fragile_p, + grp->reprocess_struct, &added_tail); if (inner) { - if (new_next && *new_next == NULL_TREE) - *new_next = inner; - else - *tail = inner; - - OMP_CLAUSE_CHAIN (inner) = NULL_TREE; omp_mapping_group newgrp; - newgrp.grp_start = new_next ? new_next : tail; - newgrp.grp_end = inner; + newgrp.grp_start = inner; + if (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (*inner)) + == GOMP_MAP_ATTACH_DETACH) + newgrp.grp_end = OMP_CLAUSE_CHAIN (*inner); + else + newgrp.grp_end = *inner; newgrp.mark = UNVISITED; newgrp.sibling = NULL; newgrp.deleted = false; + newgrp.reprocess_struct = true; + newgrp.fragile = fragile_p; newgrp.next = NULL; groups->safe_push (newgrp); @@ -10608,8 +11253,6 @@ omp_build_struct_sibling_lists (enum tree_code code, *grpmap = omp_reindex_mapping_groups (list_p, groups, &pre_hwm_groups, sentinel); - - tail = &OMP_CLAUSE_CHAIN (inner); } } } @@ -10638,6 +11281,62 @@ omp_build_struct_sibling_lists (enum tree_code code, tail = &OMP_CLAUSE_CHAIN (*tail); } + + /* Tack on the struct nodes added during nested struct reprocessing. */ + if (added_nodes) + { + *tail = added_nodes; + tail = added_tail; + } + + /* Now we have finished building the struct sibling lists, reprocess + newly-added "attach" nodes: we need the address of the first + mapped element of each struct sibling list for the bias of the attach + operation -- not necessarily the base address of the whole struct. */ + if (struct_map_to_clause) + for (hash_map::iterator iter + = struct_map_to_clause->begin (); + iter != struct_map_to_clause->end (); + ++iter) + { + tree struct_node = (*iter).second; + gcc_assert (OMP_CLAUSE_CODE (struct_node) == OMP_CLAUSE_MAP); + tree attach = OMP_CLAUSE_CHAIN (struct_node); + + if (OMP_CLAUSE_CODE (attach) != OMP_CLAUSE_MAP + || OMP_CLAUSE_MAP_KIND (attach) != GOMP_MAP_ATTACH_DETACH) + continue; + + OMP_CLAUSE_SET_MAP_KIND (attach, GOMP_MAP_ATTACH); + + /* Sanity check: the standalone attach node will not work if we have + an "enter data" operation (because for those, variables need to be + mapped separately and attach nodes must be grouped together with the + base they attach to). We should only have created the + ATTACH_DETACH node after GOMP_MAP_STRUCT for a target region, so + this should never be true. */ + gcc_assert ((region_type & ORT_TARGET) != 0); + + /* This is the first sorted node in the struct sibling list. Use it + to recalculate the correct bias to use. + (&first_node - attach_decl). 
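+	   That is, roughly the difference between the address of the first
+	   element actually mapped and the value of the pointer being
+	   attached, which need not be the start of the whole structure.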
*/ + tree first_node = OMP_CLAUSE_DECL (OMP_CLAUSE_CHAIN (attach)); + first_node = build_fold_addr_expr (first_node); + first_node = fold_convert (ptrdiff_type_node, first_node); + tree attach_decl = OMP_CLAUSE_DECL (attach); + attach_decl = fold_convert (ptrdiff_type_node, attach_decl); + OMP_CLAUSE_SIZE (attach) + = fold_build2 (MINUS_EXPR, ptrdiff_type_node, first_node, + attach_decl); + + /* Remove GOMP_MAP_ATTACH node from after struct node. */ + OMP_CLAUSE_CHAIN (struct_node) = OMP_CLAUSE_CHAIN (attach); + /* ...and re-insert it at the end of our clause list. */ + *tail = attach; + OMP_CLAUSE_CHAIN (attach) = NULL_TREE; + tail = &OMP_CLAUSE_CHAIN (attach); + } + error_out: if (struct_map_to_clause) delete struct_map_to_clause; @@ -10653,6 +11352,7 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, enum omp_region_type region_type, enum tree_code code) { + using namespace omp_addr_tokenizer; struct gimplify_omp_ctx *ctx, *outer_ctx; tree c; tree *prev_list_p = NULL, *orig_list_p = list_p; @@ -10698,6 +11398,7 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, hash_map *grpmap; grpmap = omp_index_mapping_groups (groups); + omp_resolve_clause_dependencies (code, groups, grpmap); omp_build_struct_sibling_lists (code, region_type, groups, &grpmap, list_p); @@ -10750,6 +11451,7 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, const char *check_non_private = NULL; unsigned int flags; tree decl; + auto_vec addr_tokens; switch (OMP_CLAUSE_CODE (c)) { @@ -11056,6 +11758,13 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, case OMP_CLAUSE_MAP: decl = OMP_CLAUSE_DECL (c); + + if (!omp_parse_expr (addr_tokens, decl)) + { + remove = true; + break; + } + if (error_operand_p (decl)) remove = true; switch (code) @@ -11065,13 +11774,18 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, case OACC_DATA: if (TREE_CODE (TREE_TYPE (decl)) != ARRAY_TYPE) break; + goto check_firstprivate; + case OACC_ENTER_DATA: + case OACC_EXIT_DATA: + if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH_DETACH + && addr_tokens[0]->type == ARRAY_BASE) + remove = true; /* FALLTHRU */ case OMP_TARGET_DATA: case OMP_TARGET_ENTER_DATA: case OMP_TARGET_EXIT_DATA: - case OACC_ENTER_DATA: - case OACC_EXIT_DATA: case OACC_HOST_DATA: + check_firstprivate: if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER || (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_REFERENCE)) @@ -11106,6 +11820,18 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_TO_PSET)) remove = true; + else if (code == OMP_TARGET_EXIT_DATA + && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALLOC + && OMP_CLAUSE_CHAIN (c) + && (OMP_CLAUSE_CODE (OMP_CLAUSE_CHAIN (c)) + == OMP_CLAUSE_MAP) + && ((OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c)) + == GOMP_MAP_ATTACH_DETACH) + || (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c)) + == GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION)) + && TREE_CODE (TREE_TYPE (OMP_CLAUSE_DECL + (OMP_CLAUSE_CHAIN (c)))) == REFERENCE_TYPE) + OMP_CLAUSE_SET_MAP_KIND (c, GOMP_MAP_RELEASE); if (remove) break; @@ -11148,26 +11874,22 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, GOVD_FIRSTPRIVATE | GOVD_SEEN); } - if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT) + if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT + && (addr_tokens[0]->type == STRUCTURE_BASE + || addr_tokens[0]->type == ARRAY_BASE) + && addr_tokens[0]->u.structure_base_kind == BASE_DECL) { - tree base = omp_strip_components_and_deref 
(decl); - if (DECL_P (base)) - { - decl = base; - splay_tree_node n - = splay_tree_lookup (ctx->variables, - (splay_tree_key) decl); - if (seen_error () - && n - && (n->value & (GOVD_MAP | GOVD_FIRSTPRIVATE)) != 0) - { - remove = true; - break; - } - flags = GOVD_MAP | GOVD_EXPLICIT; + gcc_assert (addr_tokens[1]->type == ACCESS_METHOD); + /* If we got to this struct via a chain of pointers, maybe we + want to map it implicitly instead. */ + if (omp_access_chain_p (addr_tokens, 1)) + break; + decl = addr_tokens[1]->expr; + flags = GOVD_MAP | GOVD_EXPLICIT; - goto do_add_decl; - } + gcc_assert (addr_tokens[1]->u.access_kind != ACCESS_DIRECT + || TREE_ADDRESSABLE (decl)); + goto do_add_decl; } if (TREE_CODE (decl) == TARGET_EXPR) @@ -11414,6 +12136,16 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, : GOMP_MAP_ATTACH); OMP_CLAUSE_SET_MAP_KIND (c, map_kind); } + else if ((code == OACC_ENTER_DATA + || code == OACC_EXIT_DATA + || code == OACC_PARALLEL) + && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ATTACH_DETACH) + { + enum gomp_map_kind map_kind = (code == OACC_EXIT_DATA + ? GOMP_MAP_DETACH + : GOMP_MAP_ATTACH); + OMP_CLAUSE_SET_MAP_KIND (c, map_kind); + } goto do_add; @@ -12316,7 +13048,7 @@ gimplify_adjust_omp_clauses_1 (splay_tree_node n, void *data) if (TREE_CODE (TREE_TYPE (decl)) == REFERENCE_TYPE && TREE_CODE (TREE_TYPE (TREE_TYPE (decl))) == POINTER_TYPE) OMP_CLAUSE_DECL (clause) - = build_simple_mem_ref_loc (input_location, decl); + = build_fold_indirect_ref_loc (input_location, decl); OMP_CLAUSE_DECL (clause) = build2 (MEM_REF, char_type_node, OMP_CLAUSE_DECL (clause), build_int_cst (build_pointer_type (char_type_node), 0)); diff --git a/gcc/omp-general.cc b/gcc/omp-general.cc index 0b4ec823479..1dae1b1b8a5 100644 --- a/gcc/omp-general.cc +++ b/gcc/omp-general.cc @@ -45,6 +45,8 @@ along with GCC; see the file COPYING3. If not see #include "data-streamer.h" #include "streamer-hooks.h" #include "opts.h" +#include "omp-general.h" +#include "tree-pretty-print.h" enum omp_requires omp_requires_mask; @@ -3013,4 +3015,427 @@ omp_build_component_ref (tree obj, tree field) return ret; } +namespace omp_addr_tokenizer { + +/* We scan an expression by recursive descent, and build a vector of + "omp_addr_token *" pointers representing a "parsed" version of the + expression. The grammar we use is something like this: + + expr0:: + expr [section-access] + + expr:: + structured-expr access-method + | array-base access-method + + structured-expr:: + structure-base component-selector + + arbitrary-expr:: + (anything else) + + structure-base:: + DECL access-method + | structured-expr access-method + | arbitrary-expr access-method + + array-base:: + DECL + | arbitrary-expr + + access-method:: + DIRECT + | REF + | POINTER + | REF_TO_POINTER + | POINTER_OFFSET + | REF_TO_POINTER_OFFSET + | INDEXED_ARRAY + | INDEXED_REF_TO_ARRAY + | index-expr + + index-expr:: + INDEX_EXPR access-method + + component-selector:: + component-selector COMPONENT_REF + | component-selector ARRAY_REF + | COMPONENT_REF + + This tokenized form is then used both in parsing, for OpenMP clause + expansion (for C and C++) and in gimplify.cc for sibling-list handling + (for C, C++ and Fortran). 
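+   As a rough example, an access such as "mystruct->ptr[x]" (where
+   "mystruct" points to a structure containing a pointer member "ptr")
+   might tokenize as
+
+     struct_base_decl access_pointer component_selector access_pointer_offset
+
+   i.e. a STRUCTURE_BASE for the base decl, the ACCESS_METHOD used to
+   reach the structure itself, a COMPONENT_SELECTOR for the member, and a
+   final ACCESS_METHOD for the dereference of the member pointer;
+   debug_omp_tokenized_addr can be used to dump such a sequence.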
*/ + +omp_addr_token::omp_addr_token (token_type t, tree e) + : type(t), expr(e) +{ +} + +omp_addr_token::omp_addr_token (access_method_kinds k, tree e) + : type(ACCESS_METHOD), expr(e) +{ + u.access_kind = k; +} + +omp_addr_token::omp_addr_token (token_type t, structure_base_kinds k, tree e) + : type(t), expr(e) +{ + u.structure_base_kind = k; +} + +static bool +omp_parse_component_selector (tree *expr0) +{ + tree expr = *expr0; + tree last_component = NULL_TREE; + + while (TREE_CODE (expr) == COMPONENT_REF + || TREE_CODE (expr) == ARRAY_REF) + { + if (TREE_CODE (expr) == COMPONENT_REF) + last_component = expr; + + expr = TREE_OPERAND (expr, 0); + + if (TREE_CODE (TREE_TYPE (expr)) == REFERENCE_TYPE) + break; + } + + if (!last_component) + return false; + + *expr0 = last_component; + return true; +} + +/* This handles references that have had convert_from_reference called on + them, and also those that haven't. */ + +static bool +omp_parse_ref (tree *expr0) +{ + tree expr = *expr0; + + if (TREE_CODE (TREE_TYPE (expr)) == REFERENCE_TYPE) + return true; + else if ((TREE_CODE (expr) == INDIRECT_REF + || (TREE_CODE (expr) == MEM_REF + && integer_zerop (TREE_OPERAND (expr, 1)))) + && TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0))) == REFERENCE_TYPE) + { + *expr0 = TREE_OPERAND (expr, 0); + return true; + } + + return false; +} + +static bool +omp_parse_pointer (tree *expr0, bool *has_offset) +{ + tree expr = *expr0; + + *has_offset = false; + + if ((TREE_CODE (expr) == INDIRECT_REF + || (TREE_CODE (expr) == MEM_REF + && integer_zerop (TREE_OPERAND (expr, 1)))) + && TREE_CODE (TREE_TYPE (TREE_OPERAND (expr, 0))) == POINTER_TYPE) + { + expr = TREE_OPERAND (expr, 0); + + /* The Fortran FE sometimes emits a no-op cast here. */ + STRIP_NOPS (expr); + + while (1) + { + if (TREE_CODE (expr) == COMPOUND_EXPR) + { + expr = TREE_OPERAND (expr, 1); + STRIP_NOPS (expr); + } + else if (TREE_CODE (expr) == SAVE_EXPR) + expr = TREE_OPERAND (expr, 0); + else if (TREE_CODE (expr) == POINTER_PLUS_EXPR) + { + *has_offset = true; + expr = TREE_OPERAND (expr, 0); + } + else + break; + } + + STRIP_NOPS (expr); + + *expr0 = expr; + return true; + } + + return false; +} + +static bool +omp_parse_access_method (tree *expr0, enum access_method_kinds *kind) +{ + tree expr = *expr0; + bool has_offset; + + if (omp_parse_ref (&expr)) + *kind = ACCESS_REF; + else if (omp_parse_pointer (&expr, &has_offset)) + { + if (omp_parse_ref (&expr)) + *kind = has_offset ? ACCESS_REF_TO_POINTER_OFFSET + : ACCESS_REF_TO_POINTER; + else + *kind = has_offset ? 
ACCESS_POINTER_OFFSET : ACCESS_POINTER; + } + else if (TREE_CODE (expr) == ARRAY_REF) + { + while (TREE_CODE (expr) == ARRAY_REF) + expr = TREE_OPERAND (expr, 0); + if (omp_parse_ref (&expr)) + *kind = ACCESS_INDEXED_REF_TO_ARRAY; + else + *kind = ACCESS_INDEXED_ARRAY; + } + else + *kind = ACCESS_DIRECT; + + STRIP_NOPS (expr); + + *expr0 = expr; + return true; +} + +static bool +omp_parse_access_methods (vec &addr_tokens, tree *expr0) +{ + tree expr = *expr0; + enum access_method_kinds kind; + tree am_expr; + + if (omp_parse_access_method (&expr, &kind)) + am_expr = expr; + + if (TREE_CODE (expr) == INDIRECT_REF + || TREE_CODE (expr) == MEM_REF + || TREE_CODE (expr) == ARRAY_REF) + omp_parse_access_methods (addr_tokens, &expr); + + addr_tokens.safe_push (new omp_addr_token (kind, am_expr)); + + *expr0 = expr; + return true; +} + +static bool omp_parse_structured_expr (vec &, tree *); + +static bool +omp_parse_structure_base (vec &addr_tokens, + tree *expr0, structure_base_kinds *kind, + vec &base_access_tokens, + bool allow_structured = true) +{ + tree expr = *expr0; + + if (allow_structured) + omp_parse_access_methods (base_access_tokens, &expr); + + if (DECL_P (expr)) + { + *kind = BASE_DECL; + return true; + } + + if (allow_structured && omp_parse_structured_expr (addr_tokens, &expr)) + { + *kind = BASE_COMPONENT_EXPR; + *expr0 = expr; + return true; + } + + *kind = BASE_ARBITRARY_EXPR; + *expr0 = expr; + return true; +} + +static bool +omp_parse_structured_expr (vec &addr_tokens, tree *expr0) +{ + tree expr = *expr0; + tree base_component = NULL_TREE; + structure_base_kinds struct_base_kind; + auto_vec base_access_tokens; + + if (omp_parse_component_selector (&expr)) + base_component = expr; + else + return false; + + gcc_assert (TREE_CODE (expr) == COMPONENT_REF); + expr = TREE_OPERAND (expr, 0); + + tree structure_base = expr; + + if (!omp_parse_structure_base (addr_tokens, &expr, &struct_base_kind, + base_access_tokens)) + return false; + + addr_tokens.safe_push (new omp_addr_token (STRUCTURE_BASE, struct_base_kind, + structure_base)); + addr_tokens.safe_splice (base_access_tokens); + addr_tokens.safe_push (new omp_addr_token (COMPONENT_SELECTOR, + base_component)); + + *expr0 = expr; + + return true; +} + +static bool +omp_parse_array_expr (vec &addr_tokens, tree *expr0) +{ + tree expr = *expr0; + structure_base_kinds s_kind; + auto_vec base_access_tokens; + + if (!omp_parse_structure_base (addr_tokens, &expr, &s_kind, + base_access_tokens, false)) + return false; + + addr_tokens.safe_push (new omp_addr_token (ARRAY_BASE, s_kind, expr)); + addr_tokens.safe_splice (base_access_tokens); + + *expr0 = expr; + return true; +} + +/* Return TRUE if the ACCESS_METHOD token at index 'i' has a further + ACCESS_METHOD chained after it (e.g., if we're processing an expression + containing multiple pointer indirections). */ + +bool +omp_access_chain_p (vec &addr_tokens, unsigned i) +{ + gcc_assert (addr_tokens[i]->type == ACCESS_METHOD); + return (i + 1 < addr_tokens.length () + && addr_tokens[i + 1]->type == ACCESS_METHOD); +} + +/* Return the address of the object accessed by the ACCESS_METHOD token + at 'i': either of the next access method's expr, or of EXPR if we're at + the end of the list of tokens. */ + +tree +omp_accessed_addr (vec &addr_tokens, unsigned i, tree expr) +{ + if (i + 1 < addr_tokens.length ()) + return build_fold_addr_expr (addr_tokens[i + 1]->expr); + else + return build_fold_addr_expr (expr); +} + +} /* namespace omp_addr_tokenizer. 
*/ + +bool +omp_parse_expr (vec &addr_tokens, tree expr) +{ + using namespace omp_addr_tokenizer; + auto_vec expr_access_tokens; + + if (!omp_parse_access_methods (expr_access_tokens, &expr)) + return false; + + if (omp_parse_structured_expr (addr_tokens, &expr)) + ; + else if (omp_parse_array_expr (addr_tokens, &expr)) + ; + else + return false; + + addr_tokens.safe_splice (expr_access_tokens); + + return true; +} + +DEBUG_FUNCTION void +debug_omp_tokenized_addr (vec &addr_tokens, + bool with_exprs) +{ + using namespace omp_addr_tokenizer; + const char *sep = with_exprs ? " " : ""; + + for (auto e : addr_tokens) + { + const char *pfx = ""; + + fputs (sep, stderr); + + switch (e->type) + { + case COMPONENT_SELECTOR: + fputs ("component_selector", stderr); + break; + case ACCESS_METHOD: + switch (e->u.access_kind) + { + case ACCESS_DIRECT: + fputs ("access_direct", stderr); + break; + case ACCESS_REF: + fputs ("access_ref", stderr); + break; + case ACCESS_POINTER: + fputs ("access_pointer", stderr); + break; + case ACCESS_POINTER_OFFSET: + fputs ("access_pointer_offset", stderr); + break; + case ACCESS_REF_TO_POINTER: + fputs ("access_ref_to_pointer", stderr); + break; + case ACCESS_REF_TO_POINTER_OFFSET: + fputs ("access_ref_to_pointer_offset", stderr); + break; + case ACCESS_INDEXED_ARRAY: + fputs ("access_indexed_array", stderr); + break; + case ACCESS_INDEXED_REF_TO_ARRAY: + fputs ("access_indexed_ref_to_array", stderr); + break; + } + break; + case ARRAY_BASE: + case STRUCTURE_BASE: + pfx = e->type == ARRAY_BASE ? "array_" : "struct_"; + switch (e->u.structure_base_kind) + { + case BASE_DECL: + fprintf (stderr, "%sbase_decl", pfx); + break; + case BASE_COMPONENT_EXPR: + fputs ("base_component_expr", stderr); + break; + case BASE_ARBITRARY_EXPR: + fprintf (stderr, "%sbase_arbitrary_expr", pfx); + break; + } + break; + } + if (with_exprs) + { + fputs (" [", stderr); + print_generic_expr (stderr, e->expr); + fputc (']', stderr); + sep = ",\n "; + } + else + sep = " "; + } + + fputs ("\n", stderr); +} + + #include "gt-omp-general.h" diff --git a/gcc/omp-general.h b/gcc/omp-general.h index 1b5455a0a8f..f9b7ef3426c 100644 --- a/gcc/omp-general.h +++ b/gcc/omp-general.h @@ -152,4 +152,73 @@ get_openacc_privatization_dump_flags () extern tree omp_build_component_ref (tree obj, tree field); +namespace omp_addr_tokenizer { + +/* These are the ways of accessing a variable that have special-case handling + in the middle end (gimplify, omp-lower, etc.). */ + +/* These are the kinds of access that an ACCESS_METHOD token can represent. */ + +enum access_method_kinds +{ + ACCESS_DIRECT, + ACCESS_REF, + ACCESS_POINTER, + ACCESS_REF_TO_POINTER, + ACCESS_POINTER_OFFSET, + ACCESS_REF_TO_POINTER_OFFSET, + ACCESS_INDEXED_ARRAY, + ACCESS_INDEXED_REF_TO_ARRAY +}; + +/* These are the kinds that a STRUCTURE_BASE or ARRAY_BASE (except + BASE_COMPONENT_EXPR) can represent. */ + +enum structure_base_kinds +{ + BASE_DECL, + BASE_COMPONENT_EXPR, + BASE_ARBITRARY_EXPR +}; + +/* The coarse type for an address token. These can have subtypes for + ARRAY_BASE or STRUCTURE_BASE (structure_base_kinds) or ACCESS_METHOD + (access_method_kinds). */ + +enum token_type +{ + ARRAY_BASE, + STRUCTURE_BASE, + COMPONENT_SELECTOR, + ACCESS_METHOD +}; + +/* The struct that forms a single token of an address expression as parsed by + omp_parse_expr. These are typically held in a vec after parsing. 
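+   For example, a caller might do something like
+
+     auto_vec<omp_addr_token *> addr_tokens;
+     if (omp_parse_expr (addr_tokens, expr))
+       ...
+
+   and then inspect each token's TYPE field (and U.ACCESS_KIND or
+   U.STRUCTURE_BASE_KIND, as appropriate).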
*/ + +struct omp_addr_token +{ + enum token_type type; + tree expr; + + union + { + access_method_kinds access_kind; + structure_base_kinds structure_base_kind; + } u; + + omp_addr_token (token_type, tree); + omp_addr_token (access_method_kinds, tree); + omp_addr_token (token_type, structure_base_kinds, tree); +}; + +extern bool omp_access_chain_p (vec &, unsigned); +extern tree omp_accessed_addr (vec &, unsigned, tree); + +} + +typedef omp_addr_tokenizer::omp_addr_token omp_addr_token; + +extern bool omp_parse_expr (vec &, tree); + #endif /* GCC_OMP_GENERAL_H */ diff --git a/gcc/omp-low.cc b/gcc/omp-low.cc index dc42c752017..67db528f252 100644 --- a/gcc/omp-low.cc +++ b/gcc/omp-low.cc @@ -1599,10 +1599,13 @@ scan_sharing_clauses (tree clauses, omp_context *ctx) { /* If this is an offloaded region, an attach operation should only exist when the pointer variable is mapped in a prior - clause. + clause. An exception is if we have a reference (to pointer): + in that case we should have mapped "*decl" in a previous + mapping instead of "decl". Skip the assertion in that case. If we had an error, we may not have attempted to sort clauses properly, so avoid the test. */ - if (is_gimple_omp_offloaded (ctx->stmt) + if (TREE_CODE (TREE_TYPE (decl)) != REFERENCE_TYPE + && is_gimple_omp_offloaded (ctx->stmt) && !seen_error ()) gcc_assert (maybe_lookup_decl (decl, ctx) diff --git a/gcc/testsuite/c-c++-common/gomp/clauses-2.c b/gcc/testsuite/c-c++-common/gomp/clauses-2.c index bbc8fb4e32b..8f98d57a312 100644 --- a/gcc/testsuite/c-c++-common/gomp/clauses-2.c +++ b/gcc/testsuite/c-c++-common/gomp/clauses-2.c @@ -11,7 +11,7 @@ foo (int *p, int q, struct S t, int i, int j, int k, int l) bar (p); #pragma omp target firstprivate (p), map (p[0]) /* { dg-error "appears more than once in data clauses" } */ bar (p); - #pragma omp target map (p[0]) map (p) /* { dg-error "appears both in data and map clauses" } */ + #pragma omp target map (p[0]) map (p) bar (p); #pragma omp target map (p) , map (p[0]) bar (p); diff --git a/gcc/testsuite/c-c++-common/gomp/target-50.c b/gcc/testsuite/c-c++-common/gomp/target-50.c index 41f1d37845c..a30a25e0893 100644 --- a/gcc/testsuite/c-c++-common/gomp/target-50.c +++ b/gcc/testsuite/c-c++-common/gomp/target-50.c @@ -17,7 +17,7 @@ int main() #pragma omp target map(tofrom: tmp->arr[0:10]) map(to: tmp->arr) { } -/* { dg-final { scan-tree-dump-times {map\(struct:\*tmp \[len: 1\]\) map\(to:tmp[._0-9]*->arr \[len: [0-9]+\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:tmp[._0-9]*->arr \[bias: 0\]\)} 2 "gimple" { target { ! { nvptx*-*-* amdgcn*-*-* } } } } } */ +/* { dg-final { scan-tree-dump-times {map\(struct:\*tmp \[len: 1\]\) map\(alloc:tmp[._0-9]*->arr \[len: [0-9]+\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:tmp[._0-9]*->arr \[bias: 0\]\)} 2 "gimple" { target { ! 
{ nvptx*-*-* amdgcn*-*-* } } } } } */ return 0; } diff --git a/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c b/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c index 3aa1a8fc55e..5ba1d7efe08 100644 --- a/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c +++ b/gcc/testsuite/c-c++-common/gomp/target-implicit-map-2.c @@ -49,4 +49,4 @@ main (void) /* { dg-final { scan-tree-dump {#pragma omp target num_teams.* map\(tofrom:a \[len: [0-9]+\]\[implicit\]\)} "gimple" } } */ -/* { dg-final { scan-tree-dump {#pragma omp target num_teams.* map\(tofrom:a \[len: [0-9]+\]\[implicit\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:a\.ptr \[bias: 0\]\)} "gimple" } } */ +/* { dg-final { scan-tree-dump {#pragma omp target num_teams.* map\(struct:a \[len: 1\]\) map\(alloc:a\.ptr \[len: 0\]\) map\(tofrom:\*_[0-9]+ \[len: [0-9]+\]\) map\(attach:a\.ptr \[bias: 0\]\)} "gimple" } } */ diff --git a/gcc/testsuite/g++.dg/gomp/static-component-1.C b/gcc/testsuite/g++.dg/gomp/static-component-1.C new file mode 100644 index 00000000000..c2f95933567 --- /dev/null +++ b/gcc/testsuite/g++.dg/gomp/static-component-1.C @@ -0,0 +1,23 @@ +/* { dg-do compile } */ + +/* Types with static members should be mappable. */ + +struct A { + static int x[10]; +}; + +struct B { + A a; +}; + +int +main (int argc, char *argv[]) +{ + B *b = new B; +#pragma omp target map(b->a) + ; + B bb; +#pragma omp target map(bb.a) + ; + delete b; +} diff --git a/gcc/testsuite/gcc.dg/gomp/target-3.c b/gcc/testsuite/gcc.dg/gomp/target-3.c index 3e7921270c9..3d5e05f8571 100644 --- a/gcc/testsuite/gcc.dg/gomp/target-3.c +++ b/gcc/testsuite/gcc.dg/gomp/target-3.c @@ -13,4 +13,4 @@ void foo (struct S *s) #pragma omp target enter data map (alloc: s->a, s->b) } -/* { dg-final { scan-tree-dump-times "map\\(struct:\\*s \\\[len: 2\\\]\\) map\\(alloc:s->a \\\[len: \[0-9\]+\\\]\\) map\\(alloc:s->b \\\[len: \[0-9\]+\\\]\\)" 2 "gimple" } } */ +/* { dg-final { scan-tree-dump-times "map\\(struct:\\*s \\\[len: 2\\\]\\) map\\(alloc:s\[\\._0-9\]+->a \\\[len: \[0-9\]+\\\]\\) map\\(alloc:s\[\\._0-9\]+->b \\\[len: \[0-9\]+\\\]\\)" 2 "gimple" } } */ diff --git a/gcc/tree.h b/gcc/tree.h index 95285e45fb7..91f40b2984f 100644 --- a/gcc/tree.h +++ b/gcc/tree.h @@ -1762,6 +1762,10 @@ class auto_suppress_location_wrappers NOTE: this is different than OMP_CLAUSE_MAP_IMPLICIT. */ #define OMP_CLAUSE_MAP_RUNTIME_IMPLICIT_P(NODE) \ (OMP_CLAUSE_SUBCODE_CHECK (NODE, OMP_CLAUSE_MAP)->base.deprecated_flag) +/* Nonzero for an attach/detach node whose decl was explicitly mapped on the + same directive. */ +#define OMP_CLAUSE_ATTACHMENT_MAPPING_ERASED(NODE) \ + TREE_STATIC (OMP_CLAUSE_SUBCODE_CHECK (NODE, OMP_CLAUSE_MAP)) /* Flag that 'OMP_CLAUSE_DECL (NODE)' is to be made addressable during OMP lowering. 
*/ diff --git a/libgomp/target.c b/libgomp/target.c index 57634839c8f..e5dec469519 100644 --- a/libgomp/target.c +++ b/libgomp/target.c @@ -718,7 +718,7 @@ gomp_map_fields_existing (struct target_mem_desc *tgt, cur_node.host_start = (uintptr_t) hostaddrs[i]; cur_node.host_end = cur_node.host_start + sizes[i]; - splay_tree_key n2 = splay_tree_lookup (mem_map, &cur_node); + splay_tree_key n2 = gomp_map_0len_lookup (mem_map, &cur_node); kind = get_kind (short_mapkind, kinds, i); implicit = get_implicit (short_mapkind, kinds, i); if (n2 @@ -815,8 +815,20 @@ gomp_attach_pointer (struct gomp_device_descr *devicep, if ((void *) target == NULL) { - gomp_mutex_unlock (&devicep->lock); - gomp_fatal ("attempt to attach null pointer"); + /* As a special case, allow attaching NULL host pointers. This + allows e.g. unassociated Fortran pointers to be mapped + properly. */ + data = 0; + + gomp_debug (1, + "%s: attaching NULL host pointer, target %p " + "(struct base %p)\n", __FUNCTION__, (void *) devptr, + (void *) (n->tgt->tgt_start + n->tgt_offset)); + + gomp_copy_host2dev (devicep, aq, (void *) devptr, (void *) &data, + sizeof (void *), true, cbufp); + + return; } s.host_start = target + bias; @@ -1073,7 +1085,8 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, tgt->list[i].key = NULL; if (!aq && gomp_to_device_kind_p (get_kind (short_mapkind, kinds, i) - & typemask)) + & typemask) + && sizes[i] != 0) gomp_coalesce_buf_add (&cbuf, tgt_size - cur_node.host_end + (uintptr_t) hostaddrs[i], @@ -1435,7 +1448,17 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, + sizes[last]; if (tgt->list[first].key != NULL) continue; + if (sizes[last] == 0) + cur_node.host_end++; n = splay_tree_lookup (mem_map, &cur_node); + if (sizes[last] == 0) + cur_node.host_end--; + if (n == NULL && cur_node.host_start == cur_node.host_end) + { + gomp_mutex_unlock (&devicep->lock); + gomp_fatal ("Struct pointer member not mapped (%p)", + (void*) hostaddrs[first]); + } if (n == NULL) { size_t align = (size_t) 1 << (kind >> rshift); diff --git a/libgomp/testsuite/libgomp.c++/baseptrs-3.C b/libgomp/testsuite/libgomp.c++/baseptrs-3.C new file mode 100644 index 00000000000..39a48a40920 --- /dev/null +++ b/libgomp/testsuite/libgomp.c++/baseptrs-3.C @@ -0,0 +1,275 @@ +#include +#include +#include + +struct sa0 +{ + int *ptr; +}; + +struct sb0 +{ + int arr[10]; +}; + +struct sc0 +{ + sa0 a; + sb0 b; + sc0 (sa0 &my_a, sb0 &my_b) : a(my_a), b(my_b) {} +}; + +void +foo0 () +{ + sa0 my_a; + sb0 my_b; + + my_a.ptr = (int *) malloc (sizeof (int) * 10); + sc0 my_c(my_a, my_b); + + memset (my_c.a.ptr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c.a.ptr, my_c.a.ptr[:10]) + { + for (int i = 0; i < 10; i++) + my_c.a.ptr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c.a.ptr[i] == i); + + memset (my_c.b.arr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c.b.arr[:10]) + { + for (int i = 0; i < 10; i++) + my_c.b.arr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c.b.arr[i] == i); + + free (my_a.ptr); +} + +struct sa +{ + int *ptr; +}; + +struct sb +{ + int arr[10]; +}; + +struct sc +{ + sa &a; + sb &b; + sc (sa &my_a, sb &my_b) : a(my_a), b(my_b) {} +}; + +void +foo () +{ + sa my_a; + sb my_b; + + my_a.ptr = (int *) malloc (sizeof (int) * 10); + sc my_c(my_a, my_b); + + memset (my_c.a.ptr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c.a.ptr, my_c.a.ptr[:10]) + { + for (int i = 0; i < 10; i++) + my_c.a.ptr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c.a.ptr[i] == i); + 
+ memset (my_c.b.arr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c.b.arr[:10]) + { + for (int i = 0; i < 10; i++) + my_c.b.arr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c.b.arr[i] == i); + + free (my_a.ptr); +} + +void +bar () +{ + sa my_a; + sb my_b; + + my_a.ptr = (int *) malloc (sizeof (int) * 10); + sc my_c(my_a, my_b); + sc &my_cref = my_c; + + memset (my_cref.a.ptr, 0, sizeof (int) * 10); + + #pragma omp target map (my_cref.a.ptr, my_cref.a.ptr[:10]) + { + for (int i = 0; i < 10; i++) + my_cref.a.ptr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_cref.a.ptr[i] == i); + + memset (my_cref.b.arr, 0, sizeof (int) * 10); + + #pragma omp target map (my_cref.b.arr[:10]) + { + for (int i = 0; i < 10; i++) + my_cref.b.arr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_cref.b.arr[i] == i); + + free (my_a.ptr); +} + +struct scp0 +{ + sa *a; + sb *b; + scp0 (sa *my_a, sb *my_b) : a(my_a), b(my_b) {} +}; + +void +foop0 () +{ + sa *my_a = new sa; + sb *my_b = new sb; + + my_a->ptr = new int[10]; + scp0 *my_c = new scp0(my_a, my_b); + + memset (my_c->a->ptr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c->a, my_c->a[:1], my_c->a->ptr, my_c->a->ptr[:10]) + { + for (int i = 0; i < 10; i++) + my_c->a->ptr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c->a->ptr[i] == i); + + memset (my_c->b->arr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c->b, my_c->b[:1], my_c->b->arr[:10]) + { + for (int i = 0; i < 10; i++) + my_c->b->arr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c->b->arr[i] == i); + + delete[] my_a->ptr; + delete my_a; + delete my_b; +} + +struct scp +{ + sa *&a; + sb *&b; + scp (sa *&my_a, sb *&my_b) : a(my_a), b(my_b) {} +}; + +void +foop () +{ + sa *my_a = new sa; + sb *my_b = new sb; + + my_a->ptr = new int[10]; + scp *my_c = new scp(my_a, my_b); + + memset (my_c->a->ptr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c->a, my_c->a[:1], my_c->a->ptr, my_c->a->ptr[:10]) + { + for (int i = 0; i < 10; i++) + my_c->a->ptr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c->a->ptr[i] == i); + + memset (my_c->b->arr, 0, sizeof (int) * 10); + + #pragma omp target map (my_c->b, my_c->b[:1], my_c->b->arr[:10]) + { + for (int i = 0; i < 10; i++) + my_c->b->arr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_c->b->arr[i] == i); + + delete[] my_a->ptr; + delete my_a; + delete my_b; +} + +void +barp () +{ + sa *my_a = new sa; + sb *my_b = new sb; + + my_a->ptr = new int[10]; + scp *my_c = new scp(my_a, my_b); + scp *&my_cref = my_c; + + memset (my_cref->a->ptr, 0, sizeof (int) * 10); + + #pragma omp target map (my_cref->a, my_cref->a[:1], my_cref->a->ptr, \ + my_cref->a->ptr[:10]) + { + for (int i = 0; i < 10; i++) + my_cref->a->ptr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_cref->a->ptr[i] == i); + + memset (my_cref->b->arr, 0, sizeof (int) * 10); + + #pragma omp target map (my_cref->b, my_cref->b[:1], my_cref->b->arr[:10]) + { + for (int i = 0; i < 10; i++) + my_cref->b->arr[i] = i; + } + + for (int i = 0; i < 10; i++) + assert (my_cref->b->arr[i] == i); + + delete my_a->ptr; + delete my_a; + delete my_b; +} + +int main (int argc, char *argv[]) +{ + foo0 (); + foo (); + bar (); + foop0 (); + foop (); + barp (); + return 0; +} diff --git a/libgomp/testsuite/libgomp.c++/baseptrs-4.C b/libgomp/testsuite/libgomp.c++/baseptrs-4.C new file mode 100644 index 00000000000..196029ac186 --- /dev/null +++ b/libgomp/testsuite/libgomp.c++/baseptrs-4.C @@ -0,0 +1,3154 @@ +// { dg-do 
run } + +#include +#include + +#define MAP_DECLS + +#define NONREF_DECL_BASE +#define REF_DECL_BASE +#define PTR_DECL_BASE +#define REF2PTR_DECL_BASE + +#define ARRAY_DECL_BASE +// Needs map clause "lvalue"-parsing support. +//#define REF2ARRAY_DECL_BASE +#define PTR_OFFSET_DECL_BASE +// Needs map clause "lvalue"-parsing support. +//#define REF2PTR_OFFSET_DECL_BASE + +#define MAP_SECTIONS + +#define NONREF_DECL_MEMBER_SLICE +#define NONREF_DECL_MEMBER_SLICE_BASEPTR +#define REF_DECL_MEMBER_SLICE +#define REF_DECL_MEMBER_SLICE_BASEPTR +#define PTR_DECL_MEMBER_SLICE +#define PTR_DECL_MEMBER_SLICE_BASEPTR +#define REF2PTR_DECL_MEMBER_SLICE +#define REF2PTR_DECL_MEMBER_SLICE_BASEPTR + +#define ARRAY_DECL_MEMBER_SLICE +#define ARRAY_DECL_MEMBER_SLICE_BASEPTR +// Needs map clause "lvalue"-parsing support. +//#define REF2ARRAY_DECL_MEMBER_SLICE +//#define REF2ARRAY_DECL_MEMBER_SLICE_BASEPTR +#define PTR_OFFSET_DECL_MEMBER_SLICE +#define PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR +// Needs map clause "lvalue"-parsing support. +//#define REF2PTR_OFFSET_DECL_MEMBER_SLICE +//#define REF2PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR + +#define PTRARRAY_DECL_MEMBER_SLICE +#define PTRARRAY_DECL_MEMBER_SLICE_BASEPTR +// Needs map clause "lvalue"-parsing support. +//#define REF2PTRARRAY_DECL_MEMBER_SLICE +//#define REF2PTRARRAY_DECL_MEMBER_SLICE_BASEPTR +#define PTRPTR_OFFSET_DECL_MEMBER_SLICE +#define PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR +// Needs map clause "lvalue"-parsing support. +//#define REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE +//#define REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR + +#define NONREF_COMPONENT_BASE +#define NONREF_COMPONENT_MEMBER_SLICE +#define NONREF_COMPONENT_MEMBER_SLICE_BASEPTR + +#define REF_COMPONENT_BASE +#define REF_COMPONENT_MEMBER_SLICE +#define REF_COMPONENT_MEMBER_SLICE_BASEPTR + +#define PTR_COMPONENT_BASE +#define PTR_COMPONENT_MEMBER_SLICE +#define PTR_COMPONENT_MEMBER_SLICE_BASEPTR + +#define REF2PTR_COMPONENT_BASE +#define REF2PTR_COMPONENT_MEMBER_SLICE +#define REF2PTR_COMPONENT_MEMBER_SLICE_BASEPTR + +#ifdef MAP_DECLS +void +map_decls (void) +{ + int x = 0; + int &y = x; + int arr[4]; + int (&arrref)[4] = arr; + int *z = &arr[0]; + int *&t = z; + + memset (arr, 0, sizeof arr); + + #pragma omp target map(x) + { + x++; + } + + #pragma omp target map(y) + { + y++; + } + + assert (x == 2); + assert (y == 2); + + /* "A variable that is of type pointer is treated as if it is the base + pointer of a zero-length array section that appeared as a list item in a + map clause." */ + #pragma omp target map(z) + { + z++; + } + + /* "A variable that is of type reference to pointer is treated as if it had + appeared in a map clause as a zero-length array section." + + The pointer here is *not* associated with a target address, so we're not + disallowed from modifying it. 
*/ + #pragma omp target map(t) + { + t++; + } + + assert (z == &arr[2]); + assert (t == &arr[2]); + + #pragma omp target map(arr) + { + arr[2]++; + } + + #pragma omp target map(arrref) + { + arrref[2]++; + } + + assert (arr[2] == 2); + assert (arrref[2] == 2); +} +#endif + +struct S { + int a; + int &b; + int *c; + int *&d; + int e[4]; + int (&f)[4]; + + S(int a1, int &b1, int *c1, int *&d1) : + a(a1), b(b1), c(c1), d(d1), f(e) + { + memset (e, 0, sizeof e); + } +}; + +#ifdef NONREF_DECL_BASE +void +nonref_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys(a, b, &c, d); + + #pragma omp target map(mys.a) + { + mys.a++; + } + + #pragma omp target map(mys.b) + { + mys.b++; + } + + assert (mys.a == 1); + assert (mys.b == 1); + + #pragma omp target map(mys.c) + { + mys.c++; + } + + #pragma omp target map(mys.d) + { + mys.d++; + } + + assert (mys.c == &c + 1); + assert (mys.d == &c + 1); + + #pragma omp target map(mys.e) + { + mys.e[0]++; + } + + #pragma omp target map(mys.f) + { + mys.f[0]++; + } + + assert (mys.e[0] == 2); + assert (mys.f[0] == 2); +} +#endif + +#ifdef REF_DECL_BASE +void +ref_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys_orig(a, b, &c, d); + S &mys = mys_orig; + + #pragma omp target map(mys.a) + { + mys.a++; + } + + #pragma omp target map(mys.b) + { + mys.b++; + } + + assert (mys.a == 1); + assert (mys.b == 1); + + #pragma omp target map(mys.c) + { + mys.c++; + } + + #pragma omp target map(mys.d) + { + mys.d++; + } + + assert (mys.c == &c + 1); + assert (mys.d == &c + 1); + + #pragma omp target map(mys.e) + { + mys.e[0]++; + } + + #pragma omp target map(mys.f) + { + mys.f[0]++; + } + + assert (mys.e[0] == 2); + assert (mys.f[0] == 2); +} +#endif + +#ifdef PTR_DECL_BASE +void +ptr_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys_orig(a, b, &c, d); + S *mys = &mys_orig; + + #pragma omp target map(mys->a) + { + mys->a++; + } + + #pragma omp target map(mys->b) + { + mys->b++; + } + + assert (mys->a == 1); + assert (mys->b == 1); + + #pragma omp target map(mys->c) + { + mys->c++; + } + + #pragma omp target map(mys->d) + { + mys->d++; + } + + assert (mys->c == &c + 1); + assert (mys->d == &c + 1); + + #pragma omp target map(mys->e) + { + mys->e[0]++; + } + + #pragma omp target map(mys->f) + { + mys->f[0]++; + } + + assert (mys->e[0] == 2); + assert (mys->f[0] == 2); +} +#endif + +#ifdef REF2PTR_DECL_BASE +void +ref2ptr_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys_orig(a, b, &c, d); + S *mysp = &mys_orig; + S *&mys = mysp; + + #pragma omp target map(mys->a) + { + mys->a++; + } + + #pragma omp target map(mys->b) + { + mys->b++; + } + + assert (mys->a == 1); + assert (mys->b == 1); + + #pragma omp target map(mys->c) + { + mys->c++; + } + + #pragma omp target map(mys->d) + { + mys->d++; + } + + assert (mys->c == &c + 1); + assert (mys->d == &c + 1); + + #pragma omp target map(mys->e) + { + mys->e[0]++; + } + + #pragma omp target map(mys->f) + { + mys->f[0]++; + } + + assert (mys->e[0] == 2); + assert (mys->f[0] == 2); +} +#endif + +#ifdef ARRAY_DECL_BASE +void +array_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys[4] = + { + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d) + }; + + #pragma omp target map(mys[2].a) + { + mys[2].a++; + } + + #pragma omp target map(mys[2].b) + { + mys[2].b++; + } + + assert (mys[2].a == 1); + assert (mys[2].b == 1); + + #pragma omp target map(mys[2].c) + { + mys[2].c++; + } + + #pragma omp target map(mys[2].d) + { + mys[2].d++; + } + + assert (mys[2].c == &c + 1); + assert 
(mys[2].d == &c + 1); + + #pragma omp target map(mys[2].e) + { + mys[2].e[0]++; + } + + #pragma omp target map(mys[2].f) + { + mys[2].f[0]++; + } + + assert (mys[2].e[0] == 2); + assert (mys[2].f[0] == 2); +} +#endif + +#ifdef REF2ARRAY_DECL_BASE +void +ref2array_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys_orig[4] = + { + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d) + }; + S (&mys)[4] = mys_orig; + + #pragma omp target map(mys[2].a) + { + mys[2].a++; + } + + #pragma omp target map(mys[2].b) + { + mys[2].b++; + } + + assert (mys[2].a == 1); + assert (mys[2].b == 1); + + #pragma omp target map(mys[2].c) + { + mys[2].c++; + } + + #pragma omp target map(mys[2].d) + { + mys[2].d++; + } + + assert (mys[2].c == &c + 1); + assert (mys[2].d == &c + 1); + + #pragma omp target map(mys[2].e) + { + mys[2].e[0]++; + } + + #pragma omp target map(mys[2].f) + { + mys[2].f[0]++; + } + + assert (mys[2].e[0] == 2); + assert (mys[2].f[0] == 2); +} +#endif + +#ifdef PTR_OFFSET_DECL_BASE +void +ptr_offset_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys_orig[4] = + { + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d) + }; + S *mys = &mys_orig[0]; + + #pragma omp target map(mys[2].a) + { + mys[2].a++; + } + + #pragma omp target map(mys[2].b) + { + mys[2].b++; + } + + assert (mys[2].a == 1); + assert (mys[2].b == 1); + + #pragma omp target map(mys[2].c) + { + mys[2].c++; + } + + #pragma omp target map(mys[2].d) + { + mys[2].d++; + } + + assert (mys[2].c == &c + 1); + assert (mys[2].d == &c + 1); + + #pragma omp target map(mys[2].e) + { + mys[2].e[0]++; + } + + #pragma omp target map(mys[2].f) + { + mys[2].f[0]++; + } + + assert (mys[2].e[0] == 2); + assert (mys[2].f[0] == 2); +} +#endif + +#ifdef REF2PTR_OFFSET_DECL_BASE +void +ref2ptr_offset_decl_base (void) +{ + int a = 0, b = 0, c, *d = &c; + S mys_orig[4] = + { + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d), + S(a, b, &c, d) + }; + S *mys_ptr = &mys_orig[0]; + S *&mys = mys_ptr; + + #pragma omp target map(mys[2].a) + { + mys[2].a++; + } + + #pragma omp target map(mys[2].b) + { + mys[2].b++; + } + + assert (mys[2].a == 1); + assert (mys[2].b == 1); + + #pragma omp target map(mys[2].c) + { + mys[2].c++; + } + + #pragma omp target map(mys[2].d) + { + mys[2].d++; + } + + assert (mys[2].c == &c + 1); + assert (mys[2].d == &c + 1); + + #pragma omp target map(mys[2].e) + { + mys[2].e[0]++; + } + + #pragma omp target map(mys[2].f) + { + mys[2].f[0]++; + } + + assert (mys[2].e[0] == 2); + assert (mys[2].f[0] == 2); +} +#endif + +#ifdef MAP_SECTIONS +void +map_sections (void) +{ + int arr[10]; + int *ptr; + int (&arrref)[10] = arr; + int *&ptrref = ptr; + + ptr = new int[10]; + memset (ptr, 0, sizeof (int) * 10); + memset (arr, 0, sizeof (int) * 10); + + #pragma omp target map(arr[0:10]) + { + arr[2]++; + } + + #pragma omp target map(ptr[0:10]) + { + ptr[2]++; + } + + #pragma omp target map(arrref[0:10]) + { + arrref[2]++; + } + + #pragma omp target map(ptrref[0:10]) + { + ptrref[2]++; + } + + assert (arr[2] == 2); + assert (ptr[2] == 2); + + delete ptr; +} +#endif + +struct T { + int a[10]; + int (&b)[10]; + int *c; + int *&d; + + T(int (&b1)[10], int *c1, int *&d1) : b(b1), c(c1), d(d1) + { + memset (a, 0, sizeof a); + } +}; + +#ifdef NONREF_DECL_MEMBER_SLICE +void +nonref_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt(c, &c[0], d); + + memset (c, 0, sizeof c); + + #pragma omp target map(myt.a[0:10]) + { + myt.a[2]++; + } + + #pragma omp target map(myt.b[0:10]) + { + 
myt.b[2]++; + } + + #pragma omp target enter data map(to: myt.c) + + #pragma omp target map(myt.c[0:10]) + { + myt.c[2]++; + } + + #pragma omp target exit data map(release: myt.c) + + #pragma omp target enter data map(to: myt.d) + + #pragma omp target map(myt.d[0:10]) + { + myt.d[2]++; + } + + #pragma omp target exit data map(from: myt.d) + + assert (myt.a[2] == 1); + assert (myt.b[2] == 3); + assert (myt.c[2] == 3); + assert (myt.d[2] == 3); +} +#endif + +#ifdef NONREF_DECL_MEMBER_SLICE_BASEPTR +void +nonref_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt(c, &c[0], d); + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt.c) map(myt.c[0:10]) + { + myt.c[2]++; + } + + #pragma omp target map(to:myt.d) map(myt.d[0:10]) + { + myt.d[2]++; + } + + assert (myt.c[2] == 2); + assert (myt.d[2] == 2); +} +#endif + +#ifdef REF_DECL_MEMBER_SLICE +void +ref_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T &myt = myt_real; + + memset (c, 0, sizeof c); + + #pragma omp target map(myt.a[0:10]) + { + myt.a[2]++; + } + + #pragma omp target map(myt.b[0:10]) + { + myt.b[2]++; + } + + #pragma omp target enter data map(to: myt.c) + + #pragma omp target map(myt.c[0:10]) + { + myt.c[2]++; + } + + #pragma omp target exit data map(release: myt.c) + + #pragma omp target enter data map(to: myt.d) + + #pragma omp target map(myt.d[0:10]) + { + myt.d[2]++; + } + + #pragma omp target exit data map(release: myt.d) + + assert (myt.a[2] == 1); + assert (myt.b[2] == 3); + assert (myt.c[2] == 3); + assert (myt.d[2] == 3); +} +#endif + +#ifdef REF_DECL_MEMBER_SLICE_BASEPTR +void +ref_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T &myt = myt_real; + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt.c) map(myt.c[0:10]) + { + myt.c[2]++; + } + + #pragma omp target map(to:myt.d) map(myt.d[0:10]) + { + myt.d[2]++; + } + + assert (myt.c[2] == 2); + assert (myt.d[2] == 2); +} +#endif + +#ifdef PTR_DECL_MEMBER_SLICE +void +ptr_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt = &myt_real; + + memset (c, 0, sizeof c); + + #pragma omp target enter data map(to: myt) + + #pragma omp target map(myt->a[0:10]) + { + myt->a[2]++; + } + + #pragma omp target map(myt->b[0:10]) + { + myt->b[2]++; + } + + #pragma omp target enter data map(to: myt->c) + + #pragma omp target map(myt->c[0:10]) + { + myt->c[2]++; + } + + #pragma omp target exit data map(release: myt->c) + + #pragma omp target enter data map(to: myt->d) + + #pragma omp target map(myt->d[0:10]) + { + myt->d[2]++; + } + + #pragma omp target exit data map(release: myt, myt->d) + + assert (myt->a[2] == 1); + assert (myt->b[2] == 3); + assert (myt->c[2] == 3); + assert (myt->d[2] == 3); +} +#endif + +#ifdef PTR_DECL_MEMBER_SLICE_BASEPTR +void +ptr_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt = &myt_real; + + memset (c, 0, sizeof c); + + // These ones have an implicit firstprivate for 'myt'. + #pragma omp target map(to:myt->c) map(myt->c[0:10]) + { + myt->c[2]++; + } + + #pragma omp target map(to:myt->d) map(myt->d[0:10]) + { + myt->d[2]++; + } + + // These ones have an explicit "TO" mapping for 'myt'. 
+ #pragma omp target map(to:myt) map(to:myt->c) map(myt->c[0:10]) + { + myt->c[2]++; + } + + #pragma omp target map(to:myt) map(to:myt->d) map(myt->d[0:10]) + { + myt->d[2]++; + } + + assert (myt->c[2] == 4); + assert (myt->d[2] == 4); +} +#endif + +#ifdef REF2PTR_DECL_MEMBER_SLICE +void +ref2ptr_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptr = &myt_real; + T *&myt = myt_ptr; + + memset (c, 0, sizeof c); + + #pragma omp target enter data map(to: myt) + + #pragma omp target map(myt->a[0:10]) + { + myt->a[2]++; + } + + #pragma omp target map(myt->b[0:10]) + { + myt->b[2]++; + } + + #pragma omp target enter data map(to: myt->c) + + #pragma omp target map(myt->c[0:10]) + { + myt->c[2]++; + } + + #pragma omp target exit data map(release: myt->c) + + #pragma omp target enter data map(to: myt->d) + + #pragma omp target map(myt->d[0:10]) + { + myt->d[2]++; + } + + #pragma omp target exit data map(from: myt, myt->d) + + assert (myt->a[2] == 1); + assert (myt->b[2] == 3); + assert (myt->c[2] == 3); + assert (myt->d[2] == 3); +} +#endif + +#ifdef REF2PTR_DECL_MEMBER_SLICE_BASEPTR +void +ref2ptr_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptr = &myt_real; + T *&myt = myt_ptr; + + memset (c, 0, sizeof c); + + // These ones have an implicit firstprivate for 'myt'. + #pragma omp target map(to:myt->c) map(myt->c[0:10]) + { + myt->c[2]++; + } + + #pragma omp target map(to:myt->d) map(myt->d[0:10]) + { + myt->d[2]++; + } + + // These ones have an explicit "TO" mapping for 'myt'. + #pragma omp target map(to:myt) map(to:myt->c) map(myt->c[0:10]) + { + myt->c[2]++; + } + + #pragma omp target map(to:myt) map(to:myt->d) map(myt->d[0:10]) + { + myt->d[2]++; + } + + assert (myt->c[2] == 4); + assert (myt->d[2] == 4); +} +#endif + +#ifdef ARRAY_DECL_MEMBER_SLICE +void +array_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + + memset (c, 0, sizeof c); + + #pragma omp target map(myt[2].a[0:10]) + { + myt[2].a[2]++; + } + + #pragma omp target map(myt[2].b[0:10]) + { + myt[2].b[2]++; + } + + #pragma omp target enter data map(to: myt[2].c) + + #pragma omp target map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target exit data map(release: myt[2].c) + + #pragma omp target enter data map(to: myt[2].d) + + #pragma omp target map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + #pragma omp target exit data map(release: myt[2].d) + + assert (myt[2].a[2] == 1); + assert (myt[2].b[2] == 3); + assert (myt[2].c[2] == 3); + assert (myt[2].d[2] == 3); +} +#endif + +#ifdef ARRAY_DECL_MEMBER_SLICE_BASEPTR +void +array_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + assert (myt[2].c[2] == 2); + assert (myt[2].d[2] == 2); +} +#endif + +#ifdef REF2ARRAY_DECL_MEMBER_SLICE +void +ref2array_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + T (&myt)[4] = myt_real; + + memset (c, 0, sizeof c); + + #pragma omp target map(myt[2].a[0:10]) + { + myt[2].a[2]++; + } + + #pragma omp target 
map(myt[2].b[0:10]) + { + myt[2].b[2]++; + } + + #pragma omp target enter data map(to: myt[2].c) + + #pragma omp target map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target exit data map(release: myt[2].c) + + #pragma omp target enter data map(to: myt[2].d) + + #pragma omp target map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + #pragma omp target exit data map(release: myt[2].d) + + assert (myt[2].a[2] == 1); + assert (myt[2].b[2] == 3); + assert (myt[2].c[2] == 3); + assert (myt[2].d[2] == 3); +} +#endif + +#ifdef REF2ARRAY_DECL_MEMBER_SLICE_BASEPTR +void +ref2array_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + T (&myt)[4] = myt_real; + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + assert (myt[2].c[2] == 2); + assert (myt[2].d[2] == 2); +} +#endif + +#ifdef PTR_OFFSET_DECL_MEMBER_SLICE +void +ptr_offset_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + T *myt = &myt_real[0]; + + memset (c, 0, sizeof c); + + #pragma omp target map(myt[2].a[0:10]) + { + myt[2].a[2]++; + } + + #pragma omp target map(myt[2].b[0:10]) + { + myt[2].b[2]++; + } + + #pragma omp target enter data map(to: myt[2].c) + + #pragma omp target map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target exit data map(release: myt[2].c) + + #pragma omp target enter data map(to: myt[2].d) + + #pragma omp target map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + #pragma omp target exit data map(release: myt[2].d) + + assert (myt[2].a[2] == 1); + assert (myt[2].b[2] == 3); + assert (myt[2].c[2] == 3); + assert (myt[2].d[2] == 3); +} +#endif + +#ifdef PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR +void +ptr_offset_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + T *myt = &myt_real[0]; + + memset (c, 0, sizeof c); + + /* Implicit 'myt'. */ + #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + /* Explicit 'to'-mapped 'myt'. 
*/ + #pragma omp target map(to:myt) map(to:myt[2].c) map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target map(to:myt) map(to:myt[2].d) map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + assert (myt[2].c[2] == 4); + assert (myt[2].d[2] == 4); +} +#endif + +#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE +void +ref2ptr_offset_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + T *myt_ptr = &myt_real[0]; + T *&myt = myt_ptr; + + memset (c, 0, sizeof c); + + #pragma omp target map(myt[2].a[0:10]) + { + myt[2].a[2]++; + } + + #pragma omp target map(myt[2].b[0:10]) + { + myt[2].b[2]++; + } + + #pragma omp target enter data map(to: myt[2].c) + + #pragma omp target map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target exit data map(release: myt[2].c) + + #pragma omp target enter data map(to: myt[2].d) + + #pragma omp target map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + #pragma omp target exit data map(release: myt[2].d) + + assert (myt[2].a[2] == 1); + assert (myt[2].b[2] == 3); + assert (myt[2].c[2] == 3); + assert (myt[2].d[2] == 3); +} +#endif + +#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR +void +ref2ptr_offset_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real[4] = + { + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d), + T (c, &c[0], d) + }; + T *myt_ptr = &myt_real[0]; + T *&myt = myt_ptr; + + memset (c, 0, sizeof c); + + /* Implicit 'myt'. */ + #pragma omp target map(to:myt[2].c) map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target map(to:myt[2].d) map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + /* Explicit 'to'-mapped 'myt'. */ + #pragma omp target map(to:myt) map(to:myt[2].c) map(myt[2].c[0:10]) + { + myt[2].c[2]++; + } + + #pragma omp target map(to:myt) map(to:myt[2].d) map(myt[2].d[0:10]) + { + myt[2].d[2]++; + } + + assert (myt[2].c[2] == 4); + assert (myt[2].d[2] == 4); +} +#endif + +#ifdef PTRARRAY_DECL_MEMBER_SLICE +void +ptrarray_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt[4] = + { + &myt_real, + &myt_real, + &myt_real, + &myt_real + }; + + memset (c, 0, sizeof c); + + #pragma omp target enter data map(to: myt[2]) + + #pragma omp target map(myt[2]->a[0:10]) + { + myt[2]->a[2]++; + } + + #pragma omp target map(myt[2]->b[0:10]) + { + myt[2]->b[2]++; + } + + #pragma omp target enter data map(to: myt[2]->c) + + #pragma omp target map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target exit data map(from: myt[2]->c) + + #pragma omp target enter data map(to: myt[2]->d) + + #pragma omp target map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + #pragma omp target exit data map(from: myt[2]->d) + + #pragma omp target exit data map(release: myt[2]) + + assert (myt[2]->a[2] == 1); + assert (myt[2]->b[2] == 3); + assert (myt[2]->c[2] == 3); + assert (myt[2]->d[2] == 3); +} +#endif + +#ifdef PTRARRAY_DECL_MEMBER_SLICE_BASEPTR +void +ptrarray_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt[4] = + { + &myt_real, + &myt_real, + &myt_real, + &myt_real + }; + + memset (c, 0, sizeof c); + + // Implicit 'myt' + #pragma omp target map(to: myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to: myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + // One element of 'myt' + #pragma omp target map(to:myt[2], myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } 
+ + #pragma omp target map(to:myt[2], myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + // Explicit map of all of 'myt' + #pragma omp target map(to:myt, myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to:myt, myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + // Explicit map slice of 'myt' + #pragma omp target map(to:myt[1:3], myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to:myt[1:3], myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + assert (myt[2]->c[2] == 8); + assert (myt[2]->d[2] == 8); +} +#endif + +#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE +void +ref2ptrarray_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptrarr[4] = + { + &myt_real, + &myt_real, + &myt_real, + &myt_real + }; + T *(&myt)[4] = myt_ptrarr; + + memset (c, 0, sizeof c); + + #pragma omp target enter data map(to: myt[2]) + + #pragma omp target map(myt[2]->a[0:10]) + { + myt[2]->a[2]++; + } + + #pragma omp target map(myt[2]->b[0:10]) + { + myt[2]->b[2]++; + } + + #pragma omp target enter data map(to: myt[2]->c) + + #pragma omp target map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target exit data map(release: myt[2]->c) + + #pragma omp target enter data map(to: myt[2]->d) + + #pragma omp target map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + #pragma omp target exit data map(release: myt[2]->d) + + #pragma omp target exit data map(release: myt[2]) + + assert (myt[2]->a[2] == 1); + assert (myt[2]->b[2] == 3); + assert (myt[2]->c[2] == 3); + assert (myt[2]->d[2] == 3); +} +#endif + +#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE_BASEPTR +void +ref2ptrarray_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptrarr[4] = + { + &myt_real, + &myt_real, + &myt_real, + &myt_real + }; + T *(&myt)[4] = myt_ptrarr; + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt[2], myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to:myt[2], myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + #pragma omp target map(to:myt, myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to:myt, myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + assert (myt[2]->c[2] == 4); + assert (myt[2]->d[2] == 4); +} +#endif + +#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE +void +ptrptr_offset_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptrarr[4] = + { + &myt_real, + &myt_real, + &myt_real, + &myt_real + }; + T **myt = &myt_ptrarr[0]; + + memset (c, 0, sizeof c); + + #pragma omp target enter data map(to: myt[0:3]) + + /* NOTE: For the implicit firstprivate 'myt' to work, the zeroth element of + myt[] must be mapped above -- otherwise the zero-length array section + lookup fails. 
*/ + #pragma omp target map(myt[2]->a[0:10]) + { + myt[2]->a[2]++; + } + + #pragma omp target map(myt[2]->b[0:10]) + { + myt[2]->b[2]++; + } + + #pragma omp target enter data map(to: myt[2]->c) + + #pragma omp target map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target exit data map(from: myt[2]->c) + + #pragma omp target enter data map(to: myt[2]->d) + + #pragma omp target map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + #pragma omp target exit data map(from: myt[0:3], myt[2]->d) + + assert (myt[2]->a[2] == 1); + assert (myt[2]->b[2] == 3); + assert (myt[2]->c[2] == 3); + assert (myt[2]->d[2] == 3); +} +#endif + +#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR +void +ptrptr_offset_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptrarr[4] = + { + 0, + 0, + 0, + &myt_real + }; + T **myt = &myt_ptrarr[0]; + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt[3], myt[3]->c) map(myt[3]->c[0:10]) + { + myt[3]->c[2]++; + } + + #pragma omp target map(to:myt[3], myt[3]->d) map(myt[3]->d[0:10]) + { + myt[3]->d[2]++; + } + + #pragma omp target map(to:myt, myt[3], myt[3]->c) map(myt[3]->c[0:10]) + { + myt[3]->c[2]++; + } + + #pragma omp target map(to:myt, myt[3], myt[3]->d) map(myt[3]->d[0:10]) + { + myt[3]->d[2]++; + } + + assert (myt[3]->c[2] == 4); + assert (myt[3]->d[2] == 4); +} +#endif + +#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE +void +ref2ptrptr_offset_decl_member_slice (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptrarr[4] = + { + 0, + 0, + &myt_real, + 0 + }; + T **myt_ptrptr = &myt_ptrarr[0]; + T **&myt = myt_ptrptr; + + memset (c, 0, sizeof c); + + #pragma omp target enter data map(to: myt[0:3]) + + #pragma omp target map(myt[2]->a[0:10]) + { + myt[2]->a[2]++; + } + + #pragma omp target map(myt[2]->b[0:10]) + { + myt[2]->b[2]++; + } + + #pragma omp target enter data map(to:myt[2]->c) + + #pragma omp target map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target exit data map(release:myt[2]->c) + + #pragma omp target enter data map(to:myt[2]->d) + + #pragma omp target map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + #pragma omp target exit data map(release: myt[0:3], myt[2]->d) + + assert (myt[2]->a[2] == 1); + assert (myt[2]->b[2] == 3); + assert (myt[2]->c[2] == 3); + assert (myt[2]->d[2] == 3); +} +#endif + +#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR +void +ref2ptrptr_offset_decl_member_slice_baseptr (void) +{ + int c[10]; + int *d = &c[0]; + T myt_real(c, &c[0], d); + T *myt_ptrarr[4] = + { + 0, + 0, + &myt_real, + 0 + }; + T **myt_ptrptr = &myt_ptrarr[0]; + T **&myt = myt_ptrptr; + + memset (c, 0, sizeof c); + + #pragma omp target map(to:myt[2], myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to:myt[2], myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + #pragma omp target map(to:myt, myt[2], myt[2]->c) map(myt[2]->c[0:10]) + { + myt[2]->c[2]++; + } + + #pragma omp target map(to:myt, myt[2], myt[2]->d) map(myt[2]->d[0:10]) + { + myt[2]->d[2]++; + } + + assert (myt[2]->c[2] == 4); + assert (myt[2]->d[2] == 4); +} +#endif + +struct U +{ + S s1; + T t1; + S &s2; + T &t2; + S *s3; + T *t3; + S *&s4; + T *&t4; + + U(S &sptr1, T &tptr1, S &sptr2, T &tptr2, S *sptr3, T *tptr3, + S *&sptr4, T *&tptr4) + : s1(sptr1), t1(tptr1), s2(sptr2), t2(tptr2), s3(sptr3), t3(tptr3), + s4(sptr4), t4(tptr4) + { + } +}; + +#define INIT_S(N) \ + int a##N = 0, b##N = 0, c##N = 0, d##N = 0; \ + int *d##N##ptr = &d##N; \ + S 
s##N(a##N, b##N, &c##N, d##N##ptr) + +#define INIT_T(N) \ + int arr##N[10]; \ + int *ptr##N = &arr##N[0]; \ + T t##N(arr##N, &arr##N[0], ptr##N); \ + memset (arr##N, 0, sizeof arr##N) + +#define INIT_ST \ + INIT_S(1); \ + INIT_T(1); \ + INIT_S(2); \ + INIT_T(2); \ + INIT_S(3); \ + INIT_T(3); \ + int a4 = 0, b4 = 0, c4 = 0, d4 = 0; \ + int *d4ptr = &d4; \ + S *s4 = new S(a4, b4, &c4, d4ptr); \ + int arr4[10]; \ + int *ptr4 = &arr4[0]; \ + T *t4 = new T(arr4, &arr4[0], ptr4); \ + memset (arr4, 0, sizeof arr4) + +#ifdef NONREF_COMPONENT_BASE +void +nonref_component_base (void) +{ + INIT_ST; + U myu(s1, t1, s2, t2, &s3, &t3, s4, t4); + + #pragma omp target map(myu.s1.a, myu.s1.b, myu.s1.c, myu.s1.d) + { + myu.s1.a++; + myu.s1.b++; + myu.s1.c++; + myu.s1.d++; + } + + assert (myu.s1.a == 1); + assert (myu.s1.b == 1); + assert (myu.s1.c == &c1 + 1); + assert (myu.s1.d == &d1 + 1); + + #pragma omp target map(myu.s2.a, myu.s2.b, myu.s2.c, myu.s2.d) + { + myu.s2.a++; + myu.s2.b++; + myu.s2.c++; + myu.s2.d++; + } + + assert (myu.s2.a == 1); + assert (myu.s2.b == 1); + assert (myu.s2.c == &c2 + 1); + assert (myu.s2.d == &d2 + 1); + + #pragma omp target map(to:myu.s3) \ + map(myu.s3->a, myu.s3->b, myu.s3->c, myu.s3->d) + { + myu.s3->a++; + myu.s3->b++; + myu.s3->c++; + myu.s3->d++; + } + + assert (myu.s3->a == 1); + assert (myu.s3->b == 1); + assert (myu.s3->c == &c3 + 1); + assert (myu.s3->d == &d3 + 1); + + #pragma omp target map(to:myu.s4) \ + map(myu.s4->a, myu.s4->b, myu.s4->c, myu.s4->d) + { + myu.s4->a++; + myu.s4->b++; + myu.s4->c++; + myu.s4->d++; + } + + assert (myu.s4->a == 1); + assert (myu.s4->b == 1); + assert (myu.s4->c == &c4 + 1); + assert (myu.s4->d == &d4 + 1); + + delete s4; + delete t4; +} +#endif + +#ifdef NONREF_COMPONENT_MEMBER_SLICE +void +nonref_component_member_slice (void) +{ + INIT_ST; + U myu(s1, t1, s2, t2, &s3, &t3, s4, t4); + + #pragma omp target map(myu.t1.a[2:5]) + { + myu.t1.a[2]++; + } + + #pragma omp target map(myu.t1.b[2:5]) + { + myu.t1.b[2]++; + } + + #pragma omp target enter data map(to: myu.t1.c) + + #pragma omp target map(myu.t1.c[2:5]) + { + myu.t1.c[2]++; + } + + #pragma omp target exit data map(release: myu.t1.c) + + #pragma omp target enter data map(to: myu.t1.d) + + #pragma omp target map(myu.t1.d[2:5]) + { + myu.t1.d[2]++; + } + + #pragma omp target exit data map(from: myu.t1.d) + + assert (myu.t1.a[2] == 1); + assert (myu.t1.b[2] == 3); + assert (myu.t1.c[2] == 3); + assert (myu.t1.d[2] == 3); + + #pragma omp target map(myu.t2.a[2:5]) + { + myu.t2.a[2]++; + } + + #pragma omp target map(myu.t2.b[2:5]) + { + myu.t2.b[2]++; + } + + #pragma omp target enter data map(to: myu.t2.c) + + #pragma omp target map(myu.t2.c[2:5]) + { + myu.t2.c[2]++; + } + + #pragma omp target exit data map(release: myu.t2.c) + + #pragma omp target enter data map(to: myu.t2.d) + + #pragma omp target map(myu.t2.d[2:5]) + { + myu.t2.d[2]++; + } + + #pragma omp target exit data map(release: myu.t2.d) + + assert (myu.t2.a[2] == 1); + assert (myu.t2.b[2] == 3); + assert (myu.t2.c[2] == 3); + assert (myu.t2.d[2] == 3); + + #pragma omp target enter data map(to: myu.t3) + + #pragma omp target map(myu.t3->a[2:5]) + { + myu.t3->a[2]++; + } + + #pragma omp target map(myu.t3->b[2:5]) + { + myu.t3->b[2]++; + } + + #pragma omp target enter data map(to: myu.t3->c) + + #pragma omp target map(myu.t3->c[2:5]) + { + myu.t3->c[2]++; + } + + #pragma omp target exit data map(release: myu.t3->c) + + #pragma omp target enter data map(to: myu.t3->d) + + #pragma omp target map(myu.t3->d[2:5]) + { + 
myu.t3->d[2]++; + } + + #pragma omp target exit data map(release: myu.t3, myu.t3->d) + + assert (myu.t3->a[2] == 1); + assert (myu.t3->b[2] == 3); + assert (myu.t3->c[2] == 3); + assert (myu.t3->d[2] == 3); + + #pragma omp target enter data map(to: myu.t4) + + #pragma omp target map(myu.t4->a[2:5]) + { + myu.t4->a[2]++; + } + + #pragma omp target map(myu.t4->b[2:5]) + { + myu.t4->b[2]++; + } + + #pragma omp target enter data map(to: myu.t4->c) + + #pragma omp target map(myu.t4->c[2:5]) + { + myu.t4->c[2]++; + } + + #pragma omp target exit data map(release: myu.t4->c) + + #pragma omp target enter data map(to: myu.t4->d) + + #pragma omp target map(myu.t4->d[2:5]) + { + myu.t4->d[2]++; + } + + #pragma omp target exit data map(release: myu.t4, myu.t4->d) + + assert (myu.t4->a[2] == 1); + assert (myu.t4->b[2] == 3); + assert (myu.t4->c[2] == 3); + assert (myu.t4->d[2] == 3); + + delete s4; + delete t4; +} +#endif + +#ifdef NONREF_COMPONENT_MEMBER_SLICE_BASEPTR +void +nonref_component_member_slice_baseptr (void) +{ + INIT_ST; + U myu(s1, t1, s2, t2, &s3, &t3, s4, t4); + + #pragma omp target map(to: myu.t1.c) map(myu.t1.c[2:5]) + { + myu.t1.c[2]++; + } + + #pragma omp target map(to: myu.t1.d) map(myu.t1.d[2:5]) + { + myu.t1.d[2]++; + } + + assert (myu.t1.c[2] == 2); + assert (myu.t1.d[2] == 2); + + #pragma omp target map(to: myu.t2.c) map(myu.t2.c[2:5]) + { + myu.t2.c[2]++; + } + + #pragma omp target map(to: myu.t2.d) map(myu.t2.d[2:5]) + { + myu.t2.d[2]++; + } + + assert (myu.t2.c[2] == 2); + assert (myu.t2.d[2] == 2); + + #pragma omp target map(to: myu.t3, myu.t3->c) map(myu.t3->c[2:5]) + { + myu.t3->c[2]++; + } + + #pragma omp target map(to: myu.t3, myu.t3->d) map(myu.t3->d[2:5]) + { + myu.t3->d[2]++; + } + + assert (myu.t3->c[2] == 2); + assert (myu.t3->d[2] == 2); + + #pragma omp target map(to: myu.t4, myu.t4->c) map(myu.t4->c[2:5]) + { + myu.t4->c[2]++; + } + + #pragma omp target map(to: myu.t4, myu.t4->d) map(myu.t4->d[2:5]) + { + myu.t4->d[2]++; + } + + assert (myu.t4->c[2] == 2); + assert (myu.t4->d[2] == 2); + + delete s4; + delete t4; +} +#endif + +#ifdef REF_COMPONENT_BASE +void +ref_component_base (void) +{ + INIT_ST; + U myu_real(s1, t1, s2, t2, &s3, &t3, s4, t4); + U &myu = myu_real; + + #pragma omp target map(myu.s1.a, myu.s1.b, myu.s1.c, myu.s1.d) + { + myu.s1.a++; + myu.s1.b++; + myu.s1.c++; + myu.s1.d++; + } + + assert (myu.s1.a == 1); + assert (myu.s1.b == 1); + assert (myu.s1.c == &c1 + 1); + assert (myu.s1.d == &d1 + 1); + + #pragma omp target map(myu.s2.a, myu.s2.b, myu.s2.c, myu.s2.d) + { + myu.s2.a++; + myu.s2.b++; + myu.s2.c++; + myu.s2.d++; + } + + assert (myu.s2.a == 1); + assert (myu.s2.b == 1); + assert (myu.s2.c == &c2 + 1); + assert (myu.s2.d == &d2 + 1); + + #pragma omp target map(to:myu.s3) \ + map(myu.s3->a, myu.s3->b, myu.s3->c, myu.s3->d) + { + myu.s3->a++; + myu.s3->b++; + myu.s3->c++; + myu.s3->d++; + } + + assert (myu.s3->a == 1); + assert (myu.s3->b == 1); + assert (myu.s3->c == &c3 + 1); + assert (myu.s3->d == &d3 + 1); + + #pragma omp target map(to:myu.s4) \ + map(myu.s4->a, myu.s4->b, myu.s4->c, myu.s4->d) + { + myu.s4->a++; + myu.s4->b++; + myu.s4->c++; + myu.s4->d++; + } + + assert (myu.s4->a == 1); + assert (myu.s4->b == 1); + assert (myu.s4->c == &c4 + 1); + assert (myu.s4->d == &d4 + 1); + + delete s4; + delete t4; +} +#endif + +#ifdef REF_COMPONENT_MEMBER_SLICE +void +ref_component_member_slice (void) +{ + INIT_ST; + U myu_real(s1, t1, s2, t2, &s3, &t3, s4, t4); + U &myu = myu_real; + + #pragma omp target map(myu.t1.a[2:5]) + { + myu.t1.a[2]++; + 
} + + #pragma omp target map(myu.t1.b[2:5]) + { + myu.t1.b[2]++; + } + + #pragma omp target enter data map(to: myu.t1.c) + + #pragma omp target map(myu.t1.c[2:5]) + { + myu.t1.c[2]++; + } + + #pragma omp target exit data map(release: myu.t1.c) + + #pragma omp target enter data map(to: myu.t1.d) + + #pragma omp target map(myu.t1.d[2:5]) + { + myu.t1.d[2]++; + } + + #pragma omp target exit data map(release: myu.t1.d) + + assert (myu.t1.a[2] == 1); + assert (myu.t1.b[2] == 3); + assert (myu.t1.c[2] == 3); + assert (myu.t1.d[2] == 3); + + #pragma omp target map(myu.t2.a[2:5]) + { + myu.t2.a[2]++; + } + + #pragma omp target map(myu.t2.b[2:5]) + { + myu.t2.b[2]++; + } + + #pragma omp target enter data map(to: myu.t2.c) + + #pragma omp target map(myu.t2.c[2:5]) + { + myu.t2.c[2]++; + } + + #pragma omp target exit data map(release: myu.t2.c) + + #pragma omp target enter data map(to: myu.t2.d) + + #pragma omp target map(myu.t2.d[2:5]) + { + myu.t2.d[2]++; + } + + #pragma omp target exit data map(release: myu.t2.d) + + assert (myu.t2.a[2] == 1); + assert (myu.t2.b[2] == 3); + assert (myu.t2.c[2] == 3); + assert (myu.t2.d[2] == 3); + + #pragma omp target enter data map(to: myu.t3) + + #pragma omp target map(myu.t3->a[2:5]) + { + myu.t3->a[2]++; + } + + #pragma omp target map(myu.t3->b[2:5]) + { + myu.t3->b[2]++; + } + + #pragma omp target enter data map(to: myu.t3->c) + + #pragma omp target map(myu.t3->c[2:5]) + { + myu.t3->c[2]++; + } + + #pragma omp target exit data map(release: myu.t3->c) + + #pragma omp target enter data map(to: myu.t3->d) + + #pragma omp target map(myu.t3->d[2:5]) + { + myu.t3->d[2]++; + } + + #pragma omp target exit data map(release: myu.t3, myu.t3->d) + + assert (myu.t3->a[2] == 1); + assert (myu.t3->b[2] == 3); + assert (myu.t3->c[2] == 3); + assert (myu.t3->d[2] == 3); + + #pragma omp target enter data map(to: myu.t4) + + #pragma omp target map(myu.t4->a[2:5]) + { + myu.t4->a[2]++; + } + + #pragma omp target map(myu.t4->b[2:5]) + { + myu.t4->b[2]++; + } + + #pragma omp target enter data map(to: myu.t4->c) + + #pragma omp target map(myu.t4->c[2:5]) + { + myu.t4->c[2]++; + } + + #pragma omp target exit data map(release: myu.t4->c) + + #pragma omp target enter data map(to: myu.t4->d) + + #pragma omp target map(myu.t4->d[2:5]) + { + myu.t4->d[2]++; + } + + #pragma omp target exit data map(release: myu.t4, myu.t4->d) + + assert (myu.t4->a[2] == 1); + assert (myu.t4->b[2] == 3); + assert (myu.t4->c[2] == 3); + assert (myu.t4->d[2] == 3); + + delete s4; + delete t4; +} +#endif + +#ifdef REF_COMPONENT_MEMBER_SLICE_BASEPTR +void +ref_component_member_slice_baseptr (void) +{ + INIT_ST; + U myu_real(s1, t1, s2, t2, &s3, &t3, s4, t4); + U &myu = myu_real; + + #pragma omp target map(to: myu.t1.c) map(myu.t1.c[2:5]) + { + myu.t1.c[2]++; + } + + #pragma omp target map(to: myu.t1.d) map(myu.t1.d[2:5]) + { + myu.t1.d[2]++; + } + + assert (myu.t1.c[2] == 2); + assert (myu.t1.d[2] == 2); + + #pragma omp target map(to: myu.t2.c) map(myu.t2.c[2:5]) + { + myu.t2.c[2]++; + } + + #pragma omp target map(to: myu.t2.d) map(myu.t2.d[2:5]) + { + myu.t2.d[2]++; + } + + assert (myu.t2.c[2] == 2); + assert (myu.t2.d[2] == 2); + + #pragma omp target map(to: myu.t3, myu.t3->c) map(myu.t3->c[2:5]) + { + myu.t3->c[2]++; + } + + #pragma omp target map(to: myu.t3, myu.t3->d) map(myu.t3->d[2:5]) + { + myu.t3->d[2]++; + } + + assert (myu.t3->c[2] == 2); + assert (myu.t3->d[2] == 2); + + #pragma omp target map(to: myu.t4, myu.t4->c) map(myu.t4->c[2:5]) + { + myu.t4->c[2]++; + } + + #pragma omp target map(to: 
myu.t4, myu.t4->d) map(myu.t4->d[2:5]) + { + myu.t4->d[2]++; + } + + assert (myu.t4->c[2] == 2); + assert (myu.t4->d[2] == 2); + + delete s4; + delete t4; +} +#endif + +#ifdef PTR_COMPONENT_BASE +void +ptr_component_base (void) +{ + INIT_ST; + U *myu = new U(s1, t1, s2, t2, &s3, &t3, s4, t4); + + #pragma omp target map(myu->s1.a, myu->s1.b, myu->s1.c, myu->s1.d) + { + myu->s1.a++; + myu->s1.b++; + myu->s1.c++; + myu->s1.d++; + } + + assert (myu->s1.a == 1); + assert (myu->s1.b == 1); + assert (myu->s1.c == &c1 + 1); + assert (myu->s1.d == &d1 + 1); + + #pragma omp target map(myu->s2.a, myu->s2.b, myu->s2.c, myu->s2.d) + { + myu->s2.a++; + myu->s2.b++; + myu->s2.c++; + myu->s2.d++; + } + + assert (myu->s2.a == 1); + assert (myu->s2.b == 1); + assert (myu->s2.c == &c2 + 1); + assert (myu->s2.d == &d2 + 1); + + #pragma omp target map(to:myu->s3) \ + map(myu->s3->a, myu->s3->b, myu->s3->c, myu->s3->d) + { + myu->s3->a++; + myu->s3->b++; + myu->s3->c++; + myu->s3->d++; + } + + assert (myu->s3->a == 1); + assert (myu->s3->b == 1); + assert (myu->s3->c == &c3 + 1); + assert (myu->s3->d == &d3 + 1); + + #pragma omp target map(to:myu->s4) \ + map(myu->s4->a, myu->s4->b, myu->s4->c, myu->s4->d) + { + myu->s4->a++; + myu->s4->b++; + myu->s4->c++; + myu->s4->d++; + } + + assert (myu->s4->a == 1); + assert (myu->s4->b == 1); + assert (myu->s4->c == &c4 + 1); + assert (myu->s4->d == &d4 + 1); + + delete s4; + delete t4; + delete myu; +} +#endif + +#ifdef PTR_COMPONENT_MEMBER_SLICE +void +ptr_component_member_slice (void) +{ + INIT_ST; + U *myu = new U(s1, t1, s2, t2, &s3, &t3, s4, t4); + + #pragma omp target map(myu->t1.a[2:5]) + { + myu->t1.a[2]++; + } + + #pragma omp target map(myu->t1.b[2:5]) + { + myu->t1.b[2]++; + } + + #pragma omp target enter data map(to: myu->t1.c) + + #pragma omp target map(myu->t1.c[2:5]) + { + myu->t1.c[2]++; + } + + #pragma omp target exit data map(release: myu->t1.c) + + #pragma omp target enter data map(to: myu->t1.d) + + #pragma omp target map(myu->t1.d[2:5]) + { + myu->t1.d[2]++; + } + + #pragma omp target exit data map(release: myu->t1.d) + + assert (myu->t1.a[2] == 1); + assert (myu->t1.b[2] == 3); + assert (myu->t1.c[2] == 3); + assert (myu->t1.d[2] == 3); + + #pragma omp target map(myu->t2.a[2:5]) + { + myu->t2.a[2]++; + } + + #pragma omp target map(myu->t2.b[2:5]) + { + myu->t2.b[2]++; + } + + #pragma omp target enter data map(to: myu->t2.c) + + #pragma omp target map(myu->t2.c[2:5]) + { + myu->t2.c[2]++; + } + + #pragma omp target exit data map(release: myu->t2.c) + + #pragma omp target enter data map(to: myu->t2.d) + + #pragma omp target map(myu->t2.d[2:5]) + { + myu->t2.d[2]++; + } + + #pragma omp target exit data map(release: myu->t2.d) + + assert (myu->t2.a[2] == 1); + assert (myu->t2.b[2] == 3); + assert (myu->t2.c[2] == 3); + assert (myu->t2.d[2] == 3); + + #pragma omp target enter data map(to: myu->t3) + + #pragma omp target map(myu->t3->a[2:5]) + { + myu->t3->a[2]++; + } + + #pragma omp target map(myu->t3->b[2:5]) + { + myu->t3->b[2]++; + } + + #pragma omp target enter data map(to: myu->t3->c) + + #pragma omp target map(myu->t3->c[2:5]) + { + myu->t3->c[2]++; + } + + #pragma omp target exit data map(release: myu->t3->c) + + #pragma omp target enter data map(to: myu->t3->d) + + #pragma omp target map(myu->t3->d[2:5]) + { + myu->t3->d[2]++; + } + + #pragma omp target exit data map(release: myu->t3, myu->t3->d) + + assert (myu->t3->a[2] == 1); + assert (myu->t3->b[2] == 3); + assert (myu->t3->c[2] == 3); + assert (myu->t3->d[2] == 3); + + #pragma omp target 
enter data map(to: myu->t4) + + #pragma omp target map(myu->t4->a[2:5]) + { + myu->t4->a[2]++; + } + + #pragma omp target map(myu->t4->b[2:5]) + { + myu->t4->b[2]++; + } + + #pragma omp target enter data map(to: myu->t4->c) + + #pragma omp target map(myu->t4->c[2:5]) + { + myu->t4->c[2]++; + } + + #pragma omp target exit data map(release: myu->t4->c) + + #pragma omp target enter data map(to: myu->t4->d) + + #pragma omp target map(myu->t4->d[2:5]) + { + myu->t4->d[2]++; + } + + #pragma omp target exit data map(release: myu->t4, myu->t4->d) + + assert (myu->t4->a[2] == 1); + assert (myu->t4->b[2] == 3); + assert (myu->t4->c[2] == 3); + assert (myu->t4->d[2] == 3); + + delete s4; + delete t4; + delete myu; +} +#endif + +#ifdef PTR_COMPONENT_MEMBER_SLICE_BASEPTR +void +ptr_component_member_slice_baseptr (void) +{ + INIT_ST; + U *myu = new U(s1, t1, s2, t2, &s3, &t3, s4, t4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t1.c) map(myu->t1.c[2:5]) + { + myu->t1.c[2]++; + } + + #pragma omp target map(to: myu->t1.d) map(myu->t1.d[2:5]) + { + myu->t1.d[2]++; + } + + assert (myu->t1.c[2] == 2); + assert (myu->t1.d[2] == 2); + + /* Explicitly-mapped 'myu'. */ + #pragma omp target map(to: myu, myu->t1.c) map(myu->t1.c[2:5]) + { + myu->t1.c[2]++; + } + + #pragma omp target map(to: myu, myu->t1.d) map(myu->t1.d[2:5]) + { + myu->t1.d[2]++; + } + + assert (myu->t1.c[2] == 4); + assert (myu->t1.d[2] == 4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t2.c) map(myu->t2.c[2:5]) + { + myu->t2.c[2]++; + } + + #pragma omp target map(to: myu->t2.d) map(myu->t2.d[2:5]) + { + myu->t2.d[2]++; + } + + assert (myu->t2.c[2] == 2); + assert (myu->t2.d[2] == 2); + + /* Explicitly-mapped 'myu'. */ + #pragma omp target map(to: myu, myu->t2.c) map(myu->t2.c[2:5]) + { + myu->t2.c[2]++; + } + + #pragma omp target map(to: myu, myu->t2.d) map(myu->t2.d[2:5]) + { + myu->t2.d[2]++; + } + + assert (myu->t2.c[2] == 4); + assert (myu->t2.d[2] == 4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t3, myu->t3->c) map(myu->t3->c[2:5]) + { + myu->t3->c[2]++; + } + + #pragma omp target map(to: myu->t3, myu->t3->d) map(myu->t3->d[2:5]) + { + myu->t3->d[2]++; + } + + assert (myu->t3->c[2] == 2); + assert (myu->t3->d[2] == 2); + + /* Explicitly-mapped 'myu'. */ + #pragma omp target map(to: myu, myu->t3, myu->t3->c) map(myu->t3->c[2:5]) + { + myu->t3->c[2]++; + } + + #pragma omp target map(to: myu, myu->t3, myu->t3->d) map(myu->t3->d[2:5]) + { + myu->t3->d[2]++; + } + + assert (myu->t3->c[2] == 4); + assert (myu->t3->d[2] == 4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t4, myu->t4->c) map(myu->t4->c[2:5]) + { + myu->t4->c[2]++; + } + + #pragma omp target map(to: myu->t4, myu->t4->d) map(myu->t4->d[2:5]) + { + myu->t4->d[2]++; + } + + assert (myu->t4->c[2] == 2); + assert (myu->t4->d[2] == 2); + + /* Explicitly-mapped 'myu'. 
*/ + #pragma omp target map(to: myu, myu->t4, myu->t4->c) map(myu->t4->c[2:5]) + { + myu->t4->c[2]++; + } + + #pragma omp target map(to: myu, myu->t4, myu->t4->d) map(myu->t4->d[2:5]) + { + myu->t4->d[2]++; + } + + assert (myu->t4->c[2] == 4); + assert (myu->t4->d[2] == 4); + + delete s4; + delete t4; + delete myu; +} +#endif + +#ifdef REF2PTR_COMPONENT_BASE +void +ref2ptr_component_base (void) +{ + INIT_ST; + U *myu_ptr = new U(s1, t1, s2, t2, &s3, &t3, s4, t4); + U *&myu = myu_ptr; + + #pragma omp target map(myu->s1.a, myu->s1.b, myu->s1.c, myu->s1.d) + { + myu->s1.a++; + myu->s1.b++; + myu->s1.c++; + myu->s1.d++; + } + + assert (myu->s1.a == 1); + assert (myu->s1.b == 1); + assert (myu->s1.c == &c1 + 1); + assert (myu->s1.d == &d1 + 1); + + #pragma omp target map(myu->s2.a, myu->s2.b, myu->s2.c, myu->s2.d) + { + myu->s2.a++; + myu->s2.b++; + myu->s2.c++; + myu->s2.d++; + } + + assert (myu->s2.a == 1); + assert (myu->s2.b == 1); + assert (myu->s2.c == &c2 + 1); + assert (myu->s2.d == &d2 + 1); + + #pragma omp target map(to:myu->s3) \ + map(myu->s3->a, myu->s3->b, myu->s3->c, myu->s3->d) + { + myu->s3->a++; + myu->s3->b++; + myu->s3->c++; + myu->s3->d++; + } + + assert (myu->s3->a == 1); + assert (myu->s3->b == 1); + assert (myu->s3->c == &c3 + 1); + assert (myu->s3->d == &d3 + 1); + + #pragma omp target map(to:myu->s4) \ + map(myu->s4->a, myu->s4->b, myu->s4->c, myu->s4->d) + { + myu->s4->a++; + myu->s4->b++; + myu->s4->c++; + myu->s4->d++; + } + + assert (myu->s4->a == 1); + assert (myu->s4->b == 1); + assert (myu->s4->c == &c4 + 1); + assert (myu->s4->d == &d4 + 1); + + delete s4; + delete t4; + delete myu_ptr; +} +#endif + +#ifdef REF2PTR_COMPONENT_MEMBER_SLICE +void +ref2ptr_component_member_slice (void) +{ + INIT_ST; + U *myu_ptr = new U(s1, t1, s2, t2, &s3, &t3, s4, t4); + U *&myu = myu_ptr; + + #pragma omp target map(myu->t1.a[2:5]) + { + myu->t1.a[2]++; + } + + #pragma omp target map(myu->t1.b[2:5]) + { + myu->t1.b[2]++; + } + + #pragma omp target enter data map(to: myu->t1.c) + + #pragma omp target map(myu->t1.c[2:5]) + { + myu->t1.c[2]++; + } + + #pragma omp target exit data map(release: myu->t1.c) + + #pragma omp target enter data map(to: myu->t1.d) + + #pragma omp target map(myu->t1.d[2:5]) + { + myu->t1.d[2]++; + } + + #pragma omp target exit data map(release: myu->t1.d) + + assert (myu->t1.a[2] == 1); + assert (myu->t1.b[2] == 3); + assert (myu->t1.c[2] == 3); + assert (myu->t1.d[2] == 3); + + #pragma omp target map(myu->t2.a[2:5]) + { + myu->t2.a[2]++; + } + + #pragma omp target map(myu->t2.b[2:5]) + { + myu->t2.b[2]++; + } + + #pragma omp target enter data map(to: myu->t2.c) + + #pragma omp target map(myu->t2.c[2:5]) + { + myu->t2.c[2]++; + } + + #pragma omp target exit data map(release: myu->t2.c) + + #pragma omp target enter data map(to: myu->t2.d) + + #pragma omp target map(myu->t2.d[2:5]) + { + myu->t2.d[2]++; + } + + #pragma omp target exit data map(release: myu->t2.d) + + assert (myu->t2.a[2] == 1); + assert (myu->t2.b[2] == 3); + assert (myu->t2.c[2] == 3); + assert (myu->t2.d[2] == 3); + + #pragma omp target enter data map(to: myu->t3) + + #pragma omp target map(myu->t3->a[2:5]) + { + myu->t3->a[2]++; + } + + #pragma omp target map(myu->t3->b[2:5]) + { + myu->t3->b[2]++; + } + + #pragma omp target enter data map(to: myu->t3->c) + + #pragma omp target map(myu->t3->c[2:5]) + { + myu->t3->c[2]++; + } + + #pragma omp target exit data map(release: myu->t3->c) + + #pragma omp target enter data map(to: myu->t3->d) + + #pragma omp target map(myu->t3->d[2:5]) + { + 
myu->t3->d[2]++; + } + + #pragma omp target exit data map(release: myu->t3, myu->t3->d) + + assert (myu->t3->a[2] == 1); + assert (myu->t3->b[2] == 3); + assert (myu->t3->c[2] == 3); + assert (myu->t3->d[2] == 3); + + #pragma omp target enter data map(to: myu->t4) + + #pragma omp target map(myu->t4->a[2:5]) + { + myu->t4->a[2]++; + } + + #pragma omp target map(myu->t4->b[2:5]) + { + myu->t4->b[2]++; + } + + #pragma omp target enter data map(to: myu->t4->c) + + #pragma omp target map(myu->t4->c[2:5]) + { + myu->t4->c[2]++; + } + + #pragma omp target exit data map(release: myu->t4->c) + + #pragma omp target enter data map(to: myu->t4->d) + + #pragma omp target map(myu->t4->d[2:5]) + { + myu->t4->d[2]++; + } + + #pragma omp target exit data map(release: myu->t4, myu->t4->d) + + assert (myu->t4->a[2] == 1); + assert (myu->t4->b[2] == 3); + assert (myu->t4->c[2] == 3); + assert (myu->t4->d[2] == 3); + + delete s4; + delete t4; + delete myu_ptr; +} +#endif + +#ifdef REF2PTR_COMPONENT_MEMBER_SLICE_BASEPTR +void +ref2ptr_component_member_slice_baseptr (void) +{ + INIT_ST; + U *myu_ptr = new U(s1, t1, s2, t2, &s3, &t3, s4, t4); + U *&myu = myu_ptr; + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t1.c) map(myu->t1.c[2:5]) + { + myu->t1.c[2]++; + } + + #pragma omp target map(to: myu->t1.d) map(myu->t1.d[2:5]) + { + myu->t1.d[2]++; + } + + assert (myu->t1.c[2] == 2); + assert (myu->t1.d[2] == 2); + + /* Explicitly-mapped 'myu'. */ + #pragma omp target map(to: myu, myu->t1.c) map(myu->t1.c[2:5]) + { + myu->t1.c[2]++; + } + + #pragma omp target map(to: myu, myu->t1.d) map(myu->t1.d[2:5]) + { + myu->t1.d[2]++; + } + + assert (myu->t1.c[2] == 4); + assert (myu->t1.d[2] == 4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t2.c) map(myu->t2.c[2:5]) + { + myu->t2.c[2]++; + } + + #pragma omp target map(to: myu->t2.d) map(myu->t2.d[2:5]) + { + myu->t2.d[2]++; + } + + assert (myu->t2.c[2] == 2); + assert (myu->t2.d[2] == 2); + + /* Explicitly-mapped 'myu'. */ + #pragma omp target map(to: myu, myu->t2.c) map(myu->t2.c[2:5]) + { + myu->t2.c[2]++; + } + + #pragma omp target map(to: myu, myu->t2.d) map(myu->t2.d[2:5]) + { + myu->t2.d[2]++; + } + + assert (myu->t2.c[2] == 4); + assert (myu->t2.d[2] == 4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t3, myu->t3->c) map(myu->t3->c[2:5]) + { + myu->t3->c[2]++; + } + + #pragma omp target map(to: myu->t3, myu->t3->d) map(myu->t3->d[2:5]) + { + myu->t3->d[2]++; + } + + assert (myu->t3->c[2] == 2); + assert (myu->t3->d[2] == 2); + + /* Explicitly-mapped 'myu'. */ + #pragma omp target map(to: myu, myu->t3, myu->t3->c) map(myu->t3->c[2:5]) + { + myu->t3->c[2]++; + } + + #pragma omp target map(to: myu, myu->t3, myu->t3->d) map(myu->t3->d[2:5]) + { + myu->t3->d[2]++; + } + + assert (myu->t3->c[2] == 4); + assert (myu->t3->d[2] == 4); + + /* Implicit firstprivate 'myu'. */ + #pragma omp target map(to: myu->t4, myu->t4->c) map(myu->t4->c[2:5]) + { + myu->t4->c[2]++; + } + + #pragma omp target map(to: myu->t4, myu->t4->d) map(myu->t4->d[2:5]) + { + myu->t4->d[2]++; + } + + assert (myu->t4->c[2] == 2); + assert (myu->t4->d[2] == 2); + + /* Explicitly-mapped 'myu'. 
*/ + #pragma omp target map(to: myu, myu->t4, myu->t4->c) map(myu->t4->c[2:5]) + { + myu->t4->c[2]++; + } + + #pragma omp target map(to: myu, myu->t4, myu->t4->d) map(myu->t4->d[2:5]) + { + myu->t4->d[2]++; + } + + assert (myu->t4->c[2] == 4); + assert (myu->t4->d[2] == 4); + + delete s4; + delete t4; + delete myu_ptr; +} +#endif + +int main (int argc, char *argv[]) +{ +#ifdef MAP_DECLS + map_decls (); +#endif + +#ifdef NONREF_DECL_BASE + nonref_decl_base (); +#endif +#ifdef REF_DECL_BASE + ref_decl_base (); +#endif +#ifdef PTR_DECL_BASE + ptr_decl_base (); +#endif +#ifdef REF2PTR_DECL_BASE + ref2ptr_decl_base (); +#endif + +#ifdef ARRAY_DECL_BASE + array_decl_base (); +#endif +#ifdef REF2ARRAY_DECL_BASE + ref2array_decl_base (); +#endif +#ifdef PTR_OFFSET_DECL_BASE + ptr_offset_decl_base (); +#endif +#ifdef REF2PTR_OFFSET_DECL_BASE + ref2ptr_offset_decl_base (); +#endif + +#ifdef MAP_SECTIONS + map_sections (); +#endif + +#ifdef NONREF_DECL_MEMBER_SLICE + nonref_decl_member_slice (); +#endif +#ifdef NONREF_DECL_MEMBER_SLICE_BASEPTR + nonref_decl_member_slice_baseptr (); +#endif +#ifdef REF_DECL_MEMBER_SLICE + ref_decl_member_slice (); +#endif +#ifdef REF_DECL_MEMBER_SLICE_BASEPTR + ref_decl_member_slice_baseptr (); +#endif +#ifdef PTR_DECL_MEMBER_SLICE + ptr_decl_member_slice (); +#endif +#ifdef PTR_DECL_MEMBER_SLICE_BASEPTR + ptr_decl_member_slice_baseptr (); +#endif +#ifdef REF2PTR_DECL_MEMBER_SLICE + ref2ptr_decl_member_slice (); +#endif +#ifdef REF2PTR_DECL_MEMBER_SLICE_BASEPTR + ref2ptr_decl_member_slice_baseptr (); +#endif + +#ifdef ARRAY_DECL_MEMBER_SLICE + array_decl_member_slice (); +#endif +#ifdef ARRAY_DECL_MEMBER_SLICE_BASEPTR + array_decl_member_slice_baseptr (); +#endif +#ifdef REF2ARRAY_DECL_MEMBER_SLICE + ref2array_decl_member_slice (); +#endif +#ifdef REF2ARRAY_DECL_MEMBER_SLICE_BASEPTR + ref2array_decl_member_slice_baseptr (); +#endif +#ifdef PTR_OFFSET_DECL_MEMBER_SLICE + ptr_offset_decl_member_slice (); +#endif +#ifdef PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR + ptr_offset_decl_member_slice_baseptr (); +#endif +#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE + ref2ptr_offset_decl_member_slice (); +#endif +#ifdef REF2PTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR + ref2ptr_offset_decl_member_slice_baseptr (); +#endif + +#ifdef PTRARRAY_DECL_MEMBER_SLICE + ptrarray_decl_member_slice (); +#endif +#ifdef PTRARRAY_DECL_MEMBER_SLICE_BASEPTR + ptrarray_decl_member_slice_baseptr (); +#endif +#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE + ref2ptrarray_decl_member_slice (); +#endif +#ifdef REF2PTRARRAY_DECL_MEMBER_SLICE_BASEPTR + ref2ptrarray_decl_member_slice_baseptr (); +#endif +#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE + ptrptr_offset_decl_member_slice (); +#endif +#ifdef PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR + ptrptr_offset_decl_member_slice_baseptr (); +#endif +#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE + ref2ptrptr_offset_decl_member_slice (); +#endif +#ifdef REF2PTRPTR_OFFSET_DECL_MEMBER_SLICE_BASEPTR + ref2ptrptr_offset_decl_member_slice_baseptr (); +#endif + +#ifdef NONREF_COMPONENT_BASE + nonref_component_base (); +#endif +#ifdef NONREF_COMPONENT_MEMBER_SLICE + nonref_component_member_slice (); +#endif +#ifdef NONREF_COMPONENT_MEMBER_SLICE_BASEPTR + nonref_component_member_slice_baseptr (); +#endif + +#ifdef REF_COMPONENT_BASE + ref_component_base (); +#endif +#ifdef REF_COMPONENT_MEMBER_SLICE + ref_component_member_slice (); +#endif +#ifdef REF_COMPONENT_MEMBER_SLICE_BASEPTR + ref_component_member_slice_baseptr (); +#endif + +#ifdef PTR_COMPONENT_BASE + ptr_component_base (); +#endif +#ifdef 
PTR_COMPONENT_MEMBER_SLICE + ptr_component_member_slice (); +#endif +#ifdef PTR_COMPONENT_MEMBER_SLICE_BASEPTR + ptr_component_member_slice_baseptr (); +#endif + +#ifdef REF2PTR_COMPONENT_BASE + ref2ptr_component_base (); +#endif +#ifdef REF2PTR_COMPONENT_MEMBER_SLICE + ref2ptr_component_member_slice (); +#endif +#ifdef REF2PTR_COMPONENT_MEMBER_SLICE_BASEPTR + ref2ptr_component_member_slice_baseptr (); +#endif + + return 0; +} diff --git a/libgomp/testsuite/libgomp.c++/baseptrs-5.C b/libgomp/testsuite/libgomp.c++/baseptrs-5.C new file mode 100644 index 00000000000..16bdfff3ae0 --- /dev/null +++ b/libgomp/testsuite/libgomp.c++/baseptrs-5.C @@ -0,0 +1,62 @@ +// { dg-do run } + +#include +#include + +struct sa +{ + int *ptr; + int *ptr2; +}; + +struct sb +{ + int arr[10]; +}; + +struct scp +{ + sa *&a; + sb *&b; + scp (sa *&my_a, sb *&my_b) : a(my_a), b(my_b) {} +}; + +int +main () +{ + sa *my_a = new sa; + sb *my_b = new sb; + + my_a->ptr = new int[10]; + my_a->ptr2 = new int[10]; + scp *my_c = new scp(my_a, my_b); + + memset (my_c->a->ptr, 0, sizeof (int) * 10); + memset (my_c->a->ptr2, 0, sizeof (int) * 10); + + #pragma omp target map (my_c->a, \ + my_c->a->ptr, my_c->a->ptr[:10], \ + my_c->a->ptr2, my_c->a->ptr2[:10]) + { + for (int i = 0; i < 10; i++) + { + my_c->a->ptr[i] = i; + my_c->a->ptr2[i] = i * 2; + } + } + + for (int i = 0; i < 10; i++) + { + assert (my_c->a->ptr[i] == i); + assert (my_c->a->ptr2[i] == i * 2); + } + + delete[] my_a->ptr; + delete[] my_a->ptr2; + delete my_a; + delete my_b; + delete my_c; + + return 0; +} + diff --git a/libgomp/testsuite/libgomp.c++/class-array-1.C b/libgomp/testsuite/libgomp.c++/class-array-1.C new file mode 100644 index 00000000000..d8d3f7f1f99 --- /dev/null +++ b/libgomp/testsuite/libgomp.c++/class-array-1.C @@ -0,0 +1,59 @@ +/* { dg-do run } */ + +#include + +#define N 1024 + +class M { + int array[N]; + +public: + M () + { + for (int i = 0; i < N; i++) + array[i] = 0; + } + + void incr_with_this (int c) + { +#pragma omp target map(this->array[:N]) + for (int i = 0; i < N; i++) + array[i] += c; + } + + void incr_without_this (int c) + { +#pragma omp target map(array[:N]) + for (int i = 0; i < N; i++) + array[i] += c; + } + + void incr_implicit (int c) + { +#pragma omp target + for (int i = 0; i < N; i++) + array[i] += c; + } + + void check (int c) + { + for (int i = 0; i < N; i++) + assert (array[i] == c); + } +}; + +int +main (int argc, char *argv[]) +{ + M m; + + m.check (0); + m.incr_with_this (3); + m.check (3); + m.incr_without_this (5); + m.check (8); + m.incr_implicit (2); + m.check (10); + + return 0; +} diff --git a/libgomp/testsuite/libgomp.c++/target-48.C b/libgomp/testsuite/libgomp.c++/target-48.C new file mode 100644 index 00000000000..db171d2f5a3 --- /dev/null +++ b/libgomp/testsuite/libgomp.c++/target-48.C @@ -0,0 +1,32 @@ +#include +#include + +struct s { + int (&a)[10]; + s(int (&a0)[10]) : a(a0) {} +}; + +int +main (int argc, char *argv[]) +{ + int la[10]; + s v(la); + + memset (la, 0, sizeof la); + + #pragma omp target enter data map(to: v) + + /* This mapping must use GOMP_MAP_ATTACH_DETACH not GOMP_MAP_ALWAYS_POINTER, + else the host reference v.a will be corrupted on copy-out. 
*/ + + #pragma omp target map(v.a[0:10]) + { + v.a[5]++; + } + + #pragma omp target exit data map(from: v) + + assert (v.a[5] == 1); + + return 0; +} diff --git a/libgomp/testsuite/libgomp.c++/target-49.C b/libgomp/testsuite/libgomp.c++/target-49.C new file mode 100644 index 00000000000..efae9c9580e --- /dev/null +++ b/libgomp/testsuite/libgomp.c++/target-49.C @@ -0,0 +1,37 @@ +#include +#include + +struct s { + int (&a)[10]; + s(int (&a0)[10]) : a(a0) {} +}; + +int +main (int argc, char *argv[]) +{ + int la[10]; + s v_real(la); + s *v = &v_real; + + memset (la, 0, sizeof la); + + #pragma omp target enter data map(to: v) + + /* Copying the whole v[0] here DOES NOT WORK yet because the reference 'a' is + not copied "as if" it was mapped explicitly as a member. FIXME. */ + #pragma omp target enter data map(to: v[0]) + + //#pragma omp target + { + v->a[5]++; + } + + #pragma omp target exit data map(release: v[0]) + #pragma omp target exit data map(from: v) + + assert (v->a[5] == 1); + + return 0; +} + +// { dg-xfail-run-if "TODO" { *-*-* } { "-DACC_MEM_SHARED=0" } } diff --git a/libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c new file mode 100644 index 00000000000..073615625b7 --- /dev/null +++ b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-1.c @@ -0,0 +1,50 @@ +#include +#include +#include +#include + +#define N 32 + +typedef struct { + int x2[10][N]; +} x1type; + +typedef struct { + x1type x1[10]; +} p2type; + +typedef struct { + p2type *p2; +} p1type; + +typedef struct { + p1type *p1; +} x0type; + +typedef struct { + x0type x0[10]; +} p0type; + +int main(int argc, char *argv[]) +{ + p0type *p0; + int k1 = 0, k2 = 0, k3 = 0, n = N; + + p0 = (p0type *) malloc (sizeof *p0); + p0->x0[0].p1 = (p1type *) malloc (sizeof *p0->x0[0].p1); + p0->x0[0].p1->p2 = (p2type *) malloc (sizeof *p0->x0[0].p1->p2); + memset (p0->x0[0].p1->p2, 0, sizeof *p0->x0[0].p1->p2); + +#pragma omp target map(tofrom: p0->x0[k1].p1->p2[k2].x1[k3].x2[4][0:n]) \ + map(to: p0->x0[k1].p1, p0->x0[k1].p1->p2) \ + map(to: p0->x0[k1].p1[0]) + { + for (int i = 0; i < n; i++) + p0->x0[k1].p1->p2[k2].x1[k3].x2[4][i] = i; + } + + for (int i = 0; i < n; i++) + assert (i == p0->x0[k1].p1->p2[k2].x1[k3].x2[4][i]); + + return 0; +} diff --git a/libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c new file mode 100644 index 00000000000..e335d7da966 --- /dev/null +++ b/libgomp/testsuite/libgomp.c-c++-common/baseptrs-2.c @@ -0,0 +1,70 @@ +#include +#include +#include + +#define N 32 + +typedef struct { + int arr[N]; + int *ptr; +} sc; + +typedef struct { + sc *c; +} sb; + +typedef struct { + sb *b; + sc *c; +} sa; + +int main (int argc, char *argv[]) +{ + sa *p; + + p = (sa *) malloc (sizeof *p); + p->b = (sb *) malloc (sizeof *p->b); + p->b->c = (sc *) malloc (sizeof *p->b->c); + p->c = (sc *) malloc (sizeof *p->c); + p->b->c->ptr = (int *) malloc (N * sizeof (int)); + p->c->ptr = (int *) malloc (N * sizeof (int)); + + for (int i = 0; i < N; i++) + { + p->b->c->ptr[i] = 0; + p->c->ptr[i] = 0; + p->b->c->arr[i] = 0; + p->c->arr[i] = 0; + } + +#pragma omp target map(to: p->b, p->b[0], p->c, p->c[0], p->b->c, p->b->c[0]) \ + map(to: p->b->c->ptr, p->c->ptr) \ + map(tofrom: p->b->c->ptr[:N], p->c->ptr[:N]) + { + for (int i = 0; i < N; i++) + { + p->b->c->ptr[i] = i; + p->c->ptr[i] = i * 2; + } + } + +#pragma omp target map(to: p->b, p->b[0], p->b->c, p->c) \ + map(tofrom: p->c[0], p->b->c[0]) + { + for (int i = 0; i < N; i++) + { + 
p->b->c->arr[i] = i * 3; + p->c->arr[i] = i * 4; + } + } + + for (int i = 0; i < N; i++) + { + assert (p->b->c->ptr[i] == i); + assert (p->c->ptr[i] == i * 2); + assert (p->b->c->arr[i] == i * 3); + assert (p->c->arr[i] == i * 4); + } + + return 0; +} diff --git a/libgomp/testsuite/libgomp.c/target-22.c b/libgomp/testsuite/libgomp.c/target-22.c index aad8a0a09df..492744ad0ef 100644 --- a/libgomp/testsuite/libgomp.c/target-22.c +++ b/libgomp/testsuite/libgomp.c/target-22.c @@ -21,7 +21,8 @@ main () s.v.b = a + 16; s.w = c + 3; int err = 0; - #pragma omp target map (to:s.v.b[0:z + 7], s.u[z + 1:z + 4]) \ + #pragma omp target map (to: s.w, s.v.b, s.u, s.s) \ + map (to:s.v.b[0:z + 7], s.u[z + 1:z + 4]) \ map (tofrom:s.s[3:3]) \ map (from: s.w[z:4], err) private (i) { diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 index 14f18de8db5..5d15808f0da 100644 --- a/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 +++ b/libgomp/testsuite/libgomp.fortran/map-subarray-4.f90 @@ -33,6 +33,3 @@ if (var(1)%p(1).ne.8) stop 1 if (var(2)%p(2).ne.10) stop 2 end - -! This is fixed by the address inspector/address tokenization patch. -! { dg-xfail-run-if TODO { offload_device_nonshared_as } } diff --git a/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 b/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 index 4074a952dd1..c7f90131cba 100644 --- a/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 +++ b/libgomp/testsuite/libgomp.fortran/map-subcomponents.f90 @@ -30,6 +30,3 @@ gvar%myf%d(1) = gvar%myf%d(1) + 1 if (gvar%myf%d(1).ne.1) stop 1 end program myprog - -! This is fixed by the address inspector/address tokenization patch. -! { dg-xfail-run-if TODO { offload_device_nonshared_as } } From patchwork Sun Oct 9 21:51:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Julian Brown X-Patchwork-Id: 1842
From: Julian Brown
Subject: [PATCH v4 4/4] OpenMP/OpenACC: Unordered/non-constant component offset struct mapping
Date: Sun, 9 Oct 2022 14:51:37 -0700
Message-ID: <3ff03cb463d35ffe96b1271a146f24899b2cb573.1665351785.git.julian@codesourcery.com>
Cc: Jakub Jelinek , Tobias Burnus , fortran@gcc.gnu.org

This patch adds support for non-constant component offsets in "map" clauses
for OpenMP (and the equivalents for OpenACC), which cannot be sorted into
order at compile time. Normally struct accesses in such clauses are gathered
together and sorted into increasing address order after a "GOMP_MAP_STRUCT"
node: if we have variable indices, that is no longer possible.

This patch adds support for such mappings by introducing a new variant of
GOMP_MAP_STRUCT that does not require the following list of nodes to be in
sorted order. This passes right down to the runtime: the list is sorted in
libgomp according to the dynamic values of the offsets after the
newly-introduced GOMP_MAP_STRUCT_UNORD node.

This mostly affects arrays of structs indexed by variables in C and C++, but
can also affect derived-type arrays with constant indices when they have an
array descriptor.

2022-10-09  Julian Brown

gcc/
	* gimplify.cc (extract_base_bit_offset): Add VARIABLE_OFFSET parameter.
	(omp_get_attachment, omp_group_last, omp_group_base,
	omp_directive_maps_explicitly): Add GOMP_MAP_STRUCT_UNORD support.
	(omp_accumulate_sibling_list): Update calls to extract_base_bit_offset.
	Support GOMP_MAP_STRUCT_UNORD.
	(omp_build_struct_sibling_lists, gimplify_scan_omp_clauses,
	gimplify_adjust_omp_clauses, gimplify_omp_target_update): Add
	GOMP_MAP_STRUCT_UNORD support.
	* omp-low.cc (lower_omp_target): Add GOMP_MAP_STRUCT_UNORD support.
	* tree-pretty-print.cc (dump_omp_clause): Likewise.

include/
	* gomp-constants.h (gomp_map_kind): Add GOMP_MAP_STRUCT_UNORD.

libgomp/
	* oacc-mem.c (find_group_last, goacc_enter_data_internal,
	goacc_exit_data_internal, GOACC_enter_exit_data): Add
	GOMP_MAP_STRUCT_UNORD support.
	* target.c (compare_addr_r): New helper function.
	(gomp_map_vars_internal, GOMP_target_enter_exit_data,
	gomp_target_task_fn): Add GOMP_MAP_STRUCT_UNORD support.
	* testsuite/libgomp.c-c++-common/map-arrayofstruct-1.c: New test.
	* testsuite/libgomp.c-c++-common/map-arrayofstruct-2.c: New test.
	* testsuite/libgomp.c-c++-common/map-arrayofstruct-3.c: New test.
	* testsuite/libgomp.fortran/map-subarray-3.f90: New test.
	* testsuite/libgomp.fortran/map-subarray-5.f90: New test.
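For readers of the patch, a minimal standalone sketch (not part of the change
itself) of the kind of mapping that needs the new unordered handling; it is
loosely modelled on the new map-arrayofstruct-1.c test, and the indices,
extents and clause combination are illustrative only:

#include <stdlib.h>
#include <assert.h>

struct st { int *p; };

int
main (void)
{
  struct st s[2];
  s[0].p = (int *) calloc (4, sizeof (int));
  s[1].p = (int *) calloc (4, sizeof (int));

  /* 'i' and 'j' are only known at run time, so the offsets of s[i].p and
     s[j].p within the enclosing struct mapping cannot be sorted at compile
     time; the component list is emitted unordered and sorted by address in
     libgomp.  */
  int i = 1, j = 0;

  #pragma omp target map(to: s[i].p, s[j].p) \
                     map(tofrom: s[i].p[0:4], s[j].p[0:4])
  {
    s[i].p[0] = 5;
    s[j].p[1] = 7;
  }

  assert (s[1].p[0] == 5);
  assert (s[0].p[1] == 7);
  free (s[0].p);
  free (s[1].p);
  return 0;
}

Previously the gimplifier had to emit the struct component nodes in
increasing address order after the GOMP_MAP_STRUCT node, which is not
possible when the offsets depend on i and j.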
--- gcc/gimplify.cc | 110 +++++++++--- gcc/omp-low.cc | 1 + gcc/tree-pretty-print.cc | 3 + include/gomp-constants.h | 9 + libgomp/oacc-mem.c | 6 +- libgomp/target.c | 162 ++++++++++++++---- .../map-arrayofstruct-1.c | 38 ++++ .../map-arrayofstruct-2.c | 54 ++++++ .../map-arrayofstruct-3.c | 64 +++++++ .../libgomp.fortran/map-subarray-3.f90 | 48 ++++++ .../libgomp.fortran/map-subarray-5.f90 | 50 ++++++ 11 files changed, 495 insertions(+), 50 deletions(-) create mode 100644 libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-1.c create mode 100644 libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-2.c create mode 100644 libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-3.c create mode 100644 libgomp/testsuite/libgomp.fortran/map-subarray-3.f90 create mode 100644 libgomp/testsuite/libgomp.fortran/map-subarray-5.f90 diff --git a/gcc/gimplify.cc b/gcc/gimplify.cc index e245adfec3a..e8e0973eff0 100644 --- a/gcc/gimplify.cc +++ b/gcc/gimplify.cc @@ -8861,7 +8861,8 @@ build_omp_struct_comp_nodes (enum tree_code code, tree grp_start, tree grp_end, static tree extract_base_bit_offset (tree base, poly_int64 *bitposp, - poly_offset_int *poffsetp) + poly_offset_int *poffsetp, + bool *variable_offset) { tree offset; poly_int64 bitsize, bitpos; @@ -8879,10 +8880,13 @@ extract_base_bit_offset (tree base, poly_int64 *bitposp, if (offset && poly_int_tree_p (offset)) { poffset = wi::to_poly_offset (offset); - offset = NULL_TREE; + *variable_offset = false; } else - poffset = 0; + { + poffset = 0; + *variable_offset = (offset != NULL_TREE); + } if (maybe_ne (bitpos, 0)) poffset += bits_to_bytes_round_down (bitpos); @@ -9038,6 +9042,7 @@ omp_get_attachment (omp_mapping_group *grp) return error_mark_node; case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: case GOMP_MAP_FORCE_DEVICEPTR: case GOMP_MAP_DEVICE_RESIDENT: case GOMP_MAP_LINK: @@ -9123,6 +9128,7 @@ omp_group_last (tree *start_p) break; case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: { unsigned HOST_WIDE_INT num_mappings = tree_to_uhwi (OMP_CLAUSE_SIZE (c)); @@ -9282,6 +9288,7 @@ omp_group_base (omp_mapping_group *grp, unsigned int *chained, return error_mark_node; case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: { unsigned HOST_WIDE_INT num_mappings = tree_to_uhwi (OMP_CLAUSE_SIZE (node)); @@ -9898,7 +9905,8 @@ omp_directive_maps_explicitly (hash_map= 0; base_token--) @@ -10628,14 +10638,20 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, if (!struct_map_to_clause || struct_map_to_clause->get (base) == NULL) { - tree l = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP); - - OMP_CLAUSE_SET_MAP_KIND (l, GOMP_MAP_STRUCT); - OMP_CLAUSE_DECL (l) = unshare_expr (base); - OMP_CLAUSE_SIZE (l) = size_int (1); + enum gomp_map_kind str_kind = GOMP_MAP_STRUCT; if (struct_map_to_clause == NULL) struct_map_to_clause = new hash_map; + + if (variable_offset) + str_kind = GOMP_MAP_STRUCT_UNORD; + + tree l = build_omp_clause (OMP_CLAUSE_LOCATION (grp_end), OMP_CLAUSE_MAP); + + OMP_CLAUSE_SET_MAP_KIND (l, str_kind); + OMP_CLAUSE_DECL (l) = unshare_expr (base); + OMP_CLAUSE_SIZE (l) = size_int (1); + struct_map_to_clause->put (base, l); /* On first iterating through the clause list, we insert the struct node @@ -10863,6 +10879,11 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, { tree *osc = struct_map_to_clause->get (base); tree *sc = NULL, *scp = NULL; + bool unordered = false; + + if (osc && OMP_CLAUSE_MAP_KIND (*osc) == GOMP_MAP_STRUCT_UNORD) + unordered = true; + unsigned HOST_WIDE_INT i, 
elems = tree_to_uhwi (OMP_CLAUSE_SIZE (*osc)); sc = &OMP_CLAUSE_CHAIN (*osc); /* The struct mapping might be immediately followed by a @@ -10903,12 +10924,20 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, == REFERENCE_TYPE)) sc_decl = TREE_OPERAND (sc_decl, 0); - tree base2 = extract_base_bit_offset (sc_decl, &bitpos, &offset); + bool variable_offset2; + tree base2 = extract_base_bit_offset (sc_decl, &bitpos, &offset, + &variable_offset2); if (!base2 || !operand_equal_p (base2, base, 0)) break; if (scp) continue; - if ((region_type & ORT_ACC) != 0) + if (variable_offset2) + { + OMP_CLAUSE_SET_MAP_KIND (*osc, GOMP_MAP_STRUCT_UNORD); + unordered = true; + break; + } + else if ((region_type & ORT_ACC) != 0) { /* For OpenACC, allow (ignore) duplicate struct accesses in the middle of a mapping clause, e.g. "mystruct->foo" in: @@ -10940,6 +10969,15 @@ omp_accumulate_sibling_list (enum omp_region_type region_type, } } + /* If this is an unordered struct, just insert the new element at the + end of the list. */ + if (unordered) + { + for (; i < elems; i++) + sc = &OMP_CLAUSE_CHAIN (*sc); + scp = NULL; + } + OMP_CLAUSE_SIZE (*osc) = size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc), size_one_node); @@ -11319,14 +11357,42 @@ omp_build_struct_sibling_lists (enum tree_code code, /* This is the first sorted node in the struct sibling list. Use it to recalculate the correct bias to use. - (&first_node - attach_decl). */ - tree first_node = OMP_CLAUSE_DECL (OMP_CLAUSE_CHAIN (attach)); - first_node = build_fold_addr_expr (first_node); - first_node = fold_convert (ptrdiff_type_node, first_node); + (&first_node - attach_decl). + For GOMP_MAP_STRUCT_UNORD, we need e.g. the + min(min(min(first,second),third),fourth) element, because the + elements aren't in any particular order. */ + tree lowest_addr; + if (OMP_CLAUSE_MAP_KIND (struct_node) == GOMP_MAP_STRUCT_UNORD) + { + tree first_node = OMP_CLAUSE_CHAIN (attach); + unsigned HOST_WIDE_INT num_mappings + = tree_to_uhwi (OMP_CLAUSE_SIZE (struct_node)); + lowest_addr = OMP_CLAUSE_DECL (first_node); + lowest_addr = build_fold_addr_expr (lowest_addr); + lowest_addr = fold_convert (pointer_sized_int_node, lowest_addr); + tree next_node = OMP_CLAUSE_CHAIN (first_node); + while (num_mappings > 1) + { + tree tmp = OMP_CLAUSE_DECL (next_node); + tmp = build_fold_addr_expr (tmp); + tmp = fold_convert (pointer_sized_int_node, tmp); + lowest_addr = fold_build2 (MIN_EXPR, pointer_sized_int_node, + lowest_addr, tmp); + next_node = OMP_CLAUSE_CHAIN (next_node); + num_mappings--; + } + lowest_addr = fold_convert (ptrdiff_type_node, lowest_addr); + } + else + { + tree first_node = OMP_CLAUSE_DECL (OMP_CLAUSE_CHAIN (attach)); + first_node = build_fold_addr_expr (first_node); + lowest_addr = fold_convert (ptrdiff_type_node, first_node); + } tree attach_decl = OMP_CLAUSE_DECL (attach); attach_decl = fold_convert (ptrdiff_type_node, attach_decl); OMP_CLAUSE_SIZE (attach) - = fold_build2 (MINUS_EXPR, ptrdiff_type_node, first_node, + = fold_build2 (MINUS_EXPR, ptrdiff_type_node, lowest_addr, attach_decl); /* Remove GOMP_MAP_ATTACH node from after struct node. 
*/ @@ -11874,7 +11940,8 @@ gimplify_scan_omp_clauses (tree *list_p, gimple_seq *pre_p, GOVD_FIRSTPRIVATE | GOVD_SEEN); } - if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT + if ((OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT_UNORD) && (addr_tokens[0]->type == STRUCTURE_BASE || addr_tokens[0]->type == ARRAY_BASE) && addr_tokens[0]->u.structure_base_kind == BASE_DECL) @@ -13461,7 +13528,8 @@ gimplify_adjust_omp_clauses (gimple_seq *pre_p, gimple_seq body, tree *list_p, } } } - if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT + if ((OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT_UNORD) && (code == OMP_TARGET_EXIT_DATA || code == OACC_EXIT_DATA)) { remove = true; @@ -13505,7 +13573,8 @@ gimplify_adjust_omp_clauses (gimple_seq *pre_p, gimple_seq body, tree *list_p, in target block and none of the mapping has always modifier, remove all the struct element mappings, which immediately follow the GOMP_MAP_STRUCT map clause. */ - if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT) + if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT + || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT_UNORD) { HOST_WIDE_INT cnt = tree_to_shwi (OMP_CLAUSE_SIZE (c)); while (cnt--) @@ -16284,6 +16353,7 @@ gimplify_omp_target_update (tree *expr_p, gimple_seq *pre_p) have_clause = false; break; case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: have_clause = false; break; default: diff --git a/gcc/omp-low.cc b/gcc/omp-low.cc index 67db528f252..92346672f1d 100644 --- a/gcc/omp-low.cc +++ b/gcc/omp-low.cc @@ -12780,6 +12780,7 @@ lower_omp_target (gimple_stmt_iterator *gsi_p, omp_context *ctx) case GOMP_MAP_FIRSTPRIVATE_POINTER: case GOMP_MAP_FIRSTPRIVATE_REFERENCE: case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: case GOMP_MAP_ALWAYS_POINTER: case GOMP_MAP_ATTACH: case GOMP_MAP_DETACH: diff --git a/gcc/tree-pretty-print.cc b/gcc/tree-pretty-print.cc index e7a8c9481a6..c0656104196 100644 --- a/gcc/tree-pretty-print.cc +++ b/gcc/tree-pretty-print.cc @@ -967,6 +967,9 @@ dump_omp_clause (pretty_printer *pp, tree clause, int spc, dump_flags_t flags) case GOMP_MAP_STRUCT: pp_string (pp, "struct"); break; + case GOMP_MAP_STRUCT_UNORD: + pp_string (pp, "struct_unord"); + break; case GOMP_MAP_ALWAYS_POINTER: pp_string (pp, "always_pointer"); break; diff --git a/include/gomp-constants.h b/include/gomp-constants.h index 84316f953d0..564b119feca 100644 --- a/include/gomp-constants.h +++ b/include/gomp-constants.h @@ -138,6 +138,15 @@ enum gomp_map_kind (address of the last adjacent entry plus its size). */ GOMP_MAP_STRUCT = (GOMP_MAP_FLAG_SPECIAL_2 | GOMP_MAP_FLAG_SPECIAL | 0), + /* As above, but followed by an unordered list of adjacent entries. + Slightly less efficient at runtime, but allows for struct components + with dynamic offsets. We can get those e.g. by indexing into an array + of structs using a non-constant expression, or even with a constant + expression when a Fortran array of derived types has an array + descriptor). */ + GOMP_MAP_STRUCT_UNORD = (GOMP_MAP_FLAG_SPECIAL_3 + | GOMP_MAP_FLAG_SPECIAL_2 + | GOMP_MAP_FLAG_SPECIAL | 0), /* On a location of a pointer/reference that is assumed to be already mapped earlier, store the translated address of the preceeding mapping. No refcount is bumped by this, and the store is done unconditionally. 
*/ diff --git a/libgomp/oacc-mem.c b/libgomp/oacc-mem.c index 73b2710c2b8..6bdee906387 100644 --- a/libgomp/oacc-mem.c +++ b/libgomp/oacc-mem.c @@ -1028,6 +1028,7 @@ find_group_last (int pos, size_t mapnum, size_t *sizes, unsigned short *kinds) break; case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: pos += sizes[pos]; break; @@ -1088,6 +1089,7 @@ goacc_enter_data_internal (struct gomp_device_descr *acc_dev, size_t mapnum, switch (kinds[i] & 0xff) { case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: { size = (uintptr_t) hostaddrs[group_last] + sizes[group_last] - (uintptr_t) hostaddrs[i]; @@ -1297,6 +1299,7 @@ goacc_exit_data_internal (struct gomp_device_descr *acc_dev, size_t mapnum, break; case GOMP_MAP_STRUCT: + case GOMP_MAP_STRUCT_UNORD: /* Skip the 'GOMP_MAP_STRUCT' itself, and use the regular processing for all its entries. This special handling exists for GCC 10.1 compatibility; afterwards, we're not generating these no-op @@ -1435,7 +1438,8 @@ GOACC_enter_exit_data (int flags_m, size_t mapnum, void **hostaddrs, if (kind == GOMP_MAP_POINTER || kind == GOMP_MAP_TO_PSET - || kind == GOMP_MAP_STRUCT) + || kind == GOMP_MAP_STRUCT + || kind == GOMP_MAP_STRUCT_UNORD) continue; if (kind == GOMP_MAP_FORCE_ALLOC diff --git a/libgomp/target.c b/libgomp/target.c index e5dec469519..015c25be86b 100644 --- a/libgomp/target.c +++ b/libgomp/target.c @@ -945,6 +945,20 @@ gomp_map_val (struct target_mem_desc *tgt, void **hostaddrs, size_t i) } } +#if defined(_GNU_SOURCE) || defined(__GNUC__) +static int +compare_addr_r (const void *a, const void *b, void *data) +{ + void **hostaddrs = (void **) data; + int ai = *(int *) a, bi = *(int *) b; + if (hostaddrs[ai] < hostaddrs[bi]) + return -1; + else if (hostaddrs[ai] > hostaddrs[bi]) + return 1; + return 0; +} +#endif + static inline __attribute__((always_inline)) struct target_mem_desc * gomp_map_vars_internal (struct gomp_device_descr *devicep, struct goacc_asyncqueue *aq, size_t mapnum, @@ -968,6 +982,17 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, tgt->device_descr = devicep; tgt->prev = NULL; struct gomp_coalesce_buf cbuf, *cbufp = NULL; + size_t hostaddr_idx; + +#if !defined(_GNU_SOURCE) && defined(__GNUC__) + /* If we don't have _GNU_SOURCE (thus no qsort_r), but we are compiling with + GCC (and why wouldn't we be?), we can use this nested function for + regular qsort. 
*/ + int compare_addr (const void *a, const void *b) + { + return compare_addr_r (a, b, (void *) &hostaddrs[hostaddr_idx]); + } +#endif if (mapnum == 0) { @@ -1061,13 +1086,34 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, tgt->list[i].offset = 0; continue; } - else if ((kind & typemask) == GOMP_MAP_STRUCT) + else if ((kind & typemask) == GOMP_MAP_STRUCT + || (kind & typemask) == GOMP_MAP_STRUCT_UNORD) { - size_t first = i + 1; - size_t last = i + sizes[i]; + int *order = NULL; + if ((kind & typemask) == GOMP_MAP_STRUCT_UNORD) + { + order = (int *) gomp_alloca (sizeof (int) * sizes[i]); + for (int j = 0; j < sizes[i]; j++) + order[j] = j; +#ifdef _GNU_SOURCE + qsort_r (order, sizes[i], sizeof (int), &compare_addr_r, + &hostaddrs[i + 1]); +#elif defined(__GNUC__) + hostaddr_idx = i + 1; + qsort (order, sizes[i], sizeof (int), &compare_addr); +#else +#error no threadsafe qsort +#endif + } + size_t first = i + 1, last = i + sizes[i]; + size_t argmin = first, argmax = last; + if (order) + { + argmin = first + order[0]; + argmax = first + order[sizes[i] - 1]; + } cur_node.host_start = (uintptr_t) hostaddrs[i]; - cur_node.host_end = (uintptr_t) hostaddrs[last] - + sizes[last]; + cur_node.host_end = (uintptr_t) hostaddrs[argmax] + sizes[argmax]; tgt->list[i].key = NULL; tgt->list[i].offset = OFFSET_STRUCT; splay_tree_key n = splay_tree_lookup (mem_map, &cur_node); @@ -1076,21 +1122,26 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, size_t align = (size_t) 1 << (kind >> rshift); if (tgt_align < align) tgt_align = align; - tgt_size -= (uintptr_t) hostaddrs[first] - cur_node.host_start; + tgt_size -= (uintptr_t) hostaddrs[argmin] - cur_node.host_start; tgt_size = (tgt_size + align - 1) & ~(align - 1); tgt_size += cur_node.host_end - cur_node.host_start; not_found_cnt += last - i; + void *prev_addr = NULL; for (i = first; i <= last; i++) { + int oi = order ? first + order[i - first] : i; tgt->list[i].key = NULL; + if (order && i > first && prev_addr == hostaddrs[oi]) + continue; if (!aq - && gomp_to_device_kind_p (get_kind (short_mapkind, kinds, i) - & typemask) - && sizes[i] != 0) + && gomp_to_device_kind_p (get_kind (short_mapkind, kinds, + oi) & typemask) + && sizes[oi] != 0) gomp_coalesce_buf_add (&cbuf, tgt_size - cur_node.host_end - + (uintptr_t) hostaddrs[i], - sizes[i]); + + (uintptr_t) hostaddrs[oi], + sizes[oi]); + prev_addr = hostaddrs[oi]; } i--; continue; @@ -1368,11 +1419,12 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, { int kind = get_kind (short_mapkind, kinds, i); bool implicit = get_implicit (short_mapkind, kinds, i); + int *order = NULL; if (hostaddrs[i] == NULL) continue; switch (kind & typemask) { - size_t align, len, first, last; + size_t align, len, first, last, argmin, argmax; splay_tree_key n; case GOMP_MAP_FIRSTPRIVATE: align = (size_t) 1 << (kind >> rshift); @@ -1440,39 +1492,58 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, tgt->list[i].offset = OFFSET_INLINED; } continue; + case GOMP_MAP_STRUCT_UNORD: + order = (int *) gomp_alloca (sizeof (int) * sizes[i]); + for (int j = 0; j < sizes[i]; j++) + order[j] = j; +#ifdef _GNU_SOURCE + qsort_r (order, sizes[i], sizeof (int), &compare_addr_r, + &hostaddrs[i + 1]); +#elif defined(__GNUC__) + hostaddr_idx = i + 1; + qsort (order, sizes[i], sizeof (int), &compare_addr); +#else +#error no threadsafe qsort +#endif + /* Fallthrough. 
*/ case GOMP_MAP_STRUCT: - first = i + 1; - last = i + sizes[i]; + first = argmin = i + 1; + last = argmax = i + sizes[i]; + if (order) + { + argmin = first + order[0]; + argmax = first + order[sizes[i] - 1]; + } cur_node.host_start = (uintptr_t) hostaddrs[i]; - cur_node.host_end = (uintptr_t) hostaddrs[last] - + sizes[last]; - if (tgt->list[first].key != NULL) + cur_node.host_end = (uintptr_t) hostaddrs[argmax] + + sizes[argmax]; + if (tgt->list[argmin].key != NULL) continue; - if (sizes[last] == 0) + if (sizes[argmax] == 0) cur_node.host_end++; n = splay_tree_lookup (mem_map, &cur_node); - if (sizes[last] == 0) + if (sizes[argmax] == 0) cur_node.host_end--; if (n == NULL && cur_node.host_start == cur_node.host_end) { gomp_mutex_unlock (&devicep->lock); gomp_fatal ("Struct pointer member not mapped (%p)", - (void*) hostaddrs[first]); + (void*) hostaddrs[argmin]); } if (n == NULL) { size_t align = (size_t) 1 << (kind >> rshift); - tgt_size -= (uintptr_t) hostaddrs[first] + tgt_size -= (uintptr_t) hostaddrs[argmin] - (uintptr_t) hostaddrs[i]; tgt_size = (tgt_size + align - 1) & ~(align - 1); - tgt_size += (uintptr_t) hostaddrs[first] + tgt_size += (uintptr_t) hostaddrs[argmin] - (uintptr_t) hostaddrs[i]; - field_tgt_base = (uintptr_t) hostaddrs[first]; + field_tgt_base = (uintptr_t) hostaddrs[argmin]; field_tgt_offset = tgt_size; field_tgt_clear = last; field_tgt_structelem_first = NULL; tgt_size += cur_node.host_end - - (uintptr_t) hostaddrs[first]; + - (uintptr_t) hostaddrs[argmin]; continue; } for (i = first; i <= last; i++) @@ -1557,9 +1628,40 @@ gomp_map_vars_internal (struct gomp_device_descr *devicep, k->host_end = k->host_start + sizeof (void *); splay_tree_key n = splay_tree_lookup (mem_map, k); if (n && n->refcount != REFCOUNT_LINK) - gomp_map_vars_existing (devicep, aq, n, k, &tgt->list[i], - kind & typemask, false, implicit, cbufp, - refcount_set); + { + if (field_tgt_clear != FIELD_TGT_EMPTY) + { + /* For this condition to be true, there must be a + duplicate struct element mapping. This can happen with + GOMP_MAP_STRUCT_UNORD mappings, for example. 
*/ + tgt->list[i].key = n; + if (openmp_p) + { + assert ((n->refcount & REFCOUNT_STRUCTELEM) != 0); + assert (field_tgt_structelem_first != NULL); + + if (i == field_tgt_clear) + { + n->refcount |= REFCOUNT_STRUCTELEM_FLAG_LAST; + field_tgt_structelem_first = NULL; + } + } + if (i == field_tgt_clear) + field_tgt_clear = FIELD_TGT_EMPTY; + gomp_increment_refcount (n, refcount_set); + tgt->list[i].copy_from + = GOMP_MAP_COPY_FROM_P (kind & typemask); + tgt->list[i].always_copy_from + = GOMP_MAP_ALWAYS_FROM_P (kind & typemask); + tgt->list[i].is_attach = false; + tgt->list[i].offset = 0; + tgt->list[i].length = k->host_end - k->host_start; + } + else + gomp_map_vars_existing (devicep, aq, n, k, &tgt->list[i], + kind & typemask, false, implicit, + cbufp, refcount_set); + } else { k->aux = NULL; @@ -3314,7 +3416,8 @@ GOMP_target_enter_exit_data (int device, size_t mapnum, void **hostaddrs, size_t i, j; if ((flags & GOMP_TARGET_FLAG_EXIT_DATA) == 0) for (i = 0; i < mapnum; i++) - if ((kinds[i] & 0xff) == GOMP_MAP_STRUCT) + if ((kinds[i] & 0xff) == GOMP_MAP_STRUCT + || (kinds[i] & 0xff) == GOMP_MAP_STRUCT_UNORD) { gomp_map_vars (devicep, sizes[i] + 1, &hostaddrs[i], NULL, &sizes[i], &kinds[i], true, &refcount_set, @@ -3409,7 +3512,8 @@ gomp_target_task_fn (void *data) htab_t refcount_set = htab_create (ttask->mapnum); if ((ttask->flags & GOMP_TARGET_FLAG_EXIT_DATA) == 0) for (i = 0; i < ttask->mapnum; i++) - if ((ttask->kinds[i] & 0xff) == GOMP_MAP_STRUCT) + if ((ttask->kinds[i] & 0xff) == GOMP_MAP_STRUCT + || (ttask->kinds[i] & 0xff) == GOMP_MAP_STRUCT_UNORD) { gomp_map_vars (devicep, ttask->sizes[i] + 1, &ttask->hostaddrs[i], NULL, &ttask->sizes[i], &ttask->kinds[i], true, diff --git a/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-1.c b/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-1.c new file mode 100644 index 00000000000..b0994c0a7bb --- /dev/null +++ b/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-1.c @@ -0,0 +1,38 @@ +#include +#include + +struct st { + int *p; +}; + +int main (void) +{ + struct st s[2]; + s[0].p = (int *) calloc (5, sizeof (int)); + s[1].p = (int *) calloc (5, sizeof (int)); + +#pragma omp target map(s[0].p, s[1].p, s[0].p[0:2], s[1].p[1:3]) + { + s[0].p[0] = 5; + s[1].p[1] = 7; + } + +#pragma omp target map(s, s[0].p[0:2], s[1].p[1:3]) + { + s[0].p[0]++; + s[1].p[1]++; + } + +#pragma omp target map(s[0:2], s[0].p[0:2], s[1].p[1:3]) + { + s[0].p[0]++; + s[1].p[1]++; + } + + assert (s[0].p[0] == 7); + assert (s[1].p[1] == 9); + + free (s[0].p); + free (s[1].p); + return 0; +} diff --git a/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-2.c b/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-2.c new file mode 100644 index 00000000000..fe2cc8c0515 --- /dev/null +++ b/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-2.c @@ -0,0 +1,54 @@ +#include +#include + +struct st { + int *p; +}; + +int main (void) +{ + struct st s[10]; + + for (int i = 0; i < 10; i++) + s[i].p = (int *) calloc (5, sizeof (int)); + + for (int i = 0; i < 10; i++) + for (int j = 0; j < 10; j++) + for (int k = 0; k < 10; k++) + { + if (i == j || j == k || i == k) + continue; + +#pragma omp target map(s[i].p, s[j].p, s[k].p, s[i].p[0:2], s[j].p[1:3], \ + s[k].p[2]) + { + s[i].p[0]++; + s[j].p[1]++; + s[k].p[2]++; + } + +#pragma omp target map(s, s[i].p[0:2], s[j].p[1:3], s[k].p[2]) + { + s[i].p[0]++; + s[j].p[1]++; + s[k].p[2]++; + } + +#pragma omp target map(s[0:10], s[i].p[0:2], s[j].p[1:3], s[k].p[2]) + { + s[i].p[0]++; + s[j].p[1]++; + s[k].p[2]++; 
+ } + } + + for (int i = 0; i < 10; i++) + { + assert (s[i].p[0] == 216); + assert (s[i].p[1] == 216); + assert (s[i].p[2] == 216); + free (s[i].p); + } + + return 0; +} diff --git a/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-3.c b/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-3.c new file mode 100644 index 00000000000..8ed7e1d60a2 --- /dev/null +++ b/libgomp/testsuite/libgomp.c-c++-common/map-arrayofstruct-3.c @@ -0,0 +1,64 @@ +#include +#include + +struct st { + int *p; +}; + +struct tt { + struct st a[10]; +}; + +struct ut { + struct tt *t; +}; + +int main (void) +{ + struct tt *t = (struct tt *) malloc (sizeof *t); + struct ut *u = (struct ut *) malloc (sizeof *u); + + for (int i = 0; i < 10; i++) + t->a[i].p = (int *) calloc (5, sizeof (int)); + + u->t = t; + + for (int i = 0; i < 10; i++) + for (int j = 0; j < 10; j++) + for (int k = 0; k < 10; k++) + { + if (i == j || j == k || i == k) + continue; + + /* This one can use "firstprivate" for T... */ +#pragma omp target map(t->a[i].p, t->a[j].p, t->a[k].p, \ + t->a[i].p[0:2], t->a[j].p[1:3], t->a[k].p[2]) + { + t->a[i].p[0]++; + t->a[j].p[1]++; + t->a[k].p[2]++; + } + + /* ...but this one must use attach/detach for T. */ +#pragma omp target map(u->t, u->t->a[i].p, u->t->a[j].p, u->t->a[k].p, \ + u->t->a[i].p[0:2], u->t->a[j].p[1:3], u->t->a[k].p[2]) + { + u->t->a[i].p[0]++; + u->t->a[j].p[1]++; + u->t->a[k].p[2]++; + } + } + + for (int i = 0; i < 10; i++) + { + assert (t->a[i].p[0] == 144); + assert (t->a[i].p[1] == 144); + assert (t->a[i].p[2] == 144); + free (t->a[i].p); + } + + free (u); + free (t); + + return 0; +} diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray-3.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray-3.f90 new file mode 100644 index 00000000000..b009a4224cc --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subarray-3.f90 @@ -0,0 +1,48 @@ +! { dg-do run } + +module mymod +type G +integer :: x, y +integer, pointer :: arr(:) +integer :: z +end type G +end module mymod + +program myprog +use mymod + +integer, target :: arr1(10) +integer, target :: arr2(10) +integer, target :: arr3(10) +type(G), dimension(3) :: gvar + +integer :: i, j + +gvar(1)%arr => arr1 +gvar(2)%arr => arr2 +gvar(3)%arr => arr3 + +gvar(1)%arr = 0 +gvar(2)%arr = 0 +gvar(3)%arr = 0 + +i = 1 +j = 2 + +!$omp target map(gvar(i)%arr, gvar(j)%arr, gvar(j)%arr(1:5)) +gvar(i)%arr(1) = gvar(i)%arr(1) + 1 +gvar(j)%arr(1) = gvar(j)%arr(1) + 2 +!$omp end target + +i = 2 +j = 1 + +!$omp target map(gvar(i)%arr, gvar(j)%arr, gvar(j)%arr(1:5)) +gvar(i)%arr(1) = gvar(i)%arr(1) + 3 +gvar(j)%arr(1) = gvar(j)%arr(1) + 4 +!$omp end target + +if (gvar(i)%arr(1).ne.4) stop 1 +if (gvar(j)%arr(1).ne.6) stop 2 + +end program myprog diff --git a/libgomp/testsuite/libgomp.fortran/map-subarray-5.f90 b/libgomp/testsuite/libgomp.fortran/map-subarray-5.f90 new file mode 100644 index 00000000000..33a64292bfd --- /dev/null +++ b/libgomp/testsuite/libgomp.fortran/map-subarray-5.f90 @@ -0,0 +1,50 @@ +! 
{ dg-do run } + +type t + integer, pointer :: p(:) +end type t + +type(t) :: var(3) +integer :: i, j + +allocate (var(1)%p, source=[1,2,3,5]) +allocate (var(2)%p, source=[2,3,5]) +allocate (var(3)%p(1:3)) + +var(3)%p = 0 + +do i = 1, 3 + do j = 1, 3 +!$omp target map(var(i)%p, var(j)%p) + var(i)%p(1) = 5 + var(j)%p(2) = 7 +!$omp end target + + if (i.ne.j) then +!$omp target map(var(i)%p(1:3), var(i)%p, var(j)%p) + var(i)%p(1) = var(i)%p(1) + 1 + var(j)%p(2) = var(j)%p(2) + 1 +!$omp end target + +!$omp target map(var(i)%p, var(j)%p, var(j)%p(1:3)) + var(i)%p(1) = var(i)%p(1) + 1 + var(j)%p(2) = var(j)%p(2) + 1 +!$omp end target + +!$omp target map(var(i)%p, var(i)%p(1:3), var(j)%p, var(j)%p(2)) + var(i)%p(1) = var(i)%p(1) + 1 + var(j)%p(2) = var(j)%p(2) + 1 +!$omp end target + end if + + if (i.eq.j) then + if (var(i)%p(1).ne.5) stop 1 + if (var(j)%p(2).ne.7) stop 2 + else + if (var(i)%p(1).ne.8) stop 3 + if (var(j)%p(2).ne.10) stop 4 + end if + end do +end do + +end