From patchwork Tue Dec 12 16:34:38 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 177443
From: Gregory CLEMENT
To: Paul Burton, Thomas Bogendoerfer, linux-mips@vger.kernel.org,
	Jiaxun Yang, Rob Herring, Krzysztof Kozlowski,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Vladimir Kondratiev, Tawfik Bayouk, Alexandre Belloni,
	Théo Lebrun, Thomas Petazzoni
Subject: [PATCH v5 06/22] MIPS: Refactor mips_cps_core_entry implementation
Date: Tue, 12 Dec 2023 17:34:38 +0100
Message-ID: <20231212163459.1923041-7-gregory.clement@bootlin.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20231212163459.1923041-1-gregory.clement@bootlin.com>
References: <20231212163459.1923041-1-gregory.clement@bootlin.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jiaxun Yang

The exception vectors for CPS systems are now allocated on the fly with
memblock as well. Allocation is attempted from KSEG1 first, then from
anywhere in the low 4G if that fails. The main reset vector is now
generated with uasm, to avoid having to patch large amounts of code;
the other vectors are copied into place later.

Signed-off-by: Jiaxun Yang
---
 arch/mips/include/asm/mips-cm.h |   1 +
 arch/mips/include/asm/smp-cps.h |   4 +-
 arch/mips/kernel/cps-vec.S      | 110 ++++++++-------------
 arch/mips/kernel/smp-cps.c      | 167 +++++++++++++++++++++++++++-----
 arch/mips/kernel/traps.c        |   2 +
 5 files changed, 186 insertions(+), 98 deletions(-)

diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
index 23c67c0871b17..15d8d69de4550 100644
--- a/arch/mips/include/asm/mips-cm.h
+++ b/arch/mips/include/asm/mips-cm.h
@@ -311,6 +311,7 @@ GCR_CX_ACCESSOR_RW(32, 0x018, other)
 /* GCR_Cx_RESET_BASE - Configure where powered up cores will fetch from */
 GCR_CX_ACCESSOR_RW(32, 0x020, reset_base)
 #define CM_GCR_Cx_RESET_BASE_BEVEXCBASE	GENMASK(31, 12)
+#define CM_GCR_Cx_RESET_BASE_MODE	BIT(1)
 
 /* GCR_Cx_ID - Identify the current core */
 GCR_CX_ACCESSOR_RO(32, 0x028, id)
diff --git a/arch/mips/include/asm/smp-cps.h b/arch/mips/include/asm/smp-cps.h
index 22a572b70fe31..39a602e5fecc4 100644
--- a/arch/mips/include/asm/smp-cps.h
+++ b/arch/mips/include/asm/smp-cps.h
@@ -24,7 +24,7 @@ struct core_boot_config {
 
 extern struct core_boot_config *mips_cps_core_bootcfg;
 
-extern void mips_cps_core_entry(void);
+extern void mips_cps_core_boot(int cca, void __iomem *gcr_base);
 extern void mips_cps_core_init(void);
 
 extern void mips_cps_boot_vpes(struct core_boot_config *cfg, unsigned vpe);
@@ -32,8 +32,6 @@ extern void mips_cps_boot_vpes(struct core_boot_config *cfg, unsigned vpe);
 extern void mips_cps_pm_save(void);
 extern void mips_cps_pm_restore(void);
 
-extern void *mips_cps_core_entry_patch_end;
-
 #ifdef CONFIG_MIPS_CPS
 extern bool mips_cps_smp_in_use(void);
diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S
index 64ecfdac6580b..8870a2dbc35aa 100644
--- a/arch/mips/kernel/cps-vec.S
+++ b/arch/mips/kernel/cps-vec.S
@@ -4,6 +4,8 @@
  * Author: Paul Burton
  */
 
+#include <linux/init.h>
+
 #include
 #include
 #include
@@ -81,40 +83,48 @@
 	nop
 	.endm
 
+	__INIT
+LEAF(excep_tlbfill)
+	DUMP_EXCEP("TLB Fill")
+	b	.
+	nop
+	END(excep_tlbfill)
 
-.balign 0x1000
+LEAF(excep_xtlbfill)
+	DUMP_EXCEP("XTLB Fill")
+	b	.
+	nop
+	END(excep_xtlbfill)
 
-LEAF(mips_cps_core_entry)
-	/*
-	 * These first several instructions will be patched by cps_smp_setup to load the
-	 * CCA to use into register s0 and GCR base address to register s1.
-	 */
-	.rept	CPS_ENTRY_PATCH_INSNS
-	nop
-	.endr
+LEAF(excep_cache)
+	DUMP_EXCEP("Cache")
+	b	.
+	nop
+	END(excep_cache)
 
-	.global mips_cps_core_entry_patch_end
-mips_cps_core_entry_patch_end:
+LEAF(excep_genex)
+	DUMP_EXCEP("General")
+	b	.
+	nop
+	END(excep_genex)
 
-	/* Check whether we're here due to an NMI */
-	mfc0	k0, CP0_STATUS
-	and	k0, k0, ST0_NMI
-	beqz	k0, not_nmi
+LEAF(excep_intex)
+	DUMP_EXCEP("Interrupt")
+	b	.
+	nop
+	END(excep_intex)
 
-	/* This is an NMI */
-	PTR_LA	k0, nmi_handler
+LEAF(excep_ejtag)
+	PTR_LA	k0, ejtag_debug_handler
 	jr	k0
 	nop
+	END(excep_ejtag)
+	__FINIT
 
-not_nmi:
-	/* Setup Cause */
-	li	t0, CAUSEF_IV
-	mtc0	t0, CP0_CAUSE
-
-	/* Setup Status */
-	li	t0, ST0_CU1 | ST0_CU0 | ST0_BEV | STATUS_BITDEPS
-	mtc0	t0, CP0_STATUS
+LEAF(mips_cps_core_boot)
+	/* Save CCA and GCR base */
+	move	s0, a0
+	move	s1, a1
 
 	/* We don't know how to do coherence setup on earlier ISA */
 #if MIPS_ISA_REV > 0
@@ -178,49 +188,7 @@ not_nmi:
 	PTR_L	sp, VPEBOOTCFG_SP(v1)
 	jr	t1
 	nop
-	END(mips_cps_core_entry)
-
-.org 0x200
-LEAF(excep_tlbfill)
-	DUMP_EXCEP("TLB Fill")
-	b	.
-	nop
-	END(excep_tlbfill)
-
-.org 0x280
-LEAF(excep_xtlbfill)
-	DUMP_EXCEP("XTLB Fill")
-	b	.
-	nop
-	END(excep_xtlbfill)
-
-.org 0x300
-LEAF(excep_cache)
-	DUMP_EXCEP("Cache")
-	b	.
-	nop
-	END(excep_cache)
-
-.org 0x380
-LEAF(excep_genex)
-	DUMP_EXCEP("General")
-	b	.
-	nop
-	END(excep_genex)
-
-.org 0x400
-LEAF(excep_intex)
-	DUMP_EXCEP("Interrupt")
-	b	.
-	nop
-	END(excep_intex)
-
-.org 0x480
-LEAF(excep_ejtag)
-	PTR_LA	k0, ejtag_debug_handler
-	jr	k0
-	nop
-	END(excep_ejtag)
+	END(mips_cps_core_boot)
 
 LEAF(mips_cps_core_init)
 #ifdef CONFIG_MIPS_MT_SMP
@@ -428,7 +396,7 @@ LEAF(mips_cps_boot_vpes)
 	/* Calculate a pointer to the VPEs struct vpe_boot_config */
 	li	t0, VPEBOOTCFG_SIZE
 	mul	t0, t0, ta1
-	addu	t0, t0, ta3
+	PTR_ADDU t0, t0, ta3
 
 	/* Set the TC restart PC */
 	lw	t1, VPEBOOTCFG_PC(t0)
@@ -603,10 +571,10 @@ dcache_done:
 	lw	$1, TI_CPU(gp)
 	sll	$1, $1, LONGLOG
 	PTR_LA	\dest, __per_cpu_offset
-	addu	$1, $1, \dest
+	PTR_ADDU $1, $1, \dest
 	lw	$1, 0($1)
 	PTR_LA	\dest, cps_cpu_state
-	addu	\dest, \dest, $1
+	PTR_ADDU \dest, \dest, $1
 	.set	pop
 	.endm
 
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index dd55d59b88db3..9aad678a32bd7 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <linux/memblock.h>
 #include
 #include
 #include
@@ -25,7 +26,33 @@
 #include
 #include
 
+#define BEV_VEC_SIZE	0x500
+#define BEV_VEC_ALIGN	0x1000
+
+#define A0	4
+#define A1	5
+#define T9	25
+#define K0	26
+#define K1	27
+
+#define C0_STATUS	12, 0
+#define C0_CAUSE	13, 0
+
+#define ST0_NMI_BIT	19
+#ifdef CONFIG_64BIT
+#define ST0_KX_IF_64	ST0_KX
+#else
+#define ST0_KX_IF_64	0
+#endif
+
+enum label_id {
+	label_not_nmi = 1,
+};
+
+UASM_L_LA(_not_nmi)
+
 static DECLARE_BITMAP(core_power, NR_CPUS);
+static uint32_t core_entry_reg;
 
 struct core_boot_config *mips_cps_core_bootcfg;
 
@@ -34,10 +61,113 @@ static unsigned __init core_vpe_count(unsigned int cluster, unsigned core)
 	return min(smp_max_threads, mips_cps_numvps(cluster, core));
 }
 
+static void __init *mips_cps_build_core_entry(void *addr)
+{
+	extern void (*nmi_handler)(void);
+	u32 *p = addr;
+	u32 val;
+	struct uasm_label labels[2];
+	struct uasm_reloc relocs[2];
+	struct uasm_label *l = labels;
+	struct uasm_reloc *r = relocs;
+
+	memset(labels, 0, sizeof(labels));
+	memset(relocs, 0, sizeof(relocs));
+
+	uasm_i_mfc0(&p, K0, C0_STATUS);
+	if (cpu_has_mips_r2_r6)
+		uasm_i_ext(&p, K0, K0, ST0_NMI_BIT, 1);
+	else {
+		uasm_i_srl(&p, K0, K0, ST0_NMI_BIT);
+		uasm_i_andi(&p, K0, K0, 0x1);
+	}
+
+	uasm_il_bnez(&p, &r, K0, label_not_nmi);
+	uasm_i_nop(&p);
+	UASM_i_LA(&p, K0, (long)&nmi_handler);
+
+	uasm_l_not_nmi(&l, p);
+
+	val = CAUSEF_IV;
+	uasm_i_lui(&p, K0, val >> 16);
+	uasm_i_ori(&p, K0, K0, val & 0xffff);
+	uasm_i_mtc0(&p, K0, C0_CAUSE);
+	val = ST0_CU1 | ST0_CU0 | ST0_BEV | ST0_KX_IF_64;
+	uasm_i_lui(&p, K0, val >> 16);
+	uasm_i_ori(&p, K0, K0, val & 0xffff);
+	uasm_i_mtc0(&p, K0, C0_STATUS);
+	uasm_i_ehb(&p);
+	uasm_i_ori(&p, A0, 0, read_c0_config() & CONF_CM_CMASK);
+	UASM_i_LA(&p, A1, (long)mips_gcr_base);
+#if defined(KBUILD_64BIT_SYM32) || defined(CONFIG_32BIT)
+	UASM_i_LA(&p, T9, CKSEG1ADDR(__pa_symbol(mips_cps_core_boot)));
+#else
+	UASM_i_LA(&p, T9, TO_UNCAC(__pa_symbol(mips_cps_core_boot)));
+#endif
+	uasm_i_jr(&p, T9);
+	uasm_i_nop(&p);
+
+	uasm_resolve_relocs(relocs, labels);
+
+	return p;
+}
+
+static int __init setup_cps_vecs(void)
+{
+	extern void excep_tlbfill(void);
+	extern void excep_xtlbfill(void);
+	extern void excep_cache(void);
+	extern void excep_genex(void);
+	extern void excep_intex(void);
+	extern void excep_ejtag(void);
+	phys_addr_t cps_vec_pa;
+	void *cps_vec;
+
+	/* Try to allocate in KSEG1 first */
+	cps_vec_pa = memblock_phys_alloc_range(BEV_VEC_SIZE, BEV_VEC_ALIGN,
+					       0x0, KSEGX_SIZE - 1);
+
+	if (cps_vec_pa)
+		core_entry_reg = CKSEG1ADDR(cps_vec_pa) &
+					CM_GCR_Cx_RESET_BASE_BEVEXCBASE;
+
+	if (!cps_vec_pa && mips_cm_is64) {
+		cps_vec_pa = memblock_phys_alloc_range(BEV_VEC_SIZE, BEV_VEC_ALIGN,
+						       0x0, SZ_4G - 1);
+		if (cps_vec_pa)
+			core_entry_reg = (cps_vec_pa & CM_GCR_Cx_RESET_BASE_BEVEXCBASE) |
+					CM_GCR_Cx_RESET_BASE_MODE;
+	}
+
+	if (!cps_vec_pa)
+		return -ENOMEM;
+
+	/* We want to ensure cache is clean before writing uncached mem */
+	blast_dcache_range(TO_CAC(cps_vec_pa), TO_CAC(cps_vec_pa) + BEV_VEC_SIZE);
+	bc_wback_inv(TO_CAC(cps_vec_pa), BEV_VEC_SIZE);
+	__sync();
+
+	cps_vec = (void *)TO_UNCAC(cps_vec_pa);
+	mips_cps_build_core_entry(cps_vec);
+
+	memcpy(cps_vec + 0x200, &excep_tlbfill, 0x80);
+	memcpy(cps_vec + 0x280, &excep_xtlbfill, 0x80);
+	memcpy(cps_vec + 0x300, &excep_cache, 0x80);
+	memcpy(cps_vec + 0x380, &excep_genex, 0x80);
+	memcpy(cps_vec + 0x400, &excep_intex, 0x80);
+	memcpy(cps_vec + 0x480, &excep_ejtag, 0x80);
+
+	/* Make sure no prefetched data in cache */
+	blast_inv_dcache_range(TO_CAC(cps_vec_pa), TO_CAC(cps_vec_pa) + BEV_VEC_SIZE);
+	bc_inv(TO_CAC(cps_vec_pa), BEV_VEC_SIZE);
+	__sync();
+
+	return 0;
+}
+
 static void __init cps_smp_setup(void)
 {
 	unsigned int nclusters, ncores, nvpes, core_vpes;
-	unsigned long core_entry;
 	int cl, c, v;
 
 	/* Detect & record VPE topology */
@@ -94,10 +224,11 @@ static void __init cps_smp_setup(void)
 	/* Make core 0 coherent with everything */
 	write_gcr_cl_coherence(0xff);
 
-	if (mips_cm_revision() >= CM_REV_CM3) {
-		core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry);
-		write_gcr_bev_base(core_entry);
-	}
+	if (setup_cps_vecs())
+		pr_err("Failed to setup CPS vectors\n");
+
+	if (core_entry_reg && mips_cm_revision() >= CM_REV_CM3)
+		write_gcr_bev_base(core_entry_reg);
 
 #ifdef CONFIG_MIPS_MT_FPAFF
 	/* If we have an FPU, enroll ourselves in the FPU-full mask */
@@ -110,10 +241,14 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
 {
 	unsigned ncores, core_vpes, c, cca;
 	bool cca_unsuitable, cores_limited;
-	u32 *entry_code;
 
 	mips_mt_set_cpuoptions();
 
+	if (!core_entry_reg) {
+		pr_err("core_entry address unsuitable, disabling smp-cps\n");
+		goto err_out;
+	}
+
 	/* Detect whether the CCA is unsuited to multi-core SMP */
 	cca = read_c0_config() & CONF_CM_CMASK;
 	switch (cca) {
@@ -145,20 +280,6 @@ static void __init cps_prepare_cpus(unsigned int max_cpus)
 		(cca_unsuitable && cpu_has_dc_aliases) ? " & " : "",
 		cpu_has_dc_aliases ? "dcache aliasing" : "");
 
-	/*
-	 * Patch the start of mips_cps_core_entry to provide:
-	 *
-	 * s0 = kseg0 CCA
-	 */
-	entry_code = (u32 *)&mips_cps_core_entry;
-	uasm_i_addiu(&entry_code, 16, 0, cca);
-	UASM_i_LA(&entry_code, 17, (long)mips_gcr_base);
-	BUG_ON((void *)entry_code > (void *)&mips_cps_core_entry_patch_end);
-	blast_dcache_range((unsigned long)&mips_cps_core_entry,
-			   (unsigned long)entry_code);
-	bc_wback_inv((unsigned long)&mips_cps_core_entry,
-		     (void *)entry_code - (void *)&mips_cps_core_entry);
-	__sync();
-
 	/* Allocate core boot configuration structs */
 	ncores = mips_cps_numcores(0);
@@ -213,7 +334,7 @@ static void boot_core(unsigned int core, unsigned int vpe_id)
 	mips_cm_lock_other(0, core, 0, CM_GCR_Cx_OTHER_BLOCK_LOCAL);
 
 	/* Set its reset vector */
-	write_gcr_co_reset_base(CKSEG1ADDR((unsigned long)mips_cps_core_entry));
+	write_gcr_co_reset_base(core_entry_reg);
 
 	/* Ensure its coherency is disabled */
 	write_gcr_co_coherence(0);
@@ -290,7 +411,6 @@ static int cps_boot_secondary(int cpu, struct task_struct *idle)
 	unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]);
 	struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
 	struct vpe_boot_config *vpe_cfg = &core_cfg->vpe_config[vpe_id];
-	unsigned long core_entry;
 	unsigned int remote;
 	int err;
 
@@ -314,8 +434,7 @@ static int cps_boot_secondary(int cpu, struct task_struct *idle)
 
 	if (cpu_has_vp) {
 		mips_cm_lock_other(0, core, vpe_id, CM_GCR_Cx_OTHER_BLOCK_LOCAL);
-		core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry);
-		write_gcr_co_reset_base(core_entry);
+		write_gcr_co_reset_base(core_entry_reg);
 		mips_cm_unlock_other();
 	}
 
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 230728d76d11f..ea59d321f713e 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -74,6 +74,8 @@
 
 #include "access-helper.h"
 
+#define MAX(a, b) ((a) >= (b) ? (a) : (b))
+
 extern void check_wait(void);
 extern asmlinkage void rollback_handle_int(void);
 extern asmlinkage void handle_int(void);