From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bo Li <libo.gcs85@bytedance.com>
To: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, x86@kernel.org, luto@kernel.org,
	kees@kernel.org, akpm@linux-foundation.org, david@redhat.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org, peterz@infradead.org
Cc: dietmar.eggemann@arm.com, hpa@zytor.com, acme@kernel.org,
	namhyung@kernel.org, mark.rutland@arm.com,
	alexander.shishkin@linux.intel.com, jolsa@kernel.org, irogers@google.com,
	adrian.hunter@intel.com, kan.liang@linux.intel.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com, rostedt@goodmis.org,
	bsegall@google.com, mgorman@suse.de, vschneid@redhat.com,
	jannh@google.com, pfalcato@suse.de, riel@surriel.com,
	harry.yoo@oracle.com, linux-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, duanxiongchun@bytedance.com, yinhongbo@bytedance.com,
	dengliang.1214@bytedance.com, xieyongji@bytedance.com,
	chaiwen.cc@bytedance.com, songmuchun@bytedance.com,
	yuanzhu@bytedance.com, chengguozhu@bytedance.com,
	sunjiadong.lff@bytedance.com, Bo Li <libo.gcs85@bytedance.com>
Subject: [RFC v2 09/35] RPAL: enable address space sharing
Date: Fri, 30 May 2025 17:27:37 +0800
Message-Id: <2b5378f3686fd2831468e65c49609fbb19072b43.1748594840.git.libo.gcs85@bytedance.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
In-Reply-To:
References:

RPAL's memory sharing is implemented by copying p4d entries, which
requires corresponding interfaces. Copying p4d entries also causes a
process's page table to contain p4d entries that do not belong to it,
and RPAL must resolve the compatibility issues this creates with other
kernel subsystems. This patch implements the rpal_map_service()
interface, which performs the mutual copying of p4d entries between two
RPAL services.
RPAL marks each copied p4d entry with a _PAGE_RPAL_IGN flag. This flag
makes p4d_none() return true and p4d_present() return false for such
entries, keeping them invisible to other kernel subsystems. Protection
of the copied entries is guaranteed by the memory balloon, which
ensures that the address range covered by a copied p4d entry is never
used by the current service.

Signed-off-by: Bo Li <libo.gcs85@bytedance.com>
---
 arch/x86/include/asm/pgtable.h       |  25 ++++
 arch/x86/include/asm/pgtable_types.h |  11 ++
 arch/x86/rpal/internal.h             |   2 +
 arch/x86/rpal/mm.c                   | 175 +++++++++++++++++++++++++++
 4 files changed, 213 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 5ddba366d3b4..54351bfe4e47 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1137,12 +1137,37 @@ static inline int pud_bad(pud_t pud)
 #if CONFIG_PGTABLE_LEVELS > 3
 static inline int p4d_none(p4d_t p4d)
 {
+#if IS_ENABLED(CONFIG_RPAL)
+	p4dval_t p4dv = native_p4d_val(p4d);
+
+	/*
+	 * RPAL copies p4d entries to share address spaces, so other
+	 * processes must not manipulate a copied p4d. Make p4d_none()
+	 * return true for copied entries to hide them from generic
+	 * kernel page table logic.
+	 */
+	return (p4dv & _PAGE_RPAL_IGN) ||
+	       ((p4dv & ~(_PAGE_KNL_ERRATUM_MASK)) == 0);
+#else
 	return (native_p4d_val(p4d) & ~(_PAGE_KNL_ERRATUM_MASK)) == 0;
+#endif
 }
 
 static inline int p4d_present(p4d_t p4d)
 {
+#if IS_ENABLED(CONFIG_RPAL)
+	p4dval_t p4df = p4d_flags(p4d);
+
+	/*
+	 * RPAL copies p4d entries to share address spaces, so other
+	 * processes must not manipulate a copied p4d. Make
+	 * p4d_present() return false for copied entries to hide them
+	 * from generic kernel page table logic.
+	 */
+	return ((p4df & (_PAGE_PRESENT | _PAGE_RPAL_IGN)) == _PAGE_PRESENT);
+#else
 	return p4d_flags(p4d) & _PAGE_PRESENT;
+#endif
 }
 
 static inline pud_t *p4d_pgtable(p4d_t p4d)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index b74ec5c3643b..781b0f5bc359 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -35,6 +35,13 @@
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
 #define _PAGE_BIT_KERNEL_4K	_PAGE_BIT_SOFTW3 /* page must not be converted to large */
 #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
+/*
+ * _PAGE_BIT_SOFTW1 is also used by _PAGE_BIT_SPECIAL, but the
+ * two do not conflict: RPAL uses the bit only at the p4d/pud
+ * level, while _PAGE_BIT_SPECIAL is used only at the pte
+ * level.
+ */
+#define _PAGE_BIT_RPAL_IGN	_PAGE_BIT_SOFTW1
 
 #ifdef CONFIG_X86_64
 #define _PAGE_BIT_SAVED_DIRTY	_PAGE_BIT_SOFTW5 /* Saved Dirty bit (leaf) */
@@ -95,6 +102,10 @@
 #define _PAGE_SOFT_DIRTY	(_AT(pteval_t, 0))
 #endif
 
+#if IS_ENABLED(CONFIG_RPAL)
+#define _PAGE_RPAL_IGN	(_AT(pteval_t, 1) << _PAGE_BIT_RPAL_IGN)
+#endif
+
 /*
  * Tracking soft dirty bit when a page goes to a swap is tricky.
  * We need a bit which can be stored in pte _and_ not conflict
diff --git a/arch/x86/rpal/internal.h b/arch/x86/rpal/internal.h
index 3559c9c6e868..65f2cf4baf8f 100644
--- a/arch/x86/rpal/internal.h
+++ b/arch/x86/rpal/internal.h
@@ -34,6 +34,8 @@ static inline void rpal_put_shared_page(struct rpal_shared_page *rsp)
 int rpal_mmap(struct file *filp, struct vm_area_struct *vma);
 struct rpal_shared_page *rpal_find_shared_page(struct rpal_service *rs,
 					       unsigned long addr);
+int rpal_map_service(struct rpal_service *tgt);
+void rpal_unmap_service(struct rpal_service *tgt);
 
 /* thread.c */
 int rpal_register_sender(unsigned long addr);
diff --git a/arch/x86/rpal/mm.c b/arch/x86/rpal/mm.c
index 8a738c502d1d..f1003baae001 100644
--- a/arch/x86/rpal/mm.c
+++ b/arch/x86/rpal/mm.c
@@ -215,3 +215,178 @@ void rpal_exit_mmap(struct mm_struct *mm)
 		rpal_put_service(rs);
 	}
 }
+
+/*
+ * Since the user address space size of an rpal process is 512G, which
+ * is the size of one p4d entry, we assume the p4d entry never changes
+ * after the rpal process is created.
+ */
+static int mm_link_p4d(struct mm_struct *dst_mm, p4d_t src_p4d,
+		       unsigned long addr)
+{
+	spinlock_t *dst_ptl = &dst_mm->page_table_lock;
+	unsigned long flags;
+	pgd_t *dst_pgdp;
+	p4d_t p4d, *dst_p4dp;
+	p4dval_t p4dv;
+	int ret = 0;
+
+	BUILD_BUG_ON(CONFIG_PGTABLE_LEVELS < 4);
+
+	mmap_write_lock(dst_mm);
+	spin_lock_irqsave(dst_ptl, flags);
+	dst_pgdp = pgd_offset(dst_mm, addr);
+	/*
+	 * dst_pgd must exist; otherwise we would need to allocate a pgd
+	 * entry here and free it when src_p4d is freed. This should be
+	 * supported in the future.
+	 */
+	if (unlikely(pgd_none_or_clear_bad(dst_pgdp))) {
+		rpal_err("cannot find pgd entry for addr 0x%016lx\n", addr);
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	dst_p4dp = p4d_offset(dst_pgdp, addr);
+	if (unlikely(!p4d_none_or_clear_bad(dst_p4dp))) {
+		rpal_err("p4d is previously mapped\n");
+		ret = -EINVAL;
+		goto unlock;
+	}
+
+	p4dv = p4d_val(src_p4d);
+
+	/*
+	 * RPAL copies p4d entries to share address spaces, so other
+	 * processes must not manipulate a copied p4d. Mark the
+	 * copied p4d so that p4d_present() and p4d_none() ignore
+	 * such entries.
+	 */
+	p4dv |= _PAGE_RPAL_IGN;
+
+	if (boot_cpu_has(X86_FEATURE_PTI))
+		p4d = native_make_p4d((~_PAGE_NX) & p4dv);
+	else
+		p4d = native_make_p4d(p4dv);
+
+	set_p4d(dst_p4dp, p4d);
+	spin_unlock_irqrestore(dst_ptl, flags);
+	mmap_write_unlock(dst_mm);
+
+	return 0;
+unlock:
+	spin_unlock_irqrestore(dst_ptl, flags);
+	mmap_write_unlock(dst_mm);
+	return ret;
+}
+
+static void mm_unlink_p4d(struct mm_struct *mm, unsigned long addr)
+{
+	spinlock_t *ptl = &mm->page_table_lock;
+	unsigned long flags;
+	pgd_t *pgdp;
+	p4d_t *p4dp;
+
+	mmap_write_lock(mm);
+	spin_lock_irqsave(ptl, flags);
+	pgdp = pgd_offset(mm, addr);
+	p4dp = p4d_offset(pgdp, addr);
+	p4d_clear(p4dp);
+	spin_unlock_irqrestore(ptl, flags);
+	mmap_write_unlock(mm);
+
+	flush_tlb_mm(mm);
+}
+
+static int get_mm_p4d(struct mm_struct *mm, unsigned long addr, p4d_t *srcp)
+{
+	spinlock_t *ptl;
+	unsigned long flags;
+	pgd_t *pgdp;
+	p4d_t *p4dp;
+	int ret = 0;
+
+	ptl = &mm->page_table_lock;
+	spin_lock_irqsave(ptl, flags);
+	pgdp = pgd_offset(mm, addr);
+	if (pgd_none(*pgdp)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	p4dp = p4d_offset(pgdp, addr);
+	if (p4d_none(*p4dp) || p4d_bad(*p4dp)) {
+		ret = -EINVAL;
+		goto out;
+	}
+	*srcp = *p4dp;
+
+out:
+	spin_unlock_irqrestore(ptl, flags);
+
+	return ret;
+}
+
+int rpal_map_service(struct rpal_service *tgt)
+{
+	struct rpal_service *cur = rpal_current_service();
+	struct mm_struct *cur_mm, *tgt_mm;
+	unsigned long cur_addr, tgt_addr;
+	p4d_t cur_p4d, tgt_p4d;
+	int ret = 0;
+
+	cur_mm = current->mm;
+	tgt_mm = tgt->mm;
+	if (!mmget_not_zero(tgt_mm)) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	cur_addr = rpal_get_base(cur);
+	tgt_addr = rpal_get_base(tgt);
+
+	ret = get_mm_p4d(tgt_mm, tgt_addr, &tgt_p4d);
+	if (ret)
+		goto put_tgt;
+
+	ret = get_mm_p4d(cur_mm, cur_addr, &cur_p4d);
+	if (ret)
+		goto put_tgt;
+
+	ret = mm_link_p4d(cur_mm, tgt_p4d, tgt_addr);
+	if (ret)
+		goto put_tgt;
+
+	ret = mm_link_p4d(tgt_mm, cur_p4d, cur_addr);
+	if (ret) {
+		mm_unlink_p4d(cur_mm, tgt_addr);
+		goto put_tgt;
+	}
+
+put_tgt:
+	mmput(tgt_mm);
+out:
+	return ret;
+}
+
+void rpal_unmap_service(struct rpal_service *tgt)
+{
+	struct rpal_service *cur = rpal_current_service();
+	struct mm_struct *cur_mm, *tgt_mm;
+	unsigned long cur_addr, tgt_addr;
+
+	cur_mm = current->mm;
+	tgt_mm = tgt->mm;
+
+	cur_addr = rpal_get_base(cur);
+	tgt_addr = rpal_get_base(tgt);
+
+	if (mmget_not_zero(tgt_mm)) {
+		mm_unlink_p4d(tgt_mm, cur_addr);
+		mmput(tgt_mm);
+	} else {
+		/* If tgt has exited, then we get a NULL tgt_mm */
+		pr_debug("rpal: [%d] cannot find target mm\n", current->pid);
+	}
+	mm_unlink_p4d(cur_mm, tgt->base);
+}
-- 
2.20.1