From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 28 Apr 2025 13:14:31 -0700
Subject: Re: [PATCH v3 1/4] mm: establish mm/vma_exec.c for shared exec/mm VMA functionality
To: "Liam R. Howlett", Lorenzo Stoakes, Andrew Morton, Vlastimil Babka,
 Jann Horn, Pedro Falcato, David Hildenbrand, Kees Cook, Alexander Viro,
 Christian Brauner, Jan Kara, Suren Baghdasaryan, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
References: <91f2cee8f17d65214a9d83abb7011aa15f1ea690.1745853549.git.lorenzo.stoakes@oracle.com>
Howlett" , Lorenzo Stoakes , Andrew Morton , Vlastimil Babka , Jann Horn , Pedro Falcato , David Hildenbrand , Kees Cook , Alexander Viro , Christian Brauner , Jan Kara , Suren Baghdasaryan , linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: D7425180012 X-Stat-Signature: u14g5hqtx33ooh56t8eoh4efyijo3q7z X-Rspam-User: X-HE-Tag: 1745871283-936247 X-HE-Meta: U2FsdGVkX18NhEf13SrMwrueGMlHMx+SplCCOnTi75e4meVGtWlXLZkx7S2Sg8AUpy0C0sFqRuJ+hX1G1sQ8PWhgbF2QmElQYA/gAXtR07hNM2RhbR56ileXmnJO82flWtmkexFx4Rdb0EMiaXDAXYhEztpO6r00rnKSnyBJDNh6nVcgttGdSUYbB5hcmD0ccRVBG5Q03TwtijqEv15gZGyph/KlyM449b5IAnxkZvM1ibfcYYlRK6aS1HZESamCxgSS/jFCk8ngvaPZaL+ScRgfvtc1SFij2Y29gO5YvWR96eD9UH8sT2Yn4YZR8TMuoKN0uPQEpooNM9sEgp9SCoBBgw28Udw0OEs8wkTVCBF3a0x397n07iSZ5MqOHtTPcDyU2mjeKAAx4wyL58X2SKgmHxxFEzYU2RfW+Pvx4vtMBqUHdFEKcRT8JS1lYqD+ovzGXCYuEd3S6fCqWUkvmPNuZIxJ1DB52u7yvh6uyQcv8BnvRwkluMLWfnOTn8hMgbx0Ba0zei48R9Fplw+YK1keDtKSjrUjbyy9+LfhpVunI5bX0psMEYALMLY0onEGieZ7bhncC3Ra5HjdK6Qh79j1k3WlUFJv8aTuf1yUWWr45z78po5dcxuX2o6Iap8qt0kUVILVDNNU86PGzxo+o4CBUYm9FTWDpI2IFd7TUP1/bOvaCEhnxgOWrxctao2icvpbsKM72Xb1zz/5SflSQfu+rsnMwMlRyDFHfHBGt3aFEwAQgdlJ6ewJCvzAvix5nlHrvNvMvH/9+Krqm9YXQhxUFhkvm0p9IBDbObS83NI0USKYLdWS9rw8SPKHDShNzUUFKzSm+hLUQhzMAyKhWyxkJUEffEWE13QbRa6+BbTpMIIV8xc6hrv32jfNC2+QXaW+TcRcWU9xxApL4kQGMzH1xBlYQJkTn3nCOBflmfo9zEyFov7LkNMvWaX4EbqqSvNlhudnzdvhMUo14sc 9viFhSaz t5UK57h9VEehjsiYG6WdAEm0ZQi+Z6jhC5agLlhlNvUxPDNJ7cQ7KrK1HBpv12y426MVs2tpALv7ccOc/VZyoKZZhgnyFRo9ueNfDhBw1hugJEl0NyuIxHk204ycrwlhVrkUevYdGp/JVYFlp8QhzZpJz5GU/Jf6YAhzESWMaQHeYGQy56FYSzMouVWcptN1ghqJes/h4X0+QFw8UKjITK48lkxx5hpxlTL3WcuE2lIQINq3e+ba0eoxTyr4LzN1efZYU2fGAY9J8dbqkmrNUZsN3J3gG4uQ31alj X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Mon, Apr 28, 2025 at 12:20=E2=80=AFPM Liam R. Howlett wrote: > > * Lorenzo Stoakes [250428 11:28]: > > There is functionality that overlaps the exec and memory mapping > > subsystems. While it properly belongs in mm, it is important that exec > > maintainers maintain oversight of this functionality correctly. > > > > We can establish both goals by adding a new mm/vma_exec.c file which > > contains these 'glue' functions, and have fs/exec.c import them. > > > > As a part of this change, to ensure that proper oversight is achieved, = add > > the file to both the MEMORY MAPPING and EXEC & BINFMT API, ELF sections= . > > > > scripts/get_maintainer.pl can correctly handle files in multiple entrie= s > > and this neatly handles the cross-over. > > > > Signed-off-by: Lorenzo Stoakes > > Reviewed-by: Liam R. 
Reviewed-by: Suren Baghdasaryan

> > ---
> >  MAINTAINERS                      |  2 +
> >  fs/exec.c                        |  3 ++
> >  include/linux/mm.h               |  1 -
> >  mm/Makefile                      |  2 +-
> >  mm/mmap.c                        | 83 ----------------------------
> >  mm/vma.h                         |  5 ++
> >  mm/vma_exec.c                    | 92 ++++++++++++++++++++++++++++++++
> >  tools/testing/vma/Makefile       |  2 +-
> >  tools/testing/vma/vma.c          |  1 +
> >  tools/testing/vma/vma_internal.h | 40 ++++++++++++++
> >  10 files changed, 145 insertions(+), 86 deletions(-)
> >  create mode 100644 mm/vma_exec.c
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index f5ee0390cdee..1ee1c22e6e36 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -8830,6 +8830,7 @@ F: include/linux/elf.h
> >  F: include/uapi/linux/auxvec.h
> >  F: include/uapi/linux/binfmts.h
> >  F: include/uapi/linux/elf.h
> > +F: mm/vma_exec.c
> >  F: tools/testing/selftests/exec/
> >  N: asm/elf.h
> >  N: binfmt
> > @@ -15654,6 +15655,7 @@ F: mm/mremap.c
> >  F: mm/mseal.c
> >  F: mm/vma.c
> >  F: mm/vma.h
> > +F: mm/vma_exec.c
> >  F: mm/vma_internal.h
> >  F: tools/testing/selftests/mm/merge.c
> >  F: tools/testing/vma/
> > diff --git a/fs/exec.c b/fs/exec.c
> > index 8e4ea5f1e64c..477bc3f2e966 100644
> > --- a/fs/exec.c
> > +++ b/fs/exec.c
> > @@ -78,6 +78,9 @@
> >
> >  #include <trace/events/sched.h>
> >
> > +/* For vma exec functions. */
> > +#include "../mm/internal.h"
> > +
> >  static int bprm_creds_from_file(struct linux_binprm *bprm);
> >
> >  int suid_dumpable = 0;
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 21dd110b6655..4fc361df9ad7 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -3223,7 +3223,6 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
> >  extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
> >  extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
> >  extern void exit_mmap(struct mm_struct *);
> > -int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
> >  bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
> >  		unsigned long addr, bool write);
> >
> > diff --git a/mm/Makefile b/mm/Makefile
> > index 9d7e5b5bb694..15a901bb431a 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -37,7 +37,7 @@ mmu-y := nommu.o
> >  mmu-$(CONFIG_MMU) := highmem.o memory.o mincore.o \
> >  			mlock.o mmap.o mmu_gather.o mprotect.o mremap.o \
> >  			msync.o page_vma_mapped.o pagewalk.o \
> > -			pgtable-generic.o rmap.o vmalloc.o vma.o
> > +			pgtable-generic.o rmap.o vmalloc.o vma.o vma_exec.o
> >
> >
> >  ifdef CONFIG_CROSS_MEMORY_ATTACH
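A side note while reading the Makefile change above: vma_exec.o is only
added to mmu-$(CONFIG_MMU), so the file is not built at all for nommu
configurations, which is presumably why the declaration below in
mm/vma.h gains a CONFIG_MMU guard. To illustrate the constraint this
places on any future caller (a hypothetical sketch, not code from this
patch; the one real caller sits in fs/exec.c on what I believe is an
MMU-only path):

        #ifdef CONFIG_MMU
                /* Defined in mm/vma_exec.c, built only with an MMU. */
                ret = relocate_vma_down(vma, stack_shift);
        #else
                /* No nommu implementation exists; compile such callers out. */
                ret = -ENOSYS;
        #endif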
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index bd210aaf7ebd..1794bf6f4dc0 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -1717,89 +1717,6 @@ static int __meminit init_reserve_notifier(void)
> >  }
> >  subsys_initcall(init_reserve_notifier);
> >
> > -/*
> > - * Relocate a VMA downwards by shift bytes. There cannot be any VMAs between
> > - * this VMA and its relocated range, which will now reside at [vma->vm_start -
> > - * shift, vma->vm_end - shift).
> > - *
> > - * This function is almost certainly NOT what you want for anything other than
> > - * early executable temporary stack relocation.
> > - */
> > -int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
> > -{
> > -        /*
> > -         * The process proceeds as follows:
> > -         *
> > -         * 1) Use shift to calculate the new vma endpoints.
> > -         * 2) Extend vma to cover both the old and new ranges. This ensures the
> > -         *    arguments passed to subsequent functions are consistent.
> > -         * 3) Move vma's page tables to the new range.
> > -         * 4) Free up any cleared pgd range.
> > -         * 5) Shrink the vma to cover only the new range.
> > -         */
> > -
> > -        struct mm_struct *mm = vma->vm_mm;
> > -        unsigned long old_start = vma->vm_start;
> > -        unsigned long old_end = vma->vm_end;
> > -        unsigned long length = old_end - old_start;
> > -        unsigned long new_start = old_start - shift;
> > -        unsigned long new_end = old_end - shift;
> > -        VMA_ITERATOR(vmi, mm, new_start);
> > -        VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff);
> > -        struct vm_area_struct *next;
> > -        struct mmu_gather tlb;
> > -        PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
> > -
> > -        BUG_ON(new_start > new_end);
> > -
> > -        /*
> > -         * ensure there are no vmas between where we want to go
> > -         * and where we are
> > -         */
> > -        if (vma != vma_next(&vmi))
> > -                return -EFAULT;
> > -
> > -        vma_iter_prev_range(&vmi);
> > -        /*
> > -         * cover the whole range: [new_start, old_end)
> > -         */
> > -        vmg.middle = vma;
> > -        if (vma_expand(&vmg))
> > -                return -ENOMEM;
> > -
> > -        /*
> > -         * move the page tables downwards, on failure we rely on
> > -         * process cleanup to remove whatever mess we made.
> > -         */
> > -        pmc.for_stack = true;
> > -        if (length != move_page_tables(&pmc))
> > -                return -ENOMEM;
> > -
> > -        tlb_gather_mmu(&tlb, mm);
> > -        next = vma_next(&vmi);
> > -        if (new_end > old_start) {
> > -                /*
> > -                 * when the old and new regions overlap clear from new_end.
> > -                 */
> > -                free_pgd_range(&tlb, new_end, old_end, new_end,
> > -                        next ? next->vm_start : USER_PGTABLES_CEILING);
> > -        } else {
> > -                /*
> > -                 * otherwise, clean from old_start; this is done to not touch
> > -                 * the address space in [new_end, old_start) some architectures
> > -                 * have constraints on va-space that make this illegal (IA64) -
> > -                 * for the others its just a little faster.
> > -                 */
> > -                free_pgd_range(&tlb, old_start, old_end, new_end,
> > -                        next ? next->vm_start : USER_PGTABLES_CEILING);
> > -        }
> > -        tlb_finish_mmu(&tlb);
> > -
> > -        vma_prev(&vmi);
> > -        /* Shrink the vma to just the new range */
> > -        return vma_shrink(&vmi, vma, new_start, new_end, vma->vm_pgoff);
> > -}
> > -
> >  #ifdef CONFIG_MMU
> >  /*
> >   * Obtain a read lock on mm->mmap_lock, if the specified address is below the
> > diff --git a/mm/vma.h b/mm/vma.h
> > index 149926e8a6d1..1ce3e18f01b7 100644
> > --- a/mm/vma.h
> > +++ b/mm/vma.h
> > @@ -548,4 +548,9 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address);
> >
> >  int __vm_munmap(unsigned long start, size_t len, bool unlock);
> >
> > +/* vma_exec.h */

nit: Did you mean vma_exec.c ?

> > +#ifdef CONFIG_MMU
> > +int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
> > +#endif
> > +
> >  #endif /* __MM_VMA_H */
> > diff --git a/mm/vma_exec.c b/mm/vma_exec.c
> > new file mode 100644
> > index 000000000000..6736ae37f748
> > --- /dev/null
> > +++ b/mm/vma_exec.c
> > @@ -0,0 +1,92 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +
> > +/*
> > + * Functions explicitly implemented for exec functionality which however are
> > + * explicitly VMA-only logic.
> > + */
> > +
> > +#include "vma_internal.h"
> > +#include "vma.h"
> > +
> > +/*
> > + * Relocate a VMA downwards by shift bytes. There cannot be any VMAs between
> > + * this VMA and its relocated range, which will now reside at [vma->vm_start -
> > + * shift, vma->vm_end - shift).
> > + *
> > + * This function is almost certainly NOT what you want for anything other than
> > + * early executable temporary stack relocation.
> > + */
> > +int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
> > +{
> > +        /*
> > +         * The process proceeds as follows:
> > +         *
> > +         * 1) Use shift to calculate the new vma endpoints.
> > +         * 2) Extend vma to cover both the old and new ranges. This ensures the
> > +         *    arguments passed to subsequent functions are consistent.
> > +         * 3) Move vma's page tables to the new range.
> > +         * 4) Free up any cleared pgd range.
> > +         * 5) Shrink the vma to cover only the new range.
> > +         */
> > +
> > +        struct mm_struct *mm = vma->vm_mm;
> > +        unsigned long old_start = vma->vm_start;
> > +        unsigned long old_end = vma->vm_end;
> > +        unsigned long length = old_end - old_start;
> > +        unsigned long new_start = old_start - shift;
> > +        unsigned long new_end = old_end - shift;
> > +        VMA_ITERATOR(vmi, mm, new_start);
> > +        VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff);
> > +        struct vm_area_struct *next;
> > +        struct mmu_gather tlb;
> > +        PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
> > +
> > +        BUG_ON(new_start > new_end);
> > +
> > +        /*
> > +         * ensure there are no vmas between where we want to go
> > +         * and where we are
> > +         */
> > +        if (vma != vma_next(&vmi))
> > +                return -EFAULT;
> > +
> > +        vma_iter_prev_range(&vmi);
> > +        /*
> > +         * cover the whole range: [new_start, old_end)
> > +         */
> > +        vmg.middle = vma;
> > +        if (vma_expand(&vmg))
> > +                return -ENOMEM;
> > +
> > +        /*
> > +         * move the page tables downwards, on failure we rely on
> > +         * process cleanup to remove whatever mess we made.
> > +         */
> > +        pmc.for_stack = true;
> > +        if (length != move_page_tables(&pmc))
> > +                return -ENOMEM;
> > +
> > +        tlb_gather_mmu(&tlb, mm);
> > +        next = vma_next(&vmi);
> > +        if (new_end > old_start) {
> > +                /*
> > +                 * when the old and new regions overlap clear from new_end.
> > +                 */
> > +                free_pgd_range(&tlb, new_end, old_end, new_end,
> > +                        next ? next->vm_start : USER_PGTABLES_CEILING);
> > +        } else {
> > +                /*
> > +                 * otherwise, clean from old_start; this is done to not touch
> > +                 * the address space in [new_end, old_start) some architectures
> > +                 * have constraints on va-space that make this illegal (IA64) -
> > +                 * for the others its just a little faster.
> > +                 */
> > +                free_pgd_range(&tlb, old_start, old_end, new_end,
> > +                        next ? next->vm_start : USER_PGTABLES_CEILING);
> > +        }
> > +        tlb_finish_mmu(&tlb);
> > +
> > +        vma_prev(&vmi);
> > +        /* Shrink the vma to just the new range */
> > +        return vma_shrink(&vmi, vma, new_start, new_end, vma->vm_pgoff);
> > +}
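As an aside for anyone who has not read fs/exec.c recently: the "early
executable temporary stack relocation" the comment refers to is the
exec-time sequence where the new process's arguments are first copied
into a stack VMA at a temporary location, and the whole VMA is then
shifted down to its final address once the stack limit is known. A
condensed, hypothetical sketch of the call pattern (the real caller is
setup_arg_pages(); the function name here is made up):

        static int move_temp_stack(struct vm_area_struct *stack_vma,
                                   unsigned long stack_shift)
        {
                if (!stack_shift)
                        return 0;       /* Stack is already in place. */

                /*
                 * Shift [vm_start, vm_end) down to [vm_start - stack_shift,
                 * vm_end - stack_shift). Fails with -EFAULT if another VMA
                 * occupies the destination range, or -ENOMEM if expanding
                 * the VMA or moving the page tables fails.
                 */
                return relocate_vma_down(stack_vma, stack_shift);
        }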
> > diff --git a/tools/testing/vma/Makefile b/tools/testing/vma/Makefile
> > index 860fd2311dcc..624040fcf193 100644
> > --- a/tools/testing/vma/Makefile
> > +++ b/tools/testing/vma/Makefile
> > @@ -9,7 +9,7 @@ include ../shared/shared.mk
> >  OFILES = $(SHARED_OFILES) vma.o maple-shim.o
> >  TARGETS = vma
> >
> > -vma.o: vma.c vma_internal.h ../../../mm/vma.c ../../../mm/vma.h
> > +vma.o: vma.c vma_internal.h ../../../mm/vma.c ../../../mm/vma_exec.c ../../../mm/vma.h
> >
> >  vma: $(OFILES)
> >  	$(CC) $(CFLAGS) -o $@ $(OFILES) $(LDLIBS)
> > diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
> > index 7cfd6e31db10..5832ae5d797d 100644
> > --- a/tools/testing/vma/vma.c
> > +++ b/tools/testing/vma/vma.c
> > @@ -28,6 +28,7 @@ unsigned long stack_guard_gap = 256UL<<PAGE_SHIFT;
> >  /*
> >   * Directly import the VMA implementation here. Our vma_internal.h wrapper
> >   * provides userland-equivalent functionality for everything vma.c uses.
> >   */
> > +#include "../../../mm/vma_exec.c"
> >  #include "../../../mm/vma.c"
> >
> >  const struct vm_operations_struct vma_dummy_vm_ops;
> > diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> > index 572ab2cea763..0df19ca0000a 100644
> > --- a/tools/testing/vma/vma_internal.h
> > +++ b/tools/testing/vma/vma_internal.h
> > @@ -421,6 +421,28 @@ struct vm_unmapped_area_info {
> >  	unsigned long start_gap;
> >  };
> >
> > +struct pagetable_move_control {
> > +        struct vm_area_struct *old; /* Source VMA. */
> > +        struct vm_area_struct *new; /* Destination VMA. */
> > +        unsigned long old_addr; /* Address from which the move begins. */
> > +        unsigned long old_end; /* Exclusive address at which old range ends. */
> > +        unsigned long new_addr; /* Address to move page tables to. */
> > +        unsigned long len_in; /* Bytes to remap specified by user. */
> > +
> > +        bool need_rmap_locks; /* Do rmap locks need to be taken? */
> > +        bool for_stack; /* Is this an early temp stack being moved? */
> > +};
> > +
> > +#define PAGETABLE_MOVE(name, old_, new_, old_addr_, new_addr_, len_) \
> > +        struct pagetable_move_control name = {                        \
> > +                .old = old_,                                          \
> > +                .new = new_,                                          \
> > +                .old_addr = old_addr_,                                \
> > +                .old_end = (old_addr_) + (len_),                      \
> > +                .new_addr = new_addr_,                                \
> > +                .len_in = len_,                                       \
> > +        }
> > +
> >  static inline void vma_iter_invalidate(struct vma_iterator *vmi)
> >  {
> >  	mas_pause(&vmi->mas);
> >  }
> >
> > @@ -1240,4 +1262,22 @@ static inline int mapping_map_writable(struct address_space *mapping)
> >  	return 0;
> >  }
> >
> > +static inline unsigned long move_page_tables(struct pagetable_move_control *pmc)
> > +{
> > +        (void)pmc;
> > +
> > +        return 0;
> > +}
> > +
> > +static inline void free_pgd_range(struct mmu_gather *tlb,
> > +                        unsigned long addr, unsigned long end,
> > +                        unsigned long floor, unsigned long ceiling)
> > +{
> > +        (void)tlb;
> > +        (void)addr;
> > +        (void)end;
> > +        (void)floor;
> > +        (void)ceiling;
> > +}
> > +
> >  #endif /* __MM_VMA_INTERNAL_H */
> > --
> > 2.49.0
> >
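One more note on the vma_internal.h stubs, mainly to confirm the
intent: since the stubbed move_page_tables() always returns 0, a
userland test that actually invoked relocate_vma_down() on a non-empty
VMA would, assuming the vma_expand() step succeeds, take the -ENOMEM
exit at the length check. So as I read it, the stubs exist to let
vma_exec.c compile and link into the test binary rather than to
exercise the relocation path itself. A hypothetical illustration (no
such test is added by this patch):

        /* length != 0 while the stub returns 0, so we expect -ENOMEM. */
        ASSERT_EQ(relocate_vma_down(vma, PAGE_SIZE), -ENOMEM);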