* [PATCH] mm: Rename vm_area_struct to mm_area
@ 2025-04-01 12:25 Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2025-04-01 12:25 UTC
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	Liam R . Howlett, Lorenzo Stoakes, Vlastimil Babka, Jann Horn,
	linux-mm

We don't need to put "_struct" on the end of the name.  It's obviously
a struct.  Just look at the word "struct" before the name.  The acronym
"vm" tends to mean "virtual machine" rather than "virtual memory" these
days, so use "mm_area" instead of "vm_area".  I decided not to rename
the variables (typically "vma") of type "struct mm_area *" as that would
be a fair bit more disruptive.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
Generated against next-20250401.
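
For reviewers skimming the diffstat: every hunk is the same mechanical
substitution of the type name.  A minimal before/after sketch of the
pattern (the helper below is illustrative only, not a function touched
by this patch):

	/* Before: redundant "_struct" suffix, ambiguous "vm" prefix. */
	static bool vma_is_pfnmap(struct vm_area_struct *vma)
	{
		return vma->vm_flags & VM_PFNMAP;
	}

	/* After: only the type name changes; the conventional "vma"
	 * variable name is kept, as noted above.
	 */
	static bool vma_is_pfnmap(struct mm_area *vma)
	{
		return vma->vm_flags & VM_PFNMAP;
	}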

 Documentation/bpf/prog_lsm.rst                |   6 +-
 Documentation/core-api/cachetlb.rst           |  18 +-
 Documentation/core-api/dma-api.rst            |   4 +-
 Documentation/driver-api/uio-howto.rst        |   2 +-
 Documentation/driver-api/vfio.rst             |   2 +-
 Documentation/filesystems/locking.rst         |  12 +-
 Documentation/filesystems/proc.rst            |   2 +-
 Documentation/filesystems/vfs.rst             |   2 +-
 Documentation/gpu/drm-mm.rst                  |   4 +-
 Documentation/mm/hmm.rst                      |   2 +-
 Documentation/mm/hugetlbfs_reserv.rst         |  12 +-
 Documentation/mm/process_addrs.rst            |   6 +-
 .../translations/zh_CN/core-api/cachetlb.rst  |  18 +-
 Documentation/translations/zh_CN/mm/hmm.rst   |   2 +-
 .../zh_CN/mm/hugetlbfs_reserv.rst             |  12 +-
 .../userspace-api/media/conf_nitpick.py       |   2 +-
 arch/alpha/include/asm/cacheflush.h           |   6 +-
 arch/alpha/include/asm/machvec.h              |   2 +-
 arch/alpha/include/asm/pci.h                  |   2 +-
 arch/alpha/include/asm/pgtable.h              |   6 +-
 arch/alpha/include/asm/tlbflush.h             |  10 +-
 arch/alpha/kernel/pci-sysfs.c                 |  16 +-
 arch/alpha/kernel/smp.c                       |   8 +-
 arch/alpha/mm/fault.c                         |   2 +-
 arch/arc/include/asm/hugepage.h               |   4 +-
 arch/arc/include/asm/page.h                   |   4 +-
 arch/arc/include/asm/pgtable-bits-arcv2.h     |   2 +-
 arch/arc/include/asm/tlbflush.h               |  12 +-
 arch/arc/kernel/arc_hostlink.c                |   2 +-
 arch/arc/kernel/troubleshoot.c                |   2 +-
 arch/arc/mm/cache.c                           |   2 +-
 arch/arc/mm/fault.c                           |   2 +-
 arch/arc/mm/mmap.c                            |   2 +-
 arch/arc/mm/tlb.c                             |  20 +-
 arch/arm/include/asm/cacheflush.h             |  14 +-
 arch/arm/include/asm/page.h                   |  20 +-
 arch/arm/include/asm/tlbflush.h               |  28 +-
 arch/arm/kernel/asm-offsets.c                 |   4 +-
 arch/arm/kernel/process.c                     |  10 +-
 arch/arm/kernel/smp_tlb.c                     |   6 +-
 arch/arm/kernel/vdso.c                        |   4 +-
 arch/arm/mach-rpc/ecard.c                     |   2 +-
 arch/arm/mm/cache-v6.S                        |   2 +-
 arch/arm/mm/cache-v7.S                        |   2 +-
 arch/arm/mm/cache-v7m.S                       |   2 +-
 arch/arm/mm/copypage-fa.c                     |   2 +-
 arch/arm/mm/copypage-feroceon.c               |   2 +-
 arch/arm/mm/copypage-v4mc.c                   |   2 +-
 arch/arm/mm/copypage-v4wb.c                   |   2 +-
 arch/arm/mm/copypage-v4wt.c                   |   2 +-
 arch/arm/mm/copypage-v6.c                     |   4 +-
 arch/arm/mm/copypage-xsc3.c                   |   2 +-
 arch/arm/mm/copypage-xscale.c                 |   2 +-
 arch/arm/mm/dma-mapping.c                     |   2 +-
 arch/arm/mm/fault-armv.c                      |  10 +-
 arch/arm/mm/fault.c                           |   2 +-
 arch/arm/mm/flush.c                           |  14 +-
 arch/arm/mm/mmap.c                            |   4 +-
 arch/arm/mm/nommu.c                           |   2 +-
 arch/arm/mm/tlb-v6.S                          |   2 +-
 arch/arm/mm/tlb-v7.S                          |   2 +-
 arch/arm/mm/tlb.c                             |  12 +-
 arch/arm/xen/enlighten.c                      |   2 +-
 arch/arm64/include/asm/cacheflush.h           |   2 +-
 arch/arm64/include/asm/hugetlb.h              |  10 +-
 arch/arm64/include/asm/mmu_context.h          |   2 +-
 arch/arm64/include/asm/page.h                 |   6 +-
 arch/arm64/include/asm/pgtable.h              |  38 +--
 arch/arm64/include/asm/pkeys.h                |   4 +-
 arch/arm64/include/asm/tlb.h                  |   2 +-
 arch/arm64/include/asm/tlbflush.h             |   8 +-
 arch/arm64/kernel/mte.c                       |   2 +-
 arch/arm64/kernel/vdso.c                      |   4 +-
 arch/arm64/kvm/mmu.c                          |  10 +-
 arch/arm64/mm/contpte.c                       |  10 +-
 arch/arm64/mm/copypage.c                      |   2 +-
 arch/arm64/mm/fault.c                         |  10 +-
 arch/arm64/mm/flush.c                         |   4 +-
 arch/arm64/mm/hugetlbpage.c                   |  14 +-
 arch/arm64/mm/mmu.c                           |   4 +-
 arch/csky/abiv1/cacheflush.c                  |   4 +-
 arch/csky/abiv1/inc/abi/cacheflush.h          |   4 +-
 arch/csky/abiv1/mmap.c                        |   2 +-
 arch/csky/abiv2/cacheflush.c                  |   2 +-
 arch/csky/include/asm/page.h                  |   2 +-
 arch/csky/include/asm/pgtable.h               |   2 +-
 arch/csky/include/asm/tlbflush.h              |   4 +-
 arch/csky/kernel/vdso.c                       |   2 +-
 arch/csky/mm/fault.c                          |   4 +-
 arch/csky/mm/tlb.c                            |   4 +-
 arch/hexagon/include/asm/cacheflush.h         |   4 +-
 arch/hexagon/include/asm/tlbflush.h           |   4 +-
 arch/hexagon/kernel/vdso.c                    |   4 +-
 arch/hexagon/mm/cache.c                       |   2 +-
 arch/hexagon/mm/vm_fault.c                    |   2 +-
 arch/hexagon/mm/vm_tlb.c                      |   4 +-
 arch/loongarch/include/asm/hugetlb.h          |   4 +-
 arch/loongarch/include/asm/page.h             |   4 +-
 arch/loongarch/include/asm/pgtable.h          |   8 +-
 arch/loongarch/include/asm/tlb.h              |   2 +-
 arch/loongarch/include/asm/tlbflush.h         |   8 +-
 arch/loongarch/kernel/smp.c                   |   6 +-
 arch/loongarch/kernel/vdso.c                  |   4 +-
 arch/loongarch/mm/fault.c                     |   2 +-
 arch/loongarch/mm/hugetlbpage.c               |   2 +-
 arch/loongarch/mm/init.c                      |   2 +-
 arch/loongarch/mm/mmap.c                      |   2 +-
 arch/loongarch/mm/tlb.c                       |   8 +-
 arch/m68k/include/asm/cacheflush_mm.h         |  10 +-
 arch/m68k/include/asm/pgtable_mm.h            |   2 +-
 arch/m68k/include/asm/tlbflush.h              |  12 +-
 arch/m68k/kernel/sys_m68k.c                   |   2 +-
 arch/m68k/mm/cache.c                          |   2 +-
 arch/m68k/mm/fault.c                          |   2 +-
 arch/microblaze/include/asm/cacheflush.h      |   2 +-
 arch/microblaze/include/asm/pgtable.h         |   4 +-
 arch/microblaze/include/asm/tlbflush.h        |   4 +-
 arch/microblaze/mm/fault.c                    |   2 +-
 arch/mips/alchemy/common/setup.c              |   2 +-
 arch/mips/include/asm/cacheflush.h            |  10 +-
 arch/mips/include/asm/hugetlb.h               |   4 +-
 arch/mips/include/asm/page.h                  |   4 +-
 arch/mips/include/asm/pgtable.h               |  14 +-
 arch/mips/include/asm/tlbflush.h              |   8 +-
 arch/mips/kernel/smp.c                        |   6 +-
 arch/mips/kernel/vdso.c                       |   2 +-
 arch/mips/mm/c-octeon.c                       |   6 +-
 arch/mips/mm/c-r3k.c                          |   4 +-
 arch/mips/mm/c-r4k.c                          |  10 +-
 arch/mips/mm/cache.c                          |   4 +-
 arch/mips/mm/fault.c                          |   2 +-
 arch/mips/mm/hugetlbpage.c                    |   2 +-
 arch/mips/mm/init.c                           |   6 +-
 arch/mips/mm/mmap.c                           |   2 +-
 arch/mips/mm/tlb-r3k.c                        |   6 +-
 arch/mips/mm/tlb-r4k.c                        |   8 +-
 arch/mips/vdso/genvdso.c                      |   2 +-
 arch/nios2/include/asm/cacheflush.h           |  10 +-
 arch/nios2/include/asm/pgtable.h              |   2 +-
 arch/nios2/include/asm/tlbflush.h             |   6 +-
 arch/nios2/kernel/sys_nios2.c                 |   2 +-
 arch/nios2/mm/cacheflush.c                    |  14 +-
 arch/nios2/mm/fault.c                         |   2 +-
 arch/nios2/mm/init.c                          |   4 +-
 arch/nios2/mm/tlb.c                           |   4 +-
 arch/openrisc/include/asm/pgtable.h           |   8 +-
 arch/openrisc/include/asm/tlbflush.h          |   8 +-
 arch/openrisc/kernel/smp.c                    |   4 +-
 arch/openrisc/mm/cache.c                      |   2 +-
 arch/openrisc/mm/fault.c                      |   2 +-
 arch/openrisc/mm/tlb.c                        |   4 +-
 arch/parisc/include/asm/cacheflush.h          |  12 +-
 arch/parisc/include/asm/hugetlb.h             |   4 +-
 arch/parisc/include/asm/page.h                |   4 +-
 arch/parisc/include/asm/pgtable.h             |   6 +-
 arch/parisc/include/asm/tlbflush.h            |   2 +-
 arch/parisc/kernel/cache.c                    |  30 +-
 arch/parisc/kernel/sys_parisc.c               |   2 +-
 arch/parisc/kernel/traps.c                    |   2 +-
 arch/parisc/kernel/vdso.c                     |   4 +-
 arch/parisc/mm/fault.c                        |   6 +-
 arch/parisc/mm/hugetlbpage.c                  |   4 +-
 arch/powerpc/include/asm/book3s/32/pgtable.h  |   2 +-
 arch/powerpc/include/asm/book3s/32/tlbflush.h |   8 +-
 arch/powerpc/include/asm/book3s/64/hash-4k.h  |   2 +-
 arch/powerpc/include/asm/book3s/64/hash-64k.h |   6 +-
 arch/powerpc/include/asm/book3s/64/hugetlb.h  |  14 +-
 .../include/asm/book3s/64/pgtable-64k.h       |   2 +-
 arch/powerpc/include/asm/book3s/64/pgtable.h  |  30 +-
 arch/powerpc/include/asm/book3s/64/radix.h    |   6 +-
 .../include/asm/book3s/64/tlbflush-radix.h    |  14 +-
 arch/powerpc/include/asm/book3s/64/tlbflush.h |  14 +-
 arch/powerpc/include/asm/cacheflush.h         |   2 +-
 arch/powerpc/include/asm/hugetlb.h            |   6 +-
 arch/powerpc/include/asm/mmu_context.h        |   4 +-
 .../include/asm/nohash/32/hugetlb-8xx.h       |   2 +-
 arch/powerpc/include/asm/nohash/32/pte-8xx.h  |   2 +-
 .../powerpc/include/asm/nohash/hugetlb-e500.h |   2 +-
 arch/powerpc/include/asm/nohash/pgtable.h     |   4 +-
 arch/powerpc/include/asm/nohash/tlbflush.h    |  10 +-
 arch/powerpc/include/asm/page.h               |   2 +-
 arch/powerpc/include/asm/pci.h                |   4 +-
 arch/powerpc/include/asm/pgtable.h            |   6 +-
 arch/powerpc/include/asm/pkeys.h              |   6 +-
 arch/powerpc/include/asm/vas.h                |   2 +-
 arch/powerpc/kernel/pci-common.c              |   4 +-
 arch/powerpc/kernel/proc_powerpc.c            |   2 +-
 arch/powerpc/kernel/vdso.c                    |  10 +-
 arch/powerpc/kvm/book3s_64_vio.c              |   2 +-
 arch/powerpc/kvm/book3s_hv.c                  |   2 +-
 arch/powerpc/kvm/book3s_hv_uvmem.c            |  16 +-
 arch/powerpc/kvm/book3s_xive_native.c         |   6 +-
 arch/powerpc/mm/book3s32/mmu.c                |   2 +-
 arch/powerpc/mm/book3s32/tlb.c                |   4 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c       |   2 +-
 arch/powerpc/mm/book3s64/hash_utils.c         |   2 +-
 arch/powerpc/mm/book3s64/hugetlbpage.c        |   4 +-
 arch/powerpc/mm/book3s64/iommu_api.c          |   2 +-
 arch/powerpc/mm/book3s64/pgtable.c            |  22 +-
 arch/powerpc/mm/book3s64/pkeys.c              |   6 +-
 arch/powerpc/mm/book3s64/radix_hugetlbpage.c  |   8 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c      |   6 +-
 arch/powerpc/mm/book3s64/radix_tlb.c          |  10 +-
 arch/powerpc/mm/book3s64/slice.c              |   4 +-
 arch/powerpc/mm/book3s64/subpage_prot.c       |   4 +-
 arch/powerpc/mm/cacheflush.c                  |   2 +-
 arch/powerpc/mm/copro_fault.c                 |   2 +-
 arch/powerpc/mm/fault.c                       |  12 +-
 arch/powerpc/mm/hugetlbpage.c                 |   2 +-
 arch/powerpc/mm/nohash/e500_hugetlbpage.c     |   6 +-
 arch/powerpc/mm/nohash/tlb.c                  |   6 +-
 arch/powerpc/mm/pgtable.c                     |   6 +-
 arch/powerpc/platforms/book3s/vas-api.c       |   6 +-
 arch/powerpc/platforms/cell/spufs/file.c      |  18 +-
 arch/powerpc/platforms/powernv/memtrace.c     |   2 +-
 arch/powerpc/platforms/powernv/opal-prd.c     |   2 +-
 arch/powerpc/platforms/pseries/vas.c          |   2 +-
 arch/riscv/include/asm/hugetlb.h              |   4 +-
 arch/riscv/include/asm/pgtable.h              |  18 +-
 arch/riscv/include/asm/tlbflush.h             |   6 +-
 arch/riscv/kernel/vdso.c                      |   2 +-
 arch/riscv/kvm/mmu.c                          |   4 +-
 arch/riscv/mm/fault.c                         |   4 +-
 arch/riscv/mm/hugetlbpage.c                   |  10 +-
 arch/riscv/mm/pgtable.c                       |   6 +-
 arch/riscv/mm/tlbflush.c                      |   6 +-
 arch/s390/include/asm/hugetlb.h               |   4 +-
 arch/s390/include/asm/pgtable.h               |  28 +-
 arch/s390/include/asm/tlbflush.h              |   2 +-
 arch/s390/kernel/crash_dump.c                 |   6 +-
 arch/s390/kernel/uv.c                         |   2 +-
 arch/s390/kernel/vdso.c                       |   4 +-
 arch/s390/mm/fault.c                          |   4 +-
 arch/s390/mm/gmap.c                           |  10 +-
 arch/s390/mm/hugetlbpage.c                    |   2 +-
 arch/s390/mm/mmap.c                           |   4 +-
 arch/s390/mm/pgtable.c                        |  12 +-
 arch/s390/pci/pci_mmio.c                      |   4 +-
 arch/sh/include/asm/cacheflush.h              |  14 +-
 arch/sh/include/asm/hugetlb.h                 |   2 +-
 arch/sh/include/asm/page.h                    |   4 +-
 arch/sh/include/asm/pgtable.h                 |   8 +-
 arch/sh/include/asm/tlb.h                     |   4 +-
 arch/sh/include/asm/tlbflush.h                |   8 +-
 arch/sh/kernel/smp.c                          |   6 +-
 arch/sh/kernel/sys_sh.c                       |   2 +-
 arch/sh/kernel/vsyscall/vsyscall.c            |   4 +-
 arch/sh/mm/cache-sh4.c                        |   4 +-
 arch/sh/mm/cache.c                            |  14 +-
 arch/sh/mm/fault.c                            |   4 +-
 arch/sh/mm/hugetlbpage.c                      |   2 +-
 arch/sh/mm/mmap.c                             |   4 +-
 arch/sh/mm/nommu.c                            |   6 +-
 arch/sh/mm/tlb-pteaex.c                       |   2 +-
 arch/sh/mm/tlb-sh3.c                          |   2 +-
 arch/sh/mm/tlb-sh4.c                          |   2 +-
 arch/sh/mm/tlb-urb.c                          |   2 +-
 arch/sh/mm/tlbflush_32.c                      |   4 +-
 arch/sparc/include/asm/cacheflush_64.h        |   2 +-
 arch/sparc/include/asm/cachetlb_32.h          |  10 +-
 arch/sparc/include/asm/hugetlb.h              |   4 +-
 arch/sparc/include/asm/leon.h                 |   4 +-
 arch/sparc/include/asm/page_64.h              |   4 +-
 arch/sparc/include/asm/pgtable_32.h           |   6 +-
 arch/sparc/include/asm/pgtable_64.h           |  20 +-
 arch/sparc/include/asm/tlbflush_64.h          |   4 +-
 arch/sparc/kernel/adi_64.c                    |   8 +-
 arch/sparc/kernel/asm-offsets.c               |   2 +-
 arch/sparc/kernel/pci.c                       |   2 +-
 arch/sparc/kernel/ptrace_64.c                 |   2 +-
 arch/sparc/kernel/sys_sparc_64.c              |   4 +-
 arch/sparc/mm/fault_32.c                      |   4 +-
 arch/sparc/mm/fault_64.c                      |   2 +-
 arch/sparc/mm/hugetlbpage.c                   |   2 +-
 arch/sparc/mm/init_64.c                       |   6 +-
 arch/sparc/mm/leon_mm.c                       |  10 +-
 arch/sparc/mm/srmmu.c                         |  54 +--
 arch/sparc/mm/tlb.c                           |   4 +-
 arch/sparc/vdso/vma.c                         |   2 +-
 arch/um/drivers/mmapper_kern.c                |   2 +-
 arch/um/include/asm/tlbflush.h                |   4 +-
 arch/um/kernel/tlb.c                          |   2 +-
 arch/um/kernel/trap.c                         |   2 +-
 arch/x86/entry/vdso/vma.c                     |  12 +-
 arch/x86/entry/vsyscall/vsyscall_64.c         |   8 +-
 arch/x86/include/asm/mmu_context.h            |   2 +-
 arch/x86/include/asm/paravirt.h               |   4 +-
 arch/x86/include/asm/paravirt_types.h         |   6 +-
 arch/x86/include/asm/pgtable-3level.h         |   2 +-
 arch/x86/include/asm/pgtable.h                |  46 +--
 arch/x86/include/asm/pgtable_32.h             |   2 +-
 arch/x86/include/asm/pkeys.h                  |   6 +-
 arch/x86/include/asm/tlbflush.h               |   2 +-
 arch/x86/kernel/cpu/resctrl/pseudo_lock.c     |   4 +-
 arch/x86/kernel/cpu/sgx/driver.c              |   2 +-
 arch/x86/kernel/cpu/sgx/encl.c                |  14 +-
 arch/x86/kernel/cpu/sgx/encl.h                |   4 +-
 arch/x86/kernel/cpu/sgx/ioctl.c               |   2 +-
 arch/x86/kernel/cpu/sgx/virt.c                |   6 +-
 arch/x86/kernel/shstk.c                       |   2 +-
 arch/x86/kernel/sys_x86_64.c                  |   4 +-
 arch/x86/mm/fault.c                           |  10 +-
 arch/x86/mm/pat/memtype.c                     |  18 +-
 arch/x86/mm/pgtable.c                         |  30 +-
 arch/x86/mm/pkeys.c                           |   4 +-
 arch/x86/um/mem_32.c                          |   6 +-
 arch/x86/um/mem_64.c                          |   2 +-
 arch/x86/um/vdso/vma.c                        |   2 +-
 arch/x86/xen/mmu.c                            |   2 +-
 arch/x86/xen/mmu_pv.c                         |   6 +-
 arch/xtensa/include/asm/cacheflush.h          |  12 +-
 arch/xtensa/include/asm/page.h                |   4 +-
 arch/xtensa/include/asm/pgtable.h             |   8 +-
 arch/xtensa/include/asm/tlbflush.h            |   8 +-
 arch/xtensa/kernel/pci.c                      |   2 +-
 arch/xtensa/kernel/smp.c                      |  10 +-
 arch/xtensa/kernel/syscall.c                  |   2 +-
 arch/xtensa/mm/cache.c                        |  12 +-
 arch/xtensa/mm/fault.c                        |   2 +-
 arch/xtensa/mm/tlb.c                          |   6 +-
 block/fops.c                                  |   2 +-
 drivers/accel/amdxdna/amdxdna_gem.c           |   6 +-
 .../accel/habanalabs/common/command_buffer.c  |   2 +-
 drivers/accel/habanalabs/common/device.c      |   6 +-
 drivers/accel/habanalabs/common/habanalabs.h  |  14 +-
 drivers/accel/habanalabs/common/memory.c      |   8 +-
 drivers/accel/habanalabs/common/memory_mgr.c  |   4 +-
 drivers/accel/habanalabs/gaudi/gaudi.c        |   4 +-
 drivers/accel/habanalabs/gaudi2/gaudi2.c      |   4 +-
 drivers/accel/habanalabs/goya/goya.c          |   4 +-
 drivers/accel/qaic/qaic_data.c                |   2 +-
 drivers/acpi/pfr_telemetry.c                  |   2 +-
 drivers/android/binder.c                      |   6 +-
 drivers/android/binder_alloc.c                |   6 +-
 drivers/android/binder_alloc.h                |   2 +-
 drivers/auxdisplay/cfag12864bfb.c             |   2 +-
 drivers/auxdisplay/ht16k33.c                  |   2 +-
 drivers/block/ublk_drv.c                      |   2 +-
 drivers/cdx/cdx.c                             |   4 +-
 drivers/char/bsr.c                            |   2 +-
 drivers/char/hpet.c                           |   4 +-
 drivers/char/mem.c                            |   8 +-
 drivers/char/uv_mmtimer.c                     |   4 +-
 drivers/comedi/comedi_fops.c                  |   8 +-
 drivers/crypto/hisilicon/qm.c                 |   2 +-
 drivers/dax/device.c                          |   8 +-
 drivers/dma-buf/dma-buf.c                     |   6 +-
 drivers/dma-buf/heaps/cma_heap.c              |   4 +-
 drivers/dma-buf/heaps/system_heap.c           |   2 +-
 drivers/dma-buf/udmabuf.c                     |   4 +-
 drivers/dma/idxd/cdev.c                       |   4 +-
 drivers/firewire/core-cdev.c                  |   2 +-
 drivers/fpga/dfl-afu-main.c                   |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      |   6 +-
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c     |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_events.c       |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  12 +-
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |   8 +-
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |   2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_svm.c          |  10 +-
 drivers/gpu/drm/armada/armada_gem.c           |   2 +-
 drivers/gpu/drm/drm_fbdev_dma.c               |   2 +-
 drivers/gpu/drm/drm_fbdev_shmem.c             |   2 +-
 drivers/gpu/drm/drm_gem.c                     |   8 +-
 drivers/gpu/drm/drm_gem_dma_helper.c          |   2 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c        |   8 +-
 drivers/gpu/drm/drm_gem_ttm_helper.c          |   2 +-
 drivers/gpu/drm/drm_gpusvm.c                  |  10 +-
 drivers/gpu/drm/drm_prime.c                   |   4 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |   8 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.h         |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |   2 +-
 drivers/gpu/drm/exynos/exynos_drm_fbdev.c     |   2 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c       |   6 +-
 drivers/gpu/drm/gma500/fbdev.c                |   4 +-
 drivers/gpu/drm/gma500/gem.c                  |   2 +-
 drivers/gpu/drm/i915/display/intel_bo.c       |   2 +-
 drivers/gpu/drm/i915/display/intel_bo.h       |   4 +-
 drivers/gpu/drm/i915/display/intel_fbdev.c    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |   2 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  22 +-
 drivers/gpu/drm/i915/gem/i915_gem_mman.h      |   4 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |   8 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |   2 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |   8 +-
 .../gpu/drm/i915/gem/selftests/mock_dmabuf.c  |   2 +-
 drivers/gpu/drm/i915/gvt/kvmgt.c              |   2 +-
 drivers/gpu/drm/i915/i915_mm.c                |   4 +-
 drivers/gpu/drm/i915/i915_mm.h                |   8 +-
 drivers/gpu/drm/imagination/pvr_gem.c         |   2 +-
 drivers/gpu/drm/lima/lima_gem.c               |   2 +-
 drivers/gpu/drm/lima/lima_gem.h               |   2 +-
 drivers/gpu/drm/loongson/lsdc_gem.c           |   2 +-
 drivers/gpu/drm/mediatek/mtk_gem.c            |   4 +-
 drivers/gpu/drm/msm/msm_fbdev.c               |   2 +-
 drivers/gpu/drm/msm/msm_gem.c                 |   4 +-
 drivers/gpu/drm/nouveau/nouveau_dmem.c        |   2 +-
 drivers/gpu/drm/nouveau/nouveau_dmem.h        |   2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c         |   2 +-
 drivers/gpu/drm/nouveau/nouveau_svm.c         |   2 +-
 drivers/gpu/drm/omapdrm/omap_fbdev.c          |   2 +-
 drivers/gpu/drm/omapdrm/omap_gem.c            |   8 +-
 drivers/gpu/drm/omapdrm/omap_gem.h            |   2 +-
 drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c     |   2 +-
 drivers/gpu/drm/panthor/panthor_device.c      |   4 +-
 drivers/gpu/drm/panthor/panthor_device.h      |   2 +-
 drivers/gpu/drm/panthor/panthor_drv.c         |   2 +-
 drivers/gpu/drm/panthor/panthor_gem.c         |   2 +-
 drivers/gpu/drm/radeon/radeon_gem.c           |   2 +-
 drivers/gpu/drm/radeon/radeon_ttm.c           |   2 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c   |   6 +-
 drivers/gpu/drm/tegra/fbdev.c                 |   2 +-
 drivers/gpu/drm/tegra/gem.c                   |   8 +-
 drivers/gpu/drm/tegra/gem.h                   |   4 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c               |  14 +-
 drivers/gpu/drm/vc4/vc4_bo.c                  |   4 +-
 drivers/gpu/drm/virtio/virtgpu_vram.c         |   2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_gem.c           |   2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c    |   4 +-
 drivers/gpu/drm/xe/display/intel_bo.c         |   2 +-
 drivers/gpu/drm/xe/xe_bo.c                    |   2 +-
 drivers/gpu/drm/xe/xe_device.c                |  10 +-
 drivers/gpu/drm/xe/xe_oa.c                    |   2 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c       |   2 +-
 drivers/hsi/clients/cmt_speech.c              |   2 +-
 drivers/hv/mshv_root_main.c                   |   6 +-
 drivers/hwtracing/intel_th/msu.c              |   6 +-
 drivers/hwtracing/stm/core.c                  |   6 +-
 drivers/infiniband/core/core_priv.h           |   4 +-
 drivers/infiniband/core/ib_core_uverbs.c      |   6 +-
 drivers/infiniband/core/uverbs_main.c         |   8 +-
 drivers/infiniband/hw/bnxt_re/ib_verbs.c      |   2 +-
 drivers/infiniband/hw/bnxt_re/ib_verbs.h      |   2 +-
 drivers/infiniband/hw/cxgb4/provider.c        |   2 +-
 drivers/infiniband/hw/efa/efa.h               |   2 +-
 drivers/infiniband/hw/efa/efa_verbs.c         |   4 +-
 drivers/infiniband/hw/erdma/erdma_verbs.c     |   2 +-
 drivers/infiniband/hw/erdma/erdma_verbs.h     |   2 +-
 drivers/infiniband/hw/hfi1/file_ops.c         |   6 +-
 drivers/infiniband/hw/hns/hns_roce_main.c     |   2 +-
 drivers/infiniband/hw/irdma/verbs.c           |   4 +-
 drivers/infiniband/hw/mana/main.c             |   2 +-
 drivers/infiniband/hw/mana/mana_ib.h          |   2 +-
 drivers/infiniband/hw/mlx4/main.c             |   2 +-
 drivers/infiniband/hw/mlx4/mr.c               |   2 +-
 drivers/infiniband/hw/mlx5/main.c             |  10 +-
 drivers/infiniband/hw/mthca/mthca_provider.c  |   2 +-
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.c   |   2 +-
 drivers/infiniband/hw/ocrdma/ocrdma_verbs.h   |   2 +-
 drivers/infiniband/hw/qedr/verbs.c            |   2 +-
 drivers/infiniband/hw/qedr/verbs.h            |   2 +-
 drivers/infiniband/hw/qib/qib_file_ops.c      |  14 +-
 drivers/infiniband/hw/usnic/usnic_ib_verbs.c  |   2 +-
 drivers/infiniband/hw/usnic/usnic_ib_verbs.h  |   2 +-
 .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.c   |   2 +-
 .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.h   |   2 +-
 drivers/infiniband/sw/rdmavt/mmap.c           |   6 +-
 drivers/infiniband/sw/rdmavt/mmap.h           |   2 +-
 drivers/infiniband/sw/rxe/rxe_loc.h           |   2 +-
 drivers/infiniband/sw/rxe/rxe_mmap.c          |   6 +-
 drivers/infiniband/sw/siw/siw_verbs.c         |   2 +-
 drivers/infiniband/sw/siw/siw_verbs.h         |   2 +-
 drivers/iommu/dma-iommu.c                     |   4 +-
 drivers/iommu/iommu-sva.c                     |   2 +-
 .../media/common/videobuf2/videobuf2-core.c   |   2 +-
 .../common/videobuf2/videobuf2-dma-contig.c   |   4 +-
 .../media/common/videobuf2/videobuf2-dma-sg.c |   4 +-
 .../media/common/videobuf2/videobuf2-memops.c |   4 +-
 .../media/common/videobuf2/videobuf2-v4l2.c   |   2 +-
 .../common/videobuf2/videobuf2-vmalloc.c      |   4 +-
 drivers/media/dvb-core/dmxdev.c               |   4 +-
 drivers/media/dvb-core/dvb_vb2.c              |   2 +-
 drivers/media/pci/cx18/cx18-fileops.h         |   2 +-
 drivers/media/pci/intel/ipu6/ipu6-dma.c       |   2 +-
 drivers/media/pci/intel/ipu6/ipu6-dma.h       |   2 +-
 .../platform/samsung/exynos-gsc/gsc-m2m.c     |   2 +-
 .../samsung/s3c-camif/camif-capture.c         |   2 +-
 .../media/platform/samsung/s5p-mfc/s5p_mfc.c  |   2 +-
 drivers/media/platform/ti/omap3isp/ispvideo.c |   2 +-
 drivers/media/usb/uvc/uvc_queue.c             |   2 +-
 drivers/media/usb/uvc/uvc_v4l2.c              |   2 +-
 drivers/media/usb/uvc/uvcvideo.h              |   2 +-
 drivers/media/v4l2-core/v4l2-dev.c            |   2 +-
 drivers/media/v4l2-core/v4l2-mem2mem.c        |   4 +-
 drivers/misc/bcm-vk/bcm_vk_dev.c              |   2 +-
 drivers/misc/fastrpc.c                        |   4 +-
 drivers/misc/genwqe/card_dev.c                |   6 +-
 drivers/misc/ocxl/context.c                   |  12 +-
 drivers/misc/ocxl/file.c                      |   2 +-
 drivers/misc/ocxl/ocxl_internal.h             |   2 +-
 drivers/misc/ocxl/sysfs.c                     |   4 +-
 drivers/misc/open-dice.c                      |   2 +-
 drivers/misc/sgi-gru/grufault.c               |  14 +-
 drivers/misc/sgi-gru/grufile.c                |   6 +-
 drivers/misc/sgi-gru/grumain.c                |  10 +-
 drivers/misc/sgi-gru/grutables.h              |  12 +-
 drivers/misc/uacce/uacce.c                    |   4 +-
 drivers/mtd/mtdchar.c                         |   2 +-
 drivers/pci/mmap.c                            |   4 +-
 drivers/pci/p2pdma.c                          |   2 +-
 drivers/pci/pci-sysfs.c                       |  16 +-
 drivers/pci/pci.h                             |   2 +-
 drivers/pci/proc.c                            |   2 +-
 drivers/platform/x86/intel/pmt/class.c        |   2 +-
 drivers/ptp/ptp_vmclock.c                     |   2 +-
 drivers/rapidio/devices/rio_mport_cdev.c      |   6 +-
 drivers/sbus/char/flash.c                     |   2 +-
 drivers/sbus/char/oradax.c                    |   4 +-
 drivers/scsi/sg.c                             |   4 +-
 drivers/soc/aspeed/aspeed-lpc-ctrl.c          |   2 +-
 drivers/soc/aspeed/aspeed-p2a-ctrl.c          |   2 +-
 drivers/soc/qcom/rmtfs_mem.c                  |   2 +-
 .../staging/media/atomisp/include/hmm/hmm.h   |   2 +-
 .../media/atomisp/include/hmm/hmm_bo.h        |   2 +-
 drivers/staging/media/atomisp/pci/hmm/hmm.c   |   2 +-
 .../staging/media/atomisp/pci/hmm/hmm_bo.c    |   6 +-
 drivers/staging/vme_user/vme.c                |   2 +-
 drivers/staging/vme_user/vme.h                |   2 +-
 drivers/staging/vme_user/vme_user.c           |   8 +-
 drivers/target/target_core_user.c             |   8 +-
 drivers/tee/optee/call.c                      |   2 +-
 drivers/tee/tee_shm.c                         |   2 +-
 drivers/uio/uio.c                             |  10 +-
 drivers/uio/uio_hv_generic.c                  |   2 +-
 drivers/usb/core/devio.c                      |   6 +-
 drivers/usb/gadget/function/uvc_queue.c       |   2 +-
 drivers/usb/gadget/function/uvc_queue.h       |   2 +-
 drivers/usb/gadget/function/uvc_v4l2.c        |   2 +-
 drivers/usb/mon/mon_bin.c                     |   6 +-
 drivers/vdpa/vdpa_user/iova_domain.c          |   2 +-
 drivers/vfio/cdx/main.c                       |   4 +-
 drivers/vfio/fsl-mc/vfio_fsl_mc.c             |   4 +-
 .../vfio/pci/hisilicon/hisi_acc_vfio_pci.c    |   2 +-
 drivers/vfio/pci/nvgrace-gpu/main.c           |   2 +-
 drivers/vfio/pci/vfio_pci_core.c              |   6 +-
 drivers/vfio/platform/vfio_platform_common.c  |   4 +-
 drivers/vfio/platform/vfio_platform_private.h |   2 +-
 drivers/vfio/vfio_iommu_type1.c               |   4 +-
 drivers/vfio/vfio_main.c                      |   2 +-
 drivers/vhost/vdpa.c                          |   6 +-
 drivers/video/fbdev/68328fb.c                 |   4 +-
 drivers/video/fbdev/atafb.c                   |   2 +-
 drivers/video/fbdev/aty/atyfb_base.c          |   4 +-
 drivers/video/fbdev/au1100fb.c                |   2 +-
 drivers/video/fbdev/au1200fb.c                |   2 +-
 drivers/video/fbdev/bw2.c                     |   4 +-
 drivers/video/fbdev/cg14.c                    |   4 +-
 drivers/video/fbdev/cg3.c                     |   4 +-
 drivers/video/fbdev/cg6.c                     |   4 +-
 drivers/video/fbdev/controlfb.c               |   2 +-
 drivers/video/fbdev/core/fb_chrdev.c          |   2 +-
 drivers/video/fbdev/core/fb_defio.c           |   2 +-
 drivers/video/fbdev/core/fb_io_fops.c         |   2 +-
 drivers/video/fbdev/ep93xx-fb.c               |   2 +-
 drivers/video/fbdev/ffb.c                     |   4 +-
 drivers/video/fbdev/gbefb.c                   |   2 +-
 drivers/video/fbdev/leo.c                     |   4 +-
 drivers/video/fbdev/omap/omapfb.h             |   2 +-
 drivers/video/fbdev/omap/omapfb_main.c        |   2 +-
 .../video/fbdev/omap2/omapfb/omapfb-main.c    |   6 +-
 drivers/video/fbdev/p9100.c                   |   4 +-
 drivers/video/fbdev/ps3fb.c                   |   2 +-
 drivers/video/fbdev/pxa3xx-gcu.c              |   2 +-
 drivers/video/fbdev/sa1100fb.c                |   2 +-
 drivers/video/fbdev/sbuslib.c                 |   2 +-
 drivers/video/fbdev/sbuslib.h                 |   4 +-
 drivers/video/fbdev/sh_mobile_lcdcfb.c        |   4 +-
 drivers/video/fbdev/smscufx.c                 |   2 +-
 drivers/video/fbdev/tcx.c                     |   4 +-
 drivers/video/fbdev/udlfb.c                   |   2 +-
 drivers/video/fbdev/vfb.c                     |   4 +-
 drivers/virt/acrn/mm.c                        |   2 +-
 drivers/xen/gntalloc.c                        |   6 +-
 drivers/xen/gntdev.c                          |  10 +-
 drivers/xen/privcmd-buf.c                     |   6 +-
 drivers/xen/privcmd.c                         |  26 +-
 drivers/xen/xenbus/xenbus_dev_backend.c       |   2 +-
 drivers/xen/xenfs/xenstored.c                 |   2 +-
 drivers/xen/xlate_mmu.c                       |   8 +-
 fs/9p/vfs_file.c                              |   4 +-
 fs/afs/file.c                                 |  12 +-
 fs/aio.c                                      |   4 +-
 fs/backing-file.c                             |   2 +-
 fs/bcachefs/fs.c                              |   2 +-
 fs/binfmt_elf.c                               |   2 +-
 fs/btrfs/file.c                               |   2 +-
 fs/buffer.c                                   |   2 +-
 fs/ceph/addr.c                                |   6 +-
 fs/ceph/super.h                               |   2 +-
 fs/coda/file.c                                |   6 +-
 fs/coredump.c                                 |  12 +-
 fs/cramfs/inode.c                             |   4 +-
 fs/dax.c                                      |   8 +-
 fs/ecryptfs/file.c                            |   2 +-
 fs/erofs/data.c                               |   2 +-
 fs/exec.c                                     |  12 +-
 fs/exfat/file.c                               |   4 +-
 fs/ext2/file.c                                |   2 +-
 fs/ext4/file.c                                |   2 +-
 fs/ext4/inode.c                               |   2 +-
 fs/f2fs/file.c                                |   2 +-
 fs/fuse/dax.c                                 |   2 +-
 fs/fuse/file.c                                |   4 +-
 fs/fuse/fuse_i.h                              |   4 +-
 fs/fuse/passthrough.c                         |   2 +-
 fs/gfs2/file.c                                |   2 +-
 fs/hugetlbfs/inode.c                          |  14 +-
 fs/kernfs/file.c                              |   6 +-
 fs/nfs/file.c                                 |   2 +-
 fs/nfs/internal.h                             |   2 +-
 fs/nilfs2/file.c                              |   4 +-
 fs/ntfs3/file.c                               |   2 +-
 fs/ocfs2/mmap.c                               |   4 +-
 fs/ocfs2/mmap.h                               |   2 +-
 fs/orangefs/file.c                            |   2 +-
 fs/overlayfs/file.c                           |   2 +-
 fs/proc/base.c                                |   6 +-
 fs/proc/inode.c                               |   4 +-
 fs/proc/task_mmu.c                            |  88 ++---
 fs/proc/task_nommu.c                          |  12 +-
 fs/proc/vmcore.c                              |  14 +-
 fs/ramfs/file-nommu.c                         |   4 +-
 fs/romfs/mmap-nommu.c                         |   2 +-
 fs/smb/client/cifsfs.h                        |   4 +-
 fs/smb/client/file.c                          |   4 +-
 fs/sysfs/file.c                               |   2 +-
 fs/ubifs/file.c                               |   2 +-
 fs/udf/file.c                                 |   4 +-
 fs/userfaultfd.c                              |  20 +-
 fs/vboxsf/file.c                              |   4 +-
 fs/xfs/xfs_file.c                             |   2 +-
 fs/zonefs/file.c                              |   2 +-
 include/asm-generic/cacheflush.h              |   8 +-
 include/asm-generic/hugetlb.h                 |   4 +-
 include/asm-generic/mm_hooks.h                |   2 +-
 include/asm-generic/tlb.h                     |  12 +-
 include/drm/drm_gem.h                         |  10 +-
 include/drm/drm_gem_dma_helper.h              |   4 +-
 include/drm/drm_gem_shmem_helper.h            |   4 +-
 include/drm/drm_gem_ttm_helper.h              |   2 +-
 include/drm/drm_gem_vram_helper.h             |   2 +-
 include/drm/drm_prime.h                       |   4 +-
 include/drm/ttm/ttm_bo.h                      |   8 +-
 include/linux/backing-file.h                  |   2 +-
 include/linux/binfmts.h                       |   2 +-
 include/linux/bpf.h                           |   2 +-
 include/linux/btf_ids.h                       |   2 +-
 include/linux/buffer_head.h                   |   2 +-
 include/linux/buildid.h                       |   6 +-
 include/linux/cacheflush.h                    |   2 +-
 include/linux/configfs.h                      |   2 +-
 include/linux/crash_dump.h                    |   2 +-
 include/linux/dax.h                           |   4 +-
 include/linux/dma-buf.h                       |   4 +-
 include/linux/dma-map-ops.h                   |  10 +-
 include/linux/dma-mapping.h                   |  12 +-
 include/linux/fb.h                            |   8 +-
 include/linux/fs.h                            |  14 +-
 include/linux/gfp.h                           |   8 +-
 include/linux/highmem.h                       |  10 +-
 include/linux/huge_mm.h                       |  92 +++---
 include/linux/hugetlb.h                       | 132 ++++----
 include/linux/hugetlb_inline.h                |   4 +-
 include/linux/io-mapping.h                    |   2 +-
 include/linux/iomap.h                         |   2 +-
 include/linux/iommu-dma.h                     |   4 +-
 include/linux/kernfs.h                        |   4 +-
 include/linux/khugepaged.h                    |   4 +-
 include/linux/ksm.h                           |  12 +-
 include/linux/kvm_host.h                      |   2 +-
 include/linux/lsm_hook_defs.h                 |   2 +-
 include/linux/mempolicy.h                     |  20 +-
 include/linux/migrate.h                       |   6 +-
 include/linux/mm.h                            | 308 +++++++++---------
 include/linux/mm_inline.h                     |  18 +-
 include/linux/mm_types.h                      |  14 +-
 include/linux/mmdebug.h                       |   4 +-
 include/linux/mmu_notifier.h                  |   8 +-
 include/linux/net.h                           |   4 +-
 include/linux/pagemap.h                       |   2 +-
 include/linux/pagewalk.h                      |  10 +-
 include/linux/pci.h                           |   4 +-
 include/linux/perf_event.h                    |   4 +-
 include/linux/pgtable.h                       | 100 +++---
 include/linux/pkeys.h                         |   2 +-
 include/linux/proc_fs.h                       |   2 +-
 include/linux/ring_buffer.h                   |   2 +-
 include/linux/rmap.h                          |  92 +++---
 include/linux/secretmem.h                     |   4 +-
 include/linux/security.h                      |   4 +-
 include/linux/shmem_fs.h                      |  12 +-
 include/linux/swap.h                          |   2 +-
 include/linux/swapops.h                       |   4 +-
 include/linux/sysfs.h                         |   4 +-
 include/linux/time_namespace.h                |   6 +-
 include/linux/uacce.h                         |   2 +-
 include/linux/uio_driver.h                    |   2 +-
 include/linux/uprobes.h                       |  10 +-
 include/linux/userfaultfd_k.h                 |  86 ++---
 include/linux/vdso_datastore.h                |   2 +-
 include/linux/vfio.h                          |   2 +-
 include/linux/vfio_pci_core.h                 |   4 +-
 include/linux/vmalloc.h                       |   6 +-
 include/media/dvb_vb2.h                       |   4 +-
 include/media/v4l2-dev.h                      |   2 +-
 include/media/v4l2-mem2mem.h                  |   6 +-
 include/media/videobuf2-core.h                |   6 +-
 include/media/videobuf2-v4l2.h                |   2 +-
 include/net/sock.h                            |   2 +-
 include/net/tcp.h                             |   2 +-
 include/rdma/ib_verbs.h                       |   6 +-
 include/rdma/rdma_vt.h                        |   2 +-
 include/sound/compress_driver.h               |   2 +-
 include/sound/hwdep.h                         |   2 +-
 include/sound/info.h                          |   2 +-
 include/sound/memalloc.h                      |   4 +-
 include/sound/pcm.h                           |   8 +-
 include/sound/soc-component.h                 |   6 +-
 include/trace/events/mmap.h                   |   4 +-
 include/trace/events/sched.h                  |   2 +-
 include/uapi/linux/bpf.h                      |   2 +-
 include/xen/xen-ops.h                         |  24 +-
 io_uring/memmap.c                             |   6 +-
 io_uring/memmap.h                             |   2 +-
 ipc/shm.c                                     |  22 +-
 kernel/acct.c                                 |   2 +-
 kernel/bpf/arena.c                            |  10 +-
 kernel/bpf/arraymap.c                         |   2 +-
 kernel/bpf/ringbuf.c                          |   4 +-
 kernel/bpf/stackmap.c                         |   4 +-
 kernel/bpf/syscall.c                          |   6 +-
 kernel/bpf/task_iter.c                        |  16 +-
 kernel/bpf/verifier.c                         |   2 +-
 kernel/dma/coherent.c                         |   6 +-
 kernel/dma/direct.c                           |   2 +-
 kernel/dma/direct.h                           |   2 +-
 kernel/dma/dummy.c                            |   2 +-
 kernel/dma/mapping.c                          |   8 +-
 kernel/dma/ops_helpers.c                      |   2 +-
 kernel/events/core.c                          |  24 +-
 kernel/events/uprobes.c                       |  48 +--
 kernel/fork.c                                 |  26 +-
 kernel/kcov.c                                 |   2 +-
 kernel/relay.c                                |   6 +-
 kernel/sched/fair.c                           |   4 +-
 kernel/signal.c                               |   2 +-
 kernel/sys.c                                  |   2 +-
 kernel/time/namespace.c                       |   2 +-
 kernel/trace/ring_buffer.c                    |   6 +-
 kernel/trace/trace.c                          |   4 +-
 kernel/trace/trace_output.c                   |   2 +-
 lib/buildid.c                                 |   6 +-
 lib/test_hmm.c                                |   6 +-
 lib/vdso/datastore.c                          |   6 +-
 mm/damon/ops-common.c                         |   4 +-
 mm/damon/ops-common.h                         |   4 +-
 mm/damon/paddr.c                              |   4 +-
 mm/damon/tests/vaddr-kunit.h                  |  16 +-
 mm/damon/vaddr.c                              |   4 +-
 mm/debug.c                                    |   2 +-
 mm/debug_vm_pgtable.c                         |   2 +-
 mm/filemap.c                                  |  12 +-
 mm/gup.c                                      |  56 ++--
 mm/hmm.c                                      |   6 +-
 mm/huge_memory.c                              | 104 +++---
 mm/hugetlb.c                                  | 158 ++++-----
 mm/internal.h                                 |  46 +--
 mm/interval_tree.c                            |  16 +-
 mm/io-mapping.c                               |   2 +-
 mm/khugepaged.c                               |  34 +-
 mm/ksm.c                                      |  48 +--
 mm/madvise.c                                  |  78 ++---
 mm/memory-failure.c                           |  16 +-
 mm/memory.c                                   | 244 +++++++-------
 mm/mempolicy.c                                |  42 +--
 mm/migrate.c                                  |  10 +-
 mm/migrate_device.c                           |   4 +-
 mm/mincore.c                                  |   8 +-
 mm/mlock.c                                    |  16 +-
 mm/mmap.c                                     |  70 ++--
 mm/mmu_gather.c                               |   4 +-
 mm/mprotect.c                                 |  22 +-
 mm/mremap.c                                   |  46 +--
 mm/mseal.c                                    |  14 +-
 mm/msync.c                                    |   2 +-
 mm/nommu.c                                    |  66 ++--
 mm/oom_kill.c                                 |   2 +-
 mm/page_idle.c                                |   2 +-
 mm/page_vma_mapped.c                          |   4 +-
 mm/pagewalk.c                                 |  20 +-
 mm/pgtable-generic.c                          |  20 +-
 mm/rmap.c                                     |  74 ++---
 mm/secretmem.c                                |   4 +-
 mm/shmem.c                                    |  34 +-
 mm/swap.c                                     |   2 +-
 mm/swap.h                                     |   6 +-
 mm/swap_state.c                               |   6 +-
 mm/swapfile.c                                 |  14 +-
 mm/userfaultfd.c                              | 116 +++----
 mm/util.c                                     |   4 +-
 mm/vma.c                                      | 196 +++++------
 mm/vma.h                                      | 126 +++----
 mm/vmalloc.c                                  |   4 +-
 mm/vmscan.c                                   |  12 +-
 net/core/sock.c                               |   2 +-
 net/ipv4/tcp.c                                |  12 +-
 net/packet/af_packet.c                        |   6 +-
 net/socket.c                                  |   4 +-
 net/xdp/xsk.c                                 |   2 +-
 samples/ftrace/ftrace-direct-too.c            |   4 +-
 samples/vfio-mdev/mbochs.c                    |   8 +-
 samples/vfio-mdev/mdpy.c                      |   2 +-
 scripts/coccinelle/api/vma_pages.cocci        |   6 +-
 security/apparmor/lsm.c                       |   2 +-
 security/integrity/ima/ima_main.c             |   4 +-
 security/ipe/hooks.c                          |   2 +-
 security/ipe/hooks.h                          |   2 +-
 security/security.c                           |   2 +-
 security/selinux/hooks.c                      |   2 +-
 security/selinux/selinuxfs.c                  |   4 +-
 sound/core/compress_offload.c                 |   2 +-
 sound/core/hwdep.c                            |   2 +-
 sound/core/info.c                             |   2 +-
 sound/core/init.c                             |   2 +-
 sound/core/memalloc.c                         |  22 +-
 sound/core/oss/pcm_oss.c                      |   2 +-
 sound/core/pcm_native.c                       |  20 +-
 sound/soc/fsl/fsl_asrc_m2m.c                  |   2 +-
 sound/soc/intel/avs/pcm.c                     |   2 +-
 sound/soc/loongson/loongson_dma.c             |   2 +-
 sound/soc/pxa/mmp-sspa.c                      |   2 +-
 sound/soc/qcom/lpass-platform.c               |   4 +-
 sound/soc/qcom/qdsp6/q6apm-dai.c              |   2 +-
 sound/soc/qcom/qdsp6/q6asm-dai.c              |   2 +-
 sound/soc/samsung/idma.c                      |   2 +-
 sound/soc/soc-component.c                     |   2 +-
 sound/soc/uniphier/aio-dma.c                  |   2 +-
 sound/usb/usx2y/us122l.c                      |   2 +-
 sound/usb/usx2y/usX2Yhwdep.c                  |   2 +-
 sound/usb/usx2y/usx2yhwdeppcm.c               |   6 +-
 tools/include/linux/btf_ids.h                 |   2 +-
 tools/include/uapi/linux/bpf.h                |   2 +-
 .../testing/selftests/bpf/bpf_experimental.h  |   2 +-
 .../selftests/bpf/progs/bpf_iter_task_vmas.c  |   2 +-
 .../selftests/bpf/progs/bpf_iter_vma_offset.c |   2 +-
 tools/testing/selftests/bpf/progs/find_vma.c  |   2 +-
 .../selftests/bpf/progs/find_vma_fail1.c      |   2 +-
 .../selftests/bpf/progs/find_vma_fail2.c      |   2 +-
 .../selftests/bpf/progs/iters_css_task.c      |   2 +-
 .../selftests/bpf/progs/iters_task_vma.c      |   2 +-
 .../selftests/bpf/progs/iters_testmod.c       |   4 +-
 tools/testing/selftests/bpf/progs/lsm.c       |   2 +-
 .../selftests/bpf/progs/test_bpf_cookie.c     |   2 +-
 .../bpf/progs/verifier_iterating_callbacks.c  |   4 +-
 .../selftests/bpf/test_kmods/bpf_testmod.c    |   2 +-
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   2 +-
 tools/testing/vma/vma.c                       |  70 ++--
 tools/testing/vma/vma_internal.h              | 156 ++++-----
 virt/kvm/kvm_main.c                           |  12 +-
 861 files changed, 3494 insertions(+), 3494 deletions(-)

diff --git a/Documentation/bpf/prog_lsm.rst b/Documentation/bpf/prog_lsm.rst
index ad2be02f30c2..f2b254b5a6ce 100644
--- a/Documentation/bpf/prog_lsm.rst
+++ b/Documentation/bpf/prog_lsm.rst
@@ -15,7 +15,7 @@ Structure
 The example shows an eBPF program that can be attached to the ``file_mprotect``
 LSM hook:
 
-.. c:function:: int file_mprotect(struct vm_area_struct *vma, unsigned long reqprot, unsigned long prot);
+.. c:function:: int file_mprotect(struct mm_area *vma, unsigned long reqprot, unsigned long prot);
 
 Other LSM hooks which can be instrumented can be found in
 ``security/security.c``.
@@ -31,7 +31,7 @@ the fields that need to be accessed.
 		unsigned long start_brk, brk, start_stack;
 	} __attribute__((preserve_access_index));
 
-	struct vm_area_struct {
+	struct mm_area {
 		unsigned long start_brk, brk, start_stack;
 		unsigned long vm_start, vm_end;
 		struct mm_struct *vm_mm;
@@ -65,7 +65,7 @@ example:
 .. code-block:: c
 
 	SEC("lsm/file_mprotect")
-	int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
+	int BPF_PROG(mprotect_audit, struct mm_area *vma,
 		     unsigned long reqprot, unsigned long prot, int ret)
 	{
 		/* ret is the return value from the previous BPF program
diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
index 889fc84ccd1b..597eb9760dea 100644
--- a/Documentation/core-api/cachetlb.rst
+++ b/Documentation/core-api/cachetlb.rst
@@ -50,7 +50,7 @@ changes occur:
 	page table operations such as what happens during
 	fork, and exec.
 
-3) ``void flush_tlb_range(struct vm_area_struct *vma,
+3) ``void flush_tlb_range(struct mm_area *vma,
    unsigned long start, unsigned long end)``
 
 	Here we are flushing a specific range of (user) virtual
@@ -70,7 +70,7 @@ changes occur:
 	call flush_tlb_page (see below) for each entry which may be
 	modified.
 
-4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``
+4) ``void flush_tlb_page(struct mm_area *vma, unsigned long addr)``
 
 	This time we need to remove the PAGE_SIZE sized translation
 	from the TLB.  The 'vma' is the backing structure used by
@@ -89,7 +89,7 @@ changes occur:
 	This is used primarily during fault processing.
 
 5) ``void update_mmu_cache_range(struct vm_fault *vmf,
-   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
+   struct mm_area *vma, unsigned long address, pte_t *ptep,
    unsigned int nr)``
 
 	At the end of every page fault, this routine is invoked to tell
@@ -159,7 +159,7 @@ Here are the routines, one by one:
 	This option is separate from flush_cache_mm to allow some
 	optimizations for VIPT caches.
 
-3) ``void flush_cache_range(struct vm_area_struct *vma,
+3) ``void flush_cache_range(struct mm_area *vma,
    unsigned long start, unsigned long end)``
 
 	Here we are flushing a specific range of (user) virtual
@@ -176,7 +176,7 @@ Here are the routines, one by one:
 	call flush_cache_page (see below) for each entry which may be
 	modified.
 
-4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``
+4) ``void flush_cache_page(struct mm_area *vma, unsigned long addr, unsigned long pfn)``
 
 	This time we need to remove a PAGE_SIZE sized range
 	from the cache.  The 'vma' is the backing structure used by
@@ -331,9 +331,9 @@ maps this page at its virtual address.
 			dirty.  Again, see sparc64 for examples of how
 			to deal with this.
 
-  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+  ``void copy_to_user_page(struct mm_area *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
-  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+  ``void copy_from_user_page(struct mm_area *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
 
 	When the kernel needs to copy arbitrary data in and out
@@ -346,7 +346,7 @@ maps this page at its virtual address.
 	likely that you will need to flush the instruction cache
 	for copy_to_user_page().
 
-  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
+  ``void flush_anon_page(struct mm_area *vma, struct page *page,
   unsigned long vmaddr)``
 
   	When the kernel needs to access the contents of an anonymous
@@ -365,7 +365,7 @@ maps this page at its virtual address.
 	If the icache does not snoop stores then this routine will need
 	to flush it.
 
-  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
+  ``void flush_icache_page(struct mm_area *vma, struct page *page)``
 
 	All the functionality of flush_icache_page can be implemented in
 	flush_dcache_folio and update_mmu_cache_range. In the future, the hope
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 8e3cce3d0a23..ca0b3e0ef596 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -581,7 +581,7 @@ dma_alloc_pages().  page must be the pointer returned by dma_alloc_pages().
 ::
 
 	int
-	dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+	dma_mmap_pages(struct device *dev, struct mm_area *vma,
 		       size_t size, struct page *page)
 
 Map an allocation returned from dma_alloc_pages() into a user address space.
@@ -679,7 +679,7 @@ returned by dma_vmap_noncontiguous().
 ::
 
 	int
-	dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+	dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
 			       size_t size, struct sg_table *sgt)
 
 Map an allocation returned from dma_alloc_noncontiguous() into a user address
diff --git a/Documentation/driver-api/uio-howto.rst b/Documentation/driver-api/uio-howto.rst
index 907ffa3b38f5..9e68c745b295 100644
--- a/Documentation/driver-api/uio-howto.rst
+++ b/Documentation/driver-api/uio-howto.rst
@@ -246,7 +246,7 @@ the members are required, others are optional.
    hardware interrupt number. The flags given here will be used in the
    call to :c:func:`request_irq()`.
 
--  ``int (*mmap)(struct uio_info *info, struct vm_area_struct *vma)``:
+-  ``int (*mmap)(struct uio_info *info, struct mm_area *vma)``:
    Optional. If you need a special :c:func:`mmap()`
    function, you can set it here. If this pointer is not NULL, your
    :c:func:`mmap()` will be called instead of the built-in one.
diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index 2a21a42c9386..056e27a40f3d 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -419,7 +419,7 @@ similar to a file operations structure::
 			 size_t count, loff_t *size);
 		long	(*ioctl)(struct vfio_device *vdev, unsigned int cmd,
 				 unsigned long arg);
-		int	(*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
+		int	(*mmap)(struct vfio_device *vdev, struct mm_area *vma);
 		void	(*request)(struct vfio_device *vdev, unsigned int count);
 		int	(*match)(struct vfio_device *vdev, char *buf);
 		void	(*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 0ec0bb6eb0fb..9c83c1262882 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -530,7 +530,7 @@ prototypes::
 	__poll_t (*poll) (struct file *, struct poll_table_struct *);
 	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
 	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
-	int (*mmap) (struct file *, struct vm_area_struct *);
+	int (*mmap) (struct file *, struct mm_area *);
 	int (*open) (struct inode *, struct file *);
 	int (*flush) (struct file *);
 	int (*release) (struct inode *, struct file *);
@@ -643,14 +643,14 @@ vm_operations_struct
 
 prototypes::
 
-	void (*open)(struct vm_area_struct *);
-	void (*close)(struct vm_area_struct *);
+	void (*open)(struct mm_area *);
+	void (*close)(struct mm_area *);
 	vm_fault_t (*fault)(struct vm_fault *);
 	vm_fault_t (*huge_fault)(struct vm_fault *, unsigned int order);
 	vm_fault_t (*map_pages)(struct vm_fault *, pgoff_t start, pgoff_t end);
-	vm_fault_t (*page_mkwrite)(struct vm_area_struct *, struct vm_fault *);
-	vm_fault_t (*pfn_mkwrite)(struct vm_area_struct *, struct vm_fault *);
-	int (*access)(struct vm_area_struct *, unsigned long, void*, int, int);
+	vm_fault_t (*page_mkwrite)(struct mm_area *, struct vm_fault *);
+	vm_fault_t (*pfn_mkwrite)(struct mm_area *, struct vm_fault *);
+	int (*access)(struct mm_area *, unsigned long, void*, int, int);
 
 locking rules:
 
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
index 2a17865dfe39..2935efeceaa9 100644
--- a/Documentation/filesystems/proc.rst
+++ b/Documentation/filesystems/proc.rst
@@ -2175,7 +2175,7 @@ the process is maintaining.  Example output::
      | lr-------- 1 root root 64 Jan 27 11:24 400000-41a000 -> /usr/bin/ls
 
 The name of a link represents the virtual memory bounds of a mapping, i.e.
-vm_area_struct::vm_start-vm_area_struct::vm_end.
+mm_area::vm_start-mm_area::vm_end.
 
 The main purpose of the map_files is to retrieve a set of memory mapped
 files in a fast way instead of parsing /proc/<pid>/maps or
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index ae79c30b6c0c..866485f271b0 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -1102,7 +1102,7 @@ This describes how the VFS can manipulate an open file.  As of kernel
 		__poll_t (*poll) (struct file *, struct poll_table_struct *);
 		long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
 		long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
-		int (*mmap) (struct file *, struct vm_area_struct *);
+		int (*mmap) (struct file *, struct mm_area *);
 		int (*open) (struct inode *, struct file *);
 		int (*flush) (struct file *, fl_owner_t id);
 		int (*release) (struct inode *, struct file *);
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index d55751cad67c..aac2545c4a54 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -280,8 +280,8 @@ made up of several fields, the more interesting ones being:
 .. code-block:: c
 
 	struct vm_operations_struct {
-		void (*open)(struct vm_area_struct * area);
-		void (*close)(struct vm_area_struct * area);
+		void (*open)(struct mm_area * area);
+		void (*close)(struct mm_area * area);
 		vm_fault_t (*fault)(struct vm_fault *vmf);
 	};
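
[A driver typically installs these operations from its ->mmap handler; a
minimal, non-DRM-specific sketch under the new naming, with hypothetical
my_* names::

	static vm_fault_t my_fault(struct vm_fault *vmf)
	{
		/* vmf->vma is now a struct mm_area * */
		return VM_FAULT_SIGBUS;
	}

	static const struct vm_operations_struct my_vm_ops = {
		.fault = my_fault,
	};

	static int my_mmap(struct file *file, struct mm_area *vma)
	{
		vma->vm_ops = &my_vm_ops;
		return 0;
	}
]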
 
diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index 7d61b7a8b65b..63fbba00dc3d 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -298,7 +298,7 @@ between device driver specific code and shared common code:
 
 1. ``mmap_read_lock()``
 
-   The device driver has to pass a ``struct vm_area_struct`` to
+   The device driver has to pass a ``struct mm_area`` to
    migrate_vma_setup() so the mmap_read_lock() or mmap_write_lock() needs to
    be held for the duration of the migration.
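
[To make the lock ordering concrete, a condensed sketch of a driver
migration path; error handling and device-page allocation are omitted, and
mm, start, end, src_pfns and dst_pfns are assumed to be set up by the
caller (plus .flags/.pgmap_owner as the use case requires)::

	struct migrate_vma args = {};

	mmap_read_lock(mm);
	args.vma   = find_vma(mm, start);
	args.start = start;
	args.end   = end;
	args.src   = src_pfns;
	args.dst   = dst_pfns;
	if (!migrate_vma_setup(&args)) {
		/* copy data to the new pages, then: */
		migrate_vma_pages(&args);
		migrate_vma_finalize(&args);
	}
	mmap_read_unlock(mm);
]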
 
diff --git a/Documentation/mm/hugetlbfs_reserv.rst b/Documentation/mm/hugetlbfs_reserv.rst
index 4914fbf07966..afb86d44c57e 100644
--- a/Documentation/mm/hugetlbfs_reserv.rst
+++ b/Documentation/mm/hugetlbfs_reserv.rst
@@ -104,7 +104,7 @@ These operations result in a call to the routine hugetlb_reserve_pages()::
 
 	int hugetlb_reserve_pages(struct inode *inode,
 				  long from, long to,
-				  struct vm_area_struct *vma,
+				  struct mm_area *vma,
 				  vm_flags_t vm_flags)
 
 The first thing hugetlb_reserve_pages() does is check if the NORESERVE
@@ -181,7 +181,7 @@ Reservations are consumed when huge pages associated with the reservations
 are allocated and instantiated in the corresponding mapping.  The allocation
 is performed within the routine alloc_hugetlb_folio()::
 
-	struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+	struct folio *alloc_hugetlb_folio(struct mm_area *vma,
 				     unsigned long addr, int avoid_reserve)
 
 alloc_hugetlb_folio is passed a VMA pointer and a virtual address, so it can
@@ -464,14 +464,14 @@ account the 'opposite' meaning of reservation map entries for private and
 shared mappings and hide this detail from the caller::
 
 	long vma_needs_reservation(struct hstate *h,
-				   struct vm_area_struct *vma,
+				   struct mm_area *vma,
 				   unsigned long addr)
 
 This routine calls region_chg() for the specified page.  If no reservation
 exists, 1 is returned.  If a reservation exists, 0 is returned::
 
 	long vma_commit_reservation(struct hstate *h,
-				    struct vm_area_struct *vma,
+				    struct mm_area *vma,
 				    unsigned long addr)
 
 This calls region_add() for the specified page.  As in the case of region_chg
@@ -483,7 +483,7 @@ vma_needs_reservation.  An unexpected difference indicates the reservation
 map was modified between calls::
 
 	void vma_end_reservation(struct hstate *h,
-				 struct vm_area_struct *vma,
+				 struct mm_area *vma,
 				 unsigned long addr)
 
 This calls region_abort() for the specified page.  As in the case of region_chg
@@ -492,7 +492,7 @@ vma_needs_reservation.  It will abort/end the in progress reservation add
 operation::
 
 	long vma_add_reservation(struct hstate *h,
-				 struct vm_area_struct *vma,
+				 struct mm_area *vma,
 				 unsigned long addr)
 
 This is a special wrapper routine to help facilitate reservation cleanup
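
[Taken together, the consume path pairs these helpers roughly as follows.
This is only an illustration of the ordering the document describes, not a
literal mm/hugetlb.c excerpt; h, vma, addr and folio are assumed to be in
scope::

	if (vma_needs_reservation(h, vma, addr) < 0)
		goto out;
	folio = alloc_hugetlb_folio(vma, addr, 0);
	if (IS_ERR(folio)) {
		vma_end_reservation(h, vma, addr);
		goto out;
	}
	/* folio is mapped; make the reservation consumption permanent */
	vma_commit_reservation(h, vma, addr);
]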
diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index e6756e78b476..674c30658f90 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -9,10 +9,10 @@ Process Addresses
 
 
 Userland memory ranges are tracked by the kernel via Virtual Memory Areas or
-'VMA's of type :c:struct:`!struct vm_area_struct`.
+'VMA's of type :c:struct:`!struct mm_area`.
 
 Each VMA describes a virtually contiguous memory range with identical
-attributes, each described by a :c:struct:`!struct vm_area_struct`
+attributes, each described by a :c:struct:`!struct mm_area`
 object. Userland access outside of VMAs is invalid except in the case where an
 adjacent stack VMA could be extended to contain the accessed address.
 
@@ -142,7 +142,7 @@ obtain either a read or a write lock for each of these.
 VMA fields
 ^^^^^^^^^^
 
-We can subdivide :c:struct:`!struct vm_area_struct` fields by their purpose, which makes it
+We can subdivide :c:struct:`!struct mm_area` fields by their purpose, which makes it
 easier to explore their locking characteristics:
 
 .. note:: We exclude VMA lock-specific fields here to avoid confusion, as these
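
[For a feel of the renamed type in use, a minimal walk over a process's
VMAs, assuming mm is a struct mm_struct * the caller holds a reference
on::

	struct mm_area *vma;
	VMA_ITERATOR(vmi, mm, 0);

	mmap_read_lock(mm);
	for_each_vma(vmi, vma)
		pr_info("%lx-%lx flags=%lx\n", vma->vm_start,
			vma->vm_end, vma->vm_flags);
	mmap_read_unlock(mm);
]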
diff --git a/Documentation/translations/zh_CN/core-api/cachetlb.rst b/Documentation/translations/zh_CN/core-api/cachetlb.rst
index 64295c61d1c1..96eefda0262e 100644
--- a/Documentation/translations/zh_CN/core-api/cachetlb.rst
+++ b/Documentation/translations/zh_CN/core-api/cachetlb.rst
@@ -51,7 +51,7 @@ cpu上对这个地址空间进行刷新。
 	这个接口被用来处理整个地址空间的页表操作,比如在fork和exec过程
 	中发生的事情。
 
-3) ``void flush_tlb_range(struct vm_area_struct *vma,
+3) ``void flush_tlb_range(struct mm_area *vma,
    unsigned long start, unsigned long end)``
 
 	这里我们要从TLB中刷新一个特定范围的(用户)虚拟地址转换。在运行后,
@@ -65,7 +65,7 @@ cpu上对这个地址空间进行刷新。
 	个页面大小的转换,而不是让内核为每个可能被修改的页表项调用
 	flush_tlb_page(见下文)。
 
-4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``
+4) ``void flush_tlb_page(struct mm_area *vma, unsigned long addr)``
 
 	这一次我们需要从TLB中删除PAGE_SIZE大小的转换。‘vma’是Linux用来跟
 	踪进程的mmap区域的支持结构体,地址空间可以通过vma->vm_mm获得。另
@@ -78,7 +78,7 @@ cpu上对这个地址空间进行刷新。
 
 	这主要是在故障处理时使用。
 
-5) ``void update_mmu_cache(struct vm_area_struct *vma,
+5) ``void update_mmu_cache(struct mm_area *vma,
    unsigned long address, pte_t *ptep)``
 
 	在每个缺页异常结束时,这个程序被调用,以告诉体系结构特定的代码,在
@@ -134,7 +134,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
 
 	这个选项与flush_cache_mm分开,以允许对VIPT缓存进行一些优化。
 
-3) ``void flush_cache_range(struct vm_area_struct *vma,
+3) ``void flush_cache_range(struct mm_area *vma,
    unsigned long start, unsigned long end)``
 
 	在这里,我们要从缓存中刷新一个特定范围的(用户)虚拟地址。运行
@@ -147,7 +147,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
 	除多个页面大小的区域, 而不是让内核为每个可能被修改的页表项调
 	用 flush_cache_page (见下文)。
 
-4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``
+4) ``void flush_cache_page(struct mm_area *vma, unsigned long addr, unsigned long pfn)``
 
 	这一次我们需要从缓存中删除一个PAGE_SIZE大小的区域。“vma”是
 	Linux用来跟踪进程的mmap区域的支持结构体,地址空间可以通过
@@ -284,9 +284,9 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
 	该函数的调用情形与flush_dcache_page()相同。它允许架构针对刷新整个
 	folio页面进行优化,而不是一次刷新一页。
 
-  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+  ``void copy_to_user_page(struct mm_area *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
-  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+  ``void copy_from_user_page(struct mm_area *vma, struct page *page,
   unsigned long user_vaddr, void *dst, void *src, int len)``
 
 	当内核需要复制任意的数据进出任意的用户页时(比如ptrace()),它将使
@@ -296,7 +296,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
 	处理器的指令缓存没有对cpu存储进行窥探,那么你很可能需要为
 	copy_to_user_page()刷新指令缓存。
 
-  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
+  ``void flush_anon_page(struct mm_area *vma, struct page *page,
   unsigned long vmaddr)``
 
 	当内核需要访问一个匿名页的内容时,它会调用这个函数(目前只有
@@ -310,7 +310,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
 
 	如果icache不对存储进行窥探,那么这个程序将需要对其进行刷新。
 
-  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
+  ``void flush_icache_page(struct mm_area *vma, struct page *page)``
 
 	flush_icache_page的所有功能都可以在flush_dcache_page和update_mmu_cache
 	中实现。在未来,我们希望能够完全删除这个接口。
diff --git a/Documentation/translations/zh_CN/mm/hmm.rst b/Documentation/translations/zh_CN/mm/hmm.rst
index 22c210f4e94f..ad4e2847b119 100644
--- a/Documentation/translations/zh_CN/mm/hmm.rst
+++ b/Documentation/translations/zh_CN/mm/hmm.rst
@@ -247,7 +247,7 @@ devm_memunmap_pages() 和 devm_release_mem_region() 当资源可以绑定到 ``s
 
 1. ``mmap_read_lock()``
 
-   设备驱动程序必须将 ``struct vm_area_struct`` 传递给migrate_vma_setup(),
+   设备驱动程序必须将 ``struct mm_area`` 传递给migrate_vma_setup(),
    因此需要在迁移期间保留 mmap_read_lock() 或 mmap_write_lock()。
 
 2. ``migrate_vma_setup(struct migrate_vma *args)``
diff --git a/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst b/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
index 20947f8bd065..b85b68f3afd4 100644
--- a/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
+++ b/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
@@ -95,7 +95,7 @@ Page Flags
 
 	int hugetlb_reserve_pages(struct inode *inode,
 				  long from, long to,
-				  struct vm_area_struct *vma,
+				  struct mm_area *vma,
 				  vm_flags_t vm_flags)
 
 hugetlb_reserve_pages()做的第一件事是检查在调用shmget()或mmap()时是否指定了NORESERVE
@@ -146,7 +146,7 @@ HPAGE_RESV_OWNER标志被设置,以表明该VMA拥有预留。
 当与预留相关的巨页在相应的映射中被分配和实例化时,预留就被消耗了。该分配是在函数alloc_hugetlb_folio()
 中进行的::
 
-	struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+	struct folio *alloc_hugetlb_folio(struct mm_area *vma,
 				     unsigned long addr, int avoid_reserve)
 
 alloc_hugetlb_folio被传递给一个VMA指针和一个虚拟地址,因此它可以查阅预留映射以确定是否存在预留。
@@ -342,13 +342,13 @@ region_count()在解除私有巨页映射时被调用。在私有映射中,预
 它们确实考虑到了私有和共享映射的预留映射条目的 “相反” 含义,并向调用者隐藏了这个细节::
 
 	long vma_needs_reservation(struct hstate *h,
-				   struct vm_area_struct *vma,
+				   struct mm_area *vma,
 				   unsigned long addr)
 
 该函数为指定的页面调用 region_chg()。如果不存在预留,则返回1。如果存在预留,则返回0::
 
 	long vma_commit_reservation(struct hstate *h,
-				    struct vm_area_struct *vma,
+				    struct mm_area *vma,
 				    unsigned long addr)
 
 这将调用 region_add(),用于指定的页面。与region_chg和region_add的情况一样,该函数应在
@@ -357,14 +357,14 @@ region_count()在解除私有巨页映射时被调用。在私有映射中,预
 现意外的差异,说明在两次调用之间修改了预留映射::
 
 	void vma_end_reservation(struct hstate *h,
-				 struct vm_area_struct *vma,
+				 struct mm_area *vma,
 				 unsigned long addr)
 
 这将调用指定页面的 region_abort()。与region_chg和region_abort的情况一样,该函数应在
 先前调用的vma_needs_reservation后被调用。它将中止/结束正在进行的预留添加操作::
 
 	long vma_add_reservation(struct hstate *h,
-				 struct vm_area_struct *vma,
+				 struct mm_area *vma,
 				 unsigned long addr)
 
 这是一个特殊的包装函数,有助于在错误路径上清理预留。它只从restore_reserve_on_error()函数
diff --git a/Documentation/userspace-api/media/conf_nitpick.py b/Documentation/userspace-api/media/conf_nitpick.py
index 0a8e236d07ab..3704eb6e4e3b 100644
--- a/Documentation/userspace-api/media/conf_nitpick.py
+++ b/Documentation/userspace-api/media/conf_nitpick.py
@@ -103,7 +103,7 @@ nitpick_ignore = [
     ("c:type", "usb_interface"),
     ("c:type", "v4l2_std_id"),
     ("c:type", "video_system_t"),
-    ("c:type", "vm_area_struct"),
+    ("c:type", "mm_area"),
 
     # Opaque structures
 
diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
index 36a7e924c3b9..6a9f035ab3c9 100644
--- a/arch/alpha/include/asm/cacheflush.h
+++ b/arch/alpha/include/asm/cacheflush.h
@@ -35,7 +35,7 @@ extern void smp_imb(void);
 
 extern void __load_new_mm_context(struct mm_struct *);
 static inline void
-flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
+flush_icache_user_page(struct mm_area *vma, struct page *page,
 			unsigned long addr, int len)
 {
 	if (vma->vm_flags & VM_EXEC) {
@@ -48,7 +48,7 @@ flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 }
 #define flush_icache_user_page flush_icache_user_page
 #else /* CONFIG_SMP */
-extern void flush_icache_user_page(struct vm_area_struct *vma,
+extern void flush_icache_user_page(struct mm_area *vma,
 		struct page *page, unsigned long addr, int len);
 #define flush_icache_user_page flush_icache_user_page
 #endif /* CONFIG_SMP */
@@ -57,7 +57,7 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
  * Both implementations of flush_icache_user_page flush the entire
  * address space, so one call, no matter how many pages.
  */
-static inline void flush_icache_pages(struct vm_area_struct *vma,
+static inline void flush_icache_pages(struct mm_area *vma,
 		struct page *page, unsigned int nr)
 {
 	flush_icache_user_page(vma, page, 0, 0);
diff --git a/arch/alpha/include/asm/machvec.h b/arch/alpha/include/asm/machvec.h
index 490fc880bb3f..964ae4fe2dd3 100644
--- a/arch/alpha/include/asm/machvec.h
+++ b/arch/alpha/include/asm/machvec.h
@@ -16,7 +16,7 @@
 
 struct task_struct;
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 struct linux_hose_info;
 struct pci_dev;
 struct pci_ops;
diff --git a/arch/alpha/include/asm/pci.h b/arch/alpha/include/asm/pci.h
index 6c04fcbdc8ed..d402ba6d7a00 100644
--- a/arch/alpha/include/asm/pci.h
+++ b/arch/alpha/include/asm/pci.h
@@ -82,7 +82,7 @@ extern int pci_legacy_read(struct pci_bus *bus, loff_t port, u32 *val,
 extern int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val,
 			    size_t count);
 extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
-				      struct vm_area_struct *vma,
+				      struct mm_area *vma,
 				      enum pci_mmap_state mmap_state);
 extern void pci_adjust_legacy_attr(struct pci_bus *bus,
 				   enum pci_mmap_state mmap_type);
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 02e8817a8921..fdb7f661c52a 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -19,7 +19,7 @@
 #include <asm/setup.h>
 
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 
 /* Certain architectures need to do special things when PTEs
  * within a page table are directly modified.  Thus, the following
@@ -298,13 +298,13 @@ extern pgd_t swapper_pg_dir[1024];
  * The Alpha doesn't have any external MMU info:  the kernel page
  * tables contain all the necessary information.
  */
-extern inline void update_mmu_cache(struct vm_area_struct * vma,
+extern inline void update_mmu_cache(struct mm_area * vma,
 	unsigned long address, pte_t *ptep)
 {
 }
 
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 }
diff --git a/arch/alpha/include/asm/tlbflush.h b/arch/alpha/include/asm/tlbflush.h
index ba4b359d6c39..76232c200987 100644
--- a/arch/alpha/include/asm/tlbflush.h
+++ b/arch/alpha/include/asm/tlbflush.h
@@ -26,7 +26,7 @@ ev5_flush_tlb_current(struct mm_struct *mm)
 
 __EXTERN_INLINE void
 ev5_flush_tlb_current_page(struct mm_struct * mm,
-			   struct vm_area_struct *vma,
+			   struct mm_area *vma,
 			   unsigned long addr)
 {
 	if (vma->vm_flags & VM_EXEC)
@@ -81,7 +81,7 @@ flush_tlb_mm(struct mm_struct *mm)
 
 /* Page-granular tlb flush.  */
 static inline void
-flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
@@ -94,7 +94,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 /* Flush a specified range of user mapping.  On the Alpha we flush
    the whole user tlb.  */
 static inline void
-flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+flush_tlb_range(struct mm_area *vma, unsigned long start,
 		unsigned long end)
 {
 	flush_tlb_mm(vma->vm_mm);
@@ -104,8 +104,8 @@ flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *);
-extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
-extern void flush_tlb_range(struct vm_area_struct *, unsigned long,
+extern void flush_tlb_page(struct mm_area *, unsigned long);
+extern void flush_tlb_range(struct mm_area *, unsigned long,
 			    unsigned long);
 
 #endif /* CONFIG_SMP */
diff --git a/arch/alpha/kernel/pci-sysfs.c b/arch/alpha/kernel/pci-sysfs.c
index 3048758304b5..ec66bae1cfae 100644
--- a/arch/alpha/kernel/pci-sysfs.c
+++ b/arch/alpha/kernel/pci-sysfs.c
@@ -16,7 +16,7 @@
 #include <linux/pci.h>
 
 static int hose_mmap_page_range(struct pci_controller *hose,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				enum pci_mmap_state mmap_type, int sparse)
 {
 	unsigned long base;
@@ -34,7 +34,7 @@ static int hose_mmap_page_range(struct pci_controller *hose,
 }
 
 static int __pci_mmap_fits(struct pci_dev *pdev, int num,
-			   struct vm_area_struct *vma, int sparse)
+			   struct mm_area *vma, int sparse)
 {
 	unsigned long nr, start, size;
 	int shift = sparse ? 5 : 0;
@@ -56,7 +56,7 @@ static int __pci_mmap_fits(struct pci_dev *pdev, int num,
  * pci_mmap_resource - map a PCI resource into user memory space
  * @kobj: kobject for mapping
  * @attr: struct bin_attribute for the file being mapped
- * @vma: struct vm_area_struct passed into the mmap
+ * @vma: struct mm_area passed into the mmap
  * @sparse: address space type
  *
  * Use the bus mapping routines to map a PCI resource into userspace.
@@ -65,7 +65,7 @@ static int __pci_mmap_fits(struct pci_dev *pdev, int num,
  */
 static int pci_mmap_resource(struct kobject *kobj,
 			     const struct bin_attribute *attr,
-			     struct vm_area_struct *vma, int sparse)
+			     struct mm_area *vma, int sparse)
 {
 	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
 	struct resource *res = attr->private;
@@ -94,14 +94,14 @@ static int pci_mmap_resource(struct kobject *kobj,
 
 static int pci_mmap_resource_sparse(struct file *filp, struct kobject *kobj,
 				    const struct bin_attribute *attr,
-				    struct vm_area_struct *vma)
+				    struct mm_area *vma)
 {
 	return pci_mmap_resource(kobj, attr, vma, 1);
 }
 
 static int pci_mmap_resource_dense(struct file *filp, struct kobject *kobj,
 				   const struct bin_attribute *attr,
-				   struct vm_area_struct *vma)
+				   struct mm_area *vma)
 {
 	return pci_mmap_resource(kobj, attr, vma, 0);
 }
@@ -254,7 +254,7 @@ int pci_create_resource_files(struct pci_dev *pdev)
 /* Legacy I/O bus mapping stuff. */
 
 static int __legacy_mmap_fits(struct pci_controller *hose,
-			      struct vm_area_struct *vma,
+			      struct mm_area *vma,
 			      unsigned long res_size, int sparse)
 {
 	unsigned long nr, start, size;
@@ -283,7 +283,7 @@ static inline int has_sparse(struct pci_controller *hose,
 	return base != 0;
 }
 
-int pci_mmap_legacy_page_range(struct pci_bus *bus, struct vm_area_struct *vma,
+int pci_mmap_legacy_page_range(struct pci_bus *bus, struct mm_area *vma,
 			       enum pci_mmap_state mmap_type)
 {
 	struct pci_controller *hose = bus->sysdata;
diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
index ed06367ece57..1f71a076196b 100644
--- a/arch/alpha/kernel/smp.c
+++ b/arch/alpha/kernel/smp.c
@@ -658,7 +658,7 @@ flush_tlb_mm(struct mm_struct *mm)
 EXPORT_SYMBOL(flush_tlb_mm);
 
 struct flush_tlb_page_struct {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm;
 	unsigned long addr;
 };
@@ -676,7 +676,7 @@ ipi_flush_tlb_page(void *x)
 }
 
 void
-flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	struct flush_tlb_page_struct data;
 	struct mm_struct *mm = vma->vm_mm;
@@ -709,7 +709,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 EXPORT_SYMBOL(flush_tlb_page);
 
 void
-flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	/* On the Alpha we always flush the whole user tlb.  */
 	flush_tlb_mm(vma->vm_mm);
@@ -727,7 +727,7 @@ ipi_flush_icache_page(void *x)
 }
 
 void
-flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
+flush_icache_user_page(struct mm_area *vma, struct page *page,
 			unsigned long addr, int len)
 {
 	struct mm_struct *mm = vma->vm_mm;
diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index a9816bbc9f34..a65198563de8 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -85,7 +85,7 @@ asmlinkage void
 do_page_fault(unsigned long address, unsigned long mmcsr,
 	      long cause, struct pt_regs *regs)
 {
-	struct vm_area_struct * vma;
+	struct mm_area * vma;
 	struct mm_struct *mm = current->mm;
 	const struct exception_table_entry *fixup;
 	int si_code = SEGV_MAPERR;
diff --git a/arch/arc/include/asm/hugepage.h b/arch/arc/include/asm/hugepage.h
index 8a2441670a8f..3f3e305802f6 100644
--- a/arch/arc/include/asm/hugepage.h
+++ b/arch/arc/include/asm/hugepage.h
@@ -61,11 +61,11 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 	*pmdp = pmd;
 }
 
-extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+extern void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
 				 pmd_t *pmd);
 
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
-extern void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
 				unsigned long end);
 
 /* We don't have hardware dirty/accessed bits, generic_pmdp_establish is fine.*/
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index def0dfb95b43..bb03a8165e36 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -25,13 +25,13 @@
 #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
 #define copy_page(to, from)		memcpy((to), (from), PAGE_SIZE)
 
-struct vm_area_struct;
+struct mm_area;
 struct page;
 
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 
 void copy_user_highpage(struct page *to, struct page *from,
-			unsigned long u_vaddr, struct vm_area_struct *vma);
+			unsigned long u_vaddr, struct mm_area *vma);
 void clear_user_page(void *to, unsigned long u_vaddr, struct page *page);
 
 typedef struct {
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 8ebec1b21d24..80c4759894fc 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -101,7 +101,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 }
 
 struct vm_fault;
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr);
 
 #define update_mmu_cache(vma, addr, ptep) \
diff --git a/arch/arc/include/asm/tlbflush.h b/arch/arc/include/asm/tlbflush.h
index 992a2837a53f..e442c338f36a 100644
--- a/arch/arc/include/asm/tlbflush.h
+++ b/arch/arc/include/asm/tlbflush.h
@@ -10,12 +10,12 @@
 
 void local_flush_tlb_all(void);
 void local_flush_tlb_mm(struct mm_struct *mm);
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page);
 void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
-void local_flush_tlb_range(struct vm_area_struct *vma,
+void local_flush_tlb_range(struct mm_area *vma,
 			   unsigned long start, unsigned long end);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
 			       unsigned long end);
 #endif
 
@@ -29,14 +29,14 @@ void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
 #define flush_pmd_tlb_range(vma, s, e)	local_flush_pmd_tlb_range(vma, s, e)
 #endif
 #else
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
 							 unsigned long end);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void flush_tlb_page(struct mm_area *vma, unsigned long page);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
 #endif
 #endif /* CONFIG_SMP */
 #endif
diff --git a/arch/arc/kernel/arc_hostlink.c b/arch/arc/kernel/arc_hostlink.c
index 08c5196efe0a..ca695259edde 100644
--- a/arch/arc/kernel/arc_hostlink.c
+++ b/arch/arc/kernel/arc_hostlink.c
@@ -15,7 +15,7 @@
 
 static unsigned char __HOSTLINK__[4 * PAGE_SIZE] __aligned(PAGE_SIZE);
 
-static int arc_hl_mmap(struct file *fp, struct vm_area_struct *vma)
+static int arc_hl_mmap(struct file *fp, struct mm_area *vma)
 {
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
diff --git a/arch/arc/kernel/troubleshoot.c b/arch/arc/kernel/troubleshoot.c
index c380d8c30704..0e54ebd71f6c 100644
--- a/arch/arc/kernel/troubleshoot.c
+++ b/arch/arc/kernel/troubleshoot.c
@@ -76,7 +76,7 @@ static void print_task_path_n_nm(struct task_struct *tsk)
 
 static void show_faulting_vma(unsigned long address)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *active_mm = current->active_mm;
 
 	/* can't use print_vma_addr() yet as it doesn't check for
diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index 9106ceac323c..29f282d3b006 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -880,7 +880,7 @@ noinline void flush_cache_all(void)
 }
 
 void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long u_vaddr, struct vm_area_struct *vma)
+	unsigned long u_vaddr, struct mm_area *vma)
 {
 	struct folio *src = page_folio(from);
 	struct folio *dst = page_folio(to);
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 95119a5e7761..a757e4c1aeca 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -72,7 +72,7 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 
 void do_page_fault(unsigned long address, struct pt_regs *regs)
 {
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	int sig, si_code = SEGV_MAPERR;
diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
index 2185afe8d59f..d43d7ab91d3d 100644
--- a/arch/arc/mm/mmap.c
+++ b/arch/arc/mm/mmap.c
@@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_unmapped_area_info info = {};
 
 	/*
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index cae4a7aae0ed..94da2ce6b491 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -205,7 +205,7 @@ noinline void local_flush_tlb_mm(struct mm_struct *mm)
  *      without doing any explicit Shootdown
  *  -In case of kernel Flush, entry has to be shot down explicitly
  */
-void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
 			   unsigned long end)
 {
 	const unsigned int cpu = smp_processor_id();
@@ -275,7 +275,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
  * NOTE One TLB entry contains translation for single PAGE
  */
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	const unsigned int cpu = smp_processor_id();
 	unsigned long flags;
@@ -295,7 +295,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 #ifdef CONFIG_SMP
 
 struct tlb_args {
-	struct vm_area_struct *ta_vma;
+	struct mm_area *ta_vma;
 	unsigned long ta_start;
 	unsigned long ta_end;
 };
@@ -341,7 +341,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 			 mm, 1);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+void flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
 {
 	struct tlb_args ta = {
 		.ta_vma = vma,
@@ -351,7 +351,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 	on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_page, &ta, 1);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 		     unsigned long end)
 {
 	struct tlb_args ta = {
@@ -364,7 +364,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
 			 unsigned long end)
 {
 	struct tlb_args ta = {
@@ -391,7 +391,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 /*
  * Routine to create a TLB entry
  */
-static void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
+static void create_tlb(struct mm_area *vma, unsigned long vaddr, pte_t *ptep)
 {
 	unsigned long flags;
 	unsigned int asid_or_sasid, rwx;
@@ -469,7 +469,7 @@ static void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *p
  * Note that flush (when done) involves both WBACK - so physical page is
  * in sync as well as INV - so any non-congruent aliases don't remain
  */
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr)
 {
 	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
@@ -527,14 +527,14 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
  * Thus THP PMD accessors are implemented in terms of PTE (just like sparc)
  */
 
-void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
 				 pmd_t *pmd)
 {
 	pte_t pte = __pte(pmd_val(*pmd));
 	update_mmu_cache_range(NULL, vma, addr, &pte, HPAGE_PMD_NR);
 }
 
-void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
 			       unsigned long end)
 {
 	unsigned int cpu;
diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 8ed8b9a24efe..ad88660a95c4 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -165,7 +165,7 @@ extern void dmac_flush_range(const void *, const void *);
  * processes address space.  Really, we want to allow our "user
  * space" model to handle this.
  */
-extern void copy_to_user_page(struct vm_area_struct *, struct page *,
+extern void copy_to_user_page(struct mm_area *, struct page *,
 	unsigned long, void *, const void *, unsigned long);
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
 	do {							\
@@ -222,7 +222,7 @@ static inline void vivt_flush_cache_mm(struct mm_struct *mm)
 }
 
 static inline void
-vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+vivt_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
@@ -231,7 +231,7 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 					vma->vm_flags);
 }
 
-static inline void vivt_flush_cache_pages(struct vm_area_struct *vma,
+static inline void vivt_flush_cache_pages(struct mm_area *vma,
 		unsigned long user_addr, unsigned long pfn, unsigned int nr)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -252,8 +252,8 @@ static inline void vivt_flush_cache_pages(struct vm_area_struct *vma,
 		vivt_flush_cache_pages(vma, addr, pfn, nr)
 #else
 void flush_cache_mm(struct mm_struct *mm);
-void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr,
+void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
+void flush_cache_pages(struct mm_area *vma, unsigned long user_addr,
 		unsigned long pfn, unsigned int nr);
 #endif
 
@@ -309,10 +309,10 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
 }
 
 #define ARCH_HAS_FLUSH_ANON_PAGE
-static inline void flush_anon_page(struct vm_area_struct *vma,
+static inline void flush_anon_page(struct mm_area *vma,
 			 struct page *page, unsigned long vmaddr)
 {
-	extern void __flush_anon_page(struct vm_area_struct *vma,
+	extern void __flush_anon_page(struct mm_area *vma,
 				struct page *, unsigned long);
 	if (PageAnon(page))
 		__flush_anon_page(vma, page, vmaddr);
diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index ef11b721230e..ba8262198322 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -102,34 +102,34 @@
 #endif
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 
 struct cpu_user_fns {
 	void (*cpu_clear_user_highpage)(struct page *page, unsigned long vaddr);
 	void (*cpu_copy_user_highpage)(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma);
+			unsigned long vaddr, struct mm_area *vma);
 };
 
 void fa_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void fa_clear_user_highpage(struct page *page, unsigned long vaddr);
 void feroceon_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void feroceon_clear_user_highpage(struct page *page, unsigned long vaddr);
 void v4_mc_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void v4_mc_clear_user_highpage(struct page *page, unsigned long vaddr);
 void v4wb_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void v4wb_clear_user_highpage(struct page *page, unsigned long vaddr);
 void v4wt_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void v4wt_clear_user_highpage(struct page *page, unsigned long vaddr);
 void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void xsc3_mc_clear_user_highpage(struct page *page, unsigned long vaddr);
 void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 void xscale_mc_clear_user_highpage(struct page *page, unsigned long vaddr);
 
 #ifdef MULTI_USER
@@ -145,7 +145,7 @@ extern struct cpu_user_fns cpu_user;
 
 extern void __cpu_clear_user_highpage(struct page *page, unsigned long vaddr);
 extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma);
+			unsigned long vaddr, struct mm_area *vma);
 #endif
 
 #define clear_user_highpage(page,vaddr)		\
diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
index 38c6e4a2a0b6..401ec430d0fd 100644
--- a/arch/arm/include/asm/tlbflush.h
+++ b/arch/arm/include/asm/tlbflush.h
@@ -205,7 +205,7 @@
 #include <linux/sched.h>
 
 struct cpu_tlb_fns {
-	void (*flush_user_range)(unsigned long, unsigned long, struct vm_area_struct *);
+	void (*flush_user_range)(unsigned long, unsigned long, struct mm_area *);
 	void (*flush_kern_range)(unsigned long, unsigned long);
 	unsigned long tlb_flags;
 };
@@ -223,7 +223,7 @@ struct cpu_tlb_fns {
 #define __cpu_flush_user_tlb_range	__glue(_TLB,_flush_user_tlb_range)
 #define __cpu_flush_kern_tlb_range	__glue(_TLB,_flush_kern_tlb_range)
 
-extern void __cpu_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+extern void __cpu_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 extern void __cpu_flush_kern_tlb_range(unsigned long, unsigned long);
 
 #endif
@@ -264,7 +264,7 @@ extern struct cpu_tlb_fns cpu_tlb;
  *	flush_tlb_page(vma, uaddr)
  *
  *		Invalidate the specified page in the specified address range.
- *		- vma	- vm_area_struct describing address range
+ *		- vma	- mm_area describing address range
  *		- vaddr - virtual address (may not be aligned)
  */
 
@@ -410,7 +410,7 @@ static inline void __flush_tlb_mm(struct mm_struct *mm)
 }
 
 static inline void
-__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+__local_flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
 {
 	const int zero = 0;
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
@@ -432,7 +432,7 @@ __local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 }
 
 static inline void
-local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+local_flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
 {
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
@@ -449,7 +449,7 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 }
 
 static inline void
-__flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+__flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
 {
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
 
@@ -608,9 +608,9 @@ static inline void clean_pmd_entry(void *pmd)
 #else
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr);
+extern void flush_tlb_page(struct mm_area *vma, unsigned long uaddr);
 extern void flush_tlb_kernel_page(unsigned long kaddr);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void flush_bp_all(void);
 #endif
@@ -622,11 +622,11 @@ extern void flush_bp_all(void);
  * the set_ptes() function.
  */
 #if __LINUX_ARM_ARCH__ < 6
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long addr, pte_t *ptep, unsigned int nr);
 #else
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+		struct mm_area *vma, unsigned long addr, pte_t *ptep,
 		unsigned int nr)
 {
 }
@@ -644,17 +644,17 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #ifndef __ASSEMBLY__
 static inline void local_flush_tlb_all(void)									{ }
 static inline void local_flush_tlb_mm(struct mm_struct *mm)							{ }
-static inline void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)			{ }
+static inline void local_flush_tlb_page(struct mm_area *vma, unsigned long uaddr)			{ }
 static inline void local_flush_tlb_kernel_page(unsigned long kaddr)						{ }
-static inline void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)	{ }
+static inline void local_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)	{ }
 static inline void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)				{ }
 static inline void local_flush_bp_all(void)									{ }
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr);
+extern void flush_tlb_page(struct mm_area *vma, unsigned long uaddr);
 extern void flush_tlb_kernel_page(unsigned long kaddr);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void flush_bp_all(void);
 #endif	/* __ASSEMBLY__ */
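
[As context for the flush_tlb_page() contract spelled out in the comment
above, a typical caller pattern after a PTE update looks like this; sketch
only, assuming addr, ptep and pte come from a page-table walk::

	set_pte_at(vma->vm_mm, addr, ptep, pte);
	flush_tlb_page(vma, addr);
]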
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 123f4a8ef446..026d60dfd19e 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -106,8 +106,8 @@ int main(void)
   DEFINE(MM_CONTEXT_ID,		offsetof(struct mm_struct, context.id.counter));
   BLANK();
 #endif
-  DEFINE(VMA_VM_MM,		offsetof(struct vm_area_struct, vm_mm));
-  DEFINE(VMA_VM_FLAGS,		offsetof(struct vm_area_struct, vm_flags));
+  DEFINE(VMA_VM_MM,		offsetof(struct mm_area, vm_mm));
+  DEFINE(VMA_VM_FLAGS,		offsetof(struct mm_area, vm_flags));
   BLANK();
   DEFINE(VM_EXEC,	       	VM_EXEC);
   BLANK();
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index e16ed102960c..d35d4687e6a8 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -306,7 +306,7 @@ unsigned long __get_wchan(struct task_struct *p)
  * atomic helpers. Insert it into the gate_vma so that it is visible
  * through ptrace and /proc/<pid>/mem.
  */
-static struct vm_area_struct gate_vma;
+static struct mm_area gate_vma;
 
 static int __init gate_vma_init(void)
 {
@@ -319,7 +319,7 @@ static int __init gate_vma_init(void)
 }
 arch_initcall(gate_vma_init);
 
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+struct mm_area *get_gate_vma(struct mm_struct *mm)
 {
 	return &gate_vma;
 }
@@ -338,7 +338,7 @@ int in_gate_area_no_mm(unsigned long addr)
 #define is_gate_vma(vma)	0
 #endif
 
-const char *arch_vma_name(struct vm_area_struct *vma)
+const char *arch_vma_name(struct mm_area *vma)
 {
 	return is_gate_vma(vma) ? "[vectors]" : NULL;
 }
@@ -380,7 +380,7 @@ static struct page *signal_page;
 extern struct page *get_signal_page(void);
 
 static int sigpage_mremap(const struct vm_special_mapping *sm,
-		struct vm_area_struct *new_vma)
+		struct mm_area *new_vma)
 {
 	current->mm->context.sigpage = new_vma->vm_start;
 	return 0;
@@ -395,7 +395,7 @@ static const struct vm_special_mapping sigpage_mapping = {
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long npages;
 	unsigned long addr;
 	unsigned long hint;
diff --git a/arch/arm/kernel/smp_tlb.c b/arch/arm/kernel/smp_tlb.c
index d4908b3736d8..d827500c7538 100644
--- a/arch/arm/kernel/smp_tlb.c
+++ b/arch/arm/kernel/smp_tlb.c
@@ -18,7 +18,7 @@
  * TLB operations
  */
 struct tlb_args {
-	struct vm_area_struct *ta_vma;
+	struct mm_area *ta_vma;
 	unsigned long ta_start;
 	unsigned long ta_end;
 };
@@ -193,7 +193,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	broadcast_tlb_mm_a15_erratum(mm);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+void flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
 {
 	if (tlb_ops_need_broadcast()) {
 		struct tlb_args ta;
@@ -217,7 +217,7 @@ void flush_tlb_kernel_page(unsigned long kaddr)
 	broadcast_tlb_a15_erratum();
 }
 
-void flush_tlb_range(struct vm_area_struct *vma,
+void flush_tlb_range(struct mm_area *vma,
                      unsigned long start, unsigned long end)
 {
 	if (tlb_ops_need_broadcast()) {
diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index 325448ffbba0..97b28ef9742a 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -35,7 +35,7 @@ extern char vdso_start[], vdso_end[];
 unsigned int vdso_total_pages __ro_after_init;
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-		struct vm_area_struct *new_vma)
+		struct mm_area *new_vma)
 {
 	current->mm->context.vdso = new_vma->vm_start;
 
@@ -210,7 +210,7 @@ static_assert(__VDSO_PAGES == VDSO_NR_PAGES);
 /* assumes mmap_lock is write-locked */
 void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long len;
 
 	mm->context.vdso = 0;
diff --git a/arch/arm/mach-rpc/ecard.c b/arch/arm/mach-rpc/ecard.c
index 2cde4c83b7f9..08d17ee66891 100644
--- a/arch/arm/mach-rpc/ecard.c
+++ b/arch/arm/mach-rpc/ecard.c
@@ -213,7 +213,7 @@ static DEFINE_MUTEX(ecard_mutex);
  */
 static void ecard_init_pgtables(struct mm_struct *mm)
 {
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, VM_EXEC);
+	struct mm_area vma = TLB_FLUSH_VMA(mm, VM_EXEC);
 
 	/* We want to set up the page tables for the following mapping:
 	 *  Virtual	Physical
diff --git a/arch/arm/mm/cache-v6.S b/arch/arm/mm/cache-v6.S
index 9f415476e218..560bf185d275 100644
--- a/arch/arm/mm/cache-v6.S
+++ b/arch/arm/mm/cache-v6.S
@@ -94,7 +94,7 @@ SYM_FUNC_END(v6_flush_user_cache_all)
  *
  *	- start - start address (may not be aligned)
  *	- end   - end address (exclusive, may not be aligned)
- *	- flags	- vm_area_struct flags describing address space
+ *	- flags	- mm_area flags describing address space
  *
  *	It is assumed that:
  *	- we have a VIPT cache.
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 201ca05436fa..c3d5c874c895 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -238,7 +238,7 @@ SYM_FUNC_END(v7_flush_user_cache_all)
  *
  *	- start - start address (may not be aligned)
  *	- end   - end address (exclusive, may not be aligned)
- *	- flags	- vm_area_struct flags describing address space
+ *	- flags	- mm_area flags describing address space
  *
  *	It is assumed that:
  *	- we have a VIPT cache.
diff --git a/arch/arm/mm/cache-v7m.S b/arch/arm/mm/cache-v7m.S
index 14d719eba729..611e0c7c4875 100644
--- a/arch/arm/mm/cache-v7m.S
+++ b/arch/arm/mm/cache-v7m.S
@@ -263,7 +263,7 @@ SYM_FUNC_END(v7m_flush_user_cache_all)
  *
  *	- start - start address (may not be aligned)
  *	- end   - end address (exclusive, may not be aligned)
- *	- flags	- vm_area_struct flags describing address space
+ *	- flags	- mm_area flags describing address space
  *
  *	It is assumed that:
  *	- we have a VIPT cache.
diff --git a/arch/arm/mm/copypage-fa.c b/arch/arm/mm/copypage-fa.c
index 7e28c26f5aa4..6620d7e4ef45 100644
--- a/arch/arm/mm/copypage-fa.c
+++ b/arch/arm/mm/copypage-fa.c
@@ -36,7 +36,7 @@ static void fa_copy_user_page(void *kto, const void *kfrom)
 }
 
 void fa_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
diff --git a/arch/arm/mm/copypage-feroceon.c b/arch/arm/mm/copypage-feroceon.c
index 5fc8ef1e665f..c2b763bb8b94 100644
--- a/arch/arm/mm/copypage-feroceon.c
+++ b/arch/arm/mm/copypage-feroceon.c
@@ -64,7 +64,7 @@ static void feroceon_copy_user_page(void *kto, const void *kfrom)
 }
 
 void feroceon_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
index 7ddd82b9fe8b..c151e91373b7 100644
--- a/arch/arm/mm/copypage-v4mc.c
+++ b/arch/arm/mm/copypage-v4mc.c
@@ -62,7 +62,7 @@ static void mc_copy_user_page(void *from, void *to)
 }
 
 void v4_mc_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	struct folio *src = page_folio(from);
 	void *kto = kmap_atomic(to);
diff --git a/arch/arm/mm/copypage-v4wb.c b/arch/arm/mm/copypage-v4wb.c
index c3581b226459..04541e74d6a6 100644
--- a/arch/arm/mm/copypage-v4wb.c
+++ b/arch/arm/mm/copypage-v4wb.c
@@ -45,7 +45,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
 }
 
 void v4wb_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
diff --git a/arch/arm/mm/copypage-v4wt.c b/arch/arm/mm/copypage-v4wt.c
index 1fb10733305a..68cafffaeba6 100644
--- a/arch/arm/mm/copypage-v4wt.c
+++ b/arch/arm/mm/copypage-v4wt.c
@@ -41,7 +41,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
 }
 
 void v4wt_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c
index a1a71f36d850..dff1dd0f9e98 100644
--- a/arch/arm/mm/copypage-v6.c
+++ b/arch/arm/mm/copypage-v6.c
@@ -28,7 +28,7 @@ static DEFINE_RAW_SPINLOCK(v6_lock);
  * attack the kernel's existing mapping of these pages.
  */
 static void v6_copy_user_highpage_nonaliasing(struct page *to,
-	struct page *from, unsigned long vaddr, struct vm_area_struct *vma)
+	struct page *from, unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
@@ -67,7 +67,7 @@ static void discard_old_kernel_data(void *kto)
  * Copy the page, taking account of the cache colour.
  */
 static void v6_copy_user_highpage_aliasing(struct page *to,
-	struct page *from, unsigned long vaddr, struct vm_area_struct *vma)
+	struct page *from, unsigned long vaddr, struct mm_area *vma)
 {
 	struct folio *src = page_folio(from);
 	unsigned int offset = CACHE_COLOUR(vaddr);
diff --git a/arch/arm/mm/copypage-xsc3.c b/arch/arm/mm/copypage-xsc3.c
index c86e79677ff9..4f866b2aba21 100644
--- a/arch/arm/mm/copypage-xsc3.c
+++ b/arch/arm/mm/copypage-xsc3.c
@@ -62,7 +62,7 @@ static void xsc3_mc_copy_user_page(void *kto, const void *kfrom)
 }
 
 void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c
index f1e29d3e8193..dcc5b53e7d8a 100644
--- a/arch/arm/mm/copypage-xscale.c
+++ b/arch/arm/mm/copypage-xscale.c
@@ -82,7 +82,7 @@ static void mc_copy_user_page(void *from, void *to)
 }
 
 void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	struct folio *src = page_folio(from);
 	void *kto = kmap_atomic(to);
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 88c2d68a69c9..88ec2665d5d9 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1112,7 +1112,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	return NULL;
 }
 
-static int arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+static int arm_iommu_mmap_attrs(struct device *dev, struct mm_area *vma,
 		    void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		    unsigned long attrs)
 {
diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
index 39fd5df73317..4717aa3256bb 100644
--- a/arch/arm/mm/fault-armv.c
+++ b/arch/arm/mm/fault-armv.c
@@ -33,7 +33,7 @@ static pteval_t shared_pte_mask = L_PTE_MT_BUFFERABLE;
  * Therefore those configurations which might call adjust_pte (those
  * without CONFIG_CPU_CACHE_VIPT) cannot support split page_table_lock.
  */
-static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
+static int do_adjust_pte(struct mm_area *vma, unsigned long address,
 	unsigned long pfn, pte_t *ptep)
 {
 	pte_t entry = *ptep;
@@ -61,7 +61,7 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
 	return ret;
 }
 
-static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
+static int adjust_pte(struct mm_area *vma, unsigned long address,
 		      unsigned long pfn, bool need_lock)
 {
 	spinlock_t *ptl;
@@ -121,13 +121,13 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
 }
 
 static void
-make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
+make_coherent(struct address_space *mapping, struct mm_area *vma,
 	      unsigned long addr, pte_t *ptep, unsigned long pfn)
 {
 	const unsigned long pmd_start_addr = ALIGN_DOWN(addr, PMD_SIZE);
 	const unsigned long pmd_end_addr = pmd_start_addr + PMD_SIZE;
 	struct mm_struct *mm = vma->vm_mm;
-	struct vm_area_struct *mpnt;
+	struct mm_area *mpnt;
 	unsigned long offset;
 	pgoff_t pgoff;
 	int aliases = 0;
@@ -184,7 +184,7 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
  *
  * Note that the pte lock will be held.
  */
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index ab01b51de559..b89935868510 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -264,7 +264,7 @@ static int __kprobes
 do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int sig, code;
 	vm_fault_t fault;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
index 0749cf8a6637..8b674a426eae 100644
--- a/arch/arm/mm/flush.c
+++ b/arch/arm/mm/flush.c
@@ -76,7 +76,7 @@ void flush_cache_mm(struct mm_struct *mm)
 	}
 }
 
-void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	if (cache_is_vivt()) {
 		vivt_flush_cache_range(vma, start, end);
@@ -95,7 +95,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 		__flush_icache_all();
 }
 
-void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr)
+void flush_cache_pages(struct mm_area *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr)
 {
 	if (cache_is_vivt()) {
 		vivt_flush_cache_pages(vma, user_addr, pfn, nr);
@@ -156,7 +156,7 @@ void __flush_ptrace_access(struct page *page, unsigned long uaddr, void *kaddr,
 }
 
 static
-void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
+void flush_ptrace_access(struct mm_area *vma, struct page *page,
 			 unsigned long uaddr, void *kaddr, unsigned long len)
 {
 	unsigned int flags = 0;
@@ -182,7 +182,7 @@ void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
  *
  * Note that this code needs to run on the current CPU.
  */
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		       unsigned long uaddr, void *dst, const void *src,
 		       unsigned long len)
 {
@@ -238,7 +238,7 @@ void __flush_dcache_folio(struct address_space *mapping, struct folio *folio)
 static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio)
 {
 	struct mm_struct *mm = current->active_mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	pgoff_t pgoff, pgoff_end;
 
 	/*
@@ -378,8 +378,8 @@ EXPORT_SYMBOL(flush_dcache_page);
  *  memcpy() to/from page
  *  if written to page, flush_dcache_page()
  */
-void __flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr);
-void __flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
+void __flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr);
+void __flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr)
 {
 	unsigned long pfn;
 
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 3dbb383c26d5..4077f5184814 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -32,7 +32,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int do_align = 0;
 	int aliasing = cache_is_vipt_aliasing();
 	struct vm_unmapped_area_info info = {};
@@ -82,7 +82,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		        const unsigned long len, const unsigned long pgoff,
 		        const unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_align = 0;
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index d638cc87807e..57b8172a4830 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -189,7 +189,7 @@ void flush_dcache_page(struct page *page)
 }
 EXPORT_SYMBOL(flush_dcache_page);
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		       unsigned long uaddr, void *dst, const void *src,
 		       unsigned long len)
 {
diff --git a/arch/arm/mm/tlb-v6.S b/arch/arm/mm/tlb-v6.S
index 8256a67ac654..d4481f9f0757 100644
--- a/arch/arm/mm/tlb-v6.S
+++ b/arch/arm/mm/tlb-v6.S
@@ -27,7 +27,7 @@
  *
  *	- start - start address (may not be aligned)
  *	- end   - end address (exclusive, may not be aligned)
- *	- vma   - vm_area_struct describing address range
+ *	- vma   - mm_area describing address range
  *
  *	It is assumed that:
  *	- the "Invalidate single entry" instruction will invalidate
diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S
index f1aa0764a2cc..28490bba1cf0 100644
--- a/arch/arm/mm/tlb-v7.S
+++ b/arch/arm/mm/tlb-v7.S
@@ -26,7 +26,7 @@
  *
  *	- start - start address (may not be aligned)
  *	- end   - end address (exclusive, may not be aligned)
- *	- vma   - vm_area_struct describing address range
+ *	- vma   - mm_area describing address range
  *
  *	It is assumed that:
  *	- the "Invalidate single entry" instruction will invalidate
diff --git a/arch/arm/mm/tlb.c b/arch/arm/mm/tlb.c
index 42359793120b..57a2184da8ae 100644
--- a/arch/arm/mm/tlb.c
+++ b/arch/arm/mm/tlb.c
@@ -6,7 +6,7 @@
 #include <asm/tlbflush.h>
 
 #ifdef CONFIG_CPU_TLB_V4WT
-void v4_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+void v4_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 void v4_flush_kern_tlb_range(unsigned long, unsigned long);
 
 struct cpu_tlb_fns v4_tlb_fns __initconst = {
@@ -17,7 +17,7 @@ struct cpu_tlb_fns v4_tlb_fns __initconst = {
 #endif
 
 #ifdef CONFIG_CPU_TLB_V4WB
-void v4wb_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+void v4wb_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 void v4wb_flush_kern_tlb_range(unsigned long, unsigned long);
 
 struct cpu_tlb_fns v4wb_tlb_fns __initconst = {
@@ -28,7 +28,7 @@ struct cpu_tlb_fns v4wb_tlb_fns __initconst = {
 #endif
 
 #if defined(CONFIG_CPU_TLB_V4WBI) || defined(CONFIG_CPU_TLB_FEROCEON)
-void v4wbi_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+void v4wbi_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 void v4wbi_flush_kern_tlb_range(unsigned long, unsigned long);
 
 struct cpu_tlb_fns v4wbi_tlb_fns __initconst = {
@@ -39,7 +39,7 @@ struct cpu_tlb_fns v4wbi_tlb_fns __initconst = {
 #endif
 
 #ifdef CONFIG_CPU_TLB_V6
-void v6wbi_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+void v6wbi_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 void v6wbi_flush_kern_tlb_range(unsigned long, unsigned long);
 
 struct cpu_tlb_fns v6wbi_tlb_fns __initconst = {
@@ -50,7 +50,7 @@ struct cpu_tlb_fns v6wbi_tlb_fns __initconst = {
 #endif
 
 #ifdef CONFIG_CPU_TLB_V7
-void v7wbi_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+void v7wbi_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 void v7wbi_flush_kern_tlb_range(unsigned long, unsigned long);
 
 struct cpu_tlb_fns v7wbi_tlb_fns __initconst = {
@@ -73,7 +73,7 @@ asm("	.pushsection	\".alt.smp.init\", \"a\"		\n" \
 #endif
 
 #ifdef CONFIG_CPU_TLB_FA
-void fa_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
+void fa_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
 void fa_flush_kern_tlb_range(unsigned long, unsigned long);
 
 struct cpu_tlb_fns fa_tlb_fns __initconst = {
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index a395b6c0aae2..11029e2a5413 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -68,7 +68,7 @@ static __read_mostly phys_addr_t xen_grant_frames;
 uint32_t xen_start_flags;
 EXPORT_SYMBOL(xen_start_flags);
 
-int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
+int xen_unmap_domain_gfn_range(struct mm_area *vma,
 			       int nr, struct page **pages)
 {
 	return xen_xlate_unmap_gfn_range(vma, nr, pages);
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 28ab96e808ef..aaf770ee6d2f 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -109,7 +109,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
  * processes address space.  Really, we want to allow our "user
  * space" model to handle this.
  */
-extern void copy_to_user_page(struct vm_area_struct *, struct page *,
+extern void copy_to_user_page(struct mm_area *, struct page *,
 	unsigned long, void *, const void *, unsigned long);
 #define copy_to_user_page copy_to_user_page
 
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 07fbf5bf85a7..0b84bfffd34e 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -38,7 +38,7 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
 extern void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, pte_t pte, unsigned long sz);
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+extern int huge_ptep_set_access_flags(struct mm_area *vma,
 				      unsigned long addr, pte_t *ptep,
 				      pte_t pte, int dirty);
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
@@ -48,7 +48,7 @@ extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
 				    unsigned long addr, pte_t *ptep);
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+extern pte_t huge_ptep_clear_flush(struct mm_area *vma,
 				   unsigned long addr, pte_t *ptep);
 #define __HAVE_ARCH_HUGE_PTE_CLEAR
 extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
@@ -59,18 +59,18 @@ extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep
 void __init arm64_hugetlb_cma_reserve(void);
 
 #define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
-extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
+extern pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
 					 unsigned long addr, pte_t *ptep);
 
 #define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
-extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+extern void huge_ptep_modify_prot_commit(struct mm_area *vma,
 					 unsigned long addr, pte_t *ptep,
 					 pte_t old_pte, pte_t new_pte);
 
 #include <asm-generic/hugetlb.h>
 
 #define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+static inline void flush_hugetlb_tlb_range(struct mm_area *vma,
 					   unsigned long start,
 					   unsigned long end)
 {
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 0dbe3b29049b..f0f70fb6934e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -307,7 +307,7 @@ static inline unsigned long mm_untag_mask(struct mm_struct *mm)
  * Only enforce protection keys on the current process, because there is no
  * user context to access POR_EL0 for another address space.
  */
-static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+static inline bool arch_vma_access_permitted(struct mm_area *vma,
 		bool write, bool execute, bool foreign)
 {
 	if (!system_supports_poe())
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..d2258e036fae 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -17,19 +17,19 @@
 #include <asm/pgtable-types.h>
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 
 extern void copy_page(void *to, const void *from);
 extern void clear_page(void *to);
 
 void copy_user_highpage(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma);
+			unsigned long vaddr, struct mm_area *vma);
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
-struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
+struct folio *vma_alloc_zeroed_movable_folio(struct mm_area *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
 
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d3b538be1500..914caa15c4c8 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1207,13 +1207,13 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 	return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
 }
 
-extern int __ptep_set_access_flags(struct vm_area_struct *vma,
+extern int __ptep_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pte_t *ptep,
 				 pte_t entry, int dirty);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
-static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
+static inline int pmdp_set_access_flags(struct mm_area *vma,
 					unsigned long address, pmd_t *pmdp,
 					pmd_t entry, int dirty)
 {
@@ -1252,7 +1252,7 @@ static inline bool pud_user_accessible_page(pud_t pud)
 /*
  * Atomic pte/pmd modifications.
  */
-static inline int __ptep_test_and_clear_young(struct vm_area_struct *vma,
+static inline int __ptep_test_and_clear_young(struct mm_area *vma,
 					      unsigned long address,
 					      pte_t *ptep)
 {
@@ -1269,7 +1269,7 @@ static inline int __ptep_test_and_clear_young(struct vm_area_struct *vma,
 	return pte_young(pte);
 }
 
-static inline int __ptep_clear_flush_young(struct vm_area_struct *vma,
+static inline int __ptep_clear_flush_young(struct mm_area *vma,
 					 unsigned long address, pte_t *ptep)
 {
 	int young = __ptep_test_and_clear_young(vma, address, ptep);
@@ -1291,7 +1291,7 @@ static inline int __ptep_clear_flush_young(struct vm_area_struct *vma,
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+static inline int pmdp_test_and_clear_young(struct mm_area *vma,
 					    unsigned long address,
 					    pmd_t *pmdp)
 {
@@ -1388,7 +1388,7 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
 		__ptep_set_wrprotect(mm, address, ptep);
 }
 
-static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
+static inline void __clear_young_dirty_pte(struct mm_area *vma,
 					   unsigned long addr, pte_t *ptep,
 					   pte_t pte, cydp_t flags)
 {
@@ -1407,7 +1407,7 @@ static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
 	} while (pte_val(pte) != pte_val(old_pte));
 }
 
-static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
+static inline void __clear_young_dirty_ptes(struct mm_area *vma,
 					    unsigned long addr, pte_t *ptep,
 					    unsigned int nr, cydp_t flags)
 {
@@ -1437,7 +1437,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 }
 
 #define pmdp_establish pmdp_establish
-static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+static inline pmd_t pmdp_establish(struct mm_area *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
@@ -1506,7 +1506,7 @@ extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
  * On AArch64, the cache coherency is handled via the __set_ptes() function.
  */
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+		struct mm_area *vma, unsigned long addr, pte_t *ptep,
 		unsigned int nr)
 {
 	/*
@@ -1552,11 +1552,11 @@ static inline bool pud_sect_supported(void)
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
 #define ptep_modify_prot_start ptep_modify_prot_start
-extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
+extern pte_t ptep_modify_prot_start(struct mm_area *vma,
 				    unsigned long addr, pte_t *ptep);
 
 #define ptep_modify_prot_commit ptep_modify_prot_commit
-extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
+extern void ptep_modify_prot_commit(struct mm_area *vma,
 				    unsigned long addr, pte_t *ptep,
 				    pte_t old_pte, pte_t new_pte);
 
@@ -1580,16 +1580,16 @@ extern void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
 extern pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep,
 				unsigned int nr, int full);
-extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
+extern int contpte_ptep_test_and_clear_young(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep);
-extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
+extern int contpte_ptep_clear_flush_young(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep);
 extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, unsigned int nr);
-extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
+extern int contpte_ptep_set_access_flags(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
-extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
+extern void contpte_clear_young_dirty_ptes(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep,
 				unsigned int nr, cydp_t flags);
 
@@ -1747,7 +1747,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+static inline int ptep_test_and_clear_young(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep)
 {
 	pte_t orig_pte = __ptep_get(ptep);
@@ -1759,7 +1759,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+static inline int ptep_clear_flush_young(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep)
 {
 	pte_t orig_pte = __ptep_get(ptep);
@@ -1802,7 +1802,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int ptep_set_access_flags(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty)
 {
@@ -1817,7 +1817,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 }
 
 #define clear_young_dirty_ptes clear_young_dirty_ptes
-static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
+static inline void clear_young_dirty_ptes(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep,
 					  unsigned int nr, cydp_t flags)
 {
diff --git a/arch/arm64/include/asm/pkeys.h b/arch/arm64/include/asm/pkeys.h
index 0ca5f83ce148..14b1d4bfc8c0 100644
--- a/arch/arm64/include/asm/pkeys.h
+++ b/arch/arm64/include/asm/pkeys.h
@@ -20,12 +20,12 @@ static inline bool arch_pkeys_enabled(void)
 	return system_supports_poe();
 }
 
-static inline int vma_pkey(struct vm_area_struct *vma)
+static inline int vma_pkey(struct mm_area *vma)
 {
 	return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
 }
 
-static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
+static inline int arch_override_mprotect_pkey(struct mm_area *vma,
 		int prot, int pkey)
 {
 	if (pkey != -1)
diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index 8d762607285c..31aac313a4b8 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -52,7 +52,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
+	struct mm_area vma = TLB_FLUSH_VMA(tlb->mm, 0);
 	bool last_level = !tlb->freed_tables;
 	unsigned long stride = tlb_get_unmap_size(tlb);
 	int tlb_level = tlb_get_level(tlb);
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index eba1a98657f1..bfed61ba7b05 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -295,13 +295,13 @@ static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
 						(uaddr & PAGE_MASK) + PAGE_SIZE);
 }
 
-static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
+static inline void flush_tlb_page_nosync(struct mm_area *vma,
 					 unsigned long uaddr)
 {
 	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
+static inline void flush_tlb_page(struct mm_area *vma,
 				  unsigned long uaddr)
 {
 	flush_tlb_page_nosync(vma, uaddr);
@@ -472,7 +472,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
-static inline void __flush_tlb_range(struct vm_area_struct *vma,
+static inline void __flush_tlb_range(struct mm_area *vma,
 				     unsigned long start, unsigned long end,
 				     unsigned long stride, bool last_level,
 				     int tlb_level)
@@ -482,7 +482,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 	/*
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 2fbfd27ff5f2..cc561fb4203d 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -422,7 +422,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		return -EIO;
 
 	while (len) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		unsigned long tags, offset;
 		void *maddr;
 		struct page *page = get_user_page_vma_remote(mm, addr,
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 78ddf6bdecad..5e3564b842a4 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -58,7 +58,7 @@ static struct vdso_abi_info vdso_info[] __ro_after_init = {
 };
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-		struct vm_area_struct *new_vma)
+		struct mm_area *new_vma)
 {
 	current->mm->context.vdso = (void *)new_vma->vm_start;
 
@@ -157,7 +157,7 @@ static struct page *aarch32_vectors_page __ro_after_init;
 static struct page *aarch32_sig_page __ro_after_init;
 
 static int aarch32_sigpage_mremap(const struct vm_special_mapping *sm,
-				  struct vm_area_struct *new_vma)
+				  struct mm_area *new_vma)
 {
 	current->mm->context.sigpage = (void *)new_vma->vm_start;
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2feb6c6b63af..54ca059f6a02 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1017,7 +1017,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
 	 *     +--------------------------------------------+
 	 */
 	do {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		hva_t vm_start, vm_end;
 
 		vma = find_vma_intersection(current->mm, hva, reg_end);
@@ -1393,7 +1393,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	return PAGE_SIZE;
 }
 
-static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
+static int get_vma_page_shift(struct mm_area *vma, unsigned long hva)
 {
 	unsigned long pa;
 
@@ -1461,7 +1461,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
 	}
 }
 
-static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
+static bool kvm_vma_mte_allowed(struct mm_area *vma)
 {
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
@@ -1478,7 +1478,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	short vma_shift;
 	void *memcache;
 	gfn_t gfn;
@@ -2190,7 +2190,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 *     +--------------------------------------------+
 	 */
 	do {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		vma = find_vma_intersection(current->mm, hva, reg_end);
 		if (!vma)
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index bcac4f55f9c1..8bec9a656558 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -49,7 +49,7 @@ static void contpte_try_unfold_partial(struct mm_struct *mm, unsigned long addr,
 static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, pte_t pte)
 {
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
 	unsigned long start_addr;
 	pte_t *start_ptep;
 	int i;
@@ -297,7 +297,7 @@ pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(contpte_get_and_clear_full_ptes);
 
-int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
+int contpte_ptep_test_and_clear_young(struct mm_area *vma,
 					unsigned long addr, pte_t *ptep)
 {
 	/*
@@ -322,7 +322,7 @@ int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(contpte_ptep_test_and_clear_young);
 
-int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
+int contpte_ptep_clear_flush_young(struct mm_area *vma,
 					unsigned long addr, pte_t *ptep)
 {
 	int young;
@@ -361,7 +361,7 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
 
-void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
+void contpte_clear_young_dirty_ptes(struct mm_area *vma,
 				    unsigned long addr, pte_t *ptep,
 				    unsigned int nr, cydp_t flags)
 {
@@ -390,7 +390,7 @@ void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
 }
 EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
 
-int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
+int contpte_ptep_set_access_flags(struct mm_area *vma,
 					unsigned long addr, pte_t *ptep,
 					pte_t entry, int dirty)
 {
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a86c897017df..8bb8e592eab4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -61,7 +61,7 @@ void copy_highpage(struct page *to, struct page *from)
 EXPORT_SYMBOL(copy_highpage);
 
 void copy_user_highpage(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma)
+			unsigned long vaddr, struct mm_area *vma)
 {
 	copy_highpage(to, from);
 	flush_dcache_page(to);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ec0a337891dd..340ac8c5bc25 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -210,7 +210,7 @@ static void show_pte(unsigned long addr)
  *
  * Returns whether or not the PTE actually changed.
  */
-int __ptep_set_access_flags(struct vm_area_struct *vma,
+int __ptep_set_access_flags(struct mm_area *vma,
 			    unsigned long address, pte_t *ptep,
 			    pte_t entry, int dirty)
 {
@@ -487,7 +487,7 @@ static void do_bad_area(unsigned long far, unsigned long esr,
 	}
 }
 
-static bool fault_from_pkey(unsigned long esr, struct vm_area_struct *vma,
+static bool fault_from_pkey(unsigned long esr, struct mm_area *vma,
 			unsigned int mm_flags)
 {
 	unsigned long iss2 = ESR_ELx_ISS2(esr);
@@ -526,7 +526,7 @@ static bool is_write_abort(unsigned long esr)
 	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
 }
 
-static bool is_invalid_gcs_access(struct vm_area_struct *vma, u64 esr)
+static bool is_invalid_gcs_access(struct mm_area *vma, u64 esr)
 {
 	if (!system_supports_gcs())
 		return false;
@@ -552,7 +552,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	unsigned long vm_flags;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 	unsigned long addr = untagged_addr(far);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int si_code;
 	int pkey = -1;
 
@@ -1010,7 +1010,7 @@ NOKPROBE_SYMBOL(do_debug_exception);
 /*
  * Used during anonymous page fault handling.
  */
-struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
+struct folio *vma_alloc_zeroed_movable_folio(struct mm_area *vma,
 						unsigned long vaddr)
 {
 	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 013eead9b695..4931bb9d9937 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -29,7 +29,7 @@ void sync_icache_aliases(unsigned long start, unsigned long end)
 	}
 }
 
-static void flush_ptrace_access(struct vm_area_struct *vma, unsigned long start,
+static void flush_ptrace_access(struct mm_area *vma, unsigned long start,
 				unsigned long end)
 {
 	if (vma->vm_flags & VM_EXEC)
@@ -41,7 +41,7 @@ static void flush_ptrace_access(struct vm_area_struct *vma, unsigned long start,
  * address space.  Really, we want to allow our "user space" model to handle
  * this.
  */
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		       unsigned long uaddr, void *dst, const void *src,
 		       unsigned long len)
 {
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index cfe8cb8ba1cc..55246c6e60d0 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -182,7 +182,7 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
 				    unsigned long ncontig)
 {
 	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
 
 	flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
 	return orig_pte;
@@ -203,7 +203,7 @@ static void clear_flush(struct mm_struct *mm,
 			     unsigned long pgsize,
 			     unsigned long ncontig)
 {
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
 	unsigned long i, saddr = addr;
 
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
@@ -244,7 +244,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
 }
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgdp;
@@ -427,7 +427,7 @@ static int __cont_access_flags_changed(pte_t *ptep, pte_t pte, int ncontig)
 	return 0;
 }
 
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty)
 {
@@ -490,7 +490,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
 }
 
-pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+pte_t huge_ptep_clear_flush(struct mm_area *vma,
 			    unsigned long addr, pte_t *ptep)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -534,7 +534,7 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 	return __hugetlb_valid_size(size);
 }
 
-pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+pte_t huge_ptep_modify_prot_start(struct mm_area *vma, unsigned long addr, pte_t *ptep)
 {
 	unsigned long psize = huge_page_size(hstate_vma(vma));
 
@@ -550,7 +550,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr
 	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
 }
 
-void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+void huge_ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr, pte_t *ptep,
 				  pte_t old_pte, pte_t pte)
 {
 	unsigned long psize = huge_page_size(hstate_vma(vma));
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ea6695d53fb9..4945b810f03c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1504,7 +1504,7 @@ static int __init prevent_bootmem_remove_init(void)
 early_initcall(prevent_bootmem_remove_init);
 #endif
 
-pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr, pte_t *ptep)
 {
 	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
 		/*
@@ -1518,7 +1518,7 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte
 	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
 }
 
-void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
+void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr, pte_t *ptep,
 			     pte_t old_pte, pte_t pte)
 {
 	set_pte_at(vma->vm_mm, addr, ptep, pte);
diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
index 171e8fb32285..9253db16358c 100644
--- a/arch/csky/abiv1/cacheflush.c
+++ b/arch/csky/abiv1/cacheflush.c
@@ -41,7 +41,7 @@ void flush_dcache_page(struct page *page)
 }
 EXPORT_SYMBOL(flush_dcache_page);
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
@@ -65,7 +65,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
 	}
 }
 
-void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+void flush_cache_range(struct mm_area *vma, unsigned long start,
 	unsigned long end)
 {
 	dcache_wbinv_all();
diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
index d011a81575d2..be382265c4dc 100644
--- a/arch/csky/abiv1/inc/abi/cacheflush.h
+++ b/arch/csky/abiv1/inc/abi/cacheflush.h
@@ -30,7 +30,7 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
 }
 
 #define ARCH_HAS_FLUSH_ANON_PAGE
-static inline void flush_anon_page(struct vm_area_struct *vma,
+static inline void flush_anon_page(struct mm_area *vma,
 			 struct page *page, unsigned long vmaddr)
 {
 	if (PageAnon(page))
@@ -41,7 +41,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
  * if (current_mm != vma->mm) cache_wbinv_range(start, end) will be broken.
  * Use cache_wbinv_all() here and need to be improved in future.
  */
-extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
 #define flush_cache_vmap(start, end)		cache_wbinv_all()
 #define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		cache_wbinv_all()
diff --git a/arch/csky/abiv1/mmap.c b/arch/csky/abiv1/mmap.c
index 1047865e82a9..587ea707e56a 100644
--- a/arch/csky/abiv1/mmap.c
+++ b/arch/csky/abiv1/mmap.c
@@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int do_align = 0;
 	struct vm_unmapped_area_info info = {
 		.length = len,
diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
index 876028b1083f..9001fc55ca76 100644
--- a/arch/csky/abiv2/cacheflush.c
+++ b/arch/csky/abiv2/cacheflush.c
@@ -7,7 +7,7 @@
 #include <asm/cache.h>
 #include <asm/tlbflush.h>
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *pte, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*pte);
diff --git a/arch/csky/include/asm/page.h b/arch/csky/include/asm/page.h
index 4911d0892b71..bd643891e28a 100644
--- a/arch/csky/include/asm/page.h
+++ b/arch/csky/include/asm/page.h
@@ -43,7 +43,7 @@ struct page;
 
 #include <abi/page.h>
 
-struct vm_area_struct;
+struct mm_area;
 
 typedef struct { unsigned long pte_low; } pte_t;
 #define pte_val(x)	((x).pte_low)
diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
index a397e1718ab6..17de85d6cae5 100644
--- a/arch/csky/include/asm/pgtable.h
+++ b/arch/csky/include/asm/pgtable.h
@@ -263,7 +263,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
 extern void paging_init(void);
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *pte, unsigned int nr);
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
diff --git a/arch/csky/include/asm/tlbflush.h b/arch/csky/include/asm/tlbflush.h
index 407160b4fde7..1bb6e57ee7a5 100644
--- a/arch/csky/include/asm/tlbflush.h
+++ b/arch/csky/include/asm/tlbflush.h
@@ -14,8 +14,8 @@
  */
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_tlb_page(struct mm_area *vma, unsigned long page);
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
diff --git a/arch/csky/kernel/vdso.c b/arch/csky/kernel/vdso.c
index c54d019d66bc..cb26b07cc994 100644
--- a/arch/csky/kernel/vdso.c
+++ b/arch/csky/kernel/vdso.c
@@ -40,7 +40,7 @@ arch_initcall(vdso_init);
 int arch_setup_additional_pages(struct linux_binprm *bprm,
 	int uses_interp)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long vdso_base, vdso_len;
 	int ret;
diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index 5226bc08c336..f64991717a1a 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -168,7 +168,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
 	flush_tlb_one(addr);
 }
 
-static inline bool access_error(struct pt_regs *regs, struct vm_area_struct *vma)
+static inline bool access_error(struct pt_regs *regs, struct mm_area *vma)
 {
 	if (is_write(regs)) {
 		if (!(vma->vm_flags & VM_WRITE))
@@ -187,7 +187,7 @@ static inline bool access_error(struct pt_regs *regs, struct vm_area_struct *vma
 asmlinkage void do_page_fault(struct pt_regs *regs)
 {
 	struct task_struct *tsk;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm;
 	unsigned long addr = read_mmu_entryhi() & PAGE_MASK;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
diff --git a/arch/csky/mm/tlb.c b/arch/csky/mm/tlb.c
index 9234c5e5ceaf..ad8e9be1a714 100644
--- a/arch/csky/mm/tlb.c
+++ b/arch/csky/mm/tlb.c
@@ -49,7 +49,7 @@ do { \
 } while (0)
 #endif
 
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			unsigned long end)
 {
 	unsigned long newpid = cpu_asid(vma->vm_mm);
@@ -132,7 +132,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 #endif
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+void flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	int newpid = cpu_asid(vma->vm_mm);
 
diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
index bfff514a81c8..29c492c45995 100644
--- a/arch/hexagon/include/asm/cacheflush.h
+++ b/arch/hexagon/include/asm/cacheflush.h
@@ -59,7 +59,7 @@ extern void flush_cache_all_hexagon(void);
  *
  */
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	/*  generic_ptrace_pokedata doesn't wind up here, does it?  */
@@ -68,7 +68,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len);
 #define copy_to_user_page copy_to_user_page
 
diff --git a/arch/hexagon/include/asm/tlbflush.h b/arch/hexagon/include/asm/tlbflush.h
index a7c9ab398cab..e79e62a0e132 100644
--- a/arch/hexagon/include/asm/tlbflush.h
+++ b/arch/hexagon/include/asm/tlbflush.h
@@ -23,8 +23,8 @@
  */
 extern void tlb_flush_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
-extern void flush_tlb_range(struct vm_area_struct *vma,
+extern void flush_tlb_page(struct mm_area *vma, unsigned long addr);
+extern void flush_tlb_range(struct mm_area *vma,
 				unsigned long start, unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void flush_tlb_one(unsigned long);
diff --git a/arch/hexagon/kernel/vdso.c b/arch/hexagon/kernel/vdso.c
index 8119084dc519..c4728b6e7b05 100644
--- a/arch/hexagon/kernel/vdso.c
+++ b/arch/hexagon/kernel/vdso.c
@@ -51,7 +51,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	int ret;
 	unsigned long vdso_base;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	static struct vm_special_mapping vdso_mapping = {
 		.name = "[vdso]",
@@ -87,7 +87,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	return ret;
 }
 
-const char *arch_vma_name(struct vm_area_struct *vma)
+const char *arch_vma_name(struct mm_area *vma)
 {
 	if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
 		return "[vdso]";
diff --git a/arch/hexagon/mm/cache.c b/arch/hexagon/mm/cache.c
index 7e46f40c8b54..c16d16954a28 100644
--- a/arch/hexagon/mm/cache.c
+++ b/arch/hexagon/mm/cache.c
@@ -115,7 +115,7 @@ void flush_cache_all_hexagon(void)
 	mb();
 }
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		       unsigned long vaddr, void *dst, void *src, int len)
 {
 	memcpy(dst, src, len);
diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
index 3771fb453898..5eef0342fcaa 100644
--- a/arch/hexagon/mm/vm_fault.c
+++ b/arch/hexagon/mm/vm_fault.c
@@ -36,7 +36,7 @@
  */
 static void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	int si_signo;
 	int si_code = SEGV_MAPERR;
diff --git a/arch/hexagon/mm/vm_tlb.c b/arch/hexagon/mm/vm_tlb.c
index 8b6405e2234b..fee2184306a4 100644
--- a/arch/hexagon/mm/vm_tlb.c
+++ b/arch/hexagon/mm/vm_tlb.c
@@ -23,7 +23,7 @@
  * processors must be induced to flush the copies in their local TLBs,
  * but Hexagon thread-based virtual processors share the same MMU.
  */
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -64,7 +64,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 /*
  * Flush TLB state associated with a page of a vma.
  */
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long vaddr)
+void flush_tlb_page(struct mm_area *vma, unsigned long vaddr)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
index 4dc4b3e04225..6b92e8c42e37 100644
--- a/arch/loongarch/include/asm/hugetlb.h
+++ b/arch/loongarch/include/asm/hugetlb.h
@@ -48,7 +48,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
@@ -67,7 +67,7 @@ static inline int huge_pte_none(pte_t pte)
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int huge_ptep_set_access_flags(struct mm_area *vma,
 					     unsigned long addr,
 					     pte_t *ptep, pte_t pte,
 					     int dirty)
diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm/page.h
index 7368f12b7cb1..d58207b68c4b 100644
--- a/arch/loongarch/include/asm/page.h
+++ b/arch/loongarch/include/asm/page.h
@@ -36,9 +36,9 @@ extern void copy_page(void *to, void *from);
 extern unsigned long shm_align_mask;
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 void copy_user_highpage(struct page *to, struct page *from,
-	      unsigned long vaddr, struct vm_area_struct *vma);
+	      unsigned long vaddr, struct mm_area *vma);
 
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index da346733a1da..8f8764731345 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -63,7 +63,7 @@
 #include <asm/sparsemem.h>
 
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 
 /*
  * ZERO_PAGE is a global shared page that is always zero; used
@@ -438,11 +438,11 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 		     (pgprot_val(newprot) & ~_PAGE_CHG_MASK));
 }
 
-extern void __update_tlb(struct vm_area_struct *vma,
+extern void __update_tlb(struct mm_area *vma,
 			unsigned long address, pte_t *ptep);
 
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	for (;;) {
@@ -459,7 +459,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define update_mmu_tlb_range(vma, addr, ptep, nr) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
 
-static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pmd(struct mm_area *vma,
 			unsigned long address, pmd_t *pmdp)
 {
 	__update_tlb(vma, address, (pte_t *)pmdp);
diff --git a/arch/loongarch/include/asm/tlb.h b/arch/loongarch/include/asm/tlb.h
index e071f5e9e858..38a860530433 100644
--- a/arch/loongarch/include/asm/tlb.h
+++ b/arch/loongarch/include/asm/tlb.h
@@ -139,7 +139,7 @@ static void tlb_flush(struct mmu_gather *tlb);
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	struct vm_area_struct vma;
+	struct mm_area vma;
 
 	vma.vm_mm = tlb->mm;
 	vm_flags_init(&vma, 0);
diff --git a/arch/loongarch/include/asm/tlbflush.h b/arch/loongarch/include/asm/tlbflush.h
index a0785e590681..3cab349279d8 100644
--- a/arch/loongarch/include/asm/tlbflush.h
+++ b/arch/loongarch/include/asm/tlbflush.h
@@ -20,18 +20,18 @@ extern void local_flush_tlb_all(void);
 extern void local_flush_tlb_user(void);
 extern void local_flush_tlb_kernel(void);
 extern void local_flush_tlb_mm(struct mm_struct *mm);
-extern void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern void local_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
 extern void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
-extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void local_flush_tlb_page(struct mm_area *vma, unsigned long page);
 extern void local_flush_tlb_one(unsigned long vaddr);
 
 #ifdef CONFIG_SMP
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long, unsigned long);
+extern void flush_tlb_range(struct mm_area *vma, unsigned long, unsigned long);
 extern void flush_tlb_kernel_range(unsigned long, unsigned long);
-extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
+extern void flush_tlb_page(struct mm_area *, unsigned long);
 extern void flush_tlb_one(unsigned long vaddr);
 
 #else /* CONFIG_SMP */
diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
index 4b24589c0b56..f3cf1633dcc4 100644
--- a/arch/loongarch/kernel/smp.c
+++ b/arch/loongarch/kernel/smp.c
@@ -703,7 +703,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 }
 
 struct flush_tlb_data {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr1;
 	unsigned long addr2;
 };
@@ -715,7 +715,7 @@ static void flush_tlb_range_ipi(void *info)
 	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
@@ -764,7 +764,7 @@ static void flush_tlb_page_ipi(void *info)
 	local_flush_tlb_page(fd->vma, fd->addr1);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	preempt_disable();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) || (current->mm != vma->vm_mm)) {
diff --git a/arch/loongarch/kernel/vdso.c b/arch/loongarch/kernel/vdso.c
index 10cf1608c7b3..a33039241859 100644
--- a/arch/loongarch/kernel/vdso.c
+++ b/arch/loongarch/kernel/vdso.c
@@ -25,7 +25,7 @@
 
 extern char vdso_start[], vdso_end[];
 
-static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
+static int vdso_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
 {
 	current->mm->context.vdso = (void *)(new_vma->vm_start);
 
@@ -79,7 +79,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	int ret;
 	unsigned long size, data_addr, vdso_addr;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct loongarch_vdso_info *info = current->thread.vdso;
 
 	if (mmap_write_lock_killable(mm))
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index deefd9617d00..b61c282fe87b 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -179,7 +179,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
 	unsigned int flags = FAULT_FLAG_DEFAULT;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	vm_fault_t fault;
 
 	if (kprobe_page_fault(regs, current->thread.trap_nr))
diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
index e4068906143b..44d9969da492 100644
--- a/arch/loongarch/mm/hugetlbpage.c
+++ b/arch/loongarch/mm/hugetlbpage.c
@@ -13,7 +13,7 @@
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index fdb7f73ad160..f238502ebed5 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -40,7 +40,7 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_
 EXPORT_SYMBOL(empty_zero_page);
 
 void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *vfrom, *vto;
 
diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
index 1df9e99582cc..438f85199a7b 100644
--- a/arch/loongarch/mm/mmap.c
+++ b/arch/loongarch/mm/mmap.c
@@ -23,7 +23,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	unsigned long flags, enum mmap_allocation_direction dir)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {};
diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
index 3b427b319db2..ec386b53110b 100644
--- a/arch/loongarch/mm/tlb.c
+++ b/arch/loongarch/mm/tlb.c
@@ -54,7 +54,7 @@ void local_flush_tlb_mm(struct mm_struct *mm)
 	preempt_enable();
 }
 
-void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
 	unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -110,7 +110,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	local_irq_restore(flags);
 }
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	int cpu = smp_processor_id();
 
@@ -135,7 +135,7 @@ void local_flush_tlb_one(unsigned long page)
 	invtlb_addr(INVTLB_ADDR_GTRUE_OR_ASID, 0, page);
 }
 
-static void __update_hugetlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+static void __update_hugetlb(struct mm_area *vma, unsigned long address, pte_t *ptep)
 {
 #ifdef CONFIG_HUGETLB_PAGE
 	int idx;
@@ -163,7 +163,7 @@ static void __update_hugetlb(struct vm_area_struct *vma, unsigned long address,
 #endif
 }
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+void __update_tlb(struct mm_area *vma, unsigned long address, pte_t *ptep)
 {
 	int idx;
 	unsigned long flags;
diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
index 9a71b0148461..edf5f643578d 100644
--- a/arch/m68k/include/asm/cacheflush_mm.h
+++ b/arch/m68k/include/asm/cacheflush_mm.h
@@ -204,7 +204,7 @@ static inline void flush_cache_mm(struct mm_struct *mm)
 
 /* flush_cache_range/flush_cache_page must be macros to avoid
    a dependency on linux/mm.h, which includes this file... */
-static inline void flush_cache_range(struct vm_area_struct *vma,
+static inline void flush_cache_range(struct mm_area *vma,
 				     unsigned long start,
 				     unsigned long end)
 {
@@ -212,7 +212,7 @@ static inline void flush_cache_range(struct vm_area_struct *vma,
 	        __flush_cache_030();
 }
 
-static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
+static inline void flush_cache_page(struct mm_area *vma, unsigned long vmaddr, unsigned long pfn)
 {
 	if (vma->vm_mm == current->mm)
 	        __flush_cache_030();
@@ -263,13 +263,13 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
 #define flush_icache_pages(vma, page, nr)	\
 	__flush_pages_to_ram(page_address(page), nr)
 
-extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
+extern void flush_icache_user_page(struct mm_area *vma, struct page *page,
 				    unsigned long addr, int len);
 extern void flush_icache_range(unsigned long address, unsigned long endaddr);
 extern void flush_icache_user_range(unsigned long address,
 		unsigned long endaddr);
 
-static inline void copy_to_user_page(struct vm_area_struct *vma,
+static inline void copy_to_user_page(struct mm_area *vma,
 				     struct page *page, unsigned long vaddr,
 				     void *dst, void *src, int len)
 {
@@ -277,7 +277,7 @@ static inline void copy_to_user_page(struct vm_area_struct *vma,
 	memcpy(dst, src, len);
 	flush_icache_user_page(vma, page, vaddr, len);
 }
-static inline void copy_from_user_page(struct vm_area_struct *vma,
+static inline void copy_from_user_page(struct mm_area *vma,
 				       struct page *page, unsigned long vaddr,
 				       void *dst, void *src, int len)
 {
diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
index dbdf1c2b2f66..fadc4c0e77cc 100644
--- a/arch/m68k/include/asm/pgtable_mm.h
+++ b/arch/m68k/include/asm/pgtable_mm.h
@@ -137,7 +137,7 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
  * they are updated on demand.
  */
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 }
diff --git a/arch/m68k/include/asm/tlbflush.h b/arch/m68k/include/asm/tlbflush.h
index 6d42e2906887..925c19068569 100644
--- a/arch/m68k/include/asm/tlbflush.h
+++ b/arch/m68k/include/asm/tlbflush.h
@@ -81,13 +81,13 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 		__flush_tlb();
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+static inline void flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	if (vma->vm_mm == current->active_mm)
 		__flush_tlb_one(addr);
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 	if (vma->vm_mm == current->active_mm)
@@ -161,7 +161,7 @@ static inline void flush_tlb_mm (struct mm_struct *mm)
 
 /* Flush a single TLB page. In this case, we're limited to flushing a
    single PMEG */
-static inline void flush_tlb_page (struct vm_area_struct *vma,
+static inline void flush_tlb_page (struct mm_area *vma,
 				   unsigned long addr)
 {
 	unsigned char oldctx;
@@ -182,7 +182,7 @@ static inline void flush_tlb_page (struct vm_area_struct *vma,
 }
 /* Flush a range of pages from TLB. */
 
-static inline void flush_tlb_range (struct vm_area_struct *vma,
+static inline void flush_tlb_range (struct mm_area *vma,
 		      unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -252,12 +252,12 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	BUG();
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+static inline void flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	BUG();
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 	BUG();
diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
index 1af5e6082467..cc534ec40930 100644
--- a/arch/m68k/kernel/sys_m68k.c
+++ b/arch/m68k/kernel/sys_m68k.c
@@ -391,7 +391,7 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
 
 		mmap_read_lock(current->mm);
 	} else {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		/* Check for overflow.  */
 		if (addr + len < addr)
diff --git a/arch/m68k/mm/cache.c b/arch/m68k/mm/cache.c
index dde978e66f14..2858f1113768 100644
--- a/arch/m68k/mm/cache.c
+++ b/arch/m68k/mm/cache.c
@@ -96,7 +96,7 @@ void flush_icache_range(unsigned long address, unsigned long endaddr)
 }
 EXPORT_SYMBOL(flush_icache_range);
 
-void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
+void flush_icache_user_page(struct mm_area *vma, struct page *page,
 			     unsigned long addr, int len)
 {
 	if (CPU_IS_COLDFIRE) {
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index fa3c5f38d989..af2e500427fd 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -71,7 +71,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
 			      unsigned long error_code)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct * vma;
+	struct mm_area * vma;
 	vm_fault_t fault;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
 
diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
index ffa2cf3893e4..c509ae39fec5 100644
--- a/arch/microblaze/include/asm/cacheflush.h
+++ b/arch/microblaze/include/asm/cacheflush.h
@@ -85,7 +85,7 @@ static inline void flush_dcache_folio(struct folio *folio)
 #define flush_cache_page(vma, vmaddr, pfn) \
 	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);
 
-static inline void copy_to_user_page(struct vm_area_struct *vma,
+static inline void copy_to_user_page(struct mm_area *vma,
 				     struct page *page, unsigned long vaddr,
 				     void *dst, void *src, int len)
 {
diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
index e4ea2ec3642f..659f30da0029 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -336,8 +336,8 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
 }
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-struct vm_area_struct;
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+struct mm_area;
+static inline int ptep_test_and_clear_young(struct mm_area *vma,
 		unsigned long address, pte_t *ptep)
 {
 	return (pte_update(ptep, _PAGE_ACCESSED, 0) & _PAGE_ACCESSED) != 0;
diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
index a31ae9d44083..88e958108295 100644
--- a/arch/microblaze/include/asm/tlbflush.h
+++ b/arch/microblaze/include/asm/tlbflush.h
@@ -24,10 +24,10 @@ static inline void local_flush_tlb_all(void)
 	{ __tlbia(); }
 static inline void local_flush_tlb_mm(struct mm_struct *mm)
 	{ __tlbia(); }
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+static inline void local_flush_tlb_page(struct mm_area *vma,
 				unsigned long vmaddr)
 	{ __tlbie(vmaddr); }
-static inline void local_flush_tlb_range(struct vm_area_struct *vma,
+static inline void local_flush_tlb_range(struct mm_area *vma,
 		unsigned long start, unsigned long end)
 	{ __tlbia(); }
 
diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
index d3c3c33b73a6..3a0d2463eb4a 100644
--- a/arch/microblaze/mm/fault.c
+++ b/arch/microblaze/mm/fault.c
@@ -86,7 +86,7 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
 void do_page_fault(struct pt_regs *regs, unsigned long address,
 		   unsigned long error_code)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	int code = SEGV_MAPERR;
 	int is_write = error_code & ESR_S;
diff --git a/arch/mips/alchemy/common/setup.c b/arch/mips/alchemy/common/setup.c
index a7a6d31a7a41..b10a34b4a2ce 100644
--- a/arch/mips/alchemy/common/setup.c
+++ b/arch/mips/alchemy/common/setup.c
@@ -94,7 +94,7 @@ phys_addr_t fixup_bigphys_addr(phys_addr_t phys_addr, phys_addr_t size)
 	return phys_addr;
 }
 
-int io_remap_pfn_range(struct vm_area_struct *vma, unsigned long vaddr,
+int io_remap_pfn_range(struct mm_area *vma, unsigned long vaddr,
 		unsigned long pfn, unsigned long size, pgprot_t prot)
 {
 	phys_addr_t phys_addr = fixup_bigphys_addr(pfn << PAGE_SHIFT, size);
diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index 1f14132b3fc9..6a10565c2726 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -47,9 +47,9 @@ extern void (*flush_cache_all)(void);
 extern void (*__flush_cache_all)(void);
 extern void (*flush_cache_mm)(struct mm_struct *mm);
 #define flush_cache_dup_mm(mm)	do { (void) (mm); } while (0)
-extern void (*flush_cache_range)(struct vm_area_struct *vma,
+extern void (*flush_cache_range)(struct mm_area *vma,
 	unsigned long start, unsigned long end);
-extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
+extern void (*flush_cache_page)(struct mm_area *vma, unsigned long page, unsigned long pfn);
 extern void __flush_dcache_pages(struct page *page, unsigned int nr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
@@ -75,7 +75,7 @@ static inline void flush_dcache_page(struct page *page)
 
 #define ARCH_HAS_FLUSH_ANON_PAGE
 extern void __flush_anon_page(struct page *, unsigned long);
-static inline void flush_anon_page(struct vm_area_struct *vma,
+static inline void flush_anon_page(struct mm_area *vma,
 	struct page *page, unsigned long vmaddr)
 {
 	if (cpu_has_dc_aliases && PageAnon(page))
@@ -107,11 +107,11 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 		__flush_cache_vunmap();
 }
 
-extern void copy_to_user_page(struct vm_area_struct *vma,
+extern void copy_to_user_page(struct mm_area *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len);
 
-extern void copy_from_user_page(struct vm_area_struct *vma,
+extern void copy_from_user_page(struct mm_area *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len);
 
diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
index fbc71ddcf0f6..abe7683fc4c4 100644
--- a/arch/mips/include/asm/hugetlb.h
+++ b/arch/mips/include/asm/hugetlb.h
@@ -39,7 +39,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
@@ -63,7 +63,7 @@ static inline int huge_pte_none(pte_t pte)
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int huge_ptep_set_access_flags(struct mm_area *vma,
 					     unsigned long addr,
 					     pte_t *ptep, pte_t pte,
 					     int dirty)
diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
index bc3e3484c1bf..5be4423baee8 100644
--- a/arch/mips/include/asm/page.h
+++ b/arch/mips/include/asm/page.h
@@ -91,9 +91,9 @@ static inline void clear_user_page(void *addr, unsigned long vaddr,
 		flush_data_cache_page((unsigned long)addr);
 }
 
-struct vm_area_struct;
+struct mm_area;
 extern void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma);
+	unsigned long vaddr, struct mm_area *vma);
 
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index c29a551eb0ca..ab28b3855dfc 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -23,7 +23,7 @@
 #include <asm/cpu-features.h>
 
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 
 #define PAGE_SHARED	vm_get_page_prot(VM_READ|VM_WRITE|VM_SHARED)
 
@@ -478,7 +478,7 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot)
 	return __pgprot(prot);
 }
 
-static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
+static inline void flush_tlb_fix_spurious_fault(struct mm_area *vma,
 						unsigned long address,
 						pte_t *ptep)
 {
@@ -491,7 +491,7 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
 }
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int ptep_set_access_flags(struct mm_area *vma,
 					unsigned long address, pte_t *ptep,
 					pte_t entry, int dirty)
 {
@@ -575,11 +575,11 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 }
 #endif
 
-extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
+extern void __update_tlb(struct mm_area *vma, unsigned long address,
 	pte_t pte);
 
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	for (;;) {
@@ -597,7 +597,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
 #define update_mmu_tlb_range(vma, address, ptep, nr) \
 	update_mmu_cache_range(NULL, vma, address, ptep, nr)
 
-static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pmd(struct mm_area *vma,
 	unsigned long address, pmd_t *pmdp)
 {
 	pte_t pte = *(pte_t *)pmdp;
@@ -610,7 +610,7 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
  */
 #ifdef CONFIG_MIPS_FIXUP_BIGPHYS_ADDR
 phys_addr_t fixup_bigphys_addr(phys_addr_t addr, phys_addr_t size);
-int io_remap_pfn_range(struct vm_area_struct *vma, unsigned long vaddr,
+int io_remap_pfn_range(struct mm_area *vma, unsigned long vaddr,
 		unsigned long pfn, unsigned long size, pgprot_t prot);
 #define io_remap_pfn_range io_remap_pfn_range
 #else
diff --git a/arch/mips/include/asm/tlbflush.h b/arch/mips/include/asm/tlbflush.h
index 9789e7a32def..26d11d18b2b4 100644
--- a/arch/mips/include/asm/tlbflush.h
+++ b/arch/mips/include/asm/tlbflush.h
@@ -14,11 +14,11 @@
  *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
  */
 extern void local_flush_tlb_all(void);
-extern void local_flush_tlb_range(struct vm_area_struct *vma,
+extern void local_flush_tlb_range(struct mm_area *vma,
 	unsigned long start, unsigned long end);
 extern void local_flush_tlb_kernel_range(unsigned long start,
 	unsigned long end);
-extern void local_flush_tlb_page(struct vm_area_struct *vma,
+extern void local_flush_tlb_page(struct mm_area *vma,
 	unsigned long page);
 extern void local_flush_tlb_one(unsigned long vaddr);
 
@@ -28,10 +28,10 @@ extern void local_flush_tlb_one(unsigned long vaddr);
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long,
+extern void flush_tlb_range(struct mm_area *vma, unsigned long,
 	unsigned long);
 extern void flush_tlb_kernel_range(unsigned long, unsigned long);
-extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
+extern void flush_tlb_page(struct mm_area *, unsigned long);
 extern void flush_tlb_one(unsigned long vaddr);
 
 #else /* CONFIG_SMP */
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 39e193cad2b9..6f006e89d2f3 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -566,7 +566,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 }
 
 struct flush_tlb_data {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr1;
 	unsigned long addr2;
 };
@@ -578,7 +578,7 @@ static void flush_tlb_range_ipi(void *info)
 	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long addr;
@@ -652,7 +652,7 @@ static void flush_tlb_page_ipi(void *info)
 	local_flush_tlb_page(fd->vma, fd->addr1);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	u32 old_mmid;
 
diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
index de096777172f..4ab46161d876 100644
--- a/arch/mips/kernel/vdso.c
+++ b/arch/mips/kernel/vdso.c
@@ -79,7 +79,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	struct mips_vdso_image *image = current->thread.abi->vdso;
 	struct mm_struct *mm = current->mm;
 	unsigned long gic_size, size, base, data_addr, vdso_addr, gic_pfn, gic_base;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret;
 
 	if (mmap_write_lock_killable(mm))
diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c
index b7393b61cfa7..ba064d76dd1b 100644
--- a/arch/mips/mm/c-octeon.c
+++ b/arch/mips/mm/c-octeon.c
@@ -60,7 +60,7 @@ static void local_octeon_flush_icache_range(unsigned long start,
  *
  * @vma:    VMA to flush or NULL to flush all icaches.
  */
-static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
+static void octeon_flush_icache_all_cores(struct mm_area *vma)
 {
 	extern void octeon_send_ipi_single(int cpu, unsigned int action);
 #ifdef CONFIG_SMP
@@ -136,7 +136,7 @@ static void octeon_flush_icache_range(unsigned long start, unsigned long end)
  * @start:  beginning address for flush
  * @end:    ending address for flush
  */
-static void octeon_flush_cache_range(struct vm_area_struct *vma,
+static void octeon_flush_cache_range(struct mm_area *vma,
 				     unsigned long start, unsigned long end)
 {
 	if (vma->vm_flags & VM_EXEC)
@@ -151,7 +151,7 @@ static void octeon_flush_cache_range(struct vm_area_struct *vma,
  * @page:   Page to flush
  * @pfn:    Page frame number
  */
-static void octeon_flush_cache_page(struct vm_area_struct *vma,
+static void octeon_flush_cache_page(struct mm_area *vma,
 				    unsigned long page, unsigned long pfn)
 {
 	if (vma->vm_flags & VM_EXEC)
diff --git a/arch/mips/mm/c-r3k.c b/arch/mips/mm/c-r3k.c
index 5869df848fab..c97e789bb9cb 100644
--- a/arch/mips/mm/c-r3k.c
+++ b/arch/mips/mm/c-r3k.c
@@ -228,12 +228,12 @@ static void r3k_flush_cache_mm(struct mm_struct *mm)
 {
 }
 
-static void r3k_flush_cache_range(struct vm_area_struct *vma,
+static void r3k_flush_cache_range(struct mm_area *vma,
 				  unsigned long start, unsigned long end)
 {
 }
 
-static void r3k_flush_cache_page(struct vm_area_struct *vma,
+static void r3k_flush_cache_page(struct mm_area *vma,
 				 unsigned long addr, unsigned long pfn)
 {
 	unsigned long kaddr = KSEG0ADDR(pfn << PAGE_SHIFT);
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index 10413b6f6662..d2e65e6548e4 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -469,7 +469,7 @@ static void r4k__flush_cache_vunmap(void)
  */
 static inline void local_r4k_flush_cache_range(void * args)
 {
-	struct vm_area_struct *vma = args;
+	struct mm_area *vma = args;
 	int exec = vma->vm_flags & VM_EXEC;
 
 	if (!has_valid_asid(vma->vm_mm, R4K_INDEX))
@@ -487,7 +487,7 @@ static inline void local_r4k_flush_cache_range(void * args)
 		r4k_blast_icache();
 }
 
-static void r4k_flush_cache_range(struct vm_area_struct *vma,
+static void r4k_flush_cache_range(struct mm_area *vma,
 	unsigned long start, unsigned long end)
 {
 	int exec = vma->vm_flags & VM_EXEC;
@@ -529,7 +529,7 @@ static void r4k_flush_cache_mm(struct mm_struct *mm)
 }
 
 struct flush_cache_page_args {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr;
 	unsigned long pfn;
 };
@@ -537,7 +537,7 @@ struct flush_cache_page_args {
 static inline void local_r4k_flush_cache_page(void *args)
 {
 	struct flush_cache_page_args *fcp_args = args;
-	struct vm_area_struct *vma = fcp_args->vma;
+	struct mm_area *vma = fcp_args->vma;
 	unsigned long addr = fcp_args->addr;
 	struct page *page = pfn_to_page(fcp_args->pfn);
 	int exec = vma->vm_flags & VM_EXEC;
@@ -605,7 +605,7 @@ static inline void local_r4k_flush_cache_page(void *args)
 	}
 }
 
-static void r4k_flush_cache_page(struct vm_area_struct *vma,
+static void r4k_flush_cache_page(struct mm_area *vma,
 	unsigned long addr, unsigned long pfn)
 {
 	struct flush_cache_page_args args;
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index bf9a37c60e9f..10eba2a62402 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -30,9 +30,9 @@ void (*flush_cache_all)(void);
 void (*__flush_cache_all)(void);
 EXPORT_SYMBOL_GPL(__flush_cache_all);
 void (*flush_cache_mm)(struct mm_struct *mm);
-void (*flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
+void (*flush_cache_range)(struct mm_area *vma, unsigned long start,
 	unsigned long end);
-void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page,
+void (*flush_cache_page)(struct mm_area *vma, unsigned long page,
 	unsigned long pfn);
 void (*flush_icache_range)(unsigned long start, unsigned long end);
 EXPORT_SYMBOL_GPL(flush_icache_range);
diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
index 37fedeaca2e9..a18c0a590a1e 100644
--- a/arch/mips/mm/fault.c
+++ b/arch/mips/mm/fault.c
@@ -39,7 +39,7 @@ int show_unhandled_signals = 1;
 static void __do_page_fault(struct pt_regs *regs, unsigned long write,
 	unsigned long address)
 {
-	struct vm_area_struct * vma = NULL;
+	struct mm_area * vma = NULL;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	const int field = sizeof(unsigned long) * 2;
diff --git a/arch/mips/mm/hugetlbpage.c b/arch/mips/mm/hugetlbpage.c
index 0b9e15555b59..a1b62b2ce516 100644
--- a/arch/mips/mm/hugetlbpage.c
+++ b/arch/mips/mm/hugetlbpage.c
@@ -21,7 +21,7 @@
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index a673d3d68254..69ae87f80ad8 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -161,7 +161,7 @@ void kunmap_coherent(void)
 }
 
 void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	struct folio *src = page_folio(from);
 	void *vfrom, *vto;
@@ -185,7 +185,7 @@ void copy_user_highpage(struct page *to, struct page *from,
 	smp_wmb();
 }
 
-void copy_to_user_page(struct vm_area_struct *vma,
+void copy_to_user_page(struct mm_area *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
@@ -205,7 +205,7 @@ void copy_to_user_page(struct vm_area_struct *vma,
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
 }
 
-void copy_from_user_page(struct vm_area_struct *vma,
+void copy_from_user_page(struct mm_area *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len)
 {
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index 5d2a1225785b..5451673f26d2 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -31,7 +31,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	unsigned long flags, enum mmap_allocation_direction dir)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr = addr0;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {};
diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c
index 173f7b36033b..b43ba28e3a6a 100644
--- a/arch/mips/mm/tlb-r3k.c
+++ b/arch/mips/mm/tlb-r3k.c
@@ -64,7 +64,7 @@ void local_flush_tlb_all(void)
 	local_irq_restore(flags);
 }
 
-void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
 			   unsigned long end)
 {
 	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
@@ -144,7 +144,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	local_irq_restore(flags);
 }
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	int cpu = smp_processor_id();
@@ -176,7 +176,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 	}
 }
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
 {
 	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
 	unsigned long flags;
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 76f3b9c0a9f0..391bc8414146 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -45,7 +45,7 @@ static inline void flush_micro_tlb(void)
 	}
 }
 
-static inline void flush_micro_tlb_vm(struct vm_area_struct *vma)
+static inline void flush_micro_tlb_vm(struct mm_area *vma)
 {
 	if (vma->vm_flags & VM_EXEC)
 		flush_micro_tlb();
@@ -103,7 +103,7 @@ void local_flush_tlb_all(void)
 }
 EXPORT_SYMBOL(local_flush_tlb_all);
 
-void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
 	unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -208,7 +208,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	local_irq_restore(flags);
 }
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	int cpu = smp_processor_id();
 
@@ -290,7 +290,7 @@ void local_flush_tlb_one(unsigned long page)
  * updates the TLB with the new pte(s), and another which also checks
  * for the R4k "end of page" hardware bug and does the needy.
  */
-void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
+void __update_tlb(struct mm_area * vma, unsigned long address, pte_t pte)
 {
 	unsigned long flags;
 	pgd_t *pgdp;
diff --git a/arch/mips/vdso/genvdso.c b/arch/mips/vdso/genvdso.c
index d47412ea6e67..4fdccdfe055d 100644
--- a/arch/mips/vdso/genvdso.c
+++ b/arch/mips/vdso/genvdso.c
@@ -261,7 +261,7 @@ int main(int argc, char **argv)
 	fprintf(out_file, "#include <asm/vdso.h>\n");
 	fprintf(out_file, "static int vdso_mremap(\n");
 	fprintf(out_file, "	const struct vm_special_mapping *sm,\n");
-	fprintf(out_file, "	struct vm_area_struct *new_vma)\n");
+	fprintf(out_file, "	struct mm_area *new_vma)\n");
 	fprintf(out_file, "{\n");
 	fprintf(out_file, "	current->mm->context.vdso =\n");
 	fprintf(out_file, "	(void *)(new_vma->vm_start);\n");
diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
index 81484a776b33..c87da07c790b 100644
--- a/arch/nios2/include/asm/cacheflush.h
+++ b/arch/nios2/include/asm/cacheflush.h
@@ -23,9 +23,9 @@ struct mm_struct;
 extern void flush_cache_all(void);
 extern void flush_cache_mm(struct mm_struct *mm);
 extern void flush_cache_dup_mm(struct mm_struct *mm);
-extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_cache_range(struct mm_area *vma, unsigned long start,
 	unsigned long end);
-extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
+extern void flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
 	unsigned long pfn);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 void flush_dcache_page(struct page *page);
@@ -33,7 +33,7 @@ void flush_dcache_folio(struct folio *folio);
 #define flush_dcache_folio flush_dcache_folio
 
 extern void flush_icache_range(unsigned long start, unsigned long end);
-void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+void flush_icache_pages(struct mm_area *vma, struct page *page,
 		unsigned int nr);
 #define flush_icache_pages flush_icache_pages
 
@@ -41,10 +41,10 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 #define flush_cache_vmap_early(start, end)	do { } while (0)
 #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)
 
-extern void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+extern void copy_to_user_page(struct mm_area *vma, struct page *page,
 				unsigned long user_vaddr,
 				void *dst, void *src, int len);
-extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+extern void copy_from_user_page(struct mm_area *vma, struct page *page,
 				unsigned long user_vaddr,
 				void *dst, void *src, int len);
 
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index eab87c6beacb..558eda85615e 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -285,7 +285,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 extern void __init paging_init(void);
 extern void __init mmu_init(void);
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr);
 
 #define update_mmu_cache(vma, addr, ptep) \
diff --git a/arch/nios2/include/asm/tlbflush.h b/arch/nios2/include/asm/tlbflush.h
index 362d6da09d02..913f409d9777 100644
--- a/arch/nios2/include/asm/tlbflush.h
+++ b/arch/nios2/include/asm/tlbflush.h
@@ -23,11 +23,11 @@ struct mm_struct;
  */
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
+static inline void flush_tlb_page(struct mm_area *vma,
 				  unsigned long address)
 {
 	flush_tlb_range(vma, address, address + PAGE_SIZE);
@@ -38,7 +38,7 @@ static inline void flush_tlb_kernel_page(unsigned long address)
 	flush_tlb_kernel_range(address, address + PAGE_SIZE);
 }
 
-extern void reload_tlb_page(struct vm_area_struct *vma, unsigned long addr,
+extern void reload_tlb_page(struct mm_area *vma, unsigned long addr,
 			    pte_t pte);
 
 #endif /* _ASM_NIOS2_TLBFLUSH_H */
diff --git a/arch/nios2/kernel/sys_nios2.c b/arch/nios2/kernel/sys_nios2.c
index b1ca85699952..7c275dff5822 100644
--- a/arch/nios2/kernel/sys_nios2.c
+++ b/arch/nios2/kernel/sys_nios2.c
@@ -21,7 +21,7 @@
 asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len,
 				unsigned int op)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 
 	if (len == 0)
diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 0ee9c5f02e08..357ea747ea3d 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -74,7 +74,7 @@ static void __flush_icache(unsigned long start, unsigned long end)
 static void flush_aliases(struct address_space *mapping, struct folio *folio)
 {
 	struct mm_struct *mm = current->active_mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long flags;
 	pgoff_t pgoff;
 	unsigned long nr = folio_nr_pages(folio);
@@ -131,7 +131,7 @@ void invalidate_dcache_range(unsigned long start, unsigned long end)
 }
 EXPORT_SYMBOL(invalidate_dcache_range);
 
-void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+void flush_cache_range(struct mm_area *vma, unsigned long start,
 			unsigned long end)
 {
 	__flush_dcache(start, end);
@@ -139,7 +139,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		__flush_icache(start, end);
 }
 
-void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+void flush_icache_pages(struct mm_area *vma, struct page *page,
 		unsigned int nr)
 {
 	unsigned long start = (unsigned long) page_address(page);
@@ -149,7 +149,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 	__flush_icache(start, end);
 }
 
-void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
+void flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
 			unsigned long pfn)
 {
 	unsigned long start = vmaddr;
@@ -206,7 +206,7 @@ void flush_dcache_page(struct page *page)
 }
 EXPORT_SYMBOL(flush_dcache_page);
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	pte_t pte = *ptep;
@@ -258,7 +258,7 @@ void clear_user_page(void *addr, unsigned long vaddr, struct page *page)
 	__flush_icache((unsigned long)addr, (unsigned long)addr + PAGE_SIZE);
 }
 
-void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_from_user_page(struct mm_area *vma, struct page *page,
 			unsigned long user_vaddr,
 			void *dst, void *src, int len)
 {
@@ -269,7 +269,7 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
 		__flush_icache((unsigned long)src, (unsigned long)src + len);
 }
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 			unsigned long user_vaddr,
 			void *dst, void *src, int len)
 {
diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
index e3fa9c15181d..7901f945202e 100644
--- a/arch/nios2/mm/fault.c
+++ b/arch/nios2/mm/fault.c
@@ -43,7 +43,7 @@
 asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
 				unsigned long address)
 {
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	int code = SEGV_MAPERR;
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 94efa3de3933..8f5a08ff465d 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -96,7 +96,7 @@ arch_initcall(alloc_kuser_page);
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mmap_write_lock(mm);
 
@@ -110,7 +110,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	return IS_ERR(vma) ? PTR_ERR(vma) : 0;
 }
 
-const char *arch_vma_name(struct vm_area_struct *vma)
+const char *arch_vma_name(struct mm_area *vma)
 {
 	return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
 }
diff --git a/arch/nios2/mm/tlb.c b/arch/nios2/mm/tlb.c
index f90ac35f05f3..749b4fd052cf 100644
--- a/arch/nios2/mm/tlb.c
+++ b/arch/nios2/mm/tlb.c
@@ -99,7 +99,7 @@ static void reload_tlb_one_pid(unsigned long addr, unsigned long mmu_pid, pte_t
 	replace_tlb_one_pid(addr, mmu_pid, pte_val(pte));
 }
 
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			unsigned long end)
 {
 	unsigned long mmu_pid = get_pid_from_context(&vma->vm_mm->context);
@@ -110,7 +110,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
 	}
 }
 
-void reload_tlb_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
+void reload_tlb_page(struct mm_area *vma, unsigned long addr, pte_t pte)
 {
 	unsigned long mmu_pid = get_pid_from_context(&vma->vm_mm->context);
 
diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
index 60c6ce7ff2dc..0acc625d0607 100644
--- a/arch/openrisc/include/asm/pgtable.h
+++ b/arch/openrisc/include/asm/pgtable.h
@@ -370,18 +370,18 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 
 extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; /* defined in head.S */
 
-struct vm_area_struct;
+struct mm_area;
 
-static inline void update_tlb(struct vm_area_struct *vma,
+static inline void update_tlb(struct mm_area *vma,
 	unsigned long address, pte_t *pte)
 {
 }
 
-extern void update_cache(struct vm_area_struct *vma,
+extern void update_cache(struct mm_area *vma,
 	unsigned long address, pte_t *pte);
 
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	update_tlb(vma, address, ptep);
diff --git a/arch/openrisc/include/asm/tlbflush.h b/arch/openrisc/include/asm/tlbflush.h
index dbf030365ab4..4773da3c2d29 100644
--- a/arch/openrisc/include/asm/tlbflush.h
+++ b/arch/openrisc/include/asm/tlbflush.h
@@ -29,9 +29,9 @@
  */
 extern void local_flush_tlb_all(void);
 extern void local_flush_tlb_mm(struct mm_struct *mm);
-extern void local_flush_tlb_page(struct vm_area_struct *vma,
+extern void local_flush_tlb_page(struct mm_area *vma,
 				 unsigned long addr);
-extern void local_flush_tlb_range(struct vm_area_struct *vma,
+extern void local_flush_tlb_range(struct mm_area *vma,
 				  unsigned long start,
 				  unsigned long end);
 
@@ -43,8 +43,8 @@ extern void local_flush_tlb_range(struct vm_area_struct *vma,
 #else
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_tlb_page(struct mm_area *vma, unsigned long addr);
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end);
 #endif
 
diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
index 86da4bc5ee0b..1eb34b914609 100644
--- a/arch/openrisc/kernel/smp.c
+++ b/arch/openrisc/kernel/smp.c
@@ -300,12 +300,12 @@ void flush_tlb_mm(struct mm_struct *mm)
 	smp_flush_tlb_mm(mm_cpumask(mm), mm);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
+void flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
 {
 	smp_flush_tlb_range(mm_cpumask(vma->vm_mm), uaddr, uaddr + PAGE_SIZE);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma,
+void flush_tlb_range(struct mm_area *vma,
 		     unsigned long start, unsigned long end)
 {
 	const struct cpumask *cmask = vma ? mm_cpumask(vma->vm_mm)
diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c
index 7bdd07cfca60..64649f65f943 100644
--- a/arch/openrisc/mm/cache.c
+++ b/arch/openrisc/mm/cache.c
@@ -78,7 +78,7 @@ void local_icache_range_inv(unsigned long start, unsigned long end)
 	cache_loop(start, end, SPR_ICBIR, SPR_UPR_ICP);
 }
 
-void update_cache(struct vm_area_struct *vma, unsigned long address,
+void update_cache(struct mm_area *vma, unsigned long address,
 	pte_t *pte)
 {
 	unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT;
diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
index 29e232d78d82..800bceca3bcd 100644
--- a/arch/openrisc/mm/fault.c
+++ b/arch/openrisc/mm/fault.c
@@ -48,7 +48,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
 {
 	struct task_struct *tsk;
 	struct mm_struct *mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int si_code;
 	vm_fault_t fault;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
diff --git a/arch/openrisc/mm/tlb.c b/arch/openrisc/mm/tlb.c
index 3115f2e4f864..594a5adb8646 100644
--- a/arch/openrisc/mm/tlb.c
+++ b/arch/openrisc/mm/tlb.c
@@ -80,7 +80,7 @@ void local_flush_tlb_all(void)
 #define flush_itlb_page_no_eir(addr) \
 	mtspr_off(SPR_ITLBMR_BASE(0), ITLB_OFFSET(addr), 0);
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	if (have_dtlbeir)
 		flush_dtlb_page_eir(addr);
@@ -93,7 +93,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
 		flush_itlb_page_no_eir(addr);
 }
 
-void local_flush_tlb_range(struct vm_area_struct *vma,
+void local_flush_tlb_range(struct mm_area *vma,
 			   unsigned long start, unsigned long end)
 {
 	int addr;
diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index 8394718870e1..fe13de0d9a12 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -58,7 +58,7 @@ static inline void flush_dcache_page(struct page *page)
 #define flush_dcache_mmap_unlock_irqrestore(mapping, flags)	\
 		xa_unlock_irqrestore(&mapping->i_pages, flags)
 
-void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+void flush_icache_pages(struct mm_area *vma, struct page *page,
 		unsigned int nr);
 #define flush_icache_pages flush_icache_pages
 
@@ -67,17 +67,17 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
 	flush_kernel_icache_range_asm(s,e); 		\
 } while (0)
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		unsigned long user_vaddr, void *dst, void *src, int len);
-void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_from_user_page(struct mm_area *vma, struct page *page,
 		unsigned long user_vaddr, void *dst, void *src, int len);
-void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
+void flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
 		unsigned long pfn);
-void flush_cache_range(struct vm_area_struct *vma,
+void flush_cache_range(struct mm_area *vma,
 		unsigned long start, unsigned long end);
 
 #define ARCH_HAS_FLUSH_ANON_PAGE
-void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr);
+void flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr);
 
 #define ARCH_HAS_FLUSH_ON_KUNMAP
 void kunmap_flush_on_unmap(const void *addr);
diff --git a/arch/parisc/include/asm/hugetlb.h b/arch/parisc/include/asm/hugetlb.h
index 21e9ace17739..f19c029f612b 100644
--- a/arch/parisc/include/asm/hugetlb.h
+++ b/arch/parisc/include/asm/hugetlb.h
@@ -13,7 +13,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, unsigned long sz);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	return *ptep;
@@ -24,7 +24,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 					   unsigned long addr, pte_t *ptep);
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 					     unsigned long addr, pte_t *ptep,
 					     pte_t pte, int dirty);
 
diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h
index 7fd447092630..427bf90b3f98 100644
--- a/arch/parisc/include/asm/page.h
+++ b/arch/parisc/include/asm/page.h
@@ -17,13 +17,13 @@
 #define copy_page(to, from)	copy_page_asm((void *)(to), (void *)(from))
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 
 void clear_page_asm(void *page);
 void copy_page_asm(void *to, void *from);
 #define clear_user_page(vto, vaddr, page) clear_page_asm(vto)
 void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr,
-		struct vm_area_struct *vma);
+		struct mm_area *vma);
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 
 /*
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index babf65751e81..4b59b5fbd85c 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -454,7 +454,7 @@ static inline pte_t ptep_get(pte_t *ptep)
 }
 #define ptep_get ptep_get
 
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+static inline int ptep_test_and_clear_young(struct mm_area *vma, unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
 
@@ -466,8 +466,8 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned
 	return 1;
 }
 
-int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);
-pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);
+int ptep_clear_flush_young(struct mm_area *vma, unsigned long addr, pte_t *ptep);
+pte_t ptep_clear_flush(struct mm_area *vma, unsigned long addr, pte_t *ptep);
 
 struct mm_struct;
 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
diff --git a/arch/parisc/include/asm/tlbflush.h b/arch/parisc/include/asm/tlbflush.h
index 5ffd7c17f593..3683645fd41d 100644
--- a/arch/parisc/include/asm/tlbflush.h
+++ b/arch/parisc/include/asm/tlbflush.h
@@ -61,7 +61,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 #endif
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
+static inline void flush_tlb_page(struct mm_area *vma,
 	unsigned long addr)
 {
 	purge_tlb_entries(vma->vm_mm, addr);
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index db531e58d70e..752562b78d90 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -328,7 +328,7 @@ void disable_sr_hashing(void)
 }
 
 static inline void
-__flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
+__flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
 		   unsigned long physaddr)
 {
 	if (!static_branch_likely(&parisc_has_cache))
@@ -390,7 +390,7 @@ void kunmap_flush_on_unmap(const void *addr)
 }
 EXPORT_SYMBOL(kunmap_flush_on_unmap);
 
-void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+void flush_icache_pages(struct mm_area *vma, struct page *page,
 		unsigned int nr)
 {
 	void *kaddr = page_address(page);
@@ -473,7 +473,7 @@ static inline unsigned long get_upa(struct mm_struct *mm, unsigned long addr)
 void flush_dcache_folio(struct folio *folio)
 {
 	struct address_space *mapping = folio_flush_mapping(folio);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr, old_addr = 0;
 	void *kaddr;
 	unsigned long count = 0;
@@ -620,7 +620,7 @@ extern void purge_kernel_dcache_page_asm(unsigned long);
 extern void clear_user_page_asm(void *, unsigned long);
 extern void copy_user_page_asm(void *, void *, unsigned long);
 
-static void flush_cache_page_if_present(struct vm_area_struct *vma,
+static void flush_cache_page_if_present(struct mm_area *vma,
 	unsigned long vmaddr)
 {
 #if CONFIG_FLUSH_PAGE_ACCESSED
@@ -645,7 +645,7 @@ static void flush_cache_page_if_present(struct vm_area_struct *vma,
 }
 
 void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	void *kto, *kfrom;
 
@@ -657,7 +657,7 @@ void copy_user_highpage(struct page *to, struct page *from,
 	kunmap_local(kfrom);
 }
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		unsigned long user_vaddr, void *dst, void *src, int len)
 {
 	__flush_cache_page(vma, user_vaddr, PFN_PHYS(page_to_pfn(page)));
@@ -665,7 +665,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 	flush_kernel_dcache_page_addr(PTR_PAGE_ALIGN_DOWN(dst));
 }
 
-void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_from_user_page(struct mm_area *vma, struct page *page,
 		unsigned long user_vaddr, void *dst, void *src, int len)
 {
 	__flush_cache_page(vma, user_vaddr, PFN_PHYS(page_to_pfn(page)));
@@ -702,7 +702,7 @@ int __flush_tlb_range(unsigned long sid, unsigned long start,
 	return 0;
 }
 
-static void flush_cache_pages(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+static void flush_cache_pages(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
@@ -712,7 +712,7 @@ static void flush_cache_pages(struct vm_area_struct *vma, unsigned long start, u
 
 static inline unsigned long mm_total_size(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long usize = 0;
 	VMA_ITERATOR(vmi, mm, 0);
 
@@ -726,7 +726,7 @@ static inline unsigned long mm_total_size(struct mm_struct *mm)
 
 void flush_cache_mm(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	/*
@@ -751,7 +751,7 @@ void flush_cache_mm(struct mm_struct *mm)
 		flush_cache_pages(vma, vma->vm_start, vma->vm_end);
 }
 
-void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	if (!parisc_requires_coherency()
 	    || end - start >= parisc_cache_flush_threshold) {
@@ -768,12 +768,12 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 	flush_cache_pages(vma, start & PAGE_MASK, end);
 }
 
-void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
+void flush_cache_page(struct mm_area *vma, unsigned long vmaddr, unsigned long pfn)
 {
 	__flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
 }
 
-void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
+void flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr)
 {
 	if (!PageAnon(page))
 		return;
@@ -781,7 +781,7 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned lon
 	__flush_cache_page(vma, vmaddr, PFN_PHYS(page_to_pfn(page)));
 }
 
-int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr,
+int ptep_clear_flush_young(struct mm_area *vma, unsigned long addr,
 			   pte_t *ptep)
 {
 	pte_t pte = ptep_get(ptep);
@@ -801,7 +801,7 @@ int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr,
  * can cause random cache corruption. Thus, we must flush the cache
  * as well as the TLB when clearing a PTE that's valid.
  */
-pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr,
+pte_t ptep_clear_flush(struct mm_area *vma, unsigned long addr,
 		       pte_t *ptep)
 {
 	struct mm_struct *mm = (vma)->vm_mm;
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index f852fe274abe..15fd6e8979d7 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -101,7 +101,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
 	unsigned long flags, enum mmap_allocation_direction dir)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	unsigned long filp_pgoff;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {
diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
index b9b3d527bc90..6c26d9c5d7f9 100644
--- a/arch/parisc/kernel/traps.c
+++ b/arch/parisc/kernel/traps.c
@@ -711,7 +711,7 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
 		 */
 
 		if (user_mode(regs)) {
-			struct vm_area_struct *vma;
+			struct mm_area *vma;
 
 			mmap_read_lock(current->mm);
 			vma = find_vma(current->mm,regs->iaoq[0]);
diff --git a/arch/parisc/kernel/vdso.c b/arch/parisc/kernel/vdso.c
index c5cbfce7a84c..f7075a8b3bd1 100644
--- a/arch/parisc/kernel/vdso.c
+++ b/arch/parisc/kernel/vdso.c
@@ -27,7 +27,7 @@ extern char vdso32_start, vdso32_end;
 extern char vdso64_start, vdso64_end;
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-		       struct vm_area_struct *vma)
+		       struct mm_area *vma)
 {
 	current->mm->context.vdso_base = vma->vm_start;
 	return 0;
@@ -56,7 +56,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
 	unsigned long vdso_text_start, vdso_text_len, map_base;
 	struct vm_special_mapping *vdso_mapping;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int rc;
 
 	if (mmap_write_lock_killable(mm))
diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
index c39de84e98b0..c1fbc50fc840 100644
--- a/arch/parisc/mm/fault.c
+++ b/arch/parisc/mm/fault.c
@@ -241,7 +241,7 @@ const char *trap_name(unsigned long code)
 static inline void
 show_signal_msg(struct pt_regs *regs, unsigned long code,
 		unsigned long address, struct task_struct *tsk,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	if (!unhandled_signal(tsk, SIGSEGV))
 		return;
@@ -267,7 +267,7 @@ show_signal_msg(struct pt_regs *regs, unsigned long code,
 void do_page_fault(struct pt_regs *regs, unsigned long code,
 			      unsigned long address)
 {
-	struct vm_area_struct *vma, *prev_vma;
+	struct mm_area *vma, *prev_vma;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
 	unsigned long acc_type;
@@ -454,7 +454,7 @@ handle_nadtlb_fault(struct pt_regs *regs)
 {
 	unsigned long insn = regs->iir;
 	int breg, treg, xreg, val = 0;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
 	unsigned long address;
diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
index a94fe546d434..31fa175e4b67 100644
--- a/arch/parisc/mm/hugetlbpage.c
+++ b/arch/parisc/mm/hugetlbpage.c
@@ -23,7 +23,7 @@
 
 
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
@@ -146,7 +146,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 	__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
 }
 
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t pte, int dirty)
 {
diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 42c3af90d1f0..87c6abe37935 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -325,7 +325,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
 	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 0);
 }
 
-static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
+static inline void __ptep_set_access_flags(struct mm_area *vma,
 					   pte_t *ptep, pte_t entry,
 					   unsigned long address,
 					   int psize)
diff --git a/arch/powerpc/include/asm/book3s/32/tlbflush.h b/arch/powerpc/include/asm/book3s/32/tlbflush.h
index e43534da5207..dd7630bfcab8 100644
--- a/arch/powerpc/include/asm/book3s/32/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/32/tlbflush.h
@@ -9,7 +9,7 @@
  * TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
  */
 void hash__flush_tlb_mm(struct mm_struct *mm);
-void hash__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+void hash__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
 void hash__flush_range(struct mm_struct *mm, unsigned long start, unsigned long end);
 
 #ifdef CONFIG_SMP
@@ -52,7 +52,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 		_tlbia();
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+static inline void flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
 		hash__flush_tlb_page(vma, vmaddr);
@@ -61,7 +61,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmad
 }
 
 static inline void
-flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	flush_range(vma->vm_mm, start, end);
 }
@@ -71,7 +71,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 	flush_range(&init_mm, start, end);
 }
 
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+static inline void local_flush_tlb_page(struct mm_area *vma,
 					unsigned long vmaddr)
 {
 	flush_tlb_page(vma, vmaddr);
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index aa90a048f319..47b4b0ee9aff 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -158,7 +158,7 @@ static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
 extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
 					   unsigned long addr, pmd_t *pmdp,
 					   unsigned long clr, unsigned long set);
-extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+extern pmd_t hash__pmdp_collapse_flush(struct mm_area *vma,
 				   unsigned long address, pmd_t *pmdp);
 extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 					 pgtable_t pgtable);
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 0bf6fd0bf42a..5d42aee48d90 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -170,9 +170,9 @@ extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
 #define pte_pagesize_index(mm, addr, pte)	\
 	(((pte) & H_PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
 
-extern int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
+extern int remap_pfn_range(struct mm_area *, unsigned long addr,
 			   unsigned long pfn, unsigned long size, pgprot_t);
-static inline int hash__remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
+static inline int hash__remap_4k_pfn(struct mm_area *vma, unsigned long addr,
 				 unsigned long pfn, pgprot_t prot)
 {
 	if (pfn > (PTE_RPN_MASK >> PAGE_SHIFT)) {
@@ -271,7 +271,7 @@ static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
 extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
 					   unsigned long addr, pmd_t *pmdp,
 					   unsigned long clr, unsigned long set);
-extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+extern pmd_t hash__pmdp_collapse_flush(struct mm_area *vma,
 				   unsigned long address, pmd_t *pmdp);
 extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 					 pgtable_t pgtable);
diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
index bb786694dd26..212cdb6c7e1f 100644
--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
+++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
@@ -9,10 +9,10 @@
  * both hash and radix to be enabled together we need to workaround the
  * limitations.
  */
-void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
-void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+void radix__flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
+void radix__local_flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
 
-extern void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+extern void radix__huge_ptep_modify_prot_commit(struct mm_area *vma,
 						unsigned long addr, pte_t *ptep,
 						pte_t old_pte, pte_t pte);
 
@@ -50,22 +50,22 @@ static inline bool gigantic_page_runtime_supported(void)
 }
 
 #define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
-extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
+extern pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
 					 unsigned long addr, pte_t *ptep);
 
 #define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
-extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+extern void huge_ptep_modify_prot_commit(struct mm_area *vma,
 					 unsigned long addr, pte_t *ptep,
 					 pte_t old_pte, pte_t new_pte);
 
-static inline void flush_hugetlb_page(struct vm_area_struct *vma,
+static inline void flush_hugetlb_page(struct mm_area *vma,
 				      unsigned long vmaddr)
 {
 	if (radix_enabled())
 		return radix__flush_hugetlb_page(vma, vmaddr);
 }
 
-void flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+void flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
 
 static inline int check_and_get_huge_psize(int shift)
 {
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
index 4d8d7b4ea16b..430ded76ad49 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
@@ -7,7 +7,7 @@
 
 #endif /* CONFIG_HUGETLB_PAGE */
 
-static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
+static inline int remap_4k_pfn(struct mm_area *vma, unsigned long addr,
 			       unsigned long pfn, pgprot_t prot)
 {
 	if (radix_enabled())
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 6d98e6f08d4d..18222f1eab2e 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -722,7 +722,7 @@ static inline bool check_pte_access(unsigned long access, unsigned long ptev)
  * Generic functions with hash/radix callbacks
  */
 
-static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
+static inline void __ptep_set_access_flags(struct mm_area *vma,
 					   pte_t *ptep, pte_t entry,
 					   unsigned long address,
 					   int psize)
@@ -1104,12 +1104,12 @@ extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 extern void set_pud_at(struct mm_struct *mm, unsigned long addr,
 		       pud_t *pudp, pud_t pud);
 
-static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pmd(struct mm_area *vma,
 					unsigned long addr, pmd_t *pmd)
 {
 }
 
-static inline void update_mmu_cache_pud(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pud(struct mm_area *vma,
 					unsigned long addr, pud_t *pud)
 {
 }
@@ -1284,19 +1284,19 @@ static inline pud_t pud_mkhuge(pud_t pud)
 
 
 #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
-extern int pmdp_set_access_flags(struct vm_area_struct *vma,
+extern int pmdp_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pmd_t *pmdp,
 				 pmd_t entry, int dirty);
 #define __HAVE_ARCH_PUDP_SET_ACCESS_FLAGS
-extern int pudp_set_access_flags(struct vm_area_struct *vma,
+extern int pudp_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pud_t *pudp,
 				 pud_t entry, int dirty);
 
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+extern int pmdp_test_and_clear_young(struct mm_area *vma,
 				     unsigned long address, pmd_t *pmdp);
 #define __HAVE_ARCH_PUDP_TEST_AND_CLEAR_YOUNG
-extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
+extern int pudp_test_and_clear_young(struct mm_area *vma,
 				     unsigned long address, pud_t *pudp);
 
 
@@ -1319,7 +1319,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 	return *pudp;
 }
 
-static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+static inline pmd_t pmdp_collapse_flush(struct mm_area *vma,
 					unsigned long address, pmd_t *pmdp)
 {
 	if (radix_enabled())
@@ -1329,12 +1329,12 @@ static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
 #define pmdp_collapse_flush pmdp_collapse_flush
 
 #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
-pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
+pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
 				   unsigned long addr,
 				   pmd_t *pmdp, int full);
 
 #define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR_FULL
-pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
+pud_t pudp_huge_get_and_clear_full(struct mm_area *vma,
 				   unsigned long addr,
 				   pud_t *pudp, int full);
 
@@ -1357,16 +1357,16 @@ static inline pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PMDP_INVALIDATE
-extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+extern pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
 			     pmd_t *pmdp);
-extern pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
+extern pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
 			     pud_t *pudp);
 
 #define pmd_move_must_withdraw pmd_move_must_withdraw
 struct spinlock;
 extern int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
 				  struct spinlock *old_pmd_ptl,
-				  struct vm_area_struct *vma);
+				  struct mm_area *vma);
 /*
  * Hash translation mode uses the deposited table to store hash pte
  * slot information.
@@ -1413,8 +1413,8 @@ static inline int pgd_devmap(pgd_t pgd)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
-pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
-void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
+pte_t ptep_modify_prot_start(struct mm_area *, unsigned long, pte_t *);
+void ptep_modify_prot_commit(struct mm_area *, unsigned long,
 			     pte_t *, pte_t, pte_t);
 
 /*
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 8f55ff74bb68..ffbeb52f4beb 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -143,11 +143,11 @@ extern void radix__mark_rodata_ro(void);
 extern void radix__mark_initmem_nx(void);
 #endif
 
-extern void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
+extern void radix__ptep_set_access_flags(struct mm_area *vma, pte_t *ptep,
 					 pte_t entry, unsigned long address,
 					 int psize);
 
-extern void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
+extern void radix__ptep_modify_prot_commit(struct mm_area *vma,
 					   unsigned long addr, pte_t *ptep,
 					   pte_t old_pte, pte_t pte);
 
@@ -288,7 +288,7 @@ extern unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned l
 extern unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long addr,
 						pud_t *pudp, unsigned long clr,
 						unsigned long set);
-extern pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma,
+extern pmd_t radix__pmdp_collapse_flush(struct mm_area *vma,
 				  unsigned long address, pmd_t *pmdp);
 extern void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 					pgtable_t pgtable);
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
index a38542259fab..369f7d20a25a 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
@@ -8,7 +8,7 @@
 #define RIC_FLUSH_PWC 1
 #define RIC_FLUSH_ALL 2
 
-struct vm_area_struct;
+struct mm_area;
 struct mm_struct;
 struct mmu_gather;
 
@@ -60,30 +60,30 @@ static inline void radix__flush_all_lpid_guest(unsigned int lpid)
 }
 #endif
 
-extern void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+extern void radix__flush_hugetlb_tlb_range(struct mm_area *vma,
 					   unsigned long start, unsigned long end);
 extern void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start,
 					 unsigned long end, int psize);
 void radix__flush_tlb_pwc_range_psize(struct mm_struct *mm, unsigned long start,
 				      unsigned long end, int psize);
-extern void radix__flush_pmd_tlb_range(struct vm_area_struct *vma,
+extern void radix__flush_pmd_tlb_range(struct mm_area *vma,
 				       unsigned long start, unsigned long end);
-extern void radix__flush_pud_tlb_range(struct vm_area_struct *vma,
+extern void radix__flush_pud_tlb_range(struct mm_area *vma,
 				       unsigned long start, unsigned long end);
-extern void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void radix__flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end);
 extern void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 extern void radix__local_flush_tlb_mm(struct mm_struct *mm);
 extern void radix__local_flush_all_mm(struct mm_struct *mm);
-extern void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void radix__local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
 extern void radix__local_flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
 					      int psize);
 extern void radix__tlb_flush(struct mmu_gather *tlb);
 #ifdef CONFIG_SMP
 extern void radix__flush_tlb_mm(struct mm_struct *mm);
 extern void radix__flush_all_mm(struct mm_struct *mm);
-extern void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void radix__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
 extern void radix__flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
 					int psize);
 #else
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
index fd642b729775..73cc7feff758 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -44,7 +44,7 @@ static inline void tlbiel_all_lpid(bool radix)
 
 
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
-static inline void flush_pmd_tlb_range(struct vm_area_struct *vma,
+static inline void flush_pmd_tlb_range(struct mm_area *vma,
 				       unsigned long start, unsigned long end)
 {
 	if (radix_enabled())
@@ -52,7 +52,7 @@ static inline void flush_pmd_tlb_range(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_FLUSH_PUD_TLB_RANGE
-static inline void flush_pud_tlb_range(struct vm_area_struct *vma,
+static inline void flush_pud_tlb_range(struct mm_area *vma,
 				       unsigned long start, unsigned long end)
 {
 	if (radix_enabled())
@@ -60,7 +60,7 @@ static inline void flush_pud_tlb_range(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+static inline void flush_hugetlb_tlb_range(struct mm_area *vma,
 					   unsigned long start,
 					   unsigned long end)
 {
@@ -68,7 +68,7 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
 		radix__flush_hugetlb_tlb_range(vma, start, end);
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 	if (radix_enabled())
@@ -88,7 +88,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
 		radix__local_flush_tlb_mm(mm);
 }
 
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+static inline void local_flush_tlb_page(struct mm_area *vma,
 					unsigned long vmaddr)
 {
 	if (radix_enabled())
@@ -117,7 +117,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 		radix__flush_tlb_mm(mm);
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
+static inline void flush_tlb_page(struct mm_area *vma,
 				  unsigned long vmaddr)
 {
 	if (radix_enabled())
@@ -129,7 +129,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 #endif /* CONFIG_SMP */
 
 #define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault
-static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
+static inline void flush_tlb_fix_spurious_fault(struct mm_area *vma,
 						unsigned long address,
 						pte_t *ptep)
 {
diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h
index f2656774aaa9..a7be13f896ca 100644
--- a/arch/powerpc/include/asm/cacheflush.h
+++ b/arch/powerpc/include/asm/cacheflush.h
@@ -53,7 +53,7 @@ static inline void flush_dcache_page(struct page *page)
 void flush_icache_range(unsigned long start, unsigned long stop);
 #define flush_icache_range flush_icache_range
 
-void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
+void flush_icache_user_page(struct mm_area *vma, struct page *page,
 		unsigned long addr, int len);
 #define flush_icache_user_page flush_icache_user_page
 
diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 86326587e58d..84540436e22c 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -52,7 +52,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
@@ -64,7 +64,7 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty);
 
@@ -72,7 +72,7 @@ void gigantic_hugetlb_cma_reserve(void) __init;
 #include <asm-generic/hugetlb.h>
 
 #else /* ! CONFIG_HUGETLB_PAGE */
-static inline void flush_hugetlb_page(struct vm_area_struct *vma,
+static inline void flush_hugetlb_page(struct mm_area *vma,
 				      unsigned long vmaddr)
 {
 }
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index a157ab513347..9677c3775f7a 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -258,11 +258,11 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
 extern void arch_exit_mmap(struct mm_struct *mm);
 
 #ifdef CONFIG_PPC_MEM_KEYS
-bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
+bool arch_vma_access_permitted(struct mm_area *vma, bool write,
 			       bool execute, bool foreign);
 void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm);
 #else /* CONFIG_PPC_MEM_KEYS */
-static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+static inline bool arch_vma_access_permitted(struct mm_area *vma,
 		bool write, bool execute, bool foreign)
 {
 	/* by default, allow everything */
diff --git a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
index 014799557f60..5f9e81383526 100644
--- a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
@@ -4,7 +4,7 @@
 
 #define PAGE_SHIFT_8M		23
 
-static inline void flush_hugetlb_page(struct vm_area_struct *vma,
+static inline void flush_hugetlb_page(struct mm_area *vma,
 				      unsigned long vmaddr)
 {
 	flush_tlb_page(vma, vmaddr);
diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
index 54ebb91dbdcf..ac6c02a4c26e 100644
--- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
@@ -128,7 +128,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
 }
 #define ptep_set_wrprotect ptep_set_wrprotect
 
-static inline void __ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
+static inline void __ptep_set_access_flags(struct mm_area *vma, pte_t *ptep,
 					   pte_t entry, unsigned long address, int psize)
 {
 	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_EXEC);
diff --git a/arch/powerpc/include/asm/nohash/hugetlb-e500.h b/arch/powerpc/include/asm/nohash/hugetlb-e500.h
index cab0e1f1eea0..788c610b8dff 100644
--- a/arch/powerpc/include/asm/nohash/hugetlb-e500.h
+++ b/arch/powerpc/include/asm/nohash/hugetlb-e500.h
@@ -2,7 +2,7 @@
 #ifndef _ASM_POWERPC_NOHASH_HUGETLB_E500_H
 #define _ASM_POWERPC_NOHASH_HUGETLB_E500_H
 
-void flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+void flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
 
 static inline int check_and_get_huge_psize(int shift)
 {
diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
index 8d1f0b7062eb..0aad651197ef 100644
--- a/arch/powerpc/include/asm/nohash/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/pgtable.h
@@ -99,7 +99,7 @@ static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, p
 }
 #endif
 
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+static inline int ptep_test_and_clear_young(struct mm_area *vma,
 					    unsigned long addr, pte_t *ptep)
 {
 	unsigned long old;
@@ -133,7 +133,7 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
 
 /* Set the dirty and/or accessed bits atomically in a linux PTE */
 #ifndef __ptep_set_access_flags
-static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
+static inline void __ptep_set_access_flags(struct mm_area *vma,
 					   pte_t *ptep, pte_t entry,
 					   unsigned long address,
 					   int psize)
diff --git a/arch/powerpc/include/asm/nohash/tlbflush.h b/arch/powerpc/include/asm/nohash/tlbflush.h
index 9a2cf83ea4f1..8f013d3b3e17 100644
--- a/arch/powerpc/include/asm/nohash/tlbflush.h
+++ b/arch/powerpc/include/asm/nohash/tlbflush.h
@@ -23,12 +23,12 @@
  * specific tlbie's
  */
 
-struct vm_area_struct;
+struct mm_area;
 struct mm_struct;
 
 #define MMU_NO_CONTEXT      	((unsigned int)-1)
 
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end);
 
 #ifdef CONFIG_PPC_8xx
@@ -40,7 +40,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
 		asm volatile ("sync; tlbia; isync" : : : "memory");
 }
 
-static inline void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+static inline void local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	asm volatile ("tlbie %0; sync" : : "r" (vmaddr) : "memory");
 }
@@ -63,7 +63,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 #else
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void local_flush_tlb_mm(struct mm_struct *mm);
-extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
 void local_flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr, int psize);
 
 extern void __local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
@@ -72,7 +72,7 @@ extern void __local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
 
 #ifdef CONFIG_SMP
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
 extern void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
 			     int tsize, int ind);
 #else
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index af9a2628d1df..c5d6d4087e3c 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -280,7 +280,7 @@ void arch_free_page(struct page *page, int order);
 #define HAVE_ARCH_FREE_PAGE
 #endif
 
-struct vm_area_struct;
+struct mm_area;
 
 extern unsigned long kernstart_virt_addr;
 
diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
index 46a9c4491ed0..1fa9e34182b4 100644
--- a/arch/powerpc/include/asm/pci.h
+++ b/arch/powerpc/include/asm/pci.h
@@ -67,7 +67,7 @@ extern int pci_domain_nr(struct pci_bus *bus);
 /* Decide whether to display the domain number in /proc */
 extern int pci_proc_domain(struct pci_bus *bus);
 
-struct vm_area_struct;
+struct mm_area;
 
 /* Tell PCI code what kind of PCI resource mappings we support */
 #define HAVE_PCI_MMAP			1
@@ -80,7 +80,7 @@ extern int pci_legacy_read(struct pci_bus *bus, loff_t port, u32 *val,
 extern int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val,
 			   size_t count);
 extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
-				      struct vm_area_struct *vma,
+				      struct mm_area *vma,
 				      enum pci_mmap_state mmap_state);
 extern void pci_adjust_legacy_attr(struct pci_bus *bus,
 				   enum pci_mmap_state mmap_type);
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 2f72ad885332..d375c25ff925 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -119,7 +119,7 @@ static inline void mark_initmem_nx(void) { }
 #endif
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+int ptep_set_access_flags(struct mm_area *vma, unsigned long address,
 			  pte_t *ptep, pte_t entry, int dirty);
 
 pgprot_t __phys_mem_access_prot(unsigned long pfn, unsigned long size,
@@ -133,7 +133,7 @@ static inline pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn
 }
 #define __HAVE_PHYS_MEM_ACCESS_PROT
 
-void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);
+void __update_mmu_cache(struct mm_area *vma, unsigned long address, pte_t *ptep);
 
 /*
  * This gets called at the end of handling a page fault, when
@@ -145,7 +145,7 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t
  * waiting for the inevitable extra hash-table miss exception.
  */
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	if ((mmu_has_feature(MMU_FTR_HPTE_TABLE) && !radix_enabled()) ||
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 59a2c7dbc78f..b36ac2edf846 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -35,7 +35,7 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
 	return (((u64)pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS);
 }
 
-static inline int vma_pkey(struct vm_area_struct *vma)
+static inline int vma_pkey(struct mm_area *vma)
 {
 	if (!mmu_has_feature(MMU_FTR_PKEY))
 		return 0;
@@ -125,9 +125,9 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
  * execute-only protection key.
  */
 extern int execute_only_pkey(struct mm_struct *mm);
-extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
+extern int __arch_override_mprotect_pkey(struct mm_area *vma,
 					 int prot, int pkey);
-static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
+static inline int arch_override_mprotect_pkey(struct mm_area *vma,
 					      int prot, int pkey)
 {
 	if (!mmu_has_feature(MMU_FTR_PKEY))
diff --git a/arch/powerpc/include/asm/vas.h b/arch/powerpc/include/asm/vas.h
index c36f71e01c0f..086d494bd3d9 100644
--- a/arch/powerpc/include/asm/vas.h
+++ b/arch/powerpc/include/asm/vas.h
@@ -71,7 +71,7 @@ struct vas_user_win_ref {
 	struct mm_struct *mm;	/* Linux process mm_struct */
 	struct mutex mmap_mutex;	/* protects paste address mmap() */
 					/* with DLPAR close/open windows */
-	struct vm_area_struct *vma;	/* Save VMA and used in DLPAR ops */
+	struct mm_area *vma;	/* Save VMA and used in DLPAR ops */
 };
 
 /*
diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
index eac84d687b53..ce9a82d8120f 100644
--- a/arch/powerpc/kernel/pci-common.c
+++ b/arch/powerpc/kernel/pci-common.c
@@ -501,7 +501,7 @@ static int pci_read_irq_line(struct pci_dev *pci_dev)
  * Platform support for /proc/bus/pci/X/Y mmap()s.
  *  -- paulus.
  */
-int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
+int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma)
 {
 	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
 	resource_size_t ioaddr = pci_resource_start(pdev, bar);
@@ -651,7 +651,7 @@ int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val, size_t size)
 
 /* This provides legacy IO or memory mmap access on a bus */
 int pci_mmap_legacy_page_range(struct pci_bus *bus,
-			       struct vm_area_struct *vma,
+			       struct mm_area *vma,
 			       enum pci_mmap_state mmap_state)
 {
 	struct pci_controller *hose = pci_bus_to_host(bus);
diff --git a/arch/powerpc/kernel/proc_powerpc.c b/arch/powerpc/kernel/proc_powerpc.c
index 3816a2bf2b84..c80bc0cb32db 100644
--- a/arch/powerpc/kernel/proc_powerpc.c
+++ b/arch/powerpc/kernel/proc_powerpc.c
@@ -30,7 +30,7 @@ static ssize_t page_map_read( struct file *file, char __user *buf, size_t nbytes
 			pde_data(file_inode(file)), PAGE_SIZE);
 }
 
-static int page_map_mmap( struct file *file, struct vm_area_struct *vma )
+static int page_map_mmap( struct file *file, struct mm_area *vma )
 {
 	if ((vma->vm_end - vma->vm_start) > PAGE_SIZE)
 		return -EINVAL;
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index 219d67bcf747..f6a853ae5dc7 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -42,7 +42,7 @@ extern char vdso64_start, vdso64_end;
 
 long sys_ni_syscall(void);
 
-static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma,
+static int vdso_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma,
 		       unsigned long text_size)
 {
 	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
@@ -55,17 +55,17 @@ static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struc
 	return 0;
 }
 
-static int vdso32_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
+static int vdso32_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
 {
 	return vdso_mremap(sm, new_vma, &vdso32_end - &vdso32_start);
 }
 
-static int vdso64_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
+static int vdso64_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
 {
 	return vdso_mremap(sm, new_vma, &vdso64_end - &vdso64_start);
 }
 
-static void vdso_close(const struct vm_special_mapping *sm, struct vm_area_struct *vma)
+static void vdso_close(const struct vm_special_mapping *sm, struct mm_area *vma)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
@@ -102,7 +102,7 @@ static int __arch_setup_additional_pages(struct linux_binprm *bprm, int uses_int
 	struct vm_special_mapping *vdso_spec;
 	unsigned long vvar_size = VDSO_NR_PAGES * PAGE_SIZE;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if (is_32bit_task()) {
 		vdso_spec = &vdso32_spec;
diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
index 742aa58a7c7e..236d3f95c4dd 100644
--- a/arch/powerpc/kvm/book3s_64_vio.c
+++ b/arch/powerpc/kvm/book3s_64_vio.c
@@ -247,7 +247,7 @@ static const struct vm_operations_struct kvm_spapr_tce_vm_ops = {
 	.fault = kvm_spapr_tce_fault,
 };
 
-static int kvm_spapr_tce_mmap(struct file *file, struct vm_area_struct *vma)
+static int kvm_spapr_tce_mmap(struct file *file, struct mm_area *vma)
 {
 	vma->vm_ops = &kvm_spapr_tce_vm_ops;
 	return 0;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 86bff159c51e..62de957ec6da 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5473,7 +5473,7 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
 	struct kvm *kvm = vcpu->kvm;
 	unsigned long hva;
 	struct kvm_memory_slot *memslot;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long lpcr = 0, senc;
 	unsigned long psize, porder;
 	int srcu_idx;
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 3a6592a31a10..16a49d4b5e47 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -395,7 +395,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
 	unsigned long end, start = gfn_to_hva(kvm, gfn);
 	unsigned long vm_flags;
 	int ret = 0;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int merge_flag = (merge) ? MADV_MERGEABLE : MADV_UNMERGEABLE;
 
 	if (kvm_is_error_hva(start))
@@ -510,7 +510,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
  * from secure memory using UV_PAGE_OUT uvcall.
  * Caller must hold kvm->arch.uvmem_lock.
  */
-static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
+static int __kvmppc_svm_page_out(struct mm_area *vma,
 		unsigned long start,
 		unsigned long end, unsigned long page_shift,
 		struct kvm *kvm, unsigned long gpa, struct page *fault_page)
@@ -583,7 +583,7 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
 	return ret;
 }
 
-static inline int kvmppc_svm_page_out(struct vm_area_struct *vma,
+static inline int kvmppc_svm_page_out(struct mm_area *vma,
 				      unsigned long start, unsigned long end,
 				      unsigned long page_shift,
 				      struct kvm *kvm, unsigned long gpa,
@@ -613,7 +613,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
 	int i;
 	struct kvmppc_uvmem_page_pvt *pvt;
 	struct page *uvmem_page;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	unsigned long uvmem_pfn, gfn;
 	unsigned long addr;
 
@@ -737,7 +737,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
  * Alloc a PFN from private device memory pool. If @pagein is true,
  * copy page from normal memory to secure memory using UV_PAGE_IN uvcall.
  */
-static int kvmppc_svm_page_in(struct vm_area_struct *vma,
+static int kvmppc_svm_page_in(struct mm_area *vma,
 		unsigned long start,
 		unsigned long end, unsigned long gpa, struct kvm *kvm,
 		unsigned long page_shift,
@@ -795,7 +795,7 @@ static int kvmppc_uv_migrate_mem_slot(struct kvm *kvm,
 		const struct kvm_memory_slot *memslot)
 {
 	unsigned long gfn = memslot->base_gfn;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long start, end;
 	int ret = 0;
 
@@ -937,7 +937,7 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 		unsigned long page_shift)
 {
 	unsigned long start, end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int srcu_idx;
 	unsigned long gfn = gpa >> page_shift;
 	int ret;
@@ -1047,7 +1047,7 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 {
 	unsigned long gfn = gpa >> page_shift;
 	unsigned long start, end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int srcu_idx;
 	int ret;
 
diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
index d9bf1bc3ff61..90ff2d0ed2a7 100644
--- a/arch/powerpc/kvm/book3s_xive_native.c
+++ b/arch/powerpc/kvm/book3s_xive_native.c
@@ -227,7 +227,7 @@ static struct kvmppc_xive_ops kvmppc_xive_native_ops =  {
 
 static vm_fault_t xive_native_esb_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct kvm_device *dev = vma->vm_file->private_data;
 	struct kvmppc_xive *xive = dev->private;
 	struct kvmppc_xive_src_block *sb;
@@ -287,7 +287,7 @@ static const struct vm_operations_struct xive_native_esb_vmops = {
 
 static vm_fault_t xive_native_tima_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 
 	switch (vmf->pgoff - vma->vm_pgoff) {
 	case 0: /* HW - forbid access */
@@ -307,7 +307,7 @@ static const struct vm_operations_struct xive_native_tima_vmops = {
 };
 
 static int kvmppc_xive_native_mmap(struct kvm_device *dev,
-				   struct vm_area_struct *vma)
+				   struct mm_area *vma)
 {
 	struct kvmppc_xive *xive = dev->private;
 
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index be9c4106e22f..438af9822627 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -319,7 +319,7 @@ static void hash_preload(struct mm_struct *mm, unsigned long ea)
  *
  * This must always be called with the pte lock held.
  */
-void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+void __update_mmu_cache(struct mm_area *vma, unsigned long address,
 		      pte_t *ptep)
 {
 	/*
diff --git a/arch/powerpc/mm/book3s32/tlb.c b/arch/powerpc/mm/book3s32/tlb.c
index 9ad6b56bfec9..badcf34a99b4 100644
--- a/arch/powerpc/mm/book3s32/tlb.c
+++ b/arch/powerpc/mm/book3s32/tlb.c
@@ -80,7 +80,7 @@ EXPORT_SYMBOL(hash__flush_range);
  */
 void hash__flush_tlb_mm(struct mm_struct *mm)
 {
-	struct vm_area_struct *mp;
+	struct mm_area *mp;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	/*
@@ -94,7 +94,7 @@ void hash__flush_tlb_mm(struct mm_struct *mm)
 }
 EXPORT_SYMBOL(hash__flush_tlb_mm);
 
-void hash__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void hash__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	struct mm_struct *mm;
 	pmd_t *pmd;
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index 988948d69bc1..444a148f54f8 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -220,7 +220,7 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr
 	return old;
 }
 
-pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
+pmd_t hash__pmdp_collapse_flush(struct mm_area *vma, unsigned long address,
 			    pmd_t *pmdp)
 {
 	pmd_t pmd;
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 5158aefe4873..8a135a261f2e 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -2099,7 +2099,7 @@ static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
  *
  * This must always be called with the pte lock held.
  */
-void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
+void __update_mmu_cache(struct mm_area *vma, unsigned long address,
 		      pte_t *ptep)
 {
 	/*
diff --git a/arch/powerpc/mm/book3s64/hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c
index 83c3361b358b..a26f928dbf56 100644
--- a/arch/powerpc/mm/book3s64/hugetlbpage.c
+++ b/arch/powerpc/mm/book3s64/hugetlbpage.c
@@ -135,7 +135,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 }
 #endif
 
-pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
+pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
 				  unsigned long addr, pte_t *ptep)
 {
 	unsigned long pte_val;
@@ -150,7 +150,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
 	return __pte(pte_val);
 }
 
-void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
+void huge_ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
 				  pte_t *ptep, pte_t old_pte, pte_t pte)
 {
 	unsigned long psize;
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index c0e8d597e4cb..fbf8a7ae297a 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -98,7 +98,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 
 	mmap_read_lock(mm);
 	chunk = (1UL << (PAGE_SHIFT + MAX_PAGE_ORDER)) /
-			sizeof(struct vm_area_struct *);
+			sizeof(struct mm_area *);
 	chunk = min(chunk, entries);
 	for (entry = 0; entry < entries; entry += chunk) {
 		unsigned long n = min(entries - entry, chunk);
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 8f7d41ce2ca1..58f7938e9872 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -57,7 +57,7 @@ early_param("kfence.sample_interval", parse_kfence_early_init);
  * handled those two for us, we additionally deal with missing execute
  * permission here on some processors
  */
-int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+int pmdp_set_access_flags(struct mm_area *vma, unsigned long address,
 			  pmd_t *pmdp, pmd_t entry, int dirty)
 {
 	int changed;
@@ -77,7 +77,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 	return changed;
 }
 
-int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+int pudp_set_access_flags(struct mm_area *vma, unsigned long address,
 			  pud_t *pudp, pud_t entry, int dirty)
 {
 	int changed;
@@ -98,13 +98,13 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 }
 
 
-int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+int pmdp_test_and_clear_young(struct mm_area *vma,
 			      unsigned long address, pmd_t *pmdp)
 {
 	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
 }
 
-int pudp_test_and_clear_young(struct vm_area_struct *vma,
+int pudp_test_and_clear_young(struct mm_area *vma,
 			      unsigned long address, pud_t *pudp)
 {
 	return __pudp_test_and_clear_young(vma->vm_mm, address, pudp);
@@ -177,7 +177,7 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
  * We use this to invalidate a pmdp entry before switching from a
  * hugepte to regular pmd entry.
  */
-pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
 		     pmd_t *pmdp)
 {
 	unsigned long old_pmd;
@@ -188,7 +188,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 	return __pmd(old_pmd);
 }
 
-pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
 		      pud_t *pudp)
 {
 	unsigned long old_pud;
@@ -199,7 +199,7 @@ pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
 	return __pud(old_pud);
 }
 
-pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
+pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
 				   unsigned long addr, pmd_t *pmdp, int full)
 {
 	pmd_t pmd;
@@ -217,7 +217,7 @@ pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
 	return pmd;
 }
 
-pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
+pud_t pudp_huge_get_and_clear_full(struct mm_area *vma,
 				   unsigned long addr, pud_t *pudp, int full)
 {
 	pud_t pud;
@@ -534,7 +534,7 @@ void arch_report_meminfo(struct seq_file *m)
 }
 #endif /* CONFIG_PROC_FS */
 
-pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
+pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr,
 			     pte_t *ptep)
 {
 	unsigned long pte_val;
@@ -550,7 +550,7 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
 
 }
 
-void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
+void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
 			     pte_t *ptep, pte_t old_pte, pte_t pte)
 {
 	if (radix_enabled())
@@ -574,7 +574,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
  */
 int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
 			   struct spinlock *old_pmd_ptl,
-			   struct vm_area_struct *vma)
+			   struct mm_area *vma)
 {
 	if (radix_enabled())
 		return (new_pmd_ptl != old_pmd_ptl) && vma_is_anonymous(vma);
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index a974baf8f327..3bdeb406fa0f 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -376,7 +376,7 @@ int execute_only_pkey(struct mm_struct *mm)
 	return mm->context.execute_only_pkey;
 }
 
-static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+static inline bool vma_is_pkey_exec_only(struct mm_area *vma)
 {
 	/* Do this check first since the vm_flags should be hot */
 	if ((vma->vm_flags & VM_ACCESS_FLAGS) != VM_EXEC)
@@ -388,7 +388,7 @@ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
 /*
  * This should only be called for *plain* mprotect calls.
  */
-int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
+int __arch_override_mprotect_pkey(struct mm_area *vma, int prot,
 				  int pkey)
 {
 	/*
@@ -444,7 +444,7 @@ bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
  * So do not enforce things if the VMA is not from the current mm, or if we are
  * in a kernel thread.
  */
-bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
+bool arch_vma_access_permitted(struct mm_area *vma, bool write,
 			       bool execute, bool foreign)
 {
 	if (!mmu_has_feature(MMU_FTR_PKEY))
diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
index 35fd2a95be24..81569a2ec474 100644
--- a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
+++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
@@ -7,7 +7,7 @@
 #include <asm/mman.h>
 #include <asm/tlb.h>
 
-void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void radix__flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	int psize;
 	struct hstate *hstate = hstate_file(vma->vm_file);
@@ -16,7 +16,7 @@ void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
 	radix__flush_tlb_page_psize(vma->vm_mm, vmaddr, psize);
 }
 
-void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void radix__local_flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	int psize;
 	struct hstate *hstate = hstate_file(vma->vm_file);
@@ -25,7 +25,7 @@ void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long v
 	radix__local_flush_tlb_page_psize(vma->vm_mm, vmaddr, psize);
 }
 
-void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void radix__flush_hugetlb_tlb_range(struct mm_area *vma, unsigned long start,
 				   unsigned long end)
 {
 	int psize;
@@ -42,7 +42,7 @@ void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long st
 	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
 }
 
-void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+void radix__huge_ptep_modify_prot_commit(struct mm_area *vma,
 					 unsigned long addr, pte_t *ptep,
 					 pte_t old_pte, pte_t pte)
 {
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 311e2112d782..abb8ee24f4ec 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1439,7 +1439,7 @@ unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long add
 	return old;
 }
 
-pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
+pmd_t radix__pmdp_collapse_flush(struct mm_area *vma, unsigned long address,
 			pmd_t *pmdp)
 
 {
@@ -1528,7 +1528,7 @@ pud_t radix__pudp_huge_get_and_clear(struct mm_struct *mm,
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
+void radix__ptep_set_access_flags(struct mm_area *vma, pte_t *ptep,
 				  pte_t entry, unsigned long address, int psize)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -1570,7 +1570,7 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
 	/* See ptesync comment in radix__set_pte_at */
 }
 
-void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
+void radix__ptep_modify_prot_commit(struct mm_area *vma,
 				    unsigned long addr, pte_t *ptep,
 				    pte_t old_pte, pte_t pte)
 {
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 9e1f6558d026..522515490a77 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -625,7 +625,7 @@ void radix__local_flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmadd
 	preempt_enable();
 }
 
-void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void radix__local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 #ifdef CONFIG_HUGETLB_PAGE
 	/* need the return fix for nohash.c */
@@ -947,7 +947,7 @@ void radix__flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
 	preempt_enable();
 }
 
-void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void radix__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 #ifdef CONFIG_HUGETLB_PAGE
 	if (is_vm_hugetlb_page(vma))
@@ -1114,7 +1114,7 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm,
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }
 
-void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void radix__flush_tlb_range(struct mm_area *vma, unsigned long start,
 		     unsigned long end)
 
 {
@@ -1360,14 +1360,14 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-void radix__flush_pmd_tlb_range(struct vm_area_struct *vma,
+void radix__flush_pmd_tlb_range(struct mm_area *vma,
 				unsigned long start, unsigned long end)
 {
 	radix__flush_tlb_range_psize(vma->vm_mm, start, end, MMU_PAGE_2M);
 }
 EXPORT_SYMBOL(radix__flush_pmd_tlb_range);
 
-void radix__flush_pud_tlb_range(struct vm_area_struct *vma,
+void radix__flush_pud_tlb_range(struct mm_area *vma,
 				unsigned long start, unsigned long end)
 {
 	radix__flush_tlb_range_psize(vma->vm_mm, start, end, MMU_PAGE_1G);
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index 28bec5bc7879..7ea8f4a1046b 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -86,7 +86,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
 static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
 			      unsigned long len)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if ((mm_ctx_slb_addr_limit(&mm->context) - len) < addr)
 		return 0;
@@ -808,7 +808,7 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
 	return !slice_check_range_fits(mm, maskp, addr, len);
 }
 
-unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+unsigned long vma_mmu_pagesize(struct mm_area *vma)
 {
 	/* With radix we don't use slice, so derive it from vma */
 	if (radix_enabled())
diff --git a/arch/powerpc/mm/book3s64/subpage_prot.c b/arch/powerpc/mm/book3s64/subpage_prot.c
index ec98e526167e..574aa22bb238 100644
--- a/arch/powerpc/mm/book3s64/subpage_prot.c
+++ b/arch/powerpc/mm/book3s64/subpage_prot.c
@@ -138,7 +138,7 @@ static void subpage_prot_clear(unsigned long addr, unsigned long len)
 static int subpage_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
 				  unsigned long end, struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	split_huge_pmd(vma, pmd, addr);
 	return 0;
 }
@@ -151,7 +151,7 @@ static const struct mm_walk_ops subpage_walk_ops = {
 static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr,
 				    unsigned long len)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, addr);
 
 	/*
diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c
index 7186516eca52..75547ebd112c 100644
--- a/arch/powerpc/mm/cacheflush.c
+++ b/arch/powerpc/mm/cacheflush.c
@@ -210,7 +210,7 @@ void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
 	flush_dcache_page(pg);
 }
 
-void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
+void flush_icache_user_page(struct mm_area *vma, struct page *page,
 			     unsigned long addr, int len)
 {
 	void *maddr;
diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
index f5f8692e2c69..b6196e004f19 100644
--- a/arch/powerpc/mm/copro_fault.c
+++ b/arch/powerpc/mm/copro_fault.c
@@ -21,7 +21,7 @@
 int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
 		unsigned long dsisr, vm_fault_t *flt)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long is_write;
 	int ret;
 
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index c156fe0d53c3..45b8039647f6 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -72,7 +72,7 @@ static noinline int bad_area_nosemaphore(struct pt_regs *regs, unsigned long add
 }
 
 static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
-		      struct mm_struct *mm, struct vm_area_struct *vma)
+		      struct mm_struct *mm, struct mm_area *vma)
 {
 
 	/*
@@ -89,7 +89,7 @@ static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
 
 static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
 				    struct mm_struct *mm,
-				    struct vm_area_struct *vma)
+				    struct mm_area *vma)
 {
 	int pkey;
 
@@ -131,7 +131,7 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
 }
 
 static noinline int bad_access(struct pt_regs *regs, unsigned long address,
-			       struct mm_struct *mm, struct vm_area_struct *vma)
+			       struct mm_struct *mm, struct mm_area *vma)
 {
 	return __bad_area(regs, address, SEGV_ACCERR, mm, vma);
 }
@@ -235,7 +235,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
 }
 
 static bool access_pkey_error(bool is_write, bool is_exec, bool is_pkey,
-			      struct vm_area_struct *vma)
+			      struct mm_area *vma)
 {
 	/*
 	 * Make sure to check the VMA so that we do not perform
@@ -248,7 +248,7 @@ static bool access_pkey_error(bool is_write, bool is_exec, bool is_pkey,
 	return false;
 }
 
-static bool access_error(bool is_write, bool is_exec, struct vm_area_struct *vma)
+static bool access_error(bool is_write, bool is_exec, struct mm_area *vma)
 {
 	/*
 	 * Allow execution from readable areas if the MMU does not
@@ -413,7 +413,7 @@ static int page_fault_is_bad(unsigned long err)
 static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 			   unsigned long error_code)
 {
-	struct vm_area_struct * vma;
+	struct mm_area * vma;
 	struct mm_struct *mm = current->mm;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
 	int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d3c1b749dcfc..290850810f27 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -40,7 +40,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long s
 	return __find_linux_pte(mm->pgd, addr, NULL, NULL);
 }
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, unsigned long sz)
 {
 	p4d_t *p4d;
diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c
index a134d28a0e4d..1117ec25cafc 100644
--- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c
+++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c
@@ -116,7 +116,7 @@ static inline int book3e_tlb_exists(unsigned long ea, unsigned long pid)
 }
 
 static void
-book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte)
+book3e_hugetlb_preload(struct mm_area *vma, unsigned long ea, pte_t pte)
 {
 	unsigned long mas1, mas2;
 	u64 mas7_3;
@@ -178,13 +178,13 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte)
  *
  * This must always be called with the pte lock held.
  */
-void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
+void __update_mmu_cache(struct mm_area *vma, unsigned long address, pte_t *ptep)
 {
 	if (is_vm_hugetlb_page(vma))
 		book3e_hugetlb_preload(vma, address, *ptep);
 }
 
-void flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	struct hstate *hstate = hstate_file(vma->vm_file);
 	unsigned long tsize = huge_page_shift(hstate) - 10;
diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
index 0a650742f3a0..cd62f02ed016 100644
--- a/arch/powerpc/mm/nohash/tlb.c
+++ b/arch/powerpc/mm/nohash/tlb.c
@@ -149,7 +149,7 @@ void __local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
 	preempt_enable();
 }
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 	__local_flush_tlb_page(vma ? vma->vm_mm : NULL, vmaddr,
 			       mmu_get_tsize(mmu_virtual_psize), 0);
@@ -275,7 +275,7 @@ void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
 	preempt_enable();
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+void flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
 {
 #ifdef CONFIG_HUGETLB_PAGE
 	if (vma && is_vm_hugetlb_page(vma))
@@ -313,7 +313,7 @@ EXPORT_SYMBOL(flush_tlb_kernel_range);
  * some implementations can stack multiple tlbivax before a tlbsync but
  * for now, we keep it that way
  */
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 		     unsigned long end)
 
 {
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 61df5aed7989..425f2f8a2d95 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -141,7 +141,7 @@ static inline pte_t set_pte_filter(pte_t pte, unsigned long addr)
 	return pte_exprotect(pte);
 }
 
-static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
+static pte_t set_access_flags_filter(pte_t pte, struct mm_area *vma,
 				     int dirty)
 {
 	struct folio *folio;
@@ -240,7 +240,7 @@ void unmap_kernel_page(unsigned long va)
  * handled those two for us, we additionally deal with missing execute
  * permission here on some processors
  */
-int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+int ptep_set_access_flags(struct mm_area *vma, unsigned long address,
 			  pte_t *ptep, pte_t entry, int dirty)
 {
 	int changed;
@@ -255,7 +255,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 }
 
 #ifdef CONFIG_HUGETLB_PAGE
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty)
 {
diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
index 0b6365d85d11..ee6e08d98377 100644
--- a/arch/powerpc/platforms/book3s/vas-api.c
+++ b/arch/powerpc/platforms/book3s/vas-api.c
@@ -394,7 +394,7 @@ static int do_fail_paste(void)
  */
 static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct file *fp = vma->vm_file;
 	struct coproc_instance *cp_inst = fp->private_data;
 	struct vas_window *txwin;
@@ -472,7 +472,7 @@ static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
  * be invalid. Set VAS window VMA to NULL in this function which
  * is called before VMA free.
  */
-static void vas_mmap_close(struct vm_area_struct *vma)
+static void vas_mmap_close(struct mm_area *vma)
 {
 	struct file *fp = vma->vm_file;
 	struct coproc_instance *cp_inst = fp->private_data;
@@ -504,7 +504,7 @@ static const struct vm_operations_struct vas_vm_ops = {
 	.fault = vas_mmap_fault,
 };
 
-static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
+static int coproc_mmap(struct file *fp, struct mm_area *vma)
 {
 	struct coproc_instance *cp_inst = fp->private_data;
 	struct vas_window *txwin;
diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
index d5a2c77bc908..a7ec9abc6d00 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -229,7 +229,7 @@ spufs_mem_write(struct file *file, const char __user *buffer,
 static vm_fault_t
 spufs_mem_mmap_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct spu_context *ctx	= vma->vm_file->private_data;
 	unsigned long pfn, offset;
 	vm_fault_t ret;
@@ -258,7 +258,7 @@ spufs_mem_mmap_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-static int spufs_mem_mmap_access(struct vm_area_struct *vma,
+static int spufs_mem_mmap_access(struct mm_area *vma,
 				unsigned long address,
 				void *buf, int len, int write)
 {
@@ -286,7 +286,7 @@ static const struct vm_operations_struct spufs_mem_mmap_vmops = {
 	.access = spufs_mem_mmap_access,
 };
 
-static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_mem_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
@@ -376,7 +376,7 @@ static const struct vm_operations_struct spufs_cntl_mmap_vmops = {
 /*
  * mmap support for problem state control area [0x4000 - 0x4fff].
  */
-static int spufs_cntl_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_cntl_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
@@ -1031,7 +1031,7 @@ static const struct vm_operations_struct spufs_signal1_mmap_vmops = {
 	.fault = spufs_signal1_mmap_fault,
 };
 
-static int spufs_signal1_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_signal1_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
@@ -1165,7 +1165,7 @@ static const struct vm_operations_struct spufs_signal2_mmap_vmops = {
 	.fault = spufs_signal2_mmap_fault,
 };
 
-static int spufs_signal2_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_signal2_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
@@ -1286,7 +1286,7 @@ static const struct vm_operations_struct spufs_mss_mmap_vmops = {
 /*
  * mmap support for problem state MFC DMA area [0x0000 - 0x0fff].
  */
-static int spufs_mss_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_mss_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
@@ -1347,7 +1347,7 @@ static const struct vm_operations_struct spufs_psmap_mmap_vmops = {
 /*
  * mmap support for full problem state area [0x00000 - 0x1ffff].
  */
-static int spufs_psmap_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_psmap_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
@@ -1406,7 +1406,7 @@ static const struct vm_operations_struct spufs_mfc_mmap_vmops = {
 /*
  * mmap support for problem state MFC DMA area [0x0000 - 0x0fff].
  */
-static int spufs_mfc_mmap(struct file *file, struct vm_area_struct *vma)
+static int spufs_mfc_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
index 4ac9808e55a4..1fd35cc9716e 100644
--- a/arch/powerpc/platforms/powernv/memtrace.c
+++ b/arch/powerpc/platforms/powernv/memtrace.c
@@ -45,7 +45,7 @@ static ssize_t memtrace_read(struct file *filp, char __user *ubuf,
 	return simple_read_from_buffer(ubuf, count, ppos, ent->mem, ent->size);
 }
 
-static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma)
+static int memtrace_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct memtrace_entry *ent = filp->private_data;
 
diff --git a/arch/powerpc/platforms/powernv/opal-prd.c b/arch/powerpc/platforms/powernv/opal-prd.c
index dc246ed4b7b4..5a922ddd9b62 100644
--- a/arch/powerpc/platforms/powernv/opal-prd.c
+++ b/arch/powerpc/platforms/powernv/opal-prd.c
@@ -110,7 +110,7 @@ static int opal_prd_open(struct inode *inode, struct file *file)
  * @vma: VMA to map the registers into
  */
 
-static int opal_prd_mmap(struct file *file, struct vm_area_struct *vma)
+static int opal_prd_mmap(struct file *file, struct mm_area *vma)
 {
 	size_t addr, size;
 	pgprot_t page_prot;
diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c
index c25eb1a38185..a47633bd7586 100644
--- a/arch/powerpc/platforms/pseries/vas.c
+++ b/arch/powerpc/platforms/pseries/vas.c
@@ -763,7 +763,7 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds,
 {
 	struct pseries_vas_window *win, *tmp;
 	struct vas_user_win_ref *task_ref;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int rc = 0, flag;
 
 	if (migrate)
diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
index 446126497768..1a0ebd9019eb 100644
--- a/arch/riscv/include/asm/hugetlb.h
+++ b/arch/riscv/include/asm/hugetlb.h
@@ -32,7 +32,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long sz);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+pte_t huge_ptep_clear_flush(struct mm_area *vma,
 			    unsigned long addr, pte_t *ptep);
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
@@ -40,7 +40,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 			     unsigned long addr, pte_t *ptep);
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 			       unsigned long addr, pte_t *ptep,
 			       pte_t pte, int dirty);
 
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 428e48e5f57d..2fa52e4eae6a 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -506,7 +506,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 
 /* Commit new configuration to MMU hardware */
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	asm goto(ALTERNATIVE("nop", "j %l[svvptc]", 0, RISCV_ISA_EXT_SVVPTC, 1)
@@ -535,7 +535,7 @@ svvptc:;
 #define update_mmu_tlb_range(vma, addr, ptep, nr) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
 
-static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pmd(struct mm_area *vma,
 		unsigned long address, pmd_t *pmdp)
 {
 	pte_t *ptep = (pte_t *)pmdp;
@@ -593,10 +593,10 @@ static inline void pte_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS	/* defined in mm/pgtable.c */
-extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+extern int ptep_set_access_flags(struct mm_area *vma, unsigned long address,
 				 pte_t *ptep, pte_t entry, int dirty);
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG	/* defined in mm/pgtable.c */
-extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long address,
+extern int ptep_test_and_clear_young(struct mm_area *vma, unsigned long address,
 				     pte_t *ptep);
 
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
@@ -618,7 +618,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+static inline int ptep_clear_flush_young(struct mm_area *vma,
 					 unsigned long address, pte_t *ptep)
 {
 	/*
@@ -859,7 +859,7 @@ static inline int pmd_trans_huge(pmd_t pmd)
 }
 
 #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
-static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
+static inline int pmdp_set_access_flags(struct mm_area *vma,
 					unsigned long address, pmd_t *pmdp,
 					pmd_t entry, int dirty)
 {
@@ -867,7 +867,7 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+static inline int pmdp_test_and_clear_young(struct mm_area *vma,
 					unsigned long address, pmd_t *pmdp)
 {
 	return ptep_test_and_clear_young(vma, address, (pte_t *)pmdp);
@@ -892,7 +892,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 }
 
 #define pmdp_establish pmdp_establish
-static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+static inline pmd_t pmdp_establish(struct mm_area *vma,
 				unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
@@ -900,7 +900,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 }
 
 #define pmdp_collapse_flush pmdp_collapse_flush
-extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+extern pmd_t pmdp_collapse_flush(struct mm_area *vma,
 				 unsigned long address, pmd_t *pmdp);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index ce0dd0fed764..18dbd9b692b9 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -47,14 +47,14 @@ void flush_tlb_all(void);
 void flush_tlb_mm(struct mm_struct *mm);
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 			unsigned long end, unsigned int page_size);
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_page(struct mm_area *vma, unsigned long addr);
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 		     unsigned long end);
 void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
-void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
 			unsigned long end);
 #endif
 
diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
index cc2895d1fbc2..0aada37e5b12 100644
--- a/arch/riscv/kernel/vdso.c
+++ b/arch/riscv/kernel/vdso.c
@@ -34,7 +34,7 @@ static struct __vdso_info compat_vdso_info;
 #endif
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-		       struct vm_area_struct *new_vma)
+		       struct mm_area *new_vma)
 {
 	current->mm->context.vdso = (void *)new_vma->vm_start;
 
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..afd478082547 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -487,7 +487,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 *     +--------------------------------------------+
 	 */
 	do {
-		struct vm_area_struct *vma = find_vma(current->mm, hva);
+		struct mm_area *vma = find_vma(current->mm, hva);
 		hva_t vm_start, vm_end;
 
 		if (!vma || vma->vm_start >= reg_end)
@@ -595,7 +595,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	bool writable;
 	short vma_pageshift;
 	gfn_t gfn = gpa >> PAGE_SHIFT;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
 	bool logging = (memslot->dirty_bitmap &&
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 0194324a0c50..75986abf7b4e 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -243,7 +243,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
 	local_flush_tlb_page(addr);
 }
 
-static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
+static inline bool access_error(unsigned long cause, struct mm_area *vma)
 {
 	switch (cause) {
 	case EXC_INST_PAGE_FAULT:
@@ -275,7 +275,7 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
 void handle_page_fault(struct pt_regs *regs)
 {
 	struct task_struct *tsk;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm;
 	unsigned long addr, cause;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index b4a78a4b35cf..f9ef0699f193 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -28,7 +28,7 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 }
 
 pte_t *huge_pte_alloc(struct mm_struct *mm,
-		      struct vm_area_struct *vma,
+		      struct mm_area *vma,
 		      unsigned long addr,
 		      unsigned long sz)
 {
@@ -172,7 +172,7 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
 				    unsigned long pte_num)
 {
 	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pte_num);
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
 	bool valid = !pte_none(orig_pte);
 
 	if (valid)
@@ -203,7 +203,7 @@ static void clear_flush(struct mm_struct *mm,
 			unsigned long pgsize,
 			unsigned long ncontig)
 {
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
 	unsigned long i, saddr = addr;
 
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
@@ -260,7 +260,7 @@ void set_huge_pte_at(struct mm_struct *mm,
 		set_pte_at(mm, addr, ptep, pte);
 }
 
-int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+int huge_ptep_set_access_flags(struct mm_area *vma,
 			       unsigned long addr,
 			       pte_t *ptep,
 			       pte_t pte,
@@ -331,7 +331,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
 		set_pte_at(mm, addr, ptep, orig_pte);
 }
 
-pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+pte_t huge_ptep_clear_flush(struct mm_area *vma,
 			    unsigned long addr,
 			    pte_t *ptep)
 {
diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
index 4ae67324f992..f81997996346 100644
--- a/arch/riscv/mm/pgtable.c
+++ b/arch/riscv/mm/pgtable.c
@@ -5,7 +5,7 @@
 #include <linux/kernel.h>
 #include <linux/pgtable.h>
 
-int ptep_set_access_flags(struct vm_area_struct *vma,
+int ptep_set_access_flags(struct mm_area *vma,
 			  unsigned long address, pte_t *ptep,
 			  pte_t entry, int dirty)
 {
@@ -31,7 +31,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 	return false;
 }
 
-int ptep_test_and_clear_young(struct vm_area_struct *vma,
+int ptep_test_and_clear_young(struct mm_area *vma,
 			      unsigned long address,
 			      pte_t *ptep)
 {
@@ -136,7 +136,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 
 #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+pmd_t pmdp_collapse_flush(struct mm_area *vma,
 					unsigned long address, pmd_t *pmdp)
 {
 	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index c25a40aa2fe0..1ae019b7e60b 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -130,13 +130,13 @@ void flush_tlb_mm_range(struct mm_struct *mm,
 	__flush_tlb_range(mm, mm_cpumask(mm), start, end - start, page_size);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+void flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
 			  addr, PAGE_SIZE, PAGE_SIZE);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_tlb_range(struct mm_area *vma, unsigned long start,
 		     unsigned long end)
 {
 	unsigned long stride_size;
@@ -176,7 +176,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
 			unsigned long end)
 {
 	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
index 931fcc413598..ad92be48a9e4 100644
--- a/arch/s390/include/asm/hugetlb.h
+++ b/arch/s390/include/asm/hugetlb.h
@@ -54,14 +54,14 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long address, pte_t *ptep)
 {
 	return __huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
 }
 
 #define  __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int huge_ptep_set_access_flags(struct mm_area *vma,
 					     unsigned long addr, pte_t *ptep,
 					     pte_t pte, int dirty)
 {
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index f8a6b54986ec..6bc573582112 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1215,7 +1215,7 @@ pte_t ptep_xchg_direct(struct mm_struct *, unsigned long, pte_t *, pte_t);
 pte_t ptep_xchg_lazy(struct mm_struct *, unsigned long, pte_t *, pte_t);
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+static inline int ptep_test_and_clear_young(struct mm_area *vma,
 					    unsigned long addr, pte_t *ptep)
 {
 	pte_t pte = *ptep;
@@ -1225,7 +1225,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+static inline int ptep_clear_flush_young(struct mm_area *vma,
 					 unsigned long address, pte_t *ptep)
 {
 	return ptep_test_and_clear_young(vma, address, ptep);
@@ -1245,12 +1245,12 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
-pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
-void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
+pte_t ptep_modify_prot_start(struct mm_area *, unsigned long, pte_t *);
+void ptep_modify_prot_commit(struct mm_area *, unsigned long,
 			     pte_t *, pte_t, pte_t);
 
 #define __HAVE_ARCH_PTEP_CLEAR_FLUSH
-static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t ptep_clear_flush(struct mm_area *vma,
 				     unsigned long addr, pte_t *ptep)
 {
 	pte_t res;
@@ -1327,7 +1327,7 @@ static inline int pte_allow_rdp(pte_t old, pte_t new)
 	return (pte_val(old) & _PAGE_RDP_MASK) == (pte_val(new) & _PAGE_RDP_MASK);
 }
 
-static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
+static inline void flush_tlb_fix_spurious_fault(struct mm_area *vma,
 						unsigned long address,
 						pte_t *ptep)
 {
@@ -1350,7 +1350,7 @@ void ptep_reset_dat_prot(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 			 pte_t new);
 
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int ptep_set_access_flags(struct mm_area *vma,
 					unsigned long addr, pte_t *ptep,
 					pte_t entry, int dirty)
 {
@@ -1776,7 +1776,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
 pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 
 #define  __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
-static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
+static inline int pmdp_set_access_flags(struct mm_area *vma,
 					unsigned long addr, pmd_t *pmdp,
 					pmd_t entry, int dirty)
 {
@@ -1792,7 +1792,7 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+static inline int pmdp_test_and_clear_young(struct mm_area *vma,
 					    unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t pmd = *pmdp;
@@ -1802,7 +1802,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
-static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
+static inline int pmdp_clear_flush_young(struct mm_area *vma,
 					 unsigned long addr, pmd_t *pmdp)
 {
 	VM_BUG_ON(addr & ~HPAGE_MASK);
@@ -1830,7 +1830,7 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
-static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
+static inline pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
 						 unsigned long addr,
 						 pmd_t *pmdp, int full)
 {
@@ -1843,14 +1843,14 @@ static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
 }
 
 #define __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
-static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
+static inline pmd_t pmdp_huge_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pmd_t *pmdp)
 {
 	return pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
 }
 
 #define __HAVE_ARCH_PMDP_INVALIDATE
-static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma,
+static inline pmd_t pmdp_invalidate(struct mm_area *vma,
 				   unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t pmd;
@@ -1870,7 +1870,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 		pmd = pmdp_xchg_lazy(mm, addr, pmdp, pmd_wrprotect(pmd));
 }
 
-static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+static inline pmd_t pmdp_collapse_flush(struct mm_area *vma,
 					unsigned long address,
 					pmd_t *pmdp)
 {
diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
index 75491baa2197..8eab59435a2c 100644
--- a/arch/s390/include/asm/tlbflush.h
+++ b/arch/s390/include/asm/tlbflush.h
@@ -111,7 +111,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	__tlb_flush_mm_lazy(mm);
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 	__tlb_flush_mm_lazy(vma->vm_mm);
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index 4a981266b483..cff27a7da9bc 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -176,7 +176,7 @@ ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
  * For the kdump reserved memory this functions performs a swap operation:
  * [0 - OLDMEM_SIZE] is mapped to [OLDMEM_BASE - OLDMEM_BASE + OLDMEM_SIZE]
  */
-static int remap_oldmem_pfn_range_kdump(struct vm_area_struct *vma,
+static int remap_oldmem_pfn_range_kdump(struct mm_area *vma,
 					unsigned long from, unsigned long pfn,
 					unsigned long size, pgprot_t prot)
 {
@@ -203,7 +203,7 @@ static int remap_oldmem_pfn_range_kdump(struct vm_area_struct *vma,
  * We only map available memory above HSA size. Memory below HSA size
  * is read on demand using the copy_oldmem_page() function.
  */
-static int remap_oldmem_pfn_range_zfcpdump(struct vm_area_struct *vma,
+static int remap_oldmem_pfn_range_zfcpdump(struct mm_area *vma,
 					   unsigned long from,
 					   unsigned long pfn,
 					   unsigned long size, pgprot_t prot)
@@ -225,7 +225,7 @@ static int remap_oldmem_pfn_range_zfcpdump(struct vm_area_struct *vma,
 /*
  * Remap "oldmem" for kdump or zfcp/nvme dump
  */
-int remap_oldmem_pfn_range(struct vm_area_struct *vma, unsigned long from,
+int remap_oldmem_pfn_range(struct mm_area *vma, unsigned long from,
 			   unsigned long pfn, unsigned long size, pgprot_t prot)
 {
 	if (oldmem_data.start)
diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
index 9a5d5be8acf4..a41b180a29bc 100644
--- a/arch/s390/kernel/uv.c
+++ b/arch/s390/kernel/uv.c
@@ -356,7 +356,7 @@ static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bo
 
 int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct folio_walk fw;
 	struct folio *folio;
 	int rc;
diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index 430feb1a5013..f660415e46c0 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -27,7 +27,7 @@ extern char vdso64_start[], vdso64_end[];
 extern char vdso32_start[], vdso32_end[];
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-		       struct vm_area_struct *vma)
+		       struct mm_area *vma)
 {
 	current->mm->context.vdso_base = vma->vm_start;
 	return 0;
@@ -55,7 +55,7 @@ static int map_vdso(unsigned long addr, unsigned long vdso_mapping_len)
 	unsigned long vvar_start, vdso_text_start, vdso_text_len;
 	struct vm_special_mapping *vdso_mapping;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int rc;
 
 	BUILD_BUG_ON(VDSO_NR_PAGES != __VDSO_PAGES);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index da84ff6770de..119a4c17873b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -258,7 +258,7 @@ static void do_sigbus(struct pt_regs *regs)
  */
 static void do_exception(struct pt_regs *regs, int access)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long address;
 	struct mm_struct *mm;
 	unsigned int flags;
@@ -405,7 +405,7 @@ void do_secure_storage_access(struct pt_regs *regs)
 {
 	union teid teid = { .val = regs->int_parm_long };
 	unsigned long addr = get_fault_address(regs);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct folio_walk fw;
 	struct mm_struct *mm;
 	struct folio *folio;
diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index a94bd4870c65..8c6a886f71d1 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -620,7 +620,7 @@ EXPORT_SYMBOL(__gmap_link);
  */
 void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long vmaddr;
 	spinlock_t *ptl;
 	pte_t *ptep;
@@ -648,7 +648,7 @@ EXPORT_SYMBOL_GPL(__gmap_zap);
 void gmap_discard(struct gmap *gmap, unsigned long from, unsigned long to)
 {
 	unsigned long gaddr, vmaddr, size;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mmap_read_lock(gmap->mm);
 	for (gaddr = from; gaddr < to;
@@ -2222,7 +2222,7 @@ EXPORT_SYMBOL_GPL(gmap_sync_dirty_log_pmd);
 static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
 				    unsigned long end, struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 
 	split_huge_pmd(vma, pmd, addr);
 	return 0;
@@ -2235,7 +2235,7 @@ static const struct mm_walk_ops thp_split_walk_ops = {
 
 static inline void thp_split_mm(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	for_each_vma(vmi, vma) {
@@ -2312,7 +2312,7 @@ static const struct mm_walk_ops find_zeropage_ops = {
  */
 static int __s390_unshare_zeropages(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 	unsigned long addr;
 	vm_fault_t fault;
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index e88c02c9e642..c54f4772b8bf 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -203,7 +203,7 @@ pte_t __huge_ptep_get_and_clear(struct mm_struct *mm,
 	return pte;
 }
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgdp;
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 40a526d28184..edbd4688f56a 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -81,7 +81,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 				     unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_unmapped_area_info info = {};
 
 	if (len > TASK_SIZE - mmap_min_addr)
@@ -116,7 +116,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
 					     unsigned long len, unsigned long pgoff,
 					     unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	struct vm_unmapped_area_info info = {};
 
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 9901934284ec..28f0316e4db1 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -327,7 +327,7 @@ pte_t ptep_xchg_lazy(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(ptep_xchg_lazy);
 
-pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
+pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr,
 			     pte_t *ptep)
 {
 	pgste_t pgste;
@@ -346,7 +346,7 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
 	return old;
 }
 
-void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
+void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
 			     pte_t *ptep, pte_t old_pte, pte_t pte)
 {
 	pgste_t pgste;
@@ -437,7 +437,7 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
 #ifdef CONFIG_PGSTE
 static int pmd_lookup(struct mm_struct *mm, unsigned long addr, pmd_t **pmdp)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -1032,7 +1032,7 @@ EXPORT_SYMBOL(get_guest_storage_key);
 int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
 			unsigned long *oldpte, unsigned long *oldpgste)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long pgstev;
 	spinlock_t *ptl;
 	pgste_t pgste;
@@ -1138,7 +1138,7 @@ EXPORT_SYMBOL(pgste_perform_essa);
 int set_pgste_bits(struct mm_struct *mm, unsigned long hva,
 			unsigned long bits, unsigned long value)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	spinlock_t *ptl;
 	pgste_t new;
 	pte_t *ptep;
@@ -1170,7 +1170,7 @@ EXPORT_SYMBOL(set_pgste_bits);
  */
 int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	spinlock_t *ptl;
 	pte_t *ptep;
 
diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
index 5fcc1a3b04bd..77d158f08245 100644
--- a/arch/s390/pci/pci_mmio.c
+++ b/arch/s390/pci/pci_mmio.c
@@ -126,7 +126,7 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
 	u8 local_buf[64];
 	void __iomem *io_addr;
 	void *buf;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	long ret;
 
 	if (!zpci_is_enabled())
@@ -279,7 +279,7 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
 	u8 local_buf[64];
 	void __iomem *io_addr;
 	void *buf;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	long ret;
 
 	if (!zpci_is_enabled())
diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
index e6642ff14889..87666383d58a 100644
--- a/arch/sh/include/asm/cacheflush.h
+++ b/arch/sh/include/asm/cacheflush.h
@@ -37,9 +37,9 @@ extern void (*__flush_invalidate_region)(void *start, int size);
 extern void flush_cache_all(void);
 extern void flush_cache_mm(struct mm_struct *mm);
 extern void flush_cache_dup_mm(struct mm_struct *mm);
-extern void flush_cache_page(struct vm_area_struct *vma,
+extern void flush_cache_page(struct mm_area *vma,
 				unsigned long addr, unsigned long pfn);
-extern void flush_cache_range(struct vm_area_struct *vma,
+extern void flush_cache_range(struct mm_area *vma,
 				 unsigned long start, unsigned long end);
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
 void flush_dcache_folio(struct folio *folio);
@@ -51,20 +51,20 @@ static inline void flush_dcache_page(struct page *page)
 
 extern void flush_icache_range(unsigned long start, unsigned long end);
 #define flush_icache_user_range flush_icache_range
-void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+void flush_icache_pages(struct mm_area *vma, struct page *page,
 		unsigned int nr);
 #define flush_icache_pages flush_icache_pages
 extern void flush_cache_sigtramp(unsigned long address);
 
 struct flusher_data {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr1, addr2;
 };
 
 #define ARCH_HAS_FLUSH_ANON_PAGE
 extern void __flush_anon_page(struct page *page, unsigned long);
 
-static inline void flush_anon_page(struct vm_area_struct *vma,
+static inline void flush_anon_page(struct mm_area *vma,
 				   struct page *page, unsigned long vmaddr)
 {
 	if (boot_cpu_data.dcache.n_aliases && PageAnon(page))
@@ -81,11 +81,11 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
 	__flush_invalidate_region(addr, size);
 }
 
-extern void copy_to_user_page(struct vm_area_struct *vma,
+extern void copy_to_user_page(struct mm_area *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len);
 
-extern void copy_from_user_page(struct vm_area_struct *vma,
+extern void copy_from_user_page(struct mm_area *vma,
 	struct page *page, unsigned long vaddr, void *dst, const void *src,
 	unsigned long len);
 
diff --git a/arch/sh/include/asm/hugetlb.h b/arch/sh/include/asm/hugetlb.h
index 4a92e6e4d627..f2f364330ed9 100644
--- a/arch/sh/include/asm/hugetlb.h
+++ b/arch/sh/include/asm/hugetlb.h
@@ -6,7 +6,7 @@
 #include <asm/page.h>
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	return *ptep;
diff --git a/arch/sh/include/asm/page.h b/arch/sh/include/asm/page.h
index 3990cbd9aa04..feba697dd921 100644
--- a/arch/sh/include/asm/page.h
+++ b/arch/sh/include/asm/page.h
@@ -48,10 +48,10 @@ extern void copy_page(void *to, void *from);
 #define copy_user_page(to, from, vaddr, pg)  __copy_user(to, from, PAGE_SIZE)
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 
 extern void copy_user_highpage(struct page *to, struct page *from,
-			       unsigned long vaddr, struct vm_area_struct *vma);
+			       unsigned long vaddr, struct mm_area *vma);
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 extern void clear_user_highpage(struct page *page, unsigned long vaddr);
 #define clear_user_highpage	clear_user_highpage
diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
index 729f5c6225fb..1cc0974cae6c 100644
--- a/arch/sh/include/asm/pgtable.h
+++ b/arch/sh/include/asm/pgtable.h
@@ -94,16 +94,16 @@ typedef pte_t *pte_addr_t;
 
 #define pte_pfn(x)		((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
 
-struct vm_area_struct;
+struct mm_area;
 struct mm_struct;
 
-extern void __update_cache(struct vm_area_struct *vma,
+extern void __update_cache(struct mm_area *vma,
 			   unsigned long address, pte_t pte);
-extern void __update_tlb(struct vm_area_struct *vma,
+extern void __update_tlb(struct mm_area *vma,
 			 unsigned long address, pte_t pte);
 
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long address,
+		struct mm_area *vma, unsigned long address,
 		pte_t *ptep, unsigned int nr)
 {
 	pte_t pte = *ptep;
diff --git a/arch/sh/include/asm/tlb.h b/arch/sh/include/asm/tlb.h
index ddf324bfb9a0..6d1e9c61e24c 100644
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -10,10 +10,10 @@
 #include <linux/swap.h>
 
 #if defined(CONFIG_CPU_SH4)
-extern void tlb_wire_entry(struct vm_area_struct *, unsigned long, pte_t);
+extern void tlb_wire_entry(struct mm_area *, unsigned long, pte_t);
 extern void tlb_unwire_entry(void);
 #else
-static inline void tlb_wire_entry(struct vm_area_struct *vma ,
+static inline void tlb_wire_entry(struct mm_area *vma,
 				  unsigned long addr, pte_t pte)
 {
 	BUG();
diff --git a/arch/sh/include/asm/tlbflush.h b/arch/sh/include/asm/tlbflush.h
index 8f180cd3bcd6..ca2de60ad063 100644
--- a/arch/sh/include/asm/tlbflush.h
+++ b/arch/sh/include/asm/tlbflush.h
@@ -13,10 +13,10 @@
  */
 extern void local_flush_tlb_all(void);
 extern void local_flush_tlb_mm(struct mm_struct *mm);
-extern void local_flush_tlb_range(struct vm_area_struct *vma,
+extern void local_flush_tlb_range(struct mm_area *vma,
 				  unsigned long start,
 				  unsigned long end);
-extern void local_flush_tlb_page(struct vm_area_struct *vma,
+extern void local_flush_tlb_page(struct mm_area *vma,
 				 unsigned long page);
 extern void local_flush_tlb_kernel_range(unsigned long start,
 					 unsigned long end);
@@ -28,9 +28,9 @@ extern void __flush_tlb_global(void);
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
-extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end);
-extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void flush_tlb_page(struct mm_area *vma, unsigned long page);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 extern void flush_tlb_one(unsigned long asid, unsigned long page);
 
diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 108d808767fa..61d56994d473 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -377,7 +377,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 }
 
 struct flush_tlb_data {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr1;
 	unsigned long addr2;
 };
@@ -389,7 +389,7 @@ static void flush_tlb_range_ipi(void *info)
 	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma,
+void flush_tlb_range(struct mm_area *vma,
 		     unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -435,7 +435,7 @@ static void flush_tlb_page_ipi(void *info)
 	local_flush_tlb_page(fd->vma, fd->addr1);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	preempt_disable();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
diff --git a/arch/sh/kernel/sys_sh.c b/arch/sh/kernel/sys_sh.c
index a5a7b33ed81a..2d263feef643 100644
--- a/arch/sh/kernel/sys_sh.c
+++ b/arch/sh/kernel/sys_sh.c
@@ -57,7 +57,7 @@ asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
 /* sys_cacheflush -- flush (part of) the processor cache.  */
 asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len, int op)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if ((op <= 0) || (op > (CACHEFLUSH_D_PURGE|CACHEFLUSH_I)))
 		return -EINVAL;
diff --git a/arch/sh/kernel/vsyscall/vsyscall.c b/arch/sh/kernel/vsyscall/vsyscall.c
index 1563dcc55fd3..9916506a052a 100644
--- a/arch/sh/kernel/vsyscall/vsyscall.c
+++ b/arch/sh/kernel/vsyscall/vsyscall.c
@@ -83,7 +83,7 @@ fs_initcall(vm_sysctl_init);
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr;
 	int ret;
 
@@ -113,7 +113,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	return ret;
 }
 
-const char *arch_vma_name(struct vm_area_struct *vma)
+const char *arch_vma_name(struct mm_area *vma)
 {
 	if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
 		return "[vdso]";
diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
index 46393b00137e..f4d37a852d27 100644
--- a/arch/sh/mm/cache-sh4.c
+++ b/arch/sh/mm/cache-sh4.c
@@ -214,7 +214,7 @@ static void sh4_flush_cache_mm(void *arg)
 static void sh4_flush_cache_page(void *args)
 {
 	struct flusher_data *data = args;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct page *page;
 	unsigned long address, pfn, phys;
 	int map_coherent = 0;
@@ -283,7 +283,7 @@ static void sh4_flush_cache_page(void *args)
 static void sh4_flush_cache_range(void *args)
 {
 	struct flusher_data *data = args;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long start, end;
 
 	vma = data->vma;
diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
index 6ebdeaff3021..2f85019529ff 100644
--- a/arch/sh/mm/cache.c
+++ b/arch/sh/mm/cache.c
@@ -57,7 +57,7 @@ static inline void cacheop_on_each_cpu(void (*func) (void *info), void *info,
 	preempt_enable();
 }
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		       unsigned long vaddr, void *dst, const void *src,
 		       unsigned long len)
 {
@@ -78,7 +78,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 		flush_cache_page(vma, vaddr, page_to_pfn(page));
 }
 
-void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_from_user_page(struct mm_area *vma, struct page *page,
 			 unsigned long vaddr, void *dst, const void *src,
 			 unsigned long len)
 {
@@ -97,7 +97,7 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
 }
 
 void copy_user_highpage(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma)
+			unsigned long vaddr, struct mm_area *vma)
 {
 	struct folio *src = page_folio(from);
 	void *vfrom, *vto;
@@ -138,7 +138,7 @@ void clear_user_highpage(struct page *page, unsigned long vaddr)
 }
 EXPORT_SYMBOL(clear_user_highpage);
 
-void __update_cache(struct vm_area_struct *vma,
+void __update_cache(struct mm_area *vma,
 		    unsigned long address, pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
@@ -197,7 +197,7 @@ void flush_cache_dup_mm(struct mm_struct *mm)
 	cacheop_on_each_cpu(local_flush_cache_dup_mm, mm, 1);
 }
 
-void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
+void flush_cache_page(struct mm_area *vma, unsigned long addr,
 		      unsigned long pfn)
 {
 	struct flusher_data data;
@@ -209,7 +209,7 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
 	cacheop_on_each_cpu(local_flush_cache_page, (void *)&data, 1);
 }
 
-void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+void flush_cache_range(struct mm_area *vma, unsigned long start,
 		       unsigned long end)
 {
 	struct flusher_data data;
@@ -240,7 +240,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
 }
 EXPORT_SYMBOL(flush_icache_range);
 
-void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
+void flush_icache_pages(struct mm_area *vma, struct page *page,
 		unsigned int nr)
 {
 	/* Nothing uses the VMA, so just pass the folio along */
diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
index 06e6b4952924..962137e245fc 100644
--- a/arch/sh/mm/fault.c
+++ b/arch/sh/mm/fault.c
@@ -355,7 +355,7 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	return 1;
 }
 
-static inline int access_error(int error_code, struct vm_area_struct *vma)
+static inline int access_error(int error_code, struct mm_area *vma)
 {
 	if (error_code & FAULT_CODE_WRITE) {
 		/* write, present and write, not present: */
@@ -393,7 +393,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	unsigned long vec;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
-	struct vm_area_struct * vma;
+	struct mm_area *vma;
 	vm_fault_t fault;
 	unsigned int flags = FAULT_FLAG_DEFAULT;
 
diff --git a/arch/sh/mm/hugetlbpage.c b/arch/sh/mm/hugetlbpage.c
index ff209b55285a..ea147dc50cfa 100644
--- a/arch/sh/mm/hugetlbpage.c
+++ b/arch/sh/mm/hugetlbpage.c
@@ -21,7 +21,7 @@
 #include <asm/tlbflush.h>
 #include <asm/cacheflush.h>
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index c442734d9b0c..a015e881f62f 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -56,7 +56,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int do_colour_align;
 	struct vm_unmapped_area_info info = {};
 
@@ -102,7 +102,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 			  const unsigned long len, const unsigned long pgoff,
 			  const unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	int do_colour_align;
diff --git a/arch/sh/mm/nommu.c b/arch/sh/mm/nommu.c
index fa3dc9428a73..739f316eb55a 100644
--- a/arch/sh/mm/nommu.c
+++ b/arch/sh/mm/nommu.c
@@ -46,13 +46,13 @@ void local_flush_tlb_mm(struct mm_struct *mm)
 	BUG();
 }
 
-void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
 			    unsigned long end)
 {
 	BUG();
 }
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	BUG();
 }
@@ -71,7 +71,7 @@ void __flush_tlb_global(void)
 {
 }
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
 {
 }
 
diff --git a/arch/sh/mm/tlb-pteaex.c b/arch/sh/mm/tlb-pteaex.c
index 4db21adfe5de..c88f5cdca94e 100644
--- a/arch/sh/mm/tlb-pteaex.c
+++ b/arch/sh/mm/tlb-pteaex.c
@@ -15,7 +15,7 @@
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
 {
 	unsigned long flags, pteval, vpn;
 
diff --git a/arch/sh/mm/tlb-sh3.c b/arch/sh/mm/tlb-sh3.c
index fb400afc2a49..77369712a89c 100644
--- a/arch/sh/mm/tlb-sh3.c
+++ b/arch/sh/mm/tlb-sh3.c
@@ -24,7 +24,7 @@
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
 {
 	unsigned long flags, pteval, vpn;
 
diff --git a/arch/sh/mm/tlb-sh4.c b/arch/sh/mm/tlb-sh4.c
index aa0a9f4680a1..edd340097b4a 100644
--- a/arch/sh/mm/tlb-sh4.c
+++ b/arch/sh/mm/tlb-sh4.c
@@ -13,7 +13,7 @@
 #include <asm/mmu_context.h>
 #include <asm/cacheflush.h>
 
-void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
+void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
 {
 	unsigned long flags, pteval, vpn;
 
diff --git a/arch/sh/mm/tlb-urb.c b/arch/sh/mm/tlb-urb.c
index c92ce20db39b..78a98552ccac 100644
--- a/arch/sh/mm/tlb-urb.c
+++ b/arch/sh/mm/tlb-urb.c
@@ -17,7 +17,7 @@
 /*
  * Load the entry for 'addr' into the TLB and wire the entry.
  */
-void tlb_wire_entry(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
+void tlb_wire_entry(struct mm_area *vma, unsigned long addr, pte_t pte)
 {
 	unsigned long status, flags;
 	int urb;
diff --git a/arch/sh/mm/tlbflush_32.c b/arch/sh/mm/tlbflush_32.c
index a6a20d6de4c0..6307b906924a 100644
--- a/arch/sh/mm/tlbflush_32.c
+++ b/arch/sh/mm/tlbflush_32.c
@@ -12,7 +12,7 @@
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	unsigned int cpu = smp_processor_id();
 
@@ -36,7 +36,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 	}
 }
 
-void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
 			   unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
index 2b1261b77ecd..1e6477ef34bb 100644
--- a/arch/sparc/include/asm/cacheflush_64.h
+++ b/arch/sparc/include/asm/cacheflush_64.h
@@ -53,7 +53,7 @@ static inline void flush_dcache_page(struct page *page)
 	flush_dcache_folio(page_folio(page));
 }
 
-void flush_ptrace_access(struct vm_area_struct *, struct page *,
+void flush_ptrace_access(struct mm_area *, struct page *,
 			 unsigned long uaddr, void *kaddr,
 			 unsigned long len, int write);
 
diff --git a/arch/sparc/include/asm/cachetlb_32.h b/arch/sparc/include/asm/cachetlb_32.h
index 534da70c6357..1ae6b8f58673 100644
--- a/arch/sparc/include/asm/cachetlb_32.h
+++ b/arch/sparc/include/asm/cachetlb_32.h
@@ -3,20 +3,20 @@
 #define _SPARC_CACHETLB_H
 
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 
 struct sparc32_cachetlb_ops {
 	void (*cache_all)(void);
 	void (*cache_mm)(struct mm_struct *);
-	void (*cache_range)(struct vm_area_struct *, unsigned long,
+	void (*cache_range)(struct mm_area *, unsigned long,
 			    unsigned long);
-	void (*cache_page)(struct vm_area_struct *, unsigned long);
+	void (*cache_page)(struct mm_area *, unsigned long);
 
 	void (*tlb_all)(void);
 	void (*tlb_mm)(struct mm_struct *);
-	void (*tlb_range)(struct vm_area_struct *, unsigned long,
+	void (*tlb_range)(struct mm_area *, unsigned long,
 			  unsigned long);
-	void (*tlb_page)(struct vm_area_struct *, unsigned long);
+	void (*tlb_page)(struct mm_area *, unsigned long);
 
 	void (*page_to_ram)(unsigned long);
 	void (*sig_insns)(struct mm_struct *, unsigned long);
diff --git a/arch/sparc/include/asm/hugetlb.h b/arch/sparc/include/asm/hugetlb.h
index e7a9cdd498dc..fdc29771a6a6 100644
--- a/arch/sparc/include/asm/hugetlb.h
+++ b/arch/sparc/include/asm/hugetlb.h
@@ -23,7 +23,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, unsigned long sz);
 
 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	return *ptep;
@@ -38,7 +38,7 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
 }
 
 #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int huge_ptep_set_access_flags(struct mm_area *vma,
 					     unsigned long addr, pte_t *ptep,
 					     pte_t pte, int dirty)
 {
diff --git a/arch/sparc/include/asm/leon.h b/arch/sparc/include/asm/leon.h
index c1e05e4ab9e3..e0cf0f724fb4 100644
--- a/arch/sparc/include/asm/leon.h
+++ b/arch/sparc/include/asm/leon.h
@@ -195,7 +195,7 @@ static inline int sparc_leon3_cpuid(void)
 #define LEON2_CFG_SSIZE_MASK 0x00007000UL
 
 #ifndef __ASSEMBLY__
-struct vm_area_struct;
+struct mm_area;
 
 unsigned long leon_swprobe(unsigned long vaddr, unsigned long *paddr);
 void leon_flush_icache_all(void);
@@ -204,7 +204,7 @@ void leon_flush_cache_all(void);
 void leon_flush_tlb_all(void);
 extern int leon_flush_during_switch;
 int leon_flush_needed(void);
-void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page);
+void leon_flush_pcache_all(struct mm_area *vma, unsigned long page);
 
 /* struct that hold LEON3 cache configuration registers */
 struct leon3_cacheregs {
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
index 2a68ff5b6eab..1abc1d8743c5 100644
--- a/arch/sparc/include/asm/page_64.h
+++ b/arch/sparc/include/asm/page_64.h
@@ -46,9 +46,9 @@ void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
 #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
 void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
-struct vm_area_struct;
+struct mm_area;
 void copy_user_highpage(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma);
+			unsigned long vaddr, struct mm_area *vma);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 void copy_highpage(struct page *to, struct page *from);
 
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 62bcafe38b1f..a451d5430db1 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -33,7 +33,7 @@
 #include <asm/cpu_type.h>
 
 
-struct vm_area_struct;
+struct mm_area;
 struct page;
 
 void load_mmu(void);
@@ -400,10 +400,10 @@ __get_iospace (unsigned long addr)
 #define GET_IOSPACE(pfn)		(pfn >> (BITS_PER_LONG - 4))
 #define GET_PFN(pfn)			(pfn & 0x0fffffffUL)
 
-int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
+int remap_pfn_range(struct mm_area *, unsigned long, unsigned long,
 		    unsigned long, pgprot_t);
 
-static inline int io_remap_pfn_range(struct vm_area_struct *vma,
+static inline int io_remap_pfn_range(struct mm_area *vma,
 				     unsigned long from, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
 {
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index dc28f2c4eee3..7d06b4894f2a 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -979,17 +979,17 @@ unsigned long find_ecache_flush_span(unsigned long size);
 struct seq_file;
 void mmu_info(struct seq_file *);
 
-struct vm_area_struct;
-void update_mmu_cache_range(struct vm_fault *, struct vm_area_struct *,
+struct mm_area;
+void update_mmu_cache_range(struct vm_fault *, struct mm_area *,
 		unsigned long addr, pte_t *ptep, unsigned int nr);
 #define update_mmu_cache(vma, addr, ptep) \
 	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
 			  pmd_t *pmd);
 
 #define __HAVE_ARCH_PMDP_INVALIDATE
-extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+extern pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
 			    pmd_t *pmdp);
 
 #define __HAVE_ARCH_PGTABLE_DEPOSIT
@@ -1050,18 +1050,18 @@ int page_in_phys_avail(unsigned long paddr);
 #define GET_IOSPACE(pfn)		(pfn >> (BITS_PER_LONG - 4))
 #define GET_PFN(pfn)			(pfn & 0x0fffffffffffffffUL)
 
-int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
+int remap_pfn_range(struct mm_area *, unsigned long, unsigned long,
 		    unsigned long, pgprot_t);
 
-void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+void adi_restore_tags(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, pte_t pte);
 
-int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+int adi_save_tags(struct mm_struct *mm, struct mm_area *vma,
 		  unsigned long addr, pte_t oldpte);
 
 #define __HAVE_ARCH_DO_SWAP_PAGE
 static inline void arch_do_swap_page(struct mm_struct *mm,
-				     struct vm_area_struct *vma,
+				     struct mm_area *vma,
 				     unsigned long addr,
 				     pte_t pte, pte_t oldpte)
 {
@@ -1078,7 +1078,7 @@ static inline void arch_do_swap_page(struct mm_struct *mm,
 
 #define __HAVE_ARCH_UNMAP_ONE
 static inline int arch_unmap_one(struct mm_struct *mm,
-				 struct vm_area_struct *vma,
+				 struct mm_area *vma,
 				 unsigned long addr, pte_t oldpte)
 {
 	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
@@ -1086,7 +1086,7 @@ static inline int arch_unmap_one(struct mm_struct *mm,
 	return 0;
 }
 
-static inline int io_remap_pfn_range(struct vm_area_struct *vma,
+static inline int io_remap_pfn_range(struct mm_area *vma,
 				     unsigned long from, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
 {
diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
index 8b8cdaa69272..c41114cbd3fe 100644
--- a/arch/sparc/include/asm/tlbflush_64.h
+++ b/arch/sparc/include/asm/tlbflush_64.h
@@ -27,12 +27,12 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 {
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
+static inline void flush_tlb_page(struct mm_area *vma,
 				  unsigned long vmaddr)
 {
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 }
diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
index e0e4fc527b24..3e7c7bb97fd8 100644
--- a/arch/sparc/kernel/adi_64.c
+++ b/arch/sparc/kernel/adi_64.c
@@ -122,7 +122,7 @@ void __init mdesc_adi_init(void)
 }
 
 static tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
-					  struct vm_area_struct *vma,
+					  struct mm_area *vma,
 					  unsigned long addr)
 {
 	tag_storage_desc_t *tag_desc = NULL;
@@ -154,7 +154,7 @@ static tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
 }
 
 static tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
-					   struct vm_area_struct *vma,
+					   struct mm_area *vma,
 					   unsigned long addr)
 {
 	unsigned char *tags;
@@ -324,7 +324,7 @@ static void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
 /* Retrieve any saved ADI tags for the page being swapped back in and
  * restore these tags to the newly allocated physical page.
  */
-void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+void adi_restore_tags(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, pte_t pte)
 {
 	unsigned char *tag;
@@ -367,7 +367,7 @@ void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
  * this physical page so they can be restored later when the page is swapped
  * back in.
  */
-int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+int adi_save_tags(struct mm_struct *mm, struct mm_area *vma,
 		  unsigned long addr, pte_t oldpte)
 {
 	unsigned char *tag;
diff --git a/arch/sparc/kernel/asm-offsets.c b/arch/sparc/kernel/asm-offsets.c
index 3d9b9855dce9..360c8cb8f396 100644
--- a/arch/sparc/kernel/asm-offsets.c
+++ b/arch/sparc/kernel/asm-offsets.c
@@ -52,7 +52,7 @@ static int __used foo(void)
 	BLANK();
 	DEFINE(AOFF_mm_context, offsetof(struct mm_struct, context));
 	BLANK();
-	DEFINE(VMA_VM_MM,    offsetof(struct vm_area_struct, vm_mm));
+	DEFINE(VMA_VM_MM,    offsetof(struct mm_area, vm_mm));
 
 	/* DEFINE(NUM_USER_SEGMENTS, TASK_SIZE>>28); */
 	return 0;
diff --git a/arch/sparc/kernel/pci.c b/arch/sparc/kernel/pci.c
index ddac216a2aff..64767a6e60cd 100644
--- a/arch/sparc/kernel/pci.c
+++ b/arch/sparc/kernel/pci.c
@@ -750,7 +750,7 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
 }
 
 /* Platform support for /proc/bus/pci/X/Y mmap()s. */
-int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
+int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma)
 {
 	struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
 	resource_size_t ioaddr = pci_resource_start(pdev, bar);
diff --git a/arch/sparc/kernel/ptrace_64.c b/arch/sparc/kernel/ptrace_64.c
index 4deba5b6eddb..2bbee6413504 100644
--- a/arch/sparc/kernel/ptrace_64.c
+++ b/arch/sparc/kernel/ptrace_64.c
@@ -103,7 +103,7 @@ void ptrace_disable(struct task_struct *child)
  *    has been created
  * 2) flush the I-cache if this is pre-cheetah and we did a write
  */
-void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
+void flush_ptrace_access(struct mm_area *vma, struct page *page,
 			 unsigned long uaddr, void *kaddr,
 			 unsigned long len, int write)
 {
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index c5a284df7b41..261c971b346a 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -101,7 +101,7 @@ static unsigned long get_align_mask(struct file *filp, unsigned long flags)
 unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct * vma;
+	struct mm_area * vma;
 	unsigned long task_size = TASK_SIZE;
 	int do_color_align;
 	struct vm_unmapped_area_info info = {};
@@ -164,7 +164,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 			  const unsigned long len, const unsigned long pgoff,
 			  const unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long task_size = STACK_TOP32;
 	unsigned long addr = addr0;
diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
index 86a831ebd8c8..27bb2c2a8d54 100644
--- a/arch/sparc/mm/fault_32.c
+++ b/arch/sparc/mm/fault_32.c
@@ -112,7 +112,7 @@ static noinline void do_fault_siginfo(int code, int sig, struct pt_regs *regs,
 asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 			       unsigned long address)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	int from_user = !(regs->psr & PSR_PS);
@@ -304,7 +304,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
 /* This always deals with user addresses. */
 static void force_user_fault(unsigned long address, int write)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk = current;
 	struct mm_struct *mm = tsk->mm;
 	unsigned int flags = FAULT_FLAG_USER;
diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
index e326caf708c6..1dd10e512d61 100644
--- a/arch/sparc/mm/fault_64.c
+++ b/arch/sparc/mm/fault_64.c
@@ -268,7 +268,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
 {
 	enum ctx_state prev_state = exception_enter();
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned int insn = 0;
 	int si_code, fault_code;
 	vm_fault_t fault;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 80504148d8a5..c02f3fa3a0fa 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -167,7 +167,7 @@ unsigned long pud_leaf_size(pud_t pud) { return 1UL << tte_to_shift(*(pte_t *)&p
 unsigned long pmd_leaf_size(pmd_t pmd) { return 1UL << tte_to_shift(*(pte_t *)&pmd); }
 unsigned long pte_leaf_size(pte_t pte) { return 1UL << tte_to_shift(pte); }
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 760818950464..235770b832be 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -394,7 +394,7 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 }
 #endif	/* CONFIG_HUGETLB_PAGE */
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	struct mm_struct *mm;
@@ -2945,7 +2945,7 @@ void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
 	call_rcu(&page->rcu_head, pte_free_now);
 }
 
-void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
 			  pmd_t *pmd)
 {
 	unsigned long pte, flags;
@@ -3134,7 +3134,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 }
 
 void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	char *vfrom, *vto;
 
diff --git a/arch/sparc/mm/leon_mm.c b/arch/sparc/mm/leon_mm.c
index 1dc9b3d70eda..2e36b02d81d2 100644
--- a/arch/sparc/mm/leon_mm.c
+++ b/arch/sparc/mm/leon_mm.c
@@ -185,7 +185,7 @@ void leon_flush_dcache_all(void)
 			     "i"(ASI_LEON_DFLUSH) : "memory");
 }
 
-void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page)
+void leon_flush_pcache_all(struct mm_area *vma, unsigned long page)
 {
 	if (vma->vm_flags & VM_EXEC)
 		leon_flush_icache_all();
@@ -273,12 +273,12 @@ static void leon_flush_cache_mm(struct mm_struct *mm)
 	leon_flush_cache_all();
 }
 
-static void leon_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
+static void leon_flush_cache_page(struct mm_area *vma, unsigned long page)
 {
 	leon_flush_pcache_all(vma, page);
 }
 
-static void leon_flush_cache_range(struct vm_area_struct *vma,
+static void leon_flush_cache_range(struct mm_area *vma,
 				   unsigned long start,
 				   unsigned long end)
 {
@@ -290,13 +290,13 @@ static void leon_flush_tlb_mm(struct mm_struct *mm)
 	leon_flush_tlb_all();
 }
 
-static void leon_flush_tlb_page(struct vm_area_struct *vma,
+static void leon_flush_tlb_page(struct mm_area *vma,
 				unsigned long page)
 {
 	leon_flush_tlb_all();
 }
 
-static void leon_flush_tlb_range(struct vm_area_struct *vma,
+static void leon_flush_tlb_range(struct mm_area *vma,
 				 unsigned long start,
 				 unsigned long end)
 {
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index dd32711022f5..1337bc4daf6f 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -555,34 +555,34 @@ void srmmu_unmapiorange(unsigned long virt_addr, unsigned int len)
 /* tsunami.S */
 extern void tsunami_flush_cache_all(void);
 extern void tsunami_flush_cache_mm(struct mm_struct *mm);
-extern void tsunami_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-extern void tsunami_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
+extern void tsunami_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
+extern void tsunami_flush_cache_page(struct mm_area *vma, unsigned long page);
 extern void tsunami_flush_page_to_ram(unsigned long page);
 extern void tsunami_flush_page_for_dma(unsigned long page);
 extern void tsunami_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
 extern void tsunami_flush_tlb_all(void);
 extern void tsunami_flush_tlb_mm(struct mm_struct *mm);
-extern void tsunami_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-extern void tsunami_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void tsunami_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
+extern void tsunami_flush_tlb_page(struct mm_area *vma, unsigned long page);
 extern void tsunami_setup_blockops(void);
 
 /* swift.S */
 extern void swift_flush_cache_all(void);
 extern void swift_flush_cache_mm(struct mm_struct *mm);
-extern void swift_flush_cache_range(struct vm_area_struct *vma,
+extern void swift_flush_cache_range(struct mm_area *vma,
 				    unsigned long start, unsigned long end);
-extern void swift_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
+extern void swift_flush_cache_page(struct mm_area *vma, unsigned long page);
 extern void swift_flush_page_to_ram(unsigned long page);
 extern void swift_flush_page_for_dma(unsigned long page);
 extern void swift_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
 extern void swift_flush_tlb_all(void);
 extern void swift_flush_tlb_mm(struct mm_struct *mm);
-extern void swift_flush_tlb_range(struct vm_area_struct *vma,
+extern void swift_flush_tlb_range(struct mm_area *vma,
 				  unsigned long start, unsigned long end);
-extern void swift_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void swift_flush_tlb_page(struct mm_area *vma, unsigned long page);
 
 #if 0  /* P3: deadwood to debug precise flushes on Swift. */
-void swift_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void swift_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	int cctx, ctx1;
 
@@ -621,9 +621,9 @@ void swift_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 /* viking.S */
 extern void viking_flush_cache_all(void);
 extern void viking_flush_cache_mm(struct mm_struct *mm);
-extern void viking_flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+extern void viking_flush_cache_range(struct mm_area *vma, unsigned long start,
 				     unsigned long end);
-extern void viking_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
+extern void viking_flush_cache_page(struct mm_area *vma, unsigned long page);
 extern void viking_flush_page_to_ram(unsigned long page);
 extern void viking_flush_page_for_dma(unsigned long page);
 extern void viking_flush_sig_insns(struct mm_struct *mm, unsigned long addr);
@@ -631,29 +631,29 @@ extern void viking_flush_page(unsigned long page);
 extern void viking_mxcc_flush_page(unsigned long page);
 extern void viking_flush_tlb_all(void);
 extern void viking_flush_tlb_mm(struct mm_struct *mm);
-extern void viking_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void viking_flush_tlb_range(struct mm_area *vma, unsigned long start,
 				   unsigned long end);
-extern void viking_flush_tlb_page(struct vm_area_struct *vma,
+extern void viking_flush_tlb_page(struct mm_area *vma,
 				  unsigned long page);
 extern void sun4dsmp_flush_tlb_all(void);
 extern void sun4dsmp_flush_tlb_mm(struct mm_struct *mm);
-extern void sun4dsmp_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+extern void sun4dsmp_flush_tlb_range(struct mm_area *vma, unsigned long start,
 				   unsigned long end);
-extern void sun4dsmp_flush_tlb_page(struct vm_area_struct *vma,
+extern void sun4dsmp_flush_tlb_page(struct mm_area *vma,
 				  unsigned long page);
 
 /* hypersparc.S */
 extern void hypersparc_flush_cache_all(void);
 extern void hypersparc_flush_cache_mm(struct mm_struct *mm);
-extern void hypersparc_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-extern void hypersparc_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
+extern void hypersparc_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
+extern void hypersparc_flush_cache_page(struct mm_area *vma, unsigned long page);
 extern void hypersparc_flush_page_to_ram(unsigned long page);
 extern void hypersparc_flush_page_for_dma(unsigned long page);
 extern void hypersparc_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
 extern void hypersparc_flush_tlb_all(void);
 extern void hypersparc_flush_tlb_mm(struct mm_struct *mm);
-extern void hypersparc_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
-extern void hypersparc_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
+extern void hypersparc_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
+extern void hypersparc_flush_tlb_page(struct mm_area *vma, unsigned long page);
 extern void hypersparc_setup_blockops(void);
 
 /*
@@ -1235,7 +1235,7 @@ static void turbosparc_flush_cache_mm(struct mm_struct *mm)
 	FLUSH_END
 }
 
-static void turbosparc_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+static void turbosparc_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	FLUSH_BEGIN(vma->vm_mm)
 	flush_user_windows();
@@ -1243,7 +1243,7 @@ static void turbosparc_flush_cache_range(struct vm_area_struct *vma, unsigned lo
 	FLUSH_END
 }
 
-static void turbosparc_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
+static void turbosparc_flush_cache_page(struct mm_area *vma, unsigned long page)
 {
 	FLUSH_BEGIN(vma->vm_mm)
 	flush_user_windows();
@@ -1286,14 +1286,14 @@ static void turbosparc_flush_tlb_mm(struct mm_struct *mm)
 	FLUSH_END
 }
 
-static void turbosparc_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+static void turbosparc_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	FLUSH_BEGIN(vma->vm_mm)
 	srmmu_flush_whole_tlb();
 	FLUSH_END
 }
 
-static void turbosparc_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+static void turbosparc_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	FLUSH_BEGIN(vma->vm_mm)
 	srmmu_flush_whole_tlb();
@@ -1672,7 +1672,7 @@ static void smp_flush_tlb_mm(struct mm_struct *mm)
 	}
 }
 
-static void smp_flush_cache_range(struct vm_area_struct *vma,
+static void smp_flush_cache_range(struct mm_area *vma,
 				  unsigned long start,
 				  unsigned long end)
 {
@@ -1686,7 +1686,7 @@ static void smp_flush_cache_range(struct vm_area_struct *vma,
 	}
 }
 
-static void smp_flush_tlb_range(struct vm_area_struct *vma,
+static void smp_flush_tlb_range(struct mm_area *vma,
 				unsigned long start,
 				unsigned long end)
 {
@@ -1700,7 +1700,7 @@ static void smp_flush_tlb_range(struct vm_area_struct *vma,
 	}
 }
 
-static void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
+static void smp_flush_cache_page(struct mm_area *vma, unsigned long page)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
@@ -1711,7 +1711,7 @@ static void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
 	}
 }
 
-static void smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+static void smp_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index a35ddcca5e76..dd950cbd4fd7 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -231,7 +231,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 	__set_pmd_acct(mm, addr, orig, pmd);
 }
 
-static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+static inline pmd_t pmdp_establish(struct mm_area *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	pmd_t old;
@@ -247,7 +247,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 /*
  * This routine is only called when splitting a THP
  */
-pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
 		     pmd_t *pmdp)
 {
 	pmd_t old, entry;
diff --git a/arch/sparc/vdso/vma.c b/arch/sparc/vdso/vma.c
index bab7a59575e8..f8124af4d6f0 100644
--- a/arch/sparc/vdso/vma.c
+++ b/arch/sparc/vdso/vma.c
@@ -363,7 +363,7 @@ static int map_vdso(const struct vdso_image *image,
 		struct vm_special_mapping *vdso_mapping)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long text_start, addr = 0;
 	int ret = 0;
 
diff --git a/arch/um/drivers/mmapper_kern.c b/arch/um/drivers/mmapper_kern.c
index 807cd3358740..0cb875338307 100644
--- a/arch/um/drivers/mmapper_kern.c
+++ b/arch/um/drivers/mmapper_kern.c
@@ -46,7 +46,7 @@ static long mmapper_ioctl(struct file *file, unsigned int cmd, unsigned long arg
 	return -ENOIOCTLCMD;
 }
 
-static int mmapper_mmap(struct file *file, struct vm_area_struct *vma)
+static int mmapper_mmap(struct file *file, struct mm_area *vma)
 {
 	int ret = -EINVAL;
 	int size;
diff --git a/arch/um/include/asm/tlbflush.h b/arch/um/include/asm/tlbflush.h
index 13a3009942be..cb9e58edd300 100644
--- a/arch/um/include/asm/tlbflush.h
+++ b/arch/um/include/asm/tlbflush.h
@@ -35,13 +35,13 @@ extern int um_tlb_sync(struct mm_struct *mm);
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm(struct mm_struct *mm);
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
+static inline void flush_tlb_page(struct mm_area *vma,
 				  unsigned long address)
 {
 	um_tlb_mark_sync(vma->vm_mm, address, address + PAGE_SIZE);
 }
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
+static inline void flush_tlb_range(struct mm_area *vma,
 				   unsigned long start, unsigned long end)
 {
 	um_tlb_mark_sync(vma->vm_mm, start, end);
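
(The uml stubs above show the general contract: update the page tables
first, then flush through the vma so each arch can scope the shootdown.
A sketch of that pattern with a hypothetical helper; ptep_get(),
set_pte_at() and pte_wrprotect() are the stock core-mm primitives.)

static void example_wrprotect_one(struct mm_area *vma, unsigned long addr,
				  pte_t *ptep)
{
	pte_t pte = ptep_get(ptep);

	/* Change the PTE first... */
	set_pte_at(vma->vm_mm, addr, ptep, pte_wrprotect(pte));
	/* ...then flush just this address; on uml this merely marks
	 * [addr, addr + PAGE_SIZE) for a later um_tlb_sync(). */
	flush_tlb_page(vma, addr);
}
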
diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c
index cf7e0d4407f2..9d8fc85b2896 100644
--- a/arch/um/kernel/tlb.c
+++ b/arch/um/kernel/tlb.c
@@ -214,7 +214,7 @@ void flush_tlb_all(void)
 
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	for_each_vma(vmi, vma)
diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
index ce073150dc20..22dd6c703a70 100644
--- a/arch/um/kernel/trap.c
+++ b/arch/um/kernel/trap.c
@@ -26,7 +26,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
 		      int is_write, int is_user, int *code_out)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	pmd_t *pmd;
 	pte_t *pte;
 	int err = -EFAULT;
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index adb299d3b6a1..987c2d16ed16 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -50,7 +50,7 @@ int __init init_vdso_image(const struct vdso_image *image)
 struct linux_binprm;
 
 static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
-		      struct vm_area_struct *vma, struct vm_fault *vmf)
+		      struct mm_area *vma, struct vm_fault *vmf)
 {
 	const struct vdso_image *image = vma->vm_mm->context.vdso_image;
 
@@ -63,7 +63,7 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
 }
 
 static void vdso_fix_landing(const struct vdso_image *image,
-		struct vm_area_struct *new_vma)
+		struct mm_area *new_vma)
 {
 #if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
 	if (in_ia32_syscall() && image == &vdso_image_32) {
@@ -80,7 +80,7 @@ static void vdso_fix_landing(const struct vdso_image *image,
 }
 
 static int vdso_mremap(const struct vm_special_mapping *sm,
-		struct vm_area_struct *new_vma)
+		struct mm_area *new_vma)
 {
 	const struct vdso_image *image = current->mm->context.vdso_image;
 
@@ -91,7 +91,7 @@ static int vdso_mremap(const struct vm_special_mapping *sm,
 }
 
 static vm_fault_t vvar_vclock_fault(const struct vm_special_mapping *sm,
-				    struct vm_area_struct *vma, struct vm_fault *vmf)
+				    struct mm_area *vma, struct vm_fault *vmf)
 {
 	switch (vmf->pgoff) {
 #ifdef CONFIG_PARAVIRT_CLOCK
@@ -139,7 +139,7 @@ static const struct vm_special_mapping vvar_vclock_mapping = {
 static int map_vdso(const struct vdso_image *image, unsigned long addr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long text_start;
 	int ret = 0;
 
@@ -203,7 +203,7 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
 int map_vdso_once(const struct vdso_image *image, unsigned long addr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	mmap_write_lock(mm);
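
(For readers following the vdso changes: vdso_fault() and vdso_mremap()
are ops of a struct vm_special_mapping, and map_vdso() installs that
mapping; the vma it returns is where the renamed type shows up.  A rough
sketch -- names approximate, but _install_special_mapping() as in the
current tree:)

static const struct vm_special_mapping vdso_text_mapping = {
	.name   = "[vdso]",
	.fault  = vdso_fault,
	.mremap = vdso_mremap,
};

	/* Inside map_vdso(), under mmap_write_lock(mm): */
	struct mm_area *vma;

	vma = _install_special_mapping(mm, text_start, image->size,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
				       &vdso_text_mapping);
	if (IS_ERR(vma))
		ret = PTR_ERR(vma);
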
diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
index 2fb7d53cf333..155a54569893 100644
--- a/arch/x86/entry/vsyscall/vsyscall_64.c
+++ b/arch/x86/entry/vsyscall/vsyscall_64.c
@@ -275,14 +275,14 @@ bool emulate_vsyscall(unsigned long error_code,
  * covers the 64bit vsyscall page now. 32bit has a real VMA now and does
  * not need special handling anymore:
  */
-static const char *gate_vma_name(struct vm_area_struct *vma)
+static const char *gate_vma_name(struct mm_area *vma)
 {
 	return "[vsyscall]";
 }
 static const struct vm_operations_struct gate_vma_ops = {
 	.name = gate_vma_name,
 };
-static struct vm_area_struct gate_vma __ro_after_init = {
+static struct mm_area gate_vma __ro_after_init = {
 	.vm_start	= VSYSCALL_ADDR,
 	.vm_end		= VSYSCALL_ADDR + PAGE_SIZE,
 	.vm_page_prot	= PAGE_READONLY_EXEC,
@@ -290,7 +290,7 @@ static struct vm_area_struct gate_vma __ro_after_init = {
 	.vm_ops		= &gate_vma_ops,
 };
 
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+struct mm_area *get_gate_vma(struct mm_struct *mm)
 {
 #ifdef CONFIG_COMPAT
 	if (!mm || !test_bit(MM_CONTEXT_HAS_VSYSCALL, &mm->context.flags))
@@ -303,7 +303,7 @@ struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
 
 int in_gate_area(struct mm_struct *mm, unsigned long addr)
 {
-	struct vm_area_struct *vma = get_gate_vma(mm);
+	struct mm_area *vma = get_gate_vma(mm);
 
 	if (!vma)
 		return 0;
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 2398058b6e83..45915a6f2b9e 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -256,7 +256,7 @@ static inline bool is_64bit_mm(struct mm_struct *mm)
  * So do not enforce things if the VMA is not from the current
  * mm, or if we are in a kernel thread.
  */
-static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+static inline bool arch_vma_access_permitted(struct mm_area *vma,
 		bool write, bool execute, bool foreign)
 {
 	/* pkeys never affect instruction fetches */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c4c23190925c..3e73c01c3ba0 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -402,7 +402,7 @@ static inline pgdval_t pgd_val(pgd_t pgd)
 }
 
 #define  __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
-static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
+static inline pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr,
 					   pte_t *ptep)
 {
 	pteval_t ret;
@@ -412,7 +412,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned
 	return (pte_t) { .pte = ret };
 }
 
-static inline void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
+static inline void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
 					   pte_t *ptep, pte_t old_pte, pte_t pte)
 {
 
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 631c306ce1ff..dd67df3d8d0d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -21,7 +21,7 @@ struct task_struct;
 struct cpumask;
 struct flush_tlb_info;
 struct mmu_gather;
-struct vm_area_struct;
+struct mm_area;
 
 /*
  * Wrapper type for pointers to code which uses the non-standard
@@ -168,9 +168,9 @@ struct pv_mmu_ops {
 	void (*set_pte)(pte_t *ptep, pte_t pteval);
 	void (*set_pmd)(pmd_t *pmdp, pmd_t pmdval);
 
-	pte_t (*ptep_modify_prot_start)(struct vm_area_struct *vma, unsigned long addr,
+	pte_t (*ptep_modify_prot_start)(struct mm_area *vma, unsigned long addr,
 					pte_t *ptep);
-	void (*ptep_modify_prot_commit)(struct vm_area_struct *vma, unsigned long addr,
+	void (*ptep_modify_prot_commit)(struct mm_area *vma, unsigned long addr,
 					pte_t *ptep, pte_t pte);
 
 	struct paravirt_callee_save pte_val;
diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
index dabafba957ea..b39a39a46f7a 100644
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -122,7 +122,7 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
 
 #ifndef pmdp_establish
 #define pmdp_establish pmdp_establish
-static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+static inline pmd_t pmdp_establish(struct mm_area *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	pmd_t old;
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 5ddba366d3b4..1415b469056b 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -498,8 +498,8 @@ static inline pte_t pte_mkwrite_novma(pte_t pte)
 	return pte_set_flags(pte, _PAGE_RW);
 }
 
-struct vm_area_struct;
-pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma);
+struct mm_area;
+pte_t pte_mkwrite(pte_t pte, struct mm_area *vma);
 #define pte_mkwrite pte_mkwrite
 
 static inline pte_t pte_mkhuge(pte_t pte)
@@ -623,7 +623,7 @@ static inline pmd_t pmd_mkwrite_novma(pmd_t pmd)
 	return pmd_set_flags(pmd, _PAGE_RW);
 }
 
-pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
+pmd_t pmd_mkwrite(pmd_t pmd, struct mm_area *vma);
 #define pmd_mkwrite pmd_mkwrite
 
 /* See comments above mksaveddirty_shift() */
@@ -1291,19 +1291,19 @@ static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
  * race with other CPU's that might be updating the dirty
  * bit at the same time.
  */
-struct vm_area_struct;
+struct mm_area;
 
 #define  __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-extern int ptep_set_access_flags(struct vm_area_struct *vma,
+extern int ptep_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pte_t *ptep,
 				 pte_t entry, int dirty);
 
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-extern int ptep_test_and_clear_young(struct vm_area_struct *vma,
+extern int ptep_test_and_clear_young(struct mm_area *vma,
 				     unsigned long addr, pte_t *ptep);
 
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-extern int ptep_clear_flush_young(struct vm_area_struct *vma,
+extern int ptep_clear_flush_young(struct mm_area *vma,
 				  unsigned long address, pte_t *ptep);
 
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
@@ -1356,21 +1356,21 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
 #define mk_pmd(page, pgprot)   pfn_pmd(page_to_pfn(page), (pgprot))
 
 #define  __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
-extern int pmdp_set_access_flags(struct vm_area_struct *vma,
+extern int pmdp_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pmd_t *pmdp,
 				 pmd_t entry, int dirty);
-extern int pudp_set_access_flags(struct vm_area_struct *vma,
+extern int pudp_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pud_t *pudp,
 				 pud_t entry, int dirty);
 
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
-extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+extern int pmdp_test_and_clear_young(struct mm_area *vma,
 				     unsigned long addr, pmd_t *pmdp);
-extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
+extern int pudp_test_and_clear_young(struct mm_area *vma,
 				     unsigned long addr, pud_t *pudp);
 
 #define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
-extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
+extern int pmdp_clear_flush_young(struct mm_area *vma,
 				  unsigned long address, pmd_t *pmdp);
 
 
@@ -1415,7 +1415,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 
 #ifndef pmdp_establish
 #define pmdp_establish pmdp_establish
-static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+static inline pmd_t pmdp_establish(struct mm_area *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
@@ -1430,7 +1430,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 #endif
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static inline pud_t pudp_establish(struct vm_area_struct *vma,
+static inline pud_t pudp_establish(struct mm_area *vma,
 		unsigned long address, pud_t *pudp, pud_t pud)
 {
 	page_table_check_pud_set(vma->vm_mm, pudp, pud);
@@ -1445,10 +1445,10 @@ static inline pud_t pudp_establish(struct vm_area_struct *vma,
 #endif
 
 #define __HAVE_ARCH_PMDP_INVALIDATE_AD
-extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
+extern pmd_t pmdp_invalidate_ad(struct mm_area *vma,
 				unsigned long address, pmd_t *pmdp);
 
-pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
 		      pud_t *pudp);
 
 /*
@@ -1554,20 +1554,20 @@ static inline unsigned long page_level_mask(enum pg_level level)
  * The x86 doesn't have any external MMU info: the kernel page
  * tables contain all the necessary information.
  */
-static inline void update_mmu_cache(struct vm_area_struct *vma,
+static inline void update_mmu_cache(struct mm_area *vma,
 		unsigned long addr, pte_t *ptep)
 {
 }
 static inline void update_mmu_cache_range(struct vm_fault *vmf,
-		struct vm_area_struct *vma, unsigned long addr,
+		struct mm_area *vma, unsigned long addr,
 		pte_t *ptep, unsigned int nr)
 {
 }
-static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pmd(struct mm_area *vma,
 		unsigned long addr, pmd_t *pmd)
 {
 }
-static inline void update_mmu_cache_pud(struct vm_area_struct *vma,
+static inline void update_mmu_cache_pud(struct mm_area *vma,
 		unsigned long addr, pud_t *pud)
 {
 }
@@ -1724,13 +1724,13 @@ static inline bool arch_has_pfn_modify_check(void)
 }
 
 #define arch_check_zapped_pte arch_check_zapped_pte
-void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);
+void arch_check_zapped_pte(struct mm_area *vma, pte_t pte);
 
 #define arch_check_zapped_pmd arch_check_zapped_pmd
-void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd);
+void arch_check_zapped_pmd(struct mm_area *vma, pmd_t pmd);
 
 #define arch_check_zapped_pud arch_check_zapped_pud
-void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud);
+void arch_check_zapped_pud(struct mm_area *vma, pud_t pud);
 
 #ifdef CONFIG_XEN_PV
 #define arch_has_hw_nonleaf_pmd_young arch_has_hw_nonleaf_pmd_young
diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
index b612cc57a4d3..ce08b06f7b85 100644
--- a/arch/x86/include/asm/pgtable_32.h
+++ b/arch/x86/include/asm/pgtable_32.h
@@ -23,7 +23,7 @@
 #include <linux/spinlock.h>
 
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 
 extern pgd_t swapper_pg_dir[1024];
 extern pgd_t initial_page_table[1024];
diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
index 2e6c04d8a45b..c92d445a2d4d 100644
--- a/arch/x86/include/asm/pkeys.h
+++ b/arch/x86/include/asm/pkeys.h
@@ -30,9 +30,9 @@ static inline int execute_only_pkey(struct mm_struct *mm)
 	return __execute_only_pkey(mm);
 }
 
-extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
+extern int __arch_override_mprotect_pkey(struct mm_area *vma,
 		int prot, int pkey);
-static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
+static inline int arch_override_mprotect_pkey(struct mm_area *vma,
 		int prot, int pkey)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
@@ -115,7 +115,7 @@ int mm_pkey_free(struct mm_struct *mm, int pkey)
 	return 0;
 }
 
-static inline int vma_pkey(struct vm_area_struct *vma)
+static inline int vma_pkey(struct mm_area *vma)
 {
 	unsigned long vma_pkey_mask = VM_PKEY_BIT0 | VM_PKEY_BIT1 |
 				      VM_PKEY_BIT2 | VM_PKEY_BIT3;
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e9b81876ebe4..0db9ba656abc 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -319,7 +319,7 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				bool freed_tables);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
-static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
+static inline void flush_tlb_page(struct mm_area *vma, unsigned long a)
 {
 	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
 }
diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
index 92ea1472bde9..a223490e1042 100644
--- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
+++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
@@ -1484,7 +1484,7 @@ static int pseudo_lock_dev_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int pseudo_lock_dev_mremap(struct vm_area_struct *area)
+static int pseudo_lock_dev_mremap(struct mm_area *area)
 {
 	/* Not supported */
 	return -EINVAL;
@@ -1494,7 +1494,7 @@ static const struct vm_operations_struct pseudo_mmap_ops = {
 	.mremap = pseudo_lock_dev_mremap,
 };
 
-static int pseudo_lock_dev_mmap(struct file *filp, struct vm_area_struct *vma)
+static int pseudo_lock_dev_mmap(struct file *filp, struct mm_area *vma)
 {
 	unsigned long vsize = vma->vm_end - vma->vm_start;
 	unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
index 7f8d1e11dbee..e7e41b05b5c8 100644
--- a/arch/x86/kernel/cpu/sgx/driver.c
+++ b/arch/x86/kernel/cpu/sgx/driver.c
@@ -81,7 +81,7 @@ static int sgx_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
+static int sgx_mmap(struct file *file, struct mm_area *vma)
 {
 	struct sgx_encl *encl = file->private_data;
 	int ret;
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 279148e72459..8455a87e56f2 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -324,7 +324,7 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
  * Returns: Appropriate vm_fault_t: VM_FAULT_NOPAGE when PTE was installed
  * successfully, VM_FAULT_SIGBUS or VM_FAULT_OOM as error otherwise.
  */
-static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
+static vm_fault_t sgx_encl_eaug_page(struct mm_area *vma,
 				     struct sgx_encl *encl, unsigned long addr)
 {
 	vm_fault_t vmret = VM_FAULT_SIGBUS;
@@ -430,7 +430,7 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
 static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 {
 	unsigned long addr = (unsigned long)vmf->address;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct sgx_encl_page *entry;
 	unsigned long phys_addr;
 	struct sgx_encl *encl;
@@ -484,7 +484,7 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
 	return VM_FAULT_NOPAGE;
 }
 
-static void sgx_vma_open(struct vm_area_struct *vma)
+static void sgx_vma_open(struct mm_area *vma)
 {
 	struct sgx_encl *encl = vma->vm_private_data;
 
@@ -567,7 +567,7 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
 	return ret;
 }
 
-static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
+static int sgx_vma_mprotect(struct mm_area *vma, unsigned long start,
 			    unsigned long end, unsigned long newflags)
 {
 	return sgx_encl_may_map(vma->vm_private_data, start, end, newflags);
@@ -625,7 +625,7 @@ static struct sgx_encl_page *sgx_encl_reserve_page(struct sgx_encl *encl,
 	return entry;
 }
 
-static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
+static int sgx_vma_access(struct mm_area *vma, unsigned long addr,
 			  void *buf, int len, int write)
 {
 	struct sgx_encl *encl = vma->vm_private_data;
@@ -1137,7 +1137,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
 {
 	unsigned long addr = page->desc & PAGE_MASK;
 	struct sgx_encl *encl = page->encl;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret;
 
 	ret = sgx_encl_find(mm, addr, &vma);
@@ -1200,7 +1200,7 @@ void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
 {
 	unsigned long mm_list_version;
 	struct sgx_encl_mm *encl_mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int idx, ret;
 
 	do {
diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index f94ff14c9486..de567cd442bc 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -87,9 +87,9 @@ struct sgx_backing {
 extern const struct vm_operations_struct sgx_vm_ops;
 
 static inline int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
-				struct vm_area_struct **vma)
+				struct mm_area **vma)
 {
-	struct vm_area_struct *result;
+	struct mm_area *result;
 
 	result = vma_lookup(mm, addr);
 	if (!result || result->vm_ops != &sgx_vm_ops)
diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
index 776a20172867..b25b51724b3a 100644
--- a/arch/x86/kernel/cpu/sgx/ioctl.c
+++ b/arch/x86/kernel/cpu/sgx/ioctl.c
@@ -209,7 +209,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
 			       struct sgx_secinfo *secinfo, unsigned long src)
 {
 	struct sgx_pageinfo pginfo;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct page *src_page;
 	int ret;
 
diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
index 7aaa3652e31d..a601d9e1d867 100644
--- a/arch/x86/kernel/cpu/sgx/virt.c
+++ b/arch/x86/kernel/cpu/sgx/virt.c
@@ -31,7 +31,7 @@ static struct mutex zombie_secs_pages_lock;
 static struct list_head zombie_secs_pages;
 
 static int __sgx_vepc_fault(struct sgx_vepc *vepc,
-			    struct vm_area_struct *vma, unsigned long addr)
+			    struct mm_area *vma, unsigned long addr)
 {
 	struct sgx_epc_page *epc_page;
 	unsigned long index, pfn;
@@ -73,7 +73,7 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
 
 static vm_fault_t sgx_vepc_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct sgx_vepc *vepc = vma->vm_private_data;
 	int ret;
 
@@ -96,7 +96,7 @@ static const struct vm_operations_struct sgx_vepc_vm_ops = {
 	.fault = sgx_vepc_fault,
 };
 
-static int sgx_vepc_mmap(struct file *file, struct vm_area_struct *vma)
+static int sgx_vepc_mmap(struct file *file, struct mm_area *vma)
 {
 	struct sgx_vepc *vepc = file->private_data;
 
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 059685612362..f18dd5e2beff 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -294,7 +294,7 @@ static int shstk_push_sigframe(unsigned long *ssp)
 
 static int shstk_pop_sigframe(unsigned long *ssp)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long token_addr;
 	bool need_to_check_vma;
 	int err = 1;
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 776ae6fa7f2d..ab965bc812a7 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -128,7 +128,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
 		       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_unmapped_area_info info = {};
 	unsigned long begin, end;
 
@@ -168,7 +168,7 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr0,
 			  unsigned long len, unsigned long pgoff,
 			  unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	struct vm_unmapped_area_info info = {};
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 296d294142c8..9255779b17f4 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -836,7 +836,7 @@ bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 static void
 __bad_area(struct pt_regs *regs, unsigned long error_code,
 	   unsigned long address, struct mm_struct *mm,
-	   struct vm_area_struct *vma, u32 pkey, int si_code)
+	   struct mm_area *vma, u32 pkey, int si_code)
 {
 	/*
 	 * Something tried to access memory that isn't in our memory map..
@@ -851,7 +851,7 @@ __bad_area(struct pt_regs *regs, unsigned long error_code,
 }
 
 static inline bool bad_area_access_from_pkeys(unsigned long error_code,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	/* This code is always called on the current mm */
 	bool foreign = false;
@@ -870,7 +870,7 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 static noinline void
 bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
 		      unsigned long address, struct mm_struct *mm,
-		      struct vm_area_struct *vma)
+		      struct mm_area *vma)
 {
 	/*
 	 * This OSPKE check is not strictly necessary at runtime.
@@ -1049,7 +1049,7 @@ NOKPROBE_SYMBOL(spurious_kernel_fault);
 int show_unhandled_signals = 1;
 
 static inline int
-access_error(unsigned long error_code, struct vm_area_struct *vma)
+access_error(unsigned long error_code, struct mm_area *vma)
 {
 	/* This is only called for the current mm, so: */
 	bool foreign = false;
@@ -1211,7 +1211,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 			unsigned long error_code,
 			unsigned long address)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
 	vm_fault_t fault;
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index 72d8cbc61158..f301b40be91b 100644
--- a/arch/x86/mm/pat/memtype.c
+++ b/arch/x86/mm/pat/memtype.c
@@ -932,7 +932,7 @@ static void free_pfn_range(u64 paddr, unsigned long size)
 		memtype_free(paddr, paddr + size);
 }
 
-static int follow_phys(struct vm_area_struct *vma, unsigned long *prot,
+static int follow_phys(struct mm_area *vma, unsigned long *prot,
 		resource_size_t *phys)
 {
 	struct follow_pfnmap_args args = { .vma = vma, .address = vma->vm_start };
@@ -952,7 +952,7 @@ static int follow_phys(struct vm_area_struct *vma, unsigned long *prot,
 	return 0;
 }
 
-static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
+static int get_pat_info(struct mm_area *vma, resource_size_t *paddr,
 		pgprot_t *pgprot)
 {
 	unsigned long prot;
@@ -984,8 +984,8 @@ static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
 	return -EINVAL;
 }
 
-int track_pfn_copy(struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma, unsigned long *pfn)
+int track_pfn_copy(struct mm_area *dst_vma,
+		struct mm_area *src_vma, unsigned long *pfn)
 {
 	const unsigned long vma_size = src_vma->vm_end - src_vma->vm_start;
 	resource_size_t paddr;
@@ -1011,7 +1011,7 @@ int track_pfn_copy(struct vm_area_struct *dst_vma,
 	return 0;
 }
 
-void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
+void untrack_pfn_copy(struct mm_area *dst_vma, unsigned long pfn)
 {
 	untrack_pfn(dst_vma, pfn, dst_vma->vm_end - dst_vma->vm_start, true);
 	/*
@@ -1026,7 +1026,7 @@ void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
  * reserve the entire pfn + size range with single reserve_pfn_range
  * call.
  */
-int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
+int track_pfn_remap(struct mm_area *vma, pgprot_t *prot,
 		    unsigned long pfn, unsigned long addr, unsigned long size)
 {
 	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
@@ -1066,7 +1066,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
 	return 0;
 }
 
-void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
+void track_pfn_insert(struct mm_area *vma, pgprot_t *prot, pfn_t pfn)
 {
 	enum page_cache_mode pcm;
 
@@ -1084,7 +1084,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
  * untrack can be called for a specific region indicated by pfn and size or
  * can be for the entire vma (in which case pfn, size are zero).
  */
-void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+void untrack_pfn(struct mm_area *vma, unsigned long pfn,
 		 unsigned long size, bool mm_wr_locked)
 {
 	resource_size_t paddr;
@@ -1108,7 +1108,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
 	}
 }
 
-void untrack_pfn_clear(struct vm_area_struct *vma)
+void untrack_pfn_clear(struct mm_area *vma)
 {
 	vm_flags_clear(vma, VM_PAT);
 }
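
(As the comments in memtype.c say, track_pfn_remap() and untrack_pfn()
are driven from the generic remap path rather than called by drivers
directly.  A hedged sketch of the kind of mmap handler that exercises
them on x86; demo_mmap and DEMO_PHYS_BASE are hypothetical.)

static int demo_mmap(struct file *file, struct mm_area *vma)
{
	unsigned long pfn = DEMO_PHYS_BASE >> PAGE_SHIFT;

	/* remap_pfn_range() reserves the memtype via track_pfn_remap();
	 * the reservation is dropped via untrack_pfn() on unmap. */
	return remap_pfn_range(vma, vma->vm_start, pfn,
			       vma->vm_end - vma->vm_start,
			       vma->vm_page_prot);
}
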
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index a05fcddfc811..c0105e8b5130 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -458,7 +458,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd)
  * to also make the pte writeable at the same time the dirty bit is
  * set. In that case we do actually need to write the PTE.
  */
-int ptep_set_access_flags(struct vm_area_struct *vma,
+int ptep_set_access_flags(struct mm_area *vma,
 			  unsigned long address, pte_t *ptep,
 			  pte_t entry, int dirty)
 {
@@ -471,7 +471,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-int pmdp_set_access_flags(struct vm_area_struct *vma,
+int pmdp_set_access_flags(struct mm_area *vma,
 			  unsigned long address, pmd_t *pmdp,
 			  pmd_t entry, int dirty)
 {
@@ -492,7 +492,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 	return changed;
 }
 
-int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+int pudp_set_access_flags(struct mm_area *vma, unsigned long address,
 			  pud_t *pudp, pud_t entry, int dirty)
 {
 	int changed = !pud_same(*pudp, entry);
@@ -513,7 +513,7 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 }
 #endif
 
-int ptep_test_and_clear_young(struct vm_area_struct *vma,
+int ptep_test_and_clear_young(struct mm_area *vma,
 			      unsigned long addr, pte_t *ptep)
 {
 	int ret = 0;
@@ -526,7 +526,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
 }
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
-int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+int pmdp_test_and_clear_young(struct mm_area *vma,
 			      unsigned long addr, pmd_t *pmdp)
 {
 	int ret = 0;
@@ -540,7 +540,7 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 #endif
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-int pudp_test_and_clear_young(struct vm_area_struct *vma,
+int pudp_test_and_clear_young(struct mm_area *vma,
 			      unsigned long addr, pud_t *pudp)
 {
 	int ret = 0;
@@ -553,7 +553,7 @@ int pudp_test_and_clear_young(struct vm_area_struct *vma,
 }
 #endif
 
-int ptep_clear_flush_young(struct vm_area_struct *vma,
+int ptep_clear_flush_young(struct mm_area *vma,
 			   unsigned long address, pte_t *ptep)
 {
 	/*
@@ -573,7 +573,7 @@ int ptep_clear_flush_young(struct vm_area_struct *vma,
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-int pmdp_clear_flush_young(struct vm_area_struct *vma,
+int pmdp_clear_flush_young(struct mm_area *vma,
 			   unsigned long address, pmd_t *pmdp)
 {
 	int young;
@@ -587,7 +587,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 	return young;
 }
 
-pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate_ad(struct mm_area *vma, unsigned long address,
 			 pmd_t *pmdp)
 {
 	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
@@ -602,7 +602,7 @@ pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
 		     pud_t *pudp)
 {
 	VM_WARN_ON_ONCE(!pud_present(*pudp));
@@ -858,7 +858,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 #endif /* CONFIG_X86_64 */
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
+pte_t pte_mkwrite(pte_t pte, struct mm_area *vma)
 {
 	if (vma->vm_flags & VM_SHADOW_STACK)
 		return pte_mkwrite_shstk(pte);
@@ -868,7 +868,7 @@ pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
 	return pte_clear_saveddirty(pte);
 }
 
-pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
+pmd_t pmd_mkwrite(pmd_t pmd, struct mm_area *vma)
 {
 	if (vma->vm_flags & VM_SHADOW_STACK)
 		return pmd_mkwrite_shstk(pmd);
@@ -878,7 +878,7 @@ pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 	return pmd_clear_saveddirty(pmd);
 }
 
-void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte)
+void arch_check_zapped_pte(struct mm_area *vma, pte_t pte)
 {
 	/*
 	 * Hardware before shadow stack can (rarely) set Dirty=1
@@ -891,14 +891,14 @@ void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte)
 			pte_shstk(pte));
 }
 
-void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd)
+void arch_check_zapped_pmd(struct mm_area *vma, pmd_t pmd)
 {
 	/* See note in arch_check_zapped_pte() */
 	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
 			pmd_shstk(pmd));
 }
 
-void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
+void arch_check_zapped_pud(struct mm_area *vma, pud_t pud)
 {
 	/* See note in arch_check_zapped_pte() */
 	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pud_shstk(pud));
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 7418c367e328..8626515f8331 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -59,7 +59,7 @@ int __execute_only_pkey(struct mm_struct *mm)
 	return execute_only_pkey;
 }
 
-static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
+static inline bool vma_is_pkey_exec_only(struct mm_area *vma)
 {
 	/* Do this check first since the vm_flags should be hot */
 	if ((vma->vm_flags & VM_ACCESS_FLAGS) != VM_EXEC)
@@ -73,7 +73,7 @@ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
 /*
  * This is only called for *plain* mprotect calls.
  */
-int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, int pkey)
+int __arch_override_mprotect_pkey(struct mm_area *vma, int prot, int pkey)
 {
 	/*
 	 * Is this an mprotect_pkey() call?  If so, never
diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c
index 29b2203bc82c..495b032f68f5 100644
--- a/arch/x86/um/mem_32.c
+++ b/arch/x86/um/mem_32.c
@@ -6,7 +6,7 @@
 #include <linux/mm.h>
 #include <asm/elf.h>
 
-static struct vm_area_struct gate_vma;
+static struct mm_area gate_vma;
 
 static int __init gate_vma_init(void)
 {
@@ -23,7 +23,7 @@ static int __init gate_vma_init(void)
 }
 __initcall(gate_vma_init);
 
-struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+struct mm_area *get_gate_vma(struct mm_struct *mm)
 {
 	return FIXADDR_USER_START ? &gate_vma : NULL;
 }
@@ -41,7 +41,7 @@ int in_gate_area_no_mm(unsigned long addr)
 
 int in_gate_area(struct mm_struct *mm, unsigned long addr)
 {
-	struct vm_area_struct *vma = get_gate_vma(mm);
+	struct mm_area *vma = get_gate_vma(mm);
 
 	if (!vma)
 		return 0;
diff --git a/arch/x86/um/mem_64.c b/arch/x86/um/mem_64.c
index c027e93d1002..5fd2a34ebe23 100644
--- a/arch/x86/um/mem_64.c
+++ b/arch/x86/um/mem_64.c
@@ -2,7 +2,7 @@
 #include <linux/mm.h>
 #include <asm/elf.h>
 
-const char *arch_vma_name(struct vm_area_struct *vma)
+const char *arch_vma_name(struct mm_area *vma)
 {
 	if (vma->vm_mm && vma->vm_start == um_vdso_addr)
 		return "[vdso]";
diff --git a/arch/x86/um/vdso/vma.c b/arch/x86/um/vdso/vma.c
index dc8dfb2abd80..2f80bb140815 100644
--- a/arch/x86/um/vdso/vma.c
+++ b/arch/x86/um/vdso/vma.c
@@ -41,7 +41,7 @@ subsys_initcall(init_vdso);
 
 int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm = current->mm;
 	static struct vm_special_mapping vdso_mapping = {
 		.name = "[vdso]",
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index c4c479373249..c268d7d323ab 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -38,7 +38,7 @@ xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 EXPORT_SYMBOL_GPL(arbitrary_virt_to_machine);
 
 /* Returns: 0 success */
-int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
+int xen_unmap_domain_gfn_range(struct mm_area *vma,
 			       int nr, struct page **pages)
 {
 	if (xen_feature(XENFEAT_auto_translated_physmap))
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 38971c6dcd4b..ddb7a5dcce88 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -348,7 +348,7 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
 	__xen_set_pte(ptep, pteval);
 }
 
-static pte_t xen_ptep_modify_prot_start(struct vm_area_struct *vma,
+static pte_t xen_ptep_modify_prot_start(struct mm_area *vma,
 					unsigned long addr, pte_t *ptep)
 {
 	/* Just return the pte as-is.  We preserve the bits on commit */
@@ -356,7 +356,7 @@ static pte_t xen_ptep_modify_prot_start(struct vm_area_struct *vma,
 	return *ptep;
 }
 
-static void xen_ptep_modify_prot_commit(struct vm_area_struct *vma,
+static void xen_ptep_modify_prot_commit(struct mm_area *vma,
 					unsigned long addr,
 					pte_t *ptep, pte_t pte)
 {
@@ -2494,7 +2494,7 @@ static int remap_area_pfn_pte_fn(pte_t *ptep, unsigned long addr, void *data)
 	return 0;
 }
 
-int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
+int xen_remap_pfn(struct mm_area *vma, unsigned long addr,
 		  xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,
 		  unsigned int domid, bool no_translate)
 {
diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
index a2b6bb5429f5..6d4a401875c2 100644
--- a/arch/xtensa/include/asm/cacheflush.h
+++ b/arch/xtensa/include/asm/cacheflush.h
@@ -96,9 +96,9 @@ static inline void __invalidate_icache_page_alias(unsigned long virt,
 
 #ifdef CONFIG_SMP
 void flush_cache_all(void);
-void flush_cache_range(struct vm_area_struct*, ulong, ulong);
+void flush_cache_range(struct mm_area*, ulong, ulong);
 void flush_icache_range(unsigned long start, unsigned long end);
-void flush_cache_page(struct vm_area_struct*,
+void flush_cache_page(struct mm_area*,
 			     unsigned long, unsigned long);
 #define flush_cache_all flush_cache_all
 #define flush_cache_range flush_cache_range
@@ -133,9 +133,9 @@ static inline void flush_dcache_page(struct page *page)
 	flush_dcache_folio(page_folio(page));
 }
 
-void local_flush_cache_range(struct vm_area_struct *vma,
+void local_flush_cache_range(struct mm_area *vma,
 		unsigned long start, unsigned long end);
-void local_flush_cache_page(struct vm_area_struct *vma,
+void local_flush_cache_page(struct mm_area *vma,
 		unsigned long address, unsigned long pfn);
 
 #else
@@ -155,9 +155,9 @@ void local_flush_cache_page(struct vm_area_struct *vma,
 
 #if defined(CONFIG_MMU) && (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-extern void copy_to_user_page(struct vm_area_struct*, struct page*,
+extern void copy_to_user_page(struct mm_area*, struct page*,
 		unsigned long, void*, const void*, unsigned long);
-extern void copy_from_user_page(struct vm_area_struct*, struct page*,
+extern void copy_from_user_page(struct mm_area*, struct page*,
 		unsigned long, void*, const void*, unsigned long);
 #define copy_to_user_page copy_to_user_page
 #define copy_from_user_page copy_from_user_page
diff --git a/arch/xtensa/include/asm/page.h b/arch/xtensa/include/asm/page.h
index 644413792bf3..47df5872733a 100644
--- a/arch/xtensa/include/asm/page.h
+++ b/arch/xtensa/include/asm/page.h
@@ -106,7 +106,7 @@ typedef struct page *pgtable_t;
 # include <asm-generic/getorder.h>
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 extern void clear_page(void *page);
 extern void copy_page(void *to, void *from);
 
@@ -124,7 +124,7 @@ extern void copy_page_alias(void *to, void *from,
 void clear_user_highpage(struct page *page, unsigned long vaddr);
 #define __HAVE_ARCH_COPY_USER_HIGHPAGE
 void copy_user_highpage(struct page *to, struct page *from,
-			unsigned long vaddr, struct vm_area_struct *vma);
+			unsigned long vaddr, struct mm_area *vma);
 #else
 # define clear_user_page(page, vaddr, pg)	clear_page(page)
 # define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
index 1647a7cc3fbf..247b9d7b91b4 100644
--- a/arch/xtensa/include/asm/pgtable.h
+++ b/arch/xtensa/include/asm/pgtable.h
@@ -313,10 +313,10 @@ set_pmd(pmd_t *pmdp, pmd_t pmdval)
 	*pmdp = pmdval;
 }
 
-struct vm_area_struct;
+struct mm_area;
 
 static inline int
-ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr,
+ptep_test_and_clear_young(struct mm_area *vma, unsigned long addr,
 			  pte_t *ptep)
 {
 	pte_t pte = *ptep;
@@ -403,14 +403,14 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
 #else
 
 struct vm_fault;
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr);
 #define update_mmu_cache(vma, address, ptep) \
 	update_mmu_cache_range(NULL, vma, address, ptep, 1)
 
 typedef pte_t *pte_addr_t;
 
-void update_mmu_tlb_range(struct vm_area_struct *vma,
+void update_mmu_tlb_range(struct mm_area *vma,
 		unsigned long address, pte_t *ptep, unsigned int nr);
 #define update_mmu_tlb_range update_mmu_tlb_range
 
diff --git a/arch/xtensa/include/asm/tlbflush.h b/arch/xtensa/include/asm/tlbflush.h
index 573df8cea200..36a5ca4f41b8 100644
--- a/arch/xtensa/include/asm/tlbflush.h
+++ b/arch/xtensa/include/asm/tlbflush.h
@@ -32,9 +32,9 @@
 
 void local_flush_tlb_all(void);
 void local_flush_tlb_mm(struct mm_struct *mm);
-void local_flush_tlb_page(struct vm_area_struct *vma,
+void local_flush_tlb_page(struct mm_area *vma,
 		unsigned long page);
-void local_flush_tlb_range(struct vm_area_struct *vma,
+void local_flush_tlb_range(struct mm_area *vma,
 		unsigned long start, unsigned long end);
 void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
@@ -42,8 +42,8 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 void flush_tlb_all(void);
 void flush_tlb_mm(struct mm_struct *);
-void flush_tlb_page(struct vm_area_struct *, unsigned long);
-void flush_tlb_range(struct vm_area_struct *, unsigned long,
+void flush_tlb_page(struct mm_area *, unsigned long);
+void flush_tlb_range(struct mm_area *, unsigned long,
 		unsigned long);
 void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
diff --git a/arch/xtensa/kernel/pci.c b/arch/xtensa/kernel/pci.c
index 62c900e400d6..81f6d62f9bff 100644
--- a/arch/xtensa/kernel/pci.c
+++ b/arch/xtensa/kernel/pci.c
@@ -71,7 +71,7 @@ void pcibios_fixup_bus(struct pci_bus *bus)
  *  -- paulus.
  */
 
-int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
+int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma)
 {
 	struct pci_controller *pci_ctrl = (struct pci_controller*) pdev->sysdata;
 	resource_size_t ioaddr = pci_resource_start(pdev, bar);
diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
index 94a23f100726..66c0c20799ef 100644
--- a/arch/xtensa/kernel/smp.c
+++ b/arch/xtensa/kernel/smp.c
@@ -468,7 +468,7 @@ int setup_profiling_timer(unsigned int multiplier)
 /* TLB flush functions */
 
 struct flush_data {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr1;
 	unsigned long addr2;
 };
@@ -499,7 +499,7 @@ static void ipi_flush_tlb_page(void *arg)
 	local_flush_tlb_page(fd->vma, fd->addr1);
 }
 
-void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+void flush_tlb_page(struct mm_area *vma, unsigned long addr)
 {
 	struct flush_data fd = {
 		.vma = vma,
@@ -514,7 +514,7 @@ static void ipi_flush_tlb_range(void *arg)
 	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
 }
 
-void flush_tlb_range(struct vm_area_struct *vma,
+void flush_tlb_range(struct mm_area *vma,
 		     unsigned long start, unsigned long end)
 {
 	struct flush_data fd = {
@@ -558,7 +558,7 @@ static void ipi_flush_cache_page(void *arg)
 	local_flush_cache_page(fd->vma, fd->addr1, fd->addr2);
 }
 
-void flush_cache_page(struct vm_area_struct *vma,
+void flush_cache_page(struct mm_area *vma,
 		     unsigned long address, unsigned long pfn)
 {
 	struct flush_data fd = {
@@ -575,7 +575,7 @@ static void ipi_flush_cache_range(void *arg)
 	local_flush_cache_range(fd->vma, fd->addr1, fd->addr2);
 }
 
-void flush_cache_range(struct vm_area_struct *vma,
+void flush_cache_range(struct mm_area *vma,
 		     unsigned long start, unsigned long end)
 {
 	struct flush_data fd = {
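
The SMP flush helpers in this file all follow the same broadcast shape: pack the (renamed) VMA and the address range into a small struct and IPI it to every CPU, which then runs its local flush. A condensed sketch of that pattern, with invented names rather than the file's actual helpers:

#include <linux/smp.h>
#include <asm/tlbflush.h>

struct demo_flush {
	struct mm_area *vma;
	unsigned long start, end;
};

/* Runs on each CPU with the packed arguments. */
static void demo_ipi_flush(void *arg)
{
	struct demo_flush *fd = arg;

	local_flush_tlb_range(fd->vma, fd->start, fd->end);
}

static void demo_flush_tlb_range(struct mm_area *vma,
				 unsigned long start, unsigned long end)
{
	struct demo_flush fd = { .vma = vma, .start = start, .end = end };

	/* Run on every online CPU, including this one, and wait. */
	on_each_cpu(demo_ipi_flush, &fd, 1);
}
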
diff --git a/arch/xtensa/kernel/syscall.c b/arch/xtensa/kernel/syscall.c
index dc54f854c2f5..9dd4ee487337 100644
--- a/arch/xtensa/kernel/syscall.c
+++ b/arch/xtensa/kernel/syscall.c
@@ -58,7 +58,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags,
 		vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vmm;
+	struct mm_area *vmm;
 	struct vma_iterator vmi;
 
 	if (flags & MAP_FIXED) {
diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
index 23be0e7516ce..b1f503c39d58 100644
--- a/arch/xtensa/mm/cache.c
+++ b/arch/xtensa/mm/cache.c
@@ -100,7 +100,7 @@ void clear_user_highpage(struct page *page, unsigned long vaddr)
 EXPORT_SYMBOL(clear_user_highpage);
 
 void copy_user_highpage(struct page *dst, struct page *src,
-			unsigned long vaddr, struct vm_area_struct *vma)
+			unsigned long vaddr, struct mm_area *vma)
 {
 	struct folio *folio = page_folio(dst);
 	unsigned long dst_paddr, src_paddr;
@@ -181,7 +181,7 @@ EXPORT_SYMBOL(flush_dcache_folio);
  * For now, flush the whole cache. FIXME??
  */
 
-void local_flush_cache_range(struct vm_area_struct *vma,
+void local_flush_cache_range(struct mm_area *vma,
 		       unsigned long start, unsigned long end)
 {
 	__flush_invalidate_dcache_all();
@@ -196,7 +196,7 @@ EXPORT_SYMBOL(local_flush_cache_range);
  * alias versions of the cache flush functions.
  */
 
-void local_flush_cache_page(struct vm_area_struct *vma, unsigned long address,
+void local_flush_cache_page(struct mm_area *vma, unsigned long address,
 		      unsigned long pfn)
 {
 	/* Note that we have to use the 'alias' address to avoid multi-hit */
@@ -213,7 +213,7 @@ EXPORT_SYMBOL(local_flush_cache_page);
 
 #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
 
-void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
+void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
 		unsigned long addr, pte_t *ptep, unsigned int nr)
 {
 	unsigned long pfn = pte_pfn(*ptep);
@@ -270,7 +270,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
 
 #if (DCACHE_WAY_SIZE > PAGE_SIZE)
 
-void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+void copy_to_user_page(struct mm_area *vma, struct page *page,
 		unsigned long vaddr, void *dst, const void *src,
 		unsigned long len)
 {
@@ -310,7 +310,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
 	}
 }
 
-extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+extern void copy_from_user_page(struct mm_area *vma, struct page *page,
 		unsigned long vaddr, void *dst, const void *src,
 		unsigned long len)
 {
diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
index 16e11b6f6f78..02d6bcea445d 100644
--- a/arch/xtensa/mm/fault.c
+++ b/arch/xtensa/mm/fault.c
@@ -87,7 +87,7 @@ static void vmalloc_fault(struct pt_regs *regs, unsigned int address)
 
 void do_page_fault(struct pt_regs *regs)
 {
-	struct vm_area_struct * vma;
+	struct mm_area * vma;
 	struct mm_struct *mm = current->mm;
 	unsigned int exccause = regs->exccause;
 	unsigned int address = regs->excvaddr;
diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
index 0a1a815dc796..b8fcadd0460a 100644
--- a/arch/xtensa/mm/tlb.c
+++ b/arch/xtensa/mm/tlb.c
@@ -86,7 +86,7 @@ void local_flush_tlb_mm(struct mm_struct *mm)
 # define _TLB_ENTRIES _DTLB_ENTRIES
 #endif
 
-void local_flush_tlb_range(struct vm_area_struct *vma,
+void local_flush_tlb_range(struct mm_area *vma,
 		unsigned long start, unsigned long end)
 {
 	int cpu = smp_processor_id();
@@ -124,7 +124,7 @@ void local_flush_tlb_range(struct vm_area_struct *vma,
 	local_irq_restore(flags);
 }
 
-void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
 {
 	int cpu = smp_processor_id();
 	struct mm_struct* mm = vma->vm_mm;
@@ -163,7 +163,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	}
 }
 
-void update_mmu_tlb_range(struct vm_area_struct *vma,
+void update_mmu_tlb_range(struct mm_area *vma,
 			unsigned long address, pte_t *ptep, unsigned int nr)
 {
 	local_flush_tlb_range(vma, address, address + PAGE_SIZE * nr);
diff --git a/block/fops.c b/block/fops.c
index be9f1dbea9ce..6b5d92baf4b6 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -871,7 +871,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
 	return error;
 }
 
-static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
+static int blkdev_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *bd_inode = bdev_file_inode(file);
 
diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c
index 606433d73236..10a1bd65acb0 100644
--- a/drivers/accel/amdxdna/amdxdna_gem.c
+++ b/drivers/accel/amdxdna/amdxdna_gem.c
@@ -159,7 +159,7 @@ static int amdxdna_hmm_register(struct amdxdna_gem_obj *abo, unsigned long addr,
 }
 
 static int amdxdna_gem_obj_mmap(struct drm_gem_object *gobj,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	struct amdxdna_gem_obj *abo = to_xdna_obj(gobj);
 	unsigned long num_pages;
@@ -192,12 +192,12 @@ static vm_fault_t amdxdna_gem_vm_fault(struct vm_fault *vmf)
 	return drm_gem_shmem_vm_ops.fault(vmf);
 }
 
-static void amdxdna_gem_vm_open(struct vm_area_struct *vma)
+static void amdxdna_gem_vm_open(struct mm_area *vma)
 {
 	drm_gem_shmem_vm_ops.open(vma);
 }
 
-static void amdxdna_gem_vm_close(struct vm_area_struct *vma)
+static void amdxdna_gem_vm_close(struct mm_area *vma)
 {
 	struct drm_gem_object *gobj = vma->vm_private_data;
 
diff --git a/drivers/accel/habanalabs/common/command_buffer.c b/drivers/accel/habanalabs/common/command_buffer.c
index 0f0d295116e7..6dab3015eb48 100644
--- a/drivers/accel/habanalabs/common/command_buffer.c
+++ b/drivers/accel/habanalabs/common/command_buffer.c
@@ -247,7 +247,7 @@ static int hl_cb_mmap_mem_alloc(struct hl_mmap_mem_buf *buf, gfp_t gfp, void *ar
 }
 
 static int hl_cb_mmap(struct hl_mmap_mem_buf *buf,
-				      struct vm_area_struct *vma, void *args)
+				      struct mm_area *vma, void *args)
 {
 	struct hl_cb *cb = buf->private;
 
diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
index 68eebed3b050..b86d048f3954 100644
--- a/drivers/accel/habanalabs/common/device.c
+++ b/drivers/accel/habanalabs/common/device.c
@@ -647,7 +647,7 @@ static int hl_device_release_ctrl(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int __hl_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
+static int __hl_mmap(struct hl_fpriv *hpriv, struct mm_area *vma)
 {
 	struct hl_device *hdev = hpriv->hdev;
 	unsigned long vm_pgoff;
@@ -675,12 +675,12 @@ static int __hl_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
  * hl_mmap - mmap function for habanalabs device
  *
  * @*filp: pointer to file structure
- * @*vma: pointer to vm_area_struct of the process
+ * @*vma: pointer to struct mm_area of the process
  *
  * Called when process does an mmap on habanalabs device. Call the relevant mmap
  * function at the end of the common code.
  */
-int hl_mmap(struct file *filp, struct vm_area_struct *vma)
+int hl_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct drm_file *file_priv = filp->private_data;
 	struct hl_fpriv *hpriv = file_priv->driver_priv;
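
The hl_mmap() comment above describes the usual shape of a device mmap entry point, and only the spelling of the VMA type changes under this patch. A minimal, hypothetical sketch (the device and its physical base address are invented for illustration, not habanalabs code):

#include <linux/fs.h>
#include <linux/mm.h>

#define DEMO_PHYS_BASE	0xfe000000UL	/* invented device address */

static int demo_mmap(struct file *filp, struct mm_area *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > PAGE_SIZE)		/* expose a single page only */
		return -EINVAL;

	return remap_pfn_range(vma, vma->vm_start,
			       DEMO_PHYS_BASE >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}
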
diff --git a/drivers/accel/habanalabs/common/habanalabs.h b/drivers/accel/habanalabs/common/habanalabs.h
index 6f27ce4fa01b..2cb705768786 100644
--- a/drivers/accel/habanalabs/common/habanalabs.h
+++ b/drivers/accel/habanalabs/common/habanalabs.h
@@ -45,7 +45,7 @@ struct hl_fpriv;
  * bits[63:59] - Encode mmap type
  * bits[45:0]  - mmap offset value
  *
- * NOTE: struct vm_area_struct.vm_pgoff uses offset in pages. Hence, these
+ * NOTE: struct mm_area.vm_pgoff uses offset in pages. Hence, these
  *  defines are w.r.t to PAGE_SIZE
  */
 #define HL_MMAP_TYPE_SHIFT		(59 - PAGE_SHIFT)
@@ -931,7 +931,7 @@ struct hl_mmap_mem_buf_behavior {
 	u64 mem_id;
 
 	int (*alloc)(struct hl_mmap_mem_buf *buf, gfp_t gfp, void *args);
-	int (*mmap)(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, void *args);
+	int (*mmap)(struct hl_mmap_mem_buf *buf, struct mm_area *vma, void *args);
 	void (*release)(struct hl_mmap_mem_buf *buf);
 };
 
@@ -1650,7 +1650,7 @@ struct hl_asic_funcs {
 	void (*halt_engines)(struct hl_device *hdev, bool hard_reset, bool fw_reset);
 	int (*suspend)(struct hl_device *hdev);
 	int (*resume)(struct hl_device *hdev);
-	int (*mmap)(struct hl_device *hdev, struct vm_area_struct *vma,
+	int (*mmap)(struct hl_device *hdev, struct mm_area *vma,
 			void *cpu_addr, dma_addr_t dma_addr, size_t size);
 	void (*ring_doorbell)(struct hl_device *hdev, u32 hw_queue_id, u32 pi);
 	void (*pqe_write)(struct hl_device *hdev, __le64 *pqe,
@@ -1745,7 +1745,7 @@ struct hl_asic_funcs {
 	void (*ack_protection_bits_errors)(struct hl_device *hdev);
 	int (*get_hw_block_id)(struct hl_device *hdev, u64 block_addr,
 				u32 *block_size, u32 *block_id);
-	int (*hw_block_mmap)(struct hl_device *hdev, struct vm_area_struct *vma,
+	int (*hw_block_mmap)(struct hl_device *hdev, struct mm_area *vma,
 			u32 block_id, u32 block_size);
 	void (*enable_events_from_fw)(struct hl_device *hdev);
 	int (*ack_mmu_errors)(struct hl_device *hdev, u64 mmu_cap_mask);
@@ -3733,7 +3733,7 @@ int hl_access_cfg_region(struct hl_device *hdev, u64 addr, u64 *val,
 int hl_access_dev_mem(struct hl_device *hdev, enum pci_region region_type,
 			u64 addr, u64 *val, enum debugfs_access_type acc_type);
 
-int hl_mmap(struct file *filp, struct vm_area_struct *vma);
+int hl_mmap(struct file *filp, struct mm_area *vma);
 
 int hl_device_open(struct drm_device *drm, struct drm_file *file_priv);
 void hl_device_release(struct drm_device *ddev, struct drm_file *file_priv);
@@ -3819,7 +3819,7 @@ int hl_cb_create(struct hl_device *hdev, struct hl_mem_mgr *mmg,
 			struct hl_ctx *ctx, u32 cb_size, bool internal_cb,
 			bool map_cb, u64 *handle);
 int hl_cb_destroy(struct hl_mem_mgr *mmg, u64 cb_handle);
-int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma);
+int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct mm_area *vma);
 struct hl_cb *hl_cb_get(struct hl_mem_mgr *mmg, u64 handle);
 void hl_cb_put(struct hl_cb *cb);
 struct hl_cb *hl_cb_kernel_create(struct hl_device *hdev, u32 cb_size,
@@ -4063,7 +4063,7 @@ const char *hl_sync_engine_to_string(enum hl_sync_engine_type engine_type);
 void hl_mem_mgr_init(struct device *dev, struct hl_mem_mgr *mmg);
 void hl_mem_mgr_fini(struct hl_mem_mgr *mmg, struct hl_mem_mgr_fini_stats *stats);
 void hl_mem_mgr_idr_destroy(struct hl_mem_mgr *mmg);
-int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma,
+int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct mm_area *vma,
 		    void *args);
 struct hl_mmap_mem_buf *hl_mmap_mem_buf_get(struct hl_mem_mgr *mmg,
 						   u64 handle);
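
To make the NOTE about struct mm_area.vm_pgoff concrete: vm_pgoff counts pages rather than bytes, so bit positions quoted for a 64-bit byte offset move down by PAGE_SHIFT when extracted from the pgoff. A hedged sketch with illustrative names mirroring the quoted defines (not the driver's actual helpers):

#include <linux/mm.h>
#include <linux/types.h>

#define DEMO_MMAP_TYPE_SHIFT	(59 - PAGE_SHIFT)	/* bits[63:59] of the byte offset */
#define DEMO_MMAP_TYPE_MASK	(0x1fULL << DEMO_MMAP_TYPE_SHIFT)

static inline u64 demo_mmap_type(struct mm_area *vma)
{
	return (vma->vm_pgoff & DEMO_MMAP_TYPE_MASK) >> DEMO_MMAP_TYPE_SHIFT;
}
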
diff --git a/drivers/accel/habanalabs/common/memory.c b/drivers/accel/habanalabs/common/memory.c
index 601fdbe70179..4688d24b34df 100644
--- a/drivers/accel/habanalabs/common/memory.c
+++ b/drivers/accel/habanalabs/common/memory.c
@@ -1424,7 +1424,7 @@ static int map_block(struct hl_device *hdev, u64 address, u64 *handle, u32 *size
 	return 0;
 }
 
-static void hw_block_vm_close(struct vm_area_struct *vma)
+static void hw_block_vm_close(struct mm_area *vma)
 {
 	struct hl_vm_hw_block_list_node *lnode =
 		(struct hl_vm_hw_block_list_node *) vma->vm_private_data;
@@ -1452,12 +1452,12 @@ static const struct vm_operations_struct hw_block_vm_ops = {
 /**
  * hl_hw_block_mmap() - mmap a hw block to user.
  * @hpriv: pointer to the private data of the fd
- * @vma: pointer to vm_area_struct of the process
+ * @vma: pointer to struct mm_area of the process
  *
  * Driver increments context reference for every HW block mapped in order
  * to prevent user from closing FD without unmapping first
  */
-int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
+int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct mm_area *vma)
 {
 	struct hl_vm_hw_block_list_node *lnode;
 	struct hl_device *hdev = hpriv->hdev;
@@ -2103,7 +2103,7 @@ static void ts_buff_release(struct hl_mmap_mem_buf *buf)
 	kfree(ts_buff);
 }
 
-static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, void *args)
+static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct mm_area *vma, void *args)
 {
 	struct hl_ts_buff *ts_buff = buf->private;
 
diff --git a/drivers/accel/habanalabs/common/memory_mgr.c b/drivers/accel/habanalabs/common/memory_mgr.c
index 99cd83139d46..ea06e092b341 100644
--- a/drivers/accel/habanalabs/common/memory_mgr.c
+++ b/drivers/accel/habanalabs/common/memory_mgr.c
@@ -196,7 +196,7 @@ hl_mmap_mem_buf_alloc(struct hl_mem_mgr *mmg,
  *
  * Put the memory buffer if it is no longer mapped.
  */
-static void hl_mmap_mem_buf_vm_close(struct vm_area_struct *vma)
+static void hl_mmap_mem_buf_vm_close(struct mm_area *vma)
 {
 	struct hl_mmap_mem_buf *buf =
 		(struct hl_mmap_mem_buf *)vma->vm_private_data;
@@ -227,7 +227,7 @@ static const struct vm_operations_struct hl_mmap_mem_buf_vm_ops = {
  *
  * Map the buffer specified by the vma->vm_pgoff to the given vma.
  */
-int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma,
+int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct mm_area *vma,
 		    void *args)
 {
 	struct hl_mmap_mem_buf *buf;
diff --git a/drivers/accel/habanalabs/gaudi/gaudi.c b/drivers/accel/habanalabs/gaudi/gaudi.c
index fa893a9b826e..a52647a1b640 100644
--- a/drivers/accel/habanalabs/gaudi/gaudi.c
+++ b/drivers/accel/habanalabs/gaudi/gaudi.c
@@ -4160,7 +4160,7 @@ static int gaudi_resume(struct hl_device *hdev)
 	return gaudi_init_iatu(hdev);
 }
 
-static int gaudi_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+static int gaudi_mmap(struct hl_device *hdev, struct mm_area *vma,
 			void *cpu_addr, dma_addr_t dma_addr, size_t size)
 {
 	int rc;
@@ -8769,7 +8769,7 @@ static int gaudi_get_hw_block_id(struct hl_device *hdev, u64 block_addr,
 }
 
 static int gaudi_block_mmap(struct hl_device *hdev,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				u32 block_id, u32 block_size)
 {
 	return -EPERM;
diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c
index a38b88baadf2..12ef2bdebe5d 100644
--- a/drivers/accel/habanalabs/gaudi2/gaudi2.c
+++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c
@@ -6475,7 +6475,7 @@ static int gaudi2_resume(struct hl_device *hdev)
 	return gaudi2_init_iatu(hdev);
 }
 
-static int gaudi2_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+static int gaudi2_mmap(struct hl_device *hdev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size)
 {
 	int rc;
@@ -11238,7 +11238,7 @@ static int gaudi2_get_hw_block_id(struct hl_device *hdev, u64 block_addr,
 	return -EINVAL;
 }
 
-static int gaudi2_block_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+static int gaudi2_block_mmap(struct hl_device *hdev, struct mm_area *vma,
 			u32 block_id, u32 block_size)
 {
 	struct gaudi2_device *gaudi2 = hdev->asic_specific;
diff --git a/drivers/accel/habanalabs/goya/goya.c b/drivers/accel/habanalabs/goya/goya.c
index 84768e306269..9319d29bb802 100644
--- a/drivers/accel/habanalabs/goya/goya.c
+++ b/drivers/accel/habanalabs/goya/goya.c
@@ -2869,7 +2869,7 @@ int goya_resume(struct hl_device *hdev)
 	return goya_init_iatu(hdev);
 }
 
-static int goya_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+static int goya_mmap(struct hl_device *hdev, struct mm_area *vma,
 			void *cpu_addr, dma_addr_t dma_addr, size_t size)
 {
 	int rc;
@@ -5313,7 +5313,7 @@ static int goya_get_hw_block_id(struct hl_device *hdev, u64 block_addr,
 	return -EPERM;
 }
 
-static int goya_block_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
+static int goya_block_mmap(struct hl_device *hdev, struct mm_area *vma,
 				u32 block_id, u32 block_size)
 {
 	return -EPERM;
diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
index 43aba57b48f0..331e4683f42a 100644
--- a/drivers/accel/qaic/qaic_data.c
+++ b/drivers/accel/qaic/qaic_data.c
@@ -602,7 +602,7 @@ static const struct vm_operations_struct drm_vm_ops = {
 	.close = drm_gem_vm_close,
 };
 
-static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct qaic_bo *bo = to_qaic_bo(obj);
 	unsigned long offset = 0;
diff --git a/drivers/acpi/pfr_telemetry.c b/drivers/acpi/pfr_telemetry.c
index 32bdf8cbe8f2..4222c75ced8e 100644
--- a/drivers/acpi/pfr_telemetry.c
+++ b/drivers/acpi/pfr_telemetry.c
@@ -295,7 +295,7 @@ static long pfrt_log_ioctl(struct file *file, unsigned int cmd, unsigned long ar
 }
 
 static int
-pfrt_log_mmap(struct file *file, struct vm_area_struct *vma)
+pfrt_log_mmap(struct file *file, struct mm_area *vma)
 {
 	struct pfrt_log_device *pfrt_log_dev;
 	struct pfrt_log_data_info info;
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 76052006bd87..a674ff1ab9a5 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -5935,7 +5935,7 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	return ret;
 }
 
-static void binder_vma_open(struct vm_area_struct *vma)
+static void binder_vma_open(struct mm_area *vma)
 {
 	struct binder_proc *proc = vma->vm_private_data;
 
@@ -5946,7 +5946,7 @@ static void binder_vma_open(struct vm_area_struct *vma)
 		     (unsigned long)pgprot_val(vma->vm_page_prot));
 }
 
-static void binder_vma_close(struct vm_area_struct *vma)
+static void binder_vma_close(struct mm_area *vma)
 {
 	struct binder_proc *proc = vma->vm_private_data;
 
@@ -5969,7 +5969,7 @@ static const struct vm_operations_struct binder_vm_ops = {
 	.fault = binder_vm_fault,
 };
 
-static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
+static int binder_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct binder_proc *proc = filp->private_data;
 
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index fcfaf1b899c8..95d8a0def3c5 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -258,7 +258,7 @@ static int binder_page_insert(struct binder_alloc *alloc,
 			      struct page *page)
 {
 	struct mm_struct *mm = alloc->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret = -ESRCH;
 
 	/* attempt per-vma lock first */
@@ -892,7 +892,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
  *      -ENOMEM = failed to map memory to given address space
  */
 int binder_alloc_mmap_handler(struct binder_alloc *alloc,
-			      struct vm_area_struct *vma)
+			      struct mm_area *vma)
 {
 	struct binder_buffer *buffer;
 	const char *failure_string;
@@ -1140,7 +1140,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
 	struct binder_shrinker_mdata *mdata = container_of(item, typeof(*mdata), lru);
 	struct binder_alloc *alloc = mdata->alloc;
 	struct mm_struct *mm = alloc->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct page *page_to_free;
 	unsigned long page_addr;
 	int mm_locked = 0;
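
binder_page_insert() above attempts a per-VMA lock before falling back to the full mmap_lock. A loose sketch of that pattern under the renamed type, assuming CONFIG_PER_VMA_LOCK; this is not binder's actual logic, and the "operate on vma" steps are elided:

#include <linux/mm.h>
#include <linux/mmap_lock.h>

static int demo_with_vma(struct mm_struct *mm, unsigned long addr)
{
	struct mm_area *vma;
	int ret = -ESRCH;

	/* Fast path: read-lock only the VMA covering addr, no mmap_lock. */
	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		/* ... operate on vma ... */
		ret = 0;
		vma_end_read(vma);
		return ret;
	}

	/* Slow path: take the full mmap read lock and look the VMA up. */
	mmap_read_lock(mm);
	vma = vma_lookup(mm, addr);
	if (vma) {
		/* ... operate on vma ... */
		ret = 0;
	}
	mmap_read_unlock(mm);

	return ret;
}
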
diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
index feecd7414241..71474a96c9dd 100644
--- a/drivers/android/binder_alloc.h
+++ b/drivers/android/binder_alloc.h
@@ -143,7 +143,7 @@ binder_alloc_prepare_to_free(struct binder_alloc *alloc,
 void binder_alloc_free_buf(struct binder_alloc *alloc,
 			   struct binder_buffer *buffer);
 int binder_alloc_mmap_handler(struct binder_alloc *alloc,
-			      struct vm_area_struct *vma);
+			      struct mm_area *vma);
 void binder_alloc_deferred_release(struct binder_alloc *alloc);
 int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
 void binder_alloc_print_allocated(struct seq_file *m,
diff --git a/drivers/auxdisplay/cfag12864bfb.c b/drivers/auxdisplay/cfag12864bfb.c
index 24baf6b2c587..c8953939f33a 100644
--- a/drivers/auxdisplay/cfag12864bfb.c
+++ b/drivers/auxdisplay/cfag12864bfb.c
@@ -47,7 +47,7 @@ static const struct fb_var_screeninfo cfag12864bfb_var = {
 	.vmode = FB_VMODE_NONINTERLACED,
 };
 
-static int cfag12864bfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int cfag12864bfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct page *pages = virt_to_page(cfag12864b_buffer);
 
diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c
index 0b8ba754b343..835db2ac68c3 100644
--- a/drivers/auxdisplay/ht16k33.c
+++ b/drivers/auxdisplay/ht16k33.c
@@ -303,7 +303,7 @@ static int ht16k33_blank(int blank, struct fb_info *info)
 	return 0;
 }
 
-static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int ht16k33_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct ht16k33_priv *priv = info->par;
 	struct page *pages = virt_to_page(priv->fbdev.buffer);
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 2fd05c1bd30b..55cfd9965a5d 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1467,7 +1467,7 @@ static int ublk_ch_release(struct inode *inode, struct file *filp)
 }
 
 /* map pre-allocated per-queue cmd buffer to ublksrv daemon */
-static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
+static int ublk_ch_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct ublk_device *ub = filp->private_data;
 	size_t sz = vma->vm_end - vma->vm_start;
diff --git a/drivers/cdx/cdx.c b/drivers/cdx/cdx.c
index 092306ca2541..f3f114c29555 100644
--- a/drivers/cdx/cdx.c
+++ b/drivers/cdx/cdx.c
@@ -708,7 +708,7 @@ static const struct vm_operations_struct cdx_phys_vm_ops = {
  *      this API is registered as a callback.
  * @kobj: kobject for mapping
  * @attr: struct bin_attribute for the file being mapped
- * @vma: struct vm_area_struct passed into the mmap
+ * @vma: struct mm_area passed into the mmap
  *
  * Use the regular CDX mapping routines to map a CDX resource into userspace.
  *
@@ -716,7 +716,7 @@ static const struct vm_operations_struct cdx_phys_vm_ops = {
  */
 static int cdx_mmap_resource(struct file *fp, struct kobject *kobj,
 			     const struct bin_attribute *attr,
-			     struct vm_area_struct *vma)
+			     struct mm_area *vma)
 {
 	struct cdx_device *cdx_dev = to_cdx_device(kobj_to_dev(kobj));
 	int num = (unsigned long)attr->private;
diff --git a/drivers/char/bsr.c b/drivers/char/bsr.c
index 837109ef6766..005cbf590708 100644
--- a/drivers/char/bsr.c
+++ b/drivers/char/bsr.c
@@ -111,7 +111,7 @@ static const struct class bsr_class = {
 	.dev_groups	= bsr_dev_groups,
 };
 
-static int bsr_mmap(struct file *filp, struct vm_area_struct *vma)
+static int bsr_mmap(struct file *filp, struct mm_area *vma)
 {
 	unsigned long size   = vma->vm_end - vma->vm_start;
 	struct bsr_dev *dev = filp->private_data;
diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
index e110857824fc..af1076b99117 100644
--- a/drivers/char/hpet.c
+++ b/drivers/char/hpet.c
@@ -354,7 +354,7 @@ static __init int hpet_mmap_enable(char *str)
 }
 __setup("hpet_mmap=", hpet_mmap_enable);
 
-static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+static int hpet_mmap(struct file *file, struct mm_area *vma)
 {
 	struct hpet_dev *devp;
 	unsigned long addr;
@@ -372,7 +372,7 @@ static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
 	return vm_iomap_memory(vma, addr, PAGE_SIZE);
 }
 #else
-static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+static int hpet_mmap(struct file *file, struct mm_area *vma)
 {
 	return -ENOSYS;
 }
diff --git a/drivers/char/mem.c b/drivers/char/mem.c
index 169eed162a7f..350af6fa120a 100644
--- a/drivers/char/mem.c
+++ b/drivers/char/mem.c
@@ -322,13 +322,13 @@ static unsigned zero_mmap_capabilities(struct file *file)
 }
 
 /* can't do an in-place private mapping if there's no MMU */
-static inline int private_mapping_ok(struct vm_area_struct *vma)
+static inline int private_mapping_ok(struct mm_area *vma)
 {
 	return is_nommu_shared_mapping(vma->vm_flags);
 }
 #else
 
-static inline int private_mapping_ok(struct vm_area_struct *vma)
+static inline int private_mapping_ok(struct mm_area *vma)
 {
 	return 1;
 }
@@ -340,7 +340,7 @@ static const struct vm_operations_struct mmap_mem_ops = {
 #endif
 };
 
-static int mmap_mem(struct file *file, struct vm_area_struct *vma)
+static int mmap_mem(struct file *file, struct mm_area *vma)
 {
 	size_t size = vma->vm_end - vma->vm_start;
 	phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
@@ -519,7 +519,7 @@ static ssize_t read_zero(struct file *file, char __user *buf,
 	return cleared;
 }
 
-static int mmap_zero(struct file *file, struct vm_area_struct *vma)
+static int mmap_zero(struct file *file, struct mm_area *vma)
 {
 #ifndef CONFIG_MMU
 	return -ENOSYS;
diff --git a/drivers/char/uv_mmtimer.c b/drivers/char/uv_mmtimer.c
index 956ebe2080a5..3a8a210592db 100644
--- a/drivers/char/uv_mmtimer.c
+++ b/drivers/char/uv_mmtimer.c
@@ -40,7 +40,7 @@ MODULE_LICENSE("GPL");
 
 static long uv_mmtimer_ioctl(struct file *file, unsigned int cmd,
 						unsigned long arg);
-static int uv_mmtimer_mmap(struct file *file, struct vm_area_struct *vma);
+static int uv_mmtimer_mmap(struct file *file, struct mm_area *vma);
 
 /*
  * Period in femtoseconds (10^-15 s)
@@ -144,7 +144,7 @@ static long uv_mmtimer_ioctl(struct file *file, unsigned int cmd,
  * Calls remap_pfn_range() to map the clock's registers into
  * the calling process' address space.
  */
-static int uv_mmtimer_mmap(struct file *file, struct vm_area_struct *vma)
+static int uv_mmtimer_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned long uv_mmtimer_addr;
 
diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
index b9df9b19d4bd..9e3ef27295ec 100644
--- a/drivers/comedi/comedi_fops.c
+++ b/drivers/comedi/comedi_fops.c
@@ -2282,7 +2282,7 @@ static long comedi_unlocked_ioctl(struct file *file, unsigned int cmd,
 	return rc;
 }
 
-static void comedi_vm_open(struct vm_area_struct *area)
+static void comedi_vm_open(struct mm_area *area)
 {
 	struct comedi_buf_map *bm;
 
@@ -2290,7 +2290,7 @@ static void comedi_vm_open(struct vm_area_struct *area)
 	comedi_buf_map_get(bm);
 }
 
-static void comedi_vm_close(struct vm_area_struct *area)
+static void comedi_vm_close(struct mm_area *area)
 {
 	struct comedi_buf_map *bm;
 
@@ -2298,7 +2298,7 @@ static void comedi_vm_close(struct vm_area_struct *area)
 	comedi_buf_map_put(bm);
 }
 
-static int comedi_vm_access(struct vm_area_struct *vma, unsigned long addr,
+static int comedi_vm_access(struct mm_area *vma, unsigned long addr,
 			    void *buf, int len, int write)
 {
 	struct comedi_buf_map *bm = vma->vm_private_data;
@@ -2318,7 +2318,7 @@ static const struct vm_operations_struct comedi_vm_ops = {
 	.access = comedi_vm_access,
 };
 
-static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
+static int comedi_mmap(struct file *file, struct mm_area *vma)
 {
 	struct comedi_file *cfp = file->private_data;
 	struct comedi_device *dev = cfp->dev;
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index d3f5d108b898..c9d9b977c07a 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -2454,7 +2454,7 @@ static void hisi_qm_uacce_put_queue(struct uacce_queue *q)
 
 /* map sq/cq/doorbell to user space */
 static int hisi_qm_uacce_mmap(struct uacce_queue *q,
-			      struct vm_area_struct *vma,
+			      struct mm_area *vma,
 			      struct uacce_qfile_region *qfr)
 {
 	struct hisi_qp *qp = q->priv;
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 328231cfb028..6a5724727688 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -14,7 +14,7 @@
 #include "dax-private.h"
 #include "bus.h"
 
-static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
+static int check_vma(struct dev_dax *dev_dax, struct mm_area *vma,
 		const char *func)
 {
 	struct device *dev = &dev_dax->dev;
@@ -261,7 +261,7 @@ static vm_fault_t dev_dax_fault(struct vm_fault *vmf)
 	return dev_dax_huge_fault(vmf, 0);
 }
 
-static int dev_dax_may_split(struct vm_area_struct *vma, unsigned long addr)
+static int dev_dax_may_split(struct mm_area *vma, unsigned long addr)
 {
 	struct file *filp = vma->vm_file;
 	struct dev_dax *dev_dax = filp->private_data;
@@ -271,7 +271,7 @@ static int dev_dax_may_split(struct vm_area_struct *vma, unsigned long addr)
 	return 0;
 }
 
-static unsigned long dev_dax_pagesize(struct vm_area_struct *vma)
+static unsigned long dev_dax_pagesize(struct mm_area *vma)
 {
 	struct file *filp = vma->vm_file;
 	struct dev_dax *dev_dax = filp->private_data;
@@ -286,7 +286,7 @@ static const struct vm_operations_struct dax_vm_ops = {
 	.pagesize = dev_dax_pagesize,
 };
 
-static int dax_mmap(struct file *filp, struct vm_area_struct *vma)
+static int dax_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct dev_dax *dev_dax = filp->private_data;
 	int rc, id;
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 5baa83b85515..afc92bd59362 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -144,7 +144,7 @@ static struct file_system_type dma_buf_fs_type = {
 	.kill_sb = kill_anon_super,
 };
 
-static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
+static int dma_buf_mmap_internal(struct file *file, struct mm_area *vma)
 {
 	struct dma_buf *dmabuf;
 
@@ -1364,7 +1364,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, "DMA_BUF");
  *
  *   .. code-block:: c
  *
- *     int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long);
+ *     int dma_buf_mmap(struct dma_buf *, struct mm_area *, unsigned long);
  *
  *   If the importing subsystem simply provides a special-purpose mmap call to
  *   set up a mapping in userspace, calling do_mmap with &dma_buf.file will
@@ -1474,7 +1474,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, "DMA_BUF");
  *
  * Can return negative error values, returns 0 on success.
  */
-int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
+int dma_buf_mmap(struct dma_buf *dmabuf, struct mm_area *vma,
 		 unsigned long pgoff)
 {
 	if (WARN_ON(!dmabuf || !vma))
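
The importer-side contract quoted in the doc comment above is unchanged apart from the type name. A minimal hypothetical importer simply forwards its own mmap hook into the exporter via dma_buf_mmap(), here mapping from page offset 0:

#include <linux/dma-buf.h>

static int demo_importer_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
{
	/* Let the exporter establish the mapping; the offset is in pages. */
	return dma_buf_mmap(dmabuf, vma, 0);
}
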
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 9512d050563a..17ae7983a93a 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -162,7 +162,7 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 static vm_fault_t cma_heap_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct cma_heap_buffer *buffer = vma->vm_private_data;
 
 	if (vmf->pgoff >= buffer->pagecount)
@@ -175,7 +175,7 @@ static const struct vm_operations_struct dma_heap_vm_ops = {
 	.fault = cma_heap_vm_fault,
 };
 
-static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+static int cma_heap_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
 {
 	struct cma_heap_buffer *buffer = dmabuf->priv;
 
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 26d5dc89ea16..43fd8260f29b 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -192,7 +192,7 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 	return 0;
 }
 
-static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+static int system_heap_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
 {
 	struct system_heap_buffer *buffer = dmabuf->priv;
 	struct sg_table *table = &buffer->sg_table;
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index e74e36a8ecda..7c3de3568e46 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -46,7 +46,7 @@ struct udmabuf {
 
 static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct udmabuf *ubuf = vma->vm_private_data;
 	pgoff_t pgoff = vmf->pgoff;
 	unsigned long addr, pfn;
@@ -93,7 +93,7 @@ static const struct vm_operations_struct udmabuf_vm_ops = {
 	.fault = udmabuf_vm_fault,
 };
 
-static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
+static int mmap_udmabuf(struct dma_buf *buf, struct mm_area *vma)
 {
 	struct udmabuf *ubuf = buf->priv;
 
diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
index ff94ee892339..2fd71e61d6c8 100644
--- a/drivers/dma/idxd/cdev.c
+++ b/drivers/dma/idxd/cdev.c
@@ -368,7 +368,7 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
 	return 0;
 }
 
-static int check_vma(struct idxd_wq *wq, struct vm_area_struct *vma,
+static int check_vma(struct idxd_wq *wq, struct mm_area *vma,
 		     const char *func)
 {
 	struct device *dev = &wq->idxd->pdev->dev;
@@ -384,7 +384,7 @@ static int check_vma(struct idxd_wq *wq, struct vm_area_struct *vma,
 	return 0;
 }
 
-static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
+static int idxd_cdev_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct idxd_user_context *ctx = filp->private_data;
 	struct idxd_wq *wq = ctx->wq;
diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
index bd04980009a4..a8a2ccd8af78 100644
--- a/drivers/firewire/core-cdev.c
+++ b/drivers/firewire/core-cdev.c
@@ -1786,7 +1786,7 @@ static long fw_device_op_ioctl(struct file *file,
 	return dispatch_ioctl(file->private_data, cmd, (void __user *)arg);
 }
 
-static int fw_device_op_mmap(struct file *file, struct vm_area_struct *vma)
+static int fw_device_op_mmap(struct file *file, struct mm_area *vma)
 {
 	struct client *client = file->private_data;
 	unsigned long size;
diff --git a/drivers/fpga/dfl-afu-main.c b/drivers/fpga/dfl-afu-main.c
index 3bf8e7338dbe..1b9b86d2ee0f 100644
--- a/drivers/fpga/dfl-afu-main.c
+++ b/drivers/fpga/dfl-afu-main.c
@@ -805,7 +805,7 @@ static const struct vm_operations_struct afu_vma_ops = {
 #endif
 };
 
-static int afu_mmap(struct file *filp, struct vm_area_struct *vma)
+static int afu_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct platform_device *pdev = filp->private_data;
 	u64 size = vma->vm_end - vma->vm_start;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 69429df09477..993513183c9c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -272,7 +272,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
 	drm_exec_fini(&exec);
 }
 
-static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 53b71e9d8076..304a1c09b89c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -700,7 +700,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages,
 	struct ttm_tt *ttm = bo->tbo.ttm;
 	struct amdgpu_ttm_tt *gtt = ttm_to_amdgpu_ttm_tt(ttm);
 	unsigned long start = gtt->userptr;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm;
 	bool readonly;
 	int r = 0;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index 1e9dd00620bf..00a7f935b0a7 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -48,7 +48,7 @@
 static long kfd_ioctl(struct file *, unsigned int, unsigned long);
 static int kfd_open(struct inode *, struct file *);
 static int kfd_release(struct inode *, struct file *);
-static int kfd_mmap(struct file *, struct vm_area_struct *);
+static int kfd_mmap(struct file *, struct mm_area *);
 
 static const char kfd_dev_name[] = "kfd";
 
@@ -3360,7 +3360,7 @@ static long kfd_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 }
 
 static int kfd_mmio_mmap(struct kfd_node *dev, struct kfd_process *process,
-		      struct vm_area_struct *vma)
+		      struct mm_area *vma)
 {
 	phys_addr_t address;
 
@@ -3393,7 +3393,7 @@ static int kfd_mmio_mmap(struct kfd_node *dev, struct kfd_process *process,
 }
 
 
-static int kfd_mmap(struct file *filp, struct vm_area_struct *vma)
+static int kfd_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct kfd_process *process;
 	struct kfd_node *dev = NULL;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
index 05c74887fd6f..cff9e53c009c 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
@@ -104,7 +104,7 @@ void kfd_doorbell_fini(struct kfd_dev *kfd)
 }
 
 int kfd_doorbell_mmap(struct kfd_node *dev, struct kfd_process *process,
-		      struct vm_area_struct *vma)
+		      struct mm_area *vma)
 {
 	phys_addr_t address;
 	struct kfd_process_device *pdd;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
index fecdb6794075..8b767a08782a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
@@ -1063,7 +1063,7 @@ int kfd_wait_on_events(struct kfd_process *p,
 	return ret;
 }
 
-int kfd_event_mmap(struct kfd_process *p, struct vm_area_struct *vma)
+int kfd_event_mmap(struct kfd_process *p, struct mm_area *vma)
 {
 	unsigned long pfn;
 	struct kfd_signal_page *page;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index 79251f22b702..86560564d30d 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -240,7 +240,7 @@ svm_migrate_addr(struct amdgpu_device *adev, struct page *page)
 }
 
 static struct page *
-svm_migrate_get_sys_page(struct vm_area_struct *vma, unsigned long addr)
+svm_migrate_get_sys_page(struct mm_area *vma, unsigned long addr)
 {
 	struct page *page;
 
@@ -385,7 +385,7 @@ svm_migrate_copy_to_vram(struct kfd_node *node, struct svm_range *prange,
 
 static long
 svm_migrate_vma_to_vram(struct kfd_node *node, struct svm_range *prange,
-			struct vm_area_struct *vma, uint64_t start,
+			struct mm_area *vma, uint64_t start,
 			uint64_t end, uint32_t trigger, uint64_t ttm_res_offset)
 {
 	struct kfd_process *p = container_of(prange->svms, struct kfd_process, svms);
@@ -489,7 +489,7 @@ svm_migrate_ram_to_vram(struct svm_range *prange, uint32_t best_loc,
 			struct mm_struct *mm, uint32_t trigger)
 {
 	unsigned long addr, start, end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	uint64_t ttm_res_offset;
 	struct kfd_node *node;
 	unsigned long mpages = 0;
@@ -668,7 +668,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
  * svm_migrate_vma_to_ram - migrate range inside one vma from device to system
  *
  * @prange: svm range structure
- * @vma: vm_area_struct that range [start, end] belongs to
+ * @vma: struct mm_area that the range [start, end] belongs to
  * @start: range start virtual address in pages
  * @end: range end virtual address in pages
  * @node: kfd node device to migrate from
@@ -683,7 +683,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
  */
 static long
 svm_migrate_vma_to_ram(struct kfd_node *node, struct svm_range *prange,
-		       struct vm_area_struct *vma, uint64_t start, uint64_t end,
+		       struct mm_area *vma, uint64_t start, uint64_t end,
 		       uint32_t trigger, struct page *fault_page)
 {
 	struct kfd_process *p = container_of(prange->svms, struct kfd_process, svms);
@@ -793,7 +793,7 @@ int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
 			    uint32_t trigger, struct page *fault_page)
 {
 	struct kfd_node *node;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr;
 	unsigned long start;
 	unsigned long end;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index f6aedf69c644..82d332c7bdd1 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -61,7 +61,7 @@
  * BITS[61:46] - Encode gpu_id. To identify to which GPU the offset belongs to
  * BITS[45:0]  - MMAP offset value
  *
- * NOTE: struct vm_area_struct.vm_pgoff uses offset in pages. Hence, these
+ * NOTE: struct mm_area.vm_pgoff uses offset in pages. Hence, these
  *  defines are w.r.t to PAGE_SIZE
  */
 #define KFD_MMAP_TYPE_SHIFT	62
@@ -1077,7 +1077,7 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
 bool kfd_process_xnack_mode(struct kfd_process *p, bool supported);
 
 int kfd_reserved_mem_mmap(struct kfd_node *dev, struct kfd_process *process,
-			  struct vm_area_struct *vma);
+			  struct mm_area *vma);
 
 /* KFD process API for creating and translating handles */
 int kfd_process_device_create_obj_handle(struct kfd_process_device *pdd,
@@ -1099,7 +1099,7 @@ size_t kfd_doorbell_process_slice(struct kfd_dev *kfd);
 int kfd_doorbell_init(struct kfd_dev *kfd);
 void kfd_doorbell_fini(struct kfd_dev *kfd);
 int kfd_doorbell_mmap(struct kfd_node *dev, struct kfd_process *process,
-		      struct vm_area_struct *vma);
+		      struct mm_area *vma);
 void __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
 					unsigned int *doorbell_off);
 void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr);
@@ -1487,7 +1487,7 @@ extern const struct kfd_device_global_init_class device_global_init_class_cik;
 
 int kfd_event_init_process(struct kfd_process *p);
 void kfd_event_free_process(struct kfd_process *p);
-int kfd_event_mmap(struct kfd_process *process, struct vm_area_struct *vma);
+int kfd_event_mmap(struct kfd_process *process, struct mm_area *vma);
 int kfd_wait_on_events(struct kfd_process *p,
 		       uint32_t num_events, void __user *data,
 		       bool all, uint32_t *user_timeout_ms,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index 7c0c24732481..94056ffd51d7 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -2111,7 +2111,7 @@ int kfd_resume_all_processes(void)
 }
 
 int kfd_reserved_mem_mmap(struct kfd_node *dev, struct kfd_process *process,
-			  struct vm_area_struct *vma)
+			  struct mm_area *vma)
 {
 	struct kfd_process_device *pdd;
 	struct qcm_process_device *qpd;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index 100717a98ec1..01e2538d9622 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -1704,7 +1704,7 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
 		struct hmm_range *hmm_range = NULL;
 		unsigned long map_start_vma;
 		unsigned long map_last_vma;
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		unsigned long next = 0;
 		unsigned long offset;
 		unsigned long npages;
@@ -2721,7 +2721,7 @@ svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
 			       unsigned long *start, unsigned long *last,
 			       bool *is_heap_stack)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct interval_tree_node *node;
 	struct rb_node *rb_node;
 	unsigned long start_limit, end_limit;
@@ -2938,7 +2938,7 @@ svm_range_count_fault(struct kfd_node *node, struct kfd_process *p,
 }
 
 static bool
-svm_fault_allowed(struct vm_area_struct *vma, bool write_fault)
+svm_fault_allowed(struct mm_area *vma, bool write_fault)
 {
 	unsigned long requested = VM_READ;
 
@@ -2965,7 +2965,7 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
 	int32_t best_loc;
 	int32_t gpuid, gpuidx = MAX_GPU_INSTANCE;
 	bool write_locked = false;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	bool migration = false;
 	int r = 0;
 
@@ -3373,7 +3373,7 @@ static int
 svm_range_is_valid(struct kfd_process *p, uint64_t start, uint64_t size)
 {
 	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long end;
 	unsigned long start_unchg = start;
 
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
index 1a1680d71486..94767247f919 100644
--- a/drivers/gpu/drm/armada/armada_gem.c
+++ b/drivers/gpu/drm/armada/armada_gem.c
@@ -471,7 +471,7 @@ static void armada_gem_prime_unmap_dma_buf(struct dma_buf_attachment *attach,
 }
 
 static int
-armada_gem_dmabuf_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
+armada_gem_dmabuf_mmap(struct dma_buf *buf, struct mm_area *vma)
 {
 	return -EINVAL;
 }
diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
index 02a516e77192..d6b5bcdbc19f 100644
--- a/drivers/gpu/drm/drm_fbdev_dma.c
+++ b/drivers/gpu/drm/drm_fbdev_dma.c
@@ -35,7 +35,7 @@ static int drm_fbdev_dma_fb_release(struct fb_info *info, int user)
 	return 0;
 }
 
-static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct drm_fb_helper *fb_helper = info->par;
 
diff --git a/drivers/gpu/drm/drm_fbdev_shmem.c b/drivers/gpu/drm/drm_fbdev_shmem.c
index f824369baacd..3077d8e6e55b 100644
--- a/drivers/gpu/drm/drm_fbdev_shmem.c
+++ b/drivers/gpu/drm/drm_fbdev_shmem.c
@@ -38,7 +38,7 @@ FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(drm_fbdev_shmem,
 				   drm_fb_helper_damage_range,
 				   drm_fb_helper_damage_area);
 
-static int drm_fbdev_shmem_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int drm_fbdev_shmem_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct drm_fb_helper *fb_helper = info->par;
 	struct drm_framebuffer *fb = fb_helper->fb;
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6240bab3fa5..f7a750cea62c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1013,7 +1013,7 @@ EXPORT_SYMBOL(drm_gem_object_free);
  * This function implements the #vm_operations_struct open() callback for GEM
  * drivers. This must be used together with drm_gem_vm_close().
  */
-void drm_gem_vm_open(struct vm_area_struct *vma)
+void drm_gem_vm_open(struct mm_area *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 
@@ -1028,7 +1028,7 @@ EXPORT_SYMBOL(drm_gem_vm_open);
  * This function implements the #vm_operations_struct close() callback for GEM
  * drivers. This must be used together with drm_gem_vm_open().
  */
-void drm_gem_vm_close(struct vm_area_struct *vma)
+void drm_gem_vm_close(struct mm_area *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 
@@ -1061,7 +1061,7 @@ EXPORT_SYMBOL(drm_gem_vm_close);
  * size, or if no vm_ops are provided.
  */
 int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
-		     struct vm_area_struct *vma)
+		     struct mm_area *vma)
 {
 	int ret;
 
@@ -1119,7 +1119,7 @@ EXPORT_SYMBOL(drm_gem_mmap_obj);
  * If the caller is not granted access to the buffer object, the mmap will fail
  * with EACCES. Please see the vma manager for more information.
  */
-int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+int drm_gem_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct drm_file *priv = filp->private_data;
 	struct drm_device *dev = priv->minor->dev;
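
drm_gem_mmap() is the stock fops->mmap entry point for GEM drivers, so most drivers never spell the VMA type out at all. A hedged sketch of the usual wiring (the fops name is invented):

#include <drm/drm_gem.h>

/* Expands to a struct file_operations whose .mmap is drm_gem_mmap(). */
DEFINE_DRM_GEM_FOPS(demo_gem_fops);
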
diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
index b7f033d4352a..d3ae2d67fcc0 100644
--- a/drivers/gpu/drm/drm_gem_dma_helper.c
+++ b/drivers/gpu/drm/drm_gem_dma_helper.c
@@ -519,7 +519,7 @@ EXPORT_SYMBOL_GPL(drm_gem_dma_vmap);
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *vma)
+int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct mm_area *vma)
 {
 	struct drm_gem_object *obj = &dma_obj->base;
 	int ret;
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d99dee67353a..b98f02716ad7 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -533,7 +533,7 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
 
 static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 	loff_t num_pages = obj->size >> PAGE_SHIFT;
@@ -561,7 +561,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
+static void drm_gem_shmem_vm_open(struct mm_area *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
@@ -583,7 +583,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
 	drm_gem_vm_open(vma);
 }
 
-static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
+static void drm_gem_shmem_vm_close(struct mm_area *vma)
 {
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
@@ -613,7 +613,7 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
+int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct mm_area *vma)
 {
 	struct drm_gem_object *obj = &shmem->base;
 	int ret;
diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 3734aa2d1c5b..5ab41caf8e4a 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -97,7 +97,7 @@ EXPORT_SYMBOL(drm_gem_ttm_vunmap);
  * callback.
  */
 int drm_gem_ttm_mmap(struct drm_gem_object *gem,
-		     struct vm_area_struct *vma)
+		     struct mm_area *vma)
 {
 	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
 	int ret;
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 38431e8360e7..8d7fd83f2f1f 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -902,7 +902,7 @@ static bool drm_gpusvm_check_pages(struct drm_gpusvm *gpusvm,
 static unsigned long
 drm_gpusvm_range_chunk_size(struct drm_gpusvm *gpusvm,
 			    struct drm_gpusvm_notifier *notifier,
-			    struct vm_area_struct *vas,
+			    struct mm_area *vas,
 			    unsigned long fault_addr,
 			    unsigned long gpuva_start,
 			    unsigned long gpuva_end,
@@ -1003,7 +1003,7 @@ drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
 	struct drm_gpusvm_notifier *notifier;
 	struct drm_gpusvm_range *range;
 	struct mm_struct *mm = gpusvm->mm;
-	struct vm_area_struct *vas;
+	struct mm_area *vas;
 	bool notifier_alloc = false;
 	unsigned long chunk_size;
 	int err;
@@ -1678,7 +1678,7 @@ int drm_gpusvm_migrate_to_devmem(struct drm_gpusvm *gpusvm,
 	};
 	struct mm_struct *mm = gpusvm->mm;
 	unsigned long i, npages = npages_in_range(start, end);
-	struct vm_area_struct *vas;
+	struct mm_area *vas;
 	struct drm_gpusvm_zdd *zdd = NULL;
 	struct page **pages;
 	dma_addr_t *dma_addr;
@@ -1800,7 +1800,7 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_migrate_to_devmem);
  *
  * Return: 0 on success, negative error code on failure.
  */
-static int drm_gpusvm_migrate_populate_ram_pfn(struct vm_area_struct *vas,
+static int drm_gpusvm_migrate_populate_ram_pfn(struct mm_area *vas,
 					       struct page *fault_page,
 					       unsigned long npages,
 					       unsigned long *mpages,
@@ -1962,7 +1962,7 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_evict_to_ram);
  *
  * Return: 0 on success, negative error code on failure.
  */
-static int __drm_gpusvm_migrate_to_ram(struct vm_area_struct *vas,
+static int __drm_gpusvm_migrate_to_ram(struct mm_area *vas,
 				       void *device_private_page_owner,
 				       struct page *page,
 				       unsigned long fault_addr,
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index bdb51c8f262e..3691e0445696 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -737,7 +737,7 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
  * The fake GEM offset is added to vma->vm_pgoff and &drm_driver->fops->mmap is
  * called to set up the mapping.
  */
-int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+int drm_gem_prime_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct drm_file *priv;
 	struct file *fil;
@@ -795,7 +795,7 @@ EXPORT_SYMBOL(drm_gem_prime_mmap);
  *
  * Returns 0 on success or a negative error code on failure.
  */
-int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
+int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 2f844e82bc46..8a5d096ddb36 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -125,7 +125,7 @@ void etnaviv_gem_put_pages(struct etnaviv_gem_object *etnaviv_obj)
 }
 
 static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	pgprot_t vm_page_prot;
 
@@ -152,7 +152,7 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
 	return 0;
 }
 
-static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
 
@@ -161,7 +161,7 @@ static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *v
 
 static vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
 	struct page **pages;
@@ -718,7 +718,7 @@ static void etnaviv_gem_userptr_release(struct etnaviv_gem_object *etnaviv_obj)
 }
 
 static int etnaviv_gem_userptr_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	return -EINVAL;
 }
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
index e5ee82a0674c..20c10d1bedd2 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
@@ -68,7 +68,7 @@ struct etnaviv_gem_ops {
 	int (*get_pages)(struct etnaviv_gem_object *);
 	void (*release)(struct etnaviv_gem_object *);
 	void *(*vmap)(struct etnaviv_gem_object *);
-	int (*mmap)(struct etnaviv_gem_object *, struct vm_area_struct *);
+	int (*mmap)(struct etnaviv_gem_object *, struct mm_area *);
 };
 
 static inline bool is_active(struct etnaviv_gem_object *etnaviv_obj)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 42e57d142554..b81b597367e0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -89,7 +89,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
 }
 
 static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	int ret;
 
diff --git a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
index 9526a25e90ac..637b38b274cd 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
@@ -24,7 +24,7 @@
 
 #define MAX_CONNECTOR		4
 
-static int exynos_drm_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int exynos_drm_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct drm_fb_helper *helper = info->par;
 	struct drm_gem_object *obj = drm_gem_fb_get_obj(helper->fb, 0);
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index 4787fee4696f..8ab046d62150 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -20,7 +20,7 @@
 
 MODULE_IMPORT_NS("DMA_BUF");
 
-static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma);
 
 static int exynos_drm_alloc_buf(struct exynos_drm_gem *exynos_gem, bool kvmap)
 {
@@ -268,7 +268,7 @@ struct exynos_drm_gem *exynos_drm_gem_get(struct drm_file *filp,
 }
 
 static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem *exynos_gem,
-				      struct vm_area_struct *vma)
+				      struct mm_area *vma)
 {
 	struct drm_device *drm_dev = exynos_gem->base.dev;
 	unsigned long vm_size;
@@ -360,7 +360,7 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
 	return 0;
 }
 
-static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct exynos_drm_gem *exynos_gem = to_exynos_gem(obj);
 	int ret;
diff --git a/drivers/gpu/drm/gma500/fbdev.c b/drivers/gpu/drm/gma500/fbdev.c
index 8edefea2ef59..57ff0f19937d 100644
--- a/drivers/gpu/drm/gma500/fbdev.c
+++ b/drivers/gpu/drm/gma500/fbdev.c
@@ -22,7 +22,7 @@
 
 static vm_fault_t psb_fbdev_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct fb_info *info = vma->vm_private_data;
 	unsigned long address = vmf->address - (vmf->pgoff << PAGE_SHIFT);
 	unsigned long pfn = info->fix.smem_start >> PAGE_SHIFT;
@@ -93,7 +93,7 @@ static int psb_fbdev_fb_setcolreg(unsigned int regno,
 	return 0;
 }
 
-static int psb_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int psb_fbdev_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	if (vma->vm_pgoff != 0)
 		return -EINVAL;
diff --git a/drivers/gpu/drm/gma500/gem.c b/drivers/gpu/drm/gma500/gem.c
index 4b7627a72637..b458c86773dd 100644
--- a/drivers/gpu/drm/gma500/gem.c
+++ b/drivers/gpu/drm/gma500/gem.c
@@ -253,7 +253,7 @@ int psb_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
  */
 static vm_fault_t psb_gem_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *obj;
 	struct psb_gem_object *pobj;
 	int err;
diff --git a/drivers/gpu/drm/i915/display/intel_bo.c b/drivers/gpu/drm/i915/display/intel_bo.c
index fbd16d7b58d9..b193ee0f7171 100644
--- a/drivers/gpu/drm/i915/display/intel_bo.c
+++ b/drivers/gpu/drm/i915/display/intel_bo.c
@@ -32,7 +32,7 @@ void intel_bo_flush_if_display(struct drm_gem_object *obj)
 	i915_gem_object_flush_if_display(to_intel_bo(obj));
 }
 
-int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+int intel_bo_fb_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	return i915_gem_fb_mmap(to_intel_bo(obj), vma);
 }
diff --git a/drivers/gpu/drm/i915/display/intel_bo.h b/drivers/gpu/drm/i915/display/intel_bo.h
index ea7a2253aaa5..38f3518bb80f 100644
--- a/drivers/gpu/drm/i915/display/intel_bo.h
+++ b/drivers/gpu/drm/i915/display/intel_bo.h
@@ -8,14 +8,14 @@
 
 struct drm_gem_object;
 struct seq_file;
-struct vm_area_struct;
+struct mm_area;
 
 bool intel_bo_is_tiled(struct drm_gem_object *obj);
 bool intel_bo_is_userptr(struct drm_gem_object *obj);
 bool intel_bo_is_shmem(struct drm_gem_object *obj);
 bool intel_bo_is_protected(struct drm_gem_object *obj);
 void intel_bo_flush_if_display(struct drm_gem_object *obj);
-int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+int intel_bo_fb_mmap(struct drm_gem_object *obj, struct mm_area *vma);
 int intel_bo_read_from_page(struct drm_gem_object *obj, u64 offset, void *dst, int size);
 
 struct intel_frontbuffer *intel_bo_get_frontbuffer(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c
index adc19d5607de..69ade9a6ca90 100644
--- a/drivers/gpu/drm/i915/display/intel_fbdev.c
+++ b/drivers/gpu/drm/i915/display/intel_fbdev.c
@@ -121,7 +121,7 @@ static int intel_fbdev_pan_display(struct fb_var_screeninfo *var,
 	return ret;
 }
 
-static int intel_fbdev_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int intel_fbdev_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct drm_fb_helper *fb_helper = info->par;
 	struct drm_gem_object *obj = drm_gem_fb_get_obj(fb_helper->fb, 0);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
index 9473050ac842..2caf031bfbc1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
@@ -91,7 +91,7 @@ static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf,
 	i915_gem_object_unpin_map(obj);
 }
 
-static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
+static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma)
 {
 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
index c3dabb857960..9fcb86c991fd 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -27,7 +27,7 @@
 #include "i915_vma.h"
 
 static inline bool
-__vma_matches(struct vm_area_struct *vma, struct file *filp,
+__vma_matches(struct mm_area *vma, struct file *filp,
 	      unsigned long addr, unsigned long size)
 {
 	if (vma->vm_file != filp)
@@ -104,7 +104,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
 
 	if (args->flags & I915_MMAP_WC) {
 		struct mm_struct *mm = current->mm;
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		if (mmap_write_lock_killable(mm)) {
 			addr = -EINTR;
@@ -252,7 +252,7 @@ static vm_fault_t i915_error_to_vmf_fault(int err)
 
 static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 {
-	struct vm_area_struct *area = vmf->vma;
+	struct mm_area *area = vmf->vma;
 	struct i915_mmap_offset *mmo = area->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
 	unsigned long obj_offset;
@@ -295,7 +295,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
 	return i915_error_to_vmf_fault(err);
 }
 
-static void set_address_limits(struct vm_area_struct *area,
+static void set_address_limits(struct mm_area *area,
 			       struct i915_vma *vma,
 			       unsigned long obj_offset,
 			       resource_size_t gmadr_start,
@@ -339,7 +339,7 @@ static void set_address_limits(struct vm_area_struct *area,
 static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
 {
 #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
-	struct vm_area_struct *area = vmf->vma;
+	struct mm_area *area = vmf->vma;
 	struct i915_mmap_offset *mmo = area->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
 	struct drm_device *dev = obj->base.dev;
@@ -506,7 +506,7 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
 }
 
 static int
-vm_access(struct vm_area_struct *area, unsigned long addr,
+vm_access(struct mm_area *area, unsigned long addr,
 	  void *buf, int len, int write)
 {
 	struct i915_mmap_offset *mmo = area->vm_private_data;
@@ -919,7 +919,7 @@ i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
 	return __assign_mmap_offset_handle(file, args->handle, type, &args->offset);
 }
 
-static void vm_open(struct vm_area_struct *vma)
+static void vm_open(struct mm_area *vma)
 {
 	struct i915_mmap_offset *mmo = vma->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
@@ -928,7 +928,7 @@ static void vm_open(struct vm_area_struct *vma)
 	i915_gem_object_get(obj);
 }
 
-static void vm_close(struct vm_area_struct *vma)
+static void vm_close(struct mm_area *vma)
 {
 	struct i915_mmap_offset *mmo = vma->vm_private_data;
 	struct drm_i915_gem_object *obj = mmo->obj;
@@ -990,7 +990,7 @@ static struct file *mmap_singleton(struct drm_i915_private *i915)
 static int
 i915_gem_object_mmap(struct drm_i915_gem_object *obj,
 		     struct i915_mmap_offset *mmo,
-		     struct vm_area_struct *vma)
+		     struct mm_area *vma)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct drm_device *dev = &i915->drm;
@@ -1071,7 +1071,7 @@ i915_gem_object_mmap(struct drm_i915_gem_object *obj,
  * be able to resolve multiple mmap offsets which could be tied
  * to a single gem object.
  */
-int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
+int i915_gem_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct drm_vma_offset_node *node;
 	struct drm_file *priv = filp->private_data;
@@ -1114,7 +1114,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return i915_gem_object_mmap(obj, mmo, vma);
 }
 
-int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma)
+int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct mm_area *vma)
 {
 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
 	struct drm_device *dev = &i915->drm;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.h b/drivers/gpu/drm/i915/gem/i915_gem_mman.h
index 196417fd0f5c..5e6faa37dbc2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.h
@@ -18,7 +18,7 @@ struct i915_mmap_offset;
 struct mutex;
 
 int i915_gem_mmap_gtt_version(void);
-int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+int i915_gem_mmap(struct file *filp, struct mm_area *vma);
 
 int i915_gem_dumb_mmap_offset(struct drm_file *file_priv,
 			      struct drm_device *dev,
@@ -29,5 +29,5 @@ void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj);
 
 void i915_gem_object_runtime_pm_release_mmap_offset(struct drm_i915_gem_object *obj);
 void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj);
-int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma);
+int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct mm_area *vma);
 #endif
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 1f4814968868..b65ee3c4c4fc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -1034,7 +1034,7 @@ static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
 
 static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
 {
-	struct vm_area_struct *area = vmf->vma;
+	struct mm_area *area = vmf->vma;
 	struct ttm_buffer_object *bo = area->vm_private_data;
 	struct drm_device *dev = bo->base.dev;
 	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
@@ -1147,7 +1147,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
 }
 
 static int
-vm_access_ttm(struct vm_area_struct *area, unsigned long addr,
+vm_access_ttm(struct mm_area *area, unsigned long addr,
 	      void *buf, int len, int write)
 {
 	struct drm_i915_gem_object *obj =
@@ -1159,7 +1159,7 @@ vm_access_ttm(struct vm_area_struct *area, unsigned long addr,
 	return ttm_bo_vm_access(area, addr, buf, len, write);
 }
 
-static void ttm_vm_open(struct vm_area_struct *vma)
+static void ttm_vm_open(struct mm_area *vma)
 {
 	struct drm_i915_gem_object *obj =
 		i915_ttm_to_gem(vma->vm_private_data);
@@ -1168,7 +1168,7 @@ static void ttm_vm_open(struct vm_area_struct *vma)
 	i915_gem_object_get(obj);
 }
 
-static void ttm_vm_close(struct vm_area_struct *vma)
+static void ttm_vm_close(struct mm_area *vma)
 {
 	struct drm_i915_gem_object *obj =
 		i915_ttm_to_gem(vma->vm_private_data);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 09b68713ab32..a3badd817b6b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -401,7 +401,7 @@ static int
 probe_range(struct mm_struct *mm, unsigned long addr, unsigned long len)
 {
 	VMA_ITERATOR(vmi, mm, addr);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long end = addr + len;
 
 	mmap_read_lock(mm);
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 804f74084bd4..c0a2c9bed6da 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -896,7 +896,7 @@ static int __igt_mmap(struct drm_i915_private *i915,
 		      struct drm_i915_gem_object *obj,
 		      enum i915_mmap_type type)
 {
-	struct vm_area_struct *area;
+	struct mm_area *area;
 	unsigned long addr;
 	int err, i;
 	u64 offset;
@@ -924,7 +924,7 @@ static int __igt_mmap(struct drm_i915_private *i915,
 	area = vma_lookup(current->mm, addr);
 	mmap_read_unlock(current->mm);
 	if (!area) {
-		pr_err("%s: Did not create a vm_area_struct for the mmap\n",
+		pr_err("%s: Did not create a mm_area for the mmap\n",
 		       obj->mm.region->name);
 		err = -EINVAL;
 		goto out_unmap;
@@ -1096,7 +1096,7 @@ static int ___igt_mmap_migrate(struct drm_i915_private *i915,
 			       unsigned long addr,
 			       bool unfaultable)
 {
-	struct vm_area_struct *area;
+	struct mm_area *area;
 	int err = 0, i;
 
 	pr_info("igt_mmap(%s, %d) @ %lx\n",
@@ -1106,7 +1106,7 @@ static int ___igt_mmap_migrate(struct drm_i915_private *i915,
 	area = vma_lookup(current->mm, addr);
 	mmap_read_unlock(current->mm);
 	if (!area) {
-		pr_err("%s: Did not create a vm_area_struct for the mmap\n",
+		pr_err("%s: Did not create a mm_area for the mmap\n",
 		       obj->mm.region->name);
 		err = -EINVAL;
 		goto out_unmap;
diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
index 5cd58e0f0dcf..11140801f804 100644
--- a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
+++ b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
@@ -82,7 +82,7 @@ static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
 	vm_unmap_ram(map->vaddr, mock->npages);
 }
 
-static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
+static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma)
 {
 	return -ENODEV;
 }
diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
index 69830a5c49d3..8f4cc972a94c 100644
--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
+++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
@@ -1011,7 +1011,7 @@ static ssize_t intel_vgpu_write(struct vfio_device *vfio_dev,
 }
 
 static int intel_vgpu_mmap(struct vfio_device *vfio_dev,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
 	unsigned int index;
diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
index 76e2801619f0..d92cf85a65cf 100644
--- a/drivers/gpu/drm/i915/i915_mm.c
+++ b/drivers/gpu/drm/i915/i915_mm.c
@@ -91,7 +91,7 @@ static int remap_pfn(pte_t *pte, unsigned long addr, void *data)
  *
  *  Note: this is only safe if the mm semaphore is held when called.
  */
-int remap_io_mapping(struct vm_area_struct *vma,
+int remap_io_mapping(struct mm_area *vma,
 		     unsigned long addr, unsigned long pfn, unsigned long size,
 		     struct io_mapping *iomap)
 {
@@ -127,7 +127,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
  *
  *  Note: this is only safe if the mm semaphore is held when called.
  */
-int remap_io_sg(struct vm_area_struct *vma,
+int remap_io_sg(struct mm_area *vma,
 		unsigned long addr, unsigned long size,
 		struct scatterlist *sgl, unsigned long offset,
 		resource_size_t iobase)
diff --git a/drivers/gpu/drm/i915/i915_mm.h b/drivers/gpu/drm/i915/i915_mm.h
index 69f9351b1a1c..0ba12093b9ed 100644
--- a/drivers/gpu/drm/i915/i915_mm.h
+++ b/drivers/gpu/drm/i915/i915_mm.h
@@ -9,17 +9,17 @@
 #include <linux/bug.h>
 #include <linux/types.h>
 
-struct vm_area_struct;
+struct mm_area;
 struct io_mapping;
 struct scatterlist;
 
 #if IS_ENABLED(CONFIG_X86)
-int remap_io_mapping(struct vm_area_struct *vma,
+int remap_io_mapping(struct mm_area *vma,
 		     unsigned long addr, unsigned long pfn, unsigned long size,
 		     struct io_mapping *iomap);
 #else
 static inline
-int remap_io_mapping(struct vm_area_struct *vma,
+int remap_io_mapping(struct mm_area *vma,
 		     unsigned long addr, unsigned long pfn, unsigned long size,
 		     struct io_mapping *iomap)
 {
@@ -28,7 +28,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
 }
 #endif
 
-int remap_io_sg(struct vm_area_struct *vma,
+int remap_io_sg(struct mm_area *vma,
 		unsigned long addr, unsigned long size,
 		struct scatterlist *sgl, unsigned long offset,
 		resource_size_t iobase);
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
index 6a8c81fe8c1e..b89482468e95 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.c
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -27,7 +27,7 @@ static void pvr_gem_object_free(struct drm_gem_object *obj)
 	drm_gem_shmem_object_free(obj);
 }
 
-static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma)
+static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct mm_area *vma)
 {
 	struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(gem_obj);
 	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 9bb997dbb4b9..236327d428cd 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -198,7 +198,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 	return drm_gem_shmem_vmap(&bo->base, map);
 }
 
-static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int lima_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h
index ccea06142f4b..2dc229d7a747 100644
--- a/drivers/gpu/drm/lima/lima_gem.h
+++ b/drivers/gpu/drm/lima/lima_gem.h
@@ -42,6 +42,6 @@ int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset);
 int lima_gem_submit(struct drm_file *file, struct lima_submit *submit);
 int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, s64 timeout_ns);
 
-void lima_set_vma_flags(struct vm_area_struct *vma);
+void lima_set_vma_flags(struct mm_area *vma);
 
 #endif
diff --git a/drivers/gpu/drm/loongson/lsdc_gem.c b/drivers/gpu/drm/loongson/lsdc_gem.c
index a720d8f53209..21d13a9acde5 100644
--- a/drivers/gpu/drm/loongson/lsdc_gem.c
+++ b/drivers/gpu/drm/loongson/lsdc_gem.c
@@ -110,7 +110,7 @@ static void lsdc_gem_object_vunmap(struct drm_gem_object *obj, struct iosys_map
 	}
 }
 
-static int lsdc_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int lsdc_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct ttm_buffer_object *tbo = to_ttm_bo(obj);
 	int ret;
diff --git a/drivers/gpu/drm/mediatek/mtk_gem.c b/drivers/gpu/drm/mediatek/mtk_gem.c
index a172456d1d7b..254a991e94b2 100644
--- a/drivers/gpu/drm/mediatek/mtk_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_gem.c
@@ -15,7 +15,7 @@
 #include "mtk_drm_drv.h"
 #include "mtk_gem.h"
 
-static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
+static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma);
 
 static const struct vm_operations_struct vm_ops = {
 	.open = drm_gem_vm_open,
@@ -157,7 +157,7 @@ int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
 }
 
 static int mtk_gem_object_mmap(struct drm_gem_object *obj,
-			       struct vm_area_struct *vma)
+			       struct mm_area *vma)
 
 {
 	int ret;
diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
index c62249b1ab3d..058585d17be3 100644
--- a/drivers/gpu/drm/msm/msm_fbdev.c
+++ b/drivers/gpu/drm/msm/msm_fbdev.c
@@ -29,7 +29,7 @@ FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(msm_fbdev,
 				   drm_fb_helper_damage_range,
 				   drm_fb_helper_damage_area)
 
-static int msm_fbdev_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int msm_fbdev_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct drm_fb_helper *helper = (struct drm_fb_helper *)info->par;
 	struct drm_gem_object *bo = msm_framebuffer_bo(helper->fb, 0);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ebc9ba66efb8..4564662c845c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -321,7 +321,7 @@ static pgprot_t msm_gem_pgprot(struct msm_gem_object *msm_obj, pgprot_t prot)
 
 static vm_fault_t msm_gem_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct page **pages;
@@ -1097,7 +1097,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 	kfree(msm_obj);
 }
 
-static int msm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int msm_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 61d0f411ef84..4dd166e36cfe 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -691,7 +691,7 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 int
 nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 			 struct nouveau_svmm *svmm,
-			 struct vm_area_struct *vma,
+			 struct mm_area *vma,
 			 unsigned long start,
 			 unsigned long end)
 {
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.h b/drivers/gpu/drm/nouveau/nouveau_dmem.h
index 64da5d3635c8..c52336b7729f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.h
@@ -36,7 +36,7 @@ void nouveau_dmem_resume(struct nouveau_drm *);
 
 int nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 			     struct nouveau_svmm *svmm,
-			     struct vm_area_struct *vma,
+			     struct mm_area *vma,
 			     unsigned long start,
 			     unsigned long end);
 unsigned long nouveau_dmem_page_addr(struct page *page);
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 67e3c99de73a..db3fe08c1ee6 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -41,7 +41,7 @@
 
 static vm_fault_t nouveau_ttm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	pgprot_t prot;
 	vm_fault_t ret;
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index e12e2596ed84..43e5f70f664e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -173,7 +173,7 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
 	}
 
 	for (addr = args->va_start, end = args->va_end; addr < end;) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		unsigned long next;
 
 		vma = find_vma_intersection(mm, addr, end);
diff --git a/drivers/gpu/drm/omapdrm/omap_fbdev.c b/drivers/gpu/drm/omapdrm/omap_fbdev.c
index 7b6396890681..5a1818a59244 100644
--- a/drivers/gpu/drm/omapdrm/omap_fbdev.c
+++ b/drivers/gpu/drm/omapdrm/omap_fbdev.c
@@ -81,7 +81,7 @@ static int omap_fbdev_pan_display(struct fb_var_screeninfo *var, struct fb_info
 	return drm_fb_helper_pan_display(var, fbi);
 }
 
-static int omap_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int omap_fbdev_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
 
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index b9c67e4ca360..cbbdaf381ad3 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -351,7 +351,7 @@ size_t omap_gem_mmap_size(struct drm_gem_object *obj)
 
 /* Normal handling for the case of faulting in non-tiled buffers */
 static vm_fault_t omap_gem_fault_1d(struct drm_gem_object *obj,
-		struct vm_area_struct *vma, struct vm_fault *vmf)
+		struct mm_area *vma, struct vm_fault *vmf)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	unsigned long pfn;
@@ -377,7 +377,7 @@ static vm_fault_t omap_gem_fault_1d(struct drm_gem_object *obj,
 
 /* Special handling for the case of faulting in 2d tiled buffers */
 static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj,
-		struct vm_area_struct *vma, struct vm_fault *vmf)
+		struct mm_area *vma, struct vm_fault *vmf)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	struct omap_drm_private *priv = obj->dev->dev_private;
@@ -496,7 +496,7 @@ static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj,
  */
 static vm_fault_t omap_gem_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int err;
@@ -531,7 +531,7 @@ static vm_fault_t omap_gem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-static int omap_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int omap_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.h b/drivers/gpu/drm/omapdrm/omap_gem.h
index fec3fa0e4c33..d28793a23d46 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.h
+++ b/drivers/gpu/drm/omapdrm/omap_gem.h
@@ -23,7 +23,7 @@ struct file;
 struct list_head;
 struct page;
 struct seq_file;
-struct vm_area_struct;
+struct mm_area;
 struct vm_fault;
 
 union omap_gem_size;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
index 30cf1cdc1aa3..64d9520d20c0 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
@@ -61,7 +61,7 @@ static int omap_gem_dmabuf_end_cpu_access(struct dma_buf *buffer,
 }
 
 static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	struct drm_gem_object *obj = buffer->priv;
 
diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
index a9da1d1eeb70..c3092cf8f280 100644
--- a/drivers/gpu/drm/panthor/panthor_device.c
+++ b/drivers/gpu/drm/panthor/panthor_device.c
@@ -359,7 +359,7 @@ const char *panthor_exception_name(struct panthor_device *ptdev, u32 exception_c
 
 static vm_fault_t panthor_mmio_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct panthor_device *ptdev = vma->vm_private_data;
 	u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
 	unsigned long pfn;
@@ -403,7 +403,7 @@ static const struct vm_operations_struct panthor_mmio_vm_ops = {
 	.fault = panthor_mmio_vm_fault,
 };
 
-int panthor_device_mmap_io(struct panthor_device *ptdev, struct vm_area_struct *vma)
+int panthor_device_mmap_io(struct panthor_device *ptdev, struct mm_area *vma)
 {
 	u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
 
diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
index da6574021664..a3205e6b0518 100644
--- a/drivers/gpu/drm/panthor/panthor_device.h
+++ b/drivers/gpu/drm/panthor/panthor_device.h
@@ -253,7 +253,7 @@ static inline bool panthor_device_reset_is_pending(struct panthor_device *ptdev)
 }
 
 int panthor_device_mmap_io(struct panthor_device *ptdev,
-			   struct vm_area_struct *vma);
+			   struct mm_area *vma);
 
 int panthor_device_resume(struct device *dev);
 int panthor_device_suspend(struct device *dev);
diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
index 06fe46e32073..3fca24a494d4 100644
--- a/drivers/gpu/drm/panthor/panthor_drv.c
+++ b/drivers/gpu/drm/panthor/panthor_drv.c
@@ -1402,7 +1402,7 @@ static const struct drm_ioctl_desc panthor_drm_driver_ioctls[] = {
 	PANTHOR_IOCTL(GROUP_SUBMIT, group_submit, DRM_RENDER_ALLOW),
 };
 
-static int panthor_mmap(struct file *filp, struct vm_area_struct *vma)
+static int panthor_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct drm_file *file = filp->private_data;
 	struct panthor_file *pfile = file->driver_priv;
diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
index 8244a4e6c2a2..a323f6580f9c 100644
--- a/drivers/gpu/drm/panthor/panthor_gem.c
+++ b/drivers/gpu/drm/panthor/panthor_gem.c
@@ -129,7 +129,7 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
 	return ERR_PTR(ret);
 }
 
-static int panthor_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int panthor_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct panthor_gem_object *bo = to_panthor_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index f86773f3db20..83230ce4e4f3 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -263,7 +263,7 @@ static int radeon_gem_handle_lockup(struct radeon_device *rdev, int r)
 	return r;
 }
 
-static int radeon_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int radeon_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	struct radeon_device *rdev = radeon_get_rdev(bo->tbo.bdev);
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 616d25c8c2de..a9007d171911 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -338,7 +338,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm
 		/* check that we only pin down anonymous memory
 		   to prevent problems with writeback */
 		unsigned long end = gtt->userptr + (u64)ttm->num_pages * PAGE_SIZE;
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		vma = find_vma(gtt->usermm, gtt->userptr);
 		if (!vma || vma->vm_file || vma->vm_end < end)
 			return -EPERM;
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 6330b883efc3..f35e43ef35c0 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -213,7 +213,7 @@ static void rockchip_gem_free_buf(struct rockchip_gem_object *rk_obj)
 }
 
 static int rockchip_drm_gem_object_mmap_iommu(struct drm_gem_object *obj,
-					      struct vm_area_struct *vma)
+					      struct mm_area *vma)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 	unsigned int count = obj->size >> PAGE_SHIFT;
@@ -226,7 +226,7 @@ static int rockchip_drm_gem_object_mmap_iommu(struct drm_gem_object *obj,
 }
 
 static int rockchip_drm_gem_object_mmap_dma(struct drm_gem_object *obj,
-					    struct vm_area_struct *vma)
+					    struct mm_area *vma)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 	struct drm_device *drm = obj->dev;
@@ -236,7 +236,7 @@ static int rockchip_drm_gem_object_mmap_dma(struct drm_gem_object *obj,
 }
 
 static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
-					struct vm_area_struct *vma)
+					struct mm_area *vma)
 {
 	int ret;
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
diff --git a/drivers/gpu/drm/tegra/fbdev.c b/drivers/gpu/drm/tegra/fbdev.c
index cd9d798f8870..bb7d18a7ee7c 100644
--- a/drivers/gpu/drm/tegra/fbdev.c
+++ b/drivers/gpu/drm/tegra/fbdev.c
@@ -22,7 +22,7 @@
 #include "drm.h"
 #include "gem.h"
 
-static int tegra_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int tegra_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct drm_fb_helper *helper = info->par;
 	struct tegra_bo *bo;
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index ace3e5a805cf..8c8233eeeaf9 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -560,7 +560,7 @@ int tegra_bo_dumb_create(struct drm_file *file, struct drm_device *drm,
 
 static vm_fault_t tegra_bo_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *gem = vma->vm_private_data;
 	struct tegra_bo *bo = to_tegra_bo(gem);
 	struct page *page;
@@ -581,7 +581,7 @@ const struct vm_operations_struct tegra_bo_vm_ops = {
 	.close = drm_gem_vm_close,
 };
 
-int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
+int __tegra_gem_mmap(struct drm_gem_object *gem, struct mm_area *vma)
 {
 	struct tegra_bo *bo = to_tegra_bo(gem);
 
@@ -616,7 +616,7 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
 	return 0;
 }
 
-int tegra_drm_mmap(struct file *file, struct vm_area_struct *vma)
+int tegra_drm_mmap(struct file *file, struct mm_area *vma)
 {
 	struct drm_gem_object *gem;
 	int err;
@@ -708,7 +708,7 @@ static int tegra_gem_prime_end_cpu_access(struct dma_buf *buf,
 	return 0;
 }
 
-static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
+static int tegra_gem_prime_mmap(struct dma_buf *buf, struct mm_area *vma)
 {
 	struct drm_gem_object *gem = buf->priv;
 	int err;
diff --git a/drivers/gpu/drm/tegra/gem.h b/drivers/gpu/drm/tegra/gem.h
index bf2cbd48eb3f..ca8e8a5e3335 100644
--- a/drivers/gpu/drm/tegra/gem.h
+++ b/drivers/gpu/drm/tegra/gem.h
@@ -93,8 +93,8 @@ int tegra_bo_dumb_create(struct drm_file *file, struct drm_device *drm,
 
 extern const struct vm_operations_struct tegra_bo_vm_ops;
 
-int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma);
-int tegra_drm_mmap(struct file *file, struct vm_area_struct *vma);
+int __tegra_gem_mmap(struct drm_gem_object *gem, struct mm_area *vma);
+int tegra_drm_mmap(struct file *file, struct mm_area *vma);
 
 struct dma_buf *tegra_gem_prime_export(struct drm_gem_object *gem,
 				       int flags);
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index a194db83421d..4139e029b35f 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -182,7 +182,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 				    pgprot_t prot,
 				    pgoff_t num_prefault)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	struct ttm_device *bdev = bo->bdev;
 	unsigned long page_offset;
@@ -290,7 +290,7 @@ static void ttm_bo_release_dummy_page(struct drm_device *dev, void *res)
 
 vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	struct drm_device *ddev = bo->base.dev;
 	vm_fault_t ret = VM_FAULT_NOPAGE;
@@ -320,7 +320,7 @@ EXPORT_SYMBOL(ttm_bo_vm_dummy_page);
 
 vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	pgprot_t prot;
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 	struct drm_device *ddev = bo->base.dev;
@@ -347,7 +347,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
 }
 EXPORT_SYMBOL(ttm_bo_vm_fault);
 
-void ttm_bo_vm_open(struct vm_area_struct *vma)
+void ttm_bo_vm_open(struct mm_area *vma)
 {
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 
@@ -357,7 +357,7 @@ void ttm_bo_vm_open(struct vm_area_struct *vma)
 }
 EXPORT_SYMBOL(ttm_bo_vm_open);
 
-void ttm_bo_vm_close(struct vm_area_struct *vma)
+void ttm_bo_vm_close(struct mm_area *vma)
 {
 	struct ttm_buffer_object *bo = vma->vm_private_data;
 
@@ -453,7 +453,7 @@ int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
 }
 EXPORT_SYMBOL(ttm_bo_access);
 
-int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+int ttm_bo_vm_access(struct mm_area *vma, unsigned long addr,
 		     void *buf, int len, int write)
 {
 	struct ttm_buffer_object *bo = vma->vm_private_data;
@@ -480,7 +480,7 @@ static const struct vm_operations_struct ttm_bo_vm_ops = {
  *
  * Maps a buffer object.
  */
-int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo)
+int ttm_bo_mmap_obj(struct mm_area *vma, struct ttm_buffer_object *bo)
 {
 	/* Enforce no COW since would have really strange behavior with it. */
 	if (is_cow_mapping(vma->vm_flags))
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index fb450b6a4d44..beedeaeecab4 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -715,7 +715,7 @@ static struct dma_buf *vc4_prime_export(struct drm_gem_object *obj, int flags)
 
 static vm_fault_t vc4_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
@@ -729,7 +729,7 @@ static vm_fault_t vc4_fault(struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
-static int vc4_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int vc4_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
index 5ad3b7c6f73c..02a03a237fb5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vram.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
@@ -30,7 +30,7 @@ static const struct vm_operations_struct virtio_gpu_vram_vm_ops = {
 };
 
 static int virtio_gpu_vram_mmap(struct drm_gem_object *obj,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	int ret;
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
index ed5015ced392..3d857670a3a1 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
@@ -107,7 +107,7 @@ static void vmw_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 		drm_gem_ttm_vunmap(obj, map);
 }
 
-static int vmw_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static int vmw_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	int ret;
 
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
index 74ff2812d66a..38567fdf7163 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
@@ -374,7 +374,7 @@ void vmw_bo_dirty_clear_res(struct vmw_resource *res)
 
 vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
 	    vma->vm_private_data;
 	vm_fault_t ret;
@@ -415,7 +415,7 @@ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf)
 
 vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
 	    vma->vm_private_data;
 	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
diff --git a/drivers/gpu/drm/xe/display/intel_bo.c b/drivers/gpu/drm/xe/display/intel_bo.c
index 27437c22bd70..6e32ab48de68 100644
--- a/drivers/gpu/drm/xe/display/intel_bo.c
+++ b/drivers/gpu/drm/xe/display/intel_bo.c
@@ -32,7 +32,7 @@ void intel_bo_flush_if_display(struct drm_gem_object *obj)
 {
 }
 
-int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+int intel_bo_fb_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	return drm_gem_prime_mmap(obj, vma);
 }
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 3c7c2353d3c8..20e08ee00eee 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1579,7 +1579,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-static int xe_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+static int xe_bo_vm_access(struct mm_area *vma, unsigned long addr,
 			   void *buf, int len, int write)
 {
 	struct ttm_buffer_object *ttm_bo = vma->vm_private_data;
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index d8e227ddf255..30a5eb67d7a1 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -237,12 +237,12 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
 #define xe_drm_compat_ioctl NULL
 #endif
 
-static void barrier_open(struct vm_area_struct *vma)
+static void barrier_open(struct mm_area *vma)
 {
 	drm_dev_get(vma->vm_private_data);
 }
 
-static void barrier_close(struct vm_area_struct *vma)
+static void barrier_close(struct mm_area *vma)
 {
 	drm_dev_put(vma->vm_private_data);
 }
@@ -257,7 +257,7 @@ static void barrier_release_dummy_page(struct drm_device *dev, void *res)
 static vm_fault_t barrier_fault(struct vm_fault *vmf)
 {
 	struct drm_device *dev = vmf->vma->vm_private_data;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	vm_fault_t ret = VM_FAULT_NOPAGE;
 	pgprot_t prot;
 	int idx;
@@ -299,7 +299,7 @@ static const struct vm_operations_struct vm_ops_barrier = {
 };
 
 static int xe_pci_barrier_mmap(struct file *filp,
-			       struct vm_area_struct *vma)
+			       struct mm_area *vma)
 {
 	struct drm_file *priv = filp->private_data;
 	struct drm_device *dev = priv->minor->dev;
@@ -326,7 +326,7 @@ static int xe_pci_barrier_mmap(struct file *filp,
 	return 0;
 }
 
-static int xe_mmap(struct file *filp, struct vm_area_struct *vma)
+static int xe_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct drm_file *priv = filp->private_data;
 	struct drm_device *dev = priv->minor->dev;
diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index 346f357b3d1f..d44ce76b3465 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -1623,7 +1623,7 @@ static int xe_oa_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static int xe_oa_mmap(struct file *file, struct vm_area_struct *vma)
+static int xe_oa_mmap(struct file *file, struct mm_area *vma)
 {
 	struct xe_oa_stream *stream = file->private_data;
 	struct xe_bo *bo = stream->oa_buffer.bo;
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 63112ed975c4..41449a270d89 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -58,7 +58,7 @@ static void gem_free_pages_array(struct xen_gem_object *xen_obj)
 }
 
 static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
-					 struct vm_area_struct *vma)
+					 struct mm_area *vma)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
 	int ret;
diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
index daa8e1bff5d9..ca6debea1173 100644
--- a/drivers/hsi/clients/cmt_speech.c
+++ b/drivers/hsi/clients/cmt_speech.c
@@ -1256,7 +1256,7 @@ static long cs_char_ioctl(struct file *file, unsigned int cmd,
 	return r;
 }
 
-static int cs_char_mmap(struct file *file, struct vm_area_struct *vma)
+static int cs_char_mmap(struct file *file, struct mm_area *vma)
 {
 	if (vma->vm_end < vma->vm_start)
 		return -EINVAL;
diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
index 72df774e410a..ac1e44563cbf 100644
--- a/drivers/hv/mshv_root_main.c
+++ b/drivers/hv/mshv_root_main.c
@@ -75,7 +75,7 @@ static int mshv_vp_release(struct inode *inode, struct file *filp);
 static long mshv_vp_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg);
 static int mshv_partition_release(struct inode *inode, struct file *filp);
 static long mshv_partition_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg);
-static int mshv_vp_mmap(struct file *file, struct vm_area_struct *vma);
+static int mshv_vp_mmap(struct file *file, struct mm_area *vma);
 static vm_fault_t mshv_vp_fault(struct vm_fault *vmf);
 static int mshv_init_async_handler(struct mshv_partition *partition);
 static void mshv_async_hvcall_handler(void *data, u64 *status);
@@ -831,7 +831,7 @@ static vm_fault_t mshv_vp_fault(struct vm_fault *vmf)
 	return 0;
 }
 
-static int mshv_vp_mmap(struct file *file, struct vm_area_struct *vma)
+static int mshv_vp_mmap(struct file *file, struct mm_area *vma)
 {
 	struct mshv_vp *vp = file->private_data;
 
@@ -1332,7 +1332,7 @@ mshv_map_user_memory(struct mshv_partition *partition,
 		     struct mshv_user_mem_region mem)
 {
 	struct mshv_mem_region *region;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	bool is_mmio;
 	ulong mmio_pfn;
 	long ret;
diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
index bf99d79a4192..f51cbe4a8c55 100644
--- a/drivers/hwtracing/intel_th/msu.c
+++ b/drivers/hwtracing/intel_th/msu.c
@@ -1589,7 +1589,7 @@ static ssize_t intel_th_msc_read(struct file *file, char __user *buf,
  * vm operations callbacks (vm_ops)
  */
 
-static void msc_mmap_open(struct vm_area_struct *vma)
+static void msc_mmap_open(struct mm_area *vma)
 {
 	struct msc_iter *iter = vma->vm_file->private_data;
 	struct msc *msc = iter->msc;
@@ -1597,7 +1597,7 @@ static void msc_mmap_open(struct vm_area_struct *vma)
 	atomic_inc(&msc->mmap_count);
 }
 
-static void msc_mmap_close(struct vm_area_struct *vma)
+static void msc_mmap_close(struct mm_area *vma)
 {
 	struct msc_iter *iter = vma->vm_file->private_data;
 	struct msc *msc = iter->msc;
@@ -1644,7 +1644,7 @@ static const struct vm_operations_struct msc_mmap_ops = {
 	.fault	= msc_mmap_fault,
 };
 
-static int intel_th_msc_mmap(struct file *file, struct vm_area_struct *vma)
+static int intel_th_msc_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned long size = vma->vm_end - vma->vm_start;
 	struct msc_iter *iter = vma->vm_file->private_data;
diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
index cdba4e875b28..c27322d82289 100644
--- a/drivers/hwtracing/stm/core.c
+++ b/drivers/hwtracing/stm/core.c
@@ -666,7 +666,7 @@ static ssize_t stm_char_write(struct file *file, const char __user *buf,
 	return count;
 }
 
-static void stm_mmap_open(struct vm_area_struct *vma)
+static void stm_mmap_open(struct mm_area *vma)
 {
 	struct stm_file *stmf = vma->vm_file->private_data;
 	struct stm_device *stm = stmf->stm;
@@ -674,7 +674,7 @@ static void stm_mmap_open(struct vm_area_struct *vma)
 	pm_runtime_get(&stm->dev);
 }
 
-static void stm_mmap_close(struct vm_area_struct *vma)
+static void stm_mmap_close(struct mm_area *vma)
 {
 	struct stm_file *stmf = vma->vm_file->private_data;
 	struct stm_device *stm = stmf->stm;
@@ -688,7 +688,7 @@ static const struct vm_operations_struct stm_mmap_vmops = {
 	.close	= stm_mmap_close,
 };
 
-static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
+static int stm_char_mmap(struct file *file, struct mm_area *vma)
 {
 	struct stm_file *stmf = file->private_data;
 	struct stm_device *stm = stmf->stm;
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index 05102769a918..6662f745c123 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -359,13 +359,13 @@ int rdma_nl_net_init(struct rdma_dev_net *rnet);
 void rdma_nl_net_exit(struct rdma_dev_net *rnet);
 
 struct rdma_umap_priv {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct list_head list;
 	struct rdma_user_mmap_entry *entry;
 };
 
 void rdma_umap_priv_init(struct rdma_umap_priv *priv,
-			 struct vm_area_struct *vma,
+			 struct mm_area *vma,
 			 struct rdma_user_mmap_entry *entry);
 
 void ib_cq_pool_cleanup(struct ib_device *dev);
diff --git a/drivers/infiniband/core/ib_core_uverbs.c b/drivers/infiniband/core/ib_core_uverbs.c
index b51bd7087a88..949863e7c66f 100644
--- a/drivers/infiniband/core/ib_core_uverbs.c
+++ b/drivers/infiniband/core/ib_core_uverbs.c
@@ -28,7 +28,7 @@
  *
  */
 void rdma_umap_priv_init(struct rdma_umap_priv *priv,
-			 struct vm_area_struct *vma,
+			 struct mm_area *vma,
 			 struct rdma_user_mmap_entry *entry)
 {
 	struct ib_uverbs_file *ufile = vma->vm_file->private_data;
@@ -64,7 +64,7 @@ EXPORT_SYMBOL(rdma_umap_priv_init);
  * Return -EINVAL on wrong flags or size, -EAGAIN on failure to map. 0 on
  * success.
  */
-int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
+int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct mm_area *vma,
 		      unsigned long pfn, unsigned long size, pgprot_t prot,
 		      struct rdma_user_mmap_entry *entry)
 {
@@ -159,7 +159,7 @@ EXPORT_SYMBOL(rdma_user_mmap_entry_get_pgoff);
  */
 struct rdma_user_mmap_entry *
 rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
-			 struct vm_area_struct *vma)
+			 struct mm_area *vma)
 {
 	struct rdma_user_mmap_entry *entry;
 
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index 973fe2c7ef53..565b497a4523 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -688,7 +688,7 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
 
 static const struct vm_operations_struct rdma_umap_ops;
 
-static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
+static int ib_uverbs_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct ib_uverbs_file *file = filp->private_data;
 	struct ib_ucontext *ucontext;
@@ -717,7 +717,7 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
  * The VMA has been dup'd, initialize the vm_private_data with a new tracking
  * struct
  */
-static void rdma_umap_open(struct vm_area_struct *vma)
+static void rdma_umap_open(struct mm_area *vma)
 {
 	struct ib_uverbs_file *ufile = vma->vm_file->private_data;
 	struct rdma_umap_priv *opriv = vma->vm_private_data;
@@ -759,7 +759,7 @@ static void rdma_umap_open(struct vm_area_struct *vma)
 	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
 }
 
-static void rdma_umap_close(struct vm_area_struct *vma)
+static void rdma_umap_close(struct mm_area *vma)
 {
 	struct ib_uverbs_file *ufile = vma->vm_file->private_data;
 	struct rdma_umap_priv *priv = vma->vm_private_data;
@@ -872,7 +872,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
 		mutex_lock(&ufile->umap_lock);
 		list_for_each_entry_safe (priv, next_priv, &ufile->umaps,
 					  list) {
-			struct vm_area_struct *vma = priv->vma;
+			struct mm_area *vma = priv->vma;
 
 			if (vma->vm_mm != mm)
 				continue;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 9082b3fd2b47..fd7b8fdc9bfb 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -4425,7 +4425,7 @@ static struct bnxt_re_srq *bnxt_re_search_for_srq(struct bnxt_re_dev *rdev, u32
 }
 
 /* Helper function to mmap the virtual memory from user app */
-int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct vm_area_struct *vma)
+int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct mm_area *vma)
 {
 	struct bnxt_re_ucontext *uctx = container_of(ib_uctx,
 						   struct bnxt_re_ucontext,
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index 22c9eb8e9cfc..6f709d4bfc12 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -265,7 +265,7 @@ struct ib_mr *bnxt_re_reg_user_mr_dmabuf(struct ib_pd *ib_pd, u64 start,
 					 struct uverbs_attr_bundle *attrs);
 int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata);
 void bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
-int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+int bnxt_re_mmap(struct ib_ucontext *context, struct mm_area *vma);
 void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
 
 int bnxt_re_process_mad(struct ib_device *device, int process_mad_flags,
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index e059f92d90fd..c3b14c76e9fd 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -125,7 +125,7 @@ static int c4iw_alloc_ucontext(struct ib_ucontext *ucontext,
 	return ret;
 }
 
-static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+static int c4iw_mmap(struct ib_ucontext *context, struct mm_area *vma)
 {
 	int len = vma->vm_end - vma->vm_start;
 	u32 key = vma->vm_pgoff << PAGE_SHIFT;
diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
index 838182d0409c..12502e6326bc 100644
--- a/drivers/infiniband/hw/efa/efa.h
+++ b/drivers/infiniband/hw/efa/efa.h
@@ -175,7 +175,7 @@ int efa_get_port_immutable(struct ib_device *ibdev, u32 port_num,
 int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata);
 void efa_dealloc_ucontext(struct ib_ucontext *ibucontext);
 int efa_mmap(struct ib_ucontext *ibucontext,
-	     struct vm_area_struct *vma);
+	     struct mm_area *vma);
 void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
 int efa_create_ah(struct ib_ah *ibah,
 		  struct rdma_ah_init_attr *init_attr,
diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index a8645a40730f..3b9b6308bada 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -1978,7 +1978,7 @@ void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
 }
 
 static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
-		      struct vm_area_struct *vma)
+		      struct mm_area *vma)
 {
 	struct rdma_user_mmap_entry *rdma_entry;
 	struct efa_user_mmap_entry *entry;
@@ -2041,7 +2041,7 @@ static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
 }
 
 int efa_mmap(struct ib_ucontext *ibucontext,
-	     struct vm_area_struct *vma)
+	     struct mm_area *vma)
 {
 	struct efa_ucontext *ucontext = to_eucontext(ibucontext);
 	struct efa_dev *dev = to_edev(ibucontext->device);
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
index af36a8d2df22..159f245e2e6b 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.c
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
@@ -1371,7 +1371,7 @@ void erdma_qp_put_ref(struct ib_qp *ibqp)
 	erdma_qp_put(to_eqp(ibqp));
 }
 
-int erdma_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma)
+int erdma_mmap(struct ib_ucontext *ctx, struct mm_area *vma)
 {
 	struct rdma_user_mmap_entry *rdma_entry;
 	struct erdma_user_mmap_entry *entry;
diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.h b/drivers/infiniband/hw/erdma/erdma_verbs.h
index f9408ccc8bad..a4fd2061301c 100644
--- a/drivers/infiniband/hw/erdma/erdma_verbs.h
+++ b/drivers/infiniband/hw/erdma/erdma_verbs.h
@@ -455,7 +455,7 @@ struct ib_mr *erdma_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len,
 				u64 virt, int access, struct ib_udata *udata);
 struct ib_mr *erdma_get_dma_mr(struct ib_pd *ibpd, int rights);
 int erdma_dereg_mr(struct ib_mr *ibmr, struct ib_udata *data);
-int erdma_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma);
+int erdma_mmap(struct ib_ucontext *ctx, struct mm_area *vma);
 void erdma_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
 void erdma_qp_get_ref(struct ib_qp *ibqp);
 void erdma_qp_put_ref(struct ib_qp *ibqp);
diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index 503abec709c9..239416504cd9 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -35,7 +35,7 @@ static int hfi1_file_open(struct inode *inode, struct file *fp);
 static int hfi1_file_close(struct inode *inode, struct file *fp);
 static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from);
 static __poll_t hfi1_poll(struct file *fp, struct poll_table_struct *pt);
-static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma);
+static int hfi1_file_mmap(struct file *fp, struct mm_area *vma);
 
 static u64 kvirt_to_phys(void *addr);
 static int assign_ctxt(struct hfi1_filedata *fd, unsigned long arg, u32 len);
@@ -306,7 +306,7 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
 
 static inline void mmap_cdbg(u16 ctxt, u8 subctxt, u8 type, u8 mapio, u8 vmf,
 			     u64 memaddr, void *memvirt, dma_addr_t memdma,
-			     ssize_t memlen, struct vm_area_struct *vma)
+			     ssize_t memlen, struct mm_area *vma)
 {
 	hfi1_cdbg(PROC,
 		  "%u:%u type:%u io/vf/dma:%d/%d/%d, addr:0x%llx, len:%lu(%lu), flags:0x%lx",
@@ -315,7 +315,7 @@ static inline void mmap_cdbg(u16 ctxt, u8 subctxt, u8 type, u8 mapio, u8 vmf,
 		  vma->vm_end - vma->vm_start, vma->vm_flags);
 }
 
-static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
+static int hfi1_file_mmap(struct file *fp, struct mm_area *vma)
 {
 	struct hfi1_filedata *fd = fp->private_data;
 	struct hfi1_ctxtdata *uctxt = fd->uctxt;
diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
index cf89a8db4f64..098c1ec4de0a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_main.c
+++ b/drivers/infiniband/hw/hns/hns_roce_main.c
@@ -457,7 +457,7 @@ static void hns_roce_dealloc_ucontext(struct ib_ucontext *ibcontext)
 	ida_free(&hr_dev->uar_ida.ida, (int)context->uar.logic_idx);
 }
 
-static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
+static int hns_roce_mmap(struct ib_ucontext *uctx, struct mm_area *vma)
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(uctx->device);
 	struct rdma_user_mmap_entry *rdma_entry;
diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index eeb932e58730..a361f423e140 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -117,7 +117,7 @@ static void irdma_disassociate_ucontext(struct ib_ucontext *context)
 }
 
 static int irdma_mmap_legacy(struct irdma_ucontext *ucontext,
-			     struct vm_area_struct *vma)
+			     struct mm_area *vma)
 {
 	u64 pfn;
 
@@ -168,7 +168,7 @@ irdma_user_mmap_entry_insert(struct irdma_ucontext *ucontext, u64 bar_offset,
  * @context: context created during alloc
  * @vma: kernel info for user memory map
  */
-static int irdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+static int irdma_mmap(struct ib_ucontext *context, struct mm_area *vma)
 {
 	struct rdma_user_mmap_entry *rdma_entry;
 	struct irdma_user_mmap_entry *entry;
diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
index eda9c5b971de..a11368d8c979 100644
--- a/drivers/infiniband/hw/mana/main.c
+++ b/drivers/infiniband/hw/mana/main.c
@@ -512,7 +512,7 @@ int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev, u64 gdma_region)
 	return mana_gd_destroy_dma_region(gc, gdma_region);
 }
 
-int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
+int mana_ib_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma)
 {
 	struct mana_ib_ucontext *mana_ucontext =
 		container_of(ibcontext, struct mana_ib_ucontext, ibucontext);
diff --git a/drivers/infiniband/hw/mana/mana_ib.h b/drivers/infiniband/hw/mana/mana_ib.h
index 6903946677e5..f02d93ed4fec 100644
--- a/drivers/infiniband/hw/mana/mana_ib.h
+++ b/drivers/infiniband/hw/mana/mana_ib.h
@@ -628,7 +628,7 @@ int mana_ib_alloc_ucontext(struct ib_ucontext *ibcontext,
 			   struct ib_udata *udata);
 void mana_ib_dealloc_ucontext(struct ib_ucontext *ibcontext);
 
-int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma);
+int mana_ib_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma);
 
 int mana_ib_get_port_immutable(struct ib_device *ibdev, u32 port_num,
 			       struct ib_port_immutable *immutable);
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index dd35e03402ab..26abc9faca3a 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1150,7 +1150,7 @@ static void mlx4_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
 {
 }
 
-static int mlx4_ib_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+static int mlx4_ib_mmap(struct ib_ucontext *context, struct mm_area *vma)
 {
 	struct mlx4_ib_dev *dev = to_mdev(context->device);
 
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index e77645a673fb..92821271c4a2 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -114,7 +114,7 @@ static struct ib_umem *mlx4_get_umem_mr(struct ib_device *device, u64 start,
 	 */
 	if (!ib_access_writable(access_flags)) {
 		unsigned long untagged_start = untagged_addr(start);
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		mmap_read_lock(current->mm);
 		/*
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index d07cacaa0abd..9434b1c99b60 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2201,7 +2201,7 @@ static inline char *mmap_cmd2str(enum mlx5_ib_mmap_cmd cmd)
 }
 
 static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					struct mlx5_ib_ucontext *context)
 {
 	if ((vma->vm_end - vma->vm_start != PAGE_SIZE) ||
@@ -2252,7 +2252,7 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
 }
 
 static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
-		    struct vm_area_struct *vma,
+		    struct mm_area *vma,
 		    struct mlx5_ib_ucontext *context)
 {
 	struct mlx5_bfreg_info *bfregi = &context->bfregi;
@@ -2359,7 +2359,7 @@ static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
 	return err;
 }
 
-static unsigned long mlx5_vma_to_pgoff(struct vm_area_struct *vma)
+static unsigned long mlx5_vma_to_pgoff(struct mm_area *vma)
 {
 	unsigned long idx;
 	u8 command;
@@ -2371,7 +2371,7 @@ static unsigned long mlx5_vma_to_pgoff(struct vm_area_struct *vma)
 }
 
 static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
-			       struct vm_area_struct *vma,
+			       struct mm_area *vma,
 			       struct ib_ucontext *ucontext)
 {
 	struct mlx5_user_mmap_entry *mentry;
@@ -2410,7 +2410,7 @@ static u64 mlx5_entry_to_mmap_offset(struct mlx5_user_mmap_entry *entry)
 		(index & 0xFF)) << PAGE_SHIFT;
 }
 
-static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
+static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma)
 {
 	struct mlx5_ib_ucontext *context = to_mucontext(ibcontext);
 	struct mlx5_ib_dev *dev = to_mdev(ibcontext->device);
diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
index 6a1e2e79ddc3..5934a0cc68a0 100644
--- a/drivers/infiniband/hw/mthca/mthca_provider.c
+++ b/drivers/infiniband/hw/mthca/mthca_provider.c
@@ -330,7 +330,7 @@ static void mthca_dealloc_ucontext(struct ib_ucontext *context)
 }
 
 static int mthca_mmap_uar(struct ib_ucontext *context,
-			  struct vm_area_struct *vma)
+			  struct mm_area *vma)
 {
 	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
 		return -EINVAL;
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index 979de8f8df14..a4940538d888 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -536,7 +536,7 @@ void ocrdma_dealloc_ucontext(struct ib_ucontext *ibctx)
 	}
 }
 
-int ocrdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+int ocrdma_mmap(struct ib_ucontext *context, struct mm_area *vma)
 {
 	struct ocrdma_ucontext *ucontext = get_ocrdma_ucontext(context);
 	struct ocrdma_dev *dev = get_ocrdma_dev(context->device);
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
index 0644346d8d98..7e9ff740faad 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
@@ -64,7 +64,7 @@ int ocrdma_query_pkey(struct ib_device *ibdev, u32 port, u16 index, u16 *pkey);
 int ocrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
 void ocrdma_dealloc_ucontext(struct ib_ucontext *uctx);
 
-int ocrdma_mmap(struct ib_ucontext *, struct vm_area_struct *vma);
+int ocrdma_mmap(struct ib_ucontext *, struct mm_area *vma);
 
 int ocrdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata);
 int ocrdma_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
index 568a5b18803f..779bcac34ca1 100644
--- a/drivers/infiniband/hw/qedr/verbs.c
+++ b/drivers/infiniband/hw/qedr/verbs.c
@@ -385,7 +385,7 @@ void qedr_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
 	kfree(entry);
 }
 
-int qedr_mmap(struct ib_ucontext *ucontext, struct vm_area_struct *vma)
+int qedr_mmap(struct ib_ucontext *ucontext, struct mm_area *vma)
 {
 	struct ib_device *dev = ucontext->device;
 	size_t length = vma->vm_end - vma->vm_start;
diff --git a/drivers/infiniband/hw/qedr/verbs.h b/drivers/infiniband/hw/qedr/verbs.h
index 5731458abb06..50654f10a4ea 100644
--- a/drivers/infiniband/hw/qedr/verbs.h
+++ b/drivers/infiniband/hw/qedr/verbs.h
@@ -45,7 +45,7 @@ int qedr_query_pkey(struct ib_device *ibdev, u32 port, u16 index, u16 *pkey);
 int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
 void qedr_dealloc_ucontext(struct ib_ucontext *uctx);
 
-int qedr_mmap(struct ib_ucontext *ucontext, struct vm_area_struct *vma);
+int qedr_mmap(struct ib_ucontext *ucontext, struct mm_area *vma);
 void qedr_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
 int qedr_alloc_pd(struct ib_pd *pd, struct ib_udata *udata);
 int qedr_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 29e4c59aa23b..b7ff897e3729 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -59,7 +59,7 @@ static int qib_close(struct inode *, struct file *);
 static ssize_t qib_write(struct file *, const char __user *, size_t, loff_t *);
 static ssize_t qib_write_iter(struct kiocb *, struct iov_iter *);
 static __poll_t qib_poll(struct file *, struct poll_table_struct *);
-static int qib_mmapf(struct file *, struct vm_area_struct *);
+static int qib_mmapf(struct file *, struct mm_area *);
 
 /*
  * This is really, really weird shit - write() and writev() here
@@ -705,7 +705,7 @@ static void qib_clean_part_key(struct qib_ctxtdata *rcd,
 }
 
 /* common code for the mappings on dma_alloc_coherent mem */
-static int qib_mmap_mem(struct vm_area_struct *vma, struct qib_ctxtdata *rcd,
+static int qib_mmap_mem(struct mm_area *vma, struct qib_ctxtdata *rcd,
 			unsigned len, void *kvaddr, u32 write_ok, char *what)
 {
 	struct qib_devdata *dd = rcd->dd;
@@ -747,7 +747,7 @@ static int qib_mmap_mem(struct vm_area_struct *vma, struct qib_ctxtdata *rcd,
 	return ret;
 }
 
-static int mmap_ureg(struct vm_area_struct *vma, struct qib_devdata *dd,
+static int mmap_ureg(struct mm_area *vma, struct qib_devdata *dd,
 		     u64 ureg)
 {
 	unsigned long phys;
@@ -778,7 +778,7 @@ static int mmap_ureg(struct vm_area_struct *vma, struct qib_devdata *dd,
 	return ret;
 }
 
-static int mmap_piobufs(struct vm_area_struct *vma,
+static int mmap_piobufs(struct mm_area *vma,
 			struct qib_devdata *dd,
 			struct qib_ctxtdata *rcd,
 			unsigned piobufs, unsigned piocnt)
@@ -823,7 +823,7 @@ static int mmap_piobufs(struct vm_area_struct *vma,
 	return ret;
 }
 
-static int mmap_rcvegrbufs(struct vm_area_struct *vma,
+static int mmap_rcvegrbufs(struct mm_area *vma,
 			   struct qib_ctxtdata *rcd)
 {
 	struct qib_devdata *dd = rcd->dd;
@@ -889,7 +889,7 @@ static const struct vm_operations_struct qib_file_vm_ops = {
 	.fault = qib_file_vma_fault,
 };
 
-static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
+static int mmap_kvaddr(struct mm_area *vma, u64 pgaddr,
 		       struct qib_ctxtdata *rcd, unsigned subctxt)
 {
 	struct qib_devdata *dd = rcd->dd;
@@ -971,7 +971,7 @@ static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
  * buffers in the chip.  We have the open and close entries so we can bump
  * the ref count and keep the driver from being unloaded while still mapped.
  */
-static int qib_mmapf(struct file *fp, struct vm_area_struct *vma)
+static int qib_mmapf(struct file *fp, struct mm_area *vma)
 {
 	struct qib_ctxtdata *rcd;
 	struct qib_devdata *dd;
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
index 217af34e82b3..9ed349e5fcc3 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
@@ -658,7 +658,7 @@ void usnic_ib_dealloc_ucontext(struct ib_ucontext *ibcontext)
 }
 
 int usnic_ib_mmap(struct ib_ucontext *context,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	struct usnic_ib_ucontext *uctx = to_ucontext(context);
 	struct usnic_ib_dev *us_ibdev;
diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
index 53f53f2d53be..e445f74b027f 100644
--- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
+++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
@@ -65,5 +65,5 @@ int usnic_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
 int usnic_ib_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
 void usnic_ib_dealloc_ucontext(struct ib_ucontext *ibcontext);
 int usnic_ib_mmap(struct ib_ucontext *context,
-			struct vm_area_struct *vma);
+			struct mm_area *vma);
 #endif /* !USNIC_IB_VERBS_H */
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
index bcd43dc30e21..e536181063cf 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
@@ -364,7 +364,7 @@ void pvrdma_dealloc_ucontext(struct ib_ucontext *ibcontext)
  *
  * @return: 0 on success, otherwise errno.
  */
-int pvrdma_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
+int pvrdma_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma)
 {
 	struct pvrdma_ucontext *context = to_vucontext(ibcontext);
 	unsigned long start = vma->vm_start;
diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
index fd47b0b1df5c..a3720f30cb8d 100644
--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
+++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
@@ -358,7 +358,7 @@ enum rdma_link_layer pvrdma_port_link_layer(struct ib_device *ibdev,
 					    u32 port);
 int pvrdma_modify_port(struct ib_device *ibdev, u32 port,
 		       int mask, struct ib_port_modify *props);
-int pvrdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+int pvrdma_mmap(struct ib_ucontext *context, struct mm_area *vma);
 int pvrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
 void pvrdma_dealloc_ucontext(struct ib_ucontext *context);
 int pvrdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata);
diff --git a/drivers/infiniband/sw/rdmavt/mmap.c b/drivers/infiniband/sw/rdmavt/mmap.c
index 46e3b3e0643a..45d7caafa4d0 100644
--- a/drivers/infiniband/sw/rdmavt/mmap.c
+++ b/drivers/infiniband/sw/rdmavt/mmap.c
@@ -39,14 +39,14 @@ void rvt_release_mmap_info(struct kref *ref)
 	kfree(ip);
 }
 
-static void rvt_vma_open(struct vm_area_struct *vma)
+static void rvt_vma_open(struct mm_area *vma)
 {
 	struct rvt_mmap_info *ip = vma->vm_private_data;
 
 	kref_get(&ip->ref);
 }
 
-static void rvt_vma_close(struct vm_area_struct *vma)
+static void rvt_vma_close(struct mm_area *vma)
 {
 	struct rvt_mmap_info *ip = vma->vm_private_data;
 
@@ -65,7 +65,7 @@ static const struct vm_operations_struct rvt_vm_ops = {
  *
  * Return: zero if the mmap is OK. Otherwise, return an errno.
  */
-int rvt_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+int rvt_mmap(struct ib_ucontext *context, struct mm_area *vma)
 {
 	struct rvt_dev_info *rdi = ib_to_rvt(context->device);
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
diff --git a/drivers/infiniband/sw/rdmavt/mmap.h b/drivers/infiniband/sw/rdmavt/mmap.h
index 29aaca3e8b83..7075597849cd 100644
--- a/drivers/infiniband/sw/rdmavt/mmap.h
+++ b/drivers/infiniband/sw/rdmavt/mmap.h
@@ -10,7 +10,7 @@
 
 void rvt_mmap_init(struct rvt_dev_info *rdi);
 void rvt_release_mmap_info(struct kref *ref);
-int rvt_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+int rvt_mmap(struct ib_ucontext *context, struct mm_area *vma);
 struct rvt_mmap_info *rvt_create_mmap_info(struct rvt_dev_info *rdi, u32 size,
 					   struct ib_udata *udata, void *obj);
 void rvt_update_mmap_info(struct rvt_dev_info *rdi, struct rvt_mmap_info *ip,
diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
index feb386d98d1d..3f40a7a141af 100644
--- a/drivers/infiniband/sw/rxe/rxe_loc.h
+++ b/drivers/infiniband/sw/rxe/rxe_loc.h
@@ -54,7 +54,7 @@ void rxe_mmap_release(struct kref *ref);
 struct rxe_mmap_info *rxe_create_mmap_info(struct rxe_dev *dev, u32 size,
 					   struct ib_udata *udata, void *obj);
 
-int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+int rxe_mmap(struct ib_ucontext *context, struct mm_area *vma);
 
 /* rxe_mr.c */
 u8 rxe_get_next_key(u32 last_key);
diff --git a/drivers/infiniband/sw/rxe/rxe_mmap.c b/drivers/infiniband/sw/rxe/rxe_mmap.c
index 6b7f2bd69879..2b478c3138b9 100644
--- a/drivers/infiniband/sw/rxe/rxe_mmap.c
+++ b/drivers/infiniband/sw/rxe/rxe_mmap.c
@@ -34,14 +34,14 @@ void rxe_mmap_release(struct kref *ref)
  * open and close keep track of how many times the memory region is mapped,
  * to avoid releasing it.
  */
-static void rxe_vma_open(struct vm_area_struct *vma)
+static void rxe_vma_open(struct mm_area *vma)
 {
 	struct rxe_mmap_info *ip = vma->vm_private_data;
 
 	kref_get(&ip->ref);
 }
 
-static void rxe_vma_close(struct vm_area_struct *vma)
+static void rxe_vma_close(struct mm_area *vma)
 {
 	struct rxe_mmap_info *ip = vma->vm_private_data;
 
@@ -59,7 +59,7 @@ static const struct vm_operations_struct rxe_vm_ops = {
  * @vma: the VMA to be initialized
  * Return zero if the mmap is OK. Otherwise, return an errno.
  */
-int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
+int rxe_mmap(struct ib_ucontext *context, struct mm_area *vma)
 {
 	struct rxe_dev *rxe = to_rdev(context->device);
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
index fd7b266a221b..e04bb047470d 100644
--- a/drivers/infiniband/sw/siw/siw_verbs.c
+++ b/drivers/infiniband/sw/siw/siw_verbs.c
@@ -51,7 +51,7 @@ void siw_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
 	kfree(entry);
 }
 
-int siw_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma)
+int siw_mmap(struct ib_ucontext *ctx, struct mm_area *vma)
 {
 	struct siw_ucontext *uctx = to_siw_ctx(ctx);
 	size_t size = vma->vm_end - vma->vm_start;
diff --git a/drivers/infiniband/sw/siw/siw_verbs.h b/drivers/infiniband/sw/siw/siw_verbs.h
index 1f1a305540af..0df2ef43317c 100644
--- a/drivers/infiniband/sw/siw/siw_verbs.h
+++ b/drivers/infiniband/sw/siw/siw_verbs.h
@@ -80,7 +80,7 @@ int siw_query_srq(struct ib_srq *base_srq, struct ib_srq_attr *attr);
 int siw_destroy_srq(struct ib_srq *base_srq, struct ib_udata *udata);
 int siw_post_srq_recv(struct ib_srq *base_srq, const struct ib_recv_wr *wr,
 		      const struct ib_recv_wr **bad_wr);
-int siw_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma);
+int siw_mmap(struct ib_ucontext *ctx, struct mm_area *vma);
 void siw_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
 void siw_qp_event(struct siw_qp *qp, enum ib_event_type type);
 void siw_cq_event(struct siw_cq *cq, enum ib_event_type type);
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 6054d0ab8023..44e86a5bf175 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1066,7 +1066,7 @@ void *iommu_dma_vmap_noncontiguous(struct device *dev, size_t size,
 	return vmap(sgt_handle(sgt)->pages, count, VM_MAP, PAGE_KERNEL);
 }
 
-int iommu_dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+int iommu_dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
 		size_t size, struct sg_table *sgt)
 {
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
@@ -1643,7 +1643,7 @@ void *iommu_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 	return cpu_addr;
 }
 
-int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+int iommu_dma_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index ab18bc494eef..9d70a137db53 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -209,7 +209,7 @@ static enum iommu_page_response_code
 iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm)
 {
 	vm_fault_t ret;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned int access_flags = 0;
 	unsigned int fault_flags = FAULT_FLAG_REMOTE;
 	struct iommu_fault_page_request *prm = &fault->prm;
diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
index 2df566f409b6..77bafec1433d 100644
--- a/drivers/media/common/videobuf2/videobuf2-core.c
+++ b/drivers/media/common/videobuf2/videobuf2-core.c
@@ -2496,7 +2496,7 @@ int vb2_core_expbuf(struct vb2_queue *q, int *fd, unsigned int type,
 }
 EXPORT_SYMBOL_GPL(vb2_core_expbuf);
 
-int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma)
+int vb2_mmap(struct vb2_queue *q, struct mm_area *vma)
 {
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
 	struct vb2_buffer *vb;
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
index a13ec569c82f..e038533f7541 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
@@ -271,7 +271,7 @@ static void *vb2_dc_alloc(struct vb2_buffer *vb,
 	return buf;
 }
 
-static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
+static int vb2_dc_mmap(void *buf_priv, struct mm_area *vma)
 {
 	struct vb2_dc_buf *buf = buf_priv;
 	int ret;
@@ -453,7 +453,7 @@ static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct iosys_map *map)
 }
 
 static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
-	struct vm_area_struct *vma)
+	struct mm_area *vma)
 {
 	return vb2_dc_mmap(dbuf->priv, vma);
 }
diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index c6ddf2357c58..78bc6dd98236 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -329,7 +329,7 @@ static unsigned int vb2_dma_sg_num_users(void *buf_priv)
 	return refcount_read(&buf->refcount);
 }
 
-static int vb2_dma_sg_mmap(void *buf_priv, struct vm_area_struct *vma)
+static int vb2_dma_sg_mmap(void *buf_priv, struct mm_area *vma)
 {
 	struct vb2_dma_sg_buf *buf = buf_priv;
 	int err;
@@ -501,7 +501,7 @@ static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf,
 }
 
 static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
-	struct vm_area_struct *vma)
+	struct mm_area *vma)
 {
 	return vb2_dma_sg_mmap(dbuf->priv, vma);
 }
diff --git a/drivers/media/common/videobuf2/videobuf2-memops.c b/drivers/media/common/videobuf2/videobuf2-memops.c
index f9a4ec44422e..3012d5b5c2d9 100644
--- a/drivers/media/common/videobuf2/videobuf2-memops.c
+++ b/drivers/media/common/videobuf2/videobuf2-memops.c
@@ -87,7 +87,7 @@ EXPORT_SYMBOL(vb2_destroy_framevec);
  * This function adds another user to the provided vma. It expects
  * struct vb2_vmarea_handler pointer in vma->vm_private_data.
  */
-static void vb2_common_vm_open(struct vm_area_struct *vma)
+static void vb2_common_vm_open(struct mm_area *vma)
 {
 	struct vb2_vmarea_handler *h = vma->vm_private_data;
 
@@ -105,7 +105,7 @@ static void vb2_common_vm_open(struct vm_area_struct *vma)
  * This function releases the user from the provided vma. It expects
  * struct vb2_vmarea_handler pointer in vma->vm_private_data.
  */
-static void vb2_common_vm_close(struct vm_area_struct *vma)
+static void vb2_common_vm_close(struct mm_area *vma)
 {
 	struct vb2_vmarea_handler *h = vma->vm_private_data;
 
diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
index 9201d854dbcc..73aa54baf3a0 100644
--- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
+++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
@@ -1141,7 +1141,7 @@ EXPORT_SYMBOL_GPL(vb2_ioctl_expbuf);
 
 /* v4l2_file_operations helpers */
 
-int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma)
+int vb2_fop_mmap(struct file *file, struct mm_area *vma)
 {
 	struct video_device *vdev = video_devdata(file);
 
diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
index 3f777068cd34..7f9526ab3e5a 100644
--- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
+++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
@@ -167,7 +167,7 @@ static unsigned int vb2_vmalloc_num_users(void *buf_priv)
 	return refcount_read(&buf->refcount);
 }
 
-static int vb2_vmalloc_mmap(void *buf_priv, struct vm_area_struct *vma)
+static int vb2_vmalloc_mmap(void *buf_priv, struct mm_area *vma)
 {
 	struct vb2_vmalloc_buf *buf = buf_priv;
 	int ret;
@@ -318,7 +318,7 @@ static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf,
 }
 
 static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
-	struct vm_area_struct *vma)
+	struct mm_area *vma)
 {
 	return vb2_vmalloc_mmap(dbuf->priv, vma);
 }
diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
index 6063782e937a..72eae59b0646 100644
--- a/drivers/media/dvb-core/dmxdev.c
+++ b/drivers/media/dvb-core/dmxdev.c
@@ -1212,7 +1212,7 @@ static __poll_t dvb_demux_poll(struct file *file, poll_table *wait)
 }
 
 #ifdef CONFIG_DVB_MMAP
-static int dvb_demux_mmap(struct file *file, struct vm_area_struct *vma)
+static int dvb_demux_mmap(struct file *file, struct mm_area *vma)
 {
 	struct dmxdev_filter *dmxdevfilter = file->private_data;
 	struct dmxdev *dmxdev = dmxdevfilter->dev;
@@ -1362,7 +1362,7 @@ static __poll_t dvb_dvr_poll(struct file *file, poll_table *wait)
 }
 
 #ifdef CONFIG_DVB_MMAP
-static int dvb_dvr_mmap(struct file *file, struct vm_area_struct *vma)
+static int dvb_dvr_mmap(struct file *file, struct mm_area *vma)
 {
 	struct dvb_device *dvbdev = file->private_data;
 	struct dmxdev *dmxdev = dvbdev->priv;
diff --git a/drivers/media/dvb-core/dvb_vb2.c b/drivers/media/dvb-core/dvb_vb2.c
index 29edaaff7a5c..8e6b7b0463e9 100644
--- a/drivers/media/dvb-core/dvb_vb2.c
+++ b/drivers/media/dvb-core/dvb_vb2.c
@@ -431,7 +431,7 @@ int dvb_vb2_dqbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
 	return 0;
 }
 
-int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct vm_area_struct *vma)
+int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct mm_area *vma)
 {
 	int ret;
 
diff --git a/drivers/media/pci/cx18/cx18-fileops.h b/drivers/media/pci/cx18/cx18-fileops.h
index 943057b83d94..be284bd28c53 100644
--- a/drivers/media/pci/cx18/cx18-fileops.h
+++ b/drivers/media/pci/cx18/cx18-fileops.h
@@ -19,7 +19,7 @@ int cx18_start_capture(struct cx18_open_id *id);
 void cx18_stop_capture(struct cx18_stream *s, int gop_end);
 void cx18_mute(struct cx18 *cx);
 void cx18_unmute(struct cx18 *cx);
-int cx18_v4l2_mmap(struct file *file, struct vm_area_struct *vma);
+int cx18_v4l2_mmap(struct file *file, struct mm_area *vma);
 void cx18_clear_queue(struct cx18_stream *s, enum vb2_buffer_state state);
 void cx18_vb_timeout(struct timer_list *t);
 
diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c
index 1ca60ca79dba..ffcd43703d6a 100644
--- a/drivers/media/pci/intel/ipu6/ipu6-dma.c
+++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c
@@ -294,7 +294,7 @@ void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
 }
 EXPORT_SYMBOL_NS_GPL(ipu6_dma_free, "INTEL_IPU6");
 
-int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
+int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct mm_area *vma,
 		  void *addr, dma_addr_t iova, size_t size,
 		  unsigned long attrs)
 {
diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h
index 2882850d9366..8c63e2883ebb 100644
--- a/drivers/media/pci/intel/ipu6/ipu6-dma.h
+++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h
@@ -30,7 +30,7 @@ void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
 		     unsigned long attrs);
 void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
 		   dma_addr_t dma_handle, unsigned long attrs);
-int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
+int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct mm_area *vma,
 		  void *addr, dma_addr_t iova, size_t size,
 		  unsigned long attrs);
 int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
diff --git a/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c b/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c
index 4bda1c369c44..8c35172b0e38 100644
--- a/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c
+++ b/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c
@@ -703,7 +703,7 @@ static __poll_t gsc_m2m_poll(struct file *file,
 	return ret;
 }
 
-static int gsc_m2m_mmap(struct file *file, struct vm_area_struct *vma)
+static int gsc_m2m_mmap(struct file *file, struct mm_area *vma)
 {
 	struct gsc_ctx *ctx = fh_to_ctx(file->private_data);
 	struct gsc_dev *gsc = ctx->gsc_dev;
diff --git a/drivers/media/platform/samsung/s3c-camif/camif-capture.c b/drivers/media/platform/samsung/s3c-camif/camif-capture.c
index bd1149e8abc2..5ee766d8c40e 100644
--- a/drivers/media/platform/samsung/s3c-camif/camif-capture.c
+++ b/drivers/media/platform/samsung/s3c-camif/camif-capture.c
@@ -604,7 +604,7 @@ static __poll_t s3c_camif_poll(struct file *file,
 	return ret;
 }
 
-static int s3c_camif_mmap(struct file *file, struct vm_area_struct *vma)
+static int s3c_camif_mmap(struct file *file, struct mm_area *vma)
 {
 	struct camif_vp *vp = video_drvdata(file);
 	int ret;
diff --git a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
index 5f80931f056d..81656e3f2c49 100644
--- a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
+++ b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
@@ -1062,7 +1062,7 @@ static __poll_t s5p_mfc_poll(struct file *file,
 }
 
 /* Mmap */
-static int s5p_mfc_mmap(struct file *file, struct vm_area_struct *vma)
+static int s5p_mfc_mmap(struct file *file, struct mm_area *vma)
 {
 	struct s5p_mfc_ctx *ctx = fh_to_ctx(file->private_data);
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
diff --git a/drivers/media/platform/ti/omap3isp/ispvideo.c b/drivers/media/platform/ti/omap3isp/ispvideo.c
index 5c9aa80023fd..ddab948fa88f 100644
--- a/drivers/media/platform/ti/omap3isp/ispvideo.c
+++ b/drivers/media/platform/ti/omap3isp/ispvideo.c
@@ -1401,7 +1401,7 @@ static __poll_t isp_video_poll(struct file *file, poll_table *wait)
 	return ret;
 }
 
-static int isp_video_mmap(struct file *file, struct vm_area_struct *vma)
+static int isp_video_mmap(struct file *file, struct mm_area *vma)
 {
 	struct isp_video_fh *vfh = to_isp_video_fh(file->private_data);
 
diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
index 2ee142621042..25642a2e8eec 100644
--- a/drivers/media/usb/uvc/uvc_queue.c
+++ b/drivers/media/usb/uvc/uvc_queue.c
@@ -346,7 +346,7 @@ int uvc_queue_streamoff(struct uvc_video_queue *queue, enum v4l2_buf_type type)
 	return ret;
 }
 
-int uvc_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma)
+int uvc_queue_mmap(struct uvc_video_queue *queue, struct mm_area *vma)
 {
 	return vb2_mmap(&queue->queue, vma);
 }
diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
index 39065db44e86..f73fd604a62d 100644
--- a/drivers/media/usb/uvc/uvc_v4l2.c
+++ b/drivers/media/usb/uvc/uvc_v4l2.c
@@ -1413,7 +1413,7 @@ static ssize_t uvc_v4l2_read(struct file *file, char __user *data,
 	return -EINVAL;
 }
 
-static int uvc_v4l2_mmap(struct file *file, struct vm_area_struct *vma)
+static int uvc_v4l2_mmap(struct file *file, struct mm_area *vma)
 {
 	struct uvc_fh *handle = file->private_data;
 	struct uvc_streaming *stream = handle->stream;
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index b4ee701835fc..a56e30f5a487 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -708,7 +708,7 @@ struct uvc_buffer *uvc_queue_next_buffer(struct uvc_video_queue *queue,
 struct uvc_buffer *uvc_queue_get_current_buffer(struct uvc_video_queue *queue);
 void uvc_queue_buffer_release(struct uvc_buffer *buf);
 int uvc_queue_mmap(struct uvc_video_queue *queue,
-		   struct vm_area_struct *vma);
+		   struct mm_area *vma);
 __poll_t uvc_queue_poll(struct uvc_video_queue *queue, struct file *file,
 			poll_table *wait);
 #ifndef CONFIG_MMU
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index b40c08ce909d..172f16bd0d79 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -392,7 +392,7 @@ static unsigned long v4l2_get_unmapped_area(struct file *filp,
 }
 #endif
 
-static int v4l2_mmap(struct file *filp, struct vm_area_struct *vm)
+static int v4l2_mmap(struct file *filp, struct mm_area *vm)
 {
 	struct video_device *vdev = video_devdata(filp);
 	int ret = -ENODEV;
diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
index eb22d6172462..219609e59ee1 100644
--- a/drivers/media/v4l2-core/v4l2-mem2mem.c
+++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
@@ -983,7 +983,7 @@ __poll_t v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
 EXPORT_SYMBOL_GPL(v4l2_m2m_poll);
 
 int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
-			 struct vm_area_struct *vma)
+			 struct mm_area *vma)
 {
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
 	struct vb2_queue *vq;
@@ -1615,7 +1615,7 @@ EXPORT_SYMBOL_GPL(v4l2_m2m_ioctl_stateless_decoder_cmd);
  * for the output and the capture buffer queue.
  */
 
-int v4l2_m2m_fop_mmap(struct file *file, struct vm_area_struct *vma)
+int v4l2_m2m_fop_mmap(struct file *file, struct mm_area *vma)
 {
 	struct v4l2_fh *fh = file->private_data;
 
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index d4a96137728d..5742434e1178 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -1201,7 +1201,7 @@ static long bcm_vk_reset(struct bcm_vk *vk, struct vk_reset __user *arg)
 	return ret;
 }
 
-static int bcm_vk_mmap(struct file *file, struct vm_area_struct *vma)
+static int bcm_vk_mmap(struct file *file, struct mm_area *vma)
 {
 	struct bcm_vk_ctx *ctx = file->private_data;
 	struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index 7b7a22c91fe4..e8c4ed8aea52 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -731,7 +731,7 @@ static int fastrpc_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
 }
 
 static int fastrpc_mmap(struct dma_buf *dmabuf,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	struct fastrpc_buf *buf = dmabuf->priv;
 	size_t size = vma->vm_end - vma->vm_start;
@@ -984,7 +984,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
 			continue;
 
 		if (ctx->maps[i]) {
-			struct vm_area_struct *vma = NULL;
+			struct mm_area *vma = NULL;
 
 			rpra[i].buf.pv = (u64) ctx->args[i].ptr;
 			pages[i].addr = ctx->maps[i]->phys;
diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
index 4441aca2280a..acff9681d657 100644
--- a/drivers/misc/genwqe/card_dev.c
+++ b/drivers/misc/genwqe/card_dev.c
@@ -376,7 +376,7 @@ static int genwqe_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static void genwqe_vma_open(struct vm_area_struct *vma)
+static void genwqe_vma_open(struct mm_area *vma)
 {
 	/* nothing ... */
 }
@@ -387,7 +387,7 @@ static void genwqe_vma_open(struct vm_area_struct *vma)
  *
  * Free memory which got allocated by GenWQE mmap().
  */
-static void genwqe_vma_close(struct vm_area_struct *vma)
+static void genwqe_vma_close(struct mm_area *vma)
 {
 	unsigned long vsize = vma->vm_end - vma->vm_start;
 	struct inode *inode = file_inode(vma->vm_file);
@@ -432,7 +432,7 @@ static const struct vm_operations_struct genwqe_vma_ops = {
  * plain buffer, we lookup our dma_mapping list to find the
  * corresponding DMA address for the associated user-space address.
  */
-static int genwqe_mmap(struct file *filp, struct vm_area_struct *vma)
+static int genwqe_mmap(struct file *filp, struct mm_area *vma)
 {
 	int rc;
 	unsigned long pfn, vsize = vma->vm_end - vma->vm_start;
diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
index cded7d1caf32..da4b82b2c938 100644
--- a/drivers/misc/ocxl/context.c
+++ b/drivers/misc/ocxl/context.c
@@ -95,7 +95,7 @@ int ocxl_context_attach(struct ocxl_context *ctx, u64 amr, struct mm_struct *mm)
 }
 EXPORT_SYMBOL_GPL(ocxl_context_attach);
 
-static vm_fault_t map_afu_irq(struct vm_area_struct *vma, unsigned long address,
+static vm_fault_t map_afu_irq(struct mm_area *vma, unsigned long address,
 		u64 offset, struct ocxl_context *ctx)
 {
 	u64 trigger_addr;
@@ -108,7 +108,7 @@ static vm_fault_t map_afu_irq(struct vm_area_struct *vma, unsigned long address,
 	return vmf_insert_pfn(vma, address, trigger_addr >> PAGE_SHIFT);
 }
 
-static vm_fault_t map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
+static vm_fault_t map_pp_mmio(struct mm_area *vma, unsigned long address,
 		u64 offset, struct ocxl_context *ctx)
 {
 	u64 pp_mmio_addr;
@@ -138,7 +138,7 @@ static vm_fault_t map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
 
 static vm_fault_t ocxl_mmap_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ocxl_context *ctx = vma->vm_file->private_data;
 	u64 offset;
 	vm_fault_t ret;
@@ -159,7 +159,7 @@ static const struct vm_operations_struct ocxl_vmops = {
 };
 
 static int check_mmap_afu_irq(struct ocxl_context *ctx,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	int irq_id = ocxl_irq_offset_to_id(ctx, vma->vm_pgoff << PAGE_SHIFT);
 
@@ -185,7 +185,7 @@ static int check_mmap_afu_irq(struct ocxl_context *ctx,
 }
 
 static int check_mmap_mmio(struct ocxl_context *ctx,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	if ((vma_pages(vma) + vma->vm_pgoff) >
 		(ctx->afu->config.pp_mmio_stride >> PAGE_SHIFT))
@@ -193,7 +193,7 @@ static int check_mmap_mmio(struct ocxl_context *ctx,
 	return 0;
 }
 
-int ocxl_context_mmap(struct ocxl_context *ctx, struct vm_area_struct *vma)
+int ocxl_context_mmap(struct ocxl_context *ctx, struct mm_area *vma)
 {
 	int rc;
 
diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
index 7eb74711ac96..68ce28450ac8 100644
--- a/drivers/misc/ocxl/file.c
+++ b/drivers/misc/ocxl/file.c
@@ -289,7 +289,7 @@ static long afu_compat_ioctl(struct file *file, unsigned int cmd,
 	return afu_ioctl(file, cmd, args);
 }
 
-static int afu_mmap(struct file *file, struct vm_area_struct *vma)
+static int afu_mmap(struct file *file, struct mm_area *vma)
 {
 	struct ocxl_context *ctx = file->private_data;
 
diff --git a/drivers/misc/ocxl/ocxl_internal.h b/drivers/misc/ocxl/ocxl_internal.h
index d2028d6c6f08..4008b894d983 100644
--- a/drivers/misc/ocxl/ocxl_internal.h
+++ b/drivers/misc/ocxl/ocxl_internal.h
@@ -139,7 +139,7 @@ int ocxl_config_check_afu_index(struct pci_dev *dev,
 int ocxl_link_update_pe(void *link_handle, int pasid, __u16 tid);
 
 int ocxl_context_mmap(struct ocxl_context *ctx,
-			struct vm_area_struct *vma);
+			struct mm_area *vma);
 void ocxl_context_detach_all(struct ocxl_afu *afu);
 
 int ocxl_sysfs_register_afu(struct ocxl_file_info *info);
diff --git a/drivers/misc/ocxl/sysfs.c b/drivers/misc/ocxl/sysfs.c
index e849641687a0..2ba0dc539358 100644
--- a/drivers/misc/ocxl/sysfs.c
+++ b/drivers/misc/ocxl/sysfs.c
@@ -108,7 +108,7 @@ static ssize_t global_mmio_read(struct file *filp, struct kobject *kobj,
 
 static vm_fault_t global_mmio_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct ocxl_afu *afu = vma->vm_private_data;
 	unsigned long offset;
 
@@ -126,7 +126,7 @@ static const struct vm_operations_struct global_mmio_vmops = {
 
 static int global_mmio_mmap(struct file *filp, struct kobject *kobj,
 			const struct bin_attribute *bin_attr,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	struct ocxl_afu *afu = to_afu(kobj_to_dev(kobj));
 
diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c
index 24c29e0f00ef..d763a0bd0c8a 100644
--- a/drivers/misc/open-dice.c
+++ b/drivers/misc/open-dice.c
@@ -86,7 +86,7 @@ static ssize_t open_dice_write(struct file *filp, const char __user *ptr,
 /*
  * Creates a mapping of the reserved memory region in user address space.
  */
-static int open_dice_mmap(struct file *filp, struct vm_area_struct *vma)
+static int open_dice_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct open_dice_drvdata *drvdata = to_open_dice_drvdata(filp);
 
diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
index 3557d78ee47a..a97dde2c3775 100644
--- a/drivers/misc/sgi-gru/grufault.c
+++ b/drivers/misc/sgi-gru/grufault.c
@@ -45,9 +45,9 @@ static inline int is_gru_paddr(unsigned long paddr)
 /*
  * Find the vma of a GRU segment. Caller must hold mmap_lock.
  */
-struct vm_area_struct *gru_find_vma(unsigned long vaddr)
+struct mm_area *gru_find_vma(unsigned long vaddr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = vma_lookup(current->mm, vaddr);
 	if (vma && vma->vm_ops == &gru_vm_ops)
@@ -66,7 +66,7 @@ struct vm_area_struct *gru_find_vma(unsigned long vaddr)
 static struct gru_thread_state *gru_find_lock_gts(unsigned long vaddr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct gru_thread_state *gts = NULL;
 
 	mmap_read_lock(mm);
@@ -83,7 +83,7 @@ static struct gru_thread_state *gru_find_lock_gts(unsigned long vaddr)
 static struct gru_thread_state *gru_alloc_locked_gts(unsigned long vaddr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct gru_thread_state *gts = ERR_PTR(-EINVAL);
 
 	mmap_write_lock(mm);
@@ -174,7 +174,7 @@ static void get_clear_fault_map(struct gru_state *gru,
  * 		< 0 - error code
  * 		  1 - (atomic only) try again in non-atomic context
  */
-static int non_atomic_pte_lookup(struct vm_area_struct *vma,
+static int non_atomic_pte_lookup(struct mm_area *vma,
 				 unsigned long vaddr, int write,
 				 unsigned long *paddr, int *pageshift)
 {
@@ -202,7 +202,7 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
  * NOTE: mmap_lock is already held on entry to this function. This
  * guarantees existence of the page tables.
  */
-static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
+static int atomic_pte_lookup(struct mm_area *vma, unsigned long vaddr,
 	int write, unsigned long *paddr, int *pageshift)
 {
 	pgd_t *pgdp;
@@ -253,7 +253,7 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
 		    int write, int atomic, unsigned long *gpa, int *pageshift)
 {
 	struct mm_struct *mm = gts->ts_mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long paddr;
 	int ret, ps;
 
diff --git a/drivers/misc/sgi-gru/grufile.c b/drivers/misc/sgi-gru/grufile.c
index e755690c9805..b831fdb27841 100644
--- a/drivers/misc/sgi-gru/grufile.c
+++ b/drivers/misc/sgi-gru/grufile.c
@@ -58,7 +58,7 @@ static int gru_supported(void)
  * Called when unmapping a device mapping. Frees all gru resources
  * and tables belonging to the vma.
  */
-static void gru_vma_close(struct vm_area_struct *vma)
+static void gru_vma_close(struct mm_area *vma)
 {
 	struct gru_vma_data *vdata;
 	struct gru_thread_state *gts;
@@ -92,7 +92,7 @@ static void gru_vma_close(struct vm_area_struct *vma)
  * and private data structure necessary to allocate, track, and free the
  * underlying pages.
  */
-static int gru_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int gru_file_mmap(struct file *file, struct mm_area *vma)
 {
 	if ((vma->vm_flags & (VM_SHARED | VM_WRITE)) != (VM_SHARED | VM_WRITE))
 		return -EPERM;
@@ -121,7 +121,7 @@ static int gru_file_mmap(struct file *file, struct vm_area_struct *vma)
 static int gru_create_new_context(unsigned long arg)
 {
 	struct gru_create_context_req req;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct gru_vma_data *vdata;
 	int ret = -EINVAL;
 
diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
index 3036c15f3689..96374726d7e6 100644
--- a/drivers/misc/sgi-gru/grumain.c
+++ b/drivers/misc/sgi-gru/grumain.c
@@ -303,7 +303,7 @@ static struct gru_thread_state *gru_find_current_gts_nolock(struct gru_vma_data
 /*
  * Allocate a thread state structure.
  */
-struct gru_thread_state *gru_alloc_gts(struct vm_area_struct *vma,
+struct gru_thread_state *gru_alloc_gts(struct mm_area *vma,
 		int cbr_au_count, int dsr_au_count,
 		unsigned char tlb_preload_count, int options, int tsid)
 {
@@ -352,7 +352,7 @@ struct gru_thread_state *gru_alloc_gts(struct vm_area_struct *vma,
 /*
  * Allocate a vma private data structure.
  */
-struct gru_vma_data *gru_alloc_vma_data(struct vm_area_struct *vma, int tsid)
+struct gru_vma_data *gru_alloc_vma_data(struct mm_area *vma, int tsid)
 {
 	struct gru_vma_data *vdata = NULL;
 
@@ -370,7 +370,7 @@ struct gru_vma_data *gru_alloc_vma_data(struct vm_area_struct *vma, int tsid)
 /*
  * Find the thread state structure for the current thread.
  */
-struct gru_thread_state *gru_find_thread_state(struct vm_area_struct *vma,
+struct gru_thread_state *gru_find_thread_state(struct mm_area *vma,
 					int tsid)
 {
 	struct gru_vma_data *vdata = vma->vm_private_data;
@@ -387,7 +387,7 @@ struct gru_thread_state *gru_find_thread_state(struct vm_area_struct *vma,
  * Allocate a new thread state for a GSEG. Note that races may allow
  * another thread to race to create a gts.
  */
-struct gru_thread_state *gru_alloc_thread_state(struct vm_area_struct *vma,
+struct gru_thread_state *gru_alloc_thread_state(struct mm_area *vma,
 					int tsid)
 {
 	struct gru_vma_data *vdata = vma->vm_private_data;
@@ -920,7 +920,7 @@ struct gru_state *gru_assign_gru_context(struct gru_thread_state *gts)
  */
 vm_fault_t gru_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct gru_thread_state *gts;
 	unsigned long paddr, vaddr;
 	unsigned long expires;
diff --git a/drivers/misc/sgi-gru/grutables.h b/drivers/misc/sgi-gru/grutables.h
index 640daf1994df..cd0756f1e7c4 100644
--- a/drivers/misc/sgi-gru/grutables.h
+++ b/drivers/misc/sgi-gru/grutables.h
@@ -337,7 +337,7 @@ struct gru_thread_state {
 	struct mutex		ts_ctxlock;	/* load/unload CTX lock */
 	struct mm_struct	*ts_mm;		/* mm currently mapped to
 						   context */
-	struct vm_area_struct	*ts_vma;	/* vma of GRU context */
+	struct mm_area		*ts_vma;	/* vma of GRU context */
 	struct gru_state	*ts_gru;	/* GRU where the context is
 						   loaded */
 	struct gru_mm_struct	*ts_gms;	/* asid & ioproc struct */
@@ -607,11 +607,11 @@ struct gru_unload_context_req;
 extern const struct vm_operations_struct gru_vm_ops;
 extern struct device *grudev;
 
-extern struct gru_vma_data *gru_alloc_vma_data(struct vm_area_struct *vma,
+extern struct gru_vma_data *gru_alloc_vma_data(struct mm_area *vma,
 				int tsid);
-extern struct gru_thread_state *gru_find_thread_state(struct vm_area_struct
+extern struct gru_thread_state *gru_find_thread_state(struct mm_area
 				*vma, int tsid);
-extern struct gru_thread_state *gru_alloc_thread_state(struct vm_area_struct
+extern struct gru_thread_state *gru_alloc_thread_state(struct mm_area
 				*vma, int tsid);
 extern struct gru_state *gru_assign_gru_context(struct gru_thread_state *gts);
 extern void gru_load_context(struct gru_thread_state *gts);
@@ -634,12 +634,12 @@ extern int gru_get_exception_detail(unsigned long arg);
 extern int gru_set_context_option(unsigned long address);
 extern int gru_check_context_placement(struct gru_thread_state *gts);
 extern int gru_cpu_fault_map_id(void);
-extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
+extern struct mm_area *gru_find_vma(unsigned long vaddr);
 extern void gru_flush_all_tlb(struct gru_state *gru);
 extern int gru_proc_init(void);
 extern void gru_proc_exit(void);
 
-extern struct gru_thread_state *gru_alloc_gts(struct vm_area_struct *vma,
+extern struct gru_thread_state *gru_alloc_gts(struct mm_area *vma,
 		int cbr_au_count, int dsr_au_count,
 		unsigned char tlb_preload_count, int options, int tsid);
 extern unsigned long gru_reserve_cb_resources(struct gru_state *gru,
diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
index bdc2e6fda782..316f5f5af318 100644
--- a/drivers/misc/uacce/uacce.c
+++ b/drivers/misc/uacce/uacce.c
@@ -200,7 +200,7 @@ static int uacce_fops_release(struct inode *inode, struct file *filep)
 	return 0;
 }
 
-static void uacce_vma_close(struct vm_area_struct *vma)
+static void uacce_vma_close(struct mm_area *vma)
 {
 	struct uacce_queue *q = vma->vm_private_data;
 
@@ -218,7 +218,7 @@ static const struct vm_operations_struct uacce_vm_ops = {
 	.close = uacce_vma_close,
 };
 
-static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+static int uacce_fops_mmap(struct file *filep, struct mm_area *vma)
 {
 	struct uacce_queue *q = filep->private_data;
 	struct uacce_device *uacce = q->uacce;
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 8dc4f5c493fc..389461af2b3e 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -1374,7 +1374,7 @@ static unsigned mtdchar_mmap_capabilities(struct file *file)
 /*
  * set up a mapping for shared memory segments
  */
-static int mtdchar_mmap(struct file *file, struct vm_area_struct *vma)
+static int mtdchar_mmap(struct file *file, struct mm_area *vma)
 {
 #ifdef CONFIG_MMU
 	struct mtd_file_info *mfi = file->private_data;
diff --git a/drivers/pci/mmap.c b/drivers/pci/mmap.c
index 8da3347a95c4..183568aa7b8c 100644
--- a/drivers/pci/mmap.c
+++ b/drivers/pci/mmap.c
@@ -22,7 +22,7 @@ static const struct vm_operations_struct pci_phys_vm_ops = {
 };
 
 int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
-			    struct vm_area_struct *vma,
+			    struct mm_area *vma,
 			    enum pci_mmap_state mmap_state, int write_combine)
 {
 	unsigned long size;
@@ -56,7 +56,7 @@ int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
 #if (defined(CONFIG_SYSFS) || defined(CONFIG_PROC_FS)) && \
     (defined(HAVE_PCI_MMAP) || defined(ARCH_GENERIC_PCI_MMAP_RESOURCE))
 
-int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vma,
+int pci_mmap_fits(struct pci_dev *pdev, int resno, struct mm_area *vma,
 		  enum pci_mmap_api mmap_api)
 {
 	resource_size_t pci_start = 0, pci_end;
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 19214ec81fbb..ba40bd4cb2a1 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -90,7 +90,7 @@ static ssize_t published_show(struct device *dev, struct device_attribute *attr,
 static DEVICE_ATTR_RO(published);
 
 static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
-		const struct bin_attribute *attr, struct vm_area_struct *vma)
+		const struct bin_attribute *attr, struct mm_area *vma)
 {
 	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
 	size_t len = vma->vm_end - vma->vm_start;
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index c6cda56ca52c..4ceec1061fe5 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -930,7 +930,7 @@ static ssize_t pci_write_legacy_io(struct file *filp, struct kobject *kobj,
  * @filp: open sysfs file
  * @kobj: kobject corresponding to device to be mapped
  * @attr: struct bin_attribute for this file
- * @vma: struct vm_area_struct passed to mmap
+ * @vma: struct mm_area passed to mmap
  *
  * Uses an arch specific callback, pci_mmap_legacy_mem_page_range, to mmap
  * legacy memory space (first meg of bus space) into application virtual
@@ -938,7 +938,7 @@ static ssize_t pci_write_legacy_io(struct file *filp, struct kobject *kobj,
  */
 static int pci_mmap_legacy_mem(struct file *filp, struct kobject *kobj,
 			       const struct bin_attribute *attr,
-			       struct vm_area_struct *vma)
+			       struct mm_area *vma)
 {
 	struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj));
 
@@ -950,7 +950,7 @@ static int pci_mmap_legacy_mem(struct file *filp, struct kobject *kobj,
  * @filp: open sysfs file
  * @kobj: kobject corresponding to device to be mapped
  * @attr: struct bin_attribute for this file
- * @vma: struct vm_area_struct passed to mmap
+ * @vma: struct mm_area passed to mmap
  *
  * Uses an arch specific callback, pci_mmap_legacy_io_page_range, to mmap
  * legacy IO space (first meg of bus space) into application virtual
@@ -958,7 +958,7 @@ static int pci_mmap_legacy_mem(struct file *filp, struct kobject *kobj,
  */
 static int pci_mmap_legacy_io(struct file *filp, struct kobject *kobj,
 			      const struct bin_attribute *attr,
-			      struct vm_area_struct *vma)
+			      struct mm_area *vma)
 {
 	struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj));
 
@@ -1056,13 +1056,13 @@ void pci_remove_legacy_files(struct pci_bus *b)
  * pci_mmap_resource - map a PCI resource into user memory space
  * @kobj: kobject for mapping
  * @attr: struct bin_attribute for the file being mapped
- * @vma: struct vm_area_struct passed into the mmap
+ * @vma: struct mm_area passed into the mmap
  * @write_combine: 1 for write_combine mapping
  *
  * Use the regular PCI mapping routines to map a PCI resource into userspace.
  */
 static int pci_mmap_resource(struct kobject *kobj, const struct bin_attribute *attr,
-			     struct vm_area_struct *vma, int write_combine)
+			     struct mm_area *vma, int write_combine)
 {
 	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
 	int bar = (unsigned long)attr->private;
@@ -1087,14 +1087,14 @@ static int pci_mmap_resource(struct kobject *kobj, const struct bin_attribute *a
 
 static int pci_mmap_resource_uc(struct file *filp, struct kobject *kobj,
 				const struct bin_attribute *attr,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	return pci_mmap_resource(kobj, attr, vma, 0);
 }
 
 static int pci_mmap_resource_wc(struct file *filp, struct kobject *kobj,
 				const struct bin_attribute *attr,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	return pci_mmap_resource(kobj, attr, vma, 1);
 }
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index b81e99cd4b62..3595cd20c401 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -99,7 +99,7 @@ enum pci_mmap_api {
 	PCI_MMAP_SYSFS,	/* mmap on /sys/bus/pci/devices/<BDF>/resource<N> */
 	PCI_MMAP_PROCFS	/* mmap on /proc/bus/pci/<BDF> */
 };
-int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vmai,
+int pci_mmap_fits(struct pci_dev *pdev, int resno, struct mm_area *vmai,
 		  enum pci_mmap_api mmap_api);
 
 bool pci_reset_supported(struct pci_dev *dev);
diff --git a/drivers/pci/proc.c b/drivers/pci/proc.c
index 9348a0fb8084..bb9b1a16c6b4 100644
--- a/drivers/pci/proc.c
+++ b/drivers/pci/proc.c
@@ -240,7 +240,7 @@ static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
 }
 
 #ifdef HAVE_PCI_MMAP
-static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
+static int proc_bus_pci_mmap(struct file *file, struct mm_area *vma)
 {
 	struct pci_dev *dev = pde_data(file_inode(file));
 	struct pci_filp_private *fpriv = file->private_data;
diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
index 7233b654bbad..1757c1109a16 100644
--- a/drivers/platform/x86/intel/pmt/class.c
+++ b/drivers/platform/x86/intel/pmt/class.c
@@ -105,7 +105,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
 
 static int
 intel_pmt_mmap(struct file *filp, struct kobject *kobj,
-		const struct bin_attribute *attr, struct vm_area_struct *vma)
+		const struct bin_attribute *attr, struct mm_area *vma)
 {
 	struct intel_pmt_entry *entry = container_of(attr,
 						     struct intel_pmt_entry,
diff --git a/drivers/ptp/ptp_vmclock.c b/drivers/ptp/ptp_vmclock.c
index b3a83b03d9c1..b1dddbc99ce7 100644
--- a/drivers/ptp/ptp_vmclock.c
+++ b/drivers/ptp/ptp_vmclock.c
@@ -357,7 +357,7 @@ static struct ptp_clock *vmclock_ptp_register(struct device *dev,
 	return ptp_clock_register(&st->ptp_clock_info, dev);
 }
 
-static int vmclock_miscdev_mmap(struct file *fp, struct vm_area_struct *vma)
+static int vmclock_miscdev_mmap(struct file *fp, struct mm_area *vma)
 {
 	struct vmclock_state *st = container_of(fp->private_data,
 						struct vmclock_state, miscdev);
diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
index cbf531d0ba68..e6f7cd47e550 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -2173,7 +2173,7 @@ static void mport_release_mapping(struct kref *ref)
 	kfree(map);
 }
 
-static void mport_mm_open(struct vm_area_struct *vma)
+static void mport_mm_open(struct mm_area *vma)
 {
 	struct rio_mport_mapping *map = vma->vm_private_data;
 
@@ -2181,7 +2181,7 @@ static void mport_mm_open(struct vm_area_struct *vma)
 	kref_get(&map->ref);
 }
 
-static void mport_mm_close(struct vm_area_struct *vma)
+static void mport_mm_close(struct mm_area *vma)
 {
 	struct rio_mport_mapping *map = vma->vm_private_data;
 
@@ -2196,7 +2196,7 @@ static const struct vm_operations_struct vm_ops = {
 	.close = mport_mm_close,
 };
 
-static int mport_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
+static int mport_cdev_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct mport_cdev_priv *priv = filp->private_data;
 	struct mport_dev *md;
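
The rio_mport hunks show the usual open/close refcounting dance, which this
rename touches in dozens of drivers below.  Note that vm_operations_struct
keeps its name; only the VMA type changes.  A self-contained sketch of the
pattern (my_mapping, my_release and friends are invented):

#include <linux/kref.h>
#include <linux/mm.h>
#include <linux/slab.h>

struct my_mapping {
	struct kref ref;
	/* ... backing pages, offsets ... */
};

static void my_release(struct kref *ref)
{
	kfree(container_of(ref, struct my_mapping, ref));
}

static void my_vm_open(struct mm_area *vma)
{
	struct my_mapping *map = vma->vm_private_data;

	/* fork() duplicates the VMA, so each copy takes a reference */
	kref_get(&map->ref);
}

static void my_vm_close(struct mm_area *vma)
{
	struct my_mapping *map = vma->vm_private_data;

	kref_put(&map->ref, my_release);
}

static const struct vm_operations_struct my_vm_ops = {
	.open  = my_vm_open,
	.close = my_vm_close,
};
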
diff --git a/drivers/sbus/char/flash.c b/drivers/sbus/char/flash.c
index 6524a4a19109..20e2687a4cc7 100644
--- a/drivers/sbus/char/flash.c
+++ b/drivers/sbus/char/flash.c
@@ -31,7 +31,7 @@ static struct {
 } flash;
 
 static int
-flash_mmap(struct file *file, struct vm_area_struct *vma)
+flash_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned long addr;
 	unsigned long size;
diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c
index a536dd6f4f7c..151f9f99565f 100644
--- a/drivers/sbus/char/oradax.c
+++ b/drivers/sbus/char/oradax.c
@@ -208,7 +208,7 @@ static ssize_t dax_read(struct file *filp, char __user *buf,
 			size_t count, loff_t *ppos);
 static ssize_t dax_write(struct file *filp, const char __user *buf,
 			 size_t count, loff_t *ppos);
-static int dax_devmap(struct file *f, struct vm_area_struct *vma);
+static int dax_devmap(struct file *f, struct mm_area *vma);
 static int dax_close(struct inode *i, struct file *f);
 
 static const struct file_operations dax_fops = {
@@ -368,7 +368,7 @@ static void __exit dax_detach(void)
 module_exit(dax_detach);
 
 /* map completion area */
-static int dax_devmap(struct file *f, struct vm_area_struct *vma)
+static int dax_devmap(struct file *f, struct mm_area *vma)
 {
 	struct dax_ctx *ctx = (struct dax_ctx *)f->private_data;
 	size_t len = vma->vm_end - vma->vm_start;
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index effb7e768165..a20fc2341c3c 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -1214,7 +1214,7 @@ sg_fasync(int fd, struct file *filp, int mode)
 static vm_fault_t
 sg_vma_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	Sg_fd *sfp;
 	unsigned long offset, len, sa;
 	Sg_scatter_hold *rsv_schp;
@@ -1253,7 +1253,7 @@ static const struct vm_operations_struct sg_mmap_vm_ops = {
 };
 
 static int
-sg_mmap(struct file *filp, struct vm_area_struct *vma)
+sg_mmap(struct file *filp, struct mm_area *vma)
 {
 	Sg_fd *sfp;
 	unsigned long req_sz, len, sa;
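
sg_vma_fault() above is representative of every .fault handler in this
patch: the faulting VMA now arrives as vmf->vma with the new type, and
nothing else moves.  A minimal sketch (my_dev and its page array are
hypothetical):

struct my_dev {
	struct page **pages;
	unsigned long nr_pages;
};

static vm_fault_t my_fault(struct vm_fault *vmf)
{
	struct mm_area *vma = vmf->vma;
	struct my_dev *dev = vma->vm_private_data;

	if (vmf->pgoff >= dev->nr_pages)
		return VM_FAULT_SIGBUS;

	get_page(dev->pages[vmf->pgoff]);
	vmf->page = dev->pages[vmf->pgoff]; /* core MM maps it and consumes the ref */
	return 0;
}
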
diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
index ee58151bd69e..9a64d76880a9 100644
--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
+++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
@@ -46,7 +46,7 @@ static struct aspeed_lpc_ctrl *file_aspeed_lpc_ctrl(struct file *file)
 			miscdev);
 }
 
-static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
+static int aspeed_lpc_ctrl_mmap(struct file *file, struct mm_area *vma)
 {
 	struct aspeed_lpc_ctrl *lpc_ctrl = file_aspeed_lpc_ctrl(file);
 	unsigned long vsize = vma->vm_end - vma->vm_start;
diff --git a/drivers/soc/aspeed/aspeed-p2a-ctrl.c b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
index 6cc943744e12..8ad07f33f25c 100644
--- a/drivers/soc/aspeed/aspeed-p2a-ctrl.c
+++ b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
@@ -97,7 +97,7 @@ static void aspeed_p2a_disable_bridge(struct aspeed_p2a_ctrl *p2a_ctrl)
 	regmap_update_bits(p2a_ctrl->regmap, SCU180, SCU180_ENP2A, 0);
 }
 
-static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma)
+static int aspeed_p2a_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned long vsize;
 	pgprot_t prot;
diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
index 1b32469f2789..f07526023635 100644
--- a/drivers/soc/qcom/rmtfs_mem.c
+++ b/drivers/soc/qcom/rmtfs_mem.c
@@ -129,7 +129,7 @@ static const struct class rmtfs_class = {
 	.name           = "rmtfs",
 };
 
-static int qcom_rmtfs_mem_mmap(struct file *filep, struct vm_area_struct *vma)
+static int qcom_rmtfs_mem_mmap(struct file *filep, struct mm_area *vma)
 {
 	struct qcom_rmtfs_mem *rmtfs_mem = filep->private_data;
 
diff --git a/drivers/staging/media/atomisp/include/hmm/hmm.h b/drivers/staging/media/atomisp/include/hmm/hmm.h
index a7aef27f54de..6c20072ca7e0 100644
--- a/drivers/staging/media/atomisp/include/hmm/hmm.h
+++ b/drivers/staging/media/atomisp/include/hmm/hmm.h
@@ -63,7 +63,7 @@ void hmm_flush_vmap(ia_css_ptr virt);
  * virt must be the start address of ISP memory (return by hmm_alloc),
  * do not pass any other address.
  */
-int hmm_mmap(struct vm_area_struct *vma, ia_css_ptr virt);
+int hmm_mmap(struct mm_area *vma, ia_css_ptr virt);
 
 extern struct hmm_bo_device bo_device;
 
diff --git a/drivers/staging/media/atomisp/include/hmm/hmm_bo.h b/drivers/staging/media/atomisp/include/hmm/hmm_bo.h
index e09ac29ac43d..9546a39e747b 100644
--- a/drivers/staging/media/atomisp/include/hmm/hmm_bo.h
+++ b/drivers/staging/media/atomisp/include/hmm/hmm_bo.h
@@ -232,7 +232,7 @@ void hmm_bo_vunmap(struct hmm_buffer_object *bo);
  *
  * vma->vm_flags will be set to (VM_RESERVED | VM_IO).
  */
-int hmm_bo_mmap(struct vm_area_struct *vma,
+int hmm_bo_mmap(struct mm_area *vma,
 		struct hmm_buffer_object *bo);
 
 /*
diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
index 84102c3aaf97..64712310f850 100644
--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
+++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
@@ -522,7 +522,7 @@ phys_addr_t hmm_virt_to_phys(ia_css_ptr virt)
 	return page_to_phys(bo->pages[idx]) + offset;
 }
 
-int hmm_mmap(struct vm_area_struct *vma, ia_css_ptr virt)
+int hmm_mmap(struct mm_area *vma, ia_css_ptr virt)
 {
 	struct hmm_buffer_object *bo;
 
diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
index 224ca8d42721..15c48650d883 100644
--- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
+++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
@@ -974,7 +974,7 @@ void hmm_bo_unref(struct hmm_buffer_object *bo)
 	kref_put(&bo->kref, kref_hmm_bo_release);
 }
 
-static void hmm_bo_vm_open(struct vm_area_struct *vma)
+static void hmm_bo_vm_open(struct mm_area *vma)
 {
 	struct hmm_buffer_object *bo =
 	    (struct hmm_buffer_object *)vma->vm_private_data;
@@ -992,7 +992,7 @@ static void hmm_bo_vm_open(struct vm_area_struct *vma)
 	mutex_unlock(&bo->mutex);
 }
 
-static void hmm_bo_vm_close(struct vm_area_struct *vma)
+static void hmm_bo_vm_close(struct mm_area *vma)
 {
 	struct hmm_buffer_object *bo =
 	    (struct hmm_buffer_object *)vma->vm_private_data;
@@ -1021,7 +1021,7 @@ static const struct vm_operations_struct hmm_bo_vm_ops = {
 /*
  * mmap the bo to user space.
  */
-int hmm_bo_mmap(struct vm_area_struct *vma, struct hmm_buffer_object *bo)
+int hmm_bo_mmap(struct mm_area *vma, struct hmm_buffer_object *bo)
 {
 	unsigned int start, end;
 	unsigned int virt;
diff --git a/drivers/staging/vme_user/vme.c b/drivers/staging/vme_user/vme.c
index 42304c9f83a2..ed589a97da4f 100644
--- a/drivers/staging/vme_user/vme.c
+++ b/drivers/staging/vme_user/vme.c
@@ -745,7 +745,7 @@ EXPORT_SYMBOL(vme_master_rmw);
  *         resource or -EFAULT if map exceeds window size. Other generic mmap
  *         errors may also be returned.
  */
-int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma)
+int vme_master_mmap(struct vme_resource *resource, struct mm_area *vma)
 {
 	struct vme_bridge *bridge = find_bridge(resource);
 	struct vme_master_resource *image;
diff --git a/drivers/staging/vme_user/vme.h b/drivers/staging/vme_user/vme.h
index 7753e736f9fd..a1505b68907f 100644
--- a/drivers/staging/vme_user/vme.h
+++ b/drivers/staging/vme_user/vme.h
@@ -151,7 +151,7 @@ ssize_t vme_master_read(struct vme_resource *, void *, size_t, loff_t);
 ssize_t vme_master_write(struct vme_resource *, void *, size_t, loff_t);
 unsigned int vme_master_rmw(struct vme_resource *, unsigned int, unsigned int,
 			    unsigned int, loff_t);
-int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma);
+int vme_master_mmap(struct vme_resource *resource, struct mm_area *vma);
 void vme_master_free(struct vme_resource *);
 
 struct vme_resource *vme_dma_request(struct vme_dev *, u32);
diff --git a/drivers/staging/vme_user/vme_user.c b/drivers/staging/vme_user/vme_user.c
index 5829a4141561..fd777648698d 100644
--- a/drivers/staging/vme_user/vme_user.c
+++ b/drivers/staging/vme_user/vme_user.c
@@ -424,14 +424,14 @@ vme_user_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 	return ret;
 }
 
-static void vme_user_vm_open(struct vm_area_struct *vma)
+static void vme_user_vm_open(struct mm_area *vma)
 {
 	struct vme_user_vma_priv *vma_priv = vma->vm_private_data;
 
 	refcount_inc(&vma_priv->refcnt);
 }
 
-static void vme_user_vm_close(struct vm_area_struct *vma)
+static void vme_user_vm_close(struct mm_area *vma)
 {
 	struct vme_user_vma_priv *vma_priv = vma->vm_private_data;
 	unsigned int minor = vma_priv->minor;
@@ -451,7 +451,7 @@ static const struct vm_operations_struct vme_user_vm_ops = {
 	.close = vme_user_vm_close,
 };
 
-static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct *vma)
+static int vme_user_master_mmap(unsigned int minor, struct mm_area *vma)
 {
 	int err;
 	struct vme_user_vma_priv *vma_priv;
@@ -482,7 +482,7 @@ static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct *vma)
 	return 0;
 }
 
-static int vme_user_mmap(struct file *file, struct vm_area_struct *vma)
+static int vme_user_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned int minor = iminor(file_inode(file));
 
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 0f5d820af119..eaff895205b4 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1823,7 +1823,7 @@ static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
  * mmap code from uio.c. Copied here because we want to hook mmap()
  * and this stuff must come along.
  */
-static int tcmu_find_mem_index(struct vm_area_struct *vma)
+static int tcmu_find_mem_index(struct mm_area *vma)
 {
 	struct tcmu_dev *udev = vma->vm_private_data;
 	struct uio_info *info = &udev->uio_info;
@@ -1860,7 +1860,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
 	return NULL;
 }
 
-static void tcmu_vma_open(struct vm_area_struct *vma)
+static void tcmu_vma_open(struct mm_area *vma)
 {
 	struct tcmu_dev *udev = vma->vm_private_data;
 
@@ -1869,7 +1869,7 @@ static void tcmu_vma_open(struct vm_area_struct *vma)
 	kref_get(&udev->kref);
 }
 
-static void tcmu_vma_close(struct vm_area_struct *vma)
+static void tcmu_vma_close(struct mm_area *vma)
 {
 	struct tcmu_dev *udev = vma->vm_private_data;
 
@@ -1924,7 +1924,7 @@ static const struct vm_operations_struct tcmu_vm_ops = {
 	.fault = tcmu_vma_fault,
 };
 
-static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
+static int tcmu_mmap(struct uio_info *info, struct mm_area *vma)
 {
 	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
 
diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
index 16eb953e14bb..24db89ca4e26 100644
--- a/drivers/tee/optee/call.c
+++ b/drivers/tee/optee/call.c
@@ -611,7 +611,7 @@ static bool is_normal_memory(pgprot_t p)
 static int __check_mem_type(struct mm_struct *mm, unsigned long start,
 				unsigned long end)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, start);
 
 	for_each_vma_range(vmi, vma, end) {
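
__check_mem_type() is one of the few callers here that walks VMAs instead
of implementing mmap, so it is worth spelling out the iterator idiom with
the new name.  The predicate below is a placeholder; as in the optee code,
the caller must hold mmap_read_lock(mm):

static bool my_range_ok(struct mm_struct *mm, unsigned long start,
			unsigned long end)
{
	struct mm_area *vma;
	VMA_ITERATOR(vmi, mm, start);

	for_each_vma_range(vmi, vma, end) {
		if (!(vma->vm_flags & VM_READ)) /* placeholder check */
			return false;
	}
	return true;
}
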
diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index daf6e5cfd59a..c6b120e0d3ae 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -434,7 +434,7 @@ static int tee_shm_fop_release(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma)
+static int tee_shm_fop_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct tee_shm *shm = filp->private_data;
 	size_t size = vma->vm_end - vma->vm_start;
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index d93ed4e86a17..93d41eddc33c 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -669,7 +669,7 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
 	return retval ? retval : sizeof(s32);
 }
 
-static int uio_find_mem_index(struct vm_area_struct *vma)
+static int uio_find_mem_index(struct mm_area *vma)
 {
 	struct uio_device *idev = vma->vm_private_data;
 
@@ -726,7 +726,7 @@ static const struct vm_operations_struct uio_logical_vm_ops = {
 	.fault = uio_vma_fault,
 };
 
-static int uio_mmap_logical(struct vm_area_struct *vma)
+static int uio_mmap_logical(struct mm_area *vma)
 {
 	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_ops = &uio_logical_vm_ops;
@@ -739,7 +739,7 @@ static const struct vm_operations_struct uio_physical_vm_ops = {
 #endif
 };
 
-static int uio_mmap_physical(struct vm_area_struct *vma)
+static int uio_mmap_physical(struct mm_area *vma)
 {
 	struct uio_device *idev = vma->vm_private_data;
 	int mi = uio_find_mem_index(vma);
@@ -774,7 +774,7 @@ static int uio_mmap_physical(struct vm_area_struct *vma)
 			       vma->vm_page_prot);
 }
 
-static int uio_mmap_dma_coherent(struct vm_area_struct *vma)
+static int uio_mmap_dma_coherent(struct mm_area *vma)
 {
 	struct uio_device *idev = vma->vm_private_data;
 	struct uio_mem *mem;
@@ -817,7 +817,7 @@ static int uio_mmap_dma_coherent(struct vm_area_struct *vma)
 	return ret;
 }
 
-static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
+static int uio_mmap(struct file *filep, struct mm_area *vma)
 {
 	struct uio_listener *listener = filep->private_data;
 	struct uio_device *idev = listener->dev;
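
uio_mmap_logical() above is also a reminder that vm_flags has been written
through helpers such as vm_flags_set() since 6.3; none of that changes
here.  A sketch of a typical mmap entry point after the rename
(MY_MAX_PAGES is a stand-in, my_vm_ops as in the rio-style sketch earlier):

#define MY_MAX_PAGES 16 /* hypothetical backing-buffer size */

static int my_dev_mmap(struct file *filp, struct mm_area *vma)
{
	if (vma->vm_pgoff != 0 || vma_pages(vma) > MY_MAX_PAGES)
		return -EINVAL;

	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
	vma->vm_ops = &my_vm_ops;
	vma->vm_private_data = filp->private_data;
	return 0;
}
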
diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
index 1b19b5647495..5283c75d0860 100644
--- a/drivers/uio/uio_hv_generic.c
+++ b/drivers/uio/uio_hv_generic.c
@@ -136,7 +136,7 @@ static void hv_uio_rescind(struct vmbus_channel *channel)
  */
 static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj,
 			    const struct bin_attribute *attr,
-			    struct vm_area_struct *vma)
+			    struct mm_area *vma)
 {
 	struct vmbus_channel *channel
 		= container_of(kobj, struct vmbus_channel, kobj);
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index f6ce6e26e0d4..328bdbc57cf0 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -205,7 +205,7 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
 	}
 }
 
-static void usbdev_vm_open(struct vm_area_struct *vma)
+static void usbdev_vm_open(struct mm_area *vma)
 {
 	struct usb_memory *usbm = vma->vm_private_data;
 	unsigned long flags;
@@ -215,7 +215,7 @@ static void usbdev_vm_open(struct vm_area_struct *vma)
 	spin_unlock_irqrestore(&usbm->ps->lock, flags);
 }
 
-static void usbdev_vm_close(struct vm_area_struct *vma)
+static void usbdev_vm_close(struct mm_area *vma)
 {
 	struct usb_memory *usbm = vma->vm_private_data;
 
@@ -227,7 +227,7 @@ static const struct vm_operations_struct usbdev_vm_ops = {
 	.close = usbdev_vm_close
 };
 
-static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+static int usbdev_mmap(struct file *file, struct mm_area *vma)
 {
 	struct usb_memory *usbm = NULL;
 	struct usb_dev_state *ps = file->private_data;
diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
index 9a1bbd79ff5a..519586dfeb0f 100644
--- a/drivers/usb/gadget/function/uvc_queue.c
+++ b/drivers/usb/gadget/function/uvc_queue.c
@@ -212,7 +212,7 @@ __poll_t uvcg_queue_poll(struct uvc_video_queue *queue, struct file *file,
 	return vb2_poll(&queue->queue, file, wait);
 }
 
-int uvcg_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma)
+int uvcg_queue_mmap(struct uvc_video_queue *queue, struct mm_area *vma)
 {
 	return vb2_mmap(&queue->queue, vma);
 }
diff --git a/drivers/usb/gadget/function/uvc_queue.h b/drivers/usb/gadget/function/uvc_queue.h
index b54becc570a3..4f8a2d2ef2ae 100644
--- a/drivers/usb/gadget/function/uvc_queue.h
+++ b/drivers/usb/gadget/function/uvc_queue.h
@@ -83,7 +83,7 @@ int uvcg_dequeue_buffer(struct uvc_video_queue *queue,
 __poll_t uvcg_queue_poll(struct uvc_video_queue *queue,
 			     struct file *file, poll_table *wait);
 
-int uvcg_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma);
+int uvcg_queue_mmap(struct uvc_video_queue *queue, struct mm_area *vma);
 
 #ifndef CONFIG_MMU
 unsigned long uvcg_queue_get_unmapped_area(struct uvc_video_queue *queue,
diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
index fc9a8d31a1e9..f0016d03f5bb 100644
--- a/drivers/usb/gadget/function/uvc_v4l2.c
+++ b/drivers/usb/gadget/function/uvc_v4l2.c
@@ -702,7 +702,7 @@ uvc_v4l2_release(struct file *file)
 }
 
 static int
-uvc_v4l2_mmap(struct file *file, struct vm_area_struct *vma)
+uvc_v4l2_mmap(struct file *file, struct mm_area *vma)
 {
 	struct video_device *vdev = video_devdata(file);
 	struct uvc_device *uvc = video_get_drvdata(vdev);
diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
index c93b43f5bc46..765efbb61818 100644
--- a/drivers/usb/mon/mon_bin.c
+++ b/drivers/usb/mon/mon_bin.c
@@ -1222,7 +1222,7 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
  * open and close: just keep track of how many times the device is
  * mapped, to use the proper memory allocation function.
  */
-static void mon_bin_vma_open(struct vm_area_struct *vma)
+static void mon_bin_vma_open(struct mm_area *vma)
 {
 	struct mon_reader_bin *rp = vma->vm_private_data;
 	unsigned long flags;
@@ -1232,7 +1232,7 @@ static void mon_bin_vma_open(struct vm_area_struct *vma)
 	spin_unlock_irqrestore(&rp->b_lock, flags);
 }
 
-static void mon_bin_vma_close(struct vm_area_struct *vma)
+static void mon_bin_vma_close(struct mm_area *vma)
 {
 	unsigned long flags;
 
@@ -1272,7 +1272,7 @@ static const struct vm_operations_struct mon_bin_vm_ops = {
 	.fault =    mon_bin_vma_fault,
 };
 
-static int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
+static int mon_bin_mmap(struct file *filp, struct mm_area *vma)
 {
 	/* don't do anything here: "fault" will set up page table entries */
 	vma->vm_ops = &mon_bin_vm_ops;
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 58116f89d8da..372456ffd5a3 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -532,7 +532,7 @@ static const struct vm_operations_struct vduse_domain_mmap_ops = {
 	.fault = vduse_domain_mmap_fault,
 };
 
-static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma)
+static int vduse_domain_mmap(struct file *file, struct mm_area *vma)
 {
 	struct vduse_iova_domain *domain = file->private_data;
 
diff --git a/drivers/vfio/cdx/main.c b/drivers/vfio/cdx/main.c
index 5dd5f5ad7686..81d6e3d2293d 100644
--- a/drivers/vfio/cdx/main.c
+++ b/drivers/vfio/cdx/main.c
@@ -233,7 +233,7 @@ static long vfio_cdx_ioctl(struct vfio_device *core_vdev,
 }
 
 static int vfio_cdx_mmap_mmio(struct vfio_cdx_region region,
-			      struct vm_area_struct *vma)
+			      struct mm_area *vma)
 {
 	u64 size = vma->vm_end - vma->vm_start;
 	u64 pgoff, base;
@@ -253,7 +253,7 @@ static int vfio_cdx_mmap_mmio(struct vfio_cdx_region region,
 }
 
 static int vfio_cdx_mmap(struct vfio_device *core_vdev,
-			 struct vm_area_struct *vma)
+			 struct mm_area *vma)
 {
 	struct vfio_cdx_device *vdev =
 		container_of(core_vdev, struct vfio_cdx_device, vdev);
diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
index f65d91c01f2e..27b03c09f016 100644
--- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c
+++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
@@ -357,7 +357,7 @@ static ssize_t vfio_fsl_mc_write(struct vfio_device *core_vdev,
 }
 
 static int vfio_fsl_mc_mmap_mmio(struct vfio_fsl_mc_region region,
-				 struct vm_area_struct *vma)
+				 struct mm_area *vma)
 {
 	u64 size = vma->vm_end - vma->vm_start;
 	u64 pgoff, base;
@@ -382,7 +382,7 @@ static int vfio_fsl_mc_mmap_mmio(struct vfio_fsl_mc_region region,
 }
 
 static int vfio_fsl_mc_mmap(struct vfio_device *core_vdev,
-			    struct vm_area_struct *vma)
+			    struct mm_area *vma)
 {
 	struct vfio_fsl_mc_device *vdev =
 		container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
index 451c639299eb..e61c19772dc2 100644
--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
@@ -1218,7 +1218,7 @@ static int hisi_acc_pci_rw_access_check(struct vfio_device *core_vdev,
 }
 
 static int hisi_acc_vfio_pci_mmap(struct vfio_device *core_vdev,
-				  struct vm_area_struct *vma)
+				  struct mm_area *vma)
 {
 	struct vfio_pci_core_device *vdev =
 		container_of(core_vdev, struct vfio_pci_core_device, vdev);
diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
index e5ac39c4cc6b..935332b63571 100644
--- a/drivers/vfio/pci/nvgrace-gpu/main.c
+++ b/drivers/vfio/pci/nvgrace-gpu/main.c
@@ -131,7 +131,7 @@ static void nvgrace_gpu_close_device(struct vfio_device *core_vdev)
 }
 
 static int nvgrace_gpu_mmap(struct vfio_device *core_vdev,
-			    struct vm_area_struct *vma)
+			    struct mm_area *vma)
 {
 	struct nvgrace_gpu_pci_core_device *nvdev =
 		container_of(core_vdev, struct nvgrace_gpu_pci_core_device,
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 35f9046af315..3e24952b7309 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -1629,7 +1629,7 @@ void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev, u16 c
 	up_write(&vdev->memory_lock);
 }
 
-static unsigned long vma_to_pfn(struct vm_area_struct *vma)
+static unsigned long vma_to_pfn(struct mm_area *vma)
 {
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
 	int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
@@ -1644,7 +1644,7 @@ static unsigned long vma_to_pfn(struct vm_area_struct *vma)
 static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
 					   unsigned int order)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct vfio_pci_core_device *vdev = vma->vm_private_data;
 	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
 	vm_fault_t ret = VM_FAULT_SIGBUS;
@@ -1708,7 +1708,7 @@ static const struct vm_operations_struct vfio_pci_mmap_ops = {
 #endif
 };
 
-int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma)
+int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct mm_area *vma)
 {
 	struct vfio_pci_core_device *vdev =
 		container_of(core_vdev, struct vfio_pci_core_device, vdev);
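
vma_to_pfn() above decodes which BAR a mapping targets from vm_pgoff: VFIO
multiplexes every region through one file descriptor by packing a region
index into the high bits of the mmap offset (VFIO_PCI_OFFSET_SHIFT is 40).
A sketch of the scheme with invented names:

#define MY_SHIFT 40 /* region index lives above bit 40 of the byte offset */

static int my_region_index(struct mm_area *vma)
{
	return vma->vm_pgoff >> (MY_SHIFT - PAGE_SHIFT);
}

static unsigned long my_region_pgoff(struct mm_area *vma)
{
	return vma->vm_pgoff & ((1UL << (MY_SHIFT - PAGE_SHIFT)) - 1);
}

Userspace then maps region N by calling mmap(2) with a file offset of
(N << MY_SHIFT) plus the byte offset inside the region.
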
diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
index 3bf1043cd795..194cd554d8e8 100644
--- a/drivers/vfio/platform/vfio_platform_common.c
+++ b/drivers/vfio/platform/vfio_platform_common.c
@@ -550,7 +550,7 @@ ssize_t vfio_platform_write(struct vfio_device *core_vdev, const char __user *bu
 EXPORT_SYMBOL_GPL(vfio_platform_write);
 
 static int vfio_platform_mmap_mmio(struct vfio_platform_region region,
-				   struct vm_area_struct *vma)
+				   struct mm_area *vma)
 {
 	u64 req_len, pgoff, req_start;
 
@@ -569,7 +569,7 @@ static int vfio_platform_mmap_mmio(struct vfio_platform_region region,
 			       req_len, vma->vm_page_prot);
 }
 
-int vfio_platform_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma)
+int vfio_platform_mmap(struct vfio_device *core_vdev, struct mm_area *vma)
 {
 	struct vfio_platform_device *vdev =
 		container_of(core_vdev, struct vfio_platform_device, vdev);
diff --git a/drivers/vfio/platform/vfio_platform_private.h b/drivers/vfio/platform/vfio_platform_private.h
index 8d8fab516849..a7355a03e43c 100644
--- a/drivers/vfio/platform/vfio_platform_private.h
+++ b/drivers/vfio/platform/vfio_platform_private.h
@@ -92,7 +92,7 @@ ssize_t vfio_platform_write(struct vfio_device *core_vdev,
 			    const char __user *buf,
 			    size_t count, loff_t *ppos);
 int vfio_platform_mmap(struct vfio_device *core_vdev,
-		       struct vm_area_struct *vma);
+		       struct mm_area *vma);
 
 int vfio_platform_irq_init(struct vfio_platform_device *vdev);
 void vfio_platform_irq_cleanup(struct vfio_platform_device *vdev);
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 0ac56072af9f..acf89ab4e254 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -518,7 +518,7 @@ static void vfio_batch_fini(struct vfio_batch *batch)
 		free_page((unsigned long)batch->pages);
 }
 
-static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
+static int follow_fault_pfn(struct mm_area *vma, struct mm_struct *mm,
 			    unsigned long vaddr, unsigned long *pfn,
 			    unsigned long *addr_mask, bool write_fault)
 {
@@ -567,7 +567,7 @@ static long vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
 			   struct vfio_batch *batch)
 {
 	unsigned long pin_pages = min_t(unsigned long, npages, batch->capacity);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned int flags = 0;
 	long ret;
 
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 1fd261efc582..24eca55e4635 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -1339,7 +1339,7 @@ static ssize_t vfio_device_fops_write(struct file *filep,
 	return device->ops->write(device, buf, count, ppos);
 }
 
-static int vfio_device_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+static int vfio_device_fops_mmap(struct file *filep, struct mm_area *vma)
 {
 	struct vfio_device_file *df = filep->private_data;
 	struct vfio_device *device = df->device;
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 5a49b5a6d496..00dac20fc834 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -1048,7 +1048,7 @@ static int vhost_vdpa_va_map(struct vhost_vdpa *v,
 	struct vhost_dev *dev = &v->vdev;
 	u64 offset, map_size, map_iova = iova;
 	struct vdpa_map_file *map_file;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret = 0;
 
 	mmap_read_lock(dev->mm);
@@ -1486,7 +1486,7 @@ static vm_fault_t vhost_vdpa_fault(struct vm_fault *vmf)
 	struct vdpa_device *vdpa = v->vdpa;
 	const struct vdpa_config_ops *ops = vdpa->config;
 	struct vdpa_notification_area notify;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	u16 index = vma->vm_pgoff;
 
 	notify = ops->get_vq_notification(vdpa, index);
@@ -1498,7 +1498,7 @@ static const struct vm_operations_struct vhost_vdpa_vm_ops = {
 	.fault = vhost_vdpa_fault,
 };
 
-static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
+static int vhost_vdpa_mmap(struct file *file, struct mm_area *vma)
 {
 	struct vhost_vdpa *v = vma->vm_file->private_data;
 	struct vdpa_device *vdpa = v->vdpa;
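
vhost_vdpa_fault() is the PFN-map flavour of fault handling: the doorbell
page is device memory with no struct page behind it, so the handler inserts
a raw PFN instead of setting vmf->page.  A hedged sketch (my_doorbell_pa is
a made-up lookup helper):

static vm_fault_t my_mmio_fault(struct vm_fault *vmf)
{
	struct mm_area *vma = vmf->vma;
	phys_addr_t pa = my_doorbell_pa(vma->vm_private_data);

	/* requires VM_IO | VM_PFNMAP to have been set at mmap time */
	return vmf_insert_pfn(vma, vmf->address, PFN_DOWN(pa));
}
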
diff --git a/drivers/video/fbdev/68328fb.c b/drivers/video/fbdev/68328fb.c
index c24156eb3d0f..8b63b4e1aab0 100644
--- a/drivers/video/fbdev/68328fb.c
+++ b/drivers/video/fbdev/68328fb.c
@@ -91,7 +91,7 @@ static int mc68x328fb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
 			 u_int transp, struct fb_info *info);
 static int mc68x328fb_pan_display(struct fb_var_screeninfo *var,
 			   struct fb_info *info);
-static int mc68x328fb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int mc68x328fb_mmap(struct fb_info *info, struct mm_area *vma);
 
 static const struct fb_ops mc68x328fb_ops = {
 	.owner		= THIS_MODULE,
@@ -386,7 +386,7 @@ static int mc68x328fb_pan_display(struct fb_var_screeninfo *var,
      *  Most drivers don't need their own mmap function
      */
 
-static int mc68x328fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int mc68x328fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 #ifndef MMU
 	/* this is uClinux (no MMU) specific code */
diff --git a/drivers/video/fbdev/atafb.c b/drivers/video/fbdev/atafb.c
index b8ed1c537293..e6fbe997313f 100644
--- a/drivers/video/fbdev/atafb.c
+++ b/drivers/video/fbdev/atafb.c
@@ -291,7 +291,7 @@ static int *MV300_reg = MV300_reg_8bit;
  *			unsigned long arg);
  *
  *	* perform fb specific mmap *
- *	int (*fb_mmap)(struct fb_info *info, struct vm_area_struct *vma);
+ *	int (*fb_mmap)(struct fb_info *info, struct mm_area *vma);
  * } ;
  */
 
diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c
index 210fd3ac18a4..e9a48e71fbd4 100644
--- a/drivers/video/fbdev/aty/atyfb_base.c
+++ b/drivers/video/fbdev/aty/atyfb_base.c
@@ -253,7 +253,7 @@ static int atyfb_compat_ioctl(struct fb_info *info, u_int cmd, u_long arg)
 #endif
 
 #ifdef __sparc__
-static int atyfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int atyfb_mmap(struct fb_info *info, struct mm_area *vma);
 #endif
 static int atyfb_sync(struct fb_info *info);
 
@@ -1932,7 +1932,7 @@ static int atyfb_sync(struct fb_info *info)
 }
 
 #ifdef __sparc__
-static int atyfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int atyfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct atyfb_par *par = (struct atyfb_par *) info->par;
 	unsigned int size, page, map_size = 0;
diff --git a/drivers/video/fbdev/au1100fb.c b/drivers/video/fbdev/au1100fb.c
index 6251a6b07b3a..4ba693d12560 100644
--- a/drivers/video/fbdev/au1100fb.c
+++ b/drivers/video/fbdev/au1100fb.c
@@ -340,7 +340,7 @@ int au1100fb_fb_pan_display(struct fb_var_screeninfo *var, struct fb_info *fbi)
  * Map video memory in user space. We don't use the generic fb_mmap method mainly
  * to allow the use of the TLB streaming flag (CCA=6)
  */
-int au1100fb_fb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
+int au1100fb_fb_mmap(struct fb_info *fbi, struct mm_area *vma)
 {
 	struct au1100fb_device *fbdev = to_au1100fb_device(fbi);
 
diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
index ed770222660b..6f741b3ed47f 100644
--- a/drivers/video/fbdev/au1200fb.c
+++ b/drivers/video/fbdev/au1200fb.c
@@ -1232,7 +1232,7 @@ static int au1200fb_fb_blank(int blank_mode, struct fb_info *fbi)
  * Map video memory in user space. We don't use the generic fb_mmap
  * method mainly to allow the use of the TLB streaming flag (CCA=6)
  */
-static int au1200fb_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int au1200fb_fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct au1200fb_device *fbdev = info->par;
 
diff --git a/drivers/video/fbdev/bw2.c b/drivers/video/fbdev/bw2.c
index e757462af0a6..e56b43e62c57 100644
--- a/drivers/video/fbdev/bw2.c
+++ b/drivers/video/fbdev/bw2.c
@@ -31,7 +31,7 @@
 
 static int bw2_blank(int, struct fb_info *);
 
-static int bw2_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int bw2_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int bw2_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -154,7 +154,7 @@ static const struct sbus_mmap_map bw2_mmap_map[] = {
 	{ .size = 0 }
 };
 
-static int bw2_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int bw2_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct bw2_par *par = (struct bw2_par *)info->par;
 
diff --git a/drivers/video/fbdev/cg14.c b/drivers/video/fbdev/cg14.c
index 5389f8f07346..bc1619331049 100644
--- a/drivers/video/fbdev/cg14.c
+++ b/drivers/video/fbdev/cg14.c
@@ -33,7 +33,7 @@ static int cg14_setcolreg(unsigned, unsigned, unsigned, unsigned,
 			 unsigned, struct fb_info *);
 static int cg14_pan_display(struct fb_var_screeninfo *, struct fb_info *);
 
-static int cg14_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int cg14_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int cg14_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -258,7 +258,7 @@ static int cg14_setcolreg(unsigned regno,
 	return 0;
 }
 
-static int cg14_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int cg14_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct cg14_par *par = (struct cg14_par *) info->par;
 
diff --git a/drivers/video/fbdev/cg3.c b/drivers/video/fbdev/cg3.c
index a58a483014e6..e53243deaf87 100644
--- a/drivers/video/fbdev/cg3.c
+++ b/drivers/video/fbdev/cg3.c
@@ -33,7 +33,7 @@ static int cg3_setcolreg(unsigned, unsigned, unsigned, unsigned,
 			 unsigned, struct fb_info *);
 static int cg3_blank(int, struct fb_info *);
 
-static int cg3_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int cg3_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int cg3_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -218,7 +218,7 @@ static const struct sbus_mmap_map cg3_mmap_map[] = {
 	{ .size = 0 }
 };
 
-static int cg3_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int cg3_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct cg3_par *par = (struct cg3_par *)info->par;
 
diff --git a/drivers/video/fbdev/cg6.c b/drivers/video/fbdev/cg6.c
index 56d74468040a..826bace4fabd 100644
--- a/drivers/video/fbdev/cg6.c
+++ b/drivers/video/fbdev/cg6.c
@@ -39,7 +39,7 @@ static void cg6_copyarea(struct fb_info *info, const struct fb_copyarea *area);
 static int cg6_sync(struct fb_info *);
 static int cg6_pan_display(struct fb_var_screeninfo *, struct fb_info *);
 
-static int cg6_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int cg6_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int cg6_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -589,7 +589,7 @@ static const struct sbus_mmap_map cg6_mmap_map[] = {
 	{ .size	= 0 }
 };
 
-static int cg6_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int cg6_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct cg6_par *par = (struct cg6_par *)info->par;
 
diff --git a/drivers/video/fbdev/controlfb.c b/drivers/video/fbdev/controlfb.c
index 5c5284e8ae0e..0301ea641ba3 100644
--- a/drivers/video/fbdev/controlfb.c
+++ b/drivers/video/fbdev/controlfb.c
@@ -729,7 +729,7 @@ static int controlfb_blank(int blank_mode, struct fb_info *info)
  * Note there's no locking in here; it's done in fb_mmap() in fbmem.c.
  */
 static int controlfb_mmap(struct fb_info *info,
-                       struct vm_area_struct *vma)
+                       struct mm_area *vma)
 {
 	unsigned long mmio_pgoff;
 	unsigned long start;
diff --git a/drivers/video/fbdev/core/fb_chrdev.c b/drivers/video/fbdev/core/fb_chrdev.c
index 4ebd16b7e3b8..50a46c896978 100644
--- a/drivers/video/fbdev/core/fb_chrdev.c
+++ b/drivers/video/fbdev/core/fb_chrdev.c
@@ -311,7 +311,7 @@ static long fb_compat_ioctl(struct file *file, unsigned int cmd,
 }
 #endif
 
-static int fb_mmap(struct file *file, struct vm_area_struct *vma)
+static int fb_mmap(struct file *file, struct mm_area *vma)
 {
 	struct fb_info *info = file_fb_info(file);
 	int res;
diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index 4fc93f253e06..01688f93cc91 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -243,7 +243,7 @@ static const struct address_space_operations fb_deferred_io_aops = {
 	.dirty_folio	= noop_dirty_folio,
 };
 
-int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
+int fb_deferred_io_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
diff --git a/drivers/video/fbdev/core/fb_io_fops.c b/drivers/video/fbdev/core/fb_io_fops.c
index 3408ff1b2b7a..e00756595b77 100644
--- a/drivers/video/fbdev/core/fb_io_fops.c
+++ b/drivers/video/fbdev/core/fb_io_fops.c
@@ -138,7 +138,7 @@ ssize_t fb_io_write(struct fb_info *info, const char __user *buf, size_t count,
 }
 EXPORT_SYMBOL(fb_io_write);
 
-int fb_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
+int fb_io_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	unsigned long start = info->fix.smem_start;
 	u32 len = info->fix.smem_len;
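
fb_io_mmap() and most of the fbdev hunks that follow boil down to one
physical remap.  A simplified sketch after the rename; the real
fb_io_mmap() also exposes the MMIO region past the framebuffer and picks a
platform-specific page protection, so treat the pgprot_noncached() below as
an assumption:

static int my_fb_mmap(struct fb_info *info, struct mm_area *vma)
{
	/* vm_iomap_memory() validates the requested offset and length */
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return vm_iomap_memory(vma, info->fix.smem_start, info->fix.smem_len);
}
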
diff --git a/drivers/video/fbdev/ep93xx-fb.c b/drivers/video/fbdev/ep93xx-fb.c
index 801ef427f1ba..cab3e18fb52e 100644
--- a/drivers/video/fbdev/ep93xx-fb.c
+++ b/drivers/video/fbdev/ep93xx-fb.c
@@ -307,7 +307,7 @@ static int ep93xxfb_check_var(struct fb_var_screeninfo *var,
 	return 0;
 }
 
-static int ep93xxfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int ep93xxfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	unsigned int offset = vma->vm_pgoff << PAGE_SHIFT;
 
diff --git a/drivers/video/fbdev/ffb.c b/drivers/video/fbdev/ffb.c
index 34b6abff9493..75c2aaf77b81 100644
--- a/drivers/video/fbdev/ffb.c
+++ b/drivers/video/fbdev/ffb.c
@@ -39,7 +39,7 @@ static void ffb_copyarea(struct fb_info *, const struct fb_copyarea *);
 static int ffb_sync(struct fb_info *);
 static int ffb_pan_display(struct fb_var_screeninfo *, struct fb_info *);
 
-static int ffb_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int ffb_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int ffb_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -849,7 +849,7 @@ static const struct sbus_mmap_map ffb_mmap_map[] = {
 	{ .size = 0 }
 };
 
-static int ffb_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int ffb_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct ffb_par *par = (struct ffb_par *)info->par;
 
diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
index 4c36a3e409be..b3a423fbe0e9 100644
--- a/drivers/video/fbdev/gbefb.c
+++ b/drivers/video/fbdev/gbefb.c
@@ -992,7 +992,7 @@ static int gbefb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
 }
 
 static int gbefb_mmap(struct fb_info *info,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	unsigned long size = vma->vm_end - vma->vm_start;
 	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
diff --git a/drivers/video/fbdev/leo.c b/drivers/video/fbdev/leo.c
index b9fb059df2c7..76d44efee3c1 100644
--- a/drivers/video/fbdev/leo.c
+++ b/drivers/video/fbdev/leo.c
@@ -33,7 +33,7 @@ static int leo_setcolreg(unsigned, unsigned, unsigned, unsigned,
 static int leo_blank(int, struct fb_info *);
 static int leo_pan_display(struct fb_var_screeninfo *, struct fb_info *);
 
-static int leo_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int leo_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int leo_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -407,7 +407,7 @@ static const struct sbus_mmap_map leo_mmap_map[] = {
 	{ .size = 0 }
 };
 
-static int leo_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int leo_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct leo_par *par = (struct leo_par *)info->par;
 
diff --git a/drivers/video/fbdev/omap/omapfb.h b/drivers/video/fbdev/omap/omapfb.h
index ab1cb6e7f5f8..cfd41ba0dac7 100644
--- a/drivers/video/fbdev/omap/omapfb.h
+++ b/drivers/video/fbdev/omap/omapfb.h
@@ -159,7 +159,7 @@ struct lcd_ctrl {
 	int		(*setup_mem)	  (int plane, size_t size,
 					   int mem_type, unsigned long *paddr);
 	int		(*mmap)		  (struct fb_info *info,
-					   struct vm_area_struct *vma);
+					   struct mm_area *vma);
 	int		(*set_scale)	  (int plane,
 					   int orig_width, int orig_height,
 					   int out_width, int out_height);
diff --git a/drivers/video/fbdev/omap/omapfb_main.c b/drivers/video/fbdev/omap/omapfb_main.c
index 2682b20d184a..f6781f51b2cc 100644
--- a/drivers/video/fbdev/omap/omapfb_main.c
+++ b/drivers/video/fbdev/omap/omapfb_main.c
@@ -1197,7 +1197,7 @@ static int omapfb_ioctl(struct fb_info *fbi, unsigned int cmd,
 	return r;
 }
 
-static int omapfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int omapfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct omapfb_plane_struct *plane = info->par;
 	struct omapfb_device *fbdev = plane->fbdev;
diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-main.c b/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
index 211f23648686..081d6ea622bb 100644
--- a/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
+++ b/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
@@ -1063,7 +1063,7 @@ static int omapfb_pan_display(struct fb_var_screeninfo *var,
 	return r;
 }
 
-static void mmap_user_open(struct vm_area_struct *vma)
+static void mmap_user_open(struct mm_area *vma)
 {
 	struct omapfb2_mem_region *rg = vma->vm_private_data;
 
@@ -1072,7 +1072,7 @@ static void mmap_user_open(struct vm_area_struct *vma)
 	omapfb_put_mem_region(rg);
 }
 
-static void mmap_user_close(struct vm_area_struct *vma)
+static void mmap_user_close(struct mm_area *vma)
 {
 	struct omapfb2_mem_region *rg = vma->vm_private_data;
 
@@ -1086,7 +1086,7 @@ static const struct vm_operations_struct mmap_user_ops = {
 	.close = mmap_user_close,
 };
 
-static int omapfb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
+static int omapfb_mmap(struct fb_info *fbi, struct mm_area *vma)
 {
 	struct omapfb_info *ofbi = FB2OFB(fbi);
 	struct fb_fix_screeninfo *fix = &fbi->fix;
diff --git a/drivers/video/fbdev/p9100.c b/drivers/video/fbdev/p9100.c
index 0bc0f78fe4b9..62fdfe8c682d 100644
--- a/drivers/video/fbdev/p9100.c
+++ b/drivers/video/fbdev/p9100.c
@@ -31,7 +31,7 @@ static int p9100_setcolreg(unsigned, unsigned, unsigned, unsigned,
 			   unsigned, struct fb_info *);
 static int p9100_blank(int, struct fb_info *);
 
-static int p9100_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int p9100_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int p9100_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -211,7 +211,7 @@ static const struct sbus_mmap_map p9100_mmap_map[] = {
 	{ 0,			0,		0		    }
 };
 
-static int p9100_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int p9100_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct p9100_par *par = (struct p9100_par *)info->par;
 
diff --git a/drivers/video/fbdev/ps3fb.c b/drivers/video/fbdev/ps3fb.c
index dbcda307f6a6..55796e1765a7 100644
--- a/drivers/video/fbdev/ps3fb.c
+++ b/drivers/video/fbdev/ps3fb.c
@@ -704,7 +704,7 @@ static int ps3fb_pan_display(struct fb_var_screeninfo *var,
      *  As we have a virtual frame buffer, we need our own mmap function
      */
 
-static int ps3fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int ps3fb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	int r;
 
diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c
index 4a78b387b343..6a4ffc17299c 100644
--- a/drivers/video/fbdev/pxa3xx-gcu.c
+++ b/drivers/video/fbdev/pxa3xx-gcu.c
@@ -469,7 +469,7 @@ pxa3xx_gcu_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 }
 
 static int
-pxa3xx_gcu_mmap(struct file *file, struct vm_area_struct *vma)
+pxa3xx_gcu_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned int size = vma->vm_end - vma->vm_start;
 	struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file);
diff --git a/drivers/video/fbdev/sa1100fb.c b/drivers/video/fbdev/sa1100fb.c
index 0d362d2bf0e3..d21ae655cca4 100644
--- a/drivers/video/fbdev/sa1100fb.c
+++ b/drivers/video/fbdev/sa1100fb.c
@@ -556,7 +556,7 @@ static int sa1100fb_blank(int blank, struct fb_info *info)
 }
 
 static int sa1100fb_mmap(struct fb_info *info,
-			 struct vm_area_struct *vma)
+			 struct mm_area *vma)
 {
 	struct sa1100fb_info *fbi =
 		container_of(info, struct sa1100fb_info, fb);
diff --git a/drivers/video/fbdev/sbuslib.c b/drivers/video/fbdev/sbuslib.c
index 4c79654bda30..8fced2f56b38 100644
--- a/drivers/video/fbdev/sbuslib.c
+++ b/drivers/video/fbdev/sbuslib.c
@@ -42,7 +42,7 @@ int sbusfb_mmap_helper(const struct sbus_mmap_map *map,
 		       unsigned long physbase,
 		       unsigned long fbsize,
 		       unsigned long iospace,
-		       struct vm_area_struct *vma)
+		       struct mm_area *vma)
 {
 	unsigned int size, page, r, map_size;
 	unsigned long map_offset = 0;
diff --git a/drivers/video/fbdev/sbuslib.h b/drivers/video/fbdev/sbuslib.h
index e9af2dc93f94..75e60f30957f 100644
--- a/drivers/video/fbdev/sbuslib.h
+++ b/drivers/video/fbdev/sbuslib.h
@@ -6,7 +6,7 @@
 struct device_node;
 struct fb_info;
 struct fb_var_screeninfo;
-struct vm_area_struct;
+struct mm_area;
 
 struct sbus_mmap_map {
 	unsigned long voff;
@@ -22,7 +22,7 @@ extern void sbusfb_fill_var(struct fb_var_screeninfo *var,
 extern int sbusfb_mmap_helper(const struct sbus_mmap_map *map,
 			      unsigned long physbase, unsigned long fbsize,
 			      unsigned long iospace,
-			      struct vm_area_struct *vma);
+			      struct mm_area *vma);
 int sbusfb_ioctl_helper(unsigned long cmd, unsigned long arg,
 			struct fb_info *info,
 			int type, int fb_depth, unsigned long fb_size);
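
sbuslib.h also shows the header side of the rename: because only pointers
cross these interfaces, a bare forward declaration is enough and no
mm_types.h include is needed.  In sketch form (my_helper is invented):

struct mm_area; /* forward declaration, no #include required */

int my_helper(struct mm_area *vma, unsigned long physbase);
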
diff --git a/drivers/video/fbdev/sh_mobile_lcdcfb.c b/drivers/video/fbdev/sh_mobile_lcdcfb.c
index dd950e4ab5ce..4b53eabd93fb 100644
--- a/drivers/video/fbdev/sh_mobile_lcdcfb.c
+++ b/drivers/video/fbdev/sh_mobile_lcdcfb.c
@@ -1478,7 +1478,7 @@ static int sh_mobile_lcdc_overlay_blank(int blank, struct fb_info *info)
 }
 
 static int
-sh_mobile_lcdc_overlay_mmap(struct fb_info *info, struct vm_area_struct *vma)
+sh_mobile_lcdc_overlay_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct sh_mobile_lcdc_overlay *ovl = info->par;
 
@@ -1947,7 +1947,7 @@ static int sh_mobile_lcdc_blank(int blank, struct fb_info *info)
 }
 
 static int
-sh_mobile_lcdc_mmap(struct fb_info *info, struct vm_area_struct *vma)
+sh_mobile_lcdc_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct sh_mobile_lcdc_chan *ch = info->par;
 
diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
index 5f0dd01fd834..0cf731d1c04c 100644
--- a/drivers/video/fbdev/smscufx.c
+++ b/drivers/video/fbdev/smscufx.c
@@ -773,7 +773,7 @@ static int ufx_set_vid_mode(struct ufx_data *dev, struct fb_var_screeninfo *var)
 	return 0;
 }
 
-static int ufx_ops_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int ufx_ops_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	unsigned long start = vma->vm_start;
 	unsigned long size = vma->vm_end - vma->vm_start;
diff --git a/drivers/video/fbdev/tcx.c b/drivers/video/fbdev/tcx.c
index f9a0085ad72b..fef8f2c55b15 100644
--- a/drivers/video/fbdev/tcx.c
+++ b/drivers/video/fbdev/tcx.c
@@ -34,7 +34,7 @@ static int tcx_setcolreg(unsigned, unsigned, unsigned, unsigned,
 static int tcx_blank(int, struct fb_info *);
 static int tcx_pan_display(struct fb_var_screeninfo *, struct fb_info *);
 
-static int tcx_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
+static int tcx_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
 static int tcx_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
 
 /*
@@ -292,7 +292,7 @@ static const struct sbus_mmap_map __tcx_mmap_map[TCX_MMAP_ENTRIES] = {
 	{ .size = 0 }
 };
 
-static int tcx_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int tcx_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	struct tcx_par *par = (struct tcx_par *)info->par;
 
diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
index acadf0eb450c..bcffed2bac09 100644
--- a/drivers/video/fbdev/udlfb.c
+++ b/drivers/video/fbdev/udlfb.c
@@ -321,7 +321,7 @@ static int dlfb_set_video_mode(struct dlfb_data *dlfb,
 	return retval;
 }
 
-static int dlfb_ops_mmap(struct fb_info *info, struct vm_area_struct *vma)
+static int dlfb_ops_mmap(struct fb_info *info, struct mm_area *vma)
 {
 	unsigned long start = vma->vm_start;
 	unsigned long size = vma->vm_end - vma->vm_start;
diff --git a/drivers/video/fbdev/vfb.c b/drivers/video/fbdev/vfb.c
index 5b7965f36c5e..5836aa107f86 100644
--- a/drivers/video/fbdev/vfb.c
+++ b/drivers/video/fbdev/vfb.c
@@ -76,7 +76,7 @@ static int vfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
 static int vfb_pan_display(struct fb_var_screeninfo *var,
 			   struct fb_info *info);
 static int vfb_mmap(struct fb_info *info,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 
 static const struct fb_ops vfb_ops = {
 	.owner		= THIS_MODULE,
@@ -380,7 +380,7 @@ static int vfb_pan_display(struct fb_var_screeninfo *var,
      */
 
 static int vfb_mmap(struct fb_info *info,
-		    struct vm_area_struct *vma)
+		    struct mm_area *vma)
 {
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
 
diff --git a/drivers/virt/acrn/mm.c b/drivers/virt/acrn/mm.c
index 4c2f28715b70..eeec17237749 100644
--- a/drivers/virt/acrn/mm.c
+++ b/drivers/virt/acrn/mm.c
@@ -163,7 +163,7 @@ int acrn_vm_ram_map(struct acrn_vm *vm, struct acrn_vm_memmap *memmap)
 	void *remap_vaddr;
 	int ret, pinned;
 	u64 user_vm_pa;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if (!vm || !memmap)
 		return -EINVAL;
diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
index f93f73ecefee..62dab536c2f6 100644
--- a/drivers/xen/gntalloc.c
+++ b/drivers/xen/gntalloc.c
@@ -445,7 +445,7 @@ static long gntalloc_ioctl(struct file *filp, unsigned int cmd,
 	return 0;
 }
 
-static void gntalloc_vma_open(struct vm_area_struct *vma)
+static void gntalloc_vma_open(struct mm_area *vma)
 {
 	struct gntalloc_vma_private_data *priv = vma->vm_private_data;
 
@@ -457,7 +457,7 @@ static void gntalloc_vma_open(struct vm_area_struct *vma)
 	mutex_unlock(&gref_mutex);
 }
 
-static void gntalloc_vma_close(struct vm_area_struct *vma)
+static void gntalloc_vma_close(struct mm_area *vma)
 {
 	struct gntalloc_vma_private_data *priv = vma->vm_private_data;
 	struct gntalloc_gref *gref, *next;
@@ -488,7 +488,7 @@ static const struct vm_operations_struct gntalloc_vmops = {
 	.close = gntalloc_vma_close,
 };
 
-static int gntalloc_mmap(struct file *filp, struct vm_area_struct *vma)
+static int gntalloc_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct gntalloc_file_private_data *priv = filp->private_data;
 	struct gntalloc_vma_private_data *vm_priv;
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 61faea1f0663..879c601543b8 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -496,7 +496,7 @@ static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
 
 /* ------------------------------------------------------------------ */
 
-static void gntdev_vma_open(struct vm_area_struct *vma)
+static void gntdev_vma_open(struct mm_area *vma)
 {
 	struct gntdev_grant_map *map = vma->vm_private_data;
 
@@ -504,7 +504,7 @@ static void gntdev_vma_open(struct vm_area_struct *vma)
 	refcount_inc(&map->users);
 }
 
-static void gntdev_vma_close(struct vm_area_struct *vma)
+static void gntdev_vma_close(struct mm_area *vma)
 {
 	struct gntdev_grant_map *map = vma->vm_private_data;
 	struct file *file = vma->vm_file;
@@ -516,7 +516,7 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
 	gntdev_put_map(priv, map);
 }
 
-static struct page *gntdev_vma_find_special_page(struct vm_area_struct *vma,
+static struct page *gntdev_vma_find_special_page(struct mm_area *vma,
 						 unsigned long addr)
 {
 	struct gntdev_grant_map *map = vma->vm_private_data;
@@ -690,7 +690,7 @@ static long gntdev_ioctl_get_offset_for_vaddr(struct gntdev_priv *priv,
 					      struct ioctl_gntdev_get_offset_for_vaddr __user *u)
 {
 	struct ioctl_gntdev_get_offset_for_vaddr op;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct gntdev_grant_map *map;
 	int rv = -EINVAL;
 
@@ -1030,7 +1030,7 @@ static long gntdev_ioctl(struct file *flip,
 	return 0;
 }
 
-static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
+static int gntdev_mmap(struct file *flip, struct mm_area *vma)
 {
 	struct gntdev_priv *priv = flip->private_data;
 	int index = vma->vm_pgoff;
diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
index 0f0dad427d7e..b0d391ea06a5 100644
--- a/drivers/xen/privcmd-buf.c
+++ b/drivers/xen/privcmd-buf.c
@@ -84,7 +84,7 @@ static int privcmd_buf_release(struct inode *ino, struct file *file)
 	return 0;
 }
 
-static void privcmd_buf_vma_open(struct vm_area_struct *vma)
+static void privcmd_buf_vma_open(struct mm_area *vma)
 {
 	struct privcmd_buf_vma_private *vma_priv = vma->vm_private_data;
 
@@ -96,7 +96,7 @@ static void privcmd_buf_vma_open(struct vm_area_struct *vma)
 	mutex_unlock(&vma_priv->file_priv->lock);
 }
 
-static void privcmd_buf_vma_close(struct vm_area_struct *vma)
+static void privcmd_buf_vma_close(struct mm_area *vma)
 {
 	struct privcmd_buf_vma_private *vma_priv = vma->vm_private_data;
 	struct privcmd_buf_private *file_priv;
@@ -130,7 +130,7 @@ static const struct vm_operations_struct privcmd_buf_vm_ops = {
 	.fault = privcmd_buf_vma_fault,
 };
 
-static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
+static int privcmd_buf_mmap(struct file *file, struct mm_area *vma)
 {
 	struct privcmd_buf_private *file_priv = file->private_data;
 	struct privcmd_buf_vma_private *vma_priv;
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 13a10f3294a8..6e064d04bab4 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -73,7 +73,7 @@ struct privcmd_data {
 };
 
 static int privcmd_vma_range_is_mapped(
-               struct vm_area_struct *vma,
+               struct mm_area *vma,
                unsigned long addr,
                unsigned long nr_pages);
 
@@ -226,7 +226,7 @@ static int traverse_pages_block(unsigned nelem, size_t size,
 
 struct mmap_gfn_state {
 	unsigned long va;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	domid_t domain;
 };
 
@@ -234,7 +234,7 @@ static int mmap_gfn_range(void *data, void *state)
 {
 	struct privcmd_mmap_entry *msg = data;
 	struct mmap_gfn_state *st = state;
-	struct vm_area_struct *vma = st->vma;
+	struct mm_area *vma = st->vma;
 	int rc;
 
 	/* Do not allow range to wrap the address space. */
@@ -265,7 +265,7 @@ static long privcmd_ioctl_mmap(struct file *file, void __user *udata)
 	struct privcmd_data *data = file->private_data;
 	struct privcmd_mmap mmapcmd;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int rc;
 	LIST_HEAD(pagelist);
 	struct mmap_gfn_state state;
@@ -324,7 +324,7 @@ static long privcmd_ioctl_mmap(struct file *file, void __user *udata)
 struct mmap_batch_state {
 	domid_t domain;
 	unsigned long va;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int index;
 	/* A tristate:
 	 *      0 for no errors
@@ -348,7 +348,7 @@ static int mmap_batch_fn(void *data, int nr, void *state)
 {
 	xen_pfn_t *gfnp = data;
 	struct mmap_batch_state *st = state;
-	struct vm_area_struct *vma = st->vma;
+	struct mm_area *vma = st->vma;
 	struct page **pages = vma->vm_private_data;
 	struct page **cur_pages = NULL;
 	int ret;
@@ -428,7 +428,7 @@ static int mmap_return_errors(void *data, int nr, void *state)
  * the vma with the page info to use later.
  * Returns: 0 if success, otherwise -errno
  */
-static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
+static int alloc_empty_pages(struct mm_area *vma, int numpgs)
 {
 	int rc;
 	struct page **pages;
@@ -459,7 +459,7 @@ static long privcmd_ioctl_mmap_batch(
 	int ret;
 	struct privcmd_mmapbatch_v2 m;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long nr_pages;
 	LIST_HEAD(pagelist);
 	struct mmap_batch_state state;
@@ -736,7 +736,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
 {
 	struct privcmd_data *data = file->private_data;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct privcmd_mmap_resource kdata;
 	xen_pfn_t *pfns = NULL;
 	struct xen_mem_acquire_resource xdata = { };
@@ -1222,7 +1222,7 @@ struct privcmd_kernel_ioreq *alloc_ioreq(struct privcmd_ioeventfd *ioeventfd)
 {
 	struct privcmd_kernel_ioreq *kioreq;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct page **pages;
 	unsigned int *ports;
 	int ret, size, i;
@@ -1584,7 +1584,7 @@ static int privcmd_release(struct inode *ino, struct file *file)
 	return 0;
 }
 
-static void privcmd_close(struct vm_area_struct *vma)
+static void privcmd_close(struct mm_area *vma)
 {
 	struct page **pages = vma->vm_private_data;
 	int numpgs = vma_pages(vma);
@@ -1617,7 +1617,7 @@ static const struct vm_operations_struct privcmd_vm_ops = {
 	.fault = privcmd_fault
 };
 
-static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
+static int privcmd_mmap(struct file *file, struct mm_area *vma)
 {
 	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
 	 * how to recreate these mappings */
@@ -1640,7 +1640,7 @@ static int is_mapped_fn(pte_t *pte, unsigned long addr, void *data)
 }
 
 static int privcmd_vma_range_is_mapped(
-	           struct vm_area_struct *vma,
+	           struct mm_area *vma,
 	           unsigned long addr,
 	           unsigned long nr_pages)
 {
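
The privcmd conversion above is representative of every ->mmap implementation
in this series: only the parameter type changes; flag setup (such as the
VM_DONTCOPY handling noted above) is untouched. A minimal sketch of a
converted handler, using hypothetical foo_* names that are not part of this
patch:

	/* Hypothetical driver; shows the converted signature only. */
	static int foo_mmap(struct file *file, struct mm_area *vma)
	{
		/* vm_flags handling is unchanged by the rename. */
		vm_flags_set(vma, VM_IO | VM_DONTCOPY | VM_DONTEXPAND);
		vma->vm_ops = &foo_vm_ops;	/* hypothetical ops table */
		return 0;
	}
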
diff --git a/drivers/xen/xenbus/xenbus_dev_backend.c b/drivers/xen/xenbus/xenbus_dev_backend.c
index edba5fecde4d..356bc765f133 100644
--- a/drivers/xen/xenbus/xenbus_dev_backend.c
+++ b/drivers/xen/xenbus/xenbus_dev_backend.c
@@ -89,7 +89,7 @@ static long xenbus_backend_ioctl(struct file *file, unsigned int cmd,
 	}
 }
 
-static int xenbus_backend_mmap(struct file *file, struct vm_area_struct *vma)
+static int xenbus_backend_mmap(struct file *file, struct mm_area *vma)
 {
 	size_t size = vma->vm_end - vma->vm_start;
 
diff --git a/drivers/xen/xenfs/xenstored.c b/drivers/xen/xenfs/xenstored.c
index f59235f9f8a2..a4685a4f5bef 100644
--- a/drivers/xen/xenfs/xenstored.c
+++ b/drivers/xen/xenfs/xenstored.c
@@ -31,7 +31,7 @@ static int xsd_kva_open(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static int xsd_kva_mmap(struct file *file, struct vm_area_struct *vma)
+static int xsd_kva_mmap(struct file *file, struct mm_area *vma)
 {
 	size_t size = vma->vm_end - vma->vm_start;
 
diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
index f17c4c03db30..a70ef3f8f617 100644
--- a/drivers/xen/xlate_mmu.c
+++ b/drivers/xen/xlate_mmu.c
@@ -66,7 +66,7 @@ struct remap_data {
 	int nr_fgfn; /* Number of foreign gfn left to map */
 	pgprot_t prot;
 	domid_t  domid;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int index;
 	struct page **pages;
 	struct xen_remap_gfn_info *info;
@@ -140,7 +140,7 @@ static int remap_pte_fn(pte_t *ptep, unsigned long addr, void *data)
 	return 0;
 }
 
-int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
+int xen_xlate_remap_gfn_array(struct mm_area *vma,
 			      unsigned long addr,
 			      xen_pfn_t *gfn, int nr,
 			      int *err_ptr, pgprot_t prot,
@@ -180,7 +180,7 @@ static void unmap_gfn(unsigned long gfn, void *data)
 	(void)HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
 }
 
-int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
+int xen_xlate_unmap_gfn_range(struct mm_area *vma,
 			      int nr, struct page **pages)
 {
 	xen_for_each_gfn(pages, nr, unmap_gfn, NULL);
@@ -282,7 +282,7 @@ static int remap_pfn_fn(pte_t *ptep, unsigned long addr, void *data)
 }
 
 /* Used by the privcmd module, but has to be built-in on ARM */
-int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr, unsigned long len)
+int xen_remap_vma_range(struct mm_area *vma, unsigned long addr, unsigned long len)
 {
 	struct remap_pfn r = {
 		.mm = vma->vm_mm,
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 348cc90bf9c5..b2a7d581805b 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -454,7 +454,7 @@ int v9fs_file_fsync_dotl(struct file *filp, loff_t start, loff_t end,
 }
 
 static int
-v9fs_file_mmap(struct file *filp, struct vm_area_struct *vma)
+v9fs_file_mmap(struct file *filp, struct mm_area *vma)
 {
 	int retval;
 	struct inode *inode = file_inode(filp);
@@ -480,7 +480,7 @@ v9fs_vm_page_mkwrite(struct vm_fault *vmf)
 	return netfs_page_mkwrite(vmf, NULL);
 }
 
-static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
+static void v9fs_mmap_vm_close(struct mm_area *vma)
 {
 	struct inode *inode;
 
diff --git a/fs/afs/file.c b/fs/afs/file.c
index fc15497608c6..1794c1138669 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -19,14 +19,14 @@
 #include <trace/events/netfs.h>
 #include "internal.h"
 
-static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
+static int afs_file_mmap(struct file *file, struct mm_area *vma);
 
 static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
 static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
 				    struct pipe_inode_info *pipe,
 				    size_t len, unsigned int flags);
-static void afs_vm_open(struct vm_area_struct *area);
-static void afs_vm_close(struct vm_area_struct *area);
+static void afs_vm_open(struct mm_area *area);
+static void afs_vm_close(struct mm_area *area);
 static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff);
 
 const struct file_operations afs_file_operations = {
@@ -492,7 +492,7 @@ static void afs_drop_open_mmap(struct afs_vnode *vnode)
 /*
  * Handle setting up a memory mapping on an AFS file.
  */
-static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int afs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
 	int ret;
@@ -507,12 +507,12 @@ static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
 	return ret;
 }
 
-static void afs_vm_open(struct vm_area_struct *vma)
+static void afs_vm_open(struct mm_area *vma)
 {
 	afs_add_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
 }
 
-static void afs_vm_close(struct vm_area_struct *vma)
+static void afs_vm_close(struct mm_area *vma)
 {
 	afs_drop_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
 }
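
Note that struct vm_operations_struct keeps its name; as in the afs
open/close pair above, only the callback argument type changes. A sketch of
the refcounted-mapping pattern, with hypothetical foo_* names:

	struct foo_ctx {
		refcount_t refs;
	};

	static void foo_vma_open(struct mm_area *vma)
	{
		struct foo_ctx *ctx = vma->vm_private_data;

		refcount_inc(&ctx->refs);
	}

	static void foo_vma_close(struct mm_area *vma)
	{
		struct foo_ctx *ctx = vma->vm_private_data;

		if (refcount_dec_and_test(&ctx->refs))
			kfree(ctx);
	}

	static const struct vm_operations_struct foo_vm_ops = {
		.open	= foo_vma_open,
		.close	= foo_vma_close,
	};
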
diff --git a/fs/aio.c b/fs/aio.c
index 7b976b564cfc..140b42dd11ad 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -351,7 +351,7 @@ static void aio_free_ring(struct kioctx *ctx)
 	}
 }
 
-static int aio_ring_mremap(struct vm_area_struct *vma)
+static int aio_ring_mremap(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct mm_struct *mm = vma->vm_mm;
@@ -392,7 +392,7 @@ static const struct vm_operations_struct aio_ring_vm_ops = {
 #endif
 };
 
-static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
+static int aio_ring_mmap(struct file *file, struct mm_area *vma)
 {
 	vm_flags_set(vma, VM_DONTEXPAND);
 	vma->vm_ops = &aio_ring_vm_ops;
diff --git a/fs/backing-file.c b/fs/backing-file.c
index 763fbe9b72b2..95e6cea5fa7a 100644
--- a/fs/backing-file.c
+++ b/fs/backing-file.c
@@ -323,7 +323,7 @@ ssize_t backing_file_splice_write(struct pipe_inode_info *pipe,
 }
 EXPORT_SYMBOL_GPL(backing_file_splice_write);
 
-int backing_file_mmap(struct file *file, struct vm_area_struct *vma,
+int backing_file_mmap(struct file *file, struct mm_area *vma,
 		      struct backing_file_ctx *ctx)
 {
 	const struct cred *old_cred;
diff --git a/fs/bcachefs/fs.c b/fs/bcachefs/fs.c
index fc834bdf1f52..0cd13a91456c 100644
--- a/fs/bcachefs/fs.c
+++ b/fs/bcachefs/fs.c
@@ -1403,7 +1403,7 @@ static const struct vm_operations_struct bch_vm_ops = {
 	.page_mkwrite   = bch2_page_mkwrite,
 };
 
-static int bch2_mmap(struct file *file, struct vm_area_struct *vma)
+static int bch2_mmap(struct file *file, struct mm_area *vma)
 {
 	file_accessed(file);
 
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 584fa89bc877..b28c8bc74b45 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -173,7 +173,7 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
 	elf_addr_t flags = 0;
 	int ei_index;
 	const struct cred *cred = current_cred();
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * In some cases (e.g. Hyper-Threading), we want to avoid L1
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 262a707d8990..99026a1bf443 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1928,7 +1928,7 @@ static const struct vm_operations_struct btrfs_file_vm_ops = {
 	.page_mkwrite	= btrfs_page_mkwrite,
 };
 
-static int btrfs_file_mmap(struct file	*filp, struct vm_area_struct *vma)
+static int btrfs_file_mmap(struct file	*filp, struct mm_area *vma)
 {
 	struct address_space *mapping = filp->f_mapping;
 
diff --git a/fs/buffer.c b/fs/buffer.c
index c7abb4a029dc..aafb15b65afa 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2585,7 +2585,7 @@ EXPORT_SYMBOL(cont_write_begin);
  * Direct callers of this function should protect against filesystem freezing
  * using sb_start_pagefault() - sb_end_pagefault() functions.
  */
-int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
+int block_page_mkwrite(struct mm_area *vma, struct vm_fault *vmf,
 			 get_block_t get_block)
 {
 	struct folio *folio = page_folio(vmf->page);
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 29be367905a1..b6a99e66b1af 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1940,7 +1940,7 @@ static void ceph_restore_sigs(sigset_t *oldset)
  */
 static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct inode *inode = file_inode(vma->vm_file);
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	struct ceph_client *cl = ceph_inode_to_client(inode);
@@ -2031,7 +2031,7 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
 
 static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct inode *inode = file_inode(vma->vm_file);
 	struct ceph_client *cl = ceph_inode_to_client(inode);
 	struct ceph_inode_info *ci = ceph_inode(inode);
@@ -2319,7 +2319,7 @@ static const struct vm_operations_struct ceph_vmops = {
 	.page_mkwrite	= ceph_page_mkwrite,
 };
 
-int ceph_mmap(struct file *file, struct vm_area_struct *vma)
+int ceph_mmap(struct file *file, struct mm_area *vma)
 {
 	struct address_space *mapping = file->f_mapping;
 
diff --git a/fs/ceph/super.h b/fs/ceph/super.h
index bb0db0cc8003..bdb01ebd811b 100644
--- a/fs/ceph/super.h
+++ b/fs/ceph/super.h
@@ -1286,7 +1286,7 @@ extern void __ceph_touch_fmode(struct ceph_inode_info *ci,
 /* addr.c */
 extern const struct address_space_operations ceph_aops;
 extern const struct netfs_request_ops ceph_netfs_ops;
-extern int ceph_mmap(struct file *file, struct vm_area_struct *vma);
+extern int ceph_mmap(struct file *file, struct mm_area *vma);
 extern int ceph_uninline_data(struct file *file);
 extern int ceph_pool_perm_check(struct inode *inode, int need);
 extern void ceph_pool_perm_destroy(struct ceph_mds_client* mdsc);
diff --git a/fs/coda/file.c b/fs/coda/file.c
index 148856a582a9..28d6240819a0 100644
--- a/fs/coda/file.c
+++ b/fs/coda/file.c
@@ -120,7 +120,7 @@ coda_file_splice_read(struct file *coda_file, loff_t *ppos,
 }
 
 static void
-coda_vm_open(struct vm_area_struct *vma)
+coda_vm_open(struct mm_area *vma)
 {
 	struct coda_vm_ops *cvm_ops =
 		container_of(vma->vm_ops, struct coda_vm_ops, vm_ops);
@@ -132,7 +132,7 @@ coda_vm_open(struct vm_area_struct *vma)
 }
 
 static void
-coda_vm_close(struct vm_area_struct *vma)
+coda_vm_close(struct mm_area *vma)
 {
 	struct coda_vm_ops *cvm_ops =
 		container_of(vma->vm_ops, struct coda_vm_ops, vm_ops);
@@ -148,7 +148,7 @@ coda_vm_close(struct vm_area_struct *vma)
 }
 
 static int
-coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
+coda_file_mmap(struct file *coda_file, struct mm_area *vma)
 {
 	struct inode *coda_inode = file_inode(coda_file);
 	struct coda_file_info *cfi = coda_ftoc(coda_file);
diff --git a/fs/coredump.c b/fs/coredump.c
index c33c177a701b..f9987d48c5a6 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -1082,7 +1082,7 @@ fs_initcall(init_fs_coredump_sysctls);
  * meant. These special mappings include - vDSO, vsyscall, and other
  * architecture specific mappings
  */
-static bool always_dump_vma(struct vm_area_struct *vma)
+static bool always_dump_vma(struct mm_area *vma)
 {
 	/* Any vsyscall mappings? */
 	if (vma == get_gate_vma(vma->vm_mm))
@@ -1110,7 +1110,7 @@ static bool always_dump_vma(struct vm_area_struct *vma)
 /*
  * Decide how much of @vma's contents should be included in a core dump.
  */
-static unsigned long vma_dump_size(struct vm_area_struct *vma,
+static unsigned long vma_dump_size(struct mm_area *vma,
 				   unsigned long mm_flags)
 {
 #define FILTER(type)	(mm_flags & (1UL << MMF_DUMP_##type))
@@ -1193,9 +1193,9 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
  * Helper function for iterating across a vma list.  It ensures that the caller
  * will visit `gate_vma' prior to terminating the search.
  */
-static struct vm_area_struct *coredump_next_vma(struct vma_iterator *vmi,
-				       struct vm_area_struct *vma,
-				       struct vm_area_struct *gate_vma)
+static struct mm_area *coredump_next_vma(struct vma_iterator *vmi,
+				       struct mm_area *vma,
+				       struct mm_area *gate_vma)
 {
 	if (gate_vma && (vma == gate_vma))
 		return NULL;
@@ -1238,7 +1238,7 @@ static int cmp_vma_size(const void *vma_meta_lhs_ptr, const void *vma_meta_rhs_p
  */
 static bool dump_vma_snapshot(struct coredump_params *cprm)
 {
-	struct vm_area_struct *gate_vma, *vma = NULL;
+	struct mm_area *gate_vma, *vma = NULL;
 	struct mm_struct *mm = current->mm;
 	VMA_ITERATOR(vmi, mm, 0);
 	int i = 0;
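
The coredump hunks above show the common iteration pattern: the
VMA_ITERATOR()/for_each_vma() API is untouched, only the element type
changes. A minimal sketch (hypothetical function name):

	static void foo_walk_mm(struct mm_struct *mm)
	{
		struct mm_area *vma;
		VMA_ITERATOR(vmi, mm, 0);

		mmap_read_lock(mm);
		for_each_vma(vmi, vma)
			pr_debug("%lx-%lx\n", vma->vm_start, vma->vm_end);
		mmap_read_unlock(mm);
	}
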
diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index b84d1747a020..9147633db9eb 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -343,7 +343,7 @@ static bool cramfs_last_page_is_shared(struct inode *inode)
 	return memchr_inv(tail_data, 0, PAGE_SIZE - partial) ? true : false;
 }
 
-static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
+static int cramfs_physmem_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 	struct cramfs_sb_info *sbi = CRAMFS_SB(inode->i_sb);
@@ -435,7 +435,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 #else /* CONFIG_MMU */
 
-static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
+static int cramfs_physmem_mmap(struct file *file, struct mm_area *vma)
 {
 	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -ENOSYS;
 }
diff --git a/fs/dax.c b/fs/dax.c
index af5045b0f476..a9c552127d9f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -439,7 +439,7 @@ static void dax_folio_init(void *entry)
 }
 
 static void dax_associate_entry(void *entry, struct address_space *mapping,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				unsigned long address, bool shared)
 {
 	unsigned long size = dax_entry_size(entry), index;
@@ -1038,7 +1038,7 @@ static int copy_cow_page_dax(struct vm_fault *vmf, const struct iomap_iter *iter
  * flushed on write-faults (non-cow), but not read-faults.
  */
 static bool dax_fault_is_synchronous(const struct iomap_iter *iter,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	return (iter->flags & IOMAP_WRITE) && (vma->vm_flags & VM_SYNC) &&
 		(iter->iomap.flags & IOMAP_F_DIRTY);
@@ -1114,7 +1114,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 {
 	unsigned long pfn, index, count, end;
 	long ret = 0;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * A page got tagged dirty in DAX mapping? Something is seriously
@@ -1388,7 +1388,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct inode *inode = mapping->host;
 	pgtable_t pgtable = NULL;
 	struct folio *zero_folio;
diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
index ce0a3c5ed0ca..ed71003a5b20 100644
--- a/fs/ecryptfs/file.c
+++ b/fs/ecryptfs/file.c
@@ -185,7 +185,7 @@ static int read_or_initialize_metadata(struct dentry *dentry)
 	return rc;
 }
 
-static int ecryptfs_mmap(struct file *file, struct vm_area_struct *vma)
+static int ecryptfs_mmap(struct file *file, struct mm_area *vma)
 {
 	struct file *lower_file = ecryptfs_file_to_lower(file);
 	/*
diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index 2409d2ab0c28..05444e3d9326 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -408,7 +408,7 @@ static const struct vm_operations_struct erofs_dax_vm_ops = {
 	.huge_fault	= erofs_dax_huge_fault,
 };
 
-static int erofs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int erofs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!IS_DAX(file_inode(file)))
 		return generic_file_readonly_mmap(file, vma);
diff --git a/fs/exec.c b/fs/exec.c
index f511409b8cd5..c6c2cddb8cc7 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -198,7 +198,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
 		int write)
 {
 	struct page *page;
-	struct vm_area_struct *vma = bprm->vma;
+	struct mm_area *vma = bprm->vma;
 	struct mm_struct *mm = bprm->mm;
 	int ret;
 
@@ -245,7 +245,7 @@ static void flush_arg_page(struct linux_binprm *bprm, unsigned long pos,
 static int __bprm_mm_init(struct linux_binprm *bprm)
 {
 	int err;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	struct mm_struct *mm = bprm->mm;
 
 	bprm->vma = vma = vm_area_alloc(mm);
@@ -363,7 +363,7 @@ static bool valid_arg_len(struct linux_binprm *bprm, long len)
 
 /*
  * Create a new mm_struct and populate it with a temporary stack
- * vm_area_struct.  We don't have enough context at this point to set the stack
+ * mm_area.  We don't have enough context at this point to set the stack
  * flags, permissions, and offset, so we use temporary values.  We'll update
  * them later in setup_arg_pages().
  */
@@ -702,7 +702,7 @@ static int copy_strings_kernel(int argc, const char *const *argv,
 #ifdef CONFIG_MMU
 
 /*
- * Finalizes the stack vm_area_struct. The flags and permissions are updated,
+ * Finalizes the stack mm_area. The flags and permissions are updated,
  * the stack is optionally relocated, and some extra space is added.
  */
 int setup_arg_pages(struct linux_binprm *bprm,
@@ -712,8 +712,8 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	unsigned long ret;
 	unsigned long stack_shift;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = bprm->vma;
-	struct vm_area_struct *prev = NULL;
+	struct mm_area *vma = bprm->vma;
+	struct mm_area *prev = NULL;
 	unsigned long vm_flags;
 	unsigned long stack_base;
 	unsigned long stack_size;
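
As the __bprm_mm_init() context above shows, the helpers keep their vm_area_*
names (vm_area_alloc(), vma_pages(), and so on); only the struct tag moves.
A sketch of that allocation pattern, error paths elided:

	/* Sketch of the __bprm_mm_init() pattern; cleanup elided. */
	struct mm_area *vma = vm_area_alloc(mm);

	if (!vma)
		return -ENOMEM;
	vma->vm_end = STACK_TOP_MAX;
	vma->vm_start = vma->vm_end - PAGE_SIZE;
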
diff --git a/fs/exfat/file.c b/fs/exfat/file.c
index 841a5b18e3df..ae38e3545f0e 100644
--- a/fs/exfat/file.c
+++ b/fs/exfat/file.c
@@ -651,7 +651,7 @@ static ssize_t exfat_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 static vm_fault_t exfat_page_mkwrite(struct vm_fault *vmf)
 {
 	int err;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct file *file = vma->vm_file;
 	struct inode *inode = file_inode(file);
 	struct exfat_inode_info *ei = EXFAT_I(inode);
@@ -683,7 +683,7 @@ static const struct vm_operations_struct exfat_file_vm_ops = {
 	.page_mkwrite	= exfat_page_mkwrite,
 };
 
-static int exfat_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int exfat_file_mmap(struct file *file, struct mm_area *vma)
 {
 	if (unlikely(exfat_forced_shutdown(file_inode(file)->i_sb)))
 		return -EIO;
diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index 10b061ac5bc0..cfa6459d23f8 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -122,7 +122,7 @@ static const struct vm_operations_struct ext2_dax_vm_ops = {
 	.pfn_mkwrite	= ext2_dax_fault,
 };
 
-static int ext2_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int ext2_file_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!IS_DAX(file_inode(file)))
 		return generic_file_mmap(file, vma);
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index beb078ee4811..f2bf09c18e64 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -799,7 +799,7 @@ static const struct vm_operations_struct ext4_file_vm_ops = {
 	.page_mkwrite   = ext4_page_mkwrite,
 };
 
-static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int ext4_file_mmap(struct file *file, struct mm_area *vma)
 {
 	int ret;
 	struct inode *inode = file->f_mapping->host;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 1dc09ed5d403..335fe55c24d2 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -6172,7 +6172,7 @@ static int ext4_bh_unmapped(handle_t *handle, struct inode *inode,
 
 vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio = page_folio(vmf->page);
 	loff_t size;
 	unsigned long len;
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index abbcbb5865a3..1423c6e7e488 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -532,7 +532,7 @@ static loff_t f2fs_llseek(struct file *file, loff_t offset, int whence)
 	return -EINVAL;
 }
 
-static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int f2fs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 
diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index 0502bf3cdf6a..72cb7b6a361c 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -821,7 +821,7 @@ static const struct vm_operations_struct fuse_dax_vm_ops = {
 	.pfn_mkwrite	= fuse_dax_pfn_mkwrite,
 };
 
-int fuse_dax_mmap(struct file *file, struct vm_area_struct *vma)
+int fuse_dax_mmap(struct file *file, struct mm_area *vma)
 {
 	file_accessed(file);
 	vma->vm_ops = &fuse_dax_vm_ops;
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 754378dd9f71..f75907398e60 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -2576,7 +2576,7 @@ static int fuse_launder_folio(struct folio *folio)
  * Write back dirty data/metadata now (there may not be any suitable
  * open files later for data)
  */
-static void fuse_vma_close(struct vm_area_struct *vma)
+static void fuse_vma_close(struct mm_area *vma)
 {
 	int err;
 
@@ -2622,7 +2622,7 @@ static const struct vm_operations_struct fuse_file_vm_ops = {
 	.page_mkwrite	= fuse_page_mkwrite,
 };
 
-static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int fuse_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct fuse_file *ff = file->private_data;
 	struct fuse_conn *fc = ff->fm->fc;
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index d56d4fd956db..d86e9e62dbfc 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -1470,7 +1470,7 @@ void fuse_free_conn(struct fuse_conn *fc);
 
 ssize_t fuse_dax_read_iter(struct kiocb *iocb, struct iov_iter *to);
 ssize_t fuse_dax_write_iter(struct kiocb *iocb, struct iov_iter *from);
-int fuse_dax_mmap(struct file *file, struct vm_area_struct *vma);
+int fuse_dax_mmap(struct file *file, struct mm_area *vma);
 int fuse_dax_break_layouts(struct inode *inode, u64 dmap_start, u64 dmap_end);
 int fuse_dax_conn_alloc(struct fuse_conn *fc, enum fuse_dax_mode mode,
 			struct dax_device *dax_dev);
@@ -1567,7 +1567,7 @@ ssize_t fuse_passthrough_splice_read(struct file *in, loff_t *ppos,
 ssize_t fuse_passthrough_splice_write(struct pipe_inode_info *pipe,
 				      struct file *out, loff_t *ppos,
 				      size_t len, unsigned int flags);
-ssize_t fuse_passthrough_mmap(struct file *file, struct vm_area_struct *vma);
+ssize_t fuse_passthrough_mmap(struct file *file, struct mm_area *vma);
 
 #ifdef CONFIG_SYSCTL
 extern int fuse_sysctl_register(void);
diff --git a/fs/fuse/passthrough.c b/fs/fuse/passthrough.c
index 607ef735ad4a..6245304c35f2 100644
--- a/fs/fuse/passthrough.c
+++ b/fs/fuse/passthrough.c
@@ -129,7 +129,7 @@ ssize_t fuse_passthrough_splice_write(struct pipe_inode_info *pipe,
 	return ret;
 }
 
-ssize_t fuse_passthrough_mmap(struct file *file, struct vm_area_struct *vma)
+ssize_t fuse_passthrough_mmap(struct file *file, struct mm_area *vma)
 {
 	struct fuse_file *ff = file->private_data;
 	struct file *backing_file = fuse_file_passthrough(ff);
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index fd1147aa3891..21c6af00183e 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -588,7 +588,7 @@ static const struct vm_operations_struct gfs2_vm_ops = {
  * Returns: 0
  */
 
-static int gfs2_mmap(struct file *file, struct vm_area_struct *vma)
+static int gfs2_mmap(struct file *file, struct mm_area *vma)
 {
 	struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
 
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index e4de5425838d..33c1e3dd8b90 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -96,7 +96,7 @@ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
 #define PGOFF_LOFFT_MAX \
 	(((1UL << (PAGE_SHIFT + 1)) - 1) <<  (BITS_PER_LONG - (PAGE_SHIFT + 1)))
 
-static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int hugetlbfs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 	loff_t len, vma_len;
@@ -340,7 +340,7 @@ static void hugetlb_delete_from_page_cache(struct folio *folio)
  * mutex for the page in the mapping.  So, we can not race with page being
  * faulted into the vma.
  */
-static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
+static bool hugetlb_vma_maps_pfn(struct mm_area *vma,
 				unsigned long addr, unsigned long pfn)
 {
 	pte_t *ptep, pte;
@@ -365,7 +365,7 @@ static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
  * which overlap the truncated area starting at pgoff,
  * and no vma on a 32-bit arch can span beyond the 4GB.
  */
-static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
+static unsigned long vma_offset_start(struct mm_area *vma, pgoff_t start)
 {
 	unsigned long offset = 0;
 
@@ -375,7 +375,7 @@ static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
 	return vma->vm_start + offset;
 }
 
-static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
+static unsigned long vma_offset_end(struct mm_area *vma, pgoff_t end)
 {
 	unsigned long t_end;
 
@@ -399,7 +399,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
 	struct rb_root_cached *root = &mapping->i_mmap;
 	struct hugetlb_vma_lock *vma_lock;
 	unsigned long pfn = folio_pfn(folio);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long v_start;
 	unsigned long v_end;
 	pgoff_t start, end;
@@ -479,7 +479,7 @@ static void
 hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
 		      zap_flags_t zap_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * end == 0 indicates that the entire range after start should be
@@ -730,7 +730,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
 	struct address_space *mapping = inode->i_mapping;
 	struct hstate *h = hstate_inode(inode);
-	struct vm_area_struct pseudo_vma;
+	struct mm_area pseudo_vma;
 	struct mm_struct *mm = current->mm;
 	loff_t hpage_size = huge_page_size(h);
 	unsigned long hpage_shift = huge_page_shift(h);
diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index 66fe8fe41f06..cd6ff826d3f5 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -349,7 +349,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
 	return len;
 }
 
-static void kernfs_vma_open(struct vm_area_struct *vma)
+static void kernfs_vma_open(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct kernfs_open_file *of = kernfs_of(file);
@@ -408,7 +408,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
 	return ret;
 }
 
-static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
+static int kernfs_vma_access(struct mm_area *vma, unsigned long addr,
 			     void *buf, int len, int write)
 {
 	struct file *file = vma->vm_file;
@@ -436,7 +436,7 @@ static const struct vm_operations_struct kernfs_vm_ops = {
 	.access		= kernfs_vma_access,
 };
 
-static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
+static int kernfs_fop_mmap(struct file *file, struct mm_area *vma)
 {
 	struct kernfs_open_file *of = kernfs_of(file);
 	const struct kernfs_ops *ops;
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 033feeab8c34..62e293a33325 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -207,7 +207,7 @@ nfs_file_splice_read(struct file *in, loff_t *ppos, struct pipe_inode_info *pipe
 EXPORT_SYMBOL_GPL(nfs_file_splice_read);
 
 int
-nfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+nfs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 	int	status;
diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index ec8d32d0e2e9..007e50305767 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -432,7 +432,7 @@ loff_t nfs_file_llseek(struct file *, loff_t, int);
 ssize_t nfs_file_read(struct kiocb *, struct iov_iter *);
 ssize_t nfs_file_splice_read(struct file *in, loff_t *ppos, struct pipe_inode_info *pipe,
 			     size_t len, unsigned int flags);
-int nfs_file_mmap(struct file *, struct vm_area_struct *);
+int nfs_file_mmap(struct file *, struct mm_area *);
 ssize_t nfs_file_write(struct kiocb *, struct iov_iter *);
 int nfs_file_release(struct inode *, struct file *);
 int nfs_lock(struct file *, int, struct file_lock *);
diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
index 0e3fc5ba33c7..3e424224cb56 100644
--- a/fs/nilfs2/file.c
+++ b/fs/nilfs2/file.c
@@ -44,7 +44,7 @@ int nilfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
 
 static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio = page_folio(vmf->page);
 	struct inode *inode = file_inode(vma->vm_file);
 	struct nilfs_transaction_info ti;
@@ -125,7 +125,7 @@ static const struct vm_operations_struct nilfs_file_vm_ops = {
 	.page_mkwrite	= nilfs_page_mkwrite,
 };
 
-static int nilfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int nilfs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	file_accessed(file);
 	vma->vm_ops = &nilfs_file_vm_ops;
diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
index 9b6a3f8d2e7c..72370c69d6dc 100644
--- a/fs/ntfs3/file.c
+++ b/fs/ntfs3/file.c
@@ -347,7 +347,7 @@ static int ntfs_zero_range(struct inode *inode, u64 vbo, u64 vbo_to)
 /*
  * ntfs_file_mmap - file_operations::mmap
  */
-static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int ntfs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 	struct ntfs_inode *ni = ntfs_i(inode);
diff --git a/fs/ocfs2/mmap.c b/fs/ocfs2/mmap.c
index 6a314e9f2b49..9586d4d287e7 100644
--- a/fs/ocfs2/mmap.c
+++ b/fs/ocfs2/mmap.c
@@ -30,7 +30,7 @@
 
 static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	sigset_t oldset;
 	vm_fault_t ret;
 
@@ -159,7 +159,7 @@ static const struct vm_operations_struct ocfs2_file_vm_ops = {
 	.page_mkwrite	= ocfs2_page_mkwrite,
 };
 
-int ocfs2_mmap(struct file *file, struct vm_area_struct *vma)
+int ocfs2_mmap(struct file *file, struct mm_area *vma)
 {
 	int ret = 0, lock_level = 0;
 
diff --git a/fs/ocfs2/mmap.h b/fs/ocfs2/mmap.h
index 1051507cc684..8cf4bc586fb2 100644
--- a/fs/ocfs2/mmap.h
+++ b/fs/ocfs2/mmap.h
@@ -2,6 +2,6 @@
 #ifndef OCFS2_MMAP_H
 #define OCFS2_MMAP_H
 
-int ocfs2_mmap(struct file *file, struct vm_area_struct *vma);
+int ocfs2_mmap(struct file *file, struct mm_area *vma);
 
 #endif  /* OCFS2_MMAP_H */
diff --git a/fs/orangefs/file.c b/fs/orangefs/file.c
index 90c49c0de243..290e33bad497 100644
--- a/fs/orangefs/file.c
+++ b/fs/orangefs/file.c
@@ -398,7 +398,7 @@ static const struct vm_operations_struct orangefs_file_vm_ops = {
 /*
  * Memory map a region of a file.
  */
-static int orangefs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int orangefs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	int ret;
 
diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
index 969b458100fe..400f63fc2408 100644
--- a/fs/overlayfs/file.c
+++ b/fs/overlayfs/file.c
@@ -476,7 +476,7 @@ static int ovl_fsync(struct file *file, loff_t start, loff_t end, int datasync)
 	return ret;
 }
 
-static int ovl_mmap(struct file *file, struct vm_area_struct *vma)
+static int ovl_mmap(struct file *file, struct mm_area *vma)
 {
 	struct ovl_file *of = file->private_data;
 	struct backing_file_ctx ctx = {
diff --git a/fs/proc/base.c b/fs/proc/base.c
index b0d4e1908b22..4f23e14bee67 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -2244,7 +2244,7 @@ static const struct dentry_operations tid_map_files_dentry_operations = {
 static int map_files_get_link(struct dentry *dentry, struct path *path)
 {
 	unsigned long vm_start, vm_end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *task;
 	struct mm_struct *mm;
 	int rc;
@@ -2341,7 +2341,7 @@ static struct dentry *proc_map_files_lookup(struct inode *dir,
 		struct dentry *dentry, unsigned int flags)
 {
 	unsigned long vm_start, vm_end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *task;
 	struct dentry *result;
 	struct mm_struct *mm;
@@ -2395,7 +2395,7 @@ static const struct inode_operations proc_map_files_inode_operations = {
 static int
 proc_map_files_readdir(struct file *file, struct dir_context *ctx)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *task;
 	struct mm_struct *mm;
 	unsigned long nr_files, pos, i;
diff --git a/fs/proc/inode.c b/fs/proc/inode.c
index a3eb3b740f76..d5a6e680a0bd 100644
--- a/fs/proc/inode.c
+++ b/fs/proc/inode.c
@@ -412,7 +412,7 @@ static long proc_reg_compat_ioctl(struct file *file, unsigned int cmd, unsigned
 }
 #endif
 
-static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct vm_area_struct *vma)
+static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct mm_area *vma)
 {
 	__auto_type mmap = pde->proc_ops->proc_mmap;
 	if (mmap)
@@ -420,7 +420,7 @@ static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct vm_are
 	return -EIO;
 }
 
-static int proc_reg_mmap(struct file *file, struct vm_area_struct *vma)
+static int proc_reg_mmap(struct file *file, struct mm_area *vma)
 {
 	struct proc_dir_entry *pde = PDE(file_inode(file));
 	int rv = -EIO;
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 994cde10e3f4..66a47c2a2b98 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -127,10 +127,10 @@ static void release_task_mempolicy(struct proc_maps_private *priv)
 }
 #endif
 
-static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
+static struct mm_area *proc_get_vma(struct proc_maps_private *priv,
 						loff_t *ppos)
 {
-	struct vm_area_struct *vma = vma_next(&priv->iter);
+	struct mm_area *vma = vma_next(&priv->iter);
 
 	if (vma) {
 		*ppos = vma->vm_start;
@@ -240,7 +240,7 @@ static int do_maps_open(struct inode *inode, struct file *file,
 				sizeof(struct proc_maps_private));
 }
 
-static void get_vma_name(struct vm_area_struct *vma,
+static void get_vma_name(struct mm_area *vma,
 			 const struct path **path,
 			 const char **name,
 			 const char **name_fmt)
@@ -322,7 +322,7 @@ static void show_vma_header_prefix(struct seq_file *m,
 }
 
 static void
-show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
+show_map_vma(struct seq_file *m, struct mm_area *vma)
 {
 	const struct path *path;
 	const char *name_fmt, *name;
@@ -394,20 +394,20 @@ static int query_vma_setup(struct mm_struct *mm)
 	return mmap_read_lock_killable(mm);
 }
 
-static void query_vma_teardown(struct mm_struct *mm, struct vm_area_struct *vma)
+static void query_vma_teardown(struct mm_struct *mm, struct mm_area *vma)
 {
 	mmap_read_unlock(mm);
 }
 
-static struct vm_area_struct *query_vma_find_by_addr(struct mm_struct *mm, unsigned long addr)
+static struct mm_area *query_vma_find_by_addr(struct mm_struct *mm, unsigned long addr)
 {
 	return find_vma(mm, addr);
 }
 
-static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
+static struct mm_area *query_matching_vma(struct mm_struct *mm,
 						 unsigned long addr, u32 flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 next_vma:
 	vma = query_vma_find_by_addr(mm, addr);
@@ -454,7 +454,7 @@ static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
 static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
 {
 	struct procmap_query karg;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm;
 	const char *name = NULL;
 	char build_id_buf[BUILD_ID_SIZE_MAX], *name_buf = NULL;
@@ -780,7 +780,7 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
 			  __always_unused int depth, struct mm_walk *walk)
 {
 	struct mem_size_stats *mss = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 
 	mss->swap += shmem_partial_swap_usage(walk->vma->vm_file->f_mapping,
 					      linear_page_index(vma, addr),
@@ -806,7 +806,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 		struct mm_walk *walk)
 {
 	struct mem_size_stats *mss = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	bool locked = !!(vma->vm_flags & VM_LOCKED);
 	struct page *page = NULL;
 	bool present = false, young = false, dirty = false;
@@ -854,7 +854,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 		struct mm_walk *walk)
 {
 	struct mem_size_stats *mss = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	bool locked = !!(vma->vm_flags & VM_LOCKED);
 	struct page *page = NULL;
 	bool present = false;
@@ -894,7 +894,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			   struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	pte_t *pte;
 	spinlock_t *ptl;
 
@@ -918,7 +918,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	return 0;
 }
 
-static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
+static void show_smap_vma_flags(struct seq_file *m, struct mm_area *vma)
 {
 	/*
 	 * Don't forget to update Documentation/ on changes.
@@ -1019,7 +1019,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 				 struct mm_walk *walk)
 {
 	struct mem_size_stats *mss = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	pte_t ptent = huge_ptep_get(walk->mm, addr, pte);
 	struct folio *folio = NULL;
 	bool present = false;
@@ -1067,7 +1067,7 @@ static const struct mm_walk_ops smaps_shmem_walk_ops = {
  *
  * Use vm_start of @vma as the beginning address if @start is 0.
  */
-static void smap_gather_stats(struct vm_area_struct *vma,
+static void smap_gather_stats(struct mm_area *vma,
 		struct mem_size_stats *mss, unsigned long start)
 {
 	const struct mm_walk_ops *ops = &smaps_walk_ops;
@@ -1150,7 +1150,7 @@ static void __show_smap(struct seq_file *m, const struct mem_size_stats *mss,
 
 static int show_smap(struct seq_file *m, void *v)
 {
-	struct vm_area_struct *vma = v;
+	struct mm_area *vma = v;
 	struct mem_size_stats mss = {};
 
 	smap_gather_stats(vma, &mss, 0);
@@ -1180,7 +1180,7 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
 	struct proc_maps_private *priv = m->private;
 	struct mem_size_stats mss = {};
 	struct mm_struct *mm = priv->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long vma_start = 0, last_vma_end = 0;
 	int ret = 0;
 	VMA_ITERATOR(vmi, mm, 0);
@@ -1380,7 +1380,7 @@ struct clear_refs_private {
 
 #ifdef CONFIG_MEM_SOFT_DIRTY
 
-static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
+static inline bool pte_is_pinned(struct mm_area *vma, unsigned long addr, pte_t pte)
 {
 	struct folio *folio;
 
@@ -1396,7 +1396,7 @@ static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr,
 	return folio_maybe_dma_pinned(folio);
 }
 
-static inline void clear_soft_dirty(struct vm_area_struct *vma,
+static inline void clear_soft_dirty(struct mm_area *vma,
 		unsigned long addr, pte_t *pte)
 {
 	/*
@@ -1422,14 +1422,14 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 	}
 }
 #else
-static inline void clear_soft_dirty(struct vm_area_struct *vma,
+static inline void clear_soft_dirty(struct mm_area *vma,
 		unsigned long addr, pte_t *pte)
 {
 }
 #endif
 
 #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
+static inline void clear_soft_dirty_pmd(struct mm_area *vma,
 		unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t old, pmd = *pmdp;
@@ -1452,7 +1452,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
 	}
 }
 #else
-static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
+static inline void clear_soft_dirty_pmd(struct mm_area *vma,
 		unsigned long addr, pmd_t *pmdp)
 {
 }
@@ -1462,7 +1462,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
 {
 	struct clear_refs_private *cp = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	pte_t *pte, ptent;
 	spinlock_t *ptl;
 	struct folio *folio;
@@ -1522,7 +1522,7 @@ static int clear_refs_test_walk(unsigned long start, unsigned long end,
 				struct mm_walk *walk)
 {
 	struct clear_refs_private *cp = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 
 	if (vma->vm_flags & VM_PFNMAP)
 		return 1;
@@ -1552,7 +1552,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	struct task_struct *task;
 	char buffer[PROC_NUMBUF] = {};
 	struct mm_struct *mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	enum clear_refs_types type;
 	int itype;
 	int rv;
@@ -1680,7 +1680,7 @@ static int pagemap_pte_hole(unsigned long start, unsigned long end,
 	int err = 0;
 
 	while (addr < end) {
-		struct vm_area_struct *vma = find_vma(walk->mm, addr);
+		struct mm_area *vma = find_vma(walk->mm, addr);
 		pagemap_entry_t pme = make_pme(0, 0);
 		/* End of address space hole, which we mark as non-present. */
 		unsigned long hole_end;
@@ -1713,7 +1713,7 @@ static int pagemap_pte_hole(unsigned long start, unsigned long end,
 }
 
 static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
-		struct vm_area_struct *vma, unsigned long addr, pte_t pte)
+		struct mm_area *vma, unsigned long addr, pte_t pte)
 {
 	u64 frame = 0, flags = 0;
 	struct page *page = NULL;
@@ -1774,7 +1774,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
 static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			     struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	struct pagemapread *pm = walk->private;
 	spinlock_t *ptl;
 	pte_t *pte, *orig_pte;
@@ -1887,7 +1887,7 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
 				 struct mm_walk *walk)
 {
 	struct pagemapread *pm = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	u64 flags = 0, frame = 0;
 	int err = 0;
 	pte_t pte;
@@ -2099,7 +2099,7 @@ struct pagemap_scan_private {
 };
 
 static unsigned long pagemap_page_category(struct pagemap_scan_private *p,
-					   struct vm_area_struct *vma,
+					   struct mm_area *vma,
 					   unsigned long addr, pte_t pte)
 {
 	unsigned long categories = 0;
@@ -2141,7 +2141,7 @@ static unsigned long pagemap_page_category(struct pagemap_scan_private *p,
 	return categories;
 }
 
-static void make_uffd_wp_pte(struct vm_area_struct *vma,
+static void make_uffd_wp_pte(struct mm_area *vma,
 			     unsigned long addr, pte_t *pte, pte_t ptent)
 {
 	if (pte_present(ptent)) {
@@ -2161,7 +2161,7 @@ static void make_uffd_wp_pte(struct vm_area_struct *vma,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static unsigned long pagemap_thp_category(struct pagemap_scan_private *p,
-					  struct vm_area_struct *vma,
+					  struct mm_area *vma,
 					  unsigned long addr, pmd_t pmd)
 {
 	unsigned long categories = PAGE_IS_HUGE;
@@ -2203,7 +2203,7 @@ static unsigned long pagemap_thp_category(struct pagemap_scan_private *p,
 	return categories;
 }
 
-static void make_uffd_wp_pmd(struct vm_area_struct *vma,
+static void make_uffd_wp_pmd(struct mm_area *vma,
 			     unsigned long addr, pmd_t *pmdp)
 {
 	pmd_t old, pmd = *pmdp;
@@ -2250,7 +2250,7 @@ static unsigned long pagemap_hugetlb_category(pte_t pte)
 	return categories;
 }
 
-static void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
+static void make_uffd_wp_huge_pte(struct mm_area *vma,
 				  unsigned long addr, pte_t *ptep,
 				  pte_t ptent)
 {
@@ -2316,7 +2316,7 @@ static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
 				  struct mm_walk *walk)
 {
 	struct pagemap_scan_private *p = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	unsigned long vma_category = 0;
 	bool wp_allowed = userfaultfd_wp_async(vma) &&
 	    userfaultfd_wp_use_markers(vma);
@@ -2423,7 +2423,7 @@ static int pagemap_scan_thp_entry(pmd_t *pmd, unsigned long start,
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	struct pagemap_scan_private *p = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	unsigned long categories;
 	spinlock_t *ptl;
 	int ret = 0;
@@ -2473,7 +2473,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
 				  unsigned long end, struct mm_walk *walk)
 {
 	struct pagemap_scan_private *p = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	unsigned long addr, flush_end = 0;
 	pte_t *pte, *start_pte;
 	spinlock_t *ptl;
@@ -2573,7 +2573,7 @@ static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
 				      struct mm_walk *walk)
 {
 	struct pagemap_scan_private *p = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	unsigned long categories;
 	spinlock_t *ptl;
 	int ret = 0;
@@ -2632,7 +2632,7 @@ static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
 				 int depth, struct mm_walk *walk)
 {
 	struct pagemap_scan_private *p = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	int ret, err;
 
 	if (!vma || !pagemap_scan_is_interesting_page(p->cur_vma_category, p))
@@ -2905,7 +2905,7 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
 	md->node[folio_nid(folio)] += nr_pages;
 }
 
-static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
+static struct page *can_gather_numa_stats(pte_t pte, struct mm_area *vma,
 		unsigned long addr)
 {
 	struct page *page;
@@ -2930,7 +2930,7 @@ static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static struct page *can_gather_numa_stats_pmd(pmd_t pmd,
-					      struct vm_area_struct *vma,
+					      struct mm_area *vma,
 					      unsigned long addr)
 {
 	struct page *page;
@@ -2958,7 +2958,7 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
 		unsigned long end, struct mm_walk *walk)
 {
 	struct numa_maps *md = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *orig_pte;
 	pte_t *pte;
@@ -3032,7 +3032,7 @@ static int show_numa_map(struct seq_file *m, void *v)
 {
 	struct numa_maps_private *numa_priv = m->private;
 	struct proc_maps_private *proc_priv = &numa_priv->proc_maps;
-	struct vm_area_struct *vma = v;
+	struct mm_area *vma = v;
 	struct numa_maps *md = &numa_priv->md;
 	struct file *file = vma->vm_file;
 	struct mm_struct *mm = vma->vm_mm;
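
The pagewalk callbacks above all follow one pattern: walk->vma becomes a
struct mm_area *, while struct mm_walk and struct mm_walk_ops keep their
names. A sketch with hypothetical foo_* names, mirroring
clear_refs_test_walk() above:

	static int foo_test_walk(unsigned long start, unsigned long end,
				 struct mm_walk *walk)
	{
		struct mm_area *vma = walk->vma;

		/* Returning 1 skips this VMA, as in clear_refs_test_walk(). */
		return (vma->vm_flags & VM_PFNMAP) ? 1 : 0;
	}

	static const struct mm_walk_ops foo_walk_ops = {
		.test_walk = foo_test_walk,
	};
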
diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c
index bce674533000..e45f014b5c81 100644
--- a/fs/proc/task_nommu.c
+++ b/fs/proc/task_nommu.c
@@ -21,7 +21,7 @@
 void task_mem(struct seq_file *m, struct mm_struct *mm)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_region *region;
 	unsigned long bytes = 0, sbytes = 0, slack = 0, size;
 
@@ -81,7 +81,7 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
 unsigned long task_vsize(struct mm_struct *mm)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long vsize = 0;
 
 	mmap_read_lock(mm);
@@ -96,7 +96,7 @@ unsigned long task_statm(struct mm_struct *mm,
 			 unsigned long *data, unsigned long *resident)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_region *region;
 	unsigned long size = kobjsize(mm);
 
@@ -124,7 +124,7 @@ unsigned long task_statm(struct mm_struct *mm,
 /*
  * display a single VMA to a sequenced file
  */
-static int nommu_vma_show(struct seq_file *m, struct vm_area_struct *vma)
+static int nommu_vma_show(struct seq_file *m, struct mm_area *vma)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long ino = 0;
@@ -175,10 +175,10 @@ static int show_map(struct seq_file *m, void *_p)
 	return nommu_vma_show(m, _p);
 }
 
-static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
+static struct mm_area *proc_get_vma(struct proc_maps_private *priv,
 						loff_t *ppos)
 {
-	struct vm_area_struct *vma = vma_next(&priv->iter);
+	struct mm_area *vma = vma_next(&priv->iter);
 
 	if (vma) {
 		*ppos = vma->vm_start;
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 10d01eb09c43..8e84ff70f57e 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -249,7 +249,7 @@ ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
 /*
  * Architectures may override this function to map oldmem
  */
-int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
+int __weak remap_oldmem_pfn_range(struct mm_area *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot)
 {
@@ -295,7 +295,7 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size)
 }
 
 #ifdef CONFIG_MMU
-static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
+static int vmcoredd_mmap_dumps(struct mm_area *vma, unsigned long dst,
 			       u64 start, size_t size)
 {
 	struct vmcoredd_node *dump;
@@ -511,7 +511,7 @@ static const struct vm_operations_struct vmcore_mmap_ops = {
  * remap_oldmem_pfn_checked - do remap_oldmem_pfn_range replacing all pages
  * reported as not being ram with the zero page.
  *
- * @vma: vm_area_struct describing requested mapping
+ * @vma: mm_area describing requested mapping
  * @from: start remapping from
  * @pfn: page frame number to start remapping to
  * @size: remapping size
@@ -519,7 +519,7 @@ static const struct vm_operations_struct vmcore_mmap_ops = {
  *
  * Returns zero on success, -EAGAIN on failure.
  */
-static int remap_oldmem_pfn_checked(struct vm_area_struct *vma,
+static int remap_oldmem_pfn_checked(struct mm_area *vma,
 				    unsigned long from, unsigned long pfn,
 				    unsigned long size, pgprot_t prot)
 {
@@ -569,7 +569,7 @@ static int remap_oldmem_pfn_checked(struct vm_area_struct *vma,
 	return -EAGAIN;
 }
 
-static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma,
+static int vmcore_remap_oldmem_pfn(struct mm_area *vma,
 			    unsigned long from, unsigned long pfn,
 			    unsigned long size, pgprot_t prot)
 {
@@ -588,7 +588,7 @@ static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma,
 	return ret;
 }
 
-static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
+static int mmap_vmcore(struct file *file, struct mm_area *vma)
 {
 	size_t size = vma->vm_end - vma->vm_start;
 	u64 start, end, len, tsz;
@@ -701,7 +701,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
 	return -EAGAIN;
 }
 #else
-static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
+static int mmap_vmcore(struct file *file, struct mm_area *vma)
 {
 	return -ENOSYS;
 }
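
An architecture override of the __weak remap_oldmem_pfn_range() hook above
converts the same way; a sketch mirroring the default implementation shown
earlier in this file:

	int remap_oldmem_pfn_range(struct mm_area *vma, unsigned long from,
				   unsigned long pfn, unsigned long size,
				   pgprot_t prot)
	{
		return remap_pfn_range(vma, from, pfn, size, prot);
	}
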
diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
index 7a6d980e614d..39698a0acbf8 100644
--- a/fs/ramfs/file-nommu.c
+++ b/fs/ramfs/file-nommu.c
@@ -28,7 +28,7 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
 						   unsigned long len,
 						   unsigned long pgoff,
 						   unsigned long flags);
-static int ramfs_nommu_mmap(struct file *file, struct vm_area_struct *vma);
+static int ramfs_nommu_mmap(struct file *file, struct mm_area *vma);
 
 static unsigned ramfs_mmap_capabilities(struct file *file)
 {
@@ -262,7 +262,7 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
 /*
  * set up a mapping for shared memory segments
  */
-static int ramfs_nommu_mmap(struct file *file, struct vm_area_struct *vma)
+static int ramfs_nommu_mmap(struct file *file, struct mm_area *vma)
 {
 	if (!is_nommu_shared_mapping(vma->vm_flags))
 		return -ENOSYS;
diff --git a/fs/romfs/mmap-nommu.c b/fs/romfs/mmap-nommu.c
index 4520ca413867..704bc650e9fd 100644
--- a/fs/romfs/mmap-nommu.c
+++ b/fs/romfs/mmap-nommu.c
@@ -61,7 +61,7 @@ static unsigned long romfs_get_unmapped_area(struct file *file,
  * permit a R/O mapping to be made directly through onto an MTD device if
  * possible
  */
-static int romfs_mmap(struct file *file, struct vm_area_struct *vma)
+static int romfs_mmap(struct file *file, struct mm_area *vma)
 {
 	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -ENOSYS;
 }
diff --git a/fs/smb/client/cifsfs.h b/fs/smb/client/cifsfs.h
index 8dea0cf3a8de..cadb123692c1 100644
--- a/fs/smb/client/cifsfs.h
+++ b/fs/smb/client/cifsfs.h
@@ -103,8 +103,8 @@ extern int cifs_lock(struct file *, int, struct file_lock *);
 extern int cifs_fsync(struct file *, loff_t, loff_t, int);
 extern int cifs_strict_fsync(struct file *, loff_t, loff_t, int);
 extern int cifs_flush(struct file *, fl_owner_t id);
-extern int cifs_file_mmap(struct file *file, struct vm_area_struct *vma);
-extern int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma);
+extern int cifs_file_mmap(struct file *file, struct mm_area *vma);
+extern int cifs_file_strict_mmap(struct file *file, struct mm_area *vma);
 extern const struct file_operations cifs_dir_ops;
 extern int cifs_readdir(struct file *file, struct dir_context *ctx);
 
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 8407fb108664..ab822c809070 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -2964,7 +2964,7 @@ static const struct vm_operations_struct cifs_file_vm_ops = {
 	.page_mkwrite = cifs_page_mkwrite,
 };
 
-int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
+int cifs_file_strict_mmap(struct file *file, struct mm_area *vma)
 {
 	int xid, rc = 0;
 	struct inode *inode = file_inode(file);
@@ -2982,7 +2982,7 @@ int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
 	return rc;
 }
 
-int cifs_file_mmap(struct file *file, struct vm_area_struct *vma)
+int cifs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	int rc, xid;
 
diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
index c3d3b079aedd..ebddf13bd010 100644
--- a/fs/sysfs/file.c
+++ b/fs/sysfs/file.c
@@ -171,7 +171,7 @@ static ssize_t sysfs_kf_bin_write(struct kernfs_open_file *of, char *buf,
 }
 
 static int sysfs_kf_bin_mmap(struct kernfs_open_file *of,
-			     struct vm_area_struct *vma)
+			     struct mm_area *vma)
 {
 	struct bin_attribute *battr = of->kn->priv;
 	struct kobject *kobj = sysfs_file_kobj(of->kn);
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index bf311c38d9a8..0f0256b04a4a 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -1579,7 +1579,7 @@ static const struct vm_operations_struct ubifs_file_vm_ops = {
 	.page_mkwrite = ubifs_vm_page_mkwrite,
 };
 
-static int ubifs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int ubifs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	int err;
 
diff --git a/fs/udf/file.c b/fs/udf/file.c
index 0d76c4f37b3e..6d5fa7de4cb6 100644
--- a/fs/udf/file.c
+++ b/fs/udf/file.c
@@ -36,7 +36,7 @@
 
 static vm_fault_t udf_page_mkwrite(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct inode *inode = file_inode(vma->vm_file);
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio = page_folio(vmf->page);
@@ -189,7 +189,7 @@ static int udf_release_file(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int udf_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int udf_file_mmap(struct file *file, struct mm_area *vma)
 {
 	file_accessed(file);
 	vma->vm_ops = &udf_file_vm_ops;
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index d80f94346199..ade022a5af5f 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -94,7 +94,7 @@ static bool userfaultfd_wp_async_ctx(struct userfaultfd_ctx *ctx)
  * meaningful when userfaultfd_wp()==true on the vma and when it's
  * anonymous.
  */
-bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma)
+bool userfaultfd_wp_unpopulated(struct mm_area *vma)
 {
 	struct userfaultfd_ctx *ctx = vma->vm_userfaultfd_ctx.ctx;
 
@@ -231,7 +231,7 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 					      struct vm_fault *vmf,
 					      unsigned long reason)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;
 
@@ -362,7 +362,7 @@ static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
  */
 vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	struct userfaultfd_ctx *ctx;
 	struct userfaultfd_wait_queue uwq;
@@ -614,7 +614,7 @@ static void userfaultfd_event_complete(struct userfaultfd_ctx *ctx,
 	__remove_wait_queue(&ctx->event_wqh, &ewq->wq);
 }
 
-int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
+int dup_userfaultfd(struct mm_area *vma, struct list_head *fcs)
 {
 	struct userfaultfd_ctx *ctx = NULL, *octx;
 	struct userfaultfd_fork_ctx *fctx;
@@ -719,7 +719,7 @@ void dup_userfaultfd_fail(struct list_head *fcs)
 	}
 }
 
-void mremap_userfaultfd_prep(struct vm_area_struct *vma,
+void mremap_userfaultfd_prep(struct mm_area *vma,
 			     struct vm_userfaultfd_ctx *vm_ctx)
 {
 	struct userfaultfd_ctx *ctx;
@@ -766,7 +766,7 @@ void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *vm_ctx,
 	userfaultfd_event_wait_completion(ctx, &ewq);
 }
 
-bool userfaultfd_remove(struct vm_area_struct *vma,
+bool userfaultfd_remove(struct mm_area *vma,
 			unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -807,7 +807,7 @@ static bool has_unmap_ctx(struct userfaultfd_ctx *ctx, struct list_head *unmaps,
 	return false;
 }
 
-int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
+int userfaultfd_unmap_prep(struct mm_area *vma, unsigned long start,
 			   unsigned long end, struct list_head *unmaps)
 {
 	struct userfaultfd_unmap_ctx *unmap_ctx;
@@ -1239,7 +1239,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 				unsigned long arg)
 {
 	struct mm_struct *mm = ctx->mm;
-	struct vm_area_struct *vma, *cur;
+	struct mm_area *vma, *cur;
 	int ret;
 	struct uffdio_register uffdio_register;
 	struct uffdio_register __user *user_uffdio_register;
@@ -1413,7 +1413,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 				  unsigned long arg)
 {
 	struct mm_struct *mm = ctx->mm;
-	struct vm_area_struct *vma, *prev, *cur;
+	struct mm_area *vma, *prev, *cur;
 	int ret;
 	struct uffdio_range uffdio_unregister;
 	bool found;
@@ -1845,7 +1845,7 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 	return ret;
 }
 
-bool userfaultfd_wp_async(struct vm_area_struct *vma)
+bool userfaultfd_wp_async(struct mm_area *vma)
 {
 	return userfaultfd_wp_async_ctx(vma->vm_userfaultfd_ctx.ctx);
 }
diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c
index b780deb81b02..a59e5521669f 100644
--- a/fs/vboxsf/file.c
+++ b/fs/vboxsf/file.c
@@ -154,7 +154,7 @@ static int vboxsf_file_release(struct inode *inode, struct file *file)
  * Write back dirty pages now, because there may not be any suitable
  * open files later
  */
-static void vboxsf_vma_close(struct vm_area_struct *vma)
+static void vboxsf_vma_close(struct mm_area *vma)
 {
 	filemap_write_and_wait(vma->vm_file->f_mapping);
 }
@@ -165,7 +165,7 @@ static const struct vm_operations_struct vboxsf_file_vm_ops = {
 	.map_pages	= filemap_map_pages,
 };
 
-static int vboxsf_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int vboxsf_file_mmap(struct file *file, struct mm_area *vma)
 {
 	int err;
 
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 84f08c976ac4..afe9512ae66f 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1846,7 +1846,7 @@ static const struct vm_operations_struct xfs_file_vm_ops = {
 STATIC int
 xfs_file_mmap(
 	struct file		*file,
-	struct vm_area_struct	*vma)
+	struct mm_area		*vma)
 {
 	struct inode		*inode = file_inode(file);
 	struct xfs_buftarg	*target = xfs_inode_buftarg(XFS_I(inode));
diff --git a/fs/zonefs/file.c b/fs/zonefs/file.c
index 42e2c0065bb3..09a25e7ae36b 100644
--- a/fs/zonefs/file.c
+++ b/fs/zonefs/file.c
@@ -312,7 +312,7 @@ static const struct vm_operations_struct zonefs_file_vm_ops = {
 	.page_mkwrite	= zonefs_filemap_page_mkwrite,
 };
 
-static int zonefs_file_mmap(struct file *file, struct vm_area_struct *vma)
+static int zonefs_file_mmap(struct file *file, struct mm_area *vma)
 {
 	/*
 	 * Conventional zones accept random writes, so their files can support
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 7ee8a179d103..968dcbb599df 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -5,7 +5,7 @@
 #include <linux/instrumented.h>
 
 struct mm_struct;
-struct vm_area_struct;
+struct mm_area;
 struct page;
 struct address_space;
 
@@ -32,7 +32,7 @@ static inline void flush_cache_dup_mm(struct mm_struct *mm)
 #endif
 
 #ifndef flush_cache_range
-static inline void flush_cache_range(struct vm_area_struct *vma,
+static inline void flush_cache_range(struct mm_area *vma,
 				     unsigned long start,
 				     unsigned long end)
 {
@@ -40,7 +40,7 @@ static inline void flush_cache_range(struct vm_area_struct *vma,
 #endif
 
 #ifndef flush_cache_page
-static inline void flush_cache_page(struct vm_area_struct *vma,
+static inline void flush_cache_page(struct mm_area *vma,
 				    unsigned long vmaddr,
 				    unsigned long pfn)
 {
@@ -78,7 +78,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
 #endif
 
 #ifndef flush_icache_user_page
-static inline void flush_icache_user_page(struct vm_area_struct *vma,
+static inline void flush_icache_user_page(struct mm_area *vma,
 					   struct page *page,
 					   unsigned long addr, int len)
 {
diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 2afc95bf1655..837360772416 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -97,7 +97,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 #endif
 
 #ifndef __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 		unsigned long addr, pte_t *ptep)
 {
 	return ptep_clear_flush(vma, addr, ptep);
@@ -136,7 +136,7 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
 #endif
 
 #ifndef __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
-static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+static inline int huge_ptep_set_access_flags(struct mm_area *vma,
 		unsigned long addr, pte_t *ptep,
 		pte_t pte, int dirty)
 {
diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index 6eea3b3c1e65..58db73bbd76f 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -17,7 +17,7 @@ static inline void arch_exit_mmap(struct mm_struct *mm)
 {
 }
 
-static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
+static inline bool arch_vma_access_permitted(struct mm_area *vma,
 		bool write, bool execute, bool foreign)
 {
 	/* by default, allow everything */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 88a42973fa47..a86739bc57db 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -292,7 +292,7 @@ bool __tlb_remove_folio_pages(struct mmu_gather *tlb, struct page *page,
  * function, except we define it before the 'struct mmu_gather'.
  */
 #define tlb_delay_rmap(tlb) (((tlb)->delayed_rmap = 1), true)
-extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
+extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct mm_area *vma);
 #endif
 
 #endif
@@ -306,7 +306,7 @@ extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
  */
 #ifndef tlb_delay_rmap
 #define tlb_delay_rmap(tlb) (false)
-static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
+static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct mm_area *vma) { }
 #endif
 
 /*
@@ -435,7 +435,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	if (tlb->fullmm || tlb->need_flush_all) {
 		flush_tlb_mm(tlb->mm);
 	} else if (tlb->end) {
-		struct vm_area_struct vma = {
+		struct mm_area vma = {
 			.vm_mm = tlb->mm,
 			.vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
 				    (tlb->vma_huge ? VM_HUGETLB : 0),
@@ -449,7 +449,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 #endif /* CONFIG_MMU_GATHER_NO_RANGE */
 
 static inline void
-tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
+tlb_update_vma_flags(struct mmu_gather *tlb, struct mm_area *vma)
 {
 	/*
 	 * flush_tlb_range() implementations that look at VM_HUGETLB (tile,
@@ -535,7 +535,7 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
  * case where we're doing a full MM flush.  When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_start_vma(struct mmu_gather *tlb, struct mm_area *vma)
 {
 	if (tlb->fullmm)
 		return;
@@ -546,7 +546,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 #endif
 }
 
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+static inline void tlb_end_vma(struct mmu_gather *tlb, struct mm_area *vma)
 {
 	if (tlb->fullmm)
 		return;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 2bf893eabb4b..84a5e980adee 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -186,7 +186,7 @@ struct drm_gem_object_funcs {
 	 * drm_gem_prime_mmap().  When @mmap is present @vm_ops is not
 	 * used, the @mmap callback must set vma->vm_ops instead.
 	 */
-	int (*mmap)(struct drm_gem_object *obj, struct vm_area_struct *vma);
+	int (*mmap)(struct drm_gem_object *obj, struct mm_area *vma);
 
 	/**
 	 * @evict:
@@ -482,11 +482,11 @@ int drm_gem_object_init_with_mnt(struct drm_device *dev,
 void drm_gem_private_object_init(struct drm_device *dev,
 				 struct drm_gem_object *obj, size_t size);
 void drm_gem_private_object_fini(struct drm_gem_object *obj);
-void drm_gem_vm_open(struct vm_area_struct *vma);
-void drm_gem_vm_close(struct vm_area_struct *vma);
+void drm_gem_vm_open(struct mm_area *vma);
+void drm_gem_vm_close(struct mm_area *vma);
 int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
-		     struct vm_area_struct *vma);
-int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
+		     struct mm_area *vma);
+int drm_gem_mmap(struct file *filp, struct mm_area *vma);
 
 /**
  * drm_gem_object_get - acquire a GEM buffer object reference
diff --git a/include/drm/drm_gem_dma_helper.h b/include/drm/drm_gem_dma_helper.h
index f2678e7ecb98..d097e0a46ceb 100644
--- a/include/drm/drm_gem_dma_helper.h
+++ b/include/drm/drm_gem_dma_helper.h
@@ -40,7 +40,7 @@ void drm_gem_dma_print_info(const struct drm_gem_dma_object *dma_obj,
 struct sg_table *drm_gem_dma_get_sg_table(struct drm_gem_dma_object *dma_obj);
 int drm_gem_dma_vmap(struct drm_gem_dma_object *dma_obj,
 		     struct iosys_map *map);
-int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *vma);
+int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct mm_area *vma);
 
 extern const struct vm_operations_struct drm_gem_dma_vm_ops;
 
@@ -126,7 +126,7 @@ static inline int drm_gem_dma_object_vmap(struct drm_gem_object *obj,
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-static inline int drm_gem_dma_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static inline int drm_gem_dma_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);
 
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index cef5a6b5a4d6..3126f47424b4 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -109,7 +109,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 		       struct iosys_map *map);
 void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
 			  struct iosys_map *map);
-int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
+int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct mm_area *vma);
 
 int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem);
@@ -259,7 +259,7 @@ static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 7b53d673ae7e..2147aea16d62 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -21,7 +21,7 @@ int drm_gem_ttm_vmap(struct drm_gem_object *gem,
 void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
 			struct iosys_map *map);
 int drm_gem_ttm_mmap(struct drm_gem_object *gem,
-		     struct vm_area_struct *vma);
+		     struct mm_area *vma);
 
 int drm_gem_ttm_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
 				uint32_t handle, uint64_t *offset);
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 00830b49a3ff..395692607569 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -18,7 +18,7 @@ struct drm_mode_create_dumb;
 struct drm_plane;
 struct drm_plane_state;
 struct filp;
-struct vm_area_struct;
+struct mm_area;
 
 #define DRM_GEM_VRAM_PL_FLAG_SYSTEM	(1 << 0)
 #define DRM_GEM_VRAM_PL_FLAG_VRAM	(1 << 1)
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index fa085c44d4ca..feb9e2202049 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -89,8 +89,8 @@ void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map);
 void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map);
 
-int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
-int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
+int drm_gem_prime_mmap(struct drm_gem_object *obj, struct mm_area *vma);
+int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma);
 
 struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
 				       struct page **pages, unsigned int nr_pages);
diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
index 903cd1030110..cbfc05424ea7 100644
--- a/include/drm/ttm/ttm_bo.h
+++ b/include/drm/ttm/ttm_bo.h
@@ -433,7 +433,7 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
 void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
 int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map);
 void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct iosys_map *map);
-int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo);
+int ttm_bo_mmap_obj(struct mm_area *vma, struct ttm_buffer_object *bo);
 s64 ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
 		   struct ttm_resource_manager *man, gfp_t gfp_flags,
 		   s64 target);
@@ -450,9 +450,9 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
 				    pgprot_t prot,
 				    pgoff_t num_prefault);
 vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf);
-void ttm_bo_vm_open(struct vm_area_struct *vma);
-void ttm_bo_vm_close(struct vm_area_struct *vma);
-int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
+void ttm_bo_vm_open(struct mm_area *vma);
+void ttm_bo_vm_close(struct mm_area *vma);
+int ttm_bo_vm_access(struct mm_area *vma, unsigned long addr,
 		     void *buf, int len, int write);
 vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot);
 
diff --git a/include/linux/backing-file.h b/include/linux/backing-file.h
index 1476a6ed1bfd..ec845c283a65 100644
--- a/include/linux/backing-file.h
+++ b/include/linux/backing-file.h
@@ -38,7 +38,7 @@ ssize_t backing_file_splice_write(struct pipe_inode_info *pipe,
 				  struct file *out, struct kiocb *iocb,
 				  size_t len, unsigned int flags,
 				  struct backing_file_ctx *ctx);
-int backing_file_mmap(struct file *file, struct vm_area_struct *vma,
+int backing_file_mmap(struct file *file, struct mm_area *vma,
 		      struct backing_file_ctx *ctx);
 
 #endif /* _LINUX_BACKING_FILE_H */
diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
index 1625c8529e70..bf4593304fe5 100644
--- a/include/linux/binfmts.h
+++ b/include/linux/binfmts.h
@@ -17,7 +17,7 @@ struct coredump_params;
  */
 struct linux_binprm {
 #ifdef CONFIG_MMU
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long vma_pages;
 	unsigned long argmin; /* rlimit marker for copy_strings() */
 #else
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3f0cc89c0622..1a62e5398dfd 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -138,7 +138,7 @@ struct bpf_map_ops {
 				     u64 *imm, u32 off);
 	int (*map_direct_value_meta)(const struct bpf_map *map,
 				     u64 imm, u32 *off);
-	int (*map_mmap)(struct bpf_map *map, struct vm_area_struct *vma);
+	int (*map_mmap)(struct bpf_map *map, struct mm_area *vma);
 	__poll_t (*map_poll)(struct bpf_map *map, struct file *filp,
 			     struct poll_table_struct *pts);
 	unsigned long (*map_get_unmapped_area)(struct file *filep, unsigned long addr,
diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
index 139bdececdcf..3a040583a4b2 100644
--- a/include/linux/btf_ids.h
+++ b/include/linux/btf_ids.h
@@ -270,7 +270,7 @@ extern u32 btf_sock_ids[];
 #define BTF_TRACING_TYPE_xxx	\
 	BTF_TRACING_TYPE(BTF_TRACING_TYPE_TASK, task_struct)	\
 	BTF_TRACING_TYPE(BTF_TRACING_TYPE_FILE, file)		\
-	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, vm_area_struct)
+	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, mm_area)
 
 enum {
 #define BTF_TRACING_TYPE(name, type) name,
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index f0a4ad7839b6..3b16880622f2 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -271,7 +271,7 @@ int cont_write_begin(struct file *, struct address_space *, loff_t,
 			get_block_t *, loff_t *);
 int generic_cont_expand_simple(struct inode *inode, loff_t size);
 void block_commit_write(struct folio *folio, size_t from, size_t to);
-int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
+int block_page_mkwrite(struct mm_area *vma, struct vm_fault *vmf,
 				get_block_t get_block);
 sector_t generic_block_bmap(struct address_space *, sector_t, get_block_t *);
 int block_truncate_page(struct address_space *, loff_t, get_block_t *);
diff --git a/include/linux/buildid.h b/include/linux/buildid.h
index 014a88c41073..ccb20bbf6a32 100644
--- a/include/linux/buildid.h
+++ b/include/linux/buildid.h
@@ -6,9 +6,9 @@
 
 #define BUILD_ID_SIZE_MAX 20
 
-struct vm_area_struct;
-int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size);
-int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size);
+struct mm_area;
+int build_id_parse(struct mm_area *vma, unsigned char *build_id, __u32 *size);
+int build_id_parse_nofault(struct mm_area *vma, unsigned char *build_id, __u32 *size);
 int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size);
 
 #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) || IS_ENABLED(CONFIG_VMCORE_INFO)
diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
index 55f297b2c23f..81e334b23709 100644
--- a/include/linux/cacheflush.h
+++ b/include/linux/cacheflush.h
@@ -18,7 +18,7 @@ static inline void flush_dcache_folio(struct folio *folio)
 #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
 
 #ifndef flush_icache_pages
-static inline void flush_icache_pages(struct vm_area_struct *vma,
+static inline void flush_icache_pages(struct mm_area *vma,
 				     struct page *page, unsigned int nr)
 {
 }
diff --git a/include/linux/configfs.h b/include/linux/configfs.h
index c771e9d0d0b9..2fc8bc945f7c 100644
--- a/include/linux/configfs.h
+++ b/include/linux/configfs.h
@@ -146,7 +146,7 @@ static struct configfs_attribute _pfx##attr_##_name = {	\
 }
 
 struct file;
-struct vm_area_struct;
+struct mm_area;
 
 struct configfs_bin_attribute {
 	struct configfs_attribute cb_attr;	/* std. attribute */
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 2f2555e6407c..28c31aa4abf3 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -22,7 +22,7 @@ extern ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos);
 extern ssize_t elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos);
 void elfcorehdr_fill_device_ram_ptload_elf64(Elf64_Phdr *phdr,
 		unsigned long long paddr, unsigned long long size);
-extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
+extern int remap_oldmem_pfn_range(struct mm_area *vma,
 				  unsigned long from, unsigned long pfn,
 				  unsigned long size, pgprot_t prot);
 
diff --git a/include/linux/dax.h b/include/linux/dax.h
index dcc9fcdf14e4..92e61f46d8b2 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -65,7 +65,7 @@ size_t dax_recovery_write(struct dax_device *dax_dev, pgoff_t pgoff,
 /*
  * Check if given mapping is supported by the file / underlying device.
  */
-static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
+static inline bool daxdev_mapping_supported(struct mm_area *vma,
 					     struct dax_device *dax_dev)
 {
 	if (!(vma->vm_flags & VM_SYNC))
@@ -110,7 +110,7 @@ static inline void set_dax_nomc(struct dax_device *dax_dev)
 static inline void set_dax_synchronous(struct dax_device *dax_dev)
 {
 }
-static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
+static inline bool daxdev_mapping_supported(struct mm_area *vma,
 				struct dax_device *dax_dev)
 {
 	return !(vma->vm_flags & VM_SYNC);
diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
index 36216d28d8bd..8aa15c4fd02f 100644
--- a/include/linux/dma-buf.h
+++ b/include/linux/dma-buf.h
@@ -281,7 +281,7 @@ struct dma_buf_ops {
 	 *
 	 * 0 on success or a negative error code on failure.
 	 */
-	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
+	int (*mmap)(struct dma_buf *, struct mm_area *vma);
 
 	int (*vmap)(struct dma_buf *dmabuf, struct iosys_map *map);
 	void (*vunmap)(struct dma_buf *dmabuf, struct iosys_map *map);
@@ -630,7 +630,7 @@ void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
 				       struct sg_table *sg_table,
 				       enum dma_data_direction direction);
 
-int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
+int dma_buf_mmap(struct dma_buf *, struct mm_area *,
 		 unsigned long);
 int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
 void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index e172522cd936..c6bdde002279 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -24,7 +24,7 @@ struct dma_map_ops {
 			gfp_t gfp);
 	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
 			dma_addr_t dma_handle, enum dma_data_direction dir);
-	int (*mmap)(struct device *, struct vm_area_struct *,
+	int (*mmap)(struct device *, struct mm_area *,
 			void *, dma_addr_t, size_t, unsigned long attrs);
 
 	int (*get_sgtable)(struct device *dev, struct sg_table *sgt,
@@ -162,7 +162,7 @@ void dma_release_coherent_memory(struct device *dev);
 int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
 		dma_addr_t *dma_handle, void **ret);
 int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
-int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_from_dev_coherent(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, size_t size, int *ret);
 #else
 static inline int dma_declare_coherent_memory(struct device *dev,
@@ -181,7 +181,7 @@ static inline void dma_release_coherent_memory(struct device *dev) { }
 void *dma_alloc_from_global_coherent(struct device *dev, ssize_t size,
 		dma_addr_t *dma_handle);
 int dma_release_from_global_coherent(int order, void *vaddr);
-int dma_mmap_from_global_coherent(struct vm_area_struct *vma, void *cpu_addr,
+int dma_mmap_from_global_coherent(struct mm_area *vma, void *cpu_addr,
 		size_t size, int *ret);
 int dma_init_global_coherent(phys_addr_t phys_addr, size_t size);
 #else
@@ -194,7 +194,7 @@ static inline int dma_release_from_global_coherent(int order, void *vaddr)
 {
 	return 0;
 }
-static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
+static inline int dma_mmap_from_global_coherent(struct mm_area *vma,
 		void *cpu_addr, size_t size, int *ret)
 {
 	return 0;
@@ -204,7 +204,7 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
 int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
-int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+int dma_common_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 struct page *dma_common_alloc_pages(struct device *dev, size_t size,
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index b79925b1c433..06e43bf6536d 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -124,7 +124,7 @@ void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
 int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
-int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_attrs(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 bool dma_can_mmap(struct device *dev);
@@ -143,7 +143,7 @@ void dma_free_noncontiguous(struct device *dev, size_t size,
 void *dma_vmap_noncontiguous(struct device *dev, size_t size,
 		struct sg_table *sgt);
 void dma_vunmap_noncontiguous(struct device *dev, void *vaddr);
-int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
 		size_t size, struct sg_table *sgt);
 #else /* CONFIG_HAS_DMA */
 static inline dma_addr_t dma_map_page_attrs(struct device *dev,
@@ -210,7 +210,7 @@ static inline int dma_get_sgtable_attrs(struct device *dev,
 {
 	return -ENXIO;
 }
-static inline int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+static inline int dma_mmap_attrs(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
@@ -271,7 +271,7 @@ static inline void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
 {
 }
 static inline int dma_mmap_noncontiguous(struct device *dev,
-		struct vm_area_struct *vma, size_t size, struct sg_table *sgt)
+		struct mm_area *vma, size_t size, struct sg_table *sgt)
 {
 	return -EINVAL;
 }
@@ -357,7 +357,7 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
 void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir);
-int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_pages(struct device *dev, struct mm_area *vma,
 		size_t size, struct page *page);
 
 static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
@@ -611,7 +611,7 @@ static inline void dma_free_wc(struct device *dev, size_t size,
 }
 
 static inline int dma_mmap_wc(struct device *dev,
-			      struct vm_area_struct *vma,
+			      struct mm_area *vma,
 			      void *cpu_addr, dma_addr_t dma_addr,
 			      size_t size)
 {
diff --git a/include/linux/fb.h b/include/linux/fb.h
index cd653862ab99..f09a1e5e46a0 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -26,7 +26,7 @@ struct module;
 struct notifier_block;
 struct page;
 struct videomode;
-struct vm_area_struct;
+struct mm_area;
 
 /* Definitions below are used in the parsed monitor specs */
 #define FB_DPMS_ACTIVE_OFF	1
@@ -302,7 +302,7 @@ struct fb_ops {
 			unsigned long arg);
 
 	/* perform fb specific mmap */
-	int (*fb_mmap)(struct fb_info *info, struct vm_area_struct *vma);
+	int (*fb_mmap)(struct fb_info *info, struct mm_area *vma);
 
 	/* get capability given var */
 	void (*fb_get_caps)(struct fb_info *info, struct fb_blit_caps *caps,
@@ -555,7 +555,7 @@ extern ssize_t fb_io_read(struct fb_info *info, char __user *buf,
 			  size_t count, loff_t *ppos);
 extern ssize_t fb_io_write(struct fb_info *info, const char __user *buf,
 			   size_t count, loff_t *ppos);
-int fb_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
+int fb_io_mmap(struct fb_info *info, struct mm_area *vma);
 
 #define __FB_DEFAULT_IOMEM_OPS_RDWR \
 	.fb_read	= fb_io_read, \
@@ -648,7 +648,7 @@ static inline void __fb_pad_aligned_buffer(u8 *dst, u32 d_pitch,
 }
 
 /* fb_defio.c */
-int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
+int fb_deferred_io_mmap(struct fb_info *info, struct mm_area *vma);
 extern int  fb_deferred_io_init(struct fb_info *info);
 extern void fb_deferred_io_open(struct fb_info *info,
 				struct inode *inode,
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 016b0fe1536e..2be4d710cdad 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -65,7 +65,7 @@ struct kobject;
 struct pipe_inode_info;
 struct poll_table_struct;
 struct kstatfs;
-struct vm_area_struct;
+struct mm_area;
 struct vfsmount;
 struct cred;
 struct swap_info_struct;
@@ -2140,7 +2140,7 @@ struct file_operations {
 	__poll_t (*poll) (struct file *, struct poll_table_struct *);
 	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
 	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
-	int (*mmap) (struct file *, struct vm_area_struct *);
+	int (*mmap) (struct file *, struct mm_area *);
 	int (*open) (struct inode *, struct file *);
 	int (*flush) (struct file *, fl_owner_t id);
 	int (*release) (struct inode *, struct file *);
@@ -2238,7 +2238,7 @@ struct inode_operations {
 	struct offset_ctx *(*get_offset_ctx)(struct inode *inode);
 } ____cacheline_aligned;
 
-static inline int call_mmap(struct file *file, struct vm_area_struct *vma)
+static inline int call_mmap(struct file *file, struct mm_area *vma)
 {
 	return file->f_op->mmap(file, vma);
 }
@@ -3341,8 +3341,8 @@ extern void inode_add_lru(struct inode *inode);
 extern int sb_set_blocksize(struct super_block *, int);
 extern int sb_min_blocksize(struct super_block *, int);
 
-extern int generic_file_mmap(struct file *, struct vm_area_struct *);
-extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
+extern int generic_file_mmap(struct file *, struct mm_area *);
+extern int generic_file_readonly_mmap(struct file *, struct mm_area *);
 extern ssize_t generic_write_checks(struct kiocb *, struct iov_iter *);
 int generic_write_checks_count(struct kiocb *iocb, loff_t *count);
 extern int generic_write_check_limits(struct file *file, loff_t pos,
@@ -3666,12 +3666,12 @@ void setattr_copy(struct mnt_idmap *, struct inode *inode,
 
 extern int file_update_time(struct file *file);
 
-static inline bool vma_is_dax(const struct vm_area_struct *vma)
+static inline bool vma_is_dax(const struct mm_area *vma)
 {
 	return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
 }
 
-static inline bool vma_is_fsdax(struct vm_area_struct *vma)
+static inline bool vma_is_fsdax(struct mm_area *vma)
 {
 	struct inode *inode;
 
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index c9fa6309c903..1198056004c8 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -9,7 +9,7 @@
 #include <linux/alloc_tag.h>
 #include <linux/sched.h>
 
-struct vm_area_struct;
+struct mm_area;
 struct mempolicy;
 
 /* Convert GFP flags to their corresponding migrate type */
@@ -318,7 +318,7 @@ struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
 struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
 struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
 		struct mempolicy *mpol, pgoff_t ilx, int nid);
-struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct mm_area *vma,
 		unsigned long addr);
 #else
 static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order)
@@ -346,7 +346,7 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
 #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
 
 static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
-		struct vm_area_struct *vma, unsigned long addr)
+		struct mm_area *vma, unsigned long addr)
 {
 	struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr);
 
@@ -420,7 +420,7 @@ static inline bool gfp_compaction_allowed(gfp_t gfp_mask)
 	return IS_ENABLED(CONFIG_COMPACTION) && (gfp_mask & __GFP_IO);
 }
 
-extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
+extern gfp_t vma_thp_gfp_mask(struct mm_area *vma);
 
 #ifdef CONFIG_CONTIG_ALLOC
 /* The below functions must be run on a range from a single zone. */
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 5c6bea81a90e..76601fc06fab 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -183,7 +183,7 @@ static inline unsigned long nr_free_highpages(void);
 static inline unsigned long totalhigh_pages(void);
 
 #ifndef ARCH_HAS_FLUSH_ANON_PAGE
-static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
+static inline void flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr)
 {
 }
 #endif
@@ -221,7 +221,7 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
  * we are out of memory.
  */
 static inline
-struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
+struct folio *vma_alloc_zeroed_movable_folio(struct mm_area *vma,
 				   unsigned long vaddr)
 {
 	struct folio *folio;
@@ -301,7 +301,7 @@ static inline void zero_user(struct page *page,
 #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
 
 static inline void copy_user_highpage(struct page *to, struct page *from,
-	unsigned long vaddr, struct vm_area_struct *vma)
+	unsigned long vaddr, struct mm_area *vma)
 {
 	char *vfrom, *vto;
 
@@ -339,7 +339,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
  * of bytes not copied if there was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
-					unsigned long vaddr, struct vm_area_struct *vma)
+					unsigned long vaddr, struct mm_area *vma)
 {
 	unsigned long ret;
 	char *vfrom, *vto;
@@ -378,7 +378,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
-					unsigned long vaddr, struct vm_area_struct *vma)
+					unsigned long vaddr, struct mm_area *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e893d546a49f..b8c548e672b0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -10,11 +10,11 @@
 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
-		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
+		  struct mm_area *dst_vma, struct mm_area *src_vma);
 void huge_pmd_set_accessed(struct vm_fault *vmf);
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
-		  struct vm_area_struct *vma);
+		  struct mm_area *vma);
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud);
@@ -25,15 +25,15 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 #endif
 
 vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
-bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
 			   pmd_t *pmd, unsigned long addr, unsigned long next);
-int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
+int zap_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma, pmd_t *pmd,
 		 unsigned long addr);
-int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pud,
+int zap_huge_pud(struct mmu_gather *tlb, struct mm_area *vma, pud_t *pud,
 		 unsigned long addr);
-bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+bool move_huge_pmd(struct mm_area *vma, unsigned long old_addr,
 		   unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
-int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+int change_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
 		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
 		    unsigned long cp_flags);
 
@@ -212,7 +212,7 @@ static inline int next_order(unsigned long *orders, int prev)
  *   - For all vmas, check if the haddr is in an aligned hugepage
  *     area.
  */
-static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
+static inline bool thp_vma_suitable_order(struct mm_area *vma,
 		unsigned long addr, int order)
 {
 	unsigned long hpage_size = PAGE_SIZE << order;
@@ -237,7 +237,7 @@ static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
  * See thp_vma_suitable_order().
  * All orders that pass the checks are returned as a bitfield.
  */
-static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
+static inline unsigned long thp_vma_suitable_orders(struct mm_area *vma,
 		unsigned long addr, unsigned long orders)
 {
 	int order;
@@ -260,7 +260,7 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
 	return orders;
 }
 
-unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
+unsigned long __thp_vma_allowable_orders(struct mm_area *vma,
 					 unsigned long vm_flags,
 					 unsigned long tva_flags,
 					 unsigned long orders);
@@ -281,7 +281,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
  * orders are allowed.
  */
 static inline
-unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
+unsigned long thp_vma_allowable_orders(struct mm_area *vma,
 				       unsigned long vm_flags,
 				       unsigned long tva_flags,
 				       unsigned long orders)
@@ -316,7 +316,7 @@ struct thpsize {
 	(transparent_hugepage_flags &					\
 	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
 
-static inline bool vma_thp_disabled(struct vm_area_struct *vma,
+static inline bool vma_thp_disabled(struct mm_area *vma,
 		unsigned long vm_flags)
 {
 	/*
@@ -394,7 +394,7 @@ static inline int split_huge_page(struct page *page)
 }
 void deferred_split_folio(struct folio *folio, bool partially_mapped);
 
-void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+void __split_huge_pmd(struct mm_area *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio);
 
 #define split_huge_pmd(__vma, __pmd, __address)				\
@@ -407,19 +407,19 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}  while (0)
 
 
-void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
+void split_huge_pmd_address(struct mm_area *vma, unsigned long address,
 		bool freeze, struct folio *folio);
 
-void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+void __split_huge_pud(struct mm_area *vma, pud_t *pud,
 		unsigned long address);
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+int change_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
 		    pud_t *pudp, unsigned long addr, pgprot_t newprot,
 		    unsigned long cp_flags);
 #else
 static inline int
-change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+change_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
 		pud_t *pudp, unsigned long addr, pgprot_t newprot,
 		unsigned long cp_flags) { return 0; }
 #endif
@@ -432,15 +432,15 @@ change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			__split_huge_pud(__vma, __pud, __address);	\
 	}  while (0)
 
-int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
+int hugepage_madvise(struct mm_area *vma, unsigned long *vm_flags,
 		     int advice);
-int madvise_collapse(struct vm_area_struct *vma,
-		     struct vm_area_struct **prev,
+int madvise_collapse(struct mm_area *vma,
+		     struct mm_area **prev,
 		     unsigned long start, unsigned long end);
-void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
-			   unsigned long end, struct vm_area_struct *next);
-spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
-spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma);
+void vma_adjust_trans_huge(struct mm_area *vma, unsigned long start,
+			   unsigned long end, struct mm_area *next);
+spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct mm_area *vma);
+spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct mm_area *vma);
 
 static inline int is_swap_pmd(pmd_t pmd)
 {
@@ -449,7 +449,7 @@ static inline int is_swap_pmd(pmd_t pmd)
 
 /* mmap_lock must be held on entry */
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
 		return __pmd_trans_huge_lock(pmd, vma);
@@ -457,7 +457,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
 		return NULL;
 }
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	if (pud_trans_huge(*pud) || pud_devmap(*pud))
 		return __pud_trans_huge_lock(pud, vma);
@@ -474,7 +474,7 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 	return folio_order(folio) >= HPAGE_PMD_ORDER;
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
+struct page *follow_devmap_pmd(struct mm_area *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
@@ -502,9 +502,9 @@ static inline bool thp_migration_supported(void)
 	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
 }
 
-void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+void split_huge_pmd_locked(struct mm_area *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze, struct folio *folio);
-bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+bool unmap_huge_pmd_locked(struct mm_area *vma, unsigned long addr,
 			   pmd_t *pmdp, struct folio *folio);
 
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -514,19 +514,19 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 	return false;
 }
 
-static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
+static inline bool thp_vma_suitable_order(struct mm_area *vma,
 		unsigned long addr, int order)
 {
 	return false;
 }
 
-static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
+static inline unsigned long thp_vma_suitable_orders(struct mm_area *vma,
 		unsigned long addr, unsigned long orders)
 {
 	return 0;
 }
 
-static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
+static inline unsigned long thp_vma_allowable_orders(struct mm_area *vma,
 					unsigned long vm_flags,
 					unsigned long tva_flags,
 					unsigned long orders)
@@ -577,15 +577,15 @@ static inline void deferred_split_folio(struct folio *folio, bool partially_mapp
 #define split_huge_pmd(__vma, __pmd, __address)	\
 	do { } while (0)
 
-static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+static inline void __split_huge_pmd(struct mm_area *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio) {}
-static inline void split_huge_pmd_address(struct vm_area_struct *vma,
+static inline void split_huge_pmd_address(struct mm_area *vma,
 		unsigned long address, bool freeze, struct folio *folio) {}
-static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
+static inline void split_huge_pmd_locked(struct mm_area *vma,
 					 unsigned long address, pmd_t *pmd,
 					 bool freeze, struct folio *folio) {}
 
-static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
+static inline bool unmap_huge_pmd_locked(struct mm_area *vma,
 					 unsigned long addr, pmd_t *pmdp,
 					 struct folio *folio)
 {
@@ -595,23 +595,23 @@ static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
 
-static inline int hugepage_madvise(struct vm_area_struct *vma,
+static inline int hugepage_madvise(struct mm_area *vma,
 				   unsigned long *vm_flags, int advice)
 {
 	return -EINVAL;
 }
 
-static inline int madvise_collapse(struct vm_area_struct *vma,
-				   struct vm_area_struct **prev,
+static inline int madvise_collapse(struct mm_area *vma,
+				   struct mm_area **prev,
 				   unsigned long start, unsigned long end)
 {
 	return -EINVAL;
 }
 
-static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
+static inline void vma_adjust_trans_huge(struct mm_area *vma,
 					 unsigned long start,
 					 unsigned long end,
-					 struct vm_area_struct *next)
+					 struct mm_area *next)
 {
 }
 static inline int is_swap_pmd(pmd_t pmd)
@@ -619,12 +619,12 @@ static inline int is_swap_pmd(pmd_t pmd)
 	return 0;
 }
 static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	return NULL;
 }
 static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	return NULL;
 }
@@ -649,7 +649,7 @@ static inline void mm_put_huge_zero_folio(struct mm_struct *mm)
 	return;
 }
 
-static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
+static inline struct page *follow_devmap_pmd(struct mm_area *vma,
 	unsigned long addr, pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
 {
 	return NULL;
@@ -670,13 +670,13 @@ static inline int next_order(unsigned long *orders, int prev)
 	return 0;
 }
 
-static inline void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+static inline void __split_huge_pud(struct mm_area *vma, pud_t *pud,
 				    unsigned long address)
 {
 }
 
 static inline int change_huge_pud(struct mmu_gather *tlb,
-				  struct vm_area_struct *vma, pud_t *pudp,
+				  struct mm_area *vma, pud_t *pudp,
 				  unsigned long addr, pgprot_t newprot,
 				  unsigned long cp_flags)
 {
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8f3ac832ee7f..96d446761d94 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -104,7 +104,7 @@ struct file_region {
 struct hugetlb_vma_lock {
 	struct kref refs;
 	struct rw_semaphore rw_sema;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 };
 
 extern struct resv_map *resv_map_alloc(void);
@@ -119,37 +119,37 @@ struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
 						long min_hpages);
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
-void hugetlb_dup_vma_private(struct vm_area_struct *vma);
-void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
-int move_hugetlb_page_tables(struct vm_area_struct *vma,
-			     struct vm_area_struct *new_vma,
+void hugetlb_dup_vma_private(struct mm_area *vma);
+void clear_vma_resv_huge_pages(struct mm_area *vma);
+int move_hugetlb_page_tables(struct mm_area *vma,
+			     struct mm_area *new_vma,
 			     unsigned long old_addr, unsigned long new_addr,
 			     unsigned long len);
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
-			    struct vm_area_struct *, struct vm_area_struct *);
-void unmap_hugepage_range(struct vm_area_struct *,
+			    struct mm_area *, struct mm_area *);
+void unmap_hugepage_range(struct mm_area *,
 			  unsigned long, unsigned long, struct page *,
 			  zap_flags_t);
 void __unmap_hugepage_range(struct mmu_gather *tlb,
-			  struct vm_area_struct *vma,
+			  struct mm_area *vma,
 			  unsigned long start, unsigned long end,
 			  struct page *ref_page, zap_flags_t zap_flags);
 void hugetlb_report_meminfo(struct seq_file *);
 int hugetlb_report_node_meminfo(char *buf, int len, int nid);
 void hugetlb_show_meminfo_node(int nid);
 unsigned long hugetlb_total_pages(void);
-vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+vm_fault_t hugetlb_fault(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long address, unsigned int flags);
 #ifdef CONFIG_USERFAULTFD
 int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
-			     struct vm_area_struct *dst_vma,
+			     struct mm_area *dst_vma,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
 			     uffd_flags_t flags,
 			     struct folio **foliop);
 #endif /* CONFIG_USERFAULTFD */
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
-						struct vm_area_struct *vma,
+						struct mm_area *vma,
 						vm_flags_t vm_flags);
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 						long freed);
@@ -163,10 +163,10 @@ void hugetlb_fix_reserve_counts(struct inode *inode);
 extern struct mutex *hugetlb_fault_mutex_table;
 u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 
-pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pmd_share(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, pud_t *pud);
 bool hugetlbfs_pagecache_present(struct hstate *h,
-				 struct vm_area_struct *vma,
+				 struct mm_area *vma,
 				 unsigned long address);
 
 struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
@@ -196,7 +196,7 @@ static inline pte_t *pte_alloc_huge(struct mm_struct *mm, pmd_t *pmd,
 }
 #endif
 
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long addr, unsigned long sz);
 /*
  * huge_pte_offset(): Walk the hugetlb pgtable until the last level PTE.
@@ -238,51 +238,51 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 pte_t *huge_pte_offset(struct mm_struct *mm,
 		       unsigned long addr, unsigned long sz);
 unsigned long hugetlb_mask_last_page(struct hstate *h);
-int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+int huge_pmd_unshare(struct mm_struct *mm, struct mm_area *vma,
 				unsigned long addr, pte_t *ptep);
-void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+void adjust_range_if_pmd_sharing_possible(struct mm_area *vma,
 				unsigned long *start, unsigned long *end);
 
-extern void __hugetlb_zap_begin(struct vm_area_struct *vma,
+extern void __hugetlb_zap_begin(struct mm_area *vma,
 				unsigned long *begin, unsigned long *end);
-extern void __hugetlb_zap_end(struct vm_area_struct *vma,
+extern void __hugetlb_zap_end(struct mm_area *vma,
 			      struct zap_details *details);
 
-static inline void hugetlb_zap_begin(struct vm_area_struct *vma,
+static inline void hugetlb_zap_begin(struct mm_area *vma,
 				     unsigned long *start, unsigned long *end)
 {
 	if (is_vm_hugetlb_page(vma))
 		__hugetlb_zap_begin(vma, start, end);
 }
 
-static inline void hugetlb_zap_end(struct vm_area_struct *vma,
+static inline void hugetlb_zap_end(struct mm_area *vma,
 				   struct zap_details *details)
 {
 	if (is_vm_hugetlb_page(vma))
 		__hugetlb_zap_end(vma, details);
 }
 
-void hugetlb_vma_lock_read(struct vm_area_struct *vma);
-void hugetlb_vma_unlock_read(struct vm_area_struct *vma);
-void hugetlb_vma_lock_write(struct vm_area_struct *vma);
-void hugetlb_vma_unlock_write(struct vm_area_struct *vma);
-int hugetlb_vma_trylock_write(struct vm_area_struct *vma);
-void hugetlb_vma_assert_locked(struct vm_area_struct *vma);
+void hugetlb_vma_lock_read(struct mm_area *vma);
+void hugetlb_vma_unlock_read(struct mm_area *vma);
+void hugetlb_vma_lock_write(struct mm_area *vma);
+void hugetlb_vma_unlock_write(struct mm_area *vma);
+int hugetlb_vma_trylock_write(struct mm_area *vma);
+void hugetlb_vma_assert_locked(struct mm_area *vma);
 void hugetlb_vma_lock_release(struct kref *kref);
-long hugetlb_change_protection(struct vm_area_struct *vma,
+long hugetlb_change_protection(struct mm_area *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot,
 		unsigned long cp_flags);
 bool is_hugetlb_entry_migration(pte_t pte);
 bool is_hugetlb_entry_hwpoisoned(pte_t pte);
-void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
+void hugetlb_unshare_all_pmds(struct mm_area *vma);
 
 #else /* !CONFIG_HUGETLB_PAGE */
 
-static inline void hugetlb_dup_vma_private(struct vm_area_struct *vma)
+static inline void hugetlb_dup_vma_private(struct mm_area *vma)
 {
 }
 
-static inline void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
+static inline void clear_vma_resv_huge_pages(struct mm_area *vma)
 {
 }
 
@@ -298,41 +298,41 @@ static inline struct address_space *hugetlb_folio_mapping_lock_write(
 }
 
 static inline int huge_pmd_unshare(struct mm_struct *mm,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					unsigned long addr, pte_t *ptep)
 {
 	return 0;
 }
 
 static inline void adjust_range_if_pmd_sharing_possible(
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				unsigned long *start, unsigned long *end)
 {
 }
 
 static inline void hugetlb_zap_begin(
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				unsigned long *start, unsigned long *end)
 {
 }
 
 static inline void hugetlb_zap_end(
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				struct zap_details *details)
 {
 }
 
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 					  struct mm_struct *src,
-					  struct vm_area_struct *dst_vma,
-					  struct vm_area_struct *src_vma)
+					  struct mm_area *dst_vma,
+					  struct mm_area *src_vma)
 {
 	BUG();
 	return 0;
 }
 
-static inline int move_hugetlb_page_tables(struct vm_area_struct *vma,
-					   struct vm_area_struct *new_vma,
+static inline int move_hugetlb_page_tables(struct mm_area *vma,
+					   struct mm_area *new_vma,
 					   unsigned long old_addr,
 					   unsigned long new_addr,
 					   unsigned long len)
@@ -360,28 +360,28 @@ static inline int prepare_hugepage_range(struct file *file,
 	return -EINVAL;
 }
 
-static inline void hugetlb_vma_lock_read(struct vm_area_struct *vma)
+static inline void hugetlb_vma_lock_read(struct mm_area *vma)
 {
 }
 
-static inline void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
+static inline void hugetlb_vma_unlock_read(struct mm_area *vma)
 {
 }
 
-static inline void hugetlb_vma_lock_write(struct vm_area_struct *vma)
+static inline void hugetlb_vma_lock_write(struct mm_area *vma)
 {
 }
 
-static inline void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
+static inline void hugetlb_vma_unlock_write(struct mm_area *vma)
 {
 }
 
-static inline int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
+static inline int hugetlb_vma_trylock_write(struct mm_area *vma)
 {
 	return 1;
 }
 
-static inline void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
+static inline void hugetlb_vma_assert_locked(struct mm_area *vma)
 {
 }
 
@@ -400,7 +400,7 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 
 #ifdef CONFIG_USERFAULTFD
 static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
-					   struct vm_area_struct *dst_vma,
+					   struct mm_area *dst_vma,
 					   unsigned long dst_addr,
 					   unsigned long src_addr,
 					   uffd_flags_t flags,
@@ -443,7 +443,7 @@ static inline void move_hugetlb_state(struct folio *old_folio,
 }
 
 static inline long hugetlb_change_protection(
-			struct vm_area_struct *vma, unsigned long address,
+			struct mm_area *vma, unsigned long address,
 			unsigned long end, pgprot_t newprot,
 			unsigned long cp_flags)
 {
@@ -451,7 +451,7 @@ static inline long hugetlb_change_protection(
 }
 
 static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
-			struct vm_area_struct *vma, unsigned long start,
+			struct mm_area *vma, unsigned long start,
 			unsigned long end, struct page *ref_page,
 			zap_flags_t zap_flags)
 {
@@ -459,14 +459,14 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
 }
 
 static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
-			struct vm_area_struct *vma, unsigned long address,
+			struct mm_area *vma, unsigned long address,
 			unsigned int flags)
 {
 	BUG();
 	return 0;
 }
 
-static inline void hugetlb_unshare_all_pmds(struct vm_area_struct *vma) { }
+static inline void hugetlb_unshare_all_pmds(struct mm_area *vma) { }
 
 #endif /* !CONFIG_HUGETLB_PAGE */
 
@@ -698,7 +698,7 @@ bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m);
 int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
 int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
 void wait_for_freed_hugetlb_folios(void);
-struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+struct folio *alloc_hugetlb_folio(struct mm_area *vma,
 				unsigned long addr, bool cow_from_owner);
 struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
 				nodemask_t *nmask, gfp_t gfp_mask,
@@ -708,7 +708,7 @@ struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
 
 int hugetlb_add_to_page_cache(struct folio *folio, struct address_space *mapping,
 			pgoff_t idx);
-void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
+void restore_reserve_on_error(struct hstate *h, struct mm_area *vma,
 				unsigned long address, struct folio *folio);
 
 /* arch callback */
@@ -756,7 +756,7 @@ static inline struct hstate *hstate_sizelog(int page_size_log)
 	return NULL;
 }
 
-static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
+static inline struct hstate *hstate_vma(struct mm_area *vma)
 {
 	return hstate_file(vma->vm_file);
 }
@@ -766,9 +766,9 @@ static inline unsigned long huge_page_size(const struct hstate *h)
 	return (unsigned long)PAGE_SIZE << h->order;
 }
 
-extern unsigned long vma_kernel_pagesize(struct vm_area_struct *vma);
+extern unsigned long vma_kernel_pagesize(struct mm_area *vma);
 
-extern unsigned long vma_mmu_pagesize(struct vm_area_struct *vma);
+extern unsigned long vma_mmu_pagesize(struct mm_area *vma);
 
 static inline unsigned long huge_page_mask(struct hstate *h)
 {
@@ -1028,7 +1028,7 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm)
 
 #ifndef huge_ptep_modify_prot_start
 #define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
-static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
 						unsigned long addr, pte_t *ptep)
 {
 	unsigned long psize = huge_page_size(hstate_vma(vma));
@@ -1039,7 +1039,7 @@ static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
 
 #ifndef huge_ptep_modify_prot_commit
 #define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
-static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+static inline void huge_ptep_modify_prot_commit(struct mm_area *vma,
 						unsigned long addr, pte_t *ptep,
 						pte_t old_pte, pte_t pte)
 {
@@ -1099,7 +1099,7 @@ static inline void wait_for_freed_hugetlb_folios(void)
 {
 }
 
-static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+static inline struct folio *alloc_hugetlb_folio(struct mm_area *vma,
 					   unsigned long addr,
 					   bool cow_from_owner)
 {
@@ -1136,7 +1136,7 @@ static inline struct hstate *hstate_sizelog(int page_size_log)
 	return NULL;
 }
 
-static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
+static inline struct hstate *hstate_vma(struct mm_area *vma)
 {
 	return NULL;
 }
@@ -1161,12 +1161,12 @@ static inline unsigned long huge_page_mask(struct hstate *h)
 	return PAGE_MASK;
 }
 
-static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
+static inline unsigned long vma_kernel_pagesize(struct mm_area *vma)
 {
 	return PAGE_SIZE;
 }
 
-static inline unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+static inline unsigned long vma_mmu_pagesize(struct mm_area *vma)
 {
 	return PAGE_SIZE;
 }
@@ -1255,7 +1255,7 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm)
 {
 }
 
-static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 #ifdef CONFIG_MMU
@@ -1279,7 +1279,7 @@ static inline void hugetlb_unregister_node(struct node *node)
 }
 
 static inline bool hugetlbfs_pagecache_present(
-    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+    struct hstate *h, struct mm_area *vma, unsigned long address)
 {
 	return false;
 }
@@ -1324,7 +1324,7 @@ static inline bool hugetlb_pmd_shared(pte_t *pte)
 }
 #endif
 
-bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr);
+bool want_pmd_share(struct mm_area *vma, unsigned long addr);
 
 #ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
 /*
@@ -1334,19 +1334,19 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr);
 #define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
 #endif
 
-static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
+static inline bool __vma_shareable_lock(struct mm_area *vma)
 {
 	return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
 }
 
-bool __vma_private_lock(struct vm_area_struct *vma);
+bool __vma_private_lock(struct mm_area *vma);
 
 /*
  * Safe version of huge_pte_offset() to check the locks.  See comments
  * above huge_pte_offset().
  */
 static inline pte_t *
-hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
+hugetlb_walk(struct mm_area *vma, unsigned long addr, unsigned long sz)
 {
 #if defined(CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING) && defined(CONFIG_LOCKDEP)
 	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
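
For illustration only, not part of the patch: a hypothetical caller of
hugetlb_walk() under the renamed type, using only functions declared in the
hunk above ("example_hugetlb_lookup" is a made-up name):

	static pte_t *example_hugetlb_lookup(struct mm_area *vma,
					     unsigned long addr)
	{
		struct hstate *h = hstate_vma(vma);
		pte_t *ptep;

		/* hugetlb_walk() requires the hugetlb VMA lock (or i_mmap lock) */
		hugetlb_vma_lock_read(vma);
		ptep = hugetlb_walk(vma, addr & huge_page_mask(h),
				    huge_page_size(h));
		/* a real caller would consume *ptep before unlocking */
		hugetlb_vma_unlock_read(vma);

		return ptep;
	}
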
diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
index 0660a03d37d9..d3d90fb50ebf 100644
--- a/include/linux/hugetlb_inline.h
+++ b/include/linux/hugetlb_inline.h
@@ -6,14 +6,14 @@
 
 #include <linux/mm.h>
 
-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
+static inline bool is_vm_hugetlb_page(struct mm_area *vma)
 {
 	return !!(vma->vm_flags & VM_HUGETLB);
 }
 
 #else
 
-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
+static inline bool is_vm_hugetlb_page(struct mm_area *vma)
 {
 	return false;
 }
diff --git a/include/linux/io-mapping.h b/include/linux/io-mapping.h
index 7376c1df9c90..04d6dfd172da 100644
--- a/include/linux/io-mapping.h
+++ b/include/linux/io-mapping.h
@@ -225,7 +225,7 @@ io_mapping_free(struct io_mapping *iomap)
 	kfree(iomap);
 }
 
-int io_mapping_map_user(struct io_mapping *iomap, struct vm_area_struct *vma,
+int io_mapping_map_user(struct io_mapping *iomap, struct mm_area *vma,
 		unsigned long addr, unsigned long pfn, unsigned long size);
 
 #endif /* _LINUX_IO_MAPPING_H */
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 02fe001feebb..2186061ce745 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -19,7 +19,7 @@ struct iomap_writepage_ctx;
 struct iov_iter;
 struct kiocb;
 struct page;
-struct vm_area_struct;
+struct mm_area;
 struct vm_fault;
 
 /*
diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
index 508beaa44c39..ff772553d76b 100644
--- a/include/linux/iommu-dma.h
+++ b/include/linux/iommu-dma.h
@@ -32,7 +32,7 @@ void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
 void *iommu_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
 		gfp_t gfp, unsigned long attrs);
-int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+int iommu_dma_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 int iommu_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
@@ -55,7 +55,7 @@ void *iommu_dma_vmap_noncontiguous(struct device *dev, size_t size,
 		struct sg_table *sgt);
 #define iommu_dma_vunmap_noncontiguous(dev, vaddr) \
 	vunmap(vaddr);
-int iommu_dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+int iommu_dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
 		size_t size, struct sg_table *sgt);
 void iommu_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir);
diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
index b5a5f32fdfd1..087c03af27b8 100644
--- a/include/linux/kernfs.h
+++ b/include/linux/kernfs.h
@@ -24,7 +24,7 @@ struct file;
 struct dentry;
 struct iattr;
 struct seq_file;
-struct vm_area_struct;
+struct mm_area;
 struct vm_operations_struct;
 struct super_block;
 struct file_system_type;
@@ -322,7 +322,7 @@ struct kernfs_ops {
 	__poll_t (*poll)(struct kernfs_open_file *of,
 			 struct poll_table_struct *pt);
 
-	int (*mmap)(struct kernfs_open_file *of, struct vm_area_struct *vma);
+	int (*mmap)(struct kernfs_open_file *of, struct mm_area *vma);
 	loff_t (*llseek)(struct kernfs_open_file *of, loff_t offset, int whence);
 };
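
For illustration only, not part of the patch: what a kernfs mmap hook looks
like after the rename. "example_kernfs_mmap" and "example_pfn" are made-up
placeholders; a real handler would derive the pfn from the file's backing
memory:

	static int example_kernfs_mmap(struct kernfs_open_file *of,
				       struct mm_area *vma)
	{
		unsigned long example_pfn = 0;	/* placeholder pfn */
		unsigned long size = vma->vm_end - vma->vm_start;

		return remap_pfn_range(vma, vma->vm_start, example_pfn,
				       size, vma->vm_page_prot);
	}
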
 
diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 1f46046080f5..df545b9908b0 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -11,7 +11,7 @@ extern void khugepaged_destroy(void);
 extern int start_stop_khugepaged(void);
 extern void __khugepaged_enter(struct mm_struct *mm);
 extern void __khugepaged_exit(struct mm_struct *mm);
-extern void khugepaged_enter_vma(struct vm_area_struct *vma,
+extern void khugepaged_enter_vma(struct mm_area *vma,
 				 unsigned long vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 extern bool current_is_khugepaged(void);
@@ -44,7 +44,7 @@ static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm
 static inline void khugepaged_exit(struct mm_struct *mm)
 {
 }
-static inline void khugepaged_enter_vma(struct vm_area_struct *vma,
+static inline void khugepaged_enter_vma(struct mm_area *vma,
 					unsigned long vm_flags)
 {
 }
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d73095b5cd96..b215a192a192 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -15,10 +15,10 @@
 #include <linux/sched.h>
 
 #ifdef CONFIG_KSM
-int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+int ksm_madvise(struct mm_area *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);
 
-void ksm_add_vma(struct vm_area_struct *vma);
+void ksm_add_vma(struct mm_area *vma);
 int ksm_enable_merge_any(struct mm_struct *mm);
 int ksm_disable_merge_any(struct mm_struct *mm);
 int ksm_disable(struct mm_struct *mm);
@@ -86,7 +86,7 @@ static inline void ksm_exit(struct mm_struct *mm)
  * but what if the vma was unmerged while the page was swapped out?
  */
 struct folio *ksm_might_need_to_copy(struct folio *folio,
-			struct vm_area_struct *vma, unsigned long addr);
+			struct mm_area *vma, unsigned long addr);
 
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
@@ -97,7 +97,7 @@ bool ksm_process_mergeable(struct mm_struct *mm);
 
 #else  /* !CONFIG_KSM */
 
-static inline void ksm_add_vma(struct vm_area_struct *vma)
+static inline void ksm_add_vma(struct mm_area *vma)
 {
 }
 
@@ -130,14 +130,14 @@ static inline void collect_procs_ksm(const struct folio *folio,
 }
 
 #ifdef CONFIG_MMU
-static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+static inline int ksm_madvise(struct mm_area *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags)
 {
 	return 0;
 }
 
 static inline struct folio *ksm_might_need_to_copy(struct folio *folio,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	return folio;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 5438a1b446a6..09b7d56cacdb 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2340,7 +2340,7 @@ struct kvm_device_ops {
 	int (*has_attr)(struct kvm_device *dev, struct kvm_device_attr *attr);
 	long (*ioctl)(struct kvm_device *dev, unsigned int ioctl,
 		      unsigned long arg);
-	int (*mmap)(struct kvm_device *dev, struct vm_area_struct *vma);
+	int (*mmap)(struct kvm_device *dev, struct mm_area *vma);
 };
 
 struct kvm_device *kvm_device_from_filp(struct file *filp);
diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
index bf3bbac4e02a..0401c8ceeaa0 100644
--- a/include/linux/lsm_hook_defs.h
+++ b/include/linux/lsm_hook_defs.h
@@ -196,7 +196,7 @@ LSM_HOOK(int, 0, file_ioctl_compat, struct file *file, unsigned int cmd,
 LSM_HOOK(int, 0, mmap_addr, unsigned long addr)
 LSM_HOOK(int, 0, mmap_file, struct file *file, unsigned long reqprot,
 	 unsigned long prot, unsigned long flags)
-LSM_HOOK(int, 0, file_mprotect, struct vm_area_struct *vma,
+LSM_HOOK(int, 0, file_mprotect, struct mm_area *vma,
 	 unsigned long reqprot, unsigned long prot)
 LSM_HOOK(int, 0, file_lock, struct file *file, unsigned int cmd)
 LSM_HOOK(int, 0, file_fcntl, struct file *file, unsigned int cmd,
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index ce9885e0178a..8bf1d4d50ce8 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -118,27 +118,27 @@ struct sp_node {
 	struct mempolicy *policy;
 };
 
-int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
+int vma_dup_policy(struct mm_area *src, struct mm_area *dst);
 void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
 int mpol_set_shared_policy(struct shared_policy *sp,
-			   struct vm_area_struct *vma, struct mempolicy *mpol);
+			   struct mm_area *vma, struct mempolicy *mpol);
 void mpol_free_shared_policy(struct shared_policy *sp);
 struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
 					    pgoff_t idx);
 
 struct mempolicy *get_task_policy(struct task_struct *p);
-struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
+struct mempolicy *__get_vma_policy(struct mm_area *vma,
 		unsigned long addr, pgoff_t *ilx);
-struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+struct mempolicy *get_vma_policy(struct mm_area *vma,
 		unsigned long addr, int order, pgoff_t *ilx);
-bool vma_policy_mof(struct vm_area_struct *vma);
+bool vma_policy_mof(struct mm_area *vma);
 
 extern void numa_default_policy(void);
 extern void numa_policy_init(void);
 extern void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new);
 extern void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);
 
-extern int huge_node(struct vm_area_struct *vma,
+extern int huge_node(struct mm_area *vma,
 				unsigned long addr, gfp_t gfp_flags,
 				struct mempolicy **mpol, nodemask_t **nodemask);
 extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
@@ -165,7 +165,7 @@ extern int mpol_parse_str(char *str, struct mempolicy **mpol);
 extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 
 /* Check if a vma is migratable */
-extern bool vma_migratable(struct vm_area_struct *vma);
+extern bool vma_migratable(struct mm_area *vma);
 
 int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 					unsigned long addr);
@@ -221,7 +221,7 @@ mpol_shared_policy_lookup(struct shared_policy *sp, pgoff_t idx)
 	return NULL;
 }
 
-static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+static inline struct mempolicy *get_vma_policy(struct mm_area *vma,
 				unsigned long addr, int order, pgoff_t *ilx)
 {
 	*ilx = 0;
@@ -229,7 +229,7 @@ static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
 }
 
 static inline int
-vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
+vma_dup_policy(struct mm_area *src, struct mm_area *dst)
 {
 	return 0;
 }
@@ -251,7 +251,7 @@ static inline void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 {
 }
 
-static inline int huge_node(struct vm_area_struct *vma,
+static inline int huge_node(struct mm_area *vma,
 				unsigned long addr, gfp_t gfp_flags,
 				struct mempolicy **mpol, nodemask_t **nodemask)
 {
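
For illustration only, not part of the patch: a hypothetical lookup of the
effective policy for an address, using the get_vma_policy() signature above
("example_policy_for_addr" is a made-up name):

	static struct mempolicy *example_policy_for_addr(struct mm_area *vma,
							 unsigned long addr)
	{
		pgoff_t ilx;

		/* order 0 = base page; falls back to the task policy if unset */
		return get_vma_policy(vma, addr, 0, &ilx);
	}
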
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index aaa2114498d6..e64c14d9bd5a 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -143,11 +143,11 @@ const struct movable_operations *page_movable_ops(struct page *page)
 
 #ifdef CONFIG_NUMA_BALANCING
 int migrate_misplaced_folio_prepare(struct folio *folio,
-		struct vm_area_struct *vma, int node);
+		struct mm_area *vma, int node);
 int migrate_misplaced_folio(struct folio *folio, int node);
 #else
 static inline int migrate_misplaced_folio_prepare(struct folio *folio,
-		struct vm_area_struct *vma, int node)
+		struct mm_area *vma, int node)
 {
 	return -EAGAIN; /* can't migrate now */
 }
@@ -188,7 +188,7 @@ enum migrate_vma_direction {
 };
 
 struct migrate_vma {
-	struct vm_area_struct	*vma;
+	struct mm_area	*vma;
 	/*
 	 * Both src and dst array must be big enough for
 	 * (end - start) >> PAGE_SHIFT entries.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index b7f13f087954..193ef16cd441 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -230,9 +230,9 @@ void setup_initial_init_mm(void *start_code, void *end_code,
  * mmap() functions).
  */
 
-struct vm_area_struct *vm_area_alloc(struct mm_struct *);
-struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
-void vm_area_free(struct vm_area_struct *);
+struct mm_area *vm_area_alloc(struct mm_struct *);
+struct mm_area *vm_area_dup(struct mm_area *);
+void vm_area_free(struct mm_area *);
 
 #ifndef CONFIG_MMU
 extern struct rb_root nommu_region_tree;
@@ -242,7 +242,7 @@ extern unsigned int kobjsize(const void *objp);
 #endif
 
 /*
- * vm_flags in vm_area_struct, see mm_types.h.
+ * vm_flags in mm_area, see mm_types.h.
  * When changing, update also include/trace/events/mmflags.h
  */
 #define VM_NONE		0x00000000
@@ -533,7 +533,7 @@ static inline bool fault_flag_allow_retry_first(enum fault_flag flags)
  */
 struct vm_fault {
 	const struct {
-		struct vm_area_struct *vma;	/* Target VMA */
+		struct mm_area *vma;	/* Target VMA */
 		gfp_t gfp_mask;			/* gfp mask to be used for allocations */
 		pgoff_t pgoff;			/* Logical page offset based on vma */
 		unsigned long address;		/* Faulting virtual address - masked */
@@ -583,27 +583,27 @@ struct vm_fault {
  * to the functions called when a no-page or a wp-page exception occurs.
  */
 struct vm_operations_struct {
-	void (*open)(struct vm_area_struct * area);
+	void (*open)(struct mm_area * area);
 	/**
 	 * @close: Called when the VMA is being removed from the MM.
 	 * Context: User context.  May sleep.  Caller holds mmap_lock.
 	 */
-	void (*close)(struct vm_area_struct * area);
+	void (*close)(struct mm_area * area);
 	/* Called any time before splitting to check if it's allowed */
-	int (*may_split)(struct vm_area_struct *area, unsigned long addr);
-	int (*mremap)(struct vm_area_struct *area);
+	int (*may_split)(struct mm_area *area, unsigned long addr);
+	int (*mremap)(struct mm_area *area);
 	/*
 	 * Called by mprotect() to make driver-specific permission
 	 * checks before mprotect() is finalised.   The VMA must not
 	 * be modified.  Returns 0 if mprotect() can proceed.
 	 */
-	int (*mprotect)(struct vm_area_struct *vma, unsigned long start,
+	int (*mprotect)(struct mm_area *vma, unsigned long start,
 			unsigned long end, unsigned long newflags);
 	vm_fault_t (*fault)(struct vm_fault *vmf);
 	vm_fault_t (*huge_fault)(struct vm_fault *vmf, unsigned int order);
 	vm_fault_t (*map_pages)(struct vm_fault *vmf,
 			pgoff_t start_pgoff, pgoff_t end_pgoff);
-	unsigned long (*pagesize)(struct vm_area_struct * area);
+	unsigned long (*pagesize)(struct mm_area * area);
 
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
@@ -616,13 +616,13 @@ struct vm_operations_struct {
 	 * for use by special VMAs. See also generic_access_phys() for a generic
 	 * implementation useful for any iomem mapping.
 	 */
-	int (*access)(struct vm_area_struct *vma, unsigned long addr,
+	int (*access)(struct mm_area *vma, unsigned long addr,
 		      void *buf, int len, int write);
 
 	/* Called by the /proc/PID/maps code to ask the vma whether it
 	 * has a special name.  Returning non-NULL will also cause this
 	 * vma to be dumped unconditionally. */
-	const char *(*name)(struct vm_area_struct *vma);
+	const char *(*name)(struct mm_area *vma);
 
 #ifdef CONFIG_NUMA
 	/*
@@ -632,7 +632,7 @@ struct vm_operations_struct {
 	 * install a MPOL_DEFAULT policy, nor the task or system default
 	 * mempolicy.
 	 */
-	int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
+	int (*set_policy)(struct mm_area *vma, struct mempolicy *new);
 
 	/*
 	 * get_policy() op must add reference [mpol_get()] to any policy at
@@ -644,7 +644,7 @@ struct vm_operations_struct {
 	 * must return NULL--i.e., do not "fallback" to task or system default
 	 * policy.
 	 */
-	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
+	struct mempolicy *(*get_policy)(struct mm_area *vma,
 					unsigned long addr, pgoff_t *ilx);
 #endif
 	/*
@@ -652,26 +652,26 @@ struct vm_operations_struct {
 	 * page for @addr.  This is useful if the default behavior
 	 * (using pte_page()) would not find the correct page.
 	 */
-	struct page *(*find_special_page)(struct vm_area_struct *vma,
+	struct page *(*find_special_page)(struct mm_area *vma,
 					  unsigned long addr);
 };
 
 #ifdef CONFIG_NUMA_BALANCING
-static inline void vma_numab_state_init(struct vm_area_struct *vma)
+static inline void vma_numab_state_init(struct mm_area *vma)
 {
 	vma->numab_state = NULL;
 }
-static inline void vma_numab_state_free(struct vm_area_struct *vma)
+static inline void vma_numab_state_free(struct mm_area *vma)
 {
 	kfree(vma->numab_state);
 }
 #else
-static inline void vma_numab_state_init(struct vm_area_struct *vma) {}
-static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
+static inline void vma_numab_state_init(struct mm_area *vma) {}
+static inline void vma_numab_state_free(struct mm_area *vma) {}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #ifdef CONFIG_PER_VMA_LOCK
-static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
+static inline void vma_lock_init(struct mm_area *vma, bool reset_refcnt)
 {
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	static struct lock_class_key lockdep_key;
@@ -694,7 +694,7 @@ static inline bool is_vma_writer_only(int refcnt)
 	return refcnt & VMA_LOCK_OFFSET && refcnt <= VMA_LOCK_OFFSET + 1;
 }
 
-static inline void vma_refcount_put(struct vm_area_struct *vma)
+static inline void vma_refcount_put(struct mm_area *vma)
 {
 	/* Use a copy of vm_mm in case vma is freed after we drop vm_refcnt */
 	struct mm_struct *mm = vma->vm_mm;
@@ -717,8 +717,8 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
  * Returns the vma on success, NULL on failure to lock and EAGAIN if vma got
  * detached.
  */
-static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
-						    struct vm_area_struct *vma)
+static inline struct mm_area *vma_start_read(struct mm_struct *mm,
+						    struct mm_area *vma)
 {
 	int oldcnt;
 
@@ -770,7 +770,7 @@ static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
+static inline bool vma_start_read_locked_nested(struct mm_area *vma, int subclass)
 {
 	int oldcnt;
 
@@ -789,18 +789,18 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int
  * not be used in such cases because it might fail due to mm_lock_seq overflow.
  * This functionality is used to obtain vma read lock and drop the mmap read lock.
  */
-static inline bool vma_start_read_locked(struct vm_area_struct *vma)
+static inline bool vma_start_read_locked(struct mm_area *vma)
 {
 	return vma_start_read_locked_nested(vma, 0);
 }
 
-static inline void vma_end_read(struct vm_area_struct *vma)
+static inline void vma_end_read(struct mm_area *vma)
 {
 	vma_refcount_put(vma);
 }
 
 /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
-static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
+static bool __is_vma_write_locked(struct mm_area *vma, unsigned int *mm_lock_seq)
 {
 	mmap_assert_write_locked(vma->vm_mm);
 
@@ -812,14 +812,14 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
-void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+void __vma_start_write(struct mm_area *vma, unsigned int mm_lock_seq);
 
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
  * write-locked mmap_lock is dropped or downgraded.
  */
-static inline void vma_start_write(struct vm_area_struct *vma)
+static inline void vma_start_write(struct mm_area *vma)
 {
 	unsigned int mm_lock_seq;
 
@@ -829,14 +829,14 @@ static inline void vma_start_write(struct vm_area_struct *vma)
 	__vma_start_write(vma, mm_lock_seq);
 }
 
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+static inline void vma_assert_write_locked(struct mm_area *vma)
 {
 	unsigned int mm_lock_seq;
 
 	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
 
-static inline void vma_assert_locked(struct vm_area_struct *vma)
+static inline void vma_assert_locked(struct mm_area *vma)
 {
 	unsigned int mm_lock_seq;
 
@@ -849,24 +849,24 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
  * assertions should be made either under mmap_write_lock or when the object
  * has been isolated under mmap_write_lock, ensuring no competing writers.
  */
-static inline void vma_assert_attached(struct vm_area_struct *vma)
+static inline void vma_assert_attached(struct mm_area *vma)
 {
 	WARN_ON_ONCE(!refcount_read(&vma->vm_refcnt));
 }
 
-static inline void vma_assert_detached(struct vm_area_struct *vma)
+static inline void vma_assert_detached(struct mm_area *vma)
 {
 	WARN_ON_ONCE(refcount_read(&vma->vm_refcnt));
 }
 
-static inline void vma_mark_attached(struct vm_area_struct *vma)
+static inline void vma_mark_attached(struct mm_area *vma)
 {
 	vma_assert_write_locked(vma);
 	vma_assert_detached(vma);
 	refcount_set_release(&vma->vm_refcnt, 1);
 }
 
-void vma_mark_detached(struct vm_area_struct *vma);
+void vma_mark_detached(struct mm_area *vma);
 
 static inline void release_fault_lock(struct vm_fault *vmf)
 {
@@ -884,31 +884,31 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 		mmap_assert_locked(vmf->vma->vm_mm);
 }
 
-struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+struct mm_area *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
 #else /* CONFIG_PER_VMA_LOCK */
 
-static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
-static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
-						    struct vm_area_struct *vma)
+static inline void vma_lock_init(struct mm_area *vma, bool reset_refcnt) {}
+static inline struct mm_area *vma_start_read(struct mm_struct *mm,
+						    struct mm_area *vma)
 		{ return NULL; }
-static inline void vma_end_read(struct vm_area_struct *vma) {}
-static inline void vma_start_write(struct vm_area_struct *vma) {}
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+static inline void vma_end_read(struct mm_area *vma) {}
+static inline void vma_start_write(struct mm_area *vma) {}
+static inline void vma_assert_write_locked(struct mm_area *vma)
 		{ mmap_assert_write_locked(vma->vm_mm); }
-static inline void vma_assert_attached(struct vm_area_struct *vma) {}
-static inline void vma_assert_detached(struct vm_area_struct *vma) {}
-static inline void vma_mark_attached(struct vm_area_struct *vma) {}
-static inline void vma_mark_detached(struct vm_area_struct *vma) {}
+static inline void vma_assert_attached(struct mm_area *vma) {}
+static inline void vma_assert_detached(struct mm_area *vma) {}
+static inline void vma_mark_attached(struct mm_area *vma) {}
+static inline void vma_mark_detached(struct mm_area *vma) {}
 
-static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+static inline struct mm_area *lock_vma_under_rcu(struct mm_struct *mm,
 		unsigned long address)
 {
 	return NULL;
 }
 
-static inline void vma_assert_locked(struct vm_area_struct *vma)
+static inline void vma_assert_locked(struct mm_area *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 }
@@ -927,7 +927,7 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
 
 extern const struct vm_operations_struct vma_dummy_vm_ops;
 
-static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
+static inline void vma_init(struct mm_area *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
 	vma->vm_mm = mm;
@@ -937,7 +937,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 }
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
-static inline void vm_flags_init(struct vm_area_struct *vma,
+static inline void vm_flags_init(struct mm_area *vma,
 				 vm_flags_t flags)
 {
 	ACCESS_PRIVATE(vma, __vm_flags) = flags;
@@ -948,28 +948,28 @@ static inline void vm_flags_init(struct vm_area_struct *vma,
  * Note: vm_flags_reset and vm_flags_reset_once do not lock the vma and
  * it should be locked explicitly beforehand.
  */
-static inline void vm_flags_reset(struct vm_area_struct *vma,
+static inline void vm_flags_reset(struct mm_area *vma,
 				  vm_flags_t flags)
 {
 	vma_assert_write_locked(vma);
 	vm_flags_init(vma, flags);
 }
 
-static inline void vm_flags_reset_once(struct vm_area_struct *vma,
+static inline void vm_flags_reset_once(struct mm_area *vma,
 				       vm_flags_t flags)
 {
 	vma_assert_write_locked(vma);
 	WRITE_ONCE(ACCESS_PRIVATE(vma, __vm_flags), flags);
 }
 
-static inline void vm_flags_set(struct vm_area_struct *vma,
+static inline void vm_flags_set(struct mm_area *vma,
 				vm_flags_t flags)
 {
 	vma_start_write(vma);
 	ACCESS_PRIVATE(vma, __vm_flags) |= flags;
 }
 
-static inline void vm_flags_clear(struct vm_area_struct *vma,
+static inline void vm_flags_clear(struct mm_area *vma,
 				  vm_flags_t flags)
 {
 	vma_start_write(vma);
@@ -980,7 +980,7 @@ static inline void vm_flags_clear(struct vm_area_struct *vma,
  * Use only if VMA is not part of the VMA tree or has no other users and
  * therefore needs no locking.
  */
-static inline void __vm_flags_mod(struct vm_area_struct *vma,
+static inline void __vm_flags_mod(struct mm_area *vma,
 				  vm_flags_t set, vm_flags_t clear)
 {
 	vm_flags_init(vma, (vma->vm_flags | set) & ~clear);
@@ -990,19 +990,19 @@ static inline void __vm_flags_mod(struct vm_area_struct *vma,
  * Use only when the order of set/clear operations is unimportant, otherwise
  * use vm_flags_{set|clear} explicitly.
  */
-static inline void vm_flags_mod(struct vm_area_struct *vma,
+static inline void vm_flags_mod(struct mm_area *vma,
 				vm_flags_t set, vm_flags_t clear)
 {
 	vma_start_write(vma);
 	__vm_flags_mod(vma, set, clear);
 }
 
-static inline void vma_set_anonymous(struct vm_area_struct *vma)
+static inline void vma_set_anonymous(struct mm_area *vma)
 {
 	vma->vm_ops = NULL;
 }
 
-static inline bool vma_is_anonymous(struct vm_area_struct *vma)
+static inline bool vma_is_anonymous(struct mm_area *vma)
 {
 	return !vma->vm_ops;
 }
@@ -1011,7 +1011,7 @@ static inline bool vma_is_anonymous(struct vm_area_struct *vma)
  * Indicate if the VMA is a heap for the given task; for
  * /proc/PID/maps that is the heap of the main task.
  */
-static inline bool vma_is_initial_heap(const struct vm_area_struct *vma)
+static inline bool vma_is_initial_heap(const struct mm_area *vma)
 {
 	return vma->vm_start < vma->vm_mm->brk &&
 		vma->vm_end > vma->vm_mm->start_brk;
@@ -1021,7 +1021,7 @@ static inline bool vma_is_initial_heap(const struct vm_area_struct *vma)
  * Indicate if the VMA is a stack for the given task; for
  * /proc/PID/maps that is the stack of the main task.
  */
-static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
+static inline bool vma_is_initial_stack(const struct mm_area *vma)
 {
 	/*
 	 * We make no effort to guess what a given thread considers to be
@@ -1032,7 +1032,7 @@ static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
 		vma->vm_end >= vma->vm_mm->start_stack;
 }
 
-static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
+static inline bool vma_is_temporary_stack(struct mm_area *vma)
 {
 	int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
 
@@ -1046,7 +1046,7 @@ static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool vma_is_foreign(struct vm_area_struct *vma)
+static inline bool vma_is_foreign(struct mm_area *vma)
 {
 	if (!current->mm)
 		return true;
@@ -1057,7 +1057,7 @@ static inline bool vma_is_foreign(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool vma_is_accessible(struct vm_area_struct *vma)
+static inline bool vma_is_accessible(struct mm_area *vma)
 {
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
@@ -1068,18 +1068,18 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
 		(VM_SHARED | VM_MAYWRITE);
 }
 
-static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
+static inline bool vma_is_shared_maywrite(struct mm_area *vma)
 {
 	return is_shared_maywrite(vma->vm_flags);
 }
 
 static inline
-struct vm_area_struct *vma_find(struct vma_iterator *vmi, unsigned long max)
+struct mm_area *vma_find(struct vma_iterator *vmi, unsigned long max)
 {
 	return mas_find(&vmi->mas, max - 1);
 }
 
-static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
+static inline struct mm_area *vma_next(struct vma_iterator *vmi)
 {
 	/*
 	 * Uses mas_find() to get the first VMA when the iterator starts.
@@ -1089,13 +1089,13 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
 }
 
 static inline
-struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
+struct mm_area *vma_iter_next_range(struct vma_iterator *vmi)
 {
 	return mas_next_range(&vmi->mas, ULONG_MAX);
 }
 
 
-static inline struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
+static inline struct mm_area *vma_prev(struct vma_iterator *vmi)
 {
 	return mas_prev(&vmi->mas, 0);
 }
@@ -1118,7 +1118,7 @@ static inline void vma_iter_free(struct vma_iterator *vmi)
 }
 
 static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
-				      struct vm_area_struct *vma)
+				      struct mm_area *vma)
 {
 	vmi->mas.index = vma->vm_start;
 	vmi->mas.last = vma->vm_end - 1;
@@ -1152,14 +1152,14 @@ static inline void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
  * The vma_is_shmem is not inline because it is used only by slow
  * paths in userfault.
  */
-bool vma_is_shmem(struct vm_area_struct *vma);
-bool vma_is_anon_shmem(struct vm_area_struct *vma);
+bool vma_is_shmem(struct mm_area *vma);
+bool vma_is_anon_shmem(struct mm_area *vma);
 #else
-static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
-static inline bool vma_is_anon_shmem(struct vm_area_struct *vma) { return false; }
+static inline bool vma_is_shmem(struct mm_area *vma) { return false; }
+static inline bool vma_is_anon_shmem(struct mm_area *vma) { return false; }
 #endif
 
-int vma_is_stack_for_current(struct vm_area_struct *vma);
+int vma_is_stack_for_current(struct mm_area *vma);
 
 /* flush_tlb_range() takes a vma, not a mm, and can care about flags */
 #define TLB_FLUSH_VMA(mm,flags) { .vm_mm = (mm), .vm_flags = (flags) }
@@ -1435,7 +1435,7 @@ static inline unsigned long thp_size(struct page *page)
  * pte_mkwrite.  But get_user_pages can cause write faults for mappings
  * that do not have writing enabled, when used by access_process_vm.
  */
-static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t maybe_mkwrite(pte_t pte, struct mm_area *vma)
 {
 	if (likely(vma->vm_flags & VM_WRITE))
 		pte = pte_mkwrite(pte, vma);
@@ -1811,7 +1811,7 @@ static inline int folio_xchg_access_time(struct folio *folio, int time)
 	return last_time << PAGE_ACCESS_TIME_BUCKETS;
 }
 
-static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
+static inline void vma_set_access_pid_bit(struct mm_area *vma)
 {
 	unsigned int pid_bit;
 
@@ -1872,7 +1872,7 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 	return false;
 }
 
-static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
+static inline void vma_set_access_pid_bit(struct mm_area *vma)
 {
 }
 static inline bool folio_use_access_time(struct folio *folio)
@@ -2042,7 +2042,7 @@ static inline bool folio_maybe_dma_pinned(struct folio *folio)
  *
  * The caller has to hold the PT lock and the vma->vm_mm->write_protect_seq.
  */
-static inline bool folio_needs_cow_for_dma(struct vm_area_struct *vma,
+static inline bool folio_needs_cow_for_dma(struct mm_area *vma,
 					  struct folio *folio)
 {
 	VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));
@@ -2445,26 +2445,26 @@ static inline bool can_do_mlock(void) { return false; }
 extern int user_shm_lock(size_t, struct ucounts *);
 extern void user_shm_unlock(size_t, struct ucounts *);
 
-struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+struct folio *vm_normal_folio(struct mm_area *vma, unsigned long addr,
 			     pte_t pte);
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+struct page *vm_normal_page(struct mm_area *vma, unsigned long addr,
 			     pte_t pte);
-struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
+struct folio *vm_normal_folio_pmd(struct mm_area *vma,
 				  unsigned long addr, pmd_t pmd);
-struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
+struct page *vm_normal_page_pmd(struct mm_area *vma, unsigned long addr,
 				pmd_t pmd);
 
-void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
+void zap_vma_ptes(struct mm_area *vma, unsigned long address,
 		  unsigned long size);
-void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
+void zap_page_range_single(struct mm_area *vma, unsigned long address,
 			   unsigned long size, struct zap_details *details);
-static inline void zap_vma_pages(struct vm_area_struct *vma)
+static inline void zap_vma_pages(struct mm_area *vma)
 {
 	zap_page_range_single(vma, vma->vm_start,
 			      vma->vm_end - vma->vm_start, NULL);
 }
 void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
-		struct vm_area_struct *start_vma, unsigned long start,
+		struct mm_area *start_vma, unsigned long start,
 		unsigned long end, unsigned long tree_end, bool mm_wr_locked);
 
 struct mmu_notifier_range;
@@ -2472,17 +2472,17 @@ struct mmu_notifier_range;
 void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
-copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
+copy_page_range(struct mm_area *dst_vma, struct mm_area *src_vma);
+int generic_access_phys(struct mm_area *vma, unsigned long addr,
 			void *buf, int len, int write);
 
 struct follow_pfnmap_args {
 	/**
 	 * Inputs:
-	 * @vma: Pointer to @vm_area_struct struct
+	 * @vma: Pointer to struct mm_area
 	 * @address: the virtual address to walk
 	 */
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long address;
 	/**
 	 * Internals:
@@ -2516,11 +2516,11 @@ void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
 int generic_error_remove_folio(struct address_space *mapping,
 		struct folio *folio);
 
-struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
+struct mm_area *lock_mm_and_find_vma(struct mm_struct *mm,
 		unsigned long address, struct pt_regs *regs);
 
 #ifdef CONFIG_MMU
-extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
+extern vm_fault_t handle_mm_fault(struct mm_area *vma,
 				  unsigned long address, unsigned int flags,
 				  struct pt_regs *regs);
 extern int fixup_user_fault(struct mm_struct *mm,
@@ -2531,7 +2531,7 @@ void unmap_mapping_pages(struct address_space *mapping,
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 #else
-static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
+static inline vm_fault_t handle_mm_fault(struct mm_area *vma,
 					 unsigned long address, unsigned int flags,
 					 struct pt_regs *regs)
 {
@@ -2558,7 +2558,7 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
 	unmap_mapping_range(mapping, holebegin, holelen, 0);
 }
 
-static inline struct vm_area_struct *vma_lookup(struct mm_struct *mm,
+static inline struct mm_area *vma_lookup(struct mm_struct *mm,
 						unsigned long addr);
 
 extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
@@ -2586,10 +2586,10 @@ long pin_user_pages_remote(struct mm_struct *mm,
 static inline struct page *get_user_page_vma_remote(struct mm_struct *mm,
 						    unsigned long addr,
 						    int gup_flags,
-						    struct vm_area_struct **vmap)
+						    struct mm_area **vmap)
 {
 	struct page *page;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int got;
 
 	if (WARN_ON_ONCE(unlikely(gup_flags & FOLL_NOWAIT)))
@@ -2663,13 +2663,13 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen);
 #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
 					    MM_CP_UFFD_WP_RESOLVE)
 
-bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
+bool can_change_pte_writable(struct mm_area *vma, unsigned long addr,
 			     pte_t pte);
 extern long change_protection(struct mmu_gather *tlb,
-			      struct vm_area_struct *vma, unsigned long start,
+			      struct mm_area *vma, unsigned long start,
 			      unsigned long end, unsigned long cp_flags);
 extern int mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
-	  struct vm_area_struct *vma, struct vm_area_struct **pprev,
+	  struct mm_area *vma, struct mm_area **pprev,
 	  unsigned long start, unsigned long end, unsigned long newflags);
 
 /*
@@ -3360,16 +3360,16 @@ extern atomic_long_t mmap_pages_allocated;
 extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);
 
 /* interval_tree.c */
-void vma_interval_tree_insert(struct vm_area_struct *node,
+void vma_interval_tree_insert(struct mm_area *node,
 			      struct rb_root_cached *root);
-void vma_interval_tree_insert_after(struct vm_area_struct *node,
-				    struct vm_area_struct *prev,
+void vma_interval_tree_insert_after(struct mm_area *node,
+				    struct mm_area *prev,
 				    struct rb_root_cached *root);
-void vma_interval_tree_remove(struct vm_area_struct *node,
+void vma_interval_tree_remove(struct mm_area *node,
 			      struct rb_root_cached *root);
-struct vm_area_struct *vma_interval_tree_iter_first(struct rb_root_cached *root,
+struct mm_area *vma_interval_tree_iter_first(struct rb_root_cached *root,
 				unsigned long start, unsigned long last);
-struct vm_area_struct *vma_interval_tree_iter_next(struct vm_area_struct *node,
+struct mm_area *vma_interval_tree_iter_next(struct mm_area *node,
 				unsigned long start, unsigned long last);
 
 #define vma_interval_tree_foreach(vma, root, start, last)		\
@@ -3395,10 +3395,10 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
 
 /* mmap.c */
 extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
-extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
+extern int insert_vm_struct(struct mm_struct *, struct mm_area *);
 extern void exit_mmap(struct mm_struct *);
-int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
-bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
+int relocate_vma_down(struct mm_area *vma, unsigned long shift);
+bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct mm_area *vma,
 				 unsigned long addr, bool write);
 
 static inline int check_data_rlimit(unsigned long rlim,
@@ -3426,9 +3426,9 @@ extern struct file *get_task_exe_file(struct task_struct *task);
 extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long npages);
 extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages);
 
-extern bool vma_is_special_mapping(const struct vm_area_struct *vma,
+extern bool vma_is_special_mapping(const struct mm_area *vma,
 				   const struct vm_special_mapping *sm);
-extern struct vm_area_struct *_install_special_mapping(struct mm_struct *mm,
+extern struct mm_area *_install_special_mapping(struct mm_struct *mm,
 				   unsigned long addr, unsigned long len,
 				   unsigned long flags,
 				   const struct vm_special_mapping *spec);
@@ -3454,7 +3454,7 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
 extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 			 unsigned long start, size_t len, struct list_head *uf,
 			 bool unlock);
-int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+int do_vmi_align_munmap(struct vma_iterator *vmi, struct mm_area *vma,
 		    struct mm_struct *mm, unsigned long start,
 		    unsigned long end, struct list_head *uf, bool unlock);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
@@ -3507,19 +3507,19 @@ extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);
 
 extern unsigned long stack_guard_gap;
 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
-int expand_stack_locked(struct vm_area_struct *vma, unsigned long address);
-struct vm_area_struct *expand_stack(struct mm_struct * mm, unsigned long addr);
+int expand_stack_locked(struct mm_area *vma, unsigned long address);
+struct mm_area *expand_stack(struct mm_struct * mm, unsigned long addr);
 
 /* Look up the first VMA which satisfies  addr < vm_end,  NULL if none. */
-extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr);
-extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr,
-					     struct vm_area_struct **pprev);
+extern struct mm_area * find_vma(struct mm_struct * mm, unsigned long addr);
+extern struct mm_area * find_vma_prev(struct mm_struct * mm, unsigned long addr,
+					     struct mm_area **pprev);
 
 /*
  * Look up the first VMA which intersects the interval [start_addr, end_addr)
  * NULL if none.  Assume start_addr < end_addr.
  */
-struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
+struct mm_area *find_vma_intersection(struct mm_struct *mm,
 			unsigned long start_addr, unsigned long end_addr);
 
 /**
@@ -3527,15 +3527,15 @@ struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
  * @mm: The process address space.
  * @addr: The user address.
  *
- * Return: The vm_area_struct at the given address, %NULL otherwise.
+ * Return: The mm_area at the given address, %NULL otherwise.
  */
 static inline
-struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
+struct mm_area *vma_lookup(struct mm_struct *mm, unsigned long addr)
 {
 	return mtree_load(&mm->mm_mt, addr);
 }
 
-static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+static inline unsigned long stack_guard_start_gap(struct mm_area *vma)
 {
 	if (vma->vm_flags & VM_GROWSDOWN)
 		return stack_guard_gap;
@@ -3547,7 +3547,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
 	return 0;
 }
 
-static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_start_gap(struct mm_area *vma)
 {
 	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
@@ -3558,7 +3558,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 	return vm_start;
 }
 
-static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_end_gap(struct mm_area *vma)
 {
 	unsigned long vm_end = vma->vm_end;
 
@@ -3570,16 +3570,16 @@ static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
 	return vm_end;
 }
 
-static inline unsigned long vma_pages(struct vm_area_struct *vma)
+static inline unsigned long vma_pages(struct mm_area *vma)
 {
 	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 }
 
 /* Look up the first VMA which exactly matches the interval vm_start ... vm_end */
-static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
+static inline struct mm_area *find_exact_vma(struct mm_struct *mm,
 				unsigned long vm_start, unsigned long vm_end)
 {
-	struct vm_area_struct *vma = vma_lookup(mm, vm_start);
+	struct mm_area *vma = vma_lookup(mm, vm_start);
 
 	if (vma && (vma->vm_start != vm_start || vma->vm_end != vm_end))
 		vma = NULL;
@@ -3587,7 +3587,7 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
 	return vma;
 }
 
-static inline bool range_in_vma(struct vm_area_struct *vma,
+static inline bool range_in_vma(struct mm_area *vma,
 				unsigned long start, unsigned long end)
 {
 	return (vma && vma->vm_start <= start && end <= vma->vm_end);
@@ -3595,51 +3595,51 @@ static inline bool range_in_vma(struct vm_area_struct *vma,
 
 #ifdef CONFIG_MMU
 pgprot_t vm_get_page_prot(unsigned long vm_flags);
-void vma_set_page_prot(struct vm_area_struct *vma);
+void vma_set_page_prot(struct mm_area *vma);
 #else
 static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
 {
 	return __pgprot(0);
 }
-static inline void vma_set_page_prot(struct vm_area_struct *vma)
+static inline void vma_set_page_prot(struct mm_area *vma)
 {
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
 }
 #endif
 
-void vma_set_file(struct vm_area_struct *vma, struct file *file);
+void vma_set_file(struct mm_area *vma, struct file *file);
 
 #ifdef CONFIG_NUMA_BALANCING
-unsigned long change_prot_numa(struct vm_area_struct *vma,
+unsigned long change_prot_numa(struct mm_area *vma,
 			unsigned long start, unsigned long end);
 #endif
 
-struct vm_area_struct *find_extend_vma_locked(struct mm_struct *,
+struct mm_area *find_extend_vma_locked(struct mm_struct *,
 		unsigned long addr);
-int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
+int remap_pfn_range(struct mm_area *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
-int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
+int remap_pfn_range_notrack(struct mm_area *vma, unsigned long addr,
 		unsigned long pfn, unsigned long size, pgprot_t prot);
-int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
-int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
+int vm_insert_page(struct mm_area *, unsigned long addr, struct page *);
+int vm_insert_pages(struct mm_area *vma, unsigned long addr,
 			struct page **pages, unsigned long *num);
-int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+int vm_map_pages(struct mm_area *vma, struct page **pages,
 				unsigned long num);
-int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
+int vm_map_pages_zero(struct mm_area *vma, struct page **pages,
 				unsigned long num);
 vm_fault_t vmf_insert_page_mkwrite(struct vm_fault *vmf, struct page *page,
 			bool write);
-vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn(struct mm_area *vma, unsigned long addr,
 			unsigned long pfn);
-vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn_prot(struct mm_area *vma, unsigned long addr,
 			unsigned long pfn, pgprot_t pgprot);
-vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_mixed(struct mm_area *vma, unsigned long addr,
 			pfn_t pfn);
-vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
+vm_fault_t vmf_insert_mixed_mkwrite(struct mm_area *vma,
 		unsigned long addr, pfn_t pfn);
-int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
+int vm_iomap_memory(struct mm_area *vma, phys_addr_t start, unsigned long len);
 
-static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
+static inline vm_fault_t vmf_insert_page(struct mm_area *vma,
 				unsigned long addr, struct page *page)
 {
 	int err = vm_insert_page(vma, addr, page);
@@ -3653,7 +3653,7 @@ static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
 }
 
 #ifndef io_remap_pfn_range
-static inline int io_remap_pfn_range(struct vm_area_struct *vma,
+static inline int io_remap_pfn_range(struct mm_area *vma,
 				     unsigned long addr, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
 {
@@ -3703,7 +3703,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
  * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
  * a (NUMA hinting) fault is required.
  */
-static inline bool gup_can_follow_protnone(struct vm_area_struct *vma,
+static inline bool gup_can_follow_protnone(struct mm_area *vma,
 					   unsigned int flags)
 {
 	/*
@@ -3872,11 +3872,11 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 #endif	/* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
-extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
+extern struct mm_area *get_gate_vma(struct mm_struct *mm);
 extern int in_gate_area_no_mm(unsigned long addr);
 extern int in_gate_area(struct mm_struct *mm, unsigned long addr);
 #else
-static inline struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
+static inline struct mm_area *get_gate_vma(struct mm_struct *mm)
 {
 	return NULL;
 }
@@ -3897,7 +3897,7 @@ void drop_slab(void);
 extern int randomize_va_space;
 #endif
 
-const char * arch_vma_name(struct vm_area_struct *vma);
+const char * arch_vma_name(struct mm_area *vma);
 #ifdef CONFIG_MMU
 void print_vma_addr(char *prefix, unsigned long rip);
 #else
@@ -4117,14 +4117,14 @@ enum mf_action_page_type {
 void folio_zero_user(struct folio *folio, unsigned long addr_hint);
 int copy_user_large_folio(struct folio *dst, struct folio *src,
 			  unsigned long addr_hint,
-			  struct vm_area_struct *vma);
+			  struct mm_area *vma);
 long copy_folio_from_user(struct folio *dst_folio,
 			   const void __user *usr_src,
 			   bool allow_pagefault);
 
 /**
  * vma_is_special_huge - Are transhuge page-table entries considered special?
- * @vma: Pointer to the struct vm_area_struct to consider
+ * @vma: Pointer to the struct mm_area to consider
  *
  * Whether transhuge page-table entries are considered "special" following
  * the definition in vm_normal_page().
@@ -4132,7 +4132,7 @@ long copy_folio_from_user(struct folio *dst_folio,
  * Return: true if transhuge page-table entries should be considered special,
  * false otherwise.
  */
-static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
+static inline bool vma_is_special_huge(const struct mm_area *vma)
 {
 	return vma_is_dax(vma) || (vma->vm_file &&
 				   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
@@ -4201,8 +4201,8 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(pfn << PAGE_SHIFT, PAGE_SIZE);
 }
 
-void vma_pgtable_walk_begin(struct vm_area_struct *vma);
-void vma_pgtable_walk_end(struct vm_area_struct *vma);
+void vma_pgtable_walk_begin(struct mm_area *vma);
+void vma_pgtable_walk_end(struct mm_area *vma);
 
 int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *size);
 int reserve_mem_release_by_name(const char *name);
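
To see the rename in context, here is roughly what a trivial out-of-tree
character-device ->mmap would look like against the updated mm.h; note that
FOO_REGION_SIZE and foo_base_pfn are made-up stand-ins, not anything this
patch adds:

	static int foo_mmap(struct file *file, struct mm_area *vma)
	{
		unsigned long size = vma->vm_end - vma->vm_start;

		if (size > FOO_REGION_SIZE)
			return -EINVAL;

		/* Map the device region; protection bits come from the caller. */
		return remap_pfn_range(vma, vma->vm_start, foo_base_pfn, size,
				       vma->vm_page_prot);
	}
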
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f9157a0c42a5..7b5bcca96464 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -404,8 +404,8 @@ struct anon_vma_name *anon_vma_name_reuse(struct anon_vma_name *anon_name)
 	return anon_vma_name_alloc(anon_name->name);
 }
 
-static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
-				     struct vm_area_struct *new_vma)
+static inline void dup_anon_vma_name(struct mm_area *orig_vma,
+				     struct mm_area *new_vma)
 {
 	struct anon_vma_name *anon_name = anon_vma_name(orig_vma);
 
@@ -413,7 +413,7 @@ static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
 		new_vma->anon_name = anon_vma_name_reuse(anon_name);
 }
 
-static inline void free_anon_vma_name(struct vm_area_struct *vma)
+static inline void free_anon_vma_name(struct mm_area *vma)
 {
 	/*
 	 * Not using anon_vma_name because it generates a warning if mmap_lock
@@ -435,9 +435,9 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
 #else /* CONFIG_ANON_VMA_NAME */
 static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
 static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
-static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
-				     struct vm_area_struct *new_vma) {}
-static inline void free_anon_vma_name(struct vm_area_struct *vma) {}
+static inline void dup_anon_vma_name(struct mm_area *orig_vma,
+				     struct mm_area *new_vma) {}
+static inline void free_anon_vma_name(struct mm_area *vma) {}
 
 static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
 				    struct anon_vma_name *anon_name2)
@@ -538,7 +538,7 @@ static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
  * The caller should insert a new pte created with make_pte_marker().
  */
 static inline pte_marker copy_pte_marker(
-		swp_entry_t entry, struct vm_area_struct *dst_vma)
+		swp_entry_t entry, struct mm_area *dst_vma)
 {
 	pte_marker srcm = pte_marker_get(entry);
 	/* Always copy error entries. */
@@ -565,7 +565,7 @@ static inline pte_marker copy_pte_marker(
  * Returns true if an uffd-wp pte was installed, false otherwise.
  */
 static inline bool
-pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
+pte_install_uffd_wp_if_needed(struct mm_area *vma, unsigned long addr,
 			      pte_t *pte, pte_t pteval)
 {
 #ifdef CONFIG_PTE_MARKER_UFFD_WP
@@ -603,7 +603,7 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 	return false;
 }
 
-static inline bool vma_has_recency(struct vm_area_struct *vma)
+static inline bool vma_has_recency(struct mm_area *vma)
 {
 	if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
 		return false;
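
copy_pte_marker() only computes the marker value; as its comment says, the
caller then installs it with make_pte_marker().  A minimal fork-path sketch
with the new type name (function and variable names here are hypothetical):

	static void foo_copy_marker_pte(struct mm_struct *dst_mm,
					struct mm_area *dst_vma,
					unsigned long addr, pte_t *dst_pte,
					swp_entry_t entry)
	{
		pte_marker marker = copy_pte_marker(entry, dst_vma);

		/* A zero marker means "don't copy"; otherwise install it. */
		if (marker)
			set_pte_at(dst_mm, addr, dst_pte,
				   make_pte_marker(marker));
	}
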
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 56d07edd01f9..185fdf91bda1 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -710,11 +710,11 @@ struct anon_vma_name {
  * either keep holding the lock while using the returned pointer or it should
  * raise anon_vma_name refcount before releasing the lock.
  */
-struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma);
+struct anon_vma_name *anon_vma_name(struct mm_area *vma);
 struct anon_vma_name *anon_vma_name_alloc(const char *name);
 void anon_vma_name_free(struct kref *kref);
 #else /* CONFIG_ANON_VMA_NAME */
-static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
+static inline struct anon_vma_name *anon_vma_name(struct mm_area *vma)
 {
 	return NULL;
 }
@@ -774,9 +774,9 @@ struct vma_numab_state {
  * getting a stable reference.
  *
  * WARNING: when adding new members, please update vm_area_init_from() to copy
- * them during vm_area_struct content duplication.
+ * them during mm_area content duplication.
  */
-struct vm_area_struct {
+struct mm_area {
 	/* The first cache line has the info for VMA tree walking. */
 
 	union {
@@ -1488,14 +1488,14 @@ struct vm_special_mapping {
 	 * on the special mapping.  If used, .pages is not checked.
 	 */
 	vm_fault_t (*fault)(const struct vm_special_mapping *sm,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				struct vm_fault *vmf);
 
 	int (*mremap)(const struct vm_special_mapping *sm,
-		     struct vm_area_struct *new_vma);
+		     struct mm_area *new_vma);
 
 	void (*close)(const struct vm_special_mapping *sm,
-		      struct vm_area_struct *vma);
+		      struct mm_area *vma);
 };
 
 enum tlb_flush_reason {
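
The vm_special_mapping callbacks are among the few driver-visible prototypes
touched here.  A hypothetical user (foo_page stands in for some preallocated
page) would now read:

	static vm_fault_t foo_special_fault(const struct vm_special_mapping *sm,
					    struct mm_area *vma,
					    struct vm_fault *vmf)
	{
		/* Back the whole mapping with a single preallocated page. */
		return vmf_insert_page(vma, vmf->address, foo_page);
	}

	static const struct vm_special_mapping foo_mapping = {
		.name	= "[foo]",
		.fault	= foo_special_fault,
	};
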
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index a0a3894900ed..b713e4921bb8 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -6,13 +6,13 @@
 #include <linux/stringify.h>
 
 struct page;
-struct vm_area_struct;
+struct mm_area;
 struct mm_struct;
 struct vma_iterator;
 struct vma_merge_struct;
 
 void dump_page(const struct page *page, const char *reason);
-void dump_vma(const struct vm_area_struct *vma);
+void dump_vma(const struct mm_area *vma);
 void dump_mm(const struct mm_struct *mm);
 void dump_vmg(const struct vma_merge_struct *vmg, const char *reason);
 void vma_iter_dump_tree(const struct vma_iterator *vmi);
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index bc2402a45741..1c83061bf690 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -518,7 +518,7 @@ static inline void mmu_notifier_range_init_owner(
 #define ptep_clear_flush_young_notify(__vma, __address, __ptep)		\
 ({									\
 	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
+	struct mm_area *___vma = __vma;					\
 	unsigned long ___address = __address;				\
 	__young = ptep_clear_flush_young(___vma, ___address, __ptep);	\
 	__young |= mmu_notifier_clear_flush_young(___vma->vm_mm,	\
@@ -531,7 +531,7 @@ static inline void mmu_notifier_range_init_owner(
 #define pmdp_clear_flush_young_notify(__vma, __address, __pmdp)		\
 ({									\
 	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
+	struct mm_area *___vma = __vma;					\
 	unsigned long ___address = __address;				\
 	__young = pmdp_clear_flush_young(___vma, ___address, __pmdp);	\
 	__young |= mmu_notifier_clear_flush_young(___vma->vm_mm,	\
@@ -544,7 +544,7 @@ static inline void mmu_notifier_range_init_owner(
 #define ptep_clear_young_notify(__vma, __address, __ptep)		\
 ({									\
 	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
+	struct mm_area *___vma = __vma;					\
 	unsigned long ___address = __address;				\
 	__young = ptep_test_and_clear_young(___vma, ___address, __ptep);\
 	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
@@ -555,7 +555,7 @@ static inline void mmu_notifier_range_init_owner(
 #define pmdp_clear_young_notify(__vma, __address, __pmdp)		\
 ({									\
 	int __young;							\
-	struct vm_area_struct *___vma = __vma;				\
+	struct mm_area *___vma = __vma;					\
 	unsigned long ___address = __address;				\
 	__young = pmdp_test_and_clear_young(___vma, ___address, __pmdp);\
 	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
diff --git a/include/linux/net.h b/include/linux/net.h
index 0ff950eecc6b..501f966667be 100644
--- a/include/linux/net.h
+++ b/include/linux/net.h
@@ -147,7 +147,7 @@ typedef struct {
 	int error;
 } read_descriptor_t;
 
-struct vm_area_struct;
+struct mm_area;
 struct page;
 struct sockaddr;
 struct msghdr;
@@ -208,7 +208,7 @@ struct proto_ops {
 	int		(*recvmsg)   (struct socket *sock, struct msghdr *m,
 				      size_t total_len, int flags);
 	int		(*mmap)	     (struct file *file, struct socket *sock,
-				      struct vm_area_struct * vma);
+				      struct mm_area * vma);
 	ssize_t 	(*splice_read)(struct socket *sock,  loff_t *ppos,
 				       struct pipe_inode_info *pipe, size_t len, unsigned int flags);
 	void		(*splice_eof)(struct socket *sock);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 26baa78f1ca7..1848be69048a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1043,7 +1043,7 @@ static inline pgoff_t folio_pgoff(struct folio *folio)
 	return folio->index;
 }
 
-static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
+static inline pgoff_t linear_page_index(struct mm_area *vma,
 					unsigned long address)
 {
 	pgoff_t pgoff;
diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index 9700a29f8afb..026bb21ede0e 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -120,7 +120,7 @@ struct mm_walk {
 	const struct mm_walk_ops *ops;
 	struct mm_struct *mm;
 	pgd_t *pgd;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	enum page_walk_action action;
 	bool no_vma;
 	void *private;
@@ -133,10 +133,10 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
 			  unsigned long end, const struct mm_walk_ops *ops,
 			  pgd_t *pgd,
 			  void *private);
-int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
+int walk_page_range_vma(struct mm_area *vma, unsigned long start,
 			unsigned long end, const struct mm_walk_ops *ops,
 			void *private);
-int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
+int walk_page_vma(struct mm_area *vma, const struct mm_walk_ops *ops,
 		void *private);
 int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
 		      pgoff_t nr, const struct mm_walk_ops *ops,
@@ -185,12 +185,12 @@ struct folio_walk {
 		pmd_t pmd;
 	};
 	/* private */
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	spinlock_t *ptl;
 };
 
 struct folio *folio_walk_start(struct folio_walk *fw,
-		struct vm_area_struct *vma, unsigned long addr,
+		struct mm_area *vma, unsigned long addr,
 		folio_walk_flags_t flags);
 
 #define folio_walk_end(__fw, __vma) do { \
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 0e8e3fd77e96..343fcd42b066 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -2103,7 +2103,7 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
  *
  */
 int pci_mmap_resource_range(struct pci_dev *dev, int bar,
-			    struct vm_area_struct *vma,
+			    struct mm_area *vma,
 			    enum pci_mmap_state mmap_state, int write_combine);
 
 #ifndef arch_can_pci_mmap_wc
@@ -2114,7 +2114,7 @@ int pci_mmap_resource_range(struct pci_dev *dev, int bar,
 #define arch_can_pci_mmap_io()		0
 #define pci_iobar_pfn(pdev, bar, vma) (-EINVAL)
 #else
-int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma);
+int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma);
 #endif
 
 #ifndef pci_root_bus_fwnode
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 5a9bf15d4461..cb7f59821923 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1596,7 +1596,7 @@ static inline void perf_event_task_sched_out(struct task_struct *prev,
 		__perf_event_task_sched_out(prev, next);
 }
 
-extern void perf_event_mmap(struct vm_area_struct *vma);
+extern void perf_event_mmap(struct mm_area *vma);
 
 extern void perf_event_ksymbol(u16 ksym_type, u64 addr, u32 len,
 			       bool unregister, const char *sym);
@@ -1889,7 +1889,7 @@ perf_sw_event(u32 event_id, u64 nr, struct pt_regs *regs, u64 addr)	{ }
 static inline void
 perf_bp_event(struct perf_event *event, void *data)			{ }
 
-static inline void perf_event_mmap(struct vm_area_struct *vma)		{ }
+static inline void perf_event_mmap(struct mm_area *vma)			{ }
 
 typedef int (perf_ksymbol_get_name_f)(char *name, int name_len, void *data);
 static inline void perf_event_ksymbol(u16 ksym_type, u64 addr, u32 len,
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e2b705c14945..eb50af52018b 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -303,28 +303,28 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
 
 #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
-extern int ptep_set_access_flags(struct vm_area_struct *vma,
+extern int ptep_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pte_t *ptep,
 				 pte_t entry, int dirty);
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern int pmdp_set_access_flags(struct vm_area_struct *vma,
+extern int pmdp_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pmd_t *pmdp,
 				 pmd_t entry, int dirty);
-extern int pudp_set_access_flags(struct vm_area_struct *vma,
+extern int pudp_set_access_flags(struct mm_area *vma,
 				 unsigned long address, pud_t *pudp,
 				 pud_t entry, int dirty);
 #else
-static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
+static inline int pmdp_set_access_flags(struct mm_area *vma,
 					unsigned long address, pmd_t *pmdp,
 					pmd_t entry, int dirty)
 {
 	BUILD_BUG();
 	return 0;
 }
-static inline int pudp_set_access_flags(struct vm_area_struct *vma,
+static inline int pudp_set_access_flags(struct mm_area *vma,
 					unsigned long address, pud_t *pudp,
 					pud_t entry, int dirty)
 {
@@ -370,7 +370,7 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+static inline int ptep_test_and_clear_young(struct mm_area *vma,
 					    unsigned long address,
 					    pte_t *ptep)
 {
@@ -386,7 +386,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 
 #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
-static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+static inline int pmdp_test_and_clear_young(struct mm_area *vma,
 					    unsigned long address,
 					    pmd_t *pmdp)
 {
@@ -399,7 +399,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 	return r;
 }
 #else
-static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+static inline int pmdp_test_and_clear_young(struct mm_area *vma,
 					    unsigned long address,
 					    pmd_t *pmdp)
 {
@@ -410,20 +410,20 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-int ptep_clear_flush_young(struct vm_area_struct *vma,
+int ptep_clear_flush_young(struct mm_area *vma,
 			   unsigned long address, pte_t *ptep);
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
+extern int pmdp_clear_flush_young(struct mm_area *vma,
 				  unsigned long address, pmd_t *pmdp);
 #else
 /*
  * Although this API is relevant only to THP, it is called from generic rmap
  * code under PageTransHuge(), hence it needs a dummy implementation for !THP
  */
-static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
+static inline int pmdp_clear_flush_young(struct mm_area *vma,
 					 unsigned long address, pmd_t *pmdp)
 {
 	BUILD_BUG();
@@ -457,21 +457,21 @@ static inline bool arch_has_hw_pte_young(void)
 #endif
 
 #ifndef arch_check_zapped_pte
-static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
+static inline void arch_check_zapped_pte(struct mm_area *vma,
 					 pte_t pte)
 {
 }
 #endif
 
 #ifndef arch_check_zapped_pmd
-static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
+static inline void arch_check_zapped_pmd(struct mm_area *vma,
 					 pmd_t pmd)
 {
 }
 #endif
 
 #ifndef arch_check_zapped_pud
-static inline void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
+static inline void arch_check_zapped_pud(struct mm_area *vma, pud_t pud)
 {
 }
 #endif
@@ -507,7 +507,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
  * Context: The caller holds the page table lock.  The PTEs map consecutive
  * pages that belong to the same folio.  The PTEs are all in the same PMD.
  */
-static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
+static inline void clear_young_dirty_ptes(struct mm_area *vma,
 					  unsigned long addr, pte_t *ptep,
 					  unsigned int nr, cydp_t flags)
 {
@@ -659,7 +659,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #ifndef __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
-static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
+static inline pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
 					    unsigned long address, pmd_t *pmdp,
 					    int full)
 {
@@ -668,7 +668,7 @@ static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR_FULL
-static inline pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
+static inline pud_t pudp_huge_get_and_clear_full(struct mm_area *vma,
 					    unsigned long address, pud_t *pudp,
 					    int full)
 {
@@ -766,13 +766,13 @@ static inline void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
  * This is what distinguishes it from update_mmu_cache().
  */
 #ifndef update_mmu_tlb_range
-static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
+static inline void update_mmu_tlb_range(struct mm_area *vma,
 				unsigned long address, pte_t *ptep, unsigned int nr)
 {
 }
 #endif
 
-static inline void update_mmu_tlb(struct vm_area_struct *vma,
+static inline void update_mmu_tlb(struct mm_area *vma,
 				unsigned long address, pte_t *ptep)
 {
 	update_mmu_tlb_range(vma, address, ptep, 1);
@@ -823,29 +823,29 @@ static inline void clear_not_present_full_ptes(struct mm_struct *mm,
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
-extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
+extern pte_t ptep_clear_flush(struct mm_area *vma,
 			      unsigned long address,
 			      pte_t *ptep);
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
-extern pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
+extern pmd_t pmdp_huge_clear_flush(struct mm_area *vma,
 			      unsigned long address,
 			      pmd_t *pmdp);
-extern pud_t pudp_huge_clear_flush(struct vm_area_struct *vma,
+extern pud_t pudp_huge_clear_flush(struct mm_area *vma,
 			      unsigned long address,
 			      pud_t *pudp);
 #endif
 
 #ifndef pte_mkwrite
-static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
+static inline pte_t pte_mkwrite(pte_t pte, struct mm_area *vma)
 {
 	return pte_mkwrite_novma(pte);
 }
 #endif
 
 #if defined(CONFIG_ARCH_WANT_PMD_MKWRITE) && !defined(pmd_mkwrite)
-static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
+static inline pmd_t pmd_mkwrite(pmd_t pmd, struct mm_area *vma)
 {
 	return pmd_mkwrite_novma(pmd);
 }
@@ -945,10 +945,10 @@ static inline void pudp_set_wrprotect(struct mm_struct *mm,
 
 #ifndef pmdp_collapse_flush
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+extern pmd_t pmdp_collapse_flush(struct mm_area *vma,
 				 unsigned long address, pmd_t *pmdp);
 #else
-static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+static inline pmd_t pmdp_collapse_flush(struct mm_area *vma,
 					unsigned long address,
 					pmd_t *pmdp)
 {
@@ -978,7 +978,7 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
  * architecture that doesn't have hardware dirty/accessed bits. In this case we
  * cannot race with a CPU setting these bits, so a non-atomic approach is fine.
  */
-static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
+static inline pmd_t generic_pmdp_establish(struct mm_area *vma,
 		unsigned long address, pmd_t *pmdp, pmd_t pmd)
 {
 	pmd_t old_pmd = *pmdp;
@@ -988,7 +988,7 @@ static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE
-extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+extern pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
 			    pmd_t *pmdp);
 #endif
 
@@ -1008,7 +1008,7 @@ extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
  * to batch these TLB flushing operations, so fewer TLB flush operations are
  * needed.
  */
-extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
+extern pmd_t pmdp_invalidate_ad(struct mm_area *vma,
 				unsigned long address, pmd_t *pmdp);
 #endif
 
@@ -1088,7 +1088,7 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
 
 #ifndef __HAVE_ARCH_DO_SWAP_PAGE
 static inline void arch_do_swap_page_nr(struct mm_struct *mm,
-				     struct vm_area_struct *vma,
+				     struct mm_area *vma,
 				     unsigned long addr,
 				     pte_t pte, pte_t oldpte,
 				     int nr)
@@ -1105,7 +1105,7 @@ static inline void arch_do_swap_page_nr(struct mm_struct *mm,
  * metadata when a page is swapped back in.
  */
 static inline void arch_do_swap_page_nr(struct mm_struct *mm,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					unsigned long addr,
 					pte_t pte, pte_t oldpte,
 					int nr)
@@ -1128,7 +1128,7 @@ static inline void arch_do_swap_page_nr(struct mm_struct *mm,
  * metadata on a swap-out of a page.
  */
 static inline int arch_unmap_one(struct mm_struct *mm,
-				  struct vm_area_struct *vma,
+				  struct mm_area *vma,
 				  unsigned long addr,
 				  pte_t orig_pte)
 {
@@ -1277,7 +1277,7 @@ static inline int pmd_none_or_clear_bad(pmd_t *pmd)
 	return 0;
 }
 
-static inline pte_t __ptep_modify_prot_start(struct vm_area_struct *vma,
+static inline pte_t __ptep_modify_prot_start(struct mm_area *vma,
 					     unsigned long addr,
 					     pte_t *ptep)
 {
@@ -1289,7 +1289,7 @@ static inline pte_t __ptep_modify_prot_start(struct vm_area_struct *vma,
 	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
 }
 
-static inline void __ptep_modify_prot_commit(struct vm_area_struct *vma,
+static inline void __ptep_modify_prot_commit(struct mm_area *vma,
 					     unsigned long addr,
 					     pte_t *ptep, pte_t pte)
 {
@@ -1315,7 +1315,7 @@ static inline void __ptep_modify_prot_commit(struct vm_area_struct *vma,
  * queue the update to be done at some later time.  The update must be
  * actually committed before the pte lock is released, however.
  */
-static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
+static inline pte_t ptep_modify_prot_start(struct mm_area *vma,
 					   unsigned long addr,
 					   pte_t *ptep)
 {
@@ -1326,7 +1326,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
  * Commit an update to a pte, leaving any hardware-controlled bits in
  * the PTE unmodified.
  */
-static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
+static inline void ptep_modify_prot_commit(struct mm_area *vma,
 					   unsigned long addr,
 					   pte_t *ptep, pte_t old_pte, pte_t pte)
 {
@@ -1493,7 +1493,7 @@ static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
  * track_pfn_remap is called when a _new_ pfn mapping is being established
  * by remap_pfn_range() for physical range indicated by pfn and size.
  */
-static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
+static inline int track_pfn_remap(struct mm_area *vma, pgprot_t *prot,
 				  unsigned long pfn, unsigned long addr,
 				  unsigned long size)
 {
@@ -1504,7 +1504,7 @@ static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
  * track_pfn_insert is called when a _new_ single pfn is established
  * by vmf_insert_pfn().
  */
-static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
+static inline void track_pfn_insert(struct mm_area *vma, pgprot_t *prot,
 				    pfn_t pfn)
 {
 }
@@ -1514,8 +1514,8 @@ static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
  * tables copied during copy_page_range(). On success, stores the pfn to be
  * passed to untrack_pfn_copy().
  */
-static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma, unsigned long *pfn)
+static inline int track_pfn_copy(struct mm_area *dst_vma,
+		struct mm_area *src_vma, unsigned long *pfn)
 {
 	return 0;
 }
@@ -1524,7 +1524,7 @@ static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
  * untrack_pfn_copy is called when a VM_PFNMAP VMA failed to copy during
  * copy_page_range(), but after track_pfn_copy() was already called.
  */
-static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
+static inline void untrack_pfn_copy(struct mm_area *dst_vma,
 		unsigned long pfn)
 {
 }
@@ -1534,7 +1534,7 @@ static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
  * untrack can be called for a specific region indicated by pfn and size or
  * for the entire vma (in which case pfn and size are zero).
  */
-static inline void untrack_pfn(struct vm_area_struct *vma,
+static inline void untrack_pfn(struct mm_area *vma,
 			       unsigned long pfn, unsigned long size,
 			       bool mm_wr_locked)
 {
@@ -1546,22 +1546,22 @@ static inline void untrack_pfn(struct vm_area_struct *vma,
  * 1) During mremap() on the src VMA after the page tables were moved.
  * 2) During fork() on the dst VMA, immediately after duplicating the src VMA.
  */
-static inline void untrack_pfn_clear(struct vm_area_struct *vma)
+static inline void untrack_pfn_clear(struct mm_area *vma)
 {
 }
 #else
-extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
+extern int track_pfn_remap(struct mm_area *vma, pgprot_t *prot,
 			   unsigned long pfn, unsigned long addr,
 			   unsigned long size);
-extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
+extern void track_pfn_insert(struct mm_area *vma, pgprot_t *prot,
 			     pfn_t pfn);
-extern int track_pfn_copy(struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma, unsigned long *pfn);
-extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
+extern int track_pfn_copy(struct mm_area *dst_vma,
+		struct mm_area *src_vma, unsigned long *pfn);
+extern void untrack_pfn_copy(struct mm_area *dst_vma,
 		unsigned long pfn);
-extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+extern void untrack_pfn(struct mm_area *vma, unsigned long pfn,
 			unsigned long size, bool mm_wr_locked);
-extern void untrack_pfn_clear(struct vm_area_struct *vma);
+extern void untrack_pfn_clear(struct mm_area *vma);
 #endif
 
 #ifdef CONFIG_MMU
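
As a reminder of the protocol the ptep_modify_prot_start()/_commit() comments
above describe, a caller holding the pte lock performs the update
transactionally.  A sketch (foo_make_pte_writable is hypothetical):

	static void foo_make_pte_writable(struct mm_area *vma,
					  unsigned long addr, pte_t *ptep)
	{
		pte_t old_pte, pte;

		/* start() logically clears the PTE so hardware can't touch it... */
		old_pte = ptep_modify_prot_start(vma, addr, ptep);
		pte = pte_mkwrite(pte_mkdirty(old_pte), vma);
		/* ...and commit() must land before the pte lock is dropped. */
		ptep_modify_prot_commit(vma, addr, ptep, old_pte, pte);
	}
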
diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
index 86be8bf27b41..71db2f7ec326 100644
--- a/include/linux/pkeys.h
+++ b/include/linux/pkeys.h
@@ -15,7 +15,7 @@
 #define PKEY_DEDICATED_EXECUTE_ONLY 0
 #define ARCH_VM_PKEY_FLAGS 0
 
-static inline int vma_pkey(struct vm_area_struct *vma)
+static inline int vma_pkey(struct mm_area *vma)
 {
 	return 0;
 }
diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
index ea62201c74c4..b123101b135e 100644
--- a/include/linux/proc_fs.h
+++ b/include/linux/proc_fs.h
@@ -43,7 +43,7 @@ struct proc_ops {
 #ifdef CONFIG_COMPAT
 	long	(*proc_compat_ioctl)(struct file *, unsigned int, unsigned long);
 #endif
-	int	(*proc_mmap)(struct file *, struct vm_area_struct *);
+	int	(*proc_mmap)(struct file *, struct mm_area *);
 	unsigned long (*proc_get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
 } __randomize_layout;
 
diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 56e27263acf8..d7bed10786f6 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -245,7 +245,7 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
 #endif
 
 int ring_buffer_map(struct trace_buffer *buffer, int cpu,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
 int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 6b82b618846e..6e0a7da7a80a 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -81,7 +81,7 @@ struct anon_vma {
  * which link all the VMAs associated with this anon_vma.
  */
 struct anon_vma_chain {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct anon_vma *anon_vma;
 	struct list_head same_vma;   /* locked by mmap_lock & page_table_lock */
 	struct rb_node rb;			/* locked by anon_vma->rwsem */
@@ -152,12 +152,12 @@ static inline void anon_vma_unlock_read(struct anon_vma *anon_vma)
  * anon_vma helper functions.
  */
 void anon_vma_init(void);	/* create anon_vma_cachep */
-int  __anon_vma_prepare(struct vm_area_struct *);
-void unlink_anon_vmas(struct vm_area_struct *);
-int anon_vma_clone(struct vm_area_struct *, struct vm_area_struct *);
-int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
+int  __anon_vma_prepare(struct mm_area *);
+void unlink_anon_vmas(struct mm_area *);
+int anon_vma_clone(struct mm_area *, struct mm_area *);
+int anon_vma_fork(struct mm_area *, struct mm_area *);
 
-static inline int anon_vma_prepare(struct vm_area_struct *vma)
+static inline int anon_vma_prepare(struct mm_area *vma)
 {
 	if (likely(vma->anon_vma))
 		return 0;
@@ -165,8 +165,8 @@ static inline int anon_vma_prepare(struct vm_area_struct *vma)
 	return __anon_vma_prepare(vma);
 }
 
-static inline void anon_vma_merge(struct vm_area_struct *vma,
-				  struct vm_area_struct *next)
+static inline void anon_vma_merge(struct mm_area *vma,
+				  struct mm_area *next)
 {
 	VM_BUG_ON_VMA(vma->anon_vma != next->anon_vma, vma);
 	unlink_anon_vmas(next);
@@ -227,7 +227,7 @@ static inline void __folio_large_mapcount_sanity_checks(const struct folio *foli
 }
 
 static __always_inline void folio_set_large_mapcount(struct folio *folio,
-		int mapcount, struct vm_area_struct *vma)
+		int mapcount, struct mm_area *vma)
 {
 	__folio_large_mapcount_sanity_checks(folio, mapcount, vma->vm_mm->mm_id);
 
@@ -241,7 +241,7 @@ static __always_inline void folio_set_large_mapcount(struct folio *folio,
 }
 
 static __always_inline int folio_add_return_large_mapcount(struct folio *folio,
-		int diff, struct vm_area_struct *vma)
+		int diff, struct mm_area *vma)
 {
 	const mm_id_t mm_id = vma->vm_mm->mm_id;
 	int new_mapcount_val;
@@ -291,7 +291,7 @@ static __always_inline int folio_add_return_large_mapcount(struct folio *folio,
 #define folio_add_large_mapcount folio_add_return_large_mapcount
 
 static __always_inline int folio_sub_return_large_mapcount(struct folio *folio,
-		int diff, struct vm_area_struct *vma)
+		int diff, struct mm_area *vma)
 {
 	const mm_id_t mm_id = vma->vm_mm->mm_id;
 	int new_mapcount_val;
@@ -342,32 +342,32 @@ static __always_inline int folio_sub_return_large_mapcount(struct folio *folio,
  * CONFIG_TRANSPARENT_HUGEPAGE. We'll keep that working for now.
  */
 static inline void folio_set_large_mapcount(struct folio *folio, int mapcount,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	/* Note: mapcounts start at -1. */
 	atomic_set(&folio->_large_mapcount, mapcount - 1);
 }
 
 static inline void folio_add_large_mapcount(struct folio *folio,
-		int diff, struct vm_area_struct *vma)
+		int diff, struct mm_area *vma)
 {
 	atomic_add(diff, &folio->_large_mapcount);
 }
 
 static inline int folio_add_return_large_mapcount(struct folio *folio,
-		int diff, struct vm_area_struct *vma)
+		int diff, struct mm_area *vma)
 {
 	BUILD_BUG();
 }
 
 static inline void folio_sub_large_mapcount(struct folio *folio,
-		int diff, struct vm_area_struct *vma)
+		int diff, struct mm_area *vma)
 {
 	atomic_sub(diff, &folio->_large_mapcount);
 }
 
 static inline int folio_sub_return_large_mapcount(struct folio *folio,
-		int diff, struct vm_area_struct *vma)
+		int diff, struct mm_area *vma)
 {
 	BUILD_BUG();
 }
@@ -454,40 +454,40 @@ static inline void __folio_rmap_sanity_checks(const struct folio *folio,
 /*
  * rmap interfaces called when adding or removing pte of page
  */
-void folio_move_anon_rmap(struct folio *, struct vm_area_struct *);
+void folio_move_anon_rmap(struct folio *, struct mm_area *);
 void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
-		struct vm_area_struct *, unsigned long address, rmap_t flags);
+		struct mm_area *, unsigned long address, rmap_t flags);
 #define folio_add_anon_rmap_pte(folio, page, vma, address, flags) \
 	folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
 void folio_add_anon_rmap_pmd(struct folio *, struct page *,
-		struct vm_area_struct *, unsigned long address, rmap_t flags);
-void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
+		struct mm_area *, unsigned long address, rmap_t flags);
+void folio_add_new_anon_rmap(struct folio *, struct mm_area *,
 		unsigned long address, rmap_t flags);
 void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
-		struct vm_area_struct *);
+		struct mm_area *);
 #define folio_add_file_rmap_pte(folio, page, vma) \
 	folio_add_file_rmap_ptes(folio, page, 1, vma)
 void folio_add_file_rmap_pmd(struct folio *, struct page *,
-		struct vm_area_struct *);
+		struct mm_area *);
 void folio_add_file_rmap_pud(struct folio *, struct page *,
-		struct vm_area_struct *);
+		struct mm_area *);
 void folio_remove_rmap_ptes(struct folio *, struct page *, int nr_pages,
-		struct vm_area_struct *);
+		struct mm_area *);
 #define folio_remove_rmap_pte(folio, page, vma) \
 	folio_remove_rmap_ptes(folio, page, 1, vma)
 void folio_remove_rmap_pmd(struct folio *, struct page *,
-		struct vm_area_struct *);
+		struct mm_area *);
 void folio_remove_rmap_pud(struct folio *, struct page *,
-		struct vm_area_struct *);
+		struct mm_area *);
 
-void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
+void hugetlb_add_anon_rmap(struct folio *, struct mm_area *,
 		unsigned long address, rmap_t flags);
-void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
+void hugetlb_add_new_anon_rmap(struct folio *, struct mm_area *,
 		unsigned long address);
 
 /* See folio_try_dup_anon_rmap_*() */
 static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
@@ -544,7 +544,7 @@ static inline void hugetlb_remove_rmap(struct folio *folio)
 }
 
 static __always_inline void __folio_dup_file_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
+		struct page *page, int nr_pages, struct mm_area *dst_vma,
 		enum rmap_level level)
 {
 	const int orig_nr_pages = nr_pages;
@@ -585,13 +585,13 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
  * The caller needs to hold the page table lock.
  */
 static inline void folio_dup_file_rmap_ptes(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *dst_vma)
+		struct page *page, int nr_pages, struct mm_area *dst_vma)
 {
 	__folio_dup_file_rmap(folio, page, nr_pages, dst_vma, RMAP_LEVEL_PTE);
 }
 
 static __always_inline void folio_dup_file_rmap_pte(struct folio *folio,
-		struct page *page, struct vm_area_struct *dst_vma)
+		struct page *page, struct mm_area *dst_vma)
 {
 	__folio_dup_file_rmap(folio, page, 1, dst_vma, RMAP_LEVEL_PTE);
 }
@@ -607,7 +607,7 @@ static __always_inline void folio_dup_file_rmap_pte(struct folio *folio,
  * The caller needs to hold the page table lock.
  */
 static inline void folio_dup_file_rmap_pmd(struct folio *folio,
-		struct page *page, struct vm_area_struct *dst_vma)
+		struct page *page, struct mm_area *dst_vma)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	__folio_dup_file_rmap(folio, page, HPAGE_PMD_NR, dst_vma, RMAP_LEVEL_PMD);
@@ -617,8 +617,8 @@ static inline void folio_dup_file_rmap_pmd(struct folio *folio,
 }
 
 static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma, enum rmap_level level)
+		struct page *page, int nr_pages, struct mm_area *dst_vma,
+		struct mm_area *src_vma, enum rmap_level level)
 {
 	const int orig_nr_pages = nr_pages;
 	bool maybe_pinned;
@@ -704,16 +704,16 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
  * Returns 0 if duplicating the mappings succeeded. Returns -EBUSY otherwise.
  */
 static inline int folio_try_dup_anon_rmap_ptes(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma)
+		struct page *page, int nr_pages, struct mm_area *dst_vma,
+		struct mm_area *src_vma)
 {
 	return __folio_try_dup_anon_rmap(folio, page, nr_pages, dst_vma,
 					 src_vma, RMAP_LEVEL_PTE);
 }
 
 static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
-		struct page *page, struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma)
+		struct page *page, struct mm_area *dst_vma,
+		struct mm_area *src_vma)
 {
 	return __folio_try_dup_anon_rmap(folio, page, 1, dst_vma, src_vma,
 					 RMAP_LEVEL_PTE);
@@ -743,8 +743,8 @@ static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
  * Returns 0 if duplicating the mapping succeeded. Returns -EBUSY otherwise.
  */
 static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
-		struct page *page, struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma)
+		struct page *page, struct mm_area *dst_vma,
+		struct mm_area *src_vma)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	return __folio_try_dup_anon_rmap(folio, page, HPAGE_PMD_NR, dst_vma,
@@ -910,7 +910,7 @@ struct page_vma_mapped_walk {
 	unsigned long pfn;
 	unsigned long nr_pages;
 	pgoff_t pgoff;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long address;
 	pmd_t *pmd;
 	pte_t *pte;
@@ -963,7 +963,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
 
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
 unsigned long page_address_in_vma(const struct folio *folio,
-		const struct page *, const struct vm_area_struct *);
+		const struct page *, const struct mm_area *);
 
 /*
  * Cleans the PTEs of shared mappings.
@@ -977,7 +977,7 @@ int mapping_wrprotect_range(struct address_space *mapping, pgoff_t pgoff,
 		unsigned long pfn, unsigned long nr_pages);
 
 int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
-		      struct vm_area_struct *vma);
+		      struct mm_area *vma);
 
 enum rmp_flags {
 	RMP_LOCKED		= 1 << 0,
@@ -1005,12 +1005,12 @@ struct rmap_walk_control {
 	 * Return false if page table scanning in rmap_walk should be stopped.
 	 * Otherwise, return true.
 	 */
-	bool (*rmap_one)(struct folio *folio, struct vm_area_struct *vma,
+	bool (*rmap_one)(struct folio *folio, struct mm_area *vma,
 					unsigned long addr, void *arg);
 	int (*done)(struct folio *folio);
 	struct anon_vma *(*anon_lock)(const struct folio *folio,
 				      struct rmap_walk_control *rwc);
-	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
+	bool (*invalid_vma)(struct mm_area *vma, void *arg);
 };
 
 void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc);
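
rmap_walk_control keeps its shape; only the callback argument type changes.
A toy walker that counts the VMAs mapping a folio (hypothetical, and it
assumes the folio is already locked as rmap_walk() requires):

	static bool foo_count_one(struct folio *folio, struct mm_area *vma,
				  unsigned long addr, void *arg)
	{
		int *nr_vmas = arg;

		(*nr_vmas)++;
		return true;	/* keep scanning the remaining VMAs */
	}

	static int foo_count_mappings(struct folio *folio)
	{
		int nr_vmas = 0;
		struct rmap_walk_control rwc = {
			.rmap_one	= foo_count_one,
			.arg		= &nr_vmas,
		};

		rmap_walk(folio, &rwc);
		return nr_vmas;
	}
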
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index e918f96881f5..a38896f49499 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -11,12 +11,12 @@ static inline bool secretmem_mapping(struct address_space *mapping)
 	return mapping->a_ops == &secretmem_aops;
 }
 
-bool vma_is_secretmem(struct vm_area_struct *vma);
+bool vma_is_secretmem(struct mm_area *vma);
 bool secretmem_active(void);
 
 #else
 
-static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+static inline bool vma_is_secretmem(struct mm_area *vma)
 {
 	return false;
 }
diff --git a/include/linux/security.h b/include/linux/security.h
index cc9b54d95d22..8478e56ee173 100644
--- a/include/linux/security.h
+++ b/include/linux/security.h
@@ -476,7 +476,7 @@ int security_file_ioctl_compat(struct file *file, unsigned int cmd,
 int security_mmap_file(struct file *file, unsigned long prot,
 			unsigned long flags);
 int security_mmap_addr(unsigned long addr);
-int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
+int security_file_mprotect(struct mm_area *vma, unsigned long reqprot,
 			   unsigned long prot);
 int security_file_lock(struct file *file, unsigned int cmd);
 int security_file_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
@@ -1151,7 +1151,7 @@ static inline int security_mmap_addr(unsigned long addr)
 	return cap_mmap_addr(addr);
 }
 
-static inline int security_file_mprotect(struct vm_area_struct *vma,
+static inline int security_file_mprotect(struct mm_area *vma,
 					 unsigned long reqprot,
 					 unsigned long prot)
 {
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 0b273a7b9f01..e3913a29f10e 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -92,7 +92,7 @@ extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
 					    unsigned long flags);
 extern struct file *shmem_file_setup_with_mnt(struct vfsmount *mnt,
 		const char *name, loff_t size, unsigned long flags);
-extern int shmem_zero_setup(struct vm_area_struct *);
+extern int shmem_zero_setup(struct mm_area *);
 extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 extern int shmem_lock(struct file *file, int lock, struct ucounts *ucounts);
@@ -112,12 +112,12 @@ int shmem_unuse(unsigned int type);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
-				struct vm_area_struct *vma, pgoff_t index,
+				struct mm_area *vma, pgoff_t index,
 				loff_t write_end, bool shmem_huge_force);
 bool shmem_hpage_pmd_enabled(void);
 #else
 static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
-				struct vm_area_struct *vma, pgoff_t index,
+				struct mm_area *vma, pgoff_t index,
 				loff_t write_end, bool shmem_huge_force)
 {
 	return 0;
@@ -130,9 +130,9 @@ static inline bool shmem_hpage_pmd_enabled(void)
 #endif
 
 #ifdef CONFIG_SHMEM
-extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
+extern unsigned long shmem_swap_usage(struct mm_area *vma);
 #else
-static inline unsigned long shmem_swap_usage(struct vm_area_struct *vma)
+static inline unsigned long shmem_swap_usage(struct mm_area *vma)
 {
 	return 0;
 }
@@ -194,7 +194,7 @@ extern void shmem_uncharge(struct inode *inode, long pages);
 #ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
 extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
+				  struct mm_area *dst_vma,
 				  unsigned long dst_addr,
 				  unsigned long src_addr,
 				  uffd_flags_t flags,
diff --git a/include/linux/swap.h b/include/linux/swap.h
index db46b25a65ae..1652caa8ceed 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -380,7 +380,7 @@ void lru_note_cost(struct lruvec *lruvec, bool file,
 		   unsigned int nr_io, unsigned int nr_rotated);
 void lru_note_cost_refault(struct folio *);
 void folio_add_lru(struct folio *);
-void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
+void folio_add_lru_vma(struct folio *, struct mm_area *);
 void mark_page_accessed(struct page *);
 void folio_mark_accessed(struct folio *);
 
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 64ea151a7ae3..697e5d60b776 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -315,7 +315,7 @@ static inline bool is_migration_entry_dirty(swp_entry_t entry)
 
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address);
-extern void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *pte);
+extern void migration_entry_wait_huge(struct mm_area *vma, unsigned long addr, pte_t *pte);
 #else  /* CONFIG_MIGRATION */
 static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
@@ -339,7 +339,7 @@ static inline int is_migration_entry(swp_entry_t swp)
 
 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address) { }
-static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
+static inline void migration_entry_wait_huge(struct mm_area *vma,
 					     unsigned long addr, pte_t *pte) { }
 static inline int is_writable_migration_entry(swp_entry_t entry)
 {
diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
index 18f7e1fd093c..4b1c38978498 100644
--- a/include/linux/sysfs.h
+++ b/include/linux/sysfs.h
@@ -298,7 +298,7 @@ static const struct attribute_group _name##_group = {		\
 __ATTRIBUTE_GROUPS(_name)
 
 struct file;
-struct vm_area_struct;
+struct mm_area;
 struct address_space;
 
 struct bin_attribute {
@@ -317,7 +317,7 @@ struct bin_attribute {
 	loff_t (*llseek)(struct file *, struct kobject *, const struct bin_attribute *,
 			 loff_t, int);
 	int (*mmap)(struct file *, struct kobject *, const struct bin_attribute *attr,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 };
 
 /**
diff --git a/include/linux/time_namespace.h b/include/linux/time_namespace.h
index 0b8b32bf0655..12b3ecc86fe6 100644
--- a/include/linux/time_namespace.h
+++ b/include/linux/time_namespace.h
@@ -12,7 +12,7 @@
 struct user_namespace;
 extern struct user_namespace init_user_ns;
 
-struct vm_area_struct;
+struct mm_area;
 
 struct timens_offsets {
 	struct timespec64 monotonic;
@@ -47,7 +47,7 @@ struct time_namespace *copy_time_ns(unsigned long flags,
 				    struct time_namespace *old_ns);
 void free_time_ns(struct time_namespace *ns);
 void timens_on_fork(struct nsproxy *nsproxy, struct task_struct *tsk);
-struct page *find_timens_vvar_page(struct vm_area_struct *vma);
+struct page *find_timens_vvar_page(struct mm_area *vma);
 
 static inline void put_time_ns(struct time_namespace *ns)
 {
@@ -144,7 +144,7 @@ static inline void timens_on_fork(struct nsproxy *nsproxy,
 	return;
 }
 
-static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma)
+static inline struct page *find_timens_vvar_page(struct mm_area *vma)
 {
 	return NULL;
 }
diff --git a/include/linux/uacce.h b/include/linux/uacce.h
index e290c0269944..dcb2b94de9f1 100644
--- a/include/linux/uacce.h
+++ b/include/linux/uacce.h
@@ -43,7 +43,7 @@ struct uacce_ops {
 	int (*start_queue)(struct uacce_queue *q);
 	void (*stop_queue)(struct uacce_queue *q);
 	int (*is_q_updated)(struct uacce_queue *q);
-	int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma,
+	int (*mmap)(struct uacce_queue *q, struct mm_area *vma,
 		    struct uacce_qfile_region *qfr);
 	long (*ioctl)(struct uacce_queue *q, unsigned int cmd,
 		      unsigned long arg);
diff --git a/include/linux/uio_driver.h b/include/linux/uio_driver.h
index 18238dc8bfd3..69fdc49c1df4 100644
--- a/include/linux/uio_driver.h
+++ b/include/linux/uio_driver.h
@@ -112,7 +112,7 @@ struct uio_info {
 	unsigned long		irq_flags;
 	void			*priv;
 	irqreturn_t (*handler)(int irq, struct uio_info *dev_info);
-	int (*mmap)(struct uio_info *info, struct vm_area_struct *vma);
+	int (*mmap)(struct uio_info *info, struct mm_area *vma);
 	int (*open)(struct uio_info *info, struct inode *inode);
 	int (*release)(struct uio_info *info, struct inode *inode);
 	int (*irqcontrol)(struct uio_info *info, s32 irq_on);
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 2e46b69ff0a6..f8af45f0c683 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -19,7 +19,7 @@
 #include <linux/seqlock.h>
 
 struct uprobe;
-struct vm_area_struct;
+struct mm_area;
 struct mm_struct;
 struct inode;
 struct notifier_block;
@@ -199,8 +199,8 @@ extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t
 extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
 extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
 extern void uprobe_unregister_sync(void);
-extern int uprobe_mmap(struct vm_area_struct *vma);
-extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
+extern int uprobe_mmap(struct mm_area *vma);
+extern void uprobe_munmap(struct mm_area *vma, unsigned long start, unsigned long end);
 extern void uprobe_start_dup_mmap(void);
 extern void uprobe_end_dup_mmap(void);
 extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
@@ -253,12 +253,12 @@ uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc)
 static inline void uprobe_unregister_sync(void)
 {
 }
-static inline int uprobe_mmap(struct vm_area_struct *vma)
+static inline int uprobe_mmap(struct mm_area *vma)
 {
 	return 0;
 }
 static inline void
-uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+uprobe_munmap(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 }
 static inline void uprobe_start_dup_mmap(void)
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 75342022d144..6b45a807875d 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -116,7 +116,7 @@ static inline uffd_flags_t uffd_flags_set_mode(uffd_flags_t flags, enum mfill_at
 #define MFILL_ATOMIC_WP MFILL_ATOMIC_FLAG(0)
 
 extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
+				    struct mm_area *dst_vma,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, uffd_flags_t flags);
 
@@ -132,7 +132,7 @@ extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long st
 				   unsigned long len, uffd_flags_t flags);
 extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
 			       unsigned long len, bool enable_wp);
-extern long uffd_wp_range(struct vm_area_struct *vma,
+extern long uffd_wp_range(struct mm_area *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
 
 /* move_pages */
@@ -141,12 +141,12 @@ void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
 ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 		   unsigned long src_start, unsigned long len, __u64 flags);
 int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
-			struct vm_area_struct *dst_vma,
-			struct vm_area_struct *src_vma,
+			struct mm_area *dst_vma,
+			struct mm_area *src_vma,
 			unsigned long dst_addr, unsigned long src_addr);
 
 /* mm helpers */
-static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
+static inline bool is_mergeable_vm_userfaultfd_ctx(struct mm_area *vma,
 					struct vm_userfaultfd_ctx vm_ctx)
 {
 	return vma->vm_userfaultfd_ctx.ctx == vm_ctx.ctx;
@@ -163,7 +163,7 @@ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
  *   with huge pmd sharing this would *also* setup the second UFFD-registered
  *   mapping, and we'd not get minor faults.)
  */
-static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
+static inline bool uffd_disable_huge_pmd_share(struct mm_area *vma)
 {
 	return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
 }
@@ -175,44 +175,44 @@ static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
  * as the fault around checks for pte_none() before the installation, however
  * to be super safe we just forbid it.
  */
-static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
+static inline bool uffd_disable_fault_around(struct mm_area *vma)
 {
 	return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
 }
 
-static inline bool userfaultfd_missing(struct vm_area_struct *vma)
+static inline bool userfaultfd_missing(struct mm_area *vma)
 {
 	return vma->vm_flags & VM_UFFD_MISSING;
 }
 
-static inline bool userfaultfd_wp(struct vm_area_struct *vma)
+static inline bool userfaultfd_wp(struct mm_area *vma)
 {
 	return vma->vm_flags & VM_UFFD_WP;
 }
 
-static inline bool userfaultfd_minor(struct vm_area_struct *vma)
+static inline bool userfaultfd_minor(struct mm_area *vma)
 {
 	return vma->vm_flags & VM_UFFD_MINOR;
 }
 
-static inline bool userfaultfd_pte_wp(struct vm_area_struct *vma,
+static inline bool userfaultfd_pte_wp(struct mm_area *vma,
 				      pte_t pte)
 {
 	return userfaultfd_wp(vma) && pte_uffd_wp(pte);
 }
 
-static inline bool userfaultfd_huge_pmd_wp(struct vm_area_struct *vma,
+static inline bool userfaultfd_huge_pmd_wp(struct mm_area *vma,
 					   pmd_t pmd)
 {
 	return userfaultfd_wp(vma) && pmd_uffd_wp(pmd);
 }
 
-static inline bool userfaultfd_armed(struct vm_area_struct *vma)
+static inline bool userfaultfd_armed(struct mm_area *vma)
 {
 	return vma->vm_flags & __VM_UFFD_FLAGS;
 }
 
-static inline bool vma_can_userfault(struct vm_area_struct *vma,
+static inline bool vma_can_userfault(struct mm_area *vma,
 				     unsigned long vm_flags,
 				     bool wp_async)
 {
@@ -247,44 +247,44 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	    vma_is_shmem(vma);
 }
 
-static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
+static inline bool vma_has_uffd_without_event_remap(struct mm_area *vma)
 {
 	struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
 
 	return uffd_ctx && (uffd_ctx->features & UFFD_FEATURE_EVENT_REMAP) == 0;
 }
 
-extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
+extern int dup_userfaultfd(struct mm_area *, struct list_head *);
 extern void dup_userfaultfd_complete(struct list_head *);
 void dup_userfaultfd_fail(struct list_head *);
 
-extern void mremap_userfaultfd_prep(struct vm_area_struct *,
+extern void mremap_userfaultfd_prep(struct mm_area *,
 				    struct vm_userfaultfd_ctx *);
 extern void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *,
 					unsigned long from, unsigned long to,
 					unsigned long len);
 
-extern bool userfaultfd_remove(struct vm_area_struct *vma,
+extern bool userfaultfd_remove(struct mm_area *vma,
 			       unsigned long start,
 			       unsigned long end);
 
-extern int userfaultfd_unmap_prep(struct vm_area_struct *vma,
+extern int userfaultfd_unmap_prep(struct mm_area *vma,
 		unsigned long start, unsigned long end, struct list_head *uf);
 extern void userfaultfd_unmap_complete(struct mm_struct *mm,
 				       struct list_head *uf);
-extern bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma);
-extern bool userfaultfd_wp_async(struct vm_area_struct *vma);
+extern bool userfaultfd_wp_unpopulated(struct mm_area *vma);
+extern bool userfaultfd_wp_async(struct mm_area *vma);
 
-void userfaultfd_reset_ctx(struct vm_area_struct *vma);
+void userfaultfd_reset_ctx(struct mm_area *vma);
 
-struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
-					     struct vm_area_struct *prev,
-					     struct vm_area_struct *vma,
+struct mm_area *userfaultfd_clear_vma(struct vma_iterator *vmi,
+					     struct mm_area *prev,
+					     struct mm_area *vma,
 					     unsigned long start,
 					     unsigned long end);
 
 int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
-			       struct vm_area_struct *vma,
+			       struct mm_area *vma,
 			       unsigned long vm_flags,
 			       unsigned long start, unsigned long end,
 			       bool wp_async);
@@ -303,53 +303,53 @@ static inline vm_fault_t handle_userfault(struct vm_fault *vmf,
 	return VM_FAULT_SIGBUS;
 }
 
-static inline long uffd_wp_range(struct vm_area_struct *vma,
+static inline long uffd_wp_range(struct mm_area *vma,
 				 unsigned long start, unsigned long len,
 				 bool enable_wp)
 {
 	return false;
 }
 
-static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
+static inline bool is_mergeable_vm_userfaultfd_ctx(struct mm_area *vma,
 					struct vm_userfaultfd_ctx vm_ctx)
 {
 	return true;
 }
 
-static inline bool userfaultfd_missing(struct vm_area_struct *vma)
+static inline bool userfaultfd_missing(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline bool userfaultfd_wp(struct vm_area_struct *vma)
+static inline bool userfaultfd_wp(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline bool userfaultfd_minor(struct vm_area_struct *vma)
+static inline bool userfaultfd_minor(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline bool userfaultfd_pte_wp(struct vm_area_struct *vma,
+static inline bool userfaultfd_pte_wp(struct mm_area *vma,
 				      pte_t pte)
 {
 	return false;
 }
 
-static inline bool userfaultfd_huge_pmd_wp(struct vm_area_struct *vma,
+static inline bool userfaultfd_huge_pmd_wp(struct mm_area *vma,
 					   pmd_t pmd)
 {
 	return false;
 }
 
 
-static inline bool userfaultfd_armed(struct vm_area_struct *vma)
+static inline bool userfaultfd_armed(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline int dup_userfaultfd(struct vm_area_struct *vma,
+static inline int dup_userfaultfd(struct mm_area *vma,
 				  struct list_head *l)
 {
 	return 0;
@@ -363,7 +363,7 @@ static inline void dup_userfaultfd_fail(struct list_head *l)
 {
 }
 
-static inline void mremap_userfaultfd_prep(struct vm_area_struct *vma,
+static inline void mremap_userfaultfd_prep(struct mm_area *vma,
 					   struct vm_userfaultfd_ctx *ctx)
 {
 }
@@ -375,14 +375,14 @@ static inline void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *ctx,
 {
 }
 
-static inline bool userfaultfd_remove(struct vm_area_struct *vma,
+static inline bool userfaultfd_remove(struct mm_area *vma,
 				      unsigned long start,
 				      unsigned long end)
 {
 	return true;
 }
 
-static inline int userfaultfd_unmap_prep(struct vm_area_struct *vma,
+static inline int userfaultfd_unmap_prep(struct mm_area *vma,
 					 unsigned long start, unsigned long end,
 					 struct list_head *uf)
 {
@@ -394,29 +394,29 @@ static inline void userfaultfd_unmap_complete(struct mm_struct *mm,
 {
 }
 
-static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
+static inline bool uffd_disable_fault_around(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma)
+static inline bool userfaultfd_wp_unpopulated(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline bool userfaultfd_wp_async(struct vm_area_struct *vma)
+static inline bool userfaultfd_wp_async(struct mm_area *vma)
 {
 	return false;
 }
 
-static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
+static inline bool vma_has_uffd_without_event_remap(struct mm_area *vma)
 {
 	return false;
 }
 
 #endif /* CONFIG_USERFAULTFD */
 
-static inline bool userfaultfd_wp_use_markers(struct vm_area_struct *vma)
+static inline bool userfaultfd_wp_use_markers(struct mm_area *vma)
 {
 	/* Only wr-protect mode uses pte markers */
 	if (!userfaultfd_wp(vma))
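
To illustrate how these predicates are consumed by callers (a minimal
sketch, not part of this patch; example_fault() is hypothetical):

static vm_fault_t example_fault(struct vm_fault *vmf)
{
	struct mm_area *vma = vmf->vma;

	/*
	 * The !CONFIG_USERFAULTFD stubs above compile to constant
	 * false, so callers need no #ifdefs; the branch folds away.
	 */
	if (userfaultfd_missing(vma))
		return handle_userfault(vmf, VM_UFFD_MISSING);

	/* ... normal fault handling ... */
	return 0;
}
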
diff --git a/include/linux/vdso_datastore.h b/include/linux/vdso_datastore.h
index a91fa24b06e0..8523a57ba6c0 100644
--- a/include/linux/vdso_datastore.h
+++ b/include/linux/vdso_datastore.h
@@ -5,6 +5,6 @@
 #include <linux/mm_types.h>
 
 extern const struct vm_special_mapping vdso_vvar_mapping;
-struct vm_area_struct *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr);
+struct mm_area *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr);
 
 #endif /* _LINUX_VDSO_DATASTORE_H */
diff --git a/include/linux/vfio.h b/include/linux/vfio.h
index 707b00772ce1..3830567b796e 100644
--- a/include/linux/vfio.h
+++ b/include/linux/vfio.h
@@ -129,7 +129,7 @@ struct vfio_device_ops {
 			 size_t count, loff_t *size);
 	long	(*ioctl)(struct vfio_device *vdev, unsigned int cmd,
 			 unsigned long arg);
-	int	(*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
+	int	(*mmap)(struct vfio_device *vdev, struct mm_area *vma);
 	void	(*request)(struct vfio_device *vdev, unsigned int count);
 	int	(*match)(struct vfio_device *vdev, char *buf);
 	void	(*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
index fbb472dd99b3..0dcef04e3e8c 100644
--- a/include/linux/vfio_pci_core.h
+++ b/include/linux/vfio_pci_core.h
@@ -34,7 +34,7 @@ struct vfio_pci_regops {
 			   struct vfio_pci_region *region);
 	int	(*mmap)(struct vfio_pci_core_device *vdev,
 			struct vfio_pci_region *region,
-			struct vm_area_struct *vma);
+			struct mm_area *vma);
 	int	(*add_capability)(struct vfio_pci_core_device *vdev,
 				  struct vfio_pci_region *region,
 				  struct vfio_info_cap *caps);
@@ -119,7 +119,7 @@ ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
 		size_t count, loff_t *ppos);
 ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *buf,
 		size_t count, loff_t *ppos);
-int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma);
+int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct mm_area *vma);
 void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count);
 int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf);
 int vfio_pci_core_enable(struct vfio_pci_core_device *vdev);
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e3..3e555eb63f36 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -14,7 +14,7 @@
 
 #include <asm/vmalloc.h>
 
-struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
+struct mm_area;		/* vma defining user mapping in mm_types.h */
 struct notifier_block;		/* in notifier.h */
 struct iov_iter;		/* in uio.h */
 
@@ -195,11 +195,11 @@ extern void *vmap(struct page **pages, unsigned int count,
 void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
 extern void vunmap(const void *addr);
 
-extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
+extern int remap_vmalloc_range_partial(struct mm_area *vma,
 				       unsigned long uaddr, void *kaddr,
 				       unsigned long pgoff, unsigned long size);
 
-extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+extern int remap_vmalloc_range(struct mm_area *vma, void *addr,
 							unsigned long pgoff);
 
 int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
diff --git a/include/media/dvb_vb2.h b/include/media/dvb_vb2.h
index 8cb88452cd6c..42956944bba4 100644
--- a/include/media/dvb_vb2.h
+++ b/include/media/dvb_vb2.h
@@ -270,11 +270,11 @@ int dvb_vb2_dqbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b);
  * dvb_vb2_mmap() - Wrapper to vb2_mmap() for Digital TV buffer handling.
  *
  * @ctx:	control struct for VB2 handler
- * @vma:        pointer to &struct vm_area_struct with the vma passed
+ * @vma:        pointer to &struct mm_area with the vma passed
  *              to the mmap file operation handler in the driver.
  *
  * map Digital TV video buffers into application address space.
  */
-int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct vm_area_struct *vma);
+int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct mm_area *vma);
 
 #endif /* _DVB_VB2_H */
diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
index 1b6222fab24e..caef335b7731 100644
--- a/include/media/v4l2-dev.h
+++ b/include/media/v4l2-dev.h
@@ -209,7 +209,7 @@ struct v4l2_file_operations {
 #endif
 	unsigned long (*get_unmapped_area) (struct file *, unsigned long,
 				unsigned long, unsigned long, unsigned long);
-	int (*mmap) (struct file *, struct vm_area_struct *);
+	int (*mmap) (struct file *, struct mm_area *);
 	int (*open) (struct file *);
 	int (*release) (struct file *);
 };
diff --git a/include/media/v4l2-mem2mem.h b/include/media/v4l2-mem2mem.h
index 0af330cf91c3..19ee65878a35 100644
--- a/include/media/v4l2-mem2mem.h
+++ b/include/media/v4l2-mem2mem.h
@@ -490,7 +490,7 @@ __poll_t v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
  *
  * @file: pointer to struct &file
  * @m2m_ctx: m2m context assigned to the instance given by struct &v4l2_m2m_ctx
- * @vma: pointer to struct &vm_area_struct
+ * @vma: pointer to &struct mm_area
  *
  * Call from driver's mmap() function. Will handle mmap() for both queues
  * seamlessly for the video buffer, which will receive normal per-queue offsets
@@ -500,7 +500,7 @@ __poll_t v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
  * thus applications) receive modified offsets.
  */
 int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
-		  struct vm_area_struct *vma);
+		  struct mm_area *vma);
 
 #ifndef CONFIG_MMU
 unsigned long v4l2_m2m_get_unmapped_area(struct file *file, unsigned long addr,
@@ -895,7 +895,7 @@ int v4l2_m2m_ioctl_stateless_try_decoder_cmd(struct file *file, void *fh,
 					     struct v4l2_decoder_cmd *dc);
 int v4l2_m2m_ioctl_stateless_decoder_cmd(struct file *file, void *priv,
 					 struct v4l2_decoder_cmd *dc);
-int v4l2_m2m_fop_mmap(struct file *file, struct vm_area_struct *vma);
+int v4l2_m2m_fop_mmap(struct file *file, struct mm_area *vma);
 __poll_t v4l2_m2m_fop_poll(struct file *file, poll_table *wait);
 
 #endif /* _MEDIA_V4L2_MEM2MEM_H */
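
For reference, the renamed parameter type changes nothing about how
these helpers are wired up; an illustrative sketch (my_open and
my_release are hypothetical driver callbacks):

static const struct v4l2_file_operations my_m2m_fops = {
	.owner		= THIS_MODULE,
	.open		= my_open,		/* hypothetical */
	.release	= my_release,		/* hypothetical */
	.poll		= v4l2_m2m_fop_poll,
	.unlocked_ioctl	= video_ioctl2,
	.mmap		= v4l2_m2m_fop_mmap,	/* now takes struct mm_area * */
};
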
diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
index 9b02aeba4108..dbfb8876fbf9 100644
--- a/include/media/videobuf2-core.h
+++ b/include/media/videobuf2-core.h
@@ -146,7 +146,7 @@ struct vb2_mem_ops {
 
 	unsigned int	(*num_users)(void *buf_priv);
 
-	int		(*mmap)(void *buf_priv, struct vm_area_struct *vma);
+	int		(*mmap)(void *buf_priv, struct mm_area *vma);
 };
 
 /**
@@ -1033,7 +1033,7 @@ void vb2_queue_error(struct vb2_queue *q);
 /**
  * vb2_mmap() - map video buffers into application address space.
  * @q:		pointer to &struct vb2_queue with videobuf2 queue.
- * @vma:	pointer to &struct vm_area_struct with the vma passed
+ * @vma:	pointer to &struct mm_area with the vma passed
  *		to the mmap file operation handler in the driver.
  *
  * Should be called from mmap file operation handler of a driver.
@@ -1052,7 +1052,7 @@ void vb2_queue_error(struct vb2_queue *q);
  * The return values from this function are intended to be directly returned
  * from the mmap handler in driver.
  */
-int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma);
+int vb2_mmap(struct vb2_queue *q, struct mm_area *vma);
 
 #ifndef CONFIG_MMU
 /**
diff --git a/include/media/videobuf2-v4l2.h b/include/media/videobuf2-v4l2.h
index 77ce8238ab30..cd941372aab9 100644
--- a/include/media/videobuf2-v4l2.h
+++ b/include/media/videobuf2-v4l2.h
@@ -339,7 +339,7 @@ int vb2_ioctl_remove_bufs(struct file *file, void *priv,
 
 /* struct v4l2_file_operations helpers */
 
-int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma);
+int vb2_fop_mmap(struct file *file, struct mm_area *vma);
 int vb2_fop_release(struct file *file);
 int _vb2_fop_release(struct file *file, struct mutex *lock);
 ssize_t vb2_fop_write(struct file *file, const char __user *buf,
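
As the vb2_mmap() kerneldoc above notes, its return value is meant to
be handed straight back from the driver's mmap handler; a minimal
sketch, assuming a hypothetical my_dev holding the vb2_queue:

static int my_mmap(struct file *file, struct mm_area *vma)
{
	struct my_dev *dev = video_drvdata(file);	/* hypothetical */

	return vb2_mmap(&dev->queue, vma);
}
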
diff --git a/include/net/sock.h b/include/net/sock.h
index 8daf1b3b12c6..d75880bd2052 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1849,7 +1849,7 @@ int sock_no_sendmsg(struct socket *, struct msghdr *, size_t);
 int sock_no_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t len);
 int sock_no_recvmsg(struct socket *, struct msghdr *, size_t, int);
 int sock_no_mmap(struct file *file, struct socket *sock,
-		 struct vm_area_struct *vma);
+		 struct mm_area *vma);
 
 /*
  * Functions to fill in entries in struct proto_ops when a protocol
diff --git a/include/net/tcp.h b/include/net/tcp.h
index df04dc09c519..556704058c39 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -469,7 +469,7 @@ void tcp_recv_timestamp(struct msghdr *msg, const struct sock *sk,
 void tcp_data_ready(struct sock *sk);
 #ifdef CONFIG_MMU
 int tcp_mmap(struct file *file, struct socket *sock,
-	     struct vm_area_struct *vma);
+	     struct mm_area *vma);
 #endif
 void tcp_parse_options(const struct net *net, const struct sk_buff *skb,
 		       struct tcp_options_received *opt_rx,
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index d42eae69d9a8..8055f6f88816 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2449,7 +2449,7 @@ struct ib_device_ops {
 	int (*alloc_ucontext)(struct ib_ucontext *context,
 			      struct ib_udata *udata);
 	void (*dealloc_ucontext)(struct ib_ucontext *context);
-	int (*mmap)(struct ib_ucontext *context, struct vm_area_struct *vma);
+	int (*mmap)(struct ib_ucontext *context, struct mm_area *vma);
 	/**
 	 * This will be called once refcount of an entry in mmap_xa reaches
 	 * zero. The type of the memory that was mapped may differ between
@@ -2976,7 +2976,7 @@ void  ib_set_client_data(struct ib_device *device, struct ib_client *client,
 void ib_set_device_ops(struct ib_device *device,
 		       const struct ib_device_ops *ops);
 
-int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
+int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct mm_area *vma,
 		      unsigned long pfn, unsigned long size, pgprot_t prot,
 		      struct rdma_user_mmap_entry *entry);
 int rdma_user_mmap_entry_insert(struct ib_ucontext *ucontext,
@@ -3009,7 +3009,7 @@ rdma_user_mmap_entry_get_pgoff(struct ib_ucontext *ucontext,
 			       unsigned long pgoff);
 struct rdma_user_mmap_entry *
 rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
-			 struct vm_area_struct *vma);
+			 struct mm_area *vma);
 void rdma_user_mmap_entry_put(struct rdma_user_mmap_entry *entry);
 
 void rdma_user_mmap_entry_remove(struct rdma_user_mmap_entry *entry);
diff --git a/include/rdma/rdma_vt.h b/include/rdma/rdma_vt.h
index c429d6ddb129..7baff31ec232 100644
--- a/include/rdma/rdma_vt.h
+++ b/include/rdma/rdma_vt.h
@@ -167,7 +167,7 @@ struct rvt_ah {
 
 /*
  * This structure is used by rvt_mmap() to validate an offset
- * when an mmap() request is made.  The vm_area_struct then uses
+ * when an mmap() request is made.  The mm_area then uses
  * this as its vm_private_data.
  */
 struct rvt_mmap_info {
diff --git a/include/sound/compress_driver.h b/include/sound/compress_driver.h
index b55c9eeb2b54..cbfb46ad05de 100644
--- a/include/sound/compress_driver.h
+++ b/include/sound/compress_driver.h
@@ -165,7 +165,7 @@ struct snd_compr_ops {
 	int (*copy)(struct snd_compr_stream *stream, char __user *buf,
 		       size_t count);
 	int (*mmap)(struct snd_compr_stream *stream,
-			struct vm_area_struct *vma);
+			struct mm_area *vma);
 	int (*ack)(struct snd_compr_stream *stream, size_t bytes);
 	int (*get_caps) (struct snd_compr_stream *stream,
 			struct snd_compr_caps *caps);
diff --git a/include/sound/hwdep.h b/include/sound/hwdep.h
index b0da633184cd..1ba044d50614 100644
--- a/include/sound/hwdep.h
+++ b/include/sound/hwdep.h
@@ -29,7 +29,7 @@ struct snd_hwdep_ops {
 	int (*ioctl_compat)(struct snd_hwdep *hw, struct file *file,
 			    unsigned int cmd, unsigned long arg);
 	int (*mmap)(struct snd_hwdep *hw, struct file *file,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 	int (*dsp_status)(struct snd_hwdep *hw,
 			  struct snd_hwdep_dsp_status *status);
 	int (*dsp_load)(struct snd_hwdep *hw,
diff --git a/include/sound/info.h b/include/sound/info.h
index adbc506860d6..369b6ba88869 100644
--- a/include/sound/info.h
+++ b/include/sound/info.h
@@ -54,7 +54,7 @@ struct snd_info_entry_ops {
 		     struct file *file, unsigned int cmd, unsigned long arg);
 	int (*mmap)(struct snd_info_entry *entry, void *file_private_data,
 		    struct inode *inode, struct file *file,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 };
 
 struct snd_info_entry {
diff --git a/include/sound/memalloc.h b/include/sound/memalloc.h
index 9dd475cf4e8c..38a2885a39e3 100644
--- a/include/sound/memalloc.h
+++ b/include/sound/memalloc.h
@@ -13,7 +13,7 @@
 #include <asm/page.h>
 
 struct device;
-struct vm_area_struct;
+struct mm_area;
 struct sg_table;
 
 /*
@@ -83,7 +83,7 @@ int snd_dma_alloc_pages_fallback(int type, struct device *dev, size_t size,
                                  struct snd_dma_buffer *dmab);
 void snd_dma_free_pages(struct snd_dma_buffer *dmab);
 int snd_dma_buffer_mmap(struct snd_dma_buffer *dmab,
-			struct vm_area_struct *area);
+			struct mm_area *area);
 
 enum snd_dma_sync_mode { SNDRV_DMA_SYNC_CPU, SNDRV_DMA_SYNC_DEVICE };
 #ifdef CONFIG_HAS_DMA
diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index 8becb4504887..10129d8837e3 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -74,7 +74,7 @@ struct snd_pcm_ops {
 		    unsigned long pos, struct iov_iter *iter, unsigned long bytes);
 	struct page *(*page)(struct snd_pcm_substream *substream,
 			     unsigned long offset);
-	int (*mmap)(struct snd_pcm_substream *substream, struct vm_area_struct *vma);
+	int (*mmap)(struct snd_pcm_substream *substream, struct mm_area *vma);
 	int (*ack)(struct snd_pcm_substream *substream);
 };
 
@@ -605,7 +605,7 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream);
 int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream, struct file *file,
 			     struct snd_pcm_substream **rsubstream);
 void snd_pcm_detach_substream(struct snd_pcm_substream *substream);
-int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file, struct vm_area_struct *area);
+int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file, struct mm_area *area);
 
 
 #ifdef CONFIG_SND_DEBUG
@@ -1394,11 +1394,11 @@ snd_pcm_sgbuf_get_chunk_size(struct snd_pcm_substream *substream,
 }
 
 int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
-			     struct vm_area_struct *area);
+			     struct mm_area *area);
 /* mmap for io-memory area */
 #if defined(CONFIG_X86) || defined(CONFIG_PPC) || defined(CONFIG_ALPHA)
 #define SNDRV_PCM_INFO_MMAP_IOMEM	SNDRV_PCM_INFO_MMAP
-int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct vm_area_struct *area);
+int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct mm_area *area);
 #else
 #define SNDRV_PCM_INFO_MMAP_IOMEM	0
 #define snd_pcm_lib_mmap_iomem	NULL
diff --git a/include/sound/soc-component.h b/include/sound/soc-component.h
index 61534ac0edd1..4c37806639b1 100644
--- a/include/sound/soc-component.h
+++ b/include/sound/soc-component.h
@@ -53,7 +53,7 @@ struct snd_compress_ops {
 		    size_t count);
 	int (*mmap)(struct snd_soc_component *component,
 		    struct snd_compr_stream *stream,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 	int (*ack)(struct snd_soc_component *component,
 		   struct snd_compr_stream *stream, size_t bytes);
 	int (*get_caps)(struct snd_soc_component *component,
@@ -146,7 +146,7 @@ struct snd_soc_component_driver {
 			     unsigned long offset);
 	int (*mmap)(struct snd_soc_component *component,
 		    struct snd_pcm_substream *substream,
-		    struct vm_area_struct *vma);
+		    struct mm_area *vma);
 	int (*ack)(struct snd_soc_component *component,
 		   struct snd_pcm_substream *substream);
 	snd_pcm_sframes_t (*delay)(struct snd_soc_component *component,
@@ -517,7 +517,7 @@ int snd_soc_pcm_component_copy(struct snd_pcm_substream *substream,
 struct page *snd_soc_pcm_component_page(struct snd_pcm_substream *substream,
 					unsigned long offset);
 int snd_soc_pcm_component_mmap(struct snd_pcm_substream *substream,
-			       struct vm_area_struct *vma);
+			       struct mm_area *vma);
 int snd_soc_pcm_component_new(struct snd_soc_pcm_runtime *rtd);
 void snd_soc_pcm_component_free(struct snd_soc_pcm_runtime *rtd);
 int snd_soc_pcm_component_prepare(struct snd_pcm_substream *substream);
diff --git a/include/trace/events/mmap.h b/include/trace/events/mmap.h
index f8d61485de16..516a46ff75a5 100644
--- a/include/trace/events/mmap.h
+++ b/include/trace/events/mmap.h
@@ -69,13 +69,13 @@ TRACE_EVENT(vma_mas_szero,
 );
 
 TRACE_EVENT(vma_store,
-	TP_PROTO(struct maple_tree *mt, struct vm_area_struct *vma),
+	TP_PROTO(struct maple_tree *mt, struct mm_area *vma),
 
 	TP_ARGS(mt, vma),
 
 	TP_STRUCT__entry(
 			__field(struct maple_tree *, mt)
-			__field(struct vm_area_struct *, vma)
+			__field(struct mm_area *, vma)
 			__field(unsigned long, vm_start)
 			__field(unsigned long, vm_end)
 	),
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 8994e97d86c1..79ee1636a6ec 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -720,7 +720,7 @@ NUMAB_SKIP_REASON
 
 TRACE_EVENT(sched_skip_vma_numa,
 
-	TP_PROTO(struct mm_struct *mm, struct vm_area_struct *vma,
+	TP_PROTO(struct mm_struct *mm, struct mm_area *vma,
 		 enum numa_vmaskip_reason reason),
 
 	TP_ARGS(mm, vma, reason),
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 28705ae67784..7894f9c2ae9b 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -5368,7 +5368,7 @@ union bpf_attr {
  *
  *		The expected callback signature is
  *
- *		long (\*callback_fn)(struct task_struct \*task, struct vm_area_struct \*vma, void \*callback_ctx);
+ *		long (\*callback_fn)(struct task_struct \*task, struct mm_area \*vma, void \*callback_ctx);
  *
  *	Return
  *		0 on success.
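
Written out, a callback matching the prototype above might look like
this (a sketch only; vma_callback and its body are hypothetical):

static long vma_callback(struct task_struct *task, struct mm_area *vma,
			 void *callback_ctx)
{
	/* inspect task/vma here; callback_ctx carries caller state */
	return 0;
}
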
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 47f11bec5e90..9c4c2e081be3 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -44,11 +44,11 @@ int xen_setup_shutdown_event(void);
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV)
-int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
+int xen_remap_pfn(struct mm_area *vma, unsigned long addr,
 		  xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,
 		  unsigned int domid, bool no_translate);
 #else
-static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
+static inline int xen_remap_pfn(struct mm_area *vma, unsigned long addr,
 				xen_pfn_t *pfn, int nr, int *err_ptr,
 				pgprot_t prot,  unsigned int domid,
 				bool no_translate)
@@ -58,23 +58,23 @@ static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
 }
 #endif
 
-struct vm_area_struct;
+struct mm_area;
 
 #ifdef CONFIG_XEN_AUTO_XLATE
-int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
+int xen_xlate_remap_gfn_array(struct mm_area *vma,
 			      unsigned long addr,
 			      xen_pfn_t *gfn, int nr,
 			      int *err_ptr, pgprot_t prot,
 			      unsigned int domid,
 			      struct page **pages);
-int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
+int xen_xlate_unmap_gfn_range(struct mm_area *vma,
 			      int nr, struct page **pages);
 #else
 /*
  * These two functions are called from arch/x86/xen/mmu.c and so stubs
  * are needed for a configuration not specifying CONFIG_XEN_AUTO_XLATE.
  */
-static inline int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
+static inline int xen_xlate_remap_gfn_array(struct mm_area *vma,
 					    unsigned long addr,
 					    xen_pfn_t *gfn, int nr,
 					    int *err_ptr, pgprot_t prot,
@@ -84,14 +84,14 @@ static inline int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
 	return -EOPNOTSUPP;
 }
 
-static inline int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
+static inline int xen_xlate_unmap_gfn_range(struct mm_area *vma,
 					    int nr, struct page **pages)
 {
 	return -EOPNOTSUPP;
 }
 #endif
 
-int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr,
+int xen_remap_vma_range(struct mm_area *vma, unsigned long addr,
 			unsigned long len);
 
 /*
@@ -111,7 +111,7 @@ int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr,
  * Returns the number of successfully mapped frames, or a -ve error
  * code.
  */
-static inline int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
+static inline int xen_remap_domain_gfn_array(struct mm_area *vma,
 					     unsigned long addr,
 					     xen_pfn_t *gfn, int nr,
 					     int *err_ptr, pgprot_t prot,
@@ -147,7 +147,7 @@ static inline int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
  * Returns the number of successfully mapped frames, or a -ve error
  * code.
  */
-static inline int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
+static inline int xen_remap_domain_mfn_array(struct mm_area *vma,
 					     unsigned long addr, xen_pfn_t *mfn,
 					     int nr, int *err_ptr,
 					     pgprot_t prot, unsigned int domid)
@@ -171,7 +171,7 @@ static inline int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
  * Returns the number of successfully mapped frames, or a -ve error
  * code.
  */
-static inline int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
+static inline int xen_remap_domain_gfn_range(struct mm_area *vma,
 					     unsigned long addr,
 					     xen_pfn_t gfn, int nr,
 					     pgprot_t prot, unsigned int domid,
@@ -183,7 +183,7 @@ static inline int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
 	return xen_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false);
 }
 
-int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
+int xen_unmap_domain_gfn_range(struct mm_area *vma,
 			       int numpgs, struct page **pages);
 
 int xen_xlate_map_ballooned_pages(xen_pfn_t **pfns, void **vaddr,
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 76fcc79656b0..d606163f0524 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -306,7 +306,7 @@ static void *io_uring_validate_mmap_request(struct file *file, loff_t pgoff,
 
 static int io_region_mmap(struct io_ring_ctx *ctx,
 			  struct io_mapped_region *mr,
-			  struct vm_area_struct *vma,
+			  struct mm_area *vma,
 			  unsigned max_pages)
 {
 	unsigned long nr_pages = min(mr->nr_pages, max_pages);
@@ -315,7 +315,7 @@ static int io_region_mmap(struct io_ring_ctx *ctx,
 	return vm_insert_pages(vma, vma->vm_start, mr->pages, &nr_pages);
 }
 
-__cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+__cold int io_uring_mmap(struct file *file, struct mm_area *vma)
 {
 	struct io_ring_ctx *ctx = file->private_data;
 	size_t sz = vma->vm_end - vma->vm_start;
@@ -389,7 +389,7 @@ unsigned long io_uring_get_unmapped_area(struct file *filp, unsigned long addr,
 
 #else /* !CONFIG_MMU */
 
-int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
+int io_uring_mmap(struct file *file, struct mm_area *vma)
 {
 	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -EINVAL;
 }
diff --git a/io_uring/memmap.h b/io_uring/memmap.h
index dad0aa5b1b45..67e0335cfe87 100644
--- a/io_uring/memmap.h
+++ b/io_uring/memmap.h
@@ -12,7 +12,7 @@ unsigned int io_uring_nommu_mmap_capabilities(struct file *file);
 unsigned long io_uring_get_unmapped_area(struct file *file, unsigned long addr,
 					 unsigned long len, unsigned long pgoff,
 					 unsigned long flags);
-int io_uring_mmap(struct file *file, struct vm_area_struct *vma);
+int io_uring_mmap(struct file *file, struct mm_area *vma);
 
 void io_free_region(struct io_ring_ctx *ctx, struct io_mapped_region *mr);
 int io_create_region(struct io_ring_ctx *ctx, struct io_mapped_region *mr,
diff --git a/ipc/shm.c b/ipc/shm.c
index 99564c870084..b1f32d82e02b 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -3,7 +3,7 @@
  * linux/ipc/shm.c
  * Copyright (C) 1992, 1993 Krishna Balasubramanian
  *	 Many improvements/fixes by Bruno Haible.
- * Replaced `struct shm_desc' by `struct vm_area_struct', July 1994.
+ * Replaced `struct shm_desc' by `struct mm_area', July 1994.
  * Fixed the shm swap deallocation (shm_unuse()), August 1998 Andrea Arcangeli.
  *
  * /proc/sysvipc/shm support (c) 1999 Dragos Acostachioaie <dragos@iname.com>
@@ -99,8 +99,8 @@ static const struct vm_operations_struct shm_vm_ops;
 	ipc_unlock(&(shp)->shm_perm)
 
 static int newseg(struct ipc_namespace *, struct ipc_params *);
-static void shm_open(struct vm_area_struct *vma);
-static void shm_close(struct vm_area_struct *vma);
+static void shm_open(struct mm_area *vma);
+static void shm_close(struct mm_area *vma);
 static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp);
 #ifdef CONFIG_PROC_FS
 static int sysvipc_shm_proc_show(struct seq_file *s, void *it);
@@ -299,7 +299,7 @@ static int __shm_open(struct shm_file_data *sfd)
 }
 
 /* This is called by fork, once for every shm attach. */
-static void shm_open(struct vm_area_struct *vma)
+static void shm_open(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct shm_file_data *sfd = shm_file_data(file);
@@ -393,7 +393,7 @@ static void __shm_close(struct shm_file_data *sfd)
 	up_write(&shm_ids(ns).rwsem);
 }
 
-static void shm_close(struct vm_area_struct *vma)
+static void shm_close(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct shm_file_data *sfd = shm_file_data(file);
@@ -540,7 +540,7 @@ static vm_fault_t shm_fault(struct vm_fault *vmf)
 	return sfd->vm_ops->fault(vmf);
 }
 
-static int shm_may_split(struct vm_area_struct *vma, unsigned long addr)
+static int shm_may_split(struct mm_area *vma, unsigned long addr)
 {
 	struct file *file = vma->vm_file;
 	struct shm_file_data *sfd = shm_file_data(file);
@@ -551,7 +551,7 @@ static int shm_may_split(struct vm_area_struct *vma, unsigned long addr)
 	return 0;
 }
 
-static unsigned long shm_pagesize(struct vm_area_struct *vma)
+static unsigned long shm_pagesize(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct shm_file_data *sfd = shm_file_data(file);
@@ -563,7 +563,7 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_NUMA
-static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
+static int shm_set_policy(struct mm_area *vma, struct mempolicy *mpol)
 {
 	struct shm_file_data *sfd = shm_file_data(vma->vm_file);
 	int err = 0;
@@ -573,7 +573,7 @@ static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
 	return err;
 }
 
-static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
+static struct mempolicy *shm_get_policy(struct mm_area *vma,
 					unsigned long addr, pgoff_t *ilx)
 {
 	struct shm_file_data *sfd = shm_file_data(vma->vm_file);
@@ -585,7 +585,7 @@ static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
 }
 #endif
 
-static int shm_mmap(struct file *file, struct vm_area_struct *vma)
+static int shm_mmap(struct file *file, struct mm_area *vma)
 {
 	struct shm_file_data *sfd = shm_file_data(file);
 	int ret;
@@ -1723,7 +1723,7 @@ COMPAT_SYSCALL_DEFINE3(shmat, int, shmid, compat_uptr_t, shmaddr, int, shmflg)
 long ksys_shmdt(char __user *shmaddr)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr = (unsigned long)shmaddr;
 	int retval = -EINVAL;
 #ifdef CONFIG_MMU
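
The open/close pair renamed above follows the common vm_ops
refcounting shape (note that vm_operations_struct itself keeps its
name); a generic sketch with hypothetical demo_* names, not shm's
actual logic:

static void demo_open(struct mm_area *vma)
{
	struct demo_state *st = vma->vm_private_data;	/* hypothetical */

	refcount_inc(&st->refs);	/* e.g. fork() duplicating the mapping */
}

static void demo_close(struct mm_area *vma)
{
	struct demo_state *st = vma->vm_private_data;

	if (refcount_dec_and_test(&st->refs))
		kfree(st);
}

static const struct vm_operations_struct demo_vm_ops = {
	.open	= demo_open,
	.close	= demo_close,
};
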
diff --git a/kernel/acct.c b/kernel/acct.c
index 6520baa13669..8f1124fddaa9 100644
--- a/kernel/acct.c
+++ b/kernel/acct.c
@@ -592,7 +592,7 @@ void acct_collect(long exitcode, int group_dead)
 	if (group_dead && current->mm) {
 		struct mm_struct *mm = current->mm;
 		VMA_ITERATOR(vmi, mm, 0);
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		mmap_read_lock(mm);
 		for_each_vma(vmi, vma)
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 0d56cea71602..bfefa32adb89 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -220,12 +220,12 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
 }
 
 struct vma_list {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct list_head head;
 	refcount_t mmap_count;
 };
 
-static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
+static int remember_vma(struct bpf_arena *arena, struct mm_area *vma)
 {
 	struct vma_list *vml;
 
@@ -239,14 +239,14 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
 	return 0;
 }
 
-static void arena_vm_open(struct vm_area_struct *vma)
+static void arena_vm_open(struct mm_area *vma)
 {
 	struct vma_list *vml = vma->vm_private_data;
 
 	refcount_inc(&vml->mmap_count);
 }
 
-static void arena_vm_close(struct vm_area_struct *vma)
+static void arena_vm_close(struct mm_area *vma)
 {
 	struct bpf_map *map = vma->vm_file->private_data;
 	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
@@ -345,7 +345,7 @@ static unsigned long arena_get_unmapped_area(struct file *filp, unsigned long ad
 	return round_up(ret, SZ_4G);
 }
 
-static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+static int arena_map_mmap(struct bpf_map *map, struct mm_area *vma)
 {
 	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
 
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index eb28c0f219ee..79dbdb433b55 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -557,7 +557,7 @@ static int array_map_check_btf(const struct bpf_map *map,
 	return 0;
 }
 
-static int array_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+static int array_map_mmap(struct bpf_map *map, struct mm_area *vma)
 {
 	struct bpf_array *array = container_of(map, struct bpf_array, map);
 	pgoff_t pgoff = PAGE_ALIGN(sizeof(*array)) >> PAGE_SHIFT;
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index 1499d8caa9a3..c59325124422 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -258,7 +258,7 @@ static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
 	return -ENOTSUPP;
 }
 
-static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma)
+static int ringbuf_map_mmap_kern(struct bpf_map *map, struct mm_area *vma)
 {
 	struct bpf_ringbuf_map *rb_map;
 
@@ -274,7 +274,7 @@ static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma
 				   vma->vm_pgoff + RINGBUF_PGOFF);
 }
 
-static int ringbuf_map_mmap_user(struct bpf_map *map, struct vm_area_struct *vma)
+static int ringbuf_map_mmap_user(struct bpf_map *map, struct mm_area *vma)
 {
 	struct bpf_ringbuf_map *rb_map;
 
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 3615c06b7dfa..9870b4a64f23 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -124,7 +124,7 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
 	return ERR_PTR(err);
 }
 
-static int fetch_build_id(struct vm_area_struct *vma, unsigned char *build_id, bool may_fault)
+static int fetch_build_id(struct mm_area *vma, unsigned char *build_id, bool may_fault)
 {
 	return may_fault ? build_id_parse(vma, build_id, NULL)
 			 : build_id_parse_nofault(vma, build_id, NULL);
@@ -146,7 +146,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
 	int i;
 	struct mmap_unlock_irq_work *work = NULL;
 	bool irq_work_busy = bpf_mmap_unlock_get_irq_work(&work);
-	struct vm_area_struct *vma, *prev_vma = NULL;
+	struct mm_area *vma, *prev_vma = NULL;
 	const char *prev_build_id;
 
 	/* If the irq_work is in use, fall back to report ips. Same
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 9794446bc8c6..e4bd08eba388 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1030,7 +1030,7 @@ static ssize_t bpf_dummy_write(struct file *filp, const char __user *buf,
 }
 
 /* called for any extra memory-mapped regions (except initial) */
-static void bpf_map_mmap_open(struct vm_area_struct *vma)
+static void bpf_map_mmap_open(struct mm_area *vma)
 {
 	struct bpf_map *map = vma->vm_file->private_data;
 
@@ -1039,7 +1039,7 @@ static void bpf_map_mmap_open(struct vm_area_struct *vma)
 }
 
 /* called for all unmapped memory region (including initial) */
-static void bpf_map_mmap_close(struct vm_area_struct *vma)
+static void bpf_map_mmap_close(struct mm_area *vma)
 {
 	struct bpf_map *map = vma->vm_file->private_data;
 
@@ -1052,7 +1052,7 @@ static const struct vm_operations_struct bpf_map_default_vmops = {
 	.close		= bpf_map_mmap_close,
 };
 
-static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+static int bpf_map_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct bpf_map *map = filp->private_data;
 	int err = 0;
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index 98d9b4c0daff..3f58b35ce94e 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -410,7 +410,7 @@ struct bpf_iter_seq_task_vma_info {
 	struct bpf_iter_seq_task_common common;
 	struct task_struct *task;
 	struct mm_struct *mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	u32 tid;
 	unsigned long prev_vm_start;
 	unsigned long prev_vm_end;
@@ -422,11 +422,11 @@ enum bpf_task_vma_iter_find_op {
 	task_vma_iter_find_vma,    /* use find_vma() to find next vma */
 };
 
-static struct vm_area_struct *
+static struct mm_area *
 task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
 {
 	enum bpf_task_vma_iter_find_op op;
-	struct vm_area_struct *curr_vma;
+	struct mm_area *curr_vma;
 	struct task_struct *curr_task;
 	struct mm_struct *curr_mm;
 	u32 saved_tid = info->tid;
@@ -577,7 +577,7 @@ task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
 static void *task_vma_seq_start(struct seq_file *seq, loff_t *pos)
 {
 	struct bpf_iter_seq_task_vma_info *info = seq->private;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = task_vma_seq_get_next(info);
 	if (vma && *pos == 0)
@@ -597,11 +597,11 @@ static void *task_vma_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 struct bpf_iter__task_vma {
 	__bpf_md_ptr(struct bpf_iter_meta *, meta);
 	__bpf_md_ptr(struct task_struct *, task);
-	__bpf_md_ptr(struct vm_area_struct *, vma);
+	__bpf_md_ptr(struct mm_area *, vma);
 };
 
 DEFINE_BPF_ITER_FUNC(task_vma, struct bpf_iter_meta *meta,
-		     struct task_struct *task, struct vm_area_struct *vma)
+		     struct task_struct *task, struct mm_area *vma)
 
 static int __task_vma_seq_show(struct seq_file *seq, bool in_stop)
 {
@@ -752,7 +752,7 @@ BPF_CALL_5(bpf_find_vma, struct task_struct *, task, u64, start,
 	   bpf_callback_t, callback_fn, void *, callback_ctx, u64, flags)
 {
 	struct mmap_unlock_irq_work *work = NULL;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	bool irq_work_busy = false;
 	struct mm_struct *mm;
 	int ret = -ENOENT;
@@ -859,7 +859,7 @@ __bpf_kfunc int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
 	return err;
 }
 
-__bpf_kfunc struct vm_area_struct *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it)
+__bpf_kfunc struct mm_area *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it)
 {
 	struct bpf_iter_task_vma_kern *kit = (void *)it;
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 54c6953a8b84..efbe5060d0e9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -10720,7 +10720,7 @@ static int set_find_vma_callback_state(struct bpf_verifier_env *env,
 	/* bpf_find_vma(struct task_struct *task, u64 addr,
 	 *               void *callback_fn, void *callback_ctx, u64 flags)
 	 * (callback_fn)(struct task_struct *task,
-	 *               struct vm_area_struct *vma, void *callback_ctx);
+	 *               struct mm_area *vma, void *callback_ctx);
 	 */
 	callee->regs[BPF_REG_1] = caller->regs[BPF_REG_1];
 
diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
index 3b2bdca9f1d4..b92e5ddae43f 100644
--- a/kernel/dma/coherent.c
+++ b/kernel/dma/coherent.c
@@ -232,7 +232,7 @@ int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr)
 }
 
 static int __dma_mmap_from_coherent(struct dma_coherent_mem *mem,
-		struct vm_area_struct *vma, void *vaddr, size_t size, int *ret)
+		struct mm_area *vma, void *vaddr, size_t size, int *ret)
 {
 	if (mem && vaddr >= mem->virt_base && vaddr + size <=
 		   (mem->virt_base + ((dma_addr_t)mem->size << PAGE_SHIFT))) {
@@ -268,7 +268,7 @@ static int __dma_mmap_from_coherent(struct dma_coherent_mem *mem,
  * should return @ret, or 0 if they should proceed with mapping memory from
  * generic areas.
  */
-int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_from_dev_coherent(struct device *dev, struct mm_area *vma,
 			   void *vaddr, size_t size, int *ret)
 {
 	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
@@ -298,7 +298,7 @@ int dma_release_from_global_coherent(int order, void *vaddr)
 			vaddr);
 }
 
-int dma_mmap_from_global_coherent(struct vm_area_struct *vma, void *vaddr,
+int dma_mmap_from_global_coherent(struct mm_area *vma, void *vaddr,
 				   size_t size, int *ret)
 {
 	if (!dma_coherent_default_memory)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index b8fe0b3d0ffb..0dba425ab6bf 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -537,7 +537,7 @@ bool dma_direct_can_mmap(struct device *dev)
 		IS_ENABLED(CONFIG_DMA_NONCOHERENT_MMAP);
 }
 
-int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
+int dma_direct_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..4ce4be1cad72 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -14,7 +14,7 @@ int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 bool dma_direct_can_mmap(struct device *dev);
-int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
+int dma_direct_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs);
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr);
diff --git a/kernel/dma/dummy.c b/kernel/dma/dummy.c
index 92de80e5b057..eb7c1752b54e 100644
--- a/kernel/dma/dummy.c
+++ b/kernel/dma/dummy.c
@@ -4,7 +4,7 @@
  */
 #include <linux/dma-map-ops.h>
 
-static int dma_dummy_mmap(struct device *dev, struct vm_area_struct *vma,
+static int dma_dummy_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index cda127027e48..37cfbcb1544c 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -536,7 +536,7 @@ EXPORT_SYMBOL_GPL(dma_can_mmap);
 /**
  * dma_mmap_attrs - map a coherent DMA allocation into user space
  * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
- * @vma: vm_area_struct describing requested user mapping
+ * @vma: mm_area describing requested user mapping
  * @cpu_addr: kernel CPU-view address returned from dma_alloc_attrs
  * @dma_addr: device-view address returned from dma_alloc_attrs
  * @size: size of memory originally requested in dma_alloc_attrs
@@ -546,7 +546,7 @@ EXPORT_SYMBOL_GPL(dma_can_mmap);
  * space.  The coherent DMA buffer must not be freed by the driver until the
  * user space mapping has been released.
  */
-int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_attrs(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
@@ -725,7 +725,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
 
-int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_pages(struct device *dev, struct mm_area *vma,
 		size_t size, struct page *page)
 {
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
@@ -828,7 +828,7 @@ void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
 }
 EXPORT_SYMBOL_GPL(dma_vunmap_noncontiguous);
 
-int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
+int dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
 		size_t size, struct sg_table *sgt)
 {
 	if (use_dma_iommu(dev))
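
Caller side of dma_mmap_attrs() after the rename (a sketch; demo_dev
and its fields are hypothetical, filled in by an earlier
dma_alloc_attrs() call):

static int demo_mmap(struct file *file, struct mm_area *vma)
{
	struct demo_dev *d = file->private_data;	/* hypothetical */

	return dma_mmap_attrs(d->dev, vma, d->cpu_addr, d->dma_addr,
			      d->size, 0);
}
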
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 9afd569eadb9..9f7c560c3349 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -32,7 +32,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 /*
  * Create userspace mapping for the DMA-coherent memory.
  */
-int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+int dma_common_mmap(struct device *dev, struct mm_area *vma,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
 {
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 128db74e9eab..bf6c0c90f88c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6638,7 +6638,7 @@ void ring_buffer_put(struct perf_buffer *rb)
 	call_rcu(&rb->rcu_head, rb_free_rcu);
 }
 
-static void perf_mmap_open(struct vm_area_struct *vma)
+static void perf_mmap_open(struct mm_area *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
 
@@ -6662,7 +6662,7 @@ static void perf_pmu_output_stop(struct perf_event *event);
  * the buffer here, where we still have a VM context. This means we need
  * to detach all events redirecting to us.
  */
-static void perf_mmap_close(struct vm_area_struct *vma)
+static void perf_mmap_close(struct mm_area *vma)
 {
 	struct perf_event *event = vma->vm_file->private_data;
 	struct perf_buffer *rb = ring_buffer_get(event);
@@ -6784,7 +6784,7 @@ static const struct vm_operations_struct perf_mmap_vmops = {
 	.pfn_mkwrite	= perf_mmap_pfn_mkwrite,
 };
 
-static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
+static int map_range(struct perf_buffer *rb, struct mm_area *vma)
 {
 	unsigned long nr_pages = vma_pages(vma);
 	int err = 0;
@@ -6853,7 +6853,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
 	return err;
 }
 
-static int perf_mmap(struct file *file, struct vm_area_struct *vma)
+static int perf_mmap(struct file *file, struct mm_area *vma)
 {
 	struct perf_event *event = file->private_data;
 	unsigned long user_locked, user_lock_limit;
@@ -9155,7 +9155,7 @@ static void perf_event_cgroup(struct cgroup *cgrp)
  */
 
 struct perf_mmap_event {
-	struct vm_area_struct	*vma;
+	struct mm_area	*vma;
 
 	const char		*file_name;
 	int			file_size;
@@ -9181,7 +9181,7 @@ static int perf_event_mmap_match(struct perf_event *event,
 				 void *data)
 {
 	struct perf_mmap_event *mmap_event = data;
-	struct vm_area_struct *vma = mmap_event->vma;
+	struct mm_area *vma = mmap_event->vma;
 	int executable = vma->vm_flags & VM_EXEC;
 
 	return (!executable && event->attr.mmap_data) ||
@@ -9257,7 +9257,7 @@ static void perf_event_mmap_output(struct perf_event *event,
 
 static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
 {
-	struct vm_area_struct *vma = mmap_event->vma;
+	struct mm_area *vma = mmap_event->vma;
 	struct file *file = vma->vm_file;
 	int maj = 0, min = 0;
 	u64 ino = 0, gen = 0;
@@ -9387,7 +9387,7 @@ static bool perf_addr_filter_match(struct perf_addr_filter *filter,
 }
 
 static bool perf_addr_filter_vma_adjust(struct perf_addr_filter *filter,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					struct perf_addr_filter_range *fr)
 {
 	unsigned long vma_size = vma->vm_end - vma->vm_start;
@@ -9411,7 +9411,7 @@ static bool perf_addr_filter_vma_adjust(struct perf_addr_filter *filter,
 static void __perf_addr_filters_adjust(struct perf_event *event, void *data)
 {
 	struct perf_addr_filters_head *ifh = perf_event_addr_filters(event);
-	struct vm_area_struct *vma = data;
+	struct mm_area *vma = data;
 	struct perf_addr_filter *filter;
 	unsigned int restart = 0, count = 0;
 	unsigned long flags;
@@ -9442,7 +9442,7 @@ static void __perf_addr_filters_adjust(struct perf_event *event, void *data)
 /*
  * Adjust all task's events' filters to the new vma
  */
-static void perf_addr_filters_adjust(struct vm_area_struct *vma)
+static void perf_addr_filters_adjust(struct mm_area *vma)
 {
 	struct perf_event_context *ctx;
 
@@ -9460,7 +9460,7 @@ static void perf_addr_filters_adjust(struct vm_area_struct *vma)
 	rcu_read_unlock();
 }
 
-void perf_event_mmap(struct vm_area_struct *vma)
+void perf_event_mmap(struct mm_area *vma)
 {
 	struct perf_mmap_event mmap_event;
 
@@ -11255,7 +11255,7 @@ static void perf_addr_filter_apply(struct perf_addr_filter *filter,
 				   struct mm_struct *mm,
 				   struct perf_addr_filter_range *fr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	for_each_vma(vmi, vma) {
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 615b4e6d22c7..0fb6581e88fd 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -131,7 +131,7 @@ static void uprobe_warn(struct task_struct *t, const char *msg)
  *	- Return 1 if the specified virtual address is in an
  *	  executable vma.
  */
-static bool valid_vma(struct vm_area_struct *vma, bool is_register)
+static bool valid_vma(struct mm_area *vma, bool is_register)
 {
 	vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
 
@@ -141,12 +141,12 @@ static bool valid_vma(struct vm_area_struct *vma, bool is_register)
 	return vma->vm_file && (vma->vm_flags & flags) == VM_MAYEXEC;
 }
 
-static unsigned long offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
+static unsigned long offset_to_vaddr(struct mm_area *vma, loff_t offset)
 {
 	return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
 }
 
-static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
+static loff_t vaddr_to_offset(struct mm_area *vma, unsigned long vaddr)
 {
 	return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
 }
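
A worked instance of the translation pair above, with illustrative
numbers (PAGE_SHIFT == 12 assumed):

	/*
	 * vm_start = 0x7f0000003000, vm_pgoff = 3 (file offset 0x3000):
	 * offset_to_vaddr(vma, 0x3450)
	 *	= 0x7f0000003000 + 0x3450 - (3 << 12)
	 *	= 0x7f0000003450
	 * vaddr_to_offset() inverts this exactly.
	 */
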
@@ -164,7 +164,7 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
  *
  * Returns 0 on success, negative error code otherwise.
  */
-static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+static int __replace_page(struct mm_area *vma, unsigned long addr,
 				struct page *old_page, struct page *new_page)
 {
 	struct folio *old_folio = page_folio(old_page);
@@ -360,7 +360,7 @@ static void delayed_uprobe_remove(struct uprobe *uprobe, struct mm_struct *mm)
 }
 
 static bool valid_ref_ctr_vma(struct uprobe *uprobe,
-			      struct vm_area_struct *vma)
+			      struct mm_area *vma)
 {
 	unsigned long vaddr = offset_to_vaddr(vma, uprobe->ref_ctr_offset);
 
@@ -372,11 +372,11 @@ static bool valid_ref_ctr_vma(struct uprobe *uprobe,
 		vma->vm_end > vaddr;
 }
 
-static struct vm_area_struct *
+static struct mm_area *
 find_ref_ctr_vma(struct uprobe *uprobe, struct mm_struct *mm)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *tmp;
+	struct mm_area *tmp;
 
 	for_each_vma(vmi, tmp)
 		if (valid_ref_ctr_vma(uprobe, tmp))
@@ -437,7 +437,7 @@ static void update_ref_ctr_warn(struct uprobe *uprobe,
 static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
 			  short d)
 {
-	struct vm_area_struct *rc_vma;
+	struct mm_area *rc_vma;
 	unsigned long rc_vaddr;
 	int ret = 0;
 
@@ -486,7 +486,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 {
 	struct uprobe *uprobe;
 	struct page *old_page, *new_page;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret, is_register, ref_ctr_updated = 0;
 	bool orig_page_huge = false;
 	unsigned int gup_flags = FOLL_FORCE;
@@ -1136,7 +1136,7 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
 
 static int
 install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
-			struct vm_area_struct *vma, unsigned long vaddr)
+			struct mm_area *vma, unsigned long vaddr)
 {
 	bool first_uprobe;
 	int ret;
@@ -1186,7 +1186,7 @@ static struct map_info *
 build_map_info(struct address_space *mapping, loff_t offset, bool is_register)
 {
 	unsigned long pgoff = offset >> PAGE_SHIFT;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct map_info *curr = NULL;
 	struct map_info *prev = NULL;
 	struct map_info *info;
@@ -1269,7 +1269,7 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
 
 	while (info) {
 		struct mm_struct *mm = info->mm;
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		if (err && is_register)
 			goto free;
@@ -1454,7 +1454,7 @@ int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
 static int unapply_uprobe(struct uprobe *uprobe, struct mm_struct *mm)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int err = 0;
 
 	mmap_read_lock(mm);
@@ -1508,7 +1508,7 @@ find_node_in_range(struct inode *inode, loff_t min, loff_t max)
  * For a given range in vma, build a list of probes that need to be inserted.
  */
 static void build_probe_list(struct inode *inode,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				unsigned long start, unsigned long end,
 				struct list_head *head)
 {
@@ -1544,7 +1544,7 @@ static void build_probe_list(struct inode *inode,
 }
 
 /* @vma contains reference counter, not the probed instruction. */
-static int delayed_ref_ctr_inc(struct vm_area_struct *vma)
+static int delayed_ref_ctr_inc(struct mm_area *vma)
 {
 	struct list_head *pos, *q;
 	struct delayed_uprobe *du;
@@ -1578,7 +1578,7 @@ static int delayed_ref_ctr_inc(struct vm_area_struct *vma)
  * Currently we ignore all errors and always return 0, the callers
  * can't handle the failure anyway.
  */
-int uprobe_mmap(struct vm_area_struct *vma)
+int uprobe_mmap(struct mm_area *vma)
 {
 	struct list_head tmp_list;
 	struct uprobe *uprobe, *u;
@@ -1620,7 +1620,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
 }
 
 static bool
-vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+vma_has_uprobes(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	loff_t min, max;
 	struct inode *inode;
@@ -1641,7 +1641,7 @@ vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long e
 /*
  * Called in context of a munmap of a vma.
  */
-void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
+void uprobe_munmap(struct mm_area *vma, unsigned long start, unsigned long end)
 {
 	if (no_uprobe_events() || !valid_vma(vma, false))
 		return;
@@ -1658,7 +1658,7 @@ void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned lon
 }
 
 static vm_fault_t xol_fault(const struct vm_special_mapping *sm,
-			    struct vm_area_struct *vma, struct vm_fault *vmf)
+			    struct mm_area *vma, struct vm_fault *vmf)
 {
 	struct xol_area *area = vma->vm_mm->uprobes_state.xol_area;
 
@@ -1667,7 +1667,7 @@ static vm_fault_t xol_fault(const struct vm_special_mapping *sm,
 	return 0;
 }
 
-static int xol_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
+static int xol_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
 {
 	return -EPERM;
 }
@@ -1681,7 +1681,7 @@ static const struct vm_special_mapping xol_mapping = {
 /* Slot allocation for XOL */
 static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret;
 
 	if (mmap_write_lock_killable(mm))
@@ -2338,7 +2338,7 @@ bool uprobe_deny_signal(void)
 static void mmf_recalc_uprobes(struct mm_struct *mm)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	for_each_vma(vmi, vma) {
 		if (!valid_vma(vma, false))
@@ -2387,7 +2387,7 @@ static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
 {
 	struct mm_struct *mm = current->mm;
 	struct uprobe *uprobe = NULL;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct file *vm_file;
 	loff_t offset;
 	unsigned int seq;
@@ -2429,7 +2429,7 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swb
 {
 	struct mm_struct *mm = current->mm;
 	struct uprobe *uprobe = NULL;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	uprobe = find_active_uprobe_speculative(bp_vaddr);
 	if (uprobe)
diff --git a/kernel/fork.c b/kernel/fork.c
index c4b26cd8998b..005774cb7b07 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -428,15 +428,15 @@ struct kmem_cache *files_cachep;
 /* SLAB cache for fs_struct structures (tsk->fs) */
 struct kmem_cache *fs_cachep;
 
-/* SLAB cache for vm_area_struct structures */
+/* SLAB cache for mm_area structures */
 static struct kmem_cache *vm_area_cachep;
 
 /* SLAB cache for mm_struct structures (tsk->mm) */
 static struct kmem_cache *mm_cachep;
 
-struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
+struct mm_area *vm_area_alloc(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
 	if (!vma)
@@ -447,8 +447,8 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 	return vma;
 }
 
-static void vm_area_init_from(const struct vm_area_struct *src,
-			      struct vm_area_struct *dest)
+static void vm_area_init_from(const struct mm_area *src,
+			      struct mm_area *dest)
 {
 	dest->vm_mm = src->vm_mm;
 	dest->vm_ops = src->vm_ops;
@@ -483,9 +483,9 @@ static void vm_area_init_from(const struct vm_area_struct *src,
 #endif
 }
 
-struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
+struct mm_area *vm_area_dup(struct mm_area *orig)
 {
-	struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
+	struct mm_area *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
 
 	if (!new)
 		return NULL;
@@ -505,7 +505,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
 	return new;
 }
 
-void vm_area_free(struct vm_area_struct *vma)
+void vm_area_free(struct mm_area *vma)
 {
 	/* The vma should be detached while being destroyed. */
 	vma_assert_detached(vma);
@@ -611,7 +611,7 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
 static __latent_entropy int dup_mmap(struct mm_struct *mm,
 					struct mm_struct *oldmm)
 {
-	struct vm_area_struct *mpnt, *tmp;
+	struct mm_area *mpnt, *tmp;
 	int retval;
 	unsigned long charge = 0;
 	LIST_HEAD(uf);
@@ -1473,7 +1473,7 @@ int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
  */
 int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct file *old_exe_file;
 	int ret = 0;
 
@@ -3215,7 +3215,7 @@ void __init proc_caches_init(void)
 {
 	struct kmem_cache_args args = {
 		.use_freeptr_offset = true,
-		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
+		.freeptr_offset = offsetof(struct mm_area, vm_freeptr),
 	};
 
 	sighand_cachep = kmem_cache_create("sighand_cache",
@@ -3234,8 +3234,8 @@ void __init proc_caches_init(void)
 			sizeof(struct fs_struct), 0,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
 			NULL);
-	vm_area_cachep = kmem_cache_create("vm_area_struct",
-			sizeof(struct vm_area_struct), &args,
+	vm_area_cachep = kmem_cache_create("mm_area",
+			sizeof(struct mm_area), &args,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
 			SLAB_ACCOUNT);
 	mmap_init();
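
For illustration (not part of this patch): after the rename the lifecycle
helpers keep their vm_area_* names; only the type changes. A minimal caller
sketch, assuming an error path that backs out the allocation:

	struct mm_area *vma;

	vma = vm_area_alloc(mm);	/* allocate and initialise for @mm */
	if (!vma)
		return -ENOMEM;
	/* ... fill in vm_start, vm_end, vm_flags, then insert into @mm ... */
	vm_area_free(vma);		/* error path: discard the detached area */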
diff --git a/kernel/kcov.c b/kernel/kcov.c
index 187ba1b80bda..afd99afc9386 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -484,7 +484,7 @@ void kcov_task_exit(struct task_struct *t)
 	kcov_put(kcov);
 }
 
-static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
+static int kcov_mmap(struct file *filep, struct mm_area *vma)
 {
 	int res = 0;
 	struct kcov *kcov = vma->vm_file->private_data;
diff --git a/kernel/relay.c b/kernel/relay.c
index 5ac7e711e4b6..ca1dea370f80 100644
--- a/kernel/relay.c
+++ b/kernel/relay.c
@@ -74,13 +74,13 @@ static void relay_free_page_array(struct page **array)
 /**
  *	relay_mmap_buf: - mmap channel buffer to process address space
  *	@buf: relay channel buffer
- *	@vma: vm_area_struct describing memory to be mapped
+ *	@vma: mm_area describing memory to be mapped
  *
  *	Returns 0 if ok, negative on error
  *
  *	Caller should already have grabbed mmap_lock.
  */
-static int relay_mmap_buf(struct rchan_buf *buf, struct vm_area_struct *vma)
+static int relay_mmap_buf(struct rchan_buf *buf, struct mm_area *vma)
 {
 	unsigned long length = vma->vm_end - vma->vm_start;
 
@@ -825,7 +825,7 @@ static int relay_file_open(struct inode *inode, struct file *filp)
  *
  *	Calls upon relay_mmap_buf() to map the file into user space.
  */
-static int relay_file_mmap(struct file *filp, struct vm_area_struct *vma)
+static int relay_file_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct rchan_buf *buf = filp->private_data;
 	return relay_mmap_buf(buf, vma);
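
For illustration (not part of this patch): relay_file_mmap() above has the
standard ->mmap shape under the new type name. A hypothetical driver (the
my_drv_* names and my_drv_pfn are invented) would look much the same:

	static int my_drv_mmap(struct file *filp, struct mm_area *vma)
	{
		return remap_pfn_range(vma, vma->vm_start, my_drv_pfn,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);
	}

	static const struct file_operations my_drv_fops = {
		.owner	= THIS_MODULE,
		.mmap	= my_drv_mmap,
	};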
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e43993a4e580..424c88801103 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3258,7 +3258,7 @@ static void reset_ptenuma_scan(struct task_struct *p)
 	p->mm->numa_scan_offset = 0;
 }
 
-static bool vma_is_accessed(struct mm_struct *mm, struct vm_area_struct *vma)
+static bool vma_is_accessed(struct mm_struct *mm, struct mm_area *vma)
 {
 	unsigned long pids;
 	/*
@@ -3307,7 +3307,7 @@ static void task_numa_work(struct callback_head *work)
 	struct task_struct *p = current;
 	struct mm_struct *mm = p->mm;
 	u64 runtime = p->se.sum_exec_runtime;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long start, end;
 	unsigned long nr_pte_updates = 0;
 	long pages, virtpages;
diff --git a/kernel/signal.c b/kernel/signal.c
index 614d78fe3451..39a1112b49e9 100644
--- a/kernel/signal.c
+++ b/kernel/signal.c
@@ -4892,7 +4892,7 @@ SYSCALL_DEFINE3(sigsuspend, int, unused1, int, unused2, old_sigset_t, mask)
 }
 #endif
 
-__weak const char *arch_vma_name(struct vm_area_struct *vma)
+__weak const char *arch_vma_name(struct mm_area *vma)
 {
 	return NULL;
 }
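
For illustration (not part of this patch): architectures override this __weak
default to label special mappings in /proc/<pid>/maps. A sketch modelled
loosely on older x86 code (context.vdso is arch-specific, so treat that field
as an assumption):

	const char *arch_vma_name(struct mm_area *vma)
	{
		if (vma->vm_mm &&
		    vma->vm_start == (unsigned long)vma->vm_mm->context.vdso)
			return "[vdso]";
		return NULL;
	}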
diff --git a/kernel/sys.c b/kernel/sys.c
index c434968e9f5d..bfcdd00e92bf 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2156,7 +2156,7 @@ static int prctl_set_mm(int opt, unsigned long addr,
 		.auxv_size = 0,
 		.exe_fd = -1,
 	};
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int error;
 
 	if (arg5 || (arg4 && (opt != PR_SET_MM_AUXV &&
diff --git a/kernel/time/namespace.c b/kernel/time/namespace.c
index e3642278df43..8b5a1d6c90ad 100644
--- a/kernel/time/namespace.c
+++ b/kernel/time/namespace.c
@@ -192,7 +192,7 @@ static void timens_setup_vdso_clock_data(struct vdso_clock *vc,
 	offset[CLOCK_BOOTTIME_ALARM]	= boottime;
 }
 
-struct page *find_timens_vvar_page(struct vm_area_struct *vma)
+struct page *find_timens_vvar_page(struct mm_area *vma)
 {
 	if (likely(vma->vm_mm == current->mm))
 		return current->nsproxy->time_ns->vvar_page;
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index d8d7b28e2c2f..2178bd0d5590 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -7028,7 +7028,7 @@ static int __rb_inc_dec_mapped(struct ring_buffer_per_cpu *cpu_buffer,
  */
 #ifdef CONFIG_MMU
 static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	unsigned long nr_subbufs, nr_pages, nr_vma_pages, pgoff = vma->vm_pgoff;
 	unsigned int subbuf_pages, subbuf_order;
@@ -7125,14 +7125,14 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
 }
 #else
 static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	return -EOPNOTSUPP;
 }
 #endif
 
 int ring_buffer_map(struct trace_buffer *buffer, int cpu,
-		    struct vm_area_struct *vma)
+		    struct mm_area *vma)
 {
 	struct ring_buffer_per_cpu *cpu_buffer;
 	unsigned long flags, *subbuf_ids;
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index bc957a2507e2..58694c4b18b6 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -8481,7 +8481,7 @@ static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
 static inline void put_snapshot_map(struct trace_array *tr) { }
 #endif
 
-static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
+static void tracing_buffers_mmap_close(struct mm_area *vma)
 {
 	struct ftrace_buffer_info *info = vma->vm_file->private_data;
 	struct trace_iterator *iter = &info->iter;
@@ -8494,7 +8494,7 @@ static const struct vm_operations_struct tracing_buffers_vmops = {
 	.close		= tracing_buffers_mmap_close,
 };
 
-static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+static int tracing_buffers_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct ftrace_buffer_info *info = filp->private_data;
 	struct trace_iterator *iter = &info->iter;
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index fee40ffbd490..f8172a64070a 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -404,7 +404,7 @@ static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
 		return 0;
 
 	if (mm) {
-		const struct vm_area_struct *vma;
+		const struct mm_area *vma;
 
 		mmap_read_lock(mm);
 		vma = find_vma(mm, ip);
diff --git a/lib/buildid.c b/lib/buildid.c
index c4b0f376fb34..5acf0f755dd2 100644
--- a/lib/buildid.c
+++ b/lib/buildid.c
@@ -287,7 +287,7 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si
 /* enough for Elf64_Ehdr, Elf64_Phdr, and all the smaller requests */
 #define MAX_FREADER_BUF_SZ 64
 
-static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
+static int __build_id_parse(struct mm_area *vma, unsigned char *build_id,
 			    __u32 *size, bool may_fault)
 {
 	const Elf32_Ehdr *ehdr;
@@ -338,7 +338,7 @@ static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
  *
  * Return: 0 on success; negative error, otherwise
  */
-int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size)
+int build_id_parse_nofault(struct mm_area *vma, unsigned char *build_id, __u32 *size)
 {
 	return __build_id_parse(vma, build_id, size, false /* !may_fault */);
 }
@@ -354,7 +354,7 @@ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id,
  *
  * Return: 0 on success; negative error, otherwise
  */
-int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size)
+int build_id_parse(struct mm_area *vma, unsigned char *build_id, __u32 *size)
 {
 	return __build_id_parse(vma, build_id, size, true /* may_fault */);
 }
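
For illustration (not part of this patch): a minimal caller sketch for
build_id_parse(), assuming BUILD_ID_SIZE_MAX (20 bytes in current kernels)
for the buffer; the _nofault variant is for contexts that cannot take a
page fault:

	unsigned char build_id[BUILD_ID_SIZE_MAX];
	__u32 sz;

	if (!build_id_parse(vma, build_id, &sz))	/* may fault: sleepable context */
		pr_debug("found %u-byte build id\n", sz);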
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 5b144bc5c4ec..d08270e1c826 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -878,7 +878,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 	unsigned long start, end, addr;
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long src_pfns[64] = { 0 };
 	unsigned long dst_pfns[64] = { 0 };
 	struct migrate_vma args = { 0 };
@@ -938,7 +938,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	unsigned long start, end, addr;
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long src_pfns[64] = { 0 };
 	unsigned long dst_pfns[64] = { 0 };
 	struct dmirror_bounce bounce;
@@ -1342,7 +1342,7 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 	return 0;
 }
 
-static int dmirror_fops_mmap(struct file *file, struct vm_area_struct *vma)
+static int dmirror_fops_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned long addr;
 
diff --git a/lib/vdso/datastore.c b/lib/vdso/datastore.c
index 3693c6caf2c4..6079a11964e1 100644
--- a/lib/vdso/datastore.c
+++ b/lib/vdso/datastore.c
@@ -38,7 +38,7 @@ struct vdso_arch_data *vdso_k_arch_data = &vdso_arch_data_store.data;
 #endif /* CONFIG_ARCH_HAS_VDSO_ARCH_DATA */
 
 static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
-			     struct vm_area_struct *vma, struct vm_fault *vmf)
+			     struct mm_area *vma, struct vm_fault *vmf)
 {
 	struct page *timens_page = find_timens_vvar_page(vma);
 	unsigned long addr, pfn;
@@ -96,7 +96,7 @@ const struct vm_special_mapping vdso_vvar_mapping = {
 	.fault	= vvar_fault,
 };
 
-struct vm_area_struct *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr)
+struct mm_area *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr)
 {
 	return _install_special_mapping(mm, addr, VDSO_NR_PAGES * PAGE_SIZE,
 					VM_READ | VM_MAYREAD | VM_IO | VM_DONTDUMP |
@@ -115,7 +115,7 @@ struct vm_area_struct *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned
 int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
 {
 	struct mm_struct *mm = task->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	mmap_read_lock(mm);
diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index 0db1fc70c84d..db48cc64657f 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -39,7 +39,7 @@ struct folio *damon_get_folio(unsigned long pfn)
 	return folio;
 }
 
-void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr)
+void damon_ptep_mkold(pte_t *pte, struct mm_area *vma, unsigned long addr)
 {
 	pte_t pteval = ptep_get(pte);
 	struct folio *folio;
@@ -70,7 +70,7 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
 	folio_put(folio);
 }
 
-void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
+void damon_pmdp_mkold(pmd_t *pmd, struct mm_area *vma, unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index 18d837d11bce..81857e66d09b 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -9,8 +9,8 @@
 
 struct folio *damon_get_folio(unsigned long pfn);
 
-void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
-void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
+void damon_ptep_mkold(pte_t *pte, struct mm_area *vma, unsigned long addr);
+void damon_pmdp_mkold(pmd_t *pmd, struct mm_area *vma, unsigned long addr);
 
 int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
 			struct damos *s);
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 1b70d3f36046..5154132467eb 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -20,7 +20,7 @@
 #include "ops-common.h"
 
 static bool damon_folio_mkold_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *arg)
+		struct mm_area *vma, unsigned long addr, void *arg)
 {
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
 
@@ -88,7 +88,7 @@ static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
 }
 
 static bool damon_folio_young_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *arg)
+		struct mm_area *vma, unsigned long addr, void *arg)
 {
 	bool *accessed = arg;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
index 7cd944266a92..5d07633be7fb 100644
--- a/mm/damon/tests/vaddr-kunit.h
+++ b/mm/damon/tests/vaddr-kunit.h
@@ -14,7 +14,7 @@
 
 #include <kunit/test.h>
 
-static int __link_vmas(struct maple_tree *mt, struct vm_area_struct *vmas,
+static int __link_vmas(struct maple_tree *mt, struct mm_area *vmas,
 			ssize_t nr_vmas)
 {
 	int i, ret = -ENOMEM;
@@ -68,13 +68,13 @@ static void damon_test_three_regions_in_vmas(struct kunit *test)
 	static struct mm_struct mm;
 	struct damon_addr_range regions[3] = {0};
 	/* 10-20-25, 200-210-220, 300-305, 307-330 */
-	static struct vm_area_struct vmas[] = {
-		(struct vm_area_struct) {.vm_start = 10, .vm_end = 20},
-		(struct vm_area_struct) {.vm_start = 20, .vm_end = 25},
-		(struct vm_area_struct) {.vm_start = 200, .vm_end = 210},
-		(struct vm_area_struct) {.vm_start = 210, .vm_end = 220},
-		(struct vm_area_struct) {.vm_start = 300, .vm_end = 305},
-		(struct vm_area_struct) {.vm_start = 307, .vm_end = 330},
+	static struct mm_area vmas[] = {
+		(struct mm_area) {.vm_start = 10, .vm_end = 20},
+		(struct mm_area) {.vm_start = 20, .vm_end = 25},
+		(struct mm_area) {.vm_start = 200, .vm_end = 210},
+		(struct mm_area) {.vm_start = 210, .vm_end = 220},
+		(struct mm_area) {.vm_start = 300, .vm_end = 305},
+		(struct mm_area) {.vm_start = 307, .vm_end = 330},
 	};
 
 	mt_init_flags(&mm.mm_mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_USE_RCU);
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index e6d99106a7f9..ddd28b187cbb 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -121,7 +121,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
 {
 	struct damon_addr_range first_gap = {0}, second_gap = {0};
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma, *prev = NULL;
+	struct mm_area *vma, *prev = NULL;
 	unsigned long start;
 
 	/*
@@ -341,7 +341,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 
 #ifdef CONFIG_HUGETLB_PAGE
 static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
-				struct vm_area_struct *vma, unsigned long addr)
+				struct mm_area *vma, unsigned long addr)
 {
 	bool referenced = false;
 	pte_t entry = huge_ptep_get(mm, addr, pte);
diff --git a/mm/debug.c b/mm/debug.c
index db83e381a8ae..ea36f9732a2a 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -184,7 +184,7 @@ EXPORT_SYMBOL(dump_page);
 
 #ifdef CONFIG_DEBUG_VM
 
-void dump_vma(const struct vm_area_struct *vma)
+void dump_vma(const struct mm_area *vma)
 {
 	pr_emerg("vma %px start %px end %px mm %px\n"
 		"prot %lx anon_vma %px vm_ops %px\n"
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index bc748f700a9e..ba1ca4c6a44f 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -45,7 +45,7 @@
 
 struct pgtable_debug_args {
 	struct mm_struct	*mm;
-	struct vm_area_struct	*vma;
+	struct mm_area		*vma;
 
 	pgd_t			*pgdp;
 	p4d_t			*p4dp;
diff --git a/mm/filemap.c b/mm/filemap.c
index b5e784f34d98..2a8150e9ac7b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3293,7 +3293,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
 
 static vm_fault_t filemap_fault_recheck_pte_none(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	vm_fault_t ret = 0;
 	pte_t *ptep;
 
@@ -3689,7 +3689,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 			     pgoff_t start_pgoff, pgoff_t end_pgoff)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct file *file = vma->vm_file;
 	struct address_space *mapping = file->f_mapping;
 	pgoff_t file_end, last_pgoff = start_pgoff;
@@ -3793,7 +3793,7 @@ const struct vm_operations_struct generic_file_vm_ops = {
 
 /* This is used for a general mmap of a disk file */
 
-int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
+int generic_file_mmap(struct file *file, struct mm_area *vma)
 {
 	struct address_space *mapping = file->f_mapping;
 
@@ -3807,7 +3807,7 @@ int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
 /*
  * This is for filesystems which do not implement ->writepage.
  */
-int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
+int generic_file_readonly_mmap(struct file *file, struct mm_area *vma)
 {
 	if (vma_is_shared_maywrite(vma))
 		return -EINVAL;
@@ -3818,11 +3818,11 @@ vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 {
 	return VM_FAULT_SIGBUS;
 }
-int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
+int generic_file_mmap(struct file *file, struct mm_area *vma)
 {
 	return -ENOSYS;
 }
-int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
+int generic_file_readonly_mmap(struct file *file, struct mm_area *vma)
 {
 	return -ENOSYS;
 }
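
For illustration (not part of this patch): generic_file_mmap() is what most
disk filesystems plug into their file_operations, so the prototype change
ripples into every such table. A sketch for a hypothetical "myfs":

	static const struct file_operations myfs_file_operations = {
		.read_iter	= generic_file_read_iter,
		.write_iter	= generic_file_write_iter,
		.mmap		= generic_file_mmap,
		.llseek		= generic_file_llseek,
	};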
diff --git a/mm/gup.c b/mm/gup.c
index 92351e2fa876..88928bea023f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -595,7 +595,7 @@ static struct folio *try_grab_folio_fast(struct page *page, int refs,
 
 /* Common code for can_follow_write_* */
 static inline bool can_follow_write_common(struct page *page,
-		struct vm_area_struct *vma, unsigned int flags)
+		struct mm_area *vma, unsigned int flags)
 {
 	/* Maybe FOLL_FORCE is set to override it? */
 	if (!(flags & FOLL_FORCE))
@@ -620,7 +620,7 @@ static inline bool can_follow_write_common(struct page *page,
 	return page && PageAnon(page) && PageAnonExclusive(page);
 }
 
-static struct page *no_page_table(struct vm_area_struct *vma,
+static struct page *no_page_table(struct mm_area *vma,
 				  unsigned int flags, unsigned long address)
 {
 	if (!(flags & FOLL_DUMP))
@@ -648,7 +648,7 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
 /* FOLL_FORCE can write to even unwritable PUDs in COW mappings. */
 static inline bool can_follow_write_pud(pud_t pud, struct page *page,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					unsigned int flags)
 {
 	/* If the pud is writable, we can write to the page. */
@@ -658,7 +658,7 @@ static inline bool can_follow_write_pud(pud_t pud, struct page *page,
 	return can_follow_write_common(page, vma, flags);
 }
 
-static struct page *follow_huge_pud(struct vm_area_struct *vma,
+static struct page *follow_huge_pud(struct mm_area *vma,
 				    unsigned long addr, pud_t *pudp,
 				    int flags, struct follow_page_context *ctx)
 {
@@ -716,7 +716,7 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 
 /* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
 static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					unsigned int flags)
 {
 	/* If the pmd is writable, we can write to the page. */
@@ -732,7 +732,7 @@ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
 	return !userfaultfd_huge_pmd_wp(vma, pmd);
 }
 
-static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+static struct page *follow_huge_pmd(struct mm_area *vma,
 				    unsigned long addr, pmd_t *pmd,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
@@ -778,14 +778,14 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
 }
 
 #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
-static struct page *follow_huge_pud(struct vm_area_struct *vma,
+static struct page *follow_huge_pud(struct mm_area *vma,
 				    unsigned long addr, pud_t *pudp,
 				    int flags, struct follow_page_context *ctx)
 {
 	return NULL;
 }
 
-static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+static struct page *follow_huge_pmd(struct mm_area *vma,
 				    unsigned long addr, pmd_t *pmd,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
@@ -794,7 +794,7 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
 }
 #endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 
-static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+static int follow_pfn_pte(struct mm_area *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
 	if (flags & FOLL_TOUCH) {
@@ -817,7 +817,7 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 
 /* FOLL_FORCE can write to even unwritable PTEs in COW mappings. */
 static inline bool can_follow_write_pte(pte_t pte, struct page *page,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					unsigned int flags)
 {
 	/* If the pte is writable, we can write to the page. */
@@ -833,7 +833,7 @@ static inline bool can_follow_write_pte(pte_t pte, struct page *page,
 	return !userfaultfd_pte_wp(vma, pte);
 }
 
-static struct page *follow_page_pte(struct vm_area_struct *vma,
+static struct page *follow_page_pte(struct mm_area *vma,
 		unsigned long address, pmd_t *pmd, unsigned int flags,
 		struct dev_pagemap **pgmap)
 {
@@ -947,7 +947,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	return no_page_table(vma, flags, address);
 }
 
-static struct page *follow_pmd_mask(struct vm_area_struct *vma,
+static struct page *follow_pmd_mask(struct mm_area *vma,
 				    unsigned long address, pud_t *pudp,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
@@ -999,7 +999,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	return page;
 }
 
-static struct page *follow_pud_mask(struct vm_area_struct *vma,
+static struct page *follow_pud_mask(struct mm_area *vma,
 				    unsigned long address, p4d_t *p4dp,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
@@ -1027,7 +1027,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	return follow_pmd_mask(vma, address, pudp, flags, ctx);
 }
 
-static struct page *follow_p4d_mask(struct vm_area_struct *vma,
+static struct page *follow_p4d_mask(struct mm_area *vma,
 				    unsigned long address, pgd_t *pgdp,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
@@ -1046,7 +1046,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 /**
  * follow_page_mask - look up a page descriptor from a user-virtual address
- * @vma: vm_area_struct mapping @address
+ * @vma: mm_area mapping @address
  * @address: virtual address to look up
  * @flags: flags modifying lookup behaviour
  * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a
@@ -1068,7 +1068,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
  * an error pointer if there is a mapping to something not represented
  * by a page descriptor (see also vm_normal_page()).
  */
-static struct page *follow_page_mask(struct vm_area_struct *vma,
+static struct page *follow_page_mask(struct mm_area *vma,
 			      unsigned long address, unsigned int flags,
 			      struct follow_page_context *ctx)
 {
@@ -1092,7 +1092,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 }
 
 static int get_gate_page(struct mm_struct *mm, unsigned long address,
-		unsigned int gup_flags, struct vm_area_struct **vma,
+		unsigned int gup_flags, struct mm_area **vma,
 		struct page **page)
 {
 	pgd_t *pgd;
@@ -1151,7 +1151,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
  * FOLL_NOWAIT, the mmap_lock may be released.  If it is, *@locked will be set
  * to 0 and -EBUSY returned.
  */
-static int faultin_page(struct vm_area_struct *vma,
+static int faultin_page(struct mm_area *vma,
 		unsigned long address, unsigned int flags, bool unshare,
 		int *locked)
 {
@@ -1246,7 +1246,7 @@ static int faultin_page(struct vm_area_struct *vma,
  * This results in both data being written to a folio without writenotify, and
  * the folio being dirtied unexpectedly (if the caller decides to do so).
  */
-static bool writable_file_mapping_allowed(struct vm_area_struct *vma,
+static bool writable_file_mapping_allowed(struct mm_area *vma,
 					  unsigned long gup_flags)
 {
 	/*
@@ -1264,7 +1264,7 @@ static bool writable_file_mapping_allowed(struct vm_area_struct *vma,
 	return !vma_needs_dirty_tracking(vma);
 }
 
-static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
+static int check_vma_flags(struct mm_area *vma, unsigned long gup_flags)
 {
 	vm_flags_t vm_flags = vma->vm_flags;
 	int write = (gup_flags & FOLL_WRITE);
@@ -1329,14 +1329,14 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * This is "vma_lookup()", but with a warning if we would have
  * historically expanded the stack in the GUP code.
  */
-static struct vm_area_struct *gup_vma_lookup(struct mm_struct *mm,
+static struct mm_area *gup_vma_lookup(struct mm_struct *mm,
 	 unsigned long addr)
 {
 #ifdef CONFIG_STACK_GROWSUP
 	return vma_lookup(mm, addr);
 #else
 	static volatile unsigned long next_warn;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long now, next;
 
 	vma = find_vma(mm, addr);
@@ -1424,7 +1424,7 @@ static long __get_user_pages(struct mm_struct *mm,
 		int *locked)
 {
 	long ret = 0, i = 0;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	struct follow_page_context ctx = { NULL };
 
 	if (!nr_pages)
@@ -1574,7 +1574,7 @@ static long __get_user_pages(struct mm_struct *mm,
 	return i ? i : ret;
 }
 
-static bool vma_permits_fault(struct vm_area_struct *vma,
+static bool vma_permits_fault(struct mm_area *vma,
 			      unsigned int fault_flags)
 {
 	bool write   = !!(fault_flags & FAULT_FLAG_WRITE);
@@ -1630,7 +1630,7 @@ int fixup_user_fault(struct mm_struct *mm,
 		     unsigned long address, unsigned int fault_flags,
 		     bool *unlocked)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	vm_fault_t ret;
 
 	address = untagged_addr_remote(mm, address);
@@ -1879,7 +1879,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
  * If @locked is non-NULL, it must held for read only and may be
  * released.  If it's released, *@locked will be set to 0.
  */
-long populate_vma_page_range(struct vm_area_struct *vma,
+long populate_vma_page_range(struct mm_area *vma,
 		unsigned long start, unsigned long end, int *locked)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -1995,7 +1995,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 {
 	struct mm_struct *mm = current->mm;
 	unsigned long end, nstart, nend;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	int locked = 0;
 	long ret = 0;
 
@@ -2049,7 +2049,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		unsigned long nr_pages, struct page **pages,
 		int *locked, unsigned int foll_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	bool must_unlock = false;
 	unsigned long vm_flags;
 	long i;
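
For illustration (not part of this patch): fixup_user_fault() expects the
mmap lock to be held and reports through *unlocked whether it had to drop
and retake it. A minimal caller sketch:

	bool unlocked = false;
	int ret;

	mmap_read_lock(mm);
	ret = fixup_user_fault(mm, addr, FAULT_FLAG_WRITE, &unlocked);
	/* if unlocked is now true, the lock was dropped and reacquired */
	mmap_read_unlock(mm);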
diff --git a/mm/hmm.c b/mm/hmm.c
index 082f7b7c0b9e..b3fdbe6d2e2a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -64,7 +64,7 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
 			 unsigned int required_fault, struct mm_walk *walk)
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	unsigned int fault_flags = FAULT_FLAG_REMOTE;
 
 	WARN_ON_ONCE(!required_fault);
@@ -472,7 +472,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	unsigned long addr = start, i, pfn;
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	unsigned int required_fault;
 	unsigned long pfn_req_flags;
 	unsigned long cpu_flags;
@@ -522,7 +522,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
 	    vma->vm_flags & VM_READ)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2a47682d1ab7..30d01dbe55af 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -83,7 +83,7 @@ unsigned long huge_anon_orders_madvise __read_mostly;
 unsigned long huge_anon_orders_inherit __read_mostly;
 static bool anon_orders_configured __initdata;
 
-static inline bool file_thp_enabled(struct vm_area_struct *vma)
+static inline bool file_thp_enabled(struct mm_area *vma)
 {
 	struct inode *inode;
 
@@ -98,7 +98,7 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
 	return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
 }
 
-unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
+unsigned long __thp_vma_allowable_orders(struct mm_area *vma,
 					 unsigned long vm_flags,
 					 unsigned long tva_flags,
 					 unsigned long orders)
@@ -1050,7 +1050,7 @@ static int __init setup_thp_anon(char *str)
 }
 __setup("thp_anon=", setup_thp_anon);
 
-pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
+pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct mm_area *vma)
 {
 	if (likely(vma->vm_flags & VM_WRITE))
 		pmd = pmd_mkwrite(pmd, vma);
@@ -1155,7 +1155,7 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 
-static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
+static struct folio *vma_alloc_anon_folio_pmd(struct mm_area *vma,
 		unsigned long addr)
 {
 	gfp_t gfp = vma_thp_gfp_mask(vma);
@@ -1199,7 +1199,7 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
 }
 
 static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
-		struct vm_area_struct *vma, unsigned long haddr)
+		struct mm_area *vma, unsigned long haddr)
 {
 	pmd_t entry;
 
@@ -1218,7 +1218,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 {
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio;
 	pgtable_t pgtable;
 	vm_fault_t ret = 0;
@@ -1277,7 +1277,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
  *	    available
  * never: never stall for any thp allocation
  */
-gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
+gfp_t vma_thp_gfp_mask(struct mm_area *vma)
 {
 	const bool vma_madvised = vma && (vma->vm_flags & VM_HUGEPAGE);
 
@@ -1305,7 +1305,7 @@ gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
 
 /* Caller must hold page table lock. */
 static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
-		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
+		struct mm_area *vma, unsigned long haddr, pmd_t *pmd,
 		struct folio *zero_folio)
 {
 	pmd_t entry;
@@ -1318,7 +1318,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
 
 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	vm_fault_t ret;
 
@@ -1373,7 +1373,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 	return __do_huge_pmd_anonymous_page(vmf);
 }
 
-static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
+static int insert_pfn_pmd(struct mm_area *vma, unsigned long addr,
 		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
 		pgtable_t pgtable)
 {
@@ -1430,7 +1430,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
 vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
 {
 	unsigned long addr = vmf->address & PMD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	pgprot_t pgprot = vma->vm_page_prot;
 	pgtable_t pgtable = NULL;
 	spinlock_t *ptl;
@@ -1471,7 +1471,7 @@ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
 vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
 				bool write)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	unsigned long addr = vmf->address & PMD_MASK;
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *ptl;
@@ -1508,14 +1508,14 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
 EXPORT_SYMBOL_GPL(vmf_insert_folio_pmd);
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
+static pud_t maybe_pud_mkwrite(pud_t pud, struct mm_area *vma)
 {
 	if (likely(vma->vm_flags & VM_WRITE))
 		pud = pud_mkwrite(pud);
 	return pud;
 }
 
-static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
+static void insert_pfn_pud(struct mm_area *vma, unsigned long addr,
 		pud_t *pud, pfn_t pfn, bool write)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -1560,7 +1560,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
 vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 {
 	unsigned long addr = vmf->address & PUD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	pgprot_t pgprot = vma->vm_page_prot;
 	spinlock_t *ptl;
 
@@ -1599,7 +1599,7 @@ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
 				bool write)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	unsigned long addr = vmf->address & PUD_MASK;
 	pud_t *pud = vmf->pud;
 	struct mm_struct *mm = vma->vm_mm;
@@ -1633,7 +1633,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
 EXPORT_SYMBOL_GPL(vmf_insert_folio_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+void touch_pmd(struct mm_area *vma, unsigned long addr,
 	       pmd_t *pmd, bool write)
 {
 	pmd_t _pmd;
@@ -1646,7 +1646,7 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 		update_mmu_cache_pmd(vma, addr, pmd);
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
+struct page *follow_devmap_pmd(struct mm_area *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
 {
 	unsigned long pfn = pmd_pfn(*pmd);
@@ -1688,7 +1688,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
-		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+		  struct mm_area *dst_vma, struct mm_area *src_vma)
 {
 	spinlock_t *dst_ptl, *src_ptl;
 	struct page *src_page;
@@ -1810,7 +1810,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+void touch_pud(struct mm_area *vma, unsigned long addr,
 	       pud_t *pud, bool write)
 {
 	pud_t _pud;
@@ -1825,7 +1825,7 @@ void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
-		  struct vm_area_struct *vma)
+		  struct mm_area *vma)
 {
 	spinlock_t *dst_ptl, *src_ptl;
 	pud_t pud;
@@ -1889,7 +1889,7 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
 static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
 {
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mmu_notifier_range range;
 	struct folio *folio;
 	vm_fault_t ret = 0;
@@ -1921,7 +1921,7 @@ static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
 vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio;
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
@@ -2012,7 +2012,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	return VM_FAULT_FALLBACK;
 }
 
-static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
+static inline bool can_change_pmd_writable(struct mm_area *vma,
 					   unsigned long addr, pmd_t pmd)
 {
 	struct page *page;
@@ -2045,7 +2045,7 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
 /* NUMA hinting page fault entry point for trans huge pmds */
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	int nid = NUMA_NO_NODE;
@@ -2123,7 +2123,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
  * Return true if we do MADV_FREE successfully on entire pmd page.
  * Otherwise, return false.
  */
-bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
 		pmd_t *pmd, unsigned long addr, unsigned long next)
 {
 	spinlock_t *ptl;
@@ -2202,7 +2202,7 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
 	mm_dec_nr_ptes(mm);
 }
 
-int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+int zap_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
 		 pmd_t *pmd, unsigned long addr)
 {
 	pmd_t orig_pmd;
@@ -2272,7 +2272,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #ifndef pmd_move_must_withdraw
 static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
 					 spinlock_t *old_pmd_ptl,
-					 struct vm_area_struct *vma)
+					 struct mm_area *vma)
 {
 	/*
 	 * With split pmd lock we also need to move preallocated
@@ -2305,7 +2305,7 @@ static pmd_t clear_uffd_wp_pmd(pmd_t pmd)
 	return pmd;
 }
 
-bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+bool move_huge_pmd(struct mm_area *vma, unsigned long old_addr,
 		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
@@ -2363,7 +2363,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
  *      or if prot_numa but THP migration is not supported
  *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
  */
-int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+int change_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
 		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
 		    unsigned long cp_flags)
 {
@@ -2502,7 +2502,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
  * - HPAGE_PUD_NR: if pud was successfully processed
  */
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+int change_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
 		    pud_t *pudp, unsigned long addr, pgprot_t newprot,
 		    unsigned long cp_flags)
 {
@@ -2550,7 +2550,7 @@ int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
  * repeated by the caller, or other errors in case of failure.
  */
 int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
-			struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+			struct mm_area *dst_vma, struct mm_area *src_vma,
 			unsigned long dst_addr, unsigned long src_addr)
 {
 	pmd_t _dst_pmd, src_pmdval;
@@ -2687,7 +2687,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
  * Note that if it returns page table lock pointer, this routine returns without
  * unlocking page table lock. So callers must unlock it.
  */
-spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
+spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct mm_area *vma)
 {
 	spinlock_t *ptl;
 	ptl = pmd_lock(vma->vm_mm, pmd);
@@ -2704,7 +2704,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
  * Note that if it returns page table lock pointer, this routine returns without
  * unlocking page table lock. So callers must unlock it.
  */
-spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
+spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct mm_area *vma)
 {
 	spinlock_t *ptl;
 
@@ -2716,7 +2716,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
+int zap_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
 		 pud_t *pud, unsigned long addr)
 {
 	spinlock_t *ptl;
@@ -2751,7 +2751,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	return 1;
 }
 
-static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
+static void __split_huge_pud_locked(struct mm_area *vma, pud_t *pud,
 		unsigned long haddr)
 {
 	struct folio *folio;
@@ -2783,7 +2783,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
 		-HPAGE_PUD_NR);
 }
 
-void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+void __split_huge_pud(struct mm_area *vma, pud_t *pud,
 		unsigned long address)
 {
 	spinlock_t *ptl;
@@ -2803,13 +2803,13 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 	mmu_notifier_invalidate_range_end(&range);
 }
 #else
-void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
+void __split_huge_pud(struct mm_area *vma, pud_t *pud,
 		unsigned long address)
 {
 }
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
 
-static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
+static void __split_huge_zero_page_pmd(struct mm_area *vma,
 		unsigned long haddr, pmd_t *pmd)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -2850,7 +2850,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
 	pmd_populate(mm, pmd, pgtable);
 }
 
-static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
+static void __split_huge_pmd_locked(struct mm_area *vma, pmd_t *pmd,
 		unsigned long haddr, bool freeze)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -3072,7 +3072,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pmd_populate(mm, pmd, pgtable);
 }
 
-void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+void split_huge_pmd_locked(struct mm_area *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze, struct folio *folio)
 {
 	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
@@ -3093,7 +3093,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 	}
 }
 
-void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+void __split_huge_pmd(struct mm_area *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio)
 {
 	spinlock_t *ptl;
@@ -3109,7 +3109,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	mmu_notifier_invalidate_range_end(&range);
 }
 
-void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
+void split_huge_pmd_address(struct mm_area *vma, unsigned long address,
 		bool freeze, struct folio *folio)
 {
 	pmd_t *pmd = mm_find_pmd(vma->vm_mm, address);
@@ -3120,7 +3120,7 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
 	__split_huge_pmd(vma, pmd, address, freeze, folio);
 }
 
-static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
+static inline void split_huge_pmd_if_needed(struct mm_area *vma, unsigned long address)
 {
 	/*
 	 * If the new address isn't hpage aligned and it could previously
@@ -3132,10 +3132,10 @@ static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned
 		split_huge_pmd_address(vma, address, false, NULL);
 }
 
-void vma_adjust_trans_huge(struct vm_area_struct *vma,
+void vma_adjust_trans_huge(struct mm_area *vma,
 			   unsigned long start,
 			   unsigned long end,
-			   struct vm_area_struct *next)
+			   struct mm_area *next)
 {
 	/* Check if we need to split start first. */
 	split_huge_pmd_if_needed(vma, start);
@@ -3171,7 +3171,7 @@ static void unmap_folio(struct folio *folio)
 	try_to_unmap_flush();
 }
 
-static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
+static bool __discard_anon_folio_pmd_locked(struct mm_area *vma,
 					    unsigned long addr, pmd_t *pmdp,
 					    struct folio *folio)
 {
@@ -3234,7 +3234,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
 	return true;
 }
 
-bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
+bool unmap_huge_pmd_locked(struct mm_area *vma, unsigned long addr,
 			   pmd_t *pmdp, struct folio *folio)
 {
 	VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
@@ -4316,7 +4316,7 @@ static void split_huge_pages_all(void)
 	pr_debug("%lu of %lu THP split\n", split, total);
 }
 
-static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
+static inline bool vma_not_suitable_for_thp_split(struct mm_area *vma)
 {
 	return vma_is_special_huge(vma) || (vma->vm_flags & VM_IO) ||
 		    is_vm_hugetlb_page(vma);
@@ -4359,7 +4359,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
 	 * table filled with PTE-mapped THPs, each of which is distinct.
 	 */
 	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
-		struct vm_area_struct *vma = vma_lookup(mm, addr);
+		struct mm_area *vma = vma_lookup(mm, addr);
 		struct folio_walk fw;
 		struct folio *folio;
 		struct address_space *mapping;
@@ -4614,7 +4614,7 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		struct page *page)
 {
 	struct folio *folio = page_folio(page);
-	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_area *vma = pvmw->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address = pvmw->address;
 	bool anon_exclusive;
@@ -4663,7 +4663,7 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 {
 	struct folio *folio = page_folio(new);
-	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_area *vma = pvmw->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long address = pvmw->address;
 	unsigned long haddr = address & HPAGE_PMD_MASK;
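
For illustration (not part of this patch): per the comments above,
__pmd_trans_huge_lock() returns with the page table lock held on success,
so the usual caller pattern is:

	spinlock_t *ptl;

	ptl = __pmd_trans_huge_lock(pmd, vma);
	if (ptl) {
		/* *pmd is a stable huge entry while ptl is held */
		/* ... operate on the huge pmd ... */
		spin_unlock(ptl);
	}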
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 39f92aad7bd1..96a0b225c1e8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -116,12 +116,12 @@ struct mutex *hugetlb_fault_mutex_table __ro_after_init;
 
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
-static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
-static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
-static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+static void hugetlb_vma_lock_free(struct mm_area *vma);
+static void hugetlb_vma_lock_alloc(struct mm_area *vma);
+static void __hugetlb_vma_unlock_write_free(struct mm_area *vma);
+static void hugetlb_unshare_pmds(struct mm_area *vma,
 		unsigned long start, unsigned long end);
-static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
+static struct resv_map *vma_resv_map(struct mm_area *vma);
 
 static void hugetlb_free_folio(struct folio *folio)
 {
@@ -288,7 +288,7 @@ static inline struct hugepage_subpool *subpool_inode(struct inode *inode)
 	return HUGETLBFS_SB(inode->i_sb)->spool;
 }
 
-static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
+static inline struct hugepage_subpool *subpool_vma(struct mm_area *vma)
 {
 	return subpool_inode(file_inode(vma->vm_file));
 }
@@ -296,7 +296,7 @@ static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
 /*
  * hugetlb vma_lock helper routines
  */
-void hugetlb_vma_lock_read(struct vm_area_struct *vma)
+void hugetlb_vma_lock_read(struct mm_area *vma)
 {
 	if (__vma_shareable_lock(vma)) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
@@ -309,7 +309,7 @@ void hugetlb_vma_lock_read(struct vm_area_struct *vma)
 	}
 }
 
-void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
+void hugetlb_vma_unlock_read(struct mm_area *vma)
 {
 	if (__vma_shareable_lock(vma)) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
@@ -322,7 +322,7 @@ void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
 	}
 }
 
-void hugetlb_vma_lock_write(struct vm_area_struct *vma)
+void hugetlb_vma_lock_write(struct mm_area *vma)
 {
 	if (__vma_shareable_lock(vma)) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
@@ -335,7 +335,7 @@ void hugetlb_vma_lock_write(struct vm_area_struct *vma)
 	}
 }
 
-void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
+void hugetlb_vma_unlock_write(struct mm_area *vma)
 {
 	if (__vma_shareable_lock(vma)) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
@@ -348,7 +348,7 @@ void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
 	}
 }
 
-int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
+int hugetlb_vma_trylock_write(struct mm_area *vma)
 {
 
 	if (__vma_shareable_lock(vma)) {
@@ -364,7 +364,7 @@ int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
 	return 1;
 }
 
-void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
+void hugetlb_vma_assert_locked(struct mm_area *vma)
 {
 	if (__vma_shareable_lock(vma)) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
@@ -387,7 +387,7 @@ void hugetlb_vma_lock_release(struct kref *kref)
 
 static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
 {
-	struct vm_area_struct *vma = vma_lock->vma;
+	struct mm_area *vma = vma_lock->vma;
 
 	/*
 	 * vma_lock structure may or not be released as a result of put,
@@ -400,7 +400,7 @@ static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
 	kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
 }
 
-static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
+static void __hugetlb_vma_unlock_write_free(struct mm_area *vma)
 {
 	if (__vma_shareable_lock(vma)) {
 		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
@@ -414,7 +414,7 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
 	}
 }
 
-static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
+static void hugetlb_vma_lock_free(struct mm_area *vma)
 {
 	/*
 	 * Only present in sharable vmas.
@@ -430,7 +430,7 @@ static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
 	}
 }
 
-static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
+static void hugetlb_vma_lock_alloc(struct mm_area *vma)
 {
 	struct hugetlb_vma_lock *vma_lock;
 
@@ -1021,7 +1021,7 @@ static long region_count(struct resv_map *resv, long f, long t)
  * the mapping, huge page units here.
  */
 static pgoff_t vma_hugecache_offset(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
+			struct mm_area *vma, unsigned long address)
 {
 	return ((address - vma->vm_start) >> huge_page_shift(h)) +
 			(vma->vm_pgoff >> huge_page_order(h));
@@ -1036,7 +1036,7 @@ static pgoff_t vma_hugecache_offset(struct hstate *h,
  *
  * Return: The default size of the folios allocated when backing a VMA.
  */
-unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
+unsigned long vma_kernel_pagesize(struct mm_area *vma)
 {
 	if (vma->vm_ops && vma->vm_ops->pagesize)
 		return vma->vm_ops->pagesize(vma);
@@ -1050,7 +1050,7 @@ EXPORT_SYMBOL_GPL(vma_kernel_pagesize);
  * architectures where it differs, an architecture-specific 'strong'
  * version of this symbol is required.
  */
-__weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+__weak unsigned long vma_mmu_pagesize(struct mm_area *vma)
 {
 	return vma_kernel_pagesize(vma);
 }
@@ -1083,12 +1083,12 @@ __weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
  * reference it, this region map represents those offsets which have consumed
  * reservation ie. where pages have been instantiated.
  */
-static unsigned long get_vma_private_data(struct vm_area_struct *vma)
+static unsigned long get_vma_private_data(struct mm_area *vma)
 {
 	return (unsigned long)vma->vm_private_data;
 }
 
-static void set_vma_private_data(struct vm_area_struct *vma,
+static void set_vma_private_data(struct mm_area *vma,
 							unsigned long value)
 {
 	vma->vm_private_data = (void *)value;
@@ -1178,7 +1178,7 @@ static inline struct resv_map *inode_resv_map(struct inode *inode)
 	return (struct resv_map *)(&inode->i_data)->i_private_data;
 }
 
-static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
+static struct resv_map *vma_resv_map(struct mm_area *vma)
 {
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 	if (vma->vm_flags & VM_MAYSHARE) {
@@ -1193,7 +1193,7 @@ static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
 	}
 }
 
-static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
+static void set_vma_resv_map(struct mm_area *vma, struct resv_map *map)
 {
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
@@ -1201,7 +1201,7 @@ static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
 	set_vma_private_data(vma, (unsigned long)map);
 }
 
-static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
+static void set_vma_resv_flags(struct mm_area *vma, unsigned long flags)
 {
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
@@ -1209,21 +1209,21 @@ static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
 	set_vma_private_data(vma, get_vma_private_data(vma) | flags);
 }
 
-static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
+static int is_vma_resv_set(struct mm_area *vma, unsigned long flag)
 {
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 
 	return (get_vma_private_data(vma) & flag) != 0;
 }
 
-bool __vma_private_lock(struct vm_area_struct *vma)
+bool __vma_private_lock(struct mm_area *vma)
 {
 	return !(vma->vm_flags & VM_MAYSHARE) &&
 		get_vma_private_data(vma) & ~HPAGE_RESV_MASK &&
 		is_vma_resv_set(vma, HPAGE_RESV_OWNER);
 }
 
-void hugetlb_dup_vma_private(struct vm_area_struct *vma)
+void hugetlb_dup_vma_private(struct mm_area *vma)
 {
 	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
 	/*
@@ -1254,7 +1254,7 @@ void hugetlb_dup_vma_private(struct vm_area_struct *vma)
  * same sized vma. It should never come here with last ref on the
  * reservation.
  */
-void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
+void clear_vma_resv_huge_pages(struct mm_area *vma)
 {
 	/*
 	 * Clear the old hugetlb private page reservation.
@@ -1365,7 +1365,7 @@ static unsigned long available_huge_pages(struct hstate *h)
 }
 
 static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				unsigned long address, long gbl_chg)
 {
 	struct folio *folio = NULL;
@@ -2324,7 +2324,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
  */
 static
 struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
-		struct vm_area_struct *vma, unsigned long addr)
+		struct mm_area *vma, unsigned long addr)
 {
 	struct folio *folio = NULL;
 	struct mempolicy *mpol;
@@ -2606,7 +2606,7 @@ enum vma_resv_mode {
 	VMA_DEL_RESV,
 };
 static long __vma_reservation_common(struct hstate *h,
-				struct vm_area_struct *vma, unsigned long addr,
+				struct mm_area *vma, unsigned long addr,
 				enum vma_resv_mode mode)
 {
 	struct resv_map *resv;
@@ -2686,31 +2686,31 @@ static long __vma_reservation_common(struct hstate *h,
 }
 
 static long vma_needs_reservation(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	return __vma_reservation_common(h, vma, addr, VMA_NEEDS_RESV);
 }
 
 static long vma_commit_reservation(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	return __vma_reservation_common(h, vma, addr, VMA_COMMIT_RESV);
 }
 
 static void vma_end_reservation(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	(void)__vma_reservation_common(h, vma, addr, VMA_END_RESV);
 }
 
 static long vma_add_reservation(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	return __vma_reservation_common(h, vma, addr, VMA_ADD_RESV);
 }
 
 static long vma_del_reservation(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	return __vma_reservation_common(h, vma, addr, VMA_DEL_RESV);
 }
@@ -2735,7 +2735,7 @@ static long vma_del_reservation(struct hstate *h,
  *
  * In case 2, simply undo reserve map modifications done by alloc_hugetlb_folio.
  */
-void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
+void restore_reserve_on_error(struct hstate *h, struct mm_area *vma,
 			unsigned long address, struct folio *folio)
 {
 	long rc = vma_needs_reservation(h, vma, address);
@@ -3004,7 +3004,7 @@ typedef enum {
  * allocation).  New call sites should (probably) never set it to true!!
  * When it's set, the allocation will bypass all vma level reservations.
  */
-struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+struct folio *alloc_hugetlb_folio(struct mm_area *vma,
 				    unsigned long addr, bool cow_from_owner)
 {
 	struct hugepage_subpool *spool = subpool_vma(vma);
@@ -5314,7 +5314,7 @@ static int hugetlb_acct_memory(struct hstate *h, long delta)
 	return ret;
 }
 
-static void hugetlb_vm_op_open(struct vm_area_struct *vma)
+static void hugetlb_vm_op_open(struct mm_area *vma)
 {
 	struct resv_map *resv = vma_resv_map(vma);
 
@@ -5352,7 +5352,7 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
 	}
 }
 
-static void hugetlb_vm_op_close(struct vm_area_struct *vma)
+static void hugetlb_vm_op_close(struct mm_area *vma)
 {
 	struct hstate *h = hstate_vma(vma);
 	struct resv_map *resv;
@@ -5383,7 +5383,7 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
 	kref_put(&resv->refs, resv_map_release);
 }
 
-static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
+static int hugetlb_vm_op_split(struct mm_area *vma, unsigned long addr)
 {
 	if (addr & ~(huge_page_mask(hstate_vma(vma))))
 		return -EINVAL;
@@ -5409,7 +5409,7 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
 	return 0;
 }
 
-static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
+static unsigned long hugetlb_vm_op_pagesize(struct mm_area *vma)
 {
 	return huge_page_size(hstate_vma(vma));
 }
@@ -5441,7 +5441,7 @@ const struct vm_operations_struct hugetlb_vm_ops = {
 	.pagesize = hugetlb_vm_op_pagesize,
 };
 
-static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
+static pte_t make_huge_pte(struct mm_area *vma, struct page *page,
 		bool try_mkwrite)
 {
 	pte_t entry;
@@ -5460,7 +5460,7 @@ static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
 	return entry;
 }
 
-static void set_huge_ptep_writable(struct vm_area_struct *vma,
+static void set_huge_ptep_writable(struct mm_area *vma,
 				   unsigned long address, pte_t *ptep)
 {
 	pte_t entry;
@@ -5470,7 +5470,7 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
 		update_mmu_cache(vma, address, ptep);
 }
 
-static void set_huge_ptep_maybe_writable(struct vm_area_struct *vma,
+static void set_huge_ptep_maybe_writable(struct mm_area *vma,
 					 unsigned long address, pte_t *ptep)
 {
 	if (vma->vm_flags & VM_WRITE)
@@ -5504,7 +5504,7 @@ bool is_hugetlb_entry_hwpoisoned(pte_t pte)
 }
 
 static void
-hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long addr,
+hugetlb_install_folio(struct mm_area *vma, pte_t *ptep, unsigned long addr,
 		      struct folio *new_folio, pte_t old, unsigned long sz)
 {
 	pte_t newpte = make_huge_pte(vma, &new_folio->page, true);
@@ -5519,8 +5519,8 @@ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long add
 }
 
 int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
-			    struct vm_area_struct *dst_vma,
-			    struct vm_area_struct *src_vma)
+			    struct mm_area *dst_vma,
+			    struct mm_area *src_vma)
 {
 	pte_t *src_pte, *dst_pte, entry;
 	struct folio *pte_folio;
@@ -5706,7 +5706,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	return ret;
 }
 
-static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+static void move_huge_pte(struct mm_area *vma, unsigned long old_addr,
 			  unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte,
 			  unsigned long sz)
 {
@@ -5745,8 +5745,8 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
 	spin_unlock(dst_ptl);
 }
 
-int move_hugetlb_page_tables(struct vm_area_struct *vma,
-			     struct vm_area_struct *new_vma,
+int move_hugetlb_page_tables(struct mm_area *vma,
+			     struct mm_area *new_vma,
 			     unsigned long old_addr, unsigned long new_addr,
 			     unsigned long len)
 {
@@ -5809,7 +5809,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	return len + old_addr - old_end;
 }
 
-void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+void __unmap_hugepage_range(struct mmu_gather *tlb, struct mm_area *vma,
 			    unsigned long start, unsigned long end,
 			    struct page *ref_page, zap_flags_t zap_flags)
 {
@@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		tlb_flush_mmu_tlbonly(tlb);
 }
 
-void __hugetlb_zap_begin(struct vm_area_struct *vma,
+void __hugetlb_zap_begin(struct mm_area *vma,
 			 unsigned long *start, unsigned long *end)
 {
 	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
@@ -5989,7 +5989,7 @@ void __hugetlb_zap_begin(struct vm_area_struct *vma,
 		i_mmap_lock_write(vma->vm_file->f_mapping);
 }
 
-void __hugetlb_zap_end(struct vm_area_struct *vma,
+void __hugetlb_zap_end(struct mm_area *vma,
 		       struct zap_details *details)
 {
 	zap_flags_t zap_flags = details ? details->zap_flags : 0;
@@ -6016,7 +6016,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
 		i_mmap_unlock_write(vma->vm_file->f_mapping);
 }
 
-void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
+void unmap_hugepage_range(struct mm_area *vma, unsigned long start,
 			  unsigned long end, struct page *ref_page,
 			  zap_flags_t zap_flags)
 {
@@ -6041,11 +6041,11 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
  * from other VMAs and let the children be SIGKILLed if they are faulting the
  * same region.
  */
-static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
+static void unmap_ref_private(struct mm_struct *mm, struct mm_area *vma,
 			      struct page *page, unsigned long address)
 {
 	struct hstate *h = hstate_vma(vma);
-	struct vm_area_struct *iter_vma;
+	struct mm_area *iter_vma;
 	struct address_space *mapping;
 	pgoff_t pgoff;
 
@@ -6100,7 +6100,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
 static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
 		       struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	pte_t pte = huge_ptep_get(mm, vmf->address, vmf->pte);
@@ -6294,7 +6294,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
  * Return whether there is a pagecache page to back given address within VMA.
  */
 bool hugetlbfs_pagecache_present(struct hstate *h,
-				 struct vm_area_struct *vma, unsigned long address)
+				 struct mm_area *vma, unsigned long address)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = linear_page_index(vma, address);
@@ -6373,7 +6373,7 @@ static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm, unsigned
 static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 			struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	struct hstate *h = hstate_vma(vma);
 	vm_fault_t ret = VM_FAULT_SIGBUS;
@@ -6611,7 +6611,7 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
 }
 #endif
 
-vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+vm_fault_t hugetlb_fault(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long address, unsigned int flags)
 {
 	vm_fault_t ret;
@@ -6824,7 +6824,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  * Can probably be eliminated, but still used by hugetlb_mfill_atomic_pte().
  */
 static struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
-		struct vm_area_struct *vma, unsigned long address)
+		struct mm_area *vma, unsigned long address)
 {
 	struct mempolicy *mpol;
 	nodemask_t *nodemask;
@@ -6851,7 +6851,7 @@ static struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
  * with modifications for hugetlb pages.
  */
 int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
-			     struct vm_area_struct *dst_vma,
+			     struct mm_area *dst_vma,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
 			     uffd_flags_t flags,
@@ -7063,7 +7063,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */
 
-long hugetlb_change_protection(struct vm_area_struct *vma,
+long hugetlb_change_protection(struct mm_area *vma,
 		unsigned long address, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
 {
@@ -7213,7 +7213,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 /* Return true if reservation was successful, false otherwise.  */
 bool hugetlb_reserve_pages(struct inode *inode,
 					long from, long to,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					vm_flags_t vm_flags)
 {
 	long chg = -1, add = -1;
@@ -7413,8 +7413,8 @@ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
 }
 
 #ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
-static unsigned long page_table_shareable(struct vm_area_struct *svma,
-				struct vm_area_struct *vma,
+static unsigned long page_table_shareable(struct mm_area *svma,
+				struct mm_area *vma,
 				unsigned long addr, pgoff_t idx)
 {
 	unsigned long saddr = ((idx - svma->vm_pgoff) << PAGE_SHIFT) +
@@ -7441,7 +7441,7 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
 	return saddr;
 }
 
-bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
+bool want_pmd_share(struct mm_area *vma, unsigned long addr)
 {
 	unsigned long start = addr & PUD_MASK;
 	unsigned long end = start + PUD_SIZE;
@@ -7467,7 +7467,7 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
  * If yes, adjust start and end to cover range associated with possible
  * shared pmd mappings.
  */
-void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+void adjust_range_if_pmd_sharing_possible(struct mm_area *vma,
 				unsigned long *start, unsigned long *end)
 {
 	unsigned long v_start = ALIGN(vma->vm_start, PUD_SIZE),
@@ -7498,13 +7498,13 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
  * racing tasks could either miss the sharing (see huge_pte_offset) or select a
  * bad pmd for sharing.
  */
-pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pmd_share(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, pud_t *pud)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
 			vma->vm_pgoff;
-	struct vm_area_struct *svma;
+	struct mm_area *svma;
 	unsigned long saddr;
 	pte_t *spte = NULL;
 	pte_t *pte;
@@ -7551,7 +7551,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
  * returns: 1 successfully unmapped a shared pte page
  *	    0 the underlying pte page is not shared, or it is the last user
  */
-int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+int huge_pmd_unshare(struct mm_struct *mm, struct mm_area *vma,
 					unsigned long addr, pte_t *ptep)
 {
 	unsigned long sz = huge_page_size(hstate_vma(vma));
@@ -7574,31 +7574,31 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 
 #else /* !CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */
 
-pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pmd_share(struct mm_struct *mm, struct mm_area *vma,
 		      unsigned long addr, pud_t *pud)
 {
 	return NULL;
 }
 
-int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+int huge_pmd_unshare(struct mm_struct *mm, struct mm_area *vma,
 				unsigned long addr, pte_t *ptep)
 {
 	return 0;
 }
 
-void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+void adjust_range_if_pmd_sharing_possible(struct mm_area *vma,
 				unsigned long *start, unsigned long *end)
 {
 }
 
-bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
+bool want_pmd_share(struct mm_area *vma, unsigned long addr)
 {
 	return false;
 }
 #endif /* CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */
 
 #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
-pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
+pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
 			unsigned long addr, unsigned long sz)
 {
 	pgd_t *pgd;
@@ -7837,7 +7837,7 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
 	spin_unlock_irq(&hugetlb_lock);
 }
 
-static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+static void hugetlb_unshare_pmds(struct mm_area *vma,
 				   unsigned long start,
 				   unsigned long end)
 {
@@ -7887,7 +7887,7 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
  * This function will unconditionally remove all the shared pmd pgtable entries
  * within the specific vma for a hugetlbfs memory range.
  */
-void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+void hugetlb_unshare_all_pmds(struct mm_area *vma)
 {
 	hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
 			ALIGN_DOWN(vma->vm_end, PUD_SIZE));
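
An aside on the arithmetic visible in the hugetlb hunks above:
vma_hugecache_offset() converts a faulting address into a huge-page
index within the backing file.  The same computation can be checked in
a few lines of userspace C; this is a minimal sketch assuming 2 MiB
huge pages over 4 KiB base pages, with all names local to the example:

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_SHIFT	21			/* 2 MiB huge pages */
#define HPAGE_ORDER	(HPAGE_SHIFT - PAGE_SHIFT)

/* Mirrors vma_hugecache_offset(): distance into the vma in huge-page
 * units, plus the vma's file offset converted from base pages to huge
 * pages. */
static unsigned long hugecache_offset(unsigned long vm_start,
				      unsigned long vm_pgoff,
				      unsigned long address)
{
	return ((address - vm_start) >> HPAGE_SHIFT) +
	       (vm_pgoff >> HPAGE_ORDER);
}

int main(void)
{
	unsigned long vm_start = 0x40000000UL;
	unsigned long vm_pgoff = 512;	/* one huge page into the file */

	/* Two huge pages into the mapping is huge page 3 of the file:
	 * 2 (within the vma) + 1 (the vma's own file offset). */
	assert(hugecache_offset(vm_start, vm_pgoff,
				vm_start + (2UL << HPAGE_SHIFT)) == 3);
	printf("ok\n");
	return 0;
}
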
diff --git a/mm/internal.h b/mm/internal.h
index 50c2f590b2d0..b2d2c52dfbd9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -44,8 +44,8 @@ struct folio_batch;
  * represents the length of the range being copied as specified by the user.
  */
 struct pagetable_move_control {
-	struct vm_area_struct *old; /* Source VMA. */
-	struct vm_area_struct *new; /* Destination VMA. */
+	struct mm_area *old; /* Source VMA. */
+	struct mm_area *new; /* Destination VMA. */
 	unsigned long old_addr; /* Address from which the move begins. */
 	unsigned long old_end; /* Exclusive address at which old range ends. */
 	unsigned long new_addr; /* Address to move page tables to. */
@@ -162,7 +162,7 @@ static inline void *folio_raw_mapping(const struct folio *folio)
  *
  * Returns: 0 if success, error otherwise.
  */
-static inline int mmap_file(struct file *file, struct vm_area_struct *vma)
+static inline int mmap_file(struct file *file, struct mm_area *vma)
 {
 	int err = call_mmap(file, vma);
 
@@ -184,7 +184,7 @@ static inline int mmap_file(struct file *file, struct vm_area_struct *vma)
  * it in an inconsistent state which makes the use of any hooks suspect, clear
  * them down by installing dummy empty hooks.
  */
-static inline void vma_close(struct vm_area_struct *vma)
+static inline void vma_close(struct mm_area *vma)
 {
 	if (vma->vm_ops && vma->vm_ops->close) {
 		vma->vm_ops->close(vma);
@@ -426,13 +426,13 @@ void deactivate_file_folio(struct folio *folio);
 void folio_activate(struct folio *folio);
 
 void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
-		   struct vm_area_struct *start_vma, unsigned long floor,
+		   struct mm_area *start_vma, unsigned long floor,
 		   unsigned long ceiling, bool mm_wr_locked);
 void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
 
 struct zap_details;
 void unmap_page_range(struct mmu_gather *tlb,
-			     struct vm_area_struct *vma,
+			     struct mm_area *vma,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 int folio_unmap_invalidate(struct address_space *mapping, struct folio *folio,
@@ -927,7 +927,7 @@ struct anon_vma *folio_anon_vma(const struct folio *folio);
 
 #ifdef CONFIG_MMU
 void unmap_mapping_folio(struct folio *folio);
-extern long populate_vma_page_range(struct vm_area_struct *vma,
+extern long populate_vma_page_range(struct mm_area *vma,
 		unsigned long start, unsigned long end, int *locked);
 extern long faultin_page_range(struct mm_struct *mm, unsigned long start,
 		unsigned long end, bool write, int *locked);
@@ -950,7 +950,7 @@ extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
  * the page table to know whether the folio is fully mapped to the range.
  */
 static inline bool
-folio_within_range(struct folio *folio, struct vm_area_struct *vma,
+folio_within_range(struct folio *folio, struct mm_area *vma,
 		unsigned long start, unsigned long end)
 {
 	pgoff_t pgoff, addr;
@@ -978,7 +978,7 @@ folio_within_range(struct folio *folio, struct vm_area_struct *vma,
 }
 
 static inline bool
-folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
+folio_within_vma(struct folio *folio, struct mm_area *vma)
 {
 	return folio_within_range(folio, vma, vma->vm_start, vma->vm_end);
 }
@@ -994,7 +994,7 @@ folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
  */
 void mlock_folio(struct folio *folio);
 static inline void mlock_vma_folio(struct folio *folio,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	/*
 	 * The VM_SPECIAL check here serves two purposes.
@@ -1010,7 +1010,7 @@ static inline void mlock_vma_folio(struct folio *folio,
 
 void munlock_folio(struct folio *folio);
 static inline void munlock_vma_folio(struct folio *folio,
-					struct vm_area_struct *vma)
+					struct mm_area *vma)
 {
 	/*
 	 * munlock if the function is called. Ideally, we should only
@@ -1030,7 +1030,7 @@ bool need_mlock_drain(int cpu);
 void mlock_drain_local(void);
 void mlock_drain_remote(int cpu);
 
-extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
+extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct mm_area *vma);
 
 /**
  * vma_address - Find the virtual address a page range is mapped at
@@ -1041,7 +1041,7 @@ extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
  * If any page in this range is mapped by this VMA, return the first address
  * where any of these pages appear.  Otherwise, return -EFAULT.
  */
-static inline unsigned long vma_address(const struct vm_area_struct *vma,
+static inline unsigned long vma_address(const struct mm_area *vma,
 		pgoff_t pgoff, unsigned long nr_pages)
 {
 	unsigned long address;
@@ -1067,7 +1067,7 @@ static inline unsigned long vma_address(const struct vm_area_struct *vma,
  */
 static inline unsigned long vma_address_end(struct page_vma_mapped_walk *pvmw)
 {
-	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_area *vma = pvmw->vma;
 	pgoff_t pgoff;
 	unsigned long address;
 
@@ -1210,10 +1210,10 @@ bool take_page_off_buddy(struct page *page);
 bool put_page_back_buddy(struct page *page);
 struct task_struct *task_early_kill(struct task_struct *tsk, int force_early);
 void add_to_kill_ksm(struct task_struct *tsk, const struct page *p,
-		     struct vm_area_struct *vma, struct list_head *to_kill,
+		     struct mm_area *vma, struct list_head *to_kill,
 		     unsigned long ksm_addr);
 unsigned long page_mapped_in_vma(const struct page *page,
-		struct vm_area_struct *vma);
+		struct mm_area *vma);
 
 #else
 static inline int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
@@ -1373,9 +1373,9 @@ int __must_check try_grab_folio(struct folio *folio, int refs,
 /*
  * mm/huge_memory.c
  */
-void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+void touch_pud(struct mm_area *vma, unsigned long addr,
 	       pud_t *pud, bool write);
-void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+void touch_pmd(struct mm_area *vma, unsigned long addr,
 	       pmd_t *pmd, bool write);
 
 /*
@@ -1441,7 +1441,7 @@ enum {
  * If the vma is NULL, we're coming from the GUP-fast path and might have
  * to fallback to the slow path just to lookup the vma.
  */
-static inline bool gup_must_unshare(struct vm_area_struct *vma,
+static inline bool gup_must_unshare(struct mm_area *vma,
 				    unsigned int flags, struct page *page)
 {
 	/*
@@ -1490,7 +1490,7 @@ extern bool mirrored_kernelcore;
 bool memblock_has_mirror(void);
 void memblock_free_all(void);
 
-static __always_inline void vma_set_range(struct vm_area_struct *vma,
+static __always_inline void vma_set_range(struct mm_area *vma,
 					  unsigned long start, unsigned long end,
 					  pgoff_t pgoff)
 {
@@ -1499,7 +1499,7 @@ static __always_inline void vma_set_range(struct vm_area_struct *vma,
 	vma->vm_pgoff = pgoff;
 }
 
-static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
+static inline bool vma_soft_dirty_enabled(struct mm_area *vma)
 {
 	/*
 	 * NOTE: we must check this before VM_SOFTDIRTY on soft-dirty
@@ -1517,12 +1517,12 @@ static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
 	return !(vma->vm_flags & VM_SOFTDIRTY);
 }
 
-static inline bool pmd_needs_soft_dirty_wp(struct vm_area_struct *vma, pmd_t pmd)
+static inline bool pmd_needs_soft_dirty_wp(struct mm_area *vma, pmd_t pmd)
 {
 	return vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd);
 }
 
-static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte)
+static inline bool pte_needs_soft_dirty_wp(struct mm_area *vma, pte_t pte)
 {
 	return vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte);
 }
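
The vma_address() kernel-doc in the mm/internal.h hunks above describes
the inverse mapping, from a file page offset back to the virtual
address where a given area maps it.  A userspace model of that logic,
offered as a sketch since only part of the body appears in the hunk;
"struct area" and area_address() are stand-ins, not kernel API:

#include <stdio.h>

#define PAGE_SHIFT	12
#define EFAULT		14

struct area {
	unsigned long vm_start, vm_end;	/* [vm_start, vm_end) in bytes */
	unsigned long vm_pgoff;		/* file page backing vm_start */
};

/* First virtual address at which the area maps any page of
 * [pgoff, pgoff + nr_pages), or -EFAULT if it maps none of them. */
static unsigned long area_address(const struct area *a,
				  unsigned long pgoff, unsigned long nr_pages)
{
	if (pgoff >= a->vm_pgoff) {
		unsigned long address = a->vm_start +
			((pgoff - a->vm_pgoff) << PAGE_SHIFT);

		return address < a->vm_end ? address : (unsigned long)-EFAULT;
	}
	/* A multi-page range starting before the area may still reach it. */
	if (pgoff + nr_pages > a->vm_pgoff)
		return a->vm_start;
	return (unsigned long)-EFAULT;
}

int main(void)
{
	struct area a = { 0x7f0000000000UL, 0x7f0000010000UL, 16 };

	/* File page 20 is 4 pages past vm_pgoff, hence 4 pages past
	 * vm_start: prints 0x7f0000004000. */
	printf("%#lx\n", area_address(&a, 20, 1));
	return 0;
}
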
diff --git a/mm/interval_tree.c b/mm/interval_tree.c
index 32e390c42c53..864e9d3c733a 100644
--- a/mm/interval_tree.c
+++ b/mm/interval_tree.c
@@ -10,27 +10,27 @@
 #include <linux/rmap.h>
 #include <linux/interval_tree_generic.h>
 
-static inline unsigned long vma_start_pgoff(struct vm_area_struct *v)
+static inline unsigned long vma_start_pgoff(struct mm_area *v)
 {
 	return v->vm_pgoff;
 }
 
-static inline unsigned long vma_last_pgoff(struct vm_area_struct *v)
+static inline unsigned long vma_last_pgoff(struct mm_area *v)
 {
 	return v->vm_pgoff + vma_pages(v) - 1;
 }
 
-INTERVAL_TREE_DEFINE(struct vm_area_struct, shared.rb,
+INTERVAL_TREE_DEFINE(struct mm_area, shared.rb,
 		     unsigned long, shared.rb_subtree_last,
 		     vma_start_pgoff, vma_last_pgoff, /* empty */, vma_interval_tree)
 
 /* Insert node immediately after prev in the interval tree */
-void vma_interval_tree_insert_after(struct vm_area_struct *node,
-				    struct vm_area_struct *prev,
+void vma_interval_tree_insert_after(struct mm_area *node,
+				    struct mm_area *prev,
 				    struct rb_root_cached *root)
 {
 	struct rb_node **link;
-	struct vm_area_struct *parent;
+	struct mm_area *parent;
 	unsigned long last = vma_last_pgoff(node);
 
 	VM_BUG_ON_VMA(vma_start_pgoff(node) != vma_start_pgoff(prev), node);
@@ -40,12 +40,12 @@ void vma_interval_tree_insert_after(struct vm_area_struct *node,
 		link = &prev->shared.rb.rb_right;
 	} else {
 		parent = rb_entry(prev->shared.rb.rb_right,
-				  struct vm_area_struct, shared.rb);
+				  struct mm_area, shared.rb);
 		if (parent->shared.rb_subtree_last < last)
 			parent->shared.rb_subtree_last = last;
 		while (parent->shared.rb.rb_left) {
 			parent = rb_entry(parent->shared.rb.rb_left,
-				struct vm_area_struct, shared.rb);
+				struct mm_area, shared.rb);
 			if (parent->shared.rb_subtree_last < last)
 				parent->shared.rb_subtree_last = last;
 		}
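
Worth noting for the interval-tree hunk above: vma_last_pgoff()
subtracts one because INTERVAL_TREE_DEFINE() works on closed intervals,
so an area's key is [vm_pgoff, vm_pgoff + vma_pages() - 1] in file-page
units.  A small sketch of the resulting overlap rule, with names local
to the example:

#include <stdbool.h>
#include <stdio.h>

struct interval { unsigned long start, last; };	/* closed interval */

static struct interval make_key(unsigned long vm_pgoff, unsigned long pages)
{
	return (struct interval){ vm_pgoff, vm_pgoff + pages - 1 };
}

/* Two closed intervals overlap iff each starts no later than the other
 * ends; this is effectively the test the generated lookups apply. */
static bool overlaps(struct interval key, unsigned long first,
		     unsigned long last)
{
	return key.start <= last && key.last >= first;
}

int main(void)
{
	struct interval v = make_key(100, 10);	/* file pages 100..109 */

	printf("%d %d %d\n",
	       overlaps(v, 90, 99),	/* 0: ends one page short  */
	       overlaps(v, 109, 120),	/* 1: shares page 109      */
	       overlaps(v, 110, 120));	/* 0: starts one page late */
	return 0;
}
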
diff --git a/mm/io-mapping.c b/mm/io-mapping.c
index 01b362799930..588ecb8ea446 100644
--- a/mm/io-mapping.c
+++ b/mm/io-mapping.c
@@ -13,7 +13,7 @@
  *
  *  Note: this is only safe if the mm semaphore is held when called.
  */
-int io_mapping_map_user(struct io_mapping *iomap, struct vm_area_struct *vma,
+int io_mapping_map_user(struct io_mapping *iomap, struct mm_area *vma,
 		unsigned long addr, unsigned long pfn, unsigned long size)
 {
 	vm_flags_t expected_flags = VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc945c6ab3bd..e135208612f1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -346,7 +346,7 @@ struct attribute_group khugepaged_attr_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-int hugepage_madvise(struct vm_area_struct *vma,
+int hugepage_madvise(struct mm_area *vma,
 		     unsigned long *vm_flags, int advice)
 {
 	switch (advice) {
@@ -469,7 +469,7 @@ void __khugepaged_enter(struct mm_struct *mm)
 		wake_up_interruptible(&khugepaged_wait);
 }
 
-void khugepaged_enter_vma(struct vm_area_struct *vma,
+void khugepaged_enter_vma(struct mm_area *vma,
 			  unsigned long vm_flags)
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
@@ -561,7 +561,7 @@ static bool is_refcount_suitable(struct folio *folio)
 	return folio_ref_count(folio) == expected_refcount;
 }
 
-static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
+static int __collapse_huge_page_isolate(struct mm_area *vma,
 					unsigned long address,
 					pte_t *pte,
 					struct collapse_control *cc,
@@ -708,7 +708,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 }
 
 static void __collapse_huge_page_copy_succeeded(pte_t *pte,
-						struct vm_area_struct *vma,
+						struct mm_area *vma,
 						unsigned long address,
 						spinlock_t *ptl,
 						struct list_head *compound_pagelist)
@@ -763,7 +763,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 static void __collapse_huge_page_copy_failed(pte_t *pte,
 					     pmd_t *pmd,
 					     pmd_t orig_pmd,
-					     struct vm_area_struct *vma,
+					     struct mm_area *vma,
 					     struct list_head *compound_pagelist)
 {
 	spinlock_t *pmd_ptl;
@@ -800,7 +800,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
  * @compound_pagelist: list that stores compound pages
  */
 static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
-		pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
+		pmd_t *pmd, pmd_t orig_pmd, struct mm_area *vma,
 		unsigned long address, spinlock_t *ptl,
 		struct list_head *compound_pagelist)
 {
@@ -919,10 +919,10 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
 
 static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
 				   bool expect_anon,
-				   struct vm_area_struct **vmap,
+				   struct mm_area **vmap,
 				   struct collapse_control *cc)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
 
 	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
@@ -998,7 +998,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
  * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
  */
 static int __collapse_huge_page_swapin(struct mm_struct *mm,
-				       struct vm_area_struct *vma,
+				       struct mm_area *vma,
 				       unsigned long haddr, pmd_t *pmd,
 				       int referenced)
 {
@@ -1112,7 +1112,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 	struct folio *folio;
 	spinlock_t *pmd_ptl, *pte_ptl;
 	int result = SCAN_FAIL;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mmu_notifier_range range;
 
 	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
@@ -1265,7 +1265,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 }
 
 static int hpage_collapse_scan_pmd(struct mm_struct *mm,
-				   struct vm_area_struct *vma,
+				   struct mm_area *vma,
 				   unsigned long address, bool *mmap_locked,
 				   struct collapse_control *cc)
 {
@@ -1466,7 +1466,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
 
 #ifdef CONFIG_SHMEM
 /* hpage must be locked, and mmap_lock must be held */
-static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
+static int set_huge_pmd(struct mm_area *vma, unsigned long addr,
 			pmd_t *pmdp, struct page *hpage)
 {
 	struct vm_fault vmf = {
@@ -1504,7 +1504,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 	struct mmu_notifier_range range;
 	bool notified = false;
 	unsigned long haddr = addr & HPAGE_PMD_MASK;
-	struct vm_area_struct *vma = vma_lookup(mm, haddr);
+	struct mm_area *vma = vma_lookup(mm, haddr);
 	struct folio *folio;
 	pte_t *start_pte, *pte;
 	pmd_t *pmd, pgt_pmd;
@@ -1713,7 +1713,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 
 static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	i_mmap_lock_read(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
@@ -2114,7 +2114,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	}
 
 	if (nr_none) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		int nr_none_check = 0;
 
 		i_mmap_lock_read(mapping);
@@ -2372,7 +2372,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 	struct khugepaged_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	struct mm_struct *mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int progress = 0;
 
 	VM_BUG_ON(!pages);
@@ -2736,7 +2736,7 @@ static int madvise_collapse_errno(enum scan_result r)
 	}
 }
 
-int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
+int madvise_collapse(struct mm_area *vma, struct mm_area **prev,
 		     unsigned long start, unsigned long end)
 {
 	struct collapse_control *cc;
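
The khugepaged hunks above repeatedly mask addresses down to PMD-sized
blocks ("haddr = addr & HPAGE_PMD_MASK"), and hugetlb_vm_op_split()
earlier rejects unaligned split points with the complementary mask.
A compact sketch of both idioms, assuming 2 MiB blocks; the HPAGE_PMD_*
macros here are local stand-ins for the kernel's:

#include <assert.h>
#include <stdio.h>

#define HPAGE_PMD_SHIFT	21
#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long addr = 0x7f1234567000UL;

	/* Round down to the containing 2 MiB block. */
	unsigned long haddr = addr & HPAGE_PMD_MASK;

	/* "addr & ~mask" isolates the offset within the block; nonzero
	 * means the address is not block-aligned. */
	assert(haddr % HPAGE_PMD_SIZE == 0);
	assert((addr & ~HPAGE_PMD_MASK) == 0x167000UL);

	printf("haddr=%#lx\n", haddr);	/* 0x7f1234400000 */
	return 0;
}
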
diff --git a/mm/ksm.c b/mm/ksm.c
index 8583fb91ef13..0370e8d4ab02 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -620,7 +620,7 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
  * of the process that owns 'vma'.  We also do not want to enforce
  * protection keys here anyway.
  */
-static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
+static int break_ksm(struct mm_area *vma, unsigned long addr, bool lock_vma)
 {
 	vm_fault_t ret = 0;
 
@@ -677,7 +677,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
-static bool vma_ksm_compatible(struct vm_area_struct *vma)
+static bool vma_ksm_compatible(struct mm_area *vma)
 {
 	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
 			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
@@ -699,10 +699,10 @@ static bool vma_ksm_compatible(struct vm_area_struct *vma)
 	return true;
 }
 
-static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
+static struct mm_area *find_mergeable_vma(struct mm_struct *mm,
 		unsigned long addr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	if (ksm_test_exit(mm))
 		return NULL;
 	vma = vma_lookup(mm, addr);
@@ -715,7 +715,7 @@ static void break_cow(struct ksm_rmap_item *rmap_item)
 {
 	struct mm_struct *mm = rmap_item->mm;
 	unsigned long addr = rmap_item->address;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * It is not an accident that whenever we want to break COW
@@ -734,7 +734,7 @@ static struct page *get_mergeable_page(struct ksm_rmap_item *rmap_item)
 {
 	struct mm_struct *mm = rmap_item->mm;
 	unsigned long addr = rmap_item->address;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct page *page = NULL;
 	struct folio_walk fw;
 	struct folio *folio;
@@ -1034,7 +1034,7 @@ static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list)
  * to the next pass of ksmd - consider, for example, how ksmd might be
  * in cmp_and_merge_page on one of the rmap_items we would be removing.
  */
-static int unmerge_ksm_pages(struct vm_area_struct *vma,
+static int unmerge_ksm_pages(struct mm_area *vma,
 			     unsigned long start, unsigned long end, bool lock_vma)
 {
 	unsigned long addr;
@@ -1167,7 +1167,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
 	struct mm_struct *mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int err = 0;
 
 	spin_lock(&ksm_mmlist_lock);
@@ -1243,7 +1243,7 @@ static u32 calc_checksum(struct page *page)
 	return checksum;
 }
 
-static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
+static int write_protect_page(struct mm_area *vma, struct folio *folio,
 			      pte_t *orig_pte)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -1343,7 +1343,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
  *
  * Returns 0 on success, -EFAULT on failure.
  */
-static int replace_page(struct vm_area_struct *vma, struct page *page,
+static int replace_page(struct mm_area *vma, struct page *page,
 			struct page *kpage, pte_t orig_pte)
 {
 	struct folio *kfolio = page_folio(kpage);
@@ -1446,7 +1446,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
  *
  * This function returns 0 if the pages were merged, -EFAULT otherwise.
  */
-static int try_to_merge_one_page(struct vm_area_struct *vma,
+static int try_to_merge_one_page(struct mm_area *vma,
 				 struct page *page, struct page *kpage)
 {
 	struct folio *folio = page_folio(page);
@@ -1521,7 +1521,7 @@ static int try_to_merge_with_zero_page(struct ksm_rmap_item *rmap_item,
 	 * appropriate zero page if the user enabled this via sysfs.
 	 */
 	if (ksm_use_zero_pages && (rmap_item->oldchecksum == zero_checksum)) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		mmap_read_lock(mm);
 		vma = find_mergeable_vma(mm, rmap_item->address);
@@ -1554,7 +1554,7 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
 				      struct page *page, struct page *kpage)
 {
 	struct mm_struct *mm = rmap_item->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int err = -EFAULT;
 
 	mmap_read_lock(mm);
@@ -2459,7 +2459,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 	struct mm_struct *mm;
 	struct ksm_mm_slot *mm_slot;
 	struct mm_slot *slot;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct ksm_rmap_item *rmap_item;
 	struct vma_iterator vmi;
 	int nid;
@@ -2696,7 +2696,7 @@ static int ksm_scan_thread(void *nothing)
 	return 0;
 }
 
-static void __ksm_add_vma(struct vm_area_struct *vma)
+static void __ksm_add_vma(struct mm_area *vma)
 {
 	unsigned long vm_flags = vma->vm_flags;
 
@@ -2707,7 +2707,7 @@ static void __ksm_add_vma(struct vm_area_struct *vma)
 		vm_flags_set(vma, VM_MERGEABLE);
 }
 
-static int __ksm_del_vma(struct vm_area_struct *vma)
+static int __ksm_del_vma(struct mm_area *vma)
 {
 	int err;
 
@@ -2728,7 +2728,7 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
  *
  * @vma:  Pointer to vma
  */
-void ksm_add_vma(struct vm_area_struct *vma)
+void ksm_add_vma(struct mm_area *vma)
 {
 	struct mm_struct *mm = vma->vm_mm;
 
@@ -2738,7 +2738,7 @@ void ksm_add_vma(struct vm_area_struct *vma)
 
 static void ksm_add_vmas(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	VMA_ITERATOR(vmi, mm, 0);
 	for_each_vma(vmi, vma)
@@ -2747,7 +2747,7 @@ static void ksm_add_vmas(struct mm_struct *mm)
 
 static int ksm_del_vmas(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int err;
 
 	VMA_ITERATOR(vmi, mm, 0);
@@ -2826,7 +2826,7 @@ int ksm_disable(struct mm_struct *mm)
 	return ksm_del_vmas(mm);
 }
 
-int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+int ksm_madvise(struct mm_area *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -2953,7 +2953,7 @@ void __ksm_exit(struct mm_struct *mm)
 }
 
 struct folio *ksm_might_need_to_copy(struct folio *folio,
-			struct vm_area_struct *vma, unsigned long addr)
+			struct mm_area *vma, unsigned long addr)
 {
 	struct page *page = folio_page(folio, 0);
 	struct anon_vma *anon_vma = folio_anon_vma(folio);
@@ -3021,7 +3021,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
 		struct anon_vma *anon_vma = rmap_item->anon_vma;
 		struct anon_vma_chain *vmac;
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		cond_resched();
 		if (!anon_vma_trylock_read(anon_vma)) {
@@ -3079,7 +3079,7 @@ void collect_procs_ksm(const struct folio *folio, const struct page *page,
 {
 	struct ksm_stable_node *stable_node;
 	struct ksm_rmap_item *rmap_item;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk;
 
 	stable_node = folio_stable_node(folio);
@@ -3277,7 +3277,7 @@ static void wait_while_offlining(void)
  */
 bool ksm_process_mergeable(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mmap_assert_locked(mm);
 	VMA_ITERATOR(vmi, mm, 0);
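
One detail of the ksm.c hunks above: vma_ksm_compatible() is a pure
bitmask test, where a single unsuitable flag disqualifies the whole
area.  A trimmed sketch with made-up flag values (the real VM_*
constants and the full exclusion list are in the hunk and the kernel
headers):

#include <stdbool.h>
#include <stdio.h>

#define VM_SHARED	(1UL << 0)
#define VM_PFNMAP	(1UL << 1)
#define VM_HUGETLB	(1UL << 2)
#define VM_MERGEABLE	(1UL << 3)

static bool ksm_compatible(unsigned long vm_flags)
{
	/* Any overlap with the exclusion mask means "not compatible". */
	return !(vm_flags & (VM_SHARED | VM_PFNMAP | VM_HUGETLB));
}

int main(void)
{
	printf("%d %d\n",
	       ksm_compatible(VM_MERGEABLE),		  /* 1 */
	       ksm_compatible(VM_SHARED | VM_MERGEABLE)); /* 0 */
	return 0;
}
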
diff --git a/mm/madvise.c b/mm/madvise.c
index b17f684322ad..8e401df400b1 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -99,7 +99,7 @@ void anon_vma_name_free(struct kref *kref)
 	kfree(anon_name);
 }
 
-struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
+struct anon_vma_name *anon_vma_name(struct mm_area *vma)
 {
 	mmap_assert_locked(vma->vm_mm);
 
@@ -107,7 +107,7 @@ struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
 }
 
 /* mmap_lock should be write-locked */
-static int replace_anon_vma_name(struct vm_area_struct *vma,
+static int replace_anon_vma_name(struct mm_area *vma,
 				 struct anon_vma_name *anon_name)
 {
 	struct anon_vma_name *orig_name = anon_vma_name(vma);
@@ -127,7 +127,7 @@ static int replace_anon_vma_name(struct vm_area_struct *vma,
 	return 0;
 }
 #else /* CONFIG_ANON_VMA_NAME */
-static int replace_anon_vma_name(struct vm_area_struct *vma,
+static int replace_anon_vma_name(struct mm_area *vma,
 				 struct anon_vma_name *anon_name)
 {
 	if (anon_name)
@@ -142,8 +142,8 @@ static int replace_anon_vma_name(struct vm_area_struct *vma,
  * Caller should ensure anon_name stability by raising its refcount even when
  * anon_name belongs to a valid vma because this function might free that vma.
  */
-static int madvise_update_vma(struct vm_area_struct *vma,
-			      struct vm_area_struct **prev, unsigned long start,
+static int madvise_update_vma(struct mm_area *vma,
+			      struct mm_area **prev, unsigned long start,
 			      unsigned long end, unsigned long new_flags,
 			      struct anon_vma_name *anon_name)
 {
@@ -179,7 +179,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
 static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		unsigned long end, struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->private;
+	struct mm_area *vma = walk->private;
 	struct swap_iocb *splug = NULL;
 	pte_t *ptep = NULL;
 	spinlock_t *ptl;
@@ -225,7 +225,7 @@ static const struct mm_walk_ops swapin_walk_ops = {
 	.walk_lock		= PGWALK_RDLOCK,
 };
 
-static void shmem_swapin_range(struct vm_area_struct *vma,
+static void shmem_swapin_range(struct mm_area *vma,
 		unsigned long start, unsigned long end,
 		struct address_space *mapping)
 {
@@ -266,8 +266,8 @@ static void shmem_swapin_range(struct vm_area_struct *vma,
 /*
  * Schedule all required I/O operations.  Do not wait for completion.
  */
-static long madvise_willneed(struct vm_area_struct *vma,
-			     struct vm_area_struct **prev,
+static long madvise_willneed(struct mm_area *vma,
+			     struct mm_area **prev,
 			     unsigned long start, unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -314,7 +314,7 @@ static long madvise_willneed(struct vm_area_struct *vma,
 	return 0;
 }
 
-static inline bool can_do_file_pageout(struct vm_area_struct *vma)
+static inline bool can_do_file_pageout(struct mm_area *vma)
 {
 	if (!vma->vm_file)
 		return false;
@@ -349,7 +349,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	struct mmu_gather *tlb = private->tlb;
 	bool pageout = private->pageout;
 	struct mm_struct *mm = tlb->mm;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	pte_t *start_pte, *pte, ptent;
 	spinlock_t *ptl;
 	struct folio *folio = NULL;
@@ -567,7 +567,7 @@ static const struct mm_walk_ops cold_walk_ops = {
 };
 
 static void madvise_cold_page_range(struct mmu_gather *tlb,
-			     struct vm_area_struct *vma,
+			     struct mm_area *vma,
 			     unsigned long addr, unsigned long end)
 {
 	struct madvise_walk_private walk_private = {
@@ -580,13 +580,13 @@ static void madvise_cold_page_range(struct mmu_gather *tlb,
 	tlb_end_vma(tlb, vma);
 }
 
-static inline bool can_madv_lru_vma(struct vm_area_struct *vma)
+static inline bool can_madv_lru_vma(struct mm_area *vma)
 {
 	return !(vma->vm_flags & (VM_LOCKED|VM_PFNMAP|VM_HUGETLB));
 }
 
-static long madvise_cold(struct vm_area_struct *vma,
-			struct vm_area_struct **prev,
+static long madvise_cold(struct mm_area *vma,
+			struct mm_area **prev,
 			unsigned long start_addr, unsigned long end_addr)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -605,7 +605,7 @@ static long madvise_cold(struct vm_area_struct *vma,
 }
 
 static void madvise_pageout_page_range(struct mmu_gather *tlb,
-			     struct vm_area_struct *vma,
+			     struct mm_area *vma,
 			     unsigned long addr, unsigned long end)
 {
 	struct madvise_walk_private walk_private = {
@@ -618,8 +618,8 @@ static void madvise_pageout_page_range(struct mmu_gather *tlb,
 	tlb_end_vma(tlb, vma);
 }
 
-static long madvise_pageout(struct vm_area_struct *vma,
-			struct vm_area_struct **prev,
+static long madvise_pageout(struct mm_area *vma,
+			struct mm_area **prev,
 			unsigned long start_addr, unsigned long end_addr)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -654,7 +654,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	const cydp_t cydp_flags = CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY;
 	struct mmu_gather *tlb = walk->private;
 	struct mm_struct *mm = tlb->mm;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *start_pte, *pte, ptent;
 	struct folio *folio;
@@ -794,7 +794,7 @@ static const struct mm_walk_ops madvise_free_walk_ops = {
 	.walk_lock		= PGWALK_RDLOCK,
 };
 
-static int madvise_free_single_vma(struct vm_area_struct *vma,
+static int madvise_free_single_vma(struct mm_area *vma,
 			unsigned long start_addr, unsigned long end_addr)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -848,7 +848,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
  * An interface that causes the system to free clean pages and flush
  * dirty pages is already available as msync(MS_INVALIDATE).
  */
-static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
+static long madvise_dontneed_single_vma(struct mm_area *vma,
 					unsigned long start, unsigned long end)
 {
 	struct zap_details details = {
@@ -860,7 +860,7 @@ static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 	return 0;
 }
 
-static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
+static bool madvise_dontneed_free_valid_vma(struct mm_area *vma,
 					    unsigned long start,
 					    unsigned long *end,
 					    int behavior)
@@ -890,8 +890,8 @@ static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
 	return true;
 }
 
-static long madvise_dontneed_free(struct vm_area_struct *vma,
-				  struct vm_area_struct **prev,
+static long madvise_dontneed_free(struct mm_area *vma,
+				  struct mm_area **prev,
 				  unsigned long start, unsigned long end,
 				  int behavior)
 {
@@ -994,8 +994,8 @@ static long madvise_populate(struct mm_struct *mm, unsigned long start,
  * Application wants to free up the pages and associated backing store.
  * This is effectively punching a hole into the middle of a file.
  */
-static long madvise_remove(struct vm_area_struct *vma,
-				struct vm_area_struct **prev,
+static long madvise_remove(struct mm_area *vma,
+				struct mm_area **prev,
 				unsigned long start, unsigned long end)
 {
 	loff_t offset;
@@ -1039,7 +1039,7 @@ static long madvise_remove(struct vm_area_struct *vma,
 	return error;
 }
 
-static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked)
+static bool is_valid_guard_vma(struct mm_area *vma, bool allow_locked)
 {
 	vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
 
@@ -1115,8 +1115,8 @@ static const struct mm_walk_ops guard_install_walk_ops = {
 	.walk_lock		= PGWALK_RDLOCK,
 };
 
-static long madvise_guard_install(struct vm_area_struct *vma,
-				 struct vm_area_struct **prev,
+static long madvise_guard_install(struct mm_area *vma,
+				 struct mm_area **prev,
 				 unsigned long start, unsigned long end)
 {
 	long err;
@@ -1225,8 +1225,8 @@ static const struct mm_walk_ops guard_remove_walk_ops = {
 	.walk_lock		= PGWALK_RDLOCK,
 };
 
-static long madvise_guard_remove(struct vm_area_struct *vma,
-				 struct vm_area_struct **prev,
+static long madvise_guard_remove(struct mm_area *vma,
+				 struct mm_area **prev,
 				 unsigned long start, unsigned long end)
 {
 	*prev = vma;
@@ -1246,8 +1246,8 @@ static long madvise_guard_remove(struct vm_area_struct *vma,
  * will handle splitting a vm area into separate areas, each area with its own
  * behavior.
  */
-static int madvise_vma_behavior(struct vm_area_struct *vma,
-				struct vm_area_struct **prev,
+static int madvise_vma_behavior(struct mm_area *vma,
+				struct mm_area **prev,
 				unsigned long start, unsigned long end,
 				unsigned long behavior)
 {
@@ -1488,12 +1488,12 @@ static bool process_madvise_remote_valid(int behavior)
 static
 int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
 		      unsigned long end, unsigned long arg,
-		      int (*visit)(struct vm_area_struct *vma,
-				   struct vm_area_struct **prev, unsigned long start,
+		      int (*visit)(struct mm_area *vma,
+				   struct mm_area **prev, unsigned long start,
 				   unsigned long end, unsigned long arg))
 {
-	struct vm_area_struct *vma;
-	struct vm_area_struct *prev;
+	struct mm_area *vma;
+	struct mm_area *prev;
 	unsigned long tmp;
 	int unmapped_error = 0;
 
@@ -1545,8 +1545,8 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
 }
 
 #ifdef CONFIG_ANON_VMA_NAME
-static int madvise_vma_anon_name(struct vm_area_struct *vma,
-				 struct vm_area_struct **prev,
+static int madvise_vma_anon_name(struct mm_area *vma,
+				 struct mm_area **prev,
 				 unsigned long start, unsigned long end,
 				 unsigned long anon_name)
 {
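
The madvise_walk_vmas() hunk above is the visitor pattern: the walk
clamps [start, end) to each overlapping area and hands the sub-range to
the supplied callback.  A userspace model over a sorted array, a sketch
only; the real walk also accumulates unmapped_error for holes, which
this version simply skips:

#include <stdio.h>

struct area { unsigned long start, end; };	/* [start, end) */

typedef int (*visit_fn)(const struct area *a,
			unsigned long start, unsigned long end);

static int walk_areas(const struct area *areas, int n,
		      unsigned long start, unsigned long end, visit_fn visit)
{
	for (int i = 0; i < n && start < end; i++) {
		unsigned long lo, hi;
		int err;

		if (areas[i].end <= start)
			continue;	/* area lies wholly before range */
		if (areas[i].start >= end)
			break;		/* walked past the range */
		lo = start > areas[i].start ? start : areas[i].start;
		hi = end < areas[i].end ? end : areas[i].end;
		err = visit(&areas[i], lo, hi);	/* clamped sub-range */
		if (err)
			return err;
		start = hi;	/* resume after what was just visited */
	}
	return 0;
}

static int print_visit(const struct area *a, unsigned long s, unsigned long e)
{
	printf("visit [%#lx, %#lx) of area [%#lx, %#lx)\n",
	       s, e, a->start, a->end);
	return 0;
}

int main(void)
{
	struct area areas[] = { { 0x1000, 0x4000 }, { 0x6000, 0x9000 } };

	/* The request spans a hole; each callback sees one clamped piece. */
	return walk_areas(areas, 2, 0x2000, 0x8000, print_visit);
}
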
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index b91a33fb6c69..8a194e377443 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -398,7 +398,7 @@ static void shake_page(struct page *page)
 	shake_folio(page_folio(page));
 }
 
-static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
+static unsigned long dev_pagemap_mapping_shift(struct mm_area *vma,
 		unsigned long address)
 {
 	unsigned long ret = 0;
@@ -446,7 +446,7 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
  * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
  */
 static void __add_to_kill(struct task_struct *tsk, const struct page *p,
-			  struct vm_area_struct *vma, struct list_head *to_kill,
+			  struct mm_area *vma, struct list_head *to_kill,
 			  unsigned long addr)
 {
 	struct to_kill *tk;
@@ -487,7 +487,7 @@ static void __add_to_kill(struct task_struct *tsk, const struct page *p,
 }
 
 static void add_to_kill_anon_file(struct task_struct *tsk, const struct page *p,
-		struct vm_area_struct *vma, struct list_head *to_kill,
+		struct mm_area *vma, struct list_head *to_kill,
 		unsigned long addr)
 {
 	if (addr == -EFAULT)
@@ -510,7 +510,7 @@ static bool task_in_to_kill_list(struct list_head *to_kill,
 }
 
 void add_to_kill_ksm(struct task_struct *tsk, const struct page *p,
-		     struct vm_area_struct *vma, struct list_head *to_kill,
+		     struct mm_area *vma, struct list_head *to_kill,
 		     unsigned long addr)
 {
 	if (!task_in_to_kill_list(to_kill, tsk))
@@ -621,7 +621,7 @@ static void collect_procs_anon(const struct folio *folio,
 	pgoff = page_pgoff(folio, page);
 	rcu_read_lock();
 	for_each_process(tsk) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		struct anon_vma_chain *vmac;
 		struct task_struct *t = task_early_kill(tsk, force_early);
 		unsigned long addr;
@@ -648,7 +648,7 @@ static void collect_procs_file(const struct folio *folio,
 		const struct page *page, struct list_head *to_kill,
 		int force_early)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk;
 	struct address_space *mapping = folio->mapping;
 	pgoff_t pgoff;
@@ -683,7 +683,7 @@ static void collect_procs_file(const struct folio *folio,
 
 #ifdef CONFIG_FS_DAX
 static void add_to_kill_fsdax(struct task_struct *tsk, const struct page *p,
-			      struct vm_area_struct *vma,
+			      struct mm_area *vma,
 			      struct list_head *to_kill, pgoff_t pgoff)
 {
 	unsigned long addr = vma_address(vma, pgoff, 1);
@@ -697,7 +697,7 @@ static void collect_procs_fsdax(const struct page *page,
 		struct address_space *mapping, pgoff_t pgoff,
 		struct list_head *to_kill, bool pre_remove)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct task_struct *tsk;
 
 	i_mmap_lock_read(mapping);
diff --git a/mm/memory.c b/mm/memory.c
index 9d0ba6fe73c1..854615d98d2b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -344,14 +344,14 @@ void free_pgd_range(struct mmu_gather *tlb,
 }
 
 void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
-		   struct vm_area_struct *vma, unsigned long floor,
+		   struct mm_area *vma, unsigned long floor,
 		   unsigned long ceiling, bool mm_wr_locked)
 {
 	struct unlink_vma_file_batch vb;
 
 	do {
 		unsigned long addr = vma->vm_start;
-		struct vm_area_struct *next;
+		struct mm_area *next;
 
 		/*
 		 * Note: USER_PGTABLES_CEILING may be passed as ceiling and may
@@ -476,7 +476,7 @@ static inline void add_mm_rss_vec(struct mm_struct *mm, int *rss)
  *
  * The calling function must still handle the error.
  */
-static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
+static void print_bad_pte(struct mm_area *vma, unsigned long addr,
 			  pte_t pte, struct page *page)
 {
 	pgd_t *pgd = pgd_offset(vma->vm_mm, addr);
@@ -572,7 +572,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
  * order to support COWable mappings.
  *
  */
-struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
+struct page *vm_normal_page(struct mm_area *vma, unsigned long addr,
 			    pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
@@ -638,7 +638,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 	return pfn_to_page(pfn);
 }
 
-struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
+struct folio *vm_normal_folio(struct mm_area *vma, unsigned long addr,
 			    pte_t pte)
 {
 	struct page *page = vm_normal_page(vma, addr, pte);
@@ -649,7 +649,7 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
 }
 
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
-struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
+struct page *vm_normal_page_pmd(struct mm_area *vma, unsigned long addr,
 				pmd_t pmd)
 {
 	unsigned long pfn = pmd_pfn(pmd);
@@ -688,7 +688,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 	return pfn_to_page(pfn);
 }
 
-struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
+struct folio *vm_normal_folio_pmd(struct mm_area *vma,
 				  unsigned long addr, pmd_t pmd)
 {
 	struct page *page = vm_normal_page_pmd(vma, addr, pmd);
@@ -725,7 +725,7 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
  * page table modifications (e.g., MADV_DONTNEED, mprotect), so device drivers
  * must use MMU notifiers to sync against any concurrent changes.
  */
-static void restore_exclusive_pte(struct vm_area_struct *vma,
+static void restore_exclusive_pte(struct mm_area *vma,
 		struct folio *folio, struct page *page, unsigned long address,
 		pte_t *ptep, pte_t orig_pte)
 {
@@ -759,7 +759,7 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
  * Tries to restore an exclusive pte if the page lock can be acquired without
  * sleeping.
  */
-static int try_restore_exclusive_pte(struct vm_area_struct *vma,
+static int try_restore_exclusive_pte(struct mm_area *vma,
 		unsigned long addr, pte_t *ptep, pte_t orig_pte)
 {
 	struct page *page = pfn_swap_entry_to_page(pte_to_swp_entry(orig_pte));
@@ -782,8 +782,8 @@ static int try_restore_exclusive_pte(struct vm_area_struct *vma,
 
 static unsigned long
 copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
+		pte_t *dst_pte, pte_t *src_pte, struct mm_area *dst_vma,
+		struct mm_area *src_vma, unsigned long addr, int *rss)
 {
 	unsigned long vm_flags = dst_vma->vm_flags;
 	pte_t orig_pte = ptep_get(src_pte);
@@ -903,7 +903,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
  * lock.
  */
 static inline int
-copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+copy_present_page(struct mm_area *dst_vma, struct mm_area *src_vma,
 		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
 		  struct folio **prealloc, struct page *page)
 {
@@ -938,8 +938,8 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	return 0;
 }
 
-static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
-		struct vm_area_struct *src_vma, pte_t *dst_pte, pte_t *src_pte,
+static __always_inline void __copy_present_ptes(struct mm_area *dst_vma,
+		struct mm_area *src_vma, pte_t *dst_pte, pte_t *src_pte,
 		pte_t pte, unsigned long addr, int nr)
 {
 	struct mm_struct *src_mm = src_vma->vm_mm;
@@ -969,7 +969,7 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
  * Otherwise, returns the number of copied PTEs (at least 1).
  */
 static inline int
-copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+copy_present_ptes(struct mm_area *dst_vma, struct mm_area *src_vma,
 		 pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
 		 int max_nr, int *rss, struct folio **prealloc)
 {
@@ -1046,7 +1046,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 }
 
 static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
-		struct vm_area_struct *vma, unsigned long addr, bool need_zero)
+		struct mm_area *vma, unsigned long addr, bool need_zero)
 {
 	struct folio *new_folio;
 
@@ -1068,7 +1068,7 @@ static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
 }
 
 static int
-copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+copy_pte_range(struct mm_area *dst_vma, struct mm_area *src_vma,
 	       pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
 	       unsigned long end)
 {
@@ -1223,7 +1223,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 }
 
 static inline int
-copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+copy_pmd_range(struct mm_area *dst_vma, struct mm_area *src_vma,
 	       pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
 	       unsigned long end)
 {
@@ -1260,7 +1260,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 }
 
 static inline int
-copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+copy_pud_range(struct mm_area *dst_vma, struct mm_area *src_vma,
 	       p4d_t *dst_p4d, p4d_t *src_p4d, unsigned long addr,
 	       unsigned long end)
 {
@@ -1297,7 +1297,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 }
 
 static inline int
-copy_p4d_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
+copy_p4d_range(struct mm_area *dst_vma, struct mm_area *src_vma,
 	       pgd_t *dst_pgd, pgd_t *src_pgd, unsigned long addr,
 	       unsigned long end)
 {
@@ -1326,7 +1326,7 @@ copy_p4d_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
  * when the child accesses the memory range.
  */
 static bool
-vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+vma_needs_copy(struct mm_area *dst_vma, struct mm_area *src_vma)
 {
 	/*
 	 * Always copy pgtables when dst_vma has uffd-wp enabled even if it's
@@ -1353,7 +1353,7 @@ vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 }
 
 int
-copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+copy_page_range(struct mm_area *dst_vma, struct mm_area *src_vma)
 {
 	pgd_t *src_pgd, *dst_pgd;
 	unsigned long addr = src_vma->vm_start;
@@ -1461,7 +1461,7 @@ static inline bool zap_drop_markers(struct zap_details *details)
  * Returns true if uffd-wp ptes were installed, false otherwise.
  */
 static inline bool
-zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
+zap_install_uffd_wp_if_needed(struct mm_area *vma,
 			      unsigned long addr, pte_t *pte, int nr,
 			      struct zap_details *details, pte_t pteval)
 {
@@ -1489,7 +1489,7 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
 }
 
 static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, struct folio *folio,
+		struct mm_area *vma, struct folio *folio,
 		struct page *page, pte_t *pte, pte_t ptent, unsigned int nr,
 		unsigned long addr, struct zap_details *details, int *rss,
 		bool *force_flush, bool *force_break, bool *any_skipped)
@@ -1540,7 +1540,7 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
  * Returns the number of processed (skipped or zapped) PTEs (at least 1).
  */
 static inline int zap_present_ptes(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
+		struct mm_area *vma, pte_t *pte, pte_t ptent,
 		unsigned int max_nr, unsigned long addr,
 		struct zap_details *details, int *rss, bool *force_flush,
 		bool *force_break, bool *any_skipped)
@@ -1589,7 +1589,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 }
 
 static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
+		struct mm_area *vma, pte_t *pte, pte_t ptent,
 		unsigned int max_nr, unsigned long addr,
 		struct zap_details *details, int *rss, bool *any_skipped)
 {
@@ -1659,7 +1659,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
 }
 
 static inline int do_zap_pte_range(struct mmu_gather *tlb,
-				   struct vm_area_struct *vma, pte_t *pte,
+				   struct mm_area *vma, pte_t *pte,
 				   unsigned long addr, unsigned long end,
 				   struct zap_details *details, int *rss,
 				   bool *force_flush, bool *force_break,
@@ -1695,7 +1695,7 @@ static inline int do_zap_pte_range(struct mmu_gather *tlb,
 }
 
 static unsigned long zap_pte_range(struct mmu_gather *tlb,
-				struct vm_area_struct *vma, pmd_t *pmd,
+				struct mm_area *vma, pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct zap_details *details)
 {
@@ -1787,7 +1787,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 }
 
 static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
-				struct vm_area_struct *vma, pud_t *pud,
+				struct mm_area *vma, pud_t *pud,
 				unsigned long addr, unsigned long end,
 				struct zap_details *details)
 {
@@ -1829,7 +1829,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
 }
 
 static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
-				struct vm_area_struct *vma, p4d_t *p4d,
+				struct mm_area *vma, p4d_t *p4d,
 				unsigned long addr, unsigned long end,
 				struct zap_details *details)
 {
@@ -1858,7 +1858,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
 }
 
 static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
-				struct vm_area_struct *vma, pgd_t *pgd,
+				struct mm_area *vma, pgd_t *pgd,
 				unsigned long addr, unsigned long end,
 				struct zap_details *details)
 {
@@ -1877,7 +1877,7 @@ static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
 }
 
 void unmap_page_range(struct mmu_gather *tlb,
-			     struct vm_area_struct *vma,
+			     struct mm_area *vma,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details)
 {
@@ -1898,7 +1898,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 
 
 static void unmap_single_vma(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, unsigned long start_addr,
+		struct mm_area *vma, unsigned long start_addr,
 		unsigned long end_addr,
 		struct zap_details *details, bool mm_wr_locked)
 {
@@ -1963,7 +1963,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
  * drops the lock and schedules.
  */
 void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
-		struct vm_area_struct *vma, unsigned long start_addr,
+		struct mm_area *vma, unsigned long start_addr,
 		unsigned long end_addr, unsigned long tree_end,
 		bool mm_wr_locked)
 {
@@ -1991,14 +1991,14 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
 
 /**
  * zap_page_range_single - remove user pages in a given range
- * @vma: vm_area_struct holding the applicable pages
+ * @vma: mm_area holding the applicable pages
  * @address: starting address of pages to zap
  * @size: number of bytes to zap
  * @details: details of shared cache invalidation
  *
  * The range must fit into one VMA.
  */
-void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
+void zap_page_range_single(struct mm_area *vma, unsigned long address,
 		unsigned long size, struct zap_details *details)
 {
 	const unsigned long end = address + size;
@@ -2023,7 +2023,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 
 /**
  * zap_vma_ptes - remove ptes mapping the vma
- * @vma: vm_area_struct holding ptes to be zapped
+ * @vma: mm_area holding ptes to be zapped
  * @address: starting address of pages to zap
  * @size: number of bytes to zap
  *
@@ -2032,7 +2032,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
  * The entire address range must be fully contained within the vma.
  *
  */
-void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
+void zap_vma_ptes(struct mm_area *vma, unsigned long address,
 		unsigned long size)
 {
 	if (!range_in_vma(vma, address, address + size) ||
@@ -2075,7 +2075,7 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
 	return pte_alloc_map_lock(mm, pmd, addr, ptl);
 }
 
-static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
+static bool vm_mixed_zeropage_allowed(struct mm_area *vma)
 {
 	VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
 	/*
@@ -2105,7 +2105,7 @@ static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
 	       (vma_is_fsdax(vma) || vma->vm_flags & VM_IO);
 }
 
-static int validate_page_before_insert(struct vm_area_struct *vma,
+static int validate_page_before_insert(struct mm_area *vma,
 				       struct page *page)
 {
 	struct folio *folio = page_folio(page);
@@ -2124,7 +2124,7 @@ static int validate_page_before_insert(struct vm_area_struct *vma,
 	return 0;
 }
 
-static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
+static int insert_page_into_pte_locked(struct mm_area *vma, pte_t *pte,
 				unsigned long addr, struct page *page,
 				pgprot_t prot, bool mkwrite)
 {
@@ -2165,7 +2165,7 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
 	return 0;
 }
 
-static int insert_page(struct vm_area_struct *vma, unsigned long addr,
+static int insert_page(struct mm_area *vma, unsigned long addr,
 			struct page *page, pgprot_t prot, bool mkwrite)
 {
 	int retval;
@@ -2186,7 +2186,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
 	return retval;
 }
 
-static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
+static int insert_page_in_batch_locked(struct mm_area *vma, pte_t *pte,
 			unsigned long addr, struct page *page, pgprot_t prot)
 {
 	int err;
@@ -2200,7 +2200,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
 /* insert_pages() amortizes the cost of spinlock operations
  * when inserting pages in a loop.
  */
-static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
+static int insert_pages(struct mm_area *vma, unsigned long addr,
 			struct page **pages, unsigned long *num, pgprot_t prot)
 {
 	pmd_t *pmd = NULL;
@@ -2273,7 +2273,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
  *
  * The same restrictions apply as in vm_insert_page().
  */
-int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
+int vm_insert_pages(struct mm_area *vma, unsigned long addr,
 			struct page **pages, unsigned long *num)
 {
 	const unsigned long end_addr = addr + (*num * PAGE_SIZE) - 1;
@@ -2320,7 +2320,7 @@ EXPORT_SYMBOL(vm_insert_pages);
  *
  * Return: %0 on success, negative error code otherwise.
  */
-int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
+int vm_insert_page(struct mm_area *vma, unsigned long addr,
 			struct page *page)
 {
 	if (addr < vma->vm_start || addr >= vma->vm_end)
@@ -2347,7 +2347,7 @@ EXPORT_SYMBOL(vm_insert_page);
  *
  * Return: 0 on success and error code otherwise.
  */
-static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+static int __vm_map_pages(struct mm_area *vma, struct page **pages,
 				unsigned long num, unsigned long offset)
 {
 	unsigned long count = vma_pages(vma);
@@ -2390,7 +2390,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
  * Context: Process context. Called by mmap handlers.
  * Return: 0 on success and error code otherwise.
  */
-int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+int vm_map_pages(struct mm_area *vma, struct page **pages,
 				unsigned long num)
 {
 	return __vm_map_pages(vma, pages, num, vma->vm_pgoff);
@@ -2410,14 +2410,14 @@ EXPORT_SYMBOL(vm_map_pages);
  * Context: Process context. Called by mmap handlers.
  * Return: 0 on success and error code otherwise.
  */
-int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
+int vm_map_pages_zero(struct mm_area *vma, struct page **pages,
 				unsigned long num)
 {
 	return __vm_map_pages(vma, pages, num, 0);
 }
 EXPORT_SYMBOL(vm_map_pages_zero);
 
-static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+static vm_fault_t insert_pfn(struct mm_area *vma, unsigned long addr,
 			pfn_t pfn, pgprot_t prot, bool mkwrite)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -2504,7 +2504,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
  * Context: Process context.  May allocate using %GFP_KERNEL.
  * Return: vm_fault_t value.
  */
-vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn_prot(struct mm_area *vma, unsigned long addr,
 			unsigned long pfn, pgprot_t pgprot)
 {
 	/*
@@ -2552,14 +2552,14 @@ EXPORT_SYMBOL(vmf_insert_pfn_prot);
  * Context: Process context.  May allocate using %GFP_KERNEL.
  * Return: vm_fault_t value.
  */
-vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_pfn(struct mm_area *vma, unsigned long addr,
 			unsigned long pfn)
 {
 	return vmf_insert_pfn_prot(vma, addr, pfn, vma->vm_page_prot);
 }
 EXPORT_SYMBOL(vmf_insert_pfn);
 
-static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
+static bool vm_mixed_ok(struct mm_area *vma, pfn_t pfn, bool mkwrite)
 {
 	if (unlikely(is_zero_pfn(pfn_t_to_pfn(pfn))) &&
 	    (mkwrite || !vm_mixed_zeropage_allowed(vma)))
@@ -2576,7 +2576,7 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
 	return false;
 }
 
-static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
+static vm_fault_t __vm_insert_mixed(struct mm_area *vma,
 		unsigned long addr, pfn_t pfn, bool mkwrite)
 {
 	pgprot_t pgprot = vma->vm_page_prot;
@@ -2643,7 +2643,7 @@ vm_fault_t vmf_insert_page_mkwrite(struct vm_fault *vmf, struct page *page,
 }
 EXPORT_SYMBOL_GPL(vmf_insert_page_mkwrite);
 
-vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
+vm_fault_t vmf_insert_mixed(struct mm_area *vma, unsigned long addr,
 		pfn_t pfn)
 {
 	return __vm_insert_mixed(vma, addr, pfn, false);
@@ -2655,7 +2655,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
 *  different entry in the meantime, we treat that as success as we assume
  *  the same entry was actually inserted.
  */
-vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
+vm_fault_t vmf_insert_mixed_mkwrite(struct mm_area *vma,
 		unsigned long addr, pfn_t pfn)
 {
 	return __vm_insert_mixed(vma, addr, pfn, true);
@@ -2759,7 +2759,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 	return 0;
 }
 
-static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long addr,
+static int remap_pfn_range_internal(struct mm_area *vma, unsigned long addr,
 		unsigned long pfn, unsigned long size, pgprot_t prot)
 {
 	pgd_t *pgd;
@@ -2816,7 +2816,7 @@ static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long ad
  * Variant of remap_pfn_range that does not call track_pfn_remap.  The caller
  * must have pre-validated the caching bits of the pgprot_t.
  */
-int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
+int remap_pfn_range_notrack(struct mm_area *vma, unsigned long addr,
 		unsigned long pfn, unsigned long size, pgprot_t prot)
 {
 	int error = remap_pfn_range_internal(vma, addr, pfn, size, prot);
@@ -2845,7 +2845,7 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
  *
  * Return: %0 on success, negative error code otherwise.
  */
-int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
+int remap_pfn_range(struct mm_area *vma, unsigned long addr,
 		    unsigned long pfn, unsigned long size, pgprot_t prot)
 {
 	int err;
@@ -2876,7 +2876,7 @@ EXPORT_SYMBOL(remap_pfn_range);
  *
  * Return: %0 on success, negative error code otherwise.
  */
-int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
+int vm_iomap_memory(struct mm_area *vma, phys_addr_t start, unsigned long len)
 {
 	unsigned long vm_len, pfn, pages;
 
@@ -3161,7 +3161,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	int ret;
 	void *kaddr;
 	void __user *uaddr;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long addr = vmf->address;
 
@@ -3253,7 +3253,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	return ret;
 }
 
-static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
+static gfp_t __get_fault_gfp_mask(struct mm_area *vma)
 {
 	struct file *vm_file = vma->vm_file;
 
@@ -3308,7 +3308,7 @@ static vm_fault_t do_page_mkwrite(struct vm_fault *vmf, struct folio *folio)
  */
 static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct address_space *mapping;
 	struct folio *folio = page_folio(vmf->page);
 	bool dirtied;
@@ -3362,7 +3362,7 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
 static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
 	__releases(vmf->ptl)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	pte_t entry;
 
 	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
@@ -3395,7 +3395,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
  */
 static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 
 	if (vma->vm_ops->map_pages || !(vmf->flags & FAULT_FLAG_VMA_LOCK))
 		return 0;
@@ -3420,7 +3420,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
  */
 vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	vm_fault_t ret = 0;
 
 	if (likely(vma->anon_vma))
@@ -3456,7 +3456,7 @@ vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
 static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	struct folio *old_folio = NULL;
 	struct folio *new_folio = NULL;
@@ -3647,7 +3647,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio
  */
 static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 
 	if (vma->vm_ops && vma->vm_ops->pfn_mkwrite) {
 		vm_fault_t ret;
@@ -3670,7 +3670,7 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
 	__releases(vmf->ptl)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	vm_fault_t ret = 0;
 
 	folio_get(folio);
@@ -3709,7 +3709,7 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	bool exclusive = false;
 
@@ -3775,14 +3775,14 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
 }
 #else /* !CONFIG_TRANSPARENT_HUGEPAGE */
 static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	BUILD_BUG();
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static bool wp_can_reuse_anon_folio(struct folio *folio,
-				    struct vm_area_struct *vma)
+				    struct mm_area *vma)
 {
 	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && folio_test_large(folio))
 		return __wp_can_reuse_large_anon_folio(folio, vma);
@@ -3848,7 +3848,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	__releases(vmf->ptl)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio = NULL;
 	pte_t pte;
 
@@ -3939,7 +3939,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 	return wp_page_copy(vmf);
 }
 
-static void unmap_mapping_range_vma(struct vm_area_struct *vma,
+static void unmap_mapping_range_vma(struct mm_area *vma,
 		unsigned long start_addr, unsigned long end_addr,
 		struct zap_details *details)
 {
@@ -3951,7 +3951,7 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
 					    pgoff_t last_index,
 					    struct zap_details *details)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	pgoff_t vba, vea, zba, zea;
 
 	vma_interval_tree_foreach(vma, root, first_index, last_index) {
@@ -4073,7 +4073,7 @@ EXPORT_SYMBOL(unmap_mapping_range);
 static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 {
 	struct folio *folio = page_folio(vmf->page);
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mmu_notifier_range range;
 	vm_fault_t ret;
 
@@ -4114,7 +4114,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 }
 
 static inline bool should_try_to_free_swap(struct folio *folio,
-					   struct vm_area_struct *vma,
+					   struct mm_area *vma,
 					   unsigned int fault_flags)
 {
 	if (!folio_test_swapcache(folio))
@@ -4205,7 +4205,7 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 
 static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio;
 	swp_entry_t entry;
 
@@ -4303,7 +4303,7 @@ static inline unsigned long thp_swap_suitable_orders(pgoff_t swp_offset,
 
 static struct folio *alloc_swap_folio(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	unsigned long orders;
 	struct folio *folio;
 	unsigned long addr;
@@ -4399,7 +4399,7 @@ static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);
  */
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *swapcache, *folio = NULL;
 	DECLARE_WAITQUEUE(wait, current);
 	struct page *page;
@@ -4859,7 +4859,7 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
 
 static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	unsigned long orders;
 	struct folio *folio;
@@ -4949,7 +4949,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
  */
 static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 	struct folio *folio;
 	vm_fault_t ret = 0;
@@ -5069,7 +5069,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
  */
 static vm_fault_t __do_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio;
 	vm_fault_t ret;
 
@@ -5126,7 +5126,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void deposit_prealloc_pte(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 
 	pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
 	/*
@@ -5140,7 +5140,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 {
 	struct folio *folio = page_folio(page);
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t entry;
@@ -5229,7 +5229,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 		struct page *page, unsigned int nr, unsigned long addr)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool prefault = !in_range(vmf->address, addr, nr * PAGE_SIZE);
 	pte_t entry;
@@ -5285,7 +5285,7 @@ static bool vmf_pte_changed(struct vm_fault *vmf)
  */
 vm_fault_t finish_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct page *page;
 	struct folio *folio;
 	vm_fault_t ret;
@@ -5528,7 +5528,7 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 
 static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio;
 	vm_fault_t ret;
 
@@ -5570,7 +5570,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 
 static vm_fault_t do_shared_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	vm_fault_t ret, tmp;
 	struct folio *folio;
 
@@ -5620,7 +5620,7 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
  */
 static vm_fault_t do_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mm_struct *vm_mm = vma->vm_mm;
 	vm_fault_t ret;
 
@@ -5666,7 +5666,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 		      unsigned long addr, int *flags,
 		      bool writable, int *last_cpupid)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 
 	/*
 	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
@@ -5709,7 +5709,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 	return mpol_misplaced(folio, vmf, addr);
 }
 
-static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct mm_area *vma,
 					unsigned long fault_addr, pte_t *fault_pte,
 					bool writable)
 {
@@ -5724,7 +5724,7 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
 	update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
 }
 
-static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
+static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct mm_area *vma,
 				       struct folio *folio, pte_t fault_pte,
 				       bool ignore_writable, bool pte_write_upgrade)
 {
@@ -5765,7 +5765,7 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
 
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct folio *folio = NULL;
 	int nid = NUMA_NO_NODE;
 	bool writable = false, ignore_writable = false;
@@ -5856,7 +5856,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	if (vma_is_anonymous(vma))
 		return do_huge_pmd_anonymous_page(vmf);
 	if (vma->vm_ops->huge_fault)
@@ -5867,7 +5867,7 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 /* `inline' is required to avoid gcc 4.1.2 build error */
 static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 	vm_fault_t ret;
 
@@ -5900,7 +5900,7 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	/* No support for anonymous transparent PUD pages yet */
 	if (vma_is_anonymous(vma))
 		return VM_FAULT_FALLBACK;
@@ -5914,7 +5914,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	vm_fault_t ret;
 
 	/* No support for anonymous transparent PUD pages yet */
@@ -6043,7 +6043,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
  * the result, the mmap_lock is not held on exit.  See filemap_fault()
  * and __folio_lock_or_retry().
  */
-static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
+static vm_fault_t __handle_mm_fault(struct mm_area *vma,
 		unsigned long address, unsigned int flags)
 {
 	struct vm_fault vmf = {
@@ -6208,7 +6208,7 @@ static inline void mm_account_fault(struct mm_struct *mm, struct pt_regs *regs,
 }
 
 #ifdef CONFIG_LRU_GEN
-static void lru_gen_enter_fault(struct vm_area_struct *vma)
+static void lru_gen_enter_fault(struct mm_area *vma)
 {
 	/* the LRU algorithm only applies to accesses with recency */
 	current->in_lru_fault = vma_has_recency(vma);
@@ -6219,7 +6219,7 @@ static void lru_gen_exit_fault(void)
 	current->in_lru_fault = false;
 }
 #else
-static void lru_gen_enter_fault(struct vm_area_struct *vma)
+static void lru_gen_enter_fault(struct mm_area *vma)
 {
 }
 
@@ -6228,7 +6228,7 @@ static void lru_gen_exit_fault(void)
 }
 #endif /* CONFIG_LRU_GEN */
 
-static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
+static vm_fault_t sanitize_fault_flags(struct mm_area *vma,
 				       unsigned int *flags)
 {
 	if (unlikely(*flags & FAULT_FLAG_UNSHARE)) {
@@ -6270,7 +6270,7 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
  * The mmap_lock may have been released depending on flags and our
  * return value.  See filemap_fault() and __folio_lock_or_retry().
  */
-vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
+vm_fault_t handle_mm_fault(struct mm_area *vma, unsigned long address,
 			   unsigned int flags, struct pt_regs *regs)
 {
 	/* If the fault handler drops the mmap_lock, vma may be freed */
@@ -6397,10 +6397,10 @@ static inline bool upgrade_mmap_lock_carefully(struct mm_struct *mm, struct pt_r
  * We can also actually take the mm lock for writing if we
  * need to extend the vma, which helps the VM layer a lot.
  */
-struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
+struct mm_area *lock_mm_and_find_vma(struct mm_struct *mm,
 			unsigned long addr, struct pt_regs *regs)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if (!get_mmap_lock_carefully(mm, regs))
 		return NULL;
@@ -6454,7 +6454,7 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
-static inline bool __vma_enter_locked(struct vm_area_struct *vma, bool detaching)
+static inline bool __vma_enter_locked(struct mm_area *vma, bool detaching)
 {
 	unsigned int tgt_refcnt = VMA_LOCK_OFFSET;
 
@@ -6478,13 +6478,13 @@ static inline bool __vma_enter_locked(struct vm_area_struct *vma, bool detaching
 	return true;
 }
 
-static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
+static inline void __vma_exit_locked(struct mm_area *vma, bool *detached)
 {
 	*detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt);
 	rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
 }
 
-void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+void __vma_start_write(struct mm_area *vma, unsigned int mm_lock_seq)
 {
 	bool locked;
 
@@ -6512,7 +6512,7 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
 }
 EXPORT_SYMBOL_GPL(__vma_start_write);
 
-void vma_mark_detached(struct vm_area_struct *vma)
+void vma_mark_detached(struct mm_area *vma)
 {
 	vma_assert_write_locked(vma);
 	vma_assert_attached(vma);
@@ -6541,11 +6541,11 @@ void vma_mark_detached(struct vm_area_struct *vma)
 * stable and not isolated. If the VMA is not found or is being modified, the
  * function returns NULL.
  */
-struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
+struct mm_area *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address)
 {
 	MA_STATE(mas, &mm->mm_mt, address, address);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	rcu_read_lock();
 retry:
@@ -6675,7 +6675,7 @@ static inline void pfnmap_args_setup(struct follow_pfnmap_args *args,
 	args->special = special;
 }
 
-static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma)
+static inline void pfnmap_lockdep_assert(struct mm_area *vma)
 {
 #ifdef CONFIG_LOCKDEP
 	struct file *file = vma->vm_file;
@@ -6722,7 +6722,7 @@ static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma)
  */
 int follow_pfnmap_start(struct follow_pfnmap_args *args)
 {
-	struct vm_area_struct *vma = args->vma;
+	struct mm_area *vma = args->vma;
 	unsigned long address = args->address;
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *lock;
@@ -6825,7 +6825,7 @@ EXPORT_SYMBOL_GPL(follow_pfnmap_end);
  * iomem mapping. This callback is used by access_process_vm() when the @vma is
  * not page based.
  */
-int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
+int generic_access_phys(struct mm_area *vma, unsigned long addr,
 			void *buf, int len, int write)
 {
 	resource_size_t phys_addr;
@@ -6899,7 +6899,7 @@ static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
 	while (len) {
 		int bytes, offset;
 		void *maddr;
-		struct vm_area_struct *vma = NULL;
+		struct mm_area *vma = NULL;
 		struct page *page = get_user_page_vma_remote(mm, addr,
 							     gup_flags, &vma);
 
@@ -7024,7 +7024,7 @@ static int __copy_remote_vm_str(struct mm_struct *mm, unsigned long addr,
 		int bytes, offset, retval;
 		void *maddr;
 		struct page *page;
-		struct vm_area_struct *vma = NULL;
+		struct mm_area *vma = NULL;
 
 		page = get_user_page_vma_remote(mm, addr, gup_flags, &vma);
 		if (IS_ERR(page)) {
@@ -7120,7 +7120,7 @@ EXPORT_SYMBOL_GPL(copy_remote_vm_str);
 void print_vma_addr(char *prefix, unsigned long ip)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * we might be running from an atomic context so we cannot sleep
@@ -7251,7 +7251,7 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 				   unsigned long addr_hint,
-				   struct vm_area_struct *vma,
+				   struct mm_area *vma,
 				   unsigned int nr_pages)
 {
 	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
@@ -7274,7 +7274,7 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 struct copy_subpage_arg {
 	struct folio *dst;
 	struct folio *src;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 };
 
 static int copy_subpage(unsigned long addr, int idx, void *arg)
@@ -7289,7 +7289,7 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 }
 
 int copy_user_large_folio(struct folio *dst, struct folio *src,
-			  unsigned long addr_hint, struct vm_area_struct *vma)
+			  unsigned long addr_hint, struct mm_area *vma)
 {
 	unsigned int nr_pages = folio_nr_pages(dst);
 	struct copy_subpage_arg arg = {
@@ -7364,13 +7364,13 @@ void ptlock_free(struct ptdesc *ptdesc)
 }
 #endif
 
-void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+void vma_pgtable_walk_begin(struct mm_area *vma)
 {
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_lock_read(vma);
 }
 
-void vma_pgtable_walk_end(struct vm_area_struct *vma)
+void vma_pgtable_walk_end(struct mm_area *vma)
 {
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b28a1e6ae096..3403a4805d17 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -445,7 +445,7 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
  */
 void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	mmap_write_lock(mm);
@@ -511,7 +511,7 @@ struct queue_pages {
 	nodemask_t *nmask;
 	unsigned long start;
 	unsigned long end;
-	struct vm_area_struct *first;
+	struct mm_area *first;
 	struct folio *large;		/* note last large folio encountered */
 	long nr_failed;			/* could not be isolated at this time */
 };
@@ -566,7 +566,7 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
 static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	struct folio *folio;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
@@ -698,7 +698,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
  * an architecture makes a different choice, it will need further
  * changes to the core.
  */
-unsigned long change_prot_numa(struct vm_area_struct *vma,
+unsigned long change_prot_numa(struct mm_area *vma,
 			unsigned long addr, unsigned long end)
 {
 	struct mmu_gather tlb;
@@ -721,7 +721,7 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 static int queue_pages_test_walk(unsigned long start, unsigned long end,
 				struct mm_walk *walk)
 {
-	struct vm_area_struct *next, *vma = walk->vma;
+	struct mm_area *next, *vma = walk->vma;
 	struct queue_pages *qp = walk->private;
 	unsigned long flags = qp->flags;
 
@@ -817,7 +817,7 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
  * Apply policy to a single VMA
  * This must be called with the mmap_lock held for writing.
  */
-static int vma_replace_policy(struct vm_area_struct *vma,
+static int vma_replace_policy(struct mm_area *vma,
 				struct mempolicy *pol)
 {
 	int err;
@@ -847,8 +847,8 @@ static int vma_replace_policy(struct vm_area_struct *vma,
 }
 
 /* Split or merge the VMA (if required) and apply the new policy */
-static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		struct vm_area_struct **prev, unsigned long start,
+static int mbind_range(struct vma_iterator *vmi, struct mm_area *vma,
+		struct mm_area **prev, unsigned long start,
 		unsigned long end, struct mempolicy *new_pol)
 {
 	unsigned long vmstart, vmend;
@@ -960,7 +960,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
 {
 	int err;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	struct mempolicy *pol = current->mempolicy, *pol_refcount = NULL;
 
 	if (flags &
@@ -1094,7 +1094,7 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest,
 			    int flags)
 {
 	nodemask_t nmask;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	LIST_HEAD(pagelist);
 	long nr_failed;
 	long err = 0;
@@ -1299,7 +1299,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 		     nodemask_t *nmask, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	struct vma_iterator vmi;
 	struct migration_mpol mmpol;
 	struct mempolicy *new;
@@ -1572,7 +1572,7 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le
 		unsigned long, home_node, unsigned long, flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	struct mempolicy *new, *old;
 	unsigned long end;
 	int err = -ENOENT;
@@ -1799,7 +1799,7 @@ SYSCALL_DEFINE5(get_mempolicy, int __user *, policy,
 	return kernel_get_mempolicy(policy, nmask, maxnode, addr, flags);
 }
 
-bool vma_migratable(struct vm_area_struct *vma)
+bool vma_migratable(struct mm_area *vma)
 {
 	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
 		return false;
@@ -1827,7 +1827,7 @@ bool vma_migratable(struct vm_area_struct *vma)
 	return true;
 }
 
-struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
+struct mempolicy *__get_vma_policy(struct mm_area *vma,
 				   unsigned long addr, pgoff_t *ilx)
 {
 	*ilx = 0;
@@ -1850,7 +1850,7 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
  * freeing by another task.  It is the caller's responsibility to free the
  * extra reference for shared policies.
  */
-struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+struct mempolicy *get_vma_policy(struct mm_area *vma,
 				 unsigned long addr, int order, pgoff_t *ilx)
 {
 	struct mempolicy *pol;
@@ -1866,7 +1866,7 @@ struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
 	return pol;
 }
 
-bool vma_policy_mof(struct vm_area_struct *vma)
+bool vma_policy_mof(struct mm_area *vma)
 {
 	struct mempolicy *pol;
 
@@ -2135,7 +2135,7 @@ static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
  * If the effective policy is 'bind' or 'prefer-many', returns a pointer
  * to the mempolicy's @nodemask for filtering the zonelist.
  */
-int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
+int huge_node(struct mm_area *vma, unsigned long addr, gfp_t gfp_flags,
 		struct mempolicy **mpol, nodemask_t **nodemask)
 {
 	pgoff_t ilx;
@@ -2341,7 +2341,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
  *
  * Return: The folio on success or NULL if allocation fails.
  */
-struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct mm_area *vma,
 		unsigned long addr)
 {
 	struct mempolicy *pol;
@@ -2607,7 +2607,7 @@ unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
 				       nr_pages, page_array);
 }
 
-int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
+int vma_dup_policy(struct mm_area *src, struct mm_area *dst)
 {
 	struct mempolicy *pol = mpol_dup(src->vm_policy);
 
@@ -2795,7 +2795,7 @@ int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
 	pgoff_t ilx;
 	struct zoneref *z;
 	int curnid = folio_nid(folio);
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	int thiscpu = raw_smp_processor_id();
 	int thisnid = numa_node_id();
 	int polnid = NUMA_NO_NODE;
@@ -3054,7 +3054,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
 }
 
 int mpol_set_shared_policy(struct shared_policy *sp,
-			struct vm_area_struct *vma, struct mempolicy *pol)
+			struct mm_area *vma, struct mempolicy *pol)
 {
 	int err;
 	struct sp_node *new = NULL;
diff --git a/mm/migrate.c b/mm/migrate.c
index f3ee6d8d5e2e..7909e4ae797c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -237,7 +237,7 @@ struct rmap_walk_arg {
  * Restore a potential migration pte to a working pte entry
  */
 static bool remove_migration_pte(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long addr, void *arg)
+		struct mm_area *vma, unsigned long addr, void *arg)
 {
 	struct rmap_walk_arg *rmap_walk_arg = arg;
 	DEFINE_FOLIO_VMA_WALK(pvmw, rmap_walk_arg->folio, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
@@ -405,7 +405,7 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
  *
  * This function will release the vma lock before returning.
  */
-void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+void migration_entry_wait_huge(struct mm_area *vma, unsigned long addr, pte_t *ptep)
 {
 	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, ptep);
 	pte_t pte;
@@ -2254,7 +2254,7 @@ static int __add_folio_for_migration(struct folio *folio, int node,
 static int add_folio_for_migration(struct mm_struct *mm, const void __user *p,
 		int node, struct list_head *pagelist, bool migrate_all)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct folio_walk fw;
 	struct folio *folio;
 	unsigned long addr;
@@ -2423,7 +2423,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
 
 	for (i = 0; i < nr_pages; i++) {
 		unsigned long addr = (unsigned long)(*pages);
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 		struct folio_walk fw;
 		struct folio *folio;
 		int err = -EFAULT;
@@ -2640,7 +2640,7 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
  * permitted. Must be called with the PTL still held.
  */
 int migrate_misplaced_folio_prepare(struct folio *folio,
-		struct vm_area_struct *vma, int node)
+		struct mm_area *vma, int node)
 {
 	int nr_pages = folio_nr_pages(folio);
 	pg_data_t *pgdat = NODE_DATA(node);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 3158afe7eb23..96786d64edd6 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -62,7 +62,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 	struct migrate_vma *migrate = walk->private;
 	struct folio *fault_folio = migrate->fault_page ?
 		page_folio(migrate->fault_page) : NULL;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long addr = start, unmapped = 0;
 	spinlock_t *ptl;
@@ -589,7 +589,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 				    unsigned long *src)
 {
 	struct folio *folio = page_folio(page);
-	struct vm_area_struct *vma = migrate->vma;
+	struct mm_area *vma = migrate->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	bool flush = false;
 	spinlock_t *ptl;
diff --git a/mm/mincore.c b/mm/mincore.c
index 832f29f46767..6b53d9361ec7 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -70,7 +70,7 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t index)
 }
 
 static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
-				struct vm_area_struct *vma, unsigned char *vec)
+				struct mm_area *vma, unsigned char *vec)
 {
 	unsigned long nr = (end - addr) >> PAGE_SHIFT;
 	int i;
@@ -101,7 +101,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			struct mm_walk *walk)
 {
 	spinlock_t *ptl;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
@@ -155,7 +155,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	return 0;
 }
 
-static inline bool can_do_mincore(struct vm_area_struct *vma)
+static inline bool can_do_mincore(struct mm_area *vma)
 {
 	if (vma_is_anonymous(vma))
 		return true;
@@ -186,7 +186,7 @@ static const struct mm_walk_ops mincore_walk_ops = {
  */
 static long do_mincore(unsigned long addr, unsigned long pages, unsigned char *vec)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long end;
 	int err;
 
diff --git a/mm/mlock.c b/mm/mlock.c
index 3cb72b579ffd..8c13cce0d0cb 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -319,7 +319,7 @@ static inline unsigned int folio_mlock_step(struct folio *folio,
 }
 
 static inline bool allow_mlock_munlock(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long start,
+		struct mm_area *vma, unsigned long start,
 		unsigned long end, unsigned int step)
 {
 	/*
@@ -353,7 +353,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
 			   unsigned long end, struct mm_walk *walk)
 
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	spinlock_t *ptl;
 	pte_t *start_pte, *pte;
 	pte_t ptent;
@@ -422,7 +422,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
  * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
  * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
  */
-static void mlock_vma_pages_range(struct vm_area_struct *vma,
+static void mlock_vma_pages_range(struct mm_area *vma,
 	unsigned long start, unsigned long end, vm_flags_t newflags)
 {
 	static const struct mm_walk_ops mlock_walk_ops = {
@@ -465,8 +465,8 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
  *
  * For vmas that pass the filters, merge/split as appropriate.
  */
-static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
-	       struct vm_area_struct **prev, unsigned long start,
+static int mlock_fixup(struct vma_iterator *vmi, struct mm_area *vma,
+	       struct mm_area **prev, unsigned long start,
 	       unsigned long end, vm_flags_t newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -517,7 +517,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
 				vm_flags_t flags)
 {
 	unsigned long nstart, end, tmp;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	VMA_ITERATOR(vmi, current->mm, start);
 
 	VM_BUG_ON(offset_in_page(start));
@@ -573,7 +573,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
 static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
 		unsigned long start, size_t len)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long count = 0;
 	unsigned long end;
 	VMA_ITERATOR(vmi, mm, start);
@@ -706,7 +706,7 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
 static int apply_mlockall_flags(int flags)
 {
 	VMA_ITERATOR(vmi, current->mm, 0);
-	struct vm_area_struct *vma, *prev = NULL;
+	struct mm_area *vma, *prev = NULL;
 	vm_flags_t to_add = 0;
 
 	current->mm->def_flags &= ~VM_LOCKED_MASK;
diff --git a/mm/mmap.c b/mm/mmap.c
index bd210aaf7ebd..d7d95a6f343d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -78,7 +78,7 @@ static bool ignore_rlimit_data;
 core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
 
 /* Update vma->vm_page_prot to reflect vma->vm_flags. */
-void vma_set_page_prot(struct vm_area_struct *vma)
+void vma_set_page_prot(struct mm_area *vma)
 {
 	unsigned long vm_flags = vma->vm_flags;
 	pgprot_t vm_page_prot;
@@ -116,7 +116,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 {
 	unsigned long newbrk, oldbrk, origbrk;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *brkvma, *next = NULL;
+	struct mm_area *brkvma, *next = NULL;
 	unsigned long min_brk;
 	bool populate = false;
 	LIST_HEAD(uf);
@@ -693,7 +693,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
 			  unsigned long flags, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
 
@@ -741,7 +741,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
 				  unsigned long len, unsigned long pgoff,
 				  unsigned long flags, vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	struct mm_struct *mm = current->mm;
 	struct vm_unmapped_area_info info = {};
 	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
@@ -886,7 +886,7 @@ EXPORT_SYMBOL(mm_get_unmapped_area);
  * Returns: The first VMA within the provided range, %NULL otherwise.  Assumes
  * start_addr < end_addr.
  */
-struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
+struct mm_area *find_vma_intersection(struct mm_struct *mm,
 					     unsigned long start_addr,
 					     unsigned long end_addr)
 {
@@ -905,7 +905,7 @@ EXPORT_SYMBOL(find_vma_intersection);
  * Returns: The VMA associated with addr, or the next VMA.
  * May return %NULL in the case of no VMA at addr or above.
  */
-struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+struct mm_area *find_vma(struct mm_struct *mm, unsigned long addr)
 {
 	unsigned long index = addr;
 
@@ -927,11 +927,11 @@ EXPORT_SYMBOL(find_vma);
  * Returns: The VMA associated with @addr, or the next vma.
  * May return %NULL in the case of no vma at addr or above.
  */
-struct vm_area_struct *
+struct mm_area *
 find_vma_prev(struct mm_struct *mm, unsigned long addr,
-			struct vm_area_struct **pprev)
+			struct mm_area **pprev)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, addr);
 
 	vma = vma_iter_load(&vmi);
@@ -958,14 +958,14 @@ static int __init cmdline_parse_stack_guard_gap(char *p)
 __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
 
 #ifdef CONFIG_STACK_GROWSUP
-int expand_stack_locked(struct vm_area_struct *vma, unsigned long address)
+int expand_stack_locked(struct mm_area *vma, unsigned long address)
 {
 	return expand_upwards(vma, address);
 }
 
-struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
+struct mm_area *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
 {
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 
 	addr &= PAGE_MASK;
 	vma = find_vma_prev(mm, addr, &prev);
@@ -980,14 +980,14 @@ struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned lon
 	return prev;
 }
 #else
-int expand_stack_locked(struct vm_area_struct *vma, unsigned long address)
+int expand_stack_locked(struct mm_area *vma, unsigned long address)
 {
 	return expand_downwards(vma, address);
 }
 
-struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
+struct mm_area *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long start;
 
 	addr &= PAGE_MASK;
@@ -1028,9 +1028,9 @@ struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned lon
  * If no vma is found or it can't be expanded, it returns NULL and has
  * dropped the lock.
  */
-struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
+struct mm_area *expand_stack(struct mm_struct *mm, unsigned long addr)
 {
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 
 	mmap_read_unlock(mm);
 	if (mmap_write_lock_killable(mm))
@@ -1093,7 +1093,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 {
 
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long populate = 0;
 	unsigned long ret = -EINVAL;
 	struct file *file;
@@ -1172,7 +1172,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 
 	if (start + size > vma->vm_end) {
 		VMA_ITERATOR(vmi, mm, vma->vm_end);
-		struct vm_area_struct *next, *prev = vma;
+		struct mm_area *next, *prev = vma;
 
 		for_each_vma_range(vmi, next, start + size) {
 			/* hole between vmas ? */
@@ -1210,7 +1210,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	unsigned long len;
 	int ret;
 	bool populate;
@@ -1258,7 +1258,7 @@ EXPORT_SYMBOL(vm_brk_flags);
 void exit_mmap(struct mm_struct *mm)
 {
 	struct mmu_gather tlb;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long nr_accounted = 0;
 	VMA_ITERATOR(vmi, mm, 0);
 	int count = 0;
@@ -1325,7 +1325,7 @@ void exit_mmap(struct mm_struct *mm)
  * and into the inode's i_mmap tree.  If vm_file is non-NULL
  * then i_mmap_rwsem is taken here.
  */
-int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
+int insert_vm_struct(struct mm_struct *mm, struct mm_area *vma)
 {
 	unsigned long charged = vma_pages(vma);
 
@@ -1411,7 +1411,7 @@ static vm_fault_t special_mapping_fault(struct vm_fault *vmf);
  *
  * Having a close hook prevents vma merging regardless of flags.
  */
-static void special_mapping_close(struct vm_area_struct *vma)
+static void special_mapping_close(struct mm_area *vma)
 {
 	const struct vm_special_mapping *sm = vma->vm_private_data;
 
@@ -1419,12 +1419,12 @@ static void special_mapping_close(struct vm_area_struct *vma)
 		sm->close(sm, vma);
 }
 
-static const char *special_mapping_name(struct vm_area_struct *vma)
+static const char *special_mapping_name(struct mm_area *vma)
 {
 	return ((struct vm_special_mapping *)vma->vm_private_data)->name;
 }
 
-static int special_mapping_mremap(struct vm_area_struct *new_vma)
+static int special_mapping_mremap(struct mm_area *new_vma)
 {
 	struct vm_special_mapping *sm = new_vma->vm_private_data;
 
@@ -1437,7 +1437,7 @@ static int special_mapping_mremap(struct vm_area_struct *new_vma)
 	return 0;
 }
 
-static int special_mapping_split(struct vm_area_struct *vma, unsigned long addr)
+static int special_mapping_split(struct mm_area *vma, unsigned long addr)
 {
 	/*
 	 * Forbid splitting special mappings - kernel has expectations over
@@ -1460,7 +1460,7 @@ static const struct vm_operations_struct special_mapping_vmops = {
 
 static vm_fault_t special_mapping_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	pgoff_t pgoff;
 	struct page **pages;
 	struct vm_special_mapping *sm = vma->vm_private_data;
@@ -1483,14 +1483,14 @@ static vm_fault_t special_mapping_fault(struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
-static struct vm_area_struct *__install_special_mapping(
+static struct mm_area *__install_special_mapping(
 	struct mm_struct *mm,
 	unsigned long addr, unsigned long len,
 	unsigned long vm_flags, void *priv,
 	const struct vm_operations_struct *ops)
 {
 	int ret;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = vm_area_alloc(mm);
 	if (unlikely(vma == NULL))
@@ -1519,7 +1519,7 @@ static struct vm_area_struct *__install_special_mapping(
 	return ERR_PTR(ret);
 }
 
-bool vma_is_special_mapping(const struct vm_area_struct *vma,
+bool vma_is_special_mapping(const struct mm_area *vma,
 	const struct vm_special_mapping *sm)
 {
 	return vma->vm_private_data == sm &&
@@ -1535,7 +1535,7 @@ bool vma_is_special_mapping(const struct vm_area_struct *vma,
  * The array pointer and the pages it points to are assumed to stay alive
  * for as long as this mapping might exist.
  */
-struct vm_area_struct *_install_special_mapping(
+struct mm_area *_install_special_mapping(
 	struct mm_struct *mm,
 	unsigned long addr, unsigned long len,
 	unsigned long vm_flags, const struct vm_special_mapping *spec)
@@ -1725,7 +1725,7 @@ subsys_initcall(init_reserve_notifier);
  * This function is almost certainly NOT what you want for anything other than
  * early executable temporary stack relocation.
  */
-int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
+int relocate_vma_down(struct mm_area *vma, unsigned long shift)
 {
 	/*
 	 * The process proceeds as follows:
@@ -1746,7 +1746,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
 	unsigned long new_end = old_end - shift;
 	VMA_ITERATOR(vmi, mm, new_start);
 	VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff);
-	struct vm_area_struct *next;
+	struct mm_area *next;
 	struct mmu_gather tlb;
 	PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
 
@@ -1824,7 +1824,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
  * before downgrading it.
  */
 bool mmap_read_lock_maybe_expand(struct mm_struct *mm,
-				 struct vm_area_struct *new_vma,
+				 struct mm_area *new_vma,
 				 unsigned long addr, bool write)
 {
 	if (!write || addr >= new_vma->vm_start) {
@@ -1845,7 +1845,7 @@ bool mmap_read_lock_maybe_expand(struct mm_struct *mm,
 	return true;
 }
 #else
-bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
+bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct mm_area *vma,
 				 unsigned long addr, bool write)
 {
 	return false;
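
For a quick readability check, a minimal sketch (not part of this patch) of
how the renamed type reads at a typical call site; VMA_ITERATOR() and
for_each_vma() are untouched by the rename:

	struct mm_area *vma;
	VMA_ITERATOR(vmi, mm, 0);

	mmap_read_lock(mm);
	for_each_vma(vmi, vma)
		pr_info("vma %lx-%lx\n", vma->vm_start, vma->vm_end);
	mmap_read_unlock(mm);
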
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index db7ba4a725d6..c94257a65e5b 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -48,7 +48,7 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 }
 
 #ifdef CONFIG_SMP
-static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
+static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct mm_area *vma)
 {
 	struct encoded_page **pages = batch->encoded_pages;
 
@@ -79,7 +79,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_
  * we only need to walk through the current active batch and the
  * original local one.
  */
-void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
+void tlb_flush_rmaps(struct mmu_gather *tlb, struct mm_area *vma)
 {
 	if (!tlb->delayed_rmap)
 		return;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 62c1f7945741..2f1f44d80639 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -40,7 +40,7 @@
 
 #include "internal.h"
 
-bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
+bool can_change_pte_writable(struct mm_area *vma, unsigned long addr,
 			     pte_t pte)
 {
 	struct page *page;
@@ -84,7 +84,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
 }
 
 static long change_pte_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		struct mm_area *vma, pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
@@ -292,7 +292,7 @@ static long change_pte_range(struct mmu_gather *tlb,
  * protection procedure, false otherwise.
  */
 static inline bool
-pgtable_split_needed(struct vm_area_struct *vma, unsigned long cp_flags)
+pgtable_split_needed(struct mm_area *vma, unsigned long cp_flags)
 {
 	/*
 	 * pte markers only reside at the pte level; if we need pte markers,
@@ -308,7 +308,7 @@ pgtable_split_needed(struct vm_area_struct *vma, unsigned long cp_flags)
  * procedure, false otherwise
  */
 static inline bool
-pgtable_populate_needed(struct vm_area_struct *vma, unsigned long cp_flags)
+pgtable_populate_needed(struct mm_area *vma, unsigned long cp_flags)
 {
 	/* If not within ioctl(UFFDIO_WRITEPROTECT), then don't bother */
 	if (!(cp_flags & MM_CP_UFFD_WP))
@@ -351,7 +351,7 @@ pgtable_populate_needed(struct vm_area_struct *vma, unsigned long cp_flags)
 	})
 
 static inline long change_pmd_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
+		struct mm_area *vma, pud_t *pud, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
@@ -421,7 +421,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
 }
 
 static inline long change_pud_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
+		struct mm_area *vma, p4d_t *p4d, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mmu_notifier_range range;
@@ -480,7 +480,7 @@ static inline long change_pud_range(struct mmu_gather *tlb,
 }
 
 static inline long change_p4d_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
+		struct mm_area *vma, pgd_t *pgd, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
@@ -503,7 +503,7 @@ static inline long change_p4d_range(struct mmu_gather *tlb,
 }
 
 static long change_protection_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, unsigned long addr,
+		struct mm_area *vma, unsigned long addr,
 		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -533,7 +533,7 @@ static long change_protection_range(struct mmu_gather *tlb,
 }
 
 long change_protection(struct mmu_gather *tlb,
-		       struct vm_area_struct *vma, unsigned long start,
+		       struct mm_area *vma, unsigned long start,
 		       unsigned long end, unsigned long cp_flags)
 {
 	pgprot_t newprot = vma->vm_page_prot;
@@ -595,7 +595,7 @@ static const struct mm_walk_ops prot_none_walk_ops = {
 
 int
 mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
-	       struct vm_area_struct *vma, struct vm_area_struct **pprev,
+	       struct mm_area *vma, struct mm_area **pprev,
 	       unsigned long start, unsigned long end, unsigned long newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -704,7 +704,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		unsigned long prot, int pkey)
 {
 	unsigned long nstart, end, tmp, reqprot;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	int error;
 	const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
 	const bool rier = (current->personality & READ_IMPLIES_EXEC) &&
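
do_mprotect_pkey() above is the kernel half of mprotect(2).  For anyone
following along from userspace, an illustrative program exercising this
path by dropping write permission on one page:

	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		p[0] = 1;
		/* Drop write permission; a later store would SIGSEGV. */
		if (mprotect(p, page, PROT_READ))
			return 1;
		return munmap(p, page);
	}
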
diff --git a/mm/mremap.c b/mm/mremap.c
index 0865387531ed..2634b9f85423 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -61,7 +61,7 @@ struct vma_remap_struct {
 	struct list_head *uf_unmap;
 
 	/* VMA state, determined in do_mremap(). */
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/* Internal state, determined in do_mremap(). */
 	unsigned long delta;		/* Absolute delta of old_len,new_len. */
@@ -139,7 +139,7 @@ static pmd_t *alloc_new_pmd(struct mm_struct *mm, unsigned long addr)
 	return pmd;
 }
 
-static void take_rmap_locks(struct vm_area_struct *vma)
+static void take_rmap_locks(struct mm_area *vma)
 {
 	if (vma->vm_file)
 		i_mmap_lock_write(vma->vm_file->f_mapping);
@@ -147,7 +147,7 @@ static void take_rmap_locks(struct vm_area_struct *vma)
 		anon_vma_lock_write(vma->anon_vma);
 }
 
-static void drop_rmap_locks(struct vm_area_struct *vma)
+static void drop_rmap_locks(struct mm_area *vma)
 {
 	if (vma->anon_vma)
 		anon_vma_unlock_write(vma->anon_vma);
@@ -173,7 +173,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 static int move_ptes(struct pagetable_move_control *pmc,
 		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
 {
-	struct vm_area_struct *vma = pmc->old;
+	struct mm_area *vma = pmc->old;
 	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
 	struct mm_struct *mm = vma->vm_mm;
 	pte_t *old_pte, *new_pte, pte;
@@ -297,7 +297,7 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
 			pmd_t *old_pmd, pmd_t *new_pmd)
 {
 	spinlock_t *old_ptl, *new_ptl;
-	struct vm_area_struct *vma = pmc->old;
+	struct mm_area *vma = pmc->old;
 	struct mm_struct *mm = vma->vm_mm;
 	bool res = false;
 	pmd_t pmd;
@@ -381,7 +381,7 @@ static bool move_normal_pud(struct pagetable_move_control *pmc,
 		pud_t *old_pud, pud_t *new_pud)
 {
 	spinlock_t *old_ptl, *new_ptl;
-	struct vm_area_struct *vma = pmc->old;
+	struct mm_area *vma = pmc->old;
 	struct mm_struct *mm = vma->vm_mm;
 	pud_t pud;
 
@@ -439,7 +439,7 @@ static bool move_huge_pud(struct pagetable_move_control *pmc,
 		pud_t *old_pud, pud_t *new_pud)
 {
 	spinlock_t *old_ptl, *new_ptl;
-	struct vm_area_struct *vma = pmc->old;
+	struct mm_area *vma = pmc->old;
 	struct mm_struct *mm = vma->vm_mm;
 	pud_t pud;
 
@@ -598,7 +598,7 @@ static bool move_pgt_entry(struct pagetable_move_control *pmc,
  * so we make an exception for it.
  */
 static bool can_align_down(struct pagetable_move_control *pmc,
-			   struct vm_area_struct *vma, unsigned long addr_to_align,
+			   struct mm_area *vma, unsigned long addr_to_align,
 			   unsigned long mask)
 {
 	unsigned long addr_masked = addr_to_align & mask;
@@ -902,7 +902,7 @@ static bool vrm_implies_new_addr(struct vma_remap_struct *vrm)
  */
 static unsigned long vrm_set_new_addr(struct vma_remap_struct *vrm)
 {
-	struct vm_area_struct *vma = vrm->vma;
+	struct mm_area *vma = vrm->vma;
 	unsigned long map_flags = 0;
 	/* Page Offset _into_ the VMA. */
 	pgoff_t internal_pgoff = (vrm->addr - vma->vm_start) >> PAGE_SHIFT;
@@ -978,7 +978,7 @@ static void vrm_stat_account(struct vma_remap_struct *vrm,
 {
 	unsigned long pages = bytes >> PAGE_SHIFT;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = vrm->vma;
+	struct mm_area *vma = vrm->vma;
 
 	vm_stat_account(mm, vma->vm_flags, pages);
 	if (vma->vm_flags & VM_LOCKED) {
@@ -994,7 +994,7 @@ static void vrm_stat_account(struct vma_remap_struct *vrm,
 static unsigned long prep_move_vma(struct vma_remap_struct *vrm)
 {
 	unsigned long err = 0;
-	struct vm_area_struct *vma = vrm->vma;
+	struct mm_area *vma = vrm->vma;
 	unsigned long old_addr = vrm->addr;
 	unsigned long old_len = vrm->old_len;
 	unsigned long dummy = vma->vm_flags;
@@ -1043,7 +1043,7 @@ static void unmap_source_vma(struct vma_remap_struct *vrm)
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = vrm->addr;
 	unsigned long len = vrm->old_len;
-	struct vm_area_struct *vma = vrm->vma;
+	struct mm_area *vma = vrm->vma;
 	VMA_ITERATOR(vmi, mm, addr);
 	int err;
 	unsigned long vm_start;
@@ -1119,13 +1119,13 @@ static void unmap_source_vma(struct vma_remap_struct *vrm)
 		unsigned long end = addr + len;
 
 		if (vm_start < addr) {
-			struct vm_area_struct *prev = vma_prev(&vmi);
+			struct mm_area *prev = vma_prev(&vmi);
 
 			vm_flags_set(prev, VM_ACCOUNT); /* Acquires VMA lock. */
 		}
 
 		if (vm_end > end) {
-			struct vm_area_struct *next = vma_next(&vmi);
+			struct mm_area *next = vma_next(&vmi);
 
 			vm_flags_set(next, VM_ACCOUNT); /* Acquires VMA lock. */
 		}
@@ -1141,14 +1141,14 @@ static void unmap_source_vma(struct vma_remap_struct *vrm)
  * error code.
  */
 static int copy_vma_and_data(struct vma_remap_struct *vrm,
-			     struct vm_area_struct **new_vma_ptr)
+			     struct mm_area **new_vma_ptr)
 {
 	unsigned long internal_offset = vrm->addr - vrm->vma->vm_start;
 	unsigned long internal_pgoff = internal_offset >> PAGE_SHIFT;
 	unsigned long new_pgoff = vrm->vma->vm_pgoff + internal_pgoff;
 	unsigned long moved_len;
-	struct vm_area_struct *vma = vrm->vma;
-	struct vm_area_struct *new_vma;
+	struct mm_area *vma = vrm->vma;
+	struct mm_area *new_vma;
 	int err = 0;
 	PAGETABLE_MOVE(pmc, NULL, NULL, vrm->addr, vrm->new_addr, vrm->old_len);
 
@@ -1206,7 +1206,7 @@ static int copy_vma_and_data(struct vma_remap_struct *vrm,
  * links from it (if the entire VMA was copied over).
  */
 static void dontunmap_complete(struct vma_remap_struct *vrm,
-			       struct vm_area_struct *new_vma)
+			       struct mm_area *new_vma)
 {
 	unsigned long start = vrm->addr;
 	unsigned long end = vrm->addr + vrm->old_len;
@@ -1232,7 +1232,7 @@ static void dontunmap_complete(struct vma_remap_struct *vrm,
 static unsigned long move_vma(struct vma_remap_struct *vrm)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *new_vma;
+	struct mm_area *new_vma;
 	unsigned long hiwater_vm;
 	int err;
 
@@ -1288,7 +1288,7 @@ static unsigned long move_vma(struct vma_remap_struct *vrm)
 static int resize_is_valid(struct vma_remap_struct *vrm)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = vrm->vma;
+	struct mm_area *vma = vrm->vma;
 	unsigned long addr = vrm->addr;
 	unsigned long old_len = vrm->old_len;
 	unsigned long new_len = vrm->new_len;
@@ -1444,7 +1444,7 @@ static unsigned long mremap_to(struct vma_remap_struct *vrm)
 	return move_vma(vrm);
 }
 
-static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
+static int vma_expandable(struct mm_area *vma, unsigned long delta)
 {
 	unsigned long end = vma->vm_end + delta;
 
@@ -1546,7 +1546,7 @@ static unsigned long check_mremap_params(struct vma_remap_struct *vrm)
 static unsigned long expand_vma_in_place(struct vma_remap_struct *vrm)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = vrm->vma;
+	struct mm_area *vma = vrm->vma;
 	VMA_ITERATOR(vmi, mm, vma->vm_end);
 
 	if (!vrm_charge(vrm))
@@ -1688,7 +1688,7 @@ static unsigned long mremap_at(struct vma_remap_struct *vrm)
 static unsigned long do_mremap(struct vma_remap_struct *vrm)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long ret;
 
 	ret = check_mremap_params(vrm);
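
The prep_move_vma()/move_vma() machinery above backs mremap(2); an
illustrative userspace counterpart, growing a mapping and letting the
kernel relocate it:

	#define _GNU_SOURCE	/* for MREMAP_MAYMOVE */
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		char *q;

		if (p == MAP_FAILED)
			return 1;
		/* Grow to four pages; the kernel may move the VMA. */
		q = mremap(p, page, 4 * page, MREMAP_MAYMOVE);
		if (q == MAP_FAILED)
			return 1;
		return munmap(q, 4 * page);
	}
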
diff --git a/mm/mseal.c b/mm/mseal.c
index c27197ac04e8..791ea7bc053a 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -16,7 +16,7 @@
 #include <linux/sched.h>
 #include "internal.h"
 
-static inline void set_vma_sealed(struct vm_area_struct *vma)
+static inline void set_vma_sealed(struct mm_area *vma)
 {
 	vm_flags_set(vma, VM_SEALED);
 }
@@ -37,7 +37,7 @@ static bool is_madv_discard(int behavior)
 	return false;
 }
 
-static bool is_ro_anon(struct vm_area_struct *vma)
+static bool is_ro_anon(struct mm_area *vma)
 {
 	/* check anonymous mapping. */
 	if (vma->vm_file || vma->vm_flags & VM_SHARED)
@@ -57,7 +57,7 @@ static bool is_ro_anon(struct vm_area_struct *vma)
 /*
  * Check if a vma is allowed to be modified by madvise.
  */
-bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
+bool can_modify_vma_madv(struct mm_area *vma, int behavior)
 {
 	if (!is_madv_discard(behavior))
 		return true;
@@ -69,8 +69,8 @@ bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
 	return true;
 }
 
-static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		struct vm_area_struct **prev, unsigned long start,
+static int mseal_fixup(struct vma_iterator *vmi, struct mm_area *vma,
+		struct mm_area **prev, unsigned long start,
 		unsigned long end, vm_flags_t newflags)
 {
 	int ret = 0;
@@ -100,7 +100,7 @@ static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
  */
 static int check_mm_seal(unsigned long start, unsigned long end)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long nstart = start;
 
 	VMA_ITERATOR(vmi, current->mm, start);
@@ -126,7 +126,7 @@ static int check_mm_seal(unsigned long start, unsigned long end)
 static int apply_mm_seal(unsigned long start, unsigned long end)
 {
 	unsigned long nstart;
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 
 	VMA_ITERATOR(vmi, current->mm, start);
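
check_mm_seal()/apply_mm_seal() are reached from mseal(2).  libc wrappers
may not exist yet, so an illustrative caller (assuming __NR_mseal is in
your installed headers) goes through syscall(2):

	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		void *p = mmap(NULL, page, PROT_READ,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		if (syscall(__NR_mseal, p, page, 0))	/* flags must be 0 */
			return 1;
		/* The mapping is sealed: this mprotect() must now fail. */
		return mprotect(p, page, PROT_READ | PROT_WRITE) ? 0 : 1;
	}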
 
diff --git a/mm/msync.c b/mm/msync.c
index ac4c9bfea2e7..c46feec8295a 100644
--- a/mm/msync.c
+++ b/mm/msync.c
@@ -33,7 +33,7 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len, int, flags)
 {
 	unsigned long end;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int unmapped_error = 0;
 	int error = -EINVAL;
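
SYSCALL_DEFINE3(msync) above only sees a type change; as a refresher, the
userspace contract it implements (illustrative), flushing one dirty page
of a shared file mapping:

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("data.bin", O_RDWR | O_CREAT, 0600);
		char *p;

		if (fd < 0 || ftruncate(fd, 4096))
			return 1;
		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		p[0] = 'x';
		/* Write the dirty page back to data.bin synchronously. */
		return msync(p, 4096, MS_SYNC);
	}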
 
diff --git a/mm/nommu.c b/mm/nommu.c
index 617e7ba8022f..af225d5af3bb 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -89,7 +89,7 @@ unsigned int kobjsize(const void *objp)
 	 * PAGE_SIZE for 0-order pages.
 	 */
 	if (!PageCompound(page)) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		vma = find_vma(current->mm, (unsigned long)objp);
 		if (vma)
@@ -144,7 +144,7 @@ static void *__vmalloc_user_flags(unsigned long size, gfp_t flags)
 
 	ret = __vmalloc(size, flags);
 	if (ret) {
-		struct vm_area_struct *vma;
+		struct mm_area *vma;
 
 		mmap_write_lock(current->mm);
 		vma = find_vma(current->mm, (unsigned long)ret);
@@ -325,28 +325,28 @@ void free_vm_area(struct vm_struct *area)
 }
 EXPORT_SYMBOL_GPL(free_vm_area);
 
-int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
+int vm_insert_page(struct mm_area *vma, unsigned long addr,
 		   struct page *page)
 {
 	return -EINVAL;
 }
 EXPORT_SYMBOL(vm_insert_page);
 
-int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
+int vm_insert_pages(struct mm_area *vma, unsigned long addr,
 			struct page **pages, unsigned long *num)
 {
 	return -EINVAL;
 }
 EXPORT_SYMBOL(vm_insert_pages);
 
-int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
+int vm_map_pages(struct mm_area *vma, struct page **pages,
 			unsigned long num)
 {
 	return -EINVAL;
 }
 EXPORT_SYMBOL(vm_map_pages);
 
-int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
+int vm_map_pages_zero(struct mm_area *vma, struct page **pages,
 				unsigned long num)
 {
 	return -EINVAL;
@@ -540,7 +540,7 @@ static void put_nommu_region(struct vm_region *region)
 	__put_nommu_region(region);
 }
 
-static void setup_vma_to_mm(struct vm_area_struct *vma, struct mm_struct *mm)
+static void setup_vma_to_mm(struct mm_area *vma, struct mm_struct *mm)
 {
 	vma->vm_mm = mm;
 
@@ -556,7 +556,7 @@ static void setup_vma_to_mm(struct vm_area_struct *vma, struct mm_struct *mm)
 	}
 }
 
-static void cleanup_vma_from_mm(struct vm_area_struct *vma)
+static void cleanup_vma_from_mm(struct mm_area *vma)
 {
 	vma->vm_mm->map_count--;
 	/* remove the VMA from the mapping */
@@ -575,7 +575,7 @@ static void cleanup_vma_from_mm(struct vm_area_struct *vma)
 /*
  * delete a VMA from its owning mm_struct and address space
  */
-static int delete_vma_from_mm(struct vm_area_struct *vma)
+static int delete_vma_from_mm(struct mm_area *vma)
 {
 	VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_start);
 
@@ -594,7 +594,7 @@ static int delete_vma_from_mm(struct vm_area_struct *vma)
 /*
  * destroy a VMA record
  */
-static void delete_vma(struct mm_struct *mm, struct vm_area_struct *vma)
+static void delete_vma(struct mm_struct *mm, struct mm_area *vma)
 {
 	vma_close(vma);
 	if (vma->vm_file)
@@ -603,7 +603,7 @@ static void delete_vma(struct mm_struct *mm, struct vm_area_struct *vma)
 	vm_area_free(vma);
 }
 
-struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
+struct mm_area *find_vma_intersection(struct mm_struct *mm,
 					     unsigned long start_addr,
 					     unsigned long end_addr)
 {
@@ -618,7 +618,7 @@ EXPORT_SYMBOL(find_vma_intersection);
  * look up the first VMA in which addr resides, NULL if none
  * - should be called with mm->mmap_lock at least held readlocked
  */
-struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
+struct mm_area *find_vma(struct mm_struct *mm, unsigned long addr)
 {
 	VMA_ITERATOR(vmi, mm, addr);
 
@@ -630,10 +630,10 @@ EXPORT_SYMBOL(find_vma);
  * At least xtensa ends up having protection faults even with no
  * MMU. No stack expansion, at least.
  */
-struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
+struct mm_area *lock_mm_and_find_vma(struct mm_struct *mm,
 			unsigned long addr, struct pt_regs *regs)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mmap_read_lock(mm);
 	vma = vma_lookup(mm, addr);
@@ -646,12 +646,12 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
  * expand a stack to a given address
  * - not supported under NOMMU conditions
  */
-int expand_stack_locked(struct vm_area_struct *vma, unsigned long addr)
+int expand_stack_locked(struct mm_area *vma, unsigned long addr)
 {
 	return -ENOMEM;
 }
 
-struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
+struct mm_area *expand_stack(struct mm_struct *mm, unsigned long addr)
 {
 	mmap_read_unlock(mm);
 	return NULL;
@@ -661,11 +661,11 @@ struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
  * look up the first VMA that exactly matches addr
  * - should be called with mm->mmap_lock at least held readlocked
  */
-static struct vm_area_struct *find_vma_exact(struct mm_struct *mm,
+static struct mm_area *find_vma_exact(struct mm_struct *mm,
 					     unsigned long addr,
 					     unsigned long len)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long end = addr + len;
 	VMA_ITERATOR(vmi, mm, addr);
 
@@ -887,7 +887,7 @@ static unsigned long determine_vm_flags(struct file *file,
  * set up a shared mapping on a file (the driver or filesystem provides and
  * pins the storage)
  */
-static int do_mmap_shared_file(struct vm_area_struct *vma)
+static int do_mmap_shared_file(struct mm_area *vma)
 {
 	int ret;
 
@@ -908,7 +908,7 @@ static int do_mmap_shared_file(struct vm_area_struct *vma)
 /*
  * set up a private mapping or an anonymous shared mapping
  */
-static int do_mmap_private(struct vm_area_struct *vma,
+static int do_mmap_private(struct mm_area *vma,
 			   struct vm_region *region,
 			   unsigned long len,
 			   unsigned long capabilities)
@@ -1016,7 +1016,7 @@ unsigned long do_mmap(struct file *file,
 			unsigned long *populate,
 			struct list_head *uf)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_region *region;
 	struct rb_node *rb;
 	unsigned long capabilities, result;
@@ -1300,10 +1300,10 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
  * split a vma into two pieces at address 'addr'; a new vma is allocated either
  * for the first part or the tail.
  */
-static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+static int split_vma(struct vma_iterator *vmi, struct mm_area *vma,
 		     unsigned long addr, int new_below)
 {
-	struct vm_area_struct *new;
+	struct mm_area *new;
 	struct vm_region *region;
 	unsigned long npages;
 	struct mm_struct *mm;
@@ -1379,7 +1379,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
  * the end
  */
 static int vmi_shrink_vma(struct vma_iterator *vmi,
-		      struct vm_area_struct *vma,
+		      struct mm_area *vma,
 		      unsigned long from, unsigned long to)
 {
 	struct vm_region *region;
@@ -1423,7 +1423,7 @@ static int vmi_shrink_vma(struct vma_iterator *vmi,
 int do_munmap(struct mm_struct *mm, unsigned long start, size_t len, struct list_head *uf)
 {
 	VMA_ITERATOR(vmi, mm, start);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long end;
 	int ret = 0;
 
@@ -1505,7 +1505,7 @@ SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
 void exit_mmap(struct mm_struct *mm)
 {
 	VMA_ITERATOR(vmi, mm, 0);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if (!mm)
 		return;
@@ -1540,7 +1540,7 @@ static unsigned long do_mremap(unsigned long addr,
 			unsigned long old_len, unsigned long new_len,
 			unsigned long flags, unsigned long new_addr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/* insanity checks first */
 	old_len = PAGE_ALIGN(old_len);
@@ -1584,7 +1584,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	return ret;
 }
 
-int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
+int remap_pfn_range(struct mm_area *vma, unsigned long addr,
 		unsigned long pfn, unsigned long size, pgprot_t prot)
 {
 	if (addr != (pfn << PAGE_SHIFT))
@@ -1595,7 +1595,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 }
 EXPORT_SYMBOL(remap_pfn_range);
 
-int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
+int vm_iomap_memory(struct mm_area *vma, phys_addr_t start, unsigned long len)
 {
 	unsigned long pfn = start >> PAGE_SHIFT;
 	unsigned long vm_len = vma->vm_end - vma->vm_start;
@@ -1605,7 +1605,7 @@ int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long
 }
 EXPORT_SYMBOL(vm_iomap_memory);
 
-int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+int remap_vmalloc_range(struct mm_area *vma, void *addr,
 			unsigned long pgoff)
 {
 	unsigned int size = vma->vm_end - vma->vm_start;
@@ -1638,7 +1638,7 @@ EXPORT_SYMBOL(filemap_map_pages);
 static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
 			      void *buf, int len, unsigned int gup_flags)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int write = gup_flags & FOLL_WRITE;
 
 	if (mmap_read_lock_killable(mm))
@@ -1717,7 +1717,7 @@ static int __copy_remote_vm_str(struct mm_struct *mm, unsigned long addr,
 				void *buf, int len)
 {
 	unsigned long addr_end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret = -EFAULT;
 
 	*(char *)buf = '\0';
@@ -1801,7 +1801,7 @@ EXPORT_SYMBOL_GPL(copy_remote_vm_str);
 int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
 				size_t newsize)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vm_region *region;
 	pgoff_t low, high;
 	size_t r_size, r_top;
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 25923cfec9c6..55bd5da45232 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -514,7 +514,7 @@ static DEFINE_SPINLOCK(oom_reaper_lock);
 
 static bool __oom_reap_task_mm(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	bool ret = true;
 	VMA_ITERATOR(vmi, mm, 0);
 
diff --git a/mm/page_idle.c b/mm/page_idle.c
index 408aaf29a3ea..655e4c716d0d 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -50,7 +50,7 @@ static struct folio *page_idle_get_folio(unsigned long pfn)
 }
 
 static bool page_idle_clear_pte_refs_one(struct folio *folio,
-					struct vm_area_struct *vma,
+					struct mm_area *vma,
 					unsigned long addr, void *arg)
 {
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e463c3be934a..13f7bd3e99c9 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -183,7 +183,7 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
  */
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 {
-	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_area *vma = pvmw->vma;
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long end;
 	spinlock_t *ptl;
@@ -342,7 +342,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
  * Only valid for normal file or anonymous VMAs.
  */
 unsigned long page_mapped_in_vma(const struct page *page,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	const struct folio *folio = page_folio(page);
 	struct page_vma_mapped_walk pvmw = {
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index e478777c86e1..2266b191ae3e 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -321,7 +321,7 @@ static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
 static int walk_hugetlb_range(unsigned long addr, unsigned long end,
 			      struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	struct hstate *h = hstate_vma(vma);
 	unsigned long next;
 	unsigned long hmask = huge_page_mask(h);
@@ -364,7 +364,7 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
 static int walk_page_test(unsigned long start, unsigned long end,
 			struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	const struct mm_walk_ops *ops = walk->ops;
 
 	if (ops->test_walk)
@@ -391,7 +391,7 @@ static int __walk_page_range(unsigned long start, unsigned long end,
 			struct mm_walk *walk)
 {
 	int err = 0;
-	struct vm_area_struct *vma = walk->vma;
+	struct mm_area *vma = walk->vma;
 	const struct mm_walk_ops *ops = walk->ops;
 	bool is_hugetlb = is_vm_hugetlb_page(vma);
 
@@ -426,7 +426,7 @@ static inline void process_mm_walk_lock(struct mm_struct *mm,
 		mmap_assert_write_locked(mm);
 }
 
-static inline void process_vma_walk_lock(struct vm_area_struct *vma,
+static inline void process_vma_walk_lock(struct mm_area *vma,
 					 enum page_walk_lock walk_lock)
 {
 #ifdef CONFIG_PER_VMA_LOCK
@@ -457,7 +457,7 @@ int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
 {
 	int err = 0;
 	unsigned long next;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_walk walk = {
 		.ops		= ops,
 		.mm		= mm,
@@ -648,7 +648,7 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
 	return walk_pgd_range(start, end, &walk);
 }
 
-int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
+int walk_page_range_vma(struct mm_area *vma, unsigned long start,
 			unsigned long end, const struct mm_walk_ops *ops,
 			void *private)
 {
@@ -671,7 +671,7 @@ int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
 	return __walk_page_range(start, end, &walk);
 }
 
-int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
+int walk_page_vma(struct mm_area *vma, const struct mm_walk_ops *ops,
 		void *private)
 {
 	struct mm_walk walk = {
@@ -714,7 +714,7 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
  *   struct mm_struct::mmap_lock is not needed.
  *
  *   Also this means that a caller can't rely on the struct
- *   vm_area_struct::vm_flags to be constant across a call,
+ *   mm_area::vm_flags to be constant across a call,
  *   except for immutable flags. Callers requiring this shouldn't use
  *   this function.
  *
@@ -729,7 +729,7 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
 		.ops		= ops,
 		.private	= private,
 	};
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	pgoff_t vba, vea, cba, cea;
 	unsigned long start_addr, end_addr;
 	int err = 0;
@@ -827,7 +827,7 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
  * Return: folio pointer on success, otherwise NULL.
  */
 struct folio *folio_walk_start(struct folio_walk *fw,
-		struct vm_area_struct *vma, unsigned long addr,
+		struct mm_area *vma, unsigned long addr,
 		folio_walk_flags_t flags)
 {
 	unsigned long entry_size;
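
All mm_walk users keep their shape; only the type visible through
walk->vma changes.  An illustrative walker under the new name (a sketch,
not part of this patch):

	static int show_pte(pte_t *pte, unsigned long addr,
			    unsigned long next, struct mm_walk *walk)
	{
		struct mm_area *vma = walk->vma;	/* was vm_area_struct */

		pr_info("pte at %lx in vma %lx-%lx\n", addr,
			vma->vm_start, vma->vm_end);
		return 0;
	}

	static const struct mm_walk_ops show_ops = {
		.pte_entry	= show_pte,
	};

	/* with mmap_lock held: walk_page_range(mm, start, end, &show_ops, NULL); */
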
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 5a882f2b10f9..b6e5dc860ec0 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -65,7 +65,7 @@ void pmd_clear_bad(pmd_t *pmd)
  * used to be done in the caller, but sparc needs minor faults to
  * force that call on sun4c so we changed this macro slightly
  */
-int ptep_set_access_flags(struct vm_area_struct *vma,
+int ptep_set_access_flags(struct mm_area *vma,
 			  unsigned long address, pte_t *ptep,
 			  pte_t entry, int dirty)
 {
@@ -79,7 +79,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-int ptep_clear_flush_young(struct vm_area_struct *vma,
+int ptep_clear_flush_young(struct mm_area *vma,
 			   unsigned long address, pte_t *ptep)
 {
 	int young;
@@ -91,7 +91,7 @@ int ptep_clear_flush_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
-pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
+pte_t ptep_clear_flush(struct mm_area *vma, unsigned long address,
 		       pte_t *ptep)
 {
 	struct mm_struct *mm = (vma)->vm_mm;
@@ -106,7 +106,7 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
-int pmdp_set_access_flags(struct vm_area_struct *vma,
+int pmdp_set_access_flags(struct mm_area *vma,
 			  unsigned long address, pmd_t *pmdp,
 			  pmd_t entry, int dirty)
 {
@@ -121,7 +121,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
-int pmdp_clear_flush_young(struct vm_area_struct *vma,
+int pmdp_clear_flush_young(struct mm_area *vma,
 			   unsigned long address, pmd_t *pmdp)
 {
 	int young;
@@ -134,7 +134,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
-pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_huge_clear_flush(struct mm_area *vma, unsigned long address,
 			    pmd_t *pmdp)
 {
 	pmd_t pmd;
@@ -147,7 +147,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
+pud_t pudp_huge_clear_flush(struct mm_area *vma, unsigned long address,
 			    pud_t *pudp)
 {
 	pud_t pud;
@@ -195,7 +195,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE
-pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
 		     pmd_t *pmdp)
 {
 	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
@@ -206,7 +206,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE_AD
-pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_invalidate_ad(struct mm_area *vma, unsigned long address,
 			 pmd_t *pmdp)
 {
 	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
@@ -215,7 +215,7 @@ pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
 #endif
 
 #ifndef pmdp_collapse_flush
-pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
+pmd_t pmdp_collapse_flush(struct mm_area *vma, unsigned long address,
 			  pmd_t *pmdp)
 {
 	/*
diff --git a/mm/rmap.c b/mm/rmap.c
index 67bb273dfb80..6c00e97fec67 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -147,7 +147,7 @@ static void anon_vma_chain_free(struct anon_vma_chain *anon_vma_chain)
 	kmem_cache_free(anon_vma_chain_cachep, anon_vma_chain);
 }
 
-static void anon_vma_chain_link(struct vm_area_struct *vma,
+static void anon_vma_chain_link(struct mm_area *vma,
 				struct anon_vma_chain *avc,
 				struct anon_vma *anon_vma)
 {
@@ -183,7 +183,7 @@ static void anon_vma_chain_link(struct vm_area_struct *vma,
  * to do any locking for the common case of already having
  * an anon_vma.
  */
-int __anon_vma_prepare(struct vm_area_struct *vma)
+int __anon_vma_prepare(struct mm_area *vma)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct anon_vma *anon_vma, *allocated;
@@ -277,7 +277,7 @@ static inline void unlock_anon_vma_root(struct anon_vma *root)
  * walker has a good chance of avoiding scanning the whole hierarchy when it
  * searches where page is mapped.
  */
-int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
+int anon_vma_clone(struct mm_area *dst, struct mm_area *src)
 {
 	struct anon_vma_chain *avc, *pavc;
 	struct anon_vma *root = NULL;
@@ -331,7 +331,7 @@ int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
  * the corresponding VMA in the parent process is attached to.
  * Returns 0 on success, non-zero on failure.
  */
-int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
+int anon_vma_fork(struct mm_area *vma, struct mm_area *pvma)
 {
 	struct anon_vma_chain *avc;
 	struct anon_vma *anon_vma;
@@ -393,7 +393,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
 	return -ENOMEM;
 }
 
-void unlink_anon_vmas(struct vm_area_struct *vma)
+void unlink_anon_vmas(struct mm_area *vma)
 {
 	struct anon_vma_chain *avc, *next;
 	struct anon_vma *root = NULL;
@@ -786,7 +786,7 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
  * Return: The virtual address corresponding to this page in the VMA.
  */
 unsigned long page_address_in_vma(const struct folio *folio,
-		const struct page *page, const struct vm_area_struct *vma)
+		const struct page *page, const struct mm_area *vma)
 {
 	if (folio_test_anon(folio)) {
 		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
@@ -847,7 +847,7 @@ struct folio_referenced_arg {
  * arg: folio_referenced_arg will be passed
  */
 static bool folio_referenced_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long address, void *arg)
+		struct mm_area *vma, unsigned long address, void *arg)
 {
 	struct folio_referenced_arg *pra = arg;
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
@@ -947,7 +947,7 @@ static bool folio_referenced_one(struct folio *folio,
 	return true;
 }
 
-static bool invalid_folio_referenced_vma(struct vm_area_struct *vma, void *arg)
+static bool invalid_folio_referenced_vma(struct mm_area *vma, void *arg)
 {
 	struct folio_referenced_arg *pra = arg;
 	struct mem_cgroup *memcg = pra->memcg;
@@ -1024,7 +1024,7 @@ int folio_referenced(struct folio *folio, int is_locked,
 static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
 	int cleaned = 0;
-	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_area *vma = pvmw->vma;
 	struct mmu_notifier_range range;
 	unsigned long address = pvmw->address;
 
@@ -1091,7 +1091,7 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 	return cleaned;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+static bool page_mkclean_one(struct folio *folio, struct mm_area *vma,
 			     unsigned long address, void *arg)
 {
 	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
@@ -1102,7 +1102,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	return true;
 }
 
-static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg)
+static bool invalid_mkclean_vma(struct mm_area *vma, void *arg)
 {
 	if (vma->vm_flags & VM_SHARED)
 		return false;
@@ -1143,7 +1143,7 @@ struct wrprotect_file_state {
 };
 
 static bool mapping_wrprotect_range_one(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long address, void *arg)
+		struct mm_area *vma, unsigned long address, void *arg)
 {
 	struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg;
 	struct page_vma_mapped_walk pvmw = {
@@ -1222,7 +1222,7 @@ EXPORT_SYMBOL_GPL(mapping_wrprotect_range);
  * Returns the number of cleaned PTEs (including PMDs).
  */
 int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
-		      struct vm_area_struct *vma)
+		      struct mm_area *vma)
 {
 	struct page_vma_mapped_walk pvmw = {
 		.pfn		= pfn,
@@ -1242,7 +1242,7 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 }
 
 static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *vma,
+		struct page *page, int nr_pages, struct mm_area *vma,
 		enum rmap_level level, int *nr_pmdmapped)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
@@ -1327,7 +1327,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
  * that folio can be moved into the anon_vma that belongs to just that
  * process, so the rmap code will not search the parent or sibling processes.
  */
-void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
+void folio_move_anon_rmap(struct folio *folio, struct mm_area *vma)
 {
 	void *anon_vma = vma->anon_vma;
 
@@ -1350,7 +1350,7 @@ void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
  * @address:	User virtual address of the mapping
  * @exclusive:	Whether the folio is exclusive to the process.
  */
-static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
+static void __folio_set_anon(struct folio *folio, struct mm_area *vma,
 			     unsigned long address, bool exclusive)
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
@@ -1383,7 +1383,7 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
  * @address:	the user virtual address mapped
  */
 static void __page_check_anon_rmap(const struct folio *folio,
-		const struct page *page, struct vm_area_struct *vma,
+		const struct page *page, struct mm_area *vma,
 		unsigned long address)
 {
 	/*
@@ -1426,7 +1426,7 @@ static void __folio_mod_stat(struct folio *folio, int nr, int nr_pmdmapped)
 }
 
 static __always_inline void __folio_add_anon_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *vma,
+		struct page *page, int nr_pages, struct mm_area *vma,
 		unsigned long address, rmap_t flags, enum rmap_level level)
 {
 	int i, nr, nr_pmdmapped = 0;
@@ -1505,7 +1505,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
  * (but KSM folios are never downgraded).
  */
 void folio_add_anon_rmap_ptes(struct folio *folio, struct page *page,
-		int nr_pages, struct vm_area_struct *vma, unsigned long address,
+		int nr_pages, struct mm_area *vma, unsigned long address,
 		rmap_t flags)
 {
 	__folio_add_anon_rmap(folio, page, nr_pages, vma, address, flags,
@@ -1526,7 +1526,7 @@ void folio_add_anon_rmap_ptes(struct folio *folio, struct page *page,
  * the anon_vma case: to serialize mapping,index checking after setting.
  */
 void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma, unsigned long address, rmap_t flags)
+		struct mm_area *vma, unsigned long address, rmap_t flags)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	__folio_add_anon_rmap(folio, page, HPAGE_PMD_NR, vma, address, flags,
@@ -1551,7 +1551,7 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
  *
  * If the folio is pmd-mappable, it is accounted as a THP.
  */
-void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
+void folio_add_new_anon_rmap(struct folio *folio, struct mm_area *vma,
 		unsigned long address, rmap_t flags)
 {
 	const bool exclusive = flags & RMAP_EXCLUSIVE;
@@ -1610,7 +1610,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 static __always_inline void __folio_add_file_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *vma,
+		struct page *page, int nr_pages, struct mm_area *vma,
 		enum rmap_level level)
 {
 	int nr, nr_pmdmapped = 0;
@@ -1637,7 +1637,7 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
  * The caller needs to hold the page table lock.
  */
 void folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
-		int nr_pages, struct vm_area_struct *vma)
+		int nr_pages, struct mm_area *vma)
 {
 	__folio_add_file_rmap(folio, page, nr_pages, vma, RMAP_LEVEL_PTE);
 }
@@ -1653,7 +1653,7 @@ void folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
  * The caller needs to hold the page table lock.
  */
 void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	__folio_add_file_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_LEVEL_PMD);
@@ -1673,7 +1673,7 @@ void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
  * The caller needs to hold the page table lock.
  */
 void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
@@ -1684,7 +1684,7 @@ void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
 }
 
 static __always_inline void __folio_remove_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *vma,
+		struct page *page, int nr_pages, struct mm_area *vma,
 		enum rmap_level level)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
@@ -1799,7 +1799,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
  * The caller needs to hold the page table lock.
  */
 void folio_remove_rmap_ptes(struct folio *folio, struct page *page,
-		int nr_pages, struct vm_area_struct *vma)
+		int nr_pages, struct mm_area *vma)
 {
 	__folio_remove_rmap(folio, page, nr_pages, vma, RMAP_LEVEL_PTE);
 }
@@ -1815,7 +1815,7 @@ void folio_remove_rmap_ptes(struct folio *folio, struct page *page,
  * The caller needs to hold the page table lock.
  */
 void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	__folio_remove_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_LEVEL_PMD);
@@ -1835,7 +1835,7 @@ void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
  * The caller needs to hold the page table lock.
  */
 void folio_remove_rmap_pud(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
 	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
@@ -1867,7 +1867,7 @@ static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
-static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
+static bool try_to_unmap_one(struct folio *folio, struct mm_area *vma,
 		     unsigned long address, void *arg)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -2227,7 +2227,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	return ret;
 }
 
-static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg)
+static bool invalid_migration_vma(struct mm_area *vma, void *arg)
 {
 	return vma_is_temporary_stack(vma);
 }
@@ -2269,7 +2269,7 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
  * If TTU_SPLIT_HUGE_PMD is specified any PMD mappings will be split into PTEs
  * containing migration entries.
  */
-static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
+static bool try_to_migrate_one(struct folio *folio, struct mm_area *vma,
 		     unsigned long address, void *arg)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -2657,7 +2657,7 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 {
 	struct mmu_notifier_range range;
 	struct folio *folio, *fw_folio;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct folio_walk fw;
 	struct page *page;
 	swp_entry_t entry;
@@ -2821,7 +2821,7 @@ static void rmap_walk_anon(struct folio *folio,
 	pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
 	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
 			pgoff_start, pgoff_end) {
-		struct vm_area_struct *vma = avc->vma;
+		struct mm_area *vma = avc->vma;
 		unsigned long address = vma_address(vma, pgoff_start,
 				folio_nr_pages(folio));
 
@@ -2866,7 +2866,7 @@ static void __rmap_walk_file(struct folio *folio, struct address_space *mapping,
 			     struct rmap_walk_control *rwc, bool locked)
 {
 	pgoff_t pgoff_end = pgoff_start + nr_pages - 1;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	VM_WARN_ON_FOLIO(folio && mapping != folio_mapping(folio), folio);
 	VM_WARN_ON_FOLIO(folio && pgoff_start != folio_pgoff(folio), folio);
@@ -2958,7 +2958,7 @@ void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
  * Unlike common anonymous pages, anonymous hugepages have no accounting code
  * and no lru code, because we handle hugepages differently from common pages.
  */
-void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
+void hugetlb_add_anon_rmap(struct folio *folio, struct mm_area *vma,
 		unsigned long address, rmap_t flags)
 {
 	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
@@ -2973,7 +2973,7 @@ void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 }
 
 void hugetlb_add_new_anon_rmap(struct folio *folio,
-		struct vm_area_struct *vma, unsigned long address)
+		struct mm_area *vma, unsigned long address)
 {
 	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
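
The same pattern holds for rmap walks: the rmap_one/invalid_vma callbacks
(see folio_referenced_one() and invalid_folio_referenced_vma() above) now
take struct mm_area.  A sketch of a custom walk counting mappings of a
locked folio:

	static bool count_one(struct folio *folio, struct mm_area *vma,
			      unsigned long address, void *arg)
	{
		(*(int *)arg)++;
		return true;	/* keep walking */
	}

	/* caller, folio locked:
	 *	int nr = 0;
	 *	struct rmap_walk_control rwc = {
	 *		.rmap_one	= count_one,
	 *		.arg		= &nr,
	 *	};
	 *	rmap_walk(folio, &rwc);
	 */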
 
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 1b0a214ee558..6fc28aeec966 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -120,7 +120,7 @@ static int secretmem_release(struct inode *inode, struct file *file)
 	return 0;
 }
 
-static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+static int secretmem_mmap(struct file *file, struct mm_area *vma)
 {
 	unsigned long len = vma->vm_end - vma->vm_start;
 
@@ -136,7 +136,7 @@ static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
 	return 0;
 }
 
-bool vma_is_secretmem(struct vm_area_struct *vma)
+bool vma_is_secretmem(struct mm_area *vma)
 {
 	return vma->vm_ops == &secretmem_vm_ops;
 }
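
secretmem_mmap() above is reached via memfd_secret(2).  An illustrative
userspace path (assumes __NR_memfd_secret is in your headers and a kernel
booted with secretmem enabled):

	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = syscall(__NR_memfd_secret, 0);
		void *p;

		if (fd < 0 || ftruncate(fd, 4096))
			return 1;
		/* This mmap() lands in secretmem_mmap() above. */
		p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		return p == MAP_FAILED;
	}
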
diff --git a/mm/shmem.c b/mm/shmem.c
index 99327c30507c..c7535853a324 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -160,7 +160,7 @@ static unsigned long shmem_default_max_inodes(void)
 
 static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
-			struct vm_area_struct *vma, vm_fault_t *fault_type);
+			struct mm_area *vma, vm_fault_t *fault_type);
 
 static inline struct shmem_sb_info *SHMEM_SB(struct super_block *sb)
 {
@@ -281,12 +281,12 @@ bool shmem_mapping(struct address_space *mapping)
 }
 EXPORT_SYMBOL_GPL(shmem_mapping);
 
-bool vma_is_anon_shmem(struct vm_area_struct *vma)
+bool vma_is_anon_shmem(struct mm_area *vma)
 {
 	return vma->vm_ops == &shmem_anon_vm_ops;
 }
 
-bool vma_is_shmem(struct vm_area_struct *vma)
+bool vma_is_shmem(struct mm_area *vma)
 {
 	return vma_is_anon_shmem(vma) || vma->vm_ops == &shmem_vm_ops;
 }
@@ -614,7 +614,7 @@ static unsigned int shmem_get_orders_within_size(struct inode *inode,
 
 static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 					      loff_t write_end, bool shmem_huge_force,
-					      struct vm_area_struct *vma,
+					      struct mm_area *vma,
 					      unsigned long vm_flags)
 {
 	unsigned int maybe_pmd_order = HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ?
@@ -861,7 +861,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 
 static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
 					      loff_t write_end, bool shmem_huge_force,
-					      struct vm_area_struct *vma,
+					      struct mm_area *vma,
 					      unsigned long vm_flags)
 {
 	return 0;
@@ -1003,7 +1003,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
  * This is safe to call without i_rwsem or the i_pages lock thanks to RCU,
  * as long as the inode doesn't go away and racy results are not a problem.
  */
-unsigned long shmem_swap_usage(struct vm_area_struct *vma)
+unsigned long shmem_swap_usage(struct mm_area *vma)
 {
 	struct inode *inode = file_inode(vma->vm_file);
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -1755,7 +1755,7 @@ bool shmem_hpage_pmd_enabled(void)
 }
 
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
-				struct vm_area_struct *vma, pgoff_t index,
+				struct mm_area *vma, pgoff_t index,
 				loff_t write_end, bool shmem_huge_force)
 {
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
@@ -1802,7 +1802,7 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
 					   struct address_space *mapping, pgoff_t index,
 					   unsigned long orders)
 {
-	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
+	struct mm_area *vma = vmf ? vmf->vma : NULL;
 	pgoff_t aligned_index;
 	unsigned long pages;
 	int order;
@@ -1959,7 +1959,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 }
 
 static struct folio *shmem_swap_alloc_folio(struct inode *inode,
-		struct vm_area_struct *vma, pgoff_t index,
+		struct mm_area *vma, pgoff_t index,
 		swp_entry_t entry, int order, gfp_t gfp)
 {
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -2036,7 +2036,7 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
 
 static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 				struct shmem_inode_info *info, pgoff_t index,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	struct folio *new, *old = *foliop;
 	swp_entry_t entry = old->swap;
@@ -2231,7 +2231,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
  */
 static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 			     struct folio **foliop, enum sgp_type sgp,
-			     gfp_t gfp, struct vm_area_struct *vma,
+			     gfp_t gfp, struct mm_area *vma,
 			     vm_fault_t *fault_type)
 {
 	struct address_space *mapping = inode->i_mapping;
@@ -2434,7 +2434,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		loff_t write_end, struct folio **foliop, enum sgp_type sgp,
 		gfp_t gfp, struct vm_fault *vmf, vm_fault_t *fault_type)
 {
-	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
+	struct mm_area *vma = vmf ? vmf->vma : NULL;
 	struct mm_struct *fault_mm;
 	struct folio *folio;
 	int error;
@@ -2853,13 +2853,13 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 }
 
 #ifdef CONFIG_NUMA
-static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
+static int shmem_set_policy(struct mm_area *vma, struct mempolicy *mpol)
 {
 	struct inode *inode = file_inode(vma->vm_file);
 	return mpol_set_shared_policy(&SHMEM_I(inode)->policy, vma, mpol);
 }
 
-static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
+static struct mempolicy *shmem_get_policy(struct mm_area *vma,
 					  unsigned long addr, pgoff_t *ilx)
 {
 	struct inode *inode = file_inode(vma->vm_file);
@@ -2924,7 +2924,7 @@ int shmem_lock(struct file *file, int lock, struct ucounts *ucounts)
 	return retval;
 }
 
-static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
+static int shmem_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 
@@ -3148,7 +3148,7 @@ static inline struct inode *shmem_get_inode(struct mnt_idmap *idmap,
 
 #ifdef CONFIG_USERFAULTFD
 int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
-			   struct vm_area_struct *dst_vma,
+			   struct mm_area *dst_vma,
 			   unsigned long dst_addr,
 			   unsigned long src_addr,
 			   uffd_flags_t flags,
@@ -5880,7 +5880,7 @@ EXPORT_SYMBOL_GPL(shmem_file_setup_with_mnt);
  * shmem_zero_setup - setup a shared anonymous mapping
  * @vma: the vma to be mmapped is prepared by do_mmap
  */
-int shmem_zero_setup(struct vm_area_struct *vma)
+int shmem_zero_setup(struct mm_area *vma)
 {
 	struct file *file;
 	loff_t size = vma->vm_end - vma->vm_start;
diff --git a/mm/swap.c b/mm/swap.c
index 77b2d5997873..e86133c365cc 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -514,7 +514,7 @@ EXPORT_SYMBOL(folio_add_lru);
  * If the VMA is mlocked, @folio is added to the unevictable list.
  * Otherwise, it is treated the same way as folio_add_lru().
  */
-void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
+void folio_add_lru_vma(struct folio *folio, struct mm_area *vma)
 {
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
diff --git a/mm/swap.h b/mm/swap.h
index 6f4a3f927edb..a2122e9848f5 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -61,12 +61,12 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr);
+		struct mm_area *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
 		pgoff_t index);
 
 struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_area_struct *vma, unsigned long addr,
+		struct mm_area *vma, unsigned long addr,
 		struct swap_iocb **plug);
 struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
 		struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated,
@@ -151,7 +151,7 @@ static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entr
 }
 
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr)
+		struct mm_area *vma, unsigned long addr)
 {
 	return NULL;
 }
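
[Reviewer aside, not part of the patch: the two declarations above pair up in
the usual lookup-then-read pattern; with CONFIG_SWAP=n the stub returns NULL,
so callers simply take the read path. A sketch, assuming the caller already
holds a reference on the swap device as the comment in swap_state.c requires:]

	static struct folio *swapin_sketch(swp_entry_t entry, struct mm_area *vma,
					   unsigned long addr)
	{
		struct folio *folio;

		/* Fast path: the folio may already sit in the swap cache. */
		folio = swap_cache_get_folio(entry, vma, addr);
		if (folio)
			return folio;

		/* Slow path: allocate a folio and start the read; a NULL
		 * plug means the bio is submitted immediately. */
		return read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
					     vma, addr, NULL);
	}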
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 68fd981b514f..60a1d4571fc8 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -284,7 +284,7 @@ static inline bool swap_use_vma_readahead(void)
  * Caller must lock the swap device or hold a reference to keep it valid.
  */
 struct folio *swap_cache_get_folio(swp_entry_t entry,
-		struct vm_area_struct *vma, unsigned long addr)
+		struct mm_area *vma, unsigned long addr)
 {
 	struct folio *folio;
 
@@ -481,7 +481,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  * swap cache folio lock.
  */
 struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
-		struct vm_area_struct *vma, unsigned long addr,
+		struct mm_area *vma, unsigned long addr,
 		struct swap_iocb **plug)
 {
 	struct swap_info_struct *si;
@@ -677,7 +677,7 @@ void exit_swap_address_space(unsigned int type)
 static int swap_vma_ra_win(struct vm_fault *vmf, unsigned long *start,
 			   unsigned long *end)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	unsigned long ra_val;
 	unsigned long faddr, prev_faddr, left, right;
 	unsigned int max_win, hits, prev_win, win;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 2eff8b51a945..fb46d0ea6aec 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1971,7 +1971,7 @@ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
  * just let do_wp_page work it out if a write is requested later - to
  * force COW, vm_page_prot omits write permission from any private vma.
  */
-static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
+static int unuse_pte(struct mm_area *vma, pmd_t *pmd,
 		unsigned long addr, swp_entry_t entry, struct folio *folio)
 {
 	struct page *page;
@@ -2072,7 +2072,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	return ret;
 }
 
-static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+static int unuse_pte_range(struct mm_area *vma, pmd_t *pmd,
 			unsigned long addr, unsigned long end,
 			unsigned int type)
 {
@@ -2145,7 +2145,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	return 0;
 }
 
-static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
+static inline int unuse_pmd_range(struct mm_area *vma, pud_t *pud,
 				unsigned long addr, unsigned long end,
 				unsigned int type)
 {
@@ -2164,7 +2164,7 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
 	return 0;
 }
 
-static inline int unuse_pud_range(struct vm_area_struct *vma, p4d_t *p4d,
+static inline int unuse_pud_range(struct mm_area *vma, p4d_t *p4d,
 				unsigned long addr, unsigned long end,
 				unsigned int type)
 {
@@ -2184,7 +2184,7 @@ static inline int unuse_pud_range(struct vm_area_struct *vma, p4d_t *p4d,
 	return 0;
 }
 
-static inline int unuse_p4d_range(struct vm_area_struct *vma, pgd_t *pgd,
+static inline int unuse_p4d_range(struct mm_area *vma, pgd_t *pgd,
 				unsigned long addr, unsigned long end,
 				unsigned int type)
 {
@@ -2204,7 +2204,7 @@ static inline int unuse_p4d_range(struct vm_area_struct *vma, pgd_t *pgd,
 	return 0;
 }
 
-static int unuse_vma(struct vm_area_struct *vma, unsigned int type)
+static int unuse_vma(struct mm_area *vma, unsigned int type)
 {
 	pgd_t *pgd;
 	unsigned long addr, end, next;
@@ -2227,7 +2227,7 @@ static int unuse_vma(struct vm_area_struct *vma, unsigned int type)
 
 static int unuse_mm(struct mm_struct *mm, unsigned int type)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int ret = 0;
 	VMA_ITERATOR(vmi, mm, 0);
 
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index fbf2cf62ab9f..ed1f47504327 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -21,7 +21,7 @@
 #include "swap.h"
 
 static __always_inline
-bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
+bool validate_dst_vma(struct mm_area *dst_vma, unsigned long dst_end)
 {
 	/* Make sure that the dst range is fully within dst_vma. */
 	if (dst_end > dst_vma->vm_end)
@@ -39,10 +39,10 @@ bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
 }
 
 static __always_inline
-struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
+struct mm_area *find_vma_and_prepare_anon(struct mm_struct *mm,
 						 unsigned long addr)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mmap_assert_locked(mm);
 	vma = vma_lookup(mm, addr);
@@ -66,10 +66,10 @@ struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
  * Return: A locked vma containing @address, -ENOENT if no vma is found, or
  * -ENOMEM if anon_vma couldn't be allocated.
  */
-static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
+static struct mm_area *uffd_lock_vma(struct mm_struct *mm,
 				       unsigned long address)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = lock_vma_under_rcu(mm, address);
 	if (vma) {
@@ -96,11 +96,11 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
 	return vma;
 }
 
-static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
+static struct mm_area *uffd_mfill_lock(struct mm_struct *dst_mm,
 					      unsigned long dst_start,
 					      unsigned long len)
 {
-	struct vm_area_struct *dst_vma;
+	struct mm_area *dst_vma;
 
 	dst_vma = uffd_lock_vma(dst_mm, dst_start);
 	if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len))
@@ -110,18 +110,18 @@ static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
 	return ERR_PTR(-ENOENT);
 }
 
-static void uffd_mfill_unlock(struct vm_area_struct *vma)
+static void uffd_mfill_unlock(struct mm_area *vma)
 {
 	vma_end_read(vma);
 }
 
 #else
 
-static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
+static struct mm_area *uffd_mfill_lock(struct mm_struct *dst_mm,
 					      unsigned long dst_start,
 					      unsigned long len)
 {
-	struct vm_area_struct *dst_vma;
+	struct mm_area *dst_vma;
 
 	mmap_read_lock(dst_mm);
 	dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start);
@@ -137,14 +137,14 @@ static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
 	return dst_vma;
 }
 
-static void uffd_mfill_unlock(struct vm_area_struct *vma)
+static void uffd_mfill_unlock(struct mm_area *vma)
 {
 	mmap_read_unlock(vma->vm_mm);
 }
 #endif
 
 /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
-static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
+static bool mfill_file_over_size(struct mm_area *dst_vma,
 				 unsigned long dst_addr)
 {
 	struct inode *inode;
@@ -166,7 +166,7 @@ static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
  * and anon, and for both shared and private VMAs.
  */
 int mfill_atomic_install_pte(pmd_t *dst_pmd,
-			     struct vm_area_struct *dst_vma,
+			     struct mm_area *dst_vma,
 			     unsigned long dst_addr, struct page *page,
 			     bool newly_allocated, uffd_flags_t flags)
 {
@@ -235,7 +235,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
 }
 
 static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
-				 struct vm_area_struct *dst_vma,
+				 struct mm_area *dst_vma,
 				 unsigned long dst_addr,
 				 unsigned long src_addr,
 				 uffd_flags_t flags,
@@ -311,7 +311,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 }
 
 static int mfill_atomic_pte_zeroed_folio(pmd_t *dst_pmd,
-					 struct vm_area_struct *dst_vma,
+					 struct mm_area *dst_vma,
 					 unsigned long dst_addr)
 {
 	struct folio *folio;
@@ -343,7 +343,7 @@ static int mfill_atomic_pte_zeroed_folio(pmd_t *dst_pmd,
 }
 
 static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
-				     struct vm_area_struct *dst_vma,
+				     struct mm_area *dst_vma,
 				     unsigned long dst_addr)
 {
 	pte_t _dst_pte, *dst_pte;
@@ -378,7 +378,7 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
 
 /* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
 static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
-				     struct vm_area_struct *dst_vma,
+				     struct mm_area *dst_vma,
 				     unsigned long dst_addr,
 				     uffd_flags_t flags)
 {
@@ -422,7 +422,7 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 
 /* Handles UFFDIO_POISON for all non-hugetlb VMAs. */
 static int mfill_atomic_pte_poison(pmd_t *dst_pmd,
-				   struct vm_area_struct *dst_vma,
+				   struct mm_area *dst_vma,
 				   unsigned long dst_addr,
 				   uffd_flags_t flags)
 {
@@ -487,7 +487,7 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  */
 static __always_inline ssize_t mfill_atomic_hugetlb(
 					      struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *dst_vma,
+					      struct mm_area *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
@@ -643,7 +643,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
 extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
-				    struct vm_area_struct *dst_vma,
+				    struct mm_area *dst_vma,
 				    unsigned long dst_start,
 				    unsigned long src_start,
 				    unsigned long len,
@@ -651,7 +651,7 @@ extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
-						struct vm_area_struct *dst_vma,
+						struct mm_area *dst_vma,
 						unsigned long dst_addr,
 						unsigned long src_addr,
 						uffd_flags_t flags,
@@ -701,7 +701,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    uffd_flags_t flags)
 {
 	struct mm_struct *dst_mm = ctx->mm;
-	struct vm_area_struct *dst_vma;
+	struct mm_area *dst_vma;
 	ssize_t err;
 	pmd_t *dst_pmd;
 	unsigned long src_addr, dst_addr;
@@ -897,7 +897,7 @@ ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
 			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
 }
 
-long uffd_wp_range(struct vm_area_struct *dst_vma,
+long uffd_wp_range(struct mm_area *dst_vma,
 		   unsigned long start, unsigned long len, bool enable_wp)
 {
 	unsigned int mm_cp_flags;
@@ -932,7 +932,7 @@ int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
 	struct mm_struct *dst_mm = ctx->mm;
 	unsigned long end = start + len;
 	unsigned long _start, _end;
-	struct vm_area_struct *dst_vma;
+	struct mm_area *dst_vma;
 	unsigned long page_mask;
 	long err;
 	VMA_ITERATOR(vmi, dst_mm, start);
@@ -1027,8 +1027,8 @@ static inline bool is_pte_pages_stable(pte_t *dst_pte, pte_t *src_pte,
 }
 
 static int move_present_pte(struct mm_struct *mm,
-			    struct vm_area_struct *dst_vma,
-			    struct vm_area_struct *src_vma,
+			    struct mm_area *dst_vma,
+			    struct mm_area *src_vma,
 			    unsigned long dst_addr, unsigned long src_addr,
 			    pte_t *dst_pte, pte_t *src_pte,
 			    pte_t orig_dst_pte, pte_t orig_src_pte,
@@ -1073,7 +1073,7 @@ static int move_present_pte(struct mm_struct *mm,
 	return err;
 }
 
-static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
+static int move_swap_pte(struct mm_struct *mm, struct mm_area *dst_vma,
 			 unsigned long dst_addr, unsigned long src_addr,
 			 pte_t *dst_pte, pte_t *src_pte,
 			 pte_t orig_dst_pte, pte_t orig_src_pte,
@@ -1107,8 +1107,8 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
 }
 
 static int move_zeropage_pte(struct mm_struct *mm,
-			     struct vm_area_struct *dst_vma,
-			     struct vm_area_struct *src_vma,
+			     struct mm_area *dst_vma,
+			     struct mm_area *src_vma,
 			     unsigned long dst_addr, unsigned long src_addr,
 			     pte_t *dst_pte, pte_t *src_pte,
 			     pte_t orig_dst_pte, pte_t orig_src_pte,
@@ -1140,8 +1140,8 @@ static int move_zeropage_pte(struct mm_struct *mm,
  * in moving the page.
  */
 static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
-			  struct vm_area_struct *dst_vma,
-			  struct vm_area_struct *src_vma,
+			  struct mm_area *dst_vma,
+			  struct mm_area *src_vma,
 			  unsigned long dst_addr, unsigned long src_addr,
 			  __u64 mode)
 {
@@ -1445,15 +1445,15 @@ static inline bool move_splits_huge_pmd(unsigned long dst_addr,
 }
 #endif
 
-static inline bool vma_move_compatible(struct vm_area_struct *vma)
+static inline bool vma_move_compatible(struct mm_area *vma)
 {
 	return !(vma->vm_flags & (VM_PFNMAP | VM_IO |  VM_HUGETLB |
 				  VM_MIXEDMAP | VM_SHADOW_STACK));
 }
 
 static int validate_move_areas(struct userfaultfd_ctx *ctx,
-			       struct vm_area_struct *src_vma,
-			       struct vm_area_struct *dst_vma)
+			       struct mm_area *src_vma,
+			       struct mm_area *dst_vma)
 {
 	/* Only allow moving if both have the same access and protection */
 	if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
@@ -1491,10 +1491,10 @@ static __always_inline
 int find_vmas_mm_locked(struct mm_struct *mm,
 			unsigned long dst_start,
 			unsigned long src_start,
-			struct vm_area_struct **dst_vmap,
-			struct vm_area_struct **src_vmap)
+			struct mm_area **dst_vmap,
+			struct mm_area **src_vmap)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mmap_assert_locked(mm);
 	vma = find_vma_and_prepare_anon(mm, dst_start);
@@ -1518,10 +1518,10 @@ int find_vmas_mm_locked(struct mm_struct *mm,
 static int uffd_move_lock(struct mm_struct *mm,
 			  unsigned long dst_start,
 			  unsigned long src_start,
-			  struct vm_area_struct **dst_vmap,
-			  struct vm_area_struct **src_vmap)
+			  struct mm_area **dst_vmap,
+			  struct mm_area **src_vmap)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int err;
 
 	vma = uffd_lock_vma(mm, dst_start);
@@ -1581,8 +1581,8 @@ static int uffd_move_lock(struct mm_struct *mm,
 	return err;
 }
 
-static void uffd_move_unlock(struct vm_area_struct *dst_vma,
-			     struct vm_area_struct *src_vma)
+static void uffd_move_unlock(struct mm_area *dst_vma,
+			     struct mm_area *src_vma)
 {
 	vma_end_read(src_vma);
 	if (src_vma != dst_vma)
@@ -1594,8 +1594,8 @@ static void uffd_move_unlock(struct vm_area_struct *dst_vma,
 static int uffd_move_lock(struct mm_struct *mm,
 			  unsigned long dst_start,
 			  unsigned long src_start,
-			  struct vm_area_struct **dst_vmap,
-			  struct vm_area_struct **src_vmap)
+			  struct mm_area **dst_vmap,
+			  struct mm_area **src_vmap)
 {
 	int err;
 
@@ -1606,8 +1606,8 @@ static int uffd_move_lock(struct mm_struct *mm,
 	return err;
 }
 
-static void uffd_move_unlock(struct vm_area_struct *dst_vma,
-			     struct vm_area_struct *src_vma)
+static void uffd_move_unlock(struct mm_area *dst_vma,
+			     struct mm_area *src_vma)
 {
 	mmap_assert_locked(src_vma->vm_mm);
 	mmap_read_unlock(dst_vma->vm_mm);
@@ -1694,7 +1694,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 		   unsigned long src_start, unsigned long len, __u64 mode)
 {
 	struct mm_struct *mm = ctx->mm;
-	struct vm_area_struct *src_vma, *dst_vma;
+	struct mm_area *src_vma, *dst_vma;
 	unsigned long src_addr, dst_addr;
 	pmd_t *src_pmd, *dst_pmd;
 	long err = -EINVAL;
@@ -1865,7 +1865,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 	return moved ? moved : err;
 }
 
-static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
+static void userfaultfd_set_vm_flags(struct mm_area *vma,
 				     vm_flags_t flags)
 {
 	const bool uffd_wp_changed = (vma->vm_flags ^ flags) & VM_UFFD_WP;
@@ -1880,7 +1880,7 @@ static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
 		vma_set_page_prot(vma);
 }
 
-static void userfaultfd_set_ctx(struct vm_area_struct *vma,
+static void userfaultfd_set_ctx(struct mm_area *vma,
 				struct userfaultfd_ctx *ctx,
 				unsigned long flags)
 {
@@ -1890,18 +1890,18 @@ static void userfaultfd_set_ctx(struct vm_area_struct *vma,
 				 (vma->vm_flags & ~__VM_UFFD_FLAGS) | flags);
 }
 
-void userfaultfd_reset_ctx(struct vm_area_struct *vma)
+void userfaultfd_reset_ctx(struct mm_area *vma)
 {
 	userfaultfd_set_ctx(vma, NULL, 0);
 }
 
-struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
-					     struct vm_area_struct *prev,
-					     struct vm_area_struct *vma,
+struct mm_area *userfaultfd_clear_vma(struct vma_iterator *vmi,
+					     struct mm_area *prev,
+					     struct mm_area *vma,
 					     unsigned long start,
 					     unsigned long end)
 {
-	struct vm_area_struct *ret;
+	struct mm_area *ret;
 
 	/* Reset ptes for the whole vma range if wr-protected */
 	if (userfaultfd_wp(vma))
@@ -1924,13 +1924,13 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
 
 /* Assumes mmap write lock taken, and mm_struct pinned. */
 int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
-			       struct vm_area_struct *vma,
+			       struct mm_area *vma,
 			       unsigned long vm_flags,
 			       unsigned long start, unsigned long end,
 			       bool wp_async)
 {
 	VMA_ITERATOR(vmi, ctx->mm, start);
-	struct vm_area_struct *prev = vma_prev(&vmi);
+	struct mm_area *prev = vma_prev(&vmi);
 	unsigned long vma_end;
 	unsigned long new_flags;
 
@@ -1985,7 +1985,7 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
 void userfaultfd_release_new(struct userfaultfd_ctx *ctx)
 {
 	struct mm_struct *mm = ctx->mm;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	/* the various vma->vm_userfaultfd_ctx still points to it */
@@ -2000,7 +2000,7 @@ void userfaultfd_release_new(struct userfaultfd_ctx *ctx)
 void userfaultfd_release_all(struct mm_struct *mm,
 			     struct userfaultfd_ctx *ctx)
 {
-	struct vm_area_struct *vma, *prev;
+	struct mm_area *vma, *prev;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	if (!mmget_not_zero(mm))
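
[Reviewer aside, not part of the patch: uffd_mfill_lock()/uffd_mfill_unlock()
above deliberately hide whether a per-VMA read lock (CONFIG_PER_VMA_LOCK) or
the whole mmap_lock is taken; callers only see the bracket:]

	static ssize_t mfill_bracket_sketch(struct mm_struct *dst_mm,
					    unsigned long dst_start,
					    unsigned long len)
	{
		struct mm_area *dst_vma;

		dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
		if (IS_ERR(dst_vma))
			return PTR_ERR(dst_vma);	/* -ENOENT or -ENOMEM */

		/* ... install PTEs over [dst_start, dst_start + len) ... */

		uffd_mfill_unlock(dst_vma);
		return len;
	}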
diff --git a/mm/util.c b/mm/util.c
index 448117da071f..e0ed4f7d00d4 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -314,7 +314,7 @@ void *memdup_user_nul(const void __user *src, size_t len)
 EXPORT_SYMBOL(memdup_user_nul);
 
 /* Check if the vma is being used as a stack by this task */
-int vma_is_stack_for_current(struct vm_area_struct *vma)
+int vma_is_stack_for_current(struct mm_area *vma)
 {
 	struct task_struct * __maybe_unused t = current;
 
@@ -324,7 +324,7 @@ int vma_is_stack_for_current(struct vm_area_struct *vma)
 /*
  * Change backing file, only valid to use during initial VMA setup.
  */
-void vma_set_file(struct vm_area_struct *vma, struct file *file)
+void vma_set_file(struct mm_area *vma, struct file *file)
 {
 	/* Changing an anonymous vma with this is illegal */
 	get_file(file);
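
[Reviewer aside, not part of the patch: a sketch of the one legal use of
vma_set_file() -- a driver swapping in its backing file from ->mmap() while
the VMA is still being set up; mydrv_backing_file() is a hypothetical helper:]

	static int mydrv_mmap_sketch(struct file *file, struct mm_area *vma)
	{
		/* vma_set_file() takes its own reference on the new file
		 * and drops the reference on the old vma->vm_file. */
		vma_set_file(vma, mydrv_backing_file(file));
		return 0;
	}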
diff --git a/mm/vma.c b/mm/vma.c
index 5cdc5612bfc1..06e6e9c02ab8 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -21,8 +21,8 @@ struct mmap_state {
 	unsigned long charged;
 	bool retry_merge;
 
-	struct vm_area_struct *prev;
-	struct vm_area_struct *next;
+	struct mm_area *prev;
+	struct mm_area *next;
 
 	/* Unmapping state. */
 	struct vma_munmap_struct vms;
@@ -59,7 +59,7 @@ struct mmap_state {
 
 static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_next)
 {
-	struct vm_area_struct *vma = merge_next ? vmg->next : vmg->prev;
+	struct mm_area *vma = merge_next ? vmg->next : vmg->prev;
 
 	if (!mpol_equal(vmg->policy, vma_policy(vma)))
 		return false;
@@ -83,7 +83,7 @@ static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_nex
 }
 
 static inline bool is_mergeable_anon_vma(struct anon_vma *anon_vma1,
-		 struct anon_vma *anon_vma2, struct vm_area_struct *vma)
+		 struct anon_vma *anon_vma2, struct mm_area *vma)
 {
 	/*
 	 * The list_is_singular() test is to avoid merging VMA cloned from
@@ -96,8 +96,8 @@ static inline bool is_mergeable_anon_vma(struct anon_vma *anon_vma1,
 }
 
 /* Are the anon_vma's belonging to each VMA compatible with one another? */
-static inline bool are_anon_vmas_compatible(struct vm_area_struct *vma1,
-					    struct vm_area_struct *vma2)
+static inline bool are_anon_vmas_compatible(struct mm_area *vma1,
+					    struct mm_area *vma2)
 {
 	return is_mergeable_anon_vma(vma1->anon_vma, vma2->anon_vma, NULL);
 }
@@ -110,11 +110,11 @@ static inline bool are_anon_vmas_compatible(struct vm_area_struct *vma1,
  *       removal.
  */
 static void init_multi_vma_prep(struct vma_prepare *vp,
-				struct vm_area_struct *vma,
+				struct mm_area *vma,
 				struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *adjust;
-	struct vm_area_struct **remove = &vp->remove;
+	struct mm_area *adjust;
+	struct mm_area **remove = &vp->remove;
 
 	memset(vp, 0, sizeof(struct vma_prepare));
 	vp->vma = vma;
@@ -191,7 +191,7 @@ static bool can_vma_merge_after(struct vma_merge_struct *vmg)
 	return false;
 }
 
-static void __vma_link_file(struct vm_area_struct *vma,
+static void __vma_link_file(struct mm_area *vma,
 			    struct address_space *mapping)
 {
 	if (vma_is_shared_maywrite(vma))
@@ -205,7 +205,7 @@ static void __vma_link_file(struct vm_area_struct *vma,
 /*
  * Requires inode->i_mapping->i_mmap_rwsem
  */
-static void __remove_shared_vm_struct(struct vm_area_struct *vma,
+static void __remove_shared_vm_struct(struct mm_area *vma,
 				      struct address_space *mapping)
 {
 	if (vma_is_shared_maywrite(vma))
@@ -231,7 +231,7 @@ static void __remove_shared_vm_struct(struct vm_area_struct *vma,
  * the root anon_vma's mutex.
  */
 static void
-anon_vma_interval_tree_pre_update_vma(struct vm_area_struct *vma)
+anon_vma_interval_tree_pre_update_vma(struct mm_area *vma)
 {
 	struct anon_vma_chain *avc;
 
@@ -240,7 +240,7 @@ anon_vma_interval_tree_pre_update_vma(struct vm_area_struct *vma)
 }
 
 static void
-anon_vma_interval_tree_post_update_vma(struct vm_area_struct *vma)
+anon_vma_interval_tree_post_update_vma(struct mm_area *vma)
 {
 	struct anon_vma_chain *avc;
 
@@ -374,7 +374,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
  * @vp: The vma_prepare struct
  * @vma: The vma that will be altered once locked
  */
-static void init_vma_prep(struct vma_prepare *vp, struct vm_area_struct *vma)
+static void init_vma_prep(struct vma_prepare *vp, struct mm_area *vma)
 {
 	init_multi_vma_prep(vp, vma, NULL);
 }
@@ -420,7 +420,7 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg,
 /*
  * Close a vm structure and free it.
  */
-void remove_vma(struct vm_area_struct *vma)
+void remove_vma(struct mm_area *vma)
 {
 	might_sleep();
 	vma_close(vma);
@@ -435,8 +435,8 @@ void remove_vma(struct vm_area_struct *vma)
  *
  * Called with the mm semaphore held.
  */
-void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
-		struct vm_area_struct *prev, struct vm_area_struct *next)
+void unmap_region(struct ma_state *mas, struct mm_area *vma,
+		struct mm_area *prev, struct mm_area *next)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct mmu_gather tlb;
@@ -458,11 +458,11 @@ void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
  * VMA Iterator will point to the original VMA.
  */
 static __must_check int
-__split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+__split_vma(struct vma_iterator *vmi, struct mm_area *vma,
 	    unsigned long addr, int new_below)
 {
 	struct vma_prepare vp;
-	struct vm_area_struct *new;
+	struct mm_area *new;
 	int err;
 
 	WARN_ON(vma->vm_start >= addr);
@@ -544,7 +544,7 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
  * Split a vma into two pieces at address 'addr', a new vma is allocated
  * either for the first part or the tail.
  */
-static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+static int split_vma(struct vma_iterator *vmi, struct mm_area *vma,
 		     unsigned long addr, int new_below)
 {
 	if (vma->vm_mm->map_count >= sysctl_max_map_count)
@@ -561,8 +561,8 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
  *
  * Returns: 0 on success.
  */
-static int dup_anon_vma(struct vm_area_struct *dst,
-			struct vm_area_struct *src, struct vm_area_struct **dup)
+static int dup_anon_vma(struct mm_area *dst,
+			struct mm_area *src, struct mm_area **dup)
 {
 	/*
 	 * Easily overlooked: when mprotect shifts the boundary, make sure the
@@ -589,7 +589,7 @@ void validate_mm(struct mm_struct *mm)
 {
 	int bug = 0;
 	int i = 0;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, 0);
 
 	mt_validate(&mm->mm_mt);
@@ -647,7 +647,7 @@ void validate_mm(struct mm_struct *mm)
  */
 static void vmg_adjust_set_range(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *adjust;
+	struct mm_area *adjust;
 	pgoff_t pgoff;
 
 	if (vmg->__adjust_middle_start) {
@@ -670,7 +670,7 @@ static void vmg_adjust_set_range(struct vma_merge_struct *vmg)
  */
 static int commit_merge(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct vma_prepare vp;
 
 	if (vmg->__adjust_next_start) {
@@ -705,7 +705,7 @@ static int commit_merge(struct vma_merge_struct *vmg)
 }
 
 /* We can only remove VMAs when merging if they do not have a close hook. */
-static bool can_merge_remove_vma(struct vm_area_struct *vma)
+static bool can_merge_remove_vma(struct mm_area *vma)
 {
 	return !vma->vm_ops || !vma->vm_ops->close;
 }
@@ -739,13 +739,13 @@ static bool can_merge_remove_vma(struct vm_area_struct *vma)
  * - The caller must hold a WRITE lock on the mm_struct->mmap_lock.
  * - vmi must be positioned within [@vmg->middle->vm_start, @vmg->middle->vm_end).
  */
-static __must_check struct vm_area_struct *vma_merge_existing_range(
+static __must_check struct mm_area *vma_merge_existing_range(
 		struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *middle = vmg->middle;
-	struct vm_area_struct *prev = vmg->prev;
-	struct vm_area_struct *next;
-	struct vm_area_struct *anon_dup = NULL;
+	struct mm_area *middle = vmg->middle;
+	struct mm_area *prev = vmg->prev;
+	struct mm_area *next;
+	struct mm_area *anon_dup = NULL;
 	unsigned long start = vmg->start;
 	unsigned long end = vmg->end;
 	bool left_side = middle && start == middle->vm_start;
@@ -974,10 +974,10 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
  * - The caller must have specified the next vma in @vmg->next.
  * - The caller must have positioned the vmi at or before the gap.
  */
-struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
+struct mm_area *vma_merge_new_range(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *prev = vmg->prev;
-	struct vm_area_struct *next = vmg->next;
+	struct mm_area *prev = vmg->prev;
+	struct mm_area *next = vmg->next;
 	unsigned long end = vmg->end;
 	bool can_merge_left, can_merge_right;
 
@@ -1053,10 +1053,10 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
  */
 int vma_expand(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *anon_dup = NULL;
+	struct mm_area *anon_dup = NULL;
 	bool remove_next = false;
-	struct vm_area_struct *middle = vmg->middle;
-	struct vm_area_struct *next = vmg->next;
+	struct mm_area *middle = vmg->middle;
+	struct mm_area *next = vmg->next;
 
 	mmap_assert_write_locked(vmg->mm);
 
@@ -1105,7 +1105,7 @@ int vma_expand(struct vma_merge_struct *vmg)
  *
  * Returns: 0 on success, -ENOMEM otherwise
  */
-int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
+int vma_shrink(struct vma_iterator *vmi, struct mm_area *vma,
 	       unsigned long start, unsigned long end, pgoff_t pgoff)
 {
 	struct vma_prepare vp;
@@ -1162,7 +1162,7 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
 static void vms_clean_up_area(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if (!vms->nr_pages)
 		return;
@@ -1185,7 +1185,7 @@ static void vms_clean_up_area(struct vma_munmap_struct *vms,
 static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct mm_struct *mm;
 
 	mm = current->mm;
@@ -1231,7 +1231,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
  */
 static void reattach_vmas(struct ma_state *mas_detach)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	mas_set(mas_detach, 0);
 	mas_for_each(mas_detach, vma, ULONG_MAX)
@@ -1253,7 +1253,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
 static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 		struct ma_state *mas_detach)
 {
-	struct vm_area_struct *next = NULL;
+	struct mm_area *next = NULL;
 	int error;
 
 	/*
@@ -1356,7 +1356,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
 	/* Make sure no VMAs are about to be lost. */
 	{
 		MA_STATE(test, mas_detach->tree, 0, 0);
-		struct vm_area_struct *vma_mas, *vma_test;
+		struct mm_area *vma_mas, *vma_test;
 		int test_count = 0;
 
 		vma_iter_set(vms->vmi, vms->start);
@@ -1392,14 +1392,14 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
  * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
  * @vms: The vma munmap struct
  * @vmi: The vma iterator
- * @vma: The first vm_area_struct to munmap
+ * @vma: The first mm_area to munmap
  * @start: The aligned start address to munmap
  * @end: The aligned end address to munmap
  * @uf: The userfaultfd list_head
  * @unlock: Unlock after the operation.  Only unlocked on success
  */
 static void init_vma_munmap(struct vma_munmap_struct *vms,
-		struct vma_iterator *vmi, struct vm_area_struct *vma,
+		struct vma_iterator *vmi, struct mm_area *vma,
 		unsigned long start, unsigned long end, struct list_head *uf,
 		bool unlock)
 {
@@ -1424,7 +1424,7 @@ static void init_vma_munmap(struct vma_munmap_struct *vms,
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
- * @vma: The starting vm_area_struct
+ * @vma: The starting mm_area
  * @mm: The mm_struct
  * @start: The aligned start address to munmap.
  * @end: The aligned end address to munmap.
@@ -1435,7 +1435,7 @@ static void init_vma_munmap(struct vma_munmap_struct *vms,
  * Return: 0 on success and drops the lock if so directed, error and leaves the
  * lock held otherwise.
  */
-int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+int do_vmi_align_munmap(struct vma_iterator *vmi, struct mm_area *vma,
 		struct mm_struct *mm, unsigned long start, unsigned long end,
 		struct list_head *uf, bool unlock)
 {
@@ -1487,7 +1487,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  bool unlock)
 {
 	unsigned long end;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
 		return -EINVAL;
@@ -1520,12 +1520,12 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
  * The function returns either the merged VMA, the original VMA if a split was
  * required instead, or an error if the split failed.
  */
-static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
+static struct mm_area *vma_modify(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *vma = vmg->middle;
+	struct mm_area *vma = vmg->middle;
 	unsigned long start = vmg->start;
 	unsigned long end = vmg->end;
-	struct vm_area_struct *merged;
+	struct mm_area *merged;
 
 	/* First, try to merge. */
 	merged = vma_merge_existing_range(vmg);
@@ -1553,9 +1553,9 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
 	return vma;
 }
 
-struct vm_area_struct *vma_modify_flags(
-	struct vma_iterator *vmi, struct vm_area_struct *prev,
-	struct vm_area_struct *vma, unsigned long start, unsigned long end,
+struct mm_area *vma_modify_flags(
+	struct vma_iterator *vmi, struct mm_area *prev,
+	struct mm_area *vma, unsigned long start, unsigned long end,
 	unsigned long new_flags)
 {
 	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
@@ -1565,10 +1565,10 @@ struct vm_area_struct *vma_modify_flags(
 	return vma_modify(&vmg);
 }
 
-struct vm_area_struct
+struct mm_area
 *vma_modify_flags_name(struct vma_iterator *vmi,
-		       struct vm_area_struct *prev,
-		       struct vm_area_struct *vma,
+		       struct mm_area *prev,
+		       struct mm_area *vma,
 		       unsigned long start,
 		       unsigned long end,
 		       unsigned long new_flags,
@@ -1582,10 +1582,10 @@ struct vm_area_struct
 	return vma_modify(&vmg);
 }
 
-struct vm_area_struct
+struct mm_area
 *vma_modify_policy(struct vma_iterator *vmi,
-		   struct vm_area_struct *prev,
-		   struct vm_area_struct *vma,
+		   struct mm_area *prev,
+		   struct mm_area *vma,
 		   unsigned long start, unsigned long end,
 		   struct mempolicy *new_pol)
 {
@@ -1596,10 +1596,10 @@ struct vm_area_struct
 	return vma_modify(&vmg);
 }
 
-struct vm_area_struct
+struct mm_area
 *vma_modify_flags_uffd(struct vma_iterator *vmi,
-		       struct vm_area_struct *prev,
-		       struct vm_area_struct *vma,
+		       struct mm_area *prev,
+		       struct mm_area *vma,
 		       unsigned long start, unsigned long end,
 		       unsigned long new_flags,
 		       struct vm_userfaultfd_ctx new_ctx)
@@ -1616,8 +1616,8 @@ struct vm_area_struct
  * Expand vma by delta bytes, potentially merging with an immediately adjacent
  * VMA with identical properties.
  */
-struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
-					struct vm_area_struct *vma,
+struct mm_area *vma_merge_extend(struct vma_iterator *vmi,
+					struct mm_area *vma,
 					unsigned long delta)
 {
 	VMG_VMA_STATE(vmg, vmi, vma, vma, vma->vm_end, vma->vm_end + delta);
@@ -1650,7 +1650,7 @@ static void unlink_file_vma_batch_process(struct unlink_vma_file_batch *vb)
 }
 
 void unlink_file_vma_batch_add(struct unlink_vma_file_batch *vb,
-			       struct vm_area_struct *vma)
+			       struct mm_area *vma)
 {
 	if (vma->vm_file == NULL)
 		return;
@@ -1673,7 +1673,7 @@ void unlink_file_vma_batch_final(struct unlink_vma_file_batch *vb)
  * Unlink a file-based vm structure from its interval tree, to hide
  * vma from rmap and vmtruncate before freeing its page tables.
  */
-void unlink_file_vma(struct vm_area_struct *vma)
+void unlink_file_vma(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 
@@ -1686,7 +1686,7 @@ void unlink_file_vma(struct vm_area_struct *vma)
 	}
 }
 
-void vma_link_file(struct vm_area_struct *vma)
+void vma_link_file(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct address_space *mapping;
@@ -1699,7 +1699,7 @@ void vma_link_file(struct vm_area_struct *vma)
 	}
 }
 
-int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
+int vma_link(struct mm_struct *mm, struct mm_area *vma)
 {
 	VMA_ITERATOR(vmi, mm, 0);
 
@@ -1719,14 +1719,14 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
  * Copy the vma structure to a new location in the same mm,
  * prior to moving page table entries, to effect an mremap move.
  */
-struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
+struct mm_area *copy_vma(struct mm_area **vmap,
 	unsigned long addr, unsigned long len, pgoff_t pgoff,
 	bool *need_rmap_locks)
 {
-	struct vm_area_struct *vma = *vmap;
+	struct mm_area *vma = *vmap;
 	unsigned long vma_start = vma->vm_start;
 	struct mm_struct *mm = vma->vm_mm;
-	struct vm_area_struct *new_vma;
+	struct mm_area *new_vma;
 	bool faulted_in_anon_vma = true;
 	VMA_ITERATOR(vmi, mm, addr);
 	VMG_VMA_STATE(vmg, &vmi, NULL, vma, addr, addr + len);
@@ -1818,7 +1818,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
  * driver is doing some kind of reference counting. But that doesn't
  * really matter for the anon_vma sharing case.
  */
-static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *b)
+static int anon_vma_compatible(struct mm_area *a, struct mm_area *b)
 {
 	return a->vm_end == b->vm_start &&
 		mpol_equal(vma_policy(a), vma_policy(b)) &&
@@ -1849,9 +1849,9 @@ static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *
  * and with the same memory policies). That's all stable, even with just
  * a read lock on the mmap_lock.
  */
-static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old,
-					  struct vm_area_struct *a,
-					  struct vm_area_struct *b)
+static struct anon_vma *reusable_anon_vma(struct mm_area *old,
+					  struct mm_area *a,
+					  struct mm_area *b)
 {
 	if (anon_vma_compatible(a, b)) {
 		struct anon_vma *anon_vma = READ_ONCE(old->anon_vma);
@@ -1870,10 +1870,10 @@ static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old,
  * anon_vmas being allocated, preventing vma merge in subsequent
  * mprotect.
  */
-struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
+struct anon_vma *find_mergeable_anon_vma(struct mm_area *vma)
 {
 	struct anon_vma *anon_vma = NULL;
-	struct vm_area_struct *prev, *next;
+	struct mm_area *prev, *next;
 	VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_end);
 
 	/* Try next first. */
@@ -1909,13 +1909,13 @@ static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
 	return vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite);
 }
 
-static bool vma_is_shared_writable(struct vm_area_struct *vma)
+static bool vma_is_shared_writable(struct mm_area *vma)
 {
 	return (vma->vm_flags & (VM_WRITE | VM_SHARED)) ==
 		(VM_WRITE | VM_SHARED);
 }
 
-static bool vma_fs_can_writeback(struct vm_area_struct *vma)
+static bool vma_fs_can_writeback(struct mm_area *vma)
 {
 	/* No managed pages to writeback. */
 	if (vma->vm_flags & VM_PFNMAP)
@@ -1929,7 +1929,7 @@ static bool vma_fs_can_writeback(struct vm_area_struct *vma)
  * Does this VMA require the underlying folios to have their dirty state
  * tracked?
  */
-bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
+bool vma_needs_dirty_tracking(struct mm_area *vma)
 {
 	/* Only shared, writable VMAs require dirty tracking. */
 	if (!vma_is_shared_writable(vma))
@@ -1952,7 +1952,7 @@ bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
  * to the private version (using protection_map[] without the
  * VM_SHARED bit).
  */
-bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
+bool vma_wants_writenotify(struct mm_area *vma, pgprot_t vm_page_prot)
 {
 	/* If it was private or non-writable, the write bit is already clear */
 	if (!vma_is_shared_writable(vma))
@@ -2066,7 +2066,7 @@ static void vm_lock_mapping(struct mm_struct *mm, struct address_space *mapping)
  */
 int mm_take_all_locks(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct anon_vma_chain *avc;
 	VMA_ITERATOR(vmi, mm, 0);
 
@@ -2162,7 +2162,7 @@ static void vm_unlock_mapping(struct address_space *mapping)
  */
 void mm_drop_all_locks(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct anon_vma_chain *avc;
 	VMA_ITERATOR(vmi, mm, 0);
 
@@ -2301,7 +2301,7 @@ static int __mmap_prepare(struct mmap_state *map, struct list_head *uf)
 
 
 static int __mmap_new_file_vma(struct mmap_state *map,
-			       struct vm_area_struct *vma)
+			       struct mm_area *vma)
 {
 	struct vma_iterator *vmi = map->vmi;
 	int error;
@@ -2345,11 +2345,11 @@ static int __mmap_new_file_vma(struct mmap_state *map,
  *
  * Returns: Zero on success, or an error.
  */
-static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
+static int __mmap_new_vma(struct mmap_state *map, struct mm_area **vmap)
 {
 	struct vma_iterator *vmi = map->vmi;
 	int error = 0;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * Determine the object being mapped and call the appropriate
@@ -2415,7 +2415,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
  * @map: Mapping state.
  * @vma: Merged or newly allocated VMA for the mmap()'d region.
  */
-static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
+static void __mmap_complete(struct mmap_state *map, struct mm_area *vma)
 {
 	struct mm_struct *mm = map->mm;
 	unsigned long vm_flags = vma->vm_flags;
@@ -2455,7 +2455,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 		struct list_head *uf)
 {
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = NULL;
+	struct mm_area *vma = NULL;
 	int error;
 	VMA_ITERATOR(vmi, mm, addr);
 	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
@@ -2480,7 +2480,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 
 	/* If flags changed, we might be able to merge, so try again. */
 	if (map.retry_merge) {
-		struct vm_area_struct *merged;
+		struct mm_area *merged;
 		VMG_MMAP_STATE(vmg, &map, vma);
 
 		vma_iter_config(map.vmi, map.addr, map.end);
@@ -2573,7 +2573,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
  * do not match then create a new anonymous VMA.  Eventually we may be able to
  * do some brk-specific accounting here.
  */
-int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
+int do_brk_flags(struct vma_iterator *vmi, struct mm_area *vma,
 		 unsigned long addr, unsigned long len, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
@@ -2657,7 +2657,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 {
 	unsigned long length, gap;
 	unsigned long low_limit, high_limit;
-	struct vm_area_struct *tmp;
+	struct mm_area *tmp;
 	VMA_ITERATOR(vmi, current->mm, 0);
 
 	/* Adjust search length to account for worst case alignment overhead */
@@ -2714,7 +2714,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 {
 	unsigned long length, gap, gap_end;
 	unsigned long low_limit, high_limit;
-	struct vm_area_struct *tmp;
+	struct mm_area *tmp;
 	VMA_ITERATOR(vmi, current->mm, 0);
 
 	/* Adjust search length to account for worst case alignment overhead */
@@ -2757,7 +2757,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
  * update accounting. This is shared with both the
  * grow-up and grow-down cases.
  */
-static int acct_stack_growth(struct vm_area_struct *vma,
+static int acct_stack_growth(struct mm_area *vma,
 			     unsigned long size, unsigned long grow)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -2796,10 +2796,10 @@ static int acct_stack_growth(struct vm_area_struct *vma,
  * PA-RISC uses this for its stack.
  * vma is the last one with address > vma->vm_end.  Have to extend vma.
  */
-int expand_upwards(struct vm_area_struct *vma, unsigned long address)
+int expand_upwards(struct mm_area *vma, unsigned long address)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	struct vm_area_struct *next;
+	struct mm_area *next;
 	unsigned long gap_addr;
 	int error = 0;
 	VMA_ITERATOR(vmi, mm, vma->vm_start);
@@ -2882,10 +2882,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
  * vma is the first one with address < vma->vm_start.  Have to extend vma.
  * mmap_lock held for writing.
  */
-int expand_downwards(struct vm_area_struct *vma, unsigned long address)
+int expand_downwards(struct mm_area *vma, unsigned long address)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	struct vm_area_struct *prev;
+	struct mm_area *prev;
 	int error = 0;
 	VMA_ITERATOR(vmi, mm, vma->vm_start);
 
diff --git a/mm/vma.h b/mm/vma.h
index 7356ca5a22d3..b488a473fa97 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -11,19 +11,19 @@
  * VMA lock generalization
  */
 struct vma_prepare {
-	struct vm_area_struct *vma;
-	struct vm_area_struct *adj_next;
+	struct mm_area *vma;
+	struct mm_area *adj_next;
 	struct file *file;
 	struct address_space *mapping;
 	struct anon_vma *anon_vma;
-	struct vm_area_struct *insert;
-	struct vm_area_struct *remove;
-	struct vm_area_struct *remove2;
+	struct mm_area *insert;
+	struct mm_area *remove;
+	struct mm_area *remove2;
 };
 
 struct unlink_vma_file_batch {
 	int count;
-	struct vm_area_struct *vmas[8];
+	struct mm_area *vmas[8];
 };
 
 /*
@@ -31,9 +31,9 @@ struct unlink_vma_file_batch {
  */
 struct vma_munmap_struct {
 	struct vma_iterator *vmi;
-	struct vm_area_struct *vma;     /* The first vma to munmap */
-	struct vm_area_struct *prev;    /* vma before the munmap area */
-	struct vm_area_struct *next;    /* vma after the munmap area */
+	struct mm_area *vma;     /* The first vma to munmap */
+	struct mm_area *prev;    /* vma before the munmap area */
+	struct mm_area *next;    /* vma after the munmap area */
 	struct list_head *uf;           /* Userfaultfd list_head */
 	unsigned long start;            /* Aligned start addr (inclusive) */
 	unsigned long end;              /* Aligned end addr (exclusive) */
@@ -79,11 +79,11 @@ struct vma_merge_struct {
 	 *
 	 * next may be assigned by the caller.
 	 */
-	struct vm_area_struct *prev;
-	struct vm_area_struct *middle;
-	struct vm_area_struct *next;
+	struct mm_area *prev;
+	struct mm_area *middle;
+	struct mm_area *next;
 	/* This is the VMA we ultimately target to become the merged VMA. */
-	struct vm_area_struct *target;
+	struct mm_area *target;
 	/*
 	 * Initially, the start, end, pgoff fields are provided by the caller
 	 * and describe the proposed new VMA range, whether modifying an
@@ -145,7 +145,7 @@ static inline bool vmg_nomem(struct vma_merge_struct *vmg)
 }
 
 /* Assumes addr >= vma->vm_start. */
-static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
+static inline pgoff_t vma_pgoff_offset(struct mm_area *vma,
 				       unsigned long addr)
 {
 	return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start);
@@ -189,11 +189,11 @@ void validate_mm(struct mm_struct *mm);
 
 __must_check int vma_expand(struct vma_merge_struct *vmg);
 __must_check int vma_shrink(struct vma_iterator *vmi,
-		struct vm_area_struct *vma,
+		struct mm_area *vma,
 		unsigned long start, unsigned long end, pgoff_t pgoff);
 
 static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
-			struct vm_area_struct *vma, gfp_t gfp)
+			struct mm_area *vma, gfp_t gfp)
 
 {
 	if (vmi->mas.status != ma_start &&
@@ -210,7 +210,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
 }
 
 int
-do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
+do_vmi_align_munmap(struct vma_iterator *vmi, struct mm_area *vma,
 		    struct mm_struct *mm, unsigned long start,
 		    unsigned long end, struct list_head *uf, bool unlock);
 
@@ -218,51 +218,51 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 		  unsigned long start, size_t len, struct list_head *uf,
 		  bool unlock);
 
-void remove_vma(struct vm_area_struct *vma);
+void remove_vma(struct mm_area *vma);
 
-void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
-		struct vm_area_struct *prev, struct vm_area_struct *next);
+void unmap_region(struct ma_state *mas, struct mm_area *vma,
+		struct mm_area *prev, struct mm_area *next);
 
 /* We are about to modify the VMA's flags. */
-__must_check struct vm_area_struct
+__must_check struct mm_area
 *vma_modify_flags(struct vma_iterator *vmi,
-		struct vm_area_struct *prev, struct vm_area_struct *vma,
+		struct mm_area *prev, struct mm_area *vma,
 		unsigned long start, unsigned long end,
 		unsigned long new_flags);
 
 /* We are about to modify the VMA's flags and/or anon_name. */
-__must_check struct vm_area_struct
+__must_check struct mm_area
 *vma_modify_flags_name(struct vma_iterator *vmi,
-		       struct vm_area_struct *prev,
-		       struct vm_area_struct *vma,
+		       struct mm_area *prev,
+		       struct mm_area *vma,
 		       unsigned long start,
 		       unsigned long end,
 		       unsigned long new_flags,
 		       struct anon_vma_name *new_name);
 
 /* We are about to modify the VMA's memory policy. */
-__must_check struct vm_area_struct
+__must_check struct mm_area
 *vma_modify_policy(struct vma_iterator *vmi,
-		   struct vm_area_struct *prev,
-		   struct vm_area_struct *vma,
+		   struct mm_area *prev,
+		   struct mm_area *vma,
 		   unsigned long start, unsigned long end,
 		   struct mempolicy *new_pol);
 
 /* We are about to modify the VMA's flags and/or uffd context. */
-__must_check struct vm_area_struct
+__must_check struct mm_area
 *vma_modify_flags_uffd(struct vma_iterator *vmi,
-		       struct vm_area_struct *prev,
-		       struct vm_area_struct *vma,
+		       struct mm_area *prev,
+		       struct mm_area *vma,
 		       unsigned long start, unsigned long end,
 		       unsigned long new_flags,
 		       struct vm_userfaultfd_ctx new_ctx);
 
-__must_check struct vm_area_struct
+__must_check struct mm_area
 *vma_merge_new_range(struct vma_merge_struct *vmg);
 
-__must_check struct vm_area_struct
+__must_check struct mm_area
 *vma_merge_extend(struct vma_iterator *vmi,
-		  struct vm_area_struct *vma,
+		  struct mm_area *vma,
 		  unsigned long delta);
 
 void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb);
@@ -270,22 +270,22 @@ void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb);
 void unlink_file_vma_batch_final(struct unlink_vma_file_batch *vb);
 
 void unlink_file_vma_batch_add(struct unlink_vma_file_batch *vb,
-			       struct vm_area_struct *vma);
+			       struct mm_area *vma);
 
-void unlink_file_vma(struct vm_area_struct *vma);
+void unlink_file_vma(struct mm_area *vma);
 
-void vma_link_file(struct vm_area_struct *vma);
+void vma_link_file(struct mm_area *vma);
 
-int vma_link(struct mm_struct *mm, struct vm_area_struct *vma);
+int vma_link(struct mm_struct *mm, struct mm_area *vma);
 
-struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
+struct mm_area *copy_vma(struct mm_area **vmap,
 	unsigned long addr, unsigned long len, pgoff_t pgoff,
 	bool *need_rmap_locks);
 
-struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma);
+struct anon_vma *find_mergeable_anon_vma(struct mm_area *vma);
 
-bool vma_needs_dirty_tracking(struct vm_area_struct *vma);
-bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
+bool vma_needs_dirty_tracking(struct mm_area *vma);
+bool vma_wants_writenotify(struct mm_area *vma, pgprot_t vm_page_prot);
 
 int mm_take_all_locks(struct mm_struct *mm);
 void mm_drop_all_locks(struct mm_struct *mm);
@@ -294,13 +294,13 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
 		struct list_head *uf);
 
-int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *brkvma,
+int do_brk_flags(struct vma_iterator *vmi, struct mm_area *brkvma,
 		 unsigned long addr, unsigned long request, unsigned long flags);
 
 unsigned long unmapped_area(struct vm_unmapped_area_info *info);
 unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
 
-static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
+static inline bool vma_wants_manual_pte_write_upgrade(struct mm_area *vma)
 {
 	/*
 	 * We want to check manually if we can change individual PTEs writable
@@ -320,7 +320,7 @@ static inline pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags
 }
 #endif
 
-static inline struct vm_area_struct *vma_prev_limit(struct vma_iterator *vmi,
+static inline struct mm_area *vma_prev_limit(struct vma_iterator *vmi,
 						    unsigned long min)
 {
 	return mas_prev(&vmi->mas, min);
@@ -370,13 +370,13 @@ static inline void vma_iter_reset(struct vma_iterator *vmi)
 }
 
 static inline
-struct vm_area_struct *vma_iter_prev_range_limit(struct vma_iterator *vmi, unsigned long min)
+struct mm_area *vma_iter_prev_range_limit(struct vma_iterator *vmi, unsigned long min)
 {
 	return mas_prev_range(&vmi->mas, min);
 }
 
 static inline
-struct vm_area_struct *vma_iter_next_range_limit(struct vma_iterator *vmi, unsigned long max)
+struct mm_area *vma_iter_next_range_limit(struct vma_iterator *vmi, unsigned long max)
 {
 	return mas_next_range(&vmi->mas, max);
 }
@@ -397,7 +397,7 @@ static inline int vma_iter_area_highest(struct vma_iterator *vmi, unsigned long
  * VMA Iterator functions shared between nommu and mmap
  */
 static inline int vma_iter_prealloc(struct vma_iterator *vmi,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	return mas_preallocate(&vmi->mas, vma, GFP_KERNEL);
 }
@@ -407,14 +407,14 @@ static inline void vma_iter_clear(struct vma_iterator *vmi)
 	mas_store_prealloc(&vmi->mas, NULL);
 }
 
-static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
+static inline struct mm_area *vma_iter_load(struct vma_iterator *vmi)
 {
 	return mas_walk(&vmi->mas);
 }
 
 /* Store a VMA with preallocated memory */
 static inline void vma_iter_store_overwrite(struct vma_iterator *vmi,
-					    struct vm_area_struct *vma)
+					    struct mm_area *vma)
 {
 	vma_assert_attached(vma);
 
@@ -442,7 +442,7 @@ static inline void vma_iter_store_overwrite(struct vma_iterator *vmi,
 }
 
 static inline void vma_iter_store_new(struct vma_iterator *vmi,
-				      struct vm_area_struct *vma)
+				      struct mm_area *vma)
 {
 	vma_mark_attached(vma);
 	vma_iter_store_overwrite(vmi, vma);
@@ -465,7 +465,7 @@ static inline int vma_iter_bulk_alloc(struct vma_iterator *vmi,
 }
 
 static inline
-struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
+struct mm_area *vma_iter_prev_range(struct vma_iterator *vmi)
 {
 	return mas_prev_range(&vmi->mas, 0);
 }
@@ -475,11 +475,11 @@ struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
  * if no previous VMA, to index 0.
  */
 static inline
-struct vm_area_struct *vma_iter_next_rewind(struct vma_iterator *vmi,
-		struct vm_area_struct **pprev)
+struct mm_area *vma_iter_next_rewind(struct vma_iterator *vmi,
+		struct mm_area **pprev)
 {
-	struct vm_area_struct *next = vma_next(vmi);
-	struct vm_area_struct *prev = vma_prev(vmi);
+	struct mm_area *next = vma_next(vmi);
+	struct mm_area *prev = vma_prev(vmi);
 
 	/*
 	 * Consider the case where no previous VMA exists. We advance to the
@@ -500,7 +500,7 @@ struct vm_area_struct *vma_iter_next_rewind(struct vma_iterator *vmi,
 
 #ifdef CONFIG_64BIT
 
-static inline bool vma_is_sealed(struct vm_area_struct *vma)
+static inline bool vma_is_sealed(struct mm_area *vma)
 {
 	return (vma->vm_flags & VM_SEALED);
 }
@@ -509,7 +509,7 @@ static inline bool vma_is_sealed(struct vm_area_struct *vma)
  * check if a vma is sealed for modification.
  * return true, if modification is allowed.
  */
-static inline bool can_modify_vma(struct vm_area_struct *vma)
+static inline bool can_modify_vma(struct mm_area *vma)
 {
 	if (unlikely(vma_is_sealed(vma)))
 		return false;
@@ -517,16 +517,16 @@ static inline bool can_modify_vma(struct vm_area_struct *vma)
 	return true;
 }
 
-bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior);
+bool can_modify_vma_madv(struct mm_area *vma, int behavior);
 
 #else
 
-static inline bool can_modify_vma(struct vm_area_struct *vma)
+static inline bool can_modify_vma(struct mm_area *vma)
 {
 	return true;
 }
 
-static inline bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
+static inline bool can_modify_vma_madv(struct mm_area *vma, int behavior)
 {
 	return true;
 }
@@ -534,10 +534,10 @@ static inline bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
 #endif
 
 #if defined(CONFIG_STACK_GROWSUP)
-int expand_upwards(struct vm_area_struct *vma, unsigned long address);
+int expand_upwards(struct mm_area *vma, unsigned long address);
 #endif
 
-int expand_downwards(struct vm_area_struct *vma, unsigned long address);
+int expand_downwards(struct mm_area *vma, unsigned long address);
 
 int __vm_munmap(unsigned long start, size_t len, bool unlock);
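
[Reviewer aside, not part of the patch: the inline vma_iter_*() helpers above
are thin veneers over the maple tree state embedded in the iterator; a
prev/next probe is one mas_*() call each way:]

	static struct mm_area *vma_neighbour_sketch(struct vma_iterator *vmi,
						    bool forward)
	{
		return forward ? mas_next_range(&vmi->mas, ULONG_MAX)
			       : mas_prev_range(&vmi->mas, 0);
	}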
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3ed720a787ec..c3ad2c82c0f9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4450,7 +4450,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
  *
  * Similar to remap_pfn_range() (see mm/memory.c)
  */
-int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+int remap_vmalloc_range_partial(struct mm_area *vma, unsigned long uaddr,
 				void *kaddr, unsigned long pgoff,
 				unsigned long size)
 {
@@ -4510,7 +4510,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
  *
  * Similar to remap_pfn_range() (see mm/memory.c)
  */
-int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+int remap_vmalloc_range(struct mm_area *vma, void *addr,
 						unsigned long pgoff)
 {
 	return remap_vmalloc_range_partial(vma, vma->vm_start,
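
[Reviewer aside, not part of the patch: the canonical consumer of
remap_vmalloc_range() is a driver ->mmap() exposing a vmalloc()ed buffer;
stashing the buffer in private_data is an assumption of the sketch:]

	static int vbuf_mmap_sketch(struct file *file, struct mm_area *vma)
	{
		void *buf = file->private_data;	/* vmalloc()ed at open time */

		/* Maps the pages backing "buf", starting at vm_pgoff, into
		 * the user range; fails if the range overruns the buffer. */
		return remap_vmalloc_range(vma, buf, vma->vm_pgoff);
	}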
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b620d74b0f66..9e629fea2e9a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3322,7 +3322,7 @@ static void reset_batch_size(struct lru_gen_mm_walk *walk)
 static int should_skip_vma(unsigned long start, unsigned long end, struct mm_walk *args)
 {
 	struct address_space *mapping;
-	struct vm_area_struct *vma = args->vma;
+	struct mm_area *vma = args->vma;
 	struct lru_gen_mm_walk *walk = args->private;
 
 	if (!vma_is_accessible(vma))
@@ -3391,7 +3391,7 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
 	return false;
 }
 
-static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr,
+static unsigned long get_pte_pfn(pte_t pte, struct mm_area *vma, unsigned long addr,
 				 struct pglist_data *pgdat)
 {
 	unsigned long pfn = pte_pfn(pte);
@@ -3416,7 +3416,7 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
 	return pfn;
 }
 
-static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr,
+static unsigned long get_pmd_pfn(pmd_t pmd, struct mm_area *vma, unsigned long addr,
 				 struct pglist_data *pgdat)
 {
 	unsigned long pfn = pmd_pfn(pmd);
@@ -3569,7 +3569,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
 	return suitable_to_scan(total, young);
 }
 
-static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
+static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct mm_area *vma,
 				  struct mm_walk *args, unsigned long *bitmap, unsigned long *first)
 {
 	int i;
@@ -3664,7 +3664,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
 	pmd_t *pmd;
 	unsigned long next;
 	unsigned long addr;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	DECLARE_BITMAP(bitmap, MIN_LRU_BATCH);
 	unsigned long first = -1;
 	struct lru_gen_mm_walk *walk = args->private;
@@ -4193,7 +4193,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 	int young = 1;
 	pte_t *pte = pvmw->pte;
 	unsigned long addr = pvmw->address;
-	struct vm_area_struct *vma = pvmw->vma;
+	struct mm_area *vma = pvmw->vma;
 	struct folio *folio = pfn_folio(pvmw->pfn);
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct pglist_data *pgdat = folio_pgdat(folio);
diff --git a/net/core/sock.c b/net/core/sock.c
index 323892066def..7d9b9ea0014d 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3467,7 +3467,7 @@ int sock_no_recvmsg(struct socket *sock, struct msghdr *m, size_t len,
 }
 EXPORT_SYMBOL(sock_no_recvmsg);
 
-int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct *vma)
+int sock_no_mmap(struct file *file, struct socket *sock, struct mm_area *vma)
 {
 	/* Mirror missing mmap method error code */
 	return -ENODEV;
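
Every proto_ops ->mmap implementation picks up the new signature; a
minimal wiring sketch (illustrative, most fields elided, "my_proto_ops"
hypothetical):

	static const struct proto_ops my_proto_ops = {
		.family	= PF_UNIX,
		.owner	= THIS_MODULE,
		/* ... */
		.mmap	= sock_no_mmap,	/* now takes struct mm_area * */
	};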
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ea8de00f669d..f51b18d0fac2 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1801,7 +1801,7 @@ static const struct vm_operations_struct tcp_vm_ops = {
 };
 
 int tcp_mmap(struct file *file, struct socket *sock,
-	     struct vm_area_struct *vma)
+	     struct mm_area *vma)
 {
 	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
 		return -EPERM;
@@ -1997,7 +1997,7 @@ static int tcp_zc_handle_leftover(struct tcp_zerocopy_receive *zc,
 	return zc->copybuf_len < 0 ? 0 : copylen;
 }
 
-static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
+static int tcp_zerocopy_vm_insert_batch_error(struct mm_area *vma,
 					      struct page **pending_pages,
 					      unsigned long pages_remaining,
 					      unsigned long *address,
@@ -2045,7 +2045,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
 	return err;
 }
 
-static int tcp_zerocopy_vm_insert_batch(struct vm_area_struct *vma,
+static int tcp_zerocopy_vm_insert_batch(struct mm_area *vma,
 					struct page **pages,
 					unsigned int pages_to_map,
 					unsigned long *address,
@@ -2104,11 +2104,11 @@ static void tcp_zc_finalize_rx_tstamp(struct sock *sk,
 	}
 }
 
-static struct vm_area_struct *find_tcp_vma(struct mm_struct *mm,
+static struct mm_area *find_tcp_vma(struct mm_struct *mm,
 					   unsigned long address,
 					   bool *mmap_locked)
 {
-	struct vm_area_struct *vma = lock_vma_under_rcu(mm, address);
+	struct mm_area *vma = lock_vma_under_rcu(mm, address);
 
 	if (vma) {
 		if (vma->vm_ops != &tcp_vm_ops) {
@@ -2141,7 +2141,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
 	struct tcp_sock *tp = tcp_sk(sk);
 	const skb_frag_t *frags = NULL;
 	unsigned int pages_to_map = 0;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	struct sk_buff *skb = NULL;
 	u32 seq = tp->copied_seq;
 	u32 total_bytes_to_map;
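
The find_tcp_vma() change above touches the common per-VMA locking
pattern; a generic sketch of the same idiom (illustrative, untested):

	static struct mm_area *lookup_vma(struct mm_struct *mm,
					  unsigned long addr,
					  bool *mmap_locked)
	{
		/* Fast path: per-VMA read lock under RCU; the caller
		 * must pair success here with vma_end_read().
		 */
		struct mm_area *vma = lock_vma_under_rcu(mm, addr);

		if (vma) {
			*mmap_locked = false;
			return vma;
		}

		/* Slow path: fall back to the mmap read lock. */
		mmap_read_lock(mm);
		vma = vma_lookup(mm, addr);
		if (!vma) {
			mmap_read_unlock(mm);
			return NULL;
		}
		*mmap_locked = true;
		return vma;
	}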
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 3e9ddf72cd03..c1ac0ed67f71 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -4358,7 +4358,7 @@ static __poll_t packet_poll(struct file *file, struct socket *sock,
  * for user mmaps.
  */
 
-static void packet_mm_open(struct vm_area_struct *vma)
+static void packet_mm_open(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct socket *sock = file->private_data;
@@ -4368,7 +4368,7 @@ static void packet_mm_open(struct vm_area_struct *vma)
 		atomic_long_inc(&pkt_sk(sk)->mapped);
 }
 
-static void packet_mm_close(struct vm_area_struct *vma)
+static void packet_mm_close(struct mm_area *vma)
 {
 	struct file *file = vma->vm_file;
 	struct socket *sock = file->private_data;
@@ -4619,7 +4619,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
 }
 
 static int packet_mmap(struct file *file, struct socket *sock,
-		struct vm_area_struct *vma)
+		struct mm_area *vma)
 {
 	struct sock *sk = sock->sk;
 	struct packet_sock *po = pkt_sk(sk);
diff --git a/net/socket.c b/net/socket.c
index 9a0e720f0859..796d8811c0cc 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -119,7 +119,7 @@ unsigned int sysctl_net_busy_poll __read_mostly;
 
 static ssize_t sock_read_iter(struct kiocb *iocb, struct iov_iter *to);
 static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from);
-static int sock_mmap(struct file *file, struct vm_area_struct *vma);
+static int sock_mmap(struct file *file, struct mm_area *vma);
 
 static int sock_close(struct inode *inode, struct file *file);
 static __poll_t sock_poll(struct file *file,
@@ -1379,7 +1379,7 @@ static __poll_t sock_poll(struct file *file, poll_table *wait)
 	return ops->poll(file, sock, wait) | flag;
 }
 
-static int sock_mmap(struct file *file, struct vm_area_struct *vma)
+static int sock_mmap(struct file *file, struct mm_area *vma)
 {
 	struct socket *sock = file->private_data;
 
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 5696af45bcf7..13d7febb2286 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -1595,7 +1595,7 @@ static int xsk_getsockopt(struct socket *sock, int level, int optname,
 }
 
 static int xsk_mmap(struct file *file, struct socket *sock,
-		    struct vm_area_struct *vma)
+		    struct mm_area *vma)
 {
 	loff_t offset = (loff_t)vma->vm_pgoff << PAGE_SHIFT;
 	unsigned long size = vma->vm_end - vma->vm_start;
diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c
index 3d0fa260332d..6c77296cf30b 100644
--- a/samples/ftrace/ftrace-direct-too.c
+++ b/samples/ftrace/ftrace-direct-too.c
@@ -7,10 +7,10 @@
 #include <asm/asm-offsets.h>
 #endif
 
-extern void my_direct_func(struct vm_area_struct *vma, unsigned long address,
+extern void my_direct_func(struct mm_area *vma, unsigned long address,
 			   unsigned int flags, struct pt_regs *regs);
 
-void my_direct_func(struct vm_area_struct *vma, unsigned long address,
+void my_direct_func(struct mm_area *vma, unsigned long address,
 		    unsigned int flags, struct pt_regs *regs)
 {
 	trace_printk("handle mm fault vma=%p address=%lx flags=%x regs=%p\n",
diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index 18623ba666e3..4b6121f12c27 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -777,7 +777,7 @@ static void mbochs_put_pages(struct mdev_state *mdev_state)
 
 static vm_fault_t mbochs_region_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mdev_state *mdev_state = vma->vm_private_data;
 	pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
@@ -795,7 +795,7 @@ static const struct vm_operations_struct mbochs_region_vm_ops = {
 	.fault = mbochs_region_vm_fault,
 };
 
-static int mbochs_mmap(struct vfio_device *vdev, struct vm_area_struct *vma)
+static int mbochs_mmap(struct vfio_device *vdev, struct mm_area *vma)
 {
 	struct mdev_state *mdev_state =
 		container_of(vdev, struct mdev_state, vdev);
@@ -816,7 +816,7 @@ static int mbochs_mmap(struct vfio_device *vdev, struct vm_area_struct *vma)
 
 static vm_fault_t mbochs_dmabuf_vm_fault(struct vm_fault *vmf)
 {
-	struct vm_area_struct *vma = vmf->vma;
+	struct mm_area *vma = vmf->vma;
 	struct mbochs_dmabuf *dmabuf = vma->vm_private_data;
 
 	if (WARN_ON(vmf->pgoff >= dmabuf->pagecount))
@@ -831,7 +831,7 @@ static const struct vm_operations_struct mbochs_dmabuf_vm_ops = {
 	.fault = mbochs_dmabuf_vm_fault,
 };
 
-static int mbochs_mmap_dmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
+static int mbochs_mmap_dmabuf(struct dma_buf *buf, struct mm_area *vma)
 {
 	struct mbochs_dmabuf *dmabuf = buf->priv;
 	struct device *dev = mdev_dev(dmabuf->mdev_state->mdev);
diff --git a/samples/vfio-mdev/mdpy.c b/samples/vfio-mdev/mdpy.c
index 8104831ae125..8f939e826acf 100644
--- a/samples/vfio-mdev/mdpy.c
+++ b/samples/vfio-mdev/mdpy.c
@@ -418,7 +418,7 @@ static ssize_t mdpy_write(struct vfio_device *vdev, const char __user *buf,
 	return -EFAULT;
 }
 
-static int mdpy_mmap(struct vfio_device *vdev, struct vm_area_struct *vma)
+static int mdpy_mmap(struct vfio_device *vdev, struct mm_area *vma)
 {
 	struct mdev_state *mdev_state =
 		container_of(vdev, struct mdev_state, vdev);
diff --git a/scripts/coccinelle/api/vma_pages.cocci b/scripts/coccinelle/api/vma_pages.cocci
index 10511b9bf35e..96c7790dff71 100644
--- a/scripts/coccinelle/api/vma_pages.cocci
+++ b/scripts/coccinelle/api/vma_pages.cocci
@@ -16,7 +16,7 @@ virtual report
 //----------------------------------------------------------
 
 @r_context depends on context && !patch && !org && !report@
-struct vm_area_struct *vma;
+struct mm_area *vma;
 @@
 
 * (vma->vm_end - vma->vm_start) >> PAGE_SHIFT
@@ -26,7 +26,7 @@ struct vm_area_struct *vma;
 //----------------------------------------------------------
 
 @r_patch depends on !context && patch && !org && !report@
-struct vm_area_struct *vma;
+struct mm_area *vma;
 @@
 
 - ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT)
@@ -37,7 +37,7 @@ struct vm_area_struct *vma;
 //----------------------------------------------------------
 
 @r_org depends on !context && !patch && (org || report)@
-struct vm_area_struct *vma;
+struct mm_area *vma;
 position p;
 @@
 
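What the rules above rewrite, shown in C for reference (illustrative):

	static unsigned long region_pages(struct mm_area *vma)
	{
		/* vma_pages(vma) is the canonical spelling of
		 * (vma->vm_end - vma->vm_start) >> PAGE_SHIFT.
		 */
		return vma_pages(vma);
	}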
diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 7952e8cab353..cadd2fdbf01d 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -585,7 +585,7 @@ static int apparmor_mmap_file(struct file *file, unsigned long reqprot,
 	return common_mmap(OP_FMMAP, file, prot, flags, GFP_ATOMIC);
 }
 
-static int apparmor_file_mprotect(struct vm_area_struct *vma,
+static int apparmor_file_mprotect(struct mm_area *vma,
 				  unsigned long reqprot, unsigned long prot)
 {
 	return common_mmap(OP_FMPROT, vma->vm_file, prot,
diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
index f3e7ac513db3..a6e25bb8dc0b 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -478,7 +478,7 @@ static int ima_file_mmap(struct file *file, unsigned long reqprot,
 
 /**
  * ima_file_mprotect - based on policy, limit mprotect change
- * @vma: vm_area_struct protection is set to
+ * @vma: the mm_area whose protection is being changed
  * @reqprot: protection requested by the application
  * @prot: protection that will be applied by the kernel
  *
@@ -490,7 +490,7 @@ static int ima_file_mmap(struct file *file, unsigned long reqprot,
  *
  * On mprotect change success, return 0.  On failure, return -EACESS.
  */
-static int ima_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
+static int ima_file_mprotect(struct mm_area *vma, unsigned long reqprot,
 			     unsigned long prot)
 {
 	struct ima_template_desc *template = NULL;
diff --git a/security/ipe/hooks.c b/security/ipe/hooks.c
index d0323b81cd8f..5882e26563be 100644
--- a/security/ipe/hooks.c
+++ b/security/ipe/hooks.c
@@ -77,7 +77,7 @@ int ipe_mmap_file(struct file *f, unsigned long reqprot __always_unused,
  * * %0		- Success
  * * %-EACCES	- Did not pass IPE policy
  */
-int ipe_file_mprotect(struct vm_area_struct *vma,
+int ipe_file_mprotect(struct mm_area *vma,
 		      unsigned long reqprot __always_unused,
 		      unsigned long prot)
 {
diff --git a/security/ipe/hooks.h b/security/ipe/hooks.h
index 38d4a387d039..3b4b2f502809 100644
--- a/security/ipe/hooks.h
+++ b/security/ipe/hooks.h
@@ -27,7 +27,7 @@ int ipe_bprm_check_security(struct linux_binprm *bprm);
 int ipe_mmap_file(struct file *f, unsigned long reqprot, unsigned long prot,
 		  unsigned long flags);
 
-int ipe_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
+int ipe_file_mprotect(struct mm_area *vma, unsigned long reqprot,
 		      unsigned long prot);
 
 int ipe_kernel_read_file(struct file *file, enum kernel_read_file_id id,
diff --git a/security/security.c b/security/security.c
index fb57e8fddd91..1026b02ee7cf 100644
--- a/security/security.c
+++ b/security/security.c
@@ -3006,7 +3006,7 @@ int security_mmap_addr(unsigned long addr)
  *
  * Return: Returns 0 if permission is granted.
  */
-int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
+int security_file_mprotect(struct mm_area *vma, unsigned long reqprot,
 			   unsigned long prot)
 {
 	return call_int_hook(file_mprotect, vma, reqprot, prot);
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index e7a7dcab81db..28b458a22af8 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -3848,7 +3848,7 @@ static int selinux_mmap_file(struct file *file,
 				   (flags & MAP_TYPE) == MAP_SHARED);
 }
 
-static int selinux_file_mprotect(struct vm_area_struct *vma,
+static int selinux_file_mprotect(struct mm_area *vma,
 				 unsigned long reqprot __always_unused,
 				 unsigned long prot)
 {
diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
index 47480eb2189b..84ed683ce903 100644
--- a/security/selinux/selinuxfs.c
+++ b/security/selinux/selinuxfs.c
@@ -240,7 +240,7 @@ static ssize_t sel_read_handle_status(struct file *filp, char __user *buf,
 }
 
 static int sel_mmap_handle_status(struct file *filp,
-				  struct vm_area_struct *vma)
+				  struct mm_area *vma)
 {
 	struct page    *status = filp->private_data;
 	unsigned long	size = vma->vm_end - vma->vm_start;
@@ -465,7 +465,7 @@ static const struct vm_operations_struct sel_mmap_policy_ops = {
 	.page_mkwrite = sel_mmap_policy_fault,
 };
 
-static int sel_mmap_policy(struct file *filp, struct vm_area_struct *vma)
+static int sel_mmap_policy(struct file *filp, struct mm_area *vma)
 {
 	if (vma->vm_flags & VM_SHARED) {
 		/* do not allow mprotect to make mapping writable */
diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
index 840bb9cfe789..84e86bd99ead 100644
--- a/sound/core/compress_offload.c
+++ b/sound/core/compress_offload.c
@@ -389,7 +389,7 @@ static ssize_t snd_compr_read(struct file *f, char __user *buf,
 	return retval;
 }
 
-static int snd_compr_mmap(struct file *f, struct vm_area_struct *vma)
+static int snd_compr_mmap(struct file *f, struct mm_area *vma)
 {
 	return -ENXIO;
 }
diff --git a/sound/core/hwdep.c b/sound/core/hwdep.c
index 09200df2932c..ac5cf0c98ec4 100644
--- a/sound/core/hwdep.c
+++ b/sound/core/hwdep.c
@@ -253,7 +253,7 @@ static long snd_hwdep_ioctl(struct file * file, unsigned int cmd,
 	return -ENOTTY;
 }
 
-static int snd_hwdep_mmap(struct file * file, struct vm_area_struct * vma)
+static int snd_hwdep_mmap(struct file * file, struct mm_area * vma)
 {
 	struct snd_hwdep *hw = file->private_data;
 	if (hw->ops.mmap)
diff --git a/sound/core/info.c b/sound/core/info.c
index 1f5b8a3d9e3b..2d80eb13ab7e 100644
--- a/sound/core/info.c
+++ b/sound/core/info.c
@@ -211,7 +211,7 @@ static long snd_info_entry_ioctl(struct file *file, unsigned int cmd,
 				   file, cmd, arg);
 }
 
-static int snd_info_entry_mmap(struct file *file, struct vm_area_struct *vma)
+static int snd_info_entry_mmap(struct file *file, struct mm_area *vma)
 {
 	struct inode *inode = file_inode(file);
 	struct snd_info_private_data *data;
diff --git a/sound/core/init.c b/sound/core/init.c
index 114fb87de990..6c357c892dc4 100644
--- a/sound/core/init.c
+++ b/sound/core/init.c
@@ -451,7 +451,7 @@ static long snd_disconnect_ioctl(struct file *file,
 	return -ENODEV;
 }
 
-static int snd_disconnect_mmap(struct file *file, struct vm_area_struct *vma)
+static int snd_disconnect_mmap(struct file *file, struct mm_area *vma)
 {
 	return -ENODEV;
 }
diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
index b3853583d2ae..2c5f64a1c8fe 100644
--- a/sound/core/memalloc.c
+++ b/sound/core/memalloc.c
@@ -25,7 +25,7 @@ struct snd_malloc_ops {
 	struct page *(*get_page)(struct snd_dma_buffer *dmab, size_t offset);
 	unsigned int (*get_chunk_size)(struct snd_dma_buffer *dmab,
 				       unsigned int ofs, unsigned int size);
-	int (*mmap)(struct snd_dma_buffer *dmab, struct vm_area_struct *area);
+	int (*mmap)(struct snd_dma_buffer *dmab, struct mm_area *area);
 	void (*sync)(struct snd_dma_buffer *dmab, enum snd_dma_sync_mode mode);
 };
 
@@ -189,7 +189,7 @@ EXPORT_SYMBOL_GPL(snd_devm_alloc_dir_pages);
  * Return: zero if successful, or a negative error code
  */
 int snd_dma_buffer_mmap(struct snd_dma_buffer *dmab,
-			struct vm_area_struct *area)
+			struct mm_area *area)
 {
 	const struct snd_malloc_ops *ops;
 
@@ -334,7 +334,7 @@ static void snd_dma_continuous_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_continuous_mmap(struct snd_dma_buffer *dmab,
-				   struct vm_area_struct *area)
+				   struct mm_area *area)
 {
 	return remap_pfn_range(area, area->vm_start,
 			       dmab->addr >> PAGE_SHIFT,
@@ -362,7 +362,7 @@ static void snd_dma_vmalloc_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_vmalloc_mmap(struct snd_dma_buffer *dmab,
-				struct vm_area_struct *area)
+				struct mm_area *area)
 {
 	return remap_vmalloc_range(area, dmab->area, 0);
 }
@@ -451,7 +451,7 @@ static void snd_dma_iram_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_iram_mmap(struct snd_dma_buffer *dmab,
-			     struct vm_area_struct *area)
+			     struct mm_area *area)
 {
 	area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
 	return remap_pfn_range(area, area->vm_start,
@@ -481,7 +481,7 @@ static void snd_dma_dev_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_dev_mmap(struct snd_dma_buffer *dmab,
-			    struct vm_area_struct *area)
+			    struct mm_area *area)
 {
 	return dma_mmap_coherent(dmab->dev.dev, area,
 				 dmab->area, dmab->addr, dmab->bytes);
@@ -520,7 +520,7 @@ static void snd_dma_wc_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_wc_mmap(struct snd_dma_buffer *dmab,
-			   struct vm_area_struct *area)
+			   struct mm_area *area)
 {
 	area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
 	return dma_mmap_coherent(dmab->dev.dev, area,
@@ -538,7 +538,7 @@ static void snd_dma_wc_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_wc_mmap(struct snd_dma_buffer *dmab,
-			   struct vm_area_struct *area)
+			   struct mm_area *area)
 {
 	return dma_mmap_wc(dmab->dev.dev, area,
 			   dmab->area, dmab->addr, dmab->bytes);
@@ -585,7 +585,7 @@ static void snd_dma_noncontig_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_noncontig_mmap(struct snd_dma_buffer *dmab,
-				  struct vm_area_struct *area)
+				  struct mm_area *area)
 {
 	return dma_mmap_noncontiguous(dmab->dev.dev, area,
 				      dmab->bytes, dmab->private_data);
@@ -789,7 +789,7 @@ static void snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_sg_fallback_mmap(struct snd_dma_buffer *dmab,
-				    struct vm_area_struct *area)
+				    struct mm_area *area)
 {
 	struct snd_dma_sg_fallback *sgbuf = dmab->private_data;
 
@@ -849,7 +849,7 @@ static void snd_dma_noncoherent_free(struct snd_dma_buffer *dmab)
 }
 
 static int snd_dma_noncoherent_mmap(struct snd_dma_buffer *dmab,
-				    struct vm_area_struct *area)
+				    struct mm_area *area)
 {
 	area->vm_page_prot = vm_get_page_prot(area->vm_flags);
 	return dma_mmap_pages(dmab->dev.dev, area,
diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
index 4683b9139c56..884e96ea9cca 100644
--- a/sound/core/oss/pcm_oss.c
+++ b/sound/core/oss/pcm_oss.c
@@ -2867,7 +2867,7 @@ static __poll_t snd_pcm_oss_poll(struct file *file, poll_table * wait)
 	return mask;
 }
 
-static int snd_pcm_oss_mmap(struct file *file, struct vm_area_struct *area)
+static int snd_pcm_oss_mmap(struct file *file, struct mm_area *area)
 {
 	struct snd_pcm_oss_file *pcm_oss_file;
 	struct snd_pcm_substream *substream = NULL;
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 6c2b6a62d9d2..415c3dec027f 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -3668,7 +3668,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_status =
 };
 
 static int snd_pcm_mmap_status(struct snd_pcm_substream *substream, struct file *file,
-			       struct vm_area_struct *area)
+			       struct mm_area *area)
 {
 	long size;
 	if (!(area->vm_flags & VM_READ))
@@ -3706,7 +3706,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_control =
 };
 
 static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file *file,
-				struct vm_area_struct *area)
+				struct mm_area *area)
 {
 	long size;
 	if (!(area->vm_flags & VM_READ))
@@ -3762,12 +3762,12 @@ static bool pcm_control_mmap_allowed(struct snd_pcm_file *pcm_file)
 #define pcm_control_mmap_allowed(pcm_file)	false
 
 static int snd_pcm_mmap_status(struct snd_pcm_substream *substream, struct file *file,
-			       struct vm_area_struct *area)
+			       struct mm_area *area)
 {
 	return -ENXIO;
 }
 static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file *file,
-				struct vm_area_struct *area)
+				struct mm_area *area)
 {
 	return -ENXIO;
 }
@@ -3776,7 +3776,7 @@ static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file
 /*
  * snd_pcm_mmap_data_open - increase the mmap counter
  */
-static void snd_pcm_mmap_data_open(struct vm_area_struct *area)
+static void snd_pcm_mmap_data_open(struct mm_area *area)
 {
 	struct snd_pcm_substream *substream = area->vm_private_data;
 
@@ -3786,7 +3786,7 @@ static void snd_pcm_mmap_data_open(struct vm_area_struct *area)
 /*
  * snd_pcm_mmap_data_close - decrease the mmap counter
  */
-static void snd_pcm_mmap_data_close(struct vm_area_struct *area)
+static void snd_pcm_mmap_data_close(struct mm_area *area)
 {
 	struct snd_pcm_substream *substream = area->vm_private_data;
 
@@ -3852,7 +3852,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
  * Return: zero if successful, or a negative error code
  */
 int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
-			     struct vm_area_struct *area)
+			     struct mm_area *area)
 {
 	vm_flags_set(area, VM_DONTEXPAND | VM_DONTDUMP);
 	if (!substream->ops->page &&
@@ -3880,7 +3880,7 @@ EXPORT_SYMBOL_GPL(snd_pcm_lib_default_mmap);
  * Return: zero if successful, or a negative error code
  */
 int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream,
-			   struct vm_area_struct *area)
+			   struct mm_area *area)
 {
 	struct snd_pcm_runtime *runtime = substream->runtime;
 
@@ -3894,7 +3894,7 @@ EXPORT_SYMBOL(snd_pcm_lib_mmap_iomem);
  * mmap DMA buffer
  */
 int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file,
-		      struct vm_area_struct *area)
+		      struct mm_area *area)
 {
 	struct snd_pcm_runtime *runtime;
 	long size;
@@ -3937,7 +3937,7 @@ int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file,
 }
 EXPORT_SYMBOL(snd_pcm_mmap_data);
 
-static int snd_pcm_mmap(struct file *file, struct vm_area_struct *area)
+static int snd_pcm_mmap(struct file *file, struct mm_area *area)
 {
 	struct snd_pcm_file * pcm_file;
 	struct snd_pcm_substream *substream;	
diff --git a/sound/soc/fsl/fsl_asrc_m2m.c b/sound/soc/fsl/fsl_asrc_m2m.c
index f46881f71e43..32356e92f2ae 100644
--- a/sound/soc/fsl/fsl_asrc_m2m.c
+++ b/sound/soc/fsl/fsl_asrc_m2m.c
@@ -401,7 +401,7 @@ static int fsl_asrc_m2m_comp_set_params(struct snd_compr_stream *stream,
 	return 0;
 }
 
-static int fsl_asrc_m2m_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+static int fsl_asrc_m2m_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
 {
 	struct snd_dma_buffer *dmab = dmabuf->priv;
 
diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
index dac463390da1..d595f2ef22a8 100644
--- a/sound/soc/intel/avs/pcm.c
+++ b/sound/soc/intel/avs/pcm.c
@@ -1240,7 +1240,7 @@ avs_component_pointer(struct snd_soc_component *component, struct snd_pcm_substr
 
 static int avs_component_mmap(struct snd_soc_component *component,
 			      struct snd_pcm_substream *substream,
-			      struct vm_area_struct *vma)
+			      struct mm_area *vma)
 {
 	return snd_pcm_lib_default_mmap(substream, vma);
 }
diff --git a/sound/soc/loongson/loongson_dma.c b/sound/soc/loongson/loongson_dma.c
index 20e4a0641340..2e05bc1683bd 100644
--- a/sound/soc/loongson/loongson_dma.c
+++ b/sound/soc/loongson/loongson_dma.c
@@ -295,7 +295,7 @@ static int loongson_pcm_close(struct snd_soc_component *component,
 
 static int loongson_pcm_mmap(struct snd_soc_component *component,
 			     struct snd_pcm_substream *substream,
-			     struct vm_area_struct *vma)
+			     struct mm_area *vma)
 {
 	return remap_pfn_range(vma, vma->vm_start,
 			substream->dma_buffer.addr >> PAGE_SHIFT,
diff --git a/sound/soc/pxa/mmp-sspa.c b/sound/soc/pxa/mmp-sspa.c
index 73f36c9dd35c..bbb0f3a15c39 100644
--- a/sound/soc/pxa/mmp-sspa.c
+++ b/sound/soc/pxa/mmp-sspa.c
@@ -402,7 +402,7 @@ static const struct snd_dmaengine_pcm_config mmp_pcm_config = {
 
 static int mmp_pcm_mmap(struct snd_soc_component *component,
 			struct snd_pcm_substream *substream,
-			struct vm_area_struct *vma)
+			struct mm_area *vma)
 {
 	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
index 9946f12254b3..bf8cd80fcf5a 100644
--- a/sound/soc/qcom/lpass-platform.c
+++ b/sound/soc/qcom/lpass-platform.c
@@ -894,7 +894,7 @@ static snd_pcm_uframes_t lpass_platform_pcmops_pointer(
 }
 
 static int lpass_platform_cdc_dma_mmap(struct snd_pcm_substream *substream,
-				       struct vm_area_struct *vma)
+				       struct mm_area *vma)
 {
 	struct snd_pcm_runtime *runtime = substream->runtime;
 	unsigned long size, offset;
@@ -910,7 +910,7 @@ static int lpass_platform_cdc_dma_mmap(struct snd_pcm_substream *substream,
 
 static int lpass_platform_pcmops_mmap(struct snd_soc_component *component,
 				      struct snd_pcm_substream *substream,
-				      struct vm_area_struct *vma)
+				      struct mm_area *vma)
 {
 	struct snd_soc_pcm_runtime *soc_runtime = snd_soc_substream_to_rtd(substream);
 	struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(soc_runtime, 0);
diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
index 2cd522108221..6a9ef02b5ab6 100644
--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
+++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
@@ -739,7 +739,7 @@ static int q6apm_dai_compr_set_metadata(struct snd_soc_component *component,
 
 static int q6apm_dai_compr_mmap(struct snd_soc_component *component,
 				struct snd_compr_stream *stream,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	struct snd_compr_runtime *runtime = stream->runtime;
 	struct q6apm_dai_rtd *prtd = runtime->private_data;
diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
index a400c9a31fea..7d382c459845 100644
--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
+++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
@@ -1114,7 +1114,7 @@ static int q6asm_compr_copy(struct snd_soc_component *component,
 
 static int q6asm_dai_compr_mmap(struct snd_soc_component *component,
 				struct snd_compr_stream *stream,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	struct snd_compr_runtime *runtime = stream->runtime;
 	struct q6asm_dai_rtd *prtd = runtime->private_data;
diff --git a/sound/soc/samsung/idma.c b/sound/soc/samsung/idma.c
index 402ccadad46c..618cc682b223 100644
--- a/sound/soc/samsung/idma.c
+++ b/sound/soc/samsung/idma.c
@@ -240,7 +240,7 @@ idma_pointer(struct snd_soc_component *component,
 
 static int idma_mmap(struct snd_soc_component *component,
 		     struct snd_pcm_substream *substream,
-	struct vm_area_struct *vma)
+	struct mm_area *vma)
 {
 	struct snd_pcm_runtime *runtime = substream->runtime;
 	unsigned long size, offset;
diff --git a/sound/soc/soc-component.c b/sound/soc/soc-component.c
index 25f5e543ae8d..019eabf1f618 100644
--- a/sound/soc/soc-component.c
+++ b/sound/soc/soc-component.c
@@ -1095,7 +1095,7 @@ struct page *snd_soc_pcm_component_page(struct snd_pcm_substream *substream,
 }
 
 int snd_soc_pcm_component_mmap(struct snd_pcm_substream *substream,
-			       struct vm_area_struct *vma)
+			       struct mm_area *vma)
 {
 	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
 	struct snd_soc_component *component;
diff --git a/sound/soc/uniphier/aio-dma.c b/sound/soc/uniphier/aio-dma.c
index 265d61723e99..e930c48f3ac2 100644
--- a/sound/soc/uniphier/aio-dma.c
+++ b/sound/soc/uniphier/aio-dma.c
@@ -193,7 +193,7 @@ static snd_pcm_uframes_t uniphier_aiodma_pointer(
 
 static int uniphier_aiodma_mmap(struct snd_soc_component *component,
 				struct snd_pcm_substream *substream,
-				struct vm_area_struct *vma)
+				struct mm_area *vma)
 {
 	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
 
diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
index 6bcf8b859ebb..818228d5e3a2 100644
--- a/sound/usb/usx2y/us122l.c
+++ b/sound/usb/usx2y/us122l.c
@@ -173,7 +173,7 @@ static int usb_stream_hwdep_release(struct snd_hwdep *hw, struct file *file)
 }
 
 static int usb_stream_hwdep_mmap(struct snd_hwdep *hw,
-				 struct file *filp, struct vm_area_struct *area)
+				 struct file *filp, struct mm_area *area)
 {
 	unsigned long	size = area->vm_end - area->vm_start;
 	struct us122l	*us122l = hw->private_data;
diff --git a/sound/usb/usx2y/usX2Yhwdep.c b/sound/usb/usx2y/usX2Yhwdep.c
index 9fd6a86cc08e..f53ab11ba825 100644
--- a/sound/usb/usx2y/usX2Yhwdep.c
+++ b/sound/usb/usx2y/usX2Yhwdep.c
@@ -37,7 +37,7 @@ static const struct vm_operations_struct us428ctls_vm_ops = {
 	.fault = snd_us428ctls_vm_fault,
 };
 
-static int snd_us428ctls_mmap(struct snd_hwdep *hw, struct file *filp, struct vm_area_struct *area)
+static int snd_us428ctls_mmap(struct snd_hwdep *hw, struct file *filp, struct mm_area *area)
 {
 	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
 	struct usx2ydev	*us428 = hw->private_data;
diff --git a/sound/usb/usx2y/usx2yhwdeppcm.c b/sound/usb/usx2y/usx2yhwdeppcm.c
index 1b1496adb47e..acf7d36dc4e9 100644
--- a/sound/usb/usx2y/usx2yhwdeppcm.c
+++ b/sound/usb/usx2y/usx2yhwdeppcm.c
@@ -667,11 +667,11 @@ static int snd_usx2y_hwdep_pcm_release(struct snd_hwdep *hw, struct file *file)
 	return err;
 }
 
-static void snd_usx2y_hwdep_pcm_vm_open(struct vm_area_struct *area)
+static void snd_usx2y_hwdep_pcm_vm_open(struct mm_area *area)
 {
 }
 
-static void snd_usx2y_hwdep_pcm_vm_close(struct vm_area_struct *area)
+static void snd_usx2y_hwdep_pcm_vm_close(struct mm_area *area)
 {
 }
 
@@ -693,7 +693,7 @@ static const struct vm_operations_struct snd_usx2y_hwdep_pcm_vm_ops = {
 	.fault = snd_usx2y_hwdep_pcm_vm_fault,
 };
 
-static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep *hw, struct file *filp, struct vm_area_struct *area)
+static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep *hw, struct file *filp, struct mm_area *area)
 {
 	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
 	struct usx2ydev	*usx2y = hw->private_data;
diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
index 72ea363d434d..3c3285b1bb05 100644
--- a/tools/include/linux/btf_ids.h
+++ b/tools/include/linux/btf_ids.h
@@ -205,7 +205,7 @@ extern u32 btf_sock_ids[];
 #define BTF_TRACING_TYPE_xxx	\
 	BTF_TRACING_TYPE(BTF_TRACING_TYPE_TASK, task_struct)	\
 	BTF_TRACING_TYPE(BTF_TRACING_TYPE_FILE, file)		\
-	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, vm_area_struct)
+	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, mm_area)
 
 enum {
 #define BTF_TRACING_TYPE(name, type) name,
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 28705ae67784..7894f9c2ae9b 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -5368,7 +5368,7 @@ union bpf_attr {
  *
  *		The expected callback signature is
  *
- *		long (\*callback_fn)(struct task_struct \*task, struct vm_area_struct \*vma, void \*callback_ctx);
+ *		long (\*callback_fn)(struct task_struct \*task, struct mm_area \*vma, void \*callback_ctx);
  *
  *	Return
  *		0 on success.
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 6535c8ae3c46..08d8c40b8546 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -164,7 +164,7 @@ struct bpf_iter_task_vma;
 extern int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
 				 struct task_struct *task,
 				 __u64 addr) __ksym;
-extern struct vm_area_struct *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it) __ksym;
+extern struct mm_area *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it) __ksym;
 extern void bpf_iter_task_vma_destroy(struct bpf_iter_task_vma *it) __ksym;
 
 /* Convenience macro to wrap over bpf_obj_drop_impl */
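
Usage of the renamed iterator kfuncs, for reference (a fragment along
the lines of the selftests below; untested):

	struct bpf_iter_task_vma it;
	struct mm_area *vma;

	bpf_iter_task_vma_new(&it, task, 0);
	while ((vma = bpf_iter_task_vma_next(&it)) != NULL) {
		/* inspect vma->vm_start, vma->vm_end, vma->vm_flags */
	}
	bpf_iter_task_vma_destroy(&it);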
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
index d64ba7ddaed5..899e6b03c070 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
@@ -25,7 +25,7 @@ __u32 one_task_error = 0;
 
 SEC("iter/task_vma") int proc_maps(struct bpf_iter__task_vma *ctx)
 {
-	struct vm_area_struct *vma = ctx->vma;
+	struct mm_area *vma = ctx->vma;
 	struct seq_file *seq = ctx->meta->seq;
 	struct task_struct *task = ctx->task;
 	struct file *file;
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c b/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c
index 174298e122d3..6a27844ef324 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c
@@ -15,7 +15,7 @@ __u32 page_shift = 0;
 SEC("iter/task_vma")
 int get_vma_offset(struct bpf_iter__task_vma *ctx)
 {
-	struct vm_area_struct *vma = ctx->vma;
+	struct mm_area *vma = ctx->vma;
 	struct seq_file *seq = ctx->meta->seq;
 	struct task_struct *task = ctx->task;
 
diff --git a/tools/testing/selftests/bpf/progs/find_vma.c b/tools/testing/selftests/bpf/progs/find_vma.c
index 02b82774469c..75f90cb21179 100644
--- a/tools/testing/selftests/bpf/progs/find_vma.c
+++ b/tools/testing/selftests/bpf/progs/find_vma.c
@@ -20,7 +20,7 @@ __u64 addr = 0;
 int find_zero_ret = -1;
 int find_addr_ret = -1;
 
-static long check_vma(struct task_struct *task, struct vm_area_struct *vma,
+static long check_vma(struct task_struct *task, struct mm_area *vma,
 		      struct callback_ctx *data)
 {
 	if (vma->vm_file)
diff --git a/tools/testing/selftests/bpf/progs/find_vma_fail1.c b/tools/testing/selftests/bpf/progs/find_vma_fail1.c
index 7ba9a428f228..4a5a41997169 100644
--- a/tools/testing/selftests/bpf/progs/find_vma_fail1.c
+++ b/tools/testing/selftests/bpf/progs/find_vma_fail1.c
@@ -10,7 +10,7 @@ struct callback_ctx {
 	int dummy;
 };
 
-static long write_vma(struct task_struct *task, struct vm_area_struct *vma,
+static long write_vma(struct task_struct *task, struct mm_area *vma,
 		      struct callback_ctx *data)
 {
 	/* writing to vma, which is illegal */
diff --git a/tools/testing/selftests/bpf/progs/find_vma_fail2.c b/tools/testing/selftests/bpf/progs/find_vma_fail2.c
index 9bcf3203e26b..1117fc0475f2 100644
--- a/tools/testing/selftests/bpf/progs/find_vma_fail2.c
+++ b/tools/testing/selftests/bpf/progs/find_vma_fail2.c
@@ -9,7 +9,7 @@ struct callback_ctx {
 	int dummy;
 };
 
-static long write_task(struct task_struct *task, struct vm_area_struct *vma,
+static long write_task(struct task_struct *task, struct mm_area *vma,
 		       struct callback_ctx *data)
 {
 	/* writing to task, which is illegal */
diff --git a/tools/testing/selftests/bpf/progs/iters_css_task.c b/tools/testing/selftests/bpf/progs/iters_css_task.c
index 9ac758649cb8..bc48b47d1793 100644
--- a/tools/testing/selftests/bpf/progs/iters_css_task.c
+++ b/tools/testing/selftests/bpf/progs/iters_css_task.c
@@ -19,7 +19,7 @@ int css_task_cnt;
 u64 cg_id;
 
 SEC("lsm/file_mprotect")
-int BPF_PROG(iter_css_task_for_each, struct vm_area_struct *vma,
+int BPF_PROG(iter_css_task_for_each, struct mm_area *vma,
 	    unsigned long reqprot, unsigned long prot, int ret)
 {
 	struct task_struct *cur_task = bpf_get_current_task_btf();
diff --git a/tools/testing/selftests/bpf/progs/iters_task_vma.c b/tools/testing/selftests/bpf/progs/iters_task_vma.c
index dc0c3691dcc2..6334a2d0518d 100644
--- a/tools/testing/selftests/bpf/progs/iters_task_vma.c
+++ b/tools/testing/selftests/bpf/progs/iters_task_vma.c
@@ -18,7 +18,7 @@ SEC("raw_tp/sys_enter")
 int iter_task_vma_for_each(const void *ctx)
 {
 	struct task_struct *task = bpf_get_current_task_btf();
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned int seen = 0;
 
 	if (task->pid != target_pid)
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod.c b/tools/testing/selftests/bpf/progs/iters_testmod.c
index 9e4b45201e69..d5303fb6d618 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod.c
@@ -14,7 +14,7 @@ int iter_next_trusted(const void *ctx)
 {
 	struct task_struct *cur_task = bpf_get_current_task_btf();
 	struct bpf_iter_task_vma vma_it;
-	struct vm_area_struct *vma_ptr;
+	struct mm_area *vma_ptr;
 
 	bpf_iter_task_vma_new(&vma_it, cur_task, 0);
 
@@ -34,7 +34,7 @@ int iter_next_trusted_or_null(const void *ctx)
 {
 	struct task_struct *cur_task = bpf_get_current_task_btf();
 	struct bpf_iter_task_vma vma_it;
-	struct vm_area_struct *vma_ptr;
+	struct mm_area *vma_ptr;
 
 	bpf_iter_task_vma_new(&vma_it, cur_task, 0);
 
diff --git a/tools/testing/selftests/bpf/progs/lsm.c b/tools/testing/selftests/bpf/progs/lsm.c
index 0c13b7409947..7218621a833a 100644
--- a/tools/testing/selftests/bpf/progs/lsm.c
+++ b/tools/testing/selftests/bpf/progs/lsm.c
@@ -86,7 +86,7 @@ int mprotect_count = 0;
 int bprm_count = 0;
 
 SEC("lsm/file_mprotect")
-int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
+int BPF_PROG(test_int_hook, struct mm_area *vma,
 	     unsigned long reqprot, unsigned long prot, int ret)
 {
 	if (ret != 0)
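
For completeness, a minimal standalone LSM program using the renamed
type (a hypothetical, untested sketch; the PROT_* values are defined
locally because vmlinux.h does not provide them):

	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	#define PROT_WRITE	0x2
	#define PROT_EXEC	0x4

	char _license[] SEC("license") = "GPL";

	SEC("lsm/file_mprotect")
	int BPF_PROG(deny_wx, struct mm_area *vma, unsigned long reqprot,
		     unsigned long prot, int ret)
	{
		if (ret != 0)
			return ret;
		/* Refuse to make a mapping writable and executable. */
		if ((prot & PROT_WRITE) && (prot & PROT_EXEC))
			return -EPERM;
		return 0;
	}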
diff --git a/tools/testing/selftests/bpf/progs/test_bpf_cookie.c b/tools/testing/selftests/bpf/progs/test_bpf_cookie.c
index c83142b55f47..8f803369ad2d 100644
--- a/tools/testing/selftests/bpf/progs/test_bpf_cookie.c
+++ b/tools/testing/selftests/bpf/progs/test_bpf_cookie.c
@@ -125,7 +125,7 @@ int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
 }
 
 SEC("lsm/file_mprotect")
-int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
+int BPF_PROG(test_int_hook, struct mm_area *vma,
 	     unsigned long reqprot, unsigned long prot, int ret)
 {
 	if (my_tid != (u32)bpf_get_current_pid_tgid())
diff --git a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
index 75dd922e4e9f..aa00a677636b 100644
--- a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
+++ b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
@@ -14,7 +14,7 @@ struct {
 	__uint(max_entries, 8);
 } ringbuf SEC(".maps");
 
-struct vm_area_struct;
+struct mm_area;
 struct bpf_map;
 
 struct buf_context {
@@ -146,7 +146,7 @@ int unsafe_ringbuf_drain(void *unused)
 	return choice_arr[loop_ctx.i];
 }
 
-static __u64 find_vma_cb(struct task_struct *task, struct vm_area_struct *vma, void *data)
+static __u64 find_vma_cb(struct task_struct *task, struct mm_area *vma, void *data)
 {
 	return oob_state_machine(data);
 }
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index 3220f1d28697..b58ebc8ab3b1 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -198,7 +198,7 @@ __bpf_kfunc void bpf_kfunc_nested_release_test(struct sk_buff *ptr)
 {
 }
 
-__bpf_kfunc void bpf_kfunc_trusted_vma_test(struct vm_area_struct *ptr)
+__bpf_kfunc void bpf_kfunc_trusted_vma_test(struct mm_area *ptr)
 {
 }
 
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index b58817938deb..b28cf00b119b 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -154,7 +154,7 @@ int bpf_kfunc_st_ops_test_epilogue(struct st_ops_args *args) __ksym;
 int bpf_kfunc_st_ops_test_pro_epilogue(struct st_ops_args *args) __ksym;
 int bpf_kfunc_st_ops_inc10(struct st_ops_args *args) __ksym;
 
-void bpf_kfunc_trusted_vma_test(struct vm_area_struct *ptr) __ksym;
+void bpf_kfunc_trusted_vma_test(struct mm_area *ptr) __ksym;
 void bpf_kfunc_trusted_task_test(struct task_struct *ptr) __ksym;
 void bpf_kfunc_trusted_num_test(int *ptr) __ksym;
 void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index 11f761769b5b..57d129d16596 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -59,13 +59,13 @@ unsigned long rlimit(unsigned int limit)
 }
 
 /* Helper function to simply allocate a VMA. */
-static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
+static struct mm_area *alloc_vma(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long end,
 					pgoff_t pgoff,
 					vm_flags_t flags)
 {
-	struct vm_area_struct *ret = vm_area_alloc(mm);
+	struct mm_area *ret = vm_area_alloc(mm);
 
 	if (ret == NULL)
 		return NULL;
@@ -80,7 +80,7 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 }
 
 /* Helper function to allocate a VMA and link it to the tree. */
-static int attach_vma(struct mm_struct *mm, struct vm_area_struct *vma)
+static int attach_vma(struct mm_struct *mm, struct mm_area *vma)
 {
 	int res;
 
@@ -91,13 +91,13 @@ static int attach_vma(struct mm_struct *mm, struct vm_area_struct *vma)
 }
 
 /* Helper function to allocate a VMA and link it to the tree. */
-static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
+static struct mm_area *alloc_and_link_vma(struct mm_struct *mm,
 						 unsigned long start,
 						 unsigned long end,
 						 pgoff_t pgoff,
 						 vm_flags_t flags)
 {
-	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, flags);
+	struct mm_area *vma = alloc_vma(mm, start, end, pgoff, flags);
 
 	if (vma == NULL)
 		return NULL;
@@ -118,9 +118,9 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 }
 
 /* Helper function which provides a wrapper around a merge new VMA operation. */
-static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
+static struct mm_area *merge_new(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	/*
 	 * For convenience, get prev and next VMAs. Which the new VMA operation
 	 * requires.
@@ -140,9 +140,9 @@ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
  * Helper function which provides a wrapper around a merge existing VMA
  * operation.
  */
-static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg)
+static struct mm_area *merge_existing(struct vma_merge_struct *vmg)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = vma_merge_existing_range(vmg);
 	if (vma)
@@ -191,13 +191,13 @@ static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
  * Update vmg and the iterator for it and try to merge, otherwise allocate a new
  * VMA, link it to the maple tree and return it.
  */
-static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,
+static struct mm_area *try_merge_new_vma(struct mm_struct *mm,
 						struct vma_merge_struct *vmg,
 						unsigned long start, unsigned long end,
 						pgoff_t pgoff, vm_flags_t flags,
 						bool *was_merged)
 {
-	struct vm_area_struct *merged;
+	struct mm_area *merged;
 
 	vmg_set_range(vmg, start, end, pgoff, flags);
 
@@ -231,7 +231,7 @@ static void reset_dummy_anon_vma(void)
  */
 static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	int count = 0;
 
 	fail_prealloc = false;
@@ -249,7 +249,7 @@ static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi)
 }
 
 /* Helper function to determine if VMA has had vma_start_write() performed. */
-static bool vma_write_started(struct vm_area_struct *vma)
+static bool vma_write_started(struct mm_area *vma)
 {
 	int seq = vma->vm_lock_seq;
 
@@ -261,17 +261,17 @@ static bool vma_write_started(struct vm_area_struct *vma)
 }
 
 /* Helper function providing a dummy vm_ops->close() method.*/
-static void dummy_close(struct vm_area_struct *)
+static void dummy_close(struct mm_area *)
 {
 }
 
 static bool test_simple_merge(void)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, flags);
-	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, flags);
+	struct mm_area *vma_left = alloc_vma(&mm, 0, 0x1000, 0, flags);
+	struct mm_area *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
@@ -301,10 +301,10 @@ static bool test_simple_merge(void)
 
 static bool test_simple_modify(void)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
+	struct mm_area *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 
 	ASSERT_FALSE(attach_vma(&mm, init_vma));
@@ -363,7 +363,7 @@ static bool test_simple_expand(void)
 {
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, flags);
+	struct mm_area *vma = alloc_vma(&mm, 0, 0x1000, 0, flags);
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.vmi = &vmi,
@@ -391,7 +391,7 @@ static bool test_simple_shrink(void)
 {
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
+	struct mm_area *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
 	VMA_ITERATOR(vmi, &mm, 0);
 
 	ASSERT_FALSE(attach_vma(&mm, vma));
@@ -433,7 +433,7 @@ static bool test_merge_new(void)
 		.close = dummy_close,
 	};
 	int count;
-	struct vm_area_struct *vma, *vma_a, *vma_b, *vma_c, *vma_d;
+	struct mm_area *vma, *vma_a, *vma_b, *vma_c, *vma_d;
 	bool merged;
 
 	/*
@@ -616,7 +616,7 @@ static bool test_vma_merge_special_flags(void)
 	vm_flags_t special_flags[] = { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXEDMAP };
 	vm_flags_t all_special_flags = 0;
 	int i;
-	struct vm_area_struct *vma_left, *vma;
+	struct mm_area *vma_left, *vma;
 
 	/* Make sure there aren't new VM_SPECIAL flags. */
 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
@@ -688,7 +688,7 @@ static bool test_vma_merge_with_close(void)
 	const struct vm_operations_struct vm_ops = {
 		.close = dummy_close,
 	};
-	struct vm_area_struct *vma_prev, *vma_next, *vma;
+	struct mm_area *vma_prev, *vma_next, *vma;
 
 	/*
 	 * When merging VMAs we are not permitted to remove any VMA that has a
@@ -894,12 +894,12 @@ static bool test_vma_merge_new_with_close(void)
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
-	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, flags);
+	struct mm_area *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
+	struct mm_area *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, flags);
 	const struct vm_operations_struct vm_ops = {
 		.close = dummy_close,
 	};
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	/*
 	 * We should allow the partial merge of a proposed new VMA if the
@@ -945,7 +945,7 @@ static bool test_merge_existing(void)
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
-	struct vm_area_struct *vma, *vma_prev, *vma_next;
+	struct mm_area *vma, *vma_prev, *vma_next;
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
@@ -1175,7 +1175,7 @@ static bool test_anon_vma_non_mergeable(void)
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
-	struct vm_area_struct *vma, *vma_prev, *vma_next;
+	struct mm_area *vma, *vma_prev, *vma_next;
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
@@ -1290,7 +1290,7 @@ static bool test_dup_anon_vma(void)
 	struct anon_vma_chain dummy_anon_vma_chain = {
 		.anon_vma = &dummy_anon_vma,
 	};
-	struct vm_area_struct *vma_prev, *vma_next, *vma;
+	struct mm_area *vma_prev, *vma_next, *vma;
 
 	reset_dummy_anon_vma();
 
@@ -1447,7 +1447,7 @@ static bool test_vmi_prealloc_fail(void)
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	struct vm_area_struct *vma_prev, *vma;
+	struct mm_area *vma_prev, *vma;
 
 	/*
 	 * We are merging vma into prev, with vma possessing an anon_vma, which
@@ -1507,7 +1507,7 @@ static bool test_merge_extend(void)
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0x1000);
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 
 	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, flags);
 	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags);
@@ -1538,7 +1538,7 @@ static bool test_copy_vma(void)
 	struct mm_struct mm = {};
 	bool need_locks = false;
 	VMA_ITERATOR(vmi, &mm, 0);
-	struct vm_area_struct *vma, *vma_new, *vma_next;
+	struct mm_area *vma, *vma_new, *vma_next;
 
 	/* Move backwards and do not merge. */
 
@@ -1570,7 +1570,7 @@ static bool test_expand_only_mode(void)
 	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
-	struct vm_area_struct *vma_prev, *vma;
+	struct mm_area *vma_prev, *vma;
 	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, flags, 5);
 
 	/*
@@ -1609,7 +1609,7 @@ static bool test_mmap_region_basic(void)
 {
 	struct mm_struct mm = {};
 	unsigned long addr;
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, &mm, 0);
 
 	current->mm = &mm;
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 572ab2cea763..acb90a6ff98a 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -235,7 +235,7 @@ struct file {
 
 #define VMA_LOCK_OFFSET	0x40000000
 
-struct vm_area_struct {
+struct mm_area {
 	/* The first cache line has the info for VMA tree walking. */
 
 	union {
@@ -337,27 +337,27 @@ struct vm_area_struct {
 struct vm_fault {};
 
 struct vm_operations_struct {
-	void (*open)(struct vm_area_struct * area);
+	void (*open)(struct mm_area * area);
 	/**
 	 * @close: Called when the VMA is being removed from the MM.
 	 * Context: User context.  May sleep.  Caller holds mmap_lock.
 	 */
-	void (*close)(struct vm_area_struct * area);
+	void (*close)(struct mm_area * area);
 	/* Called any time before splitting to check if it's allowed */
-	int (*may_split)(struct vm_area_struct *area, unsigned long addr);
-	int (*mremap)(struct vm_area_struct *area);
+	int (*may_split)(struct mm_area *area, unsigned long addr);
+	int (*mremap)(struct mm_area *area);
 	/*
 	 * Called by mprotect() to make driver-specific permission
 	 * checks before mprotect() is finalised.   The VMA must not
 	 * be modified.  Returns 0 if mprotect() can proceed.
 	 */
-	int (*mprotect)(struct vm_area_struct *vma, unsigned long start,
+	int (*mprotect)(struct mm_area *vma, unsigned long start,
 			unsigned long end, unsigned long newflags);
 	vm_fault_t (*fault)(struct vm_fault *vmf);
 	vm_fault_t (*huge_fault)(struct vm_fault *vmf, unsigned int order);
 	vm_fault_t (*map_pages)(struct vm_fault *vmf,
 			pgoff_t start_pgoff, pgoff_t end_pgoff);
-	unsigned long (*pagesize)(struct vm_area_struct * area);
+	unsigned long (*pagesize)(struct mm_area * area);
 
 	/* notification that a previously read-only page is about to become
 	 * writable, if an error is returned it will cause a SIGBUS */
@@ -370,13 +370,13 @@ struct vm_operations_struct {
 	 * for use by special VMAs. See also generic_access_phys() for a generic
 	 * implementation useful for any iomem mapping.
 	 */
-	int (*access)(struct vm_area_struct *vma, unsigned long addr,
+	int (*access)(struct mm_area *vma, unsigned long addr,
 		      void *buf, int len, int write);
 
 	/* Called by the /proc/PID/maps code to ask the vma whether it
 	 * has a special name.  Returning non-NULL will also cause this
 	 * vma to be dumped unconditionally. */
-	const char *(*name)(struct vm_area_struct *vma);
+	const char *(*name)(struct mm_area *vma);
 
 #ifdef CONFIG_NUMA
 	/*
@@ -386,7 +386,7 @@ struct vm_operations_struct {
 	 * install a MPOL_DEFAULT policy, nor the task or system default
 	 * mempolicy.
 	 */
-	int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
+	int (*set_policy)(struct mm_area *vma, struct mempolicy *new);
 
 	/*
 	 * get_policy() op must add reference [mpol_get()] to any policy at
@@ -398,7 +398,7 @@ struct vm_operations_struct {
 	 * must return NULL--i.e., do not "fallback" to task or system default
 	 * policy.
 	 */
-	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
+	struct mempolicy *(*get_policy)(struct mm_area *vma,
 					unsigned long addr, pgoff_t *ilx);
 #endif
 	/*
@@ -406,7 +406,7 @@ struct vm_operations_struct {
 	 * page for @addr.  This is useful if the default behavior
 	 * (using pte_page()) would not find the correct page.
 	 */
-	struct page *(*find_special_page)(struct vm_area_struct *vma,
+	struct page *(*find_special_page)(struct mm_area *vma,
 					  unsigned long addr);
 };
 
@@ -442,12 +442,12 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
 		(VM_SHARED | VM_MAYWRITE);
 }
 
-static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
+static inline bool vma_is_shared_maywrite(struct mm_area *vma)
 {
 	return is_shared_maywrite(vma->vm_flags);
 }
 
-static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
+static inline struct mm_area *vma_next(struct vma_iterator *vmi)
 {
 	/*
 	 * Uses mas_find() to get the first VMA when the iterator starts.
@@ -461,25 +461,25 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
  * assertions should be made either under mmap_write_lock or when the object
  * has been isolated under mmap_write_lock, ensuring no competing writers.
  */
-static inline void vma_assert_attached(struct vm_area_struct *vma)
+static inline void vma_assert_attached(struct mm_area *vma)
 {
 	WARN_ON_ONCE(!refcount_read(&vma->vm_refcnt));
 }
 
-static inline void vma_assert_detached(struct vm_area_struct *vma)
+static inline void vma_assert_detached(struct mm_area *vma)
 {
 	WARN_ON_ONCE(refcount_read(&vma->vm_refcnt));
 }
 
-static inline void vma_assert_write_locked(struct vm_area_struct *);
-static inline void vma_mark_attached(struct vm_area_struct *vma)
+static inline void vma_assert_write_locked(struct mm_area *);
+static inline void vma_mark_attached(struct mm_area *vma)
 {
 	vma_assert_write_locked(vma);
 	vma_assert_detached(vma);
 	refcount_set_release(&vma->vm_refcnt, 1);
 }
 
-static inline void vma_mark_detached(struct vm_area_struct *vma)
+static inline void vma_mark_detached(struct mm_area *vma)
 {
 	vma_assert_write_locked(vma);
 	vma_assert_attached(vma);
@@ -496,7 +496,7 @@ extern const struct vm_operations_struct vma_dummy_vm_ops;
 
 extern unsigned long rlimit(unsigned int limit);
 
-static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
+static inline void vma_init(struct mm_area *vma, struct mm_struct *mm)
 {
 	memset(vma, 0, sizeof(*vma));
 	vma->vm_mm = mm;
@@ -505,9 +505,9 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 	vma->vm_lock_seq = UINT_MAX;
 }
 
-static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
+static inline struct mm_area *vm_area_alloc(struct mm_struct *mm)
 {
-	struct vm_area_struct *vma = calloc(1, sizeof(struct vm_area_struct));
+	struct mm_area *vma = calloc(1, sizeof(struct mm_area));
 
 	if (!vma)
 		return NULL;
@@ -517,9 +517,9 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
 	return vma;
 }
 
-static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
+static inline struct mm_area *vm_area_dup(struct mm_area *orig)
 {
-	struct vm_area_struct *new = calloc(1, sizeof(struct vm_area_struct));
+	struct mm_area *new = calloc(1, sizeof(struct mm_area));
 
 	if (!new)
 		return NULL;
@@ -576,7 +576,7 @@ static inline void mapping_allow_writable(struct address_space *mapping)
 	atomic_inc(&mapping->i_mmap_writable);
 }
 
-static inline void vma_set_range(struct vm_area_struct *vma,
+static inline void vma_set_range(struct mm_area *vma,
 				 unsigned long start, unsigned long end,
 				 pgoff_t pgoff)
 {
@@ -586,7 +586,7 @@ static inline void vma_set_range(struct vm_area_struct *vma,
 }
 
 static inline
-struct vm_area_struct *vma_find(struct vma_iterator *vmi, unsigned long max)
+struct mm_area *vma_find(struct vma_iterator *vmi, unsigned long max)
 {
 	return mas_find(&vmi->mas, max - 1);
 }
@@ -603,7 +603,7 @@ static inline int vma_iter_clear_gfp(struct vma_iterator *vmi,
 }
 
 static inline void mmap_assert_locked(struct mm_struct *);
-static inline struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
+static inline struct mm_area *find_vma_intersection(struct mm_struct *mm,
 						unsigned long start_addr,
 						unsigned long end_addr)
 {
@@ -614,12 +614,12 @@ static inline struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
 }
 
 static inline
-struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
+struct mm_area *vma_lookup(struct mm_struct *mm, unsigned long addr)
 {
 	return mtree_load(&mm->mm_mt, addr);
 }
 
-static inline struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
+static inline struct mm_area *vma_prev(struct vma_iterator *vmi)
 {
 	return mas_prev(&vmi->mas, 0);
 }
@@ -629,7 +629,7 @@ static inline void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
 	mas_set(&vmi->mas, addr);
 }
 
-static inline bool vma_is_anonymous(struct vm_area_struct *vma)
+static inline bool vma_is_anonymous(struct mm_area *vma)
 {
 	return !vma->vm_ops;
 }
@@ -638,11 +638,11 @@ static inline bool vma_is_anonymous(struct vm_area_struct *vma)
 #define vma_iter_load(vmi) \
 	mas_walk(&(vmi)->mas)
 
-static inline struct vm_area_struct *
+static inline struct mm_area *
 find_vma_prev(struct mm_struct *mm, unsigned long addr,
-			struct vm_area_struct **pprev)
+			struct mm_area **pprev)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	VMA_ITERATOR(vmi, mm, addr);
 
 	vma = vma_iter_load(&vmi);
@@ -662,12 +662,12 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
 
 /* Stubbed functions. */
 
-static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
+static inline struct anon_vma_name *anon_vma_name(struct mm_area *vma)
 {
 	return NULL;
 }
 
-static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
+static inline bool is_mergeable_vm_userfaultfd_ctx(struct mm_area *vma,
 					struct vm_userfaultfd_ctx vm_ctx)
 {
 	return true;
@@ -683,7 +683,7 @@ static inline void might_sleep(void)
 {
 }
 
-static inline unsigned long vma_pages(struct vm_area_struct *vma)
+static inline unsigned long vma_pages(struct mm_area *vma)
 {
 	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 }
@@ -696,7 +696,7 @@ static inline void mpol_put(struct mempolicy *)
 {
 }
 
-static inline void vm_area_free(struct vm_area_struct *vma)
+static inline void vm_area_free(struct mm_area *vma)
 {
 	free(vma);
 }
@@ -718,7 +718,7 @@ static inline void update_hiwater_vm(struct mm_struct *)
 }
 
 static inline void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
-		      struct vm_area_struct *vma, unsigned long start_addr,
+		      struct mm_area *vma, unsigned long start_addr,
 		      unsigned long end_addr, unsigned long tree_end,
 		      bool mm_wr_locked)
 {
@@ -732,7 +732,7 @@ static inline void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
 }
 
 static inline void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
-		   struct vm_area_struct *vma, unsigned long floor,
+		   struct mm_area *vma, unsigned long floor,
 		   unsigned long ceiling, bool mm_wr_locked)
 {
 	(void)tlb;
@@ -760,12 +760,12 @@ static inline struct file *get_file(struct file *f)
 	return f;
 }
 
-static inline int vma_dup_policy(struct vm_area_struct *, struct vm_area_struct *)
+static inline int vma_dup_policy(struct mm_area *, struct mm_area *)
 {
 	return 0;
 }
 
-static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
+static inline int anon_vma_clone(struct mm_area *dst, struct mm_area *src)
 {
 	/* For testing purposes. We indicate that an anon_vma has been cloned. */
 	if (src->anon_vma != NULL) {
@@ -776,16 +776,16 @@ static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_stru
 	return 0;
 }
 
-static inline void vma_start_write(struct vm_area_struct *vma)
+static inline void vma_start_write(struct mm_area *vma)
 {
 	/* Used to indicate to tests that a write operation has begun. */
 	vma->vm_lock_seq++;
 }
 
-static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
+static inline void vma_adjust_trans_huge(struct mm_area *vma,
 					 unsigned long start,
 					 unsigned long end,
-					 struct vm_area_struct *next)
+					 struct mm_area *next)
 {
 	(void)vma;
 	(void)start;
@@ -799,7 +799,7 @@ static inline void vma_iter_free(struct vma_iterator *vmi)
 }
 
 static inline
-struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
+struct mm_area *vma_iter_next_range(struct vma_iterator *vmi)
 {
 	return mas_next_range(&vmi->mas, ULONG_MAX);
 }
@@ -808,12 +808,12 @@ static inline void vm_acct_memory(long pages)
 {
 }
 
-static inline void vma_interval_tree_insert(struct vm_area_struct *,
+static inline void vma_interval_tree_insert(struct mm_area *,
 					    struct rb_root_cached *)
 {
 }
 
-static inline void vma_interval_tree_remove(struct vm_area_struct *,
+static inline void vma_interval_tree_remove(struct mm_area *,
 					    struct rb_root_cached *)
 {
 }
@@ -832,11 +832,11 @@ static inline void anon_vma_interval_tree_remove(struct anon_vma_chain*,
 {
 }
 
-static inline void uprobe_mmap(struct vm_area_struct *)
+static inline void uprobe_mmap(struct mm_area *)
 {
 }
 
-static inline void uprobe_munmap(struct vm_area_struct *vma,
+static inline void uprobe_munmap(struct mm_area *vma,
 				 unsigned long start, unsigned long end)
 {
 	(void)vma;
@@ -852,11 +852,11 @@ static inline void anon_vma_lock_write(struct anon_vma *)
 {
 }
 
-static inline void vma_assert_write_locked(struct vm_area_struct *)
+static inline void vma_assert_write_locked(struct mm_area *)
 {
 }
 
-static inline void unlink_anon_vmas(struct vm_area_struct *vma)
+static inline void unlink_anon_vmas(struct mm_area *vma)
 {
 	/* For testing purposes, indicate that the anon_vma was unlinked. */
 	vma->anon_vma->was_unlinked = true;
@@ -870,12 +870,12 @@ static inline void i_mmap_unlock_write(struct address_space *)
 {
 }
 
-static inline void anon_vma_merge(struct vm_area_struct *,
-				  struct vm_area_struct *)
+static inline void anon_vma_merge(struct mm_area *,
+				  struct mm_area *)
 {
 }
 
-static inline int userfaultfd_unmap_prep(struct vm_area_struct *vma,
+static inline int userfaultfd_unmap_prep(struct mm_area *vma,
 					 unsigned long start,
 					 unsigned long end,
 					 struct list_head *unmaps)
@@ -934,7 +934,7 @@ static inline bool mpol_equal(struct mempolicy *, struct mempolicy *)
 	return true;
 }
 
-static inline void khugepaged_enter_vma(struct vm_area_struct *vma,
+static inline void khugepaged_enter_vma(struct mm_area *vma,
 			  unsigned long vm_flags)
 {
 	(void)vma;
@@ -946,17 +946,17 @@ static inline bool mapping_can_writeback(struct address_space *)
 	return true;
 }
 
-static inline bool is_vm_hugetlb_page(struct vm_area_struct *)
+static inline bool is_vm_hugetlb_page(struct mm_area *)
 {
 	return false;
 }
 
-static inline bool vma_soft_dirty_enabled(struct vm_area_struct *)
+static inline bool vma_soft_dirty_enabled(struct mm_area *)
 {
 	return false;
 }
 
-static inline bool userfaultfd_wp(struct vm_area_struct *)
+static inline bool userfaultfd_wp(struct mm_area *)
 {
 	return false;
 }
@@ -998,63 +998,63 @@ static inline bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long)
 	return true;
 }
 
-static inline void vm_flags_init(struct vm_area_struct *vma,
+static inline void vm_flags_init(struct mm_area *vma,
 				 vm_flags_t flags)
 {
 	vma->__vm_flags = flags;
 }
 
-static inline void vm_flags_set(struct vm_area_struct *vma,
+static inline void vm_flags_set(struct mm_area *vma,
 				vm_flags_t flags)
 {
 	vma_start_write(vma);
 	vma->__vm_flags |= flags;
 }
 
-static inline void vm_flags_clear(struct vm_area_struct *vma,
+static inline void vm_flags_clear(struct mm_area *vma,
 				  vm_flags_t flags)
 {
 	vma_start_write(vma);
 	vma->__vm_flags &= ~flags;
 }
 
-static inline int call_mmap(struct file *, struct vm_area_struct *)
+static inline int call_mmap(struct file *, struct mm_area *)
 {
 	return 0;
 }
 
-static inline int shmem_zero_setup(struct vm_area_struct *)
+static inline int shmem_zero_setup(struct mm_area *)
 {
 	return 0;
 }
 
-static inline void vma_set_anonymous(struct vm_area_struct *vma)
+static inline void vma_set_anonymous(struct mm_area *vma)
 {
 	vma->vm_ops = NULL;
 }
 
-static inline void ksm_add_vma(struct vm_area_struct *)
+static inline void ksm_add_vma(struct mm_area *)
 {
 }
 
-static inline void perf_event_mmap(struct vm_area_struct *)
+static inline void perf_event_mmap(struct mm_area *)
 {
 }
 
-static inline bool vma_is_dax(struct vm_area_struct *)
+static inline bool vma_is_dax(struct mm_area *)
 {
 	return false;
 }
 
-static inline struct vm_area_struct *get_gate_vma(struct mm_struct *)
+static inline struct mm_area *get_gate_vma(struct mm_struct *)
 {
 	return NULL;
 }
 
-bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
+bool vma_wants_writenotify(struct mm_area *vma, pgprot_t vm_page_prot);
 
 /* Update vma->vm_page_prot to reflect vma->vm_flags. */
-static inline void vma_set_page_prot(struct vm_area_struct *vma)
+static inline void vma_set_page_prot(struct mm_area *vma)
 {
 	unsigned long vm_flags = vma->vm_flags;
 	pgprot_t vm_page_prot;
@@ -1076,16 +1076,16 @@ static inline bool arch_validate_flags(unsigned long)
 	return true;
 }
 
-static inline void vma_close(struct vm_area_struct *)
+static inline void vma_close(struct mm_area *)
 {
 }
 
-static inline int mmap_file(struct file *, struct vm_area_struct *)
+static inline int mmap_file(struct file *, struct mm_area *)
 {
 	return 0;
 }
 
-static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+static inline unsigned long stack_guard_start_gap(struct mm_area *vma)
 {
 	if (vma->vm_flags & VM_GROWSDOWN)
 		return stack_guard_gap;
@@ -1097,7 +1097,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
 	return 0;
 }
 
-static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_start_gap(struct mm_area *vma)
 {
 	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
@@ -1108,7 +1108,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 	return vm_start;
 }
 
-static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_end_gap(struct mm_area *vma)
 {
 	unsigned long vm_end = vma->vm_end;
 
@@ -1126,7 +1126,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
 	return 0;
 }
 
-static inline bool vma_is_accessible(struct vm_area_struct *vma)
+static inline bool vma_is_accessible(struct mm_area *vma)
 {
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
@@ -1153,7 +1153,7 @@ static inline bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
 	return locked_pages <= limit_pages;
 }
 
-static inline int __anon_vma_prepare(struct vm_area_struct *vma)
+static inline int __anon_vma_prepare(struct mm_area *vma)
 {
 	struct anon_vma *anon_vma = calloc(1, sizeof(struct anon_vma));
 
@@ -1166,7 +1166,7 @@ static inline int __anon_vma_prepare(struct vm_area_struct *vma)
 	return 0;
 }
 
-static inline int anon_vma_prepare(struct vm_area_struct *vma)
+static inline int anon_vma_prepare(struct mm_area *vma)
 {
 	if (likely(vma->anon_vma))
 		return 0;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..419e641a79a8 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2618,7 +2618,7 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_is_visible_gfn);
 
 unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	unsigned long addr, size;
 
 	size = PAGE_SIZE;
@@ -2860,7 +2860,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
 	return npages;
 }
 
-static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
+static bool vma_is_valid(struct mm_area *vma, bool write_fault)
 {
 	if (unlikely(!(vma->vm_flags & VM_READ)))
 		return false;
@@ -2871,7 +2871,7 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
 	return true;
 }
 
-static int hva_to_pfn_remapped(struct vm_area_struct *vma,
+static int hva_to_pfn_remapped(struct mm_area *vma,
 			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
 {
 	struct follow_pfnmap_args args = { .vma = vma, .address = kfp->hva };
@@ -2919,7 +2919,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 
 kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
 {
-	struct vm_area_struct *vma;
+	struct mm_area *vma;
 	kvm_pfn_t pfn;
 	int npages, r;
 
@@ -3997,7 +3997,7 @@ static const struct vm_operations_struct kvm_vcpu_vm_ops = {
 	.fault = kvm_vcpu_fault,
 };
 
-static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma)
+static int kvm_vcpu_mmap(struct file *file, struct mm_area *vma)
 {
 	struct kvm_vcpu *vcpu = file->private_data;
 	unsigned long pages = vma_pages(vma);
@@ -4613,7 +4613,7 @@ static long kvm_vcpu_compat_ioctl(struct file *filp,
 }
 #endif
 
-static int kvm_device_mmap(struct file *filp, struct vm_area_struct *vma)
+static int kvm_device_mmap(struct file *filp, struct mm_area *vma)
 {
 	struct kvm_device *dev = filp->private_data;
 
-- 
2.47.2
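
The change is purely mechanical; distilling the vma_is_anonymous() hunk
above into a before/after sketch shows that only the type name changes,
while the function body and the customary "vma" variable name are left
alone:

	/* before */
	static inline bool vma_is_anonymous(struct vm_area_struct *vma)
	{
		return !vma->vm_ops;	/* anonymous VMAs have no vm_ops */
	}

	/* after */
	static inline bool vma_is_anonymous(struct mm_area *vma)
	{
		return !vma->vm_ops;	/* body is unchanged by the rename */
	}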




* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 12:25 [PATCH] mm: Rename vm_area_struct to mm_area Matthew Wilcox (Oracle)
@ 2025-04-01 12:35 ` Lorenzo Stoakes
  2025-04-01 14:17 ` Liam R. Howlett
  2025-04-01 15:11 ` David Hildenbrand
  2 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Stoakes @ 2025-04-01 12:35 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn, linux-mm

On Tue, Apr 01, 2025 at 01:25:51PM +0100, Matthew Wilcox (Oracle) wrote:
> We don't need to put "_struct" on the end of the name.  It's obviously
> a struct.  Just look at the word "struct" before the name.  The acronym
> "vm" tends to mean "virtual machine" rather than "virtual memory" these
> days, so use "mm_area" instead of "vm_area".  I decided not to rename
> the variables (typically "vma") of type "struct mm_area *" as that would
> be a fair bit more disruptive.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Hmm, while this is a quick and small patch (which I appreciate by the way,
thanks!), I must say I'm not sure why we are missing the opportunity here
to bring a little taste into the kernel.

Why 'mm_area' when it can be 'mmmmmm_area' [only because of course we can't
put a comma there, thanks to the C programming language, grr]?

Or perhaps... mmmmmm_tasty?

Regardless:

Rebabkaed-by: Lorenzimil Stoakeska <absolute_legend_1981_the_bestest@aol.com>

> ---
> Generated against next-20250401.
>
>  Documentation/bpf/prog_lsm.rst                |   6 +-
>  Documentation/core-api/cachetlb.rst           |  18 +-
>  Documentation/core-api/dma-api.rst            |   4 +-
>  Documentation/driver-api/uio-howto.rst        |   2 +-
>  Documentation/driver-api/vfio.rst             |   2 +-
>  Documentation/filesystems/locking.rst         |  12 +-
>  Documentation/filesystems/proc.rst            |   2 +-
>  Documentation/filesystems/vfs.rst             |   2 +-
>  Documentation/gpu/drm-mm.rst                  |   4 +-
>  Documentation/mm/hmm.rst                      |   2 +-
>  Documentation/mm/hugetlbfs_reserv.rst         |  12 +-
>  Documentation/mm/process_addrs.rst            |   6 +-
>  .../translations/zh_CN/core-api/cachetlb.rst  |  18 +-
>  Documentation/translations/zh_CN/mm/hmm.rst   |   2 +-
>  .../zh_CN/mm/hugetlbfs_reserv.rst             |  12 +-
>  .../userspace-api/media/conf_nitpick.py       |   2 +-
>  arch/alpha/include/asm/cacheflush.h           |   6 +-
>  arch/alpha/include/asm/machvec.h              |   2 +-
>  arch/alpha/include/asm/pci.h                  |   2 +-
>  arch/alpha/include/asm/pgtable.h              |   6 +-
>  arch/alpha/include/asm/tlbflush.h             |  10 +-
>  arch/alpha/kernel/pci-sysfs.c                 |  16 +-
>  arch/alpha/kernel/smp.c                       |   8 +-
>  arch/alpha/mm/fault.c                         |   2 +-
>  arch/arc/include/asm/hugepage.h               |   4 +-
>  arch/arc/include/asm/page.h                   |   4 +-
>  arch/arc/include/asm/pgtable-bits-arcv2.h     |   2 +-
>  arch/arc/include/asm/tlbflush.h               |  12 +-
>  arch/arc/kernel/arc_hostlink.c                |   2 +-
>  arch/arc/kernel/troubleshoot.c                |   2 +-
>  arch/arc/mm/cache.c                           |   2 +-
>  arch/arc/mm/fault.c                           |   2 +-
>  arch/arc/mm/mmap.c                            |   2 +-
>  arch/arc/mm/tlb.c                             |  20 +-
>  arch/arm/include/asm/cacheflush.h             |  14 +-
>  arch/arm/include/asm/page.h                   |  20 +-
>  arch/arm/include/asm/tlbflush.h               |  28 +-
>  arch/arm/kernel/asm-offsets.c                 |   4 +-
>  arch/arm/kernel/process.c                     |  10 +-
>  arch/arm/kernel/smp_tlb.c                     |   6 +-
>  arch/arm/kernel/vdso.c                        |   4 +-
>  arch/arm/mach-rpc/ecard.c                     |   2 +-
>  arch/arm/mm/cache-v6.S                        |   2 +-
>  arch/arm/mm/cache-v7.S                        |   2 +-
>  arch/arm/mm/cache-v7m.S                       |   2 +-
>  arch/arm/mm/copypage-fa.c                     |   2 +-
>  arch/arm/mm/copypage-feroceon.c               |   2 +-
>  arch/arm/mm/copypage-v4mc.c                   |   2 +-
>  arch/arm/mm/copypage-v4wb.c                   |   2 +-
>  arch/arm/mm/copypage-v4wt.c                   |   2 +-
>  arch/arm/mm/copypage-v6.c                     |   4 +-
>  arch/arm/mm/copypage-xsc3.c                   |   2 +-
>  arch/arm/mm/copypage-xscale.c                 |   2 +-
>  arch/arm/mm/dma-mapping.c                     |   2 +-
>  arch/arm/mm/fault-armv.c                      |  10 +-
>  arch/arm/mm/fault.c                           |   2 +-
>  arch/arm/mm/flush.c                           |  14 +-
>  arch/arm/mm/mmap.c                            |   4 +-
>  arch/arm/mm/nommu.c                           |   2 +-
>  arch/arm/mm/tlb-v6.S                          |   2 +-
>  arch/arm/mm/tlb-v7.S                          |   2 +-
>  arch/arm/mm/tlb.c                             |  12 +-
>  arch/arm/xen/enlighten.c                      |   2 +-
>  arch/arm64/include/asm/cacheflush.h           |   2 +-
>  arch/arm64/include/asm/hugetlb.h              |  10 +-
>  arch/arm64/include/asm/mmu_context.h          |   2 +-
>  arch/arm64/include/asm/page.h                 |   6 +-
>  arch/arm64/include/asm/pgtable.h              |  38 +--
>  arch/arm64/include/asm/pkeys.h                |   4 +-
>  arch/arm64/include/asm/tlb.h                  |   2 +-
>  arch/arm64/include/asm/tlbflush.h             |   8 +-
>  arch/arm64/kernel/mte.c                       |   2 +-
>  arch/arm64/kernel/vdso.c                      |   4 +-
>  arch/arm64/kvm/mmu.c                          |  10 +-
>  arch/arm64/mm/contpte.c                       |  10 +-
>  arch/arm64/mm/copypage.c                      |   2 +-
>  arch/arm64/mm/fault.c                         |  10 +-
>  arch/arm64/mm/flush.c                         |   4 +-
>  arch/arm64/mm/hugetlbpage.c                   |  14 +-
>  arch/arm64/mm/mmu.c                           |   4 +-
>  arch/csky/abiv1/cacheflush.c                  |   4 +-
>  arch/csky/abiv1/inc/abi/cacheflush.h          |   4 +-
>  arch/csky/abiv1/mmap.c                        |   2 +-
>  arch/csky/abiv2/cacheflush.c                  |   2 +-
>  arch/csky/include/asm/page.h                  |   2 +-
>  arch/csky/include/asm/pgtable.h               |   2 +-
>  arch/csky/include/asm/tlbflush.h              |   4 +-
>  arch/csky/kernel/vdso.c                       |   2 +-
>  arch/csky/mm/fault.c                          |   4 +-
>  arch/csky/mm/tlb.c                            |   4 +-
>  arch/hexagon/include/asm/cacheflush.h         |   4 +-
>  arch/hexagon/include/asm/tlbflush.h           |   4 +-
>  arch/hexagon/kernel/vdso.c                    |   4 +-
>  arch/hexagon/mm/cache.c                       |   2 +-
>  arch/hexagon/mm/vm_fault.c                    |   2 +-
>  arch/hexagon/mm/vm_tlb.c                      |   4 +-
>  arch/loongarch/include/asm/hugetlb.h          |   4 +-
>  arch/loongarch/include/asm/page.h             |   4 +-
>  arch/loongarch/include/asm/pgtable.h          |   8 +-
>  arch/loongarch/include/asm/tlb.h              |   2 +-
>  arch/loongarch/include/asm/tlbflush.h         |   8 +-
>  arch/loongarch/kernel/smp.c                   |   6 +-
>  arch/loongarch/kernel/vdso.c                  |   4 +-
>  arch/loongarch/mm/fault.c                     |   2 +-
>  arch/loongarch/mm/hugetlbpage.c               |   2 +-
>  arch/loongarch/mm/init.c                      |   2 +-
>  arch/loongarch/mm/mmap.c                      |   2 +-
>  arch/loongarch/mm/tlb.c                       |   8 +-
>  arch/m68k/include/asm/cacheflush_mm.h         |  10 +-
>  arch/m68k/include/asm/pgtable_mm.h            |   2 +-
>  arch/m68k/include/asm/tlbflush.h              |  12 +-
>  arch/m68k/kernel/sys_m68k.c                   |   2 +-
>  arch/m68k/mm/cache.c                          |   2 +-
>  arch/m68k/mm/fault.c                          |   2 +-
>  arch/microblaze/include/asm/cacheflush.h      |   2 +-
>  arch/microblaze/include/asm/pgtable.h         |   4 +-
>  arch/microblaze/include/asm/tlbflush.h        |   4 +-
>  arch/microblaze/mm/fault.c                    |   2 +-
>  arch/mips/alchemy/common/setup.c              |   2 +-
>  arch/mips/include/asm/cacheflush.h            |  10 +-
>  arch/mips/include/asm/hugetlb.h               |   4 +-
>  arch/mips/include/asm/page.h                  |   4 +-
>  arch/mips/include/asm/pgtable.h               |  14 +-
>  arch/mips/include/asm/tlbflush.h              |   8 +-
>  arch/mips/kernel/smp.c                        |   6 +-
>  arch/mips/kernel/vdso.c                       |   2 +-
>  arch/mips/mm/c-octeon.c                       |   6 +-
>  arch/mips/mm/c-r3k.c                          |   4 +-
>  arch/mips/mm/c-r4k.c                          |  10 +-
>  arch/mips/mm/cache.c                          |   4 +-
>  arch/mips/mm/fault.c                          |   2 +-
>  arch/mips/mm/hugetlbpage.c                    |   2 +-
>  arch/mips/mm/init.c                           |   6 +-
>  arch/mips/mm/mmap.c                           |   2 +-
>  arch/mips/mm/tlb-r3k.c                        |   6 +-
>  arch/mips/mm/tlb-r4k.c                        |   8 +-
>  arch/mips/vdso/genvdso.c                      |   2 +-
>  arch/nios2/include/asm/cacheflush.h           |  10 +-
>  arch/nios2/include/asm/pgtable.h              |   2 +-
>  arch/nios2/include/asm/tlbflush.h             |   6 +-
>  arch/nios2/kernel/sys_nios2.c                 |   2 +-
>  arch/nios2/mm/cacheflush.c                    |  14 +-
>  arch/nios2/mm/fault.c                         |   2 +-
>  arch/nios2/mm/init.c                          |   4 +-
>  arch/nios2/mm/tlb.c                           |   4 +-
>  arch/openrisc/include/asm/pgtable.h           |   8 +-
>  arch/openrisc/include/asm/tlbflush.h          |   8 +-
>  arch/openrisc/kernel/smp.c                    |   4 +-
>  arch/openrisc/mm/cache.c                      |   2 +-
>  arch/openrisc/mm/fault.c                      |   2 +-
>  arch/openrisc/mm/tlb.c                        |   4 +-
>  arch/parisc/include/asm/cacheflush.h          |  12 +-
>  arch/parisc/include/asm/hugetlb.h             |   4 +-
>  arch/parisc/include/asm/page.h                |   4 +-
>  arch/parisc/include/asm/pgtable.h             |   6 +-
>  arch/parisc/include/asm/tlbflush.h            |   2 +-
>  arch/parisc/kernel/cache.c                    |  30 +-
>  arch/parisc/kernel/sys_parisc.c               |   2 +-
>  arch/parisc/kernel/traps.c                    |   2 +-
>  arch/parisc/kernel/vdso.c                     |   4 +-
>  arch/parisc/mm/fault.c                        |   6 +-
>  arch/parisc/mm/hugetlbpage.c                  |   4 +-
>  arch/powerpc/include/asm/book3s/32/pgtable.h  |   2 +-
>  arch/powerpc/include/asm/book3s/32/tlbflush.h |   8 +-
>  arch/powerpc/include/asm/book3s/64/hash-4k.h  |   2 +-
>  arch/powerpc/include/asm/book3s/64/hash-64k.h |   6 +-
>  arch/powerpc/include/asm/book3s/64/hugetlb.h  |  14 +-
>  .../include/asm/book3s/64/pgtable-64k.h       |   2 +-
>  arch/powerpc/include/asm/book3s/64/pgtable.h  |  30 +-
>  arch/powerpc/include/asm/book3s/64/radix.h    |   6 +-
>  .../include/asm/book3s/64/tlbflush-radix.h    |  14 +-
>  arch/powerpc/include/asm/book3s/64/tlbflush.h |  14 +-
>  arch/powerpc/include/asm/cacheflush.h         |   2 +-
>  arch/powerpc/include/asm/hugetlb.h            |   6 +-
>  arch/powerpc/include/asm/mmu_context.h        |   4 +-
>  .../include/asm/nohash/32/hugetlb-8xx.h       |   2 +-
>  arch/powerpc/include/asm/nohash/32/pte-8xx.h  |   2 +-
>  .../powerpc/include/asm/nohash/hugetlb-e500.h |   2 +-
>  arch/powerpc/include/asm/nohash/pgtable.h     |   4 +-
>  arch/powerpc/include/asm/nohash/tlbflush.h    |  10 +-
>  arch/powerpc/include/asm/page.h               |   2 +-
>  arch/powerpc/include/asm/pci.h                |   4 +-
>  arch/powerpc/include/asm/pgtable.h            |   6 +-
>  arch/powerpc/include/asm/pkeys.h              |   6 +-
>  arch/powerpc/include/asm/vas.h                |   2 +-
>  arch/powerpc/kernel/pci-common.c              |   4 +-
>  arch/powerpc/kernel/proc_powerpc.c            |   2 +-
>  arch/powerpc/kernel/vdso.c                    |  10 +-
>  arch/powerpc/kvm/book3s_64_vio.c              |   2 +-
>  arch/powerpc/kvm/book3s_hv.c                  |   2 +-
>  arch/powerpc/kvm/book3s_hv_uvmem.c            |  16 +-
>  arch/powerpc/kvm/book3s_xive_native.c         |   6 +-
>  arch/powerpc/mm/book3s32/mmu.c                |   2 +-
>  arch/powerpc/mm/book3s32/tlb.c                |   4 +-
>  arch/powerpc/mm/book3s64/hash_pgtable.c       |   2 +-
>  arch/powerpc/mm/book3s64/hash_utils.c         |   2 +-
>  arch/powerpc/mm/book3s64/hugetlbpage.c        |   4 +-
>  arch/powerpc/mm/book3s64/iommu_api.c          |   2 +-
>  arch/powerpc/mm/book3s64/pgtable.c            |  22 +-
>  arch/powerpc/mm/book3s64/pkeys.c              |   6 +-
>  arch/powerpc/mm/book3s64/radix_hugetlbpage.c  |   8 +-
>  arch/powerpc/mm/book3s64/radix_pgtable.c      |   6 +-
>  arch/powerpc/mm/book3s64/radix_tlb.c          |  10 +-
>  arch/powerpc/mm/book3s64/slice.c              |   4 +-
>  arch/powerpc/mm/book3s64/subpage_prot.c       |   4 +-
>  arch/powerpc/mm/cacheflush.c                  |   2 +-
>  arch/powerpc/mm/copro_fault.c                 |   2 +-
>  arch/powerpc/mm/fault.c                       |  12 +-
>  arch/powerpc/mm/hugetlbpage.c                 |   2 +-
>  arch/powerpc/mm/nohash/e500_hugetlbpage.c     |   6 +-
>  arch/powerpc/mm/nohash/tlb.c                  |   6 +-
>  arch/powerpc/mm/pgtable.c                     |   6 +-
>  arch/powerpc/platforms/book3s/vas-api.c       |   6 +-
>  arch/powerpc/platforms/cell/spufs/file.c      |  18 +-
>  arch/powerpc/platforms/powernv/memtrace.c     |   2 +-
>  arch/powerpc/platforms/powernv/opal-prd.c     |   2 +-
>  arch/powerpc/platforms/pseries/vas.c          |   2 +-
>  arch/riscv/include/asm/hugetlb.h              |   4 +-
>  arch/riscv/include/asm/pgtable.h              |  18 +-
>  arch/riscv/include/asm/tlbflush.h             |   6 +-
>  arch/riscv/kernel/vdso.c                      |   2 +-
>  arch/riscv/kvm/mmu.c                          |   4 +-
>  arch/riscv/mm/fault.c                         |   4 +-
>  arch/riscv/mm/hugetlbpage.c                   |  10 +-
>  arch/riscv/mm/pgtable.c                       |   6 +-
>  arch/riscv/mm/tlbflush.c                      |   6 +-
>  arch/s390/include/asm/hugetlb.h               |   4 +-
>  arch/s390/include/asm/pgtable.h               |  28 +-
>  arch/s390/include/asm/tlbflush.h              |   2 +-
>  arch/s390/kernel/crash_dump.c                 |   6 +-
>  arch/s390/kernel/uv.c                         |   2 +-
>  arch/s390/kernel/vdso.c                       |   4 +-
>  arch/s390/mm/fault.c                          |   4 +-
>  arch/s390/mm/gmap.c                           |  10 +-
>  arch/s390/mm/hugetlbpage.c                    |   2 +-
>  arch/s390/mm/mmap.c                           |   4 +-
>  arch/s390/mm/pgtable.c                        |  12 +-
>  arch/s390/pci/pci_mmio.c                      |   4 +-
>  arch/sh/include/asm/cacheflush.h              |  14 +-
>  arch/sh/include/asm/hugetlb.h                 |   2 +-
>  arch/sh/include/asm/page.h                    |   4 +-
>  arch/sh/include/asm/pgtable.h                 |   8 +-
>  arch/sh/include/asm/tlb.h                     |   4 +-
>  arch/sh/include/asm/tlbflush.h                |   8 +-
>  arch/sh/kernel/smp.c                          |   6 +-
>  arch/sh/kernel/sys_sh.c                       |   2 +-
>  arch/sh/kernel/vsyscall/vsyscall.c            |   4 +-
>  arch/sh/mm/cache-sh4.c                        |   4 +-
>  arch/sh/mm/cache.c                            |  14 +-
>  arch/sh/mm/fault.c                            |   4 +-
>  arch/sh/mm/hugetlbpage.c                      |   2 +-
>  arch/sh/mm/mmap.c                             |   4 +-
>  arch/sh/mm/nommu.c                            |   6 +-
>  arch/sh/mm/tlb-pteaex.c                       |   2 +-
>  arch/sh/mm/tlb-sh3.c                          |   2 +-
>  arch/sh/mm/tlb-sh4.c                          |   2 +-
>  arch/sh/mm/tlb-urb.c                          |   2 +-
>  arch/sh/mm/tlbflush_32.c                      |   4 +-
>  arch/sparc/include/asm/cacheflush_64.h        |   2 +-
>  arch/sparc/include/asm/cachetlb_32.h          |  10 +-
>  arch/sparc/include/asm/hugetlb.h              |   4 +-
>  arch/sparc/include/asm/leon.h                 |   4 +-
>  arch/sparc/include/asm/page_64.h              |   4 +-
>  arch/sparc/include/asm/pgtable_32.h           |   6 +-
>  arch/sparc/include/asm/pgtable_64.h           |  20 +-
>  arch/sparc/include/asm/tlbflush_64.h          |   4 +-
>  arch/sparc/kernel/adi_64.c                    |   8 +-
>  arch/sparc/kernel/asm-offsets.c               |   2 +-
>  arch/sparc/kernel/pci.c                       |   2 +-
>  arch/sparc/kernel/ptrace_64.c                 |   2 +-
>  arch/sparc/kernel/sys_sparc_64.c              |   4 +-
>  arch/sparc/mm/fault_32.c                      |   4 +-
>  arch/sparc/mm/fault_64.c                      |   2 +-
>  arch/sparc/mm/hugetlbpage.c                   |   2 +-
>  arch/sparc/mm/init_64.c                       |   6 +-
>  arch/sparc/mm/leon_mm.c                       |  10 +-
>  arch/sparc/mm/srmmu.c                         |  54 +--
>  arch/sparc/mm/tlb.c                           |   4 +-
>  arch/sparc/vdso/vma.c                         |   2 +-
>  arch/um/drivers/mmapper_kern.c                |   2 +-
>  arch/um/include/asm/tlbflush.h                |   4 +-
>  arch/um/kernel/tlb.c                          |   2 +-
>  arch/um/kernel/trap.c                         |   2 +-
>  arch/x86/entry/vdso/vma.c                     |  12 +-
>  arch/x86/entry/vsyscall/vsyscall_64.c         |   8 +-
>  arch/x86/include/asm/mmu_context.h            |   2 +-
>  arch/x86/include/asm/paravirt.h               |   4 +-
>  arch/x86/include/asm/paravirt_types.h         |   6 +-
>  arch/x86/include/asm/pgtable-3level.h         |   2 +-
>  arch/x86/include/asm/pgtable.h                |  46 +--
>  arch/x86/include/asm/pgtable_32.h             |   2 +-
>  arch/x86/include/asm/pkeys.h                  |   6 +-
>  arch/x86/include/asm/tlbflush.h               |   2 +-
>  arch/x86/kernel/cpu/resctrl/pseudo_lock.c     |   4 +-
>  arch/x86/kernel/cpu/sgx/driver.c              |   2 +-
>  arch/x86/kernel/cpu/sgx/encl.c                |  14 +-
>  arch/x86/kernel/cpu/sgx/encl.h                |   4 +-
>  arch/x86/kernel/cpu/sgx/ioctl.c               |   2 +-
>  arch/x86/kernel/cpu/sgx/virt.c                |   6 +-
>  arch/x86/kernel/shstk.c                       |   2 +-
>  arch/x86/kernel/sys_x86_64.c                  |   4 +-
>  arch/x86/mm/fault.c                           |  10 +-
>  arch/x86/mm/pat/memtype.c                     |  18 +-
>  arch/x86/mm/pgtable.c                         |  30 +-
>  arch/x86/mm/pkeys.c                           |   4 +-
>  arch/x86/um/mem_32.c                          |   6 +-
>  arch/x86/um/mem_64.c                          |   2 +-
>  arch/x86/um/vdso/vma.c                        |   2 +-
>  arch/x86/xen/mmu.c                            |   2 +-
>  arch/x86/xen/mmu_pv.c                         |   6 +-
>  arch/xtensa/include/asm/cacheflush.h          |  12 +-
>  arch/xtensa/include/asm/page.h                |   4 +-
>  arch/xtensa/include/asm/pgtable.h             |   8 +-
>  arch/xtensa/include/asm/tlbflush.h            |   8 +-
>  arch/xtensa/kernel/pci.c                      |   2 +-
>  arch/xtensa/kernel/smp.c                      |  10 +-
>  arch/xtensa/kernel/syscall.c                  |   2 +-
>  arch/xtensa/mm/cache.c                        |  12 +-
>  arch/xtensa/mm/fault.c                        |   2 +-
>  arch/xtensa/mm/tlb.c                          |   6 +-
>  block/fops.c                                  |   2 +-
>  drivers/accel/amdxdna/amdxdna_gem.c           |   6 +-
>  .../accel/habanalabs/common/command_buffer.c  |   2 +-
>  drivers/accel/habanalabs/common/device.c      |   6 +-
>  drivers/accel/habanalabs/common/habanalabs.h  |  14 +-
>  drivers/accel/habanalabs/common/memory.c      |   8 +-
>  drivers/accel/habanalabs/common/memory_mgr.c  |   4 +-
>  drivers/accel/habanalabs/gaudi/gaudi.c        |   4 +-
>  drivers/accel/habanalabs/gaudi2/gaudi2.c      |   4 +-
>  drivers/accel/habanalabs/goya/goya.c          |   4 +-
>  drivers/accel/qaic/qaic_data.c                |   2 +-
>  drivers/acpi/pfr_telemetry.c                  |   2 +-
>  drivers/android/binder.c                      |   6 +-
>  drivers/android/binder_alloc.c                |   6 +-
>  drivers/android/binder_alloc.h                |   2 +-
>  drivers/auxdisplay/cfag12864bfb.c             |   2 +-
>  drivers/auxdisplay/ht16k33.c                  |   2 +-
>  drivers/block/ublk_drv.c                      |   2 +-
>  drivers/cdx/cdx.c                             |   4 +-
>  drivers/char/bsr.c                            |   2 +-
>  drivers/char/hpet.c                           |   4 +-
>  drivers/char/mem.c                            |   8 +-
>  drivers/char/uv_mmtimer.c                     |   4 +-
>  drivers/comedi/comedi_fops.c                  |   8 +-
>  drivers/crypto/hisilicon/qm.c                 |   2 +-
>  drivers/dax/device.c                          |   8 +-
>  drivers/dma-buf/dma-buf.c                     |   6 +-
>  drivers/dma-buf/heaps/cma_heap.c              |   4 +-
>  drivers/dma-buf/heaps/system_heap.c           |   2 +-
>  drivers/dma-buf/udmabuf.c                     |   4 +-
>  drivers/dma/idxd/cdev.c                       |   4 +-
>  drivers/firewire/core-cdev.c                  |   2 +-
>  drivers/fpga/dfl-afu-main.c                   |   2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |   2 +-
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |   2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      |   6 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c     |   2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_events.c       |   2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_migrate.c      |  12 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |   8 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_process.c      |   2 +-
>  drivers/gpu/drm/amd/amdkfd/kfd_svm.c          |  10 +-
>  drivers/gpu/drm/armada/armada_gem.c           |   2 +-
>  drivers/gpu/drm/drm_fbdev_dma.c               |   2 +-
>  drivers/gpu/drm/drm_fbdev_shmem.c             |   2 +-
>  drivers/gpu/drm/drm_gem.c                     |   8 +-
>  drivers/gpu/drm/drm_gem_dma_helper.c          |   2 +-
>  drivers/gpu/drm/drm_gem_shmem_helper.c        |   8 +-
>  drivers/gpu/drm/drm_gem_ttm_helper.c          |   2 +-
>  drivers/gpu/drm/drm_gpusvm.c                  |  10 +-
>  drivers/gpu/drm/drm_prime.c                   |   4 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem.c         |   8 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem.h         |   2 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c   |   2 +-
>  drivers/gpu/drm/exynos/exynos_drm_fbdev.c     |   2 +-
>  drivers/gpu/drm/exynos/exynos_drm_gem.c       |   6 +-
>  drivers/gpu/drm/gma500/fbdev.c                |   4 +-
>  drivers/gpu/drm/gma500/gem.c                  |   2 +-
>  drivers/gpu/drm/i915/display/intel_bo.c       |   2 +-
>  drivers/gpu/drm/i915/display/intel_bo.h       |   4 +-
>  drivers/gpu/drm/i915/display/intel_fbdev.c    |   2 +-
>  drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |   2 +-
>  drivers/gpu/drm/i915/gem/i915_gem_mman.c      |  22 +-
>  drivers/gpu/drm/i915/gem/i915_gem_mman.h      |   4 +-
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.c       |   8 +-
>  drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |   2 +-
>  .../drm/i915/gem/selftests/i915_gem_mman.c    |   8 +-
>  .../gpu/drm/i915/gem/selftests/mock_dmabuf.c  |   2 +-
>  drivers/gpu/drm/i915/gvt/kvmgt.c              |   2 +-
>  drivers/gpu/drm/i915/i915_mm.c                |   4 +-
>  drivers/gpu/drm/i915/i915_mm.h                |   8 +-
>  drivers/gpu/drm/imagination/pvr_gem.c         |   2 +-
>  drivers/gpu/drm/lima/lima_gem.c               |   2 +-
>  drivers/gpu/drm/lima/lima_gem.h               |   2 +-
>  drivers/gpu/drm/loongson/lsdc_gem.c           |   2 +-
>  drivers/gpu/drm/mediatek/mtk_gem.c            |   4 +-
>  drivers/gpu/drm/msm/msm_fbdev.c               |   2 +-
>  drivers/gpu/drm/msm/msm_gem.c                 |   4 +-
>  drivers/gpu/drm/nouveau/nouveau_dmem.c        |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_dmem.h        |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_gem.c         |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_svm.c         |   2 +-
>  drivers/gpu/drm/omapdrm/omap_fbdev.c          |   2 +-
>  drivers/gpu/drm/omapdrm/omap_gem.c            |   8 +-
>  drivers/gpu/drm/omapdrm/omap_gem.h            |   2 +-
>  drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c     |   2 +-
>  drivers/gpu/drm/panthor/panthor_device.c      |   4 +-
>  drivers/gpu/drm/panthor/panthor_device.h      |   2 +-
>  drivers/gpu/drm/panthor/panthor_drv.c         |   2 +-
>  drivers/gpu/drm/panthor/panthor_gem.c         |   2 +-
>  drivers/gpu/drm/radeon/radeon_gem.c           |   2 +-
>  drivers/gpu/drm/radeon/radeon_ttm.c           |   2 +-
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.c   |   6 +-
>  drivers/gpu/drm/tegra/fbdev.c                 |   2 +-
>  drivers/gpu/drm/tegra/gem.c                   |   8 +-
>  drivers/gpu/drm/tegra/gem.h                   |   4 +-
>  drivers/gpu/drm/ttm/ttm_bo_vm.c               |  14 +-
>  drivers/gpu/drm/vc4/vc4_bo.c                  |   4 +-
>  drivers/gpu/drm/virtio/virtgpu_vram.c         |   2 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_gem.c           |   2 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c    |   4 +-
>  drivers/gpu/drm/xe/display/intel_bo.c         |   2 +-
>  drivers/gpu/drm/xe/xe_bo.c                    |   2 +-
>  drivers/gpu/drm/xe/xe_device.c                |  10 +-
>  drivers/gpu/drm/xe/xe_oa.c                    |   2 +-
>  drivers/gpu/drm/xen/xen_drm_front_gem.c       |   2 +-
>  drivers/hsi/clients/cmt_speech.c              |   2 +-
>  drivers/hv/mshv_root_main.c                   |   6 +-
>  drivers/hwtracing/intel_th/msu.c              |   6 +-
>  drivers/hwtracing/stm/core.c                  |   6 +-
>  drivers/infiniband/core/core_priv.h           |   4 +-
>  drivers/infiniband/core/ib_core_uverbs.c      |   6 +-
>  drivers/infiniband/core/uverbs_main.c         |   8 +-
>  drivers/infiniband/hw/bnxt_re/ib_verbs.c      |   2 +-
>  drivers/infiniband/hw/bnxt_re/ib_verbs.h      |   2 +-
>  drivers/infiniband/hw/cxgb4/provider.c        |   2 +-
>  drivers/infiniband/hw/efa/efa.h               |   2 +-
>  drivers/infiniband/hw/efa/efa_verbs.c         |   4 +-
>  drivers/infiniband/hw/erdma/erdma_verbs.c     |   2 +-
>  drivers/infiniband/hw/erdma/erdma_verbs.h     |   2 +-
>  drivers/infiniband/hw/hfi1/file_ops.c         |   6 +-
>  drivers/infiniband/hw/hns/hns_roce_main.c     |   2 +-
>  drivers/infiniband/hw/irdma/verbs.c           |   4 +-
>  drivers/infiniband/hw/mana/main.c             |   2 +-
>  drivers/infiniband/hw/mana/mana_ib.h          |   2 +-
>  drivers/infiniband/hw/mlx4/main.c             |   2 +-
>  drivers/infiniband/hw/mlx4/mr.c               |   2 +-
>  drivers/infiniband/hw/mlx5/main.c             |  10 +-
>  drivers/infiniband/hw/mthca/mthca_provider.c  |   2 +-
>  drivers/infiniband/hw/ocrdma/ocrdma_verbs.c   |   2 +-
>  drivers/infiniband/hw/ocrdma/ocrdma_verbs.h   |   2 +-
>  drivers/infiniband/hw/qedr/verbs.c            |   2 +-
>  drivers/infiniband/hw/qedr/verbs.h            |   2 +-
>  drivers/infiniband/hw/qib/qib_file_ops.c      |  14 +-
>  drivers/infiniband/hw/usnic/usnic_ib_verbs.c  |   2 +-
>  drivers/infiniband/hw/usnic/usnic_ib_verbs.h  |   2 +-
>  .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.c   |   2 +-
>  .../infiniband/hw/vmw_pvrdma/pvrdma_verbs.h   |   2 +-
>  drivers/infiniband/sw/rdmavt/mmap.c           |   6 +-
>  drivers/infiniband/sw/rdmavt/mmap.h           |   2 +-
>  drivers/infiniband/sw/rxe/rxe_loc.h           |   2 +-
>  drivers/infiniband/sw/rxe/rxe_mmap.c          |   6 +-
>  drivers/infiniband/sw/siw/siw_verbs.c         |   2 +-
>  drivers/infiniband/sw/siw/siw_verbs.h         |   2 +-
>  drivers/iommu/dma-iommu.c                     |   4 +-
>  drivers/iommu/iommu-sva.c                     |   2 +-
>  .../media/common/videobuf2/videobuf2-core.c   |   2 +-
>  .../common/videobuf2/videobuf2-dma-contig.c   |   4 +-
>  .../media/common/videobuf2/videobuf2-dma-sg.c |   4 +-
>  .../media/common/videobuf2/videobuf2-memops.c |   4 +-
>  .../media/common/videobuf2/videobuf2-v4l2.c   |   2 +-
>  .../common/videobuf2/videobuf2-vmalloc.c      |   4 +-
>  drivers/media/dvb-core/dmxdev.c               |   4 +-
>  drivers/media/dvb-core/dvb_vb2.c              |   2 +-
>  drivers/media/pci/cx18/cx18-fileops.h         |   2 +-
>  drivers/media/pci/intel/ipu6/ipu6-dma.c       |   2 +-
>  drivers/media/pci/intel/ipu6/ipu6-dma.h       |   2 +-
>  .../platform/samsung/exynos-gsc/gsc-m2m.c     |   2 +-
>  .../samsung/s3c-camif/camif-capture.c         |   2 +-
>  .../media/platform/samsung/s5p-mfc/s5p_mfc.c  |   2 +-
>  drivers/media/platform/ti/omap3isp/ispvideo.c |   2 +-
>  drivers/media/usb/uvc/uvc_queue.c             |   2 +-
>  drivers/media/usb/uvc/uvc_v4l2.c              |   2 +-
>  drivers/media/usb/uvc/uvcvideo.h              |   2 +-
>  drivers/media/v4l2-core/v4l2-dev.c            |   2 +-
>  drivers/media/v4l2-core/v4l2-mem2mem.c        |   4 +-
>  drivers/misc/bcm-vk/bcm_vk_dev.c              |   2 +-
>  drivers/misc/fastrpc.c                        |   4 +-
>  drivers/misc/genwqe/card_dev.c                |   6 +-
>  drivers/misc/ocxl/context.c                   |  12 +-
>  drivers/misc/ocxl/file.c                      |   2 +-
>  drivers/misc/ocxl/ocxl_internal.h             |   2 +-
>  drivers/misc/ocxl/sysfs.c                     |   4 +-
>  drivers/misc/open-dice.c                      |   2 +-
>  drivers/misc/sgi-gru/grufault.c               |  14 +-
>  drivers/misc/sgi-gru/grufile.c                |   6 +-
>  drivers/misc/sgi-gru/grumain.c                |  10 +-
>  drivers/misc/sgi-gru/grutables.h              |  12 +-
>  drivers/misc/uacce/uacce.c                    |   4 +-
>  drivers/mtd/mtdchar.c                         |   2 +-
>  drivers/pci/mmap.c                            |   4 +-
>  drivers/pci/p2pdma.c                          |   2 +-
>  drivers/pci/pci-sysfs.c                       |  16 +-
>  drivers/pci/pci.h                             |   2 +-
>  drivers/pci/proc.c                            |   2 +-
>  drivers/platform/x86/intel/pmt/class.c        |   2 +-
>  drivers/ptp/ptp_vmclock.c                     |   2 +-
>  drivers/rapidio/devices/rio_mport_cdev.c      |   6 +-
>  drivers/sbus/char/flash.c                     |   2 +-
>  drivers/sbus/char/oradax.c                    |   4 +-
>  drivers/scsi/sg.c                             |   4 +-
>  drivers/soc/aspeed/aspeed-lpc-ctrl.c          |   2 +-
>  drivers/soc/aspeed/aspeed-p2a-ctrl.c          |   2 +-
>  drivers/soc/qcom/rmtfs_mem.c                  |   2 +-
>  .../staging/media/atomisp/include/hmm/hmm.h   |   2 +-
>  .../media/atomisp/include/hmm/hmm_bo.h        |   2 +-
>  drivers/staging/media/atomisp/pci/hmm/hmm.c   |   2 +-
>  .../staging/media/atomisp/pci/hmm/hmm_bo.c    |   6 +-
>  drivers/staging/vme_user/vme.c                |   2 +-
>  drivers/staging/vme_user/vme.h                |   2 +-
>  drivers/staging/vme_user/vme_user.c           |   8 +-
>  drivers/target/target_core_user.c             |   8 +-
>  drivers/tee/optee/call.c                      |   2 +-
>  drivers/tee/tee_shm.c                         |   2 +-
>  drivers/uio/uio.c                             |  10 +-
>  drivers/uio/uio_hv_generic.c                  |   2 +-
>  drivers/usb/core/devio.c                      |   6 +-
>  drivers/usb/gadget/function/uvc_queue.c       |   2 +-
>  drivers/usb/gadget/function/uvc_queue.h       |   2 +-
>  drivers/usb/gadget/function/uvc_v4l2.c        |   2 +-
>  drivers/usb/mon/mon_bin.c                     |   6 +-
>  drivers/vdpa/vdpa_user/iova_domain.c          |   2 +-
>  drivers/vfio/cdx/main.c                       |   4 +-
>  drivers/vfio/fsl-mc/vfio_fsl_mc.c             |   4 +-
>  .../vfio/pci/hisilicon/hisi_acc_vfio_pci.c    |   2 +-
>  drivers/vfio/pci/nvgrace-gpu/main.c           |   2 +-
>  drivers/vfio/pci/vfio_pci_core.c              |   6 +-
>  drivers/vfio/platform/vfio_platform_common.c  |   4 +-
>  drivers/vfio/platform/vfio_platform_private.h |   2 +-
>  drivers/vfio/vfio_iommu_type1.c               |   4 +-
>  drivers/vfio/vfio_main.c                      |   2 +-
>  drivers/vhost/vdpa.c                          |   6 +-
>  drivers/video/fbdev/68328fb.c                 |   4 +-
>  drivers/video/fbdev/atafb.c                   |   2 +-
>  drivers/video/fbdev/aty/atyfb_base.c          |   4 +-
>  drivers/video/fbdev/au1100fb.c                |   2 +-
>  drivers/video/fbdev/au1200fb.c                |   2 +-
>  drivers/video/fbdev/bw2.c                     |   4 +-
>  drivers/video/fbdev/cg14.c                    |   4 +-
>  drivers/video/fbdev/cg3.c                     |   4 +-
>  drivers/video/fbdev/cg6.c                     |   4 +-
>  drivers/video/fbdev/controlfb.c               |   2 +-
>  drivers/video/fbdev/core/fb_chrdev.c          |   2 +-
>  drivers/video/fbdev/core/fb_defio.c           |   2 +-
>  drivers/video/fbdev/core/fb_io_fops.c         |   2 +-
>  drivers/video/fbdev/ep93xx-fb.c               |   2 +-
>  drivers/video/fbdev/ffb.c                     |   4 +-
>  drivers/video/fbdev/gbefb.c                   |   2 +-
>  drivers/video/fbdev/leo.c                     |   4 +-
>  drivers/video/fbdev/omap/omapfb.h             |   2 +-
>  drivers/video/fbdev/omap/omapfb_main.c        |   2 +-
>  .../video/fbdev/omap2/omapfb/omapfb-main.c    |   6 +-
>  drivers/video/fbdev/p9100.c                   |   4 +-
>  drivers/video/fbdev/ps3fb.c                   |   2 +-
>  drivers/video/fbdev/pxa3xx-gcu.c              |   2 +-
>  drivers/video/fbdev/sa1100fb.c                |   2 +-
>  drivers/video/fbdev/sbuslib.c                 |   2 +-
>  drivers/video/fbdev/sbuslib.h                 |   4 +-
>  drivers/video/fbdev/sh_mobile_lcdcfb.c        |   4 +-
>  drivers/video/fbdev/smscufx.c                 |   2 +-
>  drivers/video/fbdev/tcx.c                     |   4 +-
>  drivers/video/fbdev/udlfb.c                   |   2 +-
>  drivers/video/fbdev/vfb.c                     |   4 +-
>  drivers/virt/acrn/mm.c                        |   2 +-
>  drivers/xen/gntalloc.c                        |   6 +-
>  drivers/xen/gntdev.c                          |  10 +-
>  drivers/xen/privcmd-buf.c                     |   6 +-
>  drivers/xen/privcmd.c                         |  26 +-
>  drivers/xen/xenbus/xenbus_dev_backend.c       |   2 +-
>  drivers/xen/xenfs/xenstored.c                 |   2 +-
>  drivers/xen/xlate_mmu.c                       |   8 +-
>  fs/9p/vfs_file.c                              |   4 +-
>  fs/afs/file.c                                 |  12 +-
>  fs/aio.c                                      |   4 +-
>  fs/backing-file.c                             |   2 +-
>  fs/bcachefs/fs.c                              |   2 +-
>  fs/binfmt_elf.c                               |   2 +-
>  fs/btrfs/file.c                               |   2 +-
>  fs/buffer.c                                   |   2 +-
>  fs/ceph/addr.c                                |   6 +-
>  fs/ceph/super.h                               |   2 +-
>  fs/coda/file.c                                |   6 +-
>  fs/coredump.c                                 |  12 +-
>  fs/cramfs/inode.c                             |   4 +-
>  fs/dax.c                                      |   8 +-
>  fs/ecryptfs/file.c                            |   2 +-
>  fs/erofs/data.c                               |   2 +-
>  fs/exec.c                                     |  12 +-
>  fs/exfat/file.c                               |   4 +-
>  fs/ext2/file.c                                |   2 +-
>  fs/ext4/file.c                                |   2 +-
>  fs/ext4/inode.c                               |   2 +-
>  fs/f2fs/file.c                                |   2 +-
>  fs/fuse/dax.c                                 |   2 +-
>  fs/fuse/file.c                                |   4 +-
>  fs/fuse/fuse_i.h                              |   4 +-
>  fs/fuse/passthrough.c                         |   2 +-
>  fs/gfs2/file.c                                |   2 +-
>  fs/hugetlbfs/inode.c                          |  14 +-
>  fs/kernfs/file.c                              |   6 +-
>  fs/nfs/file.c                                 |   2 +-
>  fs/nfs/internal.h                             |   2 +-
>  fs/nilfs2/file.c                              |   4 +-
>  fs/ntfs3/file.c                               |   2 +-
>  fs/ocfs2/mmap.c                               |   4 +-
>  fs/ocfs2/mmap.h                               |   2 +-
>  fs/orangefs/file.c                            |   2 +-
>  fs/overlayfs/file.c                           |   2 +-
>  fs/proc/base.c                                |   6 +-
>  fs/proc/inode.c                               |   4 +-
>  fs/proc/task_mmu.c                            |  88 ++---
>  fs/proc/task_nommu.c                          |  12 +-
>  fs/proc/vmcore.c                              |  14 +-
>  fs/ramfs/file-nommu.c                         |   4 +-
>  fs/romfs/mmap-nommu.c                         |   2 +-
>  fs/smb/client/cifsfs.h                        |   4 +-
>  fs/smb/client/file.c                          |   4 +-
>  fs/sysfs/file.c                               |   2 +-
>  fs/ubifs/file.c                               |   2 +-
>  fs/udf/file.c                                 |   4 +-
>  fs/userfaultfd.c                              |  20 +-
>  fs/vboxsf/file.c                              |   4 +-
>  fs/xfs/xfs_file.c                             |   2 +-
>  fs/zonefs/file.c                              |   2 +-
>  include/asm-generic/cacheflush.h              |   8 +-
>  include/asm-generic/hugetlb.h                 |   4 +-
>  include/asm-generic/mm_hooks.h                |   2 +-
>  include/asm-generic/tlb.h                     |  12 +-
>  include/drm/drm_gem.h                         |  10 +-
>  include/drm/drm_gem_dma_helper.h              |   4 +-
>  include/drm/drm_gem_shmem_helper.h            |   4 +-
>  include/drm/drm_gem_ttm_helper.h              |   2 +-
>  include/drm/drm_gem_vram_helper.h             |   2 +-
>  include/drm/drm_prime.h                       |   4 +-
>  include/drm/ttm/ttm_bo.h                      |   8 +-
>  include/linux/backing-file.h                  |   2 +-
>  include/linux/binfmts.h                       |   2 +-
>  include/linux/bpf.h                           |   2 +-
>  include/linux/btf_ids.h                       |   2 +-
>  include/linux/buffer_head.h                   |   2 +-
>  include/linux/buildid.h                       |   6 +-
>  include/linux/cacheflush.h                    |   2 +-
>  include/linux/configfs.h                      |   2 +-
>  include/linux/crash_dump.h                    |   2 +-
>  include/linux/dax.h                           |   4 +-
>  include/linux/dma-buf.h                       |   4 +-
>  include/linux/dma-map-ops.h                   |  10 +-
>  include/linux/dma-mapping.h                   |  12 +-
>  include/linux/fb.h                            |   8 +-
>  include/linux/fs.h                            |  14 +-
>  include/linux/gfp.h                           |   8 +-
>  include/linux/highmem.h                       |  10 +-
>  include/linux/huge_mm.h                       |  92 +++---
>  include/linux/hugetlb.h                       | 132 ++++----
>  include/linux/hugetlb_inline.h                |   4 +-
>  include/linux/io-mapping.h                    |   2 +-
>  include/linux/iomap.h                         |   2 +-
>  include/linux/iommu-dma.h                     |   4 +-
>  include/linux/kernfs.h                        |   4 +-
>  include/linux/khugepaged.h                    |   4 +-
>  include/linux/ksm.h                           |  12 +-
>  include/linux/kvm_host.h                      |   2 +-
>  include/linux/lsm_hook_defs.h                 |   2 +-
>  include/linux/mempolicy.h                     |  20 +-
>  include/linux/migrate.h                       |   6 +-
>  include/linux/mm.h                            | 308 +++++++++---------
>  include/linux/mm_inline.h                     |  18 +-
>  include/linux/mm_types.h                      |  14 +-
>  include/linux/mmdebug.h                       |   4 +-
>  include/linux/mmu_notifier.h                  |   8 +-
>  include/linux/net.h                           |   4 +-
>  include/linux/pagemap.h                       |   2 +-
>  include/linux/pagewalk.h                      |  10 +-
>  include/linux/pci.h                           |   4 +-
>  include/linux/perf_event.h                    |   4 +-
>  include/linux/pgtable.h                       | 100 +++---
>  include/linux/pkeys.h                         |   2 +-
>  include/linux/proc_fs.h                       |   2 +-
>  include/linux/ring_buffer.h                   |   2 +-
>  include/linux/rmap.h                          |  92 +++---
>  include/linux/secretmem.h                     |   4 +-
>  include/linux/security.h                      |   4 +-
>  include/linux/shmem_fs.h                      |  12 +-
>  include/linux/swap.h                          |   2 +-
>  include/linux/swapops.h                       |   4 +-
>  include/linux/sysfs.h                         |   4 +-
>  include/linux/time_namespace.h                |   6 +-
>  include/linux/uacce.h                         |   2 +-
>  include/linux/uio_driver.h                    |   2 +-
>  include/linux/uprobes.h                       |  10 +-
>  include/linux/userfaultfd_k.h                 |  86 ++---
>  include/linux/vdso_datastore.h                |   2 +-
>  include/linux/vfio.h                          |   2 +-
>  include/linux/vfio_pci_core.h                 |   4 +-
>  include/linux/vmalloc.h                       |   6 +-
>  include/media/dvb_vb2.h                       |   4 +-
>  include/media/v4l2-dev.h                      |   2 +-
>  include/media/v4l2-mem2mem.h                  |   6 +-
>  include/media/videobuf2-core.h                |   6 +-
>  include/media/videobuf2-v4l2.h                |   2 +-
>  include/net/sock.h                            |   2 +-
>  include/net/tcp.h                             |   2 +-
>  include/rdma/ib_verbs.h                       |   6 +-
>  include/rdma/rdma_vt.h                        |   2 +-
>  include/sound/compress_driver.h               |   2 +-
>  include/sound/hwdep.h                         |   2 +-
>  include/sound/info.h                          |   2 +-
>  include/sound/memalloc.h                      |   4 +-
>  include/sound/pcm.h                           |   8 +-
>  include/sound/soc-component.h                 |   6 +-
>  include/trace/events/mmap.h                   |   4 +-
>  include/trace/events/sched.h                  |   2 +-
>  include/uapi/linux/bpf.h                      |   2 +-
>  include/xen/xen-ops.h                         |  24 +-
>  io_uring/memmap.c                             |   6 +-
>  io_uring/memmap.h                             |   2 +-
>  ipc/shm.c                                     |  22 +-
>  kernel/acct.c                                 |   2 +-
>  kernel/bpf/arena.c                            |  10 +-
>  kernel/bpf/arraymap.c                         |   2 +-
>  kernel/bpf/ringbuf.c                          |   4 +-
>  kernel/bpf/stackmap.c                         |   4 +-
>  kernel/bpf/syscall.c                          |   6 +-
>  kernel/bpf/task_iter.c                        |  16 +-
>  kernel/bpf/verifier.c                         |   2 +-
>  kernel/dma/coherent.c                         |   6 +-
>  kernel/dma/direct.c                           |   2 +-
>  kernel/dma/direct.h                           |   2 +-
>  kernel/dma/dummy.c                            |   2 +-
>  kernel/dma/mapping.c                          |   8 +-
>  kernel/dma/ops_helpers.c                      |   2 +-
>  kernel/events/core.c                          |  24 +-
>  kernel/events/uprobes.c                       |  48 +--
>  kernel/fork.c                                 |  26 +-
>  kernel/kcov.c                                 |   2 +-
>  kernel/relay.c                                |   6 +-
>  kernel/sched/fair.c                           |   4 +-
>  kernel/signal.c                               |   2 +-
>  kernel/sys.c                                  |   2 +-
>  kernel/time/namespace.c                       |   2 +-
>  kernel/trace/ring_buffer.c                    |   6 +-
>  kernel/trace/trace.c                          |   4 +-
>  kernel/trace/trace_output.c                   |   2 +-
>  lib/buildid.c                                 |   6 +-
>  lib/test_hmm.c                                |   6 +-
>  lib/vdso/datastore.c                          |   6 +-
>  mm/damon/ops-common.c                         |   4 +-
>  mm/damon/ops-common.h                         |   4 +-
>  mm/damon/paddr.c                              |   4 +-
>  mm/damon/tests/vaddr-kunit.h                  |  16 +-
>  mm/damon/vaddr.c                              |   4 +-
>  mm/debug.c                                    |   2 +-
>  mm/debug_vm_pgtable.c                         |   2 +-
>  mm/filemap.c                                  |  12 +-
>  mm/gup.c                                      |  56 ++--
>  mm/hmm.c                                      |   6 +-
>  mm/huge_memory.c                              | 104 +++---
>  mm/hugetlb.c                                  | 158 ++++-----
>  mm/internal.h                                 |  46 +--
>  mm/interval_tree.c                            |  16 +-
>  mm/io-mapping.c                               |   2 +-
>  mm/khugepaged.c                               |  34 +-
>  mm/ksm.c                                      |  48 +--
>  mm/madvise.c                                  |  78 ++---
>  mm/memory-failure.c                           |  16 +-
>  mm/memory.c                                   | 244 +++++++-------
>  mm/mempolicy.c                                |  42 +--
>  mm/migrate.c                                  |  10 +-
>  mm/migrate_device.c                           |   4 +-
>  mm/mincore.c                                  |   8 +-
>  mm/mlock.c                                    |  16 +-
>  mm/mmap.c                                     |  70 ++--
>  mm/mmu_gather.c                               |   4 +-
>  mm/mprotect.c                                 |  22 +-
>  mm/mremap.c                                   |  46 +--
>  mm/mseal.c                                    |  14 +-
>  mm/msync.c                                    |   2 +-
>  mm/nommu.c                                    |  66 ++--
>  mm/oom_kill.c                                 |   2 +-
>  mm/page_idle.c                                |   2 +-
>  mm/page_vma_mapped.c                          |   4 +-
>  mm/pagewalk.c                                 |  20 +-
>  mm/pgtable-generic.c                          |  20 +-
>  mm/rmap.c                                     |  74 ++---
>  mm/secretmem.c                                |   4 +-
>  mm/shmem.c                                    |  34 +-
>  mm/swap.c                                     |   2 +-
>  mm/swap.h                                     |   6 +-
>  mm/swap_state.c                               |   6 +-
>  mm/swapfile.c                                 |  14 +-
>  mm/userfaultfd.c                              | 116 +++----
>  mm/util.c                                     |   4 +-
>  mm/vma.c                                      | 196 +++++------
>  mm/vma.h                                      | 126 +++----
>  mm/vmalloc.c                                  |   4 +-
>  mm/vmscan.c                                   |  12 +-
>  net/core/sock.c                               |   2 +-
>  net/ipv4/tcp.c                                |  12 +-
>  net/packet/af_packet.c                        |   6 +-
>  net/socket.c                                  |   4 +-
>  net/xdp/xsk.c                                 |   2 +-
>  samples/ftrace/ftrace-direct-too.c            |   4 +-
>  samples/vfio-mdev/mbochs.c                    |   8 +-
>  samples/vfio-mdev/mdpy.c                      |   2 +-
>  scripts/coccinelle/api/vma_pages.cocci        |   6 +-
>  security/apparmor/lsm.c                       |   2 +-
>  security/integrity/ima/ima_main.c             |   4 +-
>  security/ipe/hooks.c                          |   2 +-
>  security/ipe/hooks.h                          |   2 +-
>  security/security.c                           |   2 +-
>  security/selinux/hooks.c                      |   2 +-
>  security/selinux/selinuxfs.c                  |   4 +-
>  sound/core/compress_offload.c                 |   2 +-
>  sound/core/hwdep.c                            |   2 +-
>  sound/core/info.c                             |   2 +-
>  sound/core/init.c                             |   2 +-
>  sound/core/memalloc.c                         |  22 +-
>  sound/core/oss/pcm_oss.c                      |   2 +-
>  sound/core/pcm_native.c                       |  20 +-
>  sound/soc/fsl/fsl_asrc_m2m.c                  |   2 +-
>  sound/soc/intel/avs/pcm.c                     |   2 +-
>  sound/soc/loongson/loongson_dma.c             |   2 +-
>  sound/soc/pxa/mmp-sspa.c                      |   2 +-
>  sound/soc/qcom/lpass-platform.c               |   4 +-
>  sound/soc/qcom/qdsp6/q6apm-dai.c              |   2 +-
>  sound/soc/qcom/qdsp6/q6asm-dai.c              |   2 +-
>  sound/soc/samsung/idma.c                      |   2 +-
>  sound/soc/soc-component.c                     |   2 +-
>  sound/soc/uniphier/aio-dma.c                  |   2 +-
>  sound/usb/usx2y/us122l.c                      |   2 +-
>  sound/usb/usx2y/usX2Yhwdep.c                  |   2 +-
>  sound/usb/usx2y/usx2yhwdeppcm.c               |   6 +-
>  tools/include/linux/btf_ids.h                 |   2 +-
>  tools/include/uapi/linux/bpf.h                |   2 +-
>  .../testing/selftests/bpf/bpf_experimental.h  |   2 +-
>  .../selftests/bpf/progs/bpf_iter_task_vmas.c  |   2 +-
>  .../selftests/bpf/progs/bpf_iter_vma_offset.c |   2 +-
>  tools/testing/selftests/bpf/progs/find_vma.c  |   2 +-
>  .../selftests/bpf/progs/find_vma_fail1.c      |   2 +-
>  .../selftests/bpf/progs/find_vma_fail2.c      |   2 +-
>  .../selftests/bpf/progs/iters_css_task.c      |   2 +-
>  .../selftests/bpf/progs/iters_task_vma.c      |   2 +-
>  .../selftests/bpf/progs/iters_testmod.c       |   4 +-
>  tools/testing/selftests/bpf/progs/lsm.c       |   2 +-
>  .../selftests/bpf/progs/test_bpf_cookie.c     |   2 +-
>  .../bpf/progs/verifier_iterating_callbacks.c  |   4 +-
>  .../selftests/bpf/test_kmods/bpf_testmod.c    |   2 +-
>  .../bpf/test_kmods/bpf_testmod_kfunc.h        |   2 +-
>  tools/testing/vma/vma.c                       |  70 ++--
>  tools/testing/vma/vma_internal.h              | 156 ++++-----
>  virt/kvm/kvm_main.c                           |  12 +-
>  861 files changed, 3494 insertions(+), 3494 deletions(-)
>
> diff --git a/Documentation/bpf/prog_lsm.rst b/Documentation/bpf/prog_lsm.rst
> index ad2be02f30c2..f2b254b5a6ce 100644
> --- a/Documentation/bpf/prog_lsm.rst
> +++ b/Documentation/bpf/prog_lsm.rst
> @@ -15,7 +15,7 @@ Structure
>  The example shows an eBPF program that can be attached to the ``file_mprotect``
>  LSM hook:
>
> -.. c:function:: int file_mprotect(struct vm_area_struct *vma, unsigned long reqprot, unsigned long prot);
> +.. c:function:: int file_mprotect(struct mm_area *vma, unsigned long reqprot, unsigned long prot);
>
>  Other LSM hooks which can be instrumented can be found in
>  ``security/security.c``.
> @@ -31,7 +31,7 @@ the fields that need to be accessed.
>  		unsigned long start_brk, brk, start_stack;
>  	} __attribute__((preserve_access_index));
>
> -	struct vm_area_struct {
> +	struct mm_area {
>  		unsigned long start_brk, brk, start_stack;
>  		unsigned long vm_start, vm_end;
>  		struct mm_struct *vm_mm;
> @@ -65,7 +65,7 @@ example:
>  .. code-block:: c
>
>  	SEC("lsm/file_mprotect")
> -	int BPF_PROG(mprotect_audit, struct vm_area_struct *vma,
> +	int BPF_PROG(mprotect_audit, struct mm_area *vma,
>  		     unsigned long reqprot, unsigned long prot, int ret)
>  	{
>  		/* ret is the return value from the previous BPF program
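
Completed under the new type, the program body from prog_lsm.rst reads
as below -- a sketch consistent with the rest of that example, where
monitored_pid and mprotect_count are globals assumed to be declared
earlier in the file:

	SEC("lsm/file_mprotect")
	int BPF_PROG(mprotect_audit, struct mm_area *vma,
		     unsigned long reqprot, unsigned long prot, int ret)
	{
		/* ret is the return value from the previous BPF program
		 * in the chain, or 0 if this is the first hook.
		 */
		if (ret != 0)
			return ret;

		__u32 pid = bpf_get_current_pid_tgid() >> 32;

		/* Fields resolve via the preserve_access_index structs above. */
		int is_stack = (vma->vm_start <= vma->vm_mm->start_stack &&
				vma->vm_end >= vma->vm_mm->start_stack);

		if (is_stack && monitored_pid == pid) {
			mprotect_count++;
			ret = -EPERM;
		}
		return ret;
	}
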
> diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
> index 889fc84ccd1b..597eb9760dea 100644
> --- a/Documentation/core-api/cachetlb.rst
> +++ b/Documentation/core-api/cachetlb.rst
> @@ -50,7 +50,7 @@ changes occur:
>  	page table operations such as what happens during
>  	fork, and exec.
>
> -3) ``void flush_tlb_range(struct vm_area_struct *vma,
> +3) ``void flush_tlb_range(struct mm_area *vma,
>     unsigned long start, unsigned long end)``
>
>  	Here we are flushing a specific range of (user) virtual
> @@ -70,7 +70,7 @@ changes occur:
>  	call flush_tlb_page (see below) for each entry which may be
>  	modified.
>
> -4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``
> +4) ``void flush_tlb_page(struct mm_area *vma, unsigned long addr)``
>
>  	This time we need to remove the PAGE_SIZE sized translation
>  	from the TLB.  The 'vma' is the backing structure used by
> @@ -89,7 +89,7 @@ changes occur:
>  	This is used primarily during fault processing.
>
>  5) ``void update_mmu_cache_range(struct vm_fault *vmf,
> -   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
> +   struct mm_area *vma, unsigned long address, pte_t *ptep,
>     unsigned int nr)``
>
>  	At the end of every page fault, this routine is invoked to tell
> @@ -159,7 +159,7 @@ Here are the routines, one by one:
>  	This option is separate from flush_cache_mm to allow some
>  	optimizations for VIPT caches.
>
> -3) ``void flush_cache_range(struct vm_area_struct *vma,
> +3) ``void flush_cache_range(struct mm_area *vma,
>     unsigned long start, unsigned long end)``
>
>  	Here we are flushing a specific range of (user) virtual
> @@ -176,7 +176,7 @@ Here are the routines, one by one:
>  	call flush_cache_page (see below) for each entry which may be
>  	modified.
>
> -4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``
> +4) ``void flush_cache_page(struct mm_area *vma, unsigned long addr, unsigned long pfn)``
>
>  	This time we need to remove a PAGE_SIZE sized range
>  	from the cache.  The 'vma' is the backing structure used by
> @@ -331,9 +331,9 @@ maps this page at its virtual address.
>  			dirty.  Again, see sparc64 for examples of how
>  			to deal with this.
>
> -  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +  ``void copy_to_user_page(struct mm_area *vma, struct page *page,
>    unsigned long user_vaddr, void *dst, void *src, int len)``
> -  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +  ``void copy_from_user_page(struct mm_area *vma, struct page *page,
>    unsigned long user_vaddr, void *dst, void *src, int len)``
>
>  	When the kernel needs to copy arbitrary data in and out
> @@ -346,7 +346,7 @@ maps this page at its virtual address.
>  	likely that you will need to flush the instruction cache
>  	for copy_to_user_page().
>
> -  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
> +  ``void flush_anon_page(struct mm_area *vma, struct page *page,
>    unsigned long vmaddr)``
>
>    	When the kernel needs to access the contents of an anonymous
> @@ -365,7 +365,7 @@ maps this page at its virtual address.
>  	If the icache does not snoop stores then this routine will need
>  	to flush it.
>
> -  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
> +  ``void flush_icache_page(struct mm_area *vma, struct page *page)``
>
>  	All the functionality of flush_icache_page can be implemented in
>  	flush_dcache_folio and update_mmu_cache_range. In the future, the hope
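
To make the renamed signatures concrete: an architecture whose only
per-range strategy is "flush the whole user TLB" (alpha, further down
in this patch, does exactly that) would now carry stubs of this shape.
A sketch only, not lifted from the patch:

	/* Whole-mm flush is the simplest correct implementation of the
	 * per-range hooks; the address space is reachable via vma->vm_mm.
	 */
	static inline void flush_tlb_range(struct mm_area *vma,
					   unsigned long start,
					   unsigned long end)
	{
		flush_tlb_mm(vma->vm_mm);
	}

	static inline void flush_tlb_page(struct mm_area *vma,
					  unsigned long addr)
	{
		flush_tlb_mm(vma->vm_mm);
	}
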
> diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
> index 8e3cce3d0a23..ca0b3e0ef596 100644
> --- a/Documentation/core-api/dma-api.rst
> +++ b/Documentation/core-api/dma-api.rst
> @@ -581,7 +581,7 @@ dma_alloc_pages().  page must be the pointer returned by dma_alloc_pages().
>  ::
>
>  	int
> -	dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
> +	dma_mmap_pages(struct device *dev, struct mm_area *vma,
>  		       size_t size, struct page *page)
>
>  Map an allocation returned from dma_alloc_pages() into a user address space.
> @@ -679,7 +679,7 @@ returned by dma_vmap_noncontiguous().
>  ::
>
>  	int
> -	dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
> +	dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
>  			       size_t size, struct sg_table *sgt)
>
>  Map an allocation returned from dma_alloc_noncontiguous() into a user address
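
As a concrete consumer of these two helpers: a driver fop that exposes
a dma_alloc_pages() buffer to userspace would now be spelled as below.
A sketch; my_dev, my_page and my_size are hypothetical driver state,
not names from the patch:

	static int my_mmap(struct file *file, struct mm_area *vma)
	{
		size_t size = vma->vm_end - vma->vm_start;

		if (size > my_size)
			return -EINVAL;
		/* Map the pages previously returned by dma_alloc_pages(). */
		return dma_mmap_pages(my_dev, vma, size, my_page);
	}
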
> diff --git a/Documentation/driver-api/uio-howto.rst b/Documentation/driver-api/uio-howto.rst
> index 907ffa3b38f5..9e68c745b295 100644
> --- a/Documentation/driver-api/uio-howto.rst
> +++ b/Documentation/driver-api/uio-howto.rst
> @@ -246,7 +246,7 @@ the members are required, others are optional.
>     hardware interrupt number. The flags given here will be used in the
>     call to :c:func:`request_irq()`.
>
> --  ``int (*mmap)(struct uio_info *info, struct vm_area_struct *vma)``:
> +-  ``int (*mmap)(struct uio_info *info, struct mm_area *vma)``:
>     Optional. If you need a special :c:func:`mmap()`
>     function, you can set it here. If this pointer is not NULL, your
>     :c:func:`mmap()` will be called instead of the built-in one.
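
A driver that does need its own hook might fill it in as below; a
sketch in which my_uio_mmap is a placeholder name and mapping
info->mem[0] straight through is purely illustrative:

	static int my_uio_mmap(struct uio_info *info, struct mm_area *vma)
	{
		/* Identity-map the device's first memory region. */
		return remap_pfn_range(vma, vma->vm_start,
				       info->mem[0].addr >> PAGE_SHIFT,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);
	}
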
> diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
> index 2a21a42c9386..056e27a40f3d 100644
> --- a/Documentation/driver-api/vfio.rst
> +++ b/Documentation/driver-api/vfio.rst
> @@ -419,7 +419,7 @@ similar to a file operations structure::
>  			 size_t count, loff_t *size);
>  		long	(*ioctl)(struct vfio_device *vdev, unsigned int cmd,
>  				 unsigned long arg);
> -		int	(*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
> +		int	(*mmap)(struct vfio_device *vdev, struct mm_area *vma);
>  		void	(*request)(struct vfio_device *vdev, unsigned int count);
>  		int	(*match)(struct vfio_device *vdev, char *buf);
>  		void	(*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
> diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
> index 0ec0bb6eb0fb..9c83c1262882 100644
> --- a/Documentation/filesystems/locking.rst
> +++ b/Documentation/filesystems/locking.rst
> @@ -530,7 +530,7 @@ prototypes::
>  	__poll_t (*poll) (struct file *, struct poll_table_struct *);
>  	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
>  	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
> -	int (*mmap) (struct file *, struct vm_area_struct *);
> +	int (*mmap) (struct file *, struct mm_area *);
>  	int (*open) (struct inode *, struct file *);
>  	int (*flush) (struct file *);
>  	int (*release) (struct inode *, struct file *);
> @@ -643,14 +643,14 @@ vm_operations_struct
>
>  prototypes::
>
> -	void (*open)(struct vm_area_struct *);
> -	void (*close)(struct vm_area_struct *);
> +	void (*open)(struct mm_area *);
> +	void (*close)(struct mm_area *);
>  	vm_fault_t (*fault)(struct vm_fault *);
>  	vm_fault_t (*huge_fault)(struct vm_fault *, unsigned int order);
>  	vm_fault_t (*map_pages)(struct vm_fault *, pgoff_t start, pgoff_t end);
> -	vm_fault_t (*page_mkwrite)(struct vm_area_struct *, struct vm_fault *);
> -	vm_fault_t (*pfn_mkwrite)(struct vm_area_struct *, struct vm_fault *);
> -	int (*access)(struct vm_area_struct *, unsigned long, void*, int, int);
> +	vm_fault_t (*page_mkwrite)(struct mm_area *, struct vm_fault *);
> +	vm_fault_t (*pfn_mkwrite)(struct mm_area *, struct vm_fault *);
> +	int (*access)(struct mm_area *, unsigned long, void*, int, int);
>
>  locking rules:
>
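
Spelled out, a minimal instance of these ops under the rename looks
like this (my_vma_open/my_vma_close/my_vm_ops are placeholder names):

	static void my_vma_open(struct mm_area *vma)
	{
		pr_debug("open %lx-%lx\n", vma->vm_start, vma->vm_end);
	}

	static void my_vma_close(struct mm_area *vma)
	{
		pr_debug("close %lx-%lx\n", vma->vm_start, vma->vm_end);
	}

	static const struct vm_operations_struct my_vm_ops = {
		.open  = my_vma_open,
		.close = my_vma_close,
	};
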
> diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst
> index 2a17865dfe39..2935efeceaa9 100644
> --- a/Documentation/filesystems/proc.rst
> +++ b/Documentation/filesystems/proc.rst
> @@ -2175,7 +2175,7 @@ the process is maintaining.  Example output::
>       | lr-------- 1 root root 64 Jan 27 11:24 400000-41a000 -> /usr/bin/ls
>
>  The name of a link represents the virtual memory bounds of a mapping, i.e.
> -vm_area_struct::vm_start-vm_area_struct::vm_end.
> +mm_area::vm_start-mm_area::vm_end.
>
>  The main purpose of the map_files is to retrieve a set of memory mapped
>  files in a fast way instead of parsing /proc/<pid>/maps or
> diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
> index ae79c30b6c0c..866485f271b0 100644
> --- a/Documentation/filesystems/vfs.rst
> +++ b/Documentation/filesystems/vfs.rst
> @@ -1102,7 +1102,7 @@ This describes how the VFS can manipulate an open file.  As of kernel
>  		__poll_t (*poll) (struct file *, struct poll_table_struct *);
>  		long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
>  		long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
> -		int (*mmap) (struct file *, struct vm_area_struct *);
> +		int (*mmap) (struct file *, struct mm_area *);
>  		int (*open) (struct inode *, struct file *);
>  		int (*flush) (struct file *, fl_owner_t id);
>  		int (*release) (struct inode *, struct file *);
> diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
> index d55751cad67c..aac2545c4a54 100644
> --- a/Documentation/gpu/drm-mm.rst
> +++ b/Documentation/gpu/drm-mm.rst
> @@ -280,8 +280,8 @@ made up of several fields, the more interesting ones being:
>  .. code-block:: c
>
>  	struct vm_operations_struct {
> -		void (*open)(struct vm_area_struct * area);
> -		void (*close)(struct vm_area_struct * area);
> +		void (*open)(struct mm_area * area);
> +		void (*close)(struct mm_area * area);
>  		vm_fault_t (*fault)(struct vm_fault *vmf);
>  	};
>
> diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
> index 7d61b7a8b65b..63fbba00dc3d 100644
> --- a/Documentation/mm/hmm.rst
> +++ b/Documentation/mm/hmm.rst
> @@ -298,7 +298,7 @@ between device driver specific code and shared common code:
>
>  1. ``mmap_read_lock()``
>
> -   The device driver has to pass a ``struct vm_area_struct`` to
> +   The device driver has to pass a ``struct mm_area`` to
>     migrate_vma_setup() so the mmap_read_lock() or mmap_write_lock() needs to
>     be held for the duration of the migration.
>
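
In code, that locking rule means the caller takes the lock before
handing the renamed structure to migrate_vma_setup() and keeps it for
the whole migration. A sketch of the shape only; src_pfns/dst_pfns are
caller-owned arrays, and the selection flags and pgmap owner fields
are omitted:

	struct migrate_vma args = {
		.vma   = vma,	/* struct mm_area * after this patch */
		.start = start,
		.end   = end,
		.src   = src_pfns,
		.dst   = dst_pfns,
	};

	mmap_read_lock(vma->vm_mm);
	ret = migrate_vma_setup(&args);	/* lock stays held throughout */
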
> diff --git a/Documentation/mm/hugetlbfs_reserv.rst b/Documentation/mm/hugetlbfs_reserv.rst
> index 4914fbf07966..afb86d44c57e 100644
> --- a/Documentation/mm/hugetlbfs_reserv.rst
> +++ b/Documentation/mm/hugetlbfs_reserv.rst
> @@ -104,7 +104,7 @@ These operations result in a call to the routine hugetlb_reserve_pages()::
>
>  	int hugetlb_reserve_pages(struct inode *inode,
>  				  long from, long to,
> -				  struct vm_area_struct *vma,
> +				  struct mm_area *vma,
>  				  vm_flags_t vm_flags)
>
>  The first thing hugetlb_reserve_pages() does is check if the NORESERVE
> @@ -181,7 +181,7 @@ Reservations are consumed when huge pages associated with the reservations
>  are allocated and instantiated in the corresponding mapping.  The allocation
>  is performed within the routine alloc_hugetlb_folio()::
>
> -	struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> +	struct folio *alloc_hugetlb_folio(struct mm_area *vma,
>  				     unsigned long addr, int avoid_reserve)
>
>  alloc_hugetlb_folio is passed a VMA pointer and a virtual address, so it can
> @@ -464,14 +464,14 @@ account the 'opposite' meaning of reservation map entries for private and
>  shared mappings and hide this detail from the caller::
>
>  	long vma_needs_reservation(struct hstate *h,
> -				   struct vm_area_struct *vma,
> +				   struct mm_area *vma,
>  				   unsigned long addr)
>
>  This routine calls region_chg() for the specified page.  If no reservation
>  exists, 1 is returned.  If a reservation exists, 0 is returned::
>
>  	long vma_commit_reservation(struct hstate *h,
> -				    struct vm_area_struct *vma,
> +				    struct mm_area *vma,
>  				    unsigned long addr)
>
>  This calls region_add() for the specified page.  As in the case of region_chg
> @@ -483,7 +483,7 @@ vma_needs_reservation.  An unexpected difference indicates the reservation
>  map was modified between calls::
>
>  	void vma_end_reservation(struct hstate *h,
> -				 struct vm_area_struct *vma,
> +				 struct mm_area *vma,
>  				 unsigned long addr)
>
>  This calls region_abort() for the specified page.  As in the case of region_chg
> @@ -492,7 +492,7 @@ vma_needs_reservation.  It will abort/end the in progress reservation add
>  operation::
>
>  	long vma_add_reservation(struct hstate *h,
> -				 struct vm_area_struct *vma,
> +				 struct mm_area *vma,
>  				 unsigned long addr)
>
>  This is a special wrapper routine to help facilitate reservation cleanup
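
The protocol the wrappers above implement, in order, is roughly this
condensed pseudocode of the mm/hugetlb.c logic (they are static there;
try_take_huge_folio() is a stand-in for the real dequeue/allocate
path, not an actual function):

	long chg = vma_needs_reservation(h, vma, addr);
	struct folio *folio = try_take_huge_folio(h, vma, addr, chg);

	if (folio)
		vma_commit_reservation(h, vma, addr);	/* consume the entry */
	else
		vma_end_reservation(h, vma, addr);	/* abort the region_chg() */
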
> diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
> index e6756e78b476..674c30658f90 100644
> --- a/Documentation/mm/process_addrs.rst
> +++ b/Documentation/mm/process_addrs.rst
> @@ -9,10 +9,10 @@ Process Addresses
>
>
>  Userland memory ranges are tracked by the kernel via Virtual Memory Areas or
> -'VMA's of type :c:struct:`!struct vm_area_struct`.
> +'VMA's of type :c:struct:`!struct mm_area`.
>
>  Each VMA describes a virtually contiguous memory range with identical
> -attributes, each described by a :c:struct:`!struct vm_area_struct`
> +attributes, each described by a :c:struct:`!struct mm_area`
>  object. Userland access outside of VMAs is invalid except in the case where an
>  adjacent stack VMA could be extended to contain the accessed address.
>
> @@ -142,7 +142,7 @@ obtain either a read or a write lock for each of these.
>  VMA fields
>  ^^^^^^^^^^
>
> -We can subdivide :c:struct:`!struct vm_area_struct` fields by their purpose, which makes it
> +We can subdivide :c:struct:`!struct mm_area` fields by their purpose, which makes it
>  easier to explore their locking characteristics:
>
>  .. note:: We exclude VMA lock-specific fields here to avoid confusion, as these
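
For orientation before the field-by-field breakdown: visiting each of
these objects under the read lock is unchanged by the patch apart from
the type name. A sketch, where mm is some struct mm_struct * the
caller already holds a reference on:

	struct mm_area *vma;
	VMA_ITERATOR(vmi, mm, 0);

	mmap_read_lock(mm);
	for_each_vma(vmi, vma)
		pr_info("vma %lx-%lx\n", vma->vm_start, vma->vm_end);
	mmap_read_unlock(mm);
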
> diff --git a/Documentation/translations/zh_CN/core-api/cachetlb.rst b/Documentation/translations/zh_CN/core-api/cachetlb.rst
> index 64295c61d1c1..96eefda0262e 100644
> --- a/Documentation/translations/zh_CN/core-api/cachetlb.rst
> +++ b/Documentation/translations/zh_CN/core-api/cachetlb.rst
> @@ -51,7 +51,7 @@ cpu上对这个地址空间进行刷新。
>  	这个接口被用来处理整个地址空间的页表操作,比如在fork和exec过程
>  	中发生的事情。
>
> -3) ``void flush_tlb_range(struct vm_area_struct *vma,
> +3) ``void flush_tlb_range(struct mm_area *vma,
>     unsigned long start, unsigned long end)``
>
>  	这里我们要从TLB中刷新一个特定范围的(用户)虚拟地址转换。在运行后,
> @@ -65,7 +65,7 @@ cpu上对这个地址空间进行刷新。
>  	个页面大小的转换,而不是让内核为每个可能被修改的页表项调用
>  	flush_tlb_page(见下文)。
>
> -4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``
> +4) ``void flush_tlb_page(struct mm_area *vma, unsigned long addr)``
>
>  	这一次我们需要从TLB中删除PAGE_SIZE大小的转换。‘vma’是Linux用来跟
>  	踪进程的mmap区域的支持结构体,地址空间可以通过vma->vm_mm获得。另
> @@ -78,7 +78,7 @@ cpu上对这个地址空间进行刷新。
>
>  	这主要是在故障处理时使用。
>
> -5) ``void update_mmu_cache(struct vm_area_struct *vma,
> +5) ``void update_mmu_cache(struct mm_area *vma,
>     unsigned long address, pte_t *ptep)``
>
>  	在每个缺页异常结束时,这个程序被调用,以告诉体系结构特定的代码,在
> @@ -134,7 +134,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
>
>  	这个选项与flush_cache_mm分开,以允许对VIPT缓存进行一些优化。
>
> -3) ``void flush_cache_range(struct vm_area_struct *vma,
> +3) ``void flush_cache_range(struct mm_area *vma,
>     unsigned long start, unsigned long end)``
>
>  	在这里,我们要从缓存中刷新一个特定范围的(用户)虚拟地址。运行
> @@ -147,7 +147,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
>  	除多个页面大小的区域, 而不是让内核为每个可能被修改的页表项调
>  	用 flush_cache_page (见下文)。
>
> -4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``
> +4) ``void flush_cache_page(struct mm_area *vma, unsigned long addr, unsigned long pfn)``
>
>  	这一次我们需要从缓存中删除一个PAGE_SIZE大小的区域。“vma”是
>  	Linux用来跟踪进程的mmap区域的支持结构体,地址空间可以通过
> @@ -284,9 +284,9 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
>  	该函数的调用情形与flush_dcache_page()相同。它允许架构针对刷新整个
>  	folio页面进行优化,而不是一次刷新一页。
>
> -  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +  ``void copy_to_user_page(struct mm_area *vma, struct page *page,
>    unsigned long user_vaddr, void *dst, void *src, int len)``
> -  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +  ``void copy_from_user_page(struct mm_area *vma, struct page *page,
>    unsigned long user_vaddr, void *dst, void *src, int len)``
>
>  	当内核需要复制任意的数据进出任意的用户页时(比如ptrace()),它将使
> @@ -296,7 +296,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
>  	处理器的指令缓存没有对cpu存储进行窥探,那么你很可能需要为
>  	copy_to_user_page()刷新指令缓存。
>
> -  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
> +  ``void flush_anon_page(struct mm_area *vma, struct page *page,
>    unsigned long vmaddr)``
>
>  	当内核需要访问一个匿名页的内容时,它会调用这个函数(目前只有
> @@ -310,7 +310,7 @@ HyperSparc cpu就是这样一个具有这种属性的cpu。
>
>  	如果icache不对存储进行窥探,那么这个程序将需要对其进行刷新。
>
> -  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
> +  ``void flush_icache_page(struct mm_area *vma, struct page *page)``
>
>  	flush_icache_page的所有功能都可以在flush_dcache_page和update_mmu_cache
>  	中实现。在未来,我们希望能够完全删除这个接口。
> diff --git a/Documentation/translations/zh_CN/mm/hmm.rst b/Documentation/translations/zh_CN/mm/hmm.rst
> index 22c210f4e94f..ad4e2847b119 100644
> --- a/Documentation/translations/zh_CN/mm/hmm.rst
> +++ b/Documentation/translations/zh_CN/mm/hmm.rst
> @@ -247,7 +247,7 @@ devm_memunmap_pages() 和 devm_release_mem_region() 当资源可以绑定到 ``s
>
>  1. ``mmap_read_lock()``
>
> -   设备驱动程序必须将 ``struct vm_area_struct`` 传递给migrate_vma_setup(),
> +   设备驱动程序必须将 ``struct mm_area`` 传递给migrate_vma_setup(),
>     因此需要在迁移期间保留 mmap_read_lock() 或 mmap_write_lock()。
>
>  2. ``migrate_vma_setup(struct migrate_vma *args)``
> diff --git a/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst b/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
> index 20947f8bd065..b85b68f3afd4 100644
> --- a/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
> +++ b/Documentation/translations/zh_CN/mm/hugetlbfs_reserv.rst
> @@ -95,7 +95,7 @@ Page Flags
>
>  	int hugetlb_reserve_pages(struct inode *inode,
>  				  long from, long to,
> -				  struct vm_area_struct *vma,
> +				  struct mm_area *vma,
>  				  vm_flags_t vm_flags)
>
>  hugetlb_reserve_pages()做的第一件事是检查在调用shmget()或mmap()时是否指定了NORESERVE
> @@ -146,7 +146,7 @@ HPAGE_RESV_OWNER标志被设置,以表明该VMA拥有预留。
>  当与预留相关的巨页在相应的映射中被分配和实例化时,预留就被消耗了。该分配是在函数alloc_hugetlb_folio()
>  中进行的::
>
> -	struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> +	struct folio *alloc_hugetlb_folio(struct mm_area *vma,
>  				     unsigned long addr, int avoid_reserve)
>
>  alloc_hugetlb_folio被传递给一个VMA指针和一个虚拟地址,因此它可以查阅预留映射以确定是否存在预留。
> @@ -342,13 +342,13 @@ region_count()在解除私有巨页映射时被调用。在私有映射中,预
>  它们确实考虑到了私有和共享映射的预留映射条目的 “相反” 含义,并向调用者隐藏了这个细节::
>
>  	long vma_needs_reservation(struct hstate *h,
> -				   struct vm_area_struct *vma,
> +				   struct mm_area *vma,
>  				   unsigned long addr)
>
>  该函数为指定的页面调用 region_chg()。如果不存在预留,则返回1。如果存在预留,则返回0::
>
>  	long vma_commit_reservation(struct hstate *h,
> -				    struct vm_area_struct *vma,
> +				    struct mm_area *vma,
>  				    unsigned long addr)
>
>  这将调用 region_add(),用于指定的页面。与region_chg和region_add的情况一样,该函数应在
> @@ -357,14 +357,14 @@ region_count()在解除私有巨页映射时被调用。在私有映射中,预
>  现意外的差异,说明在两次调用之间修改了预留映射::
>
>  	void vma_end_reservation(struct hstate *h,
> -				 struct vm_area_struct *vma,
> +				 struct mm_area *vma,
>  				 unsigned long addr)
>
>  这将调用指定页面的 region_abort()。与region_chg和region_abort的情况一样,该函数应在
>  先前调用的vma_needs_reservation后被调用。它将中止/结束正在进行的预留添加操作::
>
>  	long vma_add_reservation(struct hstate *h,
> -				 struct vm_area_struct *vma,
> +				 struct mm_area *vma,
>  				 unsigned long addr)
>
>  这是一个特殊的包装函数,有助于在错误路径上清理预留。它只从repare_reserve_on_error()函数
> diff --git a/Documentation/userspace-api/media/conf_nitpick.py b/Documentation/userspace-api/media/conf_nitpick.py
> index 0a8e236d07ab..3704eb6e4e3b 100644
> --- a/Documentation/userspace-api/media/conf_nitpick.py
> +++ b/Documentation/userspace-api/media/conf_nitpick.py
> @@ -103,7 +103,7 @@ nitpick_ignore = [
>      ("c:type", "usb_interface"),
>      ("c:type", "v4l2_std_id"),
>      ("c:type", "video_system_t"),
> -    ("c:type", "vm_area_struct"),
> +    ("c:type", "mm_area"),
>
>      # Opaque structures
>
> diff --git a/arch/alpha/include/asm/cacheflush.h b/arch/alpha/include/asm/cacheflush.h
> index 36a7e924c3b9..6a9f035ab3c9 100644
> --- a/arch/alpha/include/asm/cacheflush.h
> +++ b/arch/alpha/include/asm/cacheflush.h
> @@ -35,7 +35,7 @@ extern void smp_imb(void);
>
>  extern void __load_new_mm_context(struct mm_struct *);
>  static inline void
> -flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
> +flush_icache_user_page(struct mm_area *vma, struct page *page,
>  			unsigned long addr, int len)
>  {
>  	if (vma->vm_flags & VM_EXEC) {
> @@ -48,7 +48,7 @@ flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
>  }
>  #define flush_icache_user_page flush_icache_user_page
>  #else /* CONFIG_SMP */
> -extern void flush_icache_user_page(struct vm_area_struct *vma,
> +extern void flush_icache_user_page(struct mm_area *vma,
>  		struct page *page, unsigned long addr, int len);
>  #define flush_icache_user_page flush_icache_user_page
>  #endif /* CONFIG_SMP */
> @@ -57,7 +57,7 @@ extern void flush_icache_user_page(struct vm_area_struct *vma,
>   * Both implementations of flush_icache_user_page flush the entire
>   * address space, so one call, no matter how many pages.
>   */
> -static inline void flush_icache_pages(struct vm_area_struct *vma,
> +static inline void flush_icache_pages(struct mm_area *vma,
>  		struct page *page, unsigned int nr)
>  {
>  	flush_icache_user_page(vma, page, 0, 0);
> diff --git a/arch/alpha/include/asm/machvec.h b/arch/alpha/include/asm/machvec.h
> index 490fc880bb3f..964ae4fe2dd3 100644
> --- a/arch/alpha/include/asm/machvec.h
> +++ b/arch/alpha/include/asm/machvec.h
> @@ -16,7 +16,7 @@
>
>  struct task_struct;
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>  struct linux_hose_info;
>  struct pci_dev;
>  struct pci_ops;
> diff --git a/arch/alpha/include/asm/pci.h b/arch/alpha/include/asm/pci.h
> index 6c04fcbdc8ed..d402ba6d7a00 100644
> --- a/arch/alpha/include/asm/pci.h
> +++ b/arch/alpha/include/asm/pci.h
> @@ -82,7 +82,7 @@ extern int pci_legacy_read(struct pci_bus *bus, loff_t port, u32 *val,
>  extern int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val,
>  			    size_t count);
>  extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
> -				      struct vm_area_struct *vma,
> +				      struct mm_area *vma,
>  				      enum pci_mmap_state mmap_state);
>  extern void pci_adjust_legacy_attr(struct pci_bus *bus,
>  				   enum pci_mmap_state mmap_type);
> diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
> index 02e8817a8921..fdb7f661c52a 100644
> --- a/arch/alpha/include/asm/pgtable.h
> +++ b/arch/alpha/include/asm/pgtable.h
> @@ -19,7 +19,7 @@
>  #include <asm/setup.h>
>
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>
>  /* Certain architectures need to do special things when PTEs
>   * within a page table are directly modified.  Thus, the following
> @@ -298,13 +298,13 @@ extern pgd_t swapper_pg_dir[1024];
>   * The Alpha doesn't have any external MMU info:  the kernel page
>   * tables contain all the necessary information.
>   */
> -extern inline void update_mmu_cache(struct vm_area_struct * vma,
> +extern inline void update_mmu_cache(struct mm_area * vma,
>  	unsigned long address, pte_t *ptep)
>  {
>  }
>
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  }
> diff --git a/arch/alpha/include/asm/tlbflush.h b/arch/alpha/include/asm/tlbflush.h
> index ba4b359d6c39..76232c200987 100644
> --- a/arch/alpha/include/asm/tlbflush.h
> +++ b/arch/alpha/include/asm/tlbflush.h
> @@ -26,7 +26,7 @@ ev5_flush_tlb_current(struct mm_struct *mm)
>
>  __EXTERN_INLINE void
>  ev5_flush_tlb_current_page(struct mm_struct * mm,
> -			   struct vm_area_struct *vma,
> +			   struct mm_area *vma,
>  			   unsigned long addr)
>  {
>  	if (vma->vm_flags & VM_EXEC)
> @@ -81,7 +81,7 @@ flush_tlb_mm(struct mm_struct *mm)
>
>  /* Page-granular tlb flush.  */
>  static inline void
> -flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> @@ -94,7 +94,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
>  /* Flush a specified range of user mapping.  On the Alpha we flush
>     the whole user tlb.  */
>  static inline void
> -flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +flush_tlb_range(struct mm_area *vma, unsigned long start,
>  		unsigned long end)
>  {
>  	flush_tlb_mm(vma->vm_mm);
> @@ -104,8 +104,8 @@ flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
>
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *);
> -extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
> -extern void flush_tlb_range(struct vm_area_struct *, unsigned long,
> +extern void flush_tlb_page(struct mm_area *, unsigned long);
> +extern void flush_tlb_range(struct mm_area *, unsigned long,
>  			    unsigned long);
>
>  #endif /* CONFIG_SMP */
> diff --git a/arch/alpha/kernel/pci-sysfs.c b/arch/alpha/kernel/pci-sysfs.c
> index 3048758304b5..ec66bae1cfae 100644
> --- a/arch/alpha/kernel/pci-sysfs.c
> +++ b/arch/alpha/kernel/pci-sysfs.c
> @@ -16,7 +16,7 @@
>  #include <linux/pci.h>
>
>  static int hose_mmap_page_range(struct pci_controller *hose,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				enum pci_mmap_state mmap_type, int sparse)
>  {
>  	unsigned long base;
> @@ -34,7 +34,7 @@ static int hose_mmap_page_range(struct pci_controller *hose,
>  }
>
>  static int __pci_mmap_fits(struct pci_dev *pdev, int num,
> -			   struct vm_area_struct *vma, int sparse)
> +			   struct mm_area *vma, int sparse)
>  {
>  	unsigned long nr, start, size;
>  	int shift = sparse ? 5 : 0;
> @@ -56,7 +56,7 @@ static int __pci_mmap_fits(struct pci_dev *pdev, int num,
>   * pci_mmap_resource - map a PCI resource into user memory space
>   * @kobj: kobject for mapping
>   * @attr: struct bin_attribute for the file being mapped
> - * @vma: struct vm_area_struct passed into the mmap
> + * @vma: struct mm_area passed into the mmap
>   * @sparse: address space type
>   *
>   * Use the bus mapping routines to map a PCI resource into userspace.
> @@ -65,7 +65,7 @@ static int __pci_mmap_fits(struct pci_dev *pdev, int num,
>   */
>  static int pci_mmap_resource(struct kobject *kobj,
>  			     const struct bin_attribute *attr,
> -			     struct vm_area_struct *vma, int sparse)
> +			     struct mm_area *vma, int sparse)
>  {
>  	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
>  	struct resource *res = attr->private;
> @@ -94,14 +94,14 @@ static int pci_mmap_resource(struct kobject *kobj,
>
>  static int pci_mmap_resource_sparse(struct file *filp, struct kobject *kobj,
>  				    const struct bin_attribute *attr,
> -				    struct vm_area_struct *vma)
> +				    struct mm_area *vma)
>  {
>  	return pci_mmap_resource(kobj, attr, vma, 1);
>  }
>
>  static int pci_mmap_resource_dense(struct file *filp, struct kobject *kobj,
>  				   const struct bin_attribute *attr,
> -				   struct vm_area_struct *vma)
> +				   struct mm_area *vma)
>  {
>  	return pci_mmap_resource(kobj, attr, vma, 0);
>  }
> @@ -254,7 +254,7 @@ int pci_create_resource_files(struct pci_dev *pdev)
>  /* Legacy I/O bus mapping stuff. */
>
>  static int __legacy_mmap_fits(struct pci_controller *hose,
> -			      struct vm_area_struct *vma,
> +			      struct mm_area *vma,
>  			      unsigned long res_size, int sparse)
>  {
>  	unsigned long nr, start, size;
> @@ -283,7 +283,7 @@ static inline int has_sparse(struct pci_controller *hose,
>  	return base != 0;
>  }
>
> -int pci_mmap_legacy_page_range(struct pci_bus *bus, struct vm_area_struct *vma,
> +int pci_mmap_legacy_page_range(struct pci_bus *bus, struct mm_area *vma,
>  			       enum pci_mmap_state mmap_type)
>  {
>  	struct pci_controller *hose = bus->sysdata;
> diff --git a/arch/alpha/kernel/smp.c b/arch/alpha/kernel/smp.c
> index ed06367ece57..1f71a076196b 100644
> --- a/arch/alpha/kernel/smp.c
> +++ b/arch/alpha/kernel/smp.c
> @@ -658,7 +658,7 @@ flush_tlb_mm(struct mm_struct *mm)
>  EXPORT_SYMBOL(flush_tlb_mm);
>
>  struct flush_tlb_page_struct {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm;
>  	unsigned long addr;
>  };
> @@ -676,7 +676,7 @@ ipi_flush_tlb_page(void *x)
>  }
>
>  void
> -flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	struct flush_tlb_page_struct data;
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -709,7 +709,7 @@ flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
>  EXPORT_SYMBOL(flush_tlb_page);
>
>  void
> -flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	/* On the Alpha we always flush the whole user tlb.  */
>  	flush_tlb_mm(vma->vm_mm);
> @@ -727,7 +727,7 @@ ipi_flush_icache_page(void *x)
>  }
>
>  void
> -flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
> +flush_icache_user_page(struct mm_area *vma, struct page *page,
>  			unsigned long addr, int len)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
> index a9816bbc9f34..a65198563de8 100644
> --- a/arch/alpha/mm/fault.c
> +++ b/arch/alpha/mm/fault.c
> @@ -85,7 +85,7 @@ asmlinkage void
>  do_page_fault(unsigned long address, unsigned long mmcsr,
>  	      long cause, struct pt_regs *regs)
>  {
> -	struct vm_area_struct * vma;
> +	struct mm_area * vma;
>  	struct mm_struct *mm = current->mm;
>  	const struct exception_table_entry *fixup;
>  	int si_code = SEGV_MAPERR;
> diff --git a/arch/arc/include/asm/hugepage.h b/arch/arc/include/asm/hugepage.h
> index 8a2441670a8f..3f3e305802f6 100644
> --- a/arch/arc/include/asm/hugepage.h
> +++ b/arch/arc/include/asm/hugepage.h
> @@ -61,11 +61,11 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>  	*pmdp = pmd;
>  }
>
> -extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> +extern void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
>  				 pmd_t *pmd);
>
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> -extern void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
>  				unsigned long end);
>
>  /* We don't have hardware dirty/accessed bits, generic_pmdp_establish is fine.*/
> diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
> index def0dfb95b43..bb03a8165e36 100644
> --- a/arch/arc/include/asm/page.h
> +++ b/arch/arc/include/asm/page.h
> @@ -25,13 +25,13 @@
>  #define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
>  #define copy_page(to, from)		memcpy((to), (from), PAGE_SIZE)
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct page;
>
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long u_vaddr, struct vm_area_struct *vma);
> +			unsigned long u_vaddr, struct mm_area *vma);
>  void clear_user_page(void *to, unsigned long u_vaddr, struct page *page);
>
>  typedef struct {
> diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
> index 8ebec1b21d24..80c4759894fc 100644
> --- a/arch/arc/include/asm/pgtable-bits-arcv2.h
> +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
> @@ -101,7 +101,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>  }
>
>  struct vm_fault;
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *ptep, unsigned int nr);
>
>  #define update_mmu_cache(vma, addr, ptep) \
> diff --git a/arch/arc/include/asm/tlbflush.h b/arch/arc/include/asm/tlbflush.h
> index 992a2837a53f..e442c338f36a 100644
> --- a/arch/arc/include/asm/tlbflush.h
> +++ b/arch/arc/include/asm/tlbflush.h
> @@ -10,12 +10,12 @@
>
>  void local_flush_tlb_all(void);
>  void local_flush_tlb_mm(struct mm_struct *mm);
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page);
>  void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
> -void local_flush_tlb_range(struct vm_area_struct *vma,
> +void local_flush_tlb_range(struct mm_area *vma,
>  			   unsigned long start, unsigned long end);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
>  			       unsigned long end);
>  #endif
>
> @@ -29,14 +29,14 @@ void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  #define flush_pmd_tlb_range(vma, s, e)	local_flush_pmd_tlb_range(vma, s, e)
>  #endif
>  #else
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  							 unsigned long end);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long page);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> +extern void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
>  #endif
>  #endif /* CONFIG_SMP */
>  #endif
> diff --git a/arch/arc/kernel/arc_hostlink.c b/arch/arc/kernel/arc_hostlink.c
> index 08c5196efe0a..ca695259edde 100644
> --- a/arch/arc/kernel/arc_hostlink.c
> +++ b/arch/arc/kernel/arc_hostlink.c
> @@ -15,7 +15,7 @@
>
>  static unsigned char __HOSTLINK__[4 * PAGE_SIZE] __aligned(PAGE_SIZE);
>
> -static int arc_hl_mmap(struct file *fp, struct vm_area_struct *vma)
> +static int arc_hl_mmap(struct file *fp, struct mm_area *vma)
>  {
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
>
> diff --git a/arch/arc/kernel/troubleshoot.c b/arch/arc/kernel/troubleshoot.c
> index c380d8c30704..0e54ebd71f6c 100644
> --- a/arch/arc/kernel/troubleshoot.c
> +++ b/arch/arc/kernel/troubleshoot.c
> @@ -76,7 +76,7 @@ static void print_task_path_n_nm(struct task_struct *tsk)
>
>  static void show_faulting_vma(unsigned long address)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *active_mm = current->active_mm;
>
>  	/* can't use print_vma_addr() yet as it doesn't check for
> diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
> index 9106ceac323c..29f282d3b006 100644
> --- a/arch/arc/mm/cache.c
> +++ b/arch/arc/mm/cache.c
> @@ -880,7 +880,7 @@ noinline void flush_cache_all(void)
>  }
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long u_vaddr, struct vm_area_struct *vma)
> +	unsigned long u_vaddr, struct mm_area *vma)
>  {
>  	struct folio *src = page_folio(from);
>  	struct folio *dst = page_folio(to);
> diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
> index 95119a5e7761..a757e4c1aeca 100644
> --- a/arch/arc/mm/fault.c
> +++ b/arch/arc/mm/fault.c
> @@ -72,7 +72,7 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
>
>  void do_page_fault(unsigned long address, struct pt_regs *regs)
>  {
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	struct task_struct *tsk = current;
>  	struct mm_struct *mm = tsk->mm;
>  	int sig, si_code = SEGV_MAPERR;
> diff --git a/arch/arc/mm/mmap.c b/arch/arc/mm/mmap.c
> index 2185afe8d59f..d43d7ab91d3d 100644
> --- a/arch/arc/mm/mmap.c
> +++ b/arch/arc/mm/mmap.c
> @@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  		unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_unmapped_area_info info = {};
>
>  	/*
> diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
> index cae4a7aae0ed..94da2ce6b491 100644
> --- a/arch/arc/mm/tlb.c
> +++ b/arch/arc/mm/tlb.c
> @@ -205,7 +205,7 @@ noinline void local_flush_tlb_mm(struct mm_struct *mm)
>   *      without doing any explicit Shootdown
>   *  -In case of kernel Flush, entry has to be shot down explicitly
>   */
> -void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			   unsigned long end)
>  {
>  	const unsigned int cpu = smp_processor_id();
> @@ -275,7 +275,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
>   * NOTE One TLB entry contains translation for single PAGE
>   */
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	const unsigned int cpu = smp_processor_id();
>  	unsigned long flags;
> @@ -295,7 +295,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
>  #ifdef CONFIG_SMP
>
>  struct tlb_args {
> -	struct vm_area_struct *ta_vma;
> +	struct mm_area *ta_vma;
>  	unsigned long ta_start;
>  	unsigned long ta_end;
>  };
> @@ -341,7 +341,7 @@ void flush_tlb_mm(struct mm_struct *mm)
>  			 mm, 1);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
>  {
>  	struct tlb_args ta = {
>  		.ta_vma = vma,
> @@ -351,7 +351,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
>  	on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_page, &ta, 1);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  		     unsigned long end)
>  {
>  	struct tlb_args ta = {
> @@ -364,7 +364,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
>  			 unsigned long end)
>  {
>  	struct tlb_args ta = {
> @@ -391,7 +391,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  /*
>   * Routine to create a TLB entry
>   */
> -static void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
> +static void create_tlb(struct mm_area *vma, unsigned long vaddr, pte_t *ptep)
>  {
>  	unsigned long flags;
>  	unsigned int asid_or_sasid, rwx;
> @@ -469,7 +469,7 @@ static void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *p
>   * Note that flush (when done) involves both WBACK - so physical page is
>   * in sync as well as INV - so any non-congruent aliases don't remain
>   */
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long vaddr_unaligned, pte_t *ptep, unsigned int nr)
>  {
>  	unsigned long vaddr = vaddr_unaligned & PAGE_MASK;
> @@ -527,14 +527,14 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
>   * Thus THP PMD accessors are implemented in terms of PTE (just like sparc)
>   */
>
> -void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> +void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
>  				 pmd_t *pmd)
>  {
>  	pte_t pte = __pte(pmd_val(*pmd));
>  	update_mmu_cache_range(NULL, vma, addr, &pte, HPAGE_PMD_NR);
>  }
>
> -void local_flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
>  			       unsigned long end)
>  {
>  	unsigned int cpu;
> diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
> index 8ed8b9a24efe..ad88660a95c4 100644
> --- a/arch/arm/include/asm/cacheflush.h
> +++ b/arch/arm/include/asm/cacheflush.h
> @@ -165,7 +165,7 @@ extern void dmac_flush_range(const void *, const void *);
>   * processes address space.  Really, we want to allow our "user
>   * space" model to handle this.
>   */
> -extern void copy_to_user_page(struct vm_area_struct *, struct page *,
> +extern void copy_to_user_page(struct mm_area *, struct page *,
>  	unsigned long, void *, const void *, unsigned long);
>  #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
>  	do {							\
> @@ -222,7 +222,7 @@ static inline void vivt_flush_cache_mm(struct mm_struct *mm)
>  }
>
>  static inline void
> -vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +vivt_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> @@ -231,7 +231,7 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
>  					vma->vm_flags);
>  }
>
> -static inline void vivt_flush_cache_pages(struct vm_area_struct *vma,
> +static inline void vivt_flush_cache_pages(struct mm_area *vma,
>  		unsigned long user_addr, unsigned long pfn, unsigned int nr)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -252,8 +252,8 @@ static inline void vivt_flush_cache_pages(struct vm_area_struct *vma,
>  		vivt_flush_cache_pages(vma, addr, pfn, nr)
>  #else
>  void flush_cache_mm(struct mm_struct *mm);
> -void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> -void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr,
> +void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
> +void flush_cache_pages(struct mm_area *vma, unsigned long user_addr,
>  		unsigned long pfn, unsigned int nr);
>  #endif
>
> @@ -309,10 +309,10 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
>  }
>
>  #define ARCH_HAS_FLUSH_ANON_PAGE
> -static inline void flush_anon_page(struct vm_area_struct *vma,
> +static inline void flush_anon_page(struct mm_area *vma,
>  			 struct page *page, unsigned long vmaddr)
>  {
> -	extern void __flush_anon_page(struct vm_area_struct *vma,
> +	extern void __flush_anon_page(struct mm_area *vma,
>  				struct page *, unsigned long);
>  	if (PageAnon(page))
>  		__flush_anon_page(vma, page, vmaddr);
> diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
> index ef11b721230e..ba8262198322 100644
> --- a/arch/arm/include/asm/page.h
> +++ b/arch/arm/include/asm/page.h
> @@ -102,34 +102,34 @@
>  #endif
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>
>  struct cpu_user_fns {
>  	void (*cpu_clear_user_highpage)(struct page *page, unsigned long vaddr);
>  	void (*cpu_copy_user_highpage)(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma);
> +			unsigned long vaddr, struct mm_area *vma);
>  };
>
>  void fa_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void fa_clear_user_highpage(struct page *page, unsigned long vaddr);
>  void feroceon_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void feroceon_clear_user_highpage(struct page *page, unsigned long vaddr);
>  void v4_mc_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void v4_mc_clear_user_highpage(struct page *page, unsigned long vaddr);
>  void v4wb_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void v4wb_clear_user_highpage(struct page *page, unsigned long vaddr);
>  void v4wt_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void v4wt_clear_user_highpage(struct page *page, unsigned long vaddr);
>  void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void xsc3_mc_clear_user_highpage(struct page *page, unsigned long vaddr);
>  void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>  void xscale_mc_clear_user_highpage(struct page *page, unsigned long vaddr);
>
>  #ifdef MULTI_USER
> @@ -145,7 +145,7 @@ extern struct cpu_user_fns cpu_user;
>
>  extern void __cpu_clear_user_highpage(struct page *page, unsigned long vaddr);
>  extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma);
> +			unsigned long vaddr, struct mm_area *vma);
>  #endif
>
>  #define clear_user_highpage(page,vaddr)		\
> diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
> index 38c6e4a2a0b6..401ec430d0fd 100644
> --- a/arch/arm/include/asm/tlbflush.h
> +++ b/arch/arm/include/asm/tlbflush.h
> @@ -205,7 +205,7 @@
>  #include <linux/sched.h>
>
>  struct cpu_tlb_fns {
> -	void (*flush_user_range)(unsigned long, unsigned long, struct vm_area_struct *);
> +	void (*flush_user_range)(unsigned long, unsigned long, struct mm_area *);
>  	void (*flush_kern_range)(unsigned long, unsigned long);
>  	unsigned long tlb_flags;
>  };
> @@ -223,7 +223,7 @@ struct cpu_tlb_fns {
>  #define __cpu_flush_user_tlb_range	__glue(_TLB,_flush_user_tlb_range)
>  #define __cpu_flush_kern_tlb_range	__glue(_TLB,_flush_kern_tlb_range)
>
> -extern void __cpu_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +extern void __cpu_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  extern void __cpu_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  #endif
> @@ -264,7 +264,7 @@ extern struct cpu_tlb_fns cpu_tlb;
>   *	flush_tlb_page(vma, uaddr)
>   *
>   *		Invalidate the specified page in the specified address range.
> - *		- vma	- vm_area_struct describing address range
> + *		- vma	- mm_area describing address range
>   *		- vaddr - virtual address (may not be aligned)
>   */
>
> @@ -410,7 +410,7 @@ static inline void __flush_tlb_mm(struct mm_struct *mm)
>  }
>
>  static inline void
> -__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
> +__local_flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
>  {
>  	const int zero = 0;
>  	const unsigned int __tlb_flag = __cpu_tlb_flags;
> @@ -432,7 +432,7 @@ __local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
>  }
>
>  static inline void
> -local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
> +local_flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
>  {
>  	const unsigned int __tlb_flag = __cpu_tlb_flags;
>
> @@ -449,7 +449,7 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
>  }
>
>  static inline void
> -__flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
> +__flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
>  {
>  	const unsigned int __tlb_flag = __cpu_tlb_flags;
>
> @@ -608,9 +608,9 @@ static inline void clean_pmd_entry(void *pmd)
>  #else
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr);
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long uaddr);
>  extern void flush_tlb_kernel_page(unsigned long kaddr);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  extern void flush_bp_all(void);
>  #endif
> @@ -622,11 +622,11 @@ extern void flush_bp_all(void);
>   * the set_ptes() function.
>   */
>  #if __LINUX_ARM_ARCH__ < 6
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep, unsigned int nr);
>  #else
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +		struct mm_area *vma, unsigned long addr, pte_t *ptep,
>  		unsigned int nr)
>  {
>  }
> @@ -644,17 +644,17 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>  #ifndef __ASSEMBLY__
>  static inline void local_flush_tlb_all(void)									{ }
>  static inline void local_flush_tlb_mm(struct mm_struct *mm)							{ }
> -static inline void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)			{ }
> +static inline void local_flush_tlb_page(struct mm_area *vma, unsigned long uaddr)			{ }
>  static inline void local_flush_tlb_kernel_page(unsigned long kaddr)						{ }
> -static inline void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)	{ }
> +static inline void local_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)	{ }
>  static inline void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)				{ }
>  static inline void local_flush_bp_all(void)									{ }
>
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr);
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long uaddr);
>  extern void flush_tlb_kernel_page(unsigned long kaddr);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  extern void flush_bp_all(void);
>  #endif	/* __ASSEMBLY__ */
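
Every flush helper in this header changes only in its prototype, the
type spelling being the entire diff, so call sites read the same before
and after. A hypothetical caller, purely to show the shape (the
function name is invented for illustration):

	static void zap_one(struct mm_area *vma, unsigned long uaddr)
	{
		/* PTE already cleared; drop the stale TLB entry */
		flush_tlb_page(vma, uaddr);
	}
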
> diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
> index 123f4a8ef446..026d60dfd19e 100644
> --- a/arch/arm/kernel/asm-offsets.c
> +++ b/arch/arm/kernel/asm-offsets.c
> @@ -106,8 +106,8 @@ int main(void)
>    DEFINE(MM_CONTEXT_ID,		offsetof(struct mm_struct, context.id.counter));
>    BLANK();
>  #endif
> -  DEFINE(VMA_VM_MM,		offsetof(struct vm_area_struct, vm_mm));
> -  DEFINE(VMA_VM_FLAGS,		offsetof(struct vm_area_struct, vm_flags));
> +  DEFINE(VMA_VM_MM,		offsetof(struct mm_area, vm_mm));
> +  DEFINE(VMA_VM_FLAGS,		offsetof(struct mm_area, vm_flags));
>    BLANK();
>    DEFINE(VM_EXEC,	       	VM_EXEC);
>    BLANK();
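
Nice detail in asm-offsets.c: the generated constants keep their
VMA_VM_MM and VMA_VM_FLAGS names even though the offsetof() expressions
now name struct mm_area, so assembly consumers of asm-offsets.h need no
change. Roughly what the generated header ends up containing (offset
values invented for illustration):

	#define VMA_VM_MM	64	/* offsetof(struct mm_area, vm_mm) */
	#define VMA_VM_FLAGS	80	/* offsetof(struct mm_area, vm_flags) */
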
> diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
> index e16ed102960c..d35d4687e6a8 100644
> --- a/arch/arm/kernel/process.c
> +++ b/arch/arm/kernel/process.c
> @@ -306,7 +306,7 @@ unsigned long __get_wchan(struct task_struct *p)
>   * atomic helpers. Insert it into the gate_vma so that it is visible
>   * through ptrace and /proc/<pid>/mem.
>   */
> -static struct vm_area_struct gate_vma;
> +static struct mm_area gate_vma;
>
>  static int __init gate_vma_init(void)
>  {
> @@ -319,7 +319,7 @@ static int __init gate_vma_init(void)
>  }
>  arch_initcall(gate_vma_init);
>
> -struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
> +struct mm_area *get_gate_vma(struct mm_struct *mm)
>  {
>  	return &gate_vma;
>  }
> @@ -338,7 +338,7 @@ int in_gate_area_no_mm(unsigned long addr)
>  #define is_gate_vma(vma)	0
>  #endif
>
> -const char *arch_vma_name(struct vm_area_struct *vma)
> +const char *arch_vma_name(struct mm_area *vma)
>  {
>  	return is_gate_vma(vma) ? "[vectors]" : NULL;
>  }
> @@ -380,7 +380,7 @@ static struct page *signal_page;
>  extern struct page *get_signal_page(void);
>
>  static int sigpage_mremap(const struct vm_special_mapping *sm,
> -		struct vm_area_struct *new_vma)
> +		struct mm_area *new_vma)
>  {
>  	current->mm->context.sigpage = new_vma->vm_start;
>  	return 0;
> @@ -395,7 +395,7 @@ static const struct vm_special_mapping sigpage_mapping = {
>  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long npages;
>  	unsigned long addr;
>  	unsigned long hint;
> diff --git a/arch/arm/kernel/smp_tlb.c b/arch/arm/kernel/smp_tlb.c
> index d4908b3736d8..d827500c7538 100644
> --- a/arch/arm/kernel/smp_tlb.c
> +++ b/arch/arm/kernel/smp_tlb.c
> @@ -18,7 +18,7 @@
>   * TLB operations
>   */
>  struct tlb_args {
> -	struct vm_area_struct *ta_vma;
> +	struct mm_area *ta_vma;
>  	unsigned long ta_start;
>  	unsigned long ta_end;
>  };
> @@ -193,7 +193,7 @@ void flush_tlb_mm(struct mm_struct *mm)
>  	broadcast_tlb_mm_a15_erratum(mm);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
>  {
>  	if (tlb_ops_need_broadcast()) {
>  		struct tlb_args ta;
> @@ -217,7 +217,7 @@ void flush_tlb_kernel_page(unsigned long kaddr)
>  	broadcast_tlb_a15_erratum();
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma,
> +void flush_tlb_range(struct mm_area *vma,
>                       unsigned long start, unsigned long end)
>  {
>  	if (tlb_ops_need_broadcast()) {
> diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
> index 325448ffbba0..97b28ef9742a 100644
> --- a/arch/arm/kernel/vdso.c
> +++ b/arch/arm/kernel/vdso.c
> @@ -35,7 +35,7 @@ extern char vdso_start[], vdso_end[];
>  unsigned int vdso_total_pages __ro_after_init;
>
>  static int vdso_mremap(const struct vm_special_mapping *sm,
> -		struct vm_area_struct *new_vma)
> +		struct mm_area *new_vma)
>  {
>  	current->mm->context.vdso = new_vma->vm_start;
>
> @@ -210,7 +210,7 @@ static_assert(__VDSO_PAGES == VDSO_NR_PAGES);
>  /* assumes mmap_lock is write-locked */
>  void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long len;
>
>  	mm->context.vdso = 0;
> diff --git a/arch/arm/mach-rpc/ecard.c b/arch/arm/mach-rpc/ecard.c
> index 2cde4c83b7f9..08d17ee66891 100644
> --- a/arch/arm/mach-rpc/ecard.c
> +++ b/arch/arm/mach-rpc/ecard.c
> @@ -213,7 +213,7 @@ static DEFINE_MUTEX(ecard_mutex);
>   */
>  static void ecard_init_pgtables(struct mm_struct *mm)
>  {
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, VM_EXEC);
> +	struct mm_area vma = TLB_FLUSH_VMA(mm, VM_EXEC);
>
>  	/* We want to set up the page tables for the following mapping:
>  	 *  Virtual	Physical
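
TLB_FLUSH_VMA() itself should keep working unmodified for on-stack
areas like the one above, since (if memory serves) the asm-generic
definition is a bare designated initializer that never spells out the
struct name, roughly:

	#define TLB_FLUSH_VMA(mm, flags) \
		{ .vm_mm = (mm), .vm_flags = (flags), }

So only declarations that instantiate it, like the one in
ecard_init_pgtables(), pick up the new type name.
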
> diff --git a/arch/arm/mm/cache-v6.S b/arch/arm/mm/cache-v6.S
> index 9f415476e218..560bf185d275 100644
> --- a/arch/arm/mm/cache-v6.S
> +++ b/arch/arm/mm/cache-v6.S
> @@ -94,7 +94,7 @@ SYM_FUNC_END(v6_flush_user_cache_all)
>   *
>   *	- start - start address (may not be aligned)
>   *	- end   - end address (exclusive, may not be aligned)
> - *	- flags	- vm_area_struct flags describing address space
> + *	- flags	- mm_area flags describing address space
>   *
>   *	It is assumed that:
>   *	- we have a VIPT cache.
> diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
> index 201ca05436fa..c3d5c874c895 100644
> --- a/arch/arm/mm/cache-v7.S
> +++ b/arch/arm/mm/cache-v7.S
> @@ -238,7 +238,7 @@ SYM_FUNC_END(v7_flush_user_cache_all)
>   *
>   *	- start - start address (may not be aligned)
>   *	- end   - end address (exclusive, may not be aligned)
> - *	- flags	- vm_area_struct flags describing address space
> + *	- flags	- mm_area flags describing address space
>   *
>   *	It is assumed that:
>   *	- we have a VIPT cache.
> diff --git a/arch/arm/mm/cache-v7m.S b/arch/arm/mm/cache-v7m.S
> index 14d719eba729..611e0c7c4875 100644
> --- a/arch/arm/mm/cache-v7m.S
> +++ b/arch/arm/mm/cache-v7m.S
> @@ -263,7 +263,7 @@ SYM_FUNC_END(v7m_flush_user_cache_all)
>   *
>   *	- start - start address (may not be aligned)
>   *	- end   - end address (exclusive, may not be aligned)
> - *	- flags	- vm_area_struct flags describing address space
> + *	- flags	- mm_area flags describing address space
>   *
>   *	It is assumed that:
>   *	- we have a VIPT cache.
> diff --git a/arch/arm/mm/copypage-fa.c b/arch/arm/mm/copypage-fa.c
> index 7e28c26f5aa4..6620d7e4ef45 100644
> --- a/arch/arm/mm/copypage-fa.c
> +++ b/arch/arm/mm/copypage-fa.c
> @@ -36,7 +36,7 @@ static void fa_copy_user_page(void *kto, const void *kfrom)
>  }
>
>  void fa_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> diff --git a/arch/arm/mm/copypage-feroceon.c b/arch/arm/mm/copypage-feroceon.c
> index 5fc8ef1e665f..c2b763bb8b94 100644
> --- a/arch/arm/mm/copypage-feroceon.c
> +++ b/arch/arm/mm/copypage-feroceon.c
> @@ -64,7 +64,7 @@ static void feroceon_copy_user_page(void *kto, const void *kfrom)
>  }
>
>  void feroceon_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> diff --git a/arch/arm/mm/copypage-v4mc.c b/arch/arm/mm/copypage-v4mc.c
> index 7ddd82b9fe8b..c151e91373b7 100644
> --- a/arch/arm/mm/copypage-v4mc.c
> +++ b/arch/arm/mm/copypage-v4mc.c
> @@ -62,7 +62,7 @@ static void mc_copy_user_page(void *from, void *to)
>  }
>
>  void v4_mc_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	struct folio *src = page_folio(from);
>  	void *kto = kmap_atomic(to);
> diff --git a/arch/arm/mm/copypage-v4wb.c b/arch/arm/mm/copypage-v4wb.c
> index c3581b226459..04541e74d6a6 100644
> --- a/arch/arm/mm/copypage-v4wb.c
> +++ b/arch/arm/mm/copypage-v4wb.c
> @@ -45,7 +45,7 @@ static void v4wb_copy_user_page(void *kto, const void *kfrom)
>  }
>
>  void v4wb_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> diff --git a/arch/arm/mm/copypage-v4wt.c b/arch/arm/mm/copypage-v4wt.c
> index 1fb10733305a..68cafffaeba6 100644
> --- a/arch/arm/mm/copypage-v4wt.c
> +++ b/arch/arm/mm/copypage-v4wt.c
> @@ -41,7 +41,7 @@ static void v4wt_copy_user_page(void *kto, const void *kfrom)
>  }
>
>  void v4wt_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> diff --git a/arch/arm/mm/copypage-v6.c b/arch/arm/mm/copypage-v6.c
> index a1a71f36d850..dff1dd0f9e98 100644
> --- a/arch/arm/mm/copypage-v6.c
> +++ b/arch/arm/mm/copypage-v6.c
> @@ -28,7 +28,7 @@ static DEFINE_RAW_SPINLOCK(v6_lock);
>   * attack the kernel's existing mapping of these pages.
>   */
>  static void v6_copy_user_highpage_nonaliasing(struct page *to,
> -	struct page *from, unsigned long vaddr, struct vm_area_struct *vma)
> +	struct page *from, unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> @@ -67,7 +67,7 @@ static void discard_old_kernel_data(void *kto)
>   * Copy the page, taking account of the cache colour.
>   */
>  static void v6_copy_user_highpage_aliasing(struct page *to,
> -	struct page *from, unsigned long vaddr, struct vm_area_struct *vma)
> +	struct page *from, unsigned long vaddr, struct mm_area *vma)
>  {
>  	struct folio *src = page_folio(from);
>  	unsigned int offset = CACHE_COLOUR(vaddr);
> diff --git a/arch/arm/mm/copypage-xsc3.c b/arch/arm/mm/copypage-xsc3.c
> index c86e79677ff9..4f866b2aba21 100644
> --- a/arch/arm/mm/copypage-xsc3.c
> +++ b/arch/arm/mm/copypage-xsc3.c
> @@ -62,7 +62,7 @@ static void xsc3_mc_copy_user_page(void *kto, const void *kfrom)
>  }
>
>  void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> diff --git a/arch/arm/mm/copypage-xscale.c b/arch/arm/mm/copypage-xscale.c
> index f1e29d3e8193..dcc5b53e7d8a 100644
> --- a/arch/arm/mm/copypage-xscale.c
> +++ b/arch/arm/mm/copypage-xscale.c
> @@ -82,7 +82,7 @@ static void mc_copy_user_page(void *from, void *to)
>  }
>
>  void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	struct folio *src = page_folio(from);
>  	void *kto = kmap_atomic(to);
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 88c2d68a69c9..88ec2665d5d9 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -1112,7 +1112,7 @@ static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
>  	return NULL;
>  }
>
> -static int arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
> +static int arm_iommu_mmap_attrs(struct device *dev, struct mm_area *vma,
>  		    void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		    unsigned long attrs)
>  {
> diff --git a/arch/arm/mm/fault-armv.c b/arch/arm/mm/fault-armv.c
> index 39fd5df73317..4717aa3256bb 100644
> --- a/arch/arm/mm/fault-armv.c
> +++ b/arch/arm/mm/fault-armv.c
> @@ -33,7 +33,7 @@ static pteval_t shared_pte_mask = L_PTE_MT_BUFFERABLE;
>   * Therefore those configurations which might call adjust_pte (those
>   * without CONFIG_CPU_CACHE_VIPT) cannot support split page_table_lock.
>   */
> -static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
> +static int do_adjust_pte(struct mm_area *vma, unsigned long address,
>  	unsigned long pfn, pte_t *ptep)
>  {
>  	pte_t entry = *ptep;
> @@ -61,7 +61,7 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
>  	return ret;
>  }
>
> -static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
> +static int adjust_pte(struct mm_area *vma, unsigned long address,
>  		      unsigned long pfn, bool need_lock)
>  {
>  	spinlock_t *ptl;
> @@ -121,13 +121,13 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
>  }
>
>  static void
> -make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
> +make_coherent(struct address_space *mapping, struct mm_area *vma,
>  	      unsigned long addr, pte_t *ptep, unsigned long pfn)
>  {
>  	const unsigned long pmd_start_addr = ALIGN_DOWN(addr, PMD_SIZE);
>  	const unsigned long pmd_end_addr = pmd_start_addr + PMD_SIZE;
>  	struct mm_struct *mm = vma->vm_mm;
> -	struct vm_area_struct *mpnt;
> +	struct mm_area *mpnt;
>  	unsigned long offset;
>  	pgoff_t pgoff;
>  	int aliases = 0;
> @@ -184,7 +184,7 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
>   *
>   * Note that the pte lock will be held.
>   */
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep, unsigned int nr)
>  {
>  	unsigned long pfn = pte_pfn(*ptep);
> diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
> index ab01b51de559..b89935868510 100644
> --- a/arch/arm/mm/fault.c
> +++ b/arch/arm/mm/fault.c
> @@ -264,7 +264,7 @@ static int __kprobes
>  do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int sig, code;
>  	vm_fault_t fault;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
> diff --git a/arch/arm/mm/flush.c b/arch/arm/mm/flush.c
> index 0749cf8a6637..8b674a426eae 100644
> --- a/arch/arm/mm/flush.c
> +++ b/arch/arm/mm/flush.c
> @@ -76,7 +76,7 @@ void flush_cache_mm(struct mm_struct *mm)
>  	}
>  }
>
> -void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	if (cache_is_vivt()) {
>  		vivt_flush_cache_range(vma, start, end);
> @@ -95,7 +95,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
>  		__flush_icache_all();
>  }
>
> -void flush_cache_pages(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr)
> +void flush_cache_pages(struct mm_area *vma, unsigned long user_addr, unsigned long pfn, unsigned int nr)
>  {
>  	if (cache_is_vivt()) {
>  		vivt_flush_cache_pages(vma, user_addr, pfn, nr);
> @@ -156,7 +156,7 @@ void __flush_ptrace_access(struct page *page, unsigned long uaddr, void *kaddr,
>  }
>
>  static
> -void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
> +void flush_ptrace_access(struct mm_area *vma, struct page *page,
>  			 unsigned long uaddr, void *kaddr, unsigned long len)
>  {
>  	unsigned int flags = 0;
> @@ -182,7 +182,7 @@ void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
>   *
>   * Note that this code needs to run on the current CPU.
>   */
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		       unsigned long uaddr, void *dst, const void *src,
>  		       unsigned long len)
>  {
> @@ -238,7 +238,7 @@ void __flush_dcache_folio(struct address_space *mapping, struct folio *folio)
>  static void __flush_dcache_aliases(struct address_space *mapping, struct folio *folio)
>  {
>  	struct mm_struct *mm = current->active_mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	pgoff_t pgoff, pgoff_end;
>
>  	/*
> @@ -378,8 +378,8 @@ EXPORT_SYMBOL(flush_dcache_page);
>   *  memcpy() to/from page
>   *  if written to page, flush_dcache_page()
>   */
> -void __flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr);
> -void __flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
> +void __flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr);
> +void __flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr)
>  {
>  	unsigned long pfn;
>
> diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
> index 3dbb383c26d5..4077f5184814 100644
> --- a/arch/arm/mm/mmap.c
> +++ b/arch/arm/mm/mmap.c
> @@ -32,7 +32,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  		unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int do_align = 0;
>  	int aliasing = cache_is_vipt_aliasing();
>  	struct vm_unmapped_area_info info = {};
> @@ -82,7 +82,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  		        const unsigned long len, const unsigned long pgoff,
>  		        const unsigned long flags, vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long addr = addr0;
>  	int do_align = 0;
> diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
> index d638cc87807e..57b8172a4830 100644
> --- a/arch/arm/mm/nommu.c
> +++ b/arch/arm/mm/nommu.c
> @@ -189,7 +189,7 @@ void flush_dcache_page(struct page *page)
>  }
>  EXPORT_SYMBOL(flush_dcache_page);
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		       unsigned long uaddr, void *dst, const void *src,
>  		       unsigned long len)
>  {
> diff --git a/arch/arm/mm/tlb-v6.S b/arch/arm/mm/tlb-v6.S
> index 8256a67ac654..d4481f9f0757 100644
> --- a/arch/arm/mm/tlb-v6.S
> +++ b/arch/arm/mm/tlb-v6.S
> @@ -27,7 +27,7 @@
>   *
>   *	- start - start address (may not be aligned)
>   *	- end   - end address (exclusive, may not be aligned)
> - *	- vma   - vm_area_struct describing address range
> + *	- vma   - mm_area describing address range
>   *
>   *	It is assumed that:
>   *	- the "Invalidate single entry" instruction will invalidate
> diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S
> index f1aa0764a2cc..28490bba1cf0 100644
> --- a/arch/arm/mm/tlb-v7.S
> +++ b/arch/arm/mm/tlb-v7.S
> @@ -26,7 +26,7 @@
>   *
>   *	- start - start address (may not be aligned)
>   *	- end   - end address (exclusive, may not be aligned)
> - *	- vma   - vm_area_struct describing address range
> + *	- vma   - mm_area describing address range
>   *
>   *	It is assumed that:
>   *	- the "Invalidate single entry" instruction will invalidate
> diff --git a/arch/arm/mm/tlb.c b/arch/arm/mm/tlb.c
> index 42359793120b..57a2184da8ae 100644
> --- a/arch/arm/mm/tlb.c
> +++ b/arch/arm/mm/tlb.c
> @@ -6,7 +6,7 @@
>  #include <asm/tlbflush.h>
>
>  #ifdef CONFIG_CPU_TLB_V4WT
> -void v4_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +void v4_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  void v4_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  struct cpu_tlb_fns v4_tlb_fns __initconst = {
> @@ -17,7 +17,7 @@ struct cpu_tlb_fns v4_tlb_fns __initconst = {
>  #endif
>
>  #ifdef CONFIG_CPU_TLB_V4WB
> -void v4wb_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +void v4wb_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  void v4wb_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  struct cpu_tlb_fns v4wb_tlb_fns __initconst = {
> @@ -28,7 +28,7 @@ struct cpu_tlb_fns v4wb_tlb_fns __initconst = {
>  #endif
>
>  #if defined(CONFIG_CPU_TLB_V4WBI) || defined(CONFIG_CPU_TLB_FEROCEON)
> -void v4wbi_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +void v4wbi_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  void v4wbi_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  struct cpu_tlb_fns v4wbi_tlb_fns __initconst = {
> @@ -39,7 +39,7 @@ struct cpu_tlb_fns v4wbi_tlb_fns __initconst = {
>  #endif
>
>  #ifdef CONFIG_CPU_TLB_V6
> -void v6wbi_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +void v6wbi_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  void v6wbi_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  struct cpu_tlb_fns v6wbi_tlb_fns __initconst = {
> @@ -50,7 +50,7 @@ struct cpu_tlb_fns v6wbi_tlb_fns __initconst = {
>  #endif
>
>  #ifdef CONFIG_CPU_TLB_V7
> -void v7wbi_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +void v7wbi_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  void v7wbi_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  struct cpu_tlb_fns v7wbi_tlb_fns __initconst = {
> @@ -73,7 +73,7 @@ asm("	.pushsection	\".alt.smp.init\", \"a\"		\n" \
>  #endif
>
>  #ifdef CONFIG_CPU_TLB_FA
> -void fa_flush_user_tlb_range(unsigned long, unsigned long, struct vm_area_struct *);
> +void fa_flush_user_tlb_range(unsigned long, unsigned long, struct mm_area *);
>  void fa_flush_kern_tlb_range(unsigned long, unsigned long);
>
>  struct cpu_tlb_fns fa_tlb_fns __initconst = {
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index a395b6c0aae2..11029e2a5413 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -68,7 +68,7 @@ static __read_mostly phys_addr_t xen_grant_frames;
>  uint32_t xen_start_flags;
>  EXPORT_SYMBOL(xen_start_flags);
>
> -int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
> +int xen_unmap_domain_gfn_range(struct mm_area *vma,
>  			       int nr, struct page **pages)
>  {
>  	return xen_xlate_unmap_gfn_range(vma, nr, pages);
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 28ab96e808ef..aaf770ee6d2f 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -109,7 +109,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
>   * processes address space.  Really, we want to allow our "user
>   * space" model to handle this.
>   */
> -extern void copy_to_user_page(struct vm_area_struct *, struct page *,
> +extern void copy_to_user_page(struct mm_area *, struct page *,
>  	unsigned long, void *, const void *, unsigned long);
>  #define copy_to_user_page copy_to_user_page
>
> diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
> index 07fbf5bf85a7..0b84bfffd34e 100644
> --- a/arch/arm64/include/asm/hugetlb.h
> +++ b/arch/arm64/include/asm/hugetlb.h
> @@ -38,7 +38,7 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
>  extern void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  			    pte_t *ptep, pte_t pte, unsigned long sz);
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +extern int huge_ptep_set_access_flags(struct mm_area *vma,
>  				      unsigned long addr, pte_t *ptep,
>  				      pte_t pte, int dirty);
>  #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
> @@ -48,7 +48,7 @@ extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>  extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  				    unsigned long addr, pte_t *ptep);
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -extern pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +extern pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  				   unsigned long addr, pte_t *ptep);
>  #define __HAVE_ARCH_HUGE_PTE_CLEAR
>  extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
> @@ -59,18 +59,18 @@ extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep
>  void __init arm64_hugetlb_cma_reserve(void);
>
>  #define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
> -extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
> +extern pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
>  					 unsigned long addr, pte_t *ptep);
>
>  #define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
> -extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> +extern void huge_ptep_modify_prot_commit(struct mm_area *vma,
>  					 unsigned long addr, pte_t *ptep,
>  					 pte_t old_pte, pte_t new_pte);
>
>  #include <asm-generic/hugetlb.h>
>
>  #define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
> -static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_hugetlb_tlb_range(struct mm_area *vma,
>  					   unsigned long start,
>  					   unsigned long end)
>  {
> diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
> index 0dbe3b29049b..f0f70fb6934e 100644
> --- a/arch/arm64/include/asm/mmu_context.h
> +++ b/arch/arm64/include/asm/mmu_context.h
> @@ -307,7 +307,7 @@ static inline unsigned long mm_untag_mask(struct mm_struct *mm)
>   * Only enforce protection keys on the current process, because there is no
>   * user context to access POR_EL0 for another address space.
>   */
> -static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
> +static inline bool arch_vma_access_permitted(struct mm_area *vma,
>  		bool write, bool execute, bool foreign)
>  {
>  	if (!system_supports_poe())
> diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
> index 2312e6ee595f..d2258e036fae 100644
> --- a/arch/arm64/include/asm/page.h
> +++ b/arch/arm64/include/asm/page.h
> @@ -17,19 +17,19 @@
>  #include <asm/pgtable-types.h>
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>
>  extern void copy_page(void *to, const void *from);
>  extern void clear_page(void *to);
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma);
> +			unsigned long vaddr, struct mm_area *vma);
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>
>  void copy_highpage(struct page *to, struct page *from);
>  #define __HAVE_ARCH_COPY_HIGHPAGE
>
> -struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
> +struct folio *vma_alloc_zeroed_movable_folio(struct mm_area *vma,
>  						unsigned long vaddr);
>  #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index d3b538be1500..914caa15c4c8 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1207,13 +1207,13 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
>  	return pte_pmd(pte_modify(pmd_pte(pmd), newprot));
>  }
>
> -extern int __ptep_set_access_flags(struct vm_area_struct *vma,
> +extern int __ptep_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pte_t *ptep,
>  				 pte_t entry, int dirty);
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> -static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
> +static inline int pmdp_set_access_flags(struct mm_area *vma,
>  					unsigned long address, pmd_t *pmdp,
>  					pmd_t entry, int dirty)
>  {
> @@ -1252,7 +1252,7 @@ static inline bool pud_user_accessible_page(pud_t pud)
>  /*
>   * Atomic pte/pmd modifications.
>   */
> -static inline int __ptep_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int __ptep_test_and_clear_young(struct mm_area *vma,
>  					      unsigned long address,
>  					      pte_t *ptep)
>  {
> @@ -1269,7 +1269,7 @@ static inline int __ptep_test_and_clear_young(struct vm_area_struct *vma,
>  	return pte_young(pte);
>  }
>
> -static inline int __ptep_clear_flush_young(struct vm_area_struct *vma,
> +static inline int __ptep_clear_flush_young(struct mm_area *vma,
>  					 unsigned long address, pte_t *ptep)
>  {
>  	int young = __ptep_test_and_clear_young(vma, address, ptep);
> @@ -1291,7 +1291,7 @@ static inline int __ptep_clear_flush_young(struct vm_area_struct *vma,
>
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
>  #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> -static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int pmdp_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long address,
>  					    pmd_t *pmdp)
>  {
> @@ -1388,7 +1388,7 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
>  		__ptep_set_wrprotect(mm, address, ptep);
>  }
>
> -static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
> +static inline void __clear_young_dirty_pte(struct mm_area *vma,
>  					   unsigned long addr, pte_t *ptep,
>  					   pte_t pte, cydp_t flags)
>  {
> @@ -1407,7 +1407,7 @@ static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
>  	} while (pte_val(pte) != pte_val(old_pte));
>  }
>
> -static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
> +static inline void __clear_young_dirty_ptes(struct mm_area *vma,
>  					    unsigned long addr, pte_t *ptep,
>  					    unsigned int nr, cydp_t flags)
>  {
> @@ -1437,7 +1437,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  }
>
>  #define pmdp_establish pmdp_establish
> -static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_establish(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
>  	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
> @@ -1506,7 +1506,7 @@ extern void arch_swap_restore(swp_entry_t entry, struct folio *folio);
>   * On AArch64, the cache coherency is handled via the __set_ptes() function.
>   */
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +		struct mm_area *vma, unsigned long addr, pte_t *ptep,
>  		unsigned int nr)
>  {
>  	/*
> @@ -1552,11 +1552,11 @@ static inline bool pud_sect_supported(void)
>
>  #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
>  #define ptep_modify_prot_start ptep_modify_prot_start
> -extern pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
> +extern pte_t ptep_modify_prot_start(struct mm_area *vma,
>  				    unsigned long addr, pte_t *ptep);
>
>  #define ptep_modify_prot_commit ptep_modify_prot_commit
> -extern void ptep_modify_prot_commit(struct vm_area_struct *vma,
> +extern void ptep_modify_prot_commit(struct mm_area *vma,
>  				    unsigned long addr, pte_t *ptep,
>  				    pte_t old_pte, pte_t new_pte);
>
> @@ -1580,16 +1580,16 @@ extern void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
>  extern pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
>  				unsigned long addr, pte_t *ptep,
>  				unsigned int nr, int full);
> -extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
> +extern int contpte_ptep_test_and_clear_young(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep);
> -extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
> +extern int contpte_ptep_clear_flush_young(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep);
>  extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  				pte_t *ptep, unsigned int nr);
> -extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> +extern int contpte_ptep_set_access_flags(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep,
>  				pte_t entry, int dirty);
> -extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> +extern void contpte_clear_young_dirty_ptes(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep,
>  				unsigned int nr, cydp_t flags);
>
> @@ -1747,7 +1747,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int ptep_test_and_clear_young(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep)
>  {
>  	pte_t orig_pte = __ptep_get(ptep);
> @@ -1759,7 +1759,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> -static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> +static inline int ptep_clear_flush_young(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep)
>  {
>  	pte_t orig_pte = __ptep_get(ptep);
> @@ -1802,7 +1802,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> -static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int ptep_set_access_flags(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep,
>  				pte_t entry, int dirty)
>  {
> @@ -1817,7 +1817,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  }
>
>  #define clear_young_dirty_ptes clear_young_dirty_ptes
> -static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> +static inline void clear_young_dirty_ptes(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep,
>  					  unsigned int nr, cydp_t flags)
>  {
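
A pattern that repeats throughout this arm64 header: many of these
helpers take the area only to reach vma->vm_mm. pmdp_establish() above,
for instance, boils down to something like this sketch (with the
page_table_check call elided):

	static inline pmd_t pmdp_establish(struct mm_area *vma,
			unsigned long address, pmd_t *pmdp, pmd_t pmd)
	{
		/* vma is only consulted for vma->vm_mm */
		return __pmd(xchg_relaxed(&pmd_val(*pmdp), pmd_val(pmd)));
	}
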
> diff --git a/arch/arm64/include/asm/pkeys.h b/arch/arm64/include/asm/pkeys.h
> index 0ca5f83ce148..14b1d4bfc8c0 100644
> --- a/arch/arm64/include/asm/pkeys.h
> +++ b/arch/arm64/include/asm/pkeys.h
> @@ -20,12 +20,12 @@ static inline bool arch_pkeys_enabled(void)
>  	return system_supports_poe();
>  }
>
> -static inline int vma_pkey(struct vm_area_struct *vma)
> +static inline int vma_pkey(struct mm_area *vma)
>  {
>  	return (vma->vm_flags & ARCH_VM_PKEY_FLAGS) >> VM_PKEY_SHIFT;
>  }
>
> -static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
> +static inline int arch_override_mprotect_pkey(struct mm_area *vma,
>  		int prot, int pkey)
>  {
>  	if (pkey != -1)
> diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
> index 8d762607285c..31aac313a4b8 100644
> --- a/arch/arm64/include/asm/tlb.h
> +++ b/arch/arm64/include/asm/tlb.h
> @@ -52,7 +52,7 @@ static inline int tlb_get_level(struct mmu_gather *tlb)
>
>  static inline void tlb_flush(struct mmu_gather *tlb)
>  {
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
> +	struct mm_area vma = TLB_FLUSH_VMA(tlb->mm, 0);
>  	bool last_level = !tlb->freed_tables;
>  	unsigned long stride = tlb_get_unmap_size(tlb);
>  	int tlb_level = tlb_get_level(tlb);
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index eba1a98657f1..bfed61ba7b05 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -295,13 +295,13 @@ static inline void __flush_tlb_page_nosync(struct mm_struct *mm,
>  						(uaddr & PAGE_MASK) + PAGE_SIZE);
>  }
>
> -static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
> +static inline void flush_tlb_page_nosync(struct mm_area *vma,
>  					 unsigned long uaddr)
>  {
>  	return __flush_tlb_page_nosync(vma->vm_mm, uaddr);
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma,
> +static inline void flush_tlb_page(struct mm_area *vma,
>  				  unsigned long uaddr)
>  {
>  	flush_tlb_page_nosync(vma, uaddr);
> @@ -472,7 +472,7 @@ static inline void __flush_tlb_range_nosync(struct mm_struct *mm,
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
>  }
>
> -static inline void __flush_tlb_range(struct vm_area_struct *vma,
> +static inline void __flush_tlb_range(struct mm_area *vma,
>  				     unsigned long start, unsigned long end,
>  				     unsigned long stride, bool last_level,
>  				     int tlb_level)
> @@ -482,7 +482,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
>  	dsb(ish);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  	/*
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 2fbfd27ff5f2..cc561fb4203d 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -422,7 +422,7 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
>  		return -EIO;
>
>  	while (len) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		unsigned long tags, offset;
>  		void *maddr;
>  		struct page *page = get_user_page_vma_remote(mm, addr,
> diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
> index 78ddf6bdecad..5e3564b842a4 100644
> --- a/arch/arm64/kernel/vdso.c
> +++ b/arch/arm64/kernel/vdso.c
> @@ -58,7 +58,7 @@ static struct vdso_abi_info vdso_info[] __ro_after_init = {
>  };
>
>  static int vdso_mremap(const struct vm_special_mapping *sm,
> -		struct vm_area_struct *new_vma)
> +		struct mm_area *new_vma)
>  {
>  	current->mm->context.vdso = (void *)new_vma->vm_start;
>
> @@ -157,7 +157,7 @@ static struct page *aarch32_vectors_page __ro_after_init;
>  static struct page *aarch32_sig_page __ro_after_init;
>
>  static int aarch32_sigpage_mremap(const struct vm_special_mapping *sm,
> -				  struct vm_area_struct *new_vma)
> +				  struct mm_area *new_vma)
>  {
>  	current->mm->context.sigpage = (void *)new_vma->vm_start;
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 2feb6c6b63af..54ca059f6a02 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1017,7 +1017,7 @@ static void stage2_unmap_memslot(struct kvm *kvm,
>  	 *     +--------------------------------------------+
>  	 */
>  	do {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		hva_t vm_start, vm_end;
>
>  		vma = find_vma_intersection(current->mm, hva, reg_end);
> @@ -1393,7 +1393,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
>  	return PAGE_SIZE;
>  }
>
> -static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
> +static int get_vma_page_shift(struct mm_area *vma, unsigned long hva)
>  {
>  	unsigned long pa;
>
> @@ -1461,7 +1461,7 @@ static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  	}
>  }
>
> -static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> +static bool kvm_vma_mte_allowed(struct mm_area *vma)
>  {
>  	return vma->vm_flags & VM_MTE_ALLOWED;
>  }
> @@ -1478,7 +1478,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	unsigned long mmu_seq;
>  	phys_addr_t ipa = fault_ipa;
>  	struct kvm *kvm = vcpu->kvm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	short vma_shift;
>  	void *memcache;
>  	gfn_t gfn;
> @@ -2190,7 +2190,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  	 *     +--------------------------------------------+
>  	 */
>  	do {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		vma = find_vma_intersection(current->mm, hva, reg_end);
>  		if (!vma)
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index bcac4f55f9c1..8bec9a656558 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -49,7 +49,7 @@ static void contpte_try_unfold_partial(struct mm_struct *mm, unsigned long addr,
>  static void contpte_convert(struct mm_struct *mm, unsigned long addr,
>  			    pte_t *ptep, pte_t pte)
>  {
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
> +	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
>  	unsigned long start_addr;
>  	pte_t *start_ptep;
>  	int i;
> @@ -297,7 +297,7 @@ pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
>  }
>  EXPORT_SYMBOL_GPL(contpte_get_and_clear_full_ptes);
>
> -int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
> +int contpte_ptep_test_and_clear_young(struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep)
>  {
>  	/*
> @@ -322,7 +322,7 @@ int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>  }
>  EXPORT_SYMBOL_GPL(contpte_ptep_test_and_clear_young);
>
> -int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
> +int contpte_ptep_clear_flush_young(struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep)
>  {
>  	int young;
> @@ -361,7 +361,7 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
>
> -void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> +void contpte_clear_young_dirty_ptes(struct mm_area *vma,
>  				    unsigned long addr, pte_t *ptep,
>  				    unsigned int nr, cydp_t flags)
>  {
> @@ -390,7 +390,7 @@ void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
>  }
>  EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
>
> -int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
> +int contpte_ptep_set_access_flags(struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep,
>  					pte_t entry, int dirty)
>  {
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index a86c897017df..8bb8e592eab4 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -61,7 +61,7 @@ void copy_highpage(struct page *to, struct page *from)
>  EXPORT_SYMBOL(copy_highpage);
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma)
> +			unsigned long vaddr, struct mm_area *vma)
>  {
>  	copy_highpage(to, from);
>  	flush_dcache_page(to);
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index ec0a337891dd..340ac8c5bc25 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -210,7 +210,7 @@ static void show_pte(unsigned long addr)
>   *
>   * Returns whether or not the PTE actually changed.
>   */
> -int __ptep_set_access_flags(struct vm_area_struct *vma,
> +int __ptep_set_access_flags(struct mm_area *vma,
>  			    unsigned long address, pte_t *ptep,
>  			    pte_t entry, int dirty)
>  {
> @@ -487,7 +487,7 @@ static void do_bad_area(unsigned long far, unsigned long esr,
>  	}
>  }
>
> -static bool fault_from_pkey(unsigned long esr, struct vm_area_struct *vma,
> +static bool fault_from_pkey(unsigned long esr, struct mm_area *vma,
>  			unsigned int mm_flags)
>  {
>  	unsigned long iss2 = ESR_ELx_ISS2(esr);
> @@ -526,7 +526,7 @@ static bool is_write_abort(unsigned long esr)
>  	return (esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM);
>  }
>
> -static bool is_invalid_gcs_access(struct vm_area_struct *vma, u64 esr)
> +static bool is_invalid_gcs_access(struct mm_area *vma, u64 esr)
>  {
>  	if (!system_supports_gcs())
>  		return false;
> @@ -552,7 +552,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
>  	unsigned long vm_flags;
>  	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
>  	unsigned long addr = untagged_addr(far);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int si_code;
>  	int pkey = -1;
>
> @@ -1010,7 +1010,7 @@ NOKPROBE_SYMBOL(do_debug_exception);
>  /*
>   * Used during anonymous page fault handling.
>   */
> -struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
> +struct folio *vma_alloc_zeroed_movable_folio(struct mm_area *vma,
>  						unsigned long vaddr)
>  {
>  	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
> diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
> index 013eead9b695..4931bb9d9937 100644
> --- a/arch/arm64/mm/flush.c
> +++ b/arch/arm64/mm/flush.c
> @@ -29,7 +29,7 @@ void sync_icache_aliases(unsigned long start, unsigned long end)
>  	}
>  }
>
> -static void flush_ptrace_access(struct vm_area_struct *vma, unsigned long start,
> +static void flush_ptrace_access(struct mm_area *vma, unsigned long start,
>  				unsigned long end)
>  {
>  	if (vma->vm_flags & VM_EXEC)
> @@ -41,7 +41,7 @@ static void flush_ptrace_access(struct vm_area_struct *vma, unsigned long start,
>   * address space.  Really, we want to allow our "user space" model to handle
>   * this.
>   */
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		       unsigned long uaddr, void *dst, const void *src,
>  		       unsigned long len)
>  {
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index cfe8cb8ba1cc..55246c6e60d0 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -182,7 +182,7 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
>  				    unsigned long ncontig)
>  {
>  	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
> +	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
>
>  	flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
>  	return orig_pte;
> @@ -203,7 +203,7 @@ static void clear_flush(struct mm_struct *mm,
>  			     unsigned long pgsize,
>  			     unsigned long ncontig)
>  {
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
> +	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
>  	unsigned long i, saddr = addr;
>
>  	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
> @@ -244,7 +244,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
>  }
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgdp;
> @@ -427,7 +427,7 @@ static int __cont_access_flags_changed(pte_t *ptep, pte_t pte, int ncontig)
>  	return 0;
>  }
>
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  			       unsigned long addr, pte_t *ptep,
>  			       pte_t pte, int dirty)
>  {
> @@ -490,7 +490,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
>  }
>
> -pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  			    unsigned long addr, pte_t *ptep)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -534,7 +534,7 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
>  	return __hugetlb_valid_size(size);
>  }
>
> -pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +pte_t huge_ptep_modify_prot_start(struct mm_area *vma, unsigned long addr, pte_t *ptep)
>  {
>  	unsigned long psize = huge_page_size(hstate_vma(vma));
>
> @@ -550,7 +550,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr
>  	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
>  }
>
> -void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +void huge_ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr, pte_t *ptep,
>  				  pte_t old_pte, pte_t pte)
>  {
>  	unsigned long psize = huge_page_size(hstate_vma(vma));
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ea6695d53fb9..4945b810f03c 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1504,7 +1504,7 @@ static int __init prevent_bootmem_remove_init(void)
>  early_initcall(prevent_bootmem_remove_init);
>  #endif
>
> -pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr, pte_t *ptep)
>  {
>  	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
>  		/*
> @@ -1518,7 +1518,7 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte
>  	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
>  }
>
> -void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
> +void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr, pte_t *ptep,
>  			     pte_t old_pte, pte_t pte)
>  {
>  	set_pte_at(vma->vm_mm, addr, ptep, pte);
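
For anyone unfamiliar with the modify-prot pair renamed here: the
generic mm code drives these two hooks as a start/commit transaction,
roughly like this sketch (the modification in the middle is purely
illustrative):

	pte_t old_pte = ptep_modify_prot_start(vma, addr, ptep);
	pte_t new_pte = pte_mkdirty(old_pte);	/* example change */
	ptep_modify_prot_commit(vma, addr, ptep, old_pte, new_pte);
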
> diff --git a/arch/csky/abiv1/cacheflush.c b/arch/csky/abiv1/cacheflush.c
> index 171e8fb32285..9253db16358c 100644
> --- a/arch/csky/abiv1/cacheflush.c
> +++ b/arch/csky/abiv1/cacheflush.c
> @@ -41,7 +41,7 @@ void flush_dcache_page(struct page *page)
>  }
>  EXPORT_SYMBOL(flush_dcache_page);
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep, unsigned int nr)
>  {
>  	unsigned long pfn = pte_pfn(*ptep);
> @@ -65,7 +65,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
>  	}
>  }
>
> -void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_cache_range(struct mm_area *vma, unsigned long start,
>  	unsigned long end)
>  {
>  	dcache_wbinv_all();
> diff --git a/arch/csky/abiv1/inc/abi/cacheflush.h b/arch/csky/abiv1/inc/abi/cacheflush.h
> index d011a81575d2..be382265c4dc 100644
> --- a/arch/csky/abiv1/inc/abi/cacheflush.h
> +++ b/arch/csky/abiv1/inc/abi/cacheflush.h
> @@ -30,7 +30,7 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
>  }
>
>  #define ARCH_HAS_FLUSH_ANON_PAGE
> -static inline void flush_anon_page(struct vm_area_struct *vma,
> +static inline void flush_anon_page(struct mm_area *vma,
>  			 struct page *page, unsigned long vmaddr)
>  {
>  	if (PageAnon(page))
> @@ -41,7 +41,7 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
>   * if (current_mm != vma->mm) cache_wbinv_range(start, end) will be broken.
>   * Use cache_wbinv_all() here and need to be improved in future.
>   */
> -extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> +extern void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
>  #define flush_cache_vmap(start, end)		cache_wbinv_all()
>  #define flush_cache_vmap_early(start, end)	do { } while (0)
>  #define flush_cache_vunmap(start, end)		cache_wbinv_all()
> diff --git a/arch/csky/abiv1/mmap.c b/arch/csky/abiv1/mmap.c
> index 1047865e82a9..587ea707e56a 100644
> --- a/arch/csky/abiv1/mmap.c
> +++ b/arch/csky/abiv1/mmap.c
> @@ -27,7 +27,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  		unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int do_align = 0;
>  	struct vm_unmapped_area_info info = {
>  		.length = len,
> diff --git a/arch/csky/abiv2/cacheflush.c b/arch/csky/abiv2/cacheflush.c
> index 876028b1083f..9001fc55ca76 100644
> --- a/arch/csky/abiv2/cacheflush.c
> +++ b/arch/csky/abiv2/cacheflush.c
> @@ -7,7 +7,7 @@
>  #include <asm/cache.h>
>  #include <asm/tlbflush.h>
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *pte, unsigned int nr)
>  {
>  	unsigned long pfn = pte_pfn(*pte);
> diff --git a/arch/csky/include/asm/page.h b/arch/csky/include/asm/page.h
> index 4911d0892b71..bd643891e28a 100644
> --- a/arch/csky/include/asm/page.h
> +++ b/arch/csky/include/asm/page.h
> @@ -43,7 +43,7 @@ struct page;
>
>  #include <abi/page.h>
>
> -struct vm_area_struct;
> +struct mm_area;
>
>  typedef struct { unsigned long pte_low; } pte_t;
>  #define pte_val(x)	((x).pte_low)
> diff --git a/arch/csky/include/asm/pgtable.h b/arch/csky/include/asm/pgtable.h
> index a397e1718ab6..17de85d6cae5 100644
> --- a/arch/csky/include/asm/pgtable.h
> +++ b/arch/csky/include/asm/pgtable.h
> @@ -263,7 +263,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>  extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
>  extern void paging_init(void);
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *pte, unsigned int nr);
>  #define update_mmu_cache(vma, addr, ptep) \
>  	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
> diff --git a/arch/csky/include/asm/tlbflush.h b/arch/csky/include/asm/tlbflush.h
> index 407160b4fde7..1bb6e57ee7a5 100644
> --- a/arch/csky/include/asm/tlbflush.h
> +++ b/arch/csky/include/asm/tlbflush.h
> @@ -14,8 +14,8 @@
>   */
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long page);
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
> diff --git a/arch/csky/kernel/vdso.c b/arch/csky/kernel/vdso.c
> index c54d019d66bc..cb26b07cc994 100644
> --- a/arch/csky/kernel/vdso.c
> +++ b/arch/csky/kernel/vdso.c
> @@ -40,7 +40,7 @@ arch_initcall(vdso_init);
>  int arch_setup_additional_pages(struct linux_binprm *bprm,
>  	int uses_interp)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long vdso_base, vdso_len;
>  	int ret;
> diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
> index 5226bc08c336..f64991717a1a 100644
> --- a/arch/csky/mm/fault.c
> +++ b/arch/csky/mm/fault.c
> @@ -168,7 +168,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
>  	flush_tlb_one(addr);
>  }
>
> -static inline bool access_error(struct pt_regs *regs, struct vm_area_struct *vma)
> +static inline bool access_error(struct pt_regs *regs, struct mm_area *vma)
>  {
>  	if (is_write(regs)) {
>  		if (!(vma->vm_flags & VM_WRITE))
> @@ -187,7 +187,7 @@ static inline bool access_error(struct pt_regs *regs, struct vm_area_struct *vma
>  asmlinkage void do_page_fault(struct pt_regs *regs)
>  {
>  	struct task_struct *tsk;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm;
>  	unsigned long addr = read_mmu_entryhi() & PAGE_MASK;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
> diff --git a/arch/csky/mm/tlb.c b/arch/csky/mm/tlb.c
> index 9234c5e5ceaf..ad8e9be1a714 100644
> --- a/arch/csky/mm/tlb.c
> +++ b/arch/csky/mm/tlb.c
> @@ -49,7 +49,7 @@ do { \
>  } while (0)
>  #endif
>
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			unsigned long end)
>  {
>  	unsigned long newpid = cpu_asid(vma->vm_mm);
> @@ -132,7 +132,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  #endif
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	int newpid = cpu_asid(vma->vm_mm);
>
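
Worth noting in the csky TLB hunks: both flush helpers dereference the
renamed struct only through vma->vm_mm, to derive the ASID, which is why
these hunks are purely type changes. The rough shape of an ASID-scoped
flush, where invalidate_asid_page() is a made-up stand-in for the
hardware invalidate and everything else is visible above:

	unsigned long newpid = cpu_asid(vma->vm_mm);
	unsigned long addr;

	for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE)
		invalidate_asid_page(newpid, addr);
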
> diff --git a/arch/hexagon/include/asm/cacheflush.h b/arch/hexagon/include/asm/cacheflush.h
> index bfff514a81c8..29c492c45995 100644
> --- a/arch/hexagon/include/asm/cacheflush.h
> +++ b/arch/hexagon/include/asm/cacheflush.h
> @@ -59,7 +59,7 @@ extern void flush_cache_all_hexagon(void);
>   *
>   */
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	/*  generic_ptrace_pokedata doesn't wind up here, does it?  */
> @@ -68,7 +68,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>  #define update_mmu_cache(vma, addr, ptep) \
>  	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		       unsigned long vaddr, void *dst, void *src, int len);
>  #define copy_to_user_page copy_to_user_page
>
> diff --git a/arch/hexagon/include/asm/tlbflush.h b/arch/hexagon/include/asm/tlbflush.h
> index a7c9ab398cab..e79e62a0e132 100644
> --- a/arch/hexagon/include/asm/tlbflush.h
> +++ b/arch/hexagon/include/asm/tlbflush.h
> @@ -23,8 +23,8 @@
>   */
>  extern void tlb_flush_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
> -extern void flush_tlb_range(struct vm_area_struct *vma,
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long addr);
> +extern void flush_tlb_range(struct mm_area *vma,
>  				unsigned long start, unsigned long end);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  extern void flush_tlb_one(unsigned long);
> diff --git a/arch/hexagon/kernel/vdso.c b/arch/hexagon/kernel/vdso.c
> index 8119084dc519..c4728b6e7b05 100644
> --- a/arch/hexagon/kernel/vdso.c
> +++ b/arch/hexagon/kernel/vdso.c
> @@ -51,7 +51,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  {
>  	int ret;
>  	unsigned long vdso_base;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	static struct vm_special_mapping vdso_mapping = {
>  		.name = "[vdso]",
> @@ -87,7 +87,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  	return ret;
>  }
>
> -const char *arch_vma_name(struct vm_area_struct *vma)
> +const char *arch_vma_name(struct mm_area *vma)
>  {
>  	if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
>  		return "[vdso]";
> diff --git a/arch/hexagon/mm/cache.c b/arch/hexagon/mm/cache.c
> index 7e46f40c8b54..c16d16954a28 100644
> --- a/arch/hexagon/mm/cache.c
> +++ b/arch/hexagon/mm/cache.c
> @@ -115,7 +115,7 @@ void flush_cache_all_hexagon(void)
>  	mb();
>  }
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		       unsigned long vaddr, void *dst, void *src, int len)
>  {
>  	memcpy(dst, src, len);
> diff --git a/arch/hexagon/mm/vm_fault.c b/arch/hexagon/mm/vm_fault.c
> index 3771fb453898..5eef0342fcaa 100644
> --- a/arch/hexagon/mm/vm_fault.c
> +++ b/arch/hexagon/mm/vm_fault.c
> @@ -36,7 +36,7 @@
>   */
>  static void do_page_fault(unsigned long address, long cause, struct pt_regs *regs)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	int si_signo;
>  	int si_code = SEGV_MAPERR;
> diff --git a/arch/hexagon/mm/vm_tlb.c b/arch/hexagon/mm/vm_tlb.c
> index 8b6405e2234b..fee2184306a4 100644
> --- a/arch/hexagon/mm/vm_tlb.c
> +++ b/arch/hexagon/mm/vm_tlb.c
> @@ -23,7 +23,7 @@
>   * processors must be induced to flush the copies in their local TLBs,
>   * but Hexagon thread-based virtual processors share the same MMU.
>   */
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -64,7 +64,7 @@ void flush_tlb_mm(struct mm_struct *mm)
>  /*
>   * Flush TLB state associated with a page of a vma.
>   */
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long vaddr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long vaddr)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
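
The hexagon comment above is the interesting part of this file: because
all hardware threads share one MMU, the "other processors must be
induced to flush" case never arises, so there is no IPI plumbing here at
all. Purely for contrast with the IPI-based arches later in this patch
(hypothetical helper name, not hexagon's real body):

	void flush_tlb_page(struct mm_area *vma, unsigned long vaddr)
	{
		/* one shared MMU: a local invalidate is globally visible,
		 * so no cpumask walk and no IPI are needed */
		local_invalidate_page(vma->vm_mm, vaddr);
	}
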
> diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
> index 4dc4b3e04225..6b92e8c42e37 100644
> --- a/arch/loongarch/include/asm/hugetlb.h
> +++ b/arch/loongarch/include/asm/hugetlb.h
> @@ -48,7 +48,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  	pte_t pte;
> @@ -67,7 +67,7 @@ static inline int huge_pte_none(pte_t pte)
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int huge_ptep_set_access_flags(struct mm_area *vma,
>  					     unsigned long addr,
>  					     pte_t *ptep, pte_t pte,
>  					     int dirty)
> diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm/page.h
> index 7368f12b7cb1..d58207b68c4b 100644
> --- a/arch/loongarch/include/asm/page.h
> +++ b/arch/loongarch/include/asm/page.h
> @@ -36,9 +36,9 @@ extern void copy_page(void *to, void *from);
>  extern unsigned long shm_align_mask;
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>  void copy_user_highpage(struct page *to, struct page *from,
> -	      unsigned long vaddr, struct vm_area_struct *vma);
> +	      unsigned long vaddr, struct mm_area *vma);
>
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>
> diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
> index da346733a1da..8f8764731345 100644
> --- a/arch/loongarch/include/asm/pgtable.h
> +++ b/arch/loongarch/include/asm/pgtable.h
> @@ -63,7 +63,7 @@
>  #include <asm/sparsemem.h>
>
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>
>  /*
>   * ZERO_PAGE is a global shared page that is always zero; used
> @@ -438,11 +438,11 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>  		     (pgprot_val(newprot) & ~_PAGE_CHG_MASK));
>  }
>
> -extern void __update_tlb(struct vm_area_struct *vma,
> +extern void __update_tlb(struct mm_area *vma,
>  			unsigned long address, pte_t *ptep);
>
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	for (;;) {
> @@ -459,7 +459,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>  #define update_mmu_tlb_range(vma, addr, ptep, nr) \
>  	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
>
> -static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pmd(struct mm_area *vma,
>  			unsigned long address, pmd_t *pmdp)
>  {
>  	__update_tlb(vma, address, (pte_t *)pmdp);
> diff --git a/arch/loongarch/include/asm/tlb.h b/arch/loongarch/include/asm/tlb.h
> index e071f5e9e858..38a860530433 100644
> --- a/arch/loongarch/include/asm/tlb.h
> +++ b/arch/loongarch/include/asm/tlb.h
> @@ -139,7 +139,7 @@ static void tlb_flush(struct mmu_gather *tlb);
>
>  static inline void tlb_flush(struct mmu_gather *tlb)
>  {
> -	struct vm_area_struct vma;
> +	struct mm_area vma;
>
>  	vma.vm_mm = tlb->mm;
>  	vm_flags_init(&vma, 0);
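
This tlb.h hunk is a spot where the struct is used by value rather than
through a pointer, so unlike the forward-declaration hunks a complete
(renamed) definition must be in scope here. Illustrative contrast:

	struct mm_area *p;	/* fine after just "struct mm_area;" */
	struct mm_area v;	/* needs the full definition, as the
				 * on-stack vma in tlb_flush() above does */
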
> diff --git a/arch/loongarch/include/asm/tlbflush.h b/arch/loongarch/include/asm/tlbflush.h
> index a0785e590681..3cab349279d8 100644
> --- a/arch/loongarch/include/asm/tlbflush.h
> +++ b/arch/loongarch/include/asm/tlbflush.h
> @@ -20,18 +20,18 @@ extern void local_flush_tlb_all(void);
>  extern void local_flush_tlb_user(void);
>  extern void local_flush_tlb_kernel(void);
>  extern void local_flush_tlb_mm(struct mm_struct *mm);
> -extern void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> +extern void local_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
>  extern void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
> -extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +extern void local_flush_tlb_page(struct mm_area *vma, unsigned long page);
>  extern void local_flush_tlb_one(unsigned long vaddr);
>
>  #ifdef CONFIG_SMP
>
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long, unsigned long);
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long, unsigned long);
>  extern void flush_tlb_kernel_range(unsigned long, unsigned long);
> -extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
> +extern void flush_tlb_page(struct mm_area *, unsigned long);
>  extern void flush_tlb_one(unsigned long vaddr);
>
>  #else /* CONFIG_SMP */
> diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
> index 4b24589c0b56..f3cf1633dcc4 100644
> --- a/arch/loongarch/kernel/smp.c
> +++ b/arch/loongarch/kernel/smp.c
> @@ -703,7 +703,7 @@ void flush_tlb_mm(struct mm_struct *mm)
>  }
>
>  struct flush_tlb_data {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr1;
>  	unsigned long addr2;
>  };
> @@ -715,7 +715,7 @@ static void flush_tlb_range_ipi(void *info)
>  	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> @@ -764,7 +764,7 @@ static void flush_tlb_page_ipi(void *info)
>  	local_flush_tlb_page(fd->vma, fd->addr1);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	preempt_disable();
>  	if ((atomic_read(&vma->vm_mm->mm_users) != 1) || (current->mm != vma->vm_mm)) {
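
The truncated condition above is the standard local-vs-broadcast test:
flush only locally when this mm belongs exclusively to the current task;
if anyone else shares the mm, or we are flushing someone else's mm,
another CPU may hold stale entries. One plausible shape of the broadcast
arm, reusing the flush_tlb_data/flush_tlb_page_ipi pair visible earlier
in this hunk (a sketch, not the file's actual body):

	struct flush_tlb_data fd = { .vma = vma, .addr1 = page };

	on_each_cpu_mask(mm_cpumask(vma->vm_mm), flush_tlb_page_ipi, &fd, 1);
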
> diff --git a/arch/loongarch/kernel/vdso.c b/arch/loongarch/kernel/vdso.c
> index 10cf1608c7b3..a33039241859 100644
> --- a/arch/loongarch/kernel/vdso.c
> +++ b/arch/loongarch/kernel/vdso.c
> @@ -25,7 +25,7 @@
>
>  extern char vdso_start[], vdso_end[];
>
> -static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
> +static int vdso_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
>  {
>  	current->mm->context.vdso = (void *)(new_vma->vm_start);
>
> @@ -79,7 +79,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  	int ret;
>  	unsigned long size, data_addr, vdso_addr;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct loongarch_vdso_info *info = current->thread.vdso;
>
>  	if (mmap_write_lock_killable(mm))
> diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
> index deefd9617d00..b61c282fe87b 100644
> --- a/arch/loongarch/mm/fault.c
> +++ b/arch/loongarch/mm/fault.c
> @@ -179,7 +179,7 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
>  	struct task_struct *tsk = current;
>  	struct mm_struct *mm = tsk->mm;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	vm_fault_t fault;
>
>  	if (kprobe_page_fault(regs, current->thread.trap_nr))
> diff --git a/arch/loongarch/mm/hugetlbpage.c b/arch/loongarch/mm/hugetlbpage.c
> index e4068906143b..44d9969da492 100644
> --- a/arch/loongarch/mm/hugetlbpage.c
> +++ b/arch/loongarch/mm/hugetlbpage.c
> @@ -13,7 +13,7 @@
>  #include <asm/tlb.h>
>  #include <asm/tlbflush.h>
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgd;
> diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
> index fdb7f73ad160..f238502ebed5 100644
> --- a/arch/loongarch/mm/init.c
> +++ b/arch/loongarch/mm/init.c
> @@ -40,7 +40,7 @@ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_
>  EXPORT_SYMBOL(empty_zero_page);
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *vfrom, *vto;
>
> diff --git a/arch/loongarch/mm/mmap.c b/arch/loongarch/mm/mmap.c
> index 1df9e99582cc..438f85199a7b 100644
> --- a/arch/loongarch/mm/mmap.c
> +++ b/arch/loongarch/mm/mmap.c
> @@ -23,7 +23,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
>  	unsigned long flags, enum mmap_allocation_direction dir)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr = addr0;
>  	int do_color_align;
>  	struct vm_unmapped_area_info info = {};
> diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
> index 3b427b319db2..ec386b53110b 100644
> --- a/arch/loongarch/mm/tlb.c
> +++ b/arch/loongarch/mm/tlb.c
> @@ -54,7 +54,7 @@ void local_flush_tlb_mm(struct mm_struct *mm)
>  	preempt_enable();
>  }
>
> -void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  	unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -110,7 +110,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  	local_irq_restore(flags);
>  }
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	int cpu = smp_processor_id();
>
> @@ -135,7 +135,7 @@ void local_flush_tlb_one(unsigned long page)
>  	invtlb_addr(INVTLB_ADDR_GTRUE_OR_ASID, 0, page);
>  }
>
> -static void __update_hugetlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
> +static void __update_hugetlb(struct mm_area *vma, unsigned long address, pte_t *ptep)
>  {
>  #ifdef CONFIG_HUGETLB_PAGE
>  	int idx;
> @@ -163,7 +163,7 @@ static void __update_hugetlb(struct vm_area_struct *vma, unsigned long address,
>  #endif
>  }
>
> -void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
> +void __update_tlb(struct mm_area *vma, unsigned long address, pte_t *ptep)
>  {
>  	int idx;
>  	unsigned long flags;
> diff --git a/arch/m68k/include/asm/cacheflush_mm.h b/arch/m68k/include/asm/cacheflush_mm.h
> index 9a71b0148461..edf5f643578d 100644
> --- a/arch/m68k/include/asm/cacheflush_mm.h
> +++ b/arch/m68k/include/asm/cacheflush_mm.h
> @@ -204,7 +204,7 @@ static inline void flush_cache_mm(struct mm_struct *mm)
>
>  /* flush_cache_range/flush_cache_page must be macros to avoid
>     a dependency on linux/mm.h, which includes this file... */
> -static inline void flush_cache_range(struct vm_area_struct *vma,
> +static inline void flush_cache_range(struct mm_area *vma,
>  				     unsigned long start,
>  				     unsigned long end)
>  {
> @@ -212,7 +212,7 @@ static inline void flush_cache_range(struct vm_area_struct *vma,
>  	        __flush_cache_030();
>  }
>
> -static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
> +static inline void flush_cache_page(struct mm_area *vma, unsigned long vmaddr, unsigned long pfn)
>  {
>  	if (vma->vm_mm == current->mm)
>  	        __flush_cache_030();
> @@ -263,13 +263,13 @@ static inline void __flush_pages_to_ram(void *vaddr, unsigned int nr)
>  #define flush_icache_pages(vma, page, nr)	\
>  	__flush_pages_to_ram(page_address(page), nr)
>
> -extern void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
> +extern void flush_icache_user_page(struct mm_area *vma, struct page *page,
>  				    unsigned long addr, int len);
>  extern void flush_icache_range(unsigned long address, unsigned long endaddr);
>  extern void flush_icache_user_range(unsigned long address,
>  		unsigned long endaddr);
>
> -static inline void copy_to_user_page(struct vm_area_struct *vma,
> +static inline void copy_to_user_page(struct mm_area *vma,
>  				     struct page *page, unsigned long vaddr,
>  				     void *dst, void *src, int len)
>  {
> @@ -277,7 +277,7 @@ static inline void copy_to_user_page(struct vm_area_struct *vma,
>  	memcpy(dst, src, len);
>  	flush_icache_user_page(vma, page, vaddr, len);
>  }
> -static inline void copy_from_user_page(struct vm_area_struct *vma,
> +static inline void copy_from_user_page(struct mm_area *vma,
>  				       struct page *page, unsigned long vaddr,
>  				       void *dst, void *src, int len)
>  {
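
copy_to_user_page() is the hook used when the kernel writes into another
process's address space (the ptrace/access_remote_vm path), and the m68k
body above shows the contract: write the bytes, then force icache
coherency so the target can execute what was written. Caller-side shape,
with hypothetical variable names: maddr is a kernel mapping of `page`
and vaddr the user address being patched:

	copy_to_user_page(vma, page, vaddr, maddr + offset, buf, len);
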
> diff --git a/arch/m68k/include/asm/pgtable_mm.h b/arch/m68k/include/asm/pgtable_mm.h
> index dbdf1c2b2f66..fadc4c0e77cc 100644
> --- a/arch/m68k/include/asm/pgtable_mm.h
> +++ b/arch/m68k/include/asm/pgtable_mm.h
> @@ -137,7 +137,7 @@ extern void kernel_set_cachemode(void *addr, unsigned long size, int cmode);
>   * they are updated on demand.
>   */
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  }
> diff --git a/arch/m68k/include/asm/tlbflush.h b/arch/m68k/include/asm/tlbflush.h
> index 6d42e2906887..925c19068569 100644
> --- a/arch/m68k/include/asm/tlbflush.h
> +++ b/arch/m68k/include/asm/tlbflush.h
> @@ -81,13 +81,13 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  		__flush_tlb();
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +static inline void flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	if (vma->vm_mm == current->active_mm)
>  		__flush_tlb_one(addr);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  	if (vma->vm_mm == current->active_mm)
> @@ -161,7 +161,7 @@ static inline void flush_tlb_mm (struct mm_struct *mm)
>
>  /* Flush a single TLB page. In this case, we're limited to flushing a
>     single PMEG */
> -static inline void flush_tlb_page (struct vm_area_struct *vma,
> +static inline void flush_tlb_page (struct mm_area *vma,
>  				   unsigned long addr)
>  {
>  	unsigned char oldctx;
> @@ -182,7 +182,7 @@ static inline void flush_tlb_page (struct vm_area_struct *vma,
>  }
>  /* Flush a range of pages from TLB. */
>
> -static inline void flush_tlb_range (struct vm_area_struct *vma,
> +static inline void flush_tlb_range (struct mm_area *vma,
>  		      unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -252,12 +252,12 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  	BUG();
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +static inline void flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	BUG();
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  	BUG();
> diff --git a/arch/m68k/kernel/sys_m68k.c b/arch/m68k/kernel/sys_m68k.c
> index 1af5e6082467..cc534ec40930 100644
> --- a/arch/m68k/kernel/sys_m68k.c
> +++ b/arch/m68k/kernel/sys_m68k.c
> @@ -391,7 +391,7 @@ sys_cacheflush (unsigned long addr, int scope, int cache, unsigned long len)
>
>  		mmap_read_lock(current->mm);
>  	} else {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		/* Check for overflow.  */
>  		if (addr + len < addr)
> diff --git a/arch/m68k/mm/cache.c b/arch/m68k/mm/cache.c
> index dde978e66f14..2858f1113768 100644
> --- a/arch/m68k/mm/cache.c
> +++ b/arch/m68k/mm/cache.c
> @@ -96,7 +96,7 @@ void flush_icache_range(unsigned long address, unsigned long endaddr)
>  }
>  EXPORT_SYMBOL(flush_icache_range);
>
> -void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_user_page(struct mm_area *vma, struct page *page,
>  			     unsigned long addr, int len)
>  {
>  	if (CPU_IS_COLDFIRE) {
> diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
> index fa3c5f38d989..af2e500427fd 100644
> --- a/arch/m68k/mm/fault.c
> +++ b/arch/m68k/mm/fault.c
> @@ -71,7 +71,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>  			      unsigned long error_code)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct * vma;
> +	struct mm_area * vma;
>  	vm_fault_t fault;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
>
> diff --git a/arch/microblaze/include/asm/cacheflush.h b/arch/microblaze/include/asm/cacheflush.h
> index ffa2cf3893e4..c509ae39fec5 100644
> --- a/arch/microblaze/include/asm/cacheflush.h
> +++ b/arch/microblaze/include/asm/cacheflush.h
> @@ -85,7 +85,7 @@ static inline void flush_dcache_folio(struct folio *folio)
>  #define flush_cache_page(vma, vmaddr, pfn) \
>  	flush_dcache_range(pfn << PAGE_SHIFT, (pfn << PAGE_SHIFT) + PAGE_SIZE);
>
> -static inline void copy_to_user_page(struct vm_area_struct *vma,
> +static inline void copy_to_user_page(struct mm_area *vma,
>  				     struct page *page, unsigned long vaddr,
>  				     void *dst, void *src, int len)
>  {
> diff --git a/arch/microblaze/include/asm/pgtable.h b/arch/microblaze/include/asm/pgtable.h
> index e4ea2ec3642f..659f30da0029 100644
> --- a/arch/microblaze/include/asm/pgtable.h
> +++ b/arch/microblaze/include/asm/pgtable.h
> @@ -336,8 +336,8 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
>  }
>
>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> -struct vm_area_struct;
> -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +struct mm_area;
> +static inline int ptep_test_and_clear_young(struct mm_area *vma,
>  		unsigned long address, pte_t *ptep)
>  {
>  	return (pte_update(ptep, _PAGE_ACCESSED, 0) & _PAGE_ACCESSED) != 0;
> diff --git a/arch/microblaze/include/asm/tlbflush.h b/arch/microblaze/include/asm/tlbflush.h
> index a31ae9d44083..88e958108295 100644
> --- a/arch/microblaze/include/asm/tlbflush.h
> +++ b/arch/microblaze/include/asm/tlbflush.h
> @@ -24,10 +24,10 @@ static inline void local_flush_tlb_all(void)
>  	{ __tlbia(); }
>  static inline void local_flush_tlb_mm(struct mm_struct *mm)
>  	{ __tlbia(); }
> -static inline void local_flush_tlb_page(struct vm_area_struct *vma,
> +static inline void local_flush_tlb_page(struct mm_area *vma,
>  				unsigned long vmaddr)
>  	{ __tlbie(vmaddr); }
> -static inline void local_flush_tlb_range(struct vm_area_struct *vma,
> +static inline void local_flush_tlb_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end)
>  	{ __tlbia(); }
>
> diff --git a/arch/microblaze/mm/fault.c b/arch/microblaze/mm/fault.c
> index d3c3c33b73a6..3a0d2463eb4a 100644
> --- a/arch/microblaze/mm/fault.c
> +++ b/arch/microblaze/mm/fault.c
> @@ -86,7 +86,7 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
>  void do_page_fault(struct pt_regs *regs, unsigned long address,
>  		   unsigned long error_code)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	int code = SEGV_MAPERR;
>  	int is_write = error_code & ESR_S;
> diff --git a/arch/mips/alchemy/common/setup.c b/arch/mips/alchemy/common/setup.c
> index a7a6d31a7a41..b10a34b4a2ce 100644
> --- a/arch/mips/alchemy/common/setup.c
> +++ b/arch/mips/alchemy/common/setup.c
> @@ -94,7 +94,7 @@ phys_addr_t fixup_bigphys_addr(phys_addr_t phys_addr, phys_addr_t size)
>  	return phys_addr;
>  }
>
> -int io_remap_pfn_range(struct vm_area_struct *vma, unsigned long vaddr,
> +int io_remap_pfn_range(struct mm_area *vma, unsigned long vaddr,
>  		unsigned long pfn, unsigned long size, pgprot_t prot)
>  {
>  	phys_addr_t phys_addr = fixup_bigphys_addr(pfn << PAGE_SHIFT, size);
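
The alchemy override above (paired with the asm/pgtable.h hunk further
down, under CONFIG_MIPS_FIXUP_BIGPHYS_ADDR) is also a reminder of how far
this rename ripples: struct file_operations passes the area pointer into
every driver's .mmap, so the new type name shows up well outside mm/.
A hypothetical driver mmap handler, for illustration:

	static int foo_mmap(struct file *file, struct mm_area *vma)
	{
		unsigned long pfn = 0;	/* device-specific */

		return io_remap_pfn_range(vma, vma->vm_start, pfn,
					  vma->vm_end - vma->vm_start,
					  vma->vm_page_prot);
	}
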
> diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
> index 1f14132b3fc9..6a10565c2726 100644
> --- a/arch/mips/include/asm/cacheflush.h
> +++ b/arch/mips/include/asm/cacheflush.h
> @@ -47,9 +47,9 @@ extern void (*flush_cache_all)(void);
>  extern void (*__flush_cache_all)(void);
>  extern void (*flush_cache_mm)(struct mm_struct *mm);
>  #define flush_cache_dup_mm(mm)	do { (void) (mm); } while (0)
> -extern void (*flush_cache_range)(struct vm_area_struct *vma,
> +extern void (*flush_cache_range)(struct mm_area *vma,
>  	unsigned long start, unsigned long end);
> -extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
> +extern void (*flush_cache_page)(struct mm_area *vma, unsigned long page, unsigned long pfn);
>  extern void __flush_dcache_pages(struct page *page, unsigned int nr);
>
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> @@ -75,7 +75,7 @@ static inline void flush_dcache_page(struct page *page)
>
>  #define ARCH_HAS_FLUSH_ANON_PAGE
>  extern void __flush_anon_page(struct page *, unsigned long);
> -static inline void flush_anon_page(struct vm_area_struct *vma,
> +static inline void flush_anon_page(struct mm_area *vma,
>  	struct page *page, unsigned long vmaddr)
>  {
>  	if (cpu_has_dc_aliases && PageAnon(page))
> @@ -107,11 +107,11 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
>  		__flush_cache_vunmap();
>  }
>
> -extern void copy_to_user_page(struct vm_area_struct *vma,
> +extern void copy_to_user_page(struct mm_area *vma,
>  	struct page *page, unsigned long vaddr, void *dst, const void *src,
>  	unsigned long len);
>
> -extern void copy_from_user_page(struct vm_area_struct *vma,
> +extern void copy_from_user_page(struct mm_area *vma,
>  	struct page *page, unsigned long vaddr, void *dst, const void *src,
>  	unsigned long len);
>
> diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
> index fbc71ddcf0f6..abe7683fc4c4 100644
> --- a/arch/mips/include/asm/hugetlb.h
> +++ b/arch/mips/include/asm/hugetlb.h
> @@ -39,7 +39,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  	pte_t pte;
> @@ -63,7 +63,7 @@ static inline int huge_pte_none(pte_t pte)
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int huge_ptep_set_access_flags(struct mm_area *vma,
>  					     unsigned long addr,
>  					     pte_t *ptep, pte_t pte,
>  					     int dirty)
> diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
> index bc3e3484c1bf..5be4423baee8 100644
> --- a/arch/mips/include/asm/page.h
> +++ b/arch/mips/include/asm/page.h
> @@ -91,9 +91,9 @@ static inline void clear_user_page(void *addr, unsigned long vaddr,
>  		flush_data_cache_page((unsigned long)addr);
>  }
>
> -struct vm_area_struct;
> +struct mm_area;
>  extern void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma);
> +	unsigned long vaddr, struct mm_area *vma);
>
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>
> diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
> index c29a551eb0ca..ab28b3855dfc 100644
> --- a/arch/mips/include/asm/pgtable.h
> +++ b/arch/mips/include/asm/pgtable.h
> @@ -23,7 +23,7 @@
>  #include <asm/cpu-features.h>
>
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>
>  #define PAGE_SHARED	vm_get_page_prot(VM_READ|VM_WRITE|VM_SHARED)
>
> @@ -478,7 +478,7 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot)
>  	return __pgprot(prot);
>  }
>
> -static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
> +static inline void flush_tlb_fix_spurious_fault(struct mm_area *vma,
>  						unsigned long address,
>  						pte_t *ptep)
>  {
> @@ -491,7 +491,7 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
>  }
>
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> -static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int ptep_set_access_flags(struct mm_area *vma,
>  					unsigned long address, pte_t *ptep,
>  					pte_t entry, int dirty)
>  {
> @@ -575,11 +575,11 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
>  }
>  #endif
>
> -extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
> +extern void __update_tlb(struct mm_area *vma, unsigned long address,
>  	pte_t pte);
>
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	for (;;) {
> @@ -597,7 +597,7 @@ static inline void update_mmu_cache_range(struct vm_fault *vmf,
>  #define update_mmu_tlb_range(vma, address, ptep, nr) \
>  	update_mmu_cache_range(NULL, vma, address, ptep, nr)
>
> -static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pmd(struct mm_area *vma,
>  	unsigned long address, pmd_t *pmdp)
>  {
>  	pte_t pte = *(pte_t *)pmdp;
> @@ -610,7 +610,7 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
>   */
>  #ifdef CONFIG_MIPS_FIXUP_BIGPHYS_ADDR
>  phys_addr_t fixup_bigphys_addr(phys_addr_t addr, phys_addr_t size);
> -int io_remap_pfn_range(struct vm_area_struct *vma, unsigned long vaddr,
> +int io_remap_pfn_range(struct mm_area *vma, unsigned long vaddr,
>  		unsigned long pfn, unsigned long size, pgprot_t prot);
>  #define io_remap_pfn_range io_remap_pfn_range
>  #else
> diff --git a/arch/mips/include/asm/tlbflush.h b/arch/mips/include/asm/tlbflush.h
> index 9789e7a32def..26d11d18b2b4 100644
> --- a/arch/mips/include/asm/tlbflush.h
> +++ b/arch/mips/include/asm/tlbflush.h
> @@ -14,11 +14,11 @@
>   *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
>   */
>  extern void local_flush_tlb_all(void);
> -extern void local_flush_tlb_range(struct vm_area_struct *vma,
> +extern void local_flush_tlb_range(struct mm_area *vma,
>  	unsigned long start, unsigned long end);
>  extern void local_flush_tlb_kernel_range(unsigned long start,
>  	unsigned long end);
> -extern void local_flush_tlb_page(struct vm_area_struct *vma,
> +extern void local_flush_tlb_page(struct mm_area *vma,
>  	unsigned long page);
>  extern void local_flush_tlb_one(unsigned long vaddr);
>
> @@ -28,10 +28,10 @@ extern void local_flush_tlb_one(unsigned long vaddr);
>
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long,
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long,
>  	unsigned long);
>  extern void flush_tlb_kernel_range(unsigned long, unsigned long);
> -extern void flush_tlb_page(struct vm_area_struct *, unsigned long);
> +extern void flush_tlb_page(struct mm_area *, unsigned long);
>  extern void flush_tlb_one(unsigned long vaddr);
>
>  #else /* CONFIG_SMP */
> diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
> index 39e193cad2b9..6f006e89d2f3 100644
> --- a/arch/mips/kernel/smp.c
> +++ b/arch/mips/kernel/smp.c
> @@ -566,7 +566,7 @@ void flush_tlb_mm(struct mm_struct *mm)
>  }
>
>  struct flush_tlb_data {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr1;
>  	unsigned long addr2;
>  };
> @@ -578,7 +578,7 @@ static void flush_tlb_range_ipi(void *info)
>  	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +void flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long addr;
> @@ -652,7 +652,7 @@ static void flush_tlb_page_ipi(void *info)
>  	local_flush_tlb_page(fd->vma, fd->addr1);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	u32 old_mmid;
>
> diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
> index de096777172f..4ab46161d876 100644
> --- a/arch/mips/kernel/vdso.c
> +++ b/arch/mips/kernel/vdso.c
> @@ -79,7 +79,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  	struct mips_vdso_image *image = current->thread.abi->vdso;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long gic_size, size, base, data_addr, vdso_addr, gic_pfn, gic_base;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret;
>
>  	if (mmap_write_lock_killable(mm))
> diff --git a/arch/mips/mm/c-octeon.c b/arch/mips/mm/c-octeon.c
> index b7393b61cfa7..ba064d76dd1b 100644
> --- a/arch/mips/mm/c-octeon.c
> +++ b/arch/mips/mm/c-octeon.c
> @@ -60,7 +60,7 @@ static void local_octeon_flush_icache_range(unsigned long start,
>   *
>   * @vma:    VMA to flush or NULL to flush all icaches.
>   */
> -static void octeon_flush_icache_all_cores(struct vm_area_struct *vma)
> +static void octeon_flush_icache_all_cores(struct mm_area *vma)
>  {
>  	extern void octeon_send_ipi_single(int cpu, unsigned int action);
>  #ifdef CONFIG_SMP
> @@ -136,7 +136,7 @@ static void octeon_flush_icache_range(unsigned long start, unsigned long end)
>   * @start:  beginning address for flush
>   * @end:    ending address for flush
>   */
> -static void octeon_flush_cache_range(struct vm_area_struct *vma,
> +static void octeon_flush_cache_range(struct mm_area *vma,
>  				     unsigned long start, unsigned long end)
>  {
>  	if (vma->vm_flags & VM_EXEC)
> @@ -151,7 +151,7 @@ static void octeon_flush_cache_range(struct vm_area_struct *vma,
>   * @page:   Page to flush
>   * @pfn:    Page frame number
>   */
> -static void octeon_flush_cache_page(struct vm_area_struct *vma,
> +static void octeon_flush_cache_page(struct mm_area *vma,
>  				    unsigned long page, unsigned long pfn)
>  {
>  	if (vma->vm_flags & VM_EXEC)
> diff --git a/arch/mips/mm/c-r3k.c b/arch/mips/mm/c-r3k.c
> index 5869df848fab..c97e789bb9cb 100644
> --- a/arch/mips/mm/c-r3k.c
> +++ b/arch/mips/mm/c-r3k.c
> @@ -228,12 +228,12 @@ static void r3k_flush_cache_mm(struct mm_struct *mm)
>  {
>  }
>
> -static void r3k_flush_cache_range(struct vm_area_struct *vma,
> +static void r3k_flush_cache_range(struct mm_area *vma,
>  				  unsigned long start, unsigned long end)
>  {
>  }
>
> -static void r3k_flush_cache_page(struct vm_area_struct *vma,
> +static void r3k_flush_cache_page(struct mm_area *vma,
>  				 unsigned long addr, unsigned long pfn)
>  {
>  	unsigned long kaddr = KSEG0ADDR(pfn << PAGE_SHIFT);
> diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
> index 10413b6f6662..d2e65e6548e4 100644
> --- a/arch/mips/mm/c-r4k.c
> +++ b/arch/mips/mm/c-r4k.c
> @@ -469,7 +469,7 @@ static void r4k__flush_cache_vunmap(void)
>   */
>  static inline void local_r4k_flush_cache_range(void * args)
>  {
> -	struct vm_area_struct *vma = args;
> +	struct mm_area *vma = args;
>  	int exec = vma->vm_flags & VM_EXEC;
>
>  	if (!has_valid_asid(vma->vm_mm, R4K_INDEX))
> @@ -487,7 +487,7 @@ static inline void local_r4k_flush_cache_range(void * args)
>  		r4k_blast_icache();
>  }
>
> -static void r4k_flush_cache_range(struct vm_area_struct *vma,
> +static void r4k_flush_cache_range(struct mm_area *vma,
>  	unsigned long start, unsigned long end)
>  {
>  	int exec = vma->vm_flags & VM_EXEC;
> @@ -529,7 +529,7 @@ static void r4k_flush_cache_mm(struct mm_struct *mm)
>  }
>
>  struct flush_cache_page_args {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr;
>  	unsigned long pfn;
>  };
> @@ -537,7 +537,7 @@ struct flush_cache_page_args {
>  static inline void local_r4k_flush_cache_page(void *args)
>  {
>  	struct flush_cache_page_args *fcp_args = args;
> -	struct vm_area_struct *vma = fcp_args->vma;
> +	struct mm_area *vma = fcp_args->vma;
>  	unsigned long addr = fcp_args->addr;
>  	struct page *page = pfn_to_page(fcp_args->pfn);
>  	int exec = vma->vm_flags & VM_EXEC;
> @@ -605,7 +605,7 @@ static inline void local_r4k_flush_cache_page(void *args)
>  	}
>  }
>
> -static void r4k_flush_cache_page(struct vm_area_struct *vma,
> +static void r4k_flush_cache_page(struct mm_area *vma,
>  	unsigned long addr, unsigned long pfn)
>  {
>  	struct flush_cache_page_args args;
> diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
> index bf9a37c60e9f..10eba2a62402 100644
> --- a/arch/mips/mm/cache.c
> +++ b/arch/mips/mm/cache.c
> @@ -30,9 +30,9 @@ void (*flush_cache_all)(void);
>  void (*__flush_cache_all)(void);
>  EXPORT_SYMBOL_GPL(__flush_cache_all);
>  void (*flush_cache_mm)(struct mm_struct *mm);
> -void (*flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
> +void (*flush_cache_range)(struct mm_area *vma, unsigned long start,
>  	unsigned long end);
> -void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page,
> +void (*flush_cache_page)(struct mm_area *vma, unsigned long page,
>  	unsigned long pfn);
>  void (*flush_icache_range)(unsigned long start, unsigned long end);
>  EXPORT_SYMBOL_GPL(flush_icache_range);
> diff --git a/arch/mips/mm/fault.c b/arch/mips/mm/fault.c
> index 37fedeaca2e9..a18c0a590a1e 100644
> --- a/arch/mips/mm/fault.c
> +++ b/arch/mips/mm/fault.c
> @@ -39,7 +39,7 @@ int show_unhandled_signals = 1;
>  static void __do_page_fault(struct pt_regs *regs, unsigned long write,
>  	unsigned long address)
>  {
> -	struct vm_area_struct * vma = NULL;
> +	struct mm_area * vma = NULL;
>  	struct task_struct *tsk = current;
>  	struct mm_struct *mm = tsk->mm;
>  	const int field = sizeof(unsigned long) * 2;
> diff --git a/arch/mips/mm/hugetlbpage.c b/arch/mips/mm/hugetlbpage.c
> index 0b9e15555b59..a1b62b2ce516 100644
> --- a/arch/mips/mm/hugetlbpage.c
> +++ b/arch/mips/mm/hugetlbpage.c
> @@ -21,7 +21,7 @@
>  #include <asm/tlb.h>
>  #include <asm/tlbflush.h>
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgd;
> diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
> index a673d3d68254..69ae87f80ad8 100644
> --- a/arch/mips/mm/init.c
> +++ b/arch/mips/mm/init.c
> @@ -161,7 +161,7 @@ void kunmap_coherent(void)
>  }
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	struct folio *src = page_folio(from);
>  	void *vfrom, *vto;
> @@ -185,7 +185,7 @@ void copy_user_highpage(struct page *to, struct page *from,
>  	smp_wmb();
>  }
>
> -void copy_to_user_page(struct vm_area_struct *vma,
> +void copy_to_user_page(struct mm_area *vma,
>  	struct page *page, unsigned long vaddr, void *dst, const void *src,
>  	unsigned long len)
>  {
> @@ -205,7 +205,7 @@ void copy_to_user_page(struct vm_area_struct *vma,
>  		flush_cache_page(vma, vaddr, page_to_pfn(page));
>  }
>
> -void copy_from_user_page(struct vm_area_struct *vma,
> +void copy_from_user_page(struct mm_area *vma,
>  	struct page *page, unsigned long vaddr, void *dst, const void *src,
>  	unsigned long len)
>  {
> diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
> index 5d2a1225785b..5451673f26d2 100644
> --- a/arch/mips/mm/mmap.c
> +++ b/arch/mips/mm/mmap.c
> @@ -31,7 +31,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
>  	unsigned long flags, enum mmap_allocation_direction dir)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr = addr0;
>  	int do_color_align;
>  	struct vm_unmapped_area_info info = {};
> diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c
> index 173f7b36033b..b43ba28e3a6a 100644
> --- a/arch/mips/mm/tlb-r3k.c
> +++ b/arch/mips/mm/tlb-r3k.c
> @@ -64,7 +64,7 @@ void local_flush_tlb_all(void)
>  	local_irq_restore(flags);
>  }
>
> -void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			   unsigned long end)
>  {
>  	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
> @@ -144,7 +144,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  	local_irq_restore(flags);
>  }
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
>  	int cpu = smp_processor_id();
> @@ -176,7 +176,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
>  	}
>  }
>
> -void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
> +void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
>  {
>  	unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
>  	unsigned long flags;
> diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
> index 76f3b9c0a9f0..391bc8414146 100644
> --- a/arch/mips/mm/tlb-r4k.c
> +++ b/arch/mips/mm/tlb-r4k.c
> @@ -45,7 +45,7 @@ static inline void flush_micro_tlb(void)
>  	}
>  }
>
> -static inline void flush_micro_tlb_vm(struct vm_area_struct *vma)
> +static inline void flush_micro_tlb_vm(struct mm_area *vma)
>  {
>  	if (vma->vm_flags & VM_EXEC)
>  		flush_micro_tlb();
> @@ -103,7 +103,7 @@ void local_flush_tlb_all(void)
>  }
>  EXPORT_SYMBOL(local_flush_tlb_all);
>
> -void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  	unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -208,7 +208,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  	local_irq_restore(flags);
>  }
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	int cpu = smp_processor_id();
>
> @@ -290,7 +290,7 @@ void local_flush_tlb_one(unsigned long page)
>   * updates the TLB with the new pte(s), and another which also checks
>   * for the R4k "end of page" hardware bug and does the needy.
>   */
> -void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
> +void __update_tlb(struct mm_area * vma, unsigned long address, pte_t pte)
>  {
>  	unsigned long flags;
>  	pgd_t *pgdp;
> diff --git a/arch/mips/vdso/genvdso.c b/arch/mips/vdso/genvdso.c
> index d47412ea6e67..4fdccdfe055d 100644
> --- a/arch/mips/vdso/genvdso.c
> +++ b/arch/mips/vdso/genvdso.c
> @@ -261,7 +261,7 @@ int main(int argc, char **argv)
>  	fprintf(out_file, "#include <asm/vdso.h>\n");
>  	fprintf(out_file, "static int vdso_mremap(\n");
>  	fprintf(out_file, "	const struct vm_special_mapping *sm,\n");
> -	fprintf(out_file, "	struct vm_area_struct *new_vma)\n");
> +	fprintf(out_file, "	struct mm_area *new_vma)\n");
>  	fprintf(out_file, "{\n");
>  	fprintf(out_file, "	current->mm->context.vdso =\n");
>  	fprintf(out_file, "	(void *)(new_vma->vm_start);\n");
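
The genvdso.c hunk above deserves a call-out: the old name also lives
inside fprintf() string literals that emit C source at build time, so a
purely type-based refactoring (or a compiler-assisted rename) would miss
it. Only a plain text search catches these, e.g.:

	git grep -nw vm_area_struct

which is presumably how occurrences like this one were found.
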
> diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
> index 81484a776b33..c87da07c790b 100644
> --- a/arch/nios2/include/asm/cacheflush.h
> +++ b/arch/nios2/include/asm/cacheflush.h
> @@ -23,9 +23,9 @@ struct mm_struct;
>  extern void flush_cache_all(void);
>  extern void flush_cache_mm(struct mm_struct *mm);
>  extern void flush_cache_dup_mm(struct mm_struct *mm);
> -extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_cache_range(struct mm_area *vma, unsigned long start,
>  	unsigned long end);
> -extern void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
> +extern void flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
>  	unsigned long pfn);
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
>  void flush_dcache_page(struct page *page);
> @@ -33,7 +33,7 @@ void flush_dcache_folio(struct folio *folio);
>  #define flush_dcache_folio flush_dcache_folio
>
>  extern void flush_icache_range(unsigned long start, unsigned long end);
> -void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_pages(struct mm_area *vma, struct page *page,
>  		unsigned int nr);
>  #define flush_icache_pages flush_icache_pages
>
> @@ -41,10 +41,10 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
>  #define flush_cache_vmap_early(start, end)	do { } while (0)
>  #define flush_cache_vunmap(start, end)		flush_dcache_range(start, end)
>
> -extern void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +extern void copy_to_user_page(struct mm_area *vma, struct page *page,
>  				unsigned long user_vaddr,
>  				void *dst, void *src, int len);
> -extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +extern void copy_from_user_page(struct mm_area *vma, struct page *page,
>  				unsigned long user_vaddr,
>  				void *dst, void *src, int len);
>
> diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
> index eab87c6beacb..558eda85615e 100644
> --- a/arch/nios2/include/asm/pgtable.h
> +++ b/arch/nios2/include/asm/pgtable.h
> @@ -285,7 +285,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
>  extern void __init paging_init(void);
>  extern void __init mmu_init(void);
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *ptep, unsigned int nr);
>
>  #define update_mmu_cache(vma, addr, ptep) \
> diff --git a/arch/nios2/include/asm/tlbflush.h b/arch/nios2/include/asm/tlbflush.h
> index 362d6da09d02..913f409d9777 100644
> --- a/arch/nios2/include/asm/tlbflush.h
> +++ b/arch/nios2/include/asm/tlbflush.h
> @@ -23,11 +23,11 @@ struct mm_struct;
>   */
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma,
> +static inline void flush_tlb_page(struct mm_area *vma,
>  				  unsigned long address)
>  {
>  	flush_tlb_range(vma, address, address + PAGE_SIZE);
> @@ -38,7 +38,7 @@ static inline void flush_tlb_kernel_page(unsigned long address)
>  	flush_tlb_kernel_range(address, address + PAGE_SIZE);
>  }
>
> -extern void reload_tlb_page(struct vm_area_struct *vma, unsigned long addr,
> +extern void reload_tlb_page(struct mm_area *vma, unsigned long addr,
>  			    pte_t pte);
>
>  #endif /* _ASM_NIOS2_TLBFLUSH_H */
> diff --git a/arch/nios2/kernel/sys_nios2.c b/arch/nios2/kernel/sys_nios2.c
> index b1ca85699952..7c275dff5822 100644
> --- a/arch/nios2/kernel/sys_nios2.c
> +++ b/arch/nios2/kernel/sys_nios2.c
> @@ -21,7 +21,7 @@
>  asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len,
>  				unsigned int op)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>
>  	if (len == 0)
> diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
> index 0ee9c5f02e08..357ea747ea3d 100644
> --- a/arch/nios2/mm/cacheflush.c
> +++ b/arch/nios2/mm/cacheflush.c
> @@ -74,7 +74,7 @@ static void __flush_icache(unsigned long start, unsigned long end)
>  static void flush_aliases(struct address_space *mapping, struct folio *folio)
>  {
>  	struct mm_struct *mm = current->active_mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long flags;
>  	pgoff_t pgoff;
>  	unsigned long nr = folio_nr_pages(folio);
> @@ -131,7 +131,7 @@ void invalidate_dcache_range(unsigned long start, unsigned long end)
>  }
>  EXPORT_SYMBOL(invalidate_dcache_range);
>
> -void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_cache_range(struct mm_area *vma, unsigned long start,
>  			unsigned long end)
>  {
>  	__flush_dcache(start, end);
> @@ -139,7 +139,7 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
>  		__flush_icache(start, end);
>  }
>
> -void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_pages(struct mm_area *vma, struct page *page,
>  		unsigned int nr)
>  {
>  	unsigned long start = (unsigned long) page_address(page);
> @@ -149,7 +149,7 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
>  	__flush_icache(start, end);
>  }
>
> -void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
> +void flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
>  			unsigned long pfn)
>  {
>  	unsigned long start = vmaddr;
> @@ -206,7 +206,7 @@ void flush_dcache_page(struct page *page)
>  }
>  EXPORT_SYMBOL(flush_dcache_page);
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *ptep, unsigned int nr)
>  {
>  	pte_t pte = *ptep;
> @@ -258,7 +258,7 @@ void clear_user_page(void *addr, unsigned long vaddr, struct page *page)
>  	__flush_icache((unsigned long)addr, (unsigned long)addr + PAGE_SIZE);
>  }
>
> -void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_from_user_page(struct mm_area *vma, struct page *page,
>  			unsigned long user_vaddr,
>  			void *dst, void *src, int len)
>  {
> @@ -269,7 +269,7 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
>  		__flush_icache((unsigned long)src, (unsigned long)src + len);
>  }
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  			unsigned long user_vaddr,
>  			void *dst, void *src, int len)
>  {
> diff --git a/arch/nios2/mm/fault.c b/arch/nios2/mm/fault.c
> index e3fa9c15181d..7901f945202e 100644
> --- a/arch/nios2/mm/fault.c
> +++ b/arch/nios2/mm/fault.c
> @@ -43,7 +43,7 @@
>  asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long cause,
>  				unsigned long address)
>  {
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	struct task_struct *tsk = current;
>  	struct mm_struct *mm = tsk->mm;
>  	int code = SEGV_MAPERR;
> diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
> index 94efa3de3933..8f5a08ff465d 100644
> --- a/arch/nios2/mm/init.c
> +++ b/arch/nios2/mm/init.c
> @@ -96,7 +96,7 @@ arch_initcall(alloc_kuser_page);
>  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mmap_write_lock(mm);
>
> @@ -110,7 +110,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  	return IS_ERR(vma) ? PTR_ERR(vma) : 0;
>  }
>
> -const char *arch_vma_name(struct vm_area_struct *vma)
> +const char *arch_vma_name(struct mm_area *vma)
>  {
>  	return (vma->vm_start == KUSER_BASE) ? "[kuser]" : NULL;
>  }
> diff --git a/arch/nios2/mm/tlb.c b/arch/nios2/mm/tlb.c
> index f90ac35f05f3..749b4fd052cf 100644
> --- a/arch/nios2/mm/tlb.c
> +++ b/arch/nios2/mm/tlb.c
> @@ -99,7 +99,7 @@ static void reload_tlb_one_pid(unsigned long addr, unsigned long mmu_pid, pte_t
>  	replace_tlb_one_pid(addr, mmu_pid, pte_val(pte));
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			unsigned long end)
>  {
>  	unsigned long mmu_pid = get_pid_from_context(&vma->vm_mm->context);
> @@ -110,7 +110,7 @@ void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  	}
>  }
>
> -void reload_tlb_page(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> +void reload_tlb_page(struct mm_area *vma, unsigned long addr, pte_t pte)
>  {
>  	unsigned long mmu_pid = get_pid_from_context(&vma->vm_mm->context);
>
> diff --git a/arch/openrisc/include/asm/pgtable.h b/arch/openrisc/include/asm/pgtable.h
> index 60c6ce7ff2dc..0acc625d0607 100644
> --- a/arch/openrisc/include/asm/pgtable.h
> +++ b/arch/openrisc/include/asm/pgtable.h
> @@ -370,18 +370,18 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
>
>  extern pgd_t swapper_pg_dir[PTRS_PER_PGD]; /* defined in head.S */
>
> -struct vm_area_struct;
> +struct mm_area;
>
> -static inline void update_tlb(struct vm_area_struct *vma,
> +static inline void update_tlb(struct mm_area *vma,
>  	unsigned long address, pte_t *pte)
>  {
>  }
>
> -extern void update_cache(struct vm_area_struct *vma,
> +extern void update_cache(struct mm_area *vma,
>  	unsigned long address, pte_t *pte);
>
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	update_tlb(vma, address, ptep);
> diff --git a/arch/openrisc/include/asm/tlbflush.h b/arch/openrisc/include/asm/tlbflush.h
> index dbf030365ab4..4773da3c2d29 100644
> --- a/arch/openrisc/include/asm/tlbflush.h
> +++ b/arch/openrisc/include/asm/tlbflush.h
> @@ -29,9 +29,9 @@
>   */
>  extern void local_flush_tlb_all(void);
>  extern void local_flush_tlb_mm(struct mm_struct *mm);
> -extern void local_flush_tlb_page(struct vm_area_struct *vma,
> +extern void local_flush_tlb_page(struct mm_area *vma,
>  				 unsigned long addr);
> -extern void local_flush_tlb_range(struct vm_area_struct *vma,
> +extern void local_flush_tlb_range(struct mm_area *vma,
>  				  unsigned long start,
>  				  unsigned long end);
>
> @@ -43,8 +43,8 @@ extern void local_flush_tlb_range(struct vm_area_struct *vma,
>  #else
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long addr);
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end);
>  #endif
>
> diff --git a/arch/openrisc/kernel/smp.c b/arch/openrisc/kernel/smp.c
> index 86da4bc5ee0b..1eb34b914609 100644
> --- a/arch/openrisc/kernel/smp.c
> +++ b/arch/openrisc/kernel/smp.c
> @@ -300,12 +300,12 @@ void flush_tlb_mm(struct mm_struct *mm)
>  	smp_flush_tlb_mm(mm_cpumask(mm), mm);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long uaddr)
>  {
>  	smp_flush_tlb_range(mm_cpumask(vma->vm_mm), uaddr, uaddr + PAGE_SIZE);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma,
> +void flush_tlb_range(struct mm_area *vma,
>  		     unsigned long start, unsigned long end)
>  {
>  	const struct cpumask *cmask = vma ? mm_cpumask(vma->vm_mm)
> diff --git a/arch/openrisc/mm/cache.c b/arch/openrisc/mm/cache.c
> index 7bdd07cfca60..64649f65f943 100644
> --- a/arch/openrisc/mm/cache.c
> +++ b/arch/openrisc/mm/cache.c
> @@ -78,7 +78,7 @@ void local_icache_range_inv(unsigned long start, unsigned long end)
>  	cache_loop(start, end, SPR_ICBIR, SPR_UPR_ICP);
>  }
>
> -void update_cache(struct vm_area_struct *vma, unsigned long address,
> +void update_cache(struct mm_area *vma, unsigned long address,
>  	pte_t *pte)
>  {
>  	unsigned long pfn = pte_val(*pte) >> PAGE_SHIFT;
> diff --git a/arch/openrisc/mm/fault.c b/arch/openrisc/mm/fault.c
> index 29e232d78d82..800bceca3bcd 100644
> --- a/arch/openrisc/mm/fault.c
> +++ b/arch/openrisc/mm/fault.c
> @@ -48,7 +48,7 @@ asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long address,
>  {
>  	struct task_struct *tsk;
>  	struct mm_struct *mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int si_code;
>  	vm_fault_t fault;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
> diff --git a/arch/openrisc/mm/tlb.c b/arch/openrisc/mm/tlb.c
> index 3115f2e4f864..594a5adb8646 100644
> --- a/arch/openrisc/mm/tlb.c
> +++ b/arch/openrisc/mm/tlb.c
> @@ -80,7 +80,7 @@ void local_flush_tlb_all(void)
>  #define flush_itlb_page_no_eir(addr) \
>  	mtspr_off(SPR_ITLBMR_BASE(0), ITLB_OFFSET(addr), 0);
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	if (have_dtlbeir)
>  		flush_dtlb_page_eir(addr);
> @@ -93,7 +93,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
>  		flush_itlb_page_no_eir(addr);
>  }
>
> -void local_flush_tlb_range(struct vm_area_struct *vma,
> +void local_flush_tlb_range(struct mm_area *vma,
>  			   unsigned long start, unsigned long end)
>  {
>  	int addr;
> diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
> index 8394718870e1..fe13de0d9a12 100644
> --- a/arch/parisc/include/asm/cacheflush.h
> +++ b/arch/parisc/include/asm/cacheflush.h
> @@ -58,7 +58,7 @@ static inline void flush_dcache_page(struct page *page)
>  #define flush_dcache_mmap_unlock_irqrestore(mapping, flags)	\
>  		xa_unlock_irqrestore(&mapping->i_pages, flags)
>
> -void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_pages(struct mm_area *vma, struct page *page,
>  		unsigned int nr);
>  #define flush_icache_pages flush_icache_pages
>
> @@ -67,17 +67,17 @@ void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
>  	flush_kernel_icache_range_asm(s,e); 		\
>  } while (0)
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long user_vaddr, void *dst, void *src, int len);
> -void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_from_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long user_vaddr, void *dst, void *src, int len);
> -void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
> +void flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
>  		unsigned long pfn);
> -void flush_cache_range(struct vm_area_struct *vma,
> +void flush_cache_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end);
>
>  #define ARCH_HAS_FLUSH_ANON_PAGE
> -void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr);
> +void flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr);
>
>  #define ARCH_HAS_FLUSH_ON_KUNMAP
>  void kunmap_flush_on_unmap(const void *addr);
> diff --git a/arch/parisc/include/asm/hugetlb.h b/arch/parisc/include/asm/hugetlb.h
> index 21e9ace17739..f19c029f612b 100644
> --- a/arch/parisc/include/asm/hugetlb.h
> +++ b/arch/parisc/include/asm/hugetlb.h
> @@ -13,7 +13,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep, unsigned long sz);
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  	return *ptep;
> @@ -24,7 +24,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  					   unsigned long addr, pte_t *ptep);
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  					     unsigned long addr, pte_t *ptep,
>  					     pte_t pte, int dirty);
>
> diff --git a/arch/parisc/include/asm/page.h b/arch/parisc/include/asm/page.h
> index 7fd447092630..427bf90b3f98 100644
> --- a/arch/parisc/include/asm/page.h
> +++ b/arch/parisc/include/asm/page.h
> @@ -17,13 +17,13 @@
>  #define copy_page(to, from)	copy_page_asm((void *)(to), (void *)(from))
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>
>  void clear_page_asm(void *page);
>  void copy_page_asm(void *to, void *from);
>  #define clear_user_page(vto, vaddr, page) clear_page_asm(vto)
>  void copy_user_highpage(struct page *to, struct page *from, unsigned long vaddr,
> -		struct vm_area_struct *vma);
> +		struct mm_area *vma);
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>
>  /*
> diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
> index babf65751e81..4b59b5fbd85c 100644
> --- a/arch/parisc/include/asm/pgtable.h
> +++ b/arch/parisc/include/asm/pgtable.h
> @@ -454,7 +454,7 @@ static inline pte_t ptep_get(pte_t *ptep)
>  }
>  #define ptep_get ptep_get
>
> -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +static inline int ptep_test_and_clear_young(struct mm_area *vma, unsigned long addr, pte_t *ptep)
>  {
>  	pte_t pte;
>
> @@ -466,8 +466,8 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned
>  	return 1;
>  }
>
> -int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);
> -pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);
> +int ptep_clear_flush_young(struct mm_area *vma, unsigned long addr, pte_t *ptep);
> +pte_t ptep_clear_flush(struct mm_area *vma, unsigned long addr, pte_t *ptep);
>
>  struct mm_struct;
>  static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
> diff --git a/arch/parisc/include/asm/tlbflush.h b/arch/parisc/include/asm/tlbflush.h
> index 5ffd7c17f593..3683645fd41d 100644
> --- a/arch/parisc/include/asm/tlbflush.h
> +++ b/arch/parisc/include/asm/tlbflush.h
> @@ -61,7 +61,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  #endif
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma,
> +static inline void flush_tlb_page(struct mm_area *vma,
>  	unsigned long addr)
>  {
>  	purge_tlb_entries(vma->vm_mm, addr);
> diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
> index db531e58d70e..752562b78d90 100644
> --- a/arch/parisc/kernel/cache.c
> +++ b/arch/parisc/kernel/cache.c
> @@ -328,7 +328,7 @@ void disable_sr_hashing(void)
>  }
>
>  static inline void
> -__flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr,
> +__flush_cache_page(struct mm_area *vma, unsigned long vmaddr,
>  		   unsigned long physaddr)
>  {
>  	if (!static_branch_likely(&parisc_has_cache))
> @@ -390,7 +390,7 @@ void kunmap_flush_on_unmap(const void *addr)
>  }
>  EXPORT_SYMBOL(kunmap_flush_on_unmap);
>
> -void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_pages(struct mm_area *vma, struct page *page,
>  		unsigned int nr)
>  {
>  	void *kaddr = page_address(page);
> @@ -473,7 +473,7 @@ static inline unsigned long get_upa(struct mm_struct *mm, unsigned long addr)
>  void flush_dcache_folio(struct folio *folio)
>  {
>  	struct address_space *mapping = folio_flush_mapping(folio);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr, old_addr = 0;
>  	void *kaddr;
>  	unsigned long count = 0;
> @@ -620,7 +620,7 @@ extern void purge_kernel_dcache_page_asm(unsigned long);
>  extern void clear_user_page_asm(void *, unsigned long);
>  extern void copy_user_page_asm(void *, void *, unsigned long);
>
> -static void flush_cache_page_if_present(struct vm_area_struct *vma,
> +static void flush_cache_page_if_present(struct mm_area *vma,
>  	unsigned long vmaddr)
>  {
>  #if CONFIG_FLUSH_PAGE_ACCESSED
> @@ -645,7 +645,7 @@ static void flush_cache_page_if_present(struct vm_area_struct *vma,
>  }
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	void *kto, *kfrom;
>
> @@ -657,7 +657,7 @@ void copy_user_highpage(struct page *to, struct page *from,
>  	kunmap_local(kfrom);
>  }
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long user_vaddr, void *dst, void *src, int len)
>  {
>  	__flush_cache_page(vma, user_vaddr, PFN_PHYS(page_to_pfn(page)));
> @@ -665,7 +665,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  	flush_kernel_dcache_page_addr(PTR_PAGE_ALIGN_DOWN(dst));
>  }
>
> -void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_from_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long user_vaddr, void *dst, void *src, int len)
>  {
>  	__flush_cache_page(vma, user_vaddr, PFN_PHYS(page_to_pfn(page)));
> @@ -702,7 +702,7 @@ int __flush_tlb_range(unsigned long sid, unsigned long start,
>  	return 0;
>  }
>
> -static void flush_cache_pages(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +static void flush_cache_pages(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	unsigned long addr;
>
> @@ -712,7 +712,7 @@ static void flush_cache_pages(struct vm_area_struct *vma, unsigned long start, u
>
>  static inline unsigned long mm_total_size(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long usize = 0;
>  	VMA_ITERATOR(vmi, mm, 0);
>
> @@ -726,7 +726,7 @@ static inline unsigned long mm_total_size(struct mm_struct *mm)
>
>  void flush_cache_mm(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	/*
> @@ -751,7 +751,7 @@ void flush_cache_mm(struct mm_struct *mm)
>  		flush_cache_pages(vma, vma->vm_start, vma->vm_end);
>  }
>
> -void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +void flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	if (!parisc_requires_coherency()
>  	    || end - start >= parisc_cache_flush_threshold) {
> @@ -768,12 +768,12 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
>  	flush_cache_pages(vma, start & PAGE_MASK, end);
>  }
>
> -void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
> +void flush_cache_page(struct mm_area *vma, unsigned long vmaddr, unsigned long pfn)
>  {
>  	__flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
>  }
>
> -void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
> +void flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr)
>  {
>  	if (!PageAnon(page))
>  		return;
> @@ -781,7 +781,7 @@ void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned lon
>  	__flush_cache_page(vma, vmaddr, PFN_PHYS(page_to_pfn(page)));
>  }
>
> -int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr,
> +int ptep_clear_flush_young(struct mm_area *vma, unsigned long addr,
>  			   pte_t *ptep)
>  {
>  	pte_t pte = ptep_get(ptep);
> @@ -801,7 +801,7 @@ int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long addr,
>   * can cause random cache corruption. Thus, we must flush the cache
>   * as well as the TLB when clearing a PTE that's valid.
>   */
> -pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long addr,
> +pte_t ptep_clear_flush(struct mm_area *vma, unsigned long addr,
>  		       pte_t *ptep)
>  {
>  	struct mm_struct *mm = (vma)->vm_mm;
> diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
> index f852fe274abe..15fd6e8979d7 100644
> --- a/arch/parisc/kernel/sys_parisc.c
> +++ b/arch/parisc/kernel/sys_parisc.c
> @@ -101,7 +101,7 @@ static unsigned long arch_get_unmapped_area_common(struct file *filp,
>  	unsigned long flags, enum mmap_allocation_direction dir)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	unsigned long filp_pgoff;
>  	int do_color_align;
>  	struct vm_unmapped_area_info info = {
> diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
> index b9b3d527bc90..6c26d9c5d7f9 100644
> --- a/arch/parisc/kernel/traps.c
> +++ b/arch/parisc/kernel/traps.c
> @@ -711,7 +711,7 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
>  		 */
>
>  		if (user_mode(regs)) {
> -			struct vm_area_struct *vma;
> +			struct mm_area *vma;
>
>  			mmap_read_lock(current->mm);
>  			vma = find_vma(current->mm,regs->iaoq[0]);
> diff --git a/arch/parisc/kernel/vdso.c b/arch/parisc/kernel/vdso.c
> index c5cbfce7a84c..f7075a8b3bd1 100644
> --- a/arch/parisc/kernel/vdso.c
> +++ b/arch/parisc/kernel/vdso.c
> @@ -27,7 +27,7 @@ extern char vdso32_start, vdso32_end;
>  extern char vdso64_start, vdso64_end;
>
>  static int vdso_mremap(const struct vm_special_mapping *sm,
> -		       struct vm_area_struct *vma)
> +		       struct mm_area *vma)
>  {
>  	current->mm->context.vdso_base = vma->vm_start;
>  	return 0;
> @@ -56,7 +56,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
>  	unsigned long vdso_text_start, vdso_text_len, map_base;
>  	struct vm_special_mapping *vdso_mapping;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int rc;
>
>  	if (mmap_write_lock_killable(mm))
> diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
> index c39de84e98b0..c1fbc50fc840 100644
> --- a/arch/parisc/mm/fault.c
> +++ b/arch/parisc/mm/fault.c
> @@ -241,7 +241,7 @@ const char *trap_name(unsigned long code)
>  static inline void
>  show_signal_msg(struct pt_regs *regs, unsigned long code,
>  		unsigned long address, struct task_struct *tsk,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	if (!unhandled_signal(tsk, SIGSEGV))
>  		return;
> @@ -267,7 +267,7 @@ show_signal_msg(struct pt_regs *regs, unsigned long code,
>  void do_page_fault(struct pt_regs *regs, unsigned long code,
>  			      unsigned long address)
>  {
> -	struct vm_area_struct *vma, *prev_vma;
> +	struct mm_area *vma, *prev_vma;
>  	struct task_struct *tsk;
>  	struct mm_struct *mm;
>  	unsigned long acc_type;
> @@ -454,7 +454,7 @@ handle_nadtlb_fault(struct pt_regs *regs)
>  {
>  	unsigned long insn = regs->iir;
>  	int breg, treg, xreg, val = 0;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk;
>  	struct mm_struct *mm;
>  	unsigned long address;
> diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
> index a94fe546d434..31fa175e4b67 100644
> --- a/arch/parisc/mm/hugetlbpage.c
> +++ b/arch/parisc/mm/hugetlbpage.c
> @@ -23,7 +23,7 @@
>
>
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgd;
> @@ -146,7 +146,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  	__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
>  }
>
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep,
>  				pte_t pte, int dirty)
>  {
> diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
> index 42c3af90d1f0..87c6abe37935 100644
> --- a/arch/powerpc/include/asm/book3s/32/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
> @@ -325,7 +325,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
>  	pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 0);
>  }
>
> -static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline void __ptep_set_access_flags(struct mm_area *vma,
>  					   pte_t *ptep, pte_t entry,
>  					   unsigned long address,
>  					   int psize)
> diff --git a/arch/powerpc/include/asm/book3s/32/tlbflush.h b/arch/powerpc/include/asm/book3s/32/tlbflush.h
> index e43534da5207..dd7630bfcab8 100644
> --- a/arch/powerpc/include/asm/book3s/32/tlbflush.h
> +++ b/arch/powerpc/include/asm/book3s/32/tlbflush.h
> @@ -9,7 +9,7 @@
>   * TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
>   */
>  void hash__flush_tlb_mm(struct mm_struct *mm);
> -void hash__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +void hash__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
>  void hash__flush_range(struct mm_struct *mm, unsigned long start, unsigned long end);
>
>  #ifdef CONFIG_SMP
> @@ -52,7 +52,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  		_tlbia();
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +static inline void flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
>  		hash__flush_tlb_page(vma, vmaddr);
> @@ -61,7 +61,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmad
>  }
>
>  static inline void
> -flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	flush_range(vma->vm_mm, start, end);
>  }
> @@ -71,7 +71,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
>  	flush_range(&init_mm, start, end);
>  }
>
> -static inline void local_flush_tlb_page(struct vm_area_struct *vma,
> +static inline void local_flush_tlb_page(struct mm_area *vma,
>  					unsigned long vmaddr)
>  {
>  	flush_tlb_page(vma, vmaddr);
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
> index aa90a048f319..47b4b0ee9aff 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
> @@ -158,7 +158,7 @@ static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
>  extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
>  					   unsigned long addr, pmd_t *pmdp,
>  					   unsigned long clr, unsigned long set);
> -extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
> +extern pmd_t hash__pmdp_collapse_flush(struct mm_area *vma,
>  				   unsigned long address, pmd_t *pmdp);
>  extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
>  					 pgtable_t pgtable);
> diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> index 0bf6fd0bf42a..5d42aee48d90 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
> @@ -170,9 +170,9 @@ extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
>  #define pte_pagesize_index(mm, addr, pte)	\
>  	(((pte) & H_PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
>
> -extern int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
> +extern int remap_pfn_range(struct mm_area *, unsigned long addr,
>  			   unsigned long pfn, unsigned long size, pgprot_t);
> -static inline int hash__remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
> +static inline int hash__remap_4k_pfn(struct mm_area *vma, unsigned long addr,
>  				 unsigned long pfn, pgprot_t prot)
>  {
>  	if (pfn > (PTE_RPN_MASK >> PAGE_SHIFT)) {
> @@ -271,7 +271,7 @@ static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
>  extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
>  					   unsigned long addr, pmd_t *pmdp,
>  					   unsigned long clr, unsigned long set);
> -extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
> +extern pmd_t hash__pmdp_collapse_flush(struct mm_area *vma,
>  				   unsigned long address, pmd_t *pmdp);
>  extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
>  					 pgtable_t pgtable);
> diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
> index bb786694dd26..212cdb6c7e1f 100644
> --- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
> +++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
> @@ -9,10 +9,10 @@
>   * both hash and radix to be enabled together we need to work around the
>   * limitations.
>   */
> -void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> -void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +void radix__flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
> +void radix__local_flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
>
> -extern void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> +extern void radix__huge_ptep_modify_prot_commit(struct mm_area *vma,
>  						unsigned long addr, pte_t *ptep,
>  						pte_t old_pte, pte_t pte);
>
> @@ -50,22 +50,22 @@ static inline bool gigantic_page_runtime_supported(void)
>  }
>
>  #define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
> -extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
> +extern pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
>  					 unsigned long addr, pte_t *ptep);
>
>  #define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
> -extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> +extern void huge_ptep_modify_prot_commit(struct mm_area *vma,
>  					 unsigned long addr, pte_t *ptep,
>  					 pte_t old_pte, pte_t new_pte);
>
> -static inline void flush_hugetlb_page(struct vm_area_struct *vma,
> +static inline void flush_hugetlb_page(struct mm_area *vma,
>  				      unsigned long vmaddr)
>  {
>  	if (radix_enabled())
>  		return radix__flush_hugetlb_page(vma, vmaddr);
>  }
>
> -void flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +void flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
>
>  static inline int check_and_get_huge_psize(int shift)
>  {
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
> index 4d8d7b4ea16b..430ded76ad49 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
> @@ -7,7 +7,7 @@
>
>  #endif /* CONFIG_HUGETLB_PAGE */
>
> -static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
> +static inline int remap_4k_pfn(struct mm_area *vma, unsigned long addr,
>  			       unsigned long pfn, pgprot_t prot)
>  {
>  	if (radix_enabled())
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 6d98e6f08d4d..18222f1eab2e 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -722,7 +722,7 @@ static inline bool check_pte_access(unsigned long access, unsigned long ptev)
>   * Generic functions with hash/radix callbacks
>   */
>
> -static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline void __ptep_set_access_flags(struct mm_area *vma,
>  					   pte_t *ptep, pte_t entry,
>  					   unsigned long address,
>  					   int psize)
> @@ -1104,12 +1104,12 @@ extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>  extern void set_pud_at(struct mm_struct *mm, unsigned long addr,
>  		       pud_t *pudp, pud_t pud);
>
> -static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pmd(struct mm_area *vma,
>  					unsigned long addr, pmd_t *pmd)
>  {
>  }
>
> -static inline void update_mmu_cache_pud(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pud(struct mm_area *vma,
>  					unsigned long addr, pud_t *pud)
>  {
>  }
> @@ -1284,19 +1284,19 @@ static inline pud_t pud_mkhuge(pud_t pud)
>
>
>  #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> -extern int pmdp_set_access_flags(struct vm_area_struct *vma,
> +extern int pmdp_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pmd_t *pmdp,
>  				 pmd_t entry, int dirty);
>  #define __HAVE_ARCH_PUDP_SET_ACCESS_FLAGS
> -extern int pudp_set_access_flags(struct vm_area_struct *vma,
> +extern int pudp_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pud_t *pudp,
>  				 pud_t entry, int dirty);
>
>  #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> -extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +extern int pmdp_test_and_clear_young(struct mm_area *vma,
>  				     unsigned long address, pmd_t *pmdp);
>  #define __HAVE_ARCH_PUDP_TEST_AND_CLEAR_YOUNG
> -extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
> +extern int pudp_test_and_clear_young(struct mm_area *vma,
>  				     unsigned long address, pud_t *pudp);
>
>
> @@ -1319,7 +1319,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>  	return *pudp;
>  }
>
> -static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_collapse_flush(struct mm_area *vma,
>  					unsigned long address, pmd_t *pmdp)
>  {
>  	if (radix_enabled())
> @@ -1329,12 +1329,12 @@ static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
>  #define pmdp_collapse_flush pmdp_collapse_flush
>
>  #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
> -pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
>  				   unsigned long addr,
>  				   pmd_t *pmdp, int full);
>
>  #define __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR_FULL
> -pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +pud_t pudp_huge_get_and_clear_full(struct mm_area *vma,
>  				   unsigned long addr,
>  				   pud_t *pudp, int full);
>
> @@ -1357,16 +1357,16 @@ static inline pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PMDP_INVALIDATE
> -extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +extern pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
>  			     pmd_t *pmdp);
> -extern pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +extern pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
>  			     pud_t *pudp);
>
>  #define pmd_move_must_withdraw pmd_move_must_withdraw
>  struct spinlock;
>  extern int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
>  				  struct spinlock *old_pmd_ptl,
> -				  struct vm_area_struct *vma);
> +				  struct mm_area *vma);
>  /*
>   * Hash translation mode uses the deposited table to store hash pte
>   * slot information.
> @@ -1413,8 +1413,8 @@ static inline int pgd_devmap(pgd_t pgd)
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
>  #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
> -pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
> -void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
> +pte_t ptep_modify_prot_start(struct mm_area *, unsigned long, pte_t *);
> +void ptep_modify_prot_commit(struct mm_area *, unsigned long,
>  			     pte_t *, pte_t, pte_t);
>
>  /*
> diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
> index 8f55ff74bb68..ffbeb52f4beb 100644
> --- a/arch/powerpc/include/asm/book3s/64/radix.h
> +++ b/arch/powerpc/include/asm/book3s/64/radix.h
> @@ -143,11 +143,11 @@ extern void radix__mark_rodata_ro(void);
>  extern void radix__mark_initmem_nx(void);
>  #endif
>
> -extern void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
> +extern void radix__ptep_set_access_flags(struct mm_area *vma, pte_t *ptep,
>  					 pte_t entry, unsigned long address,
>  					 int psize);
>
> -extern void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
> +extern void radix__ptep_modify_prot_commit(struct mm_area *vma,
>  					   unsigned long addr, pte_t *ptep,
>  					   pte_t old_pte, pte_t pte);
>
> @@ -288,7 +288,7 @@ extern unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned l
>  extern unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long addr,
>  						pud_t *pudp, unsigned long clr,
>  						unsigned long set);
> -extern pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma,
> +extern pmd_t radix__pmdp_collapse_flush(struct mm_area *vma,
>  				  unsigned long address, pmd_t *pmdp);
>  extern void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
>  					pgtable_t pgtable);
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
> index a38542259fab..369f7d20a25a 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
> @@ -8,7 +8,7 @@
>  #define RIC_FLUSH_PWC 1
>  #define RIC_FLUSH_ALL 2
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct mm_struct;
>  struct mmu_gather;
>
> @@ -60,30 +60,30 @@ static inline void radix__flush_all_lpid_guest(unsigned int lpid)
>  }
>  #endif
>
> -extern void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma,
> +extern void radix__flush_hugetlb_tlb_range(struct mm_area *vma,
>  					   unsigned long start, unsigned long end);
>  extern void radix__flush_tlb_range_psize(struct mm_struct *mm, unsigned long start,
>  					 unsigned long end, int psize);
>  void radix__flush_tlb_pwc_range_psize(struct mm_struct *mm, unsigned long start,
>  				      unsigned long end, int psize);
> -extern void radix__flush_pmd_tlb_range(struct vm_area_struct *vma,
> +extern void radix__flush_pmd_tlb_range(struct mm_area *vma,
>  				       unsigned long start, unsigned long end);
> -extern void radix__flush_pud_tlb_range(struct vm_area_struct *vma,
> +extern void radix__flush_pud_tlb_range(struct mm_area *vma,
>  				       unsigned long start, unsigned long end);
> -extern void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void radix__flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end);
>  extern void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
>  extern void radix__local_flush_tlb_mm(struct mm_struct *mm);
>  extern void radix__local_flush_all_mm(struct mm_struct *mm);
> -extern void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +extern void radix__local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
>  extern void radix__local_flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
>  					      int psize);
>  extern void radix__tlb_flush(struct mmu_gather *tlb);
>  #ifdef CONFIG_SMP
>  extern void radix__flush_tlb_mm(struct mm_struct *mm);
>  extern void radix__flush_all_mm(struct mm_struct *mm);
> -extern void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +extern void radix__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
>  extern void radix__flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
>  					int psize);
>  #else
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
> index fd642b729775..73cc7feff758 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
> @@ -44,7 +44,7 @@ static inline void tlbiel_all_lpid(bool radix)
>
>
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> -static inline void flush_pmd_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_pmd_tlb_range(struct mm_area *vma,
>  				       unsigned long start, unsigned long end)
>  {
>  	if (radix_enabled())
> @@ -52,7 +52,7 @@ static inline void flush_pmd_tlb_range(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_FLUSH_PUD_TLB_RANGE
> -static inline void flush_pud_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_pud_tlb_range(struct mm_area *vma,
>  				       unsigned long start, unsigned long end)
>  {
>  	if (radix_enabled())
> @@ -60,7 +60,7 @@ static inline void flush_pud_tlb_range(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
> -static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_hugetlb_tlb_range(struct mm_area *vma,
>  					   unsigned long start,
>  					   unsigned long end)
>  {
> @@ -68,7 +68,7 @@ static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
>  		radix__flush_hugetlb_tlb_range(vma, start, end);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  	if (radix_enabled())
> @@ -88,7 +88,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
>  		radix__local_flush_tlb_mm(mm);
>  }
>
> -static inline void local_flush_tlb_page(struct vm_area_struct *vma,
> +static inline void local_flush_tlb_page(struct mm_area *vma,
>  					unsigned long vmaddr)
>  {
>  	if (radix_enabled())
> @@ -117,7 +117,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  		radix__flush_tlb_mm(mm);
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma,
> +static inline void flush_tlb_page(struct mm_area *vma,
>  				  unsigned long vmaddr)
>  {
>  	if (radix_enabled())
> @@ -129,7 +129,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>  #endif /* CONFIG_SMP */
>
>  #define flush_tlb_fix_spurious_fault flush_tlb_fix_spurious_fault
> -static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
> +static inline void flush_tlb_fix_spurious_fault(struct mm_area *vma,
>  						unsigned long address,
>  						pte_t *ptep)
>  {
> diff --git a/arch/powerpc/include/asm/cacheflush.h b/arch/powerpc/include/asm/cacheflush.h
> index f2656774aaa9..a7be13f896ca 100644
> --- a/arch/powerpc/include/asm/cacheflush.h
> +++ b/arch/powerpc/include/asm/cacheflush.h
> @@ -53,7 +53,7 @@ static inline void flush_dcache_page(struct page *page)
>  void flush_icache_range(unsigned long start, unsigned long stop);
>  #define flush_icache_range flush_icache_range
>
> -void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long addr, int len);
>  #define flush_icache_user_page flush_icache_user_page
>
> diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
> index 86326587e58d..84540436e22c 100644
> --- a/arch/powerpc/include/asm/hugetlb.h
> +++ b/arch/powerpc/include/asm/hugetlb.h
> @@ -52,7 +52,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  	pte_t pte;
> @@ -64,7 +64,7 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  			       unsigned long addr, pte_t *ptep,
>  			       pte_t pte, int dirty);
>
> @@ -72,7 +72,7 @@ void gigantic_hugetlb_cma_reserve(void) __init;
>  #include <asm-generic/hugetlb.h>
>
>  #else /* ! CONFIG_HUGETLB_PAGE */
> -static inline void flush_hugetlb_page(struct vm_area_struct *vma,
> +static inline void flush_hugetlb_page(struct mm_area *vma,
>  				      unsigned long vmaddr)
>  {
>  }
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index a157ab513347..9677c3775f7a 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -258,11 +258,11 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
>  extern void arch_exit_mmap(struct mm_struct *mm);
>
>  #ifdef CONFIG_PPC_MEM_KEYS
> -bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
> +bool arch_vma_access_permitted(struct mm_area *vma, bool write,
>  			       bool execute, bool foreign);
>  void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm);
>  #else /* CONFIG_PPC_MEM_KEYS */
> -static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
> +static inline bool arch_vma_access_permitted(struct mm_area *vma,
>  		bool write, bool execute, bool foreign)
>  {
>  	/* by default, allow everything */
> diff --git a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
> index 014799557f60..5f9e81383526 100644
> --- a/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
> +++ b/arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h
> @@ -4,7 +4,7 @@
>
>  #define PAGE_SHIFT_8M		23
>
> -static inline void flush_hugetlb_page(struct vm_area_struct *vma,
> +static inline void flush_hugetlb_page(struct mm_area *vma,
>  				      unsigned long vmaddr)
>  {
>  	flush_tlb_page(vma, vmaddr);
> diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
> index 54ebb91dbdcf..ac6c02a4c26e 100644
> --- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
> +++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
> @@ -128,7 +128,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
>  }
>  #define ptep_set_wrprotect ptep_set_wrprotect
>
> -static inline void __ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
> +static inline void __ptep_set_access_flags(struct mm_area *vma, pte_t *ptep,
>  					   pte_t entry, unsigned long address, int psize)
>  {
>  	unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_EXEC);
> diff --git a/arch/powerpc/include/asm/nohash/hugetlb-e500.h b/arch/powerpc/include/asm/nohash/hugetlb-e500.h
> index cab0e1f1eea0..788c610b8dff 100644
> --- a/arch/powerpc/include/asm/nohash/hugetlb-e500.h
> +++ b/arch/powerpc/include/asm/nohash/hugetlb-e500.h
> @@ -2,7 +2,7 @@
>  #ifndef _ASM_POWERPC_NOHASH_HUGETLB_E500_H
>  #define _ASM_POWERPC_NOHASH_HUGETLB_E500_H
>
> -void flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +void flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr);
>
>  static inline int check_and_get_huge_psize(int shift)
>  {
> diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
> index 8d1f0b7062eb..0aad651197ef 100644
> --- a/arch/powerpc/include/asm/nohash/pgtable.h
> +++ b/arch/powerpc/include/asm/nohash/pgtable.h
> @@ -99,7 +99,7 @@ static inline pte_basic_t pte_update(struct mm_struct *mm, unsigned long addr, p
>  }
>  #endif
>
> -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int ptep_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long addr, pte_t *ptep)
>  {
>  	unsigned long old;
> @@ -133,7 +133,7 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
>
>  /* Set the dirty and/or accessed bits atomically in a linux PTE */
>  #ifndef __ptep_set_access_flags
> -static inline void __ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline void __ptep_set_access_flags(struct mm_area *vma,
>  					   pte_t *ptep, pte_t entry,
>  					   unsigned long address,
>  					   int psize)
> diff --git a/arch/powerpc/include/asm/nohash/tlbflush.h b/arch/powerpc/include/asm/nohash/tlbflush.h
> index 9a2cf83ea4f1..8f013d3b3e17 100644
> --- a/arch/powerpc/include/asm/nohash/tlbflush.h
> +++ b/arch/powerpc/include/asm/nohash/tlbflush.h
> @@ -23,12 +23,12 @@
>   * specific tlbie's
>   */
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct mm_struct;
>
>  #define MMU_NO_CONTEXT      	((unsigned int)-1)
>
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end);
>
>  #ifdef CONFIG_PPC_8xx
> @@ -40,7 +40,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
>  		asm volatile ("sync; tlbia; isync" : : : "memory");
>  }
>
> -static inline void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +static inline void local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	asm volatile ("tlbie %0; sync" : : "r" (vmaddr) : "memory");
>  }
> @@ -63,7 +63,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
>  #else
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  extern void local_flush_tlb_mm(struct mm_struct *mm);
> -extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +extern void local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
>  void local_flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr, int psize);
>
>  extern void __local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
> @@ -72,7 +72,7 @@ extern void __local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
>
>  #ifdef CONFIG_SMP
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long vmaddr);
>  extern void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
>  			     int tsize, int ind);
>  #else
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index af9a2628d1df..c5d6d4087e3c 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -280,7 +280,7 @@ void arch_free_page(struct page *page, int order);
>  #define HAVE_ARCH_FREE_PAGE
>  #endif
>
> -struct vm_area_struct;
> +struct mm_area;
>
>  extern unsigned long kernstart_virt_addr;
>
> diff --git a/arch/powerpc/include/asm/pci.h b/arch/powerpc/include/asm/pci.h
> index 46a9c4491ed0..1fa9e34182b4 100644
> --- a/arch/powerpc/include/asm/pci.h
> +++ b/arch/powerpc/include/asm/pci.h
> @@ -67,7 +67,7 @@ extern int pci_domain_nr(struct pci_bus *bus);
>  /* Decide whether to display the domain number in /proc */
>  extern int pci_proc_domain(struct pci_bus *bus);
>
> -struct vm_area_struct;
> +struct mm_area;
>
>  /* Tell PCI code what kind of PCI resource mappings we support */
>  #define HAVE_PCI_MMAP			1
> @@ -80,7 +80,7 @@ extern int pci_legacy_read(struct pci_bus *bus, loff_t port, u32 *val,
>  extern int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val,
>  			   size_t count);
>  extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
> -				      struct vm_area_struct *vma,
> +				      struct mm_area *vma,
>  				      enum pci_mmap_state mmap_state);
>  extern void pci_adjust_legacy_attr(struct pci_bus *bus,
>  				   enum pci_mmap_state mmap_type);
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 2f72ad885332..d375c25ff925 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -119,7 +119,7 @@ static inline void mark_initmem_nx(void) { }
>  #endif
>
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> -int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +int ptep_set_access_flags(struct mm_area *vma, unsigned long address,
>  			  pte_t *ptep, pte_t entry, int dirty);
>
>  pgprot_t __phys_mem_access_prot(unsigned long pfn, unsigned long size,
> @@ -133,7 +133,7 @@ static inline pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn
>  }
>  #define __HAVE_PHYS_MEM_ACCESS_PROT
>
> -void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);
> +void __update_mmu_cache(struct mm_area *vma, unsigned long address, pte_t *ptep);
>
>  /*
>   * This gets called at the end of handling a page fault, when
> @@ -145,7 +145,7 @@ void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t
>   * waiting for the inevitable extra hash-table miss exception.
>   */
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	if ((mmu_has_feature(MMU_FTR_HPTE_TABLE) && !radix_enabled()) ||
> diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
> index 59a2c7dbc78f..b36ac2edf846 100644
> --- a/arch/powerpc/include/asm/pkeys.h
> +++ b/arch/powerpc/include/asm/pkeys.h
> @@ -35,7 +35,7 @@ static inline u64 pkey_to_vmflag_bits(u16 pkey)
>  	return (((u64)pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS);
>  }
>
> -static inline int vma_pkey(struct vm_area_struct *vma)
> +static inline int vma_pkey(struct mm_area *vma)
>  {
>  	if (!mmu_has_feature(MMU_FTR_PKEY))
>  		return 0;
> @@ -125,9 +125,9 @@ static inline int mm_pkey_free(struct mm_struct *mm, int pkey)
>   * execute-only protection key.
>   */
>  extern int execute_only_pkey(struct mm_struct *mm);
> -extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
> +extern int __arch_override_mprotect_pkey(struct mm_area *vma,
>  					 int prot, int pkey);
> -static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
> +static inline int arch_override_mprotect_pkey(struct mm_area *vma,
>  					      int prot, int pkey)
>  {
>  	if (!mmu_has_feature(MMU_FTR_PKEY))
> diff --git a/arch/powerpc/include/asm/vas.h b/arch/powerpc/include/asm/vas.h
> index c36f71e01c0f..086d494bd3d9 100644
> --- a/arch/powerpc/include/asm/vas.h
> +++ b/arch/powerpc/include/asm/vas.h
> @@ -71,7 +71,7 @@ struct vas_user_win_ref {
>  	struct mm_struct *mm;	/* Linux process mm_struct */
>  	struct mutex mmap_mutex;	/* protects paste address mmap() */
>  					/* with DLPAR close/open windows */
> -	struct vm_area_struct *vma;	/* Save VMA and used in DLPAR ops */
> +	struct mm_area *vma;	/* Save VMA and used in DLPAR ops */
>  };
>
>  /*
> diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
> index eac84d687b53..ce9a82d8120f 100644
> --- a/arch/powerpc/kernel/pci-common.c
> +++ b/arch/powerpc/kernel/pci-common.c
> @@ -501,7 +501,7 @@ static int pci_read_irq_line(struct pci_dev *pci_dev)
>   * Platform support for /proc/bus/pci/X/Y mmap()s.
>   *  -- paulus.
>   */
> -int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
> +int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma)
>  {
>  	struct pci_controller *hose = pci_bus_to_host(pdev->bus);
>  	resource_size_t ioaddr = pci_resource_start(pdev, bar);
> @@ -651,7 +651,7 @@ int pci_legacy_write(struct pci_bus *bus, loff_t port, u32 val, size_t size)
>
>  /* This provides legacy IO or memory mmap access on a bus */
>  int pci_mmap_legacy_page_range(struct pci_bus *bus,
> -			       struct vm_area_struct *vma,
> +			       struct mm_area *vma,
>  			       enum pci_mmap_state mmap_state)
>  {
>  	struct pci_controller *hose = pci_bus_to_host(bus);
> diff --git a/arch/powerpc/kernel/proc_powerpc.c b/arch/powerpc/kernel/proc_powerpc.c
> index 3816a2bf2b84..c80bc0cb32db 100644
> --- a/arch/powerpc/kernel/proc_powerpc.c
> +++ b/arch/powerpc/kernel/proc_powerpc.c
> @@ -30,7 +30,7 @@ static ssize_t page_map_read( struct file *file, char __user *buf, size_t nbytes
>  			pde_data(file_inode(file)), PAGE_SIZE);
>  }
>
> -static int page_map_mmap( struct file *file, struct vm_area_struct *vma )
> +static int page_map_mmap( struct file *file, struct mm_area *vma )
>  {
>  	if ((vma->vm_end - vma->vm_start) > PAGE_SIZE)
>  		return -EINVAL;
> diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
> index 219d67bcf747..f6a853ae5dc7 100644
> --- a/arch/powerpc/kernel/vdso.c
> +++ b/arch/powerpc/kernel/vdso.c
> @@ -42,7 +42,7 @@ extern char vdso64_start, vdso64_end;
>
>  long sys_ni_syscall(void);
>
> -static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma,
> +static int vdso_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma,
>  		       unsigned long text_size)
>  {
>  	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;
> @@ -55,17 +55,17 @@ static int vdso_mremap(const struct vm_special_mapping *sm, struct vm_area_struc
>  	return 0;
>  }
>
> -static int vdso32_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
> +static int vdso32_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
>  {
>  	return vdso_mremap(sm, new_vma, &vdso32_end - &vdso32_start);
>  }
>
> -static int vdso64_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
> +static int vdso64_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
>  {
>  	return vdso_mremap(sm, new_vma, &vdso64_end - &vdso64_start);
>  }
>
> -static void vdso_close(const struct vm_special_mapping *sm, struct vm_area_struct *vma)
> +static void vdso_close(const struct vm_special_mapping *sm, struct mm_area *vma)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> @@ -102,7 +102,7 @@ static int __arch_setup_additional_pages(struct linux_binprm *bprm, int uses_int
>  	struct vm_special_mapping *vdso_spec;
>  	unsigned long vvar_size = VDSO_NR_PAGES * PAGE_SIZE;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if (is_32bit_task()) {
>  		vdso_spec = &vdso32_spec;
> diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c
> index 742aa58a7c7e..236d3f95c4dd 100644
> --- a/arch/powerpc/kvm/book3s_64_vio.c
> +++ b/arch/powerpc/kvm/book3s_64_vio.c
> @@ -247,7 +247,7 @@ static const struct vm_operations_struct kvm_spapr_tce_vm_ops = {
>  	.fault = kvm_spapr_tce_fault,
>  };
>
> -static int kvm_spapr_tce_mmap(struct file *file, struct vm_area_struct *vma)
> +static int kvm_spapr_tce_mmap(struct file *file, struct mm_area *vma)
>  {
>  	vma->vm_ops = &kvm_spapr_tce_vm_ops;
>  	return 0;
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 86bff159c51e..62de957ec6da 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -5473,7 +5473,7 @@ static int kvmppc_hv_setup_htab_rma(struct kvm_vcpu *vcpu)
>  	struct kvm *kvm = vcpu->kvm;
>  	unsigned long hva;
>  	struct kvm_memory_slot *memslot;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long lpcr = 0, senc;
>  	unsigned long psize, porder;
>  	int srcu_idx;
> diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
> index 3a6592a31a10..16a49d4b5e47 100644
> --- a/arch/powerpc/kvm/book3s_hv_uvmem.c
> +++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
> @@ -395,7 +395,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
>  	unsigned long end, start = gfn_to_hva(kvm, gfn);
>  	unsigned long vm_flags;
>  	int ret = 0;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int merge_flag = (merge) ? MADV_MERGEABLE : MADV_UNMERGEABLE;
>
>  	if (kvm_is_error_hva(start))
> @@ -510,7 +510,7 @@ unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
>   * from secure memory using UV_PAGE_OUT uvcall.
>   * Caller must hold kvm->arch.uvmem_lock.
>   */
> -static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
> +static int __kvmppc_svm_page_out(struct mm_area *vma,
>  		unsigned long start,
>  		unsigned long end, unsigned long page_shift,
>  		struct kvm *kvm, unsigned long gpa, struct page *fault_page)
> @@ -583,7 +583,7 @@ static int __kvmppc_svm_page_out(struct vm_area_struct *vma,
>  	return ret;
>  }
>
> -static inline int kvmppc_svm_page_out(struct vm_area_struct *vma,
> +static inline int kvmppc_svm_page_out(struct mm_area *vma,
>  				      unsigned long start, unsigned long end,
>  				      unsigned long page_shift,
>  				      struct kvm *kvm, unsigned long gpa,
> @@ -613,7 +613,7 @@ void kvmppc_uvmem_drop_pages(const struct kvm_memory_slot *slot,
>  	int i;
>  	struct kvmppc_uvmem_page_pvt *pvt;
>  	struct page *uvmem_page;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	unsigned long uvmem_pfn, gfn;
>  	unsigned long addr;
>
> @@ -737,7 +737,7 @@ static struct page *kvmppc_uvmem_get_page(unsigned long gpa, struct kvm *kvm)
>   * Alloc a PFN from private device memory pool. If @pagein is true,
>   * copy page from normal memory to secure memory using UV_PAGE_IN uvcall.
>   */
> -static int kvmppc_svm_page_in(struct vm_area_struct *vma,
> +static int kvmppc_svm_page_in(struct mm_area *vma,
>  		unsigned long start,
>  		unsigned long end, unsigned long gpa, struct kvm *kvm,
>  		unsigned long page_shift,
> @@ -795,7 +795,7 @@ static int kvmppc_uv_migrate_mem_slot(struct kvm *kvm,
>  		const struct kvm_memory_slot *memslot)
>  {
>  	unsigned long gfn = memslot->base_gfn;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long start, end;
>  	int ret = 0;
>
> @@ -937,7 +937,7 @@ unsigned long kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
>  		unsigned long page_shift)
>  {
>  	unsigned long start, end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int srcu_idx;
>  	unsigned long gfn = gpa >> page_shift;
>  	int ret;
> @@ -1047,7 +1047,7 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
>  {
>  	unsigned long gfn = gpa >> page_shift;
>  	unsigned long start, end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int srcu_idx;
>  	int ret;
>
> diff --git a/arch/powerpc/kvm/book3s_xive_native.c b/arch/powerpc/kvm/book3s_xive_native.c
> index d9bf1bc3ff61..90ff2d0ed2a7 100644
> --- a/arch/powerpc/kvm/book3s_xive_native.c
> +++ b/arch/powerpc/kvm/book3s_xive_native.c
> @@ -227,7 +227,7 @@ static struct kvmppc_xive_ops kvmppc_xive_native_ops =  {
>
>  static vm_fault_t xive_native_esb_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct kvm_device *dev = vma->vm_file->private_data;
>  	struct kvmppc_xive *xive = dev->private;
>  	struct kvmppc_xive_src_block *sb;
> @@ -287,7 +287,7 @@ static const struct vm_operations_struct xive_native_esb_vmops = {
>
>  static vm_fault_t xive_native_tima_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>
>  	switch (vmf->pgoff - vma->vm_pgoff) {
>  	case 0: /* HW - forbid access */
> @@ -307,7 +307,7 @@ static const struct vm_operations_struct xive_native_tima_vmops = {
>  };
>
>  static int kvmppc_xive_native_mmap(struct kvm_device *dev,
> -				   struct vm_area_struct *vma)
> +				   struct mm_area *vma)
>  {
>  	struct kvmppc_xive *xive = dev->private;
>
> diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
> index be9c4106e22f..438af9822627 100644
> --- a/arch/powerpc/mm/book3s32/mmu.c
> +++ b/arch/powerpc/mm/book3s32/mmu.c
> @@ -319,7 +319,7 @@ static void hash_preload(struct mm_struct *mm, unsigned long ea)
>   *
>   * This must always be called with the pte lock held.
>   */
> -void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
> +void __update_mmu_cache(struct mm_area *vma, unsigned long address,
>  		      pte_t *ptep)
>  {
>  	/*
> diff --git a/arch/powerpc/mm/book3s32/tlb.c b/arch/powerpc/mm/book3s32/tlb.c
> index 9ad6b56bfec9..badcf34a99b4 100644
> --- a/arch/powerpc/mm/book3s32/tlb.c
> +++ b/arch/powerpc/mm/book3s32/tlb.c
> @@ -80,7 +80,7 @@ EXPORT_SYMBOL(hash__flush_range);
>   */
>  void hash__flush_tlb_mm(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *mp;
> +	struct mm_area *mp;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	/*
> @@ -94,7 +94,7 @@ void hash__flush_tlb_mm(struct mm_struct *mm)
>  }
>  EXPORT_SYMBOL(hash__flush_tlb_mm);
>
> -void hash__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void hash__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	struct mm_struct *mm;
>  	pmd_t *pmd;
> diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
> index 988948d69bc1..444a148f54f8 100644
> --- a/arch/powerpc/mm/book3s64/hash_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
> @@ -220,7 +220,7 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr
>  	return old;
>  }
>
> -pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
> +pmd_t hash__pmdp_collapse_flush(struct mm_area *vma, unsigned long address,
>  			    pmd_t *pmdp)
>  {
>  	pmd_t pmd;
> diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
> index 5158aefe4873..8a135a261f2e 100644
> --- a/arch/powerpc/mm/book3s64/hash_utils.c
> +++ b/arch/powerpc/mm/book3s64/hash_utils.c
> @@ -2099,7 +2099,7 @@ static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
>   *
>   * This must always be called with the pte lock held.
>   */
> -void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
> +void __update_mmu_cache(struct mm_area *vma, unsigned long address,
>  		      pte_t *ptep)
>  {
>  	/*
> diff --git a/arch/powerpc/mm/book3s64/hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c
> index 83c3361b358b..a26f928dbf56 100644
> --- a/arch/powerpc/mm/book3s64/hugetlbpage.c
> +++ b/arch/powerpc/mm/book3s64/hugetlbpage.c
> @@ -135,7 +135,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
>  }
>  #endif
>
> -pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
> +pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
>  				  unsigned long addr, pte_t *ptep)
>  {
>  	unsigned long pte_val;
> @@ -150,7 +150,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
>  	return __pte(pte_val);
>  }
>
> -void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
> +void huge_ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
>  				  pte_t *ptep, pte_t old_pte, pte_t pte)
>  {
>  	unsigned long psize;
> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
> index c0e8d597e4cb..fbf8a7ae297a 100644
> --- a/arch/powerpc/mm/book3s64/iommu_api.c
> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
> @@ -98,7 +98,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>
>  	mmap_read_lock(mm);
>  	chunk = (1UL << (PAGE_SHIFT + MAX_PAGE_ORDER)) /
> -			sizeof(struct vm_area_struct *);
> +			sizeof(struct mm_area *);
>  	chunk = min(chunk, entries);
>  	for (entry = 0; entry < entries; entry += chunk) {
>  		unsigned long n = min(entries - entry, chunk);
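
The sizeof() hunk here is worth a second look: the type appears only as a
pointer inside an expression, with no variable declared at all. Since sizeof
of a pointer never needs the struct layout, a forward declaration is enough,
roughly (toy sketch, not the kernel code):

	struct mm_area;	/* opaque: only the pointer size matters */

	static unsigned long chunk_entries(unsigned long chunk_bytes)
	{
		/* how many struct mm_area pointers fit per chunk, as above */
		return chunk_bytes / sizeof(struct mm_area *);
	}

so the rename has to match bare "struct mm_area" tokens, not just declarations.
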
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 8f7d41ce2ca1..58f7938e9872 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -57,7 +57,7 @@ early_param("kfence.sample_interval", parse_kfence_early_init);
>   * handled those two for us, we additionally deal with missing execute
>   * permission here on some processors
>   */
> -int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +int pmdp_set_access_flags(struct mm_area *vma, unsigned long address,
>  			  pmd_t *pmdp, pmd_t entry, int dirty)
>  {
>  	int changed;
> @@ -77,7 +77,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>  	return changed;
>  }
>
> -int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +int pudp_set_access_flags(struct mm_area *vma, unsigned long address,
>  			  pud_t *pudp, pud_t entry, int dirty)
>  {
>  	int changed;
> @@ -98,13 +98,13 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>  }
>
>
> -int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +int pmdp_test_and_clear_young(struct mm_area *vma,
>  			      unsigned long address, pmd_t *pmdp)
>  {
>  	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
>  }
>
> -int pudp_test_and_clear_young(struct vm_area_struct *vma,
> +int pudp_test_and_clear_young(struct mm_area *vma,
>  			      unsigned long address, pud_t *pudp)
>  {
>  	return __pudp_test_and_clear_young(vma->vm_mm, address, pudp);
> @@ -177,7 +177,7 @@ void serialize_against_pte_lookup(struct mm_struct *mm)
>   * We use this to invalidate a pmdp entry before switching from a
>   * hugepte to regular pmd entry.
>   */
> -pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
>  		     pmd_t *pmdp)
>  {
>  	unsigned long old_pmd;
> @@ -188,7 +188,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  	return __pmd(old_pmd);
>  }
>
> -pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
>  		      pud_t *pudp)
>  {
>  	unsigned long old_pud;
> @@ -199,7 +199,7 @@ pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  	return __pud(old_pud);
>  }
>
> -pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
>  				   unsigned long addr, pmd_t *pmdp, int full)
>  {
>  	pmd_t pmd;
> @@ -217,7 +217,7 @@ pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
>  	return pmd;
>  }
>
> -pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +pud_t pudp_huge_get_and_clear_full(struct mm_area *vma,
>  				   unsigned long addr, pud_t *pudp, int full)
>  {
>  	pud_t pud;
> @@ -534,7 +534,7 @@ void arch_report_meminfo(struct seq_file *m)
>  }
>  #endif /* CONFIG_PROC_FS */
>
> -pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
> +pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr,
>  			     pte_t *ptep)
>  {
>  	unsigned long pte_val;
> @@ -550,7 +550,7 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
>
>  }
>
> -void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
> +void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
>  			     pte_t *ptep, pte_t old_pte, pte_t pte)
>  {
>  	if (radix_enabled())
> @@ -574,7 +574,7 @@ void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
>   */
>  int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
>  			   struct spinlock *old_pmd_ptl,
> -			   struct vm_area_struct *vma)
> +			   struct mm_area *vma)
>  {
>  	if (radix_enabled())
>  		return (new_pmd_ptl != old_pmd_ptl) && vma_is_anonymous(vma);
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
> index a974baf8f327..3bdeb406fa0f 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -376,7 +376,7 @@ int execute_only_pkey(struct mm_struct *mm)
>  	return mm->context.execute_only_pkey;
>  }
>
> -static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
> +static inline bool vma_is_pkey_exec_only(struct mm_area *vma)
>  {
>  	/* Do this check first since the vm_flags should be hot */
>  	if ((vma->vm_flags & VM_ACCESS_FLAGS) != VM_EXEC)
> @@ -388,7 +388,7 @@ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
>  /*
>   * This should only be called for *plain* mprotect calls.
>   */
> -int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot,
> +int __arch_override_mprotect_pkey(struct mm_area *vma, int prot,
>  				  int pkey)
>  {
>  	/*
> @@ -444,7 +444,7 @@ bool arch_pte_access_permitted(u64 pte, bool write, bool execute)
>   * So do not enforce things if the VMA is not from the current mm, or if we are
>   * in a kernel thread.
>   */
> -bool arch_vma_access_permitted(struct vm_area_struct *vma, bool write,
> +bool arch_vma_access_permitted(struct mm_area *vma, bool write,
>  			       bool execute, bool foreign)
>  {
>  	if (!mmu_has_feature(MMU_FTR_PKEY))
> diff --git a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
> index 35fd2a95be24..81569a2ec474 100644
> --- a/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
> +++ b/arch/powerpc/mm/book3s64/radix_hugetlbpage.c
> @@ -7,7 +7,7 @@
>  #include <asm/mman.h>
>  #include <asm/tlb.h>
>
> -void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void radix__flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	int psize;
>  	struct hstate *hstate = hstate_file(vma->vm_file);
> @@ -16,7 +16,7 @@ void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
>  	radix__flush_tlb_page_psize(vma->vm_mm, vmaddr, psize);
>  }
>
> -void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void radix__local_flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	int psize;
>  	struct hstate *hstate = hstate_file(vma->vm_file);
> @@ -25,7 +25,7 @@ void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long v
>  	radix__local_flush_tlb_page_psize(vma->vm_mm, vmaddr, psize);
>  }
>
> -void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void radix__flush_hugetlb_tlb_range(struct mm_area *vma, unsigned long start,
>  				   unsigned long end)
>  {
>  	int psize;
> @@ -42,7 +42,7 @@ void radix__flush_hugetlb_tlb_range(struct vm_area_struct *vma, unsigned long st
>  	mmu_notifier_arch_invalidate_secondary_tlbs(vma->vm_mm, start, end);
>  }
>
> -void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> +void radix__huge_ptep_modify_prot_commit(struct mm_area *vma,
>  					 unsigned long addr, pte_t *ptep,
>  					 pte_t old_pte, pte_t pte)
>  {
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 311e2112d782..abb8ee24f4ec 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -1439,7 +1439,7 @@ unsigned long radix__pud_hugepage_update(struct mm_struct *mm, unsigned long add
>  	return old;
>  }
>
> -pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
> +pmd_t radix__pmdp_collapse_flush(struct mm_area *vma, unsigned long address,
>  			pmd_t *pmdp)
>
>  {
> @@ -1528,7 +1528,7 @@ pud_t radix__pudp_huge_get_and_clear(struct mm_struct *mm,
>
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> -void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
> +void radix__ptep_set_access_flags(struct mm_area *vma, pte_t *ptep,
>  				  pte_t entry, unsigned long address, int psize)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -1570,7 +1570,7 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
>  	/* See ptesync comment in radix__set_pte_at */
>  }
>
> -void radix__ptep_modify_prot_commit(struct vm_area_struct *vma,
> +void radix__ptep_modify_prot_commit(struct mm_area *vma,
>  				    unsigned long addr, pte_t *ptep,
>  				    pte_t old_pte, pte_t pte)
>  {
> diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
> index 9e1f6558d026..522515490a77 100644
> --- a/arch/powerpc/mm/book3s64/radix_tlb.c
> +++ b/arch/powerpc/mm/book3s64/radix_tlb.c
> @@ -625,7 +625,7 @@ void radix__local_flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmadd
>  	preempt_enable();
>  }
>
> -void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void radix__local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  #ifdef CONFIG_HUGETLB_PAGE
>  	/* need the return fix for nohash.c */
> @@ -947,7 +947,7 @@ void radix__flush_tlb_page_psize(struct mm_struct *mm, unsigned long vmaddr,
>  	preempt_enable();
>  }
>
> -void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void radix__flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (is_vm_hugetlb_page(vma))
> @@ -1114,7 +1114,7 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm,
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
>  }
>
> -void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void radix__flush_tlb_range(struct mm_area *vma, unsigned long start,
>  		     unsigned long end)
>
>  {
> @@ -1360,14 +1360,14 @@ void radix__flush_tlb_collapsed_pmd(struct mm_struct *mm, unsigned long addr)
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> -void radix__flush_pmd_tlb_range(struct vm_area_struct *vma,
> +void radix__flush_pmd_tlb_range(struct mm_area *vma,
>  				unsigned long start, unsigned long end)
>  {
>  	radix__flush_tlb_range_psize(vma->vm_mm, start, end, MMU_PAGE_2M);
>  }
>  EXPORT_SYMBOL(radix__flush_pmd_tlb_range);
>
> -void radix__flush_pud_tlb_range(struct vm_area_struct *vma,
> +void radix__flush_pud_tlb_range(struct mm_area *vma,
>  				unsigned long start, unsigned long end)
>  {
>  	radix__flush_tlb_range_psize(vma->vm_mm, start, end, MMU_PAGE_1G);
> diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
> index 28bec5bc7879..7ea8f4a1046b 100644
> --- a/arch/powerpc/mm/book3s64/slice.c
> +++ b/arch/powerpc/mm/book3s64/slice.c
> @@ -86,7 +86,7 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
>  static int slice_area_is_free(struct mm_struct *mm, unsigned long addr,
>  			      unsigned long len)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if ((mm_ctx_slb_addr_limit(&mm->context) - len) < addr)
>  		return 0;
> @@ -808,7 +808,7 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>  	return !slice_check_range_fits(mm, maskp, addr, len);
>  }
>
> -unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
> +unsigned long vma_mmu_pagesize(struct mm_area *vma)
>  {
>  	/* With radix we don't use slice, so derive it from vma*/
>  	if (radix_enabled())
> diff --git a/arch/powerpc/mm/book3s64/subpage_prot.c b/arch/powerpc/mm/book3s64/subpage_prot.c
> index ec98e526167e..574aa22bb238 100644
> --- a/arch/powerpc/mm/book3s64/subpage_prot.c
> +++ b/arch/powerpc/mm/book3s64/subpage_prot.c
> @@ -138,7 +138,7 @@ static void subpage_prot_clear(unsigned long addr, unsigned long len)
>  static int subpage_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
>  				  unsigned long end, struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	split_huge_pmd(vma, pmd, addr);
>  	return 0;
>  }
> @@ -151,7 +151,7 @@ static const struct mm_walk_ops subpage_walk_ops = {
>  static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr,
>  				    unsigned long len)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, addr);
>
>  	/*
> diff --git a/arch/powerpc/mm/cacheflush.c b/arch/powerpc/mm/cacheflush.c
> index 7186516eca52..75547ebd112c 100644
> --- a/arch/powerpc/mm/cacheflush.c
> +++ b/arch/powerpc/mm/cacheflush.c
> @@ -210,7 +210,7 @@ void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
>  	flush_dcache_page(pg);
>  }
>
> -void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_user_page(struct mm_area *vma, struct page *page,
>  			     unsigned long addr, int len)
>  {
>  	void *maddr;
> diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
> index f5f8692e2c69..b6196e004f19 100644
> --- a/arch/powerpc/mm/copro_fault.c
> +++ b/arch/powerpc/mm/copro_fault.c
> @@ -21,7 +21,7 @@
>  int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
>  		unsigned long dsisr, vm_fault_t *flt)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long is_write;
>  	int ret;
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index c156fe0d53c3..45b8039647f6 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -72,7 +72,7 @@ static noinline int bad_area_nosemaphore(struct pt_regs *regs, unsigned long add
>  }
>
>  static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
> -		      struct mm_struct *mm, struct vm_area_struct *vma)
> +		      struct mm_struct *mm, struct mm_area *vma)
>  {
>
>  	/*
> @@ -89,7 +89,7 @@ static int __bad_area(struct pt_regs *regs, unsigned long address, int si_code,
>
>  static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
>  				    struct mm_struct *mm,
> -				    struct vm_area_struct *vma)
> +				    struct mm_area *vma)
>  {
>  	int pkey;
>
> @@ -131,7 +131,7 @@ static noinline int bad_access_pkey(struct pt_regs *regs, unsigned long address,
>  }
>
>  static noinline int bad_access(struct pt_regs *regs, unsigned long address,
> -			       struct mm_struct *mm, struct vm_area_struct *vma)
> +			       struct mm_struct *mm, struct mm_area *vma)
>  {
>  	return __bad_area(regs, address, SEGV_ACCERR, mm, vma);
>  }
> @@ -235,7 +235,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
>  }
>
>  static bool access_pkey_error(bool is_write, bool is_exec, bool is_pkey,
> -			      struct vm_area_struct *vma)
> +			      struct mm_area *vma)
>  {
>  	/*
>  	 * Make sure to check the VMA so that we do not perform
> @@ -248,7 +248,7 @@ static bool access_pkey_error(bool is_write, bool is_exec, bool is_pkey,
>  	return false;
>  }
>
> -static bool access_error(bool is_write, bool is_exec, struct vm_area_struct *vma)
> +static bool access_error(bool is_write, bool is_exec, struct mm_area *vma)
>  {
>  	/*
>  	 * Allow execution from readable areas if the MMU does not
> @@ -413,7 +413,7 @@ static int page_fault_is_bad(unsigned long err)
>  static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
>  			   unsigned long error_code)
>  {
> -	struct vm_area_struct * vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
>  	int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index d3c1b749dcfc..290850810f27 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -40,7 +40,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr, unsigned long s
>  	return __find_linux_pte(mm->pgd, addr, NULL, NULL);
>  }
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, unsigned long sz)
>  {
>  	p4d_t *p4d;
> diff --git a/arch/powerpc/mm/nohash/e500_hugetlbpage.c b/arch/powerpc/mm/nohash/e500_hugetlbpage.c
> index a134d28a0e4d..1117ec25cafc 100644
> --- a/arch/powerpc/mm/nohash/e500_hugetlbpage.c
> +++ b/arch/powerpc/mm/nohash/e500_hugetlbpage.c
> @@ -116,7 +116,7 @@ static inline int book3e_tlb_exists(unsigned long ea, unsigned long pid)
>  }
>
>  static void
> -book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte)
> +book3e_hugetlb_preload(struct mm_area *vma, unsigned long ea, pte_t pte)
>  {
>  	unsigned long mas1, mas2;
>  	u64 mas7_3;
> @@ -178,13 +178,13 @@ book3e_hugetlb_preload(struct vm_area_struct *vma, unsigned long ea, pte_t pte)
>   *
>   * This must always be called with the pte lock held.
>   */
> -void __update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
> +void __update_mmu_cache(struct mm_area *vma, unsigned long address, pte_t *ptep)
>  {
>  	if (is_vm_hugetlb_page(vma))
>  		book3e_hugetlb_preload(vma, address, *ptep);
>  }
>
> -void flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void flush_hugetlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	struct hstate *hstate = hstate_file(vma->vm_file);
>  	unsigned long tsize = huge_page_shift(hstate) - 10;
> diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
> index 0a650742f3a0..cd62f02ed016 100644
> --- a/arch/powerpc/mm/nohash/tlb.c
> +++ b/arch/powerpc/mm/nohash/tlb.c
> @@ -149,7 +149,7 @@ void __local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
>  	preempt_enable();
>  }
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  	__local_flush_tlb_page(vma ? vma->vm_mm : NULL, vmaddr,
>  			       mmu_get_tsize(mmu_virtual_psize), 0);
> @@ -275,7 +275,7 @@ void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
>  	preempt_enable();
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long vmaddr)
>  {
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (vma && is_vm_hugetlb_page(vma))
> @@ -313,7 +313,7 @@ EXPORT_SYMBOL(flush_tlb_kernel_range);
>   * some implementation can stack multiple tlbivax before a tlbsync but
>   * for now, we keep it that way
>   */
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  		     unsigned long end)
>
>  {
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index 61df5aed7989..425f2f8a2d95 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -141,7 +141,7 @@ static inline pte_t set_pte_filter(pte_t pte, unsigned long addr)
>  	return pte_exprotect(pte);
>  }
>
> -static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
> +static pte_t set_access_flags_filter(pte_t pte, struct mm_area *vma,
>  				     int dirty)
>  {
>  	struct folio *folio;
> @@ -240,7 +240,7 @@ void unmap_kernel_page(unsigned long va)
>   * handled those two for us, we additionally deal with missing execute
>   * permission here on some processors
>   */
> -int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +int ptep_set_access_flags(struct mm_area *vma, unsigned long address,
>  			  pte_t *ptep, pte_t entry, int dirty)
>  {
>  	int changed;
> @@ -255,7 +255,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>  }
>
>  #ifdef CONFIG_HUGETLB_PAGE
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  			       unsigned long addr, pte_t *ptep,
>  			       pte_t pte, int dirty)
>  {
> diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
> index 0b6365d85d11..ee6e08d98377 100644
> --- a/arch/powerpc/platforms/book3s/vas-api.c
> +++ b/arch/powerpc/platforms/book3s/vas-api.c
> @@ -394,7 +394,7 @@ static int do_fail_paste(void)
>   */
>  static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct file *fp = vma->vm_file;
>  	struct coproc_instance *cp_inst = fp->private_data;
>  	struct vas_window *txwin;
> @@ -472,7 +472,7 @@ static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
>   * be invalid. Set VAS window VMA to NULL in this function which
>   * is called before VMA free.
>   */
> -static void vas_mmap_close(struct vm_area_struct *vma)
> +static void vas_mmap_close(struct mm_area *vma)
>  {
>  	struct file *fp = vma->vm_file;
>  	struct coproc_instance *cp_inst = fp->private_data;
> @@ -504,7 +504,7 @@ static const struct vm_operations_struct vas_vm_ops = {
>  	.fault = vas_mmap_fault,
>  };
>
> -static int coproc_mmap(struct file *fp, struct vm_area_struct *vma)
> +static int coproc_mmap(struct file *fp, struct mm_area *vma)
>  {
>  	struct coproc_instance *cp_inst = fp->private_data;
>  	struct vas_window *txwin;
> diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
> index d5a2c77bc908..a7ec9abc6d00 100644
> --- a/arch/powerpc/platforms/cell/spufs/file.c
> +++ b/arch/powerpc/platforms/cell/spufs/file.c
> @@ -229,7 +229,7 @@ spufs_mem_write(struct file *file, const char __user *buffer,
>  static vm_fault_t
>  spufs_mem_mmap_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct spu_context *ctx	= vma->vm_file->private_data;
>  	unsigned long pfn, offset;
>  	vm_fault_t ret;
> @@ -258,7 +258,7 @@ spufs_mem_mmap_fault(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -static int spufs_mem_mmap_access(struct vm_area_struct *vma,
> +static int spufs_mem_mmap_access(struct mm_area *vma,
>  				unsigned long address,
>  				void *buf, int len, int write)
>  {
> @@ -286,7 +286,7 @@ static const struct vm_operations_struct spufs_mem_mmap_vmops = {
>  	.access = spufs_mem_mmap_access,
>  };
>
> -static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_mem_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> @@ -376,7 +376,7 @@ static const struct vm_operations_struct spufs_cntl_mmap_vmops = {
>  /*
>   * mmap support for problem state control area [0x4000 - 0x4fff].
>   */
> -static int spufs_cntl_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_cntl_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> @@ -1031,7 +1031,7 @@ static const struct vm_operations_struct spufs_signal1_mmap_vmops = {
>  	.fault = spufs_signal1_mmap_fault,
>  };
>
> -static int spufs_signal1_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_signal1_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> @@ -1165,7 +1165,7 @@ static const struct vm_operations_struct spufs_signal2_mmap_vmops = {
>  	.fault = spufs_signal2_mmap_fault,
>  };
>
> -static int spufs_signal2_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_signal2_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> @@ -1286,7 +1286,7 @@ static const struct vm_operations_struct spufs_mss_mmap_vmops = {
>  /*
>   * mmap support for problem state MFC DMA area [0x0000 - 0x0fff].
>   */
> -static int spufs_mss_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_mss_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> @@ -1347,7 +1347,7 @@ static const struct vm_operations_struct spufs_psmap_mmap_vmops = {
>  /*
>   * mmap support for full problem state area [0x00000 - 0x1ffff].
>   */
> -static int spufs_psmap_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_psmap_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> @@ -1406,7 +1406,7 @@ static const struct vm_operations_struct spufs_mfc_mmap_vmops = {
>  /*
>   * mmap support for problem state MFC DMA area [0x0000 - 0x0fff].
>   */
> -static int spufs_mfc_mmap(struct file *file, struct vm_area_struct *vma)
> +static int spufs_mfc_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!(vma->vm_flags & VM_SHARED))
>  		return -EINVAL;
> diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
> index 4ac9808e55a4..1fd35cc9716e 100644
> --- a/arch/powerpc/platforms/powernv/memtrace.c
> +++ b/arch/powerpc/platforms/powernv/memtrace.c
> @@ -45,7 +45,7 @@ static ssize_t memtrace_read(struct file *filp, char __user *ubuf,
>  	return simple_read_from_buffer(ubuf, count, ppos, ent->mem, ent->size);
>  }
>
> -static int memtrace_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int memtrace_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct memtrace_entry *ent = filp->private_data;
>
> diff --git a/arch/powerpc/platforms/powernv/opal-prd.c b/arch/powerpc/platforms/powernv/opal-prd.c
> index dc246ed4b7b4..5a922ddd9b62 100644
> --- a/arch/powerpc/platforms/powernv/opal-prd.c
> +++ b/arch/powerpc/platforms/powernv/opal-prd.c
> @@ -110,7 +110,7 @@ static int opal_prd_open(struct inode *inode, struct file *file)
>   * @vma: VMA to map the registers into
>   */
>
> -static int opal_prd_mmap(struct file *file, struct vm_area_struct *vma)
> +static int opal_prd_mmap(struct file *file, struct mm_area *vma)
>  {
>  	size_t addr, size;
>  	pgprot_t page_prot;
> diff --git a/arch/powerpc/platforms/pseries/vas.c b/arch/powerpc/platforms/pseries/vas.c
> index c25eb1a38185..a47633bd7586 100644
> --- a/arch/powerpc/platforms/pseries/vas.c
> +++ b/arch/powerpc/platforms/pseries/vas.c
> @@ -763,7 +763,7 @@ static int reconfig_close_windows(struct vas_caps *vcap, int excess_creds,
>  {
>  	struct pseries_vas_window *win, *tmp;
>  	struct vas_user_win_ref *task_ref;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int rc = 0, flag;
>
>  	if (migrate)
> diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
> index 446126497768..1a0ebd9019eb 100644
> --- a/arch/riscv/include/asm/hugetlb.h
> +++ b/arch/riscv/include/asm/hugetlb.h
> @@ -32,7 +32,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  			      unsigned long sz);
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  			    unsigned long addr, pte_t *ptep);
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
> @@ -40,7 +40,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  			     unsigned long addr, pte_t *ptep);
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  			       unsigned long addr, pte_t *ptep,
>  			       pte_t pte, int dirty);
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 428e48e5f57d..2fa52e4eae6a 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -506,7 +506,7 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
>
>  /* Commit new configuration to MMU hardware */
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	asm goto(ALTERNATIVE("nop", "j %l[svvptc]", 0, RISCV_ISA_EXT_SVVPTC, 1)
> @@ -535,7 +535,7 @@ svvptc:;
>  #define update_mmu_tlb_range(vma, addr, ptep, nr) \
>  	update_mmu_cache_range(NULL, vma, addr, ptep, nr)
>
> -static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pmd(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmdp)
>  {
>  	pte_t *ptep = (pte_t *)pmdp;
> @@ -593,10 +593,10 @@ static inline void pte_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS	/* defined in mm/pgtable.c */
> -extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +extern int ptep_set_access_flags(struct mm_area *vma, unsigned long address,
>  				 pte_t *ptep, pte_t entry, int dirty);
>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG	/* defined in mm/pgtable.c */
> -extern int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long address,
> +extern int ptep_test_and_clear_young(struct mm_area *vma, unsigned long address,
>  				     pte_t *ptep);
>
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
> @@ -618,7 +618,7 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> -static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> +static inline int ptep_clear_flush_young(struct mm_area *vma,
>  					 unsigned long address, pte_t *ptep)
>  {
>  	/*
> @@ -859,7 +859,7 @@ static inline int pmd_trans_huge(pmd_t pmd)
>  }
>
>  #define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> -static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
> +static inline int pmdp_set_access_flags(struct mm_area *vma,
>  					unsigned long address, pmd_t *pmdp,
>  					pmd_t entry, int dirty)
>  {
> @@ -867,7 +867,7 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> -static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int pmdp_test_and_clear_young(struct mm_area *vma,
>  					unsigned long address, pmd_t *pmdp)
>  {
>  	return ptep_test_and_clear_young(vma, address, (pte_t *)pmdp);
> @@ -892,7 +892,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  }
>
>  #define pmdp_establish pmdp_establish
> -static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_establish(struct mm_area *vma,
>  				unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
>  	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
> @@ -900,7 +900,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  }
>
>  #define pmdp_collapse_flush pmdp_collapse_flush
> -extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +extern pmd_t pmdp_collapse_flush(struct mm_area *vma,
>  				 unsigned long address, pmd_t *pmdp);
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> index ce0dd0fed764..18dbd9b692b9 100644
> --- a/arch/riscv/include/asm/tlbflush.h
> +++ b/arch/riscv/include/asm/tlbflush.h
> @@ -47,14 +47,14 @@ void flush_tlb_all(void);
>  void flush_tlb_mm(struct mm_struct *mm);
>  void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>  			unsigned long end, unsigned int page_size);
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr);
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_page(struct mm_area *vma, unsigned long addr);
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  		     unsigned long end);
>  void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> -void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
>  			unsigned long end);
>  #endif
>
> diff --git a/arch/riscv/kernel/vdso.c b/arch/riscv/kernel/vdso.c
> index cc2895d1fbc2..0aada37e5b12 100644
> --- a/arch/riscv/kernel/vdso.c
> +++ b/arch/riscv/kernel/vdso.c
> @@ -34,7 +34,7 @@ static struct __vdso_info compat_vdso_info;
>  #endif
>
>  static int vdso_mremap(const struct vm_special_mapping *sm,
> -		       struct vm_area_struct *new_vma)
> +		       struct mm_area *new_vma)
>  {
>  	current->mm->context.vdso = (void *)new_vma->vm_start;
>
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 1087ea74567b..afd478082547 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -487,7 +487,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>  	 *     +--------------------------------------------+
>  	 */
>  	do {
> -		struct vm_area_struct *vma = find_vma(current->mm, hva);
> +		struct mm_area *vma = find_vma(current->mm, hva);
>  		hva_t vm_start, vm_end;
>
>  		if (!vma || vma->vm_start >= reg_end)
> @@ -595,7 +595,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  	bool writable;
>  	short vma_pageshift;
>  	gfn_t gfn = gpa >> PAGE_SHIFT;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct kvm *kvm = vcpu->kvm;
>  	struct kvm_mmu_memory_cache *pcache = &vcpu->arch.mmu_page_cache;
>  	bool logging = (memslot->dirty_bitmap &&
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 0194324a0c50..75986abf7b4e 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -243,7 +243,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
>  	local_flush_tlb_page(addr);
>  }
>
> -static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
> +static inline bool access_error(unsigned long cause, struct mm_area *vma)
>  {
>  	switch (cause) {
>  	case EXC_INST_PAGE_FAULT:
> @@ -275,7 +275,7 @@ static inline bool access_error(unsigned long cause, struct vm_area_struct *vma)
>  void handle_page_fault(struct pt_regs *regs)
>  {
>  	struct task_struct *tsk;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm;
>  	unsigned long addr, cause;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
> diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
> index b4a78a4b35cf..f9ef0699f193 100644
> --- a/arch/riscv/mm/hugetlbpage.c
> +++ b/arch/riscv/mm/hugetlbpage.c
> @@ -28,7 +28,7 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>  }
>
>  pte_t *huge_pte_alloc(struct mm_struct *mm,
> -		      struct vm_area_struct *vma,
> +		      struct mm_area *vma,
>  		      unsigned long addr,
>  		      unsigned long sz)
>  {
> @@ -172,7 +172,7 @@ static pte_t get_clear_contig_flush(struct mm_struct *mm,
>  				    unsigned long pte_num)
>  {
>  	pte_t orig_pte = get_clear_contig(mm, addr, ptep, pte_num);
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
> +	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
>  	bool valid = !pte_none(orig_pte);
>
>  	if (valid)
> @@ -203,7 +203,7 @@ static void clear_flush(struct mm_struct *mm,
>  			unsigned long pgsize,
>  			unsigned long ncontig)
>  {
> -	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
> +	struct mm_area vma = TLB_FLUSH_VMA(mm, 0);
>  	unsigned long i, saddr = addr;
>
>  	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
> @@ -260,7 +260,7 @@ void set_huge_pte_at(struct mm_struct *mm,
>  		set_pte_at(mm, addr, ptep, pte);
>  }
>
> -int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +int huge_ptep_set_access_flags(struct mm_area *vma,
>  			       unsigned long addr,
>  			       pte_t *ptep,
>  			       pte_t pte,
> @@ -331,7 +331,7 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  		set_pte_at(mm, addr, ptep, orig_pte);
>  }
>
> -pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  			    unsigned long addr,
>  			    pte_t *ptep)
>  {
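
The two TLB_FLUSH_VMA() hunks in this file are the interesting ones: the type
shows up in a by-value, on-stack declaration with no '*', which a pointer-only
match would miss. Assuming TLB_FLUSH_VMA() is still the asm-generic
designated-initializer macro, the pattern reduces to something like this
(toy types only):

	struct mm_struct;

	struct mm_area {
		struct mm_struct *vm_mm;
		unsigned long vm_flags;
	};

	#define TLB_FLUSH_VMA(mm, flags) { .vm_mm = (mm), .vm_flags = (flags) }

	static void demo(struct mm_struct *mm)
	{
		struct mm_area vma = TLB_FLUSH_VMA(mm, 0);	/* by value, no '*' */
		(void)vma;
	}
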
> diff --git a/arch/riscv/mm/pgtable.c b/arch/riscv/mm/pgtable.c
> index 4ae67324f992..f81997996346 100644
> --- a/arch/riscv/mm/pgtable.c
> +++ b/arch/riscv/mm/pgtable.c
> @@ -5,7 +5,7 @@
>  #include <linux/kernel.h>
>  #include <linux/pgtable.h>
>
> -int ptep_set_access_flags(struct vm_area_struct *vma,
> +int ptep_set_access_flags(struct mm_area *vma,
>  			  unsigned long address, pte_t *ptep,
>  			  pte_t entry, int dirty)
>  {
> @@ -31,7 +31,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
>  	return false;
>  }
>
> -int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +int ptep_test_and_clear_young(struct mm_area *vma,
>  			      unsigned long address,
>  			      pte_t *ptep)
>  {
> @@ -136,7 +136,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
>
>  #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +pmd_t pmdp_collapse_flush(struct mm_area *vma,
>  					unsigned long address, pmd_t *pmdp)
>  {
>  	pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, address, pmdp);
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index c25a40aa2fe0..1ae019b7e60b 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -130,13 +130,13 @@ void flush_tlb_mm_range(struct mm_struct *mm,
>  	__flush_tlb_range(mm, mm_cpumask(mm), start, end - start, page_size);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
>  			  addr, PAGE_SIZE, PAGE_SIZE);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  		     unsigned long end)
>  {
>  	unsigned long stride_size;
> @@ -176,7 +176,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -void flush_pmd_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_pmd_tlb_range(struct mm_area *vma, unsigned long start,
>  			unsigned long end)
>  {
>  	__flush_tlb_range(vma->vm_mm, mm_cpumask(vma->vm_mm),
> diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
> index 931fcc413598..ad92be48a9e4 100644
> --- a/arch/s390/include/asm/hugetlb.h
> +++ b/arch/s390/include/asm/hugetlb.h
> @@ -54,14 +54,14 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long address, pte_t *ptep)
>  {
>  	return __huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
>  }
>
>  #define  __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int huge_ptep_set_access_flags(struct mm_area *vma,
>  					     unsigned long addr, pte_t *ptep,
>  					     pte_t pte, int dirty)
>  {
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index f8a6b54986ec..6bc573582112 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1215,7 +1215,7 @@ pte_t ptep_xchg_direct(struct mm_struct *, unsigned long, pte_t *, pte_t);
>  pte_t ptep_xchg_lazy(struct mm_struct *, unsigned long, pte_t *, pte_t);
>
>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int ptep_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long addr, pte_t *ptep)
>  {
>  	pte_t pte = *ptep;
> @@ -1225,7 +1225,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> -static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> +static inline int ptep_clear_flush_young(struct mm_area *vma,
>  					 unsigned long address, pte_t *ptep)
>  {
>  	return ptep_test_and_clear_young(vma, address, ptep);
> @@ -1245,12 +1245,12 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
> -pte_t ptep_modify_prot_start(struct vm_area_struct *, unsigned long, pte_t *);
> -void ptep_modify_prot_commit(struct vm_area_struct *, unsigned long,
> +pte_t ptep_modify_prot_start(struct mm_area *, unsigned long, pte_t *);
> +void ptep_modify_prot_commit(struct mm_area *, unsigned long,
>  			     pte_t *, pte_t, pte_t);
>
>  #define __HAVE_ARCH_PTEP_CLEAR_FLUSH
> -static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t ptep_clear_flush(struct mm_area *vma,
>  				     unsigned long addr, pte_t *ptep)
>  {
>  	pte_t res;
> @@ -1327,7 +1327,7 @@ static inline int pte_allow_rdp(pte_t old, pte_t new)
>  	return (pte_val(old) & _PAGE_RDP_MASK) == (pte_val(new) & _PAGE_RDP_MASK);
>  }
>
> -static inline void flush_tlb_fix_spurious_fault(struct vm_area_struct *vma,
> +static inline void flush_tlb_fix_spurious_fault(struct mm_area *vma,
>  						unsigned long address,
>  						pte_t *ptep)
>  {
> @@ -1350,7 +1350,7 @@ void ptep_reset_dat_prot(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>  			 pte_t new);
>
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> -static inline int ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int ptep_set_access_flags(struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep,
>  					pte_t entry, int dirty)
>  {
> @@ -1776,7 +1776,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
>  pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
>
>  #define  __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> -static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
> +static inline int pmdp_set_access_flags(struct mm_area *vma,
>  					unsigned long addr, pmd_t *pmdp,
>  					pmd_t entry, int dirty)
>  {
> @@ -1792,7 +1792,7 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> -static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int pmdp_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long addr, pmd_t *pmdp)
>  {
>  	pmd_t pmd = *pmdp;
> @@ -1802,7 +1802,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
> -static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +static inline int pmdp_clear_flush_young(struct mm_area *vma,
>  					 unsigned long addr, pmd_t *pmdp)
>  {
>  	VM_BUG_ON(addr & ~HPAGE_MASK);
> @@ -1830,7 +1830,7 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
> -static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
>  						 unsigned long addr,
>  						 pmd_t *pmdp, int full)
>  {
> @@ -1843,14 +1843,14 @@ static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
>  }
>
>  #define __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
> -static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_huge_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pmd_t *pmdp)
>  {
>  	return pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
>  }
>
>  #define __HAVE_ARCH_PMDP_INVALIDATE
> -static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_invalidate(struct mm_area *vma,
>  				   unsigned long addr, pmd_t *pmdp)
>  {
>  	pmd_t pmd;
> @@ -1870,7 +1870,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  		pmd = pmdp_xchg_lazy(mm, addr, pmdp, pmd_wrprotect(pmd));
>  }
>
> -static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_collapse_flush(struct mm_area *vma,
>  					unsigned long address,
>  					pmd_t *pmdp)
>  {
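
Also note the ptep_modify_prot_start()/_commit() declarations earlier in this
header: s390 omits the parameter names, so the rename must catch
"struct mm_area *" as a bare token in prototypes as well. Both spellings below
are the same declaration to the compiler (hypothetical toy types):

	typedef struct { unsigned long val; } pte_t;
	struct mm_area;

	/* unnamed, as in the header above ... */
	pte_t demo_prot_start(struct mm_area *, unsigned long, pte_t *);
	/* ... and named, as in the C files */
	pte_t demo_prot_start(struct mm_area *vma, unsigned long addr, pte_t *ptep);
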
> diff --git a/arch/s390/include/asm/tlbflush.h b/arch/s390/include/asm/tlbflush.h
> index 75491baa2197..8eab59435a2c 100644
> --- a/arch/s390/include/asm/tlbflush.h
> +++ b/arch/s390/include/asm/tlbflush.h
> @@ -111,7 +111,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  	__tlb_flush_mm_lazy(mm);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  	__tlb_flush_mm_lazy(vma->vm_mm);
> diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
> index 4a981266b483..cff27a7da9bc 100644
> --- a/arch/s390/kernel/crash_dump.c
> +++ b/arch/s390/kernel/crash_dump.c
> @@ -176,7 +176,7 @@ ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
>   * For the kdump reserved memory this functions performs a swap operation:
>   * [0 - OLDMEM_SIZE] is mapped to [OLDMEM_BASE - OLDMEM_BASE + OLDMEM_SIZE]
>   */
> -static int remap_oldmem_pfn_range_kdump(struct vm_area_struct *vma,
> +static int remap_oldmem_pfn_range_kdump(struct mm_area *vma,
>  					unsigned long from, unsigned long pfn,
>  					unsigned long size, pgprot_t prot)
>  {
> @@ -203,7 +203,7 @@ static int remap_oldmem_pfn_range_kdump(struct vm_area_struct *vma,
>   * We only map available memory above HSA size. Memory below HSA size
>   * is read on demand using the copy_oldmem_page() function.
>   */
> -static int remap_oldmem_pfn_range_zfcpdump(struct vm_area_struct *vma,
> +static int remap_oldmem_pfn_range_zfcpdump(struct mm_area *vma,
>  					   unsigned long from,
>  					   unsigned long pfn,
>  					   unsigned long size, pgprot_t prot)
> @@ -225,7 +225,7 @@ static int remap_oldmem_pfn_range_zfcpdump(struct vm_area_struct *vma,
>  /*
>   * Remap "oldmem" for kdump or zfcp/nvme dump
>   */
> -int remap_oldmem_pfn_range(struct vm_area_struct *vma, unsigned long from,
> +int remap_oldmem_pfn_range(struct mm_area *vma, unsigned long from,
>  			   unsigned long pfn, unsigned long size, pgprot_t prot)
>  {
>  	if (oldmem_data.start)
> diff --git a/arch/s390/kernel/uv.c b/arch/s390/kernel/uv.c
> index 9a5d5be8acf4..a41b180a29bc 100644
> --- a/arch/s390/kernel/uv.c
> +++ b/arch/s390/kernel/uv.c
> @@ -356,7 +356,7 @@ static int s390_wiggle_split_folio(struct mm_struct *mm, struct folio *folio, bo
>
>  int make_hva_secure(struct mm_struct *mm, unsigned long hva, struct uv_cb_header *uvcb)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct folio_walk fw;
>  	struct folio *folio;
>  	int rc;
> diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
> index 430feb1a5013..f660415e46c0 100644
> --- a/arch/s390/kernel/vdso.c
> +++ b/arch/s390/kernel/vdso.c
> @@ -27,7 +27,7 @@ extern char vdso64_start[], vdso64_end[];
>  extern char vdso32_start[], vdso32_end[];
>
>  static int vdso_mremap(const struct vm_special_mapping *sm,
> -		       struct vm_area_struct *vma)
> +		       struct mm_area *vma)
>  {
>  	current->mm->context.vdso_base = vma->vm_start;
>  	return 0;
> @@ -55,7 +55,7 @@ static int map_vdso(unsigned long addr, unsigned long vdso_mapping_len)
>  	unsigned long vvar_start, vdso_text_start, vdso_text_len;
>  	struct vm_special_mapping *vdso_mapping;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int rc;
>
>  	BUILD_BUG_ON(VDSO_NR_PAGES != __VDSO_PAGES);
> diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
> index da84ff6770de..119a4c17873b 100644
> --- a/arch/s390/mm/fault.c
> +++ b/arch/s390/mm/fault.c
> @@ -258,7 +258,7 @@ static void do_sigbus(struct pt_regs *regs)
>   */
>  static void do_exception(struct pt_regs *regs, int access)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long address;
>  	struct mm_struct *mm;
>  	unsigned int flags;
> @@ -405,7 +405,7 @@ void do_secure_storage_access(struct pt_regs *regs)
>  {
>  	union teid teid = { .val = regs->int_parm_long };
>  	unsigned long addr = get_fault_address(regs);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct folio_walk fw;
>  	struct mm_struct *mm;
>  	struct folio *folio;
> diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
> index a94bd4870c65..8c6a886f71d1 100644
> --- a/arch/s390/mm/gmap.c
> +++ b/arch/s390/mm/gmap.c
> @@ -620,7 +620,7 @@ EXPORT_SYMBOL(__gmap_link);
>   */
>  void __gmap_zap(struct gmap *gmap, unsigned long gaddr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long vmaddr;
>  	spinlock_t *ptl;
>  	pte_t *ptep;
> @@ -648,7 +648,7 @@ EXPORT_SYMBOL_GPL(__gmap_zap);
>  void gmap_discard(struct gmap *gmap, unsigned long from, unsigned long to)
>  {
>  	unsigned long gaddr, vmaddr, size;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mmap_read_lock(gmap->mm);
>  	for (gaddr = from; gaddr < to;
> @@ -2222,7 +2222,7 @@ EXPORT_SYMBOL_GPL(gmap_sync_dirty_log_pmd);
>  static int thp_split_walk_pmd_entry(pmd_t *pmd, unsigned long addr,
>  				    unsigned long end, struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>
>  	split_huge_pmd(vma, pmd, addr);
>  	return 0;
> @@ -2235,7 +2235,7 @@ static const struct mm_walk_ops thp_split_walk_ops = {
>
>  static inline void thp_split_mm(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	for_each_vma(vmi, vma) {
> @@ -2312,7 +2312,7 @@ static const struct mm_walk_ops find_zeropage_ops = {
>   */
>  static int __s390_unshare_zeropages(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>  	unsigned long addr;
>  	vm_fault_t fault;
> diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
> index e88c02c9e642..c54f4772b8bf 100644
> --- a/arch/s390/mm/hugetlbpage.c
> +++ b/arch/s390/mm/hugetlbpage.c
> @@ -203,7 +203,7 @@ pte_t __huge_ptep_get_and_clear(struct mm_struct *mm,
>  	return pte;
>  }
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgdp;
> diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
> index 40a526d28184..edbd4688f56a 100644
> --- a/arch/s390/mm/mmap.c
> +++ b/arch/s390/mm/mmap.c
> @@ -81,7 +81,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  				     unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_unmapped_area_info info = {};
>
>  	if (len > TASK_SIZE - mmap_min_addr)
> @@ -116,7 +116,7 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp, unsigned long ad
>  					     unsigned long len, unsigned long pgoff,
>  					     unsigned long flags, vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	struct vm_unmapped_area_info info = {};
>
> diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
> index 9901934284ec..28f0316e4db1 100644
> --- a/arch/s390/mm/pgtable.c
> +++ b/arch/s390/mm/pgtable.c
> @@ -327,7 +327,7 @@ pte_t ptep_xchg_lazy(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL(ptep_xchg_lazy);
>
> -pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
> +pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr,
>  			     pte_t *ptep)
>  {
>  	pgste_t pgste;
> @@ -346,7 +346,7 @@ pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
>  	return old;
>  }
>
> -void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
> +void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
>  			     pte_t *ptep, pte_t old_pte, pte_t pte)
>  {
>  	pgste_t pgste;
> @@ -437,7 +437,7 @@ static inline pmd_t pmdp_flush_lazy(struct mm_struct *mm,
>  #ifdef CONFIG_PGSTE
>  static int pmd_lookup(struct mm_struct *mm, unsigned long addr, pmd_t **pmdp)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	pgd_t *pgd;
>  	p4d_t *p4d;
>  	pud_t *pud;
> @@ -1032,7 +1032,7 @@ EXPORT_SYMBOL(get_guest_storage_key);
>  int pgste_perform_essa(struct mm_struct *mm, unsigned long hva, int orc,
>  			unsigned long *oldpte, unsigned long *oldpgste)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long pgstev;
>  	spinlock_t *ptl;
>  	pgste_t pgste;
> @@ -1138,7 +1138,7 @@ EXPORT_SYMBOL(pgste_perform_essa);
>  int set_pgste_bits(struct mm_struct *mm, unsigned long hva,
>  			unsigned long bits, unsigned long value)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	spinlock_t *ptl;
>  	pgste_t new;
>  	pte_t *ptep;
> @@ -1170,7 +1170,7 @@ EXPORT_SYMBOL(set_pgste_bits);
>   */
>  int get_pgste(struct mm_struct *mm, unsigned long hva, unsigned long *pgstep)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	spinlock_t *ptl;
>  	pte_t *ptep;
>
> diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
> index 5fcc1a3b04bd..77d158f08245 100644
> --- a/arch/s390/pci/pci_mmio.c
> +++ b/arch/s390/pci/pci_mmio.c
> @@ -126,7 +126,7 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
>  	u8 local_buf[64];
>  	void __iomem *io_addr;
>  	void *buf;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	long ret;
>
>  	if (!zpci_is_enabled())
> @@ -279,7 +279,7 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
>  	u8 local_buf[64];
>  	void __iomem *io_addr;
>  	void *buf;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	long ret;
>
>  	if (!zpci_is_enabled())
> diff --git a/arch/sh/include/asm/cacheflush.h b/arch/sh/include/asm/cacheflush.h
> index e6642ff14889..87666383d58a 100644
> --- a/arch/sh/include/asm/cacheflush.h
> +++ b/arch/sh/include/asm/cacheflush.h
> @@ -37,9 +37,9 @@ extern void (*__flush_invalidate_region)(void *start, int size);
>  extern void flush_cache_all(void);
>  extern void flush_cache_mm(struct mm_struct *mm);
>  extern void flush_cache_dup_mm(struct mm_struct *mm);
> -extern void flush_cache_page(struct vm_area_struct *vma,
> +extern void flush_cache_page(struct mm_area *vma,
>  				unsigned long addr, unsigned long pfn);
> -extern void flush_cache_range(struct vm_area_struct *vma,
> +extern void flush_cache_range(struct mm_area *vma,
>  				 unsigned long start, unsigned long end);
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
>  void flush_dcache_folio(struct folio *folio);
> @@ -51,20 +51,20 @@ static inline void flush_dcache_page(struct page *page)
>
>  extern void flush_icache_range(unsigned long start, unsigned long end);
>  #define flush_icache_user_range flush_icache_range
> -void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_pages(struct mm_area *vma, struct page *page,
>  		unsigned int nr);
>  #define flush_icache_pages flush_icache_pages
>  extern void flush_cache_sigtramp(unsigned long address);
>
>  struct flusher_data {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr1, addr2;
>  };
>
>  #define ARCH_HAS_FLUSH_ANON_PAGE
>  extern void __flush_anon_page(struct page *page, unsigned long);
>
> -static inline void flush_anon_page(struct vm_area_struct *vma,
> +static inline void flush_anon_page(struct mm_area *vma,
>  				   struct page *page, unsigned long vmaddr)
>  {
>  	if (boot_cpu_data.dcache.n_aliases && PageAnon(page))
> @@ -81,11 +81,11 @@ static inline void invalidate_kernel_vmap_range(void *addr, int size)
>  	__flush_invalidate_region(addr, size);
>  }
>
> -extern void copy_to_user_page(struct vm_area_struct *vma,
> +extern void copy_to_user_page(struct mm_area *vma,
>  	struct page *page, unsigned long vaddr, void *dst, const void *src,
>  	unsigned long len);
>
> -extern void copy_from_user_page(struct vm_area_struct *vma,
> +extern void copy_from_user_page(struct mm_area *vma,
>  	struct page *page, unsigned long vaddr, void *dst, const void *src,
>  	unsigned long len);
>
> diff --git a/arch/sh/include/asm/hugetlb.h b/arch/sh/include/asm/hugetlb.h
> index 4a92e6e4d627..f2f364330ed9 100644
> --- a/arch/sh/include/asm/hugetlb.h
> +++ b/arch/sh/include/asm/hugetlb.h
> @@ -6,7 +6,7 @@
>  #include <asm/page.h>
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  	return *ptep;
> diff --git a/arch/sh/include/asm/page.h b/arch/sh/include/asm/page.h
> index 3990cbd9aa04..feba697dd921 100644
> --- a/arch/sh/include/asm/page.h
> +++ b/arch/sh/include/asm/page.h
> @@ -48,10 +48,10 @@ extern void copy_page(void *to, void *from);
>  #define copy_user_page(to, from, vaddr, pg)  __copy_user(to, from, PAGE_SIZE)
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>
>  extern void copy_user_highpage(struct page *to, struct page *from,
> -			       unsigned long vaddr, struct vm_area_struct *vma);
> +			       unsigned long vaddr, struct mm_area *vma);
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>  extern void clear_user_highpage(struct page *page, unsigned long vaddr);
>  #define clear_user_highpage	clear_user_highpage
> diff --git a/arch/sh/include/asm/pgtable.h b/arch/sh/include/asm/pgtable.h
> index 729f5c6225fb..1cc0974cae6c 100644
> --- a/arch/sh/include/asm/pgtable.h
> +++ b/arch/sh/include/asm/pgtable.h
> @@ -94,16 +94,16 @@ typedef pte_t *pte_addr_t;
>
>  #define pte_pfn(x)		((unsigned long)(((x).pte_low >> PAGE_SHIFT)))
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct mm_struct;
>
> -extern void __update_cache(struct vm_area_struct *vma,
> +extern void __update_cache(struct mm_area *vma,
>  			   unsigned long address, pte_t pte);
> -extern void __update_tlb(struct vm_area_struct *vma,
> +extern void __update_tlb(struct mm_area *vma,
>  			 unsigned long address, pte_t pte);
>
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long address,
> +		struct mm_area *vma, unsigned long address,
>  		pte_t *ptep, unsigned int nr)
>  {
>  	pte_t pte = *ptep;
> diff --git a/arch/sh/include/asm/tlb.h b/arch/sh/include/asm/tlb.h
> index ddf324bfb9a0..6d1e9c61e24c 100644
> --- a/arch/sh/include/asm/tlb.h
> +++ b/arch/sh/include/asm/tlb.h
> @@ -10,10 +10,10 @@
>  #include <linux/swap.h>
>
>  #if defined(CONFIG_CPU_SH4)
> -extern void tlb_wire_entry(struct vm_area_struct *, unsigned long, pte_t);
> +extern void tlb_wire_entry(struct mm_area *, unsigned long, pte_t);
>  extern void tlb_unwire_entry(void);
>  #else
> -static inline void tlb_wire_entry(struct vm_area_struct *vma ,
> +static inline void tlb_wire_entry(struct mm_area *vma,
>  				  unsigned long addr, pte_t pte)
>  {
>  	BUG();
> diff --git a/arch/sh/include/asm/tlbflush.h b/arch/sh/include/asm/tlbflush.h
> index 8f180cd3bcd6..ca2de60ad063 100644
> --- a/arch/sh/include/asm/tlbflush.h
> +++ b/arch/sh/include/asm/tlbflush.h
> @@ -13,10 +13,10 @@
>   */
>  extern void local_flush_tlb_all(void);
>  extern void local_flush_tlb_mm(struct mm_struct *mm);
> -extern void local_flush_tlb_range(struct vm_area_struct *vma,
> +extern void local_flush_tlb_range(struct mm_area *vma,
>  				  unsigned long start,
>  				  unsigned long end);
> -extern void local_flush_tlb_page(struct vm_area_struct *vma,
> +extern void local_flush_tlb_page(struct mm_area *vma,
>  				 unsigned long page);
>  extern void local_flush_tlb_kernel_range(unsigned long start,
>  					 unsigned long end);
> @@ -28,9 +28,9 @@ extern void __flush_tlb_global(void);
>
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
> -extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end);
> -extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +extern void flush_tlb_page(struct mm_area *vma, unsigned long page);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>  extern void flush_tlb_one(unsigned long asid, unsigned long page);
>
> diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
> index 108d808767fa..61d56994d473 100644
> --- a/arch/sh/kernel/smp.c
> +++ b/arch/sh/kernel/smp.c
> @@ -377,7 +377,7 @@ void flush_tlb_mm(struct mm_struct *mm)
>  }
>
>  struct flush_tlb_data {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr1;
>  	unsigned long addr2;
>  };
> @@ -389,7 +389,7 @@ static void flush_tlb_range_ipi(void *info)
>  	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma,
> +void flush_tlb_range(struct mm_area *vma,
>  		     unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -435,7 +435,7 @@ static void flush_tlb_page_ipi(void *info)
>  	local_flush_tlb_page(fd->vma, fd->addr1);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	preempt_disable();
>  	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
> diff --git a/arch/sh/kernel/sys_sh.c b/arch/sh/kernel/sys_sh.c
> index a5a7b33ed81a..2d263feef643 100644
> --- a/arch/sh/kernel/sys_sh.c
> +++ b/arch/sh/kernel/sys_sh.c
> @@ -57,7 +57,7 @@ asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
>  /* sys_cacheflush -- flush (part of) the processor cache.  */
>  asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len, int op)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if ((op <= 0) || (op > (CACHEFLUSH_D_PURGE|CACHEFLUSH_I)))
>  		return -EINVAL;
> diff --git a/arch/sh/kernel/vsyscall/vsyscall.c b/arch/sh/kernel/vsyscall/vsyscall.c
> index 1563dcc55fd3..9916506a052a 100644
> --- a/arch/sh/kernel/vsyscall/vsyscall.c
> +++ b/arch/sh/kernel/vsyscall/vsyscall.c
> @@ -83,7 +83,7 @@ fs_initcall(vm_sysctl_init);
>  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr;
>  	int ret;
>
> @@ -113,7 +113,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  	return ret;
>  }
>
> -const char *arch_vma_name(struct vm_area_struct *vma)
> +const char *arch_vma_name(struct mm_area *vma)
>  {
>  	if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso)
>  		return "[vdso]";
> diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
> index 46393b00137e..f4d37a852d27 100644
> --- a/arch/sh/mm/cache-sh4.c
> +++ b/arch/sh/mm/cache-sh4.c
> @@ -214,7 +214,7 @@ static void sh4_flush_cache_mm(void *arg)
>  static void sh4_flush_cache_page(void *args)
>  {
>  	struct flusher_data *data = args;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct page *page;
>  	unsigned long address, pfn, phys;
>  	int map_coherent = 0;
> @@ -283,7 +283,7 @@ static void sh4_flush_cache_page(void *args)
>  static void sh4_flush_cache_range(void *args)
>  {
>  	struct flusher_data *data = args;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long start, end;
>
>  	vma = data->vma;
> diff --git a/arch/sh/mm/cache.c b/arch/sh/mm/cache.c
> index 6ebdeaff3021..2f85019529ff 100644
> --- a/arch/sh/mm/cache.c
> +++ b/arch/sh/mm/cache.c
> @@ -57,7 +57,7 @@ static inline void cacheop_on_each_cpu(void (*func) (void *info), void *info,
>  	preempt_enable();
>  }
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		       unsigned long vaddr, void *dst, const void *src,
>  		       unsigned long len)
>  {
> @@ -78,7 +78,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  		flush_cache_page(vma, vaddr, page_to_pfn(page));
>  }
>
> -void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_from_user_page(struct mm_area *vma, struct page *page,
>  			 unsigned long vaddr, void *dst, const void *src,
>  			 unsigned long len)
>  {
> @@ -97,7 +97,7 @@ void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
>  }
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma)
> +			unsigned long vaddr, struct mm_area *vma)
>  {
>  	struct folio *src = page_folio(from);
>  	void *vfrom, *vto;
> @@ -138,7 +138,7 @@ void clear_user_highpage(struct page *page, unsigned long vaddr)
>  }
>  EXPORT_SYMBOL(clear_user_highpage);
>
> -void __update_cache(struct vm_area_struct *vma,
> +void __update_cache(struct mm_area *vma,
>  		    unsigned long address, pte_t pte)
>  {
>  	unsigned long pfn = pte_pfn(pte);
> @@ -197,7 +197,7 @@ void flush_cache_dup_mm(struct mm_struct *mm)
>  	cacheop_on_each_cpu(local_flush_cache_dup_mm, mm, 1);
>  }
>
> -void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
> +void flush_cache_page(struct mm_area *vma, unsigned long addr,
>  		      unsigned long pfn)
>  {
>  	struct flusher_data data;
> @@ -209,7 +209,7 @@ void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
>  	cacheop_on_each_cpu(local_flush_cache_page, (void *)&data, 1);
>  }
>
> -void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
> +void flush_cache_range(struct mm_area *vma, unsigned long start,
>  		       unsigned long end)
>  {
>  	struct flusher_data data;
> @@ -240,7 +240,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
>  }
>  EXPORT_SYMBOL(flush_icache_range);
>
> -void flush_icache_pages(struct vm_area_struct *vma, struct page *page,
> +void flush_icache_pages(struct mm_area *vma, struct page *page,
>  		unsigned int nr)
>  {
>  	/* Nothing uses the VMA, so just pass the folio along */
> diff --git a/arch/sh/mm/fault.c b/arch/sh/mm/fault.c
> index 06e6b4952924..962137e245fc 100644
> --- a/arch/sh/mm/fault.c
> +++ b/arch/sh/mm/fault.c
> @@ -355,7 +355,7 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
>  	return 1;
>  }
>
> -static inline int access_error(int error_code, struct vm_area_struct *vma)
> +static inline int access_error(int error_code, struct mm_area *vma)
>  {
>  	if (error_code & FAULT_CODE_WRITE) {
>  		/* write, present and write, not present: */
> @@ -393,7 +393,7 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
>  	unsigned long vec;
>  	struct task_struct *tsk;
>  	struct mm_struct *mm;
> -	struct vm_area_struct * vma;
> +	struct mm_area *vma;
>  	vm_fault_t fault;
>  	unsigned int flags = FAULT_FLAG_DEFAULT;
>
> diff --git a/arch/sh/mm/hugetlbpage.c b/arch/sh/mm/hugetlbpage.c
> index ff209b55285a..ea147dc50cfa 100644
> --- a/arch/sh/mm/hugetlbpage.c
> +++ b/arch/sh/mm/hugetlbpage.c
> @@ -21,7 +21,7 @@
>  #include <asm/tlbflush.h>
>  #include <asm/cacheflush.h>
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgd;
> diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
> index c442734d9b0c..a015e881f62f 100644
> --- a/arch/sh/mm/mmap.c
> +++ b/arch/sh/mm/mmap.c
> @@ -56,7 +56,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  	vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int do_colour_align;
>  	struct vm_unmapped_area_info info = {};
>
> @@ -102,7 +102,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  			  const unsigned long len, const unsigned long pgoff,
>  			  const unsigned long flags, vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long addr = addr0;
>  	int do_colour_align;
> diff --git a/arch/sh/mm/nommu.c b/arch/sh/mm/nommu.c
> index fa3dc9428a73..739f316eb55a 100644
> --- a/arch/sh/mm/nommu.c
> +++ b/arch/sh/mm/nommu.c
> @@ -46,13 +46,13 @@ void local_flush_tlb_mm(struct mm_struct *mm)
>  	BUG();
>  }
>
> -void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			    unsigned long end)
>  {
>  	BUG();
>  }
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	BUG();
>  }
> @@ -71,7 +71,7 @@ void __flush_tlb_global(void)
>  {
>  }
>
> -void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
> +void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
>  {
>  }
>
> diff --git a/arch/sh/mm/tlb-pteaex.c b/arch/sh/mm/tlb-pteaex.c
> index 4db21adfe5de..c88f5cdca94e 100644
> --- a/arch/sh/mm/tlb-pteaex.c
> +++ b/arch/sh/mm/tlb-pteaex.c
> @@ -15,7 +15,7 @@
>  #include <asm/mmu_context.h>
>  #include <asm/cacheflush.h>
>
> -void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
> +void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
>  {
>  	unsigned long flags, pteval, vpn;
>
> diff --git a/arch/sh/mm/tlb-sh3.c b/arch/sh/mm/tlb-sh3.c
> index fb400afc2a49..77369712a89c 100644
> --- a/arch/sh/mm/tlb-sh3.c
> +++ b/arch/sh/mm/tlb-sh3.c
> @@ -24,7 +24,7 @@
>  #include <asm/mmu_context.h>
>  #include <asm/cacheflush.h>
>
> -void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
> +void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
>  {
>  	unsigned long flags, pteval, vpn;
>
> diff --git a/arch/sh/mm/tlb-sh4.c b/arch/sh/mm/tlb-sh4.c
> index aa0a9f4680a1..edd340097b4a 100644
> --- a/arch/sh/mm/tlb-sh4.c
> +++ b/arch/sh/mm/tlb-sh4.c
> @@ -13,7 +13,7 @@
>  #include <asm/mmu_context.h>
>  #include <asm/cacheflush.h>
>
> -void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
> +void __update_tlb(struct mm_area *vma, unsigned long address, pte_t pte)
>  {
>  	unsigned long flags, pteval, vpn;
>
> diff --git a/arch/sh/mm/tlb-urb.c b/arch/sh/mm/tlb-urb.c
> index c92ce20db39b..78a98552ccac 100644
> --- a/arch/sh/mm/tlb-urb.c
> +++ b/arch/sh/mm/tlb-urb.c
> @@ -17,7 +17,7 @@
>  /*
>   * Load the entry for 'addr' into the TLB and wire the entry.
>   */
> -void tlb_wire_entry(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> +void tlb_wire_entry(struct mm_area *vma, unsigned long addr, pte_t pte)
>  {
>  	unsigned long status, flags;
>  	int urb;
> diff --git a/arch/sh/mm/tlbflush_32.c b/arch/sh/mm/tlbflush_32.c
> index a6a20d6de4c0..6307b906924a 100644
> --- a/arch/sh/mm/tlbflush_32.c
> +++ b/arch/sh/mm/tlbflush_32.c
> @@ -12,7 +12,7 @@
>  #include <asm/mmu_context.h>
>  #include <asm/tlbflush.h>
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	unsigned int cpu = smp_processor_id();
>
> @@ -36,7 +36,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
>  	}
>  }
>
> -void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +void local_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  			   unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> diff --git a/arch/sparc/include/asm/cacheflush_64.h b/arch/sparc/include/asm/cacheflush_64.h
> index 2b1261b77ecd..1e6477ef34bb 100644
> --- a/arch/sparc/include/asm/cacheflush_64.h
> +++ b/arch/sparc/include/asm/cacheflush_64.h
> @@ -53,7 +53,7 @@ static inline void flush_dcache_page(struct page *page)
>  	flush_dcache_folio(page_folio(page));
>  }
>
> -void flush_ptrace_access(struct vm_area_struct *, struct page *,
> +void flush_ptrace_access(struct mm_area *, struct page *,
>  			 unsigned long uaddr, void *kaddr,
>  			 unsigned long len, int write);
>
> diff --git a/arch/sparc/include/asm/cachetlb_32.h b/arch/sparc/include/asm/cachetlb_32.h
> index 534da70c6357..1ae6b8f58673 100644
> --- a/arch/sparc/include/asm/cachetlb_32.h
> +++ b/arch/sparc/include/asm/cachetlb_32.h
> @@ -3,20 +3,20 @@
>  #define _SPARC_CACHETLB_H
>
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>
>  struct sparc32_cachetlb_ops {
>  	void (*cache_all)(void);
>  	void (*cache_mm)(struct mm_struct *);
> -	void (*cache_range)(struct vm_area_struct *, unsigned long,
> +	void (*cache_range)(struct mm_area *, unsigned long,
>  			    unsigned long);
> -	void (*cache_page)(struct vm_area_struct *, unsigned long);
> +	void (*cache_page)(struct mm_area *, unsigned long);
>
>  	void (*tlb_all)(void);
>  	void (*tlb_mm)(struct mm_struct *);
> -	void (*tlb_range)(struct vm_area_struct *, unsigned long,
> +	void (*tlb_range)(struct mm_area *, unsigned long,
>  			  unsigned long);
> -	void (*tlb_page)(struct vm_area_struct *, unsigned long);
> +	void (*tlb_page)(struct mm_area *, unsigned long);
>
>  	void (*page_to_ram)(unsigned long);
>  	void (*sig_insns)(struct mm_struct *, unsigned long);
> diff --git a/arch/sparc/include/asm/hugetlb.h b/arch/sparc/include/asm/hugetlb.h
> index e7a9cdd498dc..fdc29771a6a6 100644
> --- a/arch/sparc/include/asm/hugetlb.h
> +++ b/arch/sparc/include/asm/hugetlb.h
> @@ -23,7 +23,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
>  			      pte_t *ptep, unsigned long sz);
>
>  #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  	return *ptep;
> @@ -38,7 +38,7 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  }
>
>  #define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int huge_ptep_set_access_flags(struct mm_area *vma,
>  					     unsigned long addr, pte_t *ptep,
>  					     pte_t pte, int dirty)
>  {
> diff --git a/arch/sparc/include/asm/leon.h b/arch/sparc/include/asm/leon.h
> index c1e05e4ab9e3..e0cf0f724fb4 100644
> --- a/arch/sparc/include/asm/leon.h
> +++ b/arch/sparc/include/asm/leon.h
> @@ -195,7 +195,7 @@ static inline int sparc_leon3_cpuid(void)
>  #define LEON2_CFG_SSIZE_MASK 0x00007000UL
>
>  #ifndef __ASSEMBLY__
> -struct vm_area_struct;
> +struct mm_area;
>
>  unsigned long leon_swprobe(unsigned long vaddr, unsigned long *paddr);
>  void leon_flush_icache_all(void);
> @@ -204,7 +204,7 @@ void leon_flush_cache_all(void);
>  void leon_flush_tlb_all(void);
>  extern int leon_flush_during_switch;
>  int leon_flush_needed(void);
> -void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page);
> +void leon_flush_pcache_all(struct mm_area *vma, unsigned long page);
>
>  /* struct that hold LEON3 cache configuration registers */
>  struct leon3_cacheregs {
> diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
> index 2a68ff5b6eab..1abc1d8743c5 100644
> --- a/arch/sparc/include/asm/page_64.h
> +++ b/arch/sparc/include/asm/page_64.h
> @@ -46,9 +46,9 @@ void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
>  #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
>  void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
> -struct vm_area_struct;
> +struct mm_area;
>  void copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma);
> +			unsigned long vaddr, struct mm_area *vma);
>  #define __HAVE_ARCH_COPY_HIGHPAGE
>  void copy_highpage(struct page *to, struct page *from);
>
> diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
> index 62bcafe38b1f..a451d5430db1 100644
> --- a/arch/sparc/include/asm/pgtable_32.h
> +++ b/arch/sparc/include/asm/pgtable_32.h
> @@ -33,7 +33,7 @@
>  #include <asm/cpu_type.h>
>
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct page;
>
>  void load_mmu(void);
> @@ -400,10 +400,10 @@ __get_iospace (unsigned long addr)
>  #define GET_IOSPACE(pfn)		(pfn >> (BITS_PER_LONG - 4))
>  #define GET_PFN(pfn)			(pfn & 0x0fffffffUL)
>
> -int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
> +int remap_pfn_range(struct mm_area *, unsigned long, unsigned long,
>  		    unsigned long, pgprot_t);
>
> -static inline int io_remap_pfn_range(struct vm_area_struct *vma,
> +static inline int io_remap_pfn_range(struct mm_area *vma,
>  				     unsigned long from, unsigned long pfn,
>  				     unsigned long size, pgprot_t prot)
>  {
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index dc28f2c4eee3..7d06b4894f2a 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -979,17 +979,17 @@ unsigned long find_ecache_flush_span(unsigned long size);
>  struct seq_file;
>  void mmu_info(struct seq_file *);
>
> -struct vm_area_struct;
> -void update_mmu_cache_range(struct vm_fault *, struct vm_area_struct *,
> +struct mm_area;
> +void update_mmu_cache_range(struct vm_fault *, struct mm_area *,
>  		unsigned long addr, pte_t *ptep, unsigned int nr);
>  #define update_mmu_cache(vma, addr, ptep) \
>  	update_mmu_cache_range(NULL, vma, addr, ptep, 1)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> +void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
>  			  pmd_t *pmd);
>
>  #define __HAVE_ARCH_PMDP_INVALIDATE
> -extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +extern pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
>  			    pmd_t *pmdp);
>
>  #define __HAVE_ARCH_PGTABLE_DEPOSIT
> @@ -1050,18 +1050,18 @@ int page_in_phys_avail(unsigned long paddr);
>  #define GET_IOSPACE(pfn)		(pfn >> (BITS_PER_LONG - 4))
>  #define GET_PFN(pfn)			(pfn & 0x0fffffffffffffffUL)
>
> -int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
> +int remap_pfn_range(struct mm_area *, unsigned long, unsigned long,
>  		    unsigned long, pgprot_t);
>
> -void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +void adi_restore_tags(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, pte_t pte);
>
> -int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +int adi_save_tags(struct mm_struct *mm, struct mm_area *vma,
>  		  unsigned long addr, pte_t oldpte);
>
>  #define __HAVE_ARCH_DO_SWAP_PAGE
>  static inline void arch_do_swap_page(struct mm_struct *mm,
> -				     struct vm_area_struct *vma,
> +				     struct mm_area *vma,
>  				     unsigned long addr,
>  				     pte_t pte, pte_t oldpte)
>  {
> @@ -1078,7 +1078,7 @@ static inline void arch_do_swap_page(struct mm_struct *mm,
>
>  #define __HAVE_ARCH_UNMAP_ONE
>  static inline int arch_unmap_one(struct mm_struct *mm,
> -				 struct vm_area_struct *vma,
> +				 struct mm_area *vma,
>  				 unsigned long addr, pte_t oldpte)
>  {
>  	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
> @@ -1086,7 +1086,7 @@ static inline int arch_unmap_one(struct mm_struct *mm,
>  	return 0;
>  }
>
> -static inline int io_remap_pfn_range(struct vm_area_struct *vma,
> +static inline int io_remap_pfn_range(struct mm_area *vma,
>  				     unsigned long from, unsigned long pfn,
>  				     unsigned long size, pgprot_t prot)
>  {
> diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
> index 8b8cdaa69272..c41114cbd3fe 100644
> --- a/arch/sparc/include/asm/tlbflush_64.h
> +++ b/arch/sparc/include/asm/tlbflush_64.h
> @@ -27,12 +27,12 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
>  {
>  }
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma,
> +static inline void flush_tlb_page(struct mm_area *vma,
>  				  unsigned long vmaddr)
>  {
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  }
> diff --git a/arch/sparc/kernel/adi_64.c b/arch/sparc/kernel/adi_64.c
> index e0e4fc527b24..3e7c7bb97fd8 100644
> --- a/arch/sparc/kernel/adi_64.c
> +++ b/arch/sparc/kernel/adi_64.c
> @@ -122,7 +122,7 @@ void __init mdesc_adi_init(void)
>  }
>
>  static tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
> -					  struct vm_area_struct *vma,
> +					  struct mm_area *vma,
>  					  unsigned long addr)
>  {
>  	tag_storage_desc_t *tag_desc = NULL;
> @@ -154,7 +154,7 @@ static tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
>  }
>
>  static tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
> -					   struct vm_area_struct *vma,
> +					   struct mm_area *vma,
>  					   unsigned long addr)
>  {
>  	unsigned char *tags;
> @@ -324,7 +324,7 @@ static void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
>  /* Retrieve any saved ADI tags for the page being swapped back in and
>   * restore these tags to the newly allocated physical page.
>   */
> -void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +void adi_restore_tags(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, pte_t pte)
>  {
>  	unsigned char *tag;
> @@ -367,7 +367,7 @@ void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
>   * this physical page so they can be restored later when the page is swapped
>   * back in.
>   */
> -int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
> +int adi_save_tags(struct mm_struct *mm, struct mm_area *vma,
>  		  unsigned long addr, pte_t oldpte)
>  {
>  	unsigned char *tag;
> diff --git a/arch/sparc/kernel/asm-offsets.c b/arch/sparc/kernel/asm-offsets.c
> index 3d9b9855dce9..360c8cb8f396 100644
> --- a/arch/sparc/kernel/asm-offsets.c
> +++ b/arch/sparc/kernel/asm-offsets.c
> @@ -52,7 +52,7 @@ static int __used foo(void)
>  	BLANK();
>  	DEFINE(AOFF_mm_context, offsetof(struct mm_struct, context));
>  	BLANK();
> -	DEFINE(VMA_VM_MM,    offsetof(struct vm_area_struct, vm_mm));
> +	DEFINE(VMA_VM_MM,    offsetof(struct mm_area, vm_mm));
>
>  	/* DEFINE(NUM_USER_SEGMENTS, TASK_SIZE>>28); */
>  	return 0;
> diff --git a/arch/sparc/kernel/pci.c b/arch/sparc/kernel/pci.c
> index ddac216a2aff..64767a6e60cd 100644
> --- a/arch/sparc/kernel/pci.c
> +++ b/arch/sparc/kernel/pci.c
> @@ -750,7 +750,7 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
>  }
>
>  /* Platform support for /proc/bus/pci/X/Y mmap()s. */
> -int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
> +int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma)
>  {
>  	struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
>  	resource_size_t ioaddr = pci_resource_start(pdev, bar);
> diff --git a/arch/sparc/kernel/ptrace_64.c b/arch/sparc/kernel/ptrace_64.c
> index 4deba5b6eddb..2bbee6413504 100644
> --- a/arch/sparc/kernel/ptrace_64.c
> +++ b/arch/sparc/kernel/ptrace_64.c
> @@ -103,7 +103,7 @@ void ptrace_disable(struct task_struct *child)
>   *    has been created
>   * 2) flush the I-cache if this is pre-cheetah and we did a write
>   */
> -void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
> +void flush_ptrace_access(struct mm_area *vma, struct page *page,
>  			 unsigned long uaddr, void *kaddr,
>  			 unsigned long len, int write)
>  {
> diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
> index c5a284df7b41..261c971b346a 100644
> --- a/arch/sparc/kernel/sys_sparc_64.c
> +++ b/arch/sparc/kernel/sys_sparc_64.c
> @@ -101,7 +101,7 @@ static unsigned long get_align_mask(struct file *filp, unsigned long flags)
>  unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len, unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct * vma;
> +	struct mm_area *vma;
>  	unsigned long task_size = TASK_SIZE;
>  	int do_color_align;
>  	struct vm_unmapped_area_info info = {};
> @@ -164,7 +164,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  			  const unsigned long len, const unsigned long pgoff,
>  			  const unsigned long flags, vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long task_size = STACK_TOP32;
>  	unsigned long addr = addr0;
> diff --git a/arch/sparc/mm/fault_32.c b/arch/sparc/mm/fault_32.c
> index 86a831ebd8c8..27bb2c2a8d54 100644
> --- a/arch/sparc/mm/fault_32.c
> +++ b/arch/sparc/mm/fault_32.c
> @@ -112,7 +112,7 @@ static noinline void do_fault_siginfo(int code, int sig, struct pt_regs *regs,
>  asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
>  			       unsigned long address)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk = current;
>  	struct mm_struct *mm = tsk->mm;
>  	int from_user = !(regs->psr & PSR_PS);
> @@ -304,7 +304,7 @@ asmlinkage void do_sparc_fault(struct pt_regs *regs, int text_fault, int write,
>  /* This always deals with user addresses. */
>  static void force_user_fault(unsigned long address, int write)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk = current;
>  	struct mm_struct *mm = tsk->mm;
>  	unsigned int flags = FAULT_FLAG_USER;
> diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
> index e326caf708c6..1dd10e512d61 100644
> --- a/arch/sparc/mm/fault_64.c
> +++ b/arch/sparc/mm/fault_64.c
> @@ -268,7 +268,7 @@ asmlinkage void __kprobes do_sparc64_fault(struct pt_regs *regs)
>  {
>  	enum ctx_state prev_state = exception_enter();
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned int insn = 0;
>  	int si_code, fault_code;
>  	vm_fault_t fault;
> diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
> index 80504148d8a5..c02f3fa3a0fa 100644
> --- a/arch/sparc/mm/hugetlbpage.c
> +++ b/arch/sparc/mm/hugetlbpage.c
> @@ -167,7 +167,7 @@ unsigned long pud_leaf_size(pud_t pud) { return 1UL << tte_to_shift(*(pte_t *)&p
>  unsigned long pmd_leaf_size(pmd_t pmd) { return 1UL << tte_to_shift(*(pte_t *)&pmd); }
>  unsigned long pte_leaf_size(pte_t pte) { return 1UL << tte_to_shift(pte); }
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgd;
> diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
> index 760818950464..235770b832be 100644
> --- a/arch/sparc/mm/init_64.c
> +++ b/arch/sparc/mm/init_64.c
> @@ -394,7 +394,7 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
>  }
>  #endif	/* CONFIG_HUGETLB_PAGE */
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *ptep, unsigned int nr)
>  {
>  	struct mm_struct *mm;
> @@ -2945,7 +2945,7 @@ void pte_free_defer(struct mm_struct *mm, pgtable_t pgtable)
>  	call_rcu(&page->rcu_head, pte_free_now);
>  }
>
> -void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> +void update_mmu_cache_pmd(struct mm_area *vma, unsigned long addr,
>  			  pmd_t *pmd)
>  {
>  	unsigned long pte, flags;
> @@ -3134,7 +3134,7 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  }
>
>  void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	char *vfrom, *vto;
>
> diff --git a/arch/sparc/mm/leon_mm.c b/arch/sparc/mm/leon_mm.c
> index 1dc9b3d70eda..2e36b02d81d2 100644
> --- a/arch/sparc/mm/leon_mm.c
> +++ b/arch/sparc/mm/leon_mm.c
> @@ -185,7 +185,7 @@ void leon_flush_dcache_all(void)
>  			     "i"(ASI_LEON_DFLUSH) : "memory");
>  }
>
> -void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page)
> +void leon_flush_pcache_all(struct mm_area *vma, unsigned long page)
>  {
>  	if (vma->vm_flags & VM_EXEC)
>  		leon_flush_icache_all();
> @@ -273,12 +273,12 @@ static void leon_flush_cache_mm(struct mm_struct *mm)
>  	leon_flush_cache_all();
>  }
>
> -static void leon_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
> +static void leon_flush_cache_page(struct mm_area *vma, unsigned long page)
>  {
>  	leon_flush_pcache_all(vma, page);
>  }
>
> -static void leon_flush_cache_range(struct vm_area_struct *vma,
> +static void leon_flush_cache_range(struct mm_area *vma,
>  				   unsigned long start,
>  				   unsigned long end)
>  {
> @@ -290,13 +290,13 @@ static void leon_flush_tlb_mm(struct mm_struct *mm)
>  	leon_flush_tlb_all();
>  }
>
> -static void leon_flush_tlb_page(struct vm_area_struct *vma,
> +static void leon_flush_tlb_page(struct mm_area *vma,
>  				unsigned long page)
>  {
>  	leon_flush_tlb_all();
>  }
>
> -static void leon_flush_tlb_range(struct vm_area_struct *vma,
> +static void leon_flush_tlb_range(struct mm_area *vma,
>  				 unsigned long start,
>  				 unsigned long end)
>  {
> diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
> index dd32711022f5..1337bc4daf6f 100644
> --- a/arch/sparc/mm/srmmu.c
> +++ b/arch/sparc/mm/srmmu.c
> @@ -555,34 +555,34 @@ void srmmu_unmapiorange(unsigned long virt_addr, unsigned int len)
>  /* tsunami.S */
>  extern void tsunami_flush_cache_all(void);
>  extern void tsunami_flush_cache_mm(struct mm_struct *mm);
> -extern void tsunami_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> -extern void tsunami_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
> +extern void tsunami_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
> +extern void tsunami_flush_cache_page(struct mm_area *vma, unsigned long page);
>  extern void tsunami_flush_page_to_ram(unsigned long page);
>  extern void tsunami_flush_page_for_dma(unsigned long page);
>  extern void tsunami_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
>  extern void tsunami_flush_tlb_all(void);
>  extern void tsunami_flush_tlb_mm(struct mm_struct *mm);
> -extern void tsunami_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> -extern void tsunami_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +extern void tsunami_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
> +extern void tsunami_flush_tlb_page(struct mm_area *vma, unsigned long page);
>  extern void tsunami_setup_blockops(void);
>
>  /* swift.S */
>  extern void swift_flush_cache_all(void);
>  extern void swift_flush_cache_mm(struct mm_struct *mm);
> -extern void swift_flush_cache_range(struct vm_area_struct *vma,
> +extern void swift_flush_cache_range(struct mm_area *vma,
>  				    unsigned long start, unsigned long end);
> -extern void swift_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
> +extern void swift_flush_cache_page(struct mm_area *vma, unsigned long page);
>  extern void swift_flush_page_to_ram(unsigned long page);
>  extern void swift_flush_page_for_dma(unsigned long page);
>  extern void swift_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
>  extern void swift_flush_tlb_all(void);
>  extern void swift_flush_tlb_mm(struct mm_struct *mm);
> -extern void swift_flush_tlb_range(struct vm_area_struct *vma,
> +extern void swift_flush_tlb_range(struct mm_area *vma,
>  				  unsigned long start, unsigned long end);
> -extern void swift_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +extern void swift_flush_tlb_page(struct mm_area *vma, unsigned long page);
>
>  #if 0  /* P3: deadwood to debug precise flushes on Swift. */
> -void swift_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void swift_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	int cctx, ctx1;
>
> @@ -621,9 +621,9 @@ void swift_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
>  /* viking.S */
>  extern void viking_flush_cache_all(void);
>  extern void viking_flush_cache_mm(struct mm_struct *mm);
> -extern void viking_flush_cache_range(struct vm_area_struct *vma, unsigned long start,
> +extern void viking_flush_cache_range(struct mm_area *vma, unsigned long start,
>  				     unsigned long end);
> -extern void viking_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
> +extern void viking_flush_cache_page(struct mm_area *vma, unsigned long page);
>  extern void viking_flush_page_to_ram(unsigned long page);
>  extern void viking_flush_page_for_dma(unsigned long page);
>  extern void viking_flush_sig_insns(struct mm_struct *mm, unsigned long addr);
> @@ -631,29 +631,29 @@ extern void viking_flush_page(unsigned long page);
>  extern void viking_mxcc_flush_page(unsigned long page);
>  extern void viking_flush_tlb_all(void);
>  extern void viking_flush_tlb_mm(struct mm_struct *mm);
> -extern void viking_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void viking_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  				   unsigned long end);
> -extern void viking_flush_tlb_page(struct vm_area_struct *vma,
> +extern void viking_flush_tlb_page(struct mm_area *vma,
>  				  unsigned long page);
>  extern void sun4dsmp_flush_tlb_all(void);
>  extern void sun4dsmp_flush_tlb_mm(struct mm_struct *mm);
> -extern void sun4dsmp_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
> +extern void sun4dsmp_flush_tlb_range(struct mm_area *vma, unsigned long start,
>  				   unsigned long end);
> -extern void sun4dsmp_flush_tlb_page(struct vm_area_struct *vma,
> +extern void sun4dsmp_flush_tlb_page(struct mm_area *vma,
>  				  unsigned long page);
>
>  /* hypersparc.S */
>  extern void hypersparc_flush_cache_all(void);
>  extern void hypersparc_flush_cache_mm(struct mm_struct *mm);
> -extern void hypersparc_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> -extern void hypersparc_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
> +extern void hypersparc_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end);
> +extern void hypersparc_flush_cache_page(struct mm_area *vma, unsigned long page);
>  extern void hypersparc_flush_page_to_ram(unsigned long page);
>  extern void hypersparc_flush_page_for_dma(unsigned long page);
>  extern void hypersparc_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
>  extern void hypersparc_flush_tlb_all(void);
>  extern void hypersparc_flush_tlb_mm(struct mm_struct *mm);
> -extern void hypersparc_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> -extern void hypersparc_flush_tlb_page(struct vm_area_struct *vma, unsigned long page);
> +extern void hypersparc_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end);
> +extern void hypersparc_flush_tlb_page(struct mm_area *vma, unsigned long page);
>  extern void hypersparc_setup_blockops(void);
>
>  /*
> @@ -1235,7 +1235,7 @@ static void turbosparc_flush_cache_mm(struct mm_struct *mm)
>  	FLUSH_END
>  }
>
> -static void turbosparc_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +static void turbosparc_flush_cache_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	FLUSH_BEGIN(vma->vm_mm)
>  	flush_user_windows();
> @@ -1243,7 +1243,7 @@ static void turbosparc_flush_cache_range(struct vm_area_struct *vma, unsigned lo
>  	FLUSH_END
>  }
>
> -static void turbosparc_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
> +static void turbosparc_flush_cache_page(struct mm_area *vma, unsigned long page)
>  {
>  	FLUSH_BEGIN(vma->vm_mm)
>  	flush_user_windows();
> @@ -1286,14 +1286,14 @@ static void turbosparc_flush_tlb_mm(struct mm_struct *mm)
>  	FLUSH_END
>  }
>
> -static void turbosparc_flush_tlb_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +static void turbosparc_flush_tlb_range(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	FLUSH_BEGIN(vma->vm_mm)
>  	srmmu_flush_whole_tlb();
>  	FLUSH_END
>  }
>
> -static void turbosparc_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +static void turbosparc_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	FLUSH_BEGIN(vma->vm_mm)
>  	srmmu_flush_whole_tlb();
> @@ -1672,7 +1672,7 @@ static void smp_flush_tlb_mm(struct mm_struct *mm)
>  	}
>  }
>
> -static void smp_flush_cache_range(struct vm_area_struct *vma,
> +static void smp_flush_cache_range(struct mm_area *vma,
>  				  unsigned long start,
>  				  unsigned long end)
>  {
> @@ -1686,7 +1686,7 @@ static void smp_flush_cache_range(struct vm_area_struct *vma,
>  	}
>  }
>
> -static void smp_flush_tlb_range(struct vm_area_struct *vma,
> +static void smp_flush_tlb_range(struct mm_area *vma,
>  				unsigned long start,
>  				unsigned long end)
>  {
> @@ -1700,7 +1700,7 @@ static void smp_flush_tlb_range(struct vm_area_struct *vma,
>  	}
>  }
>
> -static void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
> +static void smp_flush_cache_page(struct mm_area *vma, unsigned long page)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> @@ -1711,7 +1711,7 @@ static void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page)
>  	}
>  }
>
> -static void smp_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +static void smp_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
> index a35ddcca5e76..dd950cbd4fd7 100644
> --- a/arch/sparc/mm/tlb.c
> +++ b/arch/sparc/mm/tlb.c
> @@ -231,7 +231,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>  	__set_pmd_acct(mm, addr, orig, pmd);
>  }
>
> -static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_establish(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
>  	pmd_t old;
> @@ -247,7 +247,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  /*
>   * This routine is only called when splitting a THP
>   */
> -pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
>  		     pmd_t *pmdp)
>  {
>  	pmd_t old, entry;
> diff --git a/arch/sparc/vdso/vma.c b/arch/sparc/vdso/vma.c
> index bab7a59575e8..f8124af4d6f0 100644
> --- a/arch/sparc/vdso/vma.c
> +++ b/arch/sparc/vdso/vma.c
> @@ -363,7 +363,7 @@ static int map_vdso(const struct vdso_image *image,
>  		struct vm_special_mapping *vdso_mapping)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long text_start, addr = 0;
>  	int ret = 0;
>
> diff --git a/arch/um/drivers/mmapper_kern.c b/arch/um/drivers/mmapper_kern.c
> index 807cd3358740..0cb875338307 100644
> --- a/arch/um/drivers/mmapper_kern.c
> +++ b/arch/um/drivers/mmapper_kern.c
> @@ -46,7 +46,7 @@ static long mmapper_ioctl(struct file *file, unsigned int cmd, unsigned long arg
>  	return -ENOIOCTLCMD;
>  }
>
> -static int mmapper_mmap(struct file *file, struct vm_area_struct *vma)
> +static int mmapper_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int ret = -EINVAL;
>  	int size;
> diff --git a/arch/um/include/asm/tlbflush.h b/arch/um/include/asm/tlbflush.h
> index 13a3009942be..cb9e58edd300 100644
> --- a/arch/um/include/asm/tlbflush.h
> +++ b/arch/um/include/asm/tlbflush.h
> @@ -35,13 +35,13 @@ extern int um_tlb_sync(struct mm_struct *mm);
>  extern void flush_tlb_all(void);
>  extern void flush_tlb_mm(struct mm_struct *mm);
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma,
> +static inline void flush_tlb_page(struct mm_area *vma,
>  				  unsigned long address)
>  {
>  	um_tlb_mark_sync(vma->vm_mm, address, address + PAGE_SIZE);
>  }
>
> -static inline void flush_tlb_range(struct vm_area_struct *vma,
> +static inline void flush_tlb_range(struct mm_area *vma,
>  				   unsigned long start, unsigned long end)
>  {
>  	um_tlb_mark_sync(vma->vm_mm, start, end);
> diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c
> index cf7e0d4407f2..9d8fc85b2896 100644
> --- a/arch/um/kernel/tlb.c
> +++ b/arch/um/kernel/tlb.c
> @@ -214,7 +214,7 @@ void flush_tlb_all(void)
>
>  void flush_tlb_mm(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	for_each_vma(vmi, vma)
> diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
> index ce073150dc20..22dd6c703a70 100644
> --- a/arch/um/kernel/trap.c
> +++ b/arch/um/kernel/trap.c
> @@ -26,7 +26,7 @@ int handle_page_fault(unsigned long address, unsigned long ip,
>  		      int is_write, int is_user, int *code_out)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	pmd_t *pmd;
>  	pte_t *pte;
>  	int err = -EFAULT;
> diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
> index adb299d3b6a1..987c2d16ed16 100644
> --- a/arch/x86/entry/vdso/vma.c
> +++ b/arch/x86/entry/vdso/vma.c
> @@ -50,7 +50,7 @@ int __init init_vdso_image(const struct vdso_image *image)
>  struct linux_binprm;
>
>  static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
> -		      struct vm_area_struct *vma, struct vm_fault *vmf)
> +		      struct mm_area *vma, struct vm_fault *vmf)
>  {
>  	const struct vdso_image *image = vma->vm_mm->context.vdso_image;
>
> @@ -63,7 +63,7 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
>  }
>
>  static void vdso_fix_landing(const struct vdso_image *image,
> -		struct vm_area_struct *new_vma)
> +		struct mm_area *new_vma)
>  {
>  #if defined CONFIG_X86_32 || defined CONFIG_IA32_EMULATION
>  	if (in_ia32_syscall() && image == &vdso_image_32) {
> @@ -80,7 +80,7 @@ static void vdso_fix_landing(const struct vdso_image *image,
>  }
>
>  static int vdso_mremap(const struct vm_special_mapping *sm,
> -		struct vm_area_struct *new_vma)
> +		struct mm_area *new_vma)
>  {
>  	const struct vdso_image *image = current->mm->context.vdso_image;
>
> @@ -91,7 +91,7 @@ static int vdso_mremap(const struct vm_special_mapping *sm,
>  }
>
>  static vm_fault_t vvar_vclock_fault(const struct vm_special_mapping *sm,
> -				    struct vm_area_struct *vma, struct vm_fault *vmf)
> +				    struct mm_area *vma, struct vm_fault *vmf)
>  {
>  	switch (vmf->pgoff) {
>  #ifdef CONFIG_PARAVIRT_CLOCK
> @@ -139,7 +139,7 @@ static const struct vm_special_mapping vvar_vclock_mapping = {
>  static int map_vdso(const struct vdso_image *image, unsigned long addr)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long text_start;
>  	int ret = 0;
>
> @@ -203,7 +203,7 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
>  int map_vdso_once(const struct vdso_image *image, unsigned long addr)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	mmap_write_lock(mm);
> diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
> index 2fb7d53cf333..155a54569893 100644
> --- a/arch/x86/entry/vsyscall/vsyscall_64.c
> +++ b/arch/x86/entry/vsyscall/vsyscall_64.c
> @@ -275,14 +275,14 @@ bool emulate_vsyscall(unsigned long error_code,
>   * covers the 64bit vsyscall page now. 32bit has a real VMA now and does
>   * not need special handling anymore:
>   */
> -static const char *gate_vma_name(struct vm_area_struct *vma)
> +static const char *gate_vma_name(struct mm_area *vma)
>  {
>  	return "[vsyscall]";
>  }
>  static const struct vm_operations_struct gate_vma_ops = {
>  	.name = gate_vma_name,
>  };
> -static struct vm_area_struct gate_vma __ro_after_init = {
> +static struct mm_area gate_vma __ro_after_init = {
>  	.vm_start	= VSYSCALL_ADDR,
>  	.vm_end		= VSYSCALL_ADDR + PAGE_SIZE,
>  	.vm_page_prot	= PAGE_READONLY_EXEC,
> @@ -290,7 +290,7 @@ static struct vm_area_struct gate_vma __ro_after_init = {
>  	.vm_ops		= &gate_vma_ops,
>  };
>
> -struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
> +struct mm_area *get_gate_vma(struct mm_struct *mm)
>  {
>  #ifdef CONFIG_COMPAT
>  	if (!mm || !test_bit(MM_CONTEXT_HAS_VSYSCALL, &mm->context.flags))
> @@ -303,7 +303,7 @@ struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
>
>  int in_gate_area(struct mm_struct *mm, unsigned long addr)
>  {
> -	struct vm_area_struct *vma = get_gate_vma(mm);
> +	struct mm_area *vma = get_gate_vma(mm);
>
>  	if (!vma)
>  		return 0;
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index 2398058b6e83..45915a6f2b9e 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -256,7 +256,7 @@ static inline bool is_64bit_mm(struct mm_struct *mm)
>   * So do not enforce things if the VMA is not from the current
>   * mm, or if we are in a kernel thread.
>   */
> -static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
> +static inline bool arch_vma_access_permitted(struct mm_area *vma,
>  		bool write, bool execute, bool foreign)
>  {
>  	/* pkeys never affect instruction fetches */
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index c4c23190925c..3e73c01c3ba0 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -402,7 +402,7 @@ static inline pgdval_t pgd_val(pgd_t pgd)
>  }
>
>  #define  __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION
> -static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr,
> +static inline pte_t ptep_modify_prot_start(struct mm_area *vma, unsigned long addr,
>  					   pte_t *ptep)
>  {
>  	pteval_t ret;
> @@ -412,7 +412,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma, unsigned
>  	return (pte_t) { .pte = ret };
>  }
>
> -static inline void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
> +static inline void ptep_modify_prot_commit(struct mm_area *vma, unsigned long addr,
>  					   pte_t *ptep, pte_t old_pte, pte_t pte)
>  {
>
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 631c306ce1ff..dd67df3d8d0d 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -21,7 +21,7 @@ struct task_struct;
>  struct cpumask;
>  struct flush_tlb_info;
>  struct mmu_gather;
> -struct vm_area_struct;
> +struct mm_area;
>
>  /*
>   * Wrapper type for pointers to code which uses the non-standard
> @@ -168,9 +168,9 @@ struct pv_mmu_ops {
>  	void (*set_pte)(pte_t *ptep, pte_t pteval);
>  	void (*set_pmd)(pmd_t *pmdp, pmd_t pmdval);
>
> -	pte_t (*ptep_modify_prot_start)(struct vm_area_struct *vma, unsigned long addr,
> +	pte_t (*ptep_modify_prot_start)(struct mm_area *vma, unsigned long addr,
>  					pte_t *ptep);
> -	void (*ptep_modify_prot_commit)(struct vm_area_struct *vma, unsigned long addr,
> +	void (*ptep_modify_prot_commit)(struct mm_area *vma, unsigned long addr,
>  					pte_t *ptep, pte_t pte);
>
>  	struct paravirt_callee_save pte_val;
> diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
> index dabafba957ea..b39a39a46f7a 100644
> --- a/arch/x86/include/asm/pgtable-3level.h
> +++ b/arch/x86/include/asm/pgtable-3level.h
> @@ -122,7 +122,7 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
>
>  #ifndef pmdp_establish
>  #define pmdp_establish pmdp_establish
> -static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_establish(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
>  	pmd_t old;
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index 5ddba366d3b4..1415b469056b 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -498,8 +498,8 @@ static inline pte_t pte_mkwrite_novma(pte_t pte)
>  	return pte_set_flags(pte, _PAGE_RW);
>  }
>
> -struct vm_area_struct;
> -pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma);
> +struct mm_area;
> +pte_t pte_mkwrite(pte_t pte, struct mm_area *vma);
>  #define pte_mkwrite pte_mkwrite
>
>  static inline pte_t pte_mkhuge(pte_t pte)
> @@ -623,7 +623,7 @@ static inline pmd_t pmd_mkwrite_novma(pmd_t pmd)
>  	return pmd_set_flags(pmd, _PAGE_RW);
>  }
>
> -pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
> +pmd_t pmd_mkwrite(pmd_t pmd, struct mm_area *vma);
>  #define pmd_mkwrite pmd_mkwrite
>
>  /* See comments above mksaveddirty_shift() */
> @@ -1291,19 +1291,19 @@ static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
>   * race with other CPU's that might be updating the dirty
>   * bit at the same time.
>   */
> -struct vm_area_struct;
> +struct mm_area;
>
>  #define  __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> -extern int ptep_set_access_flags(struct vm_area_struct *vma,
> +extern int ptep_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pte_t *ptep,
>  				 pte_t entry, int dirty);
>
>  #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> -extern int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +extern int ptep_test_and_clear_young(struct mm_area *vma,
>  				     unsigned long addr, pte_t *ptep);
>
>  #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> -extern int ptep_clear_flush_young(struct vm_area_struct *vma,
> +extern int ptep_clear_flush_young(struct mm_area *vma,
>  				  unsigned long address, pte_t *ptep);
>
>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
> @@ -1356,21 +1356,21 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
>  #define mk_pmd(page, pgprot)   pfn_pmd(page_to_pfn(page), (pgprot))
>
>  #define  __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> -extern int pmdp_set_access_flags(struct vm_area_struct *vma,
> +extern int pmdp_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pmd_t *pmdp,
>  				 pmd_t entry, int dirty);
> -extern int pudp_set_access_flags(struct vm_area_struct *vma,
> +extern int pudp_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pud_t *pudp,
>  				 pud_t entry, int dirty);
>
>  #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> -extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +extern int pmdp_test_and_clear_young(struct mm_area *vma,
>  				     unsigned long addr, pmd_t *pmdp);
> -extern int pudp_test_and_clear_young(struct vm_area_struct *vma,
> +extern int pudp_test_and_clear_young(struct mm_area *vma,
>  				     unsigned long addr, pud_t *pudp);
>
>  #define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
> -extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +extern int pmdp_clear_flush_young(struct mm_area *vma,
>  				  unsigned long address, pmd_t *pmdp);
>
>
> @@ -1415,7 +1415,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>
>  #ifndef pmdp_establish
>  #define pmdp_establish pmdp_establish
> -static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_establish(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
>  	page_table_check_pmd_set(vma->vm_mm, pmdp, pmd);
> @@ -1430,7 +1430,7 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
>  #endif
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -static inline pud_t pudp_establish(struct vm_area_struct *vma,
> +static inline pud_t pudp_establish(struct mm_area *vma,
>  		unsigned long address, pud_t *pudp, pud_t pud)
>  {
>  	page_table_check_pud_set(vma->vm_mm, pudp, pud);
> @@ -1445,10 +1445,10 @@ static inline pud_t pudp_establish(struct vm_area_struct *vma,
>  #endif
>
>  #define __HAVE_ARCH_PMDP_INVALIDATE_AD
> -extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
> +extern pmd_t pmdp_invalidate_ad(struct mm_area *vma,
>  				unsigned long address, pmd_t *pmdp);
>
> -pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
>  		      pud_t *pudp);
>
>  /*
> @@ -1554,20 +1554,20 @@ static inline unsigned long page_level_mask(enum pg_level level)
>   * The x86 doesn't have any external MMU info: the kernel page
>   * tables contain all the necessary information.
>   */
> -static inline void update_mmu_cache(struct vm_area_struct *vma,
> +static inline void update_mmu_cache(struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep)
>  {
>  }
>  static inline void update_mmu_cache_range(struct vm_fault *vmf,
> -		struct vm_area_struct *vma, unsigned long addr,
> +		struct mm_area *vma, unsigned long addr,
>  		pte_t *ptep, unsigned int nr)
>  {
>  }
> -static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pmd(struct mm_area *vma,
>  		unsigned long addr, pmd_t *pmd)
>  {
>  }
> -static inline void update_mmu_cache_pud(struct vm_area_struct *vma,
> +static inline void update_mmu_cache_pud(struct mm_area *vma,
>  		unsigned long addr, pud_t *pud)
>  {
>  }
> @@ -1724,13 +1724,13 @@ static inline bool arch_has_pfn_modify_check(void)
>  }
>
>  #define arch_check_zapped_pte arch_check_zapped_pte
> -void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);
> +void arch_check_zapped_pte(struct mm_area *vma, pte_t pte);
>
>  #define arch_check_zapped_pmd arch_check_zapped_pmd
> -void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd);
> +void arch_check_zapped_pmd(struct mm_area *vma, pmd_t pmd);
>
>  #define arch_check_zapped_pud arch_check_zapped_pud
> -void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud);
> +void arch_check_zapped_pud(struct mm_area *vma, pud_t pud);
>
>  #ifdef CONFIG_XEN_PV
>  #define arch_has_hw_nonleaf_pmd_young arch_has_hw_nonleaf_pmd_young
> diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
> index b612cc57a4d3..ce08b06f7b85 100644
> --- a/arch/x86/include/asm/pgtable_32.h
> +++ b/arch/x86/include/asm/pgtable_32.h
> @@ -23,7 +23,7 @@
>  #include <linux/spinlock.h>
>
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>
>  extern pgd_t swapper_pg_dir[1024];
>  extern pgd_t initial_page_table[1024];
> diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
> index 2e6c04d8a45b..c92d445a2d4d 100644
> --- a/arch/x86/include/asm/pkeys.h
> +++ b/arch/x86/include/asm/pkeys.h
> @@ -30,9 +30,9 @@ static inline int execute_only_pkey(struct mm_struct *mm)
>  	return __execute_only_pkey(mm);
>  }
>
> -extern int __arch_override_mprotect_pkey(struct vm_area_struct *vma,
> +extern int __arch_override_mprotect_pkey(struct mm_area *vma,
>  		int prot, int pkey);
> -static inline int arch_override_mprotect_pkey(struct vm_area_struct *vma,
> +static inline int arch_override_mprotect_pkey(struct mm_area *vma,
>  		int prot, int pkey)
>  {
>  	if (!cpu_feature_enabled(X86_FEATURE_OSPKE))
> @@ -115,7 +115,7 @@ int mm_pkey_free(struct mm_struct *mm, int pkey)
>  	return 0;
>  }
>
> -static inline int vma_pkey(struct vm_area_struct *vma)
> +static inline int vma_pkey(struct mm_area *vma)
>  {
>  	unsigned long vma_pkey_mask = VM_PKEY_BIT0 | VM_PKEY_BIT1 |
>  				      VM_PKEY_BIT2 | VM_PKEY_BIT3;
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index e9b81876ebe4..0db9ba656abc 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -319,7 +319,7 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
>  				bool freed_tables);
>  extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
> -static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
> +static inline void flush_tlb_page(struct mm_area *vma, unsigned long a)
>  {
>  	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT, false);
>  }
> diff --git a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> index 92ea1472bde9..a223490e1042 100644
> --- a/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> +++ b/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
> @@ -1484,7 +1484,7 @@ static int pseudo_lock_dev_release(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>
> -static int pseudo_lock_dev_mremap(struct vm_area_struct *area)
> +static int pseudo_lock_dev_mremap(struct mm_area *area)
>  {
>  	/* Not supported */
>  	return -EINVAL;
> @@ -1494,7 +1494,7 @@ static const struct vm_operations_struct pseudo_mmap_ops = {
>  	.mremap = pseudo_lock_dev_mremap,
>  };
>
> -static int pseudo_lock_dev_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int pseudo_lock_dev_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	unsigned long vsize = vma->vm_end - vma->vm_start;
>  	unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
> diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
> index 7f8d1e11dbee..e7e41b05b5c8 100644
> --- a/arch/x86/kernel/cpu/sgx/driver.c
> +++ b/arch/x86/kernel/cpu/sgx/driver.c
> @@ -81,7 +81,7 @@ static int sgx_release(struct inode *inode, struct file *file)
>  	return 0;
>  }
>
> -static int sgx_mmap(struct file *file, struct vm_area_struct *vma)
> +static int sgx_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct sgx_encl *encl = file->private_data;
>  	int ret;
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 279148e72459..8455a87e56f2 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -324,7 +324,7 @@ struct sgx_encl_page *sgx_encl_load_page(struct sgx_encl *encl,
>   * Returns: Appropriate vm_fault_t: VM_FAULT_NOPAGE when PTE was installed
>   * successfully, VM_FAULT_SIGBUS or VM_FAULT_OOM as error otherwise.
>   */
> -static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
> +static vm_fault_t sgx_encl_eaug_page(struct mm_area *vma,
>  				     struct sgx_encl *encl, unsigned long addr)
>  {
>  	vm_fault_t vmret = VM_FAULT_SIGBUS;
> @@ -430,7 +430,7 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
>  static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
>  {
>  	unsigned long addr = (unsigned long)vmf->address;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct sgx_encl_page *entry;
>  	unsigned long phys_addr;
>  	struct sgx_encl *encl;
> @@ -484,7 +484,7 @@ static vm_fault_t sgx_vma_fault(struct vm_fault *vmf)
>  	return VM_FAULT_NOPAGE;
>  }
>
> -static void sgx_vma_open(struct vm_area_struct *vma)
> +static void sgx_vma_open(struct mm_area *vma)
>  {
>  	struct sgx_encl *encl = vma->vm_private_data;
>
> @@ -567,7 +567,7 @@ int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
>  	return ret;
>  }
>
> -static int sgx_vma_mprotect(struct vm_area_struct *vma, unsigned long start,
> +static int sgx_vma_mprotect(struct mm_area *vma, unsigned long start,
>  			    unsigned long end, unsigned long newflags)
>  {
>  	return sgx_encl_may_map(vma->vm_private_data, start, end, newflags);
> @@ -625,7 +625,7 @@ static struct sgx_encl_page *sgx_encl_reserve_page(struct sgx_encl *encl,
>  	return entry;
>  }
>
> -static int sgx_vma_access(struct vm_area_struct *vma, unsigned long addr,
> +static int sgx_vma_access(struct mm_area *vma, unsigned long addr,
>  			  void *buf, int len, int write)
>  {
>  	struct sgx_encl *encl = vma->vm_private_data;
> @@ -1137,7 +1137,7 @@ int sgx_encl_test_and_clear_young(struct mm_struct *mm,
>  {
>  	unsigned long addr = page->desc & PAGE_MASK;
>  	struct sgx_encl *encl = page->encl;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret;
>
>  	ret = sgx_encl_find(mm, addr, &vma);
> @@ -1200,7 +1200,7 @@ void sgx_zap_enclave_ptes(struct sgx_encl *encl, unsigned long addr)
>  {
>  	unsigned long mm_list_version;
>  	struct sgx_encl_mm *encl_mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int idx, ret;
>
>  	do {
> diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
> index f94ff14c9486..de567cd442bc 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.h
> +++ b/arch/x86/kernel/cpu/sgx/encl.h
> @@ -87,9 +87,9 @@ struct sgx_backing {
>  extern const struct vm_operations_struct sgx_vm_ops;
>
>  static inline int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
> -				struct vm_area_struct **vma)
> +				struct mm_area **vma)
>  {
> -	struct vm_area_struct *result;
> +	struct mm_area *result;
>
>  	result = vma_lookup(mm, addr);
>  	if (!result || result->vm_ops != &sgx_vm_ops)
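
The sgx_encl_find() hunk is a nice capsule of the idiom this rename touches all
over the tree: look the VMA up, then prove ownership by comparing vm_ops. For
anyone auditing conversions, a minimal sketch of the same idiom for a
hypothetical driver (my_vm_ops stands in for whatever vm_operations_struct the
driver installed at mmap time; caller must hold mmap_lock):

	static struct mm_area *my_find_vma(struct mm_struct *mm,
					   unsigned long addr)
	{
		/* vma_lookup() returns the VMA containing addr, or NULL */
		struct mm_area *vma = vma_lookup(mm, addr);

		/* reject mappings that some other driver created */
		if (!vma || vma->vm_ops != &my_vm_ops)
			return NULL;
		return vma;
	}
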
> diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
> index 776a20172867..b25b51724b3a 100644
> --- a/arch/x86/kernel/cpu/sgx/ioctl.c
> +++ b/arch/x86/kernel/cpu/sgx/ioctl.c
> @@ -209,7 +209,7 @@ static int __sgx_encl_add_page(struct sgx_encl *encl,
>  			       struct sgx_secinfo *secinfo, unsigned long src)
>  {
>  	struct sgx_pageinfo pginfo;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct page *src_page;
>  	int ret;
>
> diff --git a/arch/x86/kernel/cpu/sgx/virt.c b/arch/x86/kernel/cpu/sgx/virt.c
> index 7aaa3652e31d..a601d9e1d867 100644
> --- a/arch/x86/kernel/cpu/sgx/virt.c
> +++ b/arch/x86/kernel/cpu/sgx/virt.c
> @@ -31,7 +31,7 @@ static struct mutex zombie_secs_pages_lock;
>  static struct list_head zombie_secs_pages;
>
>  static int __sgx_vepc_fault(struct sgx_vepc *vepc,
> -			    struct vm_area_struct *vma, unsigned long addr)
> +			    struct mm_area *vma, unsigned long addr)
>  {
>  	struct sgx_epc_page *epc_page;
>  	unsigned long index, pfn;
> @@ -73,7 +73,7 @@ static int __sgx_vepc_fault(struct sgx_vepc *vepc,
>
>  static vm_fault_t sgx_vepc_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct sgx_vepc *vepc = vma->vm_private_data;
>  	int ret;
>
> @@ -96,7 +96,7 @@ static const struct vm_operations_struct sgx_vepc_vm_ops = {
>  	.fault = sgx_vepc_fault,
>  };
>
> -static int sgx_vepc_mmap(struct file *file, struct vm_area_struct *vma)
> +static int sgx_vepc_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct sgx_vepc *vepc = file->private_data;
>
> diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
> index 059685612362..f18dd5e2beff 100644
> --- a/arch/x86/kernel/shstk.c
> +++ b/arch/x86/kernel/shstk.c
> @@ -294,7 +294,7 @@ static int shstk_push_sigframe(unsigned long *ssp)
>
>  static int shstk_pop_sigframe(unsigned long *ssp)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long token_addr;
>  	bool need_to_check_vma;
>  	int err = 1;
> diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
> index 776ae6fa7f2d..ab965bc812a7 100644
> --- a/arch/x86/kernel/sys_x86_64.c
> +++ b/arch/x86/kernel/sys_x86_64.c
> @@ -128,7 +128,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
>  		       unsigned long pgoff, unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_unmapped_area_info info = {};
>  	unsigned long begin, end;
>
> @@ -168,7 +168,7 @@ arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr0,
>  			  unsigned long len, unsigned long pgoff,
>  			  unsigned long flags, vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long addr = addr0;
>  	struct vm_unmapped_area_info info = {};
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 296d294142c8..9255779b17f4 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -836,7 +836,7 @@ bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
>  static void
>  __bad_area(struct pt_regs *regs, unsigned long error_code,
>  	   unsigned long address, struct mm_struct *mm,
> -	   struct vm_area_struct *vma, u32 pkey, int si_code)
> +	   struct mm_area *vma, u32 pkey, int si_code)
>  {
>  	/*
>  	 * Something tried to access memory that isn't in our memory map..
> @@ -851,7 +851,7 @@ __bad_area(struct pt_regs *regs, unsigned long error_code,
>  }
>
>  static inline bool bad_area_access_from_pkeys(unsigned long error_code,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	/* This code is always called on the current mm */
>  	bool foreign = false;
> @@ -870,7 +870,7 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
>  static noinline void
>  bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
>  		      unsigned long address, struct mm_struct *mm,
> -		      struct vm_area_struct *vma)
> +		      struct mm_area *vma)
>  {
>  	/*
>  	 * This OSPKE check is not strictly necessary at runtime.
> @@ -1049,7 +1049,7 @@ NOKPROBE_SYMBOL(spurious_kernel_fault);
>  int show_unhandled_signals = 1;
>
>  static inline int
> -access_error(unsigned long error_code, struct vm_area_struct *vma)
> +access_error(unsigned long error_code, struct mm_area *vma)
>  {
>  	/* This is only called for the current mm, so: */
>  	bool foreign = false;
> @@ -1211,7 +1211,7 @@ void do_user_addr_fault(struct pt_regs *regs,
>  			unsigned long error_code,
>  			unsigned long address)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk;
>  	struct mm_struct *mm;
>  	vm_fault_t fault;
> diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> index 72d8cbc61158..f301b40be91b 100644
> --- a/arch/x86/mm/pat/memtype.c
> +++ b/arch/x86/mm/pat/memtype.c
> @@ -932,7 +932,7 @@ static void free_pfn_range(u64 paddr, unsigned long size)
>  		memtype_free(paddr, paddr + size);
>  }
>
> -static int follow_phys(struct vm_area_struct *vma, unsigned long *prot,
> +static int follow_phys(struct mm_area *vma, unsigned long *prot,
>  		resource_size_t *phys)
>  {
>  	struct follow_pfnmap_args args = { .vma = vma, .address = vma->vm_start };
> @@ -952,7 +952,7 @@ static int follow_phys(struct vm_area_struct *vma, unsigned long *prot,
>  	return 0;
>  }
>
> -static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
> +static int get_pat_info(struct mm_area *vma, resource_size_t *paddr,
>  		pgprot_t *pgprot)
>  {
>  	unsigned long prot;
> @@ -984,8 +984,8 @@ static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
>  	return -EINVAL;
>  }
>
> -int track_pfn_copy(struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma, unsigned long *pfn)
> +int track_pfn_copy(struct mm_area *dst_vma,
> +		struct mm_area *src_vma, unsigned long *pfn)
>  {
>  	const unsigned long vma_size = src_vma->vm_end - src_vma->vm_start;
>  	resource_size_t paddr;
> @@ -1011,7 +1011,7 @@ int track_pfn_copy(struct vm_area_struct *dst_vma,
>  	return 0;
>  }
>
> -void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
> +void untrack_pfn_copy(struct mm_area *dst_vma, unsigned long pfn)
>  {
>  	untrack_pfn(dst_vma, pfn, dst_vma->vm_end - dst_vma->vm_start, true);
>  	/*
> @@ -1026,7 +1026,7 @@ void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
>   * reserve the entire pfn + size range with single reserve_pfn_range
>   * call.
>   */
> -int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> +int track_pfn_remap(struct mm_area *vma, pgprot_t *prot,
>  		    unsigned long pfn, unsigned long addr, unsigned long size)
>  {
>  	resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
> @@ -1066,7 +1066,7 @@ int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>  	return 0;
>  }
>
> -void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
> +void track_pfn_insert(struct mm_area *vma, pgprot_t *prot, pfn_t pfn)
>  {
>  	enum page_cache_mode pcm;
>
> @@ -1084,7 +1084,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
>   * untrack can be called for a specific region indicated by pfn and size or
>   * can be for the entire vma (in which case pfn, size are zero).
>   */
> -void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> +void untrack_pfn(struct mm_area *vma, unsigned long pfn,
>  		 unsigned long size, bool mm_wr_locked)
>  {
>  	resource_size_t paddr;
> @@ -1108,7 +1108,7 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
>  	}
>  }
>
> -void untrack_pfn_clear(struct vm_area_struct *vma)
> +void untrack_pfn_clear(struct mm_area *vma)
>  {
>  	vm_flags_clear(vma, VM_PAT);
>  }
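
For context on the memtype.c hunks: drivers never call track_pfn_remap() or
untrack_pfn() directly, they arrive here from remap_pfn_range() and friends. A
minimal sketch of an mmap handler that ends up in this code on x86
(MYDRV_PHYS_BASE is a made-up MMIO base, purely for illustration):

	static int mydrv_mmap(struct file *file, struct mm_area *vma)
	{
		unsigned long size = vma->vm_end - vma->vm_start;
		unsigned long pfn = MYDRV_PHYS_BASE >> PAGE_SHIFT;

		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
		/* remap_pfn_range() calls track_pfn_remap() on x86 so PAT
		 * can reserve a memtype for the physical range */
		return remap_pfn_range(vma, vma->vm_start, pfn, size,
				       vma->vm_page_prot);
	}
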
> diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
> index a05fcddfc811..c0105e8b5130 100644
> --- a/arch/x86/mm/pgtable.c
> +++ b/arch/x86/mm/pgtable.c
> @@ -458,7 +458,7 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd)
>   * to also make the pte writeable at the same time the dirty bit is
>   * set. In that case we do actually need to write the PTE.
>   */
> -int ptep_set_access_flags(struct vm_area_struct *vma,
> +int ptep_set_access_flags(struct mm_area *vma,
>  			  unsigned long address, pte_t *ptep,
>  			  pte_t entry, int dirty)
>  {
> @@ -471,7 +471,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -int pmdp_set_access_flags(struct vm_area_struct *vma,
> +int pmdp_set_access_flags(struct mm_area *vma,
>  			  unsigned long address, pmd_t *pmdp,
>  			  pmd_t entry, int dirty)
>  {
> @@ -492,7 +492,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
>  	return changed;
>  }
>
> -int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +int pudp_set_access_flags(struct mm_area *vma, unsigned long address,
>  			  pud_t *pudp, pud_t entry, int dirty)
>  {
>  	int changed = !pud_same(*pudp, entry);
> @@ -513,7 +513,7 @@ int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>  }
>  #endif
>
> -int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +int ptep_test_and_clear_young(struct mm_area *vma,
>  			      unsigned long addr, pte_t *ptep)
>  {
>  	int ret = 0;
> @@ -526,7 +526,7 @@ int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  }
>
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
> -int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +int pmdp_test_and_clear_young(struct mm_area *vma,
>  			      unsigned long addr, pmd_t *pmdp)
>  {
>  	int ret = 0;
> @@ -540,7 +540,7 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  #endif
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -int pudp_test_and_clear_young(struct vm_area_struct *vma,
> +int pudp_test_and_clear_young(struct mm_area *vma,
>  			      unsigned long addr, pud_t *pudp)
>  {
>  	int ret = 0;
> @@ -553,7 +553,7 @@ int pudp_test_and_clear_young(struct vm_area_struct *vma,
>  }
>  #endif
>
> -int ptep_clear_flush_young(struct vm_area_struct *vma,
> +int ptep_clear_flush_young(struct mm_area *vma,
>  			   unsigned long address, pte_t *ptep)
>  {
>  	/*
> @@ -573,7 +573,7 @@ int ptep_clear_flush_young(struct vm_area_struct *vma,
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +int pmdp_clear_flush_young(struct mm_area *vma,
>  			   unsigned long address, pmd_t *pmdp)
>  {
>  	int young;
> @@ -587,7 +587,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
>  	return young;
>  }
>
> -pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_invalidate_ad(struct mm_area *vma, unsigned long address,
>  			 pmd_t *pmdp)
>  {
>  	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
> @@ -602,7 +602,7 @@ pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
>
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
>  	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> -pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +pud_t pudp_invalidate(struct mm_area *vma, unsigned long address,
>  		     pud_t *pudp)
>  {
>  	VM_WARN_ON_ONCE(!pud_present(*pudp));
> @@ -858,7 +858,7 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
>  #endif /* CONFIG_X86_64 */
>  #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
>
> -pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +pte_t pte_mkwrite(pte_t pte, struct mm_area *vma)
>  {
>  	if (vma->vm_flags & VM_SHADOW_STACK)
>  		return pte_mkwrite_shstk(pte);
> @@ -868,7 +868,7 @@ pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
>  	return pte_clear_saveddirty(pte);
>  }
>
> -pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
> +pmd_t pmd_mkwrite(pmd_t pmd, struct mm_area *vma)
>  {
>  	if (vma->vm_flags & VM_SHADOW_STACK)
>  		return pmd_mkwrite_shstk(pmd);
> @@ -878,7 +878,7 @@ pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
>  	return pmd_clear_saveddirty(pmd);
>  }
>
> -void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte)
> +void arch_check_zapped_pte(struct mm_area *vma, pte_t pte)
>  {
>  	/*
>  	 * Hardware before shadow stack can (rarely) set Dirty=1
> @@ -891,14 +891,14 @@ void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte)
>  			pte_shstk(pte));
>  }
>
> -void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd)
> +void arch_check_zapped_pmd(struct mm_area *vma, pmd_t pmd)
>  {
>  	/* See note in arch_check_zapped_pte() */
>  	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
>  			pmd_shstk(pmd));
>  }
>
> -void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
> +void arch_check_zapped_pud(struct mm_area *vma, pud_t pud)
>  {
>  	/* See note in arch_check_zapped_pte() */
>  	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pud_shstk(pud));
> diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
> index 7418c367e328..8626515f8331 100644
> --- a/arch/x86/mm/pkeys.c
> +++ b/arch/x86/mm/pkeys.c
> @@ -59,7 +59,7 @@ int __execute_only_pkey(struct mm_struct *mm)
>  	return execute_only_pkey;
>  }
>
> -static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
> +static inline bool vma_is_pkey_exec_only(struct mm_area *vma)
>  {
>  	/* Do this check first since the vm_flags should be hot */
>  	if ((vma->vm_flags & VM_ACCESS_FLAGS) != VM_EXEC)
> @@ -73,7 +73,7 @@ static inline bool vma_is_pkey_exec_only(struct vm_area_struct *vma)
>  /*
>   * This is only called for *plain* mprotect calls.
>   */
> -int __arch_override_mprotect_pkey(struct vm_area_struct *vma, int prot, int pkey)
> +int __arch_override_mprotect_pkey(struct mm_area *vma, int prot, int pkey)
>  {
>  	/*
>  	 * Is this an mprotect_pkey() call?  If so, never
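
Since the pkeys hunks keep referring to "plain" mprotect() calls, the
distinction is whether userspace supplied an explicit pkey. A small userspace
sketch, assuming an OSPKE-capable CPU and with error handling omitted (addr
and len describe an existing mapping):

	#define _GNU_SOURCE
	#include <sys/mman.h>

	static void pkey_demo(void *addr, size_t len)
	{
		int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);

		/* explicit pkey: __arch_override_mprotect_pkey() leaves it alone */
		pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, pkey);

		/* plain mprotect(): no pkey given, so a PROT_EXEC-only
		 * request may be rerouted to the execute-only pkey */
		mprotect(addr, len, PROT_EXEC);
	}
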
> diff --git a/arch/x86/um/mem_32.c b/arch/x86/um/mem_32.c
> index 29b2203bc82c..495b032f68f5 100644
> --- a/arch/x86/um/mem_32.c
> +++ b/arch/x86/um/mem_32.c
> @@ -6,7 +6,7 @@
>  #include <linux/mm.h>
>  #include <asm/elf.h>
>
> -static struct vm_area_struct gate_vma;
> +static struct mm_area gate_vma;
>
>  static int __init gate_vma_init(void)
>  {
> @@ -23,7 +23,7 @@ static int __init gate_vma_init(void)
>  }
>  __initcall(gate_vma_init);
>
> -struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
> +struct mm_area *get_gate_vma(struct mm_struct *mm)
>  {
>  	return FIXADDR_USER_START ? &gate_vma : NULL;
>  }
> @@ -41,7 +41,7 @@ int in_gate_area_no_mm(unsigned long addr)
>
>  int in_gate_area(struct mm_struct *mm, unsigned long addr)
>  {
> -	struct vm_area_struct *vma = get_gate_vma(mm);
> +	struct mm_area *vma = get_gate_vma(mm);
>
>  	if (!vma)
>  		return 0;
> diff --git a/arch/x86/um/mem_64.c b/arch/x86/um/mem_64.c
> index c027e93d1002..5fd2a34ebe23 100644
> --- a/arch/x86/um/mem_64.c
> +++ b/arch/x86/um/mem_64.c
> @@ -2,7 +2,7 @@
>  #include <linux/mm.h>
>  #include <asm/elf.h>
>
> -const char *arch_vma_name(struct vm_area_struct *vma)
> +const char *arch_vma_name(struct mm_area *vma)
>  {
>  	if (vma->vm_mm && vma->vm_start == um_vdso_addr)
>  		return "[vdso]";
> diff --git a/arch/x86/um/vdso/vma.c b/arch/x86/um/vdso/vma.c
> index dc8dfb2abd80..2f80bb140815 100644
> --- a/arch/x86/um/vdso/vma.c
> +++ b/arch/x86/um/vdso/vma.c
> @@ -41,7 +41,7 @@ subsys_initcall(init_vdso);
>
>  int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm = current->mm;
>  	static struct vm_special_mapping vdso_mapping = {
>  		.name = "[vdso]",
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index c4c479373249..c268d7d323ab 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -38,7 +38,7 @@ xmaddr_t arbitrary_virt_to_machine(void *vaddr)
>  EXPORT_SYMBOL_GPL(arbitrary_virt_to_machine);
>
>  /* Returns: 0 success */
> -int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
> +int xen_unmap_domain_gfn_range(struct mm_area *vma,
>  			       int nr, struct page **pages)
>  {
>  	if (xen_feature(XENFEAT_auto_translated_physmap))
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index 38971c6dcd4b..ddb7a5dcce88 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -348,7 +348,7 @@ static void xen_set_pte(pte_t *ptep, pte_t pteval)
>  	__xen_set_pte(ptep, pteval);
>  }
>
> -static pte_t xen_ptep_modify_prot_start(struct vm_area_struct *vma,
> +static pte_t xen_ptep_modify_prot_start(struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep)
>  {
>  	/* Just return the pte as-is.  We preserve the bits on commit */
> @@ -356,7 +356,7 @@ static pte_t xen_ptep_modify_prot_start(struct vm_area_struct *vma,
>  	return *ptep;
>  }
>
> -static void xen_ptep_modify_prot_commit(struct vm_area_struct *vma,
> +static void xen_ptep_modify_prot_commit(struct mm_area *vma,
>  					unsigned long addr,
>  					pte_t *ptep, pte_t pte)
>  {
> @@ -2494,7 +2494,7 @@ static int remap_area_pfn_pte_fn(pte_t *ptep, unsigned long addr, void *data)
>  	return 0;
>  }
>
> -int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
> +int xen_remap_pfn(struct mm_area *vma, unsigned long addr,
>  		  xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,
>  		  unsigned int domid, bool no_translate)
>  {
> diff --git a/arch/xtensa/include/asm/cacheflush.h b/arch/xtensa/include/asm/cacheflush.h
> index a2b6bb5429f5..6d4a401875c2 100644
> --- a/arch/xtensa/include/asm/cacheflush.h
> +++ b/arch/xtensa/include/asm/cacheflush.h
> @@ -96,9 +96,9 @@ static inline void __invalidate_icache_page_alias(unsigned long virt,
>
>  #ifdef CONFIG_SMP
>  void flush_cache_all(void);
> -void flush_cache_range(struct vm_area_struct*, ulong, ulong);
> +void flush_cache_range(struct mm_area*, ulong, ulong);
>  void flush_icache_range(unsigned long start, unsigned long end);
> -void flush_cache_page(struct vm_area_struct*,
> +void flush_cache_page(struct mm_area*,
>  			     unsigned long, unsigned long);
>  #define flush_cache_all flush_cache_all
>  #define flush_cache_range flush_cache_range
> @@ -133,9 +133,9 @@ static inline void flush_dcache_page(struct page *page)
>  	flush_dcache_folio(page_folio(page));
>  }
>
> -void local_flush_cache_range(struct vm_area_struct *vma,
> +void local_flush_cache_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end);
> -void local_flush_cache_page(struct vm_area_struct *vma,
> +void local_flush_cache_page(struct mm_area *vma,
>  		unsigned long address, unsigned long pfn);
>
>  #else
> @@ -155,9 +155,9 @@ void local_flush_cache_page(struct vm_area_struct *vma,
>
>  #if defined(CONFIG_MMU) && (DCACHE_WAY_SIZE > PAGE_SIZE)
>
> -extern void copy_to_user_page(struct vm_area_struct*, struct page*,
> +extern void copy_to_user_page(struct mm_area*, struct page*,
>  		unsigned long, void*, const void*, unsigned long);
> -extern void copy_from_user_page(struct vm_area_struct*, struct page*,
> +extern void copy_from_user_page(struct mm_area*, struct page*,
>  		unsigned long, void*, const void*, unsigned long);
>  #define copy_to_user_page copy_to_user_page
>  #define copy_from_user_page copy_from_user_page
> diff --git a/arch/xtensa/include/asm/page.h b/arch/xtensa/include/asm/page.h
> index 644413792bf3..47df5872733a 100644
> --- a/arch/xtensa/include/asm/page.h
> +++ b/arch/xtensa/include/asm/page.h
> @@ -106,7 +106,7 @@ typedef struct page *pgtable_t;
>  # include <asm-generic/getorder.h>
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>  extern void clear_page(void *page);
>  extern void copy_page(void *to, void *from);
>
> @@ -124,7 +124,7 @@ extern void copy_page_alias(void *to, void *from,
>  void clear_user_highpage(struct page *page, unsigned long vaddr);
>  #define __HAVE_ARCH_COPY_USER_HIGHPAGE
>  void copy_user_highpage(struct page *to, struct page *from,
> -			unsigned long vaddr, struct vm_area_struct *vma);
> +			unsigned long vaddr, struct mm_area *vma);
>  #else
>  # define clear_user_page(page, vaddr, pg)	clear_page(page)
>  # define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
> diff --git a/arch/xtensa/include/asm/pgtable.h b/arch/xtensa/include/asm/pgtable.h
> index 1647a7cc3fbf..247b9d7b91b4 100644
> --- a/arch/xtensa/include/asm/pgtable.h
> +++ b/arch/xtensa/include/asm/pgtable.h
> @@ -313,10 +313,10 @@ set_pmd(pmd_t *pmdp, pmd_t pmdval)
>  	*pmdp = pmdval;
>  }
>
> -struct vm_area_struct;
> +struct mm_area;
>
>  static inline int
> -ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr,
> +ptep_test_and_clear_young(struct mm_area *vma, unsigned long addr,
>  			  pte_t *ptep)
>  {
>  	pte_t pte = *ptep;
> @@ -403,14 +403,14 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
>  #else
>
>  struct vm_fault;
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long address, pte_t *ptep, unsigned int nr);
>  #define update_mmu_cache(vma, address, ptep) \
>  	update_mmu_cache_range(NULL, vma, address, ptep, 1)
>
>  typedef pte_t *pte_addr_t;
>
> -void update_mmu_tlb_range(struct vm_area_struct *vma,
> +void update_mmu_tlb_range(struct mm_area *vma,
>  		unsigned long address, pte_t *ptep, unsigned int nr);
>  #define update_mmu_tlb_range update_mmu_tlb_range
>
> diff --git a/arch/xtensa/include/asm/tlbflush.h b/arch/xtensa/include/asm/tlbflush.h
> index 573df8cea200..36a5ca4f41b8 100644
> --- a/arch/xtensa/include/asm/tlbflush.h
> +++ b/arch/xtensa/include/asm/tlbflush.h
> @@ -32,9 +32,9 @@
>
>  void local_flush_tlb_all(void);
>  void local_flush_tlb_mm(struct mm_struct *mm);
> -void local_flush_tlb_page(struct vm_area_struct *vma,
> +void local_flush_tlb_page(struct mm_area *vma,
>  		unsigned long page);
> -void local_flush_tlb_range(struct vm_area_struct *vma,
> +void local_flush_tlb_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end);
>  void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
> @@ -42,8 +42,8 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
>  void flush_tlb_all(void);
>  void flush_tlb_mm(struct mm_struct *);
> -void flush_tlb_page(struct vm_area_struct *, unsigned long);
> -void flush_tlb_range(struct vm_area_struct *, unsigned long,
> +void flush_tlb_page(struct mm_area *, unsigned long);
> +void flush_tlb_range(struct mm_area *, unsigned long,
>  		unsigned long);
>  void flush_tlb_kernel_range(unsigned long start, unsigned long end);
>
> diff --git a/arch/xtensa/kernel/pci.c b/arch/xtensa/kernel/pci.c
> index 62c900e400d6..81f6d62f9bff 100644
> --- a/arch/xtensa/kernel/pci.c
> +++ b/arch/xtensa/kernel/pci.c
> @@ -71,7 +71,7 @@ void pcibios_fixup_bus(struct pci_bus *bus)
>   *  -- paulus.
>   */
>
> -int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma)
> +int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma)
>  {
>  	struct pci_controller *pci_ctrl = (struct pci_controller*) pdev->sysdata;
>  	resource_size_t ioaddr = pci_resource_start(pdev, bar);
> diff --git a/arch/xtensa/kernel/smp.c b/arch/xtensa/kernel/smp.c
> index 94a23f100726..66c0c20799ef 100644
> --- a/arch/xtensa/kernel/smp.c
> +++ b/arch/xtensa/kernel/smp.c
> @@ -468,7 +468,7 @@ int setup_profiling_timer(unsigned int multiplier)
>  /* TLB flush functions */
>
>  struct flush_data {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr1;
>  	unsigned long addr2;
>  };
> @@ -499,7 +499,7 @@ static void ipi_flush_tlb_page(void *arg)
>  	local_flush_tlb_page(fd->vma, fd->addr1);
>  }
>
> -void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
> +void flush_tlb_page(struct mm_area *vma, unsigned long addr)
>  {
>  	struct flush_data fd = {
>  		.vma = vma,
> @@ -514,7 +514,7 @@ static void ipi_flush_tlb_range(void *arg)
>  	local_flush_tlb_range(fd->vma, fd->addr1, fd->addr2);
>  }
>
> -void flush_tlb_range(struct vm_area_struct *vma,
> +void flush_tlb_range(struct mm_area *vma,
>  		     unsigned long start, unsigned long end)
>  {
>  	struct flush_data fd = {
> @@ -558,7 +558,7 @@ static void ipi_flush_cache_page(void *arg)
>  	local_flush_cache_page(fd->vma, fd->addr1, fd->addr2);
>  }
>
> -void flush_cache_page(struct vm_area_struct *vma,
> +void flush_cache_page(struct mm_area *vma,
>  		     unsigned long address, unsigned long pfn)
>  {
>  	struct flush_data fd = {
> @@ -575,7 +575,7 @@ static void ipi_flush_cache_range(void *arg)
>  	local_flush_cache_range(fd->vma, fd->addr1, fd->addr2);
>  }
>
> -void flush_cache_range(struct vm_area_struct *vma,
> +void flush_cache_range(struct mm_area *vma,
>  		     unsigned long start, unsigned long end)
>  {
>  	struct flush_data fd = {
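
On the xtensa SMP hunks generally: each wrapper packs its arguments into a
struct flush_data and broadcasts an IPI. A generic sketch of that shape (not
the file's exact body, which the hunk context cuts off):

	struct flush_data fd = {
		.vma = vma,		/* now a struct mm_area * */
		.addr1 = addr,
	};
	/* run the local flush on every CPU; wait == 1 blocks until done */
	on_each_cpu(ipi_flush_tlb_page, &fd, 1);
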
> diff --git a/arch/xtensa/kernel/syscall.c b/arch/xtensa/kernel/syscall.c
> index dc54f854c2f5..9dd4ee487337 100644
> --- a/arch/xtensa/kernel/syscall.c
> +++ b/arch/xtensa/kernel/syscall.c
> @@ -58,7 +58,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
>  		unsigned long len, unsigned long pgoff, unsigned long flags,
>  		vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vmm;
> +	struct mm_area *vmm;
>  	struct vma_iterator vmi;
>
>  	if (flags & MAP_FIXED) {
> diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c
> index 23be0e7516ce..b1f503c39d58 100644
> --- a/arch/xtensa/mm/cache.c
> +++ b/arch/xtensa/mm/cache.c
> @@ -100,7 +100,7 @@ void clear_user_highpage(struct page *page, unsigned long vaddr)
>  EXPORT_SYMBOL(clear_user_highpage);
>
>  void copy_user_highpage(struct page *dst, struct page *src,
> -			unsigned long vaddr, struct vm_area_struct *vma)
> +			unsigned long vaddr, struct mm_area *vma)
>  {
>  	struct folio *folio = page_folio(dst);
>  	unsigned long dst_paddr, src_paddr;
> @@ -181,7 +181,7 @@ EXPORT_SYMBOL(flush_dcache_folio);
>   * For now, flush the whole cache. FIXME??
>   */
>
> -void local_flush_cache_range(struct vm_area_struct *vma,
> +void local_flush_cache_range(struct mm_area *vma,
>  		       unsigned long start, unsigned long end)
>  {
>  	__flush_invalidate_dcache_all();
> @@ -196,7 +196,7 @@ EXPORT_SYMBOL(local_flush_cache_range);
>   * alias versions of the cache flush functions.
>   */
>
> -void local_flush_cache_page(struct vm_area_struct *vma, unsigned long address,
> +void local_flush_cache_page(struct mm_area *vma, unsigned long address,
>  		      unsigned long pfn)
>  {
>  	/* Note that we have to use the 'alias' address to avoid multi-hit */
> @@ -213,7 +213,7 @@ EXPORT_SYMBOL(local_flush_cache_page);
>
>  #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */
>
> -void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
> +void update_mmu_cache_range(struct vm_fault *vmf, struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep, unsigned int nr)
>  {
>  	unsigned long pfn = pte_pfn(*ptep);
> @@ -270,7 +270,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
>
>  #if (DCACHE_WAY_SIZE > PAGE_SIZE)
>
> -void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
> +void copy_to_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long vaddr, void *dst, const void *src,
>  		unsigned long len)
>  {
> @@ -310,7 +310,7 @@ void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
>  	}
>  }
>
> -extern void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
> +extern void copy_from_user_page(struct mm_area *vma, struct page *page,
>  		unsigned long vaddr, void *dst, const void *src,
>  		unsigned long len)
>  {
> diff --git a/arch/xtensa/mm/fault.c b/arch/xtensa/mm/fault.c
> index 16e11b6f6f78..02d6bcea445d 100644
> --- a/arch/xtensa/mm/fault.c
> +++ b/arch/xtensa/mm/fault.c
> @@ -87,7 +87,7 @@ static void vmalloc_fault(struct pt_regs *regs, unsigned int address)
>
>  void do_page_fault(struct pt_regs *regs)
>  {
> -	struct vm_area_struct * vma;
> +	struct mm_area * vma;
>  	struct mm_struct *mm = current->mm;
>  	unsigned int exccause = regs->exccause;
>  	unsigned int address = regs->excvaddr;
> diff --git a/arch/xtensa/mm/tlb.c b/arch/xtensa/mm/tlb.c
> index 0a1a815dc796..b8fcadd0460a 100644
> --- a/arch/xtensa/mm/tlb.c
> +++ b/arch/xtensa/mm/tlb.c
> @@ -86,7 +86,7 @@ void local_flush_tlb_mm(struct mm_struct *mm)
>  # define _TLB_ENTRIES _DTLB_ENTRIES
>  #endif
>
> -void local_flush_tlb_range(struct vm_area_struct *vma,
> +void local_flush_tlb_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end)
>  {
>  	int cpu = smp_processor_id();
> @@ -124,7 +124,7 @@ void local_flush_tlb_range(struct vm_area_struct *vma,
>  	local_irq_restore(flags);
>  }
>
> -void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
> +void local_flush_tlb_page(struct mm_area *vma, unsigned long page)
>  {
>  	int cpu = smp_processor_id();
>  	struct mm_struct* mm = vma->vm_mm;
> @@ -163,7 +163,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
>  	}
>  }
>
> -void update_mmu_tlb_range(struct vm_area_struct *vma,
> +void update_mmu_tlb_range(struct mm_area *vma,
>  			unsigned long address, pte_t *ptep, unsigned int nr)
>  {
>  	local_flush_tlb_range(vma, address, address + PAGE_SIZE * nr);
> diff --git a/block/fops.c b/block/fops.c
> index be9f1dbea9ce..6b5d92baf4b6 100644
> --- a/block/fops.c
> +++ b/block/fops.c
> @@ -871,7 +871,7 @@ static long blkdev_fallocate(struct file *file, int mode, loff_t start,
>  	return error;
>  }
>
> -static int blkdev_mmap(struct file *file, struct vm_area_struct *vma)
> +static int blkdev_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *bd_inode = bdev_file_inode(file);
>
> diff --git a/drivers/accel/amdxdna/amdxdna_gem.c b/drivers/accel/amdxdna/amdxdna_gem.c
> index 606433d73236..10a1bd65acb0 100644
> --- a/drivers/accel/amdxdna/amdxdna_gem.c
> +++ b/drivers/accel/amdxdna/amdxdna_gem.c
> @@ -159,7 +159,7 @@ static int amdxdna_hmm_register(struct amdxdna_gem_obj *abo, unsigned long addr,
>  }
>
>  static int amdxdna_gem_obj_mmap(struct drm_gem_object *gobj,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	struct amdxdna_gem_obj *abo = to_xdna_obj(gobj);
>  	unsigned long num_pages;
> @@ -192,12 +192,12 @@ static vm_fault_t amdxdna_gem_vm_fault(struct vm_fault *vmf)
>  	return drm_gem_shmem_vm_ops.fault(vmf);
>  }
>
> -static void amdxdna_gem_vm_open(struct vm_area_struct *vma)
> +static void amdxdna_gem_vm_open(struct mm_area *vma)
>  {
>  	drm_gem_shmem_vm_ops.open(vma);
>  }
>
> -static void amdxdna_gem_vm_close(struct vm_area_struct *vma)
> +static void amdxdna_gem_vm_close(struct mm_area *vma)
>  {
>  	struct drm_gem_object *gobj = vma->vm_private_data;
>
> diff --git a/drivers/accel/habanalabs/common/command_buffer.c b/drivers/accel/habanalabs/common/command_buffer.c
> index 0f0d295116e7..6dab3015eb48 100644
> --- a/drivers/accel/habanalabs/common/command_buffer.c
> +++ b/drivers/accel/habanalabs/common/command_buffer.c
> @@ -247,7 +247,7 @@ static int hl_cb_mmap_mem_alloc(struct hl_mmap_mem_buf *buf, gfp_t gfp, void *ar
>  }
>
>  static int hl_cb_mmap(struct hl_mmap_mem_buf *buf,
> -				      struct vm_area_struct *vma, void *args)
> +				      struct mm_area *vma, void *args)
>  {
>  	struct hl_cb *cb = buf->private;
>
> diff --git a/drivers/accel/habanalabs/common/device.c b/drivers/accel/habanalabs/common/device.c
> index 68eebed3b050..b86d048f3954 100644
> --- a/drivers/accel/habanalabs/common/device.c
> +++ b/drivers/accel/habanalabs/common/device.c
> @@ -647,7 +647,7 @@ static int hl_device_release_ctrl(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>
> -static int __hl_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
> +static int __hl_mmap(struct hl_fpriv *hpriv, struct mm_area *vma)
>  {
>  	struct hl_device *hdev = hpriv->hdev;
>  	unsigned long vm_pgoff;
> @@ -675,12 +675,12 @@ static int __hl_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
>   * hl_mmap - mmap function for habanalabs device
>   *
>   * @*filp: pointer to file structure
> - * @*vma: pointer to vm_area_struct of the process
> + * @*vma: pointer to mm_area of the process
>   *
>   * Called when process does an mmap on habanalabs device. Call the relevant mmap
>   * function at the end of the common code.
>   */
> -int hl_mmap(struct file *filp, struct vm_area_struct *vma)
> +int hl_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct drm_file *file_priv = filp->private_data;
>  	struct hl_fpriv *hpriv = file_priv->driver_priv;
> diff --git a/drivers/accel/habanalabs/common/habanalabs.h b/drivers/accel/habanalabs/common/habanalabs.h
> index 6f27ce4fa01b..2cb705768786 100644
> --- a/drivers/accel/habanalabs/common/habanalabs.h
> +++ b/drivers/accel/habanalabs/common/habanalabs.h
> @@ -45,7 +45,7 @@ struct hl_fpriv;
>   * bits[63:59] - Encode mmap type
>   * bits[45:0]  - mmap offset value
>   *
> - * NOTE: struct vm_area_struct.vm_pgoff uses offset in pages. Hence, these
> + * NOTE: struct mm_area.vm_pgoff uses offset in pages. Hence, these
>   *  defines are w.r.t to PAGE_SIZE
>   */
>  #define HL_MMAP_TYPE_SHIFT		(59 - PAGE_SHIFT)
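
To spell out the arithmetic behind that (59 - PAGE_SHIFT): userspace passes
mmap() a byte offset with the type encoded in bits [63:59], but the kernel
stores vma->vm_pgoff = offset >> PAGE_SHIFT. With 4 KiB pages (PAGE_SHIFT ==
12) the type field therefore sits at vm_pgoff bits [51:47]:

	/* illustrative only; the driver extracts these bits with its own
	 * mask macros, but the bit position is the point here */
	type_bits = vma->vm_pgoff >> HL_MMAP_TYPE_SHIFT;	/* 59 - 12 = 47 */
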
> @@ -931,7 +931,7 @@ struct hl_mmap_mem_buf_behavior {
>  	u64 mem_id;
>
>  	int (*alloc)(struct hl_mmap_mem_buf *buf, gfp_t gfp, void *args);
> -	int (*mmap)(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, void *args);
> +	int (*mmap)(struct hl_mmap_mem_buf *buf, struct mm_area *vma, void *args);
>  	void (*release)(struct hl_mmap_mem_buf *buf);
>  };
>
> @@ -1650,7 +1650,7 @@ struct hl_asic_funcs {
>  	void (*halt_engines)(struct hl_device *hdev, bool hard_reset, bool fw_reset);
>  	int (*suspend)(struct hl_device *hdev);
>  	int (*resume)(struct hl_device *hdev);
> -	int (*mmap)(struct hl_device *hdev, struct vm_area_struct *vma,
> +	int (*mmap)(struct hl_device *hdev, struct mm_area *vma,
>  			void *cpu_addr, dma_addr_t dma_addr, size_t size);
>  	void (*ring_doorbell)(struct hl_device *hdev, u32 hw_queue_id, u32 pi);
>  	void (*pqe_write)(struct hl_device *hdev, __le64 *pqe,
> @@ -1745,7 +1745,7 @@ struct hl_asic_funcs {
>  	void (*ack_protection_bits_errors)(struct hl_device *hdev);
>  	int (*get_hw_block_id)(struct hl_device *hdev, u64 block_addr,
>  				u32 *block_size, u32 *block_id);
> -	int (*hw_block_mmap)(struct hl_device *hdev, struct vm_area_struct *vma,
> +	int (*hw_block_mmap)(struct hl_device *hdev, struct mm_area *vma,
>  			u32 block_id, u32 block_size);
>  	void (*enable_events_from_fw)(struct hl_device *hdev);
>  	int (*ack_mmu_errors)(struct hl_device *hdev, u64 mmu_cap_mask);
> @@ -3733,7 +3733,7 @@ int hl_access_cfg_region(struct hl_device *hdev, u64 addr, u64 *val,
>  int hl_access_dev_mem(struct hl_device *hdev, enum pci_region region_type,
>  			u64 addr, u64 *val, enum debugfs_access_type acc_type);
>
> -int hl_mmap(struct file *filp, struct vm_area_struct *vma);
> +int hl_mmap(struct file *filp, struct mm_area *vma);
>
>  int hl_device_open(struct drm_device *drm, struct drm_file *file_priv);
>  void hl_device_release(struct drm_device *ddev, struct drm_file *file_priv);
> @@ -3819,7 +3819,7 @@ int hl_cb_create(struct hl_device *hdev, struct hl_mem_mgr *mmg,
>  			struct hl_ctx *ctx, u32 cb_size, bool internal_cb,
>  			bool map_cb, u64 *handle);
>  int hl_cb_destroy(struct hl_mem_mgr *mmg, u64 cb_handle);
> -int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma);
> +int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct mm_area *vma);
>  struct hl_cb *hl_cb_get(struct hl_mem_mgr *mmg, u64 handle);
>  void hl_cb_put(struct hl_cb *cb);
>  struct hl_cb *hl_cb_kernel_create(struct hl_device *hdev, u32 cb_size,
> @@ -4063,7 +4063,7 @@ const char *hl_sync_engine_to_string(enum hl_sync_engine_type engine_type);
>  void hl_mem_mgr_init(struct device *dev, struct hl_mem_mgr *mmg);
>  void hl_mem_mgr_fini(struct hl_mem_mgr *mmg, struct hl_mem_mgr_fini_stats *stats);
>  void hl_mem_mgr_idr_destroy(struct hl_mem_mgr *mmg);
> -int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma,
> +int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct mm_area *vma,
>  		    void *args);
>  struct hl_mmap_mem_buf *hl_mmap_mem_buf_get(struct hl_mem_mgr *mmg,
>  						   u64 handle);
> diff --git a/drivers/accel/habanalabs/common/memory.c b/drivers/accel/habanalabs/common/memory.c
> index 601fdbe70179..4688d24b34df 100644
> --- a/drivers/accel/habanalabs/common/memory.c
> +++ b/drivers/accel/habanalabs/common/memory.c
> @@ -1424,7 +1424,7 @@ static int map_block(struct hl_device *hdev, u64 address, u64 *handle, u32 *size
>  	return 0;
>  }
>
> -static void hw_block_vm_close(struct vm_area_struct *vma)
> +static void hw_block_vm_close(struct mm_area *vma)
>  {
>  	struct hl_vm_hw_block_list_node *lnode =
>  		(struct hl_vm_hw_block_list_node *) vma->vm_private_data;
> @@ -1452,12 +1452,12 @@ static const struct vm_operations_struct hw_block_vm_ops = {
>  /**
>   * hl_hw_block_mmap() - mmap a hw block to user.
>   * @hpriv: pointer to the private data of the fd
> - * @vma: pointer to vm_area_struct of the process
> + * @vma: pointer to mm_area of the process
>   *
>   * Driver increments context reference for every HW block mapped in order
>   * to prevent user from closing FD without unmapping first
>   */
> -int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
> +int hl_hw_block_mmap(struct hl_fpriv *hpriv, struct mm_area *vma)
>  {
>  	struct hl_vm_hw_block_list_node *lnode;
>  	struct hl_device *hdev = hpriv->hdev;
> @@ -2103,7 +2103,7 @@ static void ts_buff_release(struct hl_mmap_mem_buf *buf)
>  	kfree(ts_buff);
>  }
>
> -static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct vm_area_struct *vma, void *args)
> +static int hl_ts_mmap(struct hl_mmap_mem_buf *buf, struct mm_area *vma, void *args)
>  {
>  	struct hl_ts_buff *ts_buff = buf->private;
>
> diff --git a/drivers/accel/habanalabs/common/memory_mgr.c b/drivers/accel/habanalabs/common/memory_mgr.c
> index 99cd83139d46..ea06e092b341 100644
> --- a/drivers/accel/habanalabs/common/memory_mgr.c
> +++ b/drivers/accel/habanalabs/common/memory_mgr.c
> @@ -196,7 +196,7 @@ hl_mmap_mem_buf_alloc(struct hl_mem_mgr *mmg,
>   *
>   * Put the memory buffer if it is no longer mapped.
>   */
> -static void hl_mmap_mem_buf_vm_close(struct vm_area_struct *vma)
> +static void hl_mmap_mem_buf_vm_close(struct mm_area *vma)
>  {
>  	struct hl_mmap_mem_buf *buf =
>  		(struct hl_mmap_mem_buf *)vma->vm_private_data;
> @@ -227,7 +227,7 @@ static const struct vm_operations_struct hl_mmap_mem_buf_vm_ops = {
>   *
>   * Map the buffer specified by the vma->vm_pgoff to the given vma.
>   */
> -int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct vm_area_struct *vma,
> +int hl_mem_mgr_mmap(struct hl_mem_mgr *mmg, struct mm_area *vma,
>  		    void *args)
>  {
>  	struct hl_mmap_mem_buf *buf;
> diff --git a/drivers/accel/habanalabs/gaudi/gaudi.c b/drivers/accel/habanalabs/gaudi/gaudi.c
> index fa893a9b826e..a52647a1b640 100644
> --- a/drivers/accel/habanalabs/gaudi/gaudi.c
> +++ b/drivers/accel/habanalabs/gaudi/gaudi.c
> @@ -4160,7 +4160,7 @@ static int gaudi_resume(struct hl_device *hdev)
>  	return gaudi_init_iatu(hdev);
>  }
>
> -static int gaudi_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
> +static int gaudi_mmap(struct hl_device *hdev, struct mm_area *vma,
>  			void *cpu_addr, dma_addr_t dma_addr, size_t size)
>  {
>  	int rc;
> @@ -8769,7 +8769,7 @@ static int gaudi_get_hw_block_id(struct hl_device *hdev, u64 block_addr,
>  }
>
>  static int gaudi_block_mmap(struct hl_device *hdev,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				u32 block_id, u32 block_size)
>  {
>  	return -EPERM;
> diff --git a/drivers/accel/habanalabs/gaudi2/gaudi2.c b/drivers/accel/habanalabs/gaudi2/gaudi2.c
> index a38b88baadf2..12ef2bdebe5d 100644
> --- a/drivers/accel/habanalabs/gaudi2/gaudi2.c
> +++ b/drivers/accel/habanalabs/gaudi2/gaudi2.c
> @@ -6475,7 +6475,7 @@ static int gaudi2_resume(struct hl_device *hdev)
>  	return gaudi2_init_iatu(hdev);
>  }
>
> -static int gaudi2_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
> +static int gaudi2_mmap(struct hl_device *hdev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size)
>  {
>  	int rc;
> @@ -11238,7 +11238,7 @@ static int gaudi2_get_hw_block_id(struct hl_device *hdev, u64 block_addr,
>  	return -EINVAL;
>  }
>
> -static int gaudi2_block_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
> +static int gaudi2_block_mmap(struct hl_device *hdev, struct mm_area *vma,
>  			u32 block_id, u32 block_size)
>  {
>  	struct gaudi2_device *gaudi2 = hdev->asic_specific;
> diff --git a/drivers/accel/habanalabs/goya/goya.c b/drivers/accel/habanalabs/goya/goya.c
> index 84768e306269..9319d29bb802 100644
> --- a/drivers/accel/habanalabs/goya/goya.c
> +++ b/drivers/accel/habanalabs/goya/goya.c
> @@ -2869,7 +2869,7 @@ int goya_resume(struct hl_device *hdev)
>  	return goya_init_iatu(hdev);
>  }
>
> -static int goya_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
> +static int goya_mmap(struct hl_device *hdev, struct mm_area *vma,
>  			void *cpu_addr, dma_addr_t dma_addr, size_t size)
>  {
>  	int rc;
> @@ -5313,7 +5313,7 @@ static int goya_get_hw_block_id(struct hl_device *hdev, u64 block_addr,
>  	return -EPERM;
>  }
>
> -static int goya_block_mmap(struct hl_device *hdev, struct vm_area_struct *vma,
> +static int goya_block_mmap(struct hl_device *hdev, struct mm_area *vma,
>  				u32 block_id, u32 block_size)
>  {
>  	return -EPERM;
> diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
> index 43aba57b48f0..331e4683f42a 100644
> --- a/drivers/accel/qaic/qaic_data.c
> +++ b/drivers/accel/qaic/qaic_data.c
> @@ -602,7 +602,7 @@ static const struct vm_operations_struct drm_vm_ops = {
>  	.close = drm_gem_vm_close,
>  };
>
> -static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct qaic_bo *bo = to_qaic_bo(obj);
>  	unsigned long offset = 0;
> diff --git a/drivers/acpi/pfr_telemetry.c b/drivers/acpi/pfr_telemetry.c
> index 32bdf8cbe8f2..4222c75ced8e 100644
> --- a/drivers/acpi/pfr_telemetry.c
> +++ b/drivers/acpi/pfr_telemetry.c
> @@ -295,7 +295,7 @@ static long pfrt_log_ioctl(struct file *file, unsigned int cmd, unsigned long ar
>  }
>
>  static int
> -pfrt_log_mmap(struct file *file, struct vm_area_struct *vma)
> +pfrt_log_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct pfrt_log_device *pfrt_log_dev;
>  	struct pfrt_log_data_info info;
> diff --git a/drivers/android/binder.c b/drivers/android/binder.c
> index 76052006bd87..a674ff1ab9a5 100644
> --- a/drivers/android/binder.c
> +++ b/drivers/android/binder.c
> @@ -5935,7 +5935,7 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>  	return ret;
>  }
>
> -static void binder_vma_open(struct vm_area_struct *vma)
> +static void binder_vma_open(struct mm_area *vma)
>  {
>  	struct binder_proc *proc = vma->vm_private_data;
>
> @@ -5946,7 +5946,7 @@ static void binder_vma_open(struct vm_area_struct *vma)
>  		     (unsigned long)pgprot_val(vma->vm_page_prot));
>  }
>
> -static void binder_vma_close(struct vm_area_struct *vma)
> +static void binder_vma_close(struct mm_area *vma)
>  {
>  	struct binder_proc *proc = vma->vm_private_data;
>
> @@ -5969,7 +5969,7 @@ static const struct vm_operations_struct binder_vm_ops = {
>  	.fault = binder_vm_fault,
>  };
>
> -static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int binder_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct binder_proc *proc = filp->private_data;
>
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index fcfaf1b899c8..95d8a0def3c5 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -258,7 +258,7 @@ static int binder_page_insert(struct binder_alloc *alloc,
>  			      struct page *page)
>  {
>  	struct mm_struct *mm = alloc->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret = -ESRCH;
>
>  	/* attempt per-vma lock first */
> @@ -892,7 +892,7 @@ void binder_alloc_free_buf(struct binder_alloc *alloc,
>   *      -ENOMEM = failed to map memory to given address space
>   */
>  int binder_alloc_mmap_handler(struct binder_alloc *alloc,
> -			      struct vm_area_struct *vma)
> +			      struct mm_area *vma)
>  {
>  	struct binder_buffer *buffer;
>  	const char *failure_string;
> @@ -1140,7 +1140,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
>  	struct binder_shrinker_mdata *mdata = container_of(item, typeof(*mdata), lru);
>  	struct binder_alloc *alloc = mdata->alloc;
>  	struct mm_struct *mm = alloc->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct page *page_to_free;
>  	unsigned long page_addr;
>  	int mm_locked = 0;
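
The "attempt per-vma lock first" comment in binder_page_insert() is the newer
locking pattern this rename has to chase through, so a hedged sketch of it may
help reviewers (assumes CONFIG_PER_VMA_LOCK; my_insert_page() is a
hypothetical stand-in for the real work):

	struct mm_area *vma;
	int ret = -ESRCH;

	/* fast path: per-VMA read lock, no mmap_lock required */
	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		ret = my_insert_page(vma, addr, page);	/* hypothetical */
		vma_end_read(vma);
		return ret;
	}

	/* slow path: fall back to the address-space-wide read lock */
	mmap_read_lock(mm);
	vma = vma_lookup(mm, addr);
	if (vma)
		ret = my_insert_page(vma, addr, page);	/* hypothetical */
	mmap_read_unlock(mm);
	return ret;
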
> diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
> index feecd7414241..71474a96c9dd 100644
> --- a/drivers/android/binder_alloc.h
> +++ b/drivers/android/binder_alloc.h
> @@ -143,7 +143,7 @@ binder_alloc_prepare_to_free(struct binder_alloc *alloc,
>  void binder_alloc_free_buf(struct binder_alloc *alloc,
>  			   struct binder_buffer *buffer);
>  int binder_alloc_mmap_handler(struct binder_alloc *alloc,
> -			      struct vm_area_struct *vma);
> +			      struct mm_area *vma);
>  void binder_alloc_deferred_release(struct binder_alloc *alloc);
>  int binder_alloc_get_allocated_count(struct binder_alloc *alloc);
>  void binder_alloc_print_allocated(struct seq_file *m,
> diff --git a/drivers/auxdisplay/cfag12864bfb.c b/drivers/auxdisplay/cfag12864bfb.c
> index 24baf6b2c587..c8953939f33a 100644
> --- a/drivers/auxdisplay/cfag12864bfb.c
> +++ b/drivers/auxdisplay/cfag12864bfb.c
> @@ -47,7 +47,7 @@ static const struct fb_var_screeninfo cfag12864bfb_var = {
>  	.vmode = FB_VMODE_NONINTERLACED,
>  };
>
> -static int cfag12864bfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int cfag12864bfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct page *pages = virt_to_page(cfag12864b_buffer);
>
> diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c
> index 0b8ba754b343..835db2ac68c3 100644
> --- a/drivers/auxdisplay/ht16k33.c
> +++ b/drivers/auxdisplay/ht16k33.c
> @@ -303,7 +303,7 @@ static int ht16k33_blank(int blank, struct fb_info *info)
>  	return 0;
>  }
>
> -static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int ht16k33_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct ht16k33_priv *priv = info->par;
>  	struct page *pages = virt_to_page(priv->fbdev.buffer);
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 2fd05c1bd30b..55cfd9965a5d 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -1467,7 +1467,7 @@ static int ublk_ch_release(struct inode *inode, struct file *filp)
>  }
>
>  /* map pre-allocated per-queue cmd buffer to ublksrv daemon */
> -static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int ublk_ch_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct ublk_device *ub = filp->private_data;
>  	size_t sz = vma->vm_end - vma->vm_start;
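
ublk_ch_mmap() also shows the validation boilerplate these handlers
repeat: size and offset come out of the renamed struct before anything
is mapped. A sketch for a vmalloc_user()'ed per-queue buffer
(FOO_CMD_BUF_SIZE and the foo_queue layout are made up):

static int foo_ch_mmap(struct file *filp, struct mm_area *vma)
{
	struct foo_queue *q = filp->private_data;
	size_t sz = vma->vm_end - vma->vm_start;

	if (!(vma->vm_flags & VM_SHARED))
		return -EINVAL;
	if (vma->vm_pgoff || sz > FOO_CMD_BUF_SIZE)
		return -EINVAL;

	/* remap_vmalloc_range() re-checks the size against the area */
	return remap_vmalloc_range(vma, q->cmd_buf, 0);
}
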
> diff --git a/drivers/cdx/cdx.c b/drivers/cdx/cdx.c
> index 092306ca2541..f3f114c29555 100644
> --- a/drivers/cdx/cdx.c
> +++ b/drivers/cdx/cdx.c
> @@ -708,7 +708,7 @@ static const struct vm_operations_struct cdx_phys_vm_ops = {
>   *      this API is registered as a callback.
>   * @kobj: kobject for mapping
>   * @attr: struct bin_attribute for the file being mapped
> - * @vma: struct vm_area_struct passed into the mmap
> + * @vma: struct mm_area passed into the mmap
>   *
>   * Use the regular CDX mapping routines to map a CDX resource into userspace.
>   *
> @@ -716,7 +716,7 @@ static const struct vm_operations_struct cdx_phys_vm_ops = {
>   */
>  static int cdx_mmap_resource(struct file *fp, struct kobject *kobj,
>  			     const struct bin_attribute *attr,
> -			     struct vm_area_struct *vma)
> +			     struct mm_area *vma)
>  {
>  	struct cdx_device *cdx_dev = to_cdx_device(kobj_to_dev(kobj));
>  	int num = (unsigned long)attr->private;
> diff --git a/drivers/char/bsr.c b/drivers/char/bsr.c
> index 837109ef6766..005cbf590708 100644
> --- a/drivers/char/bsr.c
> +++ b/drivers/char/bsr.c
> @@ -111,7 +111,7 @@ static const struct class bsr_class = {
>  	.dev_groups	= bsr_dev_groups,
>  };
>
> -static int bsr_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int bsr_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	unsigned long size   = vma->vm_end - vma->vm_start;
>  	struct bsr_dev *dev = filp->private_data;
> diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c
> index e110857824fc..af1076b99117 100644
> --- a/drivers/char/hpet.c
> +++ b/drivers/char/hpet.c
> @@ -354,7 +354,7 @@ static __init int hpet_mmap_enable(char *str)
>  }
>  __setup("hpet_mmap=", hpet_mmap_enable);
>
> -static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
> +static int hpet_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct hpet_dev *devp;
>  	unsigned long addr;
> @@ -372,7 +372,7 @@ static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
>  	return vm_iomap_memory(vma, addr, PAGE_SIZE);
>  }
>  #else
> -static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
> +static int hpet_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return -ENOSYS;
>  }
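
Side note for anyone cargo-culting from hpet: vm_iomap_memory(), used
in the hunk above, is the friendlier helper for mapping MMIO, since it
does the offset/size sanity checks that raw remap_pfn_range() callers
must open-code. Sketch (foo_regs_phys and FOO_REGS_LEN are
placeholders):

static int foo_iomap_mmap(struct file *file, struct mm_area *vma)
{
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return vm_iomap_memory(vma, foo_regs_phys, FOO_REGS_LEN);
}
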
> diff --git a/drivers/char/mem.c b/drivers/char/mem.c
> index 169eed162a7f..350af6fa120a 100644
> --- a/drivers/char/mem.c
> +++ b/drivers/char/mem.c
> @@ -322,13 +322,13 @@ static unsigned zero_mmap_capabilities(struct file *file)
>  }
>
>  /* can't do an in-place private mapping if there's no MMU */
> -static inline int private_mapping_ok(struct vm_area_struct *vma)
> +static inline int private_mapping_ok(struct mm_area *vma)
>  {
>  	return is_nommu_shared_mapping(vma->vm_flags);
>  }
>  #else
>
> -static inline int private_mapping_ok(struct vm_area_struct *vma)
> +static inline int private_mapping_ok(struct mm_area *vma)
>  {
>  	return 1;
>  }
> @@ -340,7 +340,7 @@ static const struct vm_operations_struct mmap_mem_ops = {
>  #endif
>  };
>
> -static int mmap_mem(struct file *file, struct vm_area_struct *vma)
> +static int mmap_mem(struct file *file, struct mm_area *vma)
>  {
>  	size_t size = vma->vm_end - vma->vm_start;
>  	phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
> @@ -519,7 +519,7 @@ static ssize_t read_zero(struct file *file, char __user *buf,
>  	return cleared;
>  }
>
> -static int mmap_zero(struct file *file, struct vm_area_struct *vma)
> +static int mmap_zero(struct file *file, struct mm_area *vma)
>  {
>  #ifndef CONFIG_MMU
>  	return -ENOSYS;
> diff --git a/drivers/char/uv_mmtimer.c b/drivers/char/uv_mmtimer.c
> index 956ebe2080a5..3a8a210592db 100644
> --- a/drivers/char/uv_mmtimer.c
> +++ b/drivers/char/uv_mmtimer.c
> @@ -40,7 +40,7 @@ MODULE_LICENSE("GPL");
>
>  static long uv_mmtimer_ioctl(struct file *file, unsigned int cmd,
>  						unsigned long arg);
> -static int uv_mmtimer_mmap(struct file *file, struct vm_area_struct *vma);
> +static int uv_mmtimer_mmap(struct file *file, struct mm_area *vma);
>
>  /*
>   * Period in femtoseconds (10^-15 s)
> @@ -144,7 +144,7 @@ static long uv_mmtimer_ioctl(struct file *file, unsigned int cmd,
>   * Calls remap_pfn_range() to map the clock's registers into
>   * the calling process' address space.
>   */
> -static int uv_mmtimer_mmap(struct file *file, struct vm_area_struct *vma)
> +static int uv_mmtimer_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned long uv_mmtimer_addr;
>
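
...and uv_mmtimer is the raw counterpart: the kernel-doc above already
names remap_pfn_range(), which maps a physical range wholesale at mmap
time instead of on fault. Roughly (foo_phys stands in for the device
register base; single page only):

static int foo_reg_mmap(struct file *file, struct mm_area *vma)
{
	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
		return -EINVAL;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
	return remap_pfn_range(vma, vma->vm_start, foo_phys >> PAGE_SHIFT,
			       PAGE_SIZE, vma->vm_page_prot);
}
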
> diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
> index b9df9b19d4bd..9e3ef27295ec 100644
> --- a/drivers/comedi/comedi_fops.c
> +++ b/drivers/comedi/comedi_fops.c
> @@ -2282,7 +2282,7 @@ static long comedi_unlocked_ioctl(struct file *file, unsigned int cmd,
>  	return rc;
>  }
>
> -static void comedi_vm_open(struct vm_area_struct *area)
> +static void comedi_vm_open(struct mm_area *area)
>  {
>  	struct comedi_buf_map *bm;
>
> @@ -2290,7 +2290,7 @@ static void comedi_vm_open(struct vm_area_struct *area)
>  	comedi_buf_map_get(bm);
>  }
>
> -static void comedi_vm_close(struct vm_area_struct *area)
> +static void comedi_vm_close(struct mm_area *area)
>  {
>  	struct comedi_buf_map *bm;
>
> @@ -2298,7 +2298,7 @@ static void comedi_vm_close(struct vm_area_struct *area)
>  	comedi_buf_map_put(bm);
>  }
>
> -static int comedi_vm_access(struct vm_area_struct *vma, unsigned long addr,
> +static int comedi_vm_access(struct mm_area *vma, unsigned long addr,
>  			    void *buf, int len, int write)
>  {
>  	struct comedi_buf_map *bm = vma->vm_private_data;
> @@ -2318,7 +2318,7 @@ static const struct vm_operations_struct comedi_vm_ops = {
>  	.access = comedi_vm_access,
>  };
>
> -static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
> +static int comedi_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct comedi_file *cfp = file->private_data;
>  	struct comedi_device *dev = cfp->dev;
> diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
> index d3f5d108b898..c9d9b977c07a 100644
> --- a/drivers/crypto/hisilicon/qm.c
> +++ b/drivers/crypto/hisilicon/qm.c
> @@ -2454,7 +2454,7 @@ static void hisi_qm_uacce_put_queue(struct uacce_queue *q)
>
>  /* map sq/cq/doorbell to user space */
>  static int hisi_qm_uacce_mmap(struct uacce_queue *q,
> -			      struct vm_area_struct *vma,
> +			      struct mm_area *vma,
>  			      struct uacce_qfile_region *qfr)
>  {
>  	struct hisi_qp *qp = q->priv;
> diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> index 328231cfb028..6a5724727688 100644
> --- a/drivers/dax/device.c
> +++ b/drivers/dax/device.c
> @@ -14,7 +14,7 @@
>  #include "dax-private.h"
>  #include "bus.h"
>
> -static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
> +static int check_vma(struct dev_dax *dev_dax, struct mm_area *vma,
>  		const char *func)
>  {
>  	struct device *dev = &dev_dax->dev;
> @@ -261,7 +261,7 @@ static vm_fault_t dev_dax_fault(struct vm_fault *vmf)
>  	return dev_dax_huge_fault(vmf, 0);
>  }
>
> -static int dev_dax_may_split(struct vm_area_struct *vma, unsigned long addr)
> +static int dev_dax_may_split(struct mm_area *vma, unsigned long addr)
>  {
>  	struct file *filp = vma->vm_file;
>  	struct dev_dax *dev_dax = filp->private_data;
> @@ -271,7 +271,7 @@ static int dev_dax_may_split(struct vm_area_struct *vma, unsigned long addr)
>  	return 0;
>  }
>
> -static unsigned long dev_dax_pagesize(struct vm_area_struct *vma)
> +static unsigned long dev_dax_pagesize(struct mm_area *vma)
>  {
>  	struct file *filp = vma->vm_file;
>  	struct dev_dax *dev_dax = filp->private_data;
> @@ -286,7 +286,7 @@ static const struct vm_operations_struct dax_vm_ops = {
>  	.pagesize = dev_dax_pagesize,
>  };
>
> -static int dax_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int dax_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct dev_dax *dev_dax = filp->private_data;
>  	int rc, id;
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index 5baa83b85515..afc92bd59362 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -144,7 +144,7 @@ static struct file_system_type dma_buf_fs_type = {
>  	.kill_sb = kill_anon_super,
>  };
>
> -static int dma_buf_mmap_internal(struct file *file, struct vm_area_struct *vma)
> +static int dma_buf_mmap_internal(struct file *file, struct mm_area *vma)
>  {
>  	struct dma_buf *dmabuf;
>
> @@ -1364,7 +1364,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_move_notify, "DMA_BUF");
>   *
>   *   .. code-block:: c
>   *
> - *     int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *, unsigned long);
> + *     int dma_buf_mmap(struct dma_buf *, struct mm_area *, unsigned long);
>   *
>   *   If the importing subsystem simply provides a special-purpose mmap call to
>   *   set up a mapping in userspace, calling do_mmap with &dma_buf.file will
> @@ -1474,7 +1474,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, "DMA_BUF");
>   *
>   * Can return negative error values, returns 0 on success.
>   */
> -int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
> +int dma_buf_mmap(struct dma_buf *dmabuf, struct mm_area *vma,
>  		 unsigned long pgoff)
>  {
>  	if (WARN_ON(!dmabuf || !vma))
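
The dma-buf kernel-doc in this hunk carries the exporter-facing
signature; on the importer side the call is symmetric. A sketch of an
importer forwarding its own mmap to the exporter (foo_obj is
hypothetical):

static int foo_importer_mmap(struct file *file, struct mm_area *vma)
{
	struct foo_obj *obj = file->private_data;

	/* Re-points the mapping at the exporter's file and invokes
	 * the exporter's mmap op with the adjusted offset. */
	return dma_buf_mmap(obj->dmabuf, vma, 0);
}
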
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index 9512d050563a..17ae7983a93a 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -162,7 +162,7 @@ static int cma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>
>  static vm_fault_t cma_heap_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct cma_heap_buffer *buffer = vma->vm_private_data;
>
>  	if (vmf->pgoff >= buffer->pagecount)
> @@ -175,7 +175,7 @@ static const struct vm_operations_struct dma_heap_vm_ops = {
>  	.fault = cma_heap_vm_fault,
>  };
>
> -static int cma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +static int cma_heap_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
>  {
>  	struct cma_heap_buffer *buffer = dmabuf->priv;
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 26d5dc89ea16..43fd8260f29b 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -192,7 +192,7 @@ static int system_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>  	return 0;
>  }
>
> -static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +static int system_heap_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
>  {
>  	struct system_heap_buffer *buffer = dmabuf->priv;
>  	struct sg_table *table = &buffer->sg_table;
> diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> index e74e36a8ecda..7c3de3568e46 100644
> --- a/drivers/dma-buf/udmabuf.c
> +++ b/drivers/dma-buf/udmabuf.c
> @@ -46,7 +46,7 @@ struct udmabuf {
>
>  static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct udmabuf *ubuf = vma->vm_private_data;
>  	pgoff_t pgoff = vmf->pgoff;
>  	unsigned long addr, pfn;
> @@ -93,7 +93,7 @@ static const struct vm_operations_struct udmabuf_vm_ops = {
>  	.fault = udmabuf_vm_fault,
>  };
>
> -static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
> +static int mmap_udmabuf(struct dma_buf *buf, struct mm_area *vma)
>  {
>  	struct udmabuf *ubuf = buf->priv;
>
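
vmf->vma is the other high-traffic site this rename touches; nearly
every fault handler opens with it, as udmabuf and cma_heap do above.
Typical shape (foo_buffer and its pfn array are invented; the vma is
assumed to have been set up VM_PFNMAP at mmap time):

static vm_fault_t foo_vm_fault(struct vm_fault *vmf)
{
	struct mm_area *vma = vmf->vma;
	struct foo_buffer *buf = vma->vm_private_data;

	if (vmf->pgoff >= buf->pagecount)
		return VM_FAULT_SIGBUS;

	return vmf_insert_pfn(vma, vmf->address, buf->pfns[vmf->pgoff]);
}
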
> diff --git a/drivers/dma/idxd/cdev.c b/drivers/dma/idxd/cdev.c
> index ff94ee892339..2fd71e61d6c8 100644
> --- a/drivers/dma/idxd/cdev.c
> +++ b/drivers/dma/idxd/cdev.c
> @@ -368,7 +368,7 @@ static int idxd_cdev_release(struct inode *node, struct file *filep)
>  	return 0;
>  }
>
> -static int check_vma(struct idxd_wq *wq, struct vm_area_struct *vma,
> +static int check_vma(struct idxd_wq *wq, struct mm_area *vma,
>  		     const char *func)
>  {
>  	struct device *dev = &wq->idxd->pdev->dev;
> @@ -384,7 +384,7 @@ static int check_vma(struct idxd_wq *wq, struct vm_area_struct *vma,
>  	return 0;
>  }
>
> -static int idxd_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int idxd_cdev_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct idxd_user_context *ctx = filp->private_data;
>  	struct idxd_wq *wq = ctx->wq;
> diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
> index bd04980009a4..a8a2ccd8af78 100644
> --- a/drivers/firewire/core-cdev.c
> +++ b/drivers/firewire/core-cdev.c
> @@ -1786,7 +1786,7 @@ static long fw_device_op_ioctl(struct file *file,
>  	return dispatch_ioctl(file->private_data, cmd, (void __user *)arg);
>  }
>
> -static int fw_device_op_mmap(struct file *file, struct vm_area_struct *vma)
> +static int fw_device_op_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct client *client = file->private_data;
>  	unsigned long size;
> diff --git a/drivers/fpga/dfl-afu-main.c b/drivers/fpga/dfl-afu-main.c
> index 3bf8e7338dbe..1b9b86d2ee0f 100644
> --- a/drivers/fpga/dfl-afu-main.c
> +++ b/drivers/fpga/dfl-afu-main.c
> @@ -805,7 +805,7 @@ static const struct vm_operations_struct afu_vma_ops = {
>  #endif
>  };
>
> -static int afu_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int afu_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct platform_device *pdev = filp->private_data;
>  	u64 size = vma->vm_end - vma->vm_start;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 69429df09477..993513183c9c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -272,7 +272,7 @@ static void amdgpu_gem_object_close(struct drm_gem_object *obj,
>  	drm_exec_fini(&exec);
>  }
>
> -static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int amdgpu_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 53b71e9d8076..304a1c09b89c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -700,7 +700,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages,
>  	struct ttm_tt *ttm = bo->tbo.ttm;
>  	struct amdgpu_ttm_tt *gtt = ttm_to_amdgpu_ttm_tt(ttm);
>  	unsigned long start = gtt->userptr;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm;
>  	bool readonly;
>  	int r = 0;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> index 1e9dd00620bf..00a7f935b0a7 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
> @@ -48,7 +48,7 @@
>  static long kfd_ioctl(struct file *, unsigned int, unsigned long);
>  static int kfd_open(struct inode *, struct file *);
>  static int kfd_release(struct inode *, struct file *);
> -static int kfd_mmap(struct file *, struct vm_area_struct *);
> +static int kfd_mmap(struct file *, struct mm_area *);
>
>  static const char kfd_dev_name[] = "kfd";
>
> @@ -3360,7 +3360,7 @@ static long kfd_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>  }
>
>  static int kfd_mmio_mmap(struct kfd_node *dev, struct kfd_process *process,
> -		      struct vm_area_struct *vma)
> +		      struct mm_area *vma)
>  {
>  	phys_addr_t address;
>
> @@ -3393,7 +3393,7 @@ static int kfd_mmio_mmap(struct kfd_node *dev, struct kfd_process *process,
>  }
>
>
> -static int kfd_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int kfd_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct kfd_process *process;
>  	struct kfd_node *dev = NULL;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
> index 05c74887fd6f..cff9e53c009c 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
> @@ -104,7 +104,7 @@ void kfd_doorbell_fini(struct kfd_dev *kfd)
>  }
>
>  int kfd_doorbell_mmap(struct kfd_node *dev, struct kfd_process *process,
> -		      struct vm_area_struct *vma)
> +		      struct mm_area *vma)
>  {
>  	phys_addr_t address;
>  	struct kfd_process_device *pdd;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> index fecdb6794075..8b767a08782a 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> @@ -1063,7 +1063,7 @@ int kfd_wait_on_events(struct kfd_process *p,
>  	return ret;
>  }
>
> -int kfd_event_mmap(struct kfd_process *p, struct vm_area_struct *vma)
> +int kfd_event_mmap(struct kfd_process *p, struct mm_area *vma)
>  {
>  	unsigned long pfn;
>  	struct kfd_signal_page *page;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> index 79251f22b702..86560564d30d 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
> @@ -240,7 +240,7 @@ svm_migrate_addr(struct amdgpu_device *adev, struct page *page)
>  }
>
>  static struct page *
> -svm_migrate_get_sys_page(struct vm_area_struct *vma, unsigned long addr)
> +svm_migrate_get_sys_page(struct mm_area *vma, unsigned long addr)
>  {
>  	struct page *page;
>
> @@ -385,7 +385,7 @@ svm_migrate_copy_to_vram(struct kfd_node *node, struct svm_range *prange,
>
>  static long
>  svm_migrate_vma_to_vram(struct kfd_node *node, struct svm_range *prange,
> -			struct vm_area_struct *vma, uint64_t start,
> +			struct mm_area *vma, uint64_t start,
>  			uint64_t end, uint32_t trigger, uint64_t ttm_res_offset)
>  {
>  	struct kfd_process *p = container_of(prange->svms, struct kfd_process, svms);
> @@ -489,7 +489,7 @@ svm_migrate_ram_to_vram(struct svm_range *prange, uint32_t best_loc,
>  			struct mm_struct *mm, uint32_t trigger)
>  {
>  	unsigned long addr, start, end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	uint64_t ttm_res_offset;
>  	struct kfd_node *node;
>  	unsigned long mpages = 0;
> @@ -668,7 +668,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
>   * svm_migrate_vma_to_ram - migrate range inside one vma from device to system
>   *
>   * @prange: svm range structure
> - * @vma: vm_area_struct that range [start, end] belongs to
> + * @vma: mm_area that range [start, end] belongs to
>   * @start: range start virtual address in pages
>   * @end: range end virtual address in pages
>   * @node: kfd node device to migrate from
> @@ -683,7 +683,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
>   */
>  static long
>  svm_migrate_vma_to_ram(struct kfd_node *node, struct svm_range *prange,
> -		       struct vm_area_struct *vma, uint64_t start, uint64_t end,
> +		       struct mm_area *vma, uint64_t start, uint64_t end,
>  		       uint32_t trigger, struct page *fault_page)
>  {
>  	struct kfd_process *p = container_of(prange->svms, struct kfd_process, svms);
> @@ -793,7 +793,7 @@ int svm_migrate_vram_to_ram(struct svm_range *prange, struct mm_struct *mm,
>  			    uint32_t trigger, struct page *fault_page)
>  {
>  	struct kfd_node *node;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr;
>  	unsigned long start;
>  	unsigned long end;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> index f6aedf69c644..82d332c7bdd1 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
> @@ -61,7 +61,7 @@
>   * BITS[61:46] - Encode gpu_id. To identify to which GPU the offset belongs
>   * BITS[45:0]  - MMAP offset value
>   *
> - * NOTE: struct vm_area_struct.vm_pgoff uses offset in pages. Hence, these
> + * NOTE: struct mm_area.vm_pgoff uses offset in pages. Hence, these
>   *  defines are w.r.t. PAGE_SIZE
>   */
>  #define KFD_MMAP_TYPE_SHIFT	62
> @@ -1077,7 +1077,7 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
>  bool kfd_process_xnack_mode(struct kfd_process *p, bool supported);
>
>  int kfd_reserved_mem_mmap(struct kfd_node *dev, struct kfd_process *process,
> -			  struct vm_area_struct *vma);
> +			  struct mm_area *vma);
>
>  /* KFD process API for creating and translating handles */
>  int kfd_process_device_create_obj_handle(struct kfd_process_device *pdd,
> @@ -1099,7 +1099,7 @@ size_t kfd_doorbell_process_slice(struct kfd_dev *kfd);
>  int kfd_doorbell_init(struct kfd_dev *kfd);
>  void kfd_doorbell_fini(struct kfd_dev *kfd);
>  int kfd_doorbell_mmap(struct kfd_node *dev, struct kfd_process *process,
> -		      struct vm_area_struct *vma);
> +		      struct mm_area *vma);
>  void __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
>  					unsigned int *doorbell_off);
>  void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr);
> @@ -1487,7 +1487,7 @@ extern const struct kfd_device_global_init_class device_global_init_class_cik;
>
>  int kfd_event_init_process(struct kfd_process *p);
>  void kfd_event_free_process(struct kfd_process *p);
> -int kfd_event_mmap(struct kfd_process *process, struct vm_area_struct *vma);
> +int kfd_event_mmap(struct kfd_process *process, struct mm_area *vma);
>  int kfd_wait_on_events(struct kfd_process *p,
>  		       uint32_t num_events, void __user *data,
>  		       bool all, uint32_t *user_timeout_ms,
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> index 7c0c24732481..94056ffd51d7 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
> @@ -2111,7 +2111,7 @@ int kfd_resume_all_processes(void)
>  }
>
>  int kfd_reserved_mem_mmap(struct kfd_node *dev, struct kfd_process *process,
> -			  struct vm_area_struct *vma)
> +			  struct mm_area *vma)
>  {
>  	struct kfd_process_device *pdd;
>  	struct qcm_process_device *qpd;
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> index 100717a98ec1..01e2538d9622 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
> @@ -1704,7 +1704,7 @@ static int svm_range_validate_and_map(struct mm_struct *mm,
>  		struct hmm_range *hmm_range = NULL;
>  		unsigned long map_start_vma;
>  		unsigned long map_last_vma;
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		unsigned long next = 0;
>  		unsigned long offset;
>  		unsigned long npages;
> @@ -2721,7 +2721,7 @@ svm_range_get_range_boundaries(struct kfd_process *p, int64_t addr,
>  			       unsigned long *start, unsigned long *last,
>  			       bool *is_heap_stack)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct interval_tree_node *node;
>  	struct rb_node *rb_node;
>  	unsigned long start_limit, end_limit;
> @@ -2938,7 +2938,7 @@ svm_range_count_fault(struct kfd_node *node, struct kfd_process *p,
>  }
>
>  static bool
> -svm_fault_allowed(struct vm_area_struct *vma, bool write_fault)
> +svm_fault_allowed(struct mm_area *vma, bool write_fault)
>  {
>  	unsigned long requested = VM_READ;
>
> @@ -2965,7 +2965,7 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
>  	int32_t best_loc;
>  	int32_t gpuid, gpuidx = MAX_GPU_INSTANCE;
>  	bool write_locked = false;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	bool migration = false;
>  	int r = 0;
>
> @@ -3373,7 +3373,7 @@ static int
>  svm_range_is_valid(struct kfd_process *p, uint64_t start, uint64_t size)
>  {
>  	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long end;
>  	unsigned long start_unchg = start;
>
> diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
> index 1a1680d71486..94767247f919 100644
> --- a/drivers/gpu/drm/armada/armada_gem.c
> +++ b/drivers/gpu/drm/armada/armada_gem.c
> @@ -471,7 +471,7 @@ static void armada_gem_prime_unmap_dma_buf(struct dma_buf_attachment *attach,
>  }
>
>  static int
> -armada_gem_dmabuf_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
> +armada_gem_dmabuf_mmap(struct dma_buf *buf, struct mm_area *vma)
>  {
>  	return -EINVAL;
>  }
> diff --git a/drivers/gpu/drm/drm_fbdev_dma.c b/drivers/gpu/drm/drm_fbdev_dma.c
> index 02a516e77192..d6b5bcdbc19f 100644
> --- a/drivers/gpu/drm/drm_fbdev_dma.c
> +++ b/drivers/gpu/drm/drm_fbdev_dma.c
> @@ -35,7 +35,7 @@ static int drm_fbdev_dma_fb_release(struct fb_info *info, int user)
>  	return 0;
>  }
>
> -static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct drm_fb_helper *fb_helper = info->par;
>
> diff --git a/drivers/gpu/drm/drm_fbdev_shmem.c b/drivers/gpu/drm/drm_fbdev_shmem.c
> index f824369baacd..3077d8e6e55b 100644
> --- a/drivers/gpu/drm/drm_fbdev_shmem.c
> +++ b/drivers/gpu/drm/drm_fbdev_shmem.c
> @@ -38,7 +38,7 @@ FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(drm_fbdev_shmem,
>  				   drm_fb_helper_damage_range,
>  				   drm_fb_helper_damage_area);
>
> -static int drm_fbdev_shmem_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int drm_fbdev_shmem_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct drm_fb_helper *fb_helper = info->par;
>  	struct drm_framebuffer *fb = fb_helper->fb;
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index c6240bab3fa5..f7a750cea62c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1013,7 +1013,7 @@ EXPORT_SYMBOL(drm_gem_object_free);
>   * This function implements the #vm_operations_struct open() callback for GEM
>   * drivers. This must be used together with drm_gem_vm_close().
>   */
> -void drm_gem_vm_open(struct vm_area_struct *vma)
> +void drm_gem_vm_open(struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = vma->vm_private_data;
>
> @@ -1028,7 +1028,7 @@ EXPORT_SYMBOL(drm_gem_vm_open);
>   * This function implements the #vm_operations_struct close() callback for GEM
>   * drivers. This must be used together with drm_gem_vm_open().
>   */
> -void drm_gem_vm_close(struct vm_area_struct *vma)
> +void drm_gem_vm_close(struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = vma->vm_private_data;
>
> @@ -1061,7 +1061,7 @@ EXPORT_SYMBOL(drm_gem_vm_close);
>   * size, or if no vm_ops are provided.
>   */
>  int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
> -		     struct vm_area_struct *vma)
> +		     struct mm_area *vma)
>  {
>  	int ret;
>
> @@ -1119,7 +1119,7 @@ EXPORT_SYMBOL(drm_gem_mmap_obj);
>   * If the caller is not granted access to the buffer object, the mmap will fail
>   * with EACCES. Please see the vma manager for more information.
>   */
> -int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +int drm_gem_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct drm_file *priv = filp->private_data;
>  	struct drm_device *dev = priv->minor->dev;
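
Worth noting while we are in drm_gem.c: drm_gem_vm_open()/close()
exist precisely so GEM drivers don't open-code the vm_ops refcounting
that drivers like binder carry themselves, and the rename leaves the
wiring untouched. Sketch:

static const struct vm_operations_struct foo_gem_vm_ops = {
	.fault	= foo_gem_fault,	/* hypothetical driver fault handler */
	.open	= drm_gem_vm_open,	/* takes a GEM object reference */
	.close	= drm_gem_vm_close,	/* drops it on unmap */
};
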
> diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> index b7f033d4352a..d3ae2d67fcc0 100644
> --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> @@ -519,7 +519,7 @@ EXPORT_SYMBOL_GPL(drm_gem_dma_vmap);
>   * Returns:
>   * 0 on success or a negative error code on failure.
>   */
> -int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *vma)
> +int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = &dma_obj->base;
>  	int ret;
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index d99dee67353a..b98f02716ad7 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -533,7 +533,7 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_dumb_create);
>
>  static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  	loff_t num_pages = obj->size >> PAGE_SHIFT;
> @@ -561,7 +561,7 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
> +static void drm_gem_shmem_vm_open(struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> @@ -583,7 +583,7 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
>  	drm_gem_vm_open(vma);
>  }
>
> -static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> +static void drm_gem_shmem_vm_close(struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> @@ -613,7 +613,7 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
>   * Returns:
>   * 0 on success or a negative error code on failure.
>   */
> -int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma)
> +int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
>  	int ret;
> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> index 3734aa2d1c5b..5ab41caf8e4a 100644
> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> @@ -97,7 +97,7 @@ EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>   * callback.
>   */
>  int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> -		     struct vm_area_struct *vma)
> +		     struct mm_area *vma)
>  {
>  	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>  	int ret;
> diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
> index 38431e8360e7..8d7fd83f2f1f 100644
> --- a/drivers/gpu/drm/drm_gpusvm.c
> +++ b/drivers/gpu/drm/drm_gpusvm.c
> @@ -902,7 +902,7 @@ static bool drm_gpusvm_check_pages(struct drm_gpusvm *gpusvm,
>  static unsigned long
>  drm_gpusvm_range_chunk_size(struct drm_gpusvm *gpusvm,
>  			    struct drm_gpusvm_notifier *notifier,
> -			    struct vm_area_struct *vas,
> +			    struct mm_area *vas,
>  			    unsigned long fault_addr,
>  			    unsigned long gpuva_start,
>  			    unsigned long gpuva_end,
> @@ -1003,7 +1003,7 @@ drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
>  	struct drm_gpusvm_notifier *notifier;
>  	struct drm_gpusvm_range *range;
>  	struct mm_struct *mm = gpusvm->mm;
> -	struct vm_area_struct *vas;
> +	struct mm_area *vas;
>  	bool notifier_alloc = false;
>  	unsigned long chunk_size;
>  	int err;
> @@ -1678,7 +1678,7 @@ int drm_gpusvm_migrate_to_devmem(struct drm_gpusvm *gpusvm,
>  	};
>  	struct mm_struct *mm = gpusvm->mm;
>  	unsigned long i, npages = npages_in_range(start, end);
> -	struct vm_area_struct *vas;
> +	struct mm_area *vas;
>  	struct drm_gpusvm_zdd *zdd = NULL;
>  	struct page **pages;
>  	dma_addr_t *dma_addr;
> @@ -1800,7 +1800,7 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_migrate_to_devmem);
>   *
>   * Return: 0 on success, negative error code on failure.
>   */
> -static int drm_gpusvm_migrate_populate_ram_pfn(struct vm_area_struct *vas,
> +static int drm_gpusvm_migrate_populate_ram_pfn(struct mm_area *vas,
>  					       struct page *fault_page,
>  					       unsigned long npages,
>  					       unsigned long *mpages,
> @@ -1962,7 +1962,7 @@ EXPORT_SYMBOL_GPL(drm_gpusvm_evict_to_ram);
>   *
>   * Return: 0 on success, negative error code on failure.
>   */
> -static int __drm_gpusvm_migrate_to_ram(struct vm_area_struct *vas,
> +static int __drm_gpusvm_migrate_to_ram(struct mm_area *vas,
>  				       void *device_private_page_owner,
>  				       struct page *page,
>  				       unsigned long fault_addr,
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index bdb51c8f262e..3691e0445696 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -737,7 +737,7 @@ EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
>   * The fake GEM offset is added to vma->vm_pgoff and &drm_driver->fops->mmap is
>   * called to set up the mapping.
>   */
> -int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +int drm_gem_prime_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct drm_file *priv;
>  	struct file *fil;
> @@ -795,7 +795,7 @@ EXPORT_SYMBOL(drm_gem_prime_mmap);
>   *
>   * Returns 0 on success or a negative error code on failure.
>   */
> -int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
> +int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index 2f844e82bc46..8a5d096ddb36 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -125,7 +125,7 @@ void etnaviv_gem_put_pages(struct etnaviv_gem_object *etnaviv_obj)
>  }
>
>  static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	pgprot_t vm_page_prot;
>
> @@ -152,7 +152,7 @@ static int etnaviv_gem_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
>  	return 0;
>  }
>
> -static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
>
> @@ -161,7 +161,7 @@ static int etnaviv_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *v
>
>  static vm_fault_t etnaviv_gem_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
>  	struct page **pages;
> @@ -718,7 +718,7 @@ static void etnaviv_gem_userptr_release(struct etnaviv_gem_object *etnaviv_obj)
>  }
>
>  static int etnaviv_gem_userptr_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	return -EINVAL;
>  }
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.h b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
> index e5ee82a0674c..20c10d1bedd2 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.h
> @@ -68,7 +68,7 @@ struct etnaviv_gem_ops {
>  	int (*get_pages)(struct etnaviv_gem_object *);
>  	void (*release)(struct etnaviv_gem_object *);
>  	void *(*vmap)(struct etnaviv_gem_object *);
> -	int (*mmap)(struct etnaviv_gem_object *, struct vm_area_struct *);
> +	int (*mmap)(struct etnaviv_gem_object *, struct mm_area *);
>  };
>
>  static inline bool is_active(struct etnaviv_gem_object *etnaviv_obj)
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 42e57d142554..b81b597367e0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -89,7 +89,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
>  }
>
>  static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	int ret;
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
> index 9526a25e90ac..637b38b274cd 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_fbdev.c
> @@ -24,7 +24,7 @@
>
>  #define MAX_CONNECTOR		4
>
> -static int exynos_drm_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int exynos_drm_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct drm_fb_helper *helper = info->par;
>  	struct drm_gem_object *obj = drm_gem_fb_get_obj(helper->fb, 0);
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index 4787fee4696f..8ab046d62150 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -20,7 +20,7 @@
>
>  MODULE_IMPORT_NS("DMA_BUF");
>
> -static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> +static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma);
>
>  static int exynos_drm_alloc_buf(struct exynos_drm_gem *exynos_gem, bool kvmap)
>  {
> @@ -268,7 +268,7 @@ struct exynos_drm_gem *exynos_drm_gem_get(struct drm_file *filp,
>  }
>
>  static int exynos_drm_gem_mmap_buffer(struct exynos_drm_gem *exynos_gem,
> -				      struct vm_area_struct *vma)
> +				      struct mm_area *vma)
>  {
>  	struct drm_device *drm_dev = exynos_gem->base.dev;
>  	unsigned long vm_size;
> @@ -360,7 +360,7 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
>  	return 0;
>  }
>
> -static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int exynos_drm_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct exynos_drm_gem *exynos_gem = to_exynos_gem(obj);
>  	int ret;
> diff --git a/drivers/gpu/drm/gma500/fbdev.c b/drivers/gpu/drm/gma500/fbdev.c
> index 8edefea2ef59..57ff0f19937d 100644
> --- a/drivers/gpu/drm/gma500/fbdev.c
> +++ b/drivers/gpu/drm/gma500/fbdev.c
> @@ -22,7 +22,7 @@
>
>  static vm_fault_t psb_fbdev_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct fb_info *info = vma->vm_private_data;
>  	unsigned long address = vmf->address - (vmf->pgoff << PAGE_SHIFT);
>  	unsigned long pfn = info->fix.smem_start >> PAGE_SHIFT;
> @@ -93,7 +93,7 @@ static int psb_fbdev_fb_setcolreg(unsigned int regno,
>  	return 0;
>  }
>
> -static int psb_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int psb_fbdev_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	if (vma->vm_pgoff != 0)
>  		return -EINVAL;
> diff --git a/drivers/gpu/drm/gma500/gem.c b/drivers/gpu/drm/gma500/gem.c
> index 4b7627a72637..b458c86773dd 100644
> --- a/drivers/gpu/drm/gma500/gem.c
> +++ b/drivers/gpu/drm/gma500/gem.c
> @@ -253,7 +253,7 @@ int psb_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
>   */
>  static vm_fault_t psb_gem_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *obj;
>  	struct psb_gem_object *pobj;
>  	int err;
> diff --git a/drivers/gpu/drm/i915/display/intel_bo.c b/drivers/gpu/drm/i915/display/intel_bo.c
> index fbd16d7b58d9..b193ee0f7171 100644
> --- a/drivers/gpu/drm/i915/display/intel_bo.c
> +++ b/drivers/gpu/drm/i915/display/intel_bo.c
> @@ -32,7 +32,7 @@ void intel_bo_flush_if_display(struct drm_gem_object *obj)
>  	i915_gem_object_flush_if_display(to_intel_bo(obj));
>  }
>
> -int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +int intel_bo_fb_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	return i915_gem_fb_mmap(to_intel_bo(obj), vma);
>  }
> diff --git a/drivers/gpu/drm/i915/display/intel_bo.h b/drivers/gpu/drm/i915/display/intel_bo.h
> index ea7a2253aaa5..38f3518bb80f 100644
> --- a/drivers/gpu/drm/i915/display/intel_bo.h
> +++ b/drivers/gpu/drm/i915/display/intel_bo.h
> @@ -8,14 +8,14 @@
>
>  struct drm_gem_object;
>  struct seq_file;
> -struct vm_area_struct;
> +struct mm_area;
>
>  bool intel_bo_is_tiled(struct drm_gem_object *obj);
>  bool intel_bo_is_userptr(struct drm_gem_object *obj);
>  bool intel_bo_is_shmem(struct drm_gem_object *obj);
>  bool intel_bo_is_protected(struct drm_gem_object *obj);
>  void intel_bo_flush_if_display(struct drm_gem_object *obj);
> -int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> +int intel_bo_fb_mmap(struct drm_gem_object *obj, struct mm_area *vma);
>  int intel_bo_read_from_page(struct drm_gem_object *obj, u64 offset, void *dst, int size);
>
>  struct intel_frontbuffer *intel_bo_get_frontbuffer(struct drm_gem_object *obj);
> diff --git a/drivers/gpu/drm/i915/display/intel_fbdev.c b/drivers/gpu/drm/i915/display/intel_fbdev.c
> index adc19d5607de..69ade9a6ca90 100644
> --- a/drivers/gpu/drm/i915/display/intel_fbdev.c
> +++ b/drivers/gpu/drm/i915/display/intel_fbdev.c
> @@ -121,7 +121,7 @@ static int intel_fbdev_pan_display(struct fb_var_screeninfo *var,
>  	return ret;
>  }
>
> -static int intel_fbdev_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int intel_fbdev_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct drm_fb_helper *fb_helper = info->par;
>  	struct drm_gem_object *obj = drm_gem_fb_get_obj(fb_helper->fb, 0);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> index 9473050ac842..2caf031bfbc1 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
> @@ -91,7 +91,7 @@ static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf,
>  	i915_gem_object_unpin_map(obj);
>  }
>
> -static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
> +static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma)
>  {
>  	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
>  	struct drm_i915_private *i915 = to_i915(obj->base.dev);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> index c3dabb857960..9fcb86c991fd 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
> @@ -27,7 +27,7 @@
>  #include "i915_vma.h"
>
>  static inline bool
> -__vma_matches(struct vm_area_struct *vma, struct file *filp,
> +__vma_matches(struct mm_area *vma, struct file *filp,
>  	      unsigned long addr, unsigned long size)
>  {
>  	if (vma->vm_file != filp)
> @@ -104,7 +104,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
>
>  	if (args->flags & I915_MMAP_WC) {
>  		struct mm_struct *mm = current->mm;
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		if (mmap_write_lock_killable(mm)) {
>  			addr = -EINTR;
> @@ -252,7 +252,7 @@ static vm_fault_t i915_error_to_vmf_fault(int err)
>
>  static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *area = vmf->vma;
> +	struct mm_area *area = vmf->vma;
>  	struct i915_mmap_offset *mmo = area->vm_private_data;
>  	struct drm_i915_gem_object *obj = mmo->obj;
>  	unsigned long obj_offset;
> @@ -295,7 +295,7 @@ static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
>  	return i915_error_to_vmf_fault(err);
>  }
>
> -static void set_address_limits(struct vm_area_struct *area,
> +static void set_address_limits(struct mm_area *area,
>  			       struct i915_vma *vma,
>  			       unsigned long obj_offset,
>  			       resource_size_t gmadr_start,
> @@ -339,7 +339,7 @@ static void set_address_limits(struct vm_area_struct *area,
>  static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
>  {
>  #define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
> -	struct vm_area_struct *area = vmf->vma;
> +	struct mm_area *area = vmf->vma;
>  	struct i915_mmap_offset *mmo = area->vm_private_data;
>  	struct drm_i915_gem_object *obj = mmo->obj;
>  	struct drm_device *dev = obj->base.dev;
> @@ -506,7 +506,7 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
>  }
>
>  static int
> -vm_access(struct vm_area_struct *area, unsigned long addr,
> +vm_access(struct mm_area *area, unsigned long addr,
>  	  void *buf, int len, int write)
>  {
>  	struct i915_mmap_offset *mmo = area->vm_private_data;
> @@ -919,7 +919,7 @@ i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
>  	return __assign_mmap_offset_handle(file, args->handle, type, &args->offset);
>  }
>
> -static void vm_open(struct vm_area_struct *vma)
> +static void vm_open(struct mm_area *vma)
>  {
>  	struct i915_mmap_offset *mmo = vma->vm_private_data;
>  	struct drm_i915_gem_object *obj = mmo->obj;
> @@ -928,7 +928,7 @@ static void vm_open(struct vm_area_struct *vma)
>  	i915_gem_object_get(obj);
>  }
>
> -static void vm_close(struct vm_area_struct *vma)
> +static void vm_close(struct mm_area *vma)
>  {
>  	struct i915_mmap_offset *mmo = vma->vm_private_data;
>  	struct drm_i915_gem_object *obj = mmo->obj;
> @@ -990,7 +990,7 @@ static struct file *mmap_singleton(struct drm_i915_private *i915)
>  static int
>  i915_gem_object_mmap(struct drm_i915_gem_object *obj,
>  		     struct i915_mmap_offset *mmo,
> -		     struct vm_area_struct *vma)
> +		     struct mm_area *vma)
>  {
>  	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>  	struct drm_device *dev = &i915->drm;
> @@ -1071,7 +1071,7 @@ i915_gem_object_mmap(struct drm_i915_gem_object *obj,
>   * be able to resolve multiple mmap offsets which could be tied
>   * to a single gem object.
>   */
> -int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +int i915_gem_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct drm_vma_offset_node *node;
>  	struct drm_file *priv = filp->private_data;
> @@ -1114,7 +1114,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  	return i915_gem_object_mmap(obj, mmo, vma);
>  }
>
> -int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma)
> +int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct mm_area *vma)
>  {
>  	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>  	struct drm_device *dev = &i915->drm;
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.h b/drivers/gpu/drm/i915/gem/i915_gem_mman.h
> index 196417fd0f5c..5e6faa37dbc2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_mman.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.h
> @@ -18,7 +18,7 @@ struct i915_mmap_offset;
>  struct mutex;
>
>  int i915_gem_mmap_gtt_version(void);
> -int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> +int i915_gem_mmap(struct file *filp, struct mm_area *vma);
>
>  int i915_gem_dumb_mmap_offset(struct drm_file *file_priv,
>  			      struct drm_device *dev,
> @@ -29,5 +29,5 @@ void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj);
>
>  void i915_gem_object_runtime_pm_release_mmap_offset(struct drm_i915_gem_object *obj);
>  void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj);
> -int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma);
> +int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct mm_area *vma);
>  #endif
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index 1f4814968868..b65ee3c4c4fc 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -1034,7 +1034,7 @@ static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>
>  static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *area = vmf->vma;
> +	struct mm_area *area = vmf->vma;
>  	struct ttm_buffer_object *bo = area->vm_private_data;
>  	struct drm_device *dev = bo->base.dev;
>  	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
> @@ -1147,7 +1147,7 @@ static vm_fault_t vm_fault_ttm(struct vm_fault *vmf)
>  }
>
>  static int
> -vm_access_ttm(struct vm_area_struct *area, unsigned long addr,
> +vm_access_ttm(struct mm_area *area, unsigned long addr,
>  	      void *buf, int len, int write)
>  {
>  	struct drm_i915_gem_object *obj =
> @@ -1159,7 +1159,7 @@ vm_access_ttm(struct vm_area_struct *area, unsigned long addr,
>  	return ttm_bo_vm_access(area, addr, buf, len, write);
>  }
>
> -static void ttm_vm_open(struct vm_area_struct *vma)
> +static void ttm_vm_open(struct mm_area *vma)
>  {
>  	struct drm_i915_gem_object *obj =
>  		i915_ttm_to_gem(vma->vm_private_data);
> @@ -1168,7 +1168,7 @@ static void ttm_vm_open(struct vm_area_struct *vma)
>  	i915_gem_object_get(obj);
>  }
>
> -static void ttm_vm_close(struct vm_area_struct *vma)
> +static void ttm_vm_close(struct mm_area *vma)
>  {
>  	struct drm_i915_gem_object *obj =
>  		i915_ttm_to_gem(vma->vm_private_data);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index 09b68713ab32..a3badd817b6b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -401,7 +401,7 @@ static int
>  probe_range(struct mm_struct *mm, unsigned long addr, unsigned long len)
>  {
>  	VMA_ITERATOR(vmi, mm, addr);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long end = addr + len;
>
>  	mmap_read_lock(mm);
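
probe_range() here is the only VMA_ITERATOR user in the i915 hunks;
the iterator macros never spell the struct name out, so they are
untouched by the rename. A sketch of the same kind of walk (assuming
for_each_vma_range() keeps its current form; the VM_IO|VM_PFNMAP
policy is illustrative, not i915's exact check):

static int foo_probe_range(struct mm_struct *mm, unsigned long addr,
			   unsigned long len)
{
	VMA_ITERATOR(vmi, mm, addr);
	unsigned long end = addr + len;
	struct mm_area *vma;
	int ret = 0;

	mmap_read_lock(mm);
	for_each_vma_range(vmi, vma, end) {
		/* refuse special mappings we cannot pin */
		if (vma->vm_flags & (VM_IO | VM_PFNMAP)) {
			ret = -EFAULT;
			break;
		}
	}
	mmap_read_unlock(mm);
	return ret;
}
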
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> index 804f74084bd4..c0a2c9bed6da 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> @@ -896,7 +896,7 @@ static int __igt_mmap(struct drm_i915_private *i915,
>  		      struct drm_i915_gem_object *obj,
>  		      enum i915_mmap_type type)
>  {
> -	struct vm_area_struct *area;
> +	struct mm_area *area;
>  	unsigned long addr;
>  	int err, i;
>  	u64 offset;
> @@ -924,7 +924,7 @@ static int __igt_mmap(struct drm_i915_private *i915,
>  	area = vma_lookup(current->mm, addr);
>  	mmap_read_unlock(current->mm);
>  	if (!area) {
> -		pr_err("%s: Did not create a vm_area_struct for the mmap\n",
> +		pr_err("%s: Did not create a mm_area for the mmap\n",
>  		       obj->mm.region->name);
>  		err = -EINVAL;
>  		goto out_unmap;
> @@ -1096,7 +1096,7 @@ static int ___igt_mmap_migrate(struct drm_i915_private *i915,
>  			       unsigned long addr,
>  			       bool unfaultable)
>  {
> -	struct vm_area_struct *area;
> +	struct mm_area *area;
>  	int err = 0, i;
>
>  	pr_info("igt_mmap(%s, %d) @ %lx\n",
> @@ -1106,7 +1106,7 @@ static int ___igt_mmap_migrate(struct drm_i915_private *i915,
>  	area = vma_lookup(current->mm, addr);
>  	mmap_read_unlock(current->mm);
>  	if (!area) {
> -		pr_err("%s: Did not create a vm_area_struct for the mmap\n",
> +		pr_err("%s: Did not create a mm_area for the mmap\n",
>  		       obj->mm.region->name);
>  		err = -EINVAL;
>  		goto out_unmap;
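
And the selftest idiom, since it appears twice above: vma_lookup()
returns the mapping containing the address, or NULL, and must run
under mmap_read_lock. Reduced to its core (sketch only; the pointer is
deliberately not dereferenced after unlock):

static bool foo_addr_is_mapped(unsigned long addr)
{
	struct mm_area *area;

	mmap_read_lock(current->mm);
	area = vma_lookup(current->mm, addr);
	mmap_read_unlock(current->mm);

	return area != NULL;
}
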
> diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
> index 5cd58e0f0dcf..11140801f804 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c
> @@ -82,7 +82,7 @@ static void mock_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map)
>  	vm_unmap_ram(map->vaddr, mock->npages);
>  }
>
> -static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
> +static int mock_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma)
>  {
>  	return -ENODEV;
>  }
> diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
> index 69830a5c49d3..8f4cc972a94c 100644
> --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
> +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
> @@ -1011,7 +1011,7 @@ static ssize_t intel_vgpu_write(struct vfio_device *vfio_dev,
>  }
>
>  static int intel_vgpu_mmap(struct vfio_device *vfio_dev,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	struct intel_vgpu *vgpu = vfio_dev_to_vgpu(vfio_dev);
>  	unsigned int index;
> diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
> index 76e2801619f0..d92cf85a65cf 100644
> --- a/drivers/gpu/drm/i915/i915_mm.c
> +++ b/drivers/gpu/drm/i915/i915_mm.c
> @@ -91,7 +91,7 @@ static int remap_pfn(pte_t *pte, unsigned long addr, void *data)
>   *
>   *  Note: this is only safe if the mm semaphore is held when called.
>   */
> -int remap_io_mapping(struct vm_area_struct *vma,
> +int remap_io_mapping(struct mm_area *vma,
>  		     unsigned long addr, unsigned long pfn, unsigned long size,
>  		     struct io_mapping *iomap)
>  {
> @@ -127,7 +127,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
>   *
>   *  Note: this is only safe if the mm semaphore is held when called.
>   */
> -int remap_io_sg(struct vm_area_struct *vma,
> +int remap_io_sg(struct mm_area *vma,
>  		unsigned long addr, unsigned long size,
>  		struct scatterlist *sgl, unsigned long offset,
>  		resource_size_t iobase)
> diff --git a/drivers/gpu/drm/i915/i915_mm.h b/drivers/gpu/drm/i915/i915_mm.h
> index 69f9351b1a1c..0ba12093b9ed 100644
> --- a/drivers/gpu/drm/i915/i915_mm.h
> +++ b/drivers/gpu/drm/i915/i915_mm.h
> @@ -9,17 +9,17 @@
>  #include <linux/bug.h>
>  #include <linux/types.h>
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct io_mapping;
>  struct scatterlist;
>
>  #if IS_ENABLED(CONFIG_X86)
> -int remap_io_mapping(struct vm_area_struct *vma,
> +int remap_io_mapping(struct mm_area *vma,
>  		     unsigned long addr, unsigned long pfn, unsigned long size,
>  		     struct io_mapping *iomap);
>  #else
>  static inline
> -int remap_io_mapping(struct vm_area_struct *vma,
> +int remap_io_mapping(struct mm_area *vma,
>  		     unsigned long addr, unsigned long pfn, unsigned long size,
>  		     struct io_mapping *iomap)
>  {
> @@ -28,7 +28,7 @@ int remap_io_mapping(struct vm_area_struct *vma,
>  }
>  #endif
>
> -int remap_io_sg(struct vm_area_struct *vma,
> +int remap_io_sg(struct mm_area *vma,
>  		unsigned long addr, unsigned long size,
>  		struct scatterlist *sgl, unsigned long offset,
>  		resource_size_t iobase);
> diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
> index 6a8c81fe8c1e..b89482468e95 100644
> --- a/drivers/gpu/drm/imagination/pvr_gem.c
> +++ b/drivers/gpu/drm/imagination/pvr_gem.c
> @@ -27,7 +27,7 @@ static void pvr_gem_object_free(struct drm_gem_object *obj)
>  	drm_gem_shmem_object_free(obj);
>  }
>
> -static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma)
> +static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct mm_area *vma)
>  {
>  	struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(gem_obj);
>  	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 9bb997dbb4b9..236327d428cd 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -198,7 +198,7 @@ static int lima_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
>  	return drm_gem_shmem_vmap(&bo->base, map);
>  }
>
> -static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int lima_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct lima_bo *bo = to_lima_bo(obj);
>
> diff --git a/drivers/gpu/drm/lima/lima_gem.h b/drivers/gpu/drm/lima/lima_gem.h
> index ccea06142f4b..2dc229d7a747 100644
> --- a/drivers/gpu/drm/lima/lima_gem.h
> +++ b/drivers/gpu/drm/lima/lima_gem.h
> @@ -42,6 +42,6 @@ int lima_gem_get_info(struct drm_file *file, u32 handle, u32 *va, u64 *offset);
>  int lima_gem_submit(struct drm_file *file, struct lima_submit *submit);
>  int lima_gem_wait(struct drm_file *file, u32 handle, u32 op, s64 timeout_ns);
>
> -void lima_set_vma_flags(struct vm_area_struct *vma);
> +void lima_set_vma_flags(struct mm_area *vma);
>
>  #endif
> diff --git a/drivers/gpu/drm/loongson/lsdc_gem.c b/drivers/gpu/drm/loongson/lsdc_gem.c
> index a720d8f53209..21d13a9acde5 100644
> --- a/drivers/gpu/drm/loongson/lsdc_gem.c
> +++ b/drivers/gpu/drm/loongson/lsdc_gem.c
> @@ -110,7 +110,7 @@ static void lsdc_gem_object_vunmap(struct drm_gem_object *obj, struct iosys_map
>  	}
>  }
>
> -static int lsdc_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int lsdc_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct ttm_buffer_object *tbo = to_ttm_bo(obj);
>  	int ret;
> diff --git a/drivers/gpu/drm/mediatek/mtk_gem.c b/drivers/gpu/drm/mediatek/mtk_gem.c
> index a172456d1d7b..254a991e94b2 100644
> --- a/drivers/gpu/drm/mediatek/mtk_gem.c
> +++ b/drivers/gpu/drm/mediatek/mtk_gem.c
> @@ -15,7 +15,7 @@
>  #include "mtk_drm_drv.h"
>  #include "mtk_gem.h"
>
> -static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> +static int mtk_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma);
>
>  static const struct vm_operations_struct vm_ops = {
>  	.open = drm_gem_vm_open,
> @@ -157,7 +157,7 @@ int mtk_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
>  }
>
>  static int mtk_gem_object_mmap(struct drm_gem_object *obj,
> -			       struct vm_area_struct *vma)
> +			       struct mm_area *vma)
>
>  {
>  	int ret;
> diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
> index c62249b1ab3d..058585d17be3 100644
> --- a/drivers/gpu/drm/msm/msm_fbdev.c
> +++ b/drivers/gpu/drm/msm/msm_fbdev.c
> @@ -29,7 +29,7 @@ FB_GEN_DEFAULT_DEFERRED_SYSMEM_OPS(msm_fbdev,
>  				   drm_fb_helper_damage_range,
>  				   drm_fb_helper_damage_area)
>
> -static int msm_fbdev_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int msm_fbdev_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct drm_fb_helper *helper = (struct drm_fb_helper *)info->par;
>  	struct drm_gem_object *bo = msm_framebuffer_bo(helper->fb, 0);
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index ebc9ba66efb8..4564662c845c 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -321,7 +321,7 @@ static pgprot_t msm_gem_pgprot(struct msm_gem_object *msm_obj, pgprot_t prot)
>
>  static vm_fault_t msm_gem_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct msm_gem_object *msm_obj = to_msm_bo(obj);
>  	struct page **pages;
> @@ -1097,7 +1097,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
>  	kfree(msm_obj);
>  }
>
> -static int msm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int msm_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct msm_gem_object *msm_obj = to_msm_bo(obj);
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 61d0f411ef84..4dd166e36cfe 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -691,7 +691,7 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
>  int
>  nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
>  			 struct nouveau_svmm *svmm,
> -			 struct vm_area_struct *vma,
> +			 struct mm_area *vma,
>  			 unsigned long start,
>  			 unsigned long end)
>  {
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.h b/drivers/gpu/drm/nouveau/nouveau_dmem.h
> index 64da5d3635c8..c52336b7729f 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.h
> @@ -36,7 +36,7 @@ void nouveau_dmem_resume(struct nouveau_drm *);
>
>  int nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
>  			     struct nouveau_svmm *svmm,
> -			     struct vm_area_struct *vma,
> +			     struct mm_area *vma,
>  			     unsigned long start,
>  			     unsigned long end);
>  unsigned long nouveau_dmem_page_addr(struct page *page);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 67e3c99de73a..db3fe08c1ee6 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -41,7 +41,7 @@
>
>  static vm_fault_t nouveau_ttm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>  	pgprot_t prot;
>  	vm_fault_t ret;
> diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
> index e12e2596ed84..43e5f70f664e 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
> @@ -173,7 +173,7 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
>  	}
>
>  	for (addr = args->va_start, end = args->va_end; addr < end;) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		unsigned long next;
>
>  		vma = find_vma_intersection(mm, addr, end);
> diff --git a/drivers/gpu/drm/omapdrm/omap_fbdev.c b/drivers/gpu/drm/omapdrm/omap_fbdev.c
> index 7b6396890681..5a1818a59244 100644
> --- a/drivers/gpu/drm/omapdrm/omap_fbdev.c
> +++ b/drivers/gpu/drm/omapdrm/omap_fbdev.c
> @@ -81,7 +81,7 @@ static int omap_fbdev_pan_display(struct fb_var_screeninfo *var, struct fb_info
>  	return drm_fb_helper_pan_display(var, fbi);
>  }
>
> -static int omap_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int omap_fbdev_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
> index b9c67e4ca360..cbbdaf381ad3 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem.c
> @@ -351,7 +351,7 @@ size_t omap_gem_mmap_size(struct drm_gem_object *obj)
>
>  /* Normal handling for the case of faulting in non-tiled buffers */
>  static vm_fault_t omap_gem_fault_1d(struct drm_gem_object *obj,
> -		struct vm_area_struct *vma, struct vm_fault *vmf)
> +		struct mm_area *vma, struct vm_fault *vmf)
>  {
>  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
>  	unsigned long pfn;
> @@ -377,7 +377,7 @@ static vm_fault_t omap_gem_fault_1d(struct drm_gem_object *obj,
>
>  /* Special handling for the case of faulting in 2d tiled buffers */
>  static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj,
> -		struct vm_area_struct *vma, struct vm_fault *vmf)
> +		struct mm_area *vma, struct vm_fault *vmf)
>  {
>  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
>  	struct omap_drm_private *priv = obj->dev->dev_private;
> @@ -496,7 +496,7 @@ static vm_fault_t omap_gem_fault_2d(struct drm_gem_object *obj,
>   */
>  static vm_fault_t omap_gem_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
>  	int err;
> @@ -531,7 +531,7 @@ static vm_fault_t omap_gem_fault(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -static int omap_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int omap_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
>
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem.h b/drivers/gpu/drm/omapdrm/omap_gem.h
> index fec3fa0e4c33..d28793a23d46 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem.h
> +++ b/drivers/gpu/drm/omapdrm/omap_gem.h
> @@ -23,7 +23,7 @@ struct file;
>  struct list_head;
>  struct page;
>  struct seq_file;
> -struct vm_area_struct;
> +struct mm_area;
>  struct vm_fault;
>
>  union omap_gem_size;
> diff --git a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> index 30cf1cdc1aa3..64d9520d20c0 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c
> @@ -61,7 +61,7 @@ static int omap_gem_dmabuf_end_cpu_access(struct dma_buf *buffer,
>  }
>
>  static int omap_gem_dmabuf_mmap(struct dma_buf *buffer,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	struct drm_gem_object *obj = buffer->priv;
>
> diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
> index a9da1d1eeb70..c3092cf8f280 100644
> --- a/drivers/gpu/drm/panthor/panthor_device.c
> +++ b/drivers/gpu/drm/panthor/panthor_device.c
> @@ -359,7 +359,7 @@ const char *panthor_exception_name(struct panthor_device *ptdev, u32 exception_c
>
>  static vm_fault_t panthor_mmio_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct panthor_device *ptdev = vma->vm_private_data;
>  	u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
>  	unsigned long pfn;
> @@ -403,7 +403,7 @@ static const struct vm_operations_struct panthor_mmio_vm_ops = {
>  	.fault = panthor_mmio_vm_fault,
>  };
>
> -int panthor_device_mmap_io(struct panthor_device *ptdev, struct vm_area_struct *vma)
> +int panthor_device_mmap_io(struct panthor_device *ptdev, struct mm_area *vma)
>  {
>  	u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
>
> diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
> index da6574021664..a3205e6b0518 100644
> --- a/drivers/gpu/drm/panthor/panthor_device.h
> +++ b/drivers/gpu/drm/panthor/panthor_device.h
> @@ -253,7 +253,7 @@ static inline bool panthor_device_reset_is_pending(struct panthor_device *ptdev)
>  }
>
>  int panthor_device_mmap_io(struct panthor_device *ptdev,
> -			   struct vm_area_struct *vma);
> +			   struct mm_area *vma);
>
>  int panthor_device_resume(struct device *dev);
>  int panthor_device_suspend(struct device *dev);
> diff --git a/drivers/gpu/drm/panthor/panthor_drv.c b/drivers/gpu/drm/panthor/panthor_drv.c
> index 06fe46e32073..3fca24a494d4 100644
> --- a/drivers/gpu/drm/panthor/panthor_drv.c
> +++ b/drivers/gpu/drm/panthor/panthor_drv.c
> @@ -1402,7 +1402,7 @@ static const struct drm_ioctl_desc panthor_drm_driver_ioctls[] = {
>  	PANTHOR_IOCTL(GROUP_SUBMIT, group_submit, DRM_RENDER_ALLOW),
>  };
>
> -static int panthor_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int panthor_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct drm_file *file = filp->private_data;
>  	struct panthor_file *pfile = file->driver_priv;
> diff --git a/drivers/gpu/drm/panthor/panthor_gem.c b/drivers/gpu/drm/panthor/panthor_gem.c
> index 8244a4e6c2a2..a323f6580f9c 100644
> --- a/drivers/gpu/drm/panthor/panthor_gem.c
> +++ b/drivers/gpu/drm/panthor/panthor_gem.c
> @@ -129,7 +129,7 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
>  	return ERR_PTR(ret);
>  }
>
> -static int panthor_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int panthor_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct panthor_gem_object *bo = to_panthor_bo(obj);
>
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index f86773f3db20..83230ce4e4f3 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -263,7 +263,7 @@ static int radeon_gem_handle_lockup(struct radeon_device *rdev, int r)
>  	return r;
>  }
>
> -static int radeon_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int radeon_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>  	struct radeon_device *rdev = radeon_get_rdev(bo->tbo.bdev);
> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> index 616d25c8c2de..a9007d171911 100644
> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> @@ -338,7 +338,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_device *bdev, struct ttm_tt *ttm
>  		/* check that we only pin down anonymous memory
>  		   to prevent problems with writeback */
>  		unsigned long end = gtt->userptr + (u64)ttm->num_pages * PAGE_SIZE;
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		vma = find_vma(gtt->usermm, gtt->userptr);
>  		if (!vma || vma->vm_file || vma->vm_end < end)
>  			return -EPERM;
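
The lookup-side callers, such as radeon's userptr check here, keep all
their helpers; only the returned pointer type changes.  For
illustration, the usual pattern under the new name (a sketch only;
demo_addr_is_mapped() is a made-up wrapper):

static bool demo_addr_is_mapped(unsigned long addr)
{
	struct mm_area *vma;
	bool mapped;

	mmap_read_lock(current->mm);
	/* find_vma() returns the first area with vm_end > addr */
	vma = find_vma(current->mm, addr);
	mapped = vma && addr >= vma->vm_start;
	mmap_read_unlock(current->mm);

	return mapped;
}
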
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 6330b883efc3..f35e43ef35c0 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -213,7 +213,7 @@ static void rockchip_gem_free_buf(struct rockchip_gem_object *rk_obj)
>  }
>
>  static int rockchip_drm_gem_object_mmap_iommu(struct drm_gem_object *obj,
> -					      struct vm_area_struct *vma)
> +					      struct mm_area *vma)
>  {
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>  	unsigned int count = obj->size >> PAGE_SHIFT;
> @@ -226,7 +226,7 @@ static int rockchip_drm_gem_object_mmap_iommu(struct drm_gem_object *obj,
>  }
>
>  static int rockchip_drm_gem_object_mmap_dma(struct drm_gem_object *obj,
> -					    struct vm_area_struct *vma)
> +					    struct mm_area *vma)
>  {
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>  	struct drm_device *drm = obj->dev;
> @@ -236,7 +236,7 @@ static int rockchip_drm_gem_object_mmap_dma(struct drm_gem_object *obj,
>  }
>
>  static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
> -					struct vm_area_struct *vma)
> +					struct mm_area *vma)
>  {
>  	int ret;
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
> diff --git a/drivers/gpu/drm/tegra/fbdev.c b/drivers/gpu/drm/tegra/fbdev.c
> index cd9d798f8870..bb7d18a7ee7c 100644
> --- a/drivers/gpu/drm/tegra/fbdev.c
> +++ b/drivers/gpu/drm/tegra/fbdev.c
> @@ -22,7 +22,7 @@
>  #include "drm.h"
>  #include "gem.h"
>
> -static int tegra_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int tegra_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct drm_fb_helper *helper = info->par;
>  	struct tegra_bo *bo;
> diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
> index ace3e5a805cf..8c8233eeeaf9 100644
> --- a/drivers/gpu/drm/tegra/gem.c
> +++ b/drivers/gpu/drm/tegra/gem.c
> @@ -560,7 +560,7 @@ int tegra_bo_dumb_create(struct drm_file *file, struct drm_device *drm,
>
>  static vm_fault_t tegra_bo_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *gem = vma->vm_private_data;
>  	struct tegra_bo *bo = to_tegra_bo(gem);
>  	struct page *page;
> @@ -581,7 +581,7 @@ const struct vm_operations_struct tegra_bo_vm_ops = {
>  	.close = drm_gem_vm_close,
>  };
>
> -int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
> +int __tegra_gem_mmap(struct drm_gem_object *gem, struct mm_area *vma)
>  {
>  	struct tegra_bo *bo = to_tegra_bo(gem);
>
> @@ -616,7 +616,7 @@ int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -int tegra_drm_mmap(struct file *file, struct vm_area_struct *vma)
> +int tegra_drm_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct drm_gem_object *gem;
>  	int err;
> @@ -708,7 +708,7 @@ static int tegra_gem_prime_end_cpu_access(struct dma_buf *buf,
>  	return 0;
>  }
>
> -static int tegra_gem_prime_mmap(struct dma_buf *buf, struct vm_area_struct *vma)
> +static int tegra_gem_prime_mmap(struct dma_buf *buf, struct mm_area *vma)
>  {
>  	struct drm_gem_object *gem = buf->priv;
>  	int err;
> diff --git a/drivers/gpu/drm/tegra/gem.h b/drivers/gpu/drm/tegra/gem.h
> index bf2cbd48eb3f..ca8e8a5e3335 100644
> --- a/drivers/gpu/drm/tegra/gem.h
> +++ b/drivers/gpu/drm/tegra/gem.h
> @@ -93,8 +93,8 @@ int tegra_bo_dumb_create(struct drm_file *file, struct drm_device *drm,
>
>  extern const struct vm_operations_struct tegra_bo_vm_ops;
>
> -int __tegra_gem_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma);
> -int tegra_drm_mmap(struct file *file, struct vm_area_struct *vma);
> +int __tegra_gem_mmap(struct drm_gem_object *gem, struct mm_area *vma);
> +int tegra_drm_mmap(struct file *file, struct mm_area *vma);
>
>  struct dma_buf *tegra_gem_prime_export(struct drm_gem_object *gem,
>  				       int flags);
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> index a194db83421d..4139e029b35f 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
> @@ -182,7 +182,7 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  				    pgprot_t prot,
>  				    pgoff_t num_prefault)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>  	struct ttm_device *bdev = bo->bdev;
>  	unsigned long page_offset;
> @@ -290,7 +290,7 @@ static void ttm_bo_release_dummy_page(struct drm_device *dev, void *res)
>
>  vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>  	struct drm_device *ddev = bo->base.dev;
>  	vm_fault_t ret = VM_FAULT_NOPAGE;
> @@ -320,7 +320,7 @@ EXPORT_SYMBOL(ttm_bo_vm_dummy_page);
>
>  vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	pgprot_t prot;
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>  	struct drm_device *ddev = bo->base.dev;
> @@ -347,7 +347,7 @@ vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
>  }
>  EXPORT_SYMBOL(ttm_bo_vm_fault);
>
> -void ttm_bo_vm_open(struct vm_area_struct *vma)
> +void ttm_bo_vm_open(struct mm_area *vma)
>  {
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>
> @@ -357,7 +357,7 @@ void ttm_bo_vm_open(struct vm_area_struct *vma)
>  }
>  EXPORT_SYMBOL(ttm_bo_vm_open);
>
> -void ttm_bo_vm_close(struct vm_area_struct *vma)
> +void ttm_bo_vm_close(struct mm_area *vma)
>  {
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
>
> @@ -453,7 +453,7 @@ int ttm_bo_access(struct ttm_buffer_object *bo, unsigned long offset,
>  }
>  EXPORT_SYMBOL(ttm_bo_access);
>
> -int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
> +int ttm_bo_vm_access(struct mm_area *vma, unsigned long addr,
>  		     void *buf, int len, int write)
>  {
>  	struct ttm_buffer_object *bo = vma->vm_private_data;
> @@ -480,7 +480,7 @@ static const struct vm_operations_struct ttm_bo_vm_ops = {
>   *
>   * Maps a buffer object.
>   */
> -int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo)
> +int ttm_bo_mmap_obj(struct mm_area *vma, struct ttm_buffer_object *bo)
>  {
>  	/* Enforce no COW since would have really strange behavior with it. */
>  	if (is_cow_mapping(vma->vm_flags))
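
One thing the TTM hunks make obvious: struct vm_operations_struct keeps
its _struct suffix while every callback it carries now takes a
struct mm_area *.  For reference, a minimal ops table after this patch
would look like the sketch below (the demo_* names are invented):

static vm_fault_t demo_fault(struct vm_fault *vmf)
{
	struct mm_area *vma = vmf->vma;	/* the vmf->vma field keeps its name */

	if (vma->vm_pgoff)
		return VM_FAULT_SIGBUS;
	return VM_FAULT_NOPAGE;
}

static void demo_open(struct mm_area *vma)
{
	/* e.g. grab a reference on whatever vm_private_data points at */
}

static const struct vm_operations_struct demo_vm_ops = {
	.fault	= demo_fault,
	.open	= demo_open,
};
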
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index fb450b6a4d44..beedeaeecab4 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -715,7 +715,7 @@ static struct dma_buf *vc4_prime_export(struct drm_gem_object *obj, int flags)
>
>  static vm_fault_t vc4_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct drm_gem_object *obj = vma->vm_private_data;
>  	struct vc4_bo *bo = to_vc4_bo(obj);
>
> @@ -729,7 +729,7 @@ static vm_fault_t vc4_fault(struct vm_fault *vmf)
>  	return VM_FAULT_SIGBUS;
>  }
>
> -static int vc4_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int vc4_gem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct vc4_bo *bo = to_vc4_bo(obj);
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_vram.c b/drivers/gpu/drm/virtio/virtgpu_vram.c
> index 5ad3b7c6f73c..02a03a237fb5 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_vram.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_vram.c
> @@ -30,7 +30,7 @@ static const struct vm_operations_struct virtio_gpu_vram_vm_ops = {
>  };
>
>  static int virtio_gpu_vram_mmap(struct drm_gem_object *obj,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	int ret;
>  	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
> index ed5015ced392..3d857670a3a1 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
> @@ -107,7 +107,7 @@ static void vmw_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
>  		drm_gem_ttm_vunmap(obj, map);
>  }
>
> -static int vmw_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static int vmw_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	int ret;
>
> diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
> index 74ff2812d66a..38567fdf7163 100644
> --- a/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
> +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
> @@ -374,7 +374,7 @@ void vmw_bo_dirty_clear_res(struct vmw_resource *res)
>
>  vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
>  	    vma->vm_private_data;
>  	vm_fault_t ret;
> @@ -415,7 +415,7 @@ vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf)
>
>  vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
>  	    vma->vm_private_data;
>  	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
> diff --git a/drivers/gpu/drm/xe/display/intel_bo.c b/drivers/gpu/drm/xe/display/intel_bo.c
> index 27437c22bd70..6e32ab48de68 100644
> --- a/drivers/gpu/drm/xe/display/intel_bo.c
> +++ b/drivers/gpu/drm/xe/display/intel_bo.c
> @@ -32,7 +32,7 @@ void intel_bo_flush_if_display(struct drm_gem_object *obj)
>  {
>  }
>
> -int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +int intel_bo_fb_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	return drm_gem_prime_mmap(obj, vma);
>  }
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 3c7c2353d3c8..20e08ee00eee 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -1579,7 +1579,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -static int xe_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
> +static int xe_bo_vm_access(struct mm_area *vma, unsigned long addr,
>  			   void *buf, int len, int write)
>  {
>  	struct ttm_buffer_object *ttm_bo = vma->vm_private_data;
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index d8e227ddf255..30a5eb67d7a1 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -237,12 +237,12 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
>  #define xe_drm_compat_ioctl NULL
>  #endif
>
> -static void barrier_open(struct vm_area_struct *vma)
> +static void barrier_open(struct mm_area *vma)
>  {
>  	drm_dev_get(vma->vm_private_data);
>  }
>
> -static void barrier_close(struct vm_area_struct *vma)
> +static void barrier_close(struct mm_area *vma)
>  {
>  	drm_dev_put(vma->vm_private_data);
>  }
> @@ -257,7 +257,7 @@ static void barrier_release_dummy_page(struct drm_device *dev, void *res)
>  static vm_fault_t barrier_fault(struct vm_fault *vmf)
>  {
>  	struct drm_device *dev = vmf->vma->vm_private_data;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	vm_fault_t ret = VM_FAULT_NOPAGE;
>  	pgprot_t prot;
>  	int idx;
> @@ -299,7 +299,7 @@ static const struct vm_operations_struct vm_ops_barrier = {
>  };
>
>  static int xe_pci_barrier_mmap(struct file *filp,
> -			       struct vm_area_struct *vma)
> +			       struct mm_area *vma)
>  {
>  	struct drm_file *priv = filp->private_data;
>  	struct drm_device *dev = priv->minor->dev;
> @@ -326,7 +326,7 @@ static int xe_pci_barrier_mmap(struct file *filp,
>  	return 0;
>  }
>
> -static int xe_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int xe_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct drm_file *priv = filp->private_data;
>  	struct drm_device *dev = priv->minor->dev;
> diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
> index 346f357b3d1f..d44ce76b3465 100644
> --- a/drivers/gpu/drm/xe/xe_oa.c
> +++ b/drivers/gpu/drm/xe/xe_oa.c
> @@ -1623,7 +1623,7 @@ static int xe_oa_release(struct inode *inode, struct file *file)
>  	return 0;
>  }
>
> -static int xe_oa_mmap(struct file *file, struct vm_area_struct *vma)
> +static int xe_oa_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct xe_oa_stream *stream = file->private_data;
>  	struct xe_bo *bo = stream->oa_buffer.bo;
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 63112ed975c4..41449a270d89 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -58,7 +58,7 @@ static void gem_free_pages_array(struct xen_gem_object *xen_obj)
>  }
>
>  static int xen_drm_front_gem_object_mmap(struct drm_gem_object *gem_obj,
> -					 struct vm_area_struct *vma)
> +					 struct mm_area *vma)
>  {
>  	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>  	int ret;
> diff --git a/drivers/hsi/clients/cmt_speech.c b/drivers/hsi/clients/cmt_speech.c
> index daa8e1bff5d9..ca6debea1173 100644
> --- a/drivers/hsi/clients/cmt_speech.c
> +++ b/drivers/hsi/clients/cmt_speech.c
> @@ -1256,7 +1256,7 @@ static long cs_char_ioctl(struct file *file, unsigned int cmd,
>  	return r;
>  }
>
> -static int cs_char_mmap(struct file *file, struct vm_area_struct *vma)
> +static int cs_char_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (vma->vm_end < vma->vm_start)
>  		return -EINVAL;
> diff --git a/drivers/hv/mshv_root_main.c b/drivers/hv/mshv_root_main.c
> index 72df774e410a..ac1e44563cbf 100644
> --- a/drivers/hv/mshv_root_main.c
> +++ b/drivers/hv/mshv_root_main.c
> @@ -75,7 +75,7 @@ static int mshv_vp_release(struct inode *inode, struct file *filp);
>  static long mshv_vp_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg);
>  static int mshv_partition_release(struct inode *inode, struct file *filp);
>  static long mshv_partition_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg);
> -static int mshv_vp_mmap(struct file *file, struct vm_area_struct *vma);
> +static int mshv_vp_mmap(struct file *file, struct mm_area *vma);
>  static vm_fault_t mshv_vp_fault(struct vm_fault *vmf);
>  static int mshv_init_async_handler(struct mshv_partition *partition);
>  static void mshv_async_hvcall_handler(void *data, u64 *status);
> @@ -831,7 +831,7 @@ static vm_fault_t mshv_vp_fault(struct vm_fault *vmf)
>  	return 0;
>  }
>
> -static int mshv_vp_mmap(struct file *file, struct vm_area_struct *vma)
> +static int mshv_vp_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct mshv_vp *vp = file->private_data;
>
> @@ -1332,7 +1332,7 @@ mshv_map_user_memory(struct mshv_partition *partition,
>  		     struct mshv_user_mem_region mem)
>  {
>  	struct mshv_mem_region *region;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	bool is_mmio;
>  	ulong mmio_pfn;
>  	long ret;
> diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
> index bf99d79a4192..f51cbe4a8c55 100644
> --- a/drivers/hwtracing/intel_th/msu.c
> +++ b/drivers/hwtracing/intel_th/msu.c
> @@ -1589,7 +1589,7 @@ static ssize_t intel_th_msc_read(struct file *file, char __user *buf,
>   * vm operations callbacks (vm_ops)
>   */
>
> -static void msc_mmap_open(struct vm_area_struct *vma)
> +static void msc_mmap_open(struct mm_area *vma)
>  {
>  	struct msc_iter *iter = vma->vm_file->private_data;
>  	struct msc *msc = iter->msc;
> @@ -1597,7 +1597,7 @@ static void msc_mmap_open(struct vm_area_struct *vma)
>  	atomic_inc(&msc->mmap_count);
>  }
>
> -static void msc_mmap_close(struct vm_area_struct *vma)
> +static void msc_mmap_close(struct mm_area *vma)
>  {
>  	struct msc_iter *iter = vma->vm_file->private_data;
>  	struct msc *msc = iter->msc;
> @@ -1644,7 +1644,7 @@ static const struct vm_operations_struct msc_mmap_ops = {
>  	.fault	= msc_mmap_fault,
>  };
>
> -static int intel_th_msc_mmap(struct file *file, struct vm_area_struct *vma)
> +static int intel_th_msc_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned long size = vma->vm_end - vma->vm_start;
>  	struct msc_iter *iter = vma->vm_file->private_data;
> diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
> index cdba4e875b28..c27322d82289 100644
> --- a/drivers/hwtracing/stm/core.c
> +++ b/drivers/hwtracing/stm/core.c
> @@ -666,7 +666,7 @@ static ssize_t stm_char_write(struct file *file, const char __user *buf,
>  	return count;
>  }
>
> -static void stm_mmap_open(struct vm_area_struct *vma)
> +static void stm_mmap_open(struct mm_area *vma)
>  {
>  	struct stm_file *stmf = vma->vm_file->private_data;
>  	struct stm_device *stm = stmf->stm;
> @@ -674,7 +674,7 @@ static void stm_mmap_open(struct vm_area_struct *vma)
>  	pm_runtime_get(&stm->dev);
>  }
>
> -static void stm_mmap_close(struct vm_area_struct *vma)
> +static void stm_mmap_close(struct mm_area *vma)
>  {
>  	struct stm_file *stmf = vma->vm_file->private_data;
>  	struct stm_device *stm = stmf->stm;
> @@ -688,7 +688,7 @@ static const struct vm_operations_struct stm_mmap_vmops = {
>  	.close	= stm_mmap_close,
>  };
>
> -static int stm_char_mmap(struct file *file, struct vm_area_struct *vma)
> +static int stm_char_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct stm_file *stmf = file->private_data;
>  	struct stm_device *stm = stmf->stm;
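
The character-device conversions above (cmt_speech, msu, stm) all have
the same shape, since file_operations::mmap is the only entry point
touched.  A toy ->mmap under the new naming, for comparison (sketch
only; demo_pfn is an assumed PFN for the device's single backing page):

static int demo_mmap(struct file *file, struct mm_area *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > PAGE_SIZE)	/* toy device exposes one page */
		return -EINVAL;

	return remap_pfn_range(vma, vma->vm_start, demo_pfn, size,
			       vma->vm_page_prot);
}
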
> diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
> index 05102769a918..6662f745c123 100644
> --- a/drivers/infiniband/core/core_priv.h
> +++ b/drivers/infiniband/core/core_priv.h
> @@ -359,13 +359,13 @@ int rdma_nl_net_init(struct rdma_dev_net *rnet);
>  void rdma_nl_net_exit(struct rdma_dev_net *rnet);
>
>  struct rdma_umap_priv {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct list_head list;
>  	struct rdma_user_mmap_entry *entry;
>  };
>
>  void rdma_umap_priv_init(struct rdma_umap_priv *priv,
> -			 struct vm_area_struct *vma,
> +			 struct mm_area *vma,
>  			 struct rdma_user_mmap_entry *entry);
>
>  void ib_cq_pool_cleanup(struct ib_device *dev);
> diff --git a/drivers/infiniband/core/ib_core_uverbs.c b/drivers/infiniband/core/ib_core_uverbs.c
> index b51bd7087a88..949863e7c66f 100644
> --- a/drivers/infiniband/core/ib_core_uverbs.c
> +++ b/drivers/infiniband/core/ib_core_uverbs.c
> @@ -28,7 +28,7 @@
>   *
>   */
>  void rdma_umap_priv_init(struct rdma_umap_priv *priv,
> -			 struct vm_area_struct *vma,
> +			 struct mm_area *vma,
>  			 struct rdma_user_mmap_entry *entry)
>  {
>  	struct ib_uverbs_file *ufile = vma->vm_file->private_data;
> @@ -64,7 +64,7 @@ EXPORT_SYMBOL(rdma_umap_priv_init);
>   * Return -EINVAL on wrong flags or size, -EAGAIN on failure to map. 0 on
>   * success.
>   */
> -int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
> +int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct mm_area *vma,
>  		      unsigned long pfn, unsigned long size, pgprot_t prot,
>  		      struct rdma_user_mmap_entry *entry)
>  {
> @@ -159,7 +159,7 @@ EXPORT_SYMBOL(rdma_user_mmap_entry_get_pgoff);
>   */
>  struct rdma_user_mmap_entry *
>  rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
> -			 struct vm_area_struct *vma)
> +			 struct mm_area *vma)
>  {
>  	struct rdma_user_mmap_entry *entry;
>
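
And since rdma_user_mmap_io() itself is converted just above, here is
how a provider's ->mmap would call it afterwards -- hypothetical
provider, demo_pfn assumed, and (if I read the helper right) a NULL
entry is acceptable when no refcounted mmap entry is being tracked:

static int demo_rdma_mmap(struct ib_ucontext *uctx, struct mm_area *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	return rdma_user_mmap_io(uctx, vma, demo_pfn, size,
				 pgprot_noncached(vma->vm_page_prot),
				 NULL);
}
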
> diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
> index 973fe2c7ef53..565b497a4523 100644
> --- a/drivers/infiniband/core/uverbs_main.c
> +++ b/drivers/infiniband/core/uverbs_main.c
> @@ -688,7 +688,7 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
>
>  static const struct vm_operations_struct rdma_umap_ops;
>
> -static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int ib_uverbs_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct ib_uverbs_file *file = filp->private_data;
>  	struct ib_ucontext *ucontext;
> @@ -717,7 +717,7 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
>   * The VMA has been dup'd, initialize the vm_private_data with a new tracking
>   * struct
>   */
> -static void rdma_umap_open(struct vm_area_struct *vma)
> +static void rdma_umap_open(struct mm_area *vma)
>  {
>  	struct ib_uverbs_file *ufile = vma->vm_file->private_data;
>  	struct rdma_umap_priv *opriv = vma->vm_private_data;
> @@ -759,7 +759,7 @@ static void rdma_umap_open(struct vm_area_struct *vma)
>  	zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);
>  }
>
> -static void rdma_umap_close(struct vm_area_struct *vma)
> +static void rdma_umap_close(struct mm_area *vma)
>  {
>  	struct ib_uverbs_file *ufile = vma->vm_file->private_data;
>  	struct rdma_umap_priv *priv = vma->vm_private_data;
> @@ -872,7 +872,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
>  		mutex_lock(&ufile->umap_lock);
>  		list_for_each_entry_safe (priv, next_priv, &ufile->umaps,
>  					  list) {
> -			struct vm_area_struct *vma = priv->vma;
> +			struct mm_area *vma = priv->vma;
>
>  			if (vma->vm_mm != mm)
>  				continue;
> diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> index 9082b3fd2b47..fd7b8fdc9bfb 100644
> --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
> @@ -4425,7 +4425,7 @@ static struct bnxt_re_srq *bnxt_re_search_for_srq(struct bnxt_re_dev *rdev, u32
>  }
>
>  /* Helper function to mmap the virtual memory from user app */
> -int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct vm_area_struct *vma)
> +int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct mm_area *vma)
>  {
>  	struct bnxt_re_ucontext *uctx = container_of(ib_uctx,
>  						   struct bnxt_re_ucontext,
> diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
> index 22c9eb8e9cfc..6f709d4bfc12 100644
> --- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
> +++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
> @@ -265,7 +265,7 @@ struct ib_mr *bnxt_re_reg_user_mr_dmabuf(struct ib_pd *ib_pd, u64 start,
>  					 struct uverbs_attr_bundle *attrs);
>  int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata);
>  void bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
> -int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
> +int bnxt_re_mmap(struct ib_ucontext *context, struct mm_area *vma);
>  void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
>
>  int bnxt_re_process_mad(struct ib_device *device, int process_mad_flags,
> diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
> index e059f92d90fd..c3b14c76e9fd 100644
> --- a/drivers/infiniband/hw/cxgb4/provider.c
> +++ b/drivers/infiniband/hw/cxgb4/provider.c
> @@ -125,7 +125,7 @@ static int c4iw_alloc_ucontext(struct ib_ucontext *ucontext,
>  	return ret;
>  }
>
> -static int c4iw_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
> +static int c4iw_mmap(struct ib_ucontext *context, struct mm_area *vma)
>  {
>  	int len = vma->vm_end - vma->vm_start;
>  	u32 key = vma->vm_pgoff << PAGE_SHIFT;
> diff --git a/drivers/infiniband/hw/efa/efa.h b/drivers/infiniband/hw/efa/efa.h
> index 838182d0409c..12502e6326bc 100644
> --- a/drivers/infiniband/hw/efa/efa.h
> +++ b/drivers/infiniband/hw/efa/efa.h
> @@ -175,7 +175,7 @@ int efa_get_port_immutable(struct ib_device *ibdev, u32 port_num,
>  int efa_alloc_ucontext(struct ib_ucontext *ibucontext, struct ib_udata *udata);
>  void efa_dealloc_ucontext(struct ib_ucontext *ibucontext);
>  int efa_mmap(struct ib_ucontext *ibucontext,
> -	     struct vm_area_struct *vma);
> +	     struct mm_area *vma);
>  void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
>  int efa_create_ah(struct ib_ah *ibah,
>  		  struct rdma_ah_init_attr *init_attr,
> diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
> index a8645a40730f..3b9b6308bada 100644
> --- a/drivers/infiniband/hw/efa/efa_verbs.c
> +++ b/drivers/infiniband/hw/efa/efa_verbs.c
> @@ -1978,7 +1978,7 @@ void efa_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
>  }
>
>  static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
> -		      struct vm_area_struct *vma)
> +		      struct mm_area *vma)
>  {
>  	struct rdma_user_mmap_entry *rdma_entry;
>  	struct efa_user_mmap_entry *entry;
> @@ -2041,7 +2041,7 @@ static int __efa_mmap(struct efa_dev *dev, struct efa_ucontext *ucontext,
>  }
>
>  int efa_mmap(struct ib_ucontext *ibucontext,
> -	     struct vm_area_struct *vma)
> +	     struct mm_area *vma)
>  {
>  	struct efa_ucontext *ucontext = to_eucontext(ibucontext);
>  	struct efa_dev *dev = to_edev(ibucontext->device);
> diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.c b/drivers/infiniband/hw/erdma/erdma_verbs.c
> index af36a8d2df22..159f245e2e6b 100644
> --- a/drivers/infiniband/hw/erdma/erdma_verbs.c
> +++ b/drivers/infiniband/hw/erdma/erdma_verbs.c
> @@ -1371,7 +1371,7 @@ void erdma_qp_put_ref(struct ib_qp *ibqp)
>  	erdma_qp_put(to_eqp(ibqp));
>  }
>
> -int erdma_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma)
> +int erdma_mmap(struct ib_ucontext *ctx, struct mm_area *vma)
>  {
>  	struct rdma_user_mmap_entry *rdma_entry;
>  	struct erdma_user_mmap_entry *entry;
> diff --git a/drivers/infiniband/hw/erdma/erdma_verbs.h b/drivers/infiniband/hw/erdma/erdma_verbs.h
> index f9408ccc8bad..a4fd2061301c 100644
> --- a/drivers/infiniband/hw/erdma/erdma_verbs.h
> +++ b/drivers/infiniband/hw/erdma/erdma_verbs.h
> @@ -455,7 +455,7 @@ struct ib_mr *erdma_reg_user_mr(struct ib_pd *ibpd, u64 start, u64 len,
>  				u64 virt, int access, struct ib_udata *udata);
>  struct ib_mr *erdma_get_dma_mr(struct ib_pd *ibpd, int rights);
>  int erdma_dereg_mr(struct ib_mr *ibmr, struct ib_udata *data);
> -int erdma_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma);
> +int erdma_mmap(struct ib_ucontext *ctx, struct mm_area *vma);
>  void erdma_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
>  void erdma_qp_get_ref(struct ib_qp *ibqp);
>  void erdma_qp_put_ref(struct ib_qp *ibqp);
> diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
> index 503abec709c9..239416504cd9 100644
> --- a/drivers/infiniband/hw/hfi1/file_ops.c
> +++ b/drivers/infiniband/hw/hfi1/file_ops.c
> @@ -35,7 +35,7 @@ static int hfi1_file_open(struct inode *inode, struct file *fp);
>  static int hfi1_file_close(struct inode *inode, struct file *fp);
>  static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from);
>  static __poll_t hfi1_poll(struct file *fp, struct poll_table_struct *pt);
> -static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma);
> +static int hfi1_file_mmap(struct file *fp, struct mm_area *vma);
>
>  static u64 kvirt_to_phys(void *addr);
>  static int assign_ctxt(struct hfi1_filedata *fd, unsigned long arg, u32 len);
> @@ -306,7 +306,7 @@ static ssize_t hfi1_write_iter(struct kiocb *kiocb, struct iov_iter *from)
>
>  static inline void mmap_cdbg(u16 ctxt, u8 subctxt, u8 type, u8 mapio, u8 vmf,
>  			     u64 memaddr, void *memvirt, dma_addr_t memdma,
> -			     ssize_t memlen, struct vm_area_struct *vma)
> +			     ssize_t memlen, struct mm_area *vma)
>  {
>  	hfi1_cdbg(PROC,
>  		  "%u:%u type:%u io/vf/dma:%d/%d/%d, addr:0x%llx, len:%lu(%lu), flags:0x%lx",
> @@ -315,7 +315,7 @@ static inline void mmap_cdbg(u16 ctxt, u8 subctxt, u8 type, u8 mapio, u8 vmf,
>  		  vma->vm_end - vma->vm_start, vma->vm_flags);
>  }
>
> -static int hfi1_file_mmap(struct file *fp, struct vm_area_struct *vma)
> +static int hfi1_file_mmap(struct file *fp, struct mm_area *vma)
>  {
>  	struct hfi1_filedata *fd = fp->private_data;
>  	struct hfi1_ctxtdata *uctxt = fd->uctxt;
> diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
> index cf89a8db4f64..098c1ec4de0a 100644
> --- a/drivers/infiniband/hw/hns/hns_roce_main.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_main.c
> @@ -457,7 +457,7 @@ static void hns_roce_dealloc_ucontext(struct ib_ucontext *ibcontext)
>  	ida_free(&hr_dev->uar_ida.ida, (int)context->uar.logic_idx);
>  }
>
> -static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
> +static int hns_roce_mmap(struct ib_ucontext *uctx, struct mm_area *vma)
>  {
>  	struct hns_roce_dev *hr_dev = to_hr_dev(uctx->device);
>  	struct rdma_user_mmap_entry *rdma_entry;
> diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
> index eeb932e58730..a361f423e140 100644
> --- a/drivers/infiniband/hw/irdma/verbs.c
> +++ b/drivers/infiniband/hw/irdma/verbs.c
> @@ -117,7 +117,7 @@ static void irdma_disassociate_ucontext(struct ib_ucontext *context)
>  }
>
>  static int irdma_mmap_legacy(struct irdma_ucontext *ucontext,
> -			     struct vm_area_struct *vma)
> +			     struct mm_area *vma)
>  {
>  	u64 pfn;
>
> @@ -168,7 +168,7 @@ irdma_user_mmap_entry_insert(struct irdma_ucontext *ucontext, u64 bar_offset,
>   * @context: context created during alloc
>   * @vma: kernel info for user memory map
>   */
> -static int irdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
> +static int irdma_mmap(struct ib_ucontext *context, struct mm_area *vma)
>  {
>  	struct rdma_user_mmap_entry *rdma_entry;
>  	struct irdma_user_mmap_entry *entry;
> diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
> index eda9c5b971de..a11368d8c979 100644
> --- a/drivers/infiniband/hw/mana/main.c
> +++ b/drivers/infiniband/hw/mana/main.c
> @@ -512,7 +512,7 @@ int mana_ib_gd_destroy_dma_region(struct mana_ib_dev *dev, u64 gdma_region)
>  	return mana_gd_destroy_dma_region(gc, gdma_region);
>  }
>
> -int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
> +int mana_ib_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma)
>  {
>  	struct mana_ib_ucontext *mana_ucontext =
>  		container_of(ibcontext, struct mana_ib_ucontext, ibucontext);
> diff --git a/drivers/infiniband/hw/mana/mana_ib.h b/drivers/infiniband/hw/mana/mana_ib.h
> index 6903946677e5..f02d93ed4fec 100644
> --- a/drivers/infiniband/hw/mana/mana_ib.h
> +++ b/drivers/infiniband/hw/mana/mana_ib.h
> @@ -628,7 +628,7 @@ int mana_ib_alloc_ucontext(struct ib_ucontext *ibcontext,
>  			   struct ib_udata *udata);
>  void mana_ib_dealloc_ucontext(struct ib_ucontext *ibcontext);
>
> -int mana_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma);
> +int mana_ib_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma);
>
>  int mana_ib_get_port_immutable(struct ib_device *ibdev, u32 port_num,
>  			       struct ib_port_immutable *immutable);
> diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
> index dd35e03402ab..26abc9faca3a 100644
> --- a/drivers/infiniband/hw/mlx4/main.c
> +++ b/drivers/infiniband/hw/mlx4/main.c
> @@ -1150,7 +1150,7 @@ static void mlx4_ib_disassociate_ucontext(struct ib_ucontext *ibcontext)
>  {
>  }
>
> -static int mlx4_ib_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
> +static int mlx4_ib_mmap(struct ib_ucontext *context, struct mm_area *vma)
>  {
>  	struct mlx4_ib_dev *dev = to_mdev(context->device);
>
> diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
> index e77645a673fb..92821271c4a2 100644
> --- a/drivers/infiniband/hw/mlx4/mr.c
> +++ b/drivers/infiniband/hw/mlx4/mr.c
> @@ -114,7 +114,7 @@ static struct ib_umem *mlx4_get_umem_mr(struct ib_device *device, u64 start,
>  	 */
>  	if (!ib_access_writable(access_flags)) {
>  		unsigned long untagged_start = untagged_addr(start);
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		mmap_read_lock(current->mm);
>  		/*
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index d07cacaa0abd..9434b1c99b60 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -2201,7 +2201,7 @@ static inline char *mmap_cmd2str(enum mlx5_ib_mmap_cmd cmd)
>  }
>
>  static int mlx5_ib_mmap_clock_info_page(struct mlx5_ib_dev *dev,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					struct mlx5_ib_ucontext *context)
>  {
>  	if ((vma->vm_end - vma->vm_start != PAGE_SIZE) ||
> @@ -2252,7 +2252,7 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
>  }
>
>  static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
> -		    struct vm_area_struct *vma,
> +		    struct mm_area *vma,
>  		    struct mlx5_ib_ucontext *context)
>  {
>  	struct mlx5_bfreg_info *bfregi = &context->bfregi;
> @@ -2359,7 +2359,7 @@ static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
>  	return err;
>  }
>
> -static unsigned long mlx5_vma_to_pgoff(struct vm_area_struct *vma)
> +static unsigned long mlx5_vma_to_pgoff(struct mm_area *vma)
>  {
>  	unsigned long idx;
>  	u8 command;
> @@ -2371,7 +2371,7 @@ static unsigned long mlx5_vma_to_pgoff(struct vm_area_struct *vma)
>  }
>
>  static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
> -			       struct vm_area_struct *vma,
> +			       struct mm_area *vma,
>  			       struct ib_ucontext *ucontext)
>  {
>  	struct mlx5_user_mmap_entry *mentry;
> @@ -2410,7 +2410,7 @@ static u64 mlx5_entry_to_mmap_offset(struct mlx5_user_mmap_entry *entry)
>  		(index & 0xFF)) << PAGE_SHIFT;
>  }
>
> -static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
> +static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma)
>  {
>  	struct mlx5_ib_ucontext *context = to_mucontext(ibcontext);
>  	struct mlx5_ib_dev *dev = to_mdev(ibcontext->device);
> diff --git a/drivers/infiniband/hw/mthca/mthca_provider.c b/drivers/infiniband/hw/mthca/mthca_provider.c
> index 6a1e2e79ddc3..5934a0cc68a0 100644
> --- a/drivers/infiniband/hw/mthca/mthca_provider.c
> +++ b/drivers/infiniband/hw/mthca/mthca_provider.c
> @@ -330,7 +330,7 @@ static void mthca_dealloc_ucontext(struct ib_ucontext *context)
>  }
>
>  static int mthca_mmap_uar(struct ib_ucontext *context,
> -			  struct vm_area_struct *vma)
> +			  struct mm_area *vma)
>  {
>  	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
>  		return -EINVAL;
> diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
> index 979de8f8df14..a4940538d888 100644
> --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
> +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
> @@ -536,7 +536,7 @@ void ocrdma_dealloc_ucontext(struct ib_ucontext *ibctx)
>  	}
>  }
>
> -int ocrdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
> +int ocrdma_mmap(struct ib_ucontext *context, struct mm_area *vma)
>  {
>  	struct ocrdma_ucontext *ucontext = get_ocrdma_ucontext(context);
>  	struct ocrdma_dev *dev = get_ocrdma_dev(context->device);
> diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
> index 0644346d8d98..7e9ff740faad 100644
> --- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
> +++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
> @@ -64,7 +64,7 @@ int ocrdma_query_pkey(struct ib_device *ibdev, u32 port, u16 index, u16 *pkey);
>  int ocrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
>  void ocrdma_dealloc_ucontext(struct ib_ucontext *uctx);
>
> -int ocrdma_mmap(struct ib_ucontext *, struct vm_area_struct *vma);
> +int ocrdma_mmap(struct ib_ucontext *, struct mm_area *vma);
>
>  int ocrdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata);
>  int ocrdma_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata);
> diff --git a/drivers/infiniband/hw/qedr/verbs.c b/drivers/infiniband/hw/qedr/verbs.c
> index 568a5b18803f..779bcac34ca1 100644
> --- a/drivers/infiniband/hw/qedr/verbs.c
> +++ b/drivers/infiniband/hw/qedr/verbs.c
> @@ -385,7 +385,7 @@ void qedr_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
>  	kfree(entry);
>  }
>
> -int qedr_mmap(struct ib_ucontext *ucontext, struct vm_area_struct *vma)
> +int qedr_mmap(struct ib_ucontext *ucontext, struct mm_area *vma)
>  {
>  	struct ib_device *dev = ucontext->device;
>  	size_t length = vma->vm_end - vma->vm_start;
> diff --git a/drivers/infiniband/hw/qedr/verbs.h b/drivers/infiniband/hw/qedr/verbs.h
> index 5731458abb06..50654f10a4ea 100644
> --- a/drivers/infiniband/hw/qedr/verbs.h
> +++ b/drivers/infiniband/hw/qedr/verbs.h
> @@ -45,7 +45,7 @@ int qedr_query_pkey(struct ib_device *ibdev, u32 port, u16 index, u16 *pkey);
>  int qedr_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
>  void qedr_dealloc_ucontext(struct ib_ucontext *uctx);
>
> -int qedr_mmap(struct ib_ucontext *ucontext, struct vm_area_struct *vma);
> +int qedr_mmap(struct ib_ucontext *ucontext, struct mm_area *vma);
>  void qedr_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
>  int qedr_alloc_pd(struct ib_pd *pd, struct ib_udata *udata);
>  int qedr_dealloc_pd(struct ib_pd *pd, struct ib_udata *udata);
> diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
> index 29e4c59aa23b..b7ff897e3729 100644
> --- a/drivers/infiniband/hw/qib/qib_file_ops.c
> +++ b/drivers/infiniband/hw/qib/qib_file_ops.c
> @@ -59,7 +59,7 @@ static int qib_close(struct inode *, struct file *);
>  static ssize_t qib_write(struct file *, const char __user *, size_t, loff_t *);
>  static ssize_t qib_write_iter(struct kiocb *, struct iov_iter *);
>  static __poll_t qib_poll(struct file *, struct poll_table_struct *);
> -static int qib_mmapf(struct file *, struct vm_area_struct *);
> +static int qib_mmapf(struct file *, struct mm_area *);
>
>  /*
>   * This is really, really weird shit - write() and writev() here
> @@ -705,7 +705,7 @@ static void qib_clean_part_key(struct qib_ctxtdata *rcd,
>  }
>
>  /* common code for the mappings on dma_alloc_coherent mem */
> -static int qib_mmap_mem(struct vm_area_struct *vma, struct qib_ctxtdata *rcd,
> +static int qib_mmap_mem(struct mm_area *vma, struct qib_ctxtdata *rcd,
>  			unsigned len, void *kvaddr, u32 write_ok, char *what)
>  {
>  	struct qib_devdata *dd = rcd->dd;
> @@ -747,7 +747,7 @@ static int qib_mmap_mem(struct vm_area_struct *vma, struct qib_ctxtdata *rcd,
>  	return ret;
>  }
>
> -static int mmap_ureg(struct vm_area_struct *vma, struct qib_devdata *dd,
> +static int mmap_ureg(struct mm_area *vma, struct qib_devdata *dd,
>  		     u64 ureg)
>  {
>  	unsigned long phys;
> @@ -778,7 +778,7 @@ static int mmap_ureg(struct vm_area_struct *vma, struct qib_devdata *dd,
>  	return ret;
>  }
>
> -static int mmap_piobufs(struct vm_area_struct *vma,
> +static int mmap_piobufs(struct mm_area *vma,
>  			struct qib_devdata *dd,
>  			struct qib_ctxtdata *rcd,
>  			unsigned piobufs, unsigned piocnt)
> @@ -823,7 +823,7 @@ static int mmap_piobufs(struct vm_area_struct *vma,
>  	return ret;
>  }
>
> -static int mmap_rcvegrbufs(struct vm_area_struct *vma,
> +static int mmap_rcvegrbufs(struct mm_area *vma,
>  			   struct qib_ctxtdata *rcd)
>  {
>  	struct qib_devdata *dd = rcd->dd;
> @@ -889,7 +889,7 @@ static const struct vm_operations_struct qib_file_vm_ops = {
>  	.fault = qib_file_vma_fault,
>  };
>
> -static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
> +static int mmap_kvaddr(struct mm_area *vma, u64 pgaddr,
>  		       struct qib_ctxtdata *rcd, unsigned subctxt)
>  {
>  	struct qib_devdata *dd = rcd->dd;
> @@ -971,7 +971,7 @@ static int mmap_kvaddr(struct vm_area_struct *vma, u64 pgaddr,
>   * buffers in the chip.  We have the open and close entries so we can bump
>   * the ref count and keep the driver from being unloaded while still mapped.
>   */
> -static int qib_mmapf(struct file *fp, struct vm_area_struct *vma)
> +static int qib_mmapf(struct file *fp, struct mm_area *vma)
>  {
>  	struct qib_ctxtdata *rcd;
>  	struct qib_devdata *dd;
> diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> index 217af34e82b3..9ed349e5fcc3 100644
> --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.c
> @@ -658,7 +658,7 @@ void usnic_ib_dealloc_ucontext(struct ib_ucontext *ibcontext)
>  }
>
>  int usnic_ib_mmap(struct ib_ucontext *context,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	struct usnic_ib_ucontext *uctx = to_ucontext(context);
>  	struct usnic_ib_dev *us_ibdev;
> diff --git a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
> index 53f53f2d53be..e445f74b027f 100644
> --- a/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
> +++ b/drivers/infiniband/hw/usnic/usnic_ib_verbs.h
> @@ -65,5 +65,5 @@ int usnic_ib_dereg_mr(struct ib_mr *ibmr, struct ib_udata *udata);
>  int usnic_ib_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
>  void usnic_ib_dealloc_ucontext(struct ib_ucontext *ibcontext);
>  int usnic_ib_mmap(struct ib_ucontext *context,
> -			struct vm_area_struct *vma);
> +			struct mm_area *vma);
>  #endif /* !USNIC_IB_VERBS_H */
> diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> index bcd43dc30e21..e536181063cf 100644
> --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
> @@ -364,7 +364,7 @@ void pvrdma_dealloc_ucontext(struct ib_ucontext *ibcontext)
>   *
>   * @return: 0 on success, otherwise errno.
>   */
> -int pvrdma_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
> +int pvrdma_mmap(struct ib_ucontext *ibcontext, struct mm_area *vma)
>  {
>  	struct pvrdma_ucontext *context = to_vucontext(ibcontext);
>  	unsigned long start = vma->vm_start;
> diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
> index fd47b0b1df5c..a3720f30cb8d 100644
> --- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
> +++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.h
> @@ -358,7 +358,7 @@ enum rdma_link_layer pvrdma_port_link_layer(struct ib_device *ibdev,
>  					    u32 port);
>  int pvrdma_modify_port(struct ib_device *ibdev, u32 port,
>  		       int mask, struct ib_port_modify *props);
> -int pvrdma_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
> +int pvrdma_mmap(struct ib_ucontext *context, struct mm_area *vma);
>  int pvrdma_alloc_ucontext(struct ib_ucontext *uctx, struct ib_udata *udata);
>  void pvrdma_dealloc_ucontext(struct ib_ucontext *context);
>  int pvrdma_alloc_pd(struct ib_pd *pd, struct ib_udata *udata);
> diff --git a/drivers/infiniband/sw/rdmavt/mmap.c b/drivers/infiniband/sw/rdmavt/mmap.c
> index 46e3b3e0643a..45d7caafa4d0 100644
> --- a/drivers/infiniband/sw/rdmavt/mmap.c
> +++ b/drivers/infiniband/sw/rdmavt/mmap.c
> @@ -39,14 +39,14 @@ void rvt_release_mmap_info(struct kref *ref)
>  	kfree(ip);
>  }
>
> -static void rvt_vma_open(struct vm_area_struct *vma)
> +static void rvt_vma_open(struct mm_area *vma)
>  {
>  	struct rvt_mmap_info *ip = vma->vm_private_data;
>
>  	kref_get(&ip->ref);
>  }
>
> -static void rvt_vma_close(struct vm_area_struct *vma)
> +static void rvt_vma_close(struct mm_area *vma)
>  {
>  	struct rvt_mmap_info *ip = vma->vm_private_data;
>
> @@ -65,7 +65,7 @@ static const struct vm_operations_struct rvt_vm_ops = {
>   *
>   * Return: zero if the mmap is OK. Otherwise, return an errno.
>   */
> -int rvt_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
> +int rvt_mmap(struct ib_ucontext *context, struct mm_area *vma)
>  {
>  	struct rvt_dev_info *rdi = ib_to_rvt(context->device);
>  	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
> diff --git a/drivers/infiniband/sw/rdmavt/mmap.h b/drivers/infiniband/sw/rdmavt/mmap.h
> index 29aaca3e8b83..7075597849cd 100644
> --- a/drivers/infiniband/sw/rdmavt/mmap.h
> +++ b/drivers/infiniband/sw/rdmavt/mmap.h
> @@ -10,7 +10,7 @@
>
>  void rvt_mmap_init(struct rvt_dev_info *rdi);
>  void rvt_release_mmap_info(struct kref *ref);
> -int rvt_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
> +int rvt_mmap(struct ib_ucontext *context, struct mm_area *vma);
>  struct rvt_mmap_info *rvt_create_mmap_info(struct rvt_dev_info *rdi, u32 size,
>  					   struct ib_udata *udata, void *obj);
>  void rvt_update_mmap_info(struct rvt_dev_info *rdi, struct rvt_mmap_info *ip,
> diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
> index feb386d98d1d..3f40a7a141af 100644
> --- a/drivers/infiniband/sw/rxe/rxe_loc.h
> +++ b/drivers/infiniband/sw/rxe/rxe_loc.h
> @@ -54,7 +54,7 @@ void rxe_mmap_release(struct kref *ref);
>  struct rxe_mmap_info *rxe_create_mmap_info(struct rxe_dev *dev, u32 size,
>  					   struct ib_udata *udata, void *obj);
>
> -int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
> +int rxe_mmap(struct ib_ucontext *context, struct mm_area *vma);
>
>  /* rxe_mr.c */
>  u8 rxe_get_next_key(u32 last_key);
> diff --git a/drivers/infiniband/sw/rxe/rxe_mmap.c b/drivers/infiniband/sw/rxe/rxe_mmap.c
> index 6b7f2bd69879..2b478c3138b9 100644
> --- a/drivers/infiniband/sw/rxe/rxe_mmap.c
> +++ b/drivers/infiniband/sw/rxe/rxe_mmap.c
> @@ -34,14 +34,14 @@ void rxe_mmap_release(struct kref *ref)
>   * open and close keep track of how many times the memory region is mapped,
>   * to avoid releasing it.
>   */
> -static void rxe_vma_open(struct vm_area_struct *vma)
> +static void rxe_vma_open(struct mm_area *vma)
>  {
>  	struct rxe_mmap_info *ip = vma->vm_private_data;
>
>  	kref_get(&ip->ref);
>  }
>
> -static void rxe_vma_close(struct vm_area_struct *vma)
> +static void rxe_vma_close(struct mm_area *vma)
>  {
>  	struct rxe_mmap_info *ip = vma->vm_private_data;
>
> @@ -59,7 +59,7 @@ static const struct vm_operations_struct rxe_vm_ops = {
>   * @vma: the VMA to be initialized
>   * Return zero if the mmap is OK. Otherwise, return an errno.
>   */
> -int rxe_mmap(struct ib_ucontext *context, struct vm_area_struct *vma)
> +int rxe_mmap(struct ib_ucontext *context, struct mm_area *vma)
>  {
>  	struct rxe_dev *rxe = to_rdev(context->device);
>  	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
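
Worth pausing here: rvt above, rxe just below, and several other drivers
in this series (videobuf2-memops, tcmu, uacce, rio_mport) all carry the
identical vm_operations open/close refcounting idiom, which is why these
hunks are purely mechanical. A minimal sketch of the idiom as it would
look once the rename lands; the foo_* identifiers are hypothetical, only
struct mm_area, the kref API and vm_operations_struct are real:

#include <linux/kref.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical per-mapping state shared by all duplicates of one VMA. */
struct foo_mmap_info {
	struct kref ref;
};

static void foo_release_mmap_info(struct kref *ref)
{
	kfree(container_of(ref, struct foo_mmap_info, ref));
}

/* ->open runs when the mm_area is duplicated (fork, split): take a ref. */
static void foo_vma_open(struct mm_area *vma)
{
	struct foo_mmap_info *ip = vma->vm_private_data;

	kref_get(&ip->ref);
}

/* ->close runs when one copy of the mm_area is torn down: drop the ref. */
static void foo_vma_close(struct mm_area *vma)
{
	struct foo_mmap_info *ip = vma->vm_private_data;

	kref_put(&ip->ref, foo_release_mmap_info);
}

static const struct vm_operations_struct foo_vm_ops = {
	.open  = foo_vma_open,
	.close = foo_vma_close,
};

The parameter type is the only token the rename touches; the refcount
logic is identical in every driver above.
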
> diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
> index fd7b266a221b..e04bb047470d 100644
> --- a/drivers/infiniband/sw/siw/siw_verbs.c
> +++ b/drivers/infiniband/sw/siw/siw_verbs.c
> @@ -51,7 +51,7 @@ void siw_mmap_free(struct rdma_user_mmap_entry *rdma_entry)
>  	kfree(entry);
>  }
>
> -int siw_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma)
> +int siw_mmap(struct ib_ucontext *ctx, struct mm_area *vma)
>  {
>  	struct siw_ucontext *uctx = to_siw_ctx(ctx);
>  	size_t size = vma->vm_end - vma->vm_start;
> diff --git a/drivers/infiniband/sw/siw/siw_verbs.h b/drivers/infiniband/sw/siw/siw_verbs.h
> index 1f1a305540af..0df2ef43317c 100644
> --- a/drivers/infiniband/sw/siw/siw_verbs.h
> +++ b/drivers/infiniband/sw/siw/siw_verbs.h
> @@ -80,7 +80,7 @@ int siw_query_srq(struct ib_srq *base_srq, struct ib_srq_attr *attr);
>  int siw_destroy_srq(struct ib_srq *base_srq, struct ib_udata *udata);
>  int siw_post_srq_recv(struct ib_srq *base_srq, const struct ib_recv_wr *wr,
>  		      const struct ib_recv_wr **bad_wr);
> -int siw_mmap(struct ib_ucontext *ctx, struct vm_area_struct *vma);
> +int siw_mmap(struct ib_ucontext *ctx, struct mm_area *vma);
>  void siw_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
>  void siw_qp_event(struct siw_qp *qp, enum ib_event_type type);
>  void siw_cq_event(struct siw_cq *cq, enum ib_event_type type);
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 6054d0ab8023..44e86a5bf175 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1066,7 +1066,7 @@ void *iommu_dma_vmap_noncontiguous(struct device *dev, size_t size,
>  	return vmap(sgt_handle(sgt)->pages, count, VM_MAP, PAGE_KERNEL);
>  }
>
> -int iommu_dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
> +int iommu_dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
>  		size_t size, struct sg_table *sgt)
>  {
>  	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> @@ -1643,7 +1643,7 @@ void *iommu_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
>  	return cpu_addr;
>  }
>
> -int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
> +int iommu_dma_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs)
>  {
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> index ab18bc494eef..9d70a137db53 100644
> --- a/drivers/iommu/iommu-sva.c
> +++ b/drivers/iommu/iommu-sva.c
> @@ -209,7 +209,7 @@ static enum iommu_page_response_code
>  iommu_sva_handle_mm(struct iommu_fault *fault, struct mm_struct *mm)
>  {
>  	vm_fault_t ret;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned int access_flags = 0;
>  	unsigned int fault_flags = FAULT_FLAG_REMOTE;
>  	struct iommu_fault_page_request *prm = &fault->prm;
> diff --git a/drivers/media/common/videobuf2/videobuf2-core.c b/drivers/media/common/videobuf2/videobuf2-core.c
> index 2df566f409b6..77bafec1433d 100644
> --- a/drivers/media/common/videobuf2/videobuf2-core.c
> +++ b/drivers/media/common/videobuf2/videobuf2-core.c
> @@ -2496,7 +2496,7 @@ int vb2_core_expbuf(struct vb2_queue *q, int *fd, unsigned int type,
>  }
>  EXPORT_SYMBOL_GPL(vb2_core_expbuf);
>
> -int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma)
> +int vb2_mmap(struct vb2_queue *q, struct mm_area *vma)
>  {
>  	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
>  	struct vb2_buffer *vb;
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index a13ec569c82f..e038533f7541 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -271,7 +271,7 @@ static void *vb2_dc_alloc(struct vb2_buffer *vb,
>  	return buf;
>  }
>
> -static int vb2_dc_mmap(void *buf_priv, struct vm_area_struct *vma)
> +static int vb2_dc_mmap(void *buf_priv, struct mm_area *vma)
>  {
>  	struct vb2_dc_buf *buf = buf_priv;
>  	int ret;
> @@ -453,7 +453,7 @@ static int vb2_dc_dmabuf_ops_vmap(struct dma_buf *dbuf, struct iosys_map *map)
>  }
>
>  static int vb2_dc_dmabuf_ops_mmap(struct dma_buf *dbuf,
> -	struct vm_area_struct *vma)
> +	struct mm_area *vma)
>  {
>  	return vb2_dc_mmap(dbuf->priv, vma);
>  }
> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> index c6ddf2357c58..78bc6dd98236 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> @@ -329,7 +329,7 @@ static unsigned int vb2_dma_sg_num_users(void *buf_priv)
>  	return refcount_read(&buf->refcount);
>  }
>
> -static int vb2_dma_sg_mmap(void *buf_priv, struct vm_area_struct *vma)
> +static int vb2_dma_sg_mmap(void *buf_priv, struct mm_area *vma)
>  {
>  	struct vb2_dma_sg_buf *buf = buf_priv;
>  	int err;
> @@ -501,7 +501,7 @@ static int vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf,
>  }
>
>  static int vb2_dma_sg_dmabuf_ops_mmap(struct dma_buf *dbuf,
> -	struct vm_area_struct *vma)
> +	struct mm_area *vma)
>  {
>  	return vb2_dma_sg_mmap(dbuf->priv, vma);
>  }
> diff --git a/drivers/media/common/videobuf2/videobuf2-memops.c b/drivers/media/common/videobuf2/videobuf2-memops.c
> index f9a4ec44422e..3012d5b5c2d9 100644
> --- a/drivers/media/common/videobuf2/videobuf2-memops.c
> +++ b/drivers/media/common/videobuf2/videobuf2-memops.c
> @@ -87,7 +87,7 @@ EXPORT_SYMBOL(vb2_destroy_framevec);
>   * This function adds another user to the provided vma. It expects
>   * struct vb2_vmarea_handler pointer in vma->vm_private_data.
>   */
> -static void vb2_common_vm_open(struct vm_area_struct *vma)
> +static void vb2_common_vm_open(struct mm_area *vma)
>  {
>  	struct vb2_vmarea_handler *h = vma->vm_private_data;
>
> @@ -105,7 +105,7 @@ static void vb2_common_vm_open(struct vm_area_struct *vma)
>   * This function releases the user from the provided vma. It expects
>   * struct vb2_vmarea_handler pointer in vma->vm_private_data.
>   */
> -static void vb2_common_vm_close(struct vm_area_struct *vma)
> +static void vb2_common_vm_close(struct mm_area *vma)
>  {
>  	struct vb2_vmarea_handler *h = vma->vm_private_data;
>
> diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> index 9201d854dbcc..73aa54baf3a0 100644
> --- a/drivers/media/common/videobuf2/videobuf2-v4l2.c
> +++ b/drivers/media/common/videobuf2/videobuf2-v4l2.c
> @@ -1141,7 +1141,7 @@ EXPORT_SYMBOL_GPL(vb2_ioctl_expbuf);
>
>  /* v4l2_file_operations helpers */
>
> -int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma)
> +int vb2_fop_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct video_device *vdev = video_devdata(file);
>
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index 3f777068cd34..7f9526ab3e5a 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -167,7 +167,7 @@ static unsigned int vb2_vmalloc_num_users(void *buf_priv)
>  	return refcount_read(&buf->refcount);
>  }
>
> -static int vb2_vmalloc_mmap(void *buf_priv, struct vm_area_struct *vma)
> +static int vb2_vmalloc_mmap(void *buf_priv, struct mm_area *vma)
>  {
>  	struct vb2_vmalloc_buf *buf = buf_priv;
>  	int ret;
> @@ -318,7 +318,7 @@ static int vb2_vmalloc_dmabuf_ops_vmap(struct dma_buf *dbuf,
>  }
>
>  static int vb2_vmalloc_dmabuf_ops_mmap(struct dma_buf *dbuf,
> -	struct vm_area_struct *vma)
> +	struct mm_area *vma)
>  {
>  	return vb2_vmalloc_mmap(dbuf->priv, vma);
>  }
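
One thing the videobuf2 hunks make clear: the churn concentrates in the
exported helpers (vb2_mmap(), vb2_fop_mmap(), the dmabuf .mmap ops),
while a driver that merely delegates never spells the type out and needs
no change at all. A minimal sketch, assuming the rename lands; foo_fops
is hypothetical, the helpers and v4l2_file_operations are the real
V4L2/videobuf2 ones:

#include <linux/module.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-ioctl.h>
#include <media/videobuf2-v4l2.h>

/* Neither struct mm_area nor vm_area_struct is ever named here: .mmap
 * is delegated wholesale to the vb2 helper, so this driver compiles
 * unchanged on either side of the rename. */
static const struct v4l2_file_operations foo_fops = {
	.owner		= THIS_MODULE,
	.open		= v4l2_fh_open,
	.release	= vb2_fop_release,
	.poll		= vb2_fop_poll,
	.mmap		= vb2_fop_mmap,
	.unlocked_ioctl	= video_ioctl2,
};
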
> diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
> index 6063782e937a..72eae59b0646 100644
> --- a/drivers/media/dvb-core/dmxdev.c
> +++ b/drivers/media/dvb-core/dmxdev.c
> @@ -1212,7 +1212,7 @@ static __poll_t dvb_demux_poll(struct file *file, poll_table *wait)
>  }
>
>  #ifdef CONFIG_DVB_MMAP
> -static int dvb_demux_mmap(struct file *file, struct vm_area_struct *vma)
> +static int dvb_demux_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct dmxdev_filter *dmxdevfilter = file->private_data;
>  	struct dmxdev *dmxdev = dmxdevfilter->dev;
> @@ -1362,7 +1362,7 @@ static __poll_t dvb_dvr_poll(struct file *file, poll_table *wait)
>  }
>
>  #ifdef CONFIG_DVB_MMAP
> -static int dvb_dvr_mmap(struct file *file, struct vm_area_struct *vma)
> +static int dvb_dvr_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct dvb_device *dvbdev = file->private_data;
>  	struct dmxdev *dmxdev = dvbdev->priv;
> diff --git a/drivers/media/dvb-core/dvb_vb2.c b/drivers/media/dvb-core/dvb_vb2.c
> index 29edaaff7a5c..8e6b7b0463e9 100644
> --- a/drivers/media/dvb-core/dvb_vb2.c
> +++ b/drivers/media/dvb-core/dvb_vb2.c
> @@ -431,7 +431,7 @@ int dvb_vb2_dqbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
>  	return 0;
>  }
>
> -int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct vm_area_struct *vma)
> +int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct mm_area *vma)
>  {
>  	int ret;
>
> diff --git a/drivers/media/pci/cx18/cx18-fileops.h b/drivers/media/pci/cx18/cx18-fileops.h
> index 943057b83d94..be284bd28c53 100644
> --- a/drivers/media/pci/cx18/cx18-fileops.h
> +++ b/drivers/media/pci/cx18/cx18-fileops.h
> @@ -19,7 +19,7 @@ int cx18_start_capture(struct cx18_open_id *id);
>  void cx18_stop_capture(struct cx18_stream *s, int gop_end);
>  void cx18_mute(struct cx18 *cx);
>  void cx18_unmute(struct cx18 *cx);
> -int cx18_v4l2_mmap(struct file *file, struct vm_area_struct *vma);
> +int cx18_v4l2_mmap(struct file *file, struct mm_area *vma);
>  void cx18_clear_queue(struct cx18_stream *s, enum vb2_buffer_state state);
>  void cx18_vb_timeout(struct timer_list *t);
>
> diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c
> index 1ca60ca79dba..ffcd43703d6a 100644
> --- a/drivers/media/pci/intel/ipu6/ipu6-dma.c
> +++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c
> @@ -294,7 +294,7 @@ void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
>  }
>  EXPORT_SYMBOL_NS_GPL(ipu6_dma_free, "INTEL_IPU6");
>
> -int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
> +int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct mm_area *vma,
>  		  void *addr, dma_addr_t iova, size_t size,
>  		  unsigned long attrs)
>  {
> diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h
> index 2882850d9366..8c63e2883ebb 100644
> --- a/drivers/media/pci/intel/ipu6/ipu6-dma.h
> +++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h
> @@ -30,7 +30,7 @@ void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
>  		     unsigned long attrs);
>  void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
>  		   dma_addr_t dma_handle, unsigned long attrs);
> -int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
> +int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct mm_area *vma,
>  		  void *addr, dma_addr_t iova, size_t size,
>  		  unsigned long attrs);
>  int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
> diff --git a/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c b/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c
> index 4bda1c369c44..8c35172b0e38 100644
> --- a/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c
> +++ b/drivers/media/platform/samsung/exynos-gsc/gsc-m2m.c
> @@ -703,7 +703,7 @@ static __poll_t gsc_m2m_poll(struct file *file,
>  	return ret;
>  }
>
> -static int gsc_m2m_mmap(struct file *file, struct vm_area_struct *vma)
> +static int gsc_m2m_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct gsc_ctx *ctx = fh_to_ctx(file->private_data);
>  	struct gsc_dev *gsc = ctx->gsc_dev;
> diff --git a/drivers/media/platform/samsung/s3c-camif/camif-capture.c b/drivers/media/platform/samsung/s3c-camif/camif-capture.c
> index bd1149e8abc2..5ee766d8c40e 100644
> --- a/drivers/media/platform/samsung/s3c-camif/camif-capture.c
> +++ b/drivers/media/platform/samsung/s3c-camif/camif-capture.c
> @@ -604,7 +604,7 @@ static __poll_t s3c_camif_poll(struct file *file,
>  	return ret;
>  }
>
> -static int s3c_camif_mmap(struct file *file, struct vm_area_struct *vma)
> +static int s3c_camif_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct camif_vp *vp = video_drvdata(file);
>  	int ret;
> diff --git a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
> index 5f80931f056d..81656e3f2c49 100644
> --- a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
> +++ b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc.c
> @@ -1062,7 +1062,7 @@ static __poll_t s5p_mfc_poll(struct file *file,
>  }
>
>  /* Mmap */
> -static int s5p_mfc_mmap(struct file *file, struct vm_area_struct *vma)
> +static int s5p_mfc_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct s5p_mfc_ctx *ctx = fh_to_ctx(file->private_data);
>  	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
> diff --git a/drivers/media/platform/ti/omap3isp/ispvideo.c b/drivers/media/platform/ti/omap3isp/ispvideo.c
> index 5c9aa80023fd..ddab948fa88f 100644
> --- a/drivers/media/platform/ti/omap3isp/ispvideo.c
> +++ b/drivers/media/platform/ti/omap3isp/ispvideo.c
> @@ -1401,7 +1401,7 @@ static __poll_t isp_video_poll(struct file *file, poll_table *wait)
>  	return ret;
>  }
>
> -static int isp_video_mmap(struct file *file, struct vm_area_struct *vma)
> +static int isp_video_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct isp_video_fh *vfh = to_isp_video_fh(file->private_data);
>
> diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
> index 2ee142621042..25642a2e8eec 100644
> --- a/drivers/media/usb/uvc/uvc_queue.c
> +++ b/drivers/media/usb/uvc/uvc_queue.c
> @@ -346,7 +346,7 @@ int uvc_queue_streamoff(struct uvc_video_queue *queue, enum v4l2_buf_type type)
>  	return ret;
>  }
>
> -int uvc_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma)
> +int uvc_queue_mmap(struct uvc_video_queue *queue, struct mm_area *vma)
>  {
>  	return vb2_mmap(&queue->queue, vma);
>  }
> diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
> index 39065db44e86..f73fd604a62d 100644
> --- a/drivers/media/usb/uvc/uvc_v4l2.c
> +++ b/drivers/media/usb/uvc/uvc_v4l2.c
> @@ -1413,7 +1413,7 @@ static ssize_t uvc_v4l2_read(struct file *file, char __user *data,
>  	return -EINVAL;
>  }
>
> -static int uvc_v4l2_mmap(struct file *file, struct vm_area_struct *vma)
> +static int uvc_v4l2_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct uvc_fh *handle = file->private_data;
>  	struct uvc_streaming *stream = handle->stream;
> diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
> index b4ee701835fc..a56e30f5a487 100644
> --- a/drivers/media/usb/uvc/uvcvideo.h
> +++ b/drivers/media/usb/uvc/uvcvideo.h
> @@ -708,7 +708,7 @@ struct uvc_buffer *uvc_queue_next_buffer(struct uvc_video_queue *queue,
>  struct uvc_buffer *uvc_queue_get_current_buffer(struct uvc_video_queue *queue);
>  void uvc_queue_buffer_release(struct uvc_buffer *buf);
>  int uvc_queue_mmap(struct uvc_video_queue *queue,
> -		   struct vm_area_struct *vma);
> +		   struct mm_area *vma);
>  __poll_t uvc_queue_poll(struct uvc_video_queue *queue, struct file *file,
>  			poll_table *wait);
>  #ifndef CONFIG_MMU
> diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
> index b40c08ce909d..172f16bd0d79 100644
> --- a/drivers/media/v4l2-core/v4l2-dev.c
> +++ b/drivers/media/v4l2-core/v4l2-dev.c
> @@ -392,7 +392,7 @@ static unsigned long v4l2_get_unmapped_area(struct file *filp,
>  }
>  #endif
>
> -static int v4l2_mmap(struct file *filp, struct vm_area_struct *vm)
> +static int v4l2_mmap(struct file *filp, struct mm_area *vm)
>  {
>  	struct video_device *vdev = video_devdata(filp);
>  	int ret = -ENODEV;
> diff --git a/drivers/media/v4l2-core/v4l2-mem2mem.c b/drivers/media/v4l2-core/v4l2-mem2mem.c
> index eb22d6172462..219609e59ee1 100644
> --- a/drivers/media/v4l2-core/v4l2-mem2mem.c
> +++ b/drivers/media/v4l2-core/v4l2-mem2mem.c
> @@ -983,7 +983,7 @@ __poll_t v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
>  EXPORT_SYMBOL_GPL(v4l2_m2m_poll);
>
>  int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> -			 struct vm_area_struct *vma)
> +			 struct mm_area *vma)
>  {
>  	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
>  	struct vb2_queue *vq;
> @@ -1615,7 +1615,7 @@ EXPORT_SYMBOL_GPL(v4l2_m2m_ioctl_stateless_decoder_cmd);
>   * for the output and the capture buffer queue.
>   */
>
> -int v4l2_m2m_fop_mmap(struct file *file, struct vm_area_struct *vma)
> +int v4l2_m2m_fop_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct v4l2_fh *fh = file->private_data;
>
> diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
> index d4a96137728d..5742434e1178 100644
> --- a/drivers/misc/bcm-vk/bcm_vk_dev.c
> +++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
> @@ -1201,7 +1201,7 @@ static long bcm_vk_reset(struct bcm_vk *vk, struct vk_reset __user *arg)
>  	return ret;
>  }
>
> -static int bcm_vk_mmap(struct file *file, struct vm_area_struct *vma)
> +static int bcm_vk_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct bcm_vk_ctx *ctx = file->private_data;
>  	struct bcm_vk *vk = container_of(ctx->miscdev, struct bcm_vk, miscdev);
> diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
> index 7b7a22c91fe4..e8c4ed8aea52 100644
> --- a/drivers/misc/fastrpc.c
> +++ b/drivers/misc/fastrpc.c
> @@ -731,7 +731,7 @@ static int fastrpc_vmap(struct dma_buf *dmabuf, struct iosys_map *map)
>  }
>
>  static int fastrpc_mmap(struct dma_buf *dmabuf,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	struct fastrpc_buf *buf = dmabuf->priv;
>  	size_t size = vma->vm_end - vma->vm_start;
> @@ -984,7 +984,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
>  			continue;
>
>  		if (ctx->maps[i]) {
> -			struct vm_area_struct *vma = NULL;
> +			struct mm_area *vma = NULL;
>
>  			rpra[i].buf.pv = (u64) ctx->args[i].ptr;
>  			pages[i].addr = ctx->maps[i]->phys;
> diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
> index 4441aca2280a..acff9681d657 100644
> --- a/drivers/misc/genwqe/card_dev.c
> +++ b/drivers/misc/genwqe/card_dev.c
> @@ -376,7 +376,7 @@ static int genwqe_release(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>
> -static void genwqe_vma_open(struct vm_area_struct *vma)
> +static void genwqe_vma_open(struct mm_area *vma)
>  {
>  	/* nothing ... */
>  }
> @@ -387,7 +387,7 @@ static void genwqe_vma_open(struct vm_area_struct *vma)
>   *
>   * Free memory which got allocated by GenWQE mmap().
>   */
> -static void genwqe_vma_close(struct vm_area_struct *vma)
> +static void genwqe_vma_close(struct mm_area *vma)
>  {
>  	unsigned long vsize = vma->vm_end - vma->vm_start;
>  	struct inode *inode = file_inode(vma->vm_file);
> @@ -432,7 +432,7 @@ static const struct vm_operations_struct genwqe_vma_ops = {
>   * plain buffer, we lookup our dma_mapping list to find the
>   * corresponding DMA address for the associated user-space address.
>   */
> -static int genwqe_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int genwqe_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	int rc;
>  	unsigned long pfn, vsize = vma->vm_end - vma->vm_start;
> diff --git a/drivers/misc/ocxl/context.c b/drivers/misc/ocxl/context.c
> index cded7d1caf32..da4b82b2c938 100644
> --- a/drivers/misc/ocxl/context.c
> +++ b/drivers/misc/ocxl/context.c
> @@ -95,7 +95,7 @@ int ocxl_context_attach(struct ocxl_context *ctx, u64 amr, struct mm_struct *mm)
>  }
>  EXPORT_SYMBOL_GPL(ocxl_context_attach);
>
> -static vm_fault_t map_afu_irq(struct vm_area_struct *vma, unsigned long address,
> +static vm_fault_t map_afu_irq(struct mm_area *vma, unsigned long address,
>  		u64 offset, struct ocxl_context *ctx)
>  {
>  	u64 trigger_addr;
> @@ -108,7 +108,7 @@ static vm_fault_t map_afu_irq(struct vm_area_struct *vma, unsigned long address,
>  	return vmf_insert_pfn(vma, address, trigger_addr >> PAGE_SHIFT);
>  }
>
> -static vm_fault_t map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
> +static vm_fault_t map_pp_mmio(struct mm_area *vma, unsigned long address,
>  		u64 offset, struct ocxl_context *ctx)
>  {
>  	u64 pp_mmio_addr;
> @@ -138,7 +138,7 @@ static vm_fault_t map_pp_mmio(struct vm_area_struct *vma, unsigned long address,
>
>  static vm_fault_t ocxl_mmap_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ocxl_context *ctx = vma->vm_file->private_data;
>  	u64 offset;
>  	vm_fault_t ret;
> @@ -159,7 +159,7 @@ static const struct vm_operations_struct ocxl_vmops = {
>  };
>
>  static int check_mmap_afu_irq(struct ocxl_context *ctx,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	int irq_id = ocxl_irq_offset_to_id(ctx, vma->vm_pgoff << PAGE_SHIFT);
>
> @@ -185,7 +185,7 @@ static int check_mmap_afu_irq(struct ocxl_context *ctx,
>  }
>
>  static int check_mmap_mmio(struct ocxl_context *ctx,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	if ((vma_pages(vma) + vma->vm_pgoff) >
>  		(ctx->afu->config.pp_mmio_stride >> PAGE_SHIFT))
> @@ -193,7 +193,7 @@ static int check_mmap_mmio(struct ocxl_context *ctx,
>  	return 0;
>  }
>
> -int ocxl_context_mmap(struct ocxl_context *ctx, struct vm_area_struct *vma)
> +int ocxl_context_mmap(struct ocxl_context *ctx, struct mm_area *vma)
>  {
>  	int rc;
>
> diff --git a/drivers/misc/ocxl/file.c b/drivers/misc/ocxl/file.c
> index 7eb74711ac96..68ce28450ac8 100644
> --- a/drivers/misc/ocxl/file.c
> +++ b/drivers/misc/ocxl/file.c
> @@ -289,7 +289,7 @@ static long afu_compat_ioctl(struct file *file, unsigned int cmd,
>  	return afu_ioctl(file, cmd, args);
>  }
>
> -static int afu_mmap(struct file *file, struct vm_area_struct *vma)
> +static int afu_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct ocxl_context *ctx = file->private_data;
>
> diff --git a/drivers/misc/ocxl/ocxl_internal.h b/drivers/misc/ocxl/ocxl_internal.h
> index d2028d6c6f08..4008b894d983 100644
> --- a/drivers/misc/ocxl/ocxl_internal.h
> +++ b/drivers/misc/ocxl/ocxl_internal.h
> @@ -139,7 +139,7 @@ int ocxl_config_check_afu_index(struct pci_dev *dev,
>  int ocxl_link_update_pe(void *link_handle, int pasid, __u16 tid);
>
>  int ocxl_context_mmap(struct ocxl_context *ctx,
> -			struct vm_area_struct *vma);
> +			struct mm_area *vma);
>  void ocxl_context_detach_all(struct ocxl_afu *afu);
>
>  int ocxl_sysfs_register_afu(struct ocxl_file_info *info);
> diff --git a/drivers/misc/ocxl/sysfs.c b/drivers/misc/ocxl/sysfs.c
> index e849641687a0..2ba0dc539358 100644
> --- a/drivers/misc/ocxl/sysfs.c
> +++ b/drivers/misc/ocxl/sysfs.c
> @@ -108,7 +108,7 @@ static ssize_t global_mmio_read(struct file *filp, struct kobject *kobj,
>
>  static vm_fault_t global_mmio_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct ocxl_afu *afu = vma->vm_private_data;
>  	unsigned long offset;
>
> @@ -126,7 +126,7 @@ static const struct vm_operations_struct global_mmio_vmops = {
>
>  static int global_mmio_mmap(struct file *filp, struct kobject *kobj,
>  			const struct bin_attribute *bin_attr,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	struct ocxl_afu *afu = to_afu(kobj_to_dev(kobj));
>
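
The ocxl hunks above show the second common shape in this series: fault
handlers receive struct vm_fault and fish the VMA out of vmf->vma, so
only a local declaration changes. A minimal sketch of that shape under
the new name; struct foo_ctx and its fields are hypothetical, while
vm_fault_t, the vm_fault members and vmf_insert_pfn() are the real API:

#include <linux/mm.h>

struct foo_ctx {
	unsigned long base_pfn;		/* first backing page frame */
	unsigned long nr_pages;		/* size of the region in pages */
};

static vm_fault_t foo_mmap_fault(struct vm_fault *vmf)
{
	struct mm_area *vma = vmf->vma;
	struct foo_ctx *ctx = vma->vm_private_data;

	if (vmf->pgoff >= ctx->nr_pages)
		return VM_FAULT_SIGBUS;

	/* insert the PFN backing the faulting page */
	return vmf_insert_pfn(vma, vmf->address,
			      ctx->base_pfn + vmf->pgoff);
}
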
> diff --git a/drivers/misc/open-dice.c b/drivers/misc/open-dice.c
> index 24c29e0f00ef..d763a0bd0c8a 100644
> --- a/drivers/misc/open-dice.c
> +++ b/drivers/misc/open-dice.c
> @@ -86,7 +86,7 @@ static ssize_t open_dice_write(struct file *filp, const char __user *ptr,
>  /*
>   * Creates a mapping of the reserved memory region in user address space.
>   */
> -static int open_dice_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int open_dice_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct open_dice_drvdata *drvdata = to_open_dice_drvdata(filp);
>
> diff --git a/drivers/misc/sgi-gru/grufault.c b/drivers/misc/sgi-gru/grufault.c
> index 3557d78ee47a..a97dde2c3775 100644
> --- a/drivers/misc/sgi-gru/grufault.c
> +++ b/drivers/misc/sgi-gru/grufault.c
> @@ -45,9 +45,9 @@ static inline int is_gru_paddr(unsigned long paddr)
>  /*
>   * Find the vma of a GRU segment. Caller must hold mmap_lock.
>   */
> -struct vm_area_struct *gru_find_vma(unsigned long vaddr)
> +struct mm_area *gru_find_vma(unsigned long vaddr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = vma_lookup(current->mm, vaddr);
>  	if (vma && vma->vm_ops == &gru_vm_ops)
> @@ -66,7 +66,7 @@ struct vm_area_struct *gru_find_vma(unsigned long vaddr)
>  static struct gru_thread_state *gru_find_lock_gts(unsigned long vaddr)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct gru_thread_state *gts = NULL;
>
>  	mmap_read_lock(mm);
> @@ -83,7 +83,7 @@ static struct gru_thread_state *gru_find_lock_gts(unsigned long vaddr)
>  static struct gru_thread_state *gru_alloc_locked_gts(unsigned long vaddr)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct gru_thread_state *gts = ERR_PTR(-EINVAL);
>
>  	mmap_write_lock(mm);
> @@ -174,7 +174,7 @@ static void get_clear_fault_map(struct gru_state *gru,
>   * 		< 0 - error code
>   * 		  1 - (atomic only) try again in non-atomic context
>   */
> -static int non_atomic_pte_lookup(struct vm_area_struct *vma,
> +static int non_atomic_pte_lookup(struct mm_area *vma,
>  				 unsigned long vaddr, int write,
>  				 unsigned long *paddr, int *pageshift)
>  {
> @@ -202,7 +202,7 @@ static int non_atomic_pte_lookup(struct vm_area_struct *vma,
>   * NOTE: mmap_lock is already held on entry to this function. This
>   * guarantees existence of the page tables.
>   */
> -static int atomic_pte_lookup(struct vm_area_struct *vma, unsigned long vaddr,
> +static int atomic_pte_lookup(struct mm_area *vma, unsigned long vaddr,
>  	int write, unsigned long *paddr, int *pageshift)
>  {
>  	pgd_t *pgdp;
> @@ -253,7 +253,7 @@ static int gru_vtop(struct gru_thread_state *gts, unsigned long vaddr,
>  		    int write, int atomic, unsigned long *gpa, int *pageshift)
>  {
>  	struct mm_struct *mm = gts->ts_mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long paddr;
>  	int ret, ps;
>
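
The "Caller must hold mmap_lock" comment above gru_find_vma() is the
load-bearing detail in this file: a struct mm_area pointer is only
stable while the owning mm's mmap_lock is held, which is why
gru_find_lock_gts() and gru_alloc_locked_gts() take the lock around the
lookup. A minimal sketch of that pattern; foo_inspect_vaddr() is
hypothetical, the locking and lookup calls are the real mm API:

#include <linux/mm.h>
#include <linux/sched.h>

static int foo_inspect_vaddr(unsigned long vaddr)
{
	struct mm_struct *mm = current->mm;
	struct mm_area *vma;
	int ret = -EINVAL;

	mmap_read_lock(mm);
	vma = vma_lookup(mm, vaddr);	/* pointer valid only while locked */
	if (vma && (vma->vm_flags & VM_WRITE))
		ret = 0;		/* inspect other vma fields here */
	mmap_read_unlock(mm);

	return ret;
}
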
> diff --git a/drivers/misc/sgi-gru/grufile.c b/drivers/misc/sgi-gru/grufile.c
> index e755690c9805..b831fdb27841 100644
> --- a/drivers/misc/sgi-gru/grufile.c
> +++ b/drivers/misc/sgi-gru/grufile.c
> @@ -58,7 +58,7 @@ static int gru_supported(void)
>   * Called when unmapping a device mapping. Frees all gru resources
>   * and tables belonging to the vma.
>   */
> -static void gru_vma_close(struct vm_area_struct *vma)
> +static void gru_vma_close(struct mm_area *vma)
>  {
>  	struct gru_vma_data *vdata;
>  	struct gru_thread_state *gts;
> @@ -92,7 +92,7 @@ static void gru_vma_close(struct vm_area_struct *vma)
>   * and private data structure necessary to allocate, track, and free the
>   * underlying pages.
>   */
> -static int gru_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int gru_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if ((vma->vm_flags & (VM_SHARED | VM_WRITE)) != (VM_SHARED | VM_WRITE))
>  		return -EPERM;
> @@ -121,7 +121,7 @@ static int gru_file_mmap(struct file *file, struct vm_area_struct *vma)
>  static int gru_create_new_context(unsigned long arg)
>  {
>  	struct gru_create_context_req req;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct gru_vma_data *vdata;
>  	int ret = -EINVAL;
>
> diff --git a/drivers/misc/sgi-gru/grumain.c b/drivers/misc/sgi-gru/grumain.c
> index 3036c15f3689..96374726d7e6 100644
> --- a/drivers/misc/sgi-gru/grumain.c
> +++ b/drivers/misc/sgi-gru/grumain.c
> @@ -303,7 +303,7 @@ static struct gru_thread_state *gru_find_current_gts_nolock(struct gru_vma_data
>  /*
>   * Allocate a thread state structure.
>   */
> -struct gru_thread_state *gru_alloc_gts(struct vm_area_struct *vma,
> +struct gru_thread_state *gru_alloc_gts(struct mm_area *vma,
>  		int cbr_au_count, int dsr_au_count,
>  		unsigned char tlb_preload_count, int options, int tsid)
>  {
> @@ -352,7 +352,7 @@ struct gru_thread_state *gru_alloc_gts(struct vm_area_struct *vma,
>  /*
>   * Allocate a vma private data structure.
>   */
> -struct gru_vma_data *gru_alloc_vma_data(struct vm_area_struct *vma, int tsid)
> +struct gru_vma_data *gru_alloc_vma_data(struct mm_area *vma, int tsid)
>  {
>  	struct gru_vma_data *vdata = NULL;
>
> @@ -370,7 +370,7 @@ struct gru_vma_data *gru_alloc_vma_data(struct vm_area_struct *vma, int tsid)
>  /*
>   * Find the thread state structure for the current thread.
>   */
> -struct gru_thread_state *gru_find_thread_state(struct vm_area_struct *vma,
> +struct gru_thread_state *gru_find_thread_state(struct mm_area *vma,
>  					int tsid)
>  {
>  	struct gru_vma_data *vdata = vma->vm_private_data;
> @@ -387,7 +387,7 @@ struct gru_thread_state *gru_find_thread_state(struct vm_area_struct *vma,
>   * Allocate a new thread state for a GSEG. Note that races may allow
>   * another thread to race to create a gts.
>   */
> -struct gru_thread_state *gru_alloc_thread_state(struct vm_area_struct *vma,
> +struct gru_thread_state *gru_alloc_thread_state(struct mm_area *vma,
>  					int tsid)
>  {
>  	struct gru_vma_data *vdata = vma->vm_private_data;
> @@ -920,7 +920,7 @@ struct gru_state *gru_assign_gru_context(struct gru_thread_state *gts)
>   */
>  vm_fault_t gru_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct gru_thread_state *gts;
>  	unsigned long paddr, vaddr;
>  	unsigned long expires;
> diff --git a/drivers/misc/sgi-gru/grutables.h b/drivers/misc/sgi-gru/grutables.h
> index 640daf1994df..cd0756f1e7c4 100644
> --- a/drivers/misc/sgi-gru/grutables.h
> +++ b/drivers/misc/sgi-gru/grutables.h
> @@ -337,7 +337,7 @@ struct gru_thread_state {
>  	struct mutex		ts_ctxlock;	/* load/unload CTX lock */
>  	struct mm_struct	*ts_mm;		/* mm currently mapped to
>  						   context */
> -	struct vm_area_struct	*ts_vma;	/* vma of GRU context */
> +	struct mm_area		*ts_vma;	/* vma of GRU context */
>  	struct gru_state	*ts_gru;	/* GRU where the context is
>  						   loaded */
>  	struct gru_mm_struct	*ts_gms;	/* asid & ioproc struct */
> @@ -607,11 +607,11 @@ struct gru_unload_context_req;
>  extern const struct vm_operations_struct gru_vm_ops;
>  extern struct device *grudev;
>
> -extern struct gru_vma_data *gru_alloc_vma_data(struct vm_area_struct *vma,
> +extern struct gru_vma_data *gru_alloc_vma_data(struct mm_area *vma,
>  				int tsid);
> -extern struct gru_thread_state *gru_find_thread_state(struct vm_area_struct
> +extern struct gru_thread_state *gru_find_thread_state(struct mm_area
>  				*vma, int tsid);
> -extern struct gru_thread_state *gru_alloc_thread_state(struct vm_area_struct
> +extern struct gru_thread_state *gru_alloc_thread_state(struct mm_area
>  				*vma, int tsid);
>  extern struct gru_state *gru_assign_gru_context(struct gru_thread_state *gts);
>  extern void gru_load_context(struct gru_thread_state *gts);
> @@ -634,12 +634,12 @@ extern int gru_get_exception_detail(unsigned long arg);
>  extern int gru_set_context_option(unsigned long address);
>  extern int gru_check_context_placement(struct gru_thread_state *gts);
>  extern int gru_cpu_fault_map_id(void);
> -extern struct vm_area_struct *gru_find_vma(unsigned long vaddr);
> +extern struct mm_area *gru_find_vma(unsigned long vaddr);
>  extern void gru_flush_all_tlb(struct gru_state *gru);
>  extern int gru_proc_init(void);
>  extern void gru_proc_exit(void);
>
> -extern struct gru_thread_state *gru_alloc_gts(struct vm_area_struct *vma,
> +extern struct gru_thread_state *gru_alloc_gts(struct mm_area *vma,
>  		int cbr_au_count, int dsr_au_count,
>  		unsigned char tlb_preload_count, int options, int tsid);
>  extern unsigned long gru_reserve_cb_resources(struct gru_state *gru,
> diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
> index bdc2e6fda782..316f5f5af318 100644
> --- a/drivers/misc/uacce/uacce.c
> +++ b/drivers/misc/uacce/uacce.c
> @@ -200,7 +200,7 @@ static int uacce_fops_release(struct inode *inode, struct file *filep)
>  	return 0;
>  }
>
> -static void uacce_vma_close(struct vm_area_struct *vma)
> +static void uacce_vma_close(struct mm_area *vma)
>  {
>  	struct uacce_queue *q = vma->vm_private_data;
>
> @@ -218,7 +218,7 @@ static const struct vm_operations_struct uacce_vm_ops = {
>  	.close = uacce_vma_close,
>  };
>
> -static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
> +static int uacce_fops_mmap(struct file *filep, struct mm_area *vma)
>  {
>  	struct uacce_queue *q = filep->private_data;
>  	struct uacce_device *uacce = q->uacce;
> diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
> index 8dc4f5c493fc..389461af2b3e 100644
> --- a/drivers/mtd/mtdchar.c
> +++ b/drivers/mtd/mtdchar.c
> @@ -1374,7 +1374,7 @@ static unsigned mtdchar_mmap_capabilities(struct file *file)
>  /*
>   * set up a mapping for shared memory segments
>   */
> -static int mtdchar_mmap(struct file *file, struct vm_area_struct *vma)
> +static int mtdchar_mmap(struct file *file, struct mm_area *vma)
>  {
>  #ifdef CONFIG_MMU
>  	struct mtd_file_info *mfi = file->private_data;
> diff --git a/drivers/pci/mmap.c b/drivers/pci/mmap.c
> index 8da3347a95c4..183568aa7b8c 100644
> --- a/drivers/pci/mmap.c
> +++ b/drivers/pci/mmap.c
> @@ -22,7 +22,7 @@ static const struct vm_operations_struct pci_phys_vm_ops = {
>  };
>
>  int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
> -			    struct vm_area_struct *vma,
> +			    struct mm_area *vma,
>  			    enum pci_mmap_state mmap_state, int write_combine)
>  {
>  	unsigned long size;
> @@ -56,7 +56,7 @@ int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
>  #if (defined(CONFIG_SYSFS) || defined(CONFIG_PROC_FS)) && \
>      (defined(HAVE_PCI_MMAP) || defined(ARCH_GENERIC_PCI_MMAP_RESOURCE))
>
> -int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vma,
> +int pci_mmap_fits(struct pci_dev *pdev, int resno, struct mm_area *vma,
>  		  enum pci_mmap_api mmap_api)
>  {
>  	resource_size_t pci_start = 0, pci_end;
> diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
> index 19214ec81fbb..ba40bd4cb2a1 100644
> --- a/drivers/pci/p2pdma.c
> +++ b/drivers/pci/p2pdma.c
> @@ -90,7 +90,7 @@ static ssize_t published_show(struct device *dev, struct device_attribute *attr,
>  static DEVICE_ATTR_RO(published);
>
>  static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
> -		const struct bin_attribute *attr, struct vm_area_struct *vma)
> +		const struct bin_attribute *attr, struct mm_area *vma)
>  {
>  	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
>  	size_t len = vma->vm_end - vma->vm_start;
> diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
> index c6cda56ca52c..4ceec1061fe5 100644
> --- a/drivers/pci/pci-sysfs.c
> +++ b/drivers/pci/pci-sysfs.c
> @@ -930,7 +930,7 @@ static ssize_t pci_write_legacy_io(struct file *filp, struct kobject *kobj,
>   * @filp: open sysfs file
>   * @kobj: kobject corresponding to device to be mapped
>   * @attr: struct bin_attribute for this file
> - * @vma: struct vm_area_struct passed to mmap
> + * @vma: struct mm_area passed to mmap
>   *
>   * Uses an arch specific callback, pci_mmap_legacy_mem_page_range, to mmap
>   * legacy memory space (first meg of bus space) into application virtual
> @@ -938,7 +938,7 @@ static ssize_t pci_write_legacy_io(struct file *filp, struct kobject *kobj,
>   */
>  static int pci_mmap_legacy_mem(struct file *filp, struct kobject *kobj,
>  			       const struct bin_attribute *attr,
> -			       struct vm_area_struct *vma)
> +			       struct mm_area *vma)
>  {
>  	struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj));
>
> @@ -950,7 +950,7 @@ static int pci_mmap_legacy_mem(struct file *filp, struct kobject *kobj,
>   * @filp: open sysfs file
>   * @kobj: kobject corresponding to device to be mapped
>   * @attr: struct bin_attribute for this file
> - * @vma: struct vm_area_struct passed to mmap
> + * @vma: struct mm_area passed to mmap
>   *
>   * Uses an arch specific callback, pci_mmap_legacy_io_page_range, to mmap
>   * legacy IO space (first meg of bus space) into application virtual
> @@ -958,7 +958,7 @@ static int pci_mmap_legacy_mem(struct file *filp, struct kobject *kobj,
>   */
>  static int pci_mmap_legacy_io(struct file *filp, struct kobject *kobj,
>  			      const struct bin_attribute *attr,
> -			      struct vm_area_struct *vma)
> +			      struct mm_area *vma)
>  {
>  	struct pci_bus *bus = to_pci_bus(kobj_to_dev(kobj));
>
> @@ -1056,13 +1056,13 @@ void pci_remove_legacy_files(struct pci_bus *b)
>   * pci_mmap_resource - map a PCI resource into user memory space
>   * @kobj: kobject for mapping
>   * @attr: struct bin_attribute for the file being mapped
> - * @vma: struct vm_area_struct passed into the mmap
> + * @vma: struct mm_area passed into the mmap
>   * @write_combine: 1 for write_combine mapping
>   *
>   * Use the regular PCI mapping routines to map a PCI resource into userspace.
>   */
>  static int pci_mmap_resource(struct kobject *kobj, const struct bin_attribute *attr,
> -			     struct vm_area_struct *vma, int write_combine)
> +			     struct mm_area *vma, int write_combine)
>  {
>  	struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj));
>  	int bar = (unsigned long)attr->private;
> @@ -1087,14 +1087,14 @@ static int pci_mmap_resource(struct kobject *kobj, const struct bin_attribute *a
>
>  static int pci_mmap_resource_uc(struct file *filp, struct kobject *kobj,
>  				const struct bin_attribute *attr,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	return pci_mmap_resource(kobj, attr, vma, 0);
>  }
>
>  static int pci_mmap_resource_wc(struct file *filp, struct kobject *kobj,
>  				const struct bin_attribute *attr,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	return pci_mmap_resource(kobj, attr, vma, 1);
>  }
> diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
> index b81e99cd4b62..3595cd20c401 100644
> --- a/drivers/pci/pci.h
> +++ b/drivers/pci/pci.h
> @@ -99,7 +99,7 @@ enum pci_mmap_api {
>  	PCI_MMAP_SYSFS,	/* mmap on /sys/bus/pci/devices/<BDF>/resource<N> */
>  	PCI_MMAP_PROCFS	/* mmap on /proc/bus/pci/<BDF> */
>  };
> -int pci_mmap_fits(struct pci_dev *pdev, int resno, struct vm_area_struct *vmai,
> +int pci_mmap_fits(struct pci_dev *pdev, int resno, struct mm_area *vma,
>  		  enum pci_mmap_api mmap_api);
>
>  bool pci_reset_supported(struct pci_dev *dev);
> diff --git a/drivers/pci/proc.c b/drivers/pci/proc.c
> index 9348a0fb8084..bb9b1a16c6b4 100644
> --- a/drivers/pci/proc.c
> +++ b/drivers/pci/proc.c
> @@ -240,7 +240,7 @@ static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
>  }
>
>  #ifdef HAVE_PCI_MMAP
> -static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
> +static int proc_bus_pci_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct pci_dev *dev = pde_data(file_inode(file));
>  	struct pci_filp_private *fpriv = file->private_data;
> diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
> index 7233b654bbad..1757c1109a16 100644
> --- a/drivers/platform/x86/intel/pmt/class.c
> +++ b/drivers/platform/x86/intel/pmt/class.c
> @@ -105,7 +105,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
>
>  static int
>  intel_pmt_mmap(struct file *filp, struct kobject *kobj,
> -		const struct bin_attribute *attr, struct vm_area_struct *vma)
> +		const struct bin_attribute *attr, struct mm_area *vma)
>  {
>  	struct intel_pmt_entry *entry = container_of(attr,
>  						     struct intel_pmt_entry,
> diff --git a/drivers/ptp/ptp_vmclock.c b/drivers/ptp/ptp_vmclock.c
> index b3a83b03d9c1..b1dddbc99ce7 100644
> --- a/drivers/ptp/ptp_vmclock.c
> +++ b/drivers/ptp/ptp_vmclock.c
> @@ -357,7 +357,7 @@ static struct ptp_clock *vmclock_ptp_register(struct device *dev,
>  	return ptp_clock_register(&st->ptp_clock_info, dev);
>  }
>
> -static int vmclock_miscdev_mmap(struct file *fp, struct vm_area_struct *vma)
> +static int vmclock_miscdev_mmap(struct file *fp, struct mm_area *vma)
>  {
>  	struct vmclock_state *st = container_of(fp->private_data,
>  						struct vmclock_state, miscdev);
> diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
> index cbf531d0ba68..e6f7cd47e550 100644
> --- a/drivers/rapidio/devices/rio_mport_cdev.c
> +++ b/drivers/rapidio/devices/rio_mport_cdev.c
> @@ -2173,7 +2173,7 @@ static void mport_release_mapping(struct kref *ref)
>  	kfree(map);
>  }
>
> -static void mport_mm_open(struct vm_area_struct *vma)
> +static void mport_mm_open(struct mm_area *vma)
>  {
>  	struct rio_mport_mapping *map = vma->vm_private_data;
>
> @@ -2181,7 +2181,7 @@ static void mport_mm_open(struct vm_area_struct *vma)
>  	kref_get(&map->ref);
>  }
>
> -static void mport_mm_close(struct vm_area_struct *vma)
> +static void mport_mm_close(struct mm_area *vma)
>  {
>  	struct rio_mport_mapping *map = vma->vm_private_data;
>
> @@ -2196,7 +2196,7 @@ static const struct vm_operations_struct vm_ops = {
>  	.close = mport_mm_close,
>  };
>
> -static int mport_cdev_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int mport_cdev_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct mport_cdev_priv *priv = filp->private_data;
>  	struct mport_dev *md;
> diff --git a/drivers/sbus/char/flash.c b/drivers/sbus/char/flash.c
> index 6524a4a19109..20e2687a4cc7 100644
> --- a/drivers/sbus/char/flash.c
> +++ b/drivers/sbus/char/flash.c
> @@ -31,7 +31,7 @@ static struct {
>  } flash;
>
>  static int
> -flash_mmap(struct file *file, struct vm_area_struct *vma)
> +flash_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned long addr;
>  	unsigned long size;
> diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c
> index a536dd6f4f7c..151f9f99565f 100644
> --- a/drivers/sbus/char/oradax.c
> +++ b/drivers/sbus/char/oradax.c
> @@ -208,7 +208,7 @@ static ssize_t dax_read(struct file *filp, char __user *buf,
>  			size_t count, loff_t *ppos);
>  static ssize_t dax_write(struct file *filp, const char __user *buf,
>  			 size_t count, loff_t *ppos);
> -static int dax_devmap(struct file *f, struct vm_area_struct *vma);
> +static int dax_devmap(struct file *f, struct mm_area *vma);
>  static int dax_close(struct inode *i, struct file *f);
>
>  static const struct file_operations dax_fops = {
> @@ -368,7 +368,7 @@ static void __exit dax_detach(void)
>  module_exit(dax_detach);
>
>  /* map completion area */
> -static int dax_devmap(struct file *f, struct vm_area_struct *vma)
> +static int dax_devmap(struct file *f, struct mm_area *vma)
>  {
>  	struct dax_ctx *ctx = (struct dax_ctx *)f->private_data;
>  	size_t len = vma->vm_end - vma->vm_start;
> diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
> index effb7e768165..a20fc2341c3c 100644
> --- a/drivers/scsi/sg.c
> +++ b/drivers/scsi/sg.c
> @@ -1214,7 +1214,7 @@ sg_fasync(int fd, struct file *filp, int mode)
>  static vm_fault_t
>  sg_vma_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	Sg_fd *sfp;
>  	unsigned long offset, len, sa;
>  	Sg_scatter_hold *rsv_schp;
> @@ -1253,7 +1253,7 @@ static const struct vm_operations_struct sg_mmap_vm_ops = {
>  };
>
>  static int
> -sg_mmap(struct file *filp, struct vm_area_struct *vma)
> +sg_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	Sg_fd *sfp;
>  	unsigned long req_sz, len, sa;
> diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
> index ee58151bd69e..9a64d76880a9 100644
> --- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c
> +++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c
> @@ -46,7 +46,7 @@ static struct aspeed_lpc_ctrl *file_aspeed_lpc_ctrl(struct file *file)
>  			miscdev);
>  }
>
> -static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma)
> +static int aspeed_lpc_ctrl_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct aspeed_lpc_ctrl *lpc_ctrl = file_aspeed_lpc_ctrl(file);
>  	unsigned long vsize = vma->vm_end - vma->vm_start;
> diff --git a/drivers/soc/aspeed/aspeed-p2a-ctrl.c b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
> index 6cc943744e12..8ad07f33f25c 100644
> --- a/drivers/soc/aspeed/aspeed-p2a-ctrl.c
> +++ b/drivers/soc/aspeed/aspeed-p2a-ctrl.c
> @@ -97,7 +97,7 @@ static void aspeed_p2a_disable_bridge(struct aspeed_p2a_ctrl *p2a_ctrl)
>  	regmap_update_bits(p2a_ctrl->regmap, SCU180, SCU180_ENP2A, 0);
>  }
>
> -static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma)
> +static int aspeed_p2a_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned long vsize;
>  	pgprot_t prot;
> diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
> index 1b32469f2789..f07526023635 100644
> --- a/drivers/soc/qcom/rmtfs_mem.c
> +++ b/drivers/soc/qcom/rmtfs_mem.c
> @@ -129,7 +129,7 @@ static const struct class rmtfs_class = {
>  	.name           = "rmtfs",
>  };
>
> -static int qcom_rmtfs_mem_mmap(struct file *filep, struct vm_area_struct *vma)
> +static int qcom_rmtfs_mem_mmap(struct file *filep, struct mm_area *vma)
>  {
>  	struct qcom_rmtfs_mem *rmtfs_mem = filep->private_data;
>
> diff --git a/drivers/staging/media/atomisp/include/hmm/hmm.h b/drivers/staging/media/atomisp/include/hmm/hmm.h
> index a7aef27f54de..6c20072ca7e0 100644
> --- a/drivers/staging/media/atomisp/include/hmm/hmm.h
> +++ b/drivers/staging/media/atomisp/include/hmm/hmm.h
> @@ -63,7 +63,7 @@ void hmm_flush_vmap(ia_css_ptr virt);
>   * virt must be the start address of ISP memory (return by hmm_alloc),
>   * do not pass any other address.
>   */
> -int hmm_mmap(struct vm_area_struct *vma, ia_css_ptr virt);
> +int hmm_mmap(struct mm_area *vma, ia_css_ptr virt);
>
>  extern struct hmm_bo_device bo_device;
>
> diff --git a/drivers/staging/media/atomisp/include/hmm/hmm_bo.h b/drivers/staging/media/atomisp/include/hmm/hmm_bo.h
> index e09ac29ac43d..9546a39e747b 100644
> --- a/drivers/staging/media/atomisp/include/hmm/hmm_bo.h
> +++ b/drivers/staging/media/atomisp/include/hmm/hmm_bo.h
> @@ -232,7 +232,7 @@ void hmm_bo_vunmap(struct hmm_buffer_object *bo);
>   *
>   * vma->vm_flags will be set to (VM_RESERVED | VM_IO).
>   */
> -int hmm_bo_mmap(struct vm_area_struct *vma,
> +int hmm_bo_mmap(struct mm_area *vma,
>  		struct hmm_buffer_object *bo);
>
>  /*
> diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c
> index 84102c3aaf97..64712310f850 100644
> --- a/drivers/staging/media/atomisp/pci/hmm/hmm.c
> +++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c
> @@ -522,7 +522,7 @@ phys_addr_t hmm_virt_to_phys(ia_css_ptr virt)
>  	return page_to_phys(bo->pages[idx]) + offset;
>  }
>
> -int hmm_mmap(struct vm_area_struct *vma, ia_css_ptr virt)
> +int hmm_mmap(struct mm_area *vma, ia_css_ptr virt)
>  {
>  	struct hmm_buffer_object *bo;
>
> diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> index 224ca8d42721..15c48650d883 100644
> --- a/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> +++ b/drivers/staging/media/atomisp/pci/hmm/hmm_bo.c
> @@ -974,7 +974,7 @@ void hmm_bo_unref(struct hmm_buffer_object *bo)
>  	kref_put(&bo->kref, kref_hmm_bo_release);
>  }
>
> -static void hmm_bo_vm_open(struct vm_area_struct *vma)
> +static void hmm_bo_vm_open(struct mm_area *vma)
>  {
>  	struct hmm_buffer_object *bo =
>  	    (struct hmm_buffer_object *)vma->vm_private_data;
> @@ -992,7 +992,7 @@ static void hmm_bo_vm_open(struct vm_area_struct *vma)
>  	mutex_unlock(&bo->mutex);
>  }
>
> -static void hmm_bo_vm_close(struct vm_area_struct *vma)
> +static void hmm_bo_vm_close(struct mm_area *vma)
>  {
>  	struct hmm_buffer_object *bo =
>  	    (struct hmm_buffer_object *)vma->vm_private_data;
> @@ -1021,7 +1021,7 @@ static const struct vm_operations_struct hmm_bo_vm_ops = {
>  /*
>   * mmap the bo to user space.
>   */
> -int hmm_bo_mmap(struct vm_area_struct *vma, struct hmm_buffer_object *bo)
> +int hmm_bo_mmap(struct mm_area *vma, struct hmm_buffer_object *bo)
>  {
>  	unsigned int start, end;
>  	unsigned int virt;
> diff --git a/drivers/staging/vme_user/vme.c b/drivers/staging/vme_user/vme.c
> index 42304c9f83a2..ed589a97da4f 100644
> --- a/drivers/staging/vme_user/vme.c
> +++ b/drivers/staging/vme_user/vme.c
> @@ -745,7 +745,7 @@ EXPORT_SYMBOL(vme_master_rmw);
>   *         resource or -EFAULT if map exceeds window size. Other generic mmap
>   *         errors may also be returned.
>   */
> -int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma)
> +int vme_master_mmap(struct vme_resource *resource, struct mm_area *vma)
>  {
>  	struct vme_bridge *bridge = find_bridge(resource);
>  	struct vme_master_resource *image;
> diff --git a/drivers/staging/vme_user/vme.h b/drivers/staging/vme_user/vme.h
> index 7753e736f9fd..a1505b68907f 100644
> --- a/drivers/staging/vme_user/vme.h
> +++ b/drivers/staging/vme_user/vme.h
> @@ -151,7 +151,7 @@ ssize_t vme_master_read(struct vme_resource *, void *, size_t, loff_t);
>  ssize_t vme_master_write(struct vme_resource *, void *, size_t, loff_t);
>  unsigned int vme_master_rmw(struct vme_resource *, unsigned int, unsigned int,
>  			    unsigned int, loff_t);
> -int vme_master_mmap(struct vme_resource *resource, struct vm_area_struct *vma);
> +int vme_master_mmap(struct vme_resource *resource, struct mm_area *vma);
>  void vme_master_free(struct vme_resource *);
>
>  struct vme_resource *vme_dma_request(struct vme_dev *, u32);
> diff --git a/drivers/staging/vme_user/vme_user.c b/drivers/staging/vme_user/vme_user.c
> index 5829a4141561..fd777648698d 100644
> --- a/drivers/staging/vme_user/vme_user.c
> +++ b/drivers/staging/vme_user/vme_user.c
> @@ -424,14 +424,14 @@ vme_user_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>  	return ret;
>  }
>
> -static void vme_user_vm_open(struct vm_area_struct *vma)
> +static void vme_user_vm_open(struct mm_area *vma)
>  {
>  	struct vme_user_vma_priv *vma_priv = vma->vm_private_data;
>
>  	refcount_inc(&vma_priv->refcnt);
>  }
>
> -static void vme_user_vm_close(struct vm_area_struct *vma)
> +static void vme_user_vm_close(struct mm_area *vma)
>  {
>  	struct vme_user_vma_priv *vma_priv = vma->vm_private_data;
>  	unsigned int minor = vma_priv->minor;
> @@ -451,7 +451,7 @@ static const struct vm_operations_struct vme_user_vm_ops = {
>  	.close = vme_user_vm_close,
>  };
>
> -static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct *vma)
> +static int vme_user_master_mmap(unsigned int minor, struct mm_area *vma)
>  {
>  	int err;
>  	struct vme_user_vma_priv *vma_priv;
> @@ -482,7 +482,7 @@ static int vme_user_master_mmap(unsigned int minor, struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -static int vme_user_mmap(struct file *file, struct vm_area_struct *vma)
> +static int vme_user_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned int minor = iminor(file_inode(file));
>
> diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
> index 0f5d820af119..eaff895205b4 100644
> --- a/drivers/target/target_core_user.c
> +++ b/drivers/target/target_core_user.c
> @@ -1823,7 +1823,7 @@ static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
>   * mmap code from uio.c. Copied here because we want to hook mmap()
>   * and this stuff must come along.
>   */
> -static int tcmu_find_mem_index(struct vm_area_struct *vma)
> +static int tcmu_find_mem_index(struct mm_area *vma)
>  {
>  	struct tcmu_dev *udev = vma->vm_private_data;
>  	struct uio_info *info = &udev->uio_info;
> @@ -1860,7 +1860,7 @@ static struct page *tcmu_try_get_data_page(struct tcmu_dev *udev, uint32_t dpi)
>  	return NULL;
>  }
>
> -static void tcmu_vma_open(struct vm_area_struct *vma)
> +static void tcmu_vma_open(struct mm_area *vma)
>  {
>  	struct tcmu_dev *udev = vma->vm_private_data;
>
> @@ -1869,7 +1869,7 @@ static void tcmu_vma_open(struct vm_area_struct *vma)
>  	kref_get(&udev->kref);
>  }
>
> -static void tcmu_vma_close(struct vm_area_struct *vma)
> +static void tcmu_vma_close(struct mm_area *vma)
>  {
>  	struct tcmu_dev *udev = vma->vm_private_data;
>
> @@ -1924,7 +1924,7 @@ static const struct vm_operations_struct tcmu_vm_ops = {
>  	.fault = tcmu_vma_fault,
>  };
>
> -static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
> +static int tcmu_mmap(struct uio_info *info, struct mm_area *vma)
>  {
>  	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
>
> diff --git a/drivers/tee/optee/call.c b/drivers/tee/optee/call.c
> index 16eb953e14bb..24db89ca4e26 100644
> --- a/drivers/tee/optee/call.c
> +++ b/drivers/tee/optee/call.c
> @@ -611,7 +611,7 @@ static bool is_normal_memory(pgprot_t p)
>  static int __check_mem_type(struct mm_struct *mm, unsigned long start,
>  				unsigned long end)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, start);
>
>  	for_each_vma_range(vmi, vma, end) {
> diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
> index daf6e5cfd59a..c6b120e0d3ae 100644
> --- a/drivers/tee/tee_shm.c
> +++ b/drivers/tee/tee_shm.c
> @@ -434,7 +434,7 @@ static int tee_shm_fop_release(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>
> -static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int tee_shm_fop_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct tee_shm *shm = filp->private_data;
>  	size_t size = vma->vm_end - vma->vm_start;
> diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
> index d93ed4e86a17..93d41eddc33c 100644
> --- a/drivers/uio/uio.c
> +++ b/drivers/uio/uio.c
> @@ -669,7 +669,7 @@ static ssize_t uio_write(struct file *filep, const char __user *buf,
>  	return retval ? retval : sizeof(s32);
>  }
>
> -static int uio_find_mem_index(struct vm_area_struct *vma)
> +static int uio_find_mem_index(struct mm_area *vma)
>  {
>  	struct uio_device *idev = vma->vm_private_data;
>
> @@ -726,7 +726,7 @@ static const struct vm_operations_struct uio_logical_vm_ops = {
>  	.fault = uio_vma_fault,
>  };
>
> -static int uio_mmap_logical(struct vm_area_struct *vma)
> +static int uio_mmap_logical(struct mm_area *vma)
>  {
>  	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_ops = &uio_logical_vm_ops;
> @@ -739,7 +739,7 @@ static const struct vm_operations_struct uio_physical_vm_ops = {
>  #endif
>  };
>
> -static int uio_mmap_physical(struct vm_area_struct *vma)
> +static int uio_mmap_physical(struct mm_area *vma)
>  {
>  	struct uio_device *idev = vma->vm_private_data;
>  	int mi = uio_find_mem_index(vma);
> @@ -774,7 +774,7 @@ static int uio_mmap_physical(struct vm_area_struct *vma)
>  			       vma->vm_page_prot);
>  }
>
> -static int uio_mmap_dma_coherent(struct vm_area_struct *vma)
> +static int uio_mmap_dma_coherent(struct mm_area *vma)
>  {
>  	struct uio_device *idev = vma->vm_private_data;
>  	struct uio_mem *mem;
> @@ -817,7 +817,7 @@ static int uio_mmap_dma_coherent(struct vm_area_struct *vma)
>  	return ret;
>  }
>
> -static int uio_mmap(struct file *filep, struct vm_area_struct *vma)
> +static int uio_mmap(struct file *filep, struct mm_area *vma)
>  {
>  	struct uio_listener *listener = filep->private_data;
>  	struct uio_device *idev = listener->dev;
> diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
> index 1b19b5647495..5283c75d0860 100644
> --- a/drivers/uio/uio_hv_generic.c
> +++ b/drivers/uio/uio_hv_generic.c
> @@ -136,7 +136,7 @@ static void hv_uio_rescind(struct vmbus_channel *channel)
>   */
>  static int hv_uio_ring_mmap(struct file *filp, struct kobject *kobj,
>  			    const struct bin_attribute *attr,
> -			    struct vm_area_struct *vma)
> +			    struct mm_area *vma)
>  {
>  	struct vmbus_channel *channel
>  		= container_of(kobj, struct vmbus_channel, kobj);
> diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
> index f6ce6e26e0d4..328bdbc57cf0 100644
> --- a/drivers/usb/core/devio.c
> +++ b/drivers/usb/core/devio.c
> @@ -205,7 +205,7 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
>  	}
>  }
>
> -static void usbdev_vm_open(struct vm_area_struct *vma)
> +static void usbdev_vm_open(struct mm_area *vma)
>  {
>  	struct usb_memory *usbm = vma->vm_private_data;
>  	unsigned long flags;
> @@ -215,7 +215,7 @@ static void usbdev_vm_open(struct vm_area_struct *vma)
>  	spin_unlock_irqrestore(&usbm->ps->lock, flags);
>  }
>
> -static void usbdev_vm_close(struct vm_area_struct *vma)
> +static void usbdev_vm_close(struct mm_area *vma)
>  {
>  	struct usb_memory *usbm = vma->vm_private_data;
>
> @@ -227,7 +227,7 @@ static const struct vm_operations_struct usbdev_vm_ops = {
>  	.close = usbdev_vm_close
>  };
>
> -static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
> +static int usbdev_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct usb_memory *usbm = NULL;
>  	struct usb_dev_state *ps = file->private_data;
> diff --git a/drivers/usb/gadget/function/uvc_queue.c b/drivers/usb/gadget/function/uvc_queue.c
> index 9a1bbd79ff5a..519586dfeb0f 100644
> --- a/drivers/usb/gadget/function/uvc_queue.c
> +++ b/drivers/usb/gadget/function/uvc_queue.c
> @@ -212,7 +212,7 @@ __poll_t uvcg_queue_poll(struct uvc_video_queue *queue, struct file *file,
>  	return vb2_poll(&queue->queue, file, wait);
>  }
>
> -int uvcg_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma)
> +int uvcg_queue_mmap(struct uvc_video_queue *queue, struct mm_area *vma)
>  {
>  	return vb2_mmap(&queue->queue, vma);
>  }
> diff --git a/drivers/usb/gadget/function/uvc_queue.h b/drivers/usb/gadget/function/uvc_queue.h
> index b54becc570a3..4f8a2d2ef2ae 100644
> --- a/drivers/usb/gadget/function/uvc_queue.h
> +++ b/drivers/usb/gadget/function/uvc_queue.h
> @@ -83,7 +83,7 @@ int uvcg_dequeue_buffer(struct uvc_video_queue *queue,
>  __poll_t uvcg_queue_poll(struct uvc_video_queue *queue,
>  			     struct file *file, poll_table *wait);
>
> -int uvcg_queue_mmap(struct uvc_video_queue *queue, struct vm_area_struct *vma);
> +int uvcg_queue_mmap(struct uvc_video_queue *queue, struct mm_area *vma);
>
>  #ifndef CONFIG_MMU
>  unsigned long uvcg_queue_get_unmapped_area(struct uvc_video_queue *queue,
> diff --git a/drivers/usb/gadget/function/uvc_v4l2.c b/drivers/usb/gadget/function/uvc_v4l2.c
> index fc9a8d31a1e9..f0016d03f5bb 100644
> --- a/drivers/usb/gadget/function/uvc_v4l2.c
> +++ b/drivers/usb/gadget/function/uvc_v4l2.c
> @@ -702,7 +702,7 @@ uvc_v4l2_release(struct file *file)
>  }
>
>  static int
> -uvc_v4l2_mmap(struct file *file, struct vm_area_struct *vma)
> +uvc_v4l2_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct video_device *vdev = video_devdata(file);
>  	struct uvc_device *uvc = video_get_drvdata(vdev);
> diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c
> index c93b43f5bc46..765efbb61818 100644
> --- a/drivers/usb/mon/mon_bin.c
> +++ b/drivers/usb/mon/mon_bin.c
> @@ -1222,7 +1222,7 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait)
>   * open and close: just keep track of how many times the device is
>   * mapped, to use the proper memory allocation function.
>   */
> -static void mon_bin_vma_open(struct vm_area_struct *vma)
> +static void mon_bin_vma_open(struct mm_area *vma)
>  {
>  	struct mon_reader_bin *rp = vma->vm_private_data;
>  	unsigned long flags;
> @@ -1232,7 +1232,7 @@ static void mon_bin_vma_open(struct vm_area_struct *vma)
>  	spin_unlock_irqrestore(&rp->b_lock, flags);
>  }
>
> -static void mon_bin_vma_close(struct vm_area_struct *vma)
> +static void mon_bin_vma_close(struct mm_area *vma)
>  {
>  	unsigned long flags;
>
> @@ -1272,7 +1272,7 @@ static const struct vm_operations_struct mon_bin_vm_ops = {
>  	.fault =    mon_bin_vma_fault,
>  };
>
> -static int mon_bin_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int mon_bin_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	/* don't do anything here: "fault" will set up page table entries */
>  	vma->vm_ops = &mon_bin_vm_ops;
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> index 58116f89d8da..372456ffd5a3 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> @@ -532,7 +532,7 @@ static const struct vm_operations_struct vduse_domain_mmap_ops = {
>  	.fault = vduse_domain_mmap_fault,
>  };
>
> -static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma)
> +static int vduse_domain_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct vduse_iova_domain *domain = file->private_data;
>
> diff --git a/drivers/vfio/cdx/main.c b/drivers/vfio/cdx/main.c
> index 5dd5f5ad7686..81d6e3d2293d 100644
> --- a/drivers/vfio/cdx/main.c
> +++ b/drivers/vfio/cdx/main.c
> @@ -233,7 +233,7 @@ static long vfio_cdx_ioctl(struct vfio_device *core_vdev,
>  }
>
>  static int vfio_cdx_mmap_mmio(struct vfio_cdx_region region,
> -			      struct vm_area_struct *vma)
> +			      struct mm_area *vma)
>  {
>  	u64 size = vma->vm_end - vma->vm_start;
>  	u64 pgoff, base;
> @@ -253,7 +253,7 @@ static int vfio_cdx_mmap_mmio(struct vfio_cdx_region region,
>  }
>
>  static int vfio_cdx_mmap(struct vfio_device *core_vdev,
> -			 struct vm_area_struct *vma)
> +			 struct mm_area *vma)
>  {
>  	struct vfio_cdx_device *vdev =
>  		container_of(core_vdev, struct vfio_cdx_device, vdev);
> diff --git a/drivers/vfio/fsl-mc/vfio_fsl_mc.c b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
> index f65d91c01f2e..27b03c09f016 100644
> --- a/drivers/vfio/fsl-mc/vfio_fsl_mc.c
> +++ b/drivers/vfio/fsl-mc/vfio_fsl_mc.c
> @@ -357,7 +357,7 @@ static ssize_t vfio_fsl_mc_write(struct vfio_device *core_vdev,
>  }
>
>  static int vfio_fsl_mc_mmap_mmio(struct vfio_fsl_mc_region region,
> -				 struct vm_area_struct *vma)
> +				 struct mm_area *vma)
>  {
>  	u64 size = vma->vm_end - vma->vm_start;
>  	u64 pgoff, base;
> @@ -382,7 +382,7 @@ static int vfio_fsl_mc_mmap_mmio(struct vfio_fsl_mc_region region,
>  }
>
>  static int vfio_fsl_mc_mmap(struct vfio_device *core_vdev,
> -			    struct vm_area_struct *vma)
> +			    struct mm_area *vma)
>  {
>  	struct vfio_fsl_mc_device *vdev =
>  		container_of(core_vdev, struct vfio_fsl_mc_device, vdev);
> diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
> index 451c639299eb..e61c19772dc2 100644
> --- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
> +++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
> @@ -1218,7 +1218,7 @@ static int hisi_acc_pci_rw_access_check(struct vfio_device *core_vdev,
>  }
>
>  static int hisi_acc_vfio_pci_mmap(struct vfio_device *core_vdev,
> -				  struct vm_area_struct *vma)
> +				  struct mm_area *vma)
>  {
>  	struct vfio_pci_core_device *vdev =
>  		container_of(core_vdev, struct vfio_pci_core_device, vdev);
> diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
> index e5ac39c4cc6b..935332b63571 100644
> --- a/drivers/vfio/pci/nvgrace-gpu/main.c
> +++ b/drivers/vfio/pci/nvgrace-gpu/main.c
> @@ -131,7 +131,7 @@ static void nvgrace_gpu_close_device(struct vfio_device *core_vdev)
>  }
>
>  static int nvgrace_gpu_mmap(struct vfio_device *core_vdev,
> -			    struct vm_area_struct *vma)
> +			    struct mm_area *vma)
>  {
>  	struct nvgrace_gpu_pci_core_device *nvdev =
>  		container_of(core_vdev, struct nvgrace_gpu_pci_core_device,
> diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
> index 35f9046af315..3e24952b7309 100644
> --- a/drivers/vfio/pci/vfio_pci_core.c
> +++ b/drivers/vfio/pci/vfio_pci_core.c
> @@ -1629,7 +1629,7 @@ void vfio_pci_memory_unlock_and_restore(struct vfio_pci_core_device *vdev, u16 c
>  	up_write(&vdev->memory_lock);
>  }
>
> -static unsigned long vma_to_pfn(struct vm_area_struct *vma)
> +static unsigned long vma_to_pfn(struct mm_area *vma)
>  {
>  	struct vfio_pci_core_device *vdev = vma->vm_private_data;
>  	int index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
> @@ -1644,7 +1644,7 @@ static unsigned long vma_to_pfn(struct vm_area_struct *vma)
>  static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
>  					   unsigned int order)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct vfio_pci_core_device *vdev = vma->vm_private_data;
>  	unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
>  	vm_fault_t ret = VM_FAULT_SIGBUS;
> @@ -1708,7 +1708,7 @@ static const struct vm_operations_struct vfio_pci_mmap_ops = {
>  #endif
>  };
>
> -int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma)
> +int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct mm_area *vma)
>  {
>  	struct vfio_pci_core_device *vdev =
>  		container_of(core_vdev, struct vfio_pci_core_device, vdev);
> diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
> index 3bf1043cd795..194cd554d8e8 100644
> --- a/drivers/vfio/platform/vfio_platform_common.c
> +++ b/drivers/vfio/platform/vfio_platform_common.c
> @@ -550,7 +550,7 @@ ssize_t vfio_platform_write(struct vfio_device *core_vdev, const char __user *bu
>  EXPORT_SYMBOL_GPL(vfio_platform_write);
>
>  static int vfio_platform_mmap_mmio(struct vfio_platform_region region,
> -				   struct vm_area_struct *vma)
> +				   struct mm_area *vma)
>  {
>  	u64 req_len, pgoff, req_start;
>
> @@ -569,7 +569,7 @@ static int vfio_platform_mmap_mmio(struct vfio_platform_region region,
>  			       req_len, vma->vm_page_prot);
>  }
>
> -int vfio_platform_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma)
> +int vfio_platform_mmap(struct vfio_device *core_vdev, struct mm_area *vma)
>  {
>  	struct vfio_platform_device *vdev =
>  		container_of(core_vdev, struct vfio_platform_device, vdev);
> diff --git a/drivers/vfio/platform/vfio_platform_private.h b/drivers/vfio/platform/vfio_platform_private.h
> index 8d8fab516849..a7355a03e43c 100644
> --- a/drivers/vfio/platform/vfio_platform_private.h
> +++ b/drivers/vfio/platform/vfio_platform_private.h
> @@ -92,7 +92,7 @@ ssize_t vfio_platform_write(struct vfio_device *core_vdev,
>  			    const char __user *buf,
>  			    size_t count, loff_t *ppos);
>  int vfio_platform_mmap(struct vfio_device *core_vdev,
> -		       struct vm_area_struct *vma);
> +		       struct mm_area *vma);
>
>  int vfio_platform_irq_init(struct vfio_platform_device *vdev);
>  void vfio_platform_irq_cleanup(struct vfio_platform_device *vdev);
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 0ac56072af9f..acf89ab4e254 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -518,7 +518,7 @@ static void vfio_batch_fini(struct vfio_batch *batch)
>  		free_page((unsigned long)batch->pages);
>  }
>
> -static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
> +static int follow_fault_pfn(struct mm_area *vma, struct mm_struct *mm,
>  			    unsigned long vaddr, unsigned long *pfn,
>  			    unsigned long *addr_mask, bool write_fault)
>  {
> @@ -567,7 +567,7 @@ static long vaddr_get_pfns(struct mm_struct *mm, unsigned long vaddr,
>  			   struct vfio_batch *batch)
>  {
>  	unsigned long pin_pages = min_t(unsigned long, npages, batch->capacity);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned int flags = 0;
>  	long ret;
>
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index 1fd261efc582..24eca55e4635 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -1339,7 +1339,7 @@ static ssize_t vfio_device_fops_write(struct file *filep,
>  	return device->ops->write(device, buf, count, ppos);
>  }
>
> -static int vfio_device_fops_mmap(struct file *filep, struct vm_area_struct *vma)
> +static int vfio_device_fops_mmap(struct file *filep, struct mm_area *vma)
>  {
>  	struct vfio_device_file *df = filep->private_data;
>  	struct vfio_device *device = df->device;
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 5a49b5a6d496..00dac20fc834 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -1048,7 +1048,7 @@ static int vhost_vdpa_va_map(struct vhost_vdpa *v,
>  	struct vhost_dev *dev = &v->vdev;
>  	u64 offset, map_size, map_iova = iova;
>  	struct vdpa_map_file *map_file;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret = 0;
>
>  	mmap_read_lock(dev->mm);
> @@ -1486,7 +1486,7 @@ static vm_fault_t vhost_vdpa_fault(struct vm_fault *vmf)
>  	struct vdpa_device *vdpa = v->vdpa;
>  	const struct vdpa_config_ops *ops = vdpa->config;
>  	struct vdpa_notification_area notify;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	u16 index = vma->vm_pgoff;
>
>  	notify = ops->get_vq_notification(vdpa, index);
> @@ -1498,7 +1498,7 @@ static const struct vm_operations_struct vhost_vdpa_vm_ops = {
>  	.fault = vhost_vdpa_fault,
>  };
>
> -static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
> +static int vhost_vdpa_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct vhost_vdpa *v = vma->vm_file->private_data;
>  	struct vdpa_device *vdpa = v->vdpa;
> diff --git a/drivers/video/fbdev/68328fb.c b/drivers/video/fbdev/68328fb.c
> index c24156eb3d0f..8b63b4e1aab0 100644
> --- a/drivers/video/fbdev/68328fb.c
> +++ b/drivers/video/fbdev/68328fb.c
> @@ -91,7 +91,7 @@ static int mc68x328fb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
>  			 u_int transp, struct fb_info *info);
>  static int mc68x328fb_pan_display(struct fb_var_screeninfo *var,
>  			   struct fb_info *info);
> -static int mc68x328fb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int mc68x328fb_mmap(struct fb_info *info, struct mm_area *vma);
>
>  static const struct fb_ops mc68x328fb_ops = {
>  	.owner		= THIS_MODULE,
> @@ -386,7 +386,7 @@ static int mc68x328fb_pan_display(struct fb_var_screeninfo *var,
>       *  Most drivers don't need their own mmap function
>       */
>
> -static int mc68x328fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int mc68x328fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  #ifndef MMU
>  	/* this is uClinux (no MMU) specific code */
> diff --git a/drivers/video/fbdev/atafb.c b/drivers/video/fbdev/atafb.c
> index b8ed1c537293..e6fbe997313f 100644
> --- a/drivers/video/fbdev/atafb.c
> +++ b/drivers/video/fbdev/atafb.c
> @@ -291,7 +291,7 @@ static int *MV300_reg = MV300_reg_8bit;
>   *			unsigned long arg);
>   *
>   *	* perform fb specific mmap *
> - *	int (*fb_mmap)(struct fb_info *info, struct vm_area_struct *vma);
> + *	int (*fb_mmap)(struct fb_info *info, struct mm_area *vma);
>   * } ;
>   */
>
> diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c
> index 210fd3ac18a4..e9a48e71fbd4 100644
> --- a/drivers/video/fbdev/aty/atyfb_base.c
> +++ b/drivers/video/fbdev/aty/atyfb_base.c
> @@ -253,7 +253,7 @@ static int atyfb_compat_ioctl(struct fb_info *info, u_int cmd, u_long arg)
>  #endif
>
>  #ifdef __sparc__
> -static int atyfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int atyfb_mmap(struct fb_info *info, struct mm_area *vma);
>  #endif
>  static int atyfb_sync(struct fb_info *info);
>
> @@ -1932,7 +1932,7 @@ static int atyfb_sync(struct fb_info *info)
>  }
>
>  #ifdef __sparc__
> -static int atyfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int atyfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct atyfb_par *par = (struct atyfb_par *) info->par;
>  	unsigned int size, page, map_size = 0;
> diff --git a/drivers/video/fbdev/au1100fb.c b/drivers/video/fbdev/au1100fb.c
> index 6251a6b07b3a..4ba693d12560 100644
> --- a/drivers/video/fbdev/au1100fb.c
> +++ b/drivers/video/fbdev/au1100fb.c
> @@ -340,7 +340,7 @@ int au1100fb_fb_pan_display(struct fb_var_screeninfo *var, struct fb_info *fbi)
>   * Map video memory in user space. We don't use the generic fb_mmap method mainly
>   * to allow the use of the TLB streaming flag (CCA=6)
>   */
> -int au1100fb_fb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
> +int au1100fb_fb_mmap(struct fb_info *fbi, struct mm_area *vma)
>  {
>  	struct au1100fb_device *fbdev = to_au1100fb_device(fbi);
>
> diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
> index ed770222660b..6f741b3ed47f 100644
> --- a/drivers/video/fbdev/au1200fb.c
> +++ b/drivers/video/fbdev/au1200fb.c
> @@ -1232,7 +1232,7 @@ static int au1200fb_fb_blank(int blank_mode, struct fb_info *fbi)
>   * Map video memory in user space. We don't use the generic fb_mmap
>   * method mainly to allow the use of the TLB streaming flag (CCA=6)
>   */
> -static int au1200fb_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int au1200fb_fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct au1200fb_device *fbdev = info->par;
>
> diff --git a/drivers/video/fbdev/bw2.c b/drivers/video/fbdev/bw2.c
> index e757462af0a6..e56b43e62c57 100644
> --- a/drivers/video/fbdev/bw2.c
> +++ b/drivers/video/fbdev/bw2.c
> @@ -31,7 +31,7 @@
>
>  static int bw2_blank(int, struct fb_info *);
>
> -static int bw2_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int bw2_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int bw2_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -154,7 +154,7 @@ static const struct sbus_mmap_map bw2_mmap_map[] = {
>  	{ .size = 0 }
>  };
>
> -static int bw2_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int bw2_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct bw2_par *par = (struct bw2_par *)info->par;
>
> diff --git a/drivers/video/fbdev/cg14.c b/drivers/video/fbdev/cg14.c
> index 5389f8f07346..bc1619331049 100644
> --- a/drivers/video/fbdev/cg14.c
> +++ b/drivers/video/fbdev/cg14.c
> @@ -33,7 +33,7 @@ static int cg14_setcolreg(unsigned, unsigned, unsigned, unsigned,
>  			 unsigned, struct fb_info *);
>  static int cg14_pan_display(struct fb_var_screeninfo *, struct fb_info *);
>
> -static int cg14_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int cg14_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int cg14_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -258,7 +258,7 @@ static int cg14_setcolreg(unsigned regno,
>  	return 0;
>  }
>
> -static int cg14_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int cg14_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct cg14_par *par = (struct cg14_par *) info->par;
>
> diff --git a/drivers/video/fbdev/cg3.c b/drivers/video/fbdev/cg3.c
> index a58a483014e6..e53243deaf87 100644
> --- a/drivers/video/fbdev/cg3.c
> +++ b/drivers/video/fbdev/cg3.c
> @@ -33,7 +33,7 @@ static int cg3_setcolreg(unsigned, unsigned, unsigned, unsigned,
>  			 unsigned, struct fb_info *);
>  static int cg3_blank(int, struct fb_info *);
>
> -static int cg3_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int cg3_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int cg3_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -218,7 +218,7 @@ static const struct sbus_mmap_map cg3_mmap_map[] = {
>  	{ .size = 0 }
>  };
>
> -static int cg3_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int cg3_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct cg3_par *par = (struct cg3_par *)info->par;
>
> diff --git a/drivers/video/fbdev/cg6.c b/drivers/video/fbdev/cg6.c
> index 56d74468040a..826bace4fabd 100644
> --- a/drivers/video/fbdev/cg6.c
> +++ b/drivers/video/fbdev/cg6.c
> @@ -39,7 +39,7 @@ static void cg6_copyarea(struct fb_info *info, const struct fb_copyarea *area);
>  static int cg6_sync(struct fb_info *);
>  static int cg6_pan_display(struct fb_var_screeninfo *, struct fb_info *);
>
> -static int cg6_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int cg6_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int cg6_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -589,7 +589,7 @@ static const struct sbus_mmap_map cg6_mmap_map[] = {
>  	{ .size	= 0 }
>  };
>
> -static int cg6_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int cg6_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct cg6_par *par = (struct cg6_par *)info->par;
>
> diff --git a/drivers/video/fbdev/controlfb.c b/drivers/video/fbdev/controlfb.c
> index 5c5284e8ae0e..0301ea641ba3 100644
> --- a/drivers/video/fbdev/controlfb.c
> +++ b/drivers/video/fbdev/controlfb.c
> @@ -729,7 +729,7 @@ static int controlfb_blank(int blank_mode, struct fb_info *info)
>   * Note there's no locking in here; it's done in fb_mmap() in fbmem.c.
>   */
>  static int controlfb_mmap(struct fb_info *info,
> -                       struct vm_area_struct *vma)
> +                       struct mm_area *vma)
>  {
>  	unsigned long mmio_pgoff;
>  	unsigned long start;
> diff --git a/drivers/video/fbdev/core/fb_chrdev.c b/drivers/video/fbdev/core/fb_chrdev.c
> index 4ebd16b7e3b8..50a46c896978 100644
> --- a/drivers/video/fbdev/core/fb_chrdev.c
> +++ b/drivers/video/fbdev/core/fb_chrdev.c
> @@ -311,7 +311,7 @@ static long fb_compat_ioctl(struct file *file, unsigned int cmd,
>  }
>  #endif
>
> -static int fb_mmap(struct file *file, struct vm_area_struct *vma)
> +static int fb_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct fb_info *info = file_fb_info(file);
>  	int res;
> diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
> index 4fc93f253e06..01688f93cc91 100644
> --- a/drivers/video/fbdev/core/fb_defio.c
> +++ b/drivers/video/fbdev/core/fb_defio.c
> @@ -243,7 +243,7 @@ static const struct address_space_operations fb_deferred_io_aops = {
>  	.dirty_folio	= noop_dirty_folio,
>  };
>
> -int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +int fb_deferred_io_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>
> diff --git a/drivers/video/fbdev/core/fb_io_fops.c b/drivers/video/fbdev/core/fb_io_fops.c
> index 3408ff1b2b7a..e00756595b77 100644
> --- a/drivers/video/fbdev/core/fb_io_fops.c
> +++ b/drivers/video/fbdev/core/fb_io_fops.c
> @@ -138,7 +138,7 @@ ssize_t fb_io_write(struct fb_info *info, const char __user *buf, size_t count,
>  }
>  EXPORT_SYMBOL(fb_io_write);
>
> -int fb_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +int fb_io_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	unsigned long start = info->fix.smem_start;
>  	u32 len = info->fix.smem_len;
> diff --git a/drivers/video/fbdev/ep93xx-fb.c b/drivers/video/fbdev/ep93xx-fb.c
> index 801ef427f1ba..cab3e18fb52e 100644
> --- a/drivers/video/fbdev/ep93xx-fb.c
> +++ b/drivers/video/fbdev/ep93xx-fb.c
> @@ -307,7 +307,7 @@ static int ep93xxfb_check_var(struct fb_var_screeninfo *var,
>  	return 0;
>  }
>
> -static int ep93xxfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int ep93xxfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	unsigned int offset = vma->vm_pgoff << PAGE_SHIFT;
>
> diff --git a/drivers/video/fbdev/ffb.c b/drivers/video/fbdev/ffb.c
> index 34b6abff9493..75c2aaf77b81 100644
> --- a/drivers/video/fbdev/ffb.c
> +++ b/drivers/video/fbdev/ffb.c
> @@ -39,7 +39,7 @@ static void ffb_copyarea(struct fb_info *, const struct fb_copyarea *);
>  static int ffb_sync(struct fb_info *);
>  static int ffb_pan_display(struct fb_var_screeninfo *, struct fb_info *);
>
> -static int ffb_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int ffb_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int ffb_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -849,7 +849,7 @@ static const struct sbus_mmap_map ffb_mmap_map[] = {
>  	{ .size = 0 }
>  };
>
> -static int ffb_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int ffb_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct ffb_par *par = (struct ffb_par *)info->par;
>
> diff --git a/drivers/video/fbdev/gbefb.c b/drivers/video/fbdev/gbefb.c
> index 4c36a3e409be..b3a423fbe0e9 100644
> --- a/drivers/video/fbdev/gbefb.c
> +++ b/drivers/video/fbdev/gbefb.c
> @@ -992,7 +992,7 @@ static int gbefb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
>  }
>
>  static int gbefb_mmap(struct fb_info *info,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	unsigned long size = vma->vm_end - vma->vm_start;
>  	unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
> diff --git a/drivers/video/fbdev/leo.c b/drivers/video/fbdev/leo.c
> index b9fb059df2c7..76d44efee3c1 100644
> --- a/drivers/video/fbdev/leo.c
> +++ b/drivers/video/fbdev/leo.c
> @@ -33,7 +33,7 @@ static int leo_setcolreg(unsigned, unsigned, unsigned, unsigned,
>  static int leo_blank(int, struct fb_info *);
>  static int leo_pan_display(struct fb_var_screeninfo *, struct fb_info *);
>
> -static int leo_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int leo_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int leo_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -407,7 +407,7 @@ static const struct sbus_mmap_map leo_mmap_map[] = {
>  	{ .size = 0 }
>  };
>
> -static int leo_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int leo_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct leo_par *par = (struct leo_par *)info->par;
>
> diff --git a/drivers/video/fbdev/omap/omapfb.h b/drivers/video/fbdev/omap/omapfb.h
> index ab1cb6e7f5f8..cfd41ba0dac7 100644
> --- a/drivers/video/fbdev/omap/omapfb.h
> +++ b/drivers/video/fbdev/omap/omapfb.h
> @@ -159,7 +159,7 @@ struct lcd_ctrl {
>  	int		(*setup_mem)	  (int plane, size_t size,
>  					   int mem_type, unsigned long *paddr);
>  	int		(*mmap)		  (struct fb_info *info,
> -					   struct vm_area_struct *vma);
> +					   struct mm_area *vma);
>  	int		(*set_scale)	  (int plane,
>  					   int orig_width, int orig_height,
>  					   int out_width, int out_height);
> diff --git a/drivers/video/fbdev/omap/omapfb_main.c b/drivers/video/fbdev/omap/omapfb_main.c
> index 2682b20d184a..f6781f51b2cc 100644
> --- a/drivers/video/fbdev/omap/omapfb_main.c
> +++ b/drivers/video/fbdev/omap/omapfb_main.c
> @@ -1197,7 +1197,7 @@ static int omapfb_ioctl(struct fb_info *fbi, unsigned int cmd,
>  	return r;
>  }
>
> -static int omapfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int omapfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct omapfb_plane_struct *plane = info->par;
>  	struct omapfb_device *fbdev = plane->fbdev;
> diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-main.c b/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
> index 211f23648686..081d6ea622bb 100644
> --- a/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
> +++ b/drivers/video/fbdev/omap2/omapfb/omapfb-main.c
> @@ -1063,7 +1063,7 @@ static int omapfb_pan_display(struct fb_var_screeninfo *var,
>  	return r;
>  }
>
> -static void mmap_user_open(struct vm_area_struct *vma)
> +static void mmap_user_open(struct mm_area *vma)
>  {
>  	struct omapfb2_mem_region *rg = vma->vm_private_data;
>
> @@ -1072,7 +1072,7 @@ static void mmap_user_open(struct vm_area_struct *vma)
>  	omapfb_put_mem_region(rg);
>  }
>
> -static void mmap_user_close(struct vm_area_struct *vma)
> +static void mmap_user_close(struct mm_area *vma)
>  {
>  	struct omapfb2_mem_region *rg = vma->vm_private_data;
>
> @@ -1086,7 +1086,7 @@ static const struct vm_operations_struct mmap_user_ops = {
>  	.close = mmap_user_close,
>  };
>
> -static int omapfb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
> +static int omapfb_mmap(struct fb_info *fbi, struct mm_area *vma)
>  {
>  	struct omapfb_info *ofbi = FB2OFB(fbi);
>  	struct fb_fix_screeninfo *fix = &fbi->fix;
> diff --git a/drivers/video/fbdev/p9100.c b/drivers/video/fbdev/p9100.c
> index 0bc0f78fe4b9..62fdfe8c682d 100644
> --- a/drivers/video/fbdev/p9100.c
> +++ b/drivers/video/fbdev/p9100.c
> @@ -31,7 +31,7 @@ static int p9100_setcolreg(unsigned, unsigned, unsigned, unsigned,
>  			   unsigned, struct fb_info *);
>  static int p9100_blank(int, struct fb_info *);
>
> -static int p9100_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int p9100_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int p9100_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -211,7 +211,7 @@ static const struct sbus_mmap_map p9100_mmap_map[] = {
>  	{ 0,			0,		0		    }
>  };
>
> -static int p9100_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int p9100_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct p9100_par *par = (struct p9100_par *)info->par;
>
> diff --git a/drivers/video/fbdev/ps3fb.c b/drivers/video/fbdev/ps3fb.c
> index dbcda307f6a6..55796e1765a7 100644
> --- a/drivers/video/fbdev/ps3fb.c
> +++ b/drivers/video/fbdev/ps3fb.c
> @@ -704,7 +704,7 @@ static int ps3fb_pan_display(struct fb_var_screeninfo *var,
>       *  As we have a virtual frame buffer, we need our own mmap function
>       */
>
> -static int ps3fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int ps3fb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	int r;
>
> diff --git a/drivers/video/fbdev/pxa3xx-gcu.c b/drivers/video/fbdev/pxa3xx-gcu.c
> index 4a78b387b343..6a4ffc17299c 100644
> --- a/drivers/video/fbdev/pxa3xx-gcu.c
> +++ b/drivers/video/fbdev/pxa3xx-gcu.c
> @@ -469,7 +469,7 @@ pxa3xx_gcu_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
>  }
>
>  static int
> -pxa3xx_gcu_mmap(struct file *file, struct vm_area_struct *vma)
> +pxa3xx_gcu_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned int size = vma->vm_end - vma->vm_start;
>  	struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file);
> diff --git a/drivers/video/fbdev/sa1100fb.c b/drivers/video/fbdev/sa1100fb.c
> index 0d362d2bf0e3..d21ae655cca4 100644
> --- a/drivers/video/fbdev/sa1100fb.c
> +++ b/drivers/video/fbdev/sa1100fb.c
> @@ -556,7 +556,7 @@ static int sa1100fb_blank(int blank, struct fb_info *info)
>  }
>
>  static int sa1100fb_mmap(struct fb_info *info,
> -			 struct vm_area_struct *vma)
> +			 struct mm_area *vma)
>  {
>  	struct sa1100fb_info *fbi =
>  		container_of(info, struct sa1100fb_info, fb);
> diff --git a/drivers/video/fbdev/sbuslib.c b/drivers/video/fbdev/sbuslib.c
> index 4c79654bda30..8fced2f56b38 100644
> --- a/drivers/video/fbdev/sbuslib.c
> +++ b/drivers/video/fbdev/sbuslib.c
> @@ -42,7 +42,7 @@ int sbusfb_mmap_helper(const struct sbus_mmap_map *map,
>  		       unsigned long physbase,
>  		       unsigned long fbsize,
>  		       unsigned long iospace,
> -		       struct vm_area_struct *vma)
> +		       struct mm_area *vma)
>  {
>  	unsigned int size, page, r, map_size;
>  	unsigned long map_offset = 0;
> diff --git a/drivers/video/fbdev/sbuslib.h b/drivers/video/fbdev/sbuslib.h
> index e9af2dc93f94..75e60f30957f 100644
> --- a/drivers/video/fbdev/sbuslib.h
> +++ b/drivers/video/fbdev/sbuslib.h
> @@ -6,7 +6,7 @@
>  struct device_node;
>  struct fb_info;
>  struct fb_var_screeninfo;
> -struct vm_area_struct;
> +struct mm_area;
>
>  struct sbus_mmap_map {
>  	unsigned long voff;
> @@ -22,7 +22,7 @@ extern void sbusfb_fill_var(struct fb_var_screeninfo *var,
>  extern int sbusfb_mmap_helper(const struct sbus_mmap_map *map,
>  			      unsigned long physbase, unsigned long fbsize,
>  			      unsigned long iospace,
> -			      struct vm_area_struct *vma);
> +			      struct mm_area *vma);
>  int sbusfb_ioctl_helper(unsigned long cmd, unsigned long arg,
>  			struct fb_info *info,
>  			int type, int fb_depth, unsigned long fb_size);
> diff --git a/drivers/video/fbdev/sh_mobile_lcdcfb.c b/drivers/video/fbdev/sh_mobile_lcdcfb.c
> index dd950e4ab5ce..4b53eabd93fb 100644
> --- a/drivers/video/fbdev/sh_mobile_lcdcfb.c
> +++ b/drivers/video/fbdev/sh_mobile_lcdcfb.c
> @@ -1478,7 +1478,7 @@ static int sh_mobile_lcdc_overlay_blank(int blank, struct fb_info *info)
>  }
>
>  static int
> -sh_mobile_lcdc_overlay_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +sh_mobile_lcdc_overlay_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct sh_mobile_lcdc_overlay *ovl = info->par;
>
> @@ -1947,7 +1947,7 @@ static int sh_mobile_lcdc_blank(int blank, struct fb_info *info)
>  }
>
>  static int
> -sh_mobile_lcdc_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +sh_mobile_lcdc_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct sh_mobile_lcdc_chan *ch = info->par;
>
> diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
> index 5f0dd01fd834..0cf731d1c04c 100644
> --- a/drivers/video/fbdev/smscufx.c
> +++ b/drivers/video/fbdev/smscufx.c
> @@ -773,7 +773,7 @@ static int ufx_set_vid_mode(struct ufx_data *dev, struct fb_var_screeninfo *var)
>  	return 0;
>  }
>
> -static int ufx_ops_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int ufx_ops_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	unsigned long start = vma->vm_start;
>  	unsigned long size = vma->vm_end - vma->vm_start;
> diff --git a/drivers/video/fbdev/tcx.c b/drivers/video/fbdev/tcx.c
> index f9a0085ad72b..fef8f2c55b15 100644
> --- a/drivers/video/fbdev/tcx.c
> +++ b/drivers/video/fbdev/tcx.c
> @@ -34,7 +34,7 @@ static int tcx_setcolreg(unsigned, unsigned, unsigned, unsigned,
>  static int tcx_blank(int, struct fb_info *);
>  static int tcx_pan_display(struct fb_var_screeninfo *, struct fb_info *);
>
> -static int tcx_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +static int tcx_sbusfb_mmap(struct fb_info *info, struct mm_area *vma);
>  static int tcx_sbusfb_ioctl(struct fb_info *info, unsigned int cmd, unsigned long arg);
>
>  /*
> @@ -292,7 +292,7 @@ static const struct sbus_mmap_map __tcx_mmap_map[TCX_MMAP_ENTRIES] = {
>  	{ .size = 0 }
>  };
>
> -static int tcx_sbusfb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int tcx_sbusfb_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	struct tcx_par *par = (struct tcx_par *)info->par;
>
> diff --git a/drivers/video/fbdev/udlfb.c b/drivers/video/fbdev/udlfb.c
> index acadf0eb450c..bcffed2bac09 100644
> --- a/drivers/video/fbdev/udlfb.c
> +++ b/drivers/video/fbdev/udlfb.c
> @@ -321,7 +321,7 @@ static int dlfb_set_video_mode(struct dlfb_data *dlfb,
>  	return retval;
>  }
>
> -static int dlfb_ops_mmap(struct fb_info *info, struct vm_area_struct *vma)
> +static int dlfb_ops_mmap(struct fb_info *info, struct mm_area *vma)
>  {
>  	unsigned long start = vma->vm_start;
>  	unsigned long size = vma->vm_end - vma->vm_start;
> diff --git a/drivers/video/fbdev/vfb.c b/drivers/video/fbdev/vfb.c
> index 5b7965f36c5e..5836aa107f86 100644
> --- a/drivers/video/fbdev/vfb.c
> +++ b/drivers/video/fbdev/vfb.c
> @@ -76,7 +76,7 @@ static int vfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
>  static int vfb_pan_display(struct fb_var_screeninfo *var,
>  			   struct fb_info *info);
>  static int vfb_mmap(struct fb_info *info,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>
>  static const struct fb_ops vfb_ops = {
>  	.owner		= THIS_MODULE,
> @@ -380,7 +380,7 @@ static int vfb_pan_display(struct fb_var_screeninfo *var,
>       */
>
>  static int vfb_mmap(struct fb_info *info,
> -		    struct vm_area_struct *vma)
> +		    struct mm_area *vma)
>  {
>  	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>
> diff --git a/drivers/virt/acrn/mm.c b/drivers/virt/acrn/mm.c
> index 4c2f28715b70..eeec17237749 100644
> --- a/drivers/virt/acrn/mm.c
> +++ b/drivers/virt/acrn/mm.c
> @@ -163,7 +163,7 @@ int acrn_vm_ram_map(struct acrn_vm *vm, struct acrn_vm_memmap *memmap)
>  	void *remap_vaddr;
>  	int ret, pinned;
>  	u64 user_vm_pa;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if (!vm || !memmap)
>  		return -EINVAL;
> diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
> index f93f73ecefee..62dab536c2f6 100644
> --- a/drivers/xen/gntalloc.c
> +++ b/drivers/xen/gntalloc.c
> @@ -445,7 +445,7 @@ static long gntalloc_ioctl(struct file *filp, unsigned int cmd,
>  	return 0;
>  }
>
> -static void gntalloc_vma_open(struct vm_area_struct *vma)
> +static void gntalloc_vma_open(struct mm_area *vma)
>  {
>  	struct gntalloc_vma_private_data *priv = vma->vm_private_data;
>
> @@ -457,7 +457,7 @@ static void gntalloc_vma_open(struct vm_area_struct *vma)
>  	mutex_unlock(&gref_mutex);
>  }
>
> -static void gntalloc_vma_close(struct vm_area_struct *vma)
> +static void gntalloc_vma_close(struct mm_area *vma)
>  {
>  	struct gntalloc_vma_private_data *priv = vma->vm_private_data;
>  	struct gntalloc_gref *gref, *next;
> @@ -488,7 +488,7 @@ static const struct vm_operations_struct gntalloc_vmops = {
>  	.close = gntalloc_vma_close,
>  };
>
> -static int gntalloc_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int gntalloc_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct gntalloc_file_private_data *priv = filp->private_data;
>  	struct gntalloc_vma_private_data *vm_priv;
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 61faea1f0663..879c601543b8 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -496,7 +496,7 @@ static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
>
>  /* ------------------------------------------------------------------ */
>
> -static void gntdev_vma_open(struct vm_area_struct *vma)
> +static void gntdev_vma_open(struct mm_area *vma)
>  {
>  	struct gntdev_grant_map *map = vma->vm_private_data;
>
> @@ -504,7 +504,7 @@ static void gntdev_vma_open(struct vm_area_struct *vma)
>  	refcount_inc(&map->users);
>  }
>
> -static void gntdev_vma_close(struct vm_area_struct *vma)
> +static void gntdev_vma_close(struct mm_area *vma)
>  {
>  	struct gntdev_grant_map *map = vma->vm_private_data;
>  	struct file *file = vma->vm_file;
> @@ -516,7 +516,7 @@ static void gntdev_vma_close(struct vm_area_struct *vma)
>  	gntdev_put_map(priv, map);
>  }
>
> -static struct page *gntdev_vma_find_special_page(struct vm_area_struct *vma,
> +static struct page *gntdev_vma_find_special_page(struct mm_area *vma,
>  						 unsigned long addr)
>  {
>  	struct gntdev_grant_map *map = vma->vm_private_data;
> @@ -690,7 +690,7 @@ static long gntdev_ioctl_get_offset_for_vaddr(struct gntdev_priv *priv,
>  					      struct ioctl_gntdev_get_offset_for_vaddr __user *u)
>  {
>  	struct ioctl_gntdev_get_offset_for_vaddr op;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct gntdev_grant_map *map;
>  	int rv = -EINVAL;
>
> @@ -1030,7 +1030,7 @@ static long gntdev_ioctl(struct file *flip,
>  	return 0;
>  }
>
> -static int gntdev_mmap(struct file *flip, struct vm_area_struct *vma)
> +static int gntdev_mmap(struct file *flip, struct mm_area *vma)
>  {
>  	struct gntdev_priv *priv = flip->private_data;
>  	int index = vma->vm_pgoff;
> diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
> index 0f0dad427d7e..b0d391ea06a5 100644
> --- a/drivers/xen/privcmd-buf.c
> +++ b/drivers/xen/privcmd-buf.c
> @@ -84,7 +84,7 @@ static int privcmd_buf_release(struct inode *ino, struct file *file)
>  	return 0;
>  }
>
> -static void privcmd_buf_vma_open(struct vm_area_struct *vma)
> +static void privcmd_buf_vma_open(struct mm_area *vma)
>  {
>  	struct privcmd_buf_vma_private *vma_priv = vma->vm_private_data;
>
> @@ -96,7 +96,7 @@ static void privcmd_buf_vma_open(struct vm_area_struct *vma)
>  	mutex_unlock(&vma_priv->file_priv->lock);
>  }
>
> -static void privcmd_buf_vma_close(struct vm_area_struct *vma)
> +static void privcmd_buf_vma_close(struct mm_area *vma)
>  {
>  	struct privcmd_buf_vma_private *vma_priv = vma->vm_private_data;
>  	struct privcmd_buf_private *file_priv;
> @@ -130,7 +130,7 @@ static const struct vm_operations_struct privcmd_buf_vm_ops = {
>  	.fault = privcmd_buf_vma_fault,
>  };
>
> -static int privcmd_buf_mmap(struct file *file, struct vm_area_struct *vma)
> +static int privcmd_buf_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct privcmd_buf_private *file_priv = file->private_data;
>  	struct privcmd_buf_vma_private *vma_priv;
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 13a10f3294a8..6e064d04bab4 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -73,7 +73,7 @@ struct privcmd_data {
>  };
>
>  static int privcmd_vma_range_is_mapped(
> -               struct vm_area_struct *vma,
> +               struct mm_area *vma,
>                 unsigned long addr,
>                 unsigned long nr_pages);
>
> @@ -226,7 +226,7 @@ static int traverse_pages_block(unsigned nelem, size_t size,
>
>  struct mmap_gfn_state {
>  	unsigned long va;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	domid_t domain;
>  };
>
> @@ -234,7 +234,7 @@ static int mmap_gfn_range(void *data, void *state)
>  {
>  	struct privcmd_mmap_entry *msg = data;
>  	struct mmap_gfn_state *st = state;
> -	struct vm_area_struct *vma = st->vma;
> +	struct mm_area *vma = st->vma;
>  	int rc;
>
>  	/* Do not allow range to wrap the address space. */
> @@ -265,7 +265,7 @@ static long privcmd_ioctl_mmap(struct file *file, void __user *udata)
>  	struct privcmd_data *data = file->private_data;
>  	struct privcmd_mmap mmapcmd;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int rc;
>  	LIST_HEAD(pagelist);
>  	struct mmap_gfn_state state;
> @@ -324,7 +324,7 @@ static long privcmd_ioctl_mmap(struct file *file, void __user *udata)
>  struct mmap_batch_state {
>  	domid_t domain;
>  	unsigned long va;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int index;
>  	/* A tristate:
>  	 *      0 for no errors
> @@ -348,7 +348,7 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>  {
>  	xen_pfn_t *gfnp = data;
>  	struct mmap_batch_state *st = state;
> -	struct vm_area_struct *vma = st->vma;
> +	struct mm_area *vma = st->vma;
>  	struct page **pages = vma->vm_private_data;
>  	struct page **cur_pages = NULL;
>  	int ret;
> @@ -428,7 +428,7 @@ static int mmap_return_errors(void *data, int nr, void *state)
>   * the vma with the page info to use later.
>   * Returns: 0 if success, otherwise -errno
>   */
> -static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
> +static int alloc_empty_pages(struct mm_area *vma, int numpgs)
>  {
>  	int rc;
>  	struct page **pages;
> @@ -459,7 +459,7 @@ static long privcmd_ioctl_mmap_batch(
>  	int ret;
>  	struct privcmd_mmapbatch_v2 m;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long nr_pages;
>  	LIST_HEAD(pagelist);
>  	struct mmap_batch_state state;
> @@ -736,7 +736,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
>  {
>  	struct privcmd_data *data = file->private_data;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct privcmd_mmap_resource kdata;
>  	xen_pfn_t *pfns = NULL;
>  	struct xen_mem_acquire_resource xdata = { };
> @@ -1222,7 +1222,7 @@ struct privcmd_kernel_ioreq *alloc_ioreq(struct privcmd_ioeventfd *ioeventfd)
>  {
>  	struct privcmd_kernel_ioreq *kioreq;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct page **pages;
>  	unsigned int *ports;
>  	int ret, size, i;
> @@ -1584,7 +1584,7 @@ static int privcmd_release(struct inode *ino, struct file *file)
>  	return 0;
>  }
>
> -static void privcmd_close(struct vm_area_struct *vma)
> +static void privcmd_close(struct mm_area *vma)
>  {
>  	struct page **pages = vma->vm_private_data;
>  	int numpgs = vma_pages(vma);
> @@ -1617,7 +1617,7 @@ static const struct vm_operations_struct privcmd_vm_ops = {
>  	.fault = privcmd_fault
>  };
>
> -static int privcmd_mmap(struct file *file, struct vm_area_struct *vma)
> +static int privcmd_mmap(struct file *file, struct mm_area *vma)
>  {
>  	/* DONTCOPY is essential for Xen because copy_page_range doesn't know
>  	 * how to recreate these mappings */
> @@ -1640,7 +1640,7 @@ static int is_mapped_fn(pte_t *pte, unsigned long addr, void *data)
>  }
>
>  static int privcmd_vma_range_is_mapped(
> -	           struct vm_area_struct *vma,
> +	           struct mm_area *vma,
>  	           unsigned long addr,
>  	           unsigned long nr_pages)
>  {
> diff --git a/drivers/xen/xenbus/xenbus_dev_backend.c b/drivers/xen/xenbus/xenbus_dev_backend.c
> index edba5fecde4d..356bc765f133 100644
> --- a/drivers/xen/xenbus/xenbus_dev_backend.c
> +++ b/drivers/xen/xenbus/xenbus_dev_backend.c
> @@ -89,7 +89,7 @@ static long xenbus_backend_ioctl(struct file *file, unsigned int cmd,
>  	}
>  }
>
> -static int xenbus_backend_mmap(struct file *file, struct vm_area_struct *vma)
> +static int xenbus_backend_mmap(struct file *file, struct mm_area *vma)
>  {
>  	size_t size = vma->vm_end - vma->vm_start;
>
> diff --git a/drivers/xen/xenfs/xenstored.c b/drivers/xen/xenfs/xenstored.c
> index f59235f9f8a2..a4685a4f5bef 100644
> --- a/drivers/xen/xenfs/xenstored.c
> +++ b/drivers/xen/xenfs/xenstored.c
> @@ -31,7 +31,7 @@ static int xsd_kva_open(struct inode *inode, struct file *file)
>  	return 0;
>  }
>
> -static int xsd_kva_mmap(struct file *file, struct vm_area_struct *vma)
> +static int xsd_kva_mmap(struct file *file, struct mm_area *vma)
>  {
>  	size_t size = vma->vm_end - vma->vm_start;
>
> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
> index f17c4c03db30..a70ef3f8f617 100644
> --- a/drivers/xen/xlate_mmu.c
> +++ b/drivers/xen/xlate_mmu.c
> @@ -66,7 +66,7 @@ struct remap_data {
>  	int nr_fgfn; /* Number of foreign gfn left to map */
>  	pgprot_t prot;
>  	domid_t  domid;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int index;
>  	struct page **pages;
>  	struct xen_remap_gfn_info *info;
> @@ -140,7 +140,7 @@ static int remap_pte_fn(pte_t *ptep, unsigned long addr, void *data)
>  	return 0;
>  }
>
> -int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
> +int xen_xlate_remap_gfn_array(struct mm_area *vma,
>  			      unsigned long addr,
>  			      xen_pfn_t *gfn, int nr,
>  			      int *err_ptr, pgprot_t prot,
> @@ -180,7 +180,7 @@ static void unmap_gfn(unsigned long gfn, void *data)
>  	(void)HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrp);
>  }
>
> -int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
> +int xen_xlate_unmap_gfn_range(struct mm_area *vma,
>  			      int nr, struct page **pages)
>  {
>  	xen_for_each_gfn(pages, nr, unmap_gfn, NULL);
> @@ -282,7 +282,7 @@ static int remap_pfn_fn(pte_t *ptep, unsigned long addr, void *data)
>  }
>
>  /* Used by the privcmd module, but has to be built-in on ARM */
> -int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr, unsigned long len)
> +int xen_remap_vma_range(struct mm_area *vma, unsigned long addr, unsigned long len)
>  {
>  	struct remap_pfn r = {
>  		.mm = vma->vm_mm,
> diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
> index 348cc90bf9c5..b2a7d581805b 100644
> --- a/fs/9p/vfs_file.c
> +++ b/fs/9p/vfs_file.c
> @@ -454,7 +454,7 @@ int v9fs_file_fsync_dotl(struct file *filp, loff_t start, loff_t end,
>  }
>
>  static int
> -v9fs_file_mmap(struct file *filp, struct vm_area_struct *vma)
> +v9fs_file_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	int retval;
>  	struct inode *inode = file_inode(filp);
> @@ -480,7 +480,7 @@ v9fs_vm_page_mkwrite(struct vm_fault *vmf)
>  	return netfs_page_mkwrite(vmf, NULL);
>  }
>
> -static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
> +static void v9fs_mmap_vm_close(struct mm_area *vma)
>  {
>  	struct inode *inode;
>
> diff --git a/fs/afs/file.c b/fs/afs/file.c
> index fc15497608c6..1794c1138669 100644
> --- a/fs/afs/file.c
> +++ b/fs/afs/file.c
> @@ -19,14 +19,14 @@
>  #include <trace/events/netfs.h>
>  #include "internal.h"
>
> -static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
> +static int afs_file_mmap(struct file *file, struct mm_area *vma);
>
>  static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
>  static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
>  				    struct pipe_inode_info *pipe,
>  				    size_t len, unsigned int flags);
> -static void afs_vm_open(struct vm_area_struct *area);
> -static void afs_vm_close(struct vm_area_struct *area);
> +static void afs_vm_open(struct mm_area *area);
> +static void afs_vm_close(struct mm_area *area);
>  static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pgoff_t end_pgoff);
>
>  const struct file_operations afs_file_operations = {
> @@ -492,7 +492,7 @@ static void afs_drop_open_mmap(struct afs_vnode *vnode)
>  /*
>   * Handle setting up a memory mapping on an AFS file.
>   */
> -static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int afs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
>  	int ret;
> @@ -507,12 +507,12 @@ static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
>  	return ret;
>  }
>
> -static void afs_vm_open(struct vm_area_struct *vma)
> +static void afs_vm_open(struct mm_area *vma)
>  {
>  	afs_add_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
>  }
>
> -static void afs_vm_close(struct vm_area_struct *vma)
> +static void afs_vm_close(struct mm_area *vma)
>  {
>  	afs_drop_open_mmap(AFS_FS_I(file_inode(vma->vm_file)));
>  }
> diff --git a/fs/aio.c b/fs/aio.c
> index 7b976b564cfc..140b42dd11ad 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -351,7 +351,7 @@ static void aio_free_ring(struct kioctx *ctx)
>  	}
>  }
>
> -static int aio_ring_mremap(struct vm_area_struct *vma)
> +static int aio_ring_mremap(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -392,7 +392,7 @@ static const struct vm_operations_struct aio_ring_vm_ops = {
>  #endif
>  };
>
> -static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
> +static int aio_ring_mmap(struct file *file, struct mm_area *vma)
>  {
>  	vm_flags_set(vma, VM_DONTEXPAND);
>  	vma->vm_ops = &aio_ring_vm_ops;
> diff --git a/fs/backing-file.c b/fs/backing-file.c
> index 763fbe9b72b2..95e6cea5fa7a 100644
> --- a/fs/backing-file.c
> +++ b/fs/backing-file.c
> @@ -323,7 +323,7 @@ ssize_t backing_file_splice_write(struct pipe_inode_info *pipe,
>  }
>  EXPORT_SYMBOL_GPL(backing_file_splice_write);
>
> -int backing_file_mmap(struct file *file, struct vm_area_struct *vma,
> +int backing_file_mmap(struct file *file, struct mm_area *vma,
>  		      struct backing_file_ctx *ctx)
>  {
>  	const struct cred *old_cred;
> diff --git a/fs/bcachefs/fs.c b/fs/bcachefs/fs.c
> index fc834bdf1f52..0cd13a91456c 100644
> --- a/fs/bcachefs/fs.c
> +++ b/fs/bcachefs/fs.c
> @@ -1403,7 +1403,7 @@ static const struct vm_operations_struct bch_vm_ops = {
>  	.page_mkwrite   = bch2_page_mkwrite,
>  };
>
> -static int bch2_mmap(struct file *file, struct vm_area_struct *vma)
> +static int bch2_mmap(struct file *file, struct mm_area *vma)
>  {
>  	file_accessed(file);
>
> diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
> index 584fa89bc877..b28c8bc74b45 100644
> --- a/fs/binfmt_elf.c
> +++ b/fs/binfmt_elf.c
> @@ -173,7 +173,7 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
>  	elf_addr_t flags = 0;
>  	int ei_index;
>  	const struct cred *cred = current_cred();
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * In some cases (e.g. Hyper-Threading), we want to avoid L1
> diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
> index 262a707d8990..99026a1bf443 100644
> --- a/fs/btrfs/file.c
> +++ b/fs/btrfs/file.c
> @@ -1928,7 +1928,7 @@ static const struct vm_operations_struct btrfs_file_vm_ops = {
>  	.page_mkwrite	= btrfs_page_mkwrite,
>  };
>
> -static int btrfs_file_mmap(struct file	*filp, struct vm_area_struct *vma)
> +static int btrfs_file_mmap(struct file	*filp, struct mm_area *vma)
>  {
>  	struct address_space *mapping = filp->f_mapping;
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index c7abb4a029dc..aafb15b65afa 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -2585,7 +2585,7 @@ EXPORT_SYMBOL(cont_write_begin);
>   * Direct callers of this function should protect against filesystem freezing
>   * using sb_start_pagefault() - sb_end_pagefault() functions.
>   */
> -int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
> +int block_page_mkwrite(struct mm_area *vma, struct vm_fault *vmf,
>  			 get_block_t get_block)
>  {
>  	struct folio *folio = page_folio(vmf->page);
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 29be367905a1..b6a99e66b1af 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1940,7 +1940,7 @@ static void ceph_restore_sigs(sigset_t *oldset)
>   */
>  static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct inode *inode = file_inode(vma->vm_file);
>  	struct ceph_inode_info *ci = ceph_inode(inode);
>  	struct ceph_client *cl = ceph_inode_to_client(inode);
> @@ -2031,7 +2031,7 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
>
>  static vm_fault_t ceph_page_mkwrite(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct inode *inode = file_inode(vma->vm_file);
>  	struct ceph_client *cl = ceph_inode_to_client(inode);
>  	struct ceph_inode_info *ci = ceph_inode(inode);
> @@ -2319,7 +2319,7 @@ static const struct vm_operations_struct ceph_vmops = {
>  	.page_mkwrite	= ceph_page_mkwrite,
>  };
>
> -int ceph_mmap(struct file *file, struct vm_area_struct *vma)
> +int ceph_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct address_space *mapping = file->f_mapping;
>
> diff --git a/fs/ceph/super.h b/fs/ceph/super.h
> index bb0db0cc8003..bdb01ebd811b 100644
> --- a/fs/ceph/super.h
> +++ b/fs/ceph/super.h
> @@ -1286,7 +1286,7 @@ extern void __ceph_touch_fmode(struct ceph_inode_info *ci,
>  /* addr.c */
>  extern const struct address_space_operations ceph_aops;
>  extern const struct netfs_request_ops ceph_netfs_ops;
> -extern int ceph_mmap(struct file *file, struct vm_area_struct *vma);
> +extern int ceph_mmap(struct file *file, struct mm_area *vma);
>  extern int ceph_uninline_data(struct file *file);
>  extern int ceph_pool_perm_check(struct inode *inode, int need);
>  extern void ceph_pool_perm_destroy(struct ceph_mds_client* mdsc);
> diff --git a/fs/coda/file.c b/fs/coda/file.c
> index 148856a582a9..28d6240819a0 100644
> --- a/fs/coda/file.c
> +++ b/fs/coda/file.c
> @@ -120,7 +120,7 @@ coda_file_splice_read(struct file *coda_file, loff_t *ppos,
>  }
>
>  static void
> -coda_vm_open(struct vm_area_struct *vma)
> +coda_vm_open(struct mm_area *vma)
>  {
>  	struct coda_vm_ops *cvm_ops =
>  		container_of(vma->vm_ops, struct coda_vm_ops, vm_ops);
> @@ -132,7 +132,7 @@ coda_vm_open(struct vm_area_struct *vma)
>  }
>
>  static void
> -coda_vm_close(struct vm_area_struct *vma)
> +coda_vm_close(struct mm_area *vma)
>  {
>  	struct coda_vm_ops *cvm_ops =
>  		container_of(vma->vm_ops, struct coda_vm_ops, vm_ops);
> @@ -148,7 +148,7 @@ coda_vm_close(struct vm_area_struct *vma)
>  }
>
>  static int
> -coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma)
> +coda_file_mmap(struct file *coda_file, struct mm_area *vma)
>  {
>  	struct inode *coda_inode = file_inode(coda_file);
>  	struct coda_file_info *cfi = coda_ftoc(coda_file);
> diff --git a/fs/coredump.c b/fs/coredump.c
> index c33c177a701b..f9987d48c5a6 100644
> --- a/fs/coredump.c
> +++ b/fs/coredump.c
> @@ -1082,7 +1082,7 @@ fs_initcall(init_fs_coredump_sysctls);
>   * meant. These special mappings include - vDSO, vsyscall, and other
>   * architecture specific mappings
>   */
> -static bool always_dump_vma(struct vm_area_struct *vma)
> +static bool always_dump_vma(struct mm_area *vma)
>  {
>  	/* Any vsyscall mappings? */
>  	if (vma == get_gate_vma(vma->vm_mm))
> @@ -1110,7 +1110,7 @@ static bool always_dump_vma(struct vm_area_struct *vma)
>  /*
>   * Decide how much of @vma's contents should be included in a core dump.
>   */
> -static unsigned long vma_dump_size(struct vm_area_struct *vma,
> +static unsigned long vma_dump_size(struct mm_area *vma,
>  				   unsigned long mm_flags)
>  {
>  #define FILTER(type)	(mm_flags & (1UL << MMF_DUMP_##type))
> @@ -1193,9 +1193,9 @@ static unsigned long vma_dump_size(struct vm_area_struct *vma,
>   * Helper function for iterating across a vma list.  It ensures that the caller
>   * will visit `gate_vma' prior to terminating the search.
>   */
> -static struct vm_area_struct *coredump_next_vma(struct vma_iterator *vmi,
> -				       struct vm_area_struct *vma,
> -				       struct vm_area_struct *gate_vma)
> +static struct mm_area *coredump_next_vma(struct vma_iterator *vmi,
> +					  struct mm_area *vma,
> +					  struct mm_area *gate_vma)
>  {
>  	if (gate_vma && (vma == gate_vma))
>  		return NULL;
> @@ -1238,7 +1238,7 @@ static int cmp_vma_size(const void *vma_meta_lhs_ptr, const void *vma_meta_rhs_p
>   */
>  static bool dump_vma_snapshot(struct coredump_params *cprm)
>  {
> -	struct vm_area_struct *gate_vma, *vma = NULL;
> +	struct mm_area *gate_vma, *vma = NULL;
>  	struct mm_struct *mm = current->mm;
>  	VMA_ITERATOR(vmi, mm, 0);
>  	int i = 0;
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index b84d1747a020..9147633db9eb 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -343,7 +343,7 @@ static bool cramfs_last_page_is_shared(struct inode *inode)
>  	return memchr_inv(tail_data, 0, PAGE_SIZE - partial) ? true : false;
>  }
>
> -static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
> +static int cramfs_physmem_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>  	struct cramfs_sb_info *sbi = CRAMFS_SB(inode->i_sb);
> @@ -435,7 +435,7 @@ static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
>
>  #else /* CONFIG_MMU */
>
> -static int cramfs_physmem_mmap(struct file *file, struct vm_area_struct *vma)
> +static int cramfs_physmem_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -ENOSYS;
>  }
> diff --git a/fs/dax.c b/fs/dax.c
> index af5045b0f476..a9c552127d9f 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -439,7 +439,7 @@ static void dax_folio_init(void *entry)
>  }
>
>  static void dax_associate_entry(void *entry, struct address_space *mapping,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				unsigned long address, bool shared)
>  {
>  	unsigned long size = dax_entry_size(entry), index;
> @@ -1038,7 +1038,7 @@ static int copy_cow_page_dax(struct vm_fault *vmf, const struct iomap_iter *iter
>   * flushed on write-faults (non-cow), but not read-faults.
>   */
>  static bool dax_fault_is_synchronous(const struct iomap_iter *iter,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	return (iter->flags & IOMAP_WRITE) && (vma->vm_flags & VM_SYNC) &&
>  		(iter->iomap.flags & IOMAP_F_DIRTY);
> @@ -1114,7 +1114,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
>  {
>  	unsigned long pfn, index, count, end;
>  	long ret = 0;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * A page got tagged dirty in DAX mapping? Something is seriously
> @@ -1388,7 +1388,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
>  {
>  	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
>  	unsigned long pmd_addr = vmf->address & PMD_MASK;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct inode *inode = mapping->host;
>  	pgtable_t pgtable = NULL;
>  	struct folio *zero_folio;
> diff --git a/fs/ecryptfs/file.c b/fs/ecryptfs/file.c
> index ce0a3c5ed0ca..ed71003a5b20 100644
> --- a/fs/ecryptfs/file.c
> +++ b/fs/ecryptfs/file.c
> @@ -185,7 +185,7 @@ static int read_or_initialize_metadata(struct dentry *dentry)
>  	return rc;
>  }
>
> -static int ecryptfs_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ecryptfs_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct file *lower_file = ecryptfs_file_to_lower(file);
>  	/*
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index 2409d2ab0c28..05444e3d9326 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -408,7 +408,7 @@ static const struct vm_operations_struct erofs_dax_vm_ops = {
>  	.huge_fault	= erofs_dax_huge_fault,
>  };
>
> -static int erofs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int erofs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!IS_DAX(file_inode(file)))
>  		return generic_file_readonly_mmap(file, vma);
> diff --git a/fs/exec.c b/fs/exec.c
> index f511409b8cd5..c6c2cddb8cc7 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -198,7 +198,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, unsigned long pos,
>  		int write)
>  {
>  	struct page *page;
> -	struct vm_area_struct *vma = bprm->vma;
> +	struct mm_area *vma = bprm->vma;
>  	struct mm_struct *mm = bprm->mm;
>  	int ret;
>
> @@ -245,7 +245,7 @@ static void flush_arg_page(struct linux_binprm *bprm, unsigned long pos,
>  static int __bprm_mm_init(struct linux_binprm *bprm)
>  {
>  	int err;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	struct mm_struct *mm = bprm->mm;
>
>  	bprm->vma = vma = vm_area_alloc(mm);
> @@ -363,7 +363,7 @@ static bool valid_arg_len(struct linux_binprm *bprm, long len)
>
>  /*
>   * Create a new mm_struct and populate it with a temporary stack
> - * vm_area_struct.  We don't have enough context at this point to set the stack
> + * mm_area.  We don't have enough context at this point to set the stack
>   * flags, permissions, and offset, so we use temporary values.  We'll update
>   * them later in setup_arg_pages().
>   */
> @@ -702,7 +702,7 @@ static int copy_strings_kernel(int argc, const char *const *argv,
>  #ifdef CONFIG_MMU
>
>  /*
> - * Finalizes the stack vm_area_struct. The flags and permissions are updated,
> + * Finalizes the stack mm_area. The flags and permissions are updated,
>   * the stack is optionally relocated, and some extra space is added.
>   */
>  int setup_arg_pages(struct linux_binprm *bprm,
> @@ -712,8 +712,8 @@ int setup_arg_pages(struct linux_binprm *bprm,
>  	unsigned long ret;
>  	unsigned long stack_shift;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = bprm->vma;
> -	struct vm_area_struct *prev = NULL;
> +	struct mm_area *vma = bprm->vma;
> +	struct mm_area *prev = NULL;
>  	unsigned long vm_flags;
>  	unsigned long stack_base;
>  	unsigned long stack_size;
> diff --git a/fs/exfat/file.c b/fs/exfat/file.c
> index 841a5b18e3df..ae38e3545f0e 100644
> --- a/fs/exfat/file.c
> +++ b/fs/exfat/file.c
> @@ -651,7 +651,7 @@ static ssize_t exfat_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
>  static vm_fault_t exfat_page_mkwrite(struct vm_fault *vmf)
>  {
>  	int err;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct file *file = vma->vm_file;
>  	struct inode *inode = file_inode(file);
>  	struct exfat_inode_info *ei = EXFAT_I(inode);
> @@ -683,7 +683,7 @@ static const struct vm_operations_struct exfat_file_vm_ops = {
>  	.page_mkwrite	= exfat_page_mkwrite,
>  };
>
> -static int exfat_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int exfat_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (unlikely(exfat_forced_shutdown(file_inode(file)->i_sb)))
>  		return -EIO;
> diff --git a/fs/ext2/file.c b/fs/ext2/file.c
> index 10b061ac5bc0..cfa6459d23f8 100644
> --- a/fs/ext2/file.c
> +++ b/fs/ext2/file.c
> @@ -122,7 +122,7 @@ static const struct vm_operations_struct ext2_dax_vm_ops = {
>  	.pfn_mkwrite	= ext2_dax_fault,
>  };
>
> -static int ext2_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ext2_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!IS_DAX(file_inode(file)))
>  		return generic_file_mmap(file, vma);
> diff --git a/fs/ext4/file.c b/fs/ext4/file.c
> index beb078ee4811..f2bf09c18e64 100644
> --- a/fs/ext4/file.c
> +++ b/fs/ext4/file.c
> @@ -799,7 +799,7 @@ static const struct vm_operations_struct ext4_file_vm_ops = {
>  	.page_mkwrite   = ext4_page_mkwrite,
>  };
>
> -static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ext4_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int ret;
>  	struct inode *inode = file->f_mapping->host;
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 1dc09ed5d403..335fe55c24d2 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -6172,7 +6172,7 @@ static int ext4_bh_unmapped(handle_t *handle, struct inode *inode,
>
>  vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio = page_folio(vmf->page);
>  	loff_t size;
>  	unsigned long len;
> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
> index abbcbb5865a3..1423c6e7e488 100644
> --- a/fs/f2fs/file.c
> +++ b/fs/f2fs/file.c
> @@ -532,7 +532,7 @@ static loff_t f2fs_llseek(struct file *file, loff_t offset, int whence)
>  	return -EINVAL;
>  }
>
> -static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int f2fs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>
> diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
> index 0502bf3cdf6a..72cb7b6a361c 100644
> --- a/fs/fuse/dax.c
> +++ b/fs/fuse/dax.c
> @@ -821,7 +821,7 @@ static const struct vm_operations_struct fuse_dax_vm_ops = {
>  	.pfn_mkwrite	= fuse_dax_pfn_mkwrite,
>  };
>
> -int fuse_dax_mmap(struct file *file, struct vm_area_struct *vma)
> +int fuse_dax_mmap(struct file *file, struct mm_area *vma)
>  {
>  	file_accessed(file);
>  	vma->vm_ops = &fuse_dax_vm_ops;
> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> index 754378dd9f71..f75907398e60 100644
> --- a/fs/fuse/file.c
> +++ b/fs/fuse/file.c
> @@ -2576,7 +2576,7 @@ static int fuse_launder_folio(struct folio *folio)
>   * Write back dirty data/metadata now (there may not be any suitable
>   * open files later for data)
>   */
> -static void fuse_vma_close(struct vm_area_struct *vma)
> +static void fuse_vma_close(struct mm_area *vma)
>  {
>  	int err;
>
> @@ -2622,7 +2622,7 @@ static const struct vm_operations_struct fuse_file_vm_ops = {
>  	.page_mkwrite	= fuse_page_mkwrite,
>  };
>
> -static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int fuse_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct fuse_file *ff = file->private_data;
>  	struct fuse_conn *fc = ff->fm->fc;
> diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
> index d56d4fd956db..d86e9e62dbfc 100644
> --- a/fs/fuse/fuse_i.h
> +++ b/fs/fuse/fuse_i.h
> @@ -1470,7 +1470,7 @@ void fuse_free_conn(struct fuse_conn *fc);
>
>  ssize_t fuse_dax_read_iter(struct kiocb *iocb, struct iov_iter *to);
>  ssize_t fuse_dax_write_iter(struct kiocb *iocb, struct iov_iter *from);
> -int fuse_dax_mmap(struct file *file, struct vm_area_struct *vma);
> +int fuse_dax_mmap(struct file *file, struct mm_area *vma);
>  int fuse_dax_break_layouts(struct inode *inode, u64 dmap_start, u64 dmap_end);
>  int fuse_dax_conn_alloc(struct fuse_conn *fc, enum fuse_dax_mode mode,
>  			struct dax_device *dax_dev);
> @@ -1567,7 +1567,7 @@ ssize_t fuse_passthrough_splice_read(struct file *in, loff_t *ppos,
>  ssize_t fuse_passthrough_splice_write(struct pipe_inode_info *pipe,
>  				      struct file *out, loff_t *ppos,
>  				      size_t len, unsigned int flags);
> -ssize_t fuse_passthrough_mmap(struct file *file, struct vm_area_struct *vma);
> +ssize_t fuse_passthrough_mmap(struct file *file, struct mm_area *vma);
>
>  #ifdef CONFIG_SYSCTL
>  extern int fuse_sysctl_register(void);
> diff --git a/fs/fuse/passthrough.c b/fs/fuse/passthrough.c
> index 607ef735ad4a..6245304c35f2 100644
> --- a/fs/fuse/passthrough.c
> +++ b/fs/fuse/passthrough.c
> @@ -129,7 +129,7 @@ ssize_t fuse_passthrough_splice_write(struct pipe_inode_info *pipe,
>  	return ret;
>  }
>
> -ssize_t fuse_passthrough_mmap(struct file *file, struct vm_area_struct *vma)
> +ssize_t fuse_passthrough_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct fuse_file *ff = file->private_data;
>  	struct file *backing_file = fuse_file_passthrough(ff);
> diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
> index fd1147aa3891..21c6af00183e 100644
> --- a/fs/gfs2/file.c
> +++ b/fs/gfs2/file.c
> @@ -588,7 +588,7 @@ static const struct vm_operations_struct gfs2_vm_ops = {
>   * Returns: 0
>   */
>
> -static int gfs2_mmap(struct file *file, struct vm_area_struct *vma)
> +static int gfs2_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct gfs2_inode *ip = GFS2_I(file->f_mapping->host);
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index e4de5425838d..33c1e3dd8b90 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -96,7 +96,7 @@ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
>  #define PGOFF_LOFFT_MAX \
>  	(((1UL << (PAGE_SHIFT + 1)) - 1) <<  (BITS_PER_LONG - (PAGE_SHIFT + 1)))
>
> -static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int hugetlbfs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>  	loff_t len, vma_len;
> @@ -340,7 +340,7 @@ static void hugetlb_delete_from_page_cache(struct folio *folio)
>   * mutex for the page in the mapping.  So, we can not race with page being
>   * faulted into the vma.
>   */
> -static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
> +static bool hugetlb_vma_maps_pfn(struct mm_area *vma,
>  				unsigned long addr, unsigned long pfn)
>  {
>  	pte_t *ptep, pte;
> @@ -365,7 +365,7 @@ static bool hugetlb_vma_maps_pfn(struct vm_area_struct *vma,
>   * which overlap the truncated area starting at pgoff,
>   * and no vma on a 32-bit arch can span beyond the 4GB.
>   */
> -static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
> +static unsigned long vma_offset_start(struct mm_area *vma, pgoff_t start)
>  {
>  	unsigned long offset = 0;
>
> @@ -375,7 +375,7 @@ static unsigned long vma_offset_start(struct vm_area_struct *vma, pgoff_t start)
>  	return vma->vm_start + offset;
>  }
>
> -static unsigned long vma_offset_end(struct vm_area_struct *vma, pgoff_t end)
> +static unsigned long vma_offset_end(struct mm_area *vma, pgoff_t end)
>  {
>  	unsigned long t_end;
>
> @@ -399,7 +399,7 @@ static void hugetlb_unmap_file_folio(struct hstate *h,
>  	struct rb_root_cached *root = &mapping->i_mmap;
>  	struct hugetlb_vma_lock *vma_lock;
>  	unsigned long pfn = folio_pfn(folio);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long v_start;
>  	unsigned long v_end;
>  	pgoff_t start, end;
> @@ -479,7 +479,7 @@ static void
>  hugetlb_vmdelete_list(struct rb_root_cached *root, pgoff_t start, pgoff_t end,
>  		      zap_flags_t zap_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * end == 0 indicates that the entire range after start should be
> @@ -730,7 +730,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>  	struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);
>  	struct address_space *mapping = inode->i_mapping;
>  	struct hstate *h = hstate_inode(inode);
> -	struct vm_area_struct pseudo_vma;
> +	struct mm_area pseudo_vma;
>  	struct mm_struct *mm = current->mm;
>  	loff_t hpage_size = huge_page_size(h);
>  	unsigned long hpage_shift = huge_page_shift(h);
> diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
> index 66fe8fe41f06..cd6ff826d3f5 100644
> --- a/fs/kernfs/file.c
> +++ b/fs/kernfs/file.c
> @@ -349,7 +349,7 @@ static ssize_t kernfs_fop_write_iter(struct kiocb *iocb, struct iov_iter *iter)
>  	return len;
>  }
>
> -static void kernfs_vma_open(struct vm_area_struct *vma)
> +static void kernfs_vma_open(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct kernfs_open_file *of = kernfs_of(file);
> @@ -408,7 +408,7 @@ static vm_fault_t kernfs_vma_page_mkwrite(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
> +static int kernfs_vma_access(struct mm_area *vma, unsigned long addr,
>  			     void *buf, int len, int write)
>  {
>  	struct file *file = vma->vm_file;
> @@ -436,7 +436,7 @@ static const struct vm_operations_struct kernfs_vm_ops = {
>  	.access		= kernfs_vma_access,
>  };
>
> -static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
> +static int kernfs_fop_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct kernfs_open_file *of = kernfs_of(file);
>  	const struct kernfs_ops *ops;
> diff --git a/fs/nfs/file.c b/fs/nfs/file.c
> index 033feeab8c34..62e293a33325 100644
> --- a/fs/nfs/file.c
> +++ b/fs/nfs/file.c
> @@ -207,7 +207,7 @@ nfs_file_splice_read(struct file *in, loff_t *ppos, struct pipe_inode_info *pipe
>  EXPORT_SYMBOL_GPL(nfs_file_splice_read);
>
>  int
> -nfs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +nfs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>  	int	status;
> diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
> index ec8d32d0e2e9..007e50305767 100644
> --- a/fs/nfs/internal.h
> +++ b/fs/nfs/internal.h
> @@ -432,7 +432,7 @@ loff_t nfs_file_llseek(struct file *, loff_t, int);
>  ssize_t nfs_file_read(struct kiocb *, struct iov_iter *);
>  ssize_t nfs_file_splice_read(struct file *in, loff_t *ppos, struct pipe_inode_info *pipe,
>  			     size_t len, unsigned int flags);
> -int nfs_file_mmap(struct file *, struct vm_area_struct *);
> +int nfs_file_mmap(struct file *, struct mm_area *);
>  ssize_t nfs_file_write(struct kiocb *, struct iov_iter *);
>  int nfs_file_release(struct inode *, struct file *);
>  int nfs_lock(struct file *, int, struct file_lock *);
> diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
> index 0e3fc5ba33c7..3e424224cb56 100644
> --- a/fs/nilfs2/file.c
> +++ b/fs/nilfs2/file.c
> @@ -44,7 +44,7 @@ int nilfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
>
>  static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio = page_folio(vmf->page);
>  	struct inode *inode = file_inode(vma->vm_file);
>  	struct nilfs_transaction_info ti;
> @@ -125,7 +125,7 @@ static const struct vm_operations_struct nilfs_file_vm_ops = {
>  	.page_mkwrite	= nilfs_page_mkwrite,
>  };
>
> -static int nilfs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int nilfs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	file_accessed(file);
>  	vma->vm_ops = &nilfs_file_vm_ops;
> diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
> index 9b6a3f8d2e7c..72370c69d6dc 100644
> --- a/fs/ntfs3/file.c
> +++ b/fs/ntfs3/file.c
> @@ -347,7 +347,7 @@ static int ntfs_zero_range(struct inode *inode, u64 vbo, u64 vbo_to)
>  /*
>   * ntfs_file_mmap - file_operations::mmap
>   */
> -static int ntfs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ntfs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>  	struct ntfs_inode *ni = ntfs_i(inode);
> diff --git a/fs/ocfs2/mmap.c b/fs/ocfs2/mmap.c
> index 6a314e9f2b49..9586d4d287e7 100644
> --- a/fs/ocfs2/mmap.c
> +++ b/fs/ocfs2/mmap.c
> @@ -30,7 +30,7 @@
>
>  static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	sigset_t oldset;
>  	vm_fault_t ret;
>
> @@ -159,7 +159,7 @@ static const struct vm_operations_struct ocfs2_file_vm_ops = {
>  	.page_mkwrite	= ocfs2_page_mkwrite,
>  };
>
> -int ocfs2_mmap(struct file *file, struct vm_area_struct *vma)
> +int ocfs2_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int ret = 0, lock_level = 0;
>
> diff --git a/fs/ocfs2/mmap.h b/fs/ocfs2/mmap.h
> index 1051507cc684..8cf4bc586fb2 100644
> --- a/fs/ocfs2/mmap.h
> +++ b/fs/ocfs2/mmap.h
> @@ -2,6 +2,6 @@
>  #ifndef OCFS2_MMAP_H
>  #define OCFS2_MMAP_H
>
> -int ocfs2_mmap(struct file *file, struct vm_area_struct *vma);
> +int ocfs2_mmap(struct file *file, struct mm_area *vma);
>
>  #endif  /* OCFS2_MMAP_H */
> diff --git a/fs/orangefs/file.c b/fs/orangefs/file.c
> index 90c49c0de243..290e33bad497 100644
> --- a/fs/orangefs/file.c
> +++ b/fs/orangefs/file.c
> @@ -398,7 +398,7 @@ static const struct vm_operations_struct orangefs_file_vm_ops = {
>  /*
>   * Memory map a region of a file.
>   */
> -static int orangefs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int orangefs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int ret;
>
> diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
> index 969b458100fe..400f63fc2408 100644
> --- a/fs/overlayfs/file.c
> +++ b/fs/overlayfs/file.c
> @@ -476,7 +476,7 @@ static int ovl_fsync(struct file *file, loff_t start, loff_t end, int datasync)
>  	return ret;
>  }
>
> -static int ovl_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ovl_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct ovl_file *of = file->private_data;
>  	struct backing_file_ctx ctx = {
> diff --git a/fs/proc/base.c b/fs/proc/base.c
> index b0d4e1908b22..4f23e14bee67 100644
> --- a/fs/proc/base.c
> +++ b/fs/proc/base.c
> @@ -2244,7 +2244,7 @@ static const struct dentry_operations tid_map_files_dentry_operations = {
>  static int map_files_get_link(struct dentry *dentry, struct path *path)
>  {
>  	unsigned long vm_start, vm_end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *task;
>  	struct mm_struct *mm;
>  	int rc;
> @@ -2341,7 +2341,7 @@ static struct dentry *proc_map_files_lookup(struct inode *dir,
>  		struct dentry *dentry, unsigned int flags)
>  {
>  	unsigned long vm_start, vm_end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *task;
>  	struct dentry *result;
>  	struct mm_struct *mm;
> @@ -2395,7 +2395,7 @@ static const struct inode_operations proc_map_files_inode_operations = {
>  static int
>  proc_map_files_readdir(struct file *file, struct dir_context *ctx)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *task;
>  	struct mm_struct *mm;
>  	unsigned long nr_files, pos, i;
> diff --git a/fs/proc/inode.c b/fs/proc/inode.c
> index a3eb3b740f76..d5a6e680a0bd 100644
> --- a/fs/proc/inode.c
> +++ b/fs/proc/inode.c
> @@ -412,7 +412,7 @@ static long proc_reg_compat_ioctl(struct file *file, unsigned int cmd, unsigned
>  }
>  #endif
>
> -static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct vm_area_struct *vma)
> +static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct mm_area *vma)
>  {
>  	__auto_type mmap = pde->proc_ops->proc_mmap;
>  	if (mmap)
> @@ -420,7 +420,7 @@ static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct vm_are
>  	return -EIO;
>  }
>
> -static int proc_reg_mmap(struct file *file, struct vm_area_struct *vma)
> +static int proc_reg_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct proc_dir_entry *pde = PDE(file_inode(file));
>  	int rv = -EIO;
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 994cde10e3f4..66a47c2a2b98 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -127,10 +127,10 @@ static void release_task_mempolicy(struct proc_maps_private *priv)
>  }
>  #endif
>
> -static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
> +static struct mm_area *proc_get_vma(struct proc_maps_private *priv,
>  						loff_t *ppos)
>  {
> -	struct vm_area_struct *vma = vma_next(&priv->iter);
> +	struct mm_area *vma = vma_next(&priv->iter);
>
>  	if (vma) {
>  		*ppos = vma->vm_start;
> @@ -240,7 +240,7 @@ static int do_maps_open(struct inode *inode, struct file *file,
>  				sizeof(struct proc_maps_private));
>  }
>
> -static void get_vma_name(struct vm_area_struct *vma,
> +static void get_vma_name(struct mm_area *vma,
>  			 const struct path **path,
>  			 const char **name,
>  			 const char **name_fmt)
> @@ -322,7 +322,7 @@ static void show_vma_header_prefix(struct seq_file *m,
>  }
>
>  static void
> -show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
> +show_map_vma(struct seq_file *m, struct mm_area *vma)
>  {
>  	const struct path *path;
>  	const char *name_fmt, *name;
> @@ -394,20 +394,20 @@ static int query_vma_setup(struct mm_struct *mm)
>  	return mmap_read_lock_killable(mm);
>  }
>
> -static void query_vma_teardown(struct mm_struct *mm, struct vm_area_struct *vma)
> +static void query_vma_teardown(struct mm_struct *mm, struct mm_area *vma)
>  {
>  	mmap_read_unlock(mm);
>  }
>
> -static struct vm_area_struct *query_vma_find_by_addr(struct mm_struct *mm, unsigned long addr)
> +static struct mm_area *query_vma_find_by_addr(struct mm_struct *mm, unsigned long addr)
>  {
>  	return find_vma(mm, addr);
>  }
>
> -static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
> +static struct mm_area *query_matching_vma(struct mm_struct *mm,
>  						 unsigned long addr, u32 flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  next_vma:
>  	vma = query_vma_find_by_addr(mm, addr);
> @@ -454,7 +454,7 @@ static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
>  static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
>  {
>  	struct procmap_query karg;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm;
>  	const char *name = NULL;
>  	char build_id_buf[BUILD_ID_SIZE_MAX], *name_buf = NULL;
> @@ -780,7 +780,7 @@ static int smaps_pte_hole(unsigned long addr, unsigned long end,
>  			  __always_unused int depth, struct mm_walk *walk)
>  {
>  	struct mem_size_stats *mss = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>
>  	mss->swap += shmem_partial_swap_usage(walk->vma->vm_file->f_mapping,
>  					      linear_page_index(vma, addr),
> @@ -806,7 +806,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
>  		struct mm_walk *walk)
>  {
>  	struct mem_size_stats *mss = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	bool locked = !!(vma->vm_flags & VM_LOCKED);
>  	struct page *page = NULL;
>  	bool present = false, young = false, dirty = false;
> @@ -854,7 +854,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
>  		struct mm_walk *walk)
>  {
>  	struct mem_size_stats *mss = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	bool locked = !!(vma->vm_flags & VM_LOCKED);
>  	struct page *page = NULL;
>  	bool present = false;
> @@ -894,7 +894,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
>  static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  			   struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	pte_t *pte;
>  	spinlock_t *ptl;
>
> @@ -918,7 +918,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  	return 0;
>  }
>
> -static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
> +static void show_smap_vma_flags(struct seq_file *m, struct mm_area *vma)
>  {
>  	/*
>  	 * Don't forget to update Documentation/ on changes.
> @@ -1019,7 +1019,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
>  				 struct mm_walk *walk)
>  {
>  	struct mem_size_stats *mss = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	pte_t ptent = huge_ptep_get(walk->mm, addr, pte);
>  	struct folio *folio = NULL;
>  	bool present = false;
> @@ -1067,7 +1067,7 @@ static const struct mm_walk_ops smaps_shmem_walk_ops = {
>   *
>   * Use vm_start of @vma as the beginning address if @start is 0.
>   */
> -static void smap_gather_stats(struct vm_area_struct *vma,
> +static void smap_gather_stats(struct mm_area *vma,
>  		struct mem_size_stats *mss, unsigned long start)
>  {
>  	const struct mm_walk_ops *ops = &smaps_walk_ops;
> @@ -1150,7 +1150,7 @@ static void __show_smap(struct seq_file *m, const struct mem_size_stats *mss,
>
>  static int show_smap(struct seq_file *m, void *v)
>  {
> -	struct vm_area_struct *vma = v;
> +	struct mm_area *vma = v;
>  	struct mem_size_stats mss = {};
>
>  	smap_gather_stats(vma, &mss, 0);
> @@ -1180,7 +1180,7 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
>  	struct proc_maps_private *priv = m->private;
>  	struct mem_size_stats mss = {};
>  	struct mm_struct *mm = priv->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long vma_start = 0, last_vma_end = 0;
>  	int ret = 0;
>  	VMA_ITERATOR(vmi, mm, 0);
> @@ -1380,7 +1380,7 @@ struct clear_refs_private {
>
>  #ifdef CONFIG_MEM_SOFT_DIRTY
>
> -static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> +static inline bool pte_is_pinned(struct mm_area *vma, unsigned long addr, pte_t pte)
>  {
>  	struct folio *folio;
>
> @@ -1396,7 +1396,7 @@ static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr,
>  	return folio_maybe_dma_pinned(folio);
>  }
>
> -static inline void clear_soft_dirty(struct vm_area_struct *vma,
> +static inline void clear_soft_dirty(struct mm_area *vma,
>  		unsigned long addr, pte_t *pte)
>  {
>  	/*
> @@ -1422,14 +1422,14 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
>  	}
>  }
>  #else
> -static inline void clear_soft_dirty(struct vm_area_struct *vma,
> +static inline void clear_soft_dirty(struct mm_area *vma,
>  		unsigned long addr, pte_t *pte)
>  {
>  }
>  #endif
>
>  #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
> -static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
> +static inline void clear_soft_dirty_pmd(struct mm_area *vma,
>  		unsigned long addr, pmd_t *pmdp)
>  {
>  	pmd_t old, pmd = *pmdp;
> @@ -1452,7 +1452,7 @@ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
>  	}
>  }
>  #else
> -static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
> +static inline void clear_soft_dirty_pmd(struct mm_area *vma,
>  		unsigned long addr, pmd_t *pmdp)
>  {
>  }
> @@ -1462,7 +1462,7 @@ static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr,
>  				unsigned long end, struct mm_walk *walk)
>  {
>  	struct clear_refs_private *cp = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	pte_t *pte, ptent;
>  	spinlock_t *ptl;
>  	struct folio *folio;
> @@ -1522,7 +1522,7 @@ static int clear_refs_test_walk(unsigned long start, unsigned long end,
>  				struct mm_walk *walk)
>  {
>  	struct clear_refs_private *cp = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>
>  	if (vma->vm_flags & VM_PFNMAP)
>  		return 1;
> @@ -1552,7 +1552,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
>  	struct task_struct *task;
>  	char buffer[PROC_NUMBUF] = {};
>  	struct mm_struct *mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	enum clear_refs_types type;
>  	int itype;
>  	int rv;
> @@ -1680,7 +1680,7 @@ static int pagemap_pte_hole(unsigned long start, unsigned long end,
>  	int err = 0;
>
>  	while (addr < end) {
> -		struct vm_area_struct *vma = find_vma(walk->mm, addr);
> +		struct mm_area *vma = find_vma(walk->mm, addr);
>  		pagemap_entry_t pme = make_pme(0, 0);
>  		/* End of address space hole, which we mark as non-present. */
>  		unsigned long hole_end;
> @@ -1713,7 +1713,7 @@ static int pagemap_pte_hole(unsigned long start, unsigned long end,
>  }
>
>  static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
> -		struct vm_area_struct *vma, unsigned long addr, pte_t pte)
> +		struct mm_area *vma, unsigned long addr, pte_t pte)
>  {
>  	u64 frame = 0, flags = 0;
>  	struct page *page = NULL;
> @@ -1774,7 +1774,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
>  static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
>  			     struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	struct pagemapread *pm = walk->private;
>  	spinlock_t *ptl;
>  	pte_t *pte, *orig_pte;
> @@ -1887,7 +1887,7 @@ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask,
>  				 struct mm_walk *walk)
>  {
>  	struct pagemapread *pm = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	u64 flags = 0, frame = 0;
>  	int err = 0;
>  	pte_t pte;
> @@ -2099,7 +2099,7 @@ struct pagemap_scan_private {
>  };
>
>  static unsigned long pagemap_page_category(struct pagemap_scan_private *p,
> -					   struct vm_area_struct *vma,
> +					   struct mm_area *vma,
>  					   unsigned long addr, pte_t pte)
>  {
>  	unsigned long categories = 0;
> @@ -2141,7 +2141,7 @@ static unsigned long pagemap_page_category(struct pagemap_scan_private *p,
>  	return categories;
>  }
>
> -static void make_uffd_wp_pte(struct vm_area_struct *vma,
> +static void make_uffd_wp_pte(struct mm_area *vma,
>  			     unsigned long addr, pte_t *pte, pte_t ptent)
>  {
>  	if (pte_present(ptent)) {
> @@ -2161,7 +2161,7 @@ static void make_uffd_wp_pte(struct vm_area_struct *vma,
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  static unsigned long pagemap_thp_category(struct pagemap_scan_private *p,
> -					  struct vm_area_struct *vma,
> +					  struct mm_area *vma,
>  					  unsigned long addr, pmd_t pmd)
>  {
>  	unsigned long categories = PAGE_IS_HUGE;
> @@ -2203,7 +2203,7 @@ static unsigned long pagemap_thp_category(struct pagemap_scan_private *p,
>  	return categories;
>  }
>
> -static void make_uffd_wp_pmd(struct vm_area_struct *vma,
> +static void make_uffd_wp_pmd(struct mm_area *vma,
>  			     unsigned long addr, pmd_t *pmdp)
>  {
>  	pmd_t old, pmd = *pmdp;
> @@ -2250,7 +2250,7 @@ static unsigned long pagemap_hugetlb_category(pte_t pte)
>  	return categories;
>  }
>
> -static void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
> +static void make_uffd_wp_huge_pte(struct mm_area *vma,
>  				  unsigned long addr, pte_t *ptep,
>  				  pte_t ptent)
>  {
> @@ -2316,7 +2316,7 @@ static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
>  				  struct mm_walk *walk)
>  {
>  	struct pagemap_scan_private *p = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	unsigned long vma_category = 0;
>  	bool wp_allowed = userfaultfd_wp_async(vma) &&
>  	    userfaultfd_wp_use_markers(vma);
> @@ -2423,7 +2423,7 @@ static int pagemap_scan_thp_entry(pmd_t *pmd, unsigned long start,
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	struct pagemap_scan_private *p = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	unsigned long categories;
>  	spinlock_t *ptl;
>  	int ret = 0;
> @@ -2473,7 +2473,7 @@ static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>  				  unsigned long end, struct mm_walk *walk)
>  {
>  	struct pagemap_scan_private *p = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	unsigned long addr, flush_end = 0;
>  	pte_t *pte, *start_pte;
>  	spinlock_t *ptl;
> @@ -2573,7 +2573,7 @@ static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
>  				      struct mm_walk *walk)
>  {
>  	struct pagemap_scan_private *p = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	unsigned long categories;
>  	spinlock_t *ptl;
>  	int ret = 0;
> @@ -2632,7 +2632,7 @@ static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
>  				 int depth, struct mm_walk *walk)
>  {
>  	struct pagemap_scan_private *p = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	int ret, err;
>
>  	if (!vma || !pagemap_scan_is_interesting_page(p->cur_vma_category, p))
> @@ -2905,7 +2905,7 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty,
>  	md->node[folio_nid(folio)] += nr_pages;
>  }
>
> -static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
> +static struct page *can_gather_numa_stats(pte_t pte, struct mm_area *vma,
>  		unsigned long addr)
>  {
>  	struct page *page;
> @@ -2930,7 +2930,7 @@ static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma,
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  static struct page *can_gather_numa_stats_pmd(pmd_t pmd,
> -					      struct vm_area_struct *vma,
> +					      struct mm_area *vma,
>  					      unsigned long addr)
>  {
>  	struct page *page;
> @@ -2958,7 +2958,7 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
>  		unsigned long end, struct mm_walk *walk)
>  {
>  	struct numa_maps *md = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	spinlock_t *ptl;
>  	pte_t *orig_pte;
>  	pte_t *pte;
> @@ -3032,7 +3032,7 @@ static int show_numa_map(struct seq_file *m, void *v)
>  {
>  	struct numa_maps_private *numa_priv = m->private;
>  	struct proc_maps_private *proc_priv = &numa_priv->proc_maps;
> -	struct vm_area_struct *vma = v;
> +	struct mm_area *vma = v;
>  	struct numa_maps *md = &numa_priv->md;
>  	struct file *file = vma->vm_file;
>  	struct mm_struct *mm = vma->vm_mm;
> diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c
> index bce674533000..e45f014b5c81 100644
> --- a/fs/proc/task_nommu.c
> +++ b/fs/proc/task_nommu.c
> @@ -21,7 +21,7 @@
>  void task_mem(struct seq_file *m, struct mm_struct *mm)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_region *region;
>  	unsigned long bytes = 0, sbytes = 0, slack = 0, size;
>
> @@ -81,7 +81,7 @@ void task_mem(struct seq_file *m, struct mm_struct *mm)
>  unsigned long task_vsize(struct mm_struct *mm)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long vsize = 0;
>
>  	mmap_read_lock(mm);
> @@ -96,7 +96,7 @@ unsigned long task_statm(struct mm_struct *mm,
>  			 unsigned long *data, unsigned long *resident)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_region *region;
>  	unsigned long size = kobjsize(mm);
>
> @@ -124,7 +124,7 @@ unsigned long task_statm(struct mm_struct *mm,
>  /*
>   * display a single VMA to a sequenced file
>   */
> -static int nommu_vma_show(struct seq_file *m, struct vm_area_struct *vma)
> +static int nommu_vma_show(struct seq_file *m, struct mm_area *vma)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long ino = 0;
> @@ -175,10 +175,10 @@ static int show_map(struct seq_file *m, void *_p)
>  	return nommu_vma_show(m, _p);
>  }
>
> -static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
> +static struct mm_area *proc_get_vma(struct proc_maps_private *priv,
>  						loff_t *ppos)
>  {
> -	struct vm_area_struct *vma = vma_next(&priv->iter);
> +	struct mm_area *vma = vma_next(&priv->iter);
>
>  	if (vma) {
>  		*ppos = vma->vm_start;
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 10d01eb09c43..8e84ff70f57e 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -249,7 +249,7 @@ ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
>  /*
>   * Architectures may override this function to map oldmem
>   */
> -int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
> +int __weak remap_oldmem_pfn_range(struct mm_area *vma,
>  				  unsigned long from, unsigned long pfn,
>  				  unsigned long size, pgprot_t prot)
>  {
> @@ -295,7 +295,7 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size)
>  }
>
>  #ifdef CONFIG_MMU
> -static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
> +static int vmcoredd_mmap_dumps(struct mm_area *vma, unsigned long dst,
>  			       u64 start, size_t size)
>  {
>  	struct vmcoredd_node *dump;
> @@ -511,7 +511,7 @@ static const struct vm_operations_struct vmcore_mmap_ops = {
>   * remap_oldmem_pfn_checked - do remap_oldmem_pfn_range replacing all pages
>   * reported as not being ram with the zero page.
>   *
> - * @vma: vm_area_struct describing requested mapping
> + * @vma: mm_area describing requested mapping
>   * @from: start remapping from
>   * @pfn: page frame number to start remapping to
>   * @size: remapping size
> @@ -519,7 +519,7 @@ static const struct vm_operations_struct vmcore_mmap_ops = {
>   *
>   * Returns zero on success, -EAGAIN on failure.
>   */
> -static int remap_oldmem_pfn_checked(struct vm_area_struct *vma,
> +static int remap_oldmem_pfn_checked(struct mm_area *vma,
>  				    unsigned long from, unsigned long pfn,
>  				    unsigned long size, pgprot_t prot)
>  {
> @@ -569,7 +569,7 @@ static int remap_oldmem_pfn_checked(struct vm_area_struct *vma,
>  	return -EAGAIN;
>  }
>
> -static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma,
> +static int vmcore_remap_oldmem_pfn(struct mm_area *vma,
>  			    unsigned long from, unsigned long pfn,
>  			    unsigned long size, pgprot_t prot)
>  {
> @@ -588,7 +588,7 @@ static int vmcore_remap_oldmem_pfn(struct vm_area_struct *vma,
>  	return ret;
>  }
>
> -static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
> +static int mmap_vmcore(struct file *file, struct mm_area *vma)
>  {
>  	size_t size = vma->vm_end - vma->vm_start;
>  	u64 start, end, len, tsz;
> @@ -701,7 +701,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
>  	return -EAGAIN;
>  }
>  #else
> -static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
> +static int mmap_vmcore(struct file *file, struct mm_area *vma)
>  {
>  	return -ENOSYS;
>  }
> diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
> index 7a6d980e614d..39698a0acbf8 100644
> --- a/fs/ramfs/file-nommu.c
> +++ b/fs/ramfs/file-nommu.c
> @@ -28,7 +28,7 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
>  						   unsigned long len,
>  						   unsigned long pgoff,
>  						   unsigned long flags);
> -static int ramfs_nommu_mmap(struct file *file, struct vm_area_struct *vma);
> +static int ramfs_nommu_mmap(struct file *file, struct mm_area *vma);
>
>  static unsigned ramfs_mmap_capabilities(struct file *file)
>  {
> @@ -262,7 +262,7 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
>  /*
>   * set up a mapping for shared memory segments
>   */
> -static int ramfs_nommu_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ramfs_nommu_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (!is_nommu_shared_mapping(vma->vm_flags))
>  		return -ENOSYS;
> diff --git a/fs/romfs/mmap-nommu.c b/fs/romfs/mmap-nommu.c
> index 4520ca413867..704bc650e9fd 100644
> --- a/fs/romfs/mmap-nommu.c
> +++ b/fs/romfs/mmap-nommu.c
> @@ -61,7 +61,7 @@ static unsigned long romfs_get_unmapped_area(struct file *file,
>   * permit a R/O mapping to be made directly through onto an MTD device if
>   * possible
>   */
> -static int romfs_mmap(struct file *file, struct vm_area_struct *vma)
> +static int romfs_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -ENOSYS;
>  }
> diff --git a/fs/smb/client/cifsfs.h b/fs/smb/client/cifsfs.h
> index 8dea0cf3a8de..cadb123692c1 100644
> --- a/fs/smb/client/cifsfs.h
> +++ b/fs/smb/client/cifsfs.h
> @@ -103,8 +103,8 @@ extern int cifs_lock(struct file *, int, struct file_lock *);
>  extern int cifs_fsync(struct file *, loff_t, loff_t, int);
>  extern int cifs_strict_fsync(struct file *, loff_t, loff_t, int);
>  extern int cifs_flush(struct file *, fl_owner_t id);
> -extern int cifs_file_mmap(struct file *file, struct vm_area_struct *vma);
> -extern int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma);
> +extern int cifs_file_mmap(struct file *file, struct mm_area *vma);
> +extern int cifs_file_strict_mmap(struct file *file, struct mm_area *vma);
>  extern const struct file_operations cifs_dir_ops;
>  extern int cifs_readdir(struct file *file, struct dir_context *ctx);
>
> diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
> index 8407fb108664..ab822c809070 100644
> --- a/fs/smb/client/file.c
> +++ b/fs/smb/client/file.c
> @@ -2964,7 +2964,7 @@ static const struct vm_operations_struct cifs_file_vm_ops = {
>  	.page_mkwrite = cifs_page_mkwrite,
>  };
>
> -int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
> +int cifs_file_strict_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int xid, rc = 0;
>  	struct inode *inode = file_inode(file);
> @@ -2982,7 +2982,7 @@ int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
>  	return rc;
>  }
>
> -int cifs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +int cifs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int rc, xid;
>
> diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
> index c3d3b079aedd..ebddf13bd010 100644
> --- a/fs/sysfs/file.c
> +++ b/fs/sysfs/file.c
> @@ -171,7 +171,7 @@ static ssize_t sysfs_kf_bin_write(struct kernfs_open_file *of, char *buf,
>  }
>
>  static int sysfs_kf_bin_mmap(struct kernfs_open_file *of,
> -			     struct vm_area_struct *vma)
> +			     struct mm_area *vma)
>  {
>  	struct bin_attribute *battr = of->kn->priv;
>  	struct kobject *kobj = sysfs_file_kobj(of->kn);
> diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
> index bf311c38d9a8..0f0256b04a4a 100644
> --- a/fs/ubifs/file.c
> +++ b/fs/ubifs/file.c
> @@ -1579,7 +1579,7 @@ static const struct vm_operations_struct ubifs_file_vm_ops = {
>  	.page_mkwrite = ubifs_vm_page_mkwrite,
>  };
>
> -static int ubifs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int ubifs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int err;
>
> diff --git a/fs/udf/file.c b/fs/udf/file.c
> index 0d76c4f37b3e..6d5fa7de4cb6 100644
> --- a/fs/udf/file.c
> +++ b/fs/udf/file.c
> @@ -36,7 +36,7 @@
>
>  static vm_fault_t udf_page_mkwrite(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct inode *inode = file_inode(vma->vm_file);
>  	struct address_space *mapping = inode->i_mapping;
>  	struct folio *folio = page_folio(vmf->page);
> @@ -189,7 +189,7 @@ static int udf_release_file(struct inode *inode, struct file *filp)
>  	return 0;
>  }
>
> -static int udf_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int udf_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	file_accessed(file);
>  	vma->vm_ops = &udf_file_vm_ops;
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index d80f94346199..ade022a5af5f 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -94,7 +94,7 @@ static bool userfaultfd_wp_async_ctx(struct userfaultfd_ctx *ctx)
>   * meaningful when userfaultfd_wp()==true on the vma and when it's
>   * anonymous.
>   */
> -bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma)
> +bool userfaultfd_wp_unpopulated(struct mm_area *vma)
>  {
>  	struct userfaultfd_ctx *ctx = vma->vm_userfaultfd_ctx.ctx;
>
> @@ -231,7 +231,7 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
>  					      struct vm_fault *vmf,
>  					      unsigned long reason)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	pte_t *ptep, pte;
>  	bool ret = true;
>
> @@ -362,7 +362,7 @@ static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
>   */
>  vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct userfaultfd_ctx *ctx;
>  	struct userfaultfd_wait_queue uwq;
> @@ -614,7 +614,7 @@ static void userfaultfd_event_complete(struct userfaultfd_ctx *ctx,
>  	__remove_wait_queue(&ctx->event_wqh, &ewq->wq);
>  }
>
> -int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs)
> +int dup_userfaultfd(struct mm_area *vma, struct list_head *fcs)
>  {
>  	struct userfaultfd_ctx *ctx = NULL, *octx;
>  	struct userfaultfd_fork_ctx *fctx;
> @@ -719,7 +719,7 @@ void dup_userfaultfd_fail(struct list_head *fcs)
>  	}
>  }
>
> -void mremap_userfaultfd_prep(struct vm_area_struct *vma,
> +void mremap_userfaultfd_prep(struct mm_area *vma,
>  			     struct vm_userfaultfd_ctx *vm_ctx)
>  {
>  	struct userfaultfd_ctx *ctx;
> @@ -766,7 +766,7 @@ void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *vm_ctx,
>  	userfaultfd_event_wait_completion(ctx, &ewq);
>  }
>
> -bool userfaultfd_remove(struct vm_area_struct *vma,
> +bool userfaultfd_remove(struct mm_area *vma,
>  			unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -807,7 +807,7 @@ static bool has_unmap_ctx(struct userfaultfd_ctx *ctx, struct list_head *unmaps,
>  	return false;
>  }
>
> -int userfaultfd_unmap_prep(struct vm_area_struct *vma, unsigned long start,
> +int userfaultfd_unmap_prep(struct mm_area *vma, unsigned long start,
>  			   unsigned long end, struct list_head *unmaps)
>  {
>  	struct userfaultfd_unmap_ctx *unmap_ctx;
> @@ -1239,7 +1239,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
>  				unsigned long arg)
>  {
>  	struct mm_struct *mm = ctx->mm;
> -	struct vm_area_struct *vma, *cur;
> +	struct mm_area *vma, *cur;
>  	int ret;
>  	struct uffdio_register uffdio_register;
>  	struct uffdio_register __user *user_uffdio_register;
> @@ -1413,7 +1413,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
>  				  unsigned long arg)
>  {
>  	struct mm_struct *mm = ctx->mm;
> -	struct vm_area_struct *vma, *prev, *cur;
> +	struct mm_area *vma, *prev, *cur;
>  	int ret;
>  	struct uffdio_range uffdio_unregister;
>  	bool found;
> @@ -1845,7 +1845,7 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
>  	return ret;
>  }
>
> -bool userfaultfd_wp_async(struct vm_area_struct *vma)
> +bool userfaultfd_wp_async(struct mm_area *vma)
>  {
>  	return userfaultfd_wp_async_ctx(vma->vm_userfaultfd_ctx.ctx);
>  }
> diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c
> index b780deb81b02..a59e5521669f 100644
> --- a/fs/vboxsf/file.c
> +++ b/fs/vboxsf/file.c
> @@ -154,7 +154,7 @@ static int vboxsf_file_release(struct inode *inode, struct file *file)
>   * Write back dirty pages now, because there may not be any suitable
>   * open files later
>   */
> -static void vboxsf_vma_close(struct vm_area_struct *vma)
> +static void vboxsf_vma_close(struct mm_area *vma)
>  {
>  	filemap_write_and_wait(vma->vm_file->f_mapping);
>  }
> @@ -165,7 +165,7 @@ static const struct vm_operations_struct vboxsf_file_vm_ops = {
>  	.map_pages	= filemap_map_pages,
>  };
>
> -static int vboxsf_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int vboxsf_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	int err;
>
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 84f08c976ac4..afe9512ae66f 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1846,7 +1846,7 @@ static const struct vm_operations_struct xfs_file_vm_ops = {
>  STATIC int
>  xfs_file_mmap(
>  	struct file		*file,
> -	struct vm_area_struct	*vma)
> +	struct mm_area		*vma)
>  {
>  	struct inode		*inode = file_inode(file);
>  	struct xfs_buftarg	*target = xfs_inode_buftarg(XFS_I(inode));
> diff --git a/fs/zonefs/file.c b/fs/zonefs/file.c
> index 42e2c0065bb3..09a25e7ae36b 100644
> --- a/fs/zonefs/file.c
> +++ b/fs/zonefs/file.c
> @@ -312,7 +312,7 @@ static const struct vm_operations_struct zonefs_file_vm_ops = {
>  	.page_mkwrite	= zonefs_filemap_page_mkwrite,
>  };
>
> -static int zonefs_file_mmap(struct file *file, struct vm_area_struct *vma)
> +static int zonefs_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	/*
>  	 * Conventional zones accept random writes, so their files can support
> diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
> index 7ee8a179d103..968dcbb599df 100644
> --- a/include/asm-generic/cacheflush.h
> +++ b/include/asm-generic/cacheflush.h
> @@ -5,7 +5,7 @@
>  #include <linux/instrumented.h>
>
>  struct mm_struct;
> -struct vm_area_struct;
> +struct mm_area;
>  struct page;
>  struct address_space;
>
> @@ -32,7 +32,7 @@ static inline void flush_cache_dup_mm(struct mm_struct *mm)
>  #endif
>
>  #ifndef flush_cache_range
> -static inline void flush_cache_range(struct vm_area_struct *vma,
> +static inline void flush_cache_range(struct mm_area *vma,
>  				     unsigned long start,
>  				     unsigned long end)
>  {
> @@ -40,7 +40,7 @@ static inline void flush_cache_range(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef flush_cache_page
> -static inline void flush_cache_page(struct vm_area_struct *vma,
> +static inline void flush_cache_page(struct mm_area *vma,
>  				    unsigned long vmaddr,
>  				    unsigned long pfn)
>  {
> @@ -78,7 +78,7 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
>  #endif
>
>  #ifndef flush_icache_user_page
> -static inline void flush_icache_user_page(struct vm_area_struct *vma,
> +static inline void flush_icache_user_page(struct mm_area *vma,
>  					   struct page *page,
>  					   unsigned long addr, int len)
>  {
> diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
> index 2afc95bf1655..837360772416 100644
> --- a/include/asm-generic/hugetlb.h
> +++ b/include/asm-generic/hugetlb.h
> @@ -97,7 +97,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  #endif
>
>  #ifndef __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep)
>  {
>  	return ptep_clear_flush(vma, addr, ptep);
> @@ -136,7 +136,7 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  #endif
>
>  #ifndef __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
> -static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
> +static inline int huge_ptep_set_access_flags(struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep,
>  		pte_t pte, int dirty)
>  {
> diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
> index 6eea3b3c1e65..58db73bbd76f 100644
> --- a/include/asm-generic/mm_hooks.h
> +++ b/include/asm-generic/mm_hooks.h
> @@ -17,7 +17,7 @@ static inline void arch_exit_mmap(struct mm_struct *mm)
>  {
>  }
>
> -static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
> +static inline bool arch_vma_access_permitted(struct mm_area *vma,
>  		bool write, bool execute, bool foreign)
>  {
>  	/* by default, allow everything */
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 88a42973fa47..a86739bc57db 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -292,7 +292,7 @@ bool __tlb_remove_folio_pages(struct mmu_gather *tlb, struct page *page,
>   * function, except we define it before the 'struct mmu_gather'.
>   */
>  #define tlb_delay_rmap(tlb) (((tlb)->delayed_rmap = 1), true)
> -extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
> +extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct mm_area *vma);
>  #endif
>
>  #endif
> @@ -306,7 +306,7 @@ extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
>   */
>  #ifndef tlb_delay_rmap
>  #define tlb_delay_rmap(tlb) (false)
> -static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
> +static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct mm_area *vma) { }
>  #endif
>
>  /*
> @@ -435,7 +435,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
>  	if (tlb->fullmm || tlb->need_flush_all) {
>  		flush_tlb_mm(tlb->mm);
>  	} else if (tlb->end) {
> -		struct vm_area_struct vma = {
> +		struct mm_area vma = {
>  			.vm_mm = tlb->mm,
>  			.vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
>  				    (tlb->vma_huge ? VM_HUGETLB : 0),
> @@ -449,7 +449,7 @@ static inline void tlb_flush(struct mmu_gather *tlb)
>  #endif /* CONFIG_MMU_GATHER_NO_RANGE */
>
>  static inline void
> -tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
> +tlb_update_vma_flags(struct mmu_gather *tlb, struct mm_area *vma)
>  {
>  	/*
>  	 * flush_tlb_range() implementations that look at VM_HUGETLB (tile,
> @@ -535,7 +535,7 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
>   * case where we're doing a full MM flush.  When we're doing a munmap,
>   * the vmas are adjusted to only cover the region to be torn down.
>   */
> -static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> +static inline void tlb_start_vma(struct mmu_gather *tlb, struct mm_area *vma)
>  {
>  	if (tlb->fullmm)
>  		return;
> @@ -546,7 +546,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
>  #endif
>  }
>
> -static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
> +static inline void tlb_end_vma(struct mmu_gather *tlb, struct mm_area *vma)
>  {
>  	if (tlb->fullmm)
>  		return;
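
Side note on the hunk above, purely illustrative: tlb_start_vma()/
tlb_end_vma() bracket per-VMA teardown, so out-of-tree callers convert
mechanically. A made-up sketch under the new name -- the zap step is
elided pseudocode, not a real kernel path:

static void demo_unmap_one_vma(struct mmu_gather *tlb, struct mm_area *vma)
{
	tlb_start_vma(tlb, vma);
	/* ... zap page tables covering [vma->vm_start, vma->vm_end) ... */
	tlb_end_vma(tlb, vma);
}
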
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 2bf893eabb4b..84a5e980adee 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -186,7 +186,7 @@ struct drm_gem_object_funcs {
>  	 * drm_gem_prime_mmap().  When @mmap is present @vm_ops is not
>  	 * used, the @mmap callback must set vma->vm_ops instead.
>  	 */
> -	int (*mmap)(struct drm_gem_object *obj, struct vm_area_struct *vma);
> +	int (*mmap)(struct drm_gem_object *obj, struct mm_area *vma);
>
>  	/**
>  	 * @evict:
> @@ -482,11 +482,11 @@ int drm_gem_object_init_with_mnt(struct drm_device *dev,
>  void drm_gem_private_object_init(struct drm_device *dev,
>  				 struct drm_gem_object *obj, size_t size);
>  void drm_gem_private_object_fini(struct drm_gem_object *obj);
> -void drm_gem_vm_open(struct vm_area_struct *vma);
> -void drm_gem_vm_close(struct vm_area_struct *vma);
> +void drm_gem_vm_open(struct mm_area *vma);
> +void drm_gem_vm_close(struct mm_area *vma);
>  int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
> -		     struct vm_area_struct *vma);
> -int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> +		     struct mm_area *vma);
> +int drm_gem_mmap(struct file *filp, struct mm_area *vma);
>
>  /**
>   * drm_gem_object_get - acquire a GEM buffer object reference
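
The @mmap kerneldoc quoted above is the part drivers trip over: when the
callback is present, @vm_ops is not used, so the callback has to install
vma->vm_ops itself. A hypothetical driver callback after the rename
(demo_gem_vm_ops is a stand-in):

static int demo_gem_mmap(struct drm_gem_object *obj, struct mm_area *vma)
{
	/* @vm_ops is ignored when this callback exists, so set it here */
	vma->vm_ops = &demo_gem_vm_ops;
	return 0;
}
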
> diff --git a/include/drm/drm_gem_dma_helper.h b/include/drm/drm_gem_dma_helper.h
> index f2678e7ecb98..d097e0a46ceb 100644
> --- a/include/drm/drm_gem_dma_helper.h
> +++ b/include/drm/drm_gem_dma_helper.h
> @@ -40,7 +40,7 @@ void drm_gem_dma_print_info(const struct drm_gem_dma_object *dma_obj,
>  struct sg_table *drm_gem_dma_get_sg_table(struct drm_gem_dma_object *dma_obj);
>  int drm_gem_dma_vmap(struct drm_gem_dma_object *dma_obj,
>  		     struct iosys_map *map);
> -int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *vma);
> +int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct mm_area *vma);
>
>  extern const struct vm_operations_struct drm_gem_dma_vm_ops;
>
> @@ -126,7 +126,7 @@ static inline int drm_gem_dma_object_vmap(struct drm_gem_object *obj,
>   * Returns:
>   * 0 on success or a negative error code on failure.
>   */
> -static inline int drm_gem_dma_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static inline int drm_gem_dma_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);
>
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index cef5a6b5a4d6..3126f47424b4 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -109,7 +109,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
>  		       struct iosys_map *map);
>  void drm_gem_shmem_vunmap(struct drm_gem_shmem_object *shmem,
>  			  struct iosys_map *map);
> -int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct *vma);
> +int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct mm_area *vma);
>
>  int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem);
>  void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem);
> @@ -259,7 +259,7 @@ static inline void drm_gem_shmem_object_vunmap(struct drm_gem_object *obj,
>   * Returns:
>   * 0 on success or a negative error code on failure.
>   */
> -static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +static inline int drm_gem_shmem_object_mmap(struct drm_gem_object *obj, struct mm_area *vma)
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>
> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> index 7b53d673ae7e..2147aea16d62 100644
> --- a/include/drm/drm_gem_ttm_helper.h
> +++ b/include/drm/drm_gem_ttm_helper.h
> @@ -21,7 +21,7 @@ int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>  void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>  			struct iosys_map *map);
>  int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> -		     struct vm_area_struct *vma);
> +		     struct mm_area *vma);
>
>  int drm_gem_ttm_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
>  				uint32_t handle, uint64_t *offset);
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 00830b49a3ff..395692607569 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -18,7 +18,7 @@ struct drm_mode_create_dumb;
>  struct drm_plane;
>  struct drm_plane_state;
>  struct filp;
> -struct vm_area_struct;
> +struct mm_area;
>
>  #define DRM_GEM_VRAM_PL_FLAG_SYSTEM	(1 << 0)
>  #define DRM_GEM_VRAM_PL_FLAG_VRAM	(1 << 1)
> diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
> index fa085c44d4ca..feb9e2202049 100644
> --- a/include/drm/drm_prime.h
> +++ b/include/drm/drm_prime.h
> @@ -89,8 +89,8 @@ void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
>  int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct iosys_map *map);
>  void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct iosys_map *map);
>
> -int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
> -int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
> +int drm_gem_prime_mmap(struct drm_gem_object *obj, struct mm_area *vma);
> +int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct mm_area *vma);
>
>  struct sg_table *drm_prime_pages_to_sg(struct drm_device *dev,
>  				       struct page **pages, unsigned int nr_pages);
> diff --git a/include/drm/ttm/ttm_bo.h b/include/drm/ttm/ttm_bo.h
> index 903cd1030110..cbfc05424ea7 100644
> --- a/include/drm/ttm/ttm_bo.h
> +++ b/include/drm/ttm/ttm_bo.h
> @@ -433,7 +433,7 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
>  void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>  int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map);
>  void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct iosys_map *map);
> -int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo);
> +int ttm_bo_mmap_obj(struct mm_area *vma, struct ttm_buffer_object *bo);
>  s64 ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
>  		   struct ttm_resource_manager *man, gfp_t gfp_flags,
>  		   s64 target);
> @@ -450,9 +450,9 @@ vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
>  				    pgprot_t prot,
>  				    pgoff_t num_prefault);
>  vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf);
> -void ttm_bo_vm_open(struct vm_area_struct *vma);
> -void ttm_bo_vm_close(struct vm_area_struct *vma);
> -int ttm_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
> +void ttm_bo_vm_open(struct mm_area *vma);
> +void ttm_bo_vm_close(struct mm_area *vma);
> +int ttm_bo_vm_access(struct mm_area *vma, unsigned long addr,
>  		     void *buf, int len, int write);
>  vm_fault_t ttm_bo_vm_dummy_page(struct vm_fault *vmf, pgprot_t prot);
>
> diff --git a/include/linux/backing-file.h b/include/linux/backing-file.h
> index 1476a6ed1bfd..ec845c283a65 100644
> --- a/include/linux/backing-file.h
> +++ b/include/linux/backing-file.h
> @@ -38,7 +38,7 @@ ssize_t backing_file_splice_write(struct pipe_inode_info *pipe,
>  				  struct file *out, struct kiocb *iocb,
>  				  size_t len, unsigned int flags,
>  				  struct backing_file_ctx *ctx);
> -int backing_file_mmap(struct file *file, struct vm_area_struct *vma,
> +int backing_file_mmap(struct file *file, struct mm_area *vma,
>  		      struct backing_file_ctx *ctx);
>
>  #endif /* _LINUX_BACKING_FILE_H */
> diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
> index 1625c8529e70..bf4593304fe5 100644
> --- a/include/linux/binfmts.h
> +++ b/include/linux/binfmts.h
> @@ -17,7 +17,7 @@ struct coredump_params;
>   */
>  struct linux_binprm {
>  #ifdef CONFIG_MMU
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long vma_pages;
>  	unsigned long argmin; /* rlimit marker for copy_strings() */
>  #else
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 3f0cc89c0622..1a62e5398dfd 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -138,7 +138,7 @@ struct bpf_map_ops {
>  				     u64 *imm, u32 off);
>  	int (*map_direct_value_meta)(const struct bpf_map *map,
>  				     u64 imm, u32 *off);
> -	int (*map_mmap)(struct bpf_map *map, struct vm_area_struct *vma);
> +	int (*map_mmap)(struct bpf_map *map, struct mm_area *vma);
>  	__poll_t (*map_poll)(struct bpf_map *map, struct file *filp,
>  			     struct poll_table_struct *pts);
>  	unsigned long (*map_get_unmapped_area)(struct file *filep, unsigned long addr,
> diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h
> index 139bdececdcf..3a040583a4b2 100644
> --- a/include/linux/btf_ids.h
> +++ b/include/linux/btf_ids.h
> @@ -270,7 +270,7 @@ extern u32 btf_sock_ids[];
>  #define BTF_TRACING_TYPE_xxx	\
>  	BTF_TRACING_TYPE(BTF_TRACING_TYPE_TASK, task_struct)	\
>  	BTF_TRACING_TYPE(BTF_TRACING_TYPE_FILE, file)		\
> -	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, vm_area_struct)
> +	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, mm_area)
>
>  enum {
>  #define BTF_TRACING_TYPE(name, type) name,
> diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
> index f0a4ad7839b6..3b16880622f2 100644
> --- a/include/linux/buffer_head.h
> +++ b/include/linux/buffer_head.h
> @@ -271,7 +271,7 @@ int cont_write_begin(struct file *, struct address_space *, loff_t,
>  			get_block_t *, loff_t *);
>  int generic_cont_expand_simple(struct inode *inode, loff_t size);
>  void block_commit_write(struct folio *folio, size_t from, size_t to);
> -int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
> +int block_page_mkwrite(struct mm_area *vma, struct vm_fault *vmf,
>  				get_block_t get_block);
>  sector_t generic_block_bmap(struct address_space *, sector_t, get_block_t *);
>  int block_truncate_page(struct address_space *, loff_t, get_block_t *);
> diff --git a/include/linux/buildid.h b/include/linux/buildid.h
> index 014a88c41073..ccb20bbf6a32 100644
> --- a/include/linux/buildid.h
> +++ b/include/linux/buildid.h
> @@ -6,9 +6,9 @@
>
>  #define BUILD_ID_SIZE_MAX 20
>
> -struct vm_area_struct;
> -int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size);
> -int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size);
> +struct mm_area;
> +int build_id_parse(struct mm_area *vma, unsigned char *build_id, __u32 *size);
> +int build_id_parse_nofault(struct mm_area *vma, unsigned char *build_id, __u32 *size);
>  int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size);
>
>  #if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID) || IS_ENABLED(CONFIG_VMCORE_INFO)
> diff --git a/include/linux/cacheflush.h b/include/linux/cacheflush.h
> index 55f297b2c23f..81e334b23709 100644
> --- a/include/linux/cacheflush.h
> +++ b/include/linux/cacheflush.h
> @@ -18,7 +18,7 @@ static inline void flush_dcache_folio(struct folio *folio)
>  #endif /* ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE */
>
>  #ifndef flush_icache_pages
> -static inline void flush_icache_pages(struct vm_area_struct *vma,
> +static inline void flush_icache_pages(struct mm_area *vma,
>  				     struct page *page, unsigned int nr)
>  {
>  }
> diff --git a/include/linux/configfs.h b/include/linux/configfs.h
> index c771e9d0d0b9..2fc8bc945f7c 100644
> --- a/include/linux/configfs.h
> +++ b/include/linux/configfs.h
> @@ -146,7 +146,7 @@ static struct configfs_attribute _pfx##attr_##_name = {	\
>  }
>
>  struct file;
> -struct vm_area_struct;
> +struct mm_area;
>
>  struct configfs_bin_attribute {
>  	struct configfs_attribute cb_attr;	/* std. attribute */
> diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
> index 2f2555e6407c..28c31aa4abf3 100644
> --- a/include/linux/crash_dump.h
> +++ b/include/linux/crash_dump.h
> @@ -22,7 +22,7 @@ extern ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos);
>  extern ssize_t elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos);
>  void elfcorehdr_fill_device_ram_ptload_elf64(Elf64_Phdr *phdr,
>  		unsigned long long paddr, unsigned long long size);
> -extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
> +extern int remap_oldmem_pfn_range(struct mm_area *vma,
>  				  unsigned long from, unsigned long pfn,
>  				  unsigned long size, pgprot_t prot);
>
> diff --git a/include/linux/dax.h b/include/linux/dax.h
> index dcc9fcdf14e4..92e61f46d8b2 100644
> --- a/include/linux/dax.h
> +++ b/include/linux/dax.h
> @@ -65,7 +65,7 @@ size_t dax_recovery_write(struct dax_device *dax_dev, pgoff_t pgoff,
>  /*
>   * Check if given mapping is supported by the file / underlying device.
>   */
> -static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
> +static inline bool daxdev_mapping_supported(struct mm_area *vma,
>  					     struct dax_device *dax_dev)
>  {
>  	if (!(vma->vm_flags & VM_SYNC))
> @@ -110,7 +110,7 @@ static inline void set_dax_nomc(struct dax_device *dax_dev)
>  static inline void set_dax_synchronous(struct dax_device *dax_dev)
>  {
>  }
> -static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
> +static inline bool daxdev_mapping_supported(struct mm_area *vma,
>  				struct dax_device *dax_dev)
>  {
>  	return !(vma->vm_flags & VM_SYNC);
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 36216d28d8bd..8aa15c4fd02f 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -281,7 +281,7 @@ struct dma_buf_ops {
>  	 *
>  	 * 0 on success or a negative error code on failure.
>  	 */
> -	int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
> +	int (*mmap)(struct dma_buf *, struct mm_area *vma);
>
>  	int (*vmap)(struct dma_buf *dmabuf, struct iosys_map *map);
>  	void (*vunmap)(struct dma_buf *dmabuf, struct iosys_map *map);
> @@ -630,7 +630,7 @@ void dma_buf_unmap_attachment_unlocked(struct dma_buf_attachment *attach,
>  				       struct sg_table *sg_table,
>  				       enum dma_data_direction direction);
>
> -int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
> +int dma_buf_mmap(struct dma_buf *, struct mm_area *,
>  		 unsigned long);
>  int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map);
>  void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map);
> diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
> index e172522cd936..c6bdde002279 100644
> --- a/include/linux/dma-map-ops.h
> +++ b/include/linux/dma-map-ops.h
> @@ -24,7 +24,7 @@ struct dma_map_ops {
>  			gfp_t gfp);
>  	void (*free_pages)(struct device *dev, size_t size, struct page *vaddr,
>  			dma_addr_t dma_handle, enum dma_data_direction dir);
> -	int (*mmap)(struct device *, struct vm_area_struct *,
> +	int (*mmap)(struct device *, struct mm_area *,
>  			void *, dma_addr_t, size_t, unsigned long attrs);
>
>  	int (*get_sgtable)(struct device *dev, struct sg_table *sgt,
> @@ -162,7 +162,7 @@ void dma_release_coherent_memory(struct device *dev);
>  int dma_alloc_from_dev_coherent(struct device *dev, ssize_t size,
>  		dma_addr_t *dma_handle, void **ret);
>  int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr);
> -int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_from_dev_coherent(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, size_t size, int *ret);
>  #else
>  static inline int dma_declare_coherent_memory(struct device *dev,
> @@ -181,7 +181,7 @@ static inline void dma_release_coherent_memory(struct device *dev) { }
>  void *dma_alloc_from_global_coherent(struct device *dev, ssize_t size,
>  		dma_addr_t *dma_handle);
>  int dma_release_from_global_coherent(int order, void *vaddr);
> -int dma_mmap_from_global_coherent(struct vm_area_struct *vma, void *cpu_addr,
> +int dma_mmap_from_global_coherent(struct mm_area *vma, void *cpu_addr,
>  		size_t size, int *ret);
>  int dma_init_global_coherent(phys_addr_t phys_addr, size_t size);
>  #else
> @@ -194,7 +194,7 @@ static inline int dma_release_from_global_coherent(int order, void *vaddr)
>  {
>  	return 0;
>  }
> -static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
> +static inline int dma_mmap_from_global_coherent(struct mm_area *vma,
>  		void *cpu_addr, size_t size, int *ret)
>  {
>  	return 0;
> @@ -204,7 +204,7 @@ static inline int dma_mmap_from_global_coherent(struct vm_area_struct *vma,
>  int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
> -int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> +int dma_common_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  struct page *dma_common_alloc_pages(struct device *dev, size_t size,
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index b79925b1c433..06e43bf6536d 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -124,7 +124,7 @@ void dmam_free_coherent(struct device *dev, size_t size, void *vaddr,
>  int dma_get_sgtable_attrs(struct device *dev, struct sg_table *sgt,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
> -int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_attrs(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  bool dma_can_mmap(struct device *dev);
> @@ -143,7 +143,7 @@ void dma_free_noncontiguous(struct device *dev, size_t size,
>  void *dma_vmap_noncontiguous(struct device *dev, size_t size,
>  		struct sg_table *sgt);
>  void dma_vunmap_noncontiguous(struct device *dev, void *vaddr);
> -int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
>  		size_t size, struct sg_table *sgt);
>  #else /* CONFIG_HAS_DMA */
>  static inline dma_addr_t dma_map_page_attrs(struct device *dev,
> @@ -210,7 +210,7 @@ static inline int dma_get_sgtable_attrs(struct device *dev,
>  {
>  	return -ENXIO;
>  }
> -static inline int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
> +static inline int dma_mmap_attrs(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs)
>  {
> @@ -271,7 +271,7 @@ static inline void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
>  {
>  }
>  static inline int dma_mmap_noncontiguous(struct device *dev,
> -		struct vm_area_struct *vma, size_t size, struct sg_table *sgt)
> +		struct mm_area *vma, size_t size, struct sg_table *sgt)
>  {
>  	return -EINVAL;
>  }
> @@ -357,7 +357,7 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
>  		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp);
>  void dma_free_pages(struct device *dev, size_t size, struct page *page,
>  		dma_addr_t dma_handle, enum dma_data_direction dir);
> -int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_pages(struct device *dev, struct mm_area *vma,
>  		size_t size, struct page *page);
>
>  static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
> @@ -611,7 +611,7 @@ static inline void dma_free_wc(struct device *dev, size_t size,
>  }
>
>  static inline int dma_mmap_wc(struct device *dev,
> -			      struct vm_area_struct *vma,
> +			      struct mm_area *vma,
>  			      void *cpu_addr, dma_addr_t dma_addr,
>  			      size_t size)
>  {
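
All of the dma_mmap_*() entry points change the same way. For
illustration only, a made-up driver mmap handler exporting a coherent
buffer (demo_dev, demo_cpu_addr and demo_dma_handle are stand-ins):

static int demo_dma_mmap(struct file *file, struct mm_area *vma)
{
	return dma_mmap_coherent(demo_dev, vma, demo_cpu_addr,
				 demo_dma_handle,
				 vma->vm_end - vma->vm_start);
}
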
> diff --git a/include/linux/fb.h b/include/linux/fb.h
> index cd653862ab99..f09a1e5e46a0 100644
> --- a/include/linux/fb.h
> +++ b/include/linux/fb.h
> @@ -26,7 +26,7 @@ struct module;
>  struct notifier_block;
>  struct page;
>  struct videomode;
> -struct vm_area_struct;
> +struct mm_area;
>
>  /* Definitions below are used in the parsed monitor specs */
>  #define FB_DPMS_ACTIVE_OFF	1
> @@ -302,7 +302,7 @@ struct fb_ops {
>  			unsigned long arg);
>
>  	/* perform fb specific mmap */
> -	int (*fb_mmap)(struct fb_info *info, struct vm_area_struct *vma);
> +	int (*fb_mmap)(struct fb_info *info, struct mm_area *vma);
>
>  	/* get capability given var */
>  	void (*fb_get_caps)(struct fb_info *info, struct fb_blit_caps *caps,
> @@ -555,7 +555,7 @@ extern ssize_t fb_io_read(struct fb_info *info, char __user *buf,
>  			  size_t count, loff_t *ppos);
>  extern ssize_t fb_io_write(struct fb_info *info, const char __user *buf,
>  			   size_t count, loff_t *ppos);
> -int fb_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +int fb_io_mmap(struct fb_info *info, struct mm_area *vma);
>
>  #define __FB_DEFAULT_IOMEM_OPS_RDWR \
>  	.fb_read	= fb_io_read, \
> @@ -648,7 +648,7 @@ static inline void __fb_pad_aligned_buffer(u8 *dst, u32 d_pitch,
>  }
>
>  /* fb_defio.c */
> -int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma);
> +int fb_deferred_io_mmap(struct fb_info *info, struct mm_area *vma);
>  extern int  fb_deferred_io_init(struct fb_info *info);
>  extern void fb_deferred_io_open(struct fb_info *info,
>  				struct inode *inode,
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 016b0fe1536e..2be4d710cdad 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -65,7 +65,7 @@ struct kobject;
>  struct pipe_inode_info;
>  struct poll_table_struct;
>  struct kstatfs;
> -struct vm_area_struct;
> +struct mm_area;
>  struct vfsmount;
>  struct cred;
>  struct swap_info_struct;
> @@ -2140,7 +2140,7 @@ struct file_operations {
>  	__poll_t (*poll) (struct file *, struct poll_table_struct *);
>  	long (*unlocked_ioctl) (struct file *, unsigned int, unsigned long);
>  	long (*compat_ioctl) (struct file *, unsigned int, unsigned long);
> -	int (*mmap) (struct file *, struct vm_area_struct *);
> +	int (*mmap) (struct file *, struct mm_area *);
>  	int (*open) (struct inode *, struct file *);
>  	int (*flush) (struct file *, fl_owner_t id);
>  	int (*release) (struct inode *, struct file *);
> @@ -2238,7 +2238,7 @@ struct inode_operations {
>  	struct offset_ctx *(*get_offset_ctx)(struct inode *inode);
>  } ____cacheline_aligned;
>
> -static inline int call_mmap(struct file *file, struct vm_area_struct *vma)
> +static inline int call_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return file->f_op->mmap(file, vma);
>  }
> @@ -3341,8 +3341,8 @@ extern void inode_add_lru(struct inode *inode);
>  extern int sb_set_blocksize(struct super_block *, int);
>  extern int sb_min_blocksize(struct super_block *, int);
>
> -extern int generic_file_mmap(struct file *, struct vm_area_struct *);
> -extern int generic_file_readonly_mmap(struct file *, struct vm_area_struct *);
> +extern int generic_file_mmap(struct file *, struct mm_area *);
> +extern int generic_file_readonly_mmap(struct file *, struct mm_area *);
>  extern ssize_t generic_write_checks(struct kiocb *, struct iov_iter *);
>  int generic_write_checks_count(struct kiocb *iocb, loff_t *count);
>  extern int generic_write_check_limits(struct file *file, loff_t pos,
> @@ -3666,12 +3666,12 @@ void setattr_copy(struct mnt_idmap *, struct inode *inode,
>
>  extern int file_update_time(struct file *file);
>
> -static inline bool vma_is_dax(const struct vm_area_struct *vma)
> +static inline bool vma_is_dax(const struct mm_area *vma)
>  {
>  	return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
>  }
>
> -static inline bool vma_is_fsdax(struct vm_area_struct *vma)
> +static inline bool vma_is_fsdax(struct mm_area *vma)
>  {
>  	struct inode *inode;
>
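
->mmap in file_operations is probably the most widely implemented of
these signatures, and the conversion is mechanical. A hypothetical
driver fragment (the demo_* names are invented):

static int demo_mmap(struct file *file, struct mm_area *vma)
{
	return generic_file_mmap(file, vma);
}

static const struct file_operations demo_fops = {
	.owner	= THIS_MODULE,
	.mmap	= demo_mmap,
};
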
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index c9fa6309c903..1198056004c8 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -9,7 +9,7 @@
>  #include <linux/alloc_tag.h>
>  #include <linux/sched.h>
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct mempolicy;
>
>  /* Convert GFP flags to their corresponding migrate type */
> @@ -318,7 +318,7 @@ struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
>  struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
>  struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>  		struct mempolicy *mpol, pgoff_t ilx, int nid);
> -struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
> +struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct mm_area *vma,
>  		unsigned long addr);
>  #else
>  static inline struct page *alloc_pages_noprof(gfp_t gfp_mask, unsigned int order)
> @@ -346,7 +346,7 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
>  #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
>
>  static inline struct page *alloc_page_vma_noprof(gfp_t gfp,
> -		struct vm_area_struct *vma, unsigned long addr)
> +		struct mm_area *vma, unsigned long addr)
>  {
>  	struct folio *folio = vma_alloc_folio_noprof(gfp, 0, vma, addr);
>
> @@ -420,7 +420,7 @@ static inline bool gfp_compaction_allowed(gfp_t gfp_mask)
>  	return IS_ENABLED(CONFIG_COMPACTION) && (gfp_mask & __GFP_IO);
>  }
>
> -extern gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma);
> +extern gfp_t vma_thp_gfp_mask(struct mm_area *vma);
>
>  #ifdef CONFIG_CONTIG_ALLOC
>  /* The below functions must be run on a range from a single zone. */
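
vma_alloc_folio() callers only see the parameter type change; e.g. a
fault-path style order-0 allocation, for illustration only:

static struct folio *demo_alloc_for_fault(struct mm_area *vma,
					  unsigned long addr)
{
	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, addr);
}
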
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 5c6bea81a90e..76601fc06fab 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -183,7 +183,7 @@ static inline unsigned long nr_free_highpages(void);
>  static inline unsigned long totalhigh_pages(void);
>
>  #ifndef ARCH_HAS_FLUSH_ANON_PAGE
> -static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)
> +static inline void flush_anon_page(struct mm_area *vma, struct page *page, unsigned long vmaddr)
>  {
>  }
>  #endif
> @@ -221,7 +221,7 @@ static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
>   * we are out of memory.
>   */
>  static inline
> -struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
> +struct folio *vma_alloc_zeroed_movable_folio(struct mm_area *vma,
>  				   unsigned long vaddr)
>  {
>  	struct folio *folio;
> @@ -301,7 +301,7 @@ static inline void zero_user(struct page *page,
>  #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
>
>  static inline void copy_user_highpage(struct page *to, struct page *from,
> -	unsigned long vaddr, struct vm_area_struct *vma)
> +	unsigned long vaddr, struct mm_area *vma)
>  {
>  	char *vfrom, *vto;
>
> @@ -339,7 +339,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
>   * of bytes not copied if there was a #MC, otherwise 0 for success.
>   */
>  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
> -					unsigned long vaddr, struct vm_area_struct *vma)
> +					unsigned long vaddr, struct mm_area *vma)
>  {
>  	unsigned long ret;
>  	char *vfrom, *vto;
> @@ -378,7 +378,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
>  }
>  #else
>  static inline int copy_mc_user_highpage(struct page *to, struct page *from,
> -					unsigned long vaddr, struct vm_area_struct *vma)
> +					unsigned long vaddr, struct mm_area *vma)
>  {
>  	copy_user_highpage(to, from, vaddr, vma);
>  	return 0;
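
Per the comment above copy_mc_user_highpage(), a nonzero return means a
machine check truncated the copy. A sketch of the usual caller pattern
(demo_copy_page is made up):

static int demo_copy_page(struct page *dst, struct page *src,
			  unsigned long vaddr, struct mm_area *vma)
{
	if (copy_mc_user_highpage(dst, src, vaddr, vma))
		return -EHWPOISON;
	return 0;
}
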
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index e893d546a49f..b8c548e672b0 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -10,11 +10,11 @@
>  vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
>  int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
> -		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
> +		  struct mm_area *dst_vma, struct mm_area *src_vma);
>  void huge_pmd_set_accessed(struct vm_fault *vmf);
>  int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
> -		  struct vm_area_struct *vma);
> +		  struct mm_area *vma);
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
>  void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud);
> @@ -25,15 +25,15 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
>  #endif
>
>  vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
> -bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
>  			   pmd_t *pmd, unsigned long addr, unsigned long next);
> -int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, pmd_t *pmd,
> +int zap_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma, pmd_t *pmd,
>  		 unsigned long addr);
> -int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma, pud_t *pud,
> +int zap_huge_pud(struct mmu_gather *tlb, struct mm_area *vma, pud_t *pud,
>  		 unsigned long addr);
> -bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> +bool move_huge_pmd(struct mm_area *vma, unsigned long old_addr,
>  		   unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
> -int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +int change_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
>  		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
>  		    unsigned long cp_flags);
>
> @@ -212,7 +212,7 @@ static inline int next_order(unsigned long *orders, int prev)
>   *   - For all vmas, check if the haddr is in an aligned hugepage
>   *     area.
>   */
> -static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
> +static inline bool thp_vma_suitable_order(struct mm_area *vma,
>  		unsigned long addr, int order)
>  {
>  	unsigned long hpage_size = PAGE_SIZE << order;
> @@ -237,7 +237,7 @@ static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
>   * See thp_vma_suitable_order().
>   * All orders that pass the checks are returned as a bitfield.
>   */
> -static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
> +static inline unsigned long thp_vma_suitable_orders(struct mm_area *vma,
>  		unsigned long addr, unsigned long orders)
>  {
>  	int order;
> @@ -260,7 +260,7 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
>  	return orders;
>  }
>
> -unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
> +unsigned long __thp_vma_allowable_orders(struct mm_area *vma,
>  					 unsigned long vm_flags,
>  					 unsigned long tva_flags,
>  					 unsigned long orders);
> @@ -281,7 +281,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>   * orders are allowed.
>   */
>  static inline
> -unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
> +unsigned long thp_vma_allowable_orders(struct mm_area *vma,
>  				       unsigned long vm_flags,
>  				       unsigned long tva_flags,
>  				       unsigned long orders)
> @@ -316,7 +316,7 @@ struct thpsize {
>  	(transparent_hugepage_flags &					\
>  	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
>
> -static inline bool vma_thp_disabled(struct vm_area_struct *vma,
> +static inline bool vma_thp_disabled(struct mm_area *vma,
>  		unsigned long vm_flags)
>  {
>  	/*
> @@ -394,7 +394,7 @@ static inline int split_huge_page(struct page *page)
>  }
>  void deferred_split_folio(struct folio *folio, bool partially_mapped);
>
> -void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> +void __split_huge_pmd(struct mm_area *vma, pmd_t *pmd,
>  		unsigned long address, bool freeze, struct folio *folio);
>
>  #define split_huge_pmd(__vma, __pmd, __address)				\
> @@ -407,19 +407,19 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>  	}  while (0)
>
>
> -void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
> +void split_huge_pmd_address(struct mm_area *vma, unsigned long address,
>  		bool freeze, struct folio *folio);
>
> -void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> +void __split_huge_pud(struct mm_area *vma, pud_t *pud,
>  		unsigned long address);
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +int change_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
>  		    pud_t *pudp, unsigned long addr, pgprot_t newprot,
>  		    unsigned long cp_flags);
>  #else
>  static inline int
> -change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +change_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
>  		pud_t *pudp, unsigned long addr, pgprot_t newprot,
>  		unsigned long cp_flags) { return 0; }
>  #endif
> @@ -432,15 +432,15 @@ change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  			__split_huge_pud(__vma, __pud, __address);	\
>  	}  while (0)
>
> -int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
> +int hugepage_madvise(struct mm_area *vma, unsigned long *vm_flags,
>  		     int advice);
> -int madvise_collapse(struct vm_area_struct *vma,
> -		     struct vm_area_struct **prev,
> +int madvise_collapse(struct mm_area *vma,
> +		     struct mm_area **prev,
>  		     unsigned long start, unsigned long end);
> -void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
> -			   unsigned long end, struct vm_area_struct *next);
> -spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
> -spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma);
> +void vma_adjust_trans_huge(struct mm_area *vma, unsigned long start,
> +			   unsigned long end, struct mm_area *next);
> +spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct mm_area *vma);
> +spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct mm_area *vma);
>
>  static inline int is_swap_pmd(pmd_t pmd)
>  {
> @@ -449,7 +449,7 @@ static inline int is_swap_pmd(pmd_t pmd)
>
>  /* mmap_lock must be held on entry */
>  static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd))
>  		return __pmd_trans_huge_lock(pmd, vma);
> @@ -457,7 +457,7 @@ static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
>  		return NULL;
>  }
>  static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	if (pud_trans_huge(*pud) || pud_devmap(*pud))
>  		return __pud_trans_huge_lock(pud, vma);
> @@ -474,7 +474,7 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
>  	return folio_order(folio) >= HPAGE_PMD_ORDER;
>  }
>
> -struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
> +struct page *follow_devmap_pmd(struct mm_area *vma, unsigned long addr,
>  		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
>
>  vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
> @@ -502,9 +502,9 @@ static inline bool thp_migration_supported(void)
>  	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
>  }
>
> -void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> +void split_huge_pmd_locked(struct mm_area *vma, unsigned long address,
>  			   pmd_t *pmd, bool freeze, struct folio *folio);
> -bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
> +bool unmap_huge_pmd_locked(struct mm_area *vma, unsigned long addr,
>  			   pmd_t *pmdp, struct folio *folio);
>
>  #else /* CONFIG_TRANSPARENT_HUGEPAGE */
> @@ -514,19 +514,19 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
>  	return false;
>  }
>
> -static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
> +static inline bool thp_vma_suitable_order(struct mm_area *vma,
>  		unsigned long addr, int order)
>  {
>  	return false;
>  }
>
> -static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
> +static inline unsigned long thp_vma_suitable_orders(struct mm_area *vma,
>  		unsigned long addr, unsigned long orders)
>  {
>  	return 0;
>  }
>
> -static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
> +static inline unsigned long thp_vma_allowable_orders(struct mm_area *vma,
>  					unsigned long vm_flags,
>  					unsigned long tva_flags,
>  					unsigned long orders)
> @@ -577,15 +577,15 @@ static inline void deferred_split_folio(struct folio *folio, bool partially_mapp
>  #define split_huge_pmd(__vma, __pmd, __address)	\
>  	do { } while (0)
>
> -static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> +static inline void __split_huge_pmd(struct mm_area *vma, pmd_t *pmd,
>  		unsigned long address, bool freeze, struct folio *folio) {}
> -static inline void split_huge_pmd_address(struct vm_area_struct *vma,
> +static inline void split_huge_pmd_address(struct mm_area *vma,
>  		unsigned long address, bool freeze, struct folio *folio) {}
> -static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
> +static inline void split_huge_pmd_locked(struct mm_area *vma,
>  					 unsigned long address, pmd_t *pmd,
>  					 bool freeze, struct folio *folio) {}
>
> -static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
> +static inline bool unmap_huge_pmd_locked(struct mm_area *vma,
>  					 unsigned long addr, pmd_t *pmdp,
>  					 struct folio *folio)
>  {
> @@ -595,23 +595,23 @@ static inline bool unmap_huge_pmd_locked(struct vm_area_struct *vma,
>  #define split_huge_pud(__vma, __pmd, __address)	\
>  	do { } while (0)
>
> -static inline int hugepage_madvise(struct vm_area_struct *vma,
> +static inline int hugepage_madvise(struct mm_area *vma,
>  				   unsigned long *vm_flags, int advice)
>  {
>  	return -EINVAL;
>  }
>
> -static inline int madvise_collapse(struct vm_area_struct *vma,
> -				   struct vm_area_struct **prev,
> +static inline int madvise_collapse(struct mm_area *vma,
> +				   struct mm_area **prev,
>  				   unsigned long start, unsigned long end)
>  {
>  	return -EINVAL;
>  }
>
> -static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
> +static inline void vma_adjust_trans_huge(struct mm_area *vma,
>  					 unsigned long start,
>  					 unsigned long end,
> -					 struct vm_area_struct *next)
> +					 struct mm_area *next)
>  {
>  }
>  static inline int is_swap_pmd(pmd_t pmd)
> @@ -619,12 +619,12 @@ static inline int is_swap_pmd(pmd_t pmd)
>  	return 0;
>  }
>  static inline spinlock_t *pmd_trans_huge_lock(pmd_t *pmd,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	return NULL;
>  }
>  static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	return NULL;
>  }
> @@ -649,7 +649,7 @@ static inline void mm_put_huge_zero_folio(struct mm_struct *mm)
>  	return;
>  }
>
> -static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
> +static inline struct page *follow_devmap_pmd(struct mm_area *vma,
>  	unsigned long addr, pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
>  {
>  	return NULL;
> @@ -670,13 +670,13 @@ static inline int next_order(unsigned long *orders, int prev)
>  	return 0;
>  }
>
> -static inline void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> +static inline void __split_huge_pud(struct mm_area *vma, pud_t *pud,
>  				    unsigned long address)
>  {
>  }
>
>  static inline int change_huge_pud(struct mmu_gather *tlb,
> -				  struct vm_area_struct *vma, pud_t *pudp,
> +				  struct mm_area *vma, pud_t *pudp,
>  				  unsigned long addr, pgprot_t newprot,
>  				  unsigned long cp_flags)
>  {
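
The "mmap_lock must be held on entry" rule on pmd_trans_huge_lock()
above survives the rename unchanged; a minimal walker sketch (not a
real kernel path):

static void demo_inspect_pmd(struct mm_area *vma, pmd_t *pmd)
{
	spinlock_t *ptl;

	mmap_assert_locked(vma->vm_mm);		/* caller's job */
	ptl = pmd_trans_huge_lock(pmd, vma);
	if (ptl) {
		/* *pmd is a stable huge (or swap/devmap) entry here */
		spin_unlock(ptl);
	}
}
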
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 8f3ac832ee7f..96d446761d94 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -104,7 +104,7 @@ struct file_region {
>  struct hugetlb_vma_lock {
>  	struct kref refs;
>  	struct rw_semaphore rw_sema;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  };
>
>  extern struct resv_map *resv_map_alloc(void);
> @@ -119,37 +119,37 @@ struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
>  						long min_hpages);
>  void hugepage_put_subpool(struct hugepage_subpool *spool);
>
> -void hugetlb_dup_vma_private(struct vm_area_struct *vma);
> -void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
> -int move_hugetlb_page_tables(struct vm_area_struct *vma,
> -			     struct vm_area_struct *new_vma,
> +void hugetlb_dup_vma_private(struct mm_area *vma);
> +void clear_vma_resv_huge_pages(struct mm_area *vma);
> +int move_hugetlb_page_tables(struct mm_area *vma,
> +			     struct mm_area *new_vma,
>  			     unsigned long old_addr, unsigned long new_addr,
>  			     unsigned long len);
>  int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
> -			    struct vm_area_struct *, struct vm_area_struct *);
> -void unmap_hugepage_range(struct vm_area_struct *,
> +			    struct mm_area *, struct mm_area *);
> +void unmap_hugepage_range(struct mm_area *,
>  			  unsigned long, unsigned long, struct page *,
>  			  zap_flags_t);
>  void __unmap_hugepage_range(struct mmu_gather *tlb,
> -			  struct vm_area_struct *vma,
> +			  struct mm_area *vma,
>  			  unsigned long start, unsigned long end,
>  			  struct page *ref_page, zap_flags_t zap_flags);
>  void hugetlb_report_meminfo(struct seq_file *);
>  int hugetlb_report_node_meminfo(char *buf, int len, int nid);
>  void hugetlb_show_meminfo_node(int nid);
>  unsigned long hugetlb_total_pages(void);
> -vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> +vm_fault_t hugetlb_fault(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long address, unsigned int flags);
>  #ifdef CONFIG_USERFAULTFD
>  int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
> -			     struct vm_area_struct *dst_vma,
> +			     struct mm_area *dst_vma,
>  			     unsigned long dst_addr,
>  			     unsigned long src_addr,
>  			     uffd_flags_t flags,
>  			     struct folio **foliop);
>  #endif /* CONFIG_USERFAULTFD */
>  bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
> -						struct vm_area_struct *vma,
> +						struct mm_area *vma,
>  						vm_flags_t vm_flags);
>  long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
>  						long freed);
> @@ -163,10 +163,10 @@ void hugetlb_fix_reserve_counts(struct inode *inode);
>  extern struct mutex *hugetlb_fault_mutex_table;
>  u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
>
> -pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pmd_share(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, pud_t *pud);
>  bool hugetlbfs_pagecache_present(struct hstate *h,
> -				 struct vm_area_struct *vma,
> +				 struct mm_area *vma,
>  				 unsigned long address);
>
>  struct address_space *hugetlb_folio_mapping_lock_write(struct folio *folio);
> @@ -196,7 +196,7 @@ static inline pte_t *pte_alloc_huge(struct mm_struct *mm, pmd_t *pmd,
>  }
>  #endif
>
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long addr, unsigned long sz);
>  /*
>   * huge_pte_offset(): Walk the hugetlb pgtable until the last level PTE.
> @@ -238,51 +238,51 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>  pte_t *huge_pte_offset(struct mm_struct *mm,
>  		       unsigned long addr, unsigned long sz);
>  unsigned long hugetlb_mask_last_page(struct hstate *h);
> -int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
> +int huge_pmd_unshare(struct mm_struct *mm, struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep);
> -void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
> +void adjust_range_if_pmd_sharing_possible(struct mm_area *vma,
>  				unsigned long *start, unsigned long *end);
>
> -extern void __hugetlb_zap_begin(struct vm_area_struct *vma,
> +extern void __hugetlb_zap_begin(struct mm_area *vma,
>  				unsigned long *begin, unsigned long *end);
> -extern void __hugetlb_zap_end(struct vm_area_struct *vma,
> +extern void __hugetlb_zap_end(struct mm_area *vma,
>  			      struct zap_details *details);
>
> -static inline void hugetlb_zap_begin(struct vm_area_struct *vma,
> +static inline void hugetlb_zap_begin(struct mm_area *vma,
>  				     unsigned long *start, unsigned long *end)
>  {
>  	if (is_vm_hugetlb_page(vma))
>  		__hugetlb_zap_begin(vma, start, end);
>  }
>
> -static inline void hugetlb_zap_end(struct vm_area_struct *vma,
> +static inline void hugetlb_zap_end(struct mm_area *vma,
>  				   struct zap_details *details)
>  {
>  	if (is_vm_hugetlb_page(vma))
>  		__hugetlb_zap_end(vma, details);
>  }
>
> -void hugetlb_vma_lock_read(struct vm_area_struct *vma);
> -void hugetlb_vma_unlock_read(struct vm_area_struct *vma);
> -void hugetlb_vma_lock_write(struct vm_area_struct *vma);
> -void hugetlb_vma_unlock_write(struct vm_area_struct *vma);
> -int hugetlb_vma_trylock_write(struct vm_area_struct *vma);
> -void hugetlb_vma_assert_locked(struct vm_area_struct *vma);
> +void hugetlb_vma_lock_read(struct mm_area *vma);
> +void hugetlb_vma_unlock_read(struct mm_area *vma);
> +void hugetlb_vma_lock_write(struct mm_area *vma);
> +void hugetlb_vma_unlock_write(struct mm_area *vma);
> +int hugetlb_vma_trylock_write(struct mm_area *vma);
> +void hugetlb_vma_assert_locked(struct mm_area *vma);
>  void hugetlb_vma_lock_release(struct kref *kref);
> -long hugetlb_change_protection(struct vm_area_struct *vma,
> +long hugetlb_change_protection(struct mm_area *vma,
>  		unsigned long address, unsigned long end, pgprot_t newprot,
>  		unsigned long cp_flags);
>  bool is_hugetlb_entry_migration(pte_t pte);
>  bool is_hugetlb_entry_hwpoisoned(pte_t pte);
> -void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
> +void hugetlb_unshare_all_pmds(struct mm_area *vma);
>
>  #else /* !CONFIG_HUGETLB_PAGE */
>
> -static inline void hugetlb_dup_vma_private(struct vm_area_struct *vma)
> +static inline void hugetlb_dup_vma_private(struct mm_area *vma)
>  {
>  }
>
> -static inline void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
> +static inline void clear_vma_resv_huge_pages(struct mm_area *vma)
>  {
>  }
>
> @@ -298,41 +298,41 @@ static inline struct address_space *hugetlb_folio_mapping_lock_write(
>  }
>
>  static inline int huge_pmd_unshare(struct mm_struct *mm,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep)
>  {
>  	return 0;
>  }
>
>  static inline void adjust_range_if_pmd_sharing_possible(
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				unsigned long *start, unsigned long *end)
>  {
>  }
>
>  static inline void hugetlb_zap_begin(
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				unsigned long *start, unsigned long *end)
>  {
>  }
>
>  static inline void hugetlb_zap_end(
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				struct zap_details *details)
>  {
>  }
>
>  static inline int copy_hugetlb_page_range(struct mm_struct *dst,
>  					  struct mm_struct *src,
> -					  struct vm_area_struct *dst_vma,
> -					  struct vm_area_struct *src_vma)
> +					  struct mm_area *dst_vma,
> +					  struct mm_area *src_vma)
>  {
>  	BUG();
>  	return 0;
>  }
>
> -static inline int move_hugetlb_page_tables(struct vm_area_struct *vma,
> -					   struct vm_area_struct *new_vma,
> +static inline int move_hugetlb_page_tables(struct mm_area *vma,
> +					   struct mm_area *new_vma,
>  					   unsigned long old_addr,
>  					   unsigned long new_addr,
>  					   unsigned long len)
> @@ -360,28 +360,28 @@ static inline int prepare_hugepage_range(struct file *file,
>  	return -EINVAL;
>  }
>
> -static inline void hugetlb_vma_lock_read(struct vm_area_struct *vma)
> +static inline void hugetlb_vma_lock_read(struct mm_area *vma)
>  {
>  }
>
> -static inline void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
> +static inline void hugetlb_vma_unlock_read(struct mm_area *vma)
>  {
>  }
>
> -static inline void hugetlb_vma_lock_write(struct vm_area_struct *vma)
> +static inline void hugetlb_vma_lock_write(struct mm_area *vma)
>  {
>  }
>
> -static inline void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
> +static inline void hugetlb_vma_unlock_write(struct mm_area *vma)
>  {
>  }
>
> -static inline int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
> +static inline int hugetlb_vma_trylock_write(struct mm_area *vma)
>  {
>  	return 1;
>  }
>
> -static inline void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
> +static inline void hugetlb_vma_assert_locked(struct mm_area *vma)
>  {
>  }
>
> @@ -400,7 +400,7 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
>
>  #ifdef CONFIG_USERFAULTFD
>  static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
> -					   struct vm_area_struct *dst_vma,
> +					   struct mm_area *dst_vma,
>  					   unsigned long dst_addr,
>  					   unsigned long src_addr,
>  					   uffd_flags_t flags,
> @@ -443,7 +443,7 @@ static inline void move_hugetlb_state(struct folio *old_folio,
>  }
>
>  static inline long hugetlb_change_protection(
> -			struct vm_area_struct *vma, unsigned long address,
> +			struct mm_area *vma, unsigned long address,
>  			unsigned long end, pgprot_t newprot,
>  			unsigned long cp_flags)
>  {
> @@ -451,7 +451,7 @@ static inline long hugetlb_change_protection(
>  }
>
>  static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
> -			struct vm_area_struct *vma, unsigned long start,
> +			struct mm_area *vma, unsigned long start,
>  			unsigned long end, struct page *ref_page,
>  			zap_flags_t zap_flags)
>  {
> @@ -459,14 +459,14 @@ static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
>  }
>
>  static inline vm_fault_t hugetlb_fault(struct mm_struct *mm,
> -			struct vm_area_struct *vma, unsigned long address,
> +			struct mm_area *vma, unsigned long address,
>  			unsigned int flags)
>  {
>  	BUG();
>  	return 0;
>  }
>
> -static inline void hugetlb_unshare_all_pmds(struct vm_area_struct *vma) { }
> +static inline void hugetlb_unshare_all_pmds(struct mm_area *vma) { }
>
>  #endif /* !CONFIG_HUGETLB_PAGE */
>
> @@ -698,7 +698,7 @@ bool hugetlb_bootmem_page_zones_valid(int nid, struct huge_bootmem_page *m);
>  int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
>  int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
>  void wait_for_freed_hugetlb_folios(void);
> -struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> +struct folio *alloc_hugetlb_folio(struct mm_area *vma,
>  				unsigned long addr, bool cow_from_owner);
>  struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>  				nodemask_t *nmask, gfp_t gfp_mask,
> @@ -708,7 +708,7 @@ struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
>
>  int hugetlb_add_to_page_cache(struct folio *folio, struct address_space *mapping,
>  			pgoff_t idx);
> -void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
> +void restore_reserve_on_error(struct hstate *h, struct mm_area *vma,
>  				unsigned long address, struct folio *folio);
>
>  /* arch callback */
> @@ -756,7 +756,7 @@ static inline struct hstate *hstate_sizelog(int page_size_log)
>  	return NULL;
>  }
>
> -static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
> +static inline struct hstate *hstate_vma(struct mm_area *vma)
>  {
>  	return hstate_file(vma->vm_file);
>  }
> @@ -766,9 +766,9 @@ static inline unsigned long huge_page_size(const struct hstate *h)
>  	return (unsigned long)PAGE_SIZE << h->order;
>  }
>
> -extern unsigned long vma_kernel_pagesize(struct vm_area_struct *vma);
> +extern unsigned long vma_kernel_pagesize(struct mm_area *vma);
>
> -extern unsigned long vma_mmu_pagesize(struct vm_area_struct *vma);
> +extern unsigned long vma_mmu_pagesize(struct mm_area *vma);
>
>  static inline unsigned long huge_page_mask(struct hstate *h)
>  {
> @@ -1028,7 +1028,7 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm)
>
>  #ifndef huge_ptep_modify_prot_start
>  #define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
> -static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_modify_prot_start(struct mm_area *vma,
>  						unsigned long addr, pte_t *ptep)
>  {
>  	unsigned long psize = huge_page_size(hstate_vma(vma));
> @@ -1039,7 +1039,7 @@ static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
>
>  #ifndef huge_ptep_modify_prot_commit
>  #define huge_ptep_modify_prot_commit huge_ptep_modify_prot_commit
> -static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> +static inline void huge_ptep_modify_prot_commit(struct mm_area *vma,
>  						unsigned long addr, pte_t *ptep,
>  						pte_t old_pte, pte_t pte)
>  {
> @@ -1099,7 +1099,7 @@ static inline void wait_for_freed_hugetlb_folios(void)
>  {
>  }
>
> -static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> +static inline struct folio *alloc_hugetlb_folio(struct mm_area *vma,
>  					   unsigned long addr,
>  					   bool cow_from_owner)
>  {
> @@ -1136,7 +1136,7 @@ static inline struct hstate *hstate_sizelog(int page_size_log)
>  	return NULL;
>  }
>
> -static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
> +static inline struct hstate *hstate_vma(struct mm_area *vma)
>  {
>  	return NULL;
>  }
> @@ -1161,12 +1161,12 @@ static inline unsigned long huge_page_mask(struct hstate *h)
>  	return PAGE_MASK;
>  }
>
> -static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
> +static inline unsigned long vma_kernel_pagesize(struct mm_area *vma)
>  {
>  	return PAGE_SIZE;
>  }
>
> -static inline unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
> +static inline unsigned long vma_mmu_pagesize(struct mm_area *vma)
>  {
>  	return PAGE_SIZE;
>  }
> @@ -1255,7 +1255,7 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm)
>  {
>  }
>
> -static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> +static inline pte_t huge_ptep_clear_flush(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep)
>  {
>  #ifdef CONFIG_MMU
> @@ -1279,7 +1279,7 @@ static inline void hugetlb_unregister_node(struct node *node)
>  }
>
>  static inline bool hugetlbfs_pagecache_present(
> -    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
> +    struct hstate *h, struct mm_area *vma, unsigned long address)
>  {
>  	return false;
>  }
> @@ -1324,7 +1324,7 @@ static inline bool hugetlb_pmd_shared(pte_t *pte)
>  }
>  #endif
>
> -bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr);
> +bool want_pmd_share(struct mm_area *vma, unsigned long addr);
>
>  #ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
>  /*
> @@ -1334,19 +1334,19 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr);
>  #define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
>  #endif
>
> -static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
> +static inline bool __vma_shareable_lock(struct mm_area *vma)
>  {
>  	return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
>  }
>
> -bool __vma_private_lock(struct vm_area_struct *vma);
> +bool __vma_private_lock(struct mm_area *vma);
>
>  /*
>   * Safe version of huge_pte_offset() to check the locks.  See comments
>   * above huge_pte_offset().
>   */
>  static inline pte_t *
> -hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
> +hugetlb_walk(struct mm_area *vma, unsigned long addr, unsigned long sz)
>  {
>  #if defined(CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING) && defined(CONFIG_LOCKDEP)
>  	struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> diff --git a/include/linux/hugetlb_inline.h b/include/linux/hugetlb_inline.h
> index 0660a03d37d9..d3d90fb50ebf 100644
> --- a/include/linux/hugetlb_inline.h
> +++ b/include/linux/hugetlb_inline.h
> @@ -6,14 +6,14 @@
>
>  #include <linux/mm.h>
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(struct mm_area *vma)
>  {
>  	return !!(vma->vm_flags & VM_HUGETLB);
>  }
>
>  #else
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
> +static inline bool is_vm_hugetlb_page(struct mm_area *vma)
>  {
>  	return false;
>  }
> diff --git a/include/linux/io-mapping.h b/include/linux/io-mapping.h
> index 7376c1df9c90..04d6dfd172da 100644
> --- a/include/linux/io-mapping.h
> +++ b/include/linux/io-mapping.h
> @@ -225,7 +225,7 @@ io_mapping_free(struct io_mapping *iomap)
>  	kfree(iomap);
>  }
>
> -int io_mapping_map_user(struct io_mapping *iomap, struct vm_area_struct *vma,
> +int io_mapping_map_user(struct io_mapping *iomap, struct mm_area *vma,
>  		unsigned long addr, unsigned long pfn, unsigned long size);
>
>  #endif /* _LINUX_IO_MAPPING_H */
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index 02fe001feebb..2186061ce745 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -19,7 +19,7 @@ struct iomap_writepage_ctx;
>  struct iov_iter;
>  struct kiocb;
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>  struct vm_fault;
>
>  /*
> diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
> index 508beaa44c39..ff772553d76b 100644
> --- a/include/linux/iommu-dma.h
> +++ b/include/linux/iommu-dma.h
> @@ -32,7 +32,7 @@ void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
>  		enum dma_data_direction dir, unsigned long attrs);
>  void *iommu_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
>  		gfp_t gfp, unsigned long attrs);
> -int iommu_dma_mmap(struct device *dev, struct vm_area_struct *vma,
> +int iommu_dma_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  int iommu_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
> @@ -55,7 +55,7 @@ void *iommu_dma_vmap_noncontiguous(struct device *dev, size_t size,
>  		struct sg_table *sgt);
>  #define iommu_dma_vunmap_noncontiguous(dev, vaddr) \
>  	vunmap(vaddr);
> -int iommu_dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
> +int iommu_dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
>  		size_t size, struct sg_table *sgt);
>  void iommu_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
>  		size_t size, enum dma_data_direction dir);
> diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
> index b5a5f32fdfd1..087c03af27b8 100644
> --- a/include/linux/kernfs.h
> +++ b/include/linux/kernfs.h
> @@ -24,7 +24,7 @@ struct file;
>  struct dentry;
>  struct iattr;
>  struct seq_file;
> -struct vm_area_struct;
> +struct mm_area;
>  struct vm_operations_struct;
>  struct super_block;
>  struct file_system_type;
> @@ -322,7 +322,7 @@ struct kernfs_ops {
>  	__poll_t (*poll)(struct kernfs_open_file *of,
>  			 struct poll_table_struct *pt);
>
> -	int (*mmap)(struct kernfs_open_file *of, struct vm_area_struct *vma);
> +	int (*mmap)(struct kernfs_open_file *of, struct mm_area *vma);
>  	loff_t (*llseek)(struct kernfs_open_file *of, loff_t offset, int whence);
>  };
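
Implementers of this op likewise only touch the signature. A hedged
sketch (the handler name is made up, and it assumes the node's ->priv
points at a vmalloc'ed buffer):

static int mydrv_kernfs_mmap(struct kernfs_open_file *of,
			     struct mm_area *vma)
{
	/* body unchanged by the rename; only the parameter type moves */
	return remap_vmalloc_range(vma, of->kn->priv, 0);
}
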
>
> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> index 1f46046080f5..df545b9908b0 100644
> --- a/include/linux/khugepaged.h
> +++ b/include/linux/khugepaged.h
> @@ -11,7 +11,7 @@ extern void khugepaged_destroy(void);
>  extern int start_stop_khugepaged(void);
>  extern void __khugepaged_enter(struct mm_struct *mm);
>  extern void __khugepaged_exit(struct mm_struct *mm);
> -extern void khugepaged_enter_vma(struct vm_area_struct *vma,
> +extern void khugepaged_enter_vma(struct mm_area *vma,
>  				 unsigned long vm_flags);
>  extern void khugepaged_min_free_kbytes_update(void);
>  extern bool current_is_khugepaged(void);
> @@ -44,7 +44,7 @@ static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm
>  static inline void khugepaged_exit(struct mm_struct *mm)
>  {
>  }
> -static inline void khugepaged_enter_vma(struct vm_area_struct *vma,
> +static inline void khugepaged_enter_vma(struct mm_area *vma,
>  					unsigned long vm_flags)
>  {
>  }
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index d73095b5cd96..b215a192a192 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -15,10 +15,10 @@
>  #include <linux/sched.h>
>
>  #ifdef CONFIG_KSM
> -int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> +int ksm_madvise(struct mm_area *vma, unsigned long start,
>  		unsigned long end, int advice, unsigned long *vm_flags);
>
> -void ksm_add_vma(struct vm_area_struct *vma);
> +void ksm_add_vma(struct mm_area *vma);
>  int ksm_enable_merge_any(struct mm_struct *mm);
>  int ksm_disable_merge_any(struct mm_struct *mm);
>  int ksm_disable(struct mm_struct *mm);
> @@ -86,7 +86,7 @@ static inline void ksm_exit(struct mm_struct *mm)
>   * but what if the vma was unmerged while the page was swapped out?
>   */
>  struct folio *ksm_might_need_to_copy(struct folio *folio,
> -			struct vm_area_struct *vma, unsigned long addr);
> +			struct mm_area *vma, unsigned long addr);
>
>  void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
>  void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
> @@ -97,7 +97,7 @@ bool ksm_process_mergeable(struct mm_struct *mm);
>
>  #else  /* !CONFIG_KSM */
>
> -static inline void ksm_add_vma(struct vm_area_struct *vma)
> +static inline void ksm_add_vma(struct mm_area *vma)
>  {
>  }
>
> @@ -130,14 +130,14 @@ static inline void collect_procs_ksm(const struct folio *folio,
>  }
>
>  #ifdef CONFIG_MMU
> -static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> +static inline int ksm_madvise(struct mm_area *vma, unsigned long start,
>  		unsigned long end, int advice, unsigned long *vm_flags)
>  {
>  	return 0;
>  }
>
>  static inline struct folio *ksm_might_need_to_copy(struct folio *folio,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	return folio;
>  }
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 5438a1b446a6..09b7d56cacdb 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2340,7 +2340,7 @@ struct kvm_device_ops {
>  	int (*has_attr)(struct kvm_device *dev, struct kvm_device_attr *attr);
>  	long (*ioctl)(struct kvm_device *dev, unsigned int ioctl,
>  		      unsigned long arg);
> -	int (*mmap)(struct kvm_device *dev, struct vm_area_struct *vma);
> +	int (*mmap)(struct kvm_device *dev, struct mm_area *vma);
>  };
>
>  struct kvm_device *kvm_device_from_filp(struct file *filp);
> diff --git a/include/linux/lsm_hook_defs.h b/include/linux/lsm_hook_defs.h
> index bf3bbac4e02a..0401c8ceeaa0 100644
> --- a/include/linux/lsm_hook_defs.h
> +++ b/include/linux/lsm_hook_defs.h
> @@ -196,7 +196,7 @@ LSM_HOOK(int, 0, file_ioctl_compat, struct file *file, unsigned int cmd,
>  LSM_HOOK(int, 0, mmap_addr, unsigned long addr)
>  LSM_HOOK(int, 0, mmap_file, struct file *file, unsigned long reqprot,
>  	 unsigned long prot, unsigned long flags)
> -LSM_HOOK(int, 0, file_mprotect, struct vm_area_struct *vma,
> +LSM_HOOK(int, 0, file_mprotect, struct mm_area *vma,
>  	 unsigned long reqprot, unsigned long prot)
>  LSM_HOOK(int, 0, file_lock, struct file *file, unsigned int cmd)
>  LSM_HOOK(int, 0, file_fcntl, struct file *file, unsigned int cmd,
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index ce9885e0178a..8bf1d4d50ce8 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -118,27 +118,27 @@ struct sp_node {
>  	struct mempolicy *policy;
>  };
>
> -int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
> +int vma_dup_policy(struct mm_area *src, struct mm_area *dst);
>  void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
>  int mpol_set_shared_policy(struct shared_policy *sp,
> -			   struct vm_area_struct *vma, struct mempolicy *mpol);
> +			   struct mm_area *vma, struct mempolicy *mpol);
>  void mpol_free_shared_policy(struct shared_policy *sp);
>  struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
>  					    pgoff_t idx);
>
>  struct mempolicy *get_task_policy(struct task_struct *p);
> -struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> +struct mempolicy *__get_vma_policy(struct mm_area *vma,
>  		unsigned long addr, pgoff_t *ilx);
> -struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> +struct mempolicy *get_vma_policy(struct mm_area *vma,
>  		unsigned long addr, int order, pgoff_t *ilx);
> -bool vma_policy_mof(struct vm_area_struct *vma);
> +bool vma_policy_mof(struct mm_area *vma);
>
>  extern void numa_default_policy(void);
>  extern void numa_policy_init(void);
>  extern void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new);
>  extern void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);
>
> -extern int huge_node(struct vm_area_struct *vma,
> +extern int huge_node(struct mm_area *vma,
>  				unsigned long addr, gfp_t gfp_flags,
>  				struct mempolicy **mpol, nodemask_t **nodemask);
>  extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
> @@ -165,7 +165,7 @@ extern int mpol_parse_str(char *str, struct mempolicy **mpol);
>  extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
>
>  /* Check if a vma is migratable */
> -extern bool vma_migratable(struct vm_area_struct *vma);
> +extern bool vma_migratable(struct mm_area *vma);
>
>  int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
>  					unsigned long addr);
> @@ -221,7 +221,7 @@ mpol_shared_policy_lookup(struct shared_policy *sp, pgoff_t idx)
>  	return NULL;
>  }
>
> -static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> +static inline struct mempolicy *get_vma_policy(struct mm_area *vma,
>  				unsigned long addr, int order, pgoff_t *ilx)
>  {
>  	*ilx = 0;
> @@ -229,7 +229,7 @@ static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
>  }
>
>  static inline int
> -vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
> +vma_dup_policy(struct mm_area *src, struct mm_area *dst)
>  {
>  	return 0;
>  }
> @@ -251,7 +251,7 @@ static inline void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
>  {
>  }
>
> -static inline int huge_node(struct vm_area_struct *vma,
> +static inline int huge_node(struct mm_area *vma,
>  				unsigned long addr, gfp_t gfp_flags,
>  				struct mempolicy **mpol, nodemask_t **nodemask)
>  {
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index aaa2114498d6..e64c14d9bd5a 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -143,11 +143,11 @@ const struct movable_operations *page_movable_ops(struct page *page)
>
>  #ifdef CONFIG_NUMA_BALANCING
>  int migrate_misplaced_folio_prepare(struct folio *folio,
> -		struct vm_area_struct *vma, int node);
> +		struct mm_area *vma, int node);
>  int migrate_misplaced_folio(struct folio *folio, int node);
>  #else
>  static inline int migrate_misplaced_folio_prepare(struct folio *folio,
> -		struct vm_area_struct *vma, int node)
> +		struct mm_area *vma, int node)
>  {
>  	return -EAGAIN; /* can't migrate now */
>  }
> @@ -188,7 +188,7 @@ enum migrate_vma_direction {
>  };
>
>  struct migrate_vma {
> -	struct vm_area_struct	*vma;
> +	struct mm_area	*vma;
>  	/*
>  	 * Both src and dst array must be big enough for
>  	 * (end - start) >> PAGE_SHIFT entries.
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b7f13f087954..193ef16cd441 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -230,9 +230,9 @@ void setup_initial_init_mm(void *start_code, void *end_code,
>   * mmap() functions).
>   */
>
> -struct vm_area_struct *vm_area_alloc(struct mm_struct *);
> -struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
> -void vm_area_free(struct vm_area_struct *);
> +struct mm_area *vm_area_alloc(struct mm_struct *);
> +struct mm_area *vm_area_dup(struct mm_area *);
> +void vm_area_free(struct mm_area *);
>
>  #ifndef CONFIG_MMU
>  extern struct rb_root nommu_region_tree;
> @@ -242,7 +242,7 @@ extern unsigned int kobjsize(const void *objp);
>  #endif
>
>  /*
> - * vm_flags in vm_area_struct, see mm_types.h.
> + * vm_flags in mm_area, see mm_types.h.
>   * When changing, update also include/trace/events/mmflags.h
>   */
>  #define VM_NONE		0x00000000
> @@ -533,7 +533,7 @@ static inline bool fault_flag_allow_retry_first(enum fault_flag flags)
>   */
>  struct vm_fault {
>  	const struct {
> -		struct vm_area_struct *vma;	/* Target VMA */
> +		struct mm_area *vma;	/* Target VMA */
>  		gfp_t gfp_mask;			/* gfp mask to be used for allocations */
>  		pgoff_t pgoff;			/* Logical page offset based on vma */
>  		unsigned long address;		/* Faulting virtual address - masked */
> @@ -583,27 +583,27 @@ struct vm_fault {
>   * to the functions called when a no-page or a wp-page exception occurs.
>   */
>  struct vm_operations_struct {
> -	void (*open)(struct vm_area_struct * area);
> +	void (*open)(struct mm_area * area);
>  	/**
>  	 * @close: Called when the VMA is being removed from the MM.
>  	 * Context: User context.  May sleep.  Caller holds mmap_lock.
>  	 */
> -	void (*close)(struct vm_area_struct * area);
> +	void (*close)(struct mm_area * area);
>  	/* Called any time before splitting to check if it's allowed */
> -	int (*may_split)(struct vm_area_struct *area, unsigned long addr);
> -	int (*mremap)(struct vm_area_struct *area);
> +	int (*may_split)(struct mm_area *area, unsigned long addr);
> +	int (*mremap)(struct mm_area *area);
>  	/*
>  	 * Called by mprotect() to make driver-specific permission
>  	 * checks before mprotect() is finalised.   The VMA must not
>  	 * be modified.  Returns 0 if mprotect() can proceed.
>  	 */
> -	int (*mprotect)(struct vm_area_struct *vma, unsigned long start,
> +	int (*mprotect)(struct mm_area *vma, unsigned long start,
>  			unsigned long end, unsigned long newflags);
>  	vm_fault_t (*fault)(struct vm_fault *vmf);
>  	vm_fault_t (*huge_fault)(struct vm_fault *vmf, unsigned int order);
>  	vm_fault_t (*map_pages)(struct vm_fault *vmf,
>  			pgoff_t start_pgoff, pgoff_t end_pgoff);
> -	unsigned long (*pagesize)(struct vm_area_struct * area);
> +	unsigned long (*pagesize)(struct mm_area * area);
>
>  	/* notification that a previously read-only page is about to become
>  	 * writable, if an error is returned it will cause a SIGBUS */
> @@ -616,13 +616,13 @@ struct vm_operations_struct {
>  	 * for use by special VMAs. See also generic_access_phys() for a generic
>  	 * implementation useful for any iomem mapping.
>  	 */
> -	int (*access)(struct vm_area_struct *vma, unsigned long addr,
> +	int (*access)(struct mm_area *vma, unsigned long addr,
>  		      void *buf, int len, int write);
>
>  	/* Called by the /proc/PID/maps code to ask the vma whether it
>  	 * has a special name.  Returning non-NULL will also cause this
>  	 * vma to be dumped unconditionally. */
> -	const char *(*name)(struct vm_area_struct *vma);
> +	const char *(*name)(struct mm_area *vma);
>
>  #ifdef CONFIG_NUMA
>  	/*
> @@ -632,7 +632,7 @@ struct vm_operations_struct {
>  	 * install a MPOL_DEFAULT policy, nor the task or system default
>  	 * mempolicy.
>  	 */
> -	int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
> +	int (*set_policy)(struct mm_area *vma, struct mempolicy *new);
>
>  	/*
>  	 * get_policy() op must add reference [mpol_get()] to any policy at
> @@ -644,7 +644,7 @@ struct vm_operations_struct {
>  	 * must return NULL--i.e., do not "fallback" to task or system default
>  	 * policy.
>  	 */
> -	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
> +	struct mempolicy *(*get_policy)(struct mm_area *vma,
>  					unsigned long addr, pgoff_t *ilx);
>  #endif
>  	/*
> @@ -652,26 +652,26 @@ struct vm_operations_struct {
>  	 * page for @addr.  This is useful if the default behavior
>  	 * (using pte_page()) would not find the correct page.
>  	 */
> -	struct page *(*find_special_page)(struct vm_area_struct *vma,
> +	struct page *(*find_special_page)(struct mm_area *vma,
>  					  unsigned long addr);
>  };
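
For driver writers the ops-table pattern is mechanical under the new
name; a sketch with made-up mydrv_* identifiers, not code from this
patch:

static void mydrv_vma_open(struct mm_area *area)
{
	/* e.g. take a reference on the backing device */
}

static void mydrv_vma_close(struct mm_area *area)
{
	/* and drop it here */
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.open	= mydrv_vma_open,
	.close	= mydrv_vma_close,
};
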
>
>  #ifdef CONFIG_NUMA_BALANCING
> -static inline void vma_numab_state_init(struct vm_area_struct *vma)
> +static inline void vma_numab_state_init(struct mm_area *vma)
>  {
>  	vma->numab_state = NULL;
>  }
> -static inline void vma_numab_state_free(struct vm_area_struct *vma)
> +static inline void vma_numab_state_free(struct mm_area *vma)
>  {
>  	kfree(vma->numab_state);
>  }
>  #else
> -static inline void vma_numab_state_init(struct vm_area_struct *vma) {}
> -static inline void vma_numab_state_free(struct vm_area_struct *vma) {}
> +static inline void vma_numab_state_init(struct mm_area *vma) {}
> +static inline void vma_numab_state_free(struct mm_area *vma) {}
>  #endif /* CONFIG_NUMA_BALANCING */
>
>  #ifdef CONFIG_PER_VMA_LOCK
> -static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
> +static inline void vma_lock_init(struct mm_area *vma, bool reset_refcnt)
>  {
>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>  	static struct lock_class_key lockdep_key;
> @@ -694,7 +694,7 @@ static inline bool is_vma_writer_only(int refcnt)
>  	return refcnt & VMA_LOCK_OFFSET && refcnt <= VMA_LOCK_OFFSET + 1;
>  }
>
> -static inline void vma_refcount_put(struct vm_area_struct *vma)
> +static inline void vma_refcount_put(struct mm_area *vma)
>  {
>  	/* Use a copy of vm_mm in case vma is freed after we drop vm_refcnt */
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -717,8 +717,8 @@ static inline void vma_refcount_put(struct vm_area_struct *vma)
>   * Returns the vma on success, NULL on failure to lock and EAGAIN if vma got
>   * detached.
>   */
> -static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
> -						    struct vm_area_struct *vma)
> +static inline struct mm_area *vma_start_read(struct mm_struct *mm,
> +						    struct mm_area *vma)
>  {
>  	int oldcnt;
>
> @@ -770,7 +770,7 @@ static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
>   * not be used in such cases because it might fail due to mm_lock_seq overflow.
>   * This functionality is used to obtain vma read lock and drop the mmap read lock.
>   */
> -static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int subclass)
> +static inline bool vma_start_read_locked_nested(struct mm_area *vma, int subclass)
>  {
>  	int oldcnt;
>
> @@ -789,18 +789,18 @@ static inline bool vma_start_read_locked_nested(struct vm_area_struct *vma, int
>   * not be used in such cases because it might fail due to mm_lock_seq overflow.
>   * This functionality is used to obtain vma read lock and drop the mmap read lock.
>   */
> -static inline bool vma_start_read_locked(struct vm_area_struct *vma)
> +static inline bool vma_start_read_locked(struct mm_area *vma)
>  {
>  	return vma_start_read_locked_nested(vma, 0);
>  }
>
> -static inline void vma_end_read(struct vm_area_struct *vma)
> +static inline void vma_end_read(struct mm_area *vma)
>  {
>  	vma_refcount_put(vma);
>  }
>
>  /* WARNING! Can only be used if mmap_lock is expected to be write-locked */
> -static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_lock_seq)
> +static bool __is_vma_write_locked(struct mm_area *vma, unsigned int *mm_lock_seq)
>  {
>  	mmap_assert_write_locked(vma->vm_mm);
>
> @@ -812,14 +812,14 @@ static bool __is_vma_write_locked(struct vm_area_struct *vma, unsigned int *mm_l
>  	return (vma->vm_lock_seq == *mm_lock_seq);
>  }
>
> -void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
> +void __vma_start_write(struct mm_area *vma, unsigned int mm_lock_seq);
>
>  /*
>   * Begin writing to a VMA.
>   * Exclude concurrent readers under the per-VMA lock until the currently
>   * write-locked mmap_lock is dropped or downgraded.
>   */
> -static inline void vma_start_write(struct vm_area_struct *vma)
> +static inline void vma_start_write(struct mm_area *vma)
>  {
>  	unsigned int mm_lock_seq;
>
> @@ -829,14 +829,14 @@ static inline void vma_start_write(struct vm_area_struct *vma)
>  	__vma_start_write(vma, mm_lock_seq);
>  }
>
> -static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> +static inline void vma_assert_write_locked(struct mm_area *vma)
>  {
>  	unsigned int mm_lock_seq;
>
>  	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
>  }
>
> -static inline void vma_assert_locked(struct vm_area_struct *vma)
> +static inline void vma_assert_locked(struct mm_area *vma)
>  {
>  	unsigned int mm_lock_seq;
>
> @@ -849,24 +849,24 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
>   * assertions should be made either under mmap_write_lock or when the object
>   * has been isolated under mmap_write_lock, ensuring no competing writers.
>   */
> -static inline void vma_assert_attached(struct vm_area_struct *vma)
> +static inline void vma_assert_attached(struct mm_area *vma)
>  {
>  	WARN_ON_ONCE(!refcount_read(&vma->vm_refcnt));
>  }
>
> -static inline void vma_assert_detached(struct vm_area_struct *vma)
> +static inline void vma_assert_detached(struct mm_area *vma)
>  {
>  	WARN_ON_ONCE(refcount_read(&vma->vm_refcnt));
>  }
>
> -static inline void vma_mark_attached(struct vm_area_struct *vma)
> +static inline void vma_mark_attached(struct mm_area *vma)
>  {
>  	vma_assert_write_locked(vma);
>  	vma_assert_detached(vma);
>  	refcount_set_release(&vma->vm_refcnt, 1);
>  }
>
> -void vma_mark_detached(struct vm_area_struct *vma);
> +void vma_mark_detached(struct mm_area *vma);
>
>  static inline void release_fault_lock(struct vm_fault *vmf)
>  {
> @@ -884,31 +884,31 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
>  		mmap_assert_locked(vmf->vma->vm_mm);
>  }
>
> -struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> +struct mm_area *lock_vma_under_rcu(struct mm_struct *mm,
>  					  unsigned long address);
>
>  #else /* CONFIG_PER_VMA_LOCK */
>
> -static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt) {}
> -static inline struct vm_area_struct *vma_start_read(struct mm_struct *mm,
> -						    struct vm_area_struct *vma)
> +static inline void vma_lock_init(struct mm_area *vma, bool reset_refcnt) {}
> +static inline struct mm_area *vma_start_read(struct mm_struct *mm,
> +						    struct mm_area *vma)
>  		{ return NULL; }
> -static inline void vma_end_read(struct vm_area_struct *vma) {}
> -static inline void vma_start_write(struct vm_area_struct *vma) {}
> -static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> +static inline void vma_end_read(struct mm_area *vma) {}
> +static inline void vma_start_write(struct mm_area *vma) {}
> +static inline void vma_assert_write_locked(struct mm_area *vma)
>  		{ mmap_assert_write_locked(vma->vm_mm); }
> -static inline void vma_assert_attached(struct vm_area_struct *vma) {}
> -static inline void vma_assert_detached(struct vm_area_struct *vma) {}
> -static inline void vma_mark_attached(struct vm_area_struct *vma) {}
> -static inline void vma_mark_detached(struct vm_area_struct *vma) {}
> +static inline void vma_assert_attached(struct mm_area *vma) {}
> +static inline void vma_assert_detached(struct mm_area *vma) {}
> +static inline void vma_mark_attached(struct mm_area *vma) {}
> +static inline void vma_mark_detached(struct mm_area *vma) {}
>
> -static inline struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> +static inline struct mm_area *lock_vma_under_rcu(struct mm_struct *mm,
>  		unsigned long address)
>  {
>  	return NULL;
>  }
>
> -static inline void vma_assert_locked(struct vm_area_struct *vma)
> +static inline void vma_assert_locked(struct mm_area *vma)
>  {
>  	mmap_assert_locked(vma->vm_mm);
>  }
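
To make the lock/unlock comments above concrete: the arch fault-path
pattern comes out roughly like this after the rename (a sketch with the
fallback path and error handling elided, not code from this patch):

	struct mm_area *vma;

	vma = lock_vma_under_rcu(mm, address);
	if (!vma)
		goto lock_mmap;	/* fall back to mmap_read_lock() */

	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);
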
> @@ -927,7 +927,7 @@ static inline void assert_fault_locked(struct vm_fault *vmf)
>
>  extern const struct vm_operations_struct vma_dummy_vm_ops;
>
> -static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> +static inline void vma_init(struct mm_area *vma, struct mm_struct *mm)
>  {
>  	memset(vma, 0, sizeof(*vma));
>  	vma->vm_mm = mm;
> @@ -937,7 +937,7 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
>  }
>
>  /* Use when VMA is not part of the VMA tree and needs no locking */
> -static inline void vm_flags_init(struct vm_area_struct *vma,
> +static inline void vm_flags_init(struct mm_area *vma,
>  				 vm_flags_t flags)
>  {
>  	ACCESS_PRIVATE(vma, __vm_flags) = flags;
> @@ -948,28 +948,28 @@ static inline void vm_flags_init(struct vm_area_struct *vma,
>   * Note: vm_flags_reset and vm_flags_reset_once do not lock the vma and
>   * it should be locked explicitly beforehand.
>   */
> -static inline void vm_flags_reset(struct vm_area_struct *vma,
> +static inline void vm_flags_reset(struct mm_area *vma,
>  				  vm_flags_t flags)
>  {
>  	vma_assert_write_locked(vma);
>  	vm_flags_init(vma, flags);
>  }
>
> -static inline void vm_flags_reset_once(struct vm_area_struct *vma,
> +static inline void vm_flags_reset_once(struct mm_area *vma,
>  				       vm_flags_t flags)
>  {
>  	vma_assert_write_locked(vma);
>  	WRITE_ONCE(ACCESS_PRIVATE(vma, __vm_flags), flags);
>  }
>
> -static inline void vm_flags_set(struct vm_area_struct *vma,
> +static inline void vm_flags_set(struct mm_area *vma,
>  				vm_flags_t flags)
>  {
>  	vma_start_write(vma);
>  	ACCESS_PRIVATE(vma, __vm_flags) |= flags;
>  }
>
> -static inline void vm_flags_clear(struct vm_area_struct *vma,
> +static inline void vm_flags_clear(struct mm_area *vma,
>  				  vm_flags_t flags)
>  {
>  	vma_start_write(vma);
> @@ -980,7 +980,7 @@ static inline void vm_flags_clear(struct vm_area_struct *vma,
>   * Use only if VMA is not part of the VMA tree or has no other users and
>   * therefore needs no locking.
>   */
> -static inline void __vm_flags_mod(struct vm_area_struct *vma,
> +static inline void __vm_flags_mod(struct mm_area *vma,
>  				  vm_flags_t set, vm_flags_t clear)
>  {
>  	vm_flags_init(vma, (vma->vm_flags | set) & ~clear);
> @@ -990,19 +990,19 @@ static inline void __vm_flags_mod(struct vm_area_struct *vma,
>   * Use only when the order of set/clear operations is unimportant, otherwise
>   * use vm_flags_{set|clear} explicitly.
>   */
> -static inline void vm_flags_mod(struct vm_area_struct *vma,
> +static inline void vm_flags_mod(struct mm_area *vma,
>  				vm_flags_t set, vm_flags_t clear)
>  {
>  	vma_start_write(vma);
>  	__vm_flags_mod(vma, set, clear);
>  }
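
The locking split these helpers encode, in two lines (a sketch, not
from the patch):

	/* fresh VMA, not yet visible in the tree: unlocked init is fine */
	vm_flags_init(vma, VM_READ | VM_MAYREAD);

	/* VMA already in the tree: vm_flags_set() write-locks it first */
	vm_flags_set(vma, VM_LOCKED);
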
>
> -static inline void vma_set_anonymous(struct vm_area_struct *vma)
> +static inline void vma_set_anonymous(struct mm_area *vma)
>  {
>  	vma->vm_ops = NULL;
>  }
>
> -static inline bool vma_is_anonymous(struct vm_area_struct *vma)
> +static inline bool vma_is_anonymous(struct mm_area *vma)
>  {
>  	return !vma->vm_ops;
>  }
> @@ -1011,7 +1011,7 @@ static inline bool vma_is_anonymous(struct vm_area_struct *vma)
>   * Indicate if the VMA is a heap for the given task; for
>   * /proc/PID/maps that is the heap of the main task.
>   */
> -static inline bool vma_is_initial_heap(const struct vm_area_struct *vma)
> +static inline bool vma_is_initial_heap(const struct mm_area *vma)
>  {
>  	return vma->vm_start < vma->vm_mm->brk &&
>  		vma->vm_end > vma->vm_mm->start_brk;
> @@ -1021,7 +1021,7 @@ static inline bool vma_is_initial_heap(const struct vm_area_struct *vma)
>   * Indicate if the VMA is a stack for the given task; for
>   * /proc/PID/maps that is the stack of the main task.
>   */
> -static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
> +static inline bool vma_is_initial_stack(const struct mm_area *vma)
>  {
>  	/*
>  	 * We make no effort to guess what a given thread considers to be
> @@ -1032,7 +1032,7 @@ static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
>  		vma->vm_end >= vma->vm_mm->start_stack;
>  }
>
> -static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
> +static inline bool vma_is_temporary_stack(struct mm_area *vma)
>  {
>  	int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
>
> @@ -1046,7 +1046,7 @@ static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -static inline bool vma_is_foreign(struct vm_area_struct *vma)
> +static inline bool vma_is_foreign(struct mm_area *vma)
>  {
>  	if (!current->mm)
>  		return true;
> @@ -1057,7 +1057,7 @@ static inline bool vma_is_foreign(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -static inline bool vma_is_accessible(struct vm_area_struct *vma)
> +static inline bool vma_is_accessible(struct mm_area *vma)
>  {
>  	return vma->vm_flags & VM_ACCESS_FLAGS;
>  }
> @@ -1068,18 +1068,18 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
>  		(VM_SHARED | VM_MAYWRITE);
>  }
>
> -static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
> +static inline bool vma_is_shared_maywrite(struct mm_area *vma)
>  {
>  	return is_shared_maywrite(vma->vm_flags);
>  }
>
>  static inline
> -struct vm_area_struct *vma_find(struct vma_iterator *vmi, unsigned long max)
> +struct mm_area *vma_find(struct vma_iterator *vmi, unsigned long max)
>  {
>  	return mas_find(&vmi->mas, max - 1);
>  }
>
> -static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
> +static inline struct mm_area *vma_next(struct vma_iterator *vmi)
>  {
>  	/*
>  	 * Uses mas_find() to get the first VMA when the iterator starts.
> @@ -1089,13 +1089,13 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
>  }
>
>  static inline
> -struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
> +struct mm_area *vma_iter_next_range(struct vma_iterator *vmi)
>  {
>  	return mas_next_range(&vmi->mas, ULONG_MAX);
>  }
>
>
> -static inline struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
> +static inline struct mm_area *vma_prev(struct vma_iterator *vmi)
>  {
>  	return mas_prev(&vmi->mas, 0);
>  }
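
Assuming the VMA_ITERATOR()/for_each_vma() wrappers built on these
helpers pick up the new type transparently, iteration still reads as
(sketch):

	VMA_ITERATOR(vmi, mm, 0);
	struct mm_area *vma;

	mmap_read_lock(mm);
	for_each_vma(vmi, vma)
		pr_debug("%lx-%lx\n", vma->vm_start, vma->vm_end);
	mmap_read_unlock(mm);
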
> @@ -1118,7 +1118,7 @@ static inline void vma_iter_free(struct vma_iterator *vmi)
>  }
>
>  static inline int vma_iter_bulk_store(struct vma_iterator *vmi,
> -				      struct vm_area_struct *vma)
> +				      struct mm_area *vma)
>  {
>  	vmi->mas.index = vma->vm_start;
>  	vmi->mas.last = vma->vm_end - 1;
> @@ -1152,14 +1152,14 @@ static inline void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
>   * The vma_is_shmem is not inline because it is used only by slow
>   * paths in userfault.
>   */
> -bool vma_is_shmem(struct vm_area_struct *vma);
> -bool vma_is_anon_shmem(struct vm_area_struct *vma);
> +bool vma_is_shmem(struct mm_area *vma);
> +bool vma_is_anon_shmem(struct mm_area *vma);
>  #else
> -static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
> -static inline bool vma_is_anon_shmem(struct vm_area_struct *vma) { return false; }
> +static inline bool vma_is_shmem(struct mm_area *vma) { return false; }
> +static inline bool vma_is_anon_shmem(struct mm_area *vma) { return false; }
>  #endif
>
> -int vma_is_stack_for_current(struct vm_area_struct *vma);
> +int vma_is_stack_for_current(struct mm_area *vma);
>
>  /* flush_tlb_range() takes a vma, not a mm, and can care about flags */
>  #define TLB_FLUSH_VMA(mm,flags) { .vm_mm = (mm), .vm_flags = (flags) }
> @@ -1435,7 +1435,7 @@ static inline unsigned long thp_size(struct page *page)
>   * pte_mkwrite.  But get_user_pages can cause write faults for mappings
>   * that do not have writing enabled, when used by access_process_vm.
>   */
> -static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +static inline pte_t maybe_mkwrite(pte_t pte, struct mm_area *vma)
>  {
>  	if (likely(vma->vm_flags & VM_WRITE))
>  		pte = pte_mkwrite(pte, vma);
> @@ -1811,7 +1811,7 @@ static inline int folio_xchg_access_time(struct folio *folio, int time)
>  	return last_time << PAGE_ACCESS_TIME_BUCKETS;
>  }
>
> -static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
> +static inline void vma_set_access_pid_bit(struct mm_area *vma)
>  {
>  	unsigned int pid_bit;
>
> @@ -1872,7 +1872,7 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
>  	return false;
>  }
>
> -static inline void vma_set_access_pid_bit(struct vm_area_struct *vma)
> +static inline void vma_set_access_pid_bit(struct mm_area *vma)
>  {
>  }
>  static inline bool folio_use_access_time(struct folio *folio)
> @@ -2042,7 +2042,7 @@ static inline bool folio_maybe_dma_pinned(struct folio *folio)
>   *
>   * The caller has to hold the PT lock and the vma->vm_mm->write_protect_seq.
>   */
> -static inline bool folio_needs_cow_for_dma(struct vm_area_struct *vma,
> +static inline bool folio_needs_cow_for_dma(struct mm_area *vma,
>  					  struct folio *folio)
>  {
>  	VM_BUG_ON(!(raw_read_seqcount(&vma->vm_mm->write_protect_seq) & 1));
> @@ -2445,26 +2445,26 @@ static inline bool can_do_mlock(void) { return false; }
>  extern int user_shm_lock(size_t, struct ucounts *);
>  extern void user_shm_unlock(size_t, struct ucounts *);
>
> -struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
> +struct folio *vm_normal_folio(struct mm_area *vma, unsigned long addr,
>  			     pte_t pte);
> -struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> +struct page *vm_normal_page(struct mm_area *vma, unsigned long addr,
>  			     pte_t pte);
> -struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
> +struct folio *vm_normal_folio_pmd(struct mm_area *vma,
>  				  unsigned long addr, pmd_t pmd);
> -struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> +struct page *vm_normal_page_pmd(struct mm_area *vma, unsigned long addr,
>  				pmd_t pmd);
>
> -void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> +void zap_vma_ptes(struct mm_area *vma, unsigned long address,
>  		  unsigned long size);
> -void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> +void zap_page_range_single(struct mm_area *vma, unsigned long address,
>  			   unsigned long size, struct zap_details *details);
> -static inline void zap_vma_pages(struct vm_area_struct *vma)
> +static inline void zap_vma_pages(struct mm_area *vma)
>  {
>  	zap_page_range_single(vma, vma->vm_start,
>  			      vma->vm_end - vma->vm_start, NULL);
>  }
>  void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
> -		struct vm_area_struct *start_vma, unsigned long start,
> +		struct mm_area *start_vma, unsigned long start,
>  		unsigned long end, unsigned long tree_end, bool mm_wr_locked);
>
>  struct mmu_notifier_range;
> @@ -2472,17 +2472,17 @@ struct mmu_notifier_range;
>  void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
>  		unsigned long end, unsigned long floor, unsigned long ceiling);
>  int
> -copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
> -int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
> +copy_page_range(struct mm_area *dst_vma, struct mm_area *src_vma);
> +int generic_access_phys(struct mm_area *vma, unsigned long addr,
>  			void *buf, int len, int write);
>
>  struct follow_pfnmap_args {
>  	/**
>  	 * Inputs:
> -	 * @vma: Pointer to @vm_area_struct struct
> +	 * @vma: Pointer to @mm_area struct
>  	 * @address: the virtual address to walk
>  	 */
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long address;
>  	/**
>  	 * Internals:
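
The hunk cuts the kernel-doc off before the outputs, so from memory and
hedged accordingly: the accessors are follow_pfnmap_start()/_end(), and
args.pfn is among the output fields -- worth double-checking in mm.h:

	struct follow_pfnmap_args args = { .vma = vma, .address = addr };

	if (follow_pfnmap_start(&args))
		return -EINVAL;
	pfn = args.pfn;		/* only stable until follow_pfnmap_end() */
	follow_pfnmap_end(&args);
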
> @@ -2516,11 +2516,11 @@ void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end);
>  int generic_error_remove_folio(struct address_space *mapping,
>  		struct folio *folio);
>
> -struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> +struct mm_area *lock_mm_and_find_vma(struct mm_struct *mm,
>  		unsigned long address, struct pt_regs *regs);
>
>  #ifdef CONFIG_MMU
> -extern vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
> +extern vm_fault_t handle_mm_fault(struct mm_area *vma,
>  				  unsigned long address, unsigned int flags,
>  				  struct pt_regs *regs);
>  extern int fixup_user_fault(struct mm_struct *mm,
> @@ -2531,7 +2531,7 @@ void unmap_mapping_pages(struct address_space *mapping,
>  void unmap_mapping_range(struct address_space *mapping,
>  		loff_t const holebegin, loff_t const holelen, int even_cows);
>  #else
> -static inline vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
> +static inline vm_fault_t handle_mm_fault(struct mm_area *vma,
>  					 unsigned long address, unsigned int flags,
>  					 struct pt_regs *regs)
>  {
> @@ -2558,7 +2558,7 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping,
>  	unmap_mapping_range(mapping, holebegin, holelen, 0);
>  }
>
> -static inline struct vm_area_struct *vma_lookup(struct mm_struct *mm,
> +static inline struct mm_area *vma_lookup(struct mm_struct *mm,
>  						unsigned long addr);
>
>  extern int access_process_vm(struct task_struct *tsk, unsigned long addr,
> @@ -2586,10 +2586,10 @@ long pin_user_pages_remote(struct mm_struct *mm,
>  static inline struct page *get_user_page_vma_remote(struct mm_struct *mm,
>  						    unsigned long addr,
>  						    int gup_flags,
> -						    struct vm_area_struct **vmap)
> +						    struct mm_area **vmap)
>  {
>  	struct page *page;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int got;
>
>  	if (WARN_ON_ONCE(unlikely(gup_flags & FOLL_NOWAIT)))
> @@ -2663,13 +2663,13 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen);
>  #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
>  					    MM_CP_UFFD_WP_RESOLVE)
>
> -bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> +bool can_change_pte_writable(struct mm_area *vma, unsigned long addr,
>  			     pte_t pte);
>  extern long change_protection(struct mmu_gather *tlb,
> -			      struct vm_area_struct *vma, unsigned long start,
> +			      struct mm_area *vma, unsigned long start,
>  			      unsigned long end, unsigned long cp_flags);
>  extern int mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> -	  struct vm_area_struct *vma, struct vm_area_struct **pprev,
> +	  struct mm_area *vma, struct mm_area **pprev,
>  	  unsigned long start, unsigned long end, unsigned long newflags);
>
>  /*
> @@ -3360,16 +3360,16 @@ extern atomic_long_t mmap_pages_allocated;
>  extern int nommu_shrink_inode_mappings(struct inode *, size_t, size_t);
>
>  /* interval_tree.c */
> -void vma_interval_tree_insert(struct vm_area_struct *node,
> +void vma_interval_tree_insert(struct mm_area *node,
>  			      struct rb_root_cached *root);
> -void vma_interval_tree_insert_after(struct vm_area_struct *node,
> -				    struct vm_area_struct *prev,
> +void vma_interval_tree_insert_after(struct mm_area *node,
> +				    struct mm_area *prev,
>  				    struct rb_root_cached *root);
> -void vma_interval_tree_remove(struct vm_area_struct *node,
> +void vma_interval_tree_remove(struct mm_area *node,
>  			      struct rb_root_cached *root);
> -struct vm_area_struct *vma_interval_tree_iter_first(struct rb_root_cached *root,
> +struct mm_area *vma_interval_tree_iter_first(struct rb_root_cached *root,
>  				unsigned long start, unsigned long last);
> -struct vm_area_struct *vma_interval_tree_iter_next(struct vm_area_struct *node,
> +struct mm_area *vma_interval_tree_iter_next(struct mm_area *node,
>  				unsigned long start, unsigned long last);
>
>  #define vma_interval_tree_foreach(vma, root, start, last)		\
> @@ -3395,10 +3395,10 @@ void anon_vma_interval_tree_verify(struct anon_vma_chain *node);
>
>  /* mmap.c */
>  extern int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin);
> -extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
> +extern int insert_vm_struct(struct mm_struct *, struct mm_area *);
>  extern void exit_mmap(struct mm_struct *);
> -int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
> -bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
> +int relocate_vma_down(struct mm_area *vma, unsigned long shift);
> +bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct mm_area *vma,
>  				 unsigned long addr, bool write);
>
>  static inline int check_data_rlimit(unsigned long rlim,
> @@ -3426,9 +3426,9 @@ extern struct file *get_task_exe_file(struct task_struct *task);
>  extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long npages);
>  extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages);
>
> -extern bool vma_is_special_mapping(const struct vm_area_struct *vma,
> +extern bool vma_is_special_mapping(const struct mm_area *vma,
>  				   const struct vm_special_mapping *sm);
> -extern struct vm_area_struct *_install_special_mapping(struct mm_struct *mm,
> +extern struct mm_area *_install_special_mapping(struct mm_struct *mm,
>  				   unsigned long addr, unsigned long len,
>  				   unsigned long flags,
>  				   const struct vm_special_mapping *spec);
> @@ -3454,7 +3454,7 @@ extern unsigned long do_mmap(struct file *file, unsigned long addr,
>  extern int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
>  			 unsigned long start, size_t len, struct list_head *uf,
>  			 bool unlock);
> -int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +int do_vmi_align_munmap(struct vma_iterator *vmi, struct mm_area *vma,
>  		    struct mm_struct *mm, unsigned long start,
>  		    unsigned long end, struct list_head *uf, bool unlock);
>  extern int do_munmap(struct mm_struct *, unsigned long, size_t,
> @@ -3507,19 +3507,19 @@ extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);
>
>  extern unsigned long stack_guard_gap;
>  /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
> -int expand_stack_locked(struct vm_area_struct *vma, unsigned long address);
> -struct vm_area_struct *expand_stack(struct mm_struct * mm, unsigned long addr);
> +int expand_stack_locked(struct mm_area *vma, unsigned long address);
> +struct mm_area *expand_stack(struct mm_struct * mm, unsigned long addr);
>
>  /* Look up the first VMA which satisfies  addr < vm_end,  NULL if none. */
> -extern struct vm_area_struct * find_vma(struct mm_struct * mm, unsigned long addr);
> -extern struct vm_area_struct * find_vma_prev(struct mm_struct * mm, unsigned long addr,
> -					     struct vm_area_struct **pprev);
> +extern struct mm_area * find_vma(struct mm_struct * mm, unsigned long addr);
> +extern struct mm_area * find_vma_prev(struct mm_struct * mm, unsigned long addr,
> +					     struct mm_area **pprev);
>
>  /*
>   * Look up the first VMA which intersects the interval [start_addr, end_addr)
>   * NULL if none.  Assume start_addr < end_addr.
>   */
> -struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
> +struct mm_area *find_vma_intersection(struct mm_struct *mm,
>  			unsigned long start_addr, unsigned long end_addr);
>
>  /**
> @@ -3527,15 +3527,15 @@ struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
>   * @mm: The process address space.
>   * @addr: The user address.
>   *
> - * Return: The vm_area_struct at the given address, %NULL otherwise.
> + * Return: The mm_area at the given address, %NULL otherwise.
>   */
>  static inline
> -struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *vma_lookup(struct mm_struct *mm, unsigned long addr)
>  {
>  	return mtree_load(&mm->mm_mt, addr);
>  }
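
The contrast with find_vma() above, as a sketch (hypothetical caller):

	struct mm_area *vma;

	mmap_read_lock(mm);
	vma = vma_lookup(mm, addr);	/* NULL unless addr lies inside a VMA */
	if (vma) {
		/* vma->vm_start <= addr < vma->vm_end is guaranteed here */
	}
	mmap_read_unlock(mm);
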
>
> -static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
> +static inline unsigned long stack_guard_start_gap(struct mm_area *vma)
>  {
>  	if (vma->vm_flags & VM_GROWSDOWN)
>  		return stack_guard_gap;
> @@ -3547,7 +3547,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
> +static inline unsigned long vm_start_gap(struct mm_area *vma)
>  {
>  	unsigned long gap = stack_guard_start_gap(vma);
>  	unsigned long vm_start = vma->vm_start;
> @@ -3558,7 +3558,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
>  	return vm_start;
>  }
>
> -static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
> +static inline unsigned long vm_end_gap(struct mm_area *vma)
>  {
>  	unsigned long vm_end = vma->vm_end;
>
> @@ -3570,16 +3570,16 @@ static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
>  	return vm_end;
>  }
>
> -static inline unsigned long vma_pages(struct vm_area_struct *vma)
> +static inline unsigned long vma_pages(struct mm_area *vma)
>  {
>  	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
>  }
>
>  /* Look up the first VMA which exactly matches the interval vm_start ... vm_end */
> -static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
> +static inline struct mm_area *find_exact_vma(struct mm_struct *mm,
>  				unsigned long vm_start, unsigned long vm_end)
>  {
> -	struct vm_area_struct *vma = vma_lookup(mm, vm_start);
> +	struct mm_area *vma = vma_lookup(mm, vm_start);
>
>  	if (vma && (vma->vm_start != vm_start || vma->vm_end != vm_end))
>  		vma = NULL;
> @@ -3587,7 +3587,7 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
>  	return vma;
>  }
>
> -static inline bool range_in_vma(struct vm_area_struct *vma,
> +static inline bool range_in_vma(struct mm_area *vma,
>  				unsigned long start, unsigned long end)
>  {
>  	return (vma && vma->vm_start <= start && end <= vma->vm_end);
> @@ -3595,51 +3595,51 @@ static inline bool range_in_vma(struct vm_area_struct *vma,
>
>  #ifdef CONFIG_MMU
>  pgprot_t vm_get_page_prot(unsigned long vm_flags);
> -void vma_set_page_prot(struct vm_area_struct *vma);
> +void vma_set_page_prot(struct mm_area *vma);
>  #else
>  static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
>  {
>  	return __pgprot(0);
>  }
> -static inline void vma_set_page_prot(struct vm_area_struct *vma)
> +static inline void vma_set_page_prot(struct mm_area *vma)
>  {
>  	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
>  }
>  #endif
>
> -void vma_set_file(struct vm_area_struct *vma, struct file *file);
> +void vma_set_file(struct mm_area *vma, struct file *file);
>
>  #ifdef CONFIG_NUMA_BALANCING
> -unsigned long change_prot_numa(struct vm_area_struct *vma,
> +unsigned long change_prot_numa(struct mm_area *vma,
>  			unsigned long start, unsigned long end);
>  #endif
>
> -struct vm_area_struct *find_extend_vma_locked(struct mm_struct *,
> +struct mm_area *find_extend_vma_locked(struct mm_struct *,
>  		unsigned long addr);
> -int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
> +int remap_pfn_range(struct mm_area *, unsigned long addr,
>  			unsigned long pfn, unsigned long size, pgprot_t);
> -int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
> +int remap_pfn_range_notrack(struct mm_area *vma, unsigned long addr,
>  		unsigned long pfn, unsigned long size, pgprot_t prot);
> -int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
> -int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
> +int vm_insert_page(struct mm_area *, unsigned long addr, struct page *);
> +int vm_insert_pages(struct mm_area *vma, unsigned long addr,
>  			struct page **pages, unsigned long *num);
> -int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> +int vm_map_pages(struct mm_area *vma, struct page **pages,
>  				unsigned long num);
> -int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
> +int vm_map_pages_zero(struct mm_area *vma, struct page **pages,
>  				unsigned long num);
>  vm_fault_t vmf_insert_page_mkwrite(struct vm_fault *vmf, struct page *page,
>  			bool write);
> -vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> +vm_fault_t vmf_insert_pfn(struct mm_area *vma, unsigned long addr,
>  			unsigned long pfn);
> -vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
> +vm_fault_t vmf_insert_pfn_prot(struct mm_area *vma, unsigned long addr,
>  			unsigned long pfn, pgprot_t pgprot);
> -vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
> +vm_fault_t vmf_insert_mixed(struct mm_area *vma, unsigned long addr,
>  			pfn_t pfn);
> -vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
> +vm_fault_t vmf_insert_mixed_mkwrite(struct mm_area *vma,
>  		unsigned long addr, pfn_t pfn);
> -int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
> +int vm_iomap_memory(struct mm_area *vma, phys_addr_t start, unsigned long len);
>
> -static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
> +static inline vm_fault_t vmf_insert_page(struct mm_area *vma,
>  				unsigned long addr, struct page *page)
>  {
>  	int err = vm_insert_page(vma, addr, page);
> @@ -3653,7 +3653,7 @@ static inline vm_fault_t vmf_insert_page(struct vm_area_struct *vma,
>  }
>
>  #ifndef io_remap_pfn_range
> -static inline int io_remap_pfn_range(struct vm_area_struct *vma,
> +static inline int io_remap_pfn_range(struct mm_area *vma,
>  				     unsigned long addr, unsigned long pfn,
>  				     unsigned long size, pgprot_t prot)
>  {
> @@ -3703,7 +3703,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
>   * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
>   * a (NUMA hinting) fault is required.
>   */
> -static inline bool gup_can_follow_protnone(struct vm_area_struct *vma,
> +static inline bool gup_can_follow_protnone(struct mm_area *vma,
>  					   unsigned int flags)
>  {
>  	/*
> @@ -3872,11 +3872,11 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>  #endif	/* CONFIG_DEBUG_PAGEALLOC */
>
>  #ifdef __HAVE_ARCH_GATE_AREA
> -extern struct vm_area_struct *get_gate_vma(struct mm_struct *mm);
> +extern struct mm_area *get_gate_vma(struct mm_struct *mm);
>  extern int in_gate_area_no_mm(unsigned long addr);
>  extern int in_gate_area(struct mm_struct *mm, unsigned long addr);
>  #else
> -static inline struct vm_area_struct *get_gate_vma(struct mm_struct *mm)
> +static inline struct mm_area *get_gate_vma(struct mm_struct *mm)
>  {
>  	return NULL;
>  }
> @@ -3897,7 +3897,7 @@ void drop_slab(void);
>  extern int randomize_va_space;
>  #endif
>
> -const char * arch_vma_name(struct vm_area_struct *vma);
> +const char * arch_vma_name(struct mm_area *vma);
>  #ifdef CONFIG_MMU
>  void print_vma_addr(char *prefix, unsigned long rip);
>  #else
> @@ -4117,14 +4117,14 @@ enum mf_action_page_type {
>  void folio_zero_user(struct folio *folio, unsigned long addr_hint);
>  int copy_user_large_folio(struct folio *dst, struct folio *src,
>  			  unsigned long addr_hint,
> -			  struct vm_area_struct *vma);
> +			  struct mm_area *vma);
>  long copy_folio_from_user(struct folio *dst_folio,
>  			   const void __user *usr_src,
>  			   bool allow_pagefault);
>
>  /**
>   * vma_is_special_huge - Are transhuge page-table entries considered special?
> - * @vma: Pointer to the struct vm_area_struct to consider
> + * @vma: Pointer to the struct mm_area to consider
>   *
>   * Whether transhuge page-table entries are considered "special" following
>   * the definition in vm_normal_page().
> @@ -4132,7 +4132,7 @@ long copy_folio_from_user(struct folio *dst_folio,
>   * Return: true if transhuge page-table entries should be considered special,
>   * false otherwise.
>   */
> -static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
> +static inline bool vma_is_special_huge(const struct mm_area *vma)
>  {
>  	return vma_is_dax(vma) || (vma->vm_file &&
>  				   (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
> @@ -4201,8 +4201,8 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
>  	return range_contains_unaccepted_memory(pfn << PAGE_SHIFT, PAGE_SIZE);
>  }
>
> -void vma_pgtable_walk_begin(struct vm_area_struct *vma);
> -void vma_pgtable_walk_end(struct vm_area_struct *vma);
> +void vma_pgtable_walk_begin(struct mm_area *vma);
> +void vma_pgtable_walk_end(struct mm_area *vma);
>
>  int reserve_mem_find_by_name(const char *name, phys_addr_t *start, phys_addr_t *size);
>  int reserve_mem_release_by_name(const char *name);
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index f9157a0c42a5..7b5bcca96464 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -404,8 +404,8 @@ struct anon_vma_name *anon_vma_name_reuse(struct anon_vma_name *anon_name)
>  	return anon_vma_name_alloc(anon_name->name);
>  }
>
> -static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
> -				     struct vm_area_struct *new_vma)
> +static inline void dup_anon_vma_name(struct mm_area *orig_vma,
> +				     struct mm_area *new_vma)
>  {
>  	struct anon_vma_name *anon_name = anon_vma_name(orig_vma);
>
> @@ -413,7 +413,7 @@ static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
>  		new_vma->anon_name = anon_vma_name_reuse(anon_name);
>  }
>
> -static inline void free_anon_vma_name(struct vm_area_struct *vma)
> +static inline void free_anon_vma_name(struct mm_area *vma)
>  {
>  	/*
>  	 * Not using anon_vma_name because it generates a warning if mmap_lock
> @@ -435,9 +435,9 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
>  #else /* CONFIG_ANON_VMA_NAME */
>  static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
>  static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
> -static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
> -				     struct vm_area_struct *new_vma) {}
> -static inline void free_anon_vma_name(struct vm_area_struct *vma) {}
> +static inline void dup_anon_vma_name(struct mm_area *orig_vma,
> +				     struct mm_area *new_vma) {}
> +static inline void free_anon_vma_name(struct mm_area *vma) {}
>
>  static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
>  				    struct anon_vma_name *anon_name2)
> @@ -538,7 +538,7 @@ static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
>   * The caller should insert a new pte created with make_pte_marker().
>   */
>  static inline pte_marker copy_pte_marker(
> -		swp_entry_t entry, struct vm_area_struct *dst_vma)
> +		swp_entry_t entry, struct mm_area *dst_vma)
>  {
>  	pte_marker srcm = pte_marker_get(entry);
>  	/* Always copy error entries. */
> @@ -565,7 +565,7 @@ static inline pte_marker copy_pte_marker(
>   * Returns true if an uffd-wp pte was installed, false otherwise.
>   */
>  static inline bool
> -pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
> +pte_install_uffd_wp_if_needed(struct mm_area *vma, unsigned long addr,
>  			      pte_t *pte, pte_t pteval)
>  {
>  #ifdef CONFIG_PTE_MARKER_UFFD_WP
> @@ -603,7 +603,7 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
>  	return false;
>  }
>
> -static inline bool vma_has_recency(struct vm_area_struct *vma)
> +static inline bool vma_has_recency(struct mm_area *vma)
>  {
>  	if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
>  		return false;
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 56d07edd01f9..185fdf91bda1 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -710,11 +710,11 @@ struct anon_vma_name {
>   * either keep holding the lock while using the returned pointer or it should
>   * raise anon_vma_name refcount before releasing the lock.
>   */
> -struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma);
> +struct anon_vma_name *anon_vma_name(struct mm_area *vma);
>  struct anon_vma_name *anon_vma_name_alloc(const char *name);
>  void anon_vma_name_free(struct kref *kref);
>  #else /* CONFIG_ANON_VMA_NAME */
> -static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> +static inline struct anon_vma_name *anon_vma_name(struct mm_area *vma)
>  {
>  	return NULL;
>  }
> @@ -774,9 +774,9 @@ struct vma_numab_state {
>   * getting a stable reference.
>   *
>   * WARNING: when adding new members, please update vm_area_init_from() to copy
> - * them during vm_area_struct content duplication.
> + * them during mm_area content duplication.
>   */
> -struct vm_area_struct {
> +struct mm_area {
>  	/* The first cache line has the info for VMA tree walking. */
>
>  	union {
> @@ -1488,14 +1488,14 @@ struct vm_special_mapping {
>  	 * on the special mapping.  If used, .pages is not checked.
>  	 */
>  	vm_fault_t (*fault)(const struct vm_special_mapping *sm,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				struct vm_fault *vmf);
>
>  	int (*mremap)(const struct vm_special_mapping *sm,
> -		     struct vm_area_struct *new_vma);
> +		     struct mm_area *new_vma);
>
>  	void (*close)(const struct vm_special_mapping *sm,
> -		      struct vm_area_struct *vma);
> +		      struct mm_area *vma);
>  };
>
>  enum tlb_flush_reason {
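
This mm_types.h hunk is where the rename actually lands; almost
everything else in the patch is mechanical fallout from it.  For
out-of-tree code that still spells the old tag, a transitional shim
would be conceivable.  A minimal sketch -- the define and demo_mmap()
below are hypothetical, not part of this patch:

	/* Compat shim (hypothetical): makes "struct vm_area_struct"
	 * expand to "struct mm_area", so the old spelling keeps
	 * building while callers convert.
	 */
	#define vm_area_struct mm_area

	static int demo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		unsigned long pfn = vma->vm_pgoff;

		return remap_pfn_range(vma, vma->vm_start, pfn,
				       vma->vm_end - vma->vm_start,
				       vma->vm_page_prot);
	}

Whether we'd want both spellings live in the tree even briefly is a
separate question.
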
> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index a0a3894900ed..b713e4921bb8 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -6,13 +6,13 @@
>  #include <linux/stringify.h>
>
>  struct page;
> -struct vm_area_struct;
> +struct mm_area;
>  struct mm_struct;
>  struct vma_iterator;
>  struct vma_merge_struct;
>
>  void dump_page(const struct page *page, const char *reason);
> -void dump_vma(const struct vm_area_struct *vma);
> +void dump_vma(const struct mm_area *vma);
>  void dump_mm(const struct mm_struct *mm);
>  void dump_vmg(const struct vma_merge_struct *vmg, const char *reason);
>  void vma_iter_dump_tree(const struct vma_iterator *vmi);
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index bc2402a45741..1c83061bf690 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -518,7 +518,7 @@ static inline void mmu_notifier_range_init_owner(
>  #define ptep_clear_flush_young_notify(__vma, __address, __ptep)		\
>  ({									\
>  	int __young;							\
> -	struct vm_area_struct *___vma = __vma;				\
> +	struct mm_area *___vma = __vma;					\
>  	unsigned long ___address = __address;				\
>  	__young = ptep_clear_flush_young(___vma, ___address, __ptep);	\
>  	__young |= mmu_notifier_clear_flush_young(___vma->vm_mm,	\
> @@ -531,7 +531,7 @@ static inline void mmu_notifier_range_init_owner(
>  #define pmdp_clear_flush_young_notify(__vma, __address, __pmdp)		\
>  ({									\
>  	int __young;							\
> -	struct vm_area_struct *___vma = __vma;				\
> +	struct mm_area *___vma = __vma;					\
>  	unsigned long ___address = __address;				\
>  	__young = pmdp_clear_flush_young(___vma, ___address, __pmdp);	\
>  	__young |= mmu_notifier_clear_flush_young(___vma->vm_mm,	\
> @@ -544,7 +544,7 @@ static inline void mmu_notifier_range_init_owner(
>  #define ptep_clear_young_notify(__vma, __address, __ptep)		\
>  ({									\
>  	int __young;							\
> -	struct vm_area_struct *___vma = __vma;				\
> +	struct mm_area *___vma = __vma;					\
>  	unsigned long ___address = __address;				\
>  	__young = ptep_test_and_clear_young(___vma, ___address, __ptep);\
>  	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
> @@ -555,7 +555,7 @@ static inline void mmu_notifier_range_init_owner(
>  #define pmdp_clear_young_notify(__vma, __address, __pmdp)		\
>  ({									\
>  	int __young;							\
> -	struct vm_area_struct *___vma = __vma;				\
> +	struct mm_area *___vma = __vma;					\
>  	unsigned long ___address = __address;				\
>  	__young = pmdp_test_and_clear_young(___vma, ___address, __pmdp);\
>  	__young |= mmu_notifier_clear_young(___vma->vm_mm, ___address,	\
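
Worth spelling out why these notifier wrappers survive the rename so
mechanically: each macro parks its argument in a typed local, which
type-checks callers against the renamed struct and evaluates the vma
argument exactly once.  Reduced illustration (demo_* is made up, not
the kernel's actual macro):

	#define demo_clear_young_notify(vma, address, ptep)		\
	({								\
		struct mm_area *__v = (vma);	/* evaluated once */	\
		int __y = ptep_test_and_clear_young(__v, address, ptep);\
		__y | mmu_notifier_clear_young(__v->vm_mm, address,	\
					       address + PAGE_SIZE);	\
	})

Pass an expression with side effects as vma and the typed local is
what keeps it from running twice.
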
> diff --git a/include/linux/net.h b/include/linux/net.h
> index 0ff950eecc6b..501f966667be 100644
> --- a/include/linux/net.h
> +++ b/include/linux/net.h
> @@ -147,7 +147,7 @@ typedef struct {
>  	int error;
>  } read_descriptor_t;
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct page;
>  struct sockaddr;
>  struct msghdr;
> @@ -208,7 +208,7 @@ struct proto_ops {
>  	int		(*recvmsg)   (struct socket *sock, struct msghdr *m,
>  				      size_t total_len, int flags);
>  	int		(*mmap)	     (struct file *file, struct socket *sock,
> -				      struct vm_area_struct * vma);
> +				      struct mm_area * vma);
>  	ssize_t 	(*splice_read)(struct socket *sock,  loff_t *ppos,
>  				       struct pipe_inode_info *pipe, size_t len, unsigned int flags);
>  	void		(*splice_eof)(struct socket *sock);
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 26baa78f1ca7..1848be69048a 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -1043,7 +1043,7 @@ static inline pgoff_t folio_pgoff(struct folio *folio)
>  	return folio->index;
>  }
>
> -static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
> +static inline pgoff_t linear_page_index(struct mm_area *vma,
>  					unsigned long address)
>  {
>  	pgoff_t pgoff;
> diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
> index 9700a29f8afb..026bb21ede0e 100644
> --- a/include/linux/pagewalk.h
> +++ b/include/linux/pagewalk.h
> @@ -120,7 +120,7 @@ struct mm_walk {
>  	const struct mm_walk_ops *ops;
>  	struct mm_struct *mm;
>  	pgd_t *pgd;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	enum page_walk_action action;
>  	bool no_vma;
>  	void *private;
> @@ -133,10 +133,10 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
>  			  unsigned long end, const struct mm_walk_ops *ops,
>  			  pgd_t *pgd,
>  			  void *private);
> -int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
> +int walk_page_range_vma(struct mm_area *vma, unsigned long start,
>  			unsigned long end, const struct mm_walk_ops *ops,
>  			void *private);
> -int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
> +int walk_page_vma(struct mm_area *vma, const struct mm_walk_ops *ops,
>  		void *private);
>  int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
>  		      pgoff_t nr, const struct mm_walk_ops *ops,
> @@ -185,12 +185,12 @@ struct folio_walk {
>  		pmd_t pmd;
>  	};
>  	/* private */
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	spinlock_t *ptl;
>  };
>
>  struct folio *folio_walk_start(struct folio_walk *fw,
> -		struct vm_area_struct *vma, unsigned long addr,
> +		struct mm_area *vma, unsigned long addr,
>  		folio_walk_flags_t flags);
>
>  #define folio_walk_end(__fw, __vma) do { \
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index 0e8e3fd77e96..343fcd42b066 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -2103,7 +2103,7 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,
>   *
>   */
>  int pci_mmap_resource_range(struct pci_dev *dev, int bar,
> -			    struct vm_area_struct *vma,
> +			    struct mm_area *vma,
>  			    enum pci_mmap_state mmap_state, int write_combine);
>
>  #ifndef arch_can_pci_mmap_wc
> @@ -2114,7 +2114,7 @@ int pci_mmap_resource_range(struct pci_dev *dev, int bar,
>  #define arch_can_pci_mmap_io()		0
>  #define pci_iobar_pfn(pdev, bar, vma) (-EINVAL)
>  #else
> -int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma);
> +int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct mm_area *vma);
>  #endif
>
>  #ifndef pci_root_bus_fwnode
> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index 5a9bf15d4461..cb7f59821923 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1596,7 +1596,7 @@ static inline void perf_event_task_sched_out(struct task_struct *prev,
>  		__perf_event_task_sched_out(prev, next);
>  }
>
> -extern void perf_event_mmap(struct vm_area_struct *vma);
> +extern void perf_event_mmap(struct mm_area *vma);
>
>  extern void perf_event_ksymbol(u16 ksym_type, u64 addr, u32 len,
>  			       bool unregister, const char *sym);
> @@ -1889,7 +1889,7 @@ perf_sw_event(u32 event_id, u64 nr, struct pt_regs *regs, u64 addr)	{ }
>  static inline void
>  perf_bp_event(struct perf_event *event, void *data)			{ }
>
> -static inline void perf_event_mmap(struct vm_area_struct *vma)		{ }
> +static inline void perf_event_mmap(struct mm_area *vma)		{ }
>
>  typedef int (perf_ksymbol_get_name_f)(char *name, int name_len, void *data);
>  static inline void perf_event_ksymbol(u16 ksym_type, u64 addr, u32 len,
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index e2b705c14945..eb50af52018b 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -303,28 +303,28 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
>  #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
>
>  #ifndef __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
> -extern int ptep_set_access_flags(struct vm_area_struct *vma,
> +extern int ptep_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pte_t *ptep,
>  				 pte_t entry, int dirty);
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern int pmdp_set_access_flags(struct vm_area_struct *vma,
> +extern int pmdp_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pmd_t *pmdp,
>  				 pmd_t entry, int dirty);
> -extern int pudp_set_access_flags(struct vm_area_struct *vma,
> +extern int pudp_set_access_flags(struct mm_area *vma,
>  				 unsigned long address, pud_t *pudp,
>  				 pud_t entry, int dirty);
>  #else
> -static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
> +static inline int pmdp_set_access_flags(struct mm_area *vma,
>  					unsigned long address, pmd_t *pmdp,
>  					pmd_t entry, int dirty)
>  {
>  	BUILD_BUG();
>  	return 0;
>  }
> -static inline int pudp_set_access_flags(struct vm_area_struct *vma,
> +static inline int pudp_set_access_flags(struct mm_area *vma,
>  					unsigned long address, pud_t *pudp,
>  					pud_t entry, int dirty)
>  {
> @@ -370,7 +370,7 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
>  #endif
>
>  #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
> -static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int ptep_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long address,
>  					    pte_t *ptep)
>  {
> @@ -386,7 +386,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>
>  #ifndef __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG)
> -static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int pmdp_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long address,
>  					    pmd_t *pmdp)
>  {
> @@ -399,7 +399,7 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  	return r;
>  }
>  #else
> -static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +static inline int pmdp_test_and_clear_young(struct mm_area *vma,
>  					    unsigned long address,
>  					    pmd_t *pmdp)
>  {
> @@ -410,20 +410,20 @@ static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> -int ptep_clear_flush_young(struct vm_area_struct *vma,
> +int ptep_clear_flush_young(struct mm_area *vma,
>  			   unsigned long address, pte_t *ptep);
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +extern int pmdp_clear_flush_young(struct mm_area *vma,
>  				  unsigned long address, pmd_t *pmdp);
>  #else
>  /*
>   * Despite relevant to THP only, this API is called from generic rmap code
>   * under PageTransHuge(), hence needs a dummy implementation for !THP
>   */
> -static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +static inline int pmdp_clear_flush_young(struct mm_area *vma,
>  					 unsigned long address, pmd_t *pmdp)
>  {
>  	BUILD_BUG();
> @@ -457,21 +457,21 @@ static inline bool arch_has_hw_pte_young(void)
>  #endif
>
>  #ifndef arch_check_zapped_pte
> -static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
> +static inline void arch_check_zapped_pte(struct mm_area *vma,
>  					 pte_t pte)
>  {
>  }
>  #endif
>
>  #ifndef arch_check_zapped_pmd
> -static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
> +static inline void arch_check_zapped_pmd(struct mm_area *vma,
>  					 pmd_t pmd)
>  {
>  }
>  #endif
>
>  #ifndef arch_check_zapped_pud
> -static inline void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud)
> +static inline void arch_check_zapped_pud(struct mm_area *vma, pud_t pud)
>  {
>  }
>  #endif
> @@ -507,7 +507,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>   * Context: The caller holds the page table lock.  The PTEs map consecutive
>   * pages that belong to the same folio.  The PTEs are all in the same PMD.
>   */
> -static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> +static inline void clear_young_dirty_ptes(struct mm_area *vma,
>  					  unsigned long addr, pte_t *ptep,
>  					  unsigned int nr, cydp_t flags)
>  {
> @@ -659,7 +659,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #ifndef __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
> -static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_huge_get_and_clear_full(struct mm_area *vma,
>  					    unsigned long address, pmd_t *pmdp,
>  					    int full)
>  {
> @@ -668,7 +668,7 @@ static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PUDP_HUGE_GET_AND_CLEAR_FULL
> -static inline pud_t pudp_huge_get_and_clear_full(struct vm_area_struct *vma,
> +static inline pud_t pudp_huge_get_and_clear_full(struct mm_area *vma,
>  					    unsigned long address, pud_t *pudp,
>  					    int full)
>  {
> @@ -766,13 +766,13 @@ static inline void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
>   * It is the difference with function update_mmu_cache.
>   */
>  #ifndef update_mmu_tlb_range
> -static inline void update_mmu_tlb_range(struct vm_area_struct *vma,
> +static inline void update_mmu_tlb_range(struct mm_area *vma,
>  				unsigned long address, pte_t *ptep, unsigned int nr)
>  {
>  }
>  #endif
>
> -static inline void update_mmu_tlb(struct vm_area_struct *vma,
> +static inline void update_mmu_tlb(struct mm_area *vma,
>  				unsigned long address, pte_t *ptep)
>  {
>  	update_mmu_tlb_range(vma, address, ptep, 1);
> @@ -823,29 +823,29 @@ static inline void clear_not_present_full_ptes(struct mm_struct *mm,
>  #endif
>
>  #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> -extern pte_t ptep_clear_flush(struct vm_area_struct *vma,
> +extern pte_t ptep_clear_flush(struct mm_area *vma,
>  			      unsigned long address,
>  			      pte_t *ptep);
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
> -extern pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
> +extern pmd_t pmdp_huge_clear_flush(struct mm_area *vma,
>  			      unsigned long address,
>  			      pmd_t *pmdp);
> -extern pud_t pudp_huge_clear_flush(struct vm_area_struct *vma,
> +extern pud_t pudp_huge_clear_flush(struct mm_area *vma,
>  			      unsigned long address,
>  			      pud_t *pudp);
>  #endif
>
>  #ifndef pte_mkwrite
> -static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
> +static inline pte_t pte_mkwrite(pte_t pte, struct mm_area *vma)
>  {
>  	return pte_mkwrite_novma(pte);
>  }
>  #endif
>
>  #if defined(CONFIG_ARCH_WANT_PMD_MKWRITE) && !defined(pmd_mkwrite)
> -static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
> +static inline pmd_t pmd_mkwrite(pmd_t pmd, struct mm_area *vma)
>  {
>  	return pmd_mkwrite_novma(pmd);
>  }
> @@ -945,10 +945,10 @@ static inline void pudp_set_wrprotect(struct mm_struct *mm,
>
>  #ifndef pmdp_collapse_flush
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +extern pmd_t pmdp_collapse_flush(struct mm_area *vma,
>  				 unsigned long address, pmd_t *pmdp);
>  #else
> -static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
> +static inline pmd_t pmdp_collapse_flush(struct mm_area *vma,
>  					unsigned long address,
>  					pmd_t *pmdp)
>  {
> @@ -978,7 +978,7 @@ extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
>   * architecture that doesn't have hardware dirty/accessed bits. In this case we
>   * can't race with CPU which sets these bits and non-atomic approach is fine.
>   */
> -static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
> +static inline pmd_t generic_pmdp_establish(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmdp, pmd_t pmd)
>  {
>  	pmd_t old_pmd = *pmdp;
> @@ -988,7 +988,7 @@ static inline pmd_t generic_pmdp_establish(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_INVALIDATE
> -extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +extern pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
>  			    pmd_t *pmdp);
>  #endif
>
> @@ -1008,7 +1008,7 @@ extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>   * to batch these TLB flushing operations, so fewer TLB flush operations are
>   * needed.
>   */
> -extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
> +extern pmd_t pmdp_invalidate_ad(struct mm_area *vma,
>  				unsigned long address, pmd_t *pmdp);
>  #endif
>
> @@ -1088,7 +1088,7 @@ static inline int pgd_same(pgd_t pgd_a, pgd_t pgd_b)
>
>  #ifndef __HAVE_ARCH_DO_SWAP_PAGE
>  static inline void arch_do_swap_page_nr(struct mm_struct *mm,
> -				     struct vm_area_struct *vma,
> +				     struct mm_area *vma,
>  				     unsigned long addr,
>  				     pte_t pte, pte_t oldpte,
>  				     int nr)
> @@ -1105,7 +1105,7 @@ static inline void arch_do_swap_page_nr(struct mm_struct *mm,
>   * metadata when a page is swapped back in.
>   */
>  static inline void arch_do_swap_page_nr(struct mm_struct *mm,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					unsigned long addr,
>  					pte_t pte, pte_t oldpte,
>  					int nr)
> @@ -1128,7 +1128,7 @@ static inline void arch_do_swap_page_nr(struct mm_struct *mm,
>   * metadata on a swap-out of a page.
>   */
>  static inline int arch_unmap_one(struct mm_struct *mm,
> -				  struct vm_area_struct *vma,
> +				  struct mm_area *vma,
>  				  unsigned long addr,
>  				  pte_t orig_pte)
>  {
> @@ -1277,7 +1277,7 @@ static inline int pmd_none_or_clear_bad(pmd_t *pmd)
>  	return 0;
>  }
>
> -static inline pte_t __ptep_modify_prot_start(struct vm_area_struct *vma,
> +static inline pte_t __ptep_modify_prot_start(struct mm_area *vma,
>  					     unsigned long addr,
>  					     pte_t *ptep)
>  {
> @@ -1289,7 +1289,7 @@ static inline pte_t __ptep_modify_prot_start(struct vm_area_struct *vma,
>  	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
>  }
>
> -static inline void __ptep_modify_prot_commit(struct vm_area_struct *vma,
> +static inline void __ptep_modify_prot_commit(struct mm_area *vma,
>  					     unsigned long addr,
>  					     pte_t *ptep, pte_t pte)
>  {
> @@ -1315,7 +1315,7 @@ static inline void __ptep_modify_prot_commit(struct vm_area_struct *vma,
>   * queue the update to be done at some later time.  The update must be
>   * actually committed before the pte lock is released, however.
>   */
> -static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
> +static inline pte_t ptep_modify_prot_start(struct mm_area *vma,
>  					   unsigned long addr,
>  					   pte_t *ptep)
>  {
> @@ -1326,7 +1326,7 @@ static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
>   * Commit an update to a pte, leaving any hardware-controlled bits in
>   * the PTE unmodified.
>   */
> -static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
> +static inline void ptep_modify_prot_commit(struct mm_area *vma,
>  					   unsigned long addr,
>  					   pte_t *ptep, pte_t old_pte, pte_t pte)
>  {
> @@ -1493,7 +1493,7 @@ static inline pmd_t pmd_swp_clear_soft_dirty(pmd_t pmd)
>   * track_pfn_remap is called when a _new_ pfn mapping is being established
>   * by remap_pfn_range() for physical range indicated by pfn and size.
>   */
> -static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> +static inline int track_pfn_remap(struct mm_area *vma, pgprot_t *prot,
>  				  unsigned long pfn, unsigned long addr,
>  				  unsigned long size)
>  {
> @@ -1504,7 +1504,7 @@ static inline int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
>   * track_pfn_insert is called when a _new_ single pfn is established
>   * by vmf_insert_pfn().
>   */
> -static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> +static inline void track_pfn_insert(struct mm_area *vma, pgprot_t *prot,
>  				    pfn_t pfn)
>  {
>  }
> @@ -1514,8 +1514,8 @@ static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
>   * tables copied during copy_page_range(). On success, stores the pfn to be
>   * passed to untrack_pfn_copy().
>   */
> -static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma, unsigned long *pfn)
> +static inline int track_pfn_copy(struct mm_area *dst_vma,
> +		struct mm_area *src_vma, unsigned long *pfn)
>  {
>  	return 0;
>  }
> @@ -1524,7 +1524,7 @@ static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
>   * untrack_pfn_copy is called when a VM_PFNMAP VMA failed to copy during
>   * copy_page_range(), but after track_pfn_copy() was already called.
>   */
> -static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
> +static inline void untrack_pfn_copy(struct mm_area *dst_vma,
>  		unsigned long pfn)
>  {
>  }
> @@ -1534,7 +1534,7 @@ static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
>   * untrack can be called for a specific region indicated by pfn and size or
>   * can be for the entire vma (in which case pfn, size are zero).
>   */
> -static inline void untrack_pfn(struct vm_area_struct *vma,
> +static inline void untrack_pfn(struct mm_area *vma,
>  			       unsigned long pfn, unsigned long size,
>  			       bool mm_wr_locked)
>  {
> @@ -1546,22 +1546,22 @@ static inline void untrack_pfn(struct vm_area_struct *vma,
>   * 1) During mremap() on the src VMA after the page tables were moved.
>   * 2) During fork() on the dst VMA, immediately after duplicating the src VMA.
>   */
> -static inline void untrack_pfn_clear(struct vm_area_struct *vma)
> +static inline void untrack_pfn_clear(struct mm_area *vma)
>  {
>  }
>  #else
> -extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> +extern int track_pfn_remap(struct mm_area *vma, pgprot_t *prot,
>  			   unsigned long pfn, unsigned long addr,
>  			   unsigned long size);
> -extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> +extern void track_pfn_insert(struct mm_area *vma, pgprot_t *prot,
>  			     pfn_t pfn);
> -extern int track_pfn_copy(struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma, unsigned long *pfn);
> -extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
> +extern int track_pfn_copy(struct mm_area *dst_vma,
> +		struct mm_area *src_vma, unsigned long *pfn);
> +extern void untrack_pfn_copy(struct mm_area *dst_vma,
>  		unsigned long pfn);
> -extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> +extern void untrack_pfn(struct mm_area *vma, unsigned long pfn,
>  			unsigned long size, bool mm_wr_locked);
> -extern void untrack_pfn_clear(struct vm_area_struct *vma);
> +extern void untrack_pfn_clear(struct mm_area *vma);
>  #endif
>
>  #ifdef CONFIG_MMU
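
The pgtable.h churn also shows why none of the arch directories could
be skipped: every __HAVE_ARCH_* guard means an architecture repeats
one of these prototypes.  Sketch of the pattern with a made-up arch:

	/* arch/foo/include/asm/pgtable.h (hypothetical) */
	#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
	int ptep_set_access_flags(struct mm_area *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int dirty);

include/linux/pgtable.h only declares the generic version when the
guard is absent, so letting the two copies drift apart breaks the
build on exactly one config.
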
> diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
> index 86be8bf27b41..71db2f7ec326 100644
> --- a/include/linux/pkeys.h
> +++ b/include/linux/pkeys.h
> @@ -15,7 +15,7 @@
>  #define PKEY_DEDICATED_EXECUTE_ONLY 0
>  #define ARCH_VM_PKEY_FLAGS 0
>
> -static inline int vma_pkey(struct vm_area_struct *vma)
> +static inline int vma_pkey(struct mm_area *vma)
>  {
>  	return 0;
>  }
> diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
> index ea62201c74c4..b123101b135e 100644
> --- a/include/linux/proc_fs.h
> +++ b/include/linux/proc_fs.h
> @@ -43,7 +43,7 @@ struct proc_ops {
>  #ifdef CONFIG_COMPAT
>  	long	(*proc_compat_ioctl)(struct file *, unsigned int, unsigned long);
>  #endif
> -	int	(*proc_mmap)(struct file *, struct vm_area_struct *);
> +	int	(*proc_mmap)(struct file *, struct mm_area *);
>  	unsigned long (*proc_get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
>  } __randomize_layout;
>
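
Most of the driver-facing fallout looks like proc_ops here: the
callback prototype changes and nothing else.  For a typical user the
whole conversion is one line; illustrative sketch (the demo_* names
are invented):

	static int demo_proc_mmap(struct file *file, struct mm_area *vma)
	{
		if (vma->vm_end - vma->vm_start > PAGE_SIZE)
			return -EINVAL;
		/* A real driver would install pages or a fault handler. */
		vm_flags_set(vma, VM_DONTEXPAND);
		return 0;
	}

	static const struct proc_ops demo_ops = {
		.proc_mmap	= demo_proc_mmap,
	};
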
> diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
> index 56e27263acf8..d7bed10786f6 100644
> --- a/include/linux/ring_buffer.h
> +++ b/include/linux/ring_buffer.h
> @@ -245,7 +245,7 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
>  #endif
>
>  int ring_buffer_map(struct trace_buffer *buffer, int cpu,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>  int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
>  int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
>  #endif /* _LINUX_RING_BUFFER_H */
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 6b82b618846e..6e0a7da7a80a 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -81,7 +81,7 @@ struct anon_vma {
>   * which link all the VMAs associated with this anon_vma.
>   */
>  struct anon_vma_chain {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct anon_vma *anon_vma;
>  	struct list_head same_vma;   /* locked by mmap_lock & page_table_lock */
>  	struct rb_node rb;			/* locked by anon_vma->rwsem */
> @@ -152,12 +152,12 @@ static inline void anon_vma_unlock_read(struct anon_vma *anon_vma)
>   * anon_vma helper functions.
>   */
>  void anon_vma_init(void);	/* create anon_vma_cachep */
> -int  __anon_vma_prepare(struct vm_area_struct *);
> -void unlink_anon_vmas(struct vm_area_struct *);
> -int anon_vma_clone(struct vm_area_struct *, struct vm_area_struct *);
> -int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
> +int  __anon_vma_prepare(struct mm_area *);
> +void unlink_anon_vmas(struct mm_area *);
> +int anon_vma_clone(struct mm_area *, struct mm_area *);
> +int anon_vma_fork(struct mm_area *, struct mm_area *);
>
> -static inline int anon_vma_prepare(struct vm_area_struct *vma)
> +static inline int anon_vma_prepare(struct mm_area *vma)
>  {
>  	if (likely(vma->anon_vma))
>  		return 0;
> @@ -165,8 +165,8 @@ static inline int anon_vma_prepare(struct vm_area_struct *vma)
>  	return __anon_vma_prepare(vma);
>  }
>
> -static inline void anon_vma_merge(struct vm_area_struct *vma,
> -				  struct vm_area_struct *next)
> +static inline void anon_vma_merge(struct mm_area *vma,
> +				  struct mm_area *next)
>  {
>  	VM_BUG_ON_VMA(vma->anon_vma != next->anon_vma, vma);
>  	unlink_anon_vmas(next);
> @@ -227,7 +227,7 @@ static inline void __folio_large_mapcount_sanity_checks(const struct folio *foli
>  }
>
>  static __always_inline void folio_set_large_mapcount(struct folio *folio,
> -		int mapcount, struct vm_area_struct *vma)
> +		int mapcount, struct mm_area *vma)
>  {
>  	__folio_large_mapcount_sanity_checks(folio, mapcount, vma->vm_mm->mm_id);
>
> @@ -241,7 +241,7 @@ static __always_inline void folio_set_large_mapcount(struct folio *folio,
>  }
>
>  static __always_inline int folio_add_return_large_mapcount(struct folio *folio,
> -		int diff, struct vm_area_struct *vma)
> +		int diff, struct mm_area *vma)
>  {
>  	const mm_id_t mm_id = vma->vm_mm->mm_id;
>  	int new_mapcount_val;
> @@ -291,7 +291,7 @@ static __always_inline int folio_add_return_large_mapcount(struct folio *folio,
>  #define folio_add_large_mapcount folio_add_return_large_mapcount
>
>  static __always_inline int folio_sub_return_large_mapcount(struct folio *folio,
> -		int diff, struct vm_area_struct *vma)
> +		int diff, struct mm_area *vma)
>  {
>  	const mm_id_t mm_id = vma->vm_mm->mm_id;
>  	int new_mapcount_val;
> @@ -342,32 +342,32 @@ static __always_inline int folio_sub_return_large_mapcount(struct folio *folio,
>   * CONFIG_TRANSPARENT_HUGEPAGE. We'll keep that working for now.
>   */
>  static inline void folio_set_large_mapcount(struct folio *folio, int mapcount,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	/* Note: mapcounts start at -1. */
>  	atomic_set(&folio->_large_mapcount, mapcount - 1);
>  }
>
>  static inline void folio_add_large_mapcount(struct folio *folio,
> -		int diff, struct vm_area_struct *vma)
> +		int diff, struct mm_area *vma)
>  {
>  	atomic_add(diff, &folio->_large_mapcount);
>  }
>
>  static inline int folio_add_return_large_mapcount(struct folio *folio,
> -		int diff, struct vm_area_struct *vma)
> +		int diff, struct mm_area *vma)
>  {
>  	BUILD_BUG();
>  }
>
>  static inline void folio_sub_large_mapcount(struct folio *folio,
> -		int diff, struct vm_area_struct *vma)
> +		int diff, struct mm_area *vma)
>  {
>  	atomic_sub(diff, &folio->_large_mapcount);
>  }
>
>  static inline int folio_sub_return_large_mapcount(struct folio *folio,
> -		int diff, struct vm_area_struct *vma)
> +		int diff, struct mm_area *vma)
>  {
>  	BUILD_BUG();
>  }
> @@ -454,40 +454,40 @@ static inline void __folio_rmap_sanity_checks(const struct folio *folio,
>  /*
>   * rmap interfaces called when adding or removing pte of page
>   */
> -void folio_move_anon_rmap(struct folio *, struct vm_area_struct *);
> +void folio_move_anon_rmap(struct folio *, struct mm_area *);
>  void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
> -		struct vm_area_struct *, unsigned long address, rmap_t flags);
> +		struct mm_area *, unsigned long address, rmap_t flags);
>  #define folio_add_anon_rmap_pte(folio, page, vma, address, flags) \
>  	folio_add_anon_rmap_ptes(folio, page, 1, vma, address, flags)
>  void folio_add_anon_rmap_pmd(struct folio *, struct page *,
> -		struct vm_area_struct *, unsigned long address, rmap_t flags);
> -void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> +		struct mm_area *, unsigned long address, rmap_t flags);
> +void folio_add_new_anon_rmap(struct folio *, struct mm_area *,
>  		unsigned long address, rmap_t flags);
>  void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
> -		struct vm_area_struct *);
> +		struct mm_area *);
>  #define folio_add_file_rmap_pte(folio, page, vma) \
>  	folio_add_file_rmap_ptes(folio, page, 1, vma)
>  void folio_add_file_rmap_pmd(struct folio *, struct page *,
> -		struct vm_area_struct *);
> +		struct mm_area *);
>  void folio_add_file_rmap_pud(struct folio *, struct page *,
> -		struct vm_area_struct *);
> +		struct mm_area *);
>  void folio_remove_rmap_ptes(struct folio *, struct page *, int nr_pages,
> -		struct vm_area_struct *);
> +		struct mm_area *);
>  #define folio_remove_rmap_pte(folio, page, vma) \
>  	folio_remove_rmap_ptes(folio, page, 1, vma)
>  void folio_remove_rmap_pmd(struct folio *, struct page *,
> -		struct vm_area_struct *);
> +		struct mm_area *);
>  void folio_remove_rmap_pud(struct folio *, struct page *,
> -		struct vm_area_struct *);
> +		struct mm_area *);
>
> -void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
> +void hugetlb_add_anon_rmap(struct folio *, struct mm_area *,
>  		unsigned long address, rmap_t flags);
> -void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> +void hugetlb_add_new_anon_rmap(struct folio *, struct mm_area *,
>  		unsigned long address);
>
>  /* See folio_try_dup_anon_rmap_*() */
>  static inline int hugetlb_try_dup_anon_rmap(struct folio *folio,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
>  	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
> @@ -544,7 +544,7 @@ static inline void hugetlb_remove_rmap(struct folio *folio)
>  }
>
>  static __always_inline void __folio_dup_file_rmap(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
> +		struct page *page, int nr_pages, struct mm_area *dst_vma,
>  		enum rmap_level level)
>  {
>  	const int orig_nr_pages = nr_pages;
> @@ -585,13 +585,13 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
>   * The caller needs to hold the page table lock.
>   */
>  static inline void folio_dup_file_rmap_ptes(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *dst_vma)
> +		struct page *page, int nr_pages, struct mm_area *dst_vma)
>  {
>  	__folio_dup_file_rmap(folio, page, nr_pages, dst_vma, RMAP_LEVEL_PTE);
>  }
>
>  static __always_inline void folio_dup_file_rmap_pte(struct folio *folio,
> -		struct page *page, struct vm_area_struct *dst_vma)
> +		struct page *page, struct mm_area *dst_vma)
>  {
>  	__folio_dup_file_rmap(folio, page, 1, dst_vma, RMAP_LEVEL_PTE);
>  }
> @@ -607,7 +607,7 @@ static __always_inline void folio_dup_file_rmap_pte(struct folio *folio,
>   * The caller needs to hold the page table lock.
>   */
>  static inline void folio_dup_file_rmap_pmd(struct folio *folio,
> -		struct page *page, struct vm_area_struct *dst_vma)
> +		struct page *page, struct mm_area *dst_vma)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	__folio_dup_file_rmap(folio, page, HPAGE_PMD_NR, dst_vma, RMAP_LEVEL_PTE);
> @@ -617,8 +617,8 @@ static inline void folio_dup_file_rmap_pmd(struct folio *folio,
>  }
>
>  static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma, enum rmap_level level)
> +		struct page *page, int nr_pages, struct mm_area *dst_vma,
> +		struct mm_area *src_vma, enum rmap_level level)
>  {
>  	const int orig_nr_pages = nr_pages;
>  	bool maybe_pinned;
> @@ -704,16 +704,16 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
>   * Returns 0 if duplicating the mappings succeeded. Returns -EBUSY otherwise.
>   */
>  static inline int folio_try_dup_anon_rmap_ptes(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma)
> +		struct page *page, int nr_pages, struct mm_area *dst_vma,
> +		struct mm_area *src_vma)
>  {
>  	return __folio_try_dup_anon_rmap(folio, page, nr_pages, dst_vma,
>  					 src_vma, RMAP_LEVEL_PTE);
>  }
>
>  static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
> -		struct page *page, struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma)
> +		struct page *page, struct mm_area *dst_vma,
> +		struct mm_area *src_vma)
>  {
>  	return __folio_try_dup_anon_rmap(folio, page, 1, dst_vma, src_vma,
>  					 RMAP_LEVEL_PTE);
> @@ -743,8 +743,8 @@ static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
>   * Returns 0 if duplicating the mapping succeeded. Returns -EBUSY otherwise.
>   */
>  static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
> -		struct page *page, struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma)
> +		struct page *page, struct mm_area *dst_vma,
> +		struct mm_area *src_vma)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	return __folio_try_dup_anon_rmap(folio, page, HPAGE_PMD_NR, dst_vma,
> @@ -910,7 +910,7 @@ struct page_vma_mapped_walk {
>  	unsigned long pfn;
>  	unsigned long nr_pages;
>  	pgoff_t pgoff;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long address;
>  	pmd_t *pmd;
>  	pte_t *pte;
> @@ -963,7 +963,7 @@ page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
>
>  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
>  unsigned long page_address_in_vma(const struct folio *folio,
> -		const struct page *, const struct vm_area_struct *);
> +		const struct page *, const struct mm_area *);
>
>  /*
>   * Cleans the PTEs of shared mappings.
> @@ -977,7 +977,7 @@ int mapping_wrprotect_range(struct address_space *mapping, pgoff_t pgoff,
>  		unsigned long pfn, unsigned long nr_pages);
>
>  int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
> -		      struct vm_area_struct *vma);
> +		      struct mm_area *vma);
>
>  enum rmp_flags {
>  	RMP_LOCKED		= 1 << 0,
> @@ -1005,12 +1005,12 @@ struct rmap_walk_control {
>  	 * Return false if page table scanning in rmap_walk should be stopped.
>  	 * Otherwise, return true.
>  	 */
> -	bool (*rmap_one)(struct folio *folio, struct vm_area_struct *vma,
> +	bool (*rmap_one)(struct folio *folio, struct mm_area *vma,
>  					unsigned long addr, void *arg);
>  	int (*done)(struct folio *folio);
>  	struct anon_vma *(*anon_lock)(const struct folio *folio,
>  				      struct rmap_walk_control *rwc);
> -	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
> +	bool (*invalid_vma)(struct mm_area *vma, void *arg);
>  };
>
>  void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc);
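
For anyone auditing callers: the function-pointer members of
rmap_walk_control mean the new type propagates into every rmap walker
in the tree, not just the declarations touched here.  A minimal
consumer now reads (demo_* is hypothetical):

	static bool demo_rmap_one(struct folio *folio, struct mm_area *vma,
				  unsigned long addr, void *arg)
	{
		/* per-VMA work goes here; returning false stops the walk */
		return true;
	}

	static void demo_walk(struct folio *folio)
	{
		struct rmap_walk_control rwc = {
			.rmap_one = demo_rmap_one,
		};

		rmap_walk(folio, &rwc);
	}
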
> diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
> index e918f96881f5..a38896f49499 100644
> --- a/include/linux/secretmem.h
> +++ b/include/linux/secretmem.h
> @@ -11,12 +11,12 @@ static inline bool secretmem_mapping(struct address_space *mapping)
>  	return mapping->a_ops == &secretmem_aops;
>  }
>
> -bool vma_is_secretmem(struct vm_area_struct *vma);
> +bool vma_is_secretmem(struct mm_area *vma);
>  bool secretmem_active(void);
>
>  #else
>
> -static inline bool vma_is_secretmem(struct vm_area_struct *vma)
> +static inline bool vma_is_secretmem(struct mm_area *vma)
>  {
>  	return false;
>  }
> diff --git a/include/linux/security.h b/include/linux/security.h
> index cc9b54d95d22..8478e56ee173 100644
> --- a/include/linux/security.h
> +++ b/include/linux/security.h
> @@ -476,7 +476,7 @@ int security_file_ioctl_compat(struct file *file, unsigned int cmd,
>  int security_mmap_file(struct file *file, unsigned long prot,
>  			unsigned long flags);
>  int security_mmap_addr(unsigned long addr);
> -int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
> +int security_file_mprotect(struct mm_area *vma, unsigned long reqprot,
>  			   unsigned long prot);
>  int security_file_lock(struct file *file, unsigned int cmd);
>  int security_file_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
> @@ -1151,7 +1151,7 @@ static inline int security_mmap_addr(unsigned long addr)
>  	return cap_mmap_addr(addr);
>  }
>
> -static inline int security_file_mprotect(struct vm_area_struct *vma,
> +static inline int security_file_mprotect(struct mm_area *vma,
>  					 unsigned long reqprot,
>  					 unsigned long prot)
>  {
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 0b273a7b9f01..e3913a29f10e 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -92,7 +92,7 @@ extern struct file *shmem_kernel_file_setup(const char *name, loff_t size,
>  					    unsigned long flags);
>  extern struct file *shmem_file_setup_with_mnt(struct vfsmount *mnt,
>  		const char *name, loff_t size, unsigned long flags);
> -extern int shmem_zero_setup(struct vm_area_struct *);
> +extern int shmem_zero_setup(struct mm_area *);
>  extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
>  		unsigned long len, unsigned long pgoff, unsigned long flags);
>  extern int shmem_lock(struct file *file, int lock, struct ucounts *ucounts);
> @@ -112,12 +112,12 @@ int shmem_unuse(unsigned int type);
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  unsigned long shmem_allowable_huge_orders(struct inode *inode,
> -				struct vm_area_struct *vma, pgoff_t index,
> +				struct mm_area *vma, pgoff_t index,
>  				loff_t write_end, bool shmem_huge_force);
>  bool shmem_hpage_pmd_enabled(void);
>  #else
>  static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
> -				struct vm_area_struct *vma, pgoff_t index,
> +				struct mm_area *vma, pgoff_t index,
>  				loff_t write_end, bool shmem_huge_force)
>  {
>  	return 0;
> @@ -130,9 +130,9 @@ static inline bool shmem_hpage_pmd_enabled(void)
>  #endif
>
>  #ifdef CONFIG_SHMEM
> -extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
> +extern unsigned long shmem_swap_usage(struct mm_area *vma);
>  #else
> -static inline unsigned long shmem_swap_usage(struct vm_area_struct *vma)
> +static inline unsigned long shmem_swap_usage(struct mm_area *vma)
>  {
>  	return 0;
>  }
> @@ -194,7 +194,7 @@ extern void shmem_uncharge(struct inode *inode, long pages);
>  #ifdef CONFIG_USERFAULTFD
>  #ifdef CONFIG_SHMEM
>  extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
> -				  struct vm_area_struct *dst_vma,
> +				  struct mm_area *dst_vma,
>  				  unsigned long dst_addr,
>  				  unsigned long src_addr,
>  				  uffd_flags_t flags,
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index db46b25a65ae..1652caa8ceed 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -380,7 +380,7 @@ void lru_note_cost(struct lruvec *lruvec, bool file,
>  		   unsigned int nr_io, unsigned int nr_rotated);
>  void lru_note_cost_refault(struct folio *);
>  void folio_add_lru(struct folio *);
> -void folio_add_lru_vma(struct folio *, struct vm_area_struct *);
> +void folio_add_lru_vma(struct folio *, struct mm_area *);
>  void mark_page_accessed(struct page *);
>  void folio_mark_accessed(struct folio *);
>
> diff --git a/include/linux/swapops.h b/include/linux/swapops.h
> index 64ea151a7ae3..697e5d60b776 100644
> --- a/include/linux/swapops.h
> +++ b/include/linux/swapops.h
> @@ -315,7 +315,7 @@ static inline bool is_migration_entry_dirty(swp_entry_t entry)
>
>  extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>  					unsigned long address);
> -extern void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *pte);
> +extern void migration_entry_wait_huge(struct mm_area *vma, unsigned long addr, pte_t *pte);
>  #else  /* CONFIG_MIGRATION */
>  static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
>  {
> @@ -339,7 +339,7 @@ static inline int is_migration_entry(swp_entry_t swp)
>
>  static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>  					unsigned long address) { }
> -static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
> +static inline void migration_entry_wait_huge(struct mm_area *vma,
>  					     unsigned long addr, pte_t *pte) { }
>  static inline int is_writable_migration_entry(swp_entry_t entry)
>  {
> diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
> index 18f7e1fd093c..4b1c38978498 100644
> --- a/include/linux/sysfs.h
> +++ b/include/linux/sysfs.h
> @@ -298,7 +298,7 @@ static const struct attribute_group _name##_group = {		\
>  __ATTRIBUTE_GROUPS(_name)
>
>  struct file;
> -struct vm_area_struct;
> +struct mm_area;
>  struct address_space;
>
>  struct bin_attribute {
> @@ -317,7 +317,7 @@ struct bin_attribute {
>  	loff_t (*llseek)(struct file *, struct kobject *, const struct bin_attribute *,
>  			 loff_t, int);
>  	int (*mmap)(struct file *, struct kobject *, const struct bin_attribute *attr,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>  };
>
>  /**
> diff --git a/include/linux/time_namespace.h b/include/linux/time_namespace.h
> index 0b8b32bf0655..12b3ecc86fe6 100644
> --- a/include/linux/time_namespace.h
> +++ b/include/linux/time_namespace.h
> @@ -12,7 +12,7 @@
>  struct user_namespace;
>  extern struct user_namespace init_user_ns;
>
> -struct vm_area_struct;
> +struct mm_area;
>
>  struct timens_offsets {
>  	struct timespec64 monotonic;
> @@ -47,7 +47,7 @@ struct time_namespace *copy_time_ns(unsigned long flags,
>  				    struct time_namespace *old_ns);
>  void free_time_ns(struct time_namespace *ns);
>  void timens_on_fork(struct nsproxy *nsproxy, struct task_struct *tsk);
> -struct page *find_timens_vvar_page(struct vm_area_struct *vma);
> +struct page *find_timens_vvar_page(struct mm_area *vma);
>
>  static inline void put_time_ns(struct time_namespace *ns)
>  {
> @@ -144,7 +144,7 @@ static inline void timens_on_fork(struct nsproxy *nsproxy,
>  	return;
>  }
>
> -static inline struct page *find_timens_vvar_page(struct vm_area_struct *vma)
> +static inline struct page *find_timens_vvar_page(struct mm_area *vma)
>  {
>  	return NULL;
>  }
> diff --git a/include/linux/uacce.h b/include/linux/uacce.h
> index e290c0269944..dcb2b94de9f1 100644
> --- a/include/linux/uacce.h
> +++ b/include/linux/uacce.h
> @@ -43,7 +43,7 @@ struct uacce_ops {
>  	int (*start_queue)(struct uacce_queue *q);
>  	void (*stop_queue)(struct uacce_queue *q);
>  	int (*is_q_updated)(struct uacce_queue *q);
> -	int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma,
> +	int (*mmap)(struct uacce_queue *q, struct mm_area *vma,
>  		    struct uacce_qfile_region *qfr);
>  	long (*ioctl)(struct uacce_queue *q, unsigned int cmd,
>  		      unsigned long arg);
> diff --git a/include/linux/uio_driver.h b/include/linux/uio_driver.h
> index 18238dc8bfd3..69fdc49c1df4 100644
> --- a/include/linux/uio_driver.h
> +++ b/include/linux/uio_driver.h
> @@ -112,7 +112,7 @@ struct uio_info {
>  	unsigned long		irq_flags;
>  	void			*priv;
>  	irqreturn_t (*handler)(int irq, struct uio_info *dev_info);
> -	int (*mmap)(struct uio_info *info, struct vm_area_struct *vma);
> +	int (*mmap)(struct uio_info *info, struct mm_area *vma);
>  	int (*open)(struct uio_info *info, struct inode *inode);
>  	int (*release)(struct uio_info *info, struct inode *inode);
>  	int (*irqcontrol)(struct uio_info *info, s32 irq_on);
> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> index 2e46b69ff0a6..f8af45f0c683 100644
> --- a/include/linux/uprobes.h
> +++ b/include/linux/uprobes.h
> @@ -19,7 +19,7 @@
>  #include <linux/seqlock.h>
>
>  struct uprobe;
> -struct vm_area_struct;
> +struct mm_area;
>  struct mm_struct;
>  struct inode;
>  struct notifier_block;
> @@ -199,8 +199,8 @@ extern struct uprobe *uprobe_register(struct inode *inode, loff_t offset, loff_t
>  extern int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool);
>  extern void uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc);
>  extern void uprobe_unregister_sync(void);
> -extern int uprobe_mmap(struct vm_area_struct *vma);
> -extern void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end);
> +extern int uprobe_mmap(struct mm_area *vma);
> +extern void uprobe_munmap(struct mm_area *vma, unsigned long start, unsigned long end);
>  extern void uprobe_start_dup_mmap(void);
>  extern void uprobe_end_dup_mmap(void);
>  extern void uprobe_dup_mmap(struct mm_struct *oldmm, struct mm_struct *newmm);
> @@ -253,12 +253,12 @@ uprobe_unregister_nosync(struct uprobe *uprobe, struct uprobe_consumer *uc)
>  static inline void uprobe_unregister_sync(void)
>  {
>  }
> -static inline int uprobe_mmap(struct vm_area_struct *vma)
> +static inline int uprobe_mmap(struct mm_area *vma)
>  {
>  	return 0;
>  }
>  static inline void
> -uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +uprobe_munmap(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  }
>  static inline void uprobe_start_dup_mmap(void)
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index 75342022d144..6b45a807875d 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -116,7 +116,7 @@ static inline uffd_flags_t uffd_flags_set_mode(uffd_flags_t flags, enum mfill_at
>  #define MFILL_ATOMIC_WP MFILL_ATOMIC_FLAG(0)
>
>  extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
> -				    struct vm_area_struct *dst_vma,
> +				    struct mm_area *dst_vma,
>  				    unsigned long dst_addr, struct page *page,
>  				    bool newly_allocated, uffd_flags_t flags);
>
> @@ -132,7 +132,7 @@ extern ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long st
>  				   unsigned long len, uffd_flags_t flags);
>  extern int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
>  			       unsigned long len, bool enable_wp);
> -extern long uffd_wp_range(struct vm_area_struct *vma,
> +extern long uffd_wp_range(struct mm_area *vma,
>  			  unsigned long start, unsigned long len, bool enable_wp);
>
>  /* move_pages */
> @@ -141,12 +141,12 @@ void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
>  ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
>  		   unsigned long src_start, unsigned long len, __u64 flags);
>  int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> -			struct vm_area_struct *dst_vma,
> -			struct vm_area_struct *src_vma,
> +			struct mm_area *dst_vma,
> +			struct mm_area *src_vma,
>  			unsigned long dst_addr, unsigned long src_addr);
>
>  /* mm helpers */
> -static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> +static inline bool is_mergeable_vm_userfaultfd_ctx(struct mm_area *vma,
>  					struct vm_userfaultfd_ctx vm_ctx)
>  {
>  	return vma->vm_userfaultfd_ctx.ctx == vm_ctx.ctx;
> @@ -163,7 +163,7 @@ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
>   *   with huge pmd sharing this would *also* setup the second UFFD-registered
>   *   mapping, and we'd not get minor faults.)
>   */
> -static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
> +static inline bool uffd_disable_huge_pmd_share(struct mm_area *vma)
>  {
>  	return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
>  }
> @@ -175,44 +175,44 @@ static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
>   * as the fault around checks for pte_none() before the installation, however
>   * to be super safe we just forbid it.
>   */
> -static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
> +static inline bool uffd_disable_fault_around(struct mm_area *vma)
>  {
>  	return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
>  }
>
> -static inline bool userfaultfd_missing(struct vm_area_struct *vma)
> +static inline bool userfaultfd_missing(struct mm_area *vma)
>  {
>  	return vma->vm_flags & VM_UFFD_MISSING;
>  }
>
> -static inline bool userfaultfd_wp(struct vm_area_struct *vma)
> +static inline bool userfaultfd_wp(struct mm_area *vma)
>  {
>  	return vma->vm_flags & VM_UFFD_WP;
>  }
>
> -static inline bool userfaultfd_minor(struct vm_area_struct *vma)
> +static inline bool userfaultfd_minor(struct mm_area *vma)
>  {
>  	return vma->vm_flags & VM_UFFD_MINOR;
>  }
>
> -static inline bool userfaultfd_pte_wp(struct vm_area_struct *vma,
> +static inline bool userfaultfd_pte_wp(struct mm_area *vma,
>  				      pte_t pte)
>  {
>  	return userfaultfd_wp(vma) && pte_uffd_wp(pte);
>  }
>
> -static inline bool userfaultfd_huge_pmd_wp(struct vm_area_struct *vma,
> +static inline bool userfaultfd_huge_pmd_wp(struct mm_area *vma,
>  					   pmd_t pmd)
>  {
>  	return userfaultfd_wp(vma) && pmd_uffd_wp(pmd);
>  }
>
> -static inline bool userfaultfd_armed(struct vm_area_struct *vma)
> +static inline bool userfaultfd_armed(struct mm_area *vma)
>  {
>  	return vma->vm_flags & __VM_UFFD_FLAGS;
>  }
>
> -static inline bool vma_can_userfault(struct vm_area_struct *vma,
> +static inline bool vma_can_userfault(struct mm_area *vma,
>  				     unsigned long vm_flags,
>  				     bool wp_async)
>  {
> @@ -247,44 +247,44 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
>  	    vma_is_shmem(vma);
>  }
>
> -static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
> +static inline bool vma_has_uffd_without_event_remap(struct mm_area *vma)
>  {
>  	struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
>
>  	return uffd_ctx && (uffd_ctx->features & UFFD_FEATURE_EVENT_REMAP) == 0;
>  }
>
> -extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
> +extern int dup_userfaultfd(struct mm_area *, struct list_head *);
>  extern void dup_userfaultfd_complete(struct list_head *);
>  void dup_userfaultfd_fail(struct list_head *);
>
> -extern void mremap_userfaultfd_prep(struct vm_area_struct *,
> +extern void mremap_userfaultfd_prep(struct mm_area *,
>  				    struct vm_userfaultfd_ctx *);
>  extern void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *,
>  					unsigned long from, unsigned long to,
>  					unsigned long len);
>
> -extern bool userfaultfd_remove(struct vm_area_struct *vma,
> +extern bool userfaultfd_remove(struct mm_area *vma,
>  			       unsigned long start,
>  			       unsigned long end);
>
> -extern int userfaultfd_unmap_prep(struct vm_area_struct *vma,
> +extern int userfaultfd_unmap_prep(struct mm_area *vma,
>  		unsigned long start, unsigned long end, struct list_head *uf);
>  extern void userfaultfd_unmap_complete(struct mm_struct *mm,
>  				       struct list_head *uf);
> -extern bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma);
> -extern bool userfaultfd_wp_async(struct vm_area_struct *vma);
> +extern bool userfaultfd_wp_unpopulated(struct mm_area *vma);
> +extern bool userfaultfd_wp_async(struct mm_area *vma);
>
> -void userfaultfd_reset_ctx(struct vm_area_struct *vma);
> +void userfaultfd_reset_ctx(struct mm_area *vma);
>
> -struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
> -					     struct vm_area_struct *prev,
> -					     struct vm_area_struct *vma,
> +struct mm_area *userfaultfd_clear_vma(struct vma_iterator *vmi,
> +					     struct mm_area *prev,
> +					     struct mm_area *vma,
>  					     unsigned long start,
>  					     unsigned long end);
>
>  int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
> -			       struct vm_area_struct *vma,
> +			       struct mm_area *vma,
>  			       unsigned long vm_flags,
>  			       unsigned long start, unsigned long end,
>  			       bool wp_async);
> @@ -303,53 +303,53 @@ static inline vm_fault_t handle_userfault(struct vm_fault *vmf,
>  	return VM_FAULT_SIGBUS;
>  }
>
> -static inline long uffd_wp_range(struct vm_area_struct *vma,
> +static inline long uffd_wp_range(struct mm_area *vma,
>  				 unsigned long start, unsigned long len,
>  				 bool enable_wp)
>  {
>  	return false;
>  }
>
> -static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> +static inline bool is_mergeable_vm_userfaultfd_ctx(struct mm_area *vma,
>  					struct vm_userfaultfd_ctx vm_ctx)
>  {
>  	return true;
>  }
>
> -static inline bool userfaultfd_missing(struct vm_area_struct *vma)
> +static inline bool userfaultfd_missing(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_wp(struct vm_area_struct *vma)
> +static inline bool userfaultfd_wp(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_minor(struct vm_area_struct *vma)
> +static inline bool userfaultfd_minor(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_pte_wp(struct vm_area_struct *vma,
> +static inline bool userfaultfd_pte_wp(struct mm_area *vma,
>  				      pte_t pte)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_huge_pmd_wp(struct vm_area_struct *vma,
> +static inline bool userfaultfd_huge_pmd_wp(struct mm_area *vma,
>  					   pmd_t pmd)
>  {
>  	return false;
>  }
>
>
> -static inline bool userfaultfd_armed(struct vm_area_struct *vma)
> +static inline bool userfaultfd_armed(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline int dup_userfaultfd(struct vm_area_struct *vma,
> +static inline int dup_userfaultfd(struct mm_area *vma,
>  				  struct list_head *l)
>  {
>  	return 0;
> @@ -363,7 +363,7 @@ static inline void dup_userfaultfd_fail(struct list_head *l)
>  {
>  }
>
> -static inline void mremap_userfaultfd_prep(struct vm_area_struct *vma,
> +static inline void mremap_userfaultfd_prep(struct mm_area *vma,
>  					   struct vm_userfaultfd_ctx *ctx)
>  {
>  }
> @@ -375,14 +375,14 @@ static inline void mremap_userfaultfd_complete(struct vm_userfaultfd_ctx *ctx,
>  {
>  }
>
> -static inline bool userfaultfd_remove(struct vm_area_struct *vma,
> +static inline bool userfaultfd_remove(struct mm_area *vma,
>  				      unsigned long start,
>  				      unsigned long end)
>  {
>  	return true;
>  }
>
> -static inline int userfaultfd_unmap_prep(struct vm_area_struct *vma,
> +static inline int userfaultfd_unmap_prep(struct mm_area *vma,
>  					 unsigned long start, unsigned long end,
>  					 struct list_head *uf)
>  {
> @@ -394,29 +394,29 @@ static inline void userfaultfd_unmap_complete(struct mm_struct *mm,
>  {
>  }
>
> -static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
> +static inline bool uffd_disable_fault_around(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_wp_unpopulated(struct vm_area_struct *vma)
> +static inline bool userfaultfd_wp_unpopulated(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_wp_async(struct vm_area_struct *vma)
> +static inline bool userfaultfd_wp_async(struct mm_area *vma)
>  {
>  	return false;
>  }
>
> -static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
> +static inline bool vma_has_uffd_without_event_remap(struct mm_area *vma)
>  {
>  	return false;
>  }
>
>  #endif /* CONFIG_USERFAULTFD */
>
> -static inline bool userfaultfd_wp_use_markers(struct vm_area_struct *vma)
> +static inline bool userfaultfd_wp_use_markers(struct mm_area *vma)
>  {
>  	/* Only wr-protect mode uses pte markers */
>  	if (!userfaultfd_wp(vma))
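
[ Context for readers skimming the rename: these CONFIG_USERFAULTFD=n
  stubs exist so fault paths can test a VMA unconditionally; the
  predicate folds to a constant and the compiler discards the branch.
  A rough sketch of the idiom with the new type name (hypothetical
  caller, not from this patch):

	static vm_fault_t foo_anon_fault(struct vm_fault *vmf)
	{
		struct mm_area *vma = vmf->vma;

		/* Folds to "if (false)" when CONFIG_USERFAULTFD=n. */
		if (userfaultfd_missing(vma))
			return handle_userfault(vmf, VM_UFFD_MISSING);

		/* ... allocate and map a zeroed page ... */
		return 0;
	}
]
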
> diff --git a/include/linux/vdso_datastore.h b/include/linux/vdso_datastore.h
> index a91fa24b06e0..8523a57ba6c0 100644
> --- a/include/linux/vdso_datastore.h
> +++ b/include/linux/vdso_datastore.h
> @@ -5,6 +5,6 @@
>  #include <linux/mm_types.h>
>
>  extern const struct vm_special_mapping vdso_vvar_mapping;
> -struct vm_area_struct *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr);
> +struct mm_area *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr);
>
>  #endif /* _LINUX_VDSO_DATASTORE_H */
> diff --git a/include/linux/vfio.h b/include/linux/vfio.h
> index 707b00772ce1..3830567b796e 100644
> --- a/include/linux/vfio.h
> +++ b/include/linux/vfio.h
> @@ -129,7 +129,7 @@ struct vfio_device_ops {
>  			 size_t count, loff_t *size);
>  	long	(*ioctl)(struct vfio_device *vdev, unsigned int cmd,
>  			 unsigned long arg);
> -	int	(*mmap)(struct vfio_device *vdev, struct vm_area_struct *vma);
> +	int	(*mmap)(struct vfio_device *vdev, struct mm_area *vma);
>  	void	(*request)(struct vfio_device *vdev, unsigned int count);
>  	int	(*match)(struct vfio_device *vdev, char *buf);
>  	void	(*dma_unmap)(struct vfio_device *vdev, u64 iova, u64 length);
> diff --git a/include/linux/vfio_pci_core.h b/include/linux/vfio_pci_core.h
> index fbb472dd99b3..0dcef04e3e8c 100644
> --- a/include/linux/vfio_pci_core.h
> +++ b/include/linux/vfio_pci_core.h
> @@ -34,7 +34,7 @@ struct vfio_pci_regops {
>  			   struct vfio_pci_region *region);
>  	int	(*mmap)(struct vfio_pci_core_device *vdev,
>  			struct vfio_pci_region *region,
> -			struct vm_area_struct *vma);
> +			struct mm_area *vma);
>  	int	(*add_capability)(struct vfio_pci_core_device *vdev,
>  				  struct vfio_pci_region *region,
>  				  struct vfio_info_cap *caps);
> @@ -119,7 +119,7 @@ ssize_t vfio_pci_core_read(struct vfio_device *core_vdev, char __user *buf,
>  		size_t count, loff_t *ppos);
>  ssize_t vfio_pci_core_write(struct vfio_device *core_vdev, const char __user *buf,
>  		size_t count, loff_t *ppos);
> -int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct vm_area_struct *vma);
> +int vfio_pci_core_mmap(struct vfio_device *core_vdev, struct mm_area *vma);
>  void vfio_pci_core_request(struct vfio_device *core_vdev, unsigned int count);
>  int vfio_pci_core_match(struct vfio_device *core_vdev, char *buf);
>  int vfio_pci_core_enable(struct vfio_pci_core_device *vdev);
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 31e9ffd936e3..3e555eb63f36 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -14,7 +14,7 @@
>
>  #include <asm/vmalloc.h>
>
> -struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
> +struct mm_area;		/* vma defining user mapping in mm_types.h */
>  struct notifier_block;		/* in notifier.h */
>  struct iov_iter;		/* in uio.h */
>
> @@ -195,11 +195,11 @@ extern void *vmap(struct page **pages, unsigned int count,
>  void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
>  extern void vunmap(const void *addr);
>
> -extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
> +extern int remap_vmalloc_range_partial(struct mm_area *vma,
>  				       unsigned long uaddr, void *kaddr,
>  				       unsigned long pgoff, unsigned long size);
>
> -extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
> +extern int remap_vmalloc_range(struct mm_area *vma, void *addr,
>  							unsigned long pgoff);
>
>  int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
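
[ For context, the classic consumer of remap_vmalloc_range() is a
  driver mmap handler exposing a vmalloc_user() buffer; a minimal
  sketch with made-up names (foo_mmap, foo_buf):

	static int foo_mmap(struct file *file, struct mm_area *vma)
	{
		/* foo_buf came from vmalloc_user(), which zeroes the
		 * pages and sets VM_USERMAP as this helper requires. */
		return remap_vmalloc_range(vma, foo_buf, 0);
	}
]
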
> diff --git a/include/media/dvb_vb2.h b/include/media/dvb_vb2.h
> index 8cb88452cd6c..42956944bba4 100644
> --- a/include/media/dvb_vb2.h
> +++ b/include/media/dvb_vb2.h
> @@ -270,11 +270,11 @@ int dvb_vb2_dqbuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b);
>   * dvb_vb2_mmap() - Wrapper to vb2_mmap() for Digital TV buffer handling.
>   *
>   * @ctx:	control struct for VB2 handler
> - * @vma:        pointer to &struct vm_area_struct with the vma passed
> + * @vma:        pointer to &struct mm_area with the vma passed
>   *              to the mmap file operation handler in the driver.
>   *
>   * map Digital TV video buffers into application address space.
>   */
> -int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct vm_area_struct *vma);
> +int dvb_vb2_mmap(struct dvb_vb2_ctx *ctx, struct mm_area *vma);
>
>  #endif /* _DVB_VB2_H */
> diff --git a/include/media/v4l2-dev.h b/include/media/v4l2-dev.h
> index 1b6222fab24e..caef335b7731 100644
> --- a/include/media/v4l2-dev.h
> +++ b/include/media/v4l2-dev.h
> @@ -209,7 +209,7 @@ struct v4l2_file_operations {
>  #endif
>  	unsigned long (*get_unmapped_area) (struct file *, unsigned long,
>  				unsigned long, unsigned long, unsigned long);
> -	int (*mmap) (struct file *, struct vm_area_struct *);
> +	int (*mmap) (struct file *, struct mm_area *);
>  	int (*open) (struct file *);
>  	int (*release) (struct file *);
>  };
> diff --git a/include/media/v4l2-mem2mem.h b/include/media/v4l2-mem2mem.h
> index 0af330cf91c3..19ee65878a35 100644
> --- a/include/media/v4l2-mem2mem.h
> +++ b/include/media/v4l2-mem2mem.h
> @@ -490,7 +490,7 @@ __poll_t v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
>   *
>   * @file: pointer to struct &file
>   * @m2m_ctx: m2m context assigned to the instance given by struct &v4l2_m2m_ctx
> - * @vma: pointer to struct &vm_area_struct
> + * @vma: pointer to struct &mm_area
>   *
>   * Call from driver's mmap() function. Will handle mmap() for both queues
>   * seamlessly for the video buffer, which will receive normal per-queue offsets
> @@ -500,7 +500,7 @@ __poll_t v4l2_m2m_poll(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
>   * thus applications) receive modified offsets.
>   */
>  int v4l2_m2m_mmap(struct file *file, struct v4l2_m2m_ctx *m2m_ctx,
> -		  struct vm_area_struct *vma);
> +		  struct mm_area *vma);
>
>  #ifndef CONFIG_MMU
>  unsigned long v4l2_m2m_get_unmapped_area(struct file *file, unsigned long addr,
> @@ -895,7 +895,7 @@ int v4l2_m2m_ioctl_stateless_try_decoder_cmd(struct file *file, void *fh,
>  					     struct v4l2_decoder_cmd *dc);
>  int v4l2_m2m_ioctl_stateless_decoder_cmd(struct file *file, void *priv,
>  					 struct v4l2_decoder_cmd *dc);
> -int v4l2_m2m_fop_mmap(struct file *file, struct vm_area_struct *vma);
> +int v4l2_m2m_fop_mmap(struct file *file, struct mm_area *vma);
>  __poll_t v4l2_m2m_fop_poll(struct file *file, poll_table *wait);
>
>  #endif /* _MEDIA_V4L2_MEM2MEM_H */
> diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
> index 9b02aeba4108..dbfb8876fbf9 100644
> --- a/include/media/videobuf2-core.h
> +++ b/include/media/videobuf2-core.h
> @@ -146,7 +146,7 @@ struct vb2_mem_ops {
>
>  	unsigned int	(*num_users)(void *buf_priv);
>
> -	int		(*mmap)(void *buf_priv, struct vm_area_struct *vma);
> +	int		(*mmap)(void *buf_priv, struct mm_area *vma);
>  };
>
>  /**
> @@ -1033,7 +1033,7 @@ void vb2_queue_error(struct vb2_queue *q);
>  /**
>   * vb2_mmap() - map video buffers into application address space.
>   * @q:		pointer to &struct vb2_queue with videobuf2 queue.
> - * @vma:	pointer to &struct vm_area_struct with the vma passed
> + * @vma:	pointer to &struct mm_area with the vma passed
>   *		to the mmap file operation handler in the driver.
>   *
>   * Should be called from mmap file operation handler of a driver.
> @@ -1052,7 +1052,7 @@ void vb2_queue_error(struct vb2_queue *q);
>   * The return values from this function are intended to be directly returned
>   * from the mmap handler in driver.
>   */
> -int vb2_mmap(struct vb2_queue *q, struct vm_area_struct *vma);
> +int vb2_mmap(struct vb2_queue *q, struct mm_area *vma);
>
>  #ifndef CONFIG_MMU
>  /**
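
[ The usual shape of a driver mmap fop around vb2_mmap(), with
  hypothetical names; most drivers instead set .mmap = vb2_fop_mmap
  (touched further down) and let it find the queue via the
  video_device:

	static int foo_v4l2_mmap(struct file *file, struct mm_area *vma)
	{
		struct foo_dev *dev = video_drvdata(file);

		return vb2_mmap(&dev->queue, vma);
	}
]
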
> diff --git a/include/media/videobuf2-v4l2.h b/include/media/videobuf2-v4l2.h
> index 77ce8238ab30..cd941372aab9 100644
> --- a/include/media/videobuf2-v4l2.h
> +++ b/include/media/videobuf2-v4l2.h
> @@ -339,7 +339,7 @@ int vb2_ioctl_remove_bufs(struct file *file, void *priv,
>
>  /* struct v4l2_file_operations helpers */
>
> -int vb2_fop_mmap(struct file *file, struct vm_area_struct *vma);
> +int vb2_fop_mmap(struct file *file, struct mm_area *vma);
>  int vb2_fop_release(struct file *file);
>  int _vb2_fop_release(struct file *file, struct mutex *lock);
>  ssize_t vb2_fop_write(struct file *file, const char __user *buf,
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 8daf1b3b12c6..d75880bd2052 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1849,7 +1849,7 @@ int sock_no_sendmsg(struct socket *, struct msghdr *, size_t);
>  int sock_no_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t len);
>  int sock_no_recvmsg(struct socket *, struct msghdr *, size_t, int);
>  int sock_no_mmap(struct file *file, struct socket *sock,
> -		 struct vm_area_struct *vma);
> +		 struct mm_area *vma);
>
>  /*
>   * Functions to fill in entries in struct proto_ops when a protocol
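
[ The comment truncated above describes this convention: protocols
  that cannot support an operation plug in a sock_no_*() stub rather
  than leaving the pointer NULL.  Illustrative fragment only:

	static const struct proto_ops foo_proto_ops = {
		.owner	= THIS_MODULE,
		.mmap	= sock_no_mmap,	/* mmap() here fails with -ENODEV */
	};
]
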
> diff --git a/include/net/tcp.h b/include/net/tcp.h
> index df04dc09c519..556704058c39 100644
> --- a/include/net/tcp.h
> +++ b/include/net/tcp.h
> @@ -469,7 +469,7 @@ void tcp_recv_timestamp(struct msghdr *msg, const struct sock *sk,
>  void tcp_data_ready(struct sock *sk);
>  #ifdef CONFIG_MMU
>  int tcp_mmap(struct file *file, struct socket *sock,
> -	     struct vm_area_struct *vma);
> +	     struct mm_area *vma);
>  #endif
>  void tcp_parse_options(const struct net *net, const struct sk_buff *skb,
>  		       struct tcp_options_received *opt_rx,
> diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
> index d42eae69d9a8..8055f6f88816 100644
> --- a/include/rdma/ib_verbs.h
> +++ b/include/rdma/ib_verbs.h
> @@ -2449,7 +2449,7 @@ struct ib_device_ops {
>  	int (*alloc_ucontext)(struct ib_ucontext *context,
>  			      struct ib_udata *udata);
>  	void (*dealloc_ucontext)(struct ib_ucontext *context);
> -	int (*mmap)(struct ib_ucontext *context, struct vm_area_struct *vma);
> +	int (*mmap)(struct ib_ucontext *context, struct mm_area *vma);
>  	/**
>  	 * This will be called once refcount of an entry in mmap_xa reaches
>  	 * zero. The type of the memory that was mapped may differ between
> @@ -2976,7 +2976,7 @@ void  ib_set_client_data(struct ib_device *device, struct ib_client *client,
>  void ib_set_device_ops(struct ib_device *device,
>  		       const struct ib_device_ops *ops);
>
> -int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct vm_area_struct *vma,
> +int rdma_user_mmap_io(struct ib_ucontext *ucontext, struct mm_area *vma,
>  		      unsigned long pfn, unsigned long size, pgprot_t prot,
>  		      struct rdma_user_mmap_entry *entry);
>  int rdma_user_mmap_entry_insert(struct ib_ucontext *ucontext,
> @@ -3009,7 +3009,7 @@ rdma_user_mmap_entry_get_pgoff(struct ib_ucontext *ucontext,
>  			       unsigned long pgoff);
>  struct rdma_user_mmap_entry *
>  rdma_user_mmap_entry_get(struct ib_ucontext *ucontext,
> -			 struct vm_area_struct *vma);
> +			 struct mm_area *vma);
>  void rdma_user_mmap_entry_put(struct rdma_user_mmap_entry *entry);
>
>  void rdma_user_mmap_entry_remove(struct rdma_user_mmap_entry *entry);
> diff --git a/include/rdma/rdma_vt.h b/include/rdma/rdma_vt.h
> index c429d6ddb129..7baff31ec232 100644
> --- a/include/rdma/rdma_vt.h
> +++ b/include/rdma/rdma_vt.h
> @@ -167,7 +167,7 @@ struct rvt_ah {
>
>  /*
>   * This structure is used by rvt_mmap() to validate an offset
> - * when an mmap() request is made.  The vm_area_struct then uses
> + * when an mmap() request is made.  The mm_area then uses
>   * this as its vm_private_data.
>   */
>  struct rvt_mmap_info {
> diff --git a/include/sound/compress_driver.h b/include/sound/compress_driver.h
> index b55c9eeb2b54..cbfb46ad05de 100644
> --- a/include/sound/compress_driver.h
> +++ b/include/sound/compress_driver.h
> @@ -165,7 +165,7 @@ struct snd_compr_ops {
>  	int (*copy)(struct snd_compr_stream *stream, char __user *buf,
>  		       size_t count);
>  	int (*mmap)(struct snd_compr_stream *stream,
> -			struct vm_area_struct *vma);
> +			struct mm_area *vma);
>  	int (*ack)(struct snd_compr_stream *stream, size_t bytes);
>  	int (*get_caps) (struct snd_compr_stream *stream,
>  			struct snd_compr_caps *caps);
> diff --git a/include/sound/hwdep.h b/include/sound/hwdep.h
> index b0da633184cd..1ba044d50614 100644
> --- a/include/sound/hwdep.h
> +++ b/include/sound/hwdep.h
> @@ -29,7 +29,7 @@ struct snd_hwdep_ops {
>  	int (*ioctl_compat)(struct snd_hwdep *hw, struct file *file,
>  			    unsigned int cmd, unsigned long arg);
>  	int (*mmap)(struct snd_hwdep *hw, struct file *file,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>  	int (*dsp_status)(struct snd_hwdep *hw,
>  			  struct snd_hwdep_dsp_status *status);
>  	int (*dsp_load)(struct snd_hwdep *hw,
> diff --git a/include/sound/info.h b/include/sound/info.h
> index adbc506860d6..369b6ba88869 100644
> --- a/include/sound/info.h
> +++ b/include/sound/info.h
> @@ -54,7 +54,7 @@ struct snd_info_entry_ops {
>  		     struct file *file, unsigned int cmd, unsigned long arg);
>  	int (*mmap)(struct snd_info_entry *entry, void *file_private_data,
>  		    struct inode *inode, struct file *file,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>  };
>
>  struct snd_info_entry {
> diff --git a/include/sound/memalloc.h b/include/sound/memalloc.h
> index 9dd475cf4e8c..38a2885a39e3 100644
> --- a/include/sound/memalloc.h
> +++ b/include/sound/memalloc.h
> @@ -13,7 +13,7 @@
>  #include <asm/page.h>
>
>  struct device;
> -struct vm_area_struct;
> +struct mm_area;
>  struct sg_table;
>
>  /*
> @@ -83,7 +83,7 @@ int snd_dma_alloc_pages_fallback(int type, struct device *dev, size_t size,
>                                   struct snd_dma_buffer *dmab);
>  void snd_dma_free_pages(struct snd_dma_buffer *dmab);
>  int snd_dma_buffer_mmap(struct snd_dma_buffer *dmab,
> -			struct vm_area_struct *area);
> +			struct mm_area *area);
>
>  enum snd_dma_sync_mode { SNDRV_DMA_SYNC_CPU, SNDRV_DMA_SYNC_DEVICE };
>  #ifdef CONFIG_HAS_DMA
> diff --git a/include/sound/pcm.h b/include/sound/pcm.h
> index 8becb4504887..10129d8837e3 100644
> --- a/include/sound/pcm.h
> +++ b/include/sound/pcm.h
> @@ -74,7 +74,7 @@ struct snd_pcm_ops {
>  		    unsigned long pos, struct iov_iter *iter, unsigned long bytes);
>  	struct page *(*page)(struct snd_pcm_substream *substream,
>  			     unsigned long offset);
> -	int (*mmap)(struct snd_pcm_substream *substream, struct vm_area_struct *vma);
> +	int (*mmap)(struct snd_pcm_substream *substream, struct mm_area *vma);
>  	int (*ack)(struct snd_pcm_substream *substream);
>  };
>
> @@ -605,7 +605,7 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream);
>  int snd_pcm_attach_substream(struct snd_pcm *pcm, int stream, struct file *file,
>  			     struct snd_pcm_substream **rsubstream);
>  void snd_pcm_detach_substream(struct snd_pcm_substream *substream);
> -int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file, struct vm_area_struct *area);
> +int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file, struct mm_area *area);
>
>
>  #ifdef CONFIG_SND_DEBUG
> @@ -1394,11 +1394,11 @@ snd_pcm_sgbuf_get_chunk_size(struct snd_pcm_substream *substream,
>  }
>
>  int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
> -			     struct vm_area_struct *area);
> +			     struct mm_area *area);
>  /* mmap for io-memory area */
>  #if defined(CONFIG_X86) || defined(CONFIG_PPC) || defined(CONFIG_ALPHA)
>  #define SNDRV_PCM_INFO_MMAP_IOMEM	SNDRV_PCM_INFO_MMAP
> -int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct vm_area_struct *area);
> +int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream, struct mm_area *area);
>  #else
>  #define SNDRV_PCM_INFO_MMAP_IOMEM	0
>  #define snd_pcm_lib_mmap_iomem	NULL
> diff --git a/include/sound/soc-component.h b/include/sound/soc-component.h
> index 61534ac0edd1..4c37806639b1 100644
> --- a/include/sound/soc-component.h
> +++ b/include/sound/soc-component.h
> @@ -53,7 +53,7 @@ struct snd_compress_ops {
>  		    size_t count);
>  	int (*mmap)(struct snd_soc_component *component,
>  		    struct snd_compr_stream *stream,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>  	int (*ack)(struct snd_soc_component *component,
>  		   struct snd_compr_stream *stream, size_t bytes);
>  	int (*get_caps)(struct snd_soc_component *component,
> @@ -146,7 +146,7 @@ struct snd_soc_component_driver {
>  			     unsigned long offset);
>  	int (*mmap)(struct snd_soc_component *component,
>  		    struct snd_pcm_substream *substream,
> -		    struct vm_area_struct *vma);
> +		    struct mm_area *vma);
>  	int (*ack)(struct snd_soc_component *component,
>  		   struct snd_pcm_substream *substream);
>  	snd_pcm_sframes_t (*delay)(struct snd_soc_component *component,
> @@ -517,7 +517,7 @@ int snd_soc_pcm_component_copy(struct snd_pcm_substream *substream,
>  struct page *snd_soc_pcm_component_page(struct snd_pcm_substream *substream,
>  					unsigned long offset);
>  int snd_soc_pcm_component_mmap(struct snd_pcm_substream *substream,
> -			       struct vm_area_struct *vma);
> +			       struct mm_area *vma);
>  int snd_soc_pcm_component_new(struct snd_soc_pcm_runtime *rtd);
>  void snd_soc_pcm_component_free(struct snd_soc_pcm_runtime *rtd);
>  int snd_soc_pcm_component_prepare(struct snd_pcm_substream *substream);
> diff --git a/include/trace/events/mmap.h b/include/trace/events/mmap.h
> index f8d61485de16..516a46ff75a5 100644
> --- a/include/trace/events/mmap.h
> +++ b/include/trace/events/mmap.h
> @@ -69,13 +69,13 @@ TRACE_EVENT(vma_mas_szero,
>  );
>
>  TRACE_EVENT(vma_store,
> -	TP_PROTO(struct maple_tree *mt, struct vm_area_struct *vma),
> +	TP_PROTO(struct maple_tree *mt, struct mm_area *vma),
>
>  	TP_ARGS(mt, vma),
>
>  	TP_STRUCT__entry(
>  			__field(struct maple_tree *, mt)
> -			__field(struct vm_area_struct *, vma)
> +			__field(struct mm_area *, vma)
>  			__field(unsigned long, vm_start)
>  			__field(unsigned long, vm_end)
>  	),
> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> index 8994e97d86c1..79ee1636a6ec 100644
> --- a/include/trace/events/sched.h
> +++ b/include/trace/events/sched.h
> @@ -720,7 +720,7 @@ NUMAB_SKIP_REASON
>
>  TRACE_EVENT(sched_skip_vma_numa,
>
> -	TP_PROTO(struct mm_struct *mm, struct vm_area_struct *vma,
> +	TP_PROTO(struct mm_struct *mm, struct mm_area *vma,
>  		 enum numa_vmaskip_reason reason),
>
>  	TP_ARGS(mm, vma, reason),
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 28705ae67784..7894f9c2ae9b 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -5368,7 +5368,7 @@ union bpf_attr {
>   *
>   *		The expected callback signature is
>   *
> - *		long (\*callback_fn)(struct task_struct \*task, struct vm_area_struct \*vma, void \*callback_ctx);
> + *		long (\*callback_fn)(struct task_struct \*task, struct mm_area \*vma, void \*callback_ctx);
>   *
>   *	Return
>   *		0 on success.
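
[ A sketch of that callback as a BPF program would define it against
  a post-rename vmlinux.h; find_vma_cb and its ctx layout are
  invented for illustration:

	static long find_vma_cb(struct task_struct *task,
				struct mm_area *vma, void *data)
	{
		unsigned long *vm_start = data;

		*vm_start = vma->vm_start;
		return 0;
	}

	/* ... bpf_find_vma(task, addr, find_vma_cb, &start, 0); ... */
]
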
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 47f11bec5e90..9c4c2e081be3 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -44,11 +44,11 @@ int xen_setup_shutdown_event(void);
>  extern unsigned long *xen_contiguous_bitmap;
>
>  #if defined(CONFIG_XEN_PV)
> -int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
> +int xen_remap_pfn(struct mm_area *vma, unsigned long addr,
>  		  xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot,
>  		  unsigned int domid, bool no_translate);
>  #else
> -static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
> +static inline int xen_remap_pfn(struct mm_area *vma, unsigned long addr,
>  				xen_pfn_t *pfn, int nr, int *err_ptr,
>  				pgprot_t prot,  unsigned int domid,
>  				bool no_translate)
> @@ -58,23 +58,23 @@ static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr,
>  }
>  #endif
>
> -struct vm_area_struct;
> +struct mm_area;
>
>  #ifdef CONFIG_XEN_AUTO_XLATE
> -int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
> +int xen_xlate_remap_gfn_array(struct mm_area *vma,
>  			      unsigned long addr,
>  			      xen_pfn_t *gfn, int nr,
>  			      int *err_ptr, pgprot_t prot,
>  			      unsigned int domid,
>  			      struct page **pages);
> -int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
> +int xen_xlate_unmap_gfn_range(struct mm_area *vma,
>  			      int nr, struct page **pages);
>  #else
>  /*
>   * These two functions are called from arch/x86/xen/mmu.c and so stubs
>   * are needed for a configuration not specifying CONFIG_XEN_AUTO_XLATE.
>   */
> -static inline int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
> +static inline int xen_xlate_remap_gfn_array(struct mm_area *vma,
>  					    unsigned long addr,
>  					    xen_pfn_t *gfn, int nr,
>  					    int *err_ptr, pgprot_t prot,
> @@ -84,14 +84,14 @@ static inline int xen_xlate_remap_gfn_array(struct vm_area_struct *vma,
>  	return -EOPNOTSUPP;
>  }
>
> -static inline int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma,
> +static inline int xen_xlate_unmap_gfn_range(struct mm_area *vma,
>  					    int nr, struct page **pages)
>  {
>  	return -EOPNOTSUPP;
>  }
>  #endif
>
> -int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr,
> +int xen_remap_vma_range(struct mm_area *vma, unsigned long addr,
>  			unsigned long len);
>
>  /*
> @@ -111,7 +111,7 @@ int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr,
>   * Returns the number of successfully mapped frames, or a -ve error
>   * code.
>   */
> -static inline int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
> +static inline int xen_remap_domain_gfn_array(struct mm_area *vma,
>  					     unsigned long addr,
>  					     xen_pfn_t *gfn, int nr,
>  					     int *err_ptr, pgprot_t prot,
> @@ -147,7 +147,7 @@ static inline int xen_remap_domain_gfn_array(struct vm_area_struct *vma,
>   * Returns the number of successfully mapped frames, or a -ve error
>   * code.
>   */
> -static inline int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
> +static inline int xen_remap_domain_mfn_array(struct mm_area *vma,
>  					     unsigned long addr, xen_pfn_t *mfn,
>  					     int nr, int *err_ptr,
>  					     pgprot_t prot, unsigned int domid)
> @@ -171,7 +171,7 @@ static inline int xen_remap_domain_mfn_array(struct vm_area_struct *vma,
>   * Returns the number of successfully mapped frames, or a -ve error
>   * code.
>   */
> -static inline int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
> +static inline int xen_remap_domain_gfn_range(struct mm_area *vma,
>  					     unsigned long addr,
>  					     xen_pfn_t gfn, int nr,
>  					     pgprot_t prot, unsigned int domid,
> @@ -183,7 +183,7 @@ static inline int xen_remap_domain_gfn_range(struct vm_area_struct *vma,
>  	return xen_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false);
>  }
>
> -int xen_unmap_domain_gfn_range(struct vm_area_struct *vma,
> +int xen_unmap_domain_gfn_range(struct mm_area *vma,
>  			       int numpgs, struct page **pages);
>
>  int xen_xlate_map_ballooned_pages(xen_pfn_t **pfns, void **vaddr,
> diff --git a/io_uring/memmap.c b/io_uring/memmap.c
> index 76fcc79656b0..d606163f0524 100644
> --- a/io_uring/memmap.c
> +++ b/io_uring/memmap.c
> @@ -306,7 +306,7 @@ static void *io_uring_validate_mmap_request(struct file *file, loff_t pgoff,
>
>  static int io_region_mmap(struct io_ring_ctx *ctx,
>  			  struct io_mapped_region *mr,
> -			  struct vm_area_struct *vma,
> +			  struct mm_area *vma,
>  			  unsigned max_pages)
>  {
>  	unsigned long nr_pages = min(mr->nr_pages, max_pages);
> @@ -315,7 +315,7 @@ static int io_region_mmap(struct io_ring_ctx *ctx,
>  	return vm_insert_pages(vma, vma->vm_start, mr->pages, &nr_pages);
>  }
>
> -__cold int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
> +__cold int io_uring_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct io_ring_ctx *ctx = file->private_data;
>  	size_t sz = vma->vm_end - vma->vm_start;
> @@ -389,7 +389,7 @@ unsigned long io_uring_get_unmapped_area(struct file *filp, unsigned long addr,
>
>  #else /* !CONFIG_MMU */
>
> -int io_uring_mmap(struct file *file, struct vm_area_struct *vma)
> +int io_uring_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return is_nommu_shared_mapping(vma->vm_flags) ? 0 : -EINVAL;
>  }
> diff --git a/io_uring/memmap.h b/io_uring/memmap.h
> index dad0aa5b1b45..67e0335cfe87 100644
> --- a/io_uring/memmap.h
> +++ b/io_uring/memmap.h
> @@ -12,7 +12,7 @@ unsigned int io_uring_nommu_mmap_capabilities(struct file *file);
>  unsigned long io_uring_get_unmapped_area(struct file *file, unsigned long addr,
>  					 unsigned long len, unsigned long pgoff,
>  					 unsigned long flags);
> -int io_uring_mmap(struct file *file, struct vm_area_struct *vma);
> +int io_uring_mmap(struct file *file, struct mm_area *vma);
>
>  void io_free_region(struct io_ring_ctx *ctx, struct io_mapped_region *mr);
>  int io_create_region(struct io_ring_ctx *ctx, struct io_mapped_region *mr,
> diff --git a/ipc/shm.c b/ipc/shm.c
> index 99564c870084..b1f32d82e02b 100644
> --- a/ipc/shm.c
> +++ b/ipc/shm.c
> @@ -3,7 +3,7 @@
>   * linux/ipc/shm.c
>   * Copyright (C) 1992, 1993 Krishna Balasubramanian
>   *	 Many improvements/fixes by Bruno Haible.
> - * Replaced `struct shm_desc' by `struct vm_area_struct', July 1994.
> + * Replaced `struct shm_desc' by `struct mm_area', July 1994.
>   * Fixed the shm swap deallocation (shm_unuse()), August 1998 Andrea Arcangeli.
>   *
>   * /proc/sysvipc/shm support (c) 1999 Dragos Acostachioaie <dragos@iname.com>
> @@ -99,8 +99,8 @@ static const struct vm_operations_struct shm_vm_ops;
>  	ipc_unlock(&(shp)->shm_perm)
>
>  static int newseg(struct ipc_namespace *, struct ipc_params *);
> -static void shm_open(struct vm_area_struct *vma);
> -static void shm_close(struct vm_area_struct *vma);
> +static void shm_open(struct mm_area *vma);
> +static void shm_close(struct mm_area *vma);
>  static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp);
>  #ifdef CONFIG_PROC_FS
>  static int sysvipc_shm_proc_show(struct seq_file *s, void *it);
> @@ -299,7 +299,7 @@ static int __shm_open(struct shm_file_data *sfd)
>  }
>
>  /* This is called by fork, once for every shm attach. */
> -static void shm_open(struct vm_area_struct *vma)
> +static void shm_open(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct shm_file_data *sfd = shm_file_data(file);
> @@ -393,7 +393,7 @@ static void __shm_close(struct shm_file_data *sfd)
>  	up_write(&shm_ids(ns).rwsem);
>  }
>
> -static void shm_close(struct vm_area_struct *vma)
> +static void shm_close(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct shm_file_data *sfd = shm_file_data(file);
> @@ -540,7 +540,7 @@ static vm_fault_t shm_fault(struct vm_fault *vmf)
>  	return sfd->vm_ops->fault(vmf);
>  }
>
> -static int shm_may_split(struct vm_area_struct *vma, unsigned long addr)
> +static int shm_may_split(struct mm_area *vma, unsigned long addr)
>  {
>  	struct file *file = vma->vm_file;
>  	struct shm_file_data *sfd = shm_file_data(file);
> @@ -551,7 +551,7 @@ static int shm_may_split(struct vm_area_struct *vma, unsigned long addr)
>  	return 0;
>  }
>
> -static unsigned long shm_pagesize(struct vm_area_struct *vma)
> +static unsigned long shm_pagesize(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct shm_file_data *sfd = shm_file_data(file);
> @@ -563,7 +563,7 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma)
>  }
>
>  #ifdef CONFIG_NUMA
> -static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
> +static int shm_set_policy(struct mm_area *vma, struct mempolicy *mpol)
>  {
>  	struct shm_file_data *sfd = shm_file_data(vma->vm_file);
>  	int err = 0;
> @@ -573,7 +573,7 @@ static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
>  	return err;
>  }
>
> -static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
> +static struct mempolicy *shm_get_policy(struct mm_area *vma,
>  					unsigned long addr, pgoff_t *ilx)
>  {
>  	struct shm_file_data *sfd = shm_file_data(vma->vm_file);
> @@ -585,7 +585,7 @@ static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
>  }
>  #endif
>
> -static int shm_mmap(struct file *file, struct vm_area_struct *vma)
> +static int shm_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct shm_file_data *sfd = shm_file_data(file);
>  	int ret;
> @@ -1723,7 +1723,7 @@ COMPAT_SYSCALL_DEFINE3(shmat, int, shmid, compat_uptr_t, shmaddr, int, shmflg)
>  long ksys_shmdt(char __user *shmaddr)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr = (unsigned long)shmaddr;
>  	int retval = -EINVAL;
>  #ifdef CONFIG_MMU
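
[ shm_open()/shm_close() above are the canonical vm_operations pair:
  .open runs for each new reference to the mapping (fork, split),
  .close when one goes away.  Note that .open is not called for the
  initial mmap(), so the mmap handler takes that reference by hand.
  A sketch with invented names, definitions elided:

	static int foo_mmap(struct file *file, struct mm_area *vma)
	{
		vma->vm_ops = &foo_vm_ops;	/* .open/.close/.fault */
		vma->vm_private_data = file->private_data;
		foo_open(vma);		/* no .open call for the first map */
		return 0;
	}
]
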
> diff --git a/kernel/acct.c b/kernel/acct.c
> index 6520baa13669..8f1124fddaa9 100644
> --- a/kernel/acct.c
> +++ b/kernel/acct.c
> @@ -592,7 +592,7 @@ void acct_collect(long exitcode, int group_dead)
>  	if (group_dead && current->mm) {
>  		struct mm_struct *mm = current->mm;
>  		VMA_ITERATOR(vmi, mm, 0);
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		mmap_read_lock(mm);
>  		for_each_vma(vmi, vma)
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> index 0d56cea71602..bfefa32adb89 100644
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c
> @@ -220,12 +220,12 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
>  }
>
>  struct vma_list {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct list_head head;
>  	refcount_t mmap_count;
>  };
>
> -static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
> +static int remember_vma(struct bpf_arena *arena, struct mm_area *vma)
>  {
>  	struct vma_list *vml;
>
> @@ -239,14 +239,14 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -static void arena_vm_open(struct vm_area_struct *vma)
> +static void arena_vm_open(struct mm_area *vma)
>  {
>  	struct vma_list *vml = vma->vm_private_data;
>
>  	refcount_inc(&vml->mmap_count);
>  }
>
> -static void arena_vm_close(struct vm_area_struct *vma)
> +static void arena_vm_close(struct mm_area *vma)
>  {
>  	struct bpf_map *map = vma->vm_file->private_data;
>  	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> @@ -345,7 +345,7 @@ static unsigned long arena_get_unmapped_area(struct file *filp, unsigned long ad
>  	return round_up(ret, SZ_4G);
>  }
>
> -static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
> +static int arena_map_mmap(struct bpf_map *map, struct mm_area *vma)
>  {
>  	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
>
> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> index eb28c0f219ee..79dbdb433b55 100644
> --- a/kernel/bpf/arraymap.c
> +++ b/kernel/bpf/arraymap.c
> @@ -557,7 +557,7 @@ static int array_map_check_btf(const struct bpf_map *map,
>  	return 0;
>  }
>
> -static int array_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
> +static int array_map_mmap(struct bpf_map *map, struct mm_area *vma)
>  {
>  	struct bpf_array *array = container_of(map, struct bpf_array, map);
>  	pgoff_t pgoff = PAGE_ALIGN(sizeof(*array)) >> PAGE_SHIFT;
> diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> index 1499d8caa9a3..c59325124422 100644
> --- a/kernel/bpf/ringbuf.c
> +++ b/kernel/bpf/ringbuf.c
> @@ -258,7 +258,7 @@ static int ringbuf_map_get_next_key(struct bpf_map *map, void *key,
>  	return -ENOTSUPP;
>  }
>
> -static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma)
> +static int ringbuf_map_mmap_kern(struct bpf_map *map, struct mm_area *vma)
>  {
>  	struct bpf_ringbuf_map *rb_map;
>
> @@ -274,7 +274,7 @@ static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma
>  				   vma->vm_pgoff + RINGBUF_PGOFF);
>  }
>
> -static int ringbuf_map_mmap_user(struct bpf_map *map, struct vm_area_struct *vma)
> +static int ringbuf_map_mmap_user(struct bpf_map *map, struct mm_area *vma)
>  {
>  	struct bpf_ringbuf_map *rb_map;
>
> diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> index 3615c06b7dfa..9870b4a64f23 100644
> --- a/kernel/bpf/stackmap.c
> +++ b/kernel/bpf/stackmap.c
> @@ -124,7 +124,7 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
>  	return ERR_PTR(err);
>  }
>
> -static int fetch_build_id(struct vm_area_struct *vma, unsigned char *build_id, bool may_fault)
> +static int fetch_build_id(struct mm_area *vma, unsigned char *build_id, bool may_fault)
>  {
>  	return may_fault ? build_id_parse(vma, build_id, NULL)
>  			 : build_id_parse_nofault(vma, build_id, NULL);
> @@ -146,7 +146,7 @@ static void stack_map_get_build_id_offset(struct bpf_stack_build_id *id_offs,
>  	int i;
>  	struct mmap_unlock_irq_work *work = NULL;
>  	bool irq_work_busy = bpf_mmap_unlock_get_irq_work(&work);
> -	struct vm_area_struct *vma, *prev_vma = NULL;
> +	struct mm_area *vma, *prev_vma = NULL;
>  	const char *prev_build_id;
>
>  	/* If the irq_work is in use, fall back to report ips. Same
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 9794446bc8c6..e4bd08eba388 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1030,7 +1030,7 @@ static ssize_t bpf_dummy_write(struct file *filp, const char __user *buf,
>  }
>
>  /* called for any extra memory-mapped regions (except initial) */
> -static void bpf_map_mmap_open(struct vm_area_struct *vma)
> +static void bpf_map_mmap_open(struct mm_area *vma)
>  {
>  	struct bpf_map *map = vma->vm_file->private_data;
>
> @@ -1039,7 +1039,7 @@ static void bpf_map_mmap_open(struct vm_area_struct *vma)
>  }
>
>  /* called for all unmapped memory region (including initial) */
> -static void bpf_map_mmap_close(struct vm_area_struct *vma)
> +static void bpf_map_mmap_close(struct mm_area *vma)
>  {
>  	struct bpf_map *map = vma->vm_file->private_data;
>
> @@ -1052,7 +1052,7 @@ static const struct vm_operations_struct bpf_map_default_vmops = {
>  	.close		= bpf_map_mmap_close,
>  };
>
> -static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int bpf_map_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct bpf_map *map = filp->private_data;
>  	int err = 0;
> diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
> index 98d9b4c0daff..3f58b35ce94e 100644
> --- a/kernel/bpf/task_iter.c
> +++ b/kernel/bpf/task_iter.c
> @@ -410,7 +410,7 @@ struct bpf_iter_seq_task_vma_info {
>  	struct bpf_iter_seq_task_common common;
>  	struct task_struct *task;
>  	struct mm_struct *mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	u32 tid;
>  	unsigned long prev_vm_start;
>  	unsigned long prev_vm_end;
> @@ -422,11 +422,11 @@ enum bpf_task_vma_iter_find_op {
>  	task_vma_iter_find_vma,    /* use find_vma() to find next vma */
>  };
>
> -static struct vm_area_struct *
> +static struct mm_area *
>  task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
>  {
>  	enum bpf_task_vma_iter_find_op op;
> -	struct vm_area_struct *curr_vma;
> +	struct mm_area *curr_vma;
>  	struct task_struct *curr_task;
>  	struct mm_struct *curr_mm;
>  	u32 saved_tid = info->tid;
> @@ -577,7 +577,7 @@ task_vma_seq_get_next(struct bpf_iter_seq_task_vma_info *info)
>  static void *task_vma_seq_start(struct seq_file *seq, loff_t *pos)
>  {
>  	struct bpf_iter_seq_task_vma_info *info = seq->private;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = task_vma_seq_get_next(info);
>  	if (vma && *pos == 0)
> @@ -597,11 +597,11 @@ static void *task_vma_seq_next(struct seq_file *seq, void *v, loff_t *pos)
>  struct bpf_iter__task_vma {
>  	__bpf_md_ptr(struct bpf_iter_meta *, meta);
>  	__bpf_md_ptr(struct task_struct *, task);
> -	__bpf_md_ptr(struct vm_area_struct *, vma);
> +	__bpf_md_ptr(struct mm_area *, vma);
>  };
>
>  DEFINE_BPF_ITER_FUNC(task_vma, struct bpf_iter_meta *meta,
> -		     struct task_struct *task, struct vm_area_struct *vma)
> +		     struct task_struct *task, struct mm_area *vma)
>
>  static int __task_vma_seq_show(struct seq_file *seq, bool in_stop)
>  {
> @@ -752,7 +752,7 @@ BPF_CALL_5(bpf_find_vma, struct task_struct *, task, u64, start,
>  	   bpf_callback_t, callback_fn, void *, callback_ctx, u64, flags)
>  {
>  	struct mmap_unlock_irq_work *work = NULL;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	bool irq_work_busy = false;
>  	struct mm_struct *mm;
>  	int ret = -ENOENT;
> @@ -859,7 +859,7 @@ __bpf_kfunc int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
>  	return err;
>  }
>
> -__bpf_kfunc struct vm_area_struct *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it)
> +__bpf_kfunc struct mm_area *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it)
>  {
>  	struct bpf_iter_task_vma_kern *kit = (void *)it;
>
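
[ These kfuncs form the open-coded VMA iterator; a typical fragment
  from a BPF program type that is allowed to call them, with `task'
  assumed in scope:

	struct bpf_iter_task_vma vma_it;
	struct mm_area *vma;

	bpf_iter_task_vma_new(&vma_it, task, 0);
	while ((vma = bpf_iter_task_vma_next(&vma_it))) {
		/* inspect vma->vm_start, vma->vm_flags, ... */
	}
	bpf_iter_task_vma_destroy(&vma_it);
]
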
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 54c6953a8b84..efbe5060d0e9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -10720,7 +10720,7 @@ static int set_find_vma_callback_state(struct bpf_verifier_env *env,
>  	/* bpf_find_vma(struct task_struct *task, u64 addr,
>  	 *               void *callback_fn, void *callback_ctx, u64 flags)
>  	 * (callback_fn)(struct task_struct *task,
> -	 *               struct vm_area_struct *vma, void *callback_ctx);
> +	 *               struct mm_area *vma, void *callback_ctx);
>  	 */
>  	callee->regs[BPF_REG_1] = caller->regs[BPF_REG_1];
>
> diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
> index 3b2bdca9f1d4..b92e5ddae43f 100644
> --- a/kernel/dma/coherent.c
> +++ b/kernel/dma/coherent.c
> @@ -232,7 +232,7 @@ int dma_release_from_dev_coherent(struct device *dev, int order, void *vaddr)
>  }
>
>  static int __dma_mmap_from_coherent(struct dma_coherent_mem *mem,
> -		struct vm_area_struct *vma, void *vaddr, size_t size, int *ret)
> +		struct mm_area *vma, void *vaddr, size_t size, int *ret)
>  {
>  	if (mem && vaddr >= mem->virt_base && vaddr + size <=
>  		   (mem->virt_base + ((dma_addr_t)mem->size << PAGE_SHIFT))) {
> @@ -268,7 +268,7 @@ static int __dma_mmap_from_coherent(struct dma_coherent_mem *mem,
>   * should return @ret, or 0 if they should proceed with mapping memory from
>   * generic areas.
>   */
> -int dma_mmap_from_dev_coherent(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_from_dev_coherent(struct device *dev, struct mm_area *vma,
>  			   void *vaddr, size_t size, int *ret)
>  {
>  	struct dma_coherent_mem *mem = dev_get_coherent_memory(dev);
> @@ -298,7 +298,7 @@ int dma_release_from_global_coherent(int order, void *vaddr)
>  			vaddr);
>  }
>
> -int dma_mmap_from_global_coherent(struct vm_area_struct *vma, void *vaddr,
> +int dma_mmap_from_global_coherent(struct mm_area *vma, void *vaddr,
>  				   size_t size, int *ret)
>  {
>  	if (!dma_coherent_default_memory)
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index b8fe0b3d0ffb..0dba425ab6bf 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -537,7 +537,7 @@ bool dma_direct_can_mmap(struct device *dev)
>  		IS_ENABLED(CONFIG_DMA_NONCOHERENT_MMAP);
>  }
>
> -int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
> +int dma_direct_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs)
>  {
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index d2c0b7e632fc..4ce4be1cad72 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -14,7 +14,7 @@ int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  bool dma_direct_can_mmap(struct device *dev);
> -int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
> +int dma_direct_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs);
>  bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr);
> diff --git a/kernel/dma/dummy.c b/kernel/dma/dummy.c
> index 92de80e5b057..eb7c1752b54e 100644
> --- a/kernel/dma/dummy.c
> +++ b/kernel/dma/dummy.c
> @@ -4,7 +4,7 @@
>   */
>  #include <linux/dma-map-ops.h>
>
> -static int dma_dummy_mmap(struct device *dev, struct vm_area_struct *vma,
> +static int dma_dummy_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs)
>  {
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index cda127027e48..37cfbcb1544c 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -536,7 +536,7 @@ EXPORT_SYMBOL_GPL(dma_can_mmap);
>  /**
>   * dma_mmap_attrs - map a coherent DMA allocation into user space
>   * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
> - * @vma: vm_area_struct describing requested user mapping
> + * @vma: mm_area describing requested user mapping
>   * @cpu_addr: kernel CPU-view address returned from dma_alloc_attrs
>   * @dma_addr: device-view address returned from dma_alloc_attrs
>   * @size: size of memory originally requested in dma_alloc_attrs
> @@ -546,7 +546,7 @@ EXPORT_SYMBOL_GPL(dma_can_mmap);
>   * space.  The coherent DMA buffer must not be freed by the driver until the
>   * user space mapping has been released.
>   */
> -int dma_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_attrs(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs)
>  {
> @@ -725,7 +725,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(dma_free_pages);
>
> -int dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_pages(struct device *dev, struct mm_area *vma,
>  		size_t size, struct page *page)
>  {
>  	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> @@ -828,7 +828,7 @@ void dma_vunmap_noncontiguous(struct device *dev, void *vaddr)
>  }
>  EXPORT_SYMBOL_GPL(dma_vunmap_noncontiguous);
>
> -int dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
> +int dma_mmap_noncontiguous(struct device *dev, struct mm_area *vma,
>  		size_t size, struct sg_table *sgt)
>  {
>  	if (use_dma_iommu(dev))
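
[ dma_mmap_attrs() pairs with dma_alloc_attrs(); the usual driver
  mmap handler looks roughly like this, names invented:

	static int foo_mmap(struct file *file, struct mm_area *vma)
	{
		struct foo_dev *fd = file->private_data;

		/* The coherent buffer must outlive the user mapping. */
		return dma_mmap_attrs(fd->dev, vma, fd->cpu_addr,
				      fd->dma_handle, fd->size, 0);
	}
]
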
> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> index 9afd569eadb9..9f7c560c3349 100644
> --- a/kernel/dma/ops_helpers.c
> +++ b/kernel/dma/ops_helpers.c
> @@ -32,7 +32,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
>  /*
>   * Create userspace mapping for the DMA-coherent memory.
>   */
> -int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> +int dma_common_mmap(struct device *dev, struct mm_area *vma,
>  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
>  		unsigned long attrs)
>  {
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 128db74e9eab..bf6c0c90f88c 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -6638,7 +6638,7 @@ void ring_buffer_put(struct perf_buffer *rb)
>  	call_rcu(&rb->rcu_head, rb_free_rcu);
>  }
>
> -static void perf_mmap_open(struct vm_area_struct *vma)
> +static void perf_mmap_open(struct mm_area *vma)
>  {
>  	struct perf_event *event = vma->vm_file->private_data;
>
> @@ -6662,7 +6662,7 @@ static void perf_pmu_output_stop(struct perf_event *event);
>   * the buffer here, where we still have a VM context. This means we need
>   * to detach all events redirecting to us.
>   */
> -static void perf_mmap_close(struct vm_area_struct *vma)
> +static void perf_mmap_close(struct mm_area *vma)
>  {
>  	struct perf_event *event = vma->vm_file->private_data;
>  	struct perf_buffer *rb = ring_buffer_get(event);
> @@ -6784,7 +6784,7 @@ static const struct vm_operations_struct perf_mmap_vmops = {
>  	.pfn_mkwrite	= perf_mmap_pfn_mkwrite,
>  };
>
> -static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
> +static int map_range(struct perf_buffer *rb, struct mm_area *vma)
>  {
>  	unsigned long nr_pages = vma_pages(vma);
>  	int err = 0;
> @@ -6853,7 +6853,7 @@ static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
>  	return err;
>  }
>
> -static int perf_mmap(struct file *file, struct vm_area_struct *vma)
> +static int perf_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct perf_event *event = file->private_data;
>  	unsigned long user_locked, user_lock_limit;
> @@ -9155,7 +9155,7 @@ static void perf_event_cgroup(struct cgroup *cgrp)
>   */
>
>  struct perf_mmap_event {
> -	struct vm_area_struct	*vma;
> +	struct mm_area	*vma;
>
>  	const char		*file_name;
>  	int			file_size;
> @@ -9181,7 +9181,7 @@ static int perf_event_mmap_match(struct perf_event *event,
>  				 void *data)
>  {
>  	struct perf_mmap_event *mmap_event = data;
> -	struct vm_area_struct *vma = mmap_event->vma;
> +	struct mm_area *vma = mmap_event->vma;
>  	int executable = vma->vm_flags & VM_EXEC;
>
>  	return (!executable && event->attr.mmap_data) ||
> @@ -9257,7 +9257,7 @@ static void perf_event_mmap_output(struct perf_event *event,
>
>  static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
>  {
> -	struct vm_area_struct *vma = mmap_event->vma;
> +	struct mm_area *vma = mmap_event->vma;
>  	struct file *file = vma->vm_file;
>  	int maj = 0, min = 0;
>  	u64 ino = 0, gen = 0;
> @@ -9387,7 +9387,7 @@ static bool perf_addr_filter_match(struct perf_addr_filter *filter,
>  }
>
>  static bool perf_addr_filter_vma_adjust(struct perf_addr_filter *filter,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					struct perf_addr_filter_range *fr)
>  {
>  	unsigned long vma_size = vma->vm_end - vma->vm_start;
> @@ -9411,7 +9411,7 @@ static bool perf_addr_filter_vma_adjust(struct perf_addr_filter *filter,
>  static void __perf_addr_filters_adjust(struct perf_event *event, void *data)
>  {
>  	struct perf_addr_filters_head *ifh = perf_event_addr_filters(event);
> -	struct vm_area_struct *vma = data;
> +	struct mm_area *vma = data;
>  	struct perf_addr_filter *filter;
>  	unsigned int restart = 0, count = 0;
>  	unsigned long flags;
> @@ -9442,7 +9442,7 @@ static void __perf_addr_filters_adjust(struct perf_event *event, void *data)
>  /*
>   * Adjust all task's events' filters to the new vma
>   */
> -static void perf_addr_filters_adjust(struct vm_area_struct *vma)
> +static void perf_addr_filters_adjust(struct mm_area *vma)
>  {
>  	struct perf_event_context *ctx;
>
> @@ -9460,7 +9460,7 @@ static void perf_addr_filters_adjust(struct vm_area_struct *vma)
>  	rcu_read_unlock();
>  }
>
> -void perf_event_mmap(struct vm_area_struct *vma)
> +void perf_event_mmap(struct mm_area *vma)
>  {
>  	struct perf_mmap_event mmap_event;
>
> @@ -11255,7 +11255,7 @@ static void perf_addr_filter_apply(struct perf_addr_filter *filter,
>  				   struct mm_struct *mm,
>  				   struct perf_addr_filter_range *fr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	for_each_vma(vmi, vma) {
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 615b4e6d22c7..0fb6581e88fd 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -131,7 +131,7 @@ static void uprobe_warn(struct task_struct *t, const char *msg)
>   *	- Return 1 if the specified virtual address is in an
>   *	  executable vma.
>   */
> -static bool valid_vma(struct vm_area_struct *vma, bool is_register)
> +static bool valid_vma(struct mm_area *vma, bool is_register)
>  {
>  	vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
>
> @@ -141,12 +141,12 @@ static bool valid_vma(struct vm_area_struct *vma, bool is_register)
>  	return vma->vm_file && (vma->vm_flags & flags) == VM_MAYEXEC;
>  }
>
> -static unsigned long offset_to_vaddr(struct vm_area_struct *vma, loff_t offset)
> +static unsigned long offset_to_vaddr(struct mm_area *vma, loff_t offset)
>  {
>  	return vma->vm_start + offset - ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
>  }
>
> -static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
> +static loff_t vaddr_to_offset(struct mm_area *vma, unsigned long vaddr)
>  {
>  	return ((loff_t)vma->vm_pgoff << PAGE_SHIFT) + (vaddr - vma->vm_start);
>  }
> @@ -164,7 +164,7 @@ static loff_t vaddr_to_offset(struct vm_area_struct *vma, unsigned long vaddr)
>   *
>   * Returns 0 on success, negative error code otherwise.
>   */
> -static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> +static int __replace_page(struct mm_area *vma, unsigned long addr,
>  				struct page *old_page, struct page *new_page)
>  {
>  	struct folio *old_folio = page_folio(old_page);
> @@ -360,7 +360,7 @@ static void delayed_uprobe_remove(struct uprobe *uprobe, struct mm_struct *mm)
>  }
>
>  static bool valid_ref_ctr_vma(struct uprobe *uprobe,
> -			      struct vm_area_struct *vma)
> +			      struct mm_area *vma)
>  {
>  	unsigned long vaddr = offset_to_vaddr(vma, uprobe->ref_ctr_offset);
>
> @@ -372,11 +372,11 @@ static bool valid_ref_ctr_vma(struct uprobe *uprobe,
>  		vma->vm_end > vaddr;
>  }
>
> -static struct vm_area_struct *
> +static struct mm_area *
>  find_ref_ctr_vma(struct uprobe *uprobe, struct mm_struct *mm)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *tmp;
> +	struct mm_area *tmp;
>
>  	for_each_vma(vmi, tmp)
>  		if (valid_ref_ctr_vma(uprobe, tmp))
> @@ -437,7 +437,7 @@ static void update_ref_ctr_warn(struct uprobe *uprobe,
>  static int update_ref_ctr(struct uprobe *uprobe, struct mm_struct *mm,
>  			  short d)
>  {
> -	struct vm_area_struct *rc_vma;
> +	struct mm_area *rc_vma;
>  	unsigned long rc_vaddr;
>  	int ret = 0;
>
> @@ -486,7 +486,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
>  {
>  	struct uprobe *uprobe;
>  	struct page *old_page, *new_page;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret, is_register, ref_ctr_updated = 0;
>  	bool orig_page_huge = false;
>  	unsigned int gup_flags = FOLL_FORCE;
> @@ -1136,7 +1136,7 @@ static bool filter_chain(struct uprobe *uprobe, struct mm_struct *mm)
>
>  static int
>  install_breakpoint(struct uprobe *uprobe, struct mm_struct *mm,
> -			struct vm_area_struct *vma, unsigned long vaddr)
> +			struct mm_area *vma, unsigned long vaddr)
>  {
>  	bool first_uprobe;
>  	int ret;
> @@ -1186,7 +1186,7 @@ static struct map_info *
>  build_map_info(struct address_space *mapping, loff_t offset, bool is_register)
>  {
>  	unsigned long pgoff = offset >> PAGE_SHIFT;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct map_info *curr = NULL;
>  	struct map_info *prev = NULL;
>  	struct map_info *info;
> @@ -1269,7 +1269,7 @@ register_for_each_vma(struct uprobe *uprobe, struct uprobe_consumer *new)
>
>  	while (info) {
>  		struct mm_struct *mm = info->mm;
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		if (err && is_register)
>  			goto free;
> @@ -1454,7 +1454,7 @@ int uprobe_apply(struct uprobe *uprobe, struct uprobe_consumer *uc, bool add)
>  static int unapply_uprobe(struct uprobe *uprobe, struct mm_struct *mm)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int err = 0;
>
>  	mmap_read_lock(mm);
> @@ -1508,7 +1508,7 @@ find_node_in_range(struct inode *inode, loff_t min, loff_t max)
>   * For a given range in vma, build a list of probes that need to be inserted.
>   */
>  static void build_probe_list(struct inode *inode,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				unsigned long start, unsigned long end,
>  				struct list_head *head)
>  {
> @@ -1544,7 +1544,7 @@ static void build_probe_list(struct inode *inode,
>  }
>
>  /* @vma contains reference counter, not the probed instruction. */
> -static int delayed_ref_ctr_inc(struct vm_area_struct *vma)
> +static int delayed_ref_ctr_inc(struct mm_area *vma)
>  {
>  	struct list_head *pos, *q;
>  	struct delayed_uprobe *du;
> @@ -1578,7 +1578,7 @@ static int delayed_ref_ctr_inc(struct vm_area_struct *vma)
>   * Currently we ignore all errors and always return 0, the callers
>   * can't handle the failure anyway.
>   */
> -int uprobe_mmap(struct vm_area_struct *vma)
> +int uprobe_mmap(struct mm_area *vma)
>  {
>  	struct list_head tmp_list;
>  	struct uprobe *uprobe, *u;
> @@ -1620,7 +1620,7 @@ int uprobe_mmap(struct vm_area_struct *vma)
>  }
>
>  static bool
> -vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +vma_has_uprobes(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	loff_t min, max;
>  	struct inode *inode;
> @@ -1641,7 +1641,7 @@ vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long e
>  /*
>   * Called in context of a munmap of a vma.
>   */
> -void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned long end)
> +void uprobe_munmap(struct mm_area *vma, unsigned long start, unsigned long end)
>  {
>  	if (no_uprobe_events() || !valid_vma(vma, false))
>  		return;
> @@ -1658,7 +1658,7 @@ void uprobe_munmap(struct vm_area_struct *vma, unsigned long start, unsigned lon
>  }
>
>  static vm_fault_t xol_fault(const struct vm_special_mapping *sm,
> -			    struct vm_area_struct *vma, struct vm_fault *vmf)
> +			    struct mm_area *vma, struct vm_fault *vmf)
>  {
>  	struct xol_area *area = vma->vm_mm->uprobes_state.xol_area;
>
> @@ -1667,7 +1667,7 @@ static vm_fault_t xol_fault(const struct vm_special_mapping *sm,
>  	return 0;
>  }
>
> -static int xol_mremap(const struct vm_special_mapping *sm, struct vm_area_struct *new_vma)
> +static int xol_mremap(const struct vm_special_mapping *sm, struct mm_area *new_vma)
>  {
>  	return -EPERM;
>  }
> @@ -1681,7 +1681,7 @@ static const struct vm_special_mapping xol_mapping = {
>  /* Slot allocation for XOL */
>  static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret;
>
>  	if (mmap_write_lock_killable(mm))
> @@ -2338,7 +2338,7 @@ bool uprobe_deny_signal(void)
>  static void mmf_recalc_uprobes(struct mm_struct *mm)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	for_each_vma(vmi, vma) {
>  		if (!valid_vma(vma, false))
> @@ -2387,7 +2387,7 @@ static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
>  {
>  	struct mm_struct *mm = current->mm;
>  	struct uprobe *uprobe = NULL;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct file *vm_file;
>  	loff_t offset;
>  	unsigned int seq;
> @@ -2429,7 +2429,7 @@ static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr, int *is_swb
>  {
>  	struct mm_struct *mm = current->mm;
>  	struct uprobe *uprobe = NULL;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	uprobe = find_active_uprobe_speculative(bp_vaddr);
>  	if (uprobe)
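
[ offset_to_vaddr()/vaddr_to_offset() above are exact inverses; a
  worked example, assuming PAGE_SHIFT == 12 and a text segment mapped
  from file page 3 at 0x7f0000003000, probe at file offset 0x3456:

	vaddr = vm_start + offset - (vm_pgoff << PAGE_SHIFT)
	      = 0x7f0000003000 + 0x3456 - (3 << 12)
	      = 0x7f0000003456
]
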
> diff --git a/kernel/fork.c b/kernel/fork.c
> index c4b26cd8998b..005774cb7b07 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -428,15 +428,15 @@ struct kmem_cache *files_cachep;
>  /* SLAB cache for fs_struct structures (tsk->fs) */
>  struct kmem_cache *fs_cachep;
>
> -/* SLAB cache for vm_area_struct structures */
> +/* SLAB cache for mm_area structures */
>  static struct kmem_cache *vm_area_cachep;
>
>  /* SLAB cache for mm_struct structures (tsk->mm) */
>  static struct kmem_cache *mm_cachep;
>
> -struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> +struct mm_area *vm_area_alloc(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
>  	if (!vma)
> @@ -447,8 +447,8 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
>  	return vma;
>  }
>
> -static void vm_area_init_from(const struct vm_area_struct *src,
> -			      struct vm_area_struct *dest)
> +static void vm_area_init_from(const struct mm_area *src,
> +			      struct mm_area *dest)
>  {
>  	dest->vm_mm = src->vm_mm;
>  	dest->vm_ops = src->vm_ops;
> @@ -483,9 +483,9 @@ static void vm_area_init_from(const struct vm_area_struct *src,
>  #endif
>  }
>
> -struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> +struct mm_area *vm_area_dup(struct mm_area *orig)
>  {
> -	struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
> +	struct mm_area *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
>
>  	if (!new)
>  		return NULL;
> @@ -505,7 +505,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
>  	return new;
>  }
>
> -void vm_area_free(struct vm_area_struct *vma)
> +void vm_area_free(struct mm_area *vma)
>  {
>  	/* The vma should be detached while being destroyed. */
>  	vma_assert_detached(vma);
> @@ -611,7 +611,7 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
>  static __latent_entropy int dup_mmap(struct mm_struct *mm,
>  					struct mm_struct *oldmm)
>  {
> -	struct vm_area_struct *mpnt, *tmp;
> +	struct mm_area *mpnt, *tmp;
>  	int retval;
>  	unsigned long charge = 0;
>  	LIST_HEAD(uf);
> @@ -1473,7 +1473,7 @@ int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
>   */
>  int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct file *old_exe_file;
>  	int ret = 0;
>
> @@ -3215,7 +3215,7 @@ void __init proc_caches_init(void)
>  {
>  	struct kmem_cache_args args = {
>  		.use_freeptr_offset = true,
> -		.freeptr_offset = offsetof(struct vm_area_struct, vm_freeptr),
> +		.freeptr_offset = offsetof(struct mm_area, vm_freeptr),
>  	};
>
>  	sighand_cachep = kmem_cache_create("sighand_cache",
> @@ -3234,8 +3234,8 @@ void __init proc_caches_init(void)
>  			sizeof(struct fs_struct), 0,
>  			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_ACCOUNT,
>  			NULL);
> -	vm_area_cachep = kmem_cache_create("vm_area_struct",
> -			sizeof(struct vm_area_struct), &args,
> +	vm_area_cachep = kmem_cache_create("mm_area",
> +			sizeof(struct mm_area), &args,
>  			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_TYPESAFE_BY_RCU|
>  			SLAB_ACCOUNT);
>  	mmap_init();
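
To make the fork.c hunks concrete: the helpers keep their vm_area_*
names, only the type they allocate and free changes. A minimal sketch of
a post-rename caller (example_alloc_vma is hypothetical; a real caller
would attach the vma rather than free it immediately):

	#include <linux/mm.h>

	static int example_alloc_vma(struct mm_struct *mm)
	{
		struct mm_area *vma = vm_area_alloc(mm);

		if (!vma)
			return -ENOMEM;

		/* ... set vm_start/vm_end/vm_flags here ... */

		/* Never attached, so freeing directly is fine. */
		vm_area_free(vma);
		return 0;
	}
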
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 187ba1b80bda..afd99afc9386 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -484,7 +484,7 @@ void kcov_task_exit(struct task_struct *t)
>  	kcov_put(kcov);
>  }
>
> -static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
> +static int kcov_mmap(struct file *filep, struct mm_area *vma)
>  {
>  	int res = 0;
>  	struct kcov *kcov = vma->vm_file->private_data;
> diff --git a/kernel/relay.c b/kernel/relay.c
> index 5ac7e711e4b6..ca1dea370f80 100644
> --- a/kernel/relay.c
> +++ b/kernel/relay.c
> @@ -74,13 +74,13 @@ static void relay_free_page_array(struct page **array)
>  /**
>   *	relay_mmap_buf: - mmap channel buffer to process address space
>   *	@buf: relay channel buffer
> - *	@vma: vm_area_struct describing memory to be mapped
> + *	@vma: mm_area describing memory to be mapped
>   *
>   *	Returns 0 if ok, negative on error
>   *
>   *	Caller should already have grabbed mmap_lock.
>   */
> -static int relay_mmap_buf(struct rchan_buf *buf, struct vm_area_struct *vma)
> +static int relay_mmap_buf(struct rchan_buf *buf, struct mm_area *vma)
>  {
>  	unsigned long length = vma->vm_end - vma->vm_start;
>
> @@ -825,7 +825,7 @@ static int relay_file_open(struct inode *inode, struct file *filp)
>   *
>   *	Calls upon relay_mmap_buf() to map the file into user space.
>   */
> -static int relay_file_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int relay_file_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct rchan_buf *buf = filp->private_data;
>  	return relay_mmap_buf(buf, vma);
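
The relay hunks show the usual shape of a driver ->mmap handler, which
this patch retypes in many places. A sketch of the length-validation
idiom with the renamed parameter (example_mmap and buf_size are
illustrative stand-ins, not kernel symbols):

	#include <linux/fs.h>
	#include <linux/mm.h>

	static int example_mmap(struct file *filp, struct mm_area *vma)
	{
		unsigned long length = vma->vm_end - vma->vm_start;
		unsigned long buf_size = PAGE_SIZE;	/* hypothetical */

		if (length != PAGE_ALIGN(buf_size))
			return -EINVAL;

		/* a real handler would set vm_ops and/or map pages here */
		return 0;
	}
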
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e43993a4e580..424c88801103 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3258,7 +3258,7 @@ static void reset_ptenuma_scan(struct task_struct *p)
>  	p->mm->numa_scan_offset = 0;
>  }
>
> -static bool vma_is_accessed(struct mm_struct *mm, struct vm_area_struct *vma)
> +static bool vma_is_accessed(struct mm_struct *mm, struct mm_area *vma)
>  {
>  	unsigned long pids;
>  	/*
> @@ -3307,7 +3307,7 @@ static void task_numa_work(struct callback_head *work)
>  	struct task_struct *p = current;
>  	struct mm_struct *mm = p->mm;
>  	u64 runtime = p->se.sum_exec_runtime;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long start, end;
>  	unsigned long nr_pte_updates = 0;
>  	long pages, virtpages;
> diff --git a/kernel/signal.c b/kernel/signal.c
> index 614d78fe3451..39a1112b49e9 100644
> --- a/kernel/signal.c
> +++ b/kernel/signal.c
> @@ -4892,7 +4892,7 @@ SYSCALL_DEFINE3(sigsuspend, int, unused1, int, unused2, old_sigset_t, mask)
>  }
>  #endif
>
> -__weak const char *arch_vma_name(struct vm_area_struct *vma)
> +__weak const char *arch_vma_name(struct mm_area *vma)
>  {
>  	return NULL;
>  }
> diff --git a/kernel/sys.c b/kernel/sys.c
> index c434968e9f5d..bfcdd00e92bf 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -2156,7 +2156,7 @@ static int prctl_set_mm(int opt, unsigned long addr,
>  		.auxv_size = 0,
>  		.exe_fd = -1,
>  	};
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int error;
>
>  	if (arg5 || (arg4 && (opt != PR_SET_MM_AUXV &&
> diff --git a/kernel/time/namespace.c b/kernel/time/namespace.c
> index e3642278df43..8b5a1d6c90ad 100644
> --- a/kernel/time/namespace.c
> +++ b/kernel/time/namespace.c
> @@ -192,7 +192,7 @@ static void timens_setup_vdso_clock_data(struct vdso_clock *vc,
>  	offset[CLOCK_BOOTTIME_ALARM]	= boottime;
>  }
>
> -struct page *find_timens_vvar_page(struct vm_area_struct *vma)
> +struct page *find_timens_vvar_page(struct mm_area *vma)
>  {
>  	if (likely(vma->vm_mm == current->mm))
>  		return current->nsproxy->time_ns->vvar_page;
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index d8d7b28e2c2f..2178bd0d5590 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -7028,7 +7028,7 @@ static int __rb_inc_dec_mapped(struct ring_buffer_per_cpu *cpu_buffer,
>   */
>  #ifdef CONFIG_MMU
>  static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	unsigned long nr_subbufs, nr_pages, nr_vma_pages, pgoff = vma->vm_pgoff;
>  	unsigned int subbuf_pages, subbuf_order;
> @@ -7125,14 +7125,14 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
>  }
>  #else
>  static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	return -EOPNOTSUPP;
>  }
>  #endif
>
>  int ring_buffer_map(struct trace_buffer *buffer, int cpu,
> -		    struct vm_area_struct *vma)
> +		    struct mm_area *vma)
>  {
>  	struct ring_buffer_per_cpu *cpu_buffer;
>  	unsigned long flags, *subbuf_ids;
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index bc957a2507e2..58694c4b18b6 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -8481,7 +8481,7 @@ static inline int get_snapshot_map(struct trace_array *tr) { return 0; }
>  static inline void put_snapshot_map(struct trace_array *tr) { }
>  #endif
>
> -static void tracing_buffers_mmap_close(struct vm_area_struct *vma)
> +static void tracing_buffers_mmap_close(struct mm_area *vma)
>  {
>  	struct ftrace_buffer_info *info = vma->vm_file->private_data;
>  	struct trace_iterator *iter = &info->iter;
> @@ -8494,7 +8494,7 @@ static const struct vm_operations_struct tracing_buffers_vmops = {
>  	.close		= tracing_buffers_mmap_close,
>  };
>
> -static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int tracing_buffers_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct ftrace_buffer_info *info = filp->private_data;
>  	struct trace_iterator *iter = &info->iter;
> diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
> index fee40ffbd490..f8172a64070a 100644
> --- a/kernel/trace/trace_output.c
> +++ b/kernel/trace/trace_output.c
> @@ -404,7 +404,7 @@ static int seq_print_user_ip(struct trace_seq *s, struct mm_struct *mm,
>  		return 0;
>
>  	if (mm) {
> -		const struct vm_area_struct *vma;
> +		const struct mm_area *vma;
>
>  		mmap_read_lock(mm);
>  		vma = find_vma(mm, ip);
> diff --git a/lib/buildid.c b/lib/buildid.c
> index c4b0f376fb34..5acf0f755dd2 100644
> --- a/lib/buildid.c
> +++ b/lib/buildid.c
> @@ -287,7 +287,7 @@ static int get_build_id_64(struct freader *r, unsigned char *build_id, __u32 *si
>  /* enough for Elf64_Ehdr, Elf64_Phdr, and all the smaller requests */
>  #define MAX_FREADER_BUF_SZ 64
>
> -static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
> +static int __build_id_parse(struct mm_area *vma, unsigned char *build_id,
>  			    __u32 *size, bool may_fault)
>  {
>  	const Elf32_Ehdr *ehdr;
> @@ -338,7 +338,7 @@ static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
>   *
>   * Return: 0 on success; negative error, otherwise
>   */
> -int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size)
> +int build_id_parse_nofault(struct mm_area *vma, unsigned char *build_id, __u32 *size)
>  {
>  	return __build_id_parse(vma, build_id, size, false /* !may_fault */);
>  }
> @@ -354,7 +354,7 @@ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id,
>   *
>   * Return: 0 on success; negative error, otherwise
>   */
> -int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size)
> +int build_id_parse(struct mm_area *vma, unsigned char *build_id, __u32 *size)
>  {
>  	return __build_id_parse(vma, build_id, size, true /* may_fault */);
>  }
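
For callers of the renamed build-ID helpers, the contract is unchanged:
0 on success, negative error otherwise, as the kerneldoc above says. A
sketch (example_log_build_id is hypothetical; BUILD_ID_SIZE_MAX comes
from <linux/buildid.h>):

	#include <linux/buildid.h>
	#include <linux/printk.h>

	static void example_log_build_id(struct mm_area *vma)
	{
		unsigned char id[BUILD_ID_SIZE_MAX];
		__u32 sz;

		if (build_id_parse(vma, id, &sz) == 0)
			pr_info("parsed %u byte build id\n", sz);
	}
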
> diff --git a/lib/test_hmm.c b/lib/test_hmm.c
> index 5b144bc5c4ec..d08270e1c826 100644
> --- a/lib/test_hmm.c
> +++ b/lib/test_hmm.c
> @@ -878,7 +878,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
>  	unsigned long start, end, addr;
>  	unsigned long size = cmd->npages << PAGE_SHIFT;
>  	struct mm_struct *mm = dmirror->notifier.mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long src_pfns[64] = { 0 };
>  	unsigned long dst_pfns[64] = { 0 };
>  	struct migrate_vma args = { 0 };
> @@ -938,7 +938,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
>  	unsigned long start, end, addr;
>  	unsigned long size = cmd->npages << PAGE_SHIFT;
>  	struct mm_struct *mm = dmirror->notifier.mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long src_pfns[64] = { 0 };
>  	unsigned long dst_pfns[64] = { 0 };
>  	struct dmirror_bounce bounce;
> @@ -1342,7 +1342,7 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
>  	return 0;
>  }
>
> -static int dmirror_fops_mmap(struct file *file, struct vm_area_struct *vma)
> +static int dmirror_fops_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned long addr;
>
> diff --git a/lib/vdso/datastore.c b/lib/vdso/datastore.c
> index 3693c6caf2c4..6079a11964e1 100644
> --- a/lib/vdso/datastore.c
> +++ b/lib/vdso/datastore.c
> @@ -38,7 +38,7 @@ struct vdso_arch_data *vdso_k_arch_data = &vdso_arch_data_store.data;
>  #endif /* CONFIG_ARCH_HAS_VDSO_ARCH_DATA */
>
>  static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
> -			     struct vm_area_struct *vma, struct vm_fault *vmf)
> +			     struct mm_area *vma, struct vm_fault *vmf)
>  {
>  	struct page *timens_page = find_timens_vvar_page(vma);
>  	unsigned long addr, pfn;
> @@ -96,7 +96,7 @@ const struct vm_special_mapping vdso_vvar_mapping = {
>  	.fault	= vvar_fault,
>  };
>
> -struct vm_area_struct *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned long addr)
>  {
>  	return _install_special_mapping(mm, addr, VDSO_NR_PAGES * PAGE_SIZE,
>  					VM_READ | VM_MAYREAD | VM_IO | VM_DONTDUMP |
> @@ -115,7 +115,7 @@ struct vm_area_struct *vdso_install_vvar_mapping(struct mm_struct *mm, unsigned
>  int vdso_join_timens(struct task_struct *task, struct time_namespace *ns)
>  {
>  	struct mm_struct *mm = task->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	mmap_read_lock(mm);
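
The vvar hunks illustrate the other signature this rename reaches: the
vm_special_mapping callbacks. A sketch of a special mapping post-rename
(everything named example_* is hypothetical; the field layout mirrors
xol_mapping and vdso_vvar_mapping above):

	#include <linux/mm_types.h>

	static vm_fault_t example_fault(const struct vm_special_mapping *sm,
					struct mm_area *vma,
					struct vm_fault *vmf)
	{
		return VM_FAULT_SIGBUS;	/* refuse all faults */
	}

	static const struct vm_special_mapping example_mapping = {
		.name	= "[example]",
		.fault	= example_fault,
	};
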
> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
> index 0db1fc70c84d..db48cc64657f 100644
> --- a/mm/damon/ops-common.c
> +++ b/mm/damon/ops-common.c
> @@ -39,7 +39,7 @@ struct folio *damon_get_folio(unsigned long pfn)
>  	return folio;
>  }
>
> -void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr)
> +void damon_ptep_mkold(pte_t *pte, struct mm_area *vma, unsigned long addr)
>  {
>  	pte_t pteval = ptep_get(pte);
>  	struct folio *folio;
> @@ -70,7 +70,7 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
>  	folio_put(folio);
>  }
>
> -void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
> +void damon_pmdp_mkold(pmd_t *pmd, struct mm_area *vma, unsigned long addr)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
> diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
> index 18d837d11bce..81857e66d09b 100644
> --- a/mm/damon/ops-common.h
> +++ b/mm/damon/ops-common.h
> @@ -9,8 +9,8 @@
>
>  struct folio *damon_get_folio(unsigned long pfn);
>
> -void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
> -void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
> +void damon_ptep_mkold(pte_t *pte, struct mm_area *vma, unsigned long addr);
> +void damon_pmdp_mkold(pmd_t *pmd, struct mm_area *vma, unsigned long addr);
>
>  int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
>  			struct damos *s);
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 1b70d3f36046..5154132467eb 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -20,7 +20,7 @@
>  #include "ops-common.h"
>
>  static bool damon_folio_mkold_one(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long addr, void *arg)
> +		struct mm_area *vma, unsigned long addr, void *arg)
>  {
>  	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
>
> @@ -88,7 +88,7 @@ static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
>  }
>
>  static bool damon_folio_young_one(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long addr, void *arg)
> +		struct mm_area *vma, unsigned long addr, void *arg)
>  {
>  	bool *accessed = arg;
>  	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
> diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
> index 7cd944266a92..5d07633be7fb 100644
> --- a/mm/damon/tests/vaddr-kunit.h
> +++ b/mm/damon/tests/vaddr-kunit.h
> @@ -14,7 +14,7 @@
>
>  #include <kunit/test.h>
>
> -static int __link_vmas(struct maple_tree *mt, struct vm_area_struct *vmas,
> +static int __link_vmas(struct maple_tree *mt, struct mm_area *vmas,
>  			ssize_t nr_vmas)
>  {
>  	int i, ret = -ENOMEM;
> @@ -68,13 +68,13 @@ static void damon_test_three_regions_in_vmas(struct kunit *test)
>  	static struct mm_struct mm;
>  	struct damon_addr_range regions[3] = {0};
>  	/* 10-20-25, 200-210-220, 300-305, 307-330 */
> -	static struct vm_area_struct vmas[] = {
> -		(struct vm_area_struct) {.vm_start = 10, .vm_end = 20},
> -		(struct vm_area_struct) {.vm_start = 20, .vm_end = 25},
> -		(struct vm_area_struct) {.vm_start = 200, .vm_end = 210},
> -		(struct vm_area_struct) {.vm_start = 210, .vm_end = 220},
> -		(struct vm_area_struct) {.vm_start = 300, .vm_end = 305},
> -		(struct vm_area_struct) {.vm_start = 307, .vm_end = 330},
> +	static struct mm_area vmas[] = {
> +		(struct mm_area) {.vm_start = 10, .vm_end = 20},
> +		(struct mm_area) {.vm_start = 20, .vm_end = 25},
> +		(struct mm_area) {.vm_start = 200, .vm_end = 210},
> +		(struct mm_area) {.vm_start = 210, .vm_end = 220},
> +		(struct mm_area) {.vm_start = 300, .vm_end = 305},
> +		(struct mm_area) {.vm_start = 307, .vm_end = 330},
>  	};
>
>  	mt_init_flags(&mm.mm_mt, MT_FLAGS_ALLOC_RANGE | MT_FLAGS_USE_RCU);
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index e6d99106a7f9..ddd28b187cbb 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -121,7 +121,7 @@ static int __damon_va_three_regions(struct mm_struct *mm,
>  {
>  	struct damon_addr_range first_gap = {0}, second_gap = {0};
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma, *prev = NULL;
> +	struct mm_area *vma, *prev = NULL;
>  	unsigned long start;
>
>  	/*
> @@ -341,7 +341,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
>
>  #ifdef CONFIG_HUGETLB_PAGE
>  static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
> -				struct vm_area_struct *vma, unsigned long addr)
> +				struct mm_area *vma, unsigned long addr)
>  {
>  	bool referenced = false;
>  	pte_t entry = huge_ptep_get(mm, addr, pte);
> diff --git a/mm/debug.c b/mm/debug.c
> index db83e381a8ae..ea36f9732a2a 100644
> --- a/mm/debug.c
> +++ b/mm/debug.c
> @@ -184,7 +184,7 @@ EXPORT_SYMBOL(dump_page);
>
>  #ifdef CONFIG_DEBUG_VM
>
> -void dump_vma(const struct vm_area_struct *vma)
> +void dump_vma(const struct mm_area *vma)
>  {
>  	pr_emerg("vma %px start %px end %px mm %px\n"
>  		"prot %lx anon_vma %px vm_ops %px\n"
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index bc748f700a9e..ba1ca4c6a44f 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -45,7 +45,7 @@
>
>  struct pgtable_debug_args {
>  	struct mm_struct	*mm;
> -	struct vm_area_struct	*vma;
> +	struct mm_area	*vma;
>
>  	pgd_t			*pgdp;
>  	p4d_t			*p4dp;
> diff --git a/mm/filemap.c b/mm/filemap.c
> index b5e784f34d98..2a8150e9ac7b 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3293,7 +3293,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
>
>  static vm_fault_t filemap_fault_recheck_pte_none(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	vm_fault_t ret = 0;
>  	pte_t *ptep;
>
> @@ -3689,7 +3689,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
>  vm_fault_t filemap_map_pages(struct vm_fault *vmf,
>  			     pgoff_t start_pgoff, pgoff_t end_pgoff)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct file *file = vma->vm_file;
>  	struct address_space *mapping = file->f_mapping;
>  	pgoff_t file_end, last_pgoff = start_pgoff;
> @@ -3793,7 +3793,7 @@ const struct vm_operations_struct generic_file_vm_ops = {
>
>  /* This is used for a general mmap of a disk file */
>
> -int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
> +int generic_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct address_space *mapping = file->f_mapping;
>
> @@ -3807,7 +3807,7 @@ int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
>  /*
>   * This is for filesystems which do not implement ->writepage.
>   */
> -int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
> +int generic_file_readonly_mmap(struct file *file, struct mm_area *vma)
>  {
>  	if (vma_is_shared_maywrite(vma))
>  		return -EINVAL;
> @@ -3818,11 +3818,11 @@ vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
>  {
>  	return VM_FAULT_SIGBUS;
>  }
> -int generic_file_mmap(struct file *file, struct vm_area_struct *vma)
> +int generic_file_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return -ENOSYS;
>  }
> -int generic_file_readonly_mmap(struct file *file, struct vm_area_struct *vma)
> +int generic_file_readonly_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return -ENOSYS;
>  }
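
Filesystems that simply forward ->mmap pick the rename up through the
file_operations prototype; nothing else changes for them. A sketch
(examplefs_* names are hypothetical):

	#include <linux/fs.h>
	#include <linux/mm.h>

	static int examplefs_file_mmap(struct file *file, struct mm_area *vma)
	{
		return generic_file_mmap(file, vma);
	}

	static const struct file_operations examplefs_file_ops = {
		.mmap	= examplefs_file_mmap,
	};
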
> diff --git a/mm/gup.c b/mm/gup.c
> index 92351e2fa876..88928bea023f 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -595,7 +595,7 @@ static struct folio *try_grab_folio_fast(struct page *page, int refs,
>
>  /* Common code for can_follow_write_* */
>  static inline bool can_follow_write_common(struct page *page,
> -		struct vm_area_struct *vma, unsigned int flags)
> +		struct mm_area *vma, unsigned int flags)
>  {
>  	/* Maybe FOLL_FORCE is set to override it? */
>  	if (!(flags & FOLL_FORCE))
> @@ -620,7 +620,7 @@ static inline bool can_follow_write_common(struct page *page,
>  	return page && PageAnon(page) && PageAnonExclusive(page);
>  }
>
> -static struct page *no_page_table(struct vm_area_struct *vma,
> +static struct page *no_page_table(struct mm_area *vma,
>  				  unsigned int flags, unsigned long address)
>  {
>  	if (!(flags & FOLL_DUMP))
> @@ -648,7 +648,7 @@ static struct page *no_page_table(struct vm_area_struct *vma,
>  #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
>  /* FOLL_FORCE can write to even unwritable PUDs in COW mappings. */
>  static inline bool can_follow_write_pud(pud_t pud, struct page *page,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					unsigned int flags)
>  {
>  	/* If the pud is writable, we can write to the page. */
> @@ -658,7 +658,7 @@ static inline bool can_follow_write_pud(pud_t pud, struct page *page,
>  	return can_follow_write_common(page, vma, flags);
>  }
>
> -static struct page *follow_huge_pud(struct vm_area_struct *vma,
> +static struct page *follow_huge_pud(struct mm_area *vma,
>  				    unsigned long addr, pud_t *pudp,
>  				    int flags, struct follow_page_context *ctx)
>  {
> @@ -716,7 +716,7 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
>
>  /* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
>  static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					unsigned int flags)
>  {
>  	/* If the pmd is writable, we can write to the page. */
> @@ -732,7 +732,7 @@ static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
>  	return !userfaultfd_huge_pmd_wp(vma, pmd);
>  }
>
> -static struct page *follow_huge_pmd(struct vm_area_struct *vma,
> +static struct page *follow_huge_pmd(struct mm_area *vma,
>  				    unsigned long addr, pmd_t *pmd,
>  				    unsigned int flags,
>  				    struct follow_page_context *ctx)
> @@ -778,14 +778,14 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
>  }
>
>  #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
> -static struct page *follow_huge_pud(struct vm_area_struct *vma,
> +static struct page *follow_huge_pud(struct mm_area *vma,
>  				    unsigned long addr, pud_t *pudp,
>  				    int flags, struct follow_page_context *ctx)
>  {
>  	return NULL;
>  }
>
> -static struct page *follow_huge_pmd(struct vm_area_struct *vma,
> +static struct page *follow_huge_pmd(struct mm_area *vma,
>  				    unsigned long addr, pmd_t *pmd,
>  				    unsigned int flags,
>  				    struct follow_page_context *ctx)
> @@ -794,7 +794,7 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
>  }
>  #endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
>
> -static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
> +static int follow_pfn_pte(struct mm_area *vma, unsigned long address,
>  		pte_t *pte, unsigned int flags)
>  {
>  	if (flags & FOLL_TOUCH) {
> @@ -817,7 +817,7 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
>
>  /* FOLL_FORCE can write to even unwritable PTEs in COW mappings. */
>  static inline bool can_follow_write_pte(pte_t pte, struct page *page,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					unsigned int flags)
>  {
>  	/* If the pte is writable, we can write to the page. */
> @@ -833,7 +833,7 @@ static inline bool can_follow_write_pte(pte_t pte, struct page *page,
>  	return !userfaultfd_pte_wp(vma, pte);
>  }
>
> -static struct page *follow_page_pte(struct vm_area_struct *vma,
> +static struct page *follow_page_pte(struct mm_area *vma,
>  		unsigned long address, pmd_t *pmd, unsigned int flags,
>  		struct dev_pagemap **pgmap)
>  {
> @@ -947,7 +947,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
>  	return no_page_table(vma, flags, address);
>  }
>
> -static struct page *follow_pmd_mask(struct vm_area_struct *vma,
> +static struct page *follow_pmd_mask(struct mm_area *vma,
>  				    unsigned long address, pud_t *pudp,
>  				    unsigned int flags,
>  				    struct follow_page_context *ctx)
> @@ -999,7 +999,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
>  	return page;
>  }
>
> -static struct page *follow_pud_mask(struct vm_area_struct *vma,
> +static struct page *follow_pud_mask(struct mm_area *vma,
>  				    unsigned long address, p4d_t *p4dp,
>  				    unsigned int flags,
>  				    struct follow_page_context *ctx)
> @@ -1027,7 +1027,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
>  	return follow_pmd_mask(vma, address, pudp, flags, ctx);
>  }
>
> -static struct page *follow_p4d_mask(struct vm_area_struct *vma,
> +static struct page *follow_p4d_mask(struct mm_area *vma,
>  				    unsigned long address, pgd_t *pgdp,
>  				    unsigned int flags,
>  				    struct follow_page_context *ctx)
> @@ -1046,7 +1046,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
>
>  /**
>   * follow_page_mask - look up a page descriptor from a user-virtual address
> - * @vma: vm_area_struct mapping @address
> + * @vma: mm_area mapping @address
>   * @address: virtual address to look up
>   * @flags: flags modifying lookup behaviour
>   * @ctx: contains dev_pagemap for %ZONE_DEVICE memory pinning and a
> @@ -1068,7 +1068,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
>   * an error pointer if there is a mapping to something not represented
>   * by a page descriptor (see also vm_normal_page()).
>   */
> -static struct page *follow_page_mask(struct vm_area_struct *vma,
> +static struct page *follow_page_mask(struct mm_area *vma,
>  			      unsigned long address, unsigned int flags,
>  			      struct follow_page_context *ctx)
>  {
> @@ -1092,7 +1092,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
>  }
>
>  static int get_gate_page(struct mm_struct *mm, unsigned long address,
> -		unsigned int gup_flags, struct vm_area_struct **vma,
> +		unsigned int gup_flags, struct mm_area **vma,
>  		struct page **page)
>  {
>  	pgd_t *pgd;
> @@ -1151,7 +1151,7 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
>   * FOLL_NOWAIT, the mmap_lock may be released.  If it is, *@locked will be set
>   * to 0 and -EBUSY returned.
>   */
> -static int faultin_page(struct vm_area_struct *vma,
> +static int faultin_page(struct mm_area *vma,
>  		unsigned long address, unsigned int flags, bool unshare,
>  		int *locked)
>  {
> @@ -1246,7 +1246,7 @@ static int faultin_page(struct vm_area_struct *vma,
>   * This results in both data being written to a folio without writenotify, and
>   * the folio being dirtied unexpectedly (if the caller decides to do so).
>   */
> -static bool writable_file_mapping_allowed(struct vm_area_struct *vma,
> +static bool writable_file_mapping_allowed(struct mm_area *vma,
>  					  unsigned long gup_flags)
>  {
>  	/*
> @@ -1264,7 +1264,7 @@ static bool writable_file_mapping_allowed(struct vm_area_struct *vma,
>  	return !vma_needs_dirty_tracking(vma);
>  }
>
> -static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
> +static int check_vma_flags(struct mm_area *vma, unsigned long gup_flags)
>  {
>  	vm_flags_t vm_flags = vma->vm_flags;
>  	int write = (gup_flags & FOLL_WRITE);
> @@ -1329,14 +1329,14 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
>   * This is "vma_lookup()", but with a warning if we would have
>   * historically expanded the stack in the GUP code.
>   */
> -static struct vm_area_struct *gup_vma_lookup(struct mm_struct *mm,
> +static struct mm_area *gup_vma_lookup(struct mm_struct *mm,
>  	 unsigned long addr)
>  {
>  #ifdef CONFIG_STACK_GROWSUP
>  	return vma_lookup(mm, addr);
>  #else
>  	static volatile unsigned long next_warn;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long now, next;
>
>  	vma = find_vma(mm, addr);
> @@ -1424,7 +1424,7 @@ static long __get_user_pages(struct mm_struct *mm,
>  		int *locked)
>  {
>  	long ret = 0, i = 0;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	struct follow_page_context ctx = { NULL };
>
>  	if (!nr_pages)
> @@ -1574,7 +1574,7 @@ static long __get_user_pages(struct mm_struct *mm,
>  	return i ? i : ret;
>  }
>
> -static bool vma_permits_fault(struct vm_area_struct *vma,
> +static bool vma_permits_fault(struct mm_area *vma,
>  			      unsigned int fault_flags)
>  {
>  	bool write   = !!(fault_flags & FAULT_FLAG_WRITE);
> @@ -1630,7 +1630,7 @@ int fixup_user_fault(struct mm_struct *mm,
>  		     unsigned long address, unsigned int fault_flags,
>  		     bool *unlocked)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	vm_fault_t ret;
>
>  	address = untagged_addr_remote(mm, address);
> @@ -1879,7 +1879,7 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>   * If @locked is non-NULL, it must held for read only and may be
>   * released.  If it's released, *@locked will be set to 0.
>   */
> -long populate_vma_page_range(struct vm_area_struct *vma,
> +long populate_vma_page_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end, int *locked)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -1995,7 +1995,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
>  {
>  	struct mm_struct *mm = current->mm;
>  	unsigned long end, nstart, nend;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	int locked = 0;
>  	long ret = 0;
>
> @@ -2049,7 +2049,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
>  		unsigned long nr_pages, struct page **pages,
>  		int *locked, unsigned int foll_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	bool must_unlock = false;
>  	unsigned long vm_flags;
>  	long i;
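
gup.c is the heaviest consumer of the lookup helpers this patch retypes.
The basic pattern, for reference (example_addr_is_mapped is a
hypothetical caller):

	#include <linux/mm.h>

	static bool example_addr_is_mapped(struct mm_struct *mm,
					   unsigned long addr)
	{
		struct mm_area *vma;
		bool mapped;

		mmap_read_lock(mm);
		vma = vma_lookup(mm, addr);	/* NULL if nothing maps addr */
		mapped = vma != NULL;
		mmap_read_unlock(mm);

		return mapped;
	}
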
> diff --git a/mm/hmm.c b/mm/hmm.c
> index 082f7b7c0b9e..b3fdbe6d2e2a 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -64,7 +64,7 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
>  			 unsigned int required_fault, struct mm_walk *walk)
>  {
>  	struct hmm_vma_walk *hmm_vma_walk = walk->private;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	unsigned int fault_flags = FAULT_FLAG_REMOTE;
>
>  	WARN_ON_ONCE(!required_fault);
> @@ -472,7 +472,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
>  	unsigned long addr = start, i, pfn;
>  	struct hmm_vma_walk *hmm_vma_walk = walk->private;
>  	struct hmm_range *range = hmm_vma_walk->range;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	unsigned int required_fault;
>  	unsigned long pfn_req_flags;
>  	unsigned long cpu_flags;
> @@ -522,7 +522,7 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
>  {
>  	struct hmm_vma_walk *hmm_vma_walk = walk->private;
>  	struct hmm_range *range = hmm_vma_walk->range;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>
>  	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
>  	    vma->vm_flags & VM_READ)
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2a47682d1ab7..30d01dbe55af 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -83,7 +83,7 @@ unsigned long huge_anon_orders_madvise __read_mostly;
>  unsigned long huge_anon_orders_inherit __read_mostly;
>  static bool anon_orders_configured __initdata;
>
> -static inline bool file_thp_enabled(struct vm_area_struct *vma)
> +static inline bool file_thp_enabled(struct mm_area *vma)
>  {
>  	struct inode *inode;
>
> @@ -98,7 +98,7 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
>  	return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
>  }
>
> -unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
> +unsigned long __thp_vma_allowable_orders(struct mm_area *vma,
>  					 unsigned long vm_flags,
>  					 unsigned long tva_flags,
>  					 unsigned long orders)
> @@ -1050,7 +1050,7 @@ static int __init setup_thp_anon(char *str)
>  }
>  __setup("thp_anon=", setup_thp_anon);
>
> -pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
> +pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct mm_area *vma)
>  {
>  	if (likely(vma->vm_flags & VM_WRITE))
>  		pmd = pmd_mkwrite(pmd, vma);
> @@ -1155,7 +1155,7 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>
> -static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
> +static struct folio *vma_alloc_anon_folio_pmd(struct mm_area *vma,
>  		unsigned long addr)
>  {
>  	gfp_t gfp = vma_thp_gfp_mask(vma);
> @@ -1199,7 +1199,7 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
>  }
>
>  static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
> -		struct vm_area_struct *vma, unsigned long haddr)
> +		struct mm_area *vma, unsigned long haddr)
>  {
>  	pmd_t entry;
>
> @@ -1218,7 +1218,7 @@ static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
>  static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  {
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio;
>  	pgtable_t pgtable;
>  	vm_fault_t ret = 0;
> @@ -1277,7 +1277,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>   *	    available
>   * never: never stall for any thp allocation
>   */
> -gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
> +gfp_t vma_thp_gfp_mask(struct mm_area *vma)
>  {
>  	const bool vma_madvised = vma && (vma->vm_flags & VM_HUGEPAGE);
>
> @@ -1305,7 +1305,7 @@ gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
>
>  /* Caller must hold page table lock. */
>  static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
> -		struct vm_area_struct *vma, unsigned long haddr, pmd_t *pmd,
> +		struct mm_area *vma, unsigned long haddr, pmd_t *pmd,
>  		struct folio *zero_folio)
>  {
>  	pmd_t entry;
> @@ -1318,7 +1318,7 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
>
>  vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>  	vm_fault_t ret;
>
> @@ -1373,7 +1373,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  	return __do_huge_pmd_anonymous_page(vmf);
>  }
>
> -static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
> +static int insert_pfn_pmd(struct mm_area *vma, unsigned long addr,
>  		pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write,
>  		pgtable_t pgtable)
>  {
> @@ -1430,7 +1430,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
>  vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
>  {
>  	unsigned long addr = vmf->address & PMD_MASK;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	pgprot_t pgprot = vma->vm_page_prot;
>  	pgtable_t pgtable = NULL;
>  	spinlock_t *ptl;
> @@ -1471,7 +1471,7 @@ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pmd);
>  vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
>  				bool write)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	unsigned long addr = vmf->address & PMD_MASK;
>  	struct mm_struct *mm = vma->vm_mm;
>  	spinlock_t *ptl;
> @@ -1508,14 +1508,14 @@ vm_fault_t vmf_insert_folio_pmd(struct vm_fault *vmf, struct folio *folio,
>  EXPORT_SYMBOL_GPL(vmf_insert_folio_pmd);
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
> +static pud_t maybe_pud_mkwrite(pud_t pud, struct mm_area *vma)
>  {
>  	if (likely(vma->vm_flags & VM_WRITE))
>  		pud = pud_mkwrite(pud);
>  	return pud;
>  }
>
> -static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
> +static void insert_pfn_pud(struct mm_area *vma, unsigned long addr,
>  		pud_t *pud, pfn_t pfn, bool write)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -1560,7 +1560,7 @@ static void insert_pfn_pud(struct vm_area_struct *vma, unsigned long addr,
>  vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
>  {
>  	unsigned long addr = vmf->address & PUD_MASK;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	pgprot_t pgprot = vma->vm_page_prot;
>  	spinlock_t *ptl;
>
> @@ -1599,7 +1599,7 @@ EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
>  vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
>  				bool write)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	unsigned long addr = vmf->address & PUD_MASK;
>  	pud_t *pud = vmf->pud;
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -1633,7 +1633,7 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
>  EXPORT_SYMBOL_GPL(vmf_insert_folio_pud);
>  #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>
> -void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
> +void touch_pmd(struct mm_area *vma, unsigned long addr,
>  	       pmd_t *pmd, bool write)
>  {
>  	pmd_t _pmd;
> @@ -1646,7 +1646,7 @@ void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
>  		update_mmu_cache_pmd(vma, addr, pmd);
>  }
>
> -struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
> +struct page *follow_devmap_pmd(struct mm_area *vma, unsigned long addr,
>  		pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
>  {
>  	unsigned long pfn = pmd_pfn(*pmd);
> @@ -1688,7 +1688,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
>
>  int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
> -		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
> +		  struct mm_area *dst_vma, struct mm_area *src_vma)
>  {
>  	spinlock_t *dst_ptl, *src_ptl;
>  	struct page *src_page;
> @@ -1810,7 +1810,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  }
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -void touch_pud(struct vm_area_struct *vma, unsigned long addr,
> +void touch_pud(struct mm_area *vma, unsigned long addr,
>  	       pud_t *pud, bool write)
>  {
>  	pud_t _pud;
> @@ -1825,7 +1825,7 @@ void touch_pud(struct vm_area_struct *vma, unsigned long addr,
>
>  int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>  		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
> -		  struct vm_area_struct *vma)
> +		  struct mm_area *vma)
>  {
>  	spinlock_t *dst_ptl, *src_ptl;
>  	pud_t pud;
> @@ -1889,7 +1889,7 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
>  static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
>  {
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mmu_notifier_range range;
>  	struct folio *folio;
>  	vm_fault_t ret = 0;
> @@ -1921,7 +1921,7 @@ static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
>  vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>  {
>  	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio;
>  	struct page *page;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> @@ -2012,7 +2012,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
>  	return VM_FAULT_FALLBACK;
>  }
>
> -static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
> +static inline bool can_change_pmd_writable(struct mm_area *vma,
>  					   unsigned long addr, pmd_t pmd)
>  {
>  	struct page *page;
> @@ -2045,7 +2045,7 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
>  /* NUMA hinting page fault entry point for trans huge pmds */
>  vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>  	int nid = NUMA_NO_NODE;
> @@ -2123,7 +2123,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>   * Return true if we do MADV_FREE successfully on entire pmd page.
>   * Otherwise, return false.
>   */
> -bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
>  		pmd_t *pmd, unsigned long addr, unsigned long next)
>  {
>  	spinlock_t *ptl;
> @@ -2202,7 +2202,7 @@ static inline void zap_deposited_table(struct mm_struct *mm, pmd_t *pmd)
>  	mm_dec_nr_ptes(mm);
>  }
>
> -int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +int zap_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
>  		 pmd_t *pmd, unsigned long addr)
>  {
>  	pmd_t orig_pmd;
> @@ -2272,7 +2272,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  #ifndef pmd_move_must_withdraw
>  static inline int pmd_move_must_withdraw(spinlock_t *new_pmd_ptl,
>  					 spinlock_t *old_pmd_ptl,
> -					 struct vm_area_struct *vma)
> +					 struct mm_area *vma)
>  {
>  	/*
>  	 * With split pmd lock we also need to move preallocated
> @@ -2305,7 +2305,7 @@ static pmd_t clear_uffd_wp_pmd(pmd_t pmd)
>  	return pmd;
>  }
>
> -bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> +bool move_huge_pmd(struct mm_area *vma, unsigned long old_addr,
>  		  unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
>  {
>  	spinlock_t *old_ptl, *new_ptl;
> @@ -2363,7 +2363,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>   *      or if prot_numa but THP migration is not supported
>   *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
>   */
> -int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +int change_huge_pmd(struct mmu_gather *tlb, struct mm_area *vma,
>  		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
>  		    unsigned long cp_flags)
>  {
> @@ -2502,7 +2502,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   * - HPAGE_PUD_NR: if pud was successfully processed
>   */
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +int change_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
>  		    pud_t *pudp, unsigned long addr, pgprot_t newprot,
>  		    unsigned long cp_flags)
>  {
> @@ -2550,7 +2550,7 @@ int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>   * repeated by the caller, or other errors in case of failure.
>   */
>  int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> -			struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +			struct mm_area *dst_vma, struct mm_area *src_vma,
>  			unsigned long dst_addr, unsigned long src_addr)
>  {
>  	pmd_t _dst_pmd, src_pmdval;
> @@ -2687,7 +2687,7 @@ int move_pages_huge_pmd(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd, pm
>   * Note that if it returns page table lock pointer, this routine returns without
>   * unlocking page table lock. So callers must unlock it.
>   */
> -spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
> +spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct mm_area *vma)
>  {
>  	spinlock_t *ptl;
>  	ptl = pmd_lock(vma->vm_mm, pmd);
> @@ -2704,7 +2704,7 @@ spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
>   * Note that if it returns page table lock pointer, this routine returns without
>   * unlocking page table lock. So callers must unlock it.
>   */
> -spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
> +spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct mm_area *vma)
>  {
>  	spinlock_t *ptl;
>
> @@ -2716,7 +2716,7 @@ spinlock_t *__pud_trans_huge_lock(pud_t *pud, struct vm_area_struct *vma)
>  }
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +int zap_huge_pud(struct mmu_gather *tlb, struct mm_area *vma,
>  		 pud_t *pud, unsigned long addr)
>  {
>  	spinlock_t *ptl;
> @@ -2751,7 +2751,7 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  	return 1;
>  }
>
> -static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
> +static void __split_huge_pud_locked(struct mm_area *vma, pud_t *pud,
>  		unsigned long haddr)
>  {
>  	struct folio *folio;
> @@ -2783,7 +2783,7 @@ static void __split_huge_pud_locked(struct vm_area_struct *vma, pud_t *pud,
>  		-HPAGE_PUD_NR);
>  }
>
> -void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> +void __split_huge_pud(struct mm_area *vma, pud_t *pud,
>  		unsigned long address)
>  {
>  	spinlock_t *ptl;
> @@ -2803,13 +2803,13 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>  	mmu_notifier_invalidate_range_end(&range);
>  }
>  #else
> -void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> +void __split_huge_pud(struct mm_area *vma, pud_t *pud,
>  		unsigned long address)
>  {
>  }
>  #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>
> -static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
> +static void __split_huge_zero_page_pmd(struct mm_area *vma,
>  		unsigned long haddr, pmd_t *pmd)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -2850,7 +2850,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
>  	pmd_populate(mm, pmd, pgtable);
>  }
>
> -static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
> +static void __split_huge_pmd_locked(struct mm_area *vma, pmd_t *pmd,
>  		unsigned long haddr, bool freeze)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -3072,7 +3072,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  	pmd_populate(mm, pmd, pgtable);
>  }
>
> -void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
> +void split_huge_pmd_locked(struct mm_area *vma, unsigned long address,
>  			   pmd_t *pmd, bool freeze, struct folio *folio)
>  {
>  	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
> @@ -3093,7 +3093,7 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>  	}
>  }
>
> -void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> +void __split_huge_pmd(struct mm_area *vma, pmd_t *pmd,
>  		unsigned long address, bool freeze, struct folio *folio)
>  {
>  	spinlock_t *ptl;
> @@ -3109,7 +3109,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>  	mmu_notifier_invalidate_range_end(&range);
>  }
>
> -void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
> +void split_huge_pmd_address(struct mm_area *vma, unsigned long address,
>  		bool freeze, struct folio *folio)
>  {
>  	pmd_t *pmd = mm_find_pmd(vma->vm_mm, address);
> @@ -3120,7 +3120,7 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
>  	__split_huge_pmd(vma, pmd, address, freeze, folio);
>  }
>
> -static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned long address)
> +static inline void split_huge_pmd_if_needed(struct mm_area *vma, unsigned long address)
>  {
>  	/*
>  	 * If the new address isn't hpage aligned and it could previously
> @@ -3132,10 +3132,10 @@ static inline void split_huge_pmd_if_needed(struct vm_area_struct *vma, unsigned
>  		split_huge_pmd_address(vma, address, false, NULL);
>  }
>
> -void vma_adjust_trans_huge(struct vm_area_struct *vma,
> +void vma_adjust_trans_huge(struct mm_area *vma,
>  			   unsigned long start,
>  			   unsigned long end,
> -			   struct vm_area_struct *next)
> +			   struct mm_area *next)
>  {
>  	/* Check if we need to split start first. */
>  	split_huge_pmd_if_needed(vma, start);
> @@ -3171,7 +3171,7 @@ static void unmap_folio(struct folio *folio)
>  	try_to_unmap_flush();
>  }
>
> -static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
> +static bool __discard_anon_folio_pmd_locked(struct mm_area *vma,
>  					    unsigned long addr, pmd_t *pmdp,
>  					    struct folio *folio)
>  {
> @@ -3234,7 +3234,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm_area_struct *vma,
>  	return true;
>  }
>
> -bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
> +bool unmap_huge_pmd_locked(struct mm_area *vma, unsigned long addr,
>  			   pmd_t *pmdp, struct folio *folio)
>  {
>  	VM_WARN_ON_FOLIO(!folio_test_pmd_mappable(folio), folio);
> @@ -4316,7 +4316,7 @@ static void split_huge_pages_all(void)
>  	pr_debug("%lu of %lu THP split\n", split, total);
>  }
>
> -static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *vma)
> +static inline bool vma_not_suitable_for_thp_split(struct mm_area *vma)
>  {
>  	return vma_is_special_huge(vma) || (vma->vm_flags & VM_IO) ||
>  		    is_vm_hugetlb_page(vma);
> @@ -4359,7 +4359,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
>  	 * table filled with PTE-mapped THPs, each of which is distinct.
>  	 */
>  	for (addr = vaddr_start; addr < vaddr_end; addr += PAGE_SIZE) {
> -		struct vm_area_struct *vma = vma_lookup(mm, addr);
> +		struct mm_area *vma = vma_lookup(mm, addr);
>  		struct folio_walk fw;
>  		struct folio *folio;
>  		struct address_space *mapping;
> @@ -4614,7 +4614,7 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  		struct page *page)
>  {
>  	struct folio *folio = page_folio(page);
> -	struct vm_area_struct *vma = pvmw->vma;
> +	struct mm_area *vma = pvmw->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long address = pvmw->address;
>  	bool anon_exclusive;
> @@ -4663,7 +4663,7 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>  void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
>  {
>  	struct folio *folio = page_folio(new);
> -	struct vm_area_struct *vma = pvmw->vma;
> +	struct mm_area *vma = pvmw->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long address = pvmw->address;
>  	unsigned long haddr = address & HPAGE_PMD_MASK;
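
A note for the huge_memory.c hunks: the locking contract quoted in the
comments above survives the rename, so callers of __pmd_trans_huge_lock()
still own the unlock when a lock pointer is returned. A sketch
(example_peek_pmd is hypothetical):

	#include <linux/huge_mm.h>
	#include <linux/spinlock.h>

	static void example_peek_pmd(pmd_t *pmd, struct mm_area *vma)
	{
		spinlock_t *ptl = __pmd_trans_huge_lock(pmd, vma);

		if (!ptl)
			return;		/* not a trans-huge pmd */

		/* ... inspect *pmd under the lock ... */
		spin_unlock(ptl);
	}
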
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 39f92aad7bd1..96a0b225c1e8 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -116,12 +116,12 @@ struct mutex *hugetlb_fault_mutex_table __ro_after_init;
>
>  /* Forward declaration */
>  static int hugetlb_acct_memory(struct hstate *h, long delta);
> -static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
> -static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
> -static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
> -static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
> +static void hugetlb_vma_lock_free(struct mm_area *vma);
> +static void hugetlb_vma_lock_alloc(struct mm_area *vma);
> +static void __hugetlb_vma_unlock_write_free(struct mm_area *vma);
> +static void hugetlb_unshare_pmds(struct mm_area *vma,
>  		unsigned long start, unsigned long end);
> -static struct resv_map *vma_resv_map(struct vm_area_struct *vma);
> +static struct resv_map *vma_resv_map(struct mm_area *vma);
>
>  static void hugetlb_free_folio(struct folio *folio)
>  {
> @@ -288,7 +288,7 @@ static inline struct hugepage_subpool *subpool_inode(struct inode *inode)
>  	return HUGETLBFS_SB(inode->i_sb)->spool;
>  }
>
> -static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
> +static inline struct hugepage_subpool *subpool_vma(struct mm_area *vma)
>  {
>  	return subpool_inode(file_inode(vma->vm_file));
>  }
> @@ -296,7 +296,7 @@ static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
>  /*
>   * hugetlb vma_lock helper routines
>   */
> -void hugetlb_vma_lock_read(struct vm_area_struct *vma)
> +void hugetlb_vma_lock_read(struct mm_area *vma)
>  {
>  	if (__vma_shareable_lock(vma)) {
>  		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> @@ -309,7 +309,7 @@ void hugetlb_vma_lock_read(struct vm_area_struct *vma)
>  	}
>  }
>
> -void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
> +void hugetlb_vma_unlock_read(struct mm_area *vma)
>  {
>  	if (__vma_shareable_lock(vma)) {
>  		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> @@ -322,7 +322,7 @@ void hugetlb_vma_unlock_read(struct vm_area_struct *vma)
>  	}
>  }
>
> -void hugetlb_vma_lock_write(struct vm_area_struct *vma)
> +void hugetlb_vma_lock_write(struct mm_area *vma)
>  {
>  	if (__vma_shareable_lock(vma)) {
>  		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> @@ -335,7 +335,7 @@ void hugetlb_vma_lock_write(struct vm_area_struct *vma)
>  	}
>  }
>
> -void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
> +void hugetlb_vma_unlock_write(struct mm_area *vma)
>  {
>  	if (__vma_shareable_lock(vma)) {
>  		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> @@ -348,7 +348,7 @@ void hugetlb_vma_unlock_write(struct vm_area_struct *vma)
>  	}
>  }
>
> -int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
> +int hugetlb_vma_trylock_write(struct mm_area *vma)
>  {
>
>  	if (__vma_shareable_lock(vma)) {
> @@ -364,7 +364,7 @@ int hugetlb_vma_trylock_write(struct vm_area_struct *vma)
>  	return 1;
>  }
>
> -void hugetlb_vma_assert_locked(struct vm_area_struct *vma)
> +void hugetlb_vma_assert_locked(struct mm_area *vma)
>  {
>  	if (__vma_shareable_lock(vma)) {
>  		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> @@ -387,7 +387,7 @@ void hugetlb_vma_lock_release(struct kref *kref)
>
>  static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
>  {
> -	struct vm_area_struct *vma = vma_lock->vma;
> +	struct mm_area *vma = vma_lock->vma;
>
>  	/*
>  	 * vma_lock structure may or not be released as a result of put,
> @@ -400,7 +400,7 @@ static void __hugetlb_vma_unlock_write_put(struct hugetlb_vma_lock *vma_lock)
>  	kref_put(&vma_lock->refs, hugetlb_vma_lock_release);
>  }
>
> -static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
> +static void __hugetlb_vma_unlock_write_free(struct mm_area *vma)
>  {
>  	if (__vma_shareable_lock(vma)) {
>  		struct hugetlb_vma_lock *vma_lock = vma->vm_private_data;
> @@ -414,7 +414,7 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma)
>  	}
>  }
>
> -static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
> +static void hugetlb_vma_lock_free(struct mm_area *vma)
>  {
>  	/*
>  	 * Only present in sharable vmas.
> @@ -430,7 +430,7 @@ static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
>  	}
>  }
>
> -static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
> +static void hugetlb_vma_lock_alloc(struct mm_area *vma)
>  {
>  	struct hugetlb_vma_lock *vma_lock;
>
> @@ -1021,7 +1021,7 @@ static long region_count(struct resv_map *resv, long f, long t)
>   * the mapping, huge page units here.
>   */
>  static pgoff_t vma_hugecache_offset(struct hstate *h,
> -			struct vm_area_struct *vma, unsigned long address)
> +			struct mm_area *vma, unsigned long address)
>  {
>  	return ((address - vma->vm_start) >> huge_page_shift(h)) +
>  			(vma->vm_pgoff >> huge_page_order(h));
> @@ -1036,7 +1036,7 @@ static pgoff_t vma_hugecache_offset(struct hstate *h,
>   *
>   * Return: The default size of the folios allocated when backing a VMA.
>   */
> -unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
> +unsigned long vma_kernel_pagesize(struct mm_area *vma)
>  {
>  	if (vma->vm_ops && vma->vm_ops->pagesize)
>  		return vma->vm_ops->pagesize(vma);
> @@ -1050,7 +1050,7 @@ EXPORT_SYMBOL_GPL(vma_kernel_pagesize);
>   * architectures where it differs, an architecture-specific 'strong'
>   * version of this symbol is required.
>   */
> -__weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
> +__weak unsigned long vma_mmu_pagesize(struct mm_area *vma)
>  {
>  	return vma_kernel_pagesize(vma);
>  }
> @@ -1083,12 +1083,12 @@ __weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
>   * reference it, this region map represents those offsets which have consumed
>   * reservation ie. where pages have been instantiated.
>   */
> -static unsigned long get_vma_private_data(struct vm_area_struct *vma)
> +static unsigned long get_vma_private_data(struct mm_area *vma)
>  {
>  	return (unsigned long)vma->vm_private_data;
>  }
>
> -static void set_vma_private_data(struct vm_area_struct *vma,
> +static void set_vma_private_data(struct mm_area *vma,
>  							unsigned long value)
>  {
>  	vma->vm_private_data = (void *)value;
> @@ -1178,7 +1178,7 @@ static inline struct resv_map *inode_resv_map(struct inode *inode)
>  	return (struct resv_map *)(&inode->i_data)->i_private_data;
>  }
>
> -static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
> +static struct resv_map *vma_resv_map(struct mm_area *vma)
>  {
>  	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
>  	if (vma->vm_flags & VM_MAYSHARE) {
> @@ -1193,7 +1193,7 @@ static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
>  	}
>  }
>
> -static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
> +static void set_vma_resv_map(struct mm_area *vma, struct resv_map *map)
>  {
>  	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
>  	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
> @@ -1201,7 +1201,7 @@ static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
>  	set_vma_private_data(vma, (unsigned long)map);
>  }
>
> -static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
> +static void set_vma_resv_flags(struct mm_area *vma, unsigned long flags)
>  {
>  	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
>  	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
> @@ -1209,21 +1209,21 @@ static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
>  	set_vma_private_data(vma, get_vma_private_data(vma) | flags);
>  }
>
> -static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
> +static int is_vma_resv_set(struct mm_area *vma, unsigned long flag)
>  {
>  	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
>
>  	return (get_vma_private_data(vma) & flag) != 0;
>  }
>
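
While we are at the reservation helpers: the resv_map pointer and the
HPAGE_RESV_* flags share the single vm_private_data word, which is why
set_vma_resv_flags() can OR flag bits into it and __vma_private_lock()
below masks with ~HPAGE_RESV_MASK.  A standalone sketch of that
pointer-tagging scheme; the names and bit values here are illustrative,
not the kernel's:

#include <stdint.h>
#include <stdio.h>

#define RESV_OWNER	0x1UL	/* stand-in for HPAGE_RESV_OWNER */
#define RESV_MASK	0x3UL	/* stand-in for HPAGE_RESV_MASK  */

struct resv { long adds_in_progress; };	/* illustrative payload */

int main(void)
{
	static struct resv map;		/* aligned, so the low bits are free */
	uintptr_t priv;

	priv = (uintptr_t)&map;		/* set_vma_resv_map() equivalent   */
	priv |= RESV_OWNER;		/* set_vma_resv_flags() equivalent */

	/* is_vma_resv_set() and vma_resv_map() equivalents: */
	printf("owner set: %d\n", (priv & RESV_OWNER) != 0);
	printf("map recovered: %d\n",
	       (struct resv *)(priv & ~RESV_MASK) == &map);
	return 0;
}
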
> -bool __vma_private_lock(struct vm_area_struct *vma)
> +bool __vma_private_lock(struct mm_area *vma)
>  {
>  	return !(vma->vm_flags & VM_MAYSHARE) &&
>  		get_vma_private_data(vma) & ~HPAGE_RESV_MASK &&
>  		is_vma_resv_set(vma, HPAGE_RESV_OWNER);
>  }
>
> -void hugetlb_dup_vma_private(struct vm_area_struct *vma)
> +void hugetlb_dup_vma_private(struct mm_area *vma)
>  {
>  	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
>  	/*
> @@ -1254,7 +1254,7 @@ void hugetlb_dup_vma_private(struct vm_area_struct *vma)
>   * same sized vma. It should never come here with last ref on the
>   * reservation.
>   */
> -void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
> +void clear_vma_resv_huge_pages(struct mm_area *vma)
>  {
>  	/*
>  	 * Clear the old hugetlb private page reservation.
> @@ -1365,7 +1365,7 @@ static unsigned long available_huge_pages(struct hstate *h)
>  }
>
>  static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				unsigned long address, long gbl_chg)
>  {
>  	struct folio *folio = NULL;
> @@ -2324,7 +2324,7 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
>   */
>  static
>  struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
> -		struct vm_area_struct *vma, unsigned long addr)
> +		struct mm_area *vma, unsigned long addr)
>  {
>  	struct folio *folio = NULL;
>  	struct mempolicy *mpol;
> @@ -2606,7 +2606,7 @@ enum vma_resv_mode {
>  	VMA_DEL_RESV,
>  };
>  static long __vma_reservation_common(struct hstate *h,
> -				struct vm_area_struct *vma, unsigned long addr,
> +				struct mm_area *vma, unsigned long addr,
>  				enum vma_resv_mode mode)
>  {
>  	struct resv_map *resv;
> @@ -2686,31 +2686,31 @@ static long __vma_reservation_common(struct hstate *h,
>  }
>
>  static long vma_needs_reservation(struct hstate *h,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	return __vma_reservation_common(h, vma, addr, VMA_NEEDS_RESV);
>  }
>
>  static long vma_commit_reservation(struct hstate *h,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	return __vma_reservation_common(h, vma, addr, VMA_COMMIT_RESV);
>  }
>
>  static void vma_end_reservation(struct hstate *h,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	(void)__vma_reservation_common(h, vma, addr, VMA_END_RESV);
>  }
>
>  static long vma_add_reservation(struct hstate *h,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	return __vma_reservation_common(h, vma, addr, VMA_ADD_RESV);
>  }
>
>  static long vma_del_reservation(struct hstate *h,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	return __vma_reservation_common(h, vma, addr, VMA_DEL_RESV);
>  }
> @@ -2735,7 +2735,7 @@ static long vma_del_reservation(struct hstate *h,
>   *
>   * In case 2, simply undo reserve map modifications done by alloc_hugetlb_folio.
>   */
> -void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
> +void restore_reserve_on_error(struct hstate *h, struct mm_area *vma,
>  			unsigned long address, struct folio *folio)
>  {
>  	long rc = vma_needs_reservation(h, vma, address);
> @@ -3004,7 +3004,7 @@ typedef enum {
>   * allocation).  New call sites should (probably) never set it to true!!
>   * When it's set, the allocation will bypass all vma level reservations.
>   */
> -struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> +struct folio *alloc_hugetlb_folio(struct mm_area *vma,
>  				    unsigned long addr, bool cow_from_owner)
>  {
>  	struct hugepage_subpool *spool = subpool_vma(vma);
> @@ -5314,7 +5314,7 @@ static int hugetlb_acct_memory(struct hstate *h, long delta)
>  	return ret;
>  }
>
> -static void hugetlb_vm_op_open(struct vm_area_struct *vma)
> +static void hugetlb_vm_op_open(struct mm_area *vma)
>  {
>  	struct resv_map *resv = vma_resv_map(vma);
>
> @@ -5352,7 +5352,7 @@ static void hugetlb_vm_op_open(struct vm_area_struct *vma)
>  	}
>  }
>
> -static void hugetlb_vm_op_close(struct vm_area_struct *vma)
> +static void hugetlb_vm_op_close(struct mm_area *vma)
>  {
>  	struct hstate *h = hstate_vma(vma);
>  	struct resv_map *resv;
> @@ -5383,7 +5383,7 @@ static void hugetlb_vm_op_close(struct vm_area_struct *vma)
>  	kref_put(&resv->refs, resv_map_release);
>  }
>
> -static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
> +static int hugetlb_vm_op_split(struct mm_area *vma, unsigned long addr)
>  {
>  	if (addr & ~(huge_page_mask(hstate_vma(vma))))
>  		return -EINVAL;
> @@ -5409,7 +5409,7 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
>  	return 0;
>  }
>
> -static unsigned long hugetlb_vm_op_pagesize(struct vm_area_struct *vma)
> +static unsigned long hugetlb_vm_op_pagesize(struct mm_area *vma)
>  {
>  	return huge_page_size(hstate_vma(vma));
>  }
> @@ -5441,7 +5441,7 @@ const struct vm_operations_struct hugetlb_vm_ops = {
>  	.pagesize = hugetlb_vm_op_pagesize,
>  };
>
> -static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
> +static pte_t make_huge_pte(struct mm_area *vma, struct page *page,
>  		bool try_mkwrite)
>  {
>  	pte_t entry;
> @@ -5460,7 +5460,7 @@ static pte_t make_huge_pte(struct vm_area_struct *vma, struct page *page,
>  	return entry;
>  }
>
> -static void set_huge_ptep_writable(struct vm_area_struct *vma,
> +static void set_huge_ptep_writable(struct mm_area *vma,
>  				   unsigned long address, pte_t *ptep)
>  {
>  	pte_t entry;
> @@ -5470,7 +5470,7 @@ static void set_huge_ptep_writable(struct vm_area_struct *vma,
>  		update_mmu_cache(vma, address, ptep);
>  }
>
> -static void set_huge_ptep_maybe_writable(struct vm_area_struct *vma,
> +static void set_huge_ptep_maybe_writable(struct mm_area *vma,
>  					 unsigned long address, pte_t *ptep)
>  {
>  	if (vma->vm_flags & VM_WRITE)
> @@ -5504,7 +5504,7 @@ bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>  }
>
>  static void
> -hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long addr,
> +hugetlb_install_folio(struct mm_area *vma, pte_t *ptep, unsigned long addr,
>  		      struct folio *new_folio, pte_t old, unsigned long sz)
>  {
>  	pte_t newpte = make_huge_pte(vma, &new_folio->page, true);
> @@ -5519,8 +5519,8 @@ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long add
>  }
>
>  int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> -			    struct vm_area_struct *dst_vma,
> -			    struct vm_area_struct *src_vma)
> +			    struct mm_area *dst_vma,
> +			    struct mm_area *src_vma)
>  {
>  	pte_t *src_pte, *dst_pte, entry;
>  	struct folio *pte_folio;
> @@ -5706,7 +5706,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
>  	return ret;
>  }
>
> -static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
> +static void move_huge_pte(struct mm_area *vma, unsigned long old_addr,
>  			  unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte,
>  			  unsigned long sz)
>  {
> @@ -5745,8 +5745,8 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
>  	spin_unlock(dst_ptl);
>  }
>
> -int move_hugetlb_page_tables(struct vm_area_struct *vma,
> -			     struct vm_area_struct *new_vma,
> +int move_hugetlb_page_tables(struct mm_area *vma,
> +			     struct mm_area *new_vma,
>  			     unsigned long old_addr, unsigned long new_addr,
>  			     unsigned long len)
>  {
> @@ -5809,7 +5809,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
>  	return len + old_addr - old_end;
>  }
>
> -void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +void __unmap_hugepage_range(struct mmu_gather *tlb, struct mm_area *vma,
>  			    unsigned long start, unsigned long end,
>  			    struct page *ref_page, zap_flags_t zap_flags)
>  {
> @@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  		tlb_flush_mmu_tlbonly(tlb);
>  }
>
> -void __hugetlb_zap_begin(struct vm_area_struct *vma,
> +void __hugetlb_zap_begin(struct mm_area *vma,
>  			 unsigned long *start, unsigned long *end)
>  {
>  	if (!vma->vm_file)	/* hugetlbfs_file_mmap error */
> @@ -5989,7 +5989,7 @@ void __hugetlb_zap_begin(struct vm_area_struct *vma,
>  		i_mmap_lock_write(vma->vm_file->f_mapping);
>  }
>
> -void __hugetlb_zap_end(struct vm_area_struct *vma,
> +void __hugetlb_zap_end(struct mm_area *vma,
>  		       struct zap_details *details)
>  {
>  	zap_flags_t zap_flags = details ? details->zap_flags : 0;
> @@ -6016,7 +6016,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
>  		i_mmap_unlock_write(vma->vm_file->f_mapping);
>  }
>
> -void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> +void unmap_hugepage_range(struct mm_area *vma, unsigned long start,
>  			  unsigned long end, struct page *ref_page,
>  			  zap_flags_t zap_flags)
>  {
> @@ -6041,11 +6041,11 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
>   * from other VMAs and let the children be SIGKILLed if they are faulting the
>   * same region.
>   */
> -static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> +static void unmap_ref_private(struct mm_struct *mm, struct mm_area *vma,
>  			      struct page *page, unsigned long address)
>  {
>  	struct hstate *h = hstate_vma(vma);
> -	struct vm_area_struct *iter_vma;
> +	struct mm_area *iter_vma;
>  	struct address_space *mapping;
>  	pgoff_t pgoff;
>
> @@ -6100,7 +6100,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
>  static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
>  		       struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
>  	pte_t pte = huge_ptep_get(mm, vmf->address, vmf->pte);
> @@ -6294,7 +6294,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
>   * Return whether there is a pagecache page to back given address within VMA.
>   */
>  bool hugetlbfs_pagecache_present(struct hstate *h,
> -				 struct vm_area_struct *vma, unsigned long address)
> +				 struct mm_area *vma, unsigned long address)
>  {
>  	struct address_space *mapping = vma->vm_file->f_mapping;
>  	pgoff_t idx = linear_page_index(vma, address);
> @@ -6373,7 +6373,7 @@ static bool hugetlb_pte_stable(struct hstate *h, struct mm_struct *mm, unsigned
>  static vm_fault_t hugetlb_no_page(struct address_space *mapping,
>  			struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct hstate *h = hstate_vma(vma);
>  	vm_fault_t ret = VM_FAULT_SIGBUS;
> @@ -6611,7 +6611,7 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
>  }
>  #endif
>
> -vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> +vm_fault_t hugetlb_fault(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long address, unsigned int flags)
>  {
>  	vm_fault_t ret;
> @@ -6824,7 +6824,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>   * Can probably be eliminated, but still used by hugetlb_mfill_atomic_pte().
>   */
>  static struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
> -		struct vm_area_struct *vma, unsigned long address)
> +		struct mm_area *vma, unsigned long address)
>  {
>  	struct mempolicy *mpol;
>  	nodemask_t *nodemask;
> @@ -6851,7 +6851,7 @@ static struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
>   * with modifications for hugetlb pages.
>   */
>  int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
> -			     struct vm_area_struct *dst_vma,
> +			     struct mm_area *dst_vma,
>  			     unsigned long dst_addr,
>  			     unsigned long src_addr,
>  			     uffd_flags_t flags,
> @@ -7063,7 +7063,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
>  }
>  #endif /* CONFIG_USERFAULTFD */
>
> -long hugetlb_change_protection(struct vm_area_struct *vma,
> +long hugetlb_change_protection(struct mm_area *vma,
>  		unsigned long address, unsigned long end,
>  		pgprot_t newprot, unsigned long cp_flags)
>  {
> @@ -7213,7 +7213,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
>  /* Return true if reservation was successful, false otherwise.  */
>  bool hugetlb_reserve_pages(struct inode *inode,
>  					long from, long to,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					vm_flags_t vm_flags)
>  {
>  	long chg = -1, add = -1;
> @@ -7413,8 +7413,8 @@ long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
>  }
>
>  #ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
> -static unsigned long page_table_shareable(struct vm_area_struct *svma,
> -				struct vm_area_struct *vma,
> +static unsigned long page_table_shareable(struct mm_area *svma,
> +				struct mm_area *vma,
>  				unsigned long addr, pgoff_t idx)
>  {
>  	unsigned long saddr = ((idx - svma->vm_pgoff) << PAGE_SHIFT) +
> @@ -7441,7 +7441,7 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
>  	return saddr;
>  }
>
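
The saddr arithmetic in page_table_shareable() above projects an address
from one mapping into the other by way of the shared file page index
(the matching idx computation appears below in huge_pmd_share()).
Worked through with concrete numbers, 4 KiB pages assumed:

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed: 4 KiB pages */

int main(void)
{
	/*
	 * Two mappings of the same file region:
	 *   vma:  vm_start 0x7f0000000000, vm_pgoff 0
	 *   svma: vm_start 0x7e0000200000, vm_pgoff 512 (2 MiB into file)
	 */
	unsigned long addr = 0x7f0000400000UL;	/* fault address in vma */
	unsigned long vma_start = 0x7f0000000000UL, vma_pgoff = 0;
	unsigned long svma_start = 0x7e0000200000UL, svma_pgoff = 512;

	/* idx: file page offset of addr, as the caller computes it */
	unsigned long idx = ((addr - vma_start) >> PAGE_SHIFT) + vma_pgoff;

	/* saddr: where that same file page lives in svma */
	unsigned long saddr = ((idx - svma_pgoff) << PAGE_SHIFT) + svma_start;

	printf("idx=%lu saddr=%#lx\n", idx, saddr); /* 1024, 0x7e0000400000 */
	return 0;
}
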
> -bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
> +bool want_pmd_share(struct mm_area *vma, unsigned long addr)
>  {
>  	unsigned long start = addr & PUD_MASK;
>  	unsigned long end = start + PUD_SIZE;
> @@ -7467,7 +7467,7 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
>   * If yes, adjust start and end to cover range associated with possible
>   * shared pmd mappings.
>   */
> -void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
> +void adjust_range_if_pmd_sharing_possible(struct mm_area *vma,
>  				unsigned long *start, unsigned long *end)
>  {
>  	unsigned long v_start = ALIGN(vma->vm_start, PUD_SIZE),
> @@ -7498,13 +7498,13 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>   * racing tasks could either miss the sharing (see huge_pte_offset) or select a
>   * bad pmd for sharing.
>   */
> -pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pmd_share(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, pud_t *pud)
>  {
>  	struct address_space *mapping = vma->vm_file->f_mapping;
>  	pgoff_t idx = ((addr - vma->vm_start) >> PAGE_SHIFT) +
>  			vma->vm_pgoff;
> -	struct vm_area_struct *svma;
> +	struct mm_area *svma;
>  	unsigned long saddr;
>  	pte_t *spte = NULL;
>  	pte_t *pte;
> @@ -7551,7 +7551,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
>   * returns: 1 successfully unmapped a shared pte page
>   *	    0 the underlying pte page is not shared, or it is the last user
>   */
> -int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
> +int huge_pmd_unshare(struct mm_struct *mm, struct mm_area *vma,
>  					unsigned long addr, pte_t *ptep)
>  {
>  	unsigned long sz = huge_page_size(hstate_vma(vma));
> @@ -7574,31 +7574,31 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
>
>  #else /* !CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */
>
> -pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pmd_share(struct mm_struct *mm, struct mm_area *vma,
>  		      unsigned long addr, pud_t *pud)
>  {
>  	return NULL;
>  }
>
> -int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
> +int huge_pmd_unshare(struct mm_struct *mm, struct mm_area *vma,
>  				unsigned long addr, pte_t *ptep)
>  {
>  	return 0;
>  }
>
> -void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
> +void adjust_range_if_pmd_sharing_possible(struct mm_area *vma,
>  				unsigned long *start, unsigned long *end)
>  {
>  }
>
> -bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
> +bool want_pmd_share(struct mm_area *vma, unsigned long addr)
>  {
>  	return false;
>  }
>  #endif /* CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING */
>
>  #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
> -pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> +pte_t *huge_pte_alloc(struct mm_struct *mm, struct mm_area *vma,
>  			unsigned long addr, unsigned long sz)
>  {
>  	pgd_t *pgd;
> @@ -7837,7 +7837,7 @@ void move_hugetlb_state(struct folio *old_folio, struct folio *new_folio, int re
>  	spin_unlock_irq(&hugetlb_lock);
>  }
>
> -static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
> +static void hugetlb_unshare_pmds(struct mm_area *vma,
>  				   unsigned long start,
>  				   unsigned long end)
>  {
> @@ -7887,7 +7887,7 @@ static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
>   * This function will unconditionally remove all the shared pmd pgtable entries
>   * within the specific vma for a hugetlbfs memory range.
>   */
> -void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
> +void hugetlb_unshare_all_pmds(struct mm_area *vma)
>  {
>  	hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
>  			ALIGN_DOWN(vma->vm_end, PUD_SIZE));
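
Last aside in this file: hugetlb_unshare_all_pmds() rounds vm_start up
and vm_end down because PMD sharing can only ever cover the fully
PUD-aligned middle of the VMA.  The rounding helpers reduce to the
usual power-of-two mask arithmetic; a sketch with an assumed 1 GiB
PUD_SIZE:

#include <stdio.h>

#define PUD_SIZE	(1UL << 30)	/* assumed: 1 GiB */

#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	unsigned long vm_start = 0x7f0012340000UL;
	unsigned long vm_end   = 0x7f8089ab0000UL;

	/* Only the fully PUD-aligned middle of the VMA can share PMDs. */
	printf("unshare range: %#lx - %#lx\n",
	       ALIGN(vm_start, PUD_SIZE), ALIGN_DOWN(vm_end, PUD_SIZE));
	return 0;
}
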
> diff --git a/mm/internal.h b/mm/internal.h
> index 50c2f590b2d0..b2d2c52dfbd9 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -44,8 +44,8 @@ struct folio_batch;
>   * represents the length of the range being copied as specified by the user.
>   */
>  struct pagetable_move_control {
> -	struct vm_area_struct *old; /* Source VMA. */
> -	struct vm_area_struct *new; /* Destination VMA. */
> +	struct mm_area *old; /* Source VMA. */
> +	struct mm_area *new; /* Destination VMA. */
>  	unsigned long old_addr; /* Address from which the move begins. */
>  	unsigned long old_end; /* Exclusive address at which old range ends. */
>  	unsigned long new_addr; /* Address to move page tables to. */
> @@ -162,7 +162,7 @@ static inline void *folio_raw_mapping(const struct folio *folio)
>   *
>   * Returns: 0 if success, error otherwise.
>   */
> -static inline int mmap_file(struct file *file, struct vm_area_struct *vma)
> +static inline int mmap_file(struct file *file, struct mm_area *vma)
>  {
>  	int err = call_mmap(file, vma);
>
> @@ -184,7 +184,7 @@ static inline int mmap_file(struct file *file, struct vm_area_struct *vma)
>   * it in an inconsistent state which makes the use of any hooks suspect, clear
>   * them down by installing dummy empty hooks.
>   */
> -static inline void vma_close(struct vm_area_struct *vma)
> +static inline void vma_close(struct mm_area *vma)
>  {
>  	if (vma->vm_ops && vma->vm_ops->close) {
>  		vma->vm_ops->close(vma);
> @@ -426,13 +426,13 @@ void deactivate_file_folio(struct folio *folio);
>  void folio_activate(struct folio *folio);
>
>  void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> -		   struct vm_area_struct *start_vma, unsigned long floor,
> +		   struct mm_area *start_vma, unsigned long floor,
>  		   unsigned long ceiling, bool mm_wr_locked);
>  void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte);
>
>  struct zap_details;
>  void unmap_page_range(struct mmu_gather *tlb,
> -			     struct vm_area_struct *vma,
> +			     struct mm_area *vma,
>  			     unsigned long addr, unsigned long end,
>  			     struct zap_details *details);
>  int folio_unmap_invalidate(struct address_space *mapping, struct folio *folio,
> @@ -927,7 +927,7 @@ struct anon_vma *folio_anon_vma(const struct folio *folio);
>
>  #ifdef CONFIG_MMU
>  void unmap_mapping_folio(struct folio *folio);
> -extern long populate_vma_page_range(struct vm_area_struct *vma,
> +extern long populate_vma_page_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end, int *locked);
>  extern long faultin_page_range(struct mm_struct *mm, unsigned long start,
>  		unsigned long end, bool write, int *locked);
> @@ -950,7 +950,7 @@ extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
>   * the page table to know whether the folio is fully mapped to the range.
>   */
>  static inline bool
> -folio_within_range(struct folio *folio, struct vm_area_struct *vma,
> +folio_within_range(struct folio *folio, struct mm_area *vma,
>  		unsigned long start, unsigned long end)
>  {
>  	pgoff_t pgoff, addr;
> @@ -978,7 +978,7 @@ folio_within_range(struct folio *folio, struct vm_area_struct *vma,
>  }
>
>  static inline bool
> -folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
> +folio_within_vma(struct folio *folio, struct mm_area *vma)
>  {
>  	return folio_within_range(folio, vma, vma->vm_start, vma->vm_end);
>  }
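
A quick model of what folio_within_range() checks above: given the
address the folio maps at, the whole folio must fit inside [start, end).
Simplified to the bounds test only (no page-table walk, 4 KiB PAGE_SHIFT
assumed):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed */

/* Does an nr_pages folio mapped at addr fit inside [start, end)? */
static bool within_range(unsigned long addr, unsigned long nr_pages,
			 unsigned long start, unsigned long end)
{
	return addr >= start && addr + (nr_pages << PAGE_SHIFT) <= end;
}

int main(void)
{
	/*
	 * A 16-page (64 KiB) folio mapped at 0x7f0000010000 inside a
	 * VMA covering [0x7f0000000000, 0x7f0000020000): just fits.
	 */
	printf("%d\n", within_range(0x7f0000010000UL, 16,
				    0x7f0000000000UL, 0x7f0000020000UL));
	return 0;
}
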
> @@ -994,7 +994,7 @@ folio_within_vma(struct folio *folio, struct vm_area_struct *vma)
>   */
>  void mlock_folio(struct folio *folio);
>  static inline void mlock_vma_folio(struct folio *folio,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	/*
>  	 * The VM_SPECIAL check here serves two purposes.
> @@ -1010,7 +1010,7 @@ static inline void mlock_vma_folio(struct folio *folio,
>
>  void munlock_folio(struct folio *folio);
>  static inline void munlock_vma_folio(struct folio *folio,
> -					struct vm_area_struct *vma)
> +					struct mm_area *vma)
>  {
>  	/*
>  	 * munlock if the function is called. Ideally, we should only
> @@ -1030,7 +1030,7 @@ bool need_mlock_drain(int cpu);
>  void mlock_drain_local(void);
>  void mlock_drain_remote(int cpu);
>
> -extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
> +extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct mm_area *vma);
>
>  /**
>   * vma_address - Find the virtual address a page range is mapped at
> @@ -1041,7 +1041,7 @@ extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
>   * If any page in this range is mapped by this VMA, return the first address
>   * where any of these pages appear.  Otherwise, return -EFAULT.
>   */
> -static inline unsigned long vma_address(const struct vm_area_struct *vma,
> +static inline unsigned long vma_address(const struct mm_area *vma,
>  		pgoff_t pgoff, unsigned long nr_pages)
>  {
>  	unsigned long address;
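
vma_address() here is the inverse of linear_page_index(): one goes from
file page offset to user virtual address, the other goes back.  A
simplified round trip, assuming a 4 KiB PAGE_SHIFT and ignoring the
-EFAULT bounds handling in the real helper:

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed */

/* Simplified vma_address(): file pgoff -> user virtual address. */
static unsigned long vma_address(unsigned long vm_start,
				 unsigned long vm_pgoff, unsigned long pgoff)
{
	return vm_start + ((pgoff - vm_pgoff) << PAGE_SHIFT);
}

/* Simplified linear_page_index(): user virtual address -> file pgoff. */
static unsigned long linear_page_index(unsigned long vm_start,
				       unsigned long vm_pgoff,
				       unsigned long addr)
{
	return ((addr - vm_start) >> PAGE_SHIFT) + vm_pgoff;
}

int main(void)
{
	unsigned long start = 0x7f1234560000UL, pgoff = 16;
	unsigned long addr = vma_address(start, pgoff, 20);

	/* Round trip: expect pgoff 20 back. */
	printf("%#lx -> %lu\n", addr, linear_page_index(start, pgoff, addr));
	return 0;
}
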
> @@ -1067,7 +1067,7 @@ static inline unsigned long vma_address(const struct vm_area_struct *vma,
>   */
>  static inline unsigned long vma_address_end(struct page_vma_mapped_walk *pvmw)
>  {
> -	struct vm_area_struct *vma = pvmw->vma;
> +	struct mm_area *vma = pvmw->vma;
>  	pgoff_t pgoff;
>  	unsigned long address;
>
> @@ -1210,10 +1210,10 @@ bool take_page_off_buddy(struct page *page);
>  bool put_page_back_buddy(struct page *page);
>  struct task_struct *task_early_kill(struct task_struct *tsk, int force_early);
>  void add_to_kill_ksm(struct task_struct *tsk, const struct page *p,
> -		     struct vm_area_struct *vma, struct list_head *to_kill,
> +		     struct mm_area *vma, struct list_head *to_kill,
>  		     unsigned long ksm_addr);
>  unsigned long page_mapped_in_vma(const struct page *page,
> -		struct vm_area_struct *vma);
> +		struct mm_area *vma);
>
>  #else
>  static inline int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
> @@ -1373,9 +1373,9 @@ int __must_check try_grab_folio(struct folio *folio, int refs,
>  /*
>   * mm/huge_memory.c
>   */
> -void touch_pud(struct vm_area_struct *vma, unsigned long addr,
> +void touch_pud(struct mm_area *vma, unsigned long addr,
>  	       pud_t *pud, bool write);
> -void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
> +void touch_pmd(struct mm_area *vma, unsigned long addr,
>  	       pmd_t *pmd, bool write);
>
>  /*
> @@ -1441,7 +1441,7 @@ enum {
>   * If the vma is NULL, we're coming from the GUP-fast path and might have
>   * to fallback to the slow path just to lookup the vma.
>   */
> -static inline bool gup_must_unshare(struct vm_area_struct *vma,
> +static inline bool gup_must_unshare(struct mm_area *vma,
>  				    unsigned int flags, struct page *page)
>  {
>  	/*
> @@ -1490,7 +1490,7 @@ extern bool mirrored_kernelcore;
>  bool memblock_has_mirror(void);
>  void memblock_free_all(void);
>
> -static __always_inline void vma_set_range(struct vm_area_struct *vma,
> +static __always_inline void vma_set_range(struct mm_area *vma,
>  					  unsigned long start, unsigned long end,
>  					  pgoff_t pgoff)
>  {
> @@ -1499,7 +1499,7 @@ static __always_inline void vma_set_range(struct vm_area_struct *vma,
>  	vma->vm_pgoff = pgoff;
>  }
>
> -static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
> +static inline bool vma_soft_dirty_enabled(struct mm_area *vma)
>  {
>  	/*
>  	 * NOTE: we must check this before VM_SOFTDIRTY on soft-dirty
> @@ -1517,12 +1517,12 @@ static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
>  	return !(vma->vm_flags & VM_SOFTDIRTY);
>  }
>
> -static inline bool pmd_needs_soft_dirty_wp(struct vm_area_struct *vma, pmd_t pmd)
> +static inline bool pmd_needs_soft_dirty_wp(struct mm_area *vma, pmd_t pmd)
>  {
>  	return vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd);
>  }
>
> -static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte)
> +static inline bool pte_needs_soft_dirty_wp(struct mm_area *vma, pte_t pte)
>  {
>  	return vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte);
>  }
> diff --git a/mm/interval_tree.c b/mm/interval_tree.c
> index 32e390c42c53..864e9d3c733a 100644
> --- a/mm/interval_tree.c
> +++ b/mm/interval_tree.c
> @@ -10,27 +10,27 @@
>  #include <linux/rmap.h>
>  #include <linux/interval_tree_generic.h>
>
> -static inline unsigned long vma_start_pgoff(struct vm_area_struct *v)
> +static inline unsigned long vma_start_pgoff(struct mm_area *v)
>  {
>  	return v->vm_pgoff;
>  }
>
> -static inline unsigned long vma_last_pgoff(struct vm_area_struct *v)
> +static inline unsigned long vma_last_pgoff(struct mm_area *v)
>  {
>  	return v->vm_pgoff + vma_pages(v) - 1;
>  }
>
> -INTERVAL_TREE_DEFINE(struct vm_area_struct, shared.rb,
> +INTERVAL_TREE_DEFINE(struct mm_area, shared.rb,
>  		     unsigned long, shared.rb_subtree_last,
>  		     vma_start_pgoff, vma_last_pgoff, /* empty */, vma_interval_tree)
>
>  /* Insert node immediately after prev in the interval tree */
> -void vma_interval_tree_insert_after(struct vm_area_struct *node,
> -				    struct vm_area_struct *prev,
> +void vma_interval_tree_insert_after(struct mm_area *node,
> +				    struct mm_area *prev,
>  				    struct rb_root_cached *root)
>  {
>  	struct rb_node **link;
> -	struct vm_area_struct *parent;
> +	struct mm_area *parent;
>  	unsigned long last = vma_last_pgoff(node);
>
>  	VM_BUG_ON_VMA(vma_start_pgoff(node) != vma_start_pgoff(prev), node);
> @@ -40,12 +40,12 @@ void vma_interval_tree_insert_after(struct vm_area_struct *node,
>  		link = &prev->shared.rb.rb_right;
>  	} else {
>  		parent = rb_entry(prev->shared.rb.rb_right,
> -				  struct vm_area_struct, shared.rb);
> +				  struct mm_area, shared.rb);
>  		if (parent->shared.rb_subtree_last < last)
>  			parent->shared.rb_subtree_last = last;
>  		while (parent->shared.rb.rb_left) {
>  			parent = rb_entry(parent->shared.rb.rb_left,
> -				struct vm_area_struct, shared.rb);
> +				struct mm_area, shared.rb);
>  			if (parent->shared.rb_subtree_last < last)
>  				parent->shared.rb_subtree_last = last;
>  		}
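
For readers less familiar with the i_mmap tree being retyped here: each
node is keyed by its first and last file page offset, both inclusive
(hence the '- 1' in vma_last_pgoff()), and lookups stab the tree with a
pgoff range.  A toy version of the per-node overlap test the interval
tree is built around, assumed semantics only, none of the rbtree
machinery:

#include <stdbool.h>
#include <stdio.h>

struct area { unsigned long pgoff, npages; };

/* Inclusive endpoints, mirroring vma_start_pgoff()/vma_last_pgoff(). */
static unsigned long start_pgoff(const struct area *a) { return a->pgoff; }
static unsigned long last_pgoff(const struct area *a)
{
	return a->pgoff + a->npages - 1;
}

/* The overlap test a stab query applies at each candidate node. */
static bool overlaps(const struct area *a, unsigned long first,
		     unsigned long last)
{
	return start_pgoff(a) <= last && last_pgoff(a) >= first;
}

int main(void)
{
	struct area a = { .pgoff = 4, .npages = 4 };	/* file pages 4..7 */

	printf("%d %d\n", overlaps(&a, 7, 9), overlaps(&a, 8, 9)); /* 1 0 */
	return 0;
}
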
> diff --git a/mm/io-mapping.c b/mm/io-mapping.c
> index 01b362799930..588ecb8ea446 100644
> --- a/mm/io-mapping.c
> +++ b/mm/io-mapping.c
> @@ -13,7 +13,7 @@
>   *
>   *  Note: this is only safe if the mm semaphore is held when called.
>   */
> -int io_mapping_map_user(struct io_mapping *iomap, struct vm_area_struct *vma,
> +int io_mapping_map_user(struct io_mapping *iomap, struct mm_area *vma,
>  		unsigned long addr, unsigned long pfn, unsigned long size)
>  {
>  	vm_flags_t expected_flags = VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index cc945c6ab3bd..e135208612f1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -346,7 +346,7 @@ struct attribute_group khugepaged_attr_group = {
>  };
>  #endif /* CONFIG_SYSFS */
>
> -int hugepage_madvise(struct vm_area_struct *vma,
> +int hugepage_madvise(struct mm_area *vma,
>  		     unsigned long *vm_flags, int advice)
>  {
>  	switch (advice) {
> @@ -469,7 +469,7 @@ void __khugepaged_enter(struct mm_struct *mm)
>  		wake_up_interruptible(&khugepaged_wait);
>  }
>
> -void khugepaged_enter_vma(struct vm_area_struct *vma,
> +void khugepaged_enter_vma(struct mm_area *vma,
>  			  unsigned long vm_flags)
>  {
>  	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
> @@ -561,7 +561,7 @@ static bool is_refcount_suitable(struct folio *folio)
>  	return folio_ref_count(folio) == expected_refcount;
>  }
>
> -static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> +static int __collapse_huge_page_isolate(struct mm_area *vma,
>  					unsigned long address,
>  					pte_t *pte,
>  					struct collapse_control *cc,
> @@ -708,7 +708,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  }
>
>  static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> -						struct vm_area_struct *vma,
> +						struct mm_area *vma,
>  						unsigned long address,
>  						spinlock_t *ptl,
>  						struct list_head *compound_pagelist)
> @@ -763,7 +763,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>  static void __collapse_huge_page_copy_failed(pte_t *pte,
>  					     pmd_t *pmd,
>  					     pmd_t orig_pmd,
> -					     struct vm_area_struct *vma,
> +					     struct mm_area *vma,
>  					     struct list_head *compound_pagelist)
>  {
>  	spinlock_t *pmd_ptl;
> @@ -800,7 +800,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>   * @compound_pagelist: list that stores compound pages
>   */
>  static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> -		pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
> +		pmd_t *pmd, pmd_t orig_pmd, struct mm_area *vma,
>  		unsigned long address, spinlock_t *ptl,
>  		struct list_head *compound_pagelist)
>  {
> @@ -919,10 +919,10 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
>
>  static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  				   bool expect_anon,
> -				   struct vm_area_struct **vmap,
> +				   struct mm_area **vmap,
>  				   struct collapse_control *cc)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
>
>  	if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> @@ -998,7 +998,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
>   * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
>   */
>  static int __collapse_huge_page_swapin(struct mm_struct *mm,
> -				       struct vm_area_struct *vma,
> +				       struct mm_area *vma,
>  				       unsigned long haddr, pmd_t *pmd,
>  				       int referenced)
>  {
> @@ -1112,7 +1112,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	struct folio *folio;
>  	spinlock_t *pmd_ptl, *pte_ptl;
>  	int result = SCAN_FAIL;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mmu_notifier_range range;
>
>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> @@ -1265,7 +1265,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  }
>
>  static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> -				   struct vm_area_struct *vma,
> +				   struct mm_area *vma,
>  				   unsigned long address, bool *mmap_locked,
>  				   struct collapse_control *cc)
>  {
> @@ -1466,7 +1466,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
>
>  #ifdef CONFIG_SHMEM
>  /* hpage must be locked, and mmap_lock must be held */
> -static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
> +static int set_huge_pmd(struct mm_area *vma, unsigned long addr,
>  			pmd_t *pmdp, struct page *hpage)
>  {
>  	struct vm_fault vmf = {
> @@ -1504,7 +1504,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>  	struct mmu_notifier_range range;
>  	bool notified = false;
>  	unsigned long haddr = addr & HPAGE_PMD_MASK;
> -	struct vm_area_struct *vma = vma_lookup(mm, haddr);
> +	struct mm_area *vma = vma_lookup(mm, haddr);
>  	struct folio *folio;
>  	pte_t *start_pte, *pte;
>  	pmd_t *pmd, pgt_pmd;
> @@ -1713,7 +1713,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
>
>  static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	i_mmap_lock_read(mapping);
>  	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
> @@ -2114,7 +2114,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>  	}
>
>  	if (nr_none) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		int nr_none_check = 0;
>
>  		i_mmap_lock_read(mapping);
> @@ -2372,7 +2372,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
>  	struct khugepaged_mm_slot *mm_slot;
>  	struct mm_slot *slot;
>  	struct mm_struct *mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int progress = 0;
>
>  	VM_BUG_ON(!pages);
> @@ -2736,7 +2736,7 @@ static int madvise_collapse_errno(enum scan_result r)
>  	}
>  }
>
> -int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
> +int madvise_collapse(struct mm_area *vma, struct mm_area **prev,
>  		     unsigned long start, unsigned long end)
>  {
>  	struct collapse_control *cc;
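
Before the ksm.c chunk: several khugepaged paths above mask addresses
with HPAGE_PMD_MASK to find the containing PMD-sized block (see the
haddr computation in collapse_pte_mapped_thp()).  The mask arithmetic,
spelled out with assumed 4 KiB pages and 2 MiB PMD huge pages:

#include <stdio.h>

#define HPAGE_PMD_SHIFT	21			/* assumed: 2 MiB */
#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long addr = 0x7f00123f5000UL;
	unsigned long haddr = addr & HPAGE_PMD_MASK;

	/* haddr rounds down to the 2 MiB boundary: 0x7f0012200000 */
	printf("addr  %#lx\nhaddr %#lx\n", addr, haddr);
	return 0;
}
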
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 8583fb91ef13..0370e8d4ab02 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -620,7 +620,7 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
>   * of the process that owns 'vma'.  We also do not want to enforce
>   * protection keys here anyway.
>   */
> -static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
> +static int break_ksm(struct mm_area *vma, unsigned long addr, bool lock_vma)
>  {
>  	vm_fault_t ret = 0;
>
> @@ -677,7 +677,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
>  	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>  }
>
> -static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +static bool vma_ksm_compatible(struct mm_area *vma)
>  {
>  	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
>  			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
> @@ -699,10 +699,10 @@ static bool vma_ksm_compatible(struct vm_area_struct *vma)
>  	return true;
>  }
>
> -static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
> +static struct mm_area *find_mergeable_vma(struct mm_struct *mm,
>  		unsigned long addr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	if (ksm_test_exit(mm))
>  		return NULL;
>  	vma = vma_lookup(mm, addr);
> @@ -715,7 +715,7 @@ static void break_cow(struct ksm_rmap_item *rmap_item)
>  {
>  	struct mm_struct *mm = rmap_item->mm;
>  	unsigned long addr = rmap_item->address;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * It is not an accident that whenever we want to break COW
> @@ -734,7 +734,7 @@ static struct page *get_mergeable_page(struct ksm_rmap_item *rmap_item)
>  {
>  	struct mm_struct *mm = rmap_item->mm;
>  	unsigned long addr = rmap_item->address;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct page *page = NULL;
>  	struct folio_walk fw;
>  	struct folio *folio;
> @@ -1034,7 +1034,7 @@ static void remove_trailing_rmap_items(struct ksm_rmap_item **rmap_list)
>   * to the next pass of ksmd - consider, for example, how ksmd might be
>   * in cmp_and_merge_page on one of the rmap_items we would be removing.
>   */
> -static int unmerge_ksm_pages(struct vm_area_struct *vma,
> +static int unmerge_ksm_pages(struct mm_area *vma,
>  			     unsigned long start, unsigned long end, bool lock_vma)
>  {
>  	unsigned long addr;
> @@ -1167,7 +1167,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>  	struct ksm_mm_slot *mm_slot;
>  	struct mm_slot *slot;
>  	struct mm_struct *mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int err = 0;
>
>  	spin_lock(&ksm_mmlist_lock);
> @@ -1243,7 +1243,7 @@ static u32 calc_checksum(struct page *page)
>  	return checksum;
>  }
>
> -static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
> +static int write_protect_page(struct mm_area *vma, struct folio *folio,
>  			      pte_t *orig_pte)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -1343,7 +1343,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
>   *
>   * Returns 0 on success, -EFAULT on failure.
>   */
> -static int replace_page(struct vm_area_struct *vma, struct page *page,
> +static int replace_page(struct mm_area *vma, struct page *page,
>  			struct page *kpage, pte_t orig_pte)
>  {
>  	struct folio *kfolio = page_folio(kpage);
> @@ -1446,7 +1446,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
>   *
>   * This function returns 0 if the pages were merged, -EFAULT otherwise.
>   */
> -static int try_to_merge_one_page(struct vm_area_struct *vma,
> +static int try_to_merge_one_page(struct mm_area *vma,
>  				 struct page *page, struct page *kpage)
>  {
>  	struct folio *folio = page_folio(page);
> @@ -1521,7 +1521,7 @@ static int try_to_merge_with_zero_page(struct ksm_rmap_item *rmap_item,
>  	 * appropriate zero page if the user enabled this via sysfs.
>  	 */
>  	if (ksm_use_zero_pages && (rmap_item->oldchecksum == zero_checksum)) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		mmap_read_lock(mm);
>  		vma = find_mergeable_vma(mm, rmap_item->address);
> @@ -1554,7 +1554,7 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
>  				      struct page *page, struct page *kpage)
>  {
>  	struct mm_struct *mm = rmap_item->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int err = -EFAULT;
>
>  	mmap_read_lock(mm);
> @@ -2459,7 +2459,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>  	struct mm_struct *mm;
>  	struct ksm_mm_slot *mm_slot;
>  	struct mm_slot *slot;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct ksm_rmap_item *rmap_item;
>  	struct vma_iterator vmi;
>  	int nid;
> @@ -2696,7 +2696,7 @@ static int ksm_scan_thread(void *nothing)
>  	return 0;
>  }
>
> -static void __ksm_add_vma(struct vm_area_struct *vma)
> +static void __ksm_add_vma(struct mm_area *vma)
>  {
>  	unsigned long vm_flags = vma->vm_flags;
>
> @@ -2707,7 +2707,7 @@ static void __ksm_add_vma(struct vm_area_struct *vma)
>  		vm_flags_set(vma, VM_MERGEABLE);
>  }
>
> -static int __ksm_del_vma(struct vm_area_struct *vma)
> +static int __ksm_del_vma(struct mm_area *vma)
>  {
>  	int err;
>
> @@ -2728,7 +2728,7 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
>   *
>   * @vma:  Pointer to vma
>   */
> -void ksm_add_vma(struct vm_area_struct *vma)
> +void ksm_add_vma(struct mm_area *vma)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>
> @@ -2738,7 +2738,7 @@ void ksm_add_vma(struct vm_area_struct *vma)
>
>  static void ksm_add_vmas(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	VMA_ITERATOR(vmi, mm, 0);
>  	for_each_vma(vmi, vma)
> @@ -2747,7 +2747,7 @@ static void ksm_add_vmas(struct mm_struct *mm)
>
>  static int ksm_del_vmas(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int err;
>
>  	VMA_ITERATOR(vmi, mm, 0);
> @@ -2826,7 +2826,7 @@ int ksm_disable(struct mm_struct *mm)
>  	return ksm_del_vmas(mm);
>  }
>
> -int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> +int ksm_madvise(struct mm_area *vma, unsigned long start,
>  		unsigned long end, int advice, unsigned long *vm_flags)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -2953,7 +2953,7 @@ void __ksm_exit(struct mm_struct *mm)
>  }
>
>  struct folio *ksm_might_need_to_copy(struct folio *folio,
> -			struct vm_area_struct *vma, unsigned long addr)
> +			struct mm_area *vma, unsigned long addr)
>  {
>  	struct page *page = folio_page(folio, 0);
>  	struct anon_vma *anon_vma = folio_anon_vma(folio);
> @@ -3021,7 +3021,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
>  	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
>  		struct anon_vma *anon_vma = rmap_item->anon_vma;
>  		struct anon_vma_chain *vmac;
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		cond_resched();
>  		if (!anon_vma_trylock_read(anon_vma)) {
> @@ -3079,7 +3079,7 @@ void collect_procs_ksm(const struct folio *folio, const struct page *page,
>  {
>  	struct ksm_stable_node *stable_node;
>  	struct ksm_rmap_item *rmap_item;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk;
>
>  	stable_node = folio_stable_node(folio);
> @@ -3277,7 +3277,7 @@ static void wait_while_offlining(void)
>   */
>  bool ksm_process_mergeable(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mmap_assert_locked(mm);
>  	VMA_ITERATOR(vmi, mm, 0);
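
ksm.c leans on the VMA_ITERATOR()/for_each_vma() pair throughout the
hunks above, so the rename changes the iterator's element type but not
its shape.  A toy model of the iteration contract only; these macros
are made up and walk a plain array, while the real iterator takes an mm
plus a start address and walks the maple tree:

#include <stdio.h>

struct mm_area { unsigned long vm_start, vm_end; };

struct vma_iterator { struct mm_area *pos, *end; };

#define VMA_ITERATOR(name, first, count) \
	struct vma_iterator name = { (first), (first) + (count) }

#define for_each_vma(vmi, vma) \
	for (; (vmi).pos < (vmi).end && ((vma) = (vmi).pos); (vmi).pos++)

int main(void)
{
	struct mm_area areas[] = { { 0x1000, 0x2000 }, { 0x5000, 0x9000 } };
	struct mm_area *vma;

	VMA_ITERATOR(vmi, areas, 2);
	for_each_vma(vmi, vma)
		printf("[%#lx, %#lx)\n", vma->vm_start, vma->vm_end);
	return 0;
}
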
> diff --git a/mm/madvise.c b/mm/madvise.c
> index b17f684322ad..8e401df400b1 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -99,7 +99,7 @@ void anon_vma_name_free(struct kref *kref)
>  	kfree(anon_name);
>  }
>
> -struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> +struct anon_vma_name *anon_vma_name(struct mm_area *vma)
>  {
>  	mmap_assert_locked(vma->vm_mm);
>
> @@ -107,7 +107,7 @@ struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
>  }
>
>  /* mmap_lock should be write-locked */
> -static int replace_anon_vma_name(struct vm_area_struct *vma,
> +static int replace_anon_vma_name(struct mm_area *vma,
>  				 struct anon_vma_name *anon_name)
>  {
>  	struct anon_vma_name *orig_name = anon_vma_name(vma);
> @@ -127,7 +127,7 @@ static int replace_anon_vma_name(struct vm_area_struct *vma,
>  	return 0;
>  }
>  #else /* CONFIG_ANON_VMA_NAME */
> -static int replace_anon_vma_name(struct vm_area_struct *vma,
> +static int replace_anon_vma_name(struct mm_area *vma,
>  				 struct anon_vma_name *anon_name)
>  {
>  	if (anon_name)
> @@ -142,8 +142,8 @@ static int replace_anon_vma_name(struct vm_area_struct *vma,
>   * Caller should ensure anon_name stability by raising its refcount even when
>   * anon_name belongs to a valid vma because this function might free that vma.
>   */
> -static int madvise_update_vma(struct vm_area_struct *vma,
> -			      struct vm_area_struct **prev, unsigned long start,
> +static int madvise_update_vma(struct mm_area *vma,
> +			      struct mm_area **prev, unsigned long start,
>  			      unsigned long end, unsigned long new_flags,
>  			      struct anon_vma_name *anon_name)
>  {
> @@ -179,7 +179,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
>  static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>  		unsigned long end, struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->private;
> +	struct mm_area *vma = walk->private;
>  	struct swap_iocb *splug = NULL;
>  	pte_t *ptep = NULL;
>  	spinlock_t *ptl;
> @@ -225,7 +225,7 @@ static const struct mm_walk_ops swapin_walk_ops = {
>  	.walk_lock		= PGWALK_RDLOCK,
>  };
>
> -static void shmem_swapin_range(struct vm_area_struct *vma,
> +static void shmem_swapin_range(struct mm_area *vma,
>  		unsigned long start, unsigned long end,
>  		struct address_space *mapping)
>  {
> @@ -266,8 +266,8 @@ static void shmem_swapin_range(struct vm_area_struct *vma,
>  /*
>   * Schedule all required I/O operations.  Do not wait for completion.
>   */
> -static long madvise_willneed(struct vm_area_struct *vma,
> -			     struct vm_area_struct **prev,
> +static long madvise_willneed(struct mm_area *vma,
> +			     struct mm_area **prev,
>  			     unsigned long start, unsigned long end)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -314,7 +314,7 @@ static long madvise_willneed(struct vm_area_struct *vma,
>  	return 0;
>  }
>
> -static inline bool can_do_file_pageout(struct vm_area_struct *vma)
> +static inline bool can_do_file_pageout(struct mm_area *vma)
>  {
>  	if (!vma->vm_file)
>  		return false;
> @@ -349,7 +349,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  	struct mmu_gather *tlb = private->tlb;
>  	bool pageout = private->pageout;
>  	struct mm_struct *mm = tlb->mm;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	pte_t *start_pte, *pte, ptent;
>  	spinlock_t *ptl;
>  	struct folio *folio = NULL;
> @@ -567,7 +567,7 @@ static const struct mm_walk_ops cold_walk_ops = {
>  };
>
>  static void madvise_cold_page_range(struct mmu_gather *tlb,
> -			     struct vm_area_struct *vma,
> +			     struct mm_area *vma,
>  			     unsigned long addr, unsigned long end)
>  {
>  	struct madvise_walk_private walk_private = {
> @@ -580,13 +580,13 @@ static void madvise_cold_page_range(struct mmu_gather *tlb,
>  	tlb_end_vma(tlb, vma);
>  }
>
> -static inline bool can_madv_lru_vma(struct vm_area_struct *vma)
> +static inline bool can_madv_lru_vma(struct mm_area *vma)
>  {
>  	return !(vma->vm_flags & (VM_LOCKED|VM_PFNMAP|VM_HUGETLB));
>  }
>
> -static long madvise_cold(struct vm_area_struct *vma,
> -			struct vm_area_struct **prev,
> +static long madvise_cold(struct mm_area *vma,
> +			struct mm_area **prev,
>  			unsigned long start_addr, unsigned long end_addr)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -605,7 +605,7 @@ static long madvise_cold(struct vm_area_struct *vma,
>  }
>
>  static void madvise_pageout_page_range(struct mmu_gather *tlb,
> -			     struct vm_area_struct *vma,
> +			     struct mm_area *vma,
>  			     unsigned long addr, unsigned long end)
>  {
>  	struct madvise_walk_private walk_private = {
> @@ -618,8 +618,8 @@ static void madvise_pageout_page_range(struct mmu_gather *tlb,
>  	tlb_end_vma(tlb, vma);
>  }
>
> -static long madvise_pageout(struct vm_area_struct *vma,
> -			struct vm_area_struct **prev,
> +static long madvise_pageout(struct mm_area *vma,
> +			struct mm_area **prev,
>  			unsigned long start_addr, unsigned long end_addr)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -654,7 +654,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>  	const cydp_t cydp_flags = CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY;
>  	struct mmu_gather *tlb = walk->private;
>  	struct mm_struct *mm = tlb->mm;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	spinlock_t *ptl;
>  	pte_t *start_pte, *pte, ptent;
>  	struct folio *folio;
> @@ -794,7 +794,7 @@ static const struct mm_walk_ops madvise_free_walk_ops = {
>  	.walk_lock		= PGWALK_RDLOCK,
>  };
>
> -static int madvise_free_single_vma(struct vm_area_struct *vma,
> +static int madvise_free_single_vma(struct mm_area *vma,
>  			unsigned long start_addr, unsigned long end_addr)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -848,7 +848,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
>   * An interface that causes the system to free clean pages and flush
>   * dirty pages is already available as msync(MS_INVALIDATE).
>   */
> -static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
> +static long madvise_dontneed_single_vma(struct mm_area *vma,
>  					unsigned long start, unsigned long end)
>  {
>  	struct zap_details details = {
> @@ -860,7 +860,7 @@ static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
>  	return 0;
>  }
>
> -static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
> +static bool madvise_dontneed_free_valid_vma(struct mm_area *vma,
>  					    unsigned long start,
>  					    unsigned long *end,
>  					    int behavior)
> @@ -890,8 +890,8 @@ static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
>  	return true;
>  }
>
> -static long madvise_dontneed_free(struct vm_area_struct *vma,
> -				  struct vm_area_struct **prev,
> +static long madvise_dontneed_free(struct mm_area *vma,
> +				  struct mm_area **prev,
>  				  unsigned long start, unsigned long end,
>  				  int behavior)
>  {
> @@ -994,8 +994,8 @@ static long madvise_populate(struct mm_struct *mm, unsigned long start,
>   * Application wants to free up the pages and associated backing store.
>   * This is effectively punching a hole into the middle of a file.
>   */
> -static long madvise_remove(struct vm_area_struct *vma,
> -				struct vm_area_struct **prev,
> +static long madvise_remove(struct mm_area *vma,
> +				struct mm_area **prev,
>  				unsigned long start, unsigned long end)
>  {
>  	loff_t offset;
> @@ -1039,7 +1039,7 @@ static long madvise_remove(struct vm_area_struct *vma,
>  	return error;
>  }
>
> -static bool is_valid_guard_vma(struct vm_area_struct *vma, bool allow_locked)
> +static bool is_valid_guard_vma(struct mm_area *vma, bool allow_locked)
>  {
>  	vm_flags_t disallowed = VM_SPECIAL | VM_HUGETLB;
>
> @@ -1115,8 +1115,8 @@ static const struct mm_walk_ops guard_install_walk_ops = {
>  	.walk_lock		= PGWALK_RDLOCK,
>  };
>
> -static long madvise_guard_install(struct vm_area_struct *vma,
> -				 struct vm_area_struct **prev,
> +static long madvise_guard_install(struct mm_area *vma,
> +				 struct mm_area **prev,
>  				 unsigned long start, unsigned long end)
>  {
>  	long err;
> @@ -1225,8 +1225,8 @@ static const struct mm_walk_ops guard_remove_walk_ops = {
>  	.walk_lock		= PGWALK_RDLOCK,
>  };
>
> -static long madvise_guard_remove(struct vm_area_struct *vma,
> -				 struct vm_area_struct **prev,
> +static long madvise_guard_remove(struct mm_area *vma,
> +				 struct mm_area **prev,
>  				 unsigned long start, unsigned long end)
>  {
>  	*prev = vma;
> @@ -1246,8 +1246,8 @@ static long madvise_guard_remove(struct vm_area_struct *vma,
>   * will handle splitting a vm area into separate areas, each area with its own
>   * behavior.
>   */
> -static int madvise_vma_behavior(struct vm_area_struct *vma,
> -				struct vm_area_struct **prev,
> +static int madvise_vma_behavior(struct mm_area *vma,
> +				struct mm_area **prev,
>  				unsigned long start, unsigned long end,
>  				unsigned long behavior)
>  {
> @@ -1488,12 +1488,12 @@ static bool process_madvise_remote_valid(int behavior)
>  static
>  int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
>  		      unsigned long end, unsigned long arg,
> -		      int (*visit)(struct vm_area_struct *vma,
> -				   struct vm_area_struct **prev, unsigned long start,
> +		      int (*visit)(struct mm_area *vma,
> +				   struct mm_area **prev, unsigned long start,
>  				   unsigned long end, unsigned long arg))
>  {
> -	struct vm_area_struct *vma;
> -	struct vm_area_struct *prev;
> +	struct mm_area *vma;
> +	struct mm_area *prev;
>  	unsigned long tmp;
>  	int unmapped_error = 0;
>
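
The hunk above is the one place in madvise where the renamed type sits
behind a function pointer, in the visit() callback that
madvise_walk_vmas() applies to each overlapping area.  A minimal model
of that walk-with-visitor shape, with made-up names and a linked list
standing in for the real lookup:

#include <stdio.h>

struct mm_area { unsigned long start, end; struct mm_area *next; };

/* Callback signature shaped like madvise's visit() parameter. */
typedef int (*visit_fn)(struct mm_area *vma, struct mm_area **prev,
			unsigned long start, unsigned long end);

static int walk(struct mm_area *head, unsigned long start,
		unsigned long end, visit_fn visit)
{
	struct mm_area *prev = NULL;
	struct mm_area *vma;

	for (vma = head; vma; vma = vma->next) {
		if (vma->end <= start || vma->start >= end)
			continue;	/* no overlap with [start, end) */
		int err = visit(vma, &prev, start, end);

		if (err)
			return err;
	}
	return 0;
}

static int show(struct mm_area *vma, struct mm_area **prev,
		unsigned long start, unsigned long end)
{
	printf("visit [%#lx, %#lx)\n", vma->start, vma->end);
	*prev = vma;
	return 0;
}

int main(void)
{
	struct mm_area b = { 0x3000, 0x5000, NULL };
	struct mm_area a = { 0x1000, 0x2000, &b };

	return walk(&a, 0x1800, 0x4000, show);	/* visits both areas */
}
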
> @@ -1545,8 +1545,8 @@ int madvise_walk_vmas(struct mm_struct *mm, unsigned long start,
>  }
>
>  #ifdef CONFIG_ANON_VMA_NAME
> -static int madvise_vma_anon_name(struct vm_area_struct *vma,
> -				 struct vm_area_struct **prev,
> +static int madvise_vma_anon_name(struct mm_area *vma,
> +				 struct mm_area **prev,
>  				 unsigned long start, unsigned long end,
>  				 unsigned long anon_name)
>  {
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index b91a33fb6c69..8a194e377443 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -398,7 +398,7 @@ static void shake_page(struct page *page)
>  	shake_folio(page_folio(page));
>  }
>
> -static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
> +static unsigned long dev_pagemap_mapping_shift(struct mm_area *vma,
>  		unsigned long address)
>  {
>  	unsigned long ret = 0;
> @@ -446,7 +446,7 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
>   * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
>   */
>  static void __add_to_kill(struct task_struct *tsk, const struct page *p,
> -			  struct vm_area_struct *vma, struct list_head *to_kill,
> +			  struct mm_area *vma, struct list_head *to_kill,
>  			  unsigned long addr)
>  {
>  	struct to_kill *tk;
> @@ -487,7 +487,7 @@ static void __add_to_kill(struct task_struct *tsk, const struct page *p,
>  }
>
>  static void add_to_kill_anon_file(struct task_struct *tsk, const struct page *p,
> -		struct vm_area_struct *vma, struct list_head *to_kill,
> +		struct mm_area *vma, struct list_head *to_kill,
>  		unsigned long addr)
>  {
>  	if (addr == -EFAULT)
> @@ -510,7 +510,7 @@ static bool task_in_to_kill_list(struct list_head *to_kill,
>  }
>
>  void add_to_kill_ksm(struct task_struct *tsk, const struct page *p,
> -		     struct vm_area_struct *vma, struct list_head *to_kill,
> +		     struct mm_area *vma, struct list_head *to_kill,
>  		     unsigned long addr)
>  {
>  	if (!task_in_to_kill_list(to_kill, tsk))
> @@ -621,7 +621,7 @@ static void collect_procs_anon(const struct folio *folio,
>  	pgoff = page_pgoff(folio, page);
>  	rcu_read_lock();
>  	for_each_process(tsk) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		struct anon_vma_chain *vmac;
>  		struct task_struct *t = task_early_kill(tsk, force_early);
>  		unsigned long addr;
> @@ -648,7 +648,7 @@ static void collect_procs_file(const struct folio *folio,
>  		const struct page *page, struct list_head *to_kill,
>  		int force_early)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk;
>  	struct address_space *mapping = folio->mapping;
>  	pgoff_t pgoff;
> @@ -683,7 +683,7 @@ static void collect_procs_file(const struct folio *folio,
>
>  #ifdef CONFIG_FS_DAX
>  static void add_to_kill_fsdax(struct task_struct *tsk, const struct page *p,
> -			      struct vm_area_struct *vma,
> +			      struct mm_area *vma,
>  			      struct list_head *to_kill, pgoff_t pgoff)
>  {
>  	unsigned long addr = vma_address(vma, pgoff, 1);
> @@ -697,7 +697,7 @@ static void collect_procs_fsdax(const struct page *page,
>  		struct address_space *mapping, pgoff_t pgoff,
>  		struct list_head *to_kill, bool pre_remove)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct task_struct *tsk;
>
>  	i_mmap_lock_read(mapping);
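
One convention worth knowing while reading the retyped kill-list
helpers: add_to_kill_anon_file() compares the looked-up address against
-EFAULT because vma_address() reports "not mapped here" as the errno
cast into the unsigned address space.  A tiny illustration of that
sentinel; the helper below is made up:

#include <stdio.h>

#define EFAULT 14

/*
 * vma_address()-style helper: returns -EFAULT cast to unsigned long
 * when the page range is not mapped by the area (assumed behaviour).
 */
static unsigned long lookup_addr(int mapped)
{
	return mapped ? 0x7f0000001000UL : (unsigned long)-EFAULT;
}

int main(void)
{
	unsigned long addr = lookup_addr(0);

	if (addr == (unsigned long)-EFAULT)
		printf("skip task: page not mapped here\n");
	return 0;
}
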
> diff --git a/mm/memory.c b/mm/memory.c
> index 9d0ba6fe73c1..854615d98d2b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -344,14 +344,14 @@ void free_pgd_range(struct mmu_gather *tlb,
>  }
>
>  void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> -		   struct vm_area_struct *vma, unsigned long floor,
> +		   struct mm_area *vma, unsigned long floor,
>  		   unsigned long ceiling, bool mm_wr_locked)
>  {
>  	struct unlink_vma_file_batch vb;
>
>  	do {
>  		unsigned long addr = vma->vm_start;
> -		struct vm_area_struct *next;
> +		struct mm_area *next;
>
>  		/*
>  		 * Note: USER_PGTABLES_CEILING may be passed as ceiling and may
> @@ -476,7 +476,7 @@ static inline void add_mm_rss_vec(struct mm_struct *mm, int *rss)
>   *
>   * The calling function must still handle the error.
>   */
> -static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
> +static void print_bad_pte(struct mm_area *vma, unsigned long addr,
>  			  pte_t pte, struct page *page)
>  {
>  	pgd_t *pgd = pgd_offset(vma->vm_mm, addr);
> @@ -572,7 +572,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
>   * order to support COWable mappings.
>   *
>   */
> -struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> +struct page *vm_normal_page(struct mm_area *vma, unsigned long addr,
>  			    pte_t pte)
>  {
>  	unsigned long pfn = pte_pfn(pte);
> @@ -638,7 +638,7 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>  	return pfn_to_page(pfn);
>  }
>
> -struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
> +struct folio *vm_normal_folio(struct mm_area *vma, unsigned long addr,
>  			    pte_t pte)
>  {
>  	struct page *page = vm_normal_page(vma, addr, pte);
> @@ -649,7 +649,7 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
>  }
>
>  #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
> -struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> +struct page *vm_normal_page_pmd(struct mm_area *vma, unsigned long addr,
>  				pmd_t pmd)
>  {
>  	unsigned long pfn = pmd_pfn(pmd);
> @@ -688,7 +688,7 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	return pfn_to_page(pfn);
>  }
>
> -struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
> +struct folio *vm_normal_folio_pmd(struct mm_area *vma,
>  				  unsigned long addr, pmd_t pmd)
>  {
>  	struct page *page = vm_normal_page_pmd(vma, addr, pmd);
> @@ -725,7 +725,7 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
>   * page table modifications (e.g., MADV_DONTNEED, mprotect), so device drivers
>   * must use MMU notifiers to sync against any concurrent changes.
>   */
> -static void restore_exclusive_pte(struct vm_area_struct *vma,
> +static void restore_exclusive_pte(struct mm_area *vma,
>  		struct folio *folio, struct page *page, unsigned long address,
>  		pte_t *ptep, pte_t orig_pte)
>  {
> @@ -759,7 +759,7 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
>   * Tries to restore an exclusive pte if the page lock can be acquired without
>   * sleeping.
>   */
> -static int try_restore_exclusive_pte(struct vm_area_struct *vma,
> +static int try_restore_exclusive_pte(struct mm_area *vma,
>  		unsigned long addr, pte_t *ptep, pte_t orig_pte)
>  {
>  	struct page *page = pfn_swap_entry_to_page(pte_to_swp_entry(orig_pte));
> @@ -782,8 +782,8 @@ static int try_restore_exclusive_pte(struct vm_area_struct *vma,
>
>  static unsigned long
>  copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> -		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
> +		pte_t *dst_pte, pte_t *src_pte, struct mm_area *dst_vma,
> +		struct mm_area *src_vma, unsigned long addr, int *rss)
>  {
>  	unsigned long vm_flags = dst_vma->vm_flags;
>  	pte_t orig_pte = ptep_get(src_pte);
> @@ -903,7 +903,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
>   * lock.
>   */
>  static inline int
> -copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_present_page(struct mm_area *dst_vma, struct mm_area *src_vma,
>  		  pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>  		  struct folio **prealloc, struct page *page)
>  {
> @@ -938,8 +938,8 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  	return 0;
>  }
>
> -static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
> -		struct vm_area_struct *src_vma, pte_t *dst_pte, pte_t *src_pte,
> +static __always_inline void __copy_present_ptes(struct mm_area *dst_vma,
> +		struct mm_area *src_vma, pte_t *dst_pte, pte_t *src_pte,
>  		pte_t pte, unsigned long addr, int nr)
>  {
>  	struct mm_struct *src_mm = src_vma->vm_mm;
> @@ -969,7 +969,7 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
>   * Otherwise, returns the number of copied PTEs (at least 1).
>   */
>  static inline int
> -copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_present_ptes(struct mm_area *dst_vma, struct mm_area *src_vma,
>  		 pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
>  		 int max_nr, int *rss, struct folio **prealloc)
>  {
> @@ -1046,7 +1046,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  }
>
>  static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
> -		struct vm_area_struct *vma, unsigned long addr, bool need_zero)
> +		struct mm_area *vma, unsigned long addr, bool need_zero)
>  {
>  	struct folio *new_folio;
>
> @@ -1068,7 +1068,7 @@ static inline struct folio *folio_prealloc(struct mm_struct *src_mm,
>  }
>
>  static int
> -copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_pte_range(struct mm_area *dst_vma, struct mm_area *src_vma,
>  	       pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
>  	       unsigned long end)
>  {
> @@ -1223,7 +1223,7 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  }
>
>  static inline int
> -copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_pmd_range(struct mm_area *dst_vma, struct mm_area *src_vma,
>  	       pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
>  	       unsigned long end)
>  {
> @@ -1260,7 +1260,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  }
>
>  static inline int
> -copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_pud_range(struct mm_area *dst_vma, struct mm_area *src_vma,
>  	       p4d_t *dst_p4d, p4d_t *src_p4d, unsigned long addr,
>  	       unsigned long end)
>  {
> @@ -1297,7 +1297,7 @@ copy_pud_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  }
>
>  static inline int
> -copy_p4d_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> +copy_p4d_range(struct mm_area *dst_vma, struct mm_area *src_vma,
>  	       pgd_t *dst_pgd, pgd_t *src_pgd, unsigned long addr,
>  	       unsigned long end)
>  {
> @@ -1326,7 +1326,7 @@ copy_p4d_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>   * when the child accesses the memory range.
>   */
>  static bool
> -vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
> +vma_needs_copy(struct mm_area *dst_vma, struct mm_area *src_vma)
>  {
>  	/*
>  	 * Always copy pgtables when dst_vma has uffd-wp enabled even if it's
> @@ -1353,7 +1353,7 @@ vma_needs_copy(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
>  }
>
>  int
> -copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
> +copy_page_range(struct mm_area *dst_vma, struct mm_area *src_vma)
>  {
>  	pgd_t *src_pgd, *dst_pgd;
>  	unsigned long addr = src_vma->vm_start;
> @@ -1461,7 +1461,7 @@ static inline bool zap_drop_markers(struct zap_details *details)
>   * Returns true if uffd-wp ptes were installed, false otherwise.
>   */
>  static inline bool
> -zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
> +zap_install_uffd_wp_if_needed(struct mm_area *vma,
>  			      unsigned long addr, pte_t *pte, int nr,
>  			      struct zap_details *details, pte_t pteval)
>  {
> @@ -1489,7 +1489,7 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
>  }
>
>  static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, struct folio *folio,
> +		struct mm_area *vma, struct folio *folio,
>  		struct page *page, pte_t *pte, pte_t ptent, unsigned int nr,
>  		unsigned long addr, struct zap_details *details, int *rss,
>  		bool *force_flush, bool *force_break, bool *any_skipped)
> @@ -1540,7 +1540,7 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
>   * Returns the number of processed (skipped or zapped) PTEs (at least 1).
>   */
>  static inline int zap_present_ptes(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
> +		struct mm_area *vma, pte_t *pte, pte_t ptent,
>  		unsigned int max_nr, unsigned long addr,
>  		struct zap_details *details, int *rss, bool *force_flush,
>  		bool *force_break, bool *any_skipped)
> @@ -1589,7 +1589,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
>  }
>
>  static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, pte_t *pte, pte_t ptent,
> +		struct mm_area *vma, pte_t *pte, pte_t ptent,
>  		unsigned int max_nr, unsigned long addr,
>  		struct zap_details *details, int *rss, bool *any_skipped)
>  {
> @@ -1659,7 +1659,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
>  }
>
>  static inline int do_zap_pte_range(struct mmu_gather *tlb,
> -				   struct vm_area_struct *vma, pte_t *pte,
> +				   struct mm_area *vma, pte_t *pte,
>  				   unsigned long addr, unsigned long end,
>  				   struct zap_details *details, int *rss,
>  				   bool *force_flush, bool *force_break,
> @@ -1695,7 +1695,7 @@ static inline int do_zap_pte_range(struct mmu_gather *tlb,
>  }
>
>  static unsigned long zap_pte_range(struct mmu_gather *tlb,
> -				struct vm_area_struct *vma, pmd_t *pmd,
> +				struct mm_area *vma, pmd_t *pmd,
>  				unsigned long addr, unsigned long end,
>  				struct zap_details *details)
>  {
> @@ -1787,7 +1787,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>  }
>
>  static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
> -				struct vm_area_struct *vma, pud_t *pud,
> +				struct mm_area *vma, pud_t *pud,
>  				unsigned long addr, unsigned long end,
>  				struct zap_details *details)
>  {
> @@ -1829,7 +1829,7 @@ static inline unsigned long zap_pmd_range(struct mmu_gather *tlb,
>  }
>
>  static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
> -				struct vm_area_struct *vma, p4d_t *p4d,
> +				struct mm_area *vma, p4d_t *p4d,
>  				unsigned long addr, unsigned long end,
>  				struct zap_details *details)
>  {
> @@ -1858,7 +1858,7 @@ static inline unsigned long zap_pud_range(struct mmu_gather *tlb,
>  }
>
>  static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
> -				struct vm_area_struct *vma, pgd_t *pgd,
> +				struct mm_area *vma, pgd_t *pgd,
>  				unsigned long addr, unsigned long end,
>  				struct zap_details *details)
>  {
> @@ -1877,7 +1877,7 @@ static inline unsigned long zap_p4d_range(struct mmu_gather *tlb,
>  }
>
>  void unmap_page_range(struct mmu_gather *tlb,
> -			     struct vm_area_struct *vma,
> +			     struct mm_area *vma,
>  			     unsigned long addr, unsigned long end,
>  			     struct zap_details *details)
>  {
> @@ -1898,7 +1898,7 @@ void unmap_page_range(struct mmu_gather *tlb,
>
>
>  static void unmap_single_vma(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, unsigned long start_addr,
> +		struct mm_area *vma, unsigned long start_addr,
>  		unsigned long end_addr,
>  		struct zap_details *details, bool mm_wr_locked)
>  {
> @@ -1963,7 +1963,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
>   * drops the lock and schedules.
>   */
>  void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
> -		struct vm_area_struct *vma, unsigned long start_addr,
> +		struct mm_area *vma, unsigned long start_addr,
>  		unsigned long end_addr, unsigned long tree_end,
>  		bool mm_wr_locked)
>  {
> @@ -1991,14 +1991,14 @@ void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
>
>  /**
>   * zap_page_range_single - remove user pages in a given range
> - * @vma: vm_area_struct holding the applicable pages
> + * @vma: mm_area holding the applicable pages
>   * @address: starting address of pages to zap
>   * @size: number of bytes to zap
>   * @details: details of shared cache invalidation
>   *
>   * The range must fit into one VMA.
>   */
> -void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> +void zap_page_range_single(struct mm_area *vma, unsigned long address,
>  		unsigned long size, struct zap_details *details)
>  {
>  	const unsigned long end = address + size;
> @@ -2023,7 +2023,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>
>  /**
>   * zap_vma_ptes - remove ptes mapping the vma
> - * @vma: vm_area_struct holding ptes to be zapped
> + * @vma: mm_area holding ptes to be zapped
>   * @address: starting address of pages to zap
>   * @size: number of bytes to zap
>   *
> @@ -2032,7 +2032,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>   * The entire address range must be fully contained within the vma.
>   *
>   */
> -void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
> +void zap_vma_ptes(struct mm_area *vma, unsigned long address,
>  		unsigned long size)
>  {
>  	if (!range_in_vma(vma, address, address + size) ||
> @@ -2075,7 +2075,7 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
>  	return pte_alloc_map_lock(mm, pmd, addr, ptl);
>  }
>
> -static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
> +static bool vm_mixed_zeropage_allowed(struct mm_area *vma)
>  {
>  	VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
>  	/*
> @@ -2105,7 +2105,7 @@ static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma)
>  	       (vma_is_fsdax(vma) || vma->vm_flags & VM_IO);
>  }
>
> -static int validate_page_before_insert(struct vm_area_struct *vma,
> +static int validate_page_before_insert(struct mm_area *vma,
>  				       struct page *page)
>  {
>  	struct folio *folio = page_folio(page);
> @@ -2124,7 +2124,7 @@ static int validate_page_before_insert(struct vm_area_struct *vma,
>  	return 0;
>  }
>
> -static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
> +static int insert_page_into_pte_locked(struct mm_area *vma, pte_t *pte,
>  				unsigned long addr, struct page *page,
>  				pgprot_t prot, bool mkwrite)
>  {
> @@ -2165,7 +2165,7 @@ static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
>  	return 0;
>  }
>
> -static int insert_page(struct vm_area_struct *vma, unsigned long addr,
> +static int insert_page(struct mm_area *vma, unsigned long addr,
>  			struct page *page, pgprot_t prot, bool mkwrite)
>  {
>  	int retval;
> @@ -2186,7 +2186,7 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
>  	return retval;
>  }
>
> -static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
> +static int insert_page_in_batch_locked(struct mm_area *vma, pte_t *pte,
>  			unsigned long addr, struct page *page, pgprot_t prot)
>  {
>  	int err;
> @@ -2200,7 +2200,7 @@ static int insert_page_in_batch_locked(struct vm_area_struct *vma, pte_t *pte,
>  /* insert_pages() amortizes the cost of spinlock operations
>   * when inserting pages in a loop.
>   */
> -static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
> +static int insert_pages(struct mm_area *vma, unsigned long addr,
>  			struct page **pages, unsigned long *num, pgprot_t prot)
>  {
>  	pmd_t *pmd = NULL;
> @@ -2273,7 +2273,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
>   *
>   * The same restrictions apply as in vm_insert_page().
>   */
> -int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
> +int vm_insert_pages(struct mm_area *vma, unsigned long addr,
>  			struct page **pages, unsigned long *num)
>  {
>  	const unsigned long end_addr = addr + (*num * PAGE_SIZE) - 1;
> @@ -2320,7 +2320,7 @@ EXPORT_SYMBOL(vm_insert_pages);
>   *
>   * Return: %0 on success, negative error code otherwise.
>   */
> -int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
> +int vm_insert_page(struct mm_area *vma, unsigned long addr,
>  			struct page *page)
>  {
>  	if (addr < vma->vm_start || addr >= vma->vm_end)
> @@ -2347,7 +2347,7 @@ EXPORT_SYMBOL(vm_insert_page);
>   *
>   * Return: 0 on success and error code otherwise.
>   */
> -static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> +static int __vm_map_pages(struct mm_area *vma, struct page **pages,
>  				unsigned long num, unsigned long offset)
>  {
>  	unsigned long count = vma_pages(vma);
> @@ -2390,7 +2390,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages,
>   * Context: Process context. Called by mmap handlers.
>   * Return: 0 on success and error code otherwise.
>   */
> -int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> +int vm_map_pages(struct mm_area *vma, struct page **pages,
>  				unsigned long num)
>  {
>  	return __vm_map_pages(vma, pages, num, vma->vm_pgoff);
> @@ -2410,14 +2410,14 @@ EXPORT_SYMBOL(vm_map_pages);
>   * Context: Process context. Called by mmap handlers.
>   * Return: 0 on success and error code otherwise.
>   */
> -int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
> +int vm_map_pages_zero(struct mm_area *vma, struct page **pages,
>  				unsigned long num)
>  {
>  	return __vm_map_pages(vma, pages, num, 0);
>  }
>  EXPORT_SYMBOL(vm_map_pages_zero);
>
> -static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> +static vm_fault_t insert_pfn(struct mm_area *vma, unsigned long addr,
>  			pfn_t pfn, pgprot_t prot, bool mkwrite)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -2504,7 +2504,7 @@ static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>   * Context: Process context.  May allocate using %GFP_KERNEL.
>   * Return: vm_fault_t value.
>   */
> -vm_fault_t vmf_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
> +vm_fault_t vmf_insert_pfn_prot(struct mm_area *vma, unsigned long addr,
>  			unsigned long pfn, pgprot_t pgprot)
>  {
>  	/*
> @@ -2552,14 +2552,14 @@ EXPORT_SYMBOL(vmf_insert_pfn_prot);
>   * Context: Process context.  May allocate using %GFP_KERNEL.
>   * Return: vm_fault_t value.
>   */
> -vm_fault_t vmf_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> +vm_fault_t vmf_insert_pfn(struct mm_area *vma, unsigned long addr,
>  			unsigned long pfn)
>  {
>  	return vmf_insert_pfn_prot(vma, addr, pfn, vma->vm_page_prot);
>  }
>  EXPORT_SYMBOL(vmf_insert_pfn);
>
> -static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
> +static bool vm_mixed_ok(struct mm_area *vma, pfn_t pfn, bool mkwrite)
>  {
>  	if (unlikely(is_zero_pfn(pfn_t_to_pfn(pfn))) &&
>  	    (mkwrite || !vm_mixed_zeropage_allowed(vma)))
> @@ -2576,7 +2576,7 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, pfn_t pfn, bool mkwrite)
>  	return false;
>  }
>
> -static vm_fault_t __vm_insert_mixed(struct vm_area_struct *vma,
> +static vm_fault_t __vm_insert_mixed(struct mm_area *vma,
>  		unsigned long addr, pfn_t pfn, bool mkwrite)
>  {
>  	pgprot_t pgprot = vma->vm_page_prot;
> @@ -2643,7 +2643,7 @@ vm_fault_t vmf_insert_page_mkwrite(struct vm_fault *vmf, struct page *page,
>  }
>  EXPORT_SYMBOL_GPL(vmf_insert_page_mkwrite);
>
> -vm_fault_t vmf_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
> +vm_fault_t vmf_insert_mixed(struct mm_area *vma, unsigned long addr,
>  		pfn_t pfn)
>  {
>  	return __vm_insert_mixed(vma, addr, pfn, false);
> @@ -2655,7 +2655,7 @@ EXPORT_SYMBOL(vmf_insert_mixed);
>   *  different entry in the meantime, we treat that as success, as we assume
>   *  the same entry was actually inserted.
>   */
> -vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
> +vm_fault_t vmf_insert_mixed_mkwrite(struct mm_area *vma,
>  		unsigned long addr, pfn_t pfn)
>  {
>  	return __vm_insert_mixed(vma, addr, pfn, true);
> @@ -2759,7 +2759,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
>  	return 0;
>  }
>
> -static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long addr,
> +static int remap_pfn_range_internal(struct mm_area *vma, unsigned long addr,
>  		unsigned long pfn, unsigned long size, pgprot_t prot)
>  {
>  	pgd_t *pgd;
> @@ -2816,7 +2816,7 @@ static int remap_pfn_range_internal(struct vm_area_struct *vma, unsigned long ad
>   * Variant of remap_pfn_range that does not call track_pfn_remap.  The caller
>   * must have pre-validated the caching bits of the pgprot_t.
>   */
> -int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
> +int remap_pfn_range_notrack(struct mm_area *vma, unsigned long addr,
>  		unsigned long pfn, unsigned long size, pgprot_t prot)
>  {
>  	int error = remap_pfn_range_internal(vma, addr, pfn, size, prot);
> @@ -2845,7 +2845,7 @@ int remap_pfn_range_notrack(struct vm_area_struct *vma, unsigned long addr,
>   *
>   * Return: %0 on success, negative error code otherwise.
>   */
> -int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
> +int remap_pfn_range(struct mm_area *vma, unsigned long addr,
>  		    unsigned long pfn, unsigned long size, pgprot_t prot)
>  {
>  	int err;
> @@ -2876,7 +2876,7 @@ EXPORT_SYMBOL(remap_pfn_range);
>   *
>   * Return: %0 on success, negative error code otherwise.
>   */
> -int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
> +int vm_iomap_memory(struct mm_area *vma, phys_addr_t start, unsigned long len)
>  {
>  	unsigned long vm_len, pfn, pages;
>
> @@ -3161,7 +3161,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
>  	int ret;
>  	void *kaddr;
>  	void __user *uaddr;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long addr = vmf->address;
>
> @@ -3253,7 +3253,7 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
>  	return ret;
>  }
>
> -static gfp_t __get_fault_gfp_mask(struct vm_area_struct *vma)
> +static gfp_t __get_fault_gfp_mask(struct mm_area *vma)
>  {
>  	struct file *vm_file = vma->vm_file;
>
> @@ -3308,7 +3308,7 @@ static vm_fault_t do_page_mkwrite(struct vm_fault *vmf, struct folio *folio)
>   */
>  static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct address_space *mapping;
>  	struct folio *folio = page_folio(vmf->page);
>  	bool dirtied;
> @@ -3362,7 +3362,7 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
>  static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
>  	__releases(vmf->ptl)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	pte_t entry;
>
>  	VM_BUG_ON(!(vmf->flags & FAULT_FLAG_WRITE));
> @@ -3395,7 +3395,7 @@ static inline void wp_page_reuse(struct vm_fault *vmf, struct folio *folio)
>   */
>  static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>
>  	if (vma->vm_ops->map_pages || !(vmf->flags & FAULT_FLAG_VMA_LOCK))
>  		return 0;
> @@ -3420,7 +3420,7 @@ static inline vm_fault_t vmf_can_call_fault(const struct vm_fault *vmf)
>   */
>  vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	vm_fault_t ret = 0;
>
>  	if (likely(vma->anon_vma))
> @@ -3456,7 +3456,7 @@ vm_fault_t __vmf_anon_prepare(struct vm_fault *vmf)
>  static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>  {
>  	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct folio *old_folio = NULL;
>  	struct folio *new_folio = NULL;
> @@ -3647,7 +3647,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio
>   */
>  static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>
>  	if (vma->vm_ops && vma->vm_ops->pfn_mkwrite) {
>  		vm_fault_t ret;
> @@ -3670,7 +3670,7 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
>  static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
>  	__releases(vmf->ptl)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	vm_fault_t ret = 0;
>
>  	folio_get(folio);
> @@ -3709,7 +3709,7 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio)
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	bool exclusive = false;
>
> @@ -3775,14 +3775,14 @@ static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
>  }
>  #else /* !CONFIG_TRANSPARENT_HUGEPAGE */
>  static bool __wp_can_reuse_large_anon_folio(struct folio *folio,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	BUILD_BUG();
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
>  static bool wp_can_reuse_anon_folio(struct folio *folio,
> -				    struct vm_area_struct *vma)
> +				    struct mm_area *vma)
>  {
>  	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && folio_test_large(folio))
>  		return __wp_can_reuse_large_anon_folio(folio, vma);
> @@ -3848,7 +3848,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  	__releases(vmf->ptl)
>  {
>  	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio = NULL;
>  	pte_t pte;
>
> @@ -3939,7 +3939,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  	return wp_page_copy(vmf);
>  }
>
> -static void unmap_mapping_range_vma(struct vm_area_struct *vma,
> +static void unmap_mapping_range_vma(struct mm_area *vma,
>  		unsigned long start_addr, unsigned long end_addr,
>  		struct zap_details *details)
>  {
> @@ -3951,7 +3951,7 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
>  					    pgoff_t last_index,
>  					    struct zap_details *details)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	pgoff_t vba, vea, zba, zea;
>
>  	vma_interval_tree_foreach(vma, root, first_index, last_index) {
> @@ -4073,7 +4073,7 @@ EXPORT_SYMBOL(unmap_mapping_range);
>  static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>  {
>  	struct folio *folio = page_folio(vmf->page);
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mmu_notifier_range range;
>  	vm_fault_t ret;
>
> @@ -4114,7 +4114,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>  }
>
>  static inline bool should_try_to_free_swap(struct folio *folio,
> -					   struct vm_area_struct *vma,
> +					   struct mm_area *vma,
>  					   unsigned int fault_flags)
>  {
>  	if (!folio_test_swapcache(folio))
> @@ -4205,7 +4205,7 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>
>  static struct folio *__alloc_swap_folio(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio;
>  	swp_entry_t entry;
>
> @@ -4303,7 +4303,7 @@ static inline unsigned long thp_swap_suitable_orders(pgoff_t swp_offset,
>
>  static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	unsigned long orders;
>  	struct folio *folio;
>  	unsigned long addr;
> @@ -4399,7 +4399,7 @@ static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);
>   */
>  vm_fault_t do_swap_page(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *swapcache, *folio = NULL;
>  	DECLARE_WAITQUEUE(wait, current);
>  	struct page *page;
> @@ -4859,7 +4859,7 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
>
>  static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	unsigned long orders;
>  	struct folio *folio;
> @@ -4949,7 +4949,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>   */
>  static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	unsigned long addr = vmf->address;
>  	struct folio *folio;
>  	vm_fault_t ret = 0;
> @@ -5069,7 +5069,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>   */
>  static vm_fault_t __do_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio;
>  	vm_fault_t ret;
>
> @@ -5126,7 +5126,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  static void deposit_prealloc_pte(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>
>  	pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
>  	/*
> @@ -5140,7 +5140,7 @@ static void deposit_prealloc_pte(struct vm_fault *vmf)
>  vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
>  {
>  	struct folio *folio = page_folio(page);
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	bool write = vmf->flags & FAULT_FLAG_WRITE;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>  	pmd_t entry;
> @@ -5229,7 +5229,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
>  void set_pte_range(struct vm_fault *vmf, struct folio *folio,
>  		struct page *page, unsigned int nr, unsigned long addr)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	bool write = vmf->flags & FAULT_FLAG_WRITE;
>  	bool prefault = !in_range(vmf->address, addr, nr * PAGE_SIZE);
>  	pte_t entry;
> @@ -5285,7 +5285,7 @@ static bool vmf_pte_changed(struct vm_fault *vmf)
>   */
>  vm_fault_t finish_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct page *page;
>  	struct folio *folio;
>  	vm_fault_t ret;
> @@ -5528,7 +5528,7 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
>
>  static vm_fault_t do_cow_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio;
>  	vm_fault_t ret;
>
> @@ -5570,7 +5570,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
>
>  static vm_fault_t do_shared_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	vm_fault_t ret, tmp;
>  	struct folio *folio;
>
> @@ -5620,7 +5620,7 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
>   */
>  static vm_fault_t do_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mm_struct *vm_mm = vma->vm_mm;
>  	vm_fault_t ret;
>
> @@ -5666,7 +5666,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
>  		      unsigned long addr, int *flags,
>  		      bool writable, int *last_cpupid)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>
>  	/*
>  	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
> @@ -5709,7 +5709,7 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
>  	return mpol_misplaced(folio, vmf, addr);
>  }
>
> -static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> +static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct mm_area *vma,
>  					unsigned long fault_addr, pte_t *fault_pte,
>  					bool writable)
>  {
> @@ -5724,7 +5724,7 @@ static void numa_rebuild_single_mapping(struct vm_fault *vmf, struct vm_area_str
>  	update_mmu_cache_range(vmf, vma, fault_addr, fault_pte, 1);
>  }
>
> -static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_struct *vma,
> +static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct mm_area *vma,
>  				       struct folio *folio, pte_t fault_pte,
>  				       bool ignore_writable, bool pte_write_upgrade)
>  {
> @@ -5765,7 +5765,7 @@ static void numa_rebuild_large_mapping(struct vm_fault *vmf, struct vm_area_stru
>
>  static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct folio *folio = NULL;
>  	int nid = NUMA_NO_NODE;
>  	bool writable = false, ignore_writable = false;
> @@ -5856,7 +5856,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>
>  static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	if (vma_is_anonymous(vma))
>  		return do_huge_pmd_anonymous_page(vmf);
>  	if (vma->vm_ops->huge_fault)
> @@ -5867,7 +5867,7 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
>  /* `inline' is required to avoid gcc 4.1.2 build error */
>  static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
>  	vm_fault_t ret;
>
> @@ -5900,7 +5900,7 @@ static vm_fault_t create_huge_pud(struct vm_fault *vmf)
>  {
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
>  	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	/* No support for anonymous transparent PUD pages yet */
>  	if (vma_is_anonymous(vma))
>  		return VM_FAULT_FALLBACK;
> @@ -5914,7 +5914,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, pud_t orig_pud)
>  {
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) &&			\
>  	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	vm_fault_t ret;
>
>  	/* No support for anonymous transparent PUD pages yet */
> @@ -6043,7 +6043,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
>   * the result, the mmap_lock is not held on exit.  See filemap_fault()
>   * and __folio_lock_or_retry().
>   */
> -static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
> +static vm_fault_t __handle_mm_fault(struct mm_area *vma,
>  		unsigned long address, unsigned int flags)
>  {
>  	struct vm_fault vmf = {
> @@ -6208,7 +6208,7 @@ static inline void mm_account_fault(struct mm_struct *mm, struct pt_regs *regs,
>  }
>
>  #ifdef CONFIG_LRU_GEN
> -static void lru_gen_enter_fault(struct vm_area_struct *vma)
> +static void lru_gen_enter_fault(struct mm_area *vma)
>  {
>  	/* the LRU algorithm only applies to accesses with recency */
>  	current->in_lru_fault = vma_has_recency(vma);
> @@ -6219,7 +6219,7 @@ static void lru_gen_exit_fault(void)
>  	current->in_lru_fault = false;
>  }
>  #else
> -static void lru_gen_enter_fault(struct vm_area_struct *vma)
> +static void lru_gen_enter_fault(struct mm_area *vma)
>  {
>  }
>
> @@ -6228,7 +6228,7 @@ static void lru_gen_exit_fault(void)
>  }
>  #endif /* CONFIG_LRU_GEN */
>
> -static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
> +static vm_fault_t sanitize_fault_flags(struct mm_area *vma,
>  				       unsigned int *flags)
>  {
>  	if (unlikely(*flags & FAULT_FLAG_UNSHARE)) {
> @@ -6270,7 +6270,7 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
>   * The mmap_lock may have been released depending on flags and our
>   * return value.  See filemap_fault() and __folio_lock_or_retry().
>   */
> -vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> +vm_fault_t handle_mm_fault(struct mm_area *vma, unsigned long address,
>  			   unsigned int flags, struct pt_regs *regs)
>  {
>  	/* If the fault handler drops the mmap_lock, vma may be freed */
> @@ -6397,10 +6397,10 @@ static inline bool upgrade_mmap_lock_carefully(struct mm_struct *mm, struct pt_r
>   * We can also actually take the mm lock for writing if we
>   * need to extend the vma, which helps the VM layer a lot.
>   */
> -struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> +struct mm_area *lock_mm_and_find_vma(struct mm_struct *mm,
>  			unsigned long addr, struct pt_regs *regs)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if (!get_mmap_lock_carefully(mm, regs))
>  		return NULL;
> @@ -6454,7 +6454,7 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
>  #endif
>
>  #ifdef CONFIG_PER_VMA_LOCK
> -static inline bool __vma_enter_locked(struct vm_area_struct *vma, bool detaching)
> +static inline bool __vma_enter_locked(struct mm_area *vma, bool detaching)
>  {
>  	unsigned int tgt_refcnt = VMA_LOCK_OFFSET;
>
> @@ -6478,13 +6478,13 @@ static inline bool __vma_enter_locked(struct vm_area_struct *vma, bool detaching
>  	return true;
>  }
>
> -static inline void __vma_exit_locked(struct vm_area_struct *vma, bool *detached)
> +static inline void __vma_exit_locked(struct mm_area *vma, bool *detached)
>  {
>  	*detached = refcount_sub_and_test(VMA_LOCK_OFFSET, &vma->vm_refcnt);
>  	rwsem_release(&vma->vmlock_dep_map, _RET_IP_);
>  }
>
> -void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
> +void __vma_start_write(struct mm_area *vma, unsigned int mm_lock_seq)
>  {
>  	bool locked;
>
> @@ -6512,7 +6512,7 @@ void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
>  }
>  EXPORT_SYMBOL_GPL(__vma_start_write);
>
> -void vma_mark_detached(struct vm_area_struct *vma)
> +void vma_mark_detached(struct mm_area *vma)
>  {
>  	vma_assert_write_locked(vma);
>  	vma_assert_attached(vma);
> @@ -6541,11 +6541,11 @@ void vma_mark_detached(struct vm_area_struct *vma)
>   * stable and not isolated. If the VMA is not found or is being modified the
>   * function returns NULL.
>   */
> -struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
> +struct mm_area *lock_vma_under_rcu(struct mm_struct *mm,
>  					  unsigned long address)
>  {
>  	MA_STATE(mas, &mm->mm_mt, address, address);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	rcu_read_lock();
>  retry:
> @@ -6675,7 +6675,7 @@ static inline void pfnmap_args_setup(struct follow_pfnmap_args *args,
>  	args->special = special;
>  }
>
> -static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma)
> +static inline void pfnmap_lockdep_assert(struct mm_area *vma)
>  {
>  #ifdef CONFIG_LOCKDEP
>  	struct file *file = vma->vm_file;
> @@ -6722,7 +6722,7 @@ static inline void pfnmap_lockdep_assert(struct vm_area_struct *vma)
>   */
>  int follow_pfnmap_start(struct follow_pfnmap_args *args)
>  {
> -	struct vm_area_struct *vma = args->vma;
> +	struct mm_area *vma = args->vma;
>  	unsigned long address = args->address;
>  	struct mm_struct *mm = vma->vm_mm;
>  	spinlock_t *lock;
> @@ -6825,7 +6825,7 @@ EXPORT_SYMBOL_GPL(follow_pfnmap_end);
>   * iomem mapping. This callback is used by access_process_vm() when the @vma is
>   * not page based.
>   */
> -int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
> +int generic_access_phys(struct mm_area *vma, unsigned long addr,
>  			void *buf, int len, int write)
>  {
>  	resource_size_t phys_addr;
> @@ -6899,7 +6899,7 @@ static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
>  	while (len) {
>  		int bytes, offset;
>  		void *maddr;
> -		struct vm_area_struct *vma = NULL;
> +		struct mm_area *vma = NULL;
>  		struct page *page = get_user_page_vma_remote(mm, addr,
>  							     gup_flags, &vma);
>
> @@ -7024,7 +7024,7 @@ static int __copy_remote_vm_str(struct mm_struct *mm, unsigned long addr,
>  		int bytes, offset, retval;
>  		void *maddr;
>  		struct page *page;
> -		struct vm_area_struct *vma = NULL;
> +		struct mm_area *vma = NULL;
>
>  		page = get_user_page_vma_remote(mm, addr, gup_flags, &vma);
>  		if (IS_ERR(page)) {
> @@ -7120,7 +7120,7 @@ EXPORT_SYMBOL_GPL(copy_remote_vm_str);
>  void print_vma_addr(char *prefix, unsigned long ip)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * we might be running from an atomic context so we cannot sleep
> @@ -7251,7 +7251,7 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>
>  static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
>  				   unsigned long addr_hint,
> -				   struct vm_area_struct *vma,
> +				   struct mm_area *vma,
>  				   unsigned int nr_pages)
>  {
>  	unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
> @@ -7274,7 +7274,7 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
>  struct copy_subpage_arg {
>  	struct folio *dst;
>  	struct folio *src;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  };
>
>  static int copy_subpage(unsigned long addr, int idx, void *arg)
> @@ -7289,7 +7289,7 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
>  }
>
>  int copy_user_large_folio(struct folio *dst, struct folio *src,
> -			  unsigned long addr_hint, struct vm_area_struct *vma)
> +			  unsigned long addr_hint, struct mm_area *vma)
>  {
>  	unsigned int nr_pages = folio_nr_pages(dst);
>  	struct copy_subpage_arg arg = {
> @@ -7364,13 +7364,13 @@ void ptlock_free(struct ptdesc *ptdesc)
>  }
>  #endif
>
> -void vma_pgtable_walk_begin(struct vm_area_struct *vma)
> +void vma_pgtable_walk_begin(struct mm_area *vma)
>  {
>  	if (is_vm_hugetlb_page(vma))
>  		hugetlb_vma_lock_read(vma);
>  }
>
> -void vma_pgtable_walk_end(struct vm_area_struct *vma)
> +void vma_pgtable_walk_end(struct mm_area *vma)
>  {
>  	if (is_vm_hugetlb_page(vma))
>  		hugetlb_vma_unlock_read(vma);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index b28a1e6ae096..3403a4805d17 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -445,7 +445,7 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
>   */
>  void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	mmap_write_lock(mm);
> @@ -511,7 +511,7 @@ struct queue_pages {
>  	nodemask_t *nmask;
>  	unsigned long start;
>  	unsigned long end;
> -	struct vm_area_struct *first;
> +	struct mm_area *first;
>  	struct folio *large;		/* note last large folio encountered */
>  	long nr_failed;			/* could not be isolated at this time */
>  };
> @@ -566,7 +566,7 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
>  static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>  			unsigned long end, struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	struct folio *folio;
>  	struct queue_pages *qp = walk->private;
>  	unsigned long flags = qp->flags;
> @@ -698,7 +698,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
>   * an architecture makes a different choice, it will need further
>   * changes to the core.
>   */
> -unsigned long change_prot_numa(struct vm_area_struct *vma,
> +unsigned long change_prot_numa(struct mm_area *vma,
>  			unsigned long addr, unsigned long end)
>  {
>  	struct mmu_gather tlb;
> @@ -721,7 +721,7 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>  static int queue_pages_test_walk(unsigned long start, unsigned long end,
>  				struct mm_walk *walk)
>  {
> -	struct vm_area_struct *next, *vma = walk->vma;
> +	struct mm_area *next, *vma = walk->vma;
>  	struct queue_pages *qp = walk->private;
>  	unsigned long flags = qp->flags;
>
> @@ -817,7 +817,7 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
>   * Apply policy to a single VMA
>   * This must be called with the mmap_lock held for writing.
>   */
> -static int vma_replace_policy(struct vm_area_struct *vma,
> +static int vma_replace_policy(struct mm_area *vma,
>  				struct mempolicy *pol)
>  {
>  	int err;
> @@ -847,8 +847,8 @@ static int vma_replace_policy(struct vm_area_struct *vma,
>  }
>
>  /* Split or merge the VMA (if required) and apply the new policy */
> -static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
> -		struct vm_area_struct **prev, unsigned long start,
> +static int mbind_range(struct vma_iterator *vmi, struct mm_area *vma,
> +		struct mm_area **prev, unsigned long start,
>  		unsigned long end, struct mempolicy *new_pol)
>  {
>  	unsigned long vmstart, vmend;
> @@ -960,7 +960,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
>  {
>  	int err;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	struct mempolicy *pol = current->mempolicy, *pol_refcount = NULL;
>
>  	if (flags &
> @@ -1094,7 +1094,7 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest,
>  			    int flags)
>  {
>  	nodemask_t nmask;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	LIST_HEAD(pagelist);
>  	long nr_failed;
>  	long err = 0;
> @@ -1299,7 +1299,7 @@ static long do_mbind(unsigned long start, unsigned long len,
>  		     nodemask_t *nmask, unsigned long flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	struct vma_iterator vmi;
>  	struct migration_mpol mmpol;
>  	struct mempolicy *new;
> @@ -1572,7 +1572,7 @@ SYSCALL_DEFINE4(set_mempolicy_home_node, unsigned long, start, unsigned long, le
>  		unsigned long, home_node, unsigned long, flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	struct mempolicy *new, *old;
>  	unsigned long end;
>  	int err = -ENOENT;
> @@ -1799,7 +1799,7 @@ SYSCALL_DEFINE5(get_mempolicy, int __user *, policy,
>  	return kernel_get_mempolicy(policy, nmask, maxnode, addr, flags);
>  }
>
> -bool vma_migratable(struct vm_area_struct *vma)
> +bool vma_migratable(struct mm_area *vma)
>  {
>  	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
>  		return false;
> @@ -1827,7 +1827,7 @@ bool vma_migratable(struct vm_area_struct *vma)
>  	return true;
>  }
>
> -struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> +struct mempolicy *__get_vma_policy(struct mm_area *vma,
>  				   unsigned long addr, pgoff_t *ilx)
>  {
>  	*ilx = 0;
> @@ -1850,7 +1850,7 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
>   * freeing by another task.  It is the caller's responsibility to free the
>   * extra reference for shared policies.
>   */
> -struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> +struct mempolicy *get_vma_policy(struct mm_area *vma,
>  				 unsigned long addr, int order, pgoff_t *ilx)
>  {
>  	struct mempolicy *pol;
> @@ -1866,7 +1866,7 @@ struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
>  	return pol;
>  }
>
> -bool vma_policy_mof(struct vm_area_struct *vma)
> +bool vma_policy_mof(struct mm_area *vma)
>  {
>  	struct mempolicy *pol;
>
> @@ -2135,7 +2135,7 @@ static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
>   * If the effective policy is 'bind' or 'prefer-many', returns a pointer
>   * to the mempolicy's @nodemask for filtering the zonelist.
>   */
> -int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
> +int huge_node(struct mm_area *vma, unsigned long addr, gfp_t gfp_flags,
>  		struct mempolicy **mpol, nodemask_t **nodemask)
>  {
>  	pgoff_t ilx;
> @@ -2341,7 +2341,7 @@ struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int order,
>   *
>   * Return: The folio on success or NULL if allocation fails.
>   */
> -struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct *vma,
> +struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct mm_area *vma,
>  		unsigned long addr)
>  {
>  	struct mempolicy *pol;
> @@ -2607,7 +2607,7 @@ unsigned long alloc_pages_bulk_mempolicy_noprof(gfp_t gfp,
>  				       nr_pages, page_array);
>  }
>
> -int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
> +int vma_dup_policy(struct mm_area *src, struct mm_area *dst)
>  {
>  	struct mempolicy *pol = mpol_dup(src->vm_policy);
>
> @@ -2795,7 +2795,7 @@ int mpol_misplaced(struct folio *folio, struct vm_fault *vmf,
>  	pgoff_t ilx;
>  	struct zoneref *z;
>  	int curnid = folio_nid(folio);
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	int thiscpu = raw_smp_processor_id();
>  	int thisnid = numa_node_id();
>  	int polnid = NUMA_NO_NODE;
> @@ -3054,7 +3054,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
>  }
>
>  int mpol_set_shared_policy(struct shared_policy *sp,
> -			struct vm_area_struct *vma, struct mempolicy *pol)
> +			struct mm_area *vma, struct mempolicy *pol)
>  {
>  	int err;
>  	struct sp_node *new = NULL;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index f3ee6d8d5e2e..7909e4ae797c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -237,7 +237,7 @@ struct rmap_walk_arg {
>   * Restore a potential migration pte to a working pte entry
>   */
>  static bool remove_migration_pte(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long addr, void *arg)
> +		struct mm_area *vma, unsigned long addr, void *arg)
>  {
>  	struct rmap_walk_arg *rmap_walk_arg = arg;
>  	DEFINE_FOLIO_VMA_WALK(pvmw, rmap_walk_arg->folio, vma, addr, PVMW_SYNC | PVMW_MIGRATION);
> @@ -405,7 +405,7 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
>   *
>   * This function will release the vma lock before returning.
>   */
> -void migration_entry_wait_huge(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
> +void migration_entry_wait_huge(struct mm_area *vma, unsigned long addr, pte_t *ptep)
>  {
>  	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, ptep);
>  	pte_t pte;
> @@ -2254,7 +2254,7 @@ static int __add_folio_for_migration(struct folio *folio, int node,
>  static int add_folio_for_migration(struct mm_struct *mm, const void __user *p,
>  		int node, struct list_head *pagelist, bool migrate_all)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct folio_walk fw;
>  	struct folio *folio;
>  	unsigned long addr;
> @@ -2423,7 +2423,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
>
>  	for (i = 0; i < nr_pages; i++) {
>  		unsigned long addr = (unsigned long)(*pages);
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>  		struct folio_walk fw;
>  		struct folio *folio;
>  		int err = -EFAULT;
> @@ -2640,7 +2640,7 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
>   * permitted. Must be called with the PTL still held.
>   */
>  int migrate_misplaced_folio_prepare(struct folio *folio,
> -		struct vm_area_struct *vma, int node)
> +		struct mm_area *vma, int node)
>  {
>  	int nr_pages = folio_nr_pages(folio);
>  	pg_data_t *pgdat = NODE_DATA(node);
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 3158afe7eb23..96786d64edd6 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -62,7 +62,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
>  	struct migrate_vma *migrate = walk->private;
>  	struct folio *fault_folio = migrate->fault_page ?
>  		page_folio(migrate->fault_page) : NULL;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long addr = start, unmapped = 0;
>  	spinlock_t *ptl;
> @@ -589,7 +589,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
>  				    unsigned long *src)
>  {
>  	struct folio *folio = page_folio(page);
> -	struct vm_area_struct *vma = migrate->vma;
> +	struct mm_area *vma = migrate->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	bool flush = false;
>  	spinlock_t *ptl;
> diff --git a/mm/mincore.c b/mm/mincore.c
> index 832f29f46767..6b53d9361ec7 100644
> --- a/mm/mincore.c
> +++ b/mm/mincore.c
> @@ -70,7 +70,7 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t index)
>  }
>
>  static int __mincore_unmapped_range(unsigned long addr, unsigned long end,
> -				struct vm_area_struct *vma, unsigned char *vec)
> +				struct mm_area *vma, unsigned char *vec)
>  {
>  	unsigned long nr = (end - addr) >> PAGE_SHIFT;
>  	int i;
> @@ -101,7 +101,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  			struct mm_walk *walk)
>  {
>  	spinlock_t *ptl;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	pte_t *ptep;
>  	unsigned char *vec = walk->private;
>  	int nr = (end - addr) >> PAGE_SHIFT;
> @@ -155,7 +155,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  	return 0;
>  }
>
> -static inline bool can_do_mincore(struct vm_area_struct *vma)
> +static inline bool can_do_mincore(struct mm_area *vma)
>  {
>  	if (vma_is_anonymous(vma))
>  		return true;
> @@ -186,7 +186,7 @@ static const struct mm_walk_ops mincore_walk_ops = {
>   */
>  static long do_mincore(unsigned long addr, unsigned long pages, unsigned char *vec)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long end;
>  	int err;
>
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 3cb72b579ffd..8c13cce0d0cb 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -319,7 +319,7 @@ static inline unsigned int folio_mlock_step(struct folio *folio,
>  }
>
>  static inline bool allow_mlock_munlock(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long start,
> +		struct mm_area *vma, unsigned long start,
>  		unsigned long end, unsigned int step)
>  {
>  	/*
> @@ -353,7 +353,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
>  			   unsigned long end, struct mm_walk *walk)
>
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	spinlock_t *ptl;
>  	pte_t *start_pte, *pte;
>  	pte_t ptent;
> @@ -422,7 +422,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
>   * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
>   * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
>   */
> -static void mlock_vma_pages_range(struct vm_area_struct *vma,
> +static void mlock_vma_pages_range(struct mm_area *vma,
>  	unsigned long start, unsigned long end, vm_flags_t newflags)
>  {
>  	static const struct mm_walk_ops mlock_walk_ops = {
> @@ -465,8 +465,8 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
>   *
>   * For vmas that pass the filters, merge/split as appropriate.
>   */
> -static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> -	       struct vm_area_struct **prev, unsigned long start,
> +static int mlock_fixup(struct vma_iterator *vmi, struct mm_area *vma,
> +	       struct mm_area **prev, unsigned long start,
>  	       unsigned long end, vm_flags_t newflags)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -517,7 +517,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
>  				vm_flags_t flags)
>  {
>  	unsigned long nstart, end, tmp;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	VMA_ITERATOR(vmi, current->mm, start);
>
>  	VM_BUG_ON(offset_in_page(start));
> @@ -573,7 +573,7 @@ static int apply_vma_lock_flags(unsigned long start, size_t len,
>  static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm,
>  		unsigned long start, size_t len)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long count = 0;
>  	unsigned long end;
>  	VMA_ITERATOR(vmi, mm, start);
> @@ -706,7 +706,7 @@ SYSCALL_DEFINE2(munlock, unsigned long, start, size_t, len)
>  static int apply_mlockall_flags(int flags)
>  {
>  	VMA_ITERATOR(vmi, current->mm, 0);
> -	struct vm_area_struct *vma, *prev = NULL;
> +	struct mm_area *vma, *prev = NULL;
>  	vm_flags_t to_add = 0;
>
>  	current->mm->def_flags &= ~VM_LOCKED_MASK;
> diff --git a/mm/mmap.c b/mm/mmap.c
> index bd210aaf7ebd..d7d95a6f343d 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -78,7 +78,7 @@ static bool ignore_rlimit_data;
>  core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
>
>  /* Update vma->vm_page_prot to reflect vma->vm_flags. */
> -void vma_set_page_prot(struct vm_area_struct *vma)
> +void vma_set_page_prot(struct mm_area *vma)
>  {
>  	unsigned long vm_flags = vma->vm_flags;
>  	pgprot_t vm_page_prot;
> @@ -116,7 +116,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
>  {
>  	unsigned long newbrk, oldbrk, origbrk;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *brkvma, *next = NULL;
> +	struct mm_area *brkvma, *next = NULL;
>  	unsigned long min_brk;
>  	bool populate = false;
>  	LIST_HEAD(uf);
> @@ -693,7 +693,7 @@ generic_get_unmapped_area(struct file *filp, unsigned long addr,
>  			  unsigned long flags, vm_flags_t vm_flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	struct vm_unmapped_area_info info = {};
>  	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
>
> @@ -741,7 +741,7 @@ generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr,
>  				  unsigned long len, unsigned long pgoff,
>  				  unsigned long flags, vm_flags_t vm_flags)
>  {
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	struct mm_struct *mm = current->mm;
>  	struct vm_unmapped_area_info info = {};
>  	const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags);
> @@ -886,7 +886,7 @@ EXPORT_SYMBOL(mm_get_unmapped_area);
>   * Returns: The first VMA within the provided range, %NULL otherwise.  Assumes
>   * start_addr < end_addr.
>   */
> -struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
> +struct mm_area *find_vma_intersection(struct mm_struct *mm,
>  					     unsigned long start_addr,
>  					     unsigned long end_addr)
>  {
> @@ -905,7 +905,7 @@ EXPORT_SYMBOL(find_vma_intersection);
>   * Returns: The VMA associated with addr, or the next VMA.
>   * May return %NULL in the case of no VMA at addr or above.
>   */
> -struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *find_vma(struct mm_struct *mm, unsigned long addr)
>  {
>  	unsigned long index = addr;
>
> @@ -927,11 +927,11 @@ EXPORT_SYMBOL(find_vma);
>   * Returns: The VMA associated with @addr, or the next vma.
>   * May return %NULL in the case of no vma at addr or above.
>   */
> -struct vm_area_struct *
> +struct mm_area *
>  find_vma_prev(struct mm_struct *mm, unsigned long addr,
> -			struct vm_area_struct **pprev)
> +			struct mm_area **pprev)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, addr);
>
>  	vma = vma_iter_load(&vmi);
> @@ -958,14 +958,14 @@ static int __init cmdline_parse_stack_guard_gap(char *p)
>  __setup("stack_guard_gap=", cmdline_parse_stack_guard_gap);
>
>  #ifdef CONFIG_STACK_GROWSUP
> -int expand_stack_locked(struct vm_area_struct *vma, unsigned long address)
> +int expand_stack_locked(struct mm_area *vma, unsigned long address)
>  {
>  	return expand_upwards(vma, address);
>  }
>
> -struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
>  {
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>
>  	addr &= PAGE_MASK;
>  	vma = find_vma_prev(mm, addr, &prev);
> @@ -980,14 +980,14 @@ struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned lon
>  	return prev;
>  }
>  #else
> -int expand_stack_locked(struct vm_area_struct *vma, unsigned long address)
> +int expand_stack_locked(struct mm_area *vma, unsigned long address)
>  {
>  	return expand_downwards(vma, address);
>  }
>
> -struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *find_extend_vma_locked(struct mm_struct *mm, unsigned long addr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long start;
>
>  	addr &= PAGE_MASK;
> @@ -1028,9 +1028,9 @@ struct vm_area_struct *find_extend_vma_locked(struct mm_struct *mm, unsigned lon
>   * If no vma is found or it can't be expanded, it returns NULL and has
>   * dropped the lock.
>   */
> -struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *expand_stack(struct mm_struct *mm, unsigned long addr)
>  {
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>
>  	mmap_read_unlock(mm);
>  	if (mmap_write_lock_killable(mm))
> @@ -1093,7 +1093,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
>  {
>
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long populate = 0;
>  	unsigned long ret = -EINVAL;
>  	struct file *file;
> @@ -1172,7 +1172,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
>
>  	if (start + size > vma->vm_end) {
>  		VMA_ITERATOR(vmi, mm, vma->vm_end);
> -		struct vm_area_struct *next, *prev = vma;
> +		struct mm_area *next, *prev = vma;
>
>  		for_each_vma_range(vmi, next, start + size) {
>  			/* hole between vmas ? */
> @@ -1210,7 +1210,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
>  int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	unsigned long len;
>  	int ret;
>  	bool populate;
> @@ -1258,7 +1258,7 @@ EXPORT_SYMBOL(vm_brk_flags);
>  void exit_mmap(struct mm_struct *mm)
>  {
>  	struct mmu_gather tlb;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long nr_accounted = 0;
>  	VMA_ITERATOR(vmi, mm, 0);
>  	int count = 0;
> @@ -1325,7 +1325,7 @@ void exit_mmap(struct mm_struct *mm)
>   * and into the inode's i_mmap tree.  If vm_file is non-NULL
>   * then i_mmap_rwsem is taken here.
>   */
> -int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
> +int insert_vm_struct(struct mm_struct *mm, struct mm_area *vma)
>  {
>  	unsigned long charged = vma_pages(vma);
>
> @@ -1411,7 +1411,7 @@ static vm_fault_t special_mapping_fault(struct vm_fault *vmf);
>   *
>   * Having a close hook prevents vma merging regardless of flags.
>   */
> -static void special_mapping_close(struct vm_area_struct *vma)
> +static void special_mapping_close(struct mm_area *vma)
>  {
>  	const struct vm_special_mapping *sm = vma->vm_private_data;
>
> @@ -1419,12 +1419,12 @@ static void special_mapping_close(struct vm_area_struct *vma)
>  		sm->close(sm, vma);
>  }
>
> -static const char *special_mapping_name(struct vm_area_struct *vma)
> +static const char *special_mapping_name(struct mm_area *vma)
>  {
>  	return ((struct vm_special_mapping *)vma->vm_private_data)->name;
>  }
>
> -static int special_mapping_mremap(struct vm_area_struct *new_vma)
> +static int special_mapping_mremap(struct mm_area *new_vma)
>  {
>  	struct vm_special_mapping *sm = new_vma->vm_private_data;
>
> @@ -1437,7 +1437,7 @@ static int special_mapping_mremap(struct vm_area_struct *new_vma)
>  	return 0;
>  }
>
> -static int special_mapping_split(struct vm_area_struct *vma, unsigned long addr)
> +static int special_mapping_split(struct mm_area *vma, unsigned long addr)
>  {
>  	/*
>  	 * Forbid splitting special mappings - kernel has expectations over
> @@ -1460,7 +1460,7 @@ static const struct vm_operations_struct special_mapping_vmops = {
>
>  static vm_fault_t special_mapping_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	pgoff_t pgoff;
>  	struct page **pages;
>  	struct vm_special_mapping *sm = vma->vm_private_data;
> @@ -1483,14 +1483,14 @@ static vm_fault_t special_mapping_fault(struct vm_fault *vmf)
>  	return VM_FAULT_SIGBUS;
>  }
>
> -static struct vm_area_struct *__install_special_mapping(
> +static struct mm_area *__install_special_mapping(
>  	struct mm_struct *mm,
>  	unsigned long addr, unsigned long len,
>  	unsigned long vm_flags, void *priv,
>  	const struct vm_operations_struct *ops)
>  {
>  	int ret;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = vm_area_alloc(mm);
>  	if (unlikely(vma == NULL))
> @@ -1519,7 +1519,7 @@ static struct vm_area_struct *__install_special_mapping(
>  	return ERR_PTR(ret);
>  }
>
> -bool vma_is_special_mapping(const struct vm_area_struct *vma,
> +bool vma_is_special_mapping(const struct mm_area *vma,
>  	const struct vm_special_mapping *sm)
>  {
>  	return vma->vm_private_data == sm &&
> @@ -1535,7 +1535,7 @@ bool vma_is_special_mapping(const struct vm_area_struct *vma,
>   * The array pointer and the pages it points to are assumed to stay alive
>   * for as long as this mapping might exist.
>   */
> -struct vm_area_struct *_install_special_mapping(
> +struct mm_area *_install_special_mapping(
>  	struct mm_struct *mm,
>  	unsigned long addr, unsigned long len,
>  	unsigned long vm_flags, const struct vm_special_mapping *spec)
> @@ -1725,7 +1725,7 @@ subsys_initcall(init_reserve_notifier);
>   * This function is almost certainly NOT what you want for anything other than
>   * early executable temporary stack relocation.
>   */
> -int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
> +int relocate_vma_down(struct mm_area *vma, unsigned long shift)
>  {
>  	/*
>  	 * The process proceeds as follows:
> @@ -1746,7 +1746,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
>  	unsigned long new_end = old_end - shift;
>  	VMA_ITERATOR(vmi, mm, new_start);
>  	VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff);
> -	struct vm_area_struct *next;
> +	struct mm_area *next;
>  	struct mmu_gather tlb;
>  	PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
>
> @@ -1824,7 +1824,7 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
>   * before downgrading it.
>   */
>  bool mmap_read_lock_maybe_expand(struct mm_struct *mm,
> -				 struct vm_area_struct *new_vma,
> +				 struct mm_area *new_vma,
>  				 unsigned long addr, bool write)
>  {
>  	if (!write || addr >= new_vma->vm_start) {
> @@ -1845,7 +1845,7 @@ bool mmap_read_lock_maybe_expand(struct mm_struct *mm,
>  	return true;
>  }
>  #else
> -bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct vm_area_struct *vma,
> +bool mmap_read_lock_maybe_expand(struct mm_struct *mm, struct mm_area *vma,
>  				 unsigned long addr, bool write)
>  {
>  	return false;
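The mm/mmap.c hunks are all vma lookup and lifecycle (find_vma() and
friends); the closest userspace view of those same objects, whatever
the struct ends up being called, is /proc/self/maps, one line per vma:

#include <stdio.h>

/* Each line of /proc/self/maps is one vma, i.e. one "struct mm_area"
 * after this rename, as maintained by the mm/mmap.c code above. */
int main(void)
{
	char line[512];
	FILE *f = fopen("/proc/self/maps", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}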
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index db7ba4a725d6..c94257a65e5b 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -48,7 +48,7 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
>  }
>
>  #ifdef CONFIG_SMP
> -static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
> +static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct mm_area *vma)
>  {
>  	struct encoded_page **pages = batch->encoded_pages;
>
> @@ -79,7 +79,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_
>   * we only need to walk through the current active batch and the
>   * original local one.
>   */
> -void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
> +void tlb_flush_rmaps(struct mmu_gather *tlb, struct mm_area *vma)
>  {
>  	if (!tlb->delayed_rmap)
>  		return;
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 62c1f7945741..2f1f44d80639 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -40,7 +40,7 @@
>
>  #include "internal.h"
>
> -bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> +bool can_change_pte_writable(struct mm_area *vma, unsigned long addr,
>  			     pte_t pte)
>  {
>  	struct page *page;
> @@ -84,7 +84,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
>  }
>
>  static long change_pte_range(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
> +		struct mm_area *vma, pmd_t *pmd, unsigned long addr,
>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>  {
>  	pte_t *pte, oldpte;
> @@ -292,7 +292,7 @@ static long change_pte_range(struct mmu_gather *tlb,
>   * protection procedure, false otherwise.
>   */
>  static inline bool
> -pgtable_split_needed(struct vm_area_struct *vma, unsigned long cp_flags)
> +pgtable_split_needed(struct mm_area *vma, unsigned long cp_flags)
>  {
>  	/*
>  	 * pte markers only reside in pte level, if we need pte markers,
> @@ -308,7 +308,7 @@ pgtable_split_needed(struct vm_area_struct *vma, unsigned long cp_flags)
>   * procedure, false otherwise
>   */
>  static inline bool
> -pgtable_populate_needed(struct vm_area_struct *vma, unsigned long cp_flags)
> +pgtable_populate_needed(struct mm_area *vma, unsigned long cp_flags)
>  {
>  	/* If not within ioctl(UFFDIO_WRITEPROTECT), then don't bother */
>  	if (!(cp_flags & MM_CP_UFFD_WP))
> @@ -351,7 +351,7 @@ pgtable_populate_needed(struct vm_area_struct *vma, unsigned long cp_flags)
>  	})
>
>  static inline long change_pmd_range(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
> +		struct mm_area *vma, pud_t *pud, unsigned long addr,
>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>  {
>  	pmd_t *pmd;
> @@ -421,7 +421,7 @@ static inline long change_pmd_range(struct mmu_gather *tlb,
>  }
>
>  static inline long change_pud_range(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
> +		struct mm_area *vma, p4d_t *p4d, unsigned long addr,
>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>  {
>  	struct mmu_notifier_range range;
> @@ -480,7 +480,7 @@ static inline long change_pud_range(struct mmu_gather *tlb,
>  }
>
>  static inline long change_p4d_range(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
> +		struct mm_area *vma, pgd_t *pgd, unsigned long addr,
>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>  {
>  	p4d_t *p4d;
> @@ -503,7 +503,7 @@ static inline long change_p4d_range(struct mmu_gather *tlb,
>  }
>
>  static long change_protection_range(struct mmu_gather *tlb,
> -		struct vm_area_struct *vma, unsigned long addr,
> +		struct mm_area *vma, unsigned long addr,
>  		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -533,7 +533,7 @@ static long change_protection_range(struct mmu_gather *tlb,
>  }
>
>  long change_protection(struct mmu_gather *tlb,
> -		       struct vm_area_struct *vma, unsigned long start,
> +		       struct mm_area *vma, unsigned long start,
>  		       unsigned long end, unsigned long cp_flags)
>  {
>  	pgprot_t newprot = vma->vm_page_prot;
> @@ -595,7 +595,7 @@ static const struct mm_walk_ops prot_none_walk_ops = {
>
>  int
>  mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> -	       struct vm_area_struct *vma, struct vm_area_struct **pprev,
> +	       struct mm_area *vma, struct mm_area **pprev,
>  	       unsigned long start, unsigned long end, unsigned long newflags)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -704,7 +704,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
>  		unsigned long prot, int pkey)
>  {
>  	unsigned long nstart, end, tmp, reqprot;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	int error;
>  	const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
>  	const bool rier = (current->personality & READ_IMPLIES_EXEC) &&
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 0865387531ed..2634b9f85423 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -61,7 +61,7 @@ struct vma_remap_struct {
>  	struct list_head *uf_unmap;
>
>  	/* VMA state, determined in do_mremap(). */
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/* Internal state, determined in do_mremap(). */
>  	unsigned long delta;		/* Absolute delta of old_len,new_len. */
> @@ -139,7 +139,7 @@ static pmd_t *alloc_new_pmd(struct mm_struct *mm, unsigned long addr)
>  	return pmd;
>  }
>
> -static void take_rmap_locks(struct vm_area_struct *vma)
> +static void take_rmap_locks(struct mm_area *vma)
>  {
>  	if (vma->vm_file)
>  		i_mmap_lock_write(vma->vm_file->f_mapping);
> @@ -147,7 +147,7 @@ static void take_rmap_locks(struct vm_area_struct *vma)
>  		anon_vma_lock_write(vma->anon_vma);
>  }
>
> -static void drop_rmap_locks(struct vm_area_struct *vma)
> +static void drop_rmap_locks(struct mm_area *vma)
>  {
>  	if (vma->anon_vma)
>  		anon_vma_unlock_write(vma->anon_vma);
> @@ -173,7 +173,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
>  static int move_ptes(struct pagetable_move_control *pmc,
>  		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
>  {
> -	struct vm_area_struct *vma = pmc->old;
> +	struct mm_area *vma = pmc->old;
>  	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
>  	struct mm_struct *mm = vma->vm_mm;
>  	pte_t *old_pte, *new_pte, pte;
> @@ -297,7 +297,7 @@ static bool move_normal_pmd(struct pagetable_move_control *pmc,
>  			pmd_t *old_pmd, pmd_t *new_pmd)
>  {
>  	spinlock_t *old_ptl, *new_ptl;
> -	struct vm_area_struct *vma = pmc->old;
> +	struct mm_area *vma = pmc->old;
>  	struct mm_struct *mm = vma->vm_mm;
>  	bool res = false;
>  	pmd_t pmd;
> @@ -381,7 +381,7 @@ static bool move_normal_pud(struct pagetable_move_control *pmc,
>  		pud_t *old_pud, pud_t *new_pud)
>  {
>  	spinlock_t *old_ptl, *new_ptl;
> -	struct vm_area_struct *vma = pmc->old;
> +	struct mm_area *vma = pmc->old;
>  	struct mm_struct *mm = vma->vm_mm;
>  	pud_t pud;
>
> @@ -439,7 +439,7 @@ static bool move_huge_pud(struct pagetable_move_control *pmc,
>  		pud_t *old_pud, pud_t *new_pud)
>  {
>  	spinlock_t *old_ptl, *new_ptl;
> -	struct vm_area_struct *vma = pmc->old;
> +	struct mm_area *vma = pmc->old;
>  	struct mm_struct *mm = vma->vm_mm;
>  	pud_t pud;
>
> @@ -598,7 +598,7 @@ static bool move_pgt_entry(struct pagetable_move_control *pmc,
>   * so we make an exception for it.
>   */
>  static bool can_align_down(struct pagetable_move_control *pmc,
> -			   struct vm_area_struct *vma, unsigned long addr_to_align,
> +			   struct mm_area *vma, unsigned long addr_to_align,
>  			   unsigned long mask)
>  {
>  	unsigned long addr_masked = addr_to_align & mask;
> @@ -902,7 +902,7 @@ static bool vrm_implies_new_addr(struct vma_remap_struct *vrm)
>   */
>  static unsigned long vrm_set_new_addr(struct vma_remap_struct *vrm)
>  {
> -	struct vm_area_struct *vma = vrm->vma;
> +	struct mm_area *vma = vrm->vma;
>  	unsigned long map_flags = 0;
>  	/* Page Offset _into_ the VMA. */
>  	pgoff_t internal_pgoff = (vrm->addr - vma->vm_start) >> PAGE_SHIFT;
> @@ -978,7 +978,7 @@ static void vrm_stat_account(struct vma_remap_struct *vrm,
>  {
>  	unsigned long pages = bytes >> PAGE_SHIFT;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = vrm->vma;
> +	struct mm_area *vma = vrm->vma;
>
>  	vm_stat_account(mm, vma->vm_flags, pages);
>  	if (vma->vm_flags & VM_LOCKED) {
> @@ -994,7 +994,7 @@ static void vrm_stat_account(struct vma_remap_struct *vrm,
>  static unsigned long prep_move_vma(struct vma_remap_struct *vrm)
>  {
>  	unsigned long err = 0;
> -	struct vm_area_struct *vma = vrm->vma;
> +	struct mm_area *vma = vrm->vma;
>  	unsigned long old_addr = vrm->addr;
>  	unsigned long old_len = vrm->old_len;
>  	unsigned long dummy = vma->vm_flags;
> @@ -1043,7 +1043,7 @@ static void unmap_source_vma(struct vma_remap_struct *vrm)
>  	struct mm_struct *mm = current->mm;
>  	unsigned long addr = vrm->addr;
>  	unsigned long len = vrm->old_len;
> -	struct vm_area_struct *vma = vrm->vma;
> +	struct mm_area *vma = vrm->vma;
>  	VMA_ITERATOR(vmi, mm, addr);
>  	int err;
>  	unsigned long vm_start;
> @@ -1119,13 +1119,13 @@ static void unmap_source_vma(struct vma_remap_struct *vrm)
>  		unsigned long end = addr + len;
>
>  		if (vm_start < addr) {
> -			struct vm_area_struct *prev = vma_prev(&vmi);
> +			struct mm_area *prev = vma_prev(&vmi);
>
>  			vm_flags_set(prev, VM_ACCOUNT); /* Acquires VMA lock. */
>  		}
>
>  		if (vm_end > end) {
> -			struct vm_area_struct *next = vma_next(&vmi);
> +			struct mm_area *next = vma_next(&vmi);
>
>  			vm_flags_set(next, VM_ACCOUNT); /* Acquires VMA lock. */
>  		}
> @@ -1141,14 +1141,14 @@ static void unmap_source_vma(struct vma_remap_struct *vrm)
>   * error code.
>   */
>  static int copy_vma_and_data(struct vma_remap_struct *vrm,
> -			     struct vm_area_struct **new_vma_ptr)
> +			     struct mm_area **new_vma_ptr)
>  {
>  	unsigned long internal_offset = vrm->addr - vrm->vma->vm_start;
>  	unsigned long internal_pgoff = internal_offset >> PAGE_SHIFT;
>  	unsigned long new_pgoff = vrm->vma->vm_pgoff + internal_pgoff;
>  	unsigned long moved_len;
> -	struct vm_area_struct *vma = vrm->vma;
> -	struct vm_area_struct *new_vma;
> +	struct mm_area *vma = vrm->vma;
> +	struct mm_area *new_vma;
>  	int err = 0;
>  	PAGETABLE_MOVE(pmc, NULL, NULL, vrm->addr, vrm->new_addr, vrm->old_len);
>
> @@ -1206,7 +1206,7 @@ static int copy_vma_and_data(struct vma_remap_struct *vrm,
>   * links from it (if the entire VMA was copied over).
>   */
>  static void dontunmap_complete(struct vma_remap_struct *vrm,
> -			       struct vm_area_struct *new_vma)
> +			       struct mm_area *new_vma)
>  {
>  	unsigned long start = vrm->addr;
>  	unsigned long end = vrm->addr + vrm->old_len;
> @@ -1232,7 +1232,7 @@ static void dontunmap_complete(struct vma_remap_struct *vrm,
>  static unsigned long move_vma(struct vma_remap_struct *vrm)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *new_vma;
> +	struct mm_area *new_vma;
>  	unsigned long hiwater_vm;
>  	int err;
>
> @@ -1288,7 +1288,7 @@ static unsigned long move_vma(struct vma_remap_struct *vrm)
>  static int resize_is_valid(struct vma_remap_struct *vrm)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = vrm->vma;
> +	struct mm_area *vma = vrm->vma;
>  	unsigned long addr = vrm->addr;
>  	unsigned long old_len = vrm->old_len;
>  	unsigned long new_len = vrm->new_len;
> @@ -1444,7 +1444,7 @@ static unsigned long mremap_to(struct vma_remap_struct *vrm)
>  	return move_vma(vrm);
>  }
>
> -static int vma_expandable(struct vm_area_struct *vma, unsigned long delta)
> +static int vma_expandable(struct mm_area *vma, unsigned long delta)
>  {
>  	unsigned long end = vma->vm_end + delta;
>
> @@ -1546,7 +1546,7 @@ static unsigned long check_mremap_params(struct vma_remap_struct *vrm)
>  static unsigned long expand_vma_in_place(struct vma_remap_struct *vrm)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = vrm->vma;
> +	struct mm_area *vma = vrm->vma;
>  	VMA_ITERATOR(vmi, mm, vma->vm_end);
>
>  	if (!vrm_charge(vrm))
> @@ -1688,7 +1688,7 @@ static unsigned long mremap_at(struct vma_remap_struct *vrm)
>  static unsigned long do_mremap(struct vma_remap_struct *vrm)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long ret;
>
>  	ret = check_mremap_params(vrm);
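And the mm/mremap.c path (do_mremap(), move_vma(),
expand_vma_in_place()) from userspace, assuming glibc's mremap()
wrapper and _GNU_SOURCE for MREMAP_MAYMOVE:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	p[0] = 'x';

	/* Grows in place if there's room (expand_vma_in_place()), else
	 * move_vma() copies the vma and page tables to a new range. */
	char *q = mremap(p, page, 4 * page, MREMAP_MAYMOVE);
	if (q == MAP_FAILED)
		return 1;

	printf("%p -> %p, data: %c\n", (void *)p, (void *)q, q[0]);
	return 0;
}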
> diff --git a/mm/mseal.c b/mm/mseal.c
> index c27197ac04e8..791ea7bc053a 100644
> --- a/mm/mseal.c
> +++ b/mm/mseal.c
> @@ -16,7 +16,7 @@
>  #include <linux/sched.h>
>  #include "internal.h"
>
> -static inline void set_vma_sealed(struct vm_area_struct *vma)
> +static inline void set_vma_sealed(struct mm_area *vma)
>  {
>  	vm_flags_set(vma, VM_SEALED);
>  }
> @@ -37,7 +37,7 @@ static bool is_madv_discard(int behavior)
>  	return false;
>  }
>
> -static bool is_ro_anon(struct vm_area_struct *vma)
> +static bool is_ro_anon(struct mm_area *vma)
>  {
>  	/* check anonymous mapping. */
>  	if (vma->vm_file || vma->vm_flags & VM_SHARED)
> @@ -57,7 +57,7 @@ static bool is_ro_anon(struct vm_area_struct *vma)
>  /*
>   * Check if a vma is allowed to be modified by madvise.
>   */
> -bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
> +bool can_modify_vma_madv(struct mm_area *vma, int behavior)
>  {
>  	if (!is_madv_discard(behavior))
>  		return true;
> @@ -69,8 +69,8 @@ bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
>  	return true;
>  }
>
> -static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> -		struct vm_area_struct **prev, unsigned long start,
> +static int mseal_fixup(struct vma_iterator *vmi, struct mm_area *vma,
> +		struct mm_area **prev, unsigned long start,
>  		unsigned long end, vm_flags_t newflags)
>  {
>  	int ret = 0;
> @@ -100,7 +100,7 @@ static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   */
>  static int check_mm_seal(unsigned long start, unsigned long end)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long nstart = start;
>
>  	VMA_ITERATOR(vmi, current->mm, start);
> @@ -126,7 +126,7 @@ static int check_mm_seal(unsigned long start, unsigned long end)
>  static int apply_mm_seal(unsigned long start, unsigned long end)
>  {
>  	unsigned long nstart;
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>
>  	VMA_ITERATOR(vmi, current->mm, start);
>
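mseal(2) is recent enough (6.10) that your libc may have no wrapper,
so this sketch goes through syscall(2); the 462 below is an
assumption, check asm/unistd.h on your tree:

#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SYS_mseal
#define SYS_mseal 462		/* assumption: verify on your kernel */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* check_mm_seal() validates the range, then apply_mm_seal()
	 * sets VM_SEALED via mseal_fixup() on each vma. */
	if (syscall(SYS_mseal, p, (size_t)page, 0UL)) {
		perror("mseal");	/* older kernel: ENOSYS */
		return 1;
	}

	/* Sealed: this mprotect() should now fail with EPERM. */
	printf("mprotect after seal: %d\n", mprotect(p, page, PROT_READ));
	return 0;
}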
> diff --git a/mm/msync.c b/mm/msync.c
> index ac4c9bfea2e7..c46feec8295a 100644
> --- a/mm/msync.c
> +++ b/mm/msync.c
> @@ -33,7 +33,7 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len, int, flags)
>  {
>  	unsigned long end;
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int unmapped_error = 0;
>  	int error = -EINVAL;
>
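mm/msync.c is small but the same rename applies; the vma walk in
sys_msync() above is exercised by any MS_SYNC writeback of a shared
file mapping:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = open("msync-demo.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0 || ftruncate(fd, page))
		return 1;

	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	strcpy(p, "hello");

	/* msync() walks the vmas in [start, end) and, for MS_SYNC on a
	 * shared file mapping, writes the dirty pages back. */
	if (msync(p, page, MS_SYNC))
		return 1;

	puts("synced");
	unlink("msync-demo.tmp");
	return 0;
}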
> diff --git a/mm/nommu.c b/mm/nommu.c
> index 617e7ba8022f..af225d5af3bb 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -89,7 +89,7 @@ unsigned int kobjsize(const void *objp)
>  	 * PAGE_SIZE for 0-order pages.
>  	 */
>  	if (!PageCompound(page)) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		vma = find_vma(current->mm, (unsigned long)objp);
>  		if (vma)
> @@ -144,7 +144,7 @@ static void *__vmalloc_user_flags(unsigned long size, gfp_t flags)
>
>  	ret = __vmalloc(size, flags);
>  	if (ret) {
> -		struct vm_area_struct *vma;
> +		struct mm_area *vma;
>
>  		mmap_write_lock(current->mm);
>  		vma = find_vma(current->mm, (unsigned long)ret);
> @@ -325,28 +325,28 @@ void free_vm_area(struct vm_struct *area)
>  }
>  EXPORT_SYMBOL_GPL(free_vm_area);
>
> -int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
> +int vm_insert_page(struct mm_area *vma, unsigned long addr,
>  		   struct page *page)
>  {
>  	return -EINVAL;
>  }
>  EXPORT_SYMBOL(vm_insert_page);
>
> -int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
> +int vm_insert_pages(struct mm_area *vma, unsigned long addr,
>  			struct page **pages, unsigned long *num)
>  {
>  	return -EINVAL;
>  }
>  EXPORT_SYMBOL(vm_insert_pages);
>
> -int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
> +int vm_map_pages(struct mm_area *vma, struct page **pages,
>  			unsigned long num)
>  {
>  	return -EINVAL;
>  }
>  EXPORT_SYMBOL(vm_map_pages);
>
> -int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
> +int vm_map_pages_zero(struct mm_area *vma, struct page **pages,
>  				unsigned long num)
>  {
>  	return -EINVAL;
> @@ -540,7 +540,7 @@ static void put_nommu_region(struct vm_region *region)
>  	__put_nommu_region(region);
>  }
>
> -static void setup_vma_to_mm(struct vm_area_struct *vma, struct mm_struct *mm)
> +static void setup_vma_to_mm(struct mm_area *vma, struct mm_struct *mm)
>  {
>  	vma->vm_mm = mm;
>
> @@ -556,7 +556,7 @@ static void setup_vma_to_mm(struct vm_area_struct *vma, struct mm_struct *mm)
>  	}
>  }
>
> -static void cleanup_vma_from_mm(struct vm_area_struct *vma)
> +static void cleanup_vma_from_mm(struct mm_area *vma)
>  {
>  	vma->vm_mm->map_count--;
>  	/* remove the VMA from the mapping */
> @@ -575,7 +575,7 @@ static void cleanup_vma_from_mm(struct vm_area_struct *vma)
>  /*
>   * delete a VMA from its owning mm_struct and address space
>   */
> -static int delete_vma_from_mm(struct vm_area_struct *vma)
> +static int delete_vma_from_mm(struct mm_area *vma)
>  {
>  	VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_start);
>
> @@ -594,7 +594,7 @@ static int delete_vma_from_mm(struct vm_area_struct *vma)
>  /*
>   * destroy a VMA record
>   */
> -static void delete_vma(struct mm_struct *mm, struct vm_area_struct *vma)
> +static void delete_vma(struct mm_struct *mm, struct mm_area *vma)
>  {
>  	vma_close(vma);
>  	if (vma->vm_file)
> @@ -603,7 +603,7 @@ static void delete_vma(struct mm_struct *mm, struct vm_area_struct *vma)
>  	vm_area_free(vma);
>  }
>
> -struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
> +struct mm_area *find_vma_intersection(struct mm_struct *mm,
>  					     unsigned long start_addr,
>  					     unsigned long end_addr)
>  {
> @@ -618,7 +618,7 @@ EXPORT_SYMBOL(find_vma_intersection);
>   * look up the first VMA in which addr resides, NULL if none
>   * - should be called with mm->mmap_lock at least held readlocked
>   */
> -struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *find_vma(struct mm_struct *mm, unsigned long addr)
>  {
>  	VMA_ITERATOR(vmi, mm, addr);
>
> @@ -630,10 +630,10 @@ EXPORT_SYMBOL(find_vma);
>   * At least xtensa ends up having protection faults even with no
>   * MMU.. No stack expansion, at least.
>   */
> -struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
> +struct mm_area *lock_mm_and_find_vma(struct mm_struct *mm,
>  			unsigned long addr, struct pt_regs *regs)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mmap_read_lock(mm);
>  	vma = vma_lookup(mm, addr);
> @@ -646,12 +646,12 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_struct *mm,
>   * expand a stack to a given address
>   * - not supported under NOMMU conditions
>   */
> -int expand_stack_locked(struct vm_area_struct *vma, unsigned long addr)
> +int expand_stack_locked(struct mm_area *vma, unsigned long addr)
>  {
>  	return -ENOMEM;
>  }
>
> -struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *expand_stack(struct mm_struct *mm, unsigned long addr)
>  {
>  	mmap_read_unlock(mm);
>  	return NULL;
> @@ -661,11 +661,11 @@ struct vm_area_struct *expand_stack(struct mm_struct *mm, unsigned long addr)
>   * look up the first VMA that exactly matches addr
>   * - should be called with mm->mmap_lock at least held readlocked
>   */
> -static struct vm_area_struct *find_vma_exact(struct mm_struct *mm,
> +static struct mm_area *find_vma_exact(struct mm_struct *mm,
>  					     unsigned long addr,
>  					     unsigned long len)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long end = addr + len;
>  	VMA_ITERATOR(vmi, mm, addr);
>
> @@ -887,7 +887,7 @@ static unsigned long determine_vm_flags(struct file *file,
>   * set up a shared mapping on a file (the driver or filesystem provides and
>   * pins the storage)
>   */
> -static int do_mmap_shared_file(struct vm_area_struct *vma)
> +static int do_mmap_shared_file(struct mm_area *vma)
>  {
>  	int ret;
>
> @@ -908,7 +908,7 @@ static int do_mmap_shared_file(struct vm_area_struct *vma)
>  /*
>   * set up a private mapping or an anonymous shared mapping
>   */
> -static int do_mmap_private(struct vm_area_struct *vma,
> +static int do_mmap_private(struct mm_area *vma,
>  			   struct vm_region *region,
>  			   unsigned long len,
>  			   unsigned long capabilities)
> @@ -1016,7 +1016,7 @@ unsigned long do_mmap(struct file *file,
>  			unsigned long *populate,
>  			struct list_head *uf)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_region *region;
>  	struct rb_node *rb;
>  	unsigned long capabilities, result;
> @@ -1300,10 +1300,10 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
>   * split a vma into two pieces at address 'addr', a new vma is allocated either
>   * for the first part or the tail.
>   */
> -static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +static int split_vma(struct vma_iterator *vmi, struct mm_area *vma,
>  		     unsigned long addr, int new_below)
>  {
> -	struct vm_area_struct *new;
> +	struct mm_area *new;
>  	struct vm_region *region;
>  	unsigned long npages;
>  	struct mm_struct *mm;
> @@ -1379,7 +1379,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   * the end
>   */
>  static int vmi_shrink_vma(struct vma_iterator *vmi,
> -		      struct vm_area_struct *vma,
> +		      struct mm_area *vma,
>  		      unsigned long from, unsigned long to)
>  {
>  	struct vm_region *region;
> @@ -1423,7 +1423,7 @@ static int vmi_shrink_vma(struct vma_iterator *vmi,
>  int do_munmap(struct mm_struct *mm, unsigned long start, size_t len, struct list_head *uf)
>  {
>  	VMA_ITERATOR(vmi, mm, start);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long end;
>  	int ret = 0;
>
> @@ -1505,7 +1505,7 @@ SYSCALL_DEFINE2(munmap, unsigned long, addr, size_t, len)
>  void exit_mmap(struct mm_struct *mm)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if (!mm)
>  		return;
> @@ -1540,7 +1540,7 @@ static unsigned long do_mremap(unsigned long addr,
>  			unsigned long old_len, unsigned long new_len,
>  			unsigned long flags, unsigned long new_addr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/* insanity checks first */
>  	old_len = PAGE_ALIGN(old_len);
> @@ -1584,7 +1584,7 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
>  	return ret;
>  }
>
> -int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
> +int remap_pfn_range(struct mm_area *vma, unsigned long addr,
>  		unsigned long pfn, unsigned long size, pgprot_t prot)
>  {
>  	if (addr != (pfn << PAGE_SHIFT))
> @@ -1595,7 +1595,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
>  }
>  EXPORT_SYMBOL(remap_pfn_range);
>
> -int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
> +int vm_iomap_memory(struct mm_area *vma, phys_addr_t start, unsigned long len)
>  {
>  	unsigned long pfn = start >> PAGE_SHIFT;
>  	unsigned long vm_len = vma->vm_end - vma->vm_start;
> @@ -1605,7 +1605,7 @@ int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long
>  }
>  EXPORT_SYMBOL(vm_iomap_memory);
>
> -int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
> +int remap_vmalloc_range(struct mm_area *vma, void *addr,
>  			unsigned long pgoff)
>  {
>  	unsigned int size = vma->vm_end - vma->vm_start;
> @@ -1638,7 +1638,7 @@ EXPORT_SYMBOL(filemap_map_pages);
>  static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
>  			      void *buf, int len, unsigned int gup_flags)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int write = gup_flags & FOLL_WRITE;
>
>  	if (mmap_read_lock_killable(mm))
> @@ -1717,7 +1717,7 @@ static int __copy_remote_vm_str(struct mm_struct *mm, unsigned long addr,
>  				void *buf, int len)
>  {
>  	unsigned long addr_end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret = -EFAULT;
>
>  	*(char *)buf = '\0';
> @@ -1801,7 +1801,7 @@ EXPORT_SYMBOL_GPL(copy_remote_vm_str);
>  int nommu_shrink_inode_mappings(struct inode *inode, size_t size,
>  				size_t newsize)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vm_region *region;
>  	pgoff_t low, high;
>  	size_t r_size, r_top;
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 25923cfec9c6..55bd5da45232 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -514,7 +514,7 @@ static DEFINE_SPINLOCK(oom_reaper_lock);
>
>  static bool __oom_reap_task_mm(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	bool ret = true;
>  	VMA_ITERATOR(vmi, mm, 0);
>
> diff --git a/mm/page_idle.c b/mm/page_idle.c
> index 408aaf29a3ea..655e4c716d0d 100644
> --- a/mm/page_idle.c
> +++ b/mm/page_idle.c
> @@ -50,7 +50,7 @@ static struct folio *page_idle_get_folio(unsigned long pfn)
>  }
>
>  static bool page_idle_clear_pte_refs_one(struct folio *folio,
> -					struct vm_area_struct *vma,
> +					struct mm_area *vma,
>  					unsigned long addr, void *arg)
>  {
>  	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0);
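page_idle_clear_pte_refs_one() above is driven from the
/sys/kernel/mm/page_idle/bitmap interface. A sketch of the user side,
assuming root and CONFIG_IDLE_PAGE_TRACKING (mapping bits back to your
own addresses additionally needs /proc/self/pagemap):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Each bit is one pfn; reads must be 8-byte sized and aligned. */
	int fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDONLY);
	uint64_t bits;

	if (fd < 0 || pread(fd, &bits, sizeof(bits), 0) != sizeof(bits)) {
		perror("page_idle");
		return 1;
	}
	printf("pfns 0-63 idle bits: %016llx\n", (unsigned long long)bits);
	return 0;
}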
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index e463c3be934a..13f7bd3e99c9 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -183,7 +183,7 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
>   */
>  bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>  {
> -	struct vm_area_struct *vma = pvmw->vma;
> +	struct mm_area *vma = pvmw->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long end;
>  	spinlock_t *ptl;
> @@ -342,7 +342,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>   * Only valid for normal file or anonymous VMAs.
>   */
>  unsigned long page_mapped_in_vma(const struct page *page,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	const struct folio *folio = page_folio(page);
>  	struct page_vma_mapped_walk pvmw = {
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index e478777c86e1..2266b191ae3e 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -321,7 +321,7 @@ static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
>  static int walk_hugetlb_range(unsigned long addr, unsigned long end,
>  			      struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	struct hstate *h = hstate_vma(vma);
>  	unsigned long next;
>  	unsigned long hmask = huge_page_mask(h);
> @@ -364,7 +364,7 @@ static int walk_hugetlb_range(unsigned long addr, unsigned long end,
>  static int walk_page_test(unsigned long start, unsigned long end,
>  			struct mm_walk *walk)
>  {
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	const struct mm_walk_ops *ops = walk->ops;
>
>  	if (ops->test_walk)
> @@ -391,7 +391,7 @@ static int __walk_page_range(unsigned long start, unsigned long end,
>  			struct mm_walk *walk)
>  {
>  	int err = 0;
> -	struct vm_area_struct *vma = walk->vma;
> +	struct mm_area *vma = walk->vma;
>  	const struct mm_walk_ops *ops = walk->ops;
>  	bool is_hugetlb = is_vm_hugetlb_page(vma);
>
> @@ -426,7 +426,7 @@ static inline void process_mm_walk_lock(struct mm_struct *mm,
>  		mmap_assert_write_locked(mm);
>  }
>
> -static inline void process_vma_walk_lock(struct vm_area_struct *vma,
> +static inline void process_vma_walk_lock(struct mm_area *vma,
>  					 enum page_walk_lock walk_lock)
>  {
>  #ifdef CONFIG_PER_VMA_LOCK
> @@ -457,7 +457,7 @@ int walk_page_range_mm(struct mm_struct *mm, unsigned long start,
>  {
>  	int err = 0;
>  	unsigned long next;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_walk walk = {
>  		.ops		= ops,
>  		.mm		= mm,
> @@ -648,7 +648,7 @@ int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
>  	return walk_pgd_range(start, end, &walk);
>  }
>
> -int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
> +int walk_page_range_vma(struct mm_area *vma, unsigned long start,
>  			unsigned long end, const struct mm_walk_ops *ops,
>  			void *private)
>  {
> @@ -671,7 +671,7 @@ int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
>  	return __walk_page_range(start, end, &walk);
>  }
>
> -int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
> +int walk_page_vma(struct mm_area *vma, const struct mm_walk_ops *ops,
>  		void *private)
>  {
>  	struct mm_walk walk = {
> @@ -714,7 +714,7 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
>   *   struct mm_struct::mmap_lock is not needed.
>   *
>   *   Also this means that a caller can't rely on the struct
> - *   vm_area_struct::vm_flags to be constant across a call,
> + *   mm_area::vm_flags to be constant across a call,
>   *   except for immutable flags. Callers requiring this shouldn't use
>   *   this function.
>   *
> @@ -729,7 +729,7 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
>  		.ops		= ops,
>  		.private	= private,
>  	};
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	pgoff_t vba, vea, cba, cea;
>  	unsigned long start_addr, end_addr;
>  	int err = 0;
> @@ -827,7 +827,7 @@ int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
>   * Return: folio pointer on success, otherwise NULL.
>   */
>  struct folio *folio_walk_start(struct folio_walk *fw,
> -		struct vm_area_struct *vma, unsigned long addr,
> +		struct mm_area *vma, unsigned long addr,
>  		folio_walk_flags_t flags)
>  {
>  	unsigned long entry_size;
> diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
> index 5a882f2b10f9..b6e5dc860ec0 100644
> --- a/mm/pgtable-generic.c
> +++ b/mm/pgtable-generic.c
> @@ -65,7 +65,7 @@ void pmd_clear_bad(pmd_t *pmd)
>   * used to be done in the caller, but sparc needs minor faults to
>   * force that call on sun4c so we changed this macro slightly
>   */
> -int ptep_set_access_flags(struct vm_area_struct *vma,
> +int ptep_set_access_flags(struct mm_area *vma,
>  			  unsigned long address, pte_t *ptep,
>  			  pte_t entry, int dirty)
>  {
> @@ -79,7 +79,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> -int ptep_clear_flush_young(struct vm_area_struct *vma,
> +int ptep_clear_flush_young(struct mm_area *vma,
>  			   unsigned long address, pte_t *ptep)
>  {
>  	int young;
> @@ -91,7 +91,7 @@ int ptep_clear_flush_young(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PTEP_CLEAR_FLUSH
> -pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
> +pte_t ptep_clear_flush(struct mm_area *vma, unsigned long address,
>  		       pte_t *ptep)
>  {
>  	struct mm_struct *mm = (vma)->vm_mm;
> @@ -106,7 +106,7 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>
>  #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> -int pmdp_set_access_flags(struct vm_area_struct *vma,
> +int pmdp_set_access_flags(struct mm_area *vma,
>  			  unsigned long address, pmd_t *pmdp,
>  			  pmd_t entry, int dirty)
>  {
> @@ -121,7 +121,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
> -int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +int pmdp_clear_flush_young(struct mm_area *vma,
>  			   unsigned long address, pmd_t *pmdp)
>  {
>  	int young;
> @@ -134,7 +134,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
> -pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_huge_clear_flush(struct mm_area *vma, unsigned long address,
>  			    pmd_t *pmdp)
>  {
>  	pmd_t pmd;
> @@ -147,7 +147,7 @@ pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
>  }
>
>  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> -pud_t pudp_huge_clear_flush(struct vm_area_struct *vma, unsigned long address,
> +pud_t pudp_huge_clear_flush(struct mm_area *vma, unsigned long address,
>  			    pud_t *pudp)
>  {
>  	pud_t pud;
> @@ -195,7 +195,7 @@ pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_INVALIDATE
> -pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_invalidate(struct mm_area *vma, unsigned long address,
>  		     pmd_t *pmdp)
>  {
>  	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
> @@ -206,7 +206,7 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>  #endif
>
>  #ifndef __HAVE_ARCH_PMDP_INVALIDATE_AD
> -pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_invalidate_ad(struct mm_area *vma, unsigned long address,
>  			 pmd_t *pmdp)
>  {
>  	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
> @@ -215,7 +215,7 @@ pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
>  #endif
>
>  #ifndef pmdp_collapse_flush
> -pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
> +pmd_t pmdp_collapse_flush(struct mm_area *vma, unsigned long address,
>  			  pmd_t *pmdp)
>  {
>  	/*
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 67bb273dfb80..6c00e97fec67 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -147,7 +147,7 @@ static void anon_vma_chain_free(struct anon_vma_chain *anon_vma_chain)
>  	kmem_cache_free(anon_vma_chain_cachep, anon_vma_chain);
>  }
>
> -static void anon_vma_chain_link(struct vm_area_struct *vma,
> +static void anon_vma_chain_link(struct mm_area *vma,
>  				struct anon_vma_chain *avc,
>  				struct anon_vma *anon_vma)
>  {
> @@ -183,7 +183,7 @@ static void anon_vma_chain_link(struct vm_area_struct *vma,
>   * to do any locking for the common case of already having
>   * an anon_vma.
>   */
> -int __anon_vma_prepare(struct vm_area_struct *vma)
> +int __anon_vma_prepare(struct mm_area *vma)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct anon_vma *anon_vma, *allocated;
> @@ -277,7 +277,7 @@ static inline void unlock_anon_vma_root(struct anon_vma *root)
>   * walker has a good chance of avoiding scanning the whole hierarchy when it
>   * searches where page is mapped.
>   */
> -int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
> +int anon_vma_clone(struct mm_area *dst, struct mm_area *src)
>  {
>  	struct anon_vma_chain *avc, *pavc;
>  	struct anon_vma *root = NULL;
> @@ -331,7 +331,7 @@ int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
>   * the corresponding VMA in the parent process is attached to.
>   * Returns 0 on success, non-zero on failure.
>   */
> -int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
> +int anon_vma_fork(struct mm_area *vma, struct mm_area *pvma)
>  {
>  	struct anon_vma_chain *avc;
>  	struct anon_vma *anon_vma;
> @@ -393,7 +393,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma)
>  	return -ENOMEM;
>  }
>
> -void unlink_anon_vmas(struct vm_area_struct *vma)
> +void unlink_anon_vmas(struct mm_area *vma)
>  {
>  	struct anon_vma_chain *avc, *next;
>  	struct anon_vma *root = NULL;
> @@ -786,7 +786,7 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>   * Return: The virtual address corresponding to this page in the VMA.
>   */
>  unsigned long page_address_in_vma(const struct folio *folio,
> -		const struct page *page, const struct vm_area_struct *vma)
> +		const struct page *page, const struct mm_area *vma)
>  {
>  	if (folio_test_anon(folio)) {
>  		struct anon_vma *page__anon_vma = folio_anon_vma(folio);
> @@ -847,7 +847,7 @@ struct folio_referenced_arg {
>   * arg: folio_referenced_arg will be passed
>   */
>  static bool folio_referenced_one(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long address, void *arg)
> +		struct mm_area *vma, unsigned long address, void *arg)
>  {
>  	struct folio_referenced_arg *pra = arg;
>  	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, 0);
> @@ -947,7 +947,7 @@ static bool folio_referenced_one(struct folio *folio,
>  	return true;
>  }
>
> -static bool invalid_folio_referenced_vma(struct vm_area_struct *vma, void *arg)
> +static bool invalid_folio_referenced_vma(struct mm_area *vma, void *arg)
>  {
>  	struct folio_referenced_arg *pra = arg;
>  	struct mem_cgroup *memcg = pra->memcg;
> @@ -1024,7 +1024,7 @@ int folio_referenced(struct folio *folio, int is_locked,
>  static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>  {
>  	int cleaned = 0;
> -	struct vm_area_struct *vma = pvmw->vma;
> +	struct mm_area *vma = pvmw->vma;
>  	struct mmu_notifier_range range;
>  	unsigned long address = pvmw->address;
>
> @@ -1091,7 +1091,7 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>  	return cleaned;
>  }
>
> -static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
> +static bool page_mkclean_one(struct folio *folio, struct mm_area *vma,
>  			     unsigned long address, void *arg)
>  {
>  	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
> @@ -1102,7 +1102,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
>  	return true;
>  }
>
> -static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg)
> +static bool invalid_mkclean_vma(struct mm_area *vma, void *arg)
>  {
>  	if (vma->vm_flags & VM_SHARED)
>  		return false;
> @@ -1143,7 +1143,7 @@ struct wrprotect_file_state {
>  };
>
>  static bool mapping_wrprotect_range_one(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long address, void *arg)
> +		struct mm_area *vma, unsigned long address, void *arg)
>  {
>  	struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg;
>  	struct page_vma_mapped_walk pvmw = {
> @@ -1222,7 +1222,7 @@ EXPORT_SYMBOL_GPL(mapping_wrprotect_range);
>   * Returns the number of cleaned PTEs (including PMDs).
>   */
>  int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
> -		      struct vm_area_struct *vma)
> +		      struct mm_area *vma)
>  {
>  	struct page_vma_mapped_walk pvmw = {
>  		.pfn		= pfn,
> @@ -1242,7 +1242,7 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
>  }
>
>  static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *vma,
> +		struct page *page, int nr_pages, struct mm_area *vma,
>  		enum rmap_level level, int *nr_pmdmapped)
>  {
>  	atomic_t *mapped = &folio->_nr_pages_mapped;
> @@ -1327,7 +1327,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
>   * that folio can be moved into the anon_vma that belongs to just that
>   * process, so the rmap code will not search the parent or sibling processes.
>   */
> -void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
> +void folio_move_anon_rmap(struct folio *folio, struct mm_area *vma)
>  {
>  	void *anon_vma = vma->anon_vma;
>
> @@ -1350,7 +1350,7 @@ void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
>   * @address:	User virtual address of the mapping
>   * @exclusive:	Whether the folio is exclusive to the process.
>   */
> -static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
> +static void __folio_set_anon(struct folio *folio, struct mm_area *vma,
>  			     unsigned long address, bool exclusive)
>  {
>  	struct anon_vma *anon_vma = vma->anon_vma;
> @@ -1383,7 +1383,7 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
>   * @address:	the user virtual address mapped
>   */
>  static void __page_check_anon_rmap(const struct folio *folio,
> -		const struct page *page, struct vm_area_struct *vma,
> +		const struct page *page, struct mm_area *vma,
>  		unsigned long address)
>  {
>  	/*
> @@ -1426,7 +1426,7 @@ static void __folio_mod_stat(struct folio *folio, int nr, int nr_pmdmapped)
>  }
>
>  static __always_inline void __folio_add_anon_rmap(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *vma,
> +		struct page *page, int nr_pages, struct mm_area *vma,
>  		unsigned long address, rmap_t flags, enum rmap_level level)
>  {
>  	int i, nr, nr_pmdmapped = 0;
> @@ -1505,7 +1505,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
>   * (but KSM folios are never downgraded).
>   */
>  void folio_add_anon_rmap_ptes(struct folio *folio, struct page *page,
> -		int nr_pages, struct vm_area_struct *vma, unsigned long address,
> +		int nr_pages, struct mm_area *vma, unsigned long address,
>  		rmap_t flags)
>  {
>  	__folio_add_anon_rmap(folio, page, nr_pages, vma, address, flags,
> @@ -1526,7 +1526,7 @@ void folio_add_anon_rmap_ptes(struct folio *folio, struct page *page,
>   * the anon_vma case: to serialize mapping,index checking after setting.
>   */
>  void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
> -		struct vm_area_struct *vma, unsigned long address, rmap_t flags)
> +		struct mm_area *vma, unsigned long address, rmap_t flags)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	__folio_add_anon_rmap(folio, page, HPAGE_PMD_NR, vma, address, flags,
> @@ -1551,7 +1551,7 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
>   *
>   * If the folio is pmd-mappable, it is accounted as a THP.
>   */
> -void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> +void folio_add_new_anon_rmap(struct folio *folio, struct mm_area *vma,
>  		unsigned long address, rmap_t flags)
>  {
>  	const bool exclusive = flags & RMAP_EXCLUSIVE;
> @@ -1610,7 +1610,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>  }
>
>  static __always_inline void __folio_add_file_rmap(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *vma,
> +		struct page *page, int nr_pages, struct mm_area *vma,
>  		enum rmap_level level)
>  {
>  	int nr, nr_pmdmapped = 0;
> @@ -1637,7 +1637,7 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
>   * The caller needs to hold the page table lock.
>   */
>  void folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
> -		int nr_pages, struct vm_area_struct *vma)
> +		int nr_pages, struct mm_area *vma)
>  {
>  	__folio_add_file_rmap(folio, page, nr_pages, vma, RMAP_LEVEL_PTE);
>  }
> @@ -1653,7 +1653,7 @@ void folio_add_file_rmap_ptes(struct folio *folio, struct page *page,
>   * The caller needs to hold the page table lock.
>   */
>  void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	__folio_add_file_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_LEVEL_PMD);
> @@ -1673,7 +1673,7 @@ void folio_add_file_rmap_pmd(struct folio *folio, struct page *page,
>   * The caller needs to hold the page table lock.
>   */
>  void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
>  	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> @@ -1684,7 +1684,7 @@ void folio_add_file_rmap_pud(struct folio *folio, struct page *page,
>  }
>
>  static __always_inline void __folio_remove_rmap(struct folio *folio,
> -		struct page *page, int nr_pages, struct vm_area_struct *vma,
> +		struct page *page, int nr_pages, struct mm_area *vma,
>  		enum rmap_level level)
>  {
>  	atomic_t *mapped = &folio->_nr_pages_mapped;
> @@ -1799,7 +1799,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
>   * The caller needs to hold the page table lock.
>   */
>  void folio_remove_rmap_ptes(struct folio *folio, struct page *page,
> -		int nr_pages, struct vm_area_struct *vma)
> +		int nr_pages, struct mm_area *vma)
>  {
>  	__folio_remove_rmap(folio, page, nr_pages, vma, RMAP_LEVEL_PTE);
>  }
> @@ -1815,7 +1815,7 @@ void folio_remove_rmap_ptes(struct folio *folio, struct page *page,
>   * The caller needs to hold the page table lock.
>   */
>  void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	__folio_remove_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_LEVEL_PMD);
> @@ -1835,7 +1835,7 @@ void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
>   * The caller needs to hold the page table lock.
>   */
>  void folio_remove_rmap_pud(struct folio *folio, struct page *page,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \
>  	defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD)
> @@ -1867,7 +1867,7 @@ static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
>  /*
>   * @arg: enum ttu_flags will be passed to this argument
>   */
> -static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> +static bool try_to_unmap_one(struct folio *folio, struct mm_area *vma,
>  		     unsigned long address, void *arg)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -2227,7 +2227,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  	return ret;
>  }
>
> -static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg)
> +static bool invalid_migration_vma(struct mm_area *vma, void *arg)
>  {
>  	return vma_is_temporary_stack(vma);
>  }
> @@ -2269,7 +2269,7 @@ void try_to_unmap(struct folio *folio, enum ttu_flags flags)
>   * If TTU_SPLIT_HUGE_PMD is specified any PMD mappings will be split into PTEs
>   * containing migration entries.
>   */
> -static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> +static bool try_to_migrate_one(struct folio *folio, struct mm_area *vma,
>  		     unsigned long address, void *arg)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -2657,7 +2657,7 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
>  {
>  	struct mmu_notifier_range range;
>  	struct folio *folio, *fw_folio;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct folio_walk fw;
>  	struct page *page;
>  	swp_entry_t entry;
> @@ -2821,7 +2821,7 @@ static void rmap_walk_anon(struct folio *folio,
>  	pgoff_end = pgoff_start + folio_nr_pages(folio) - 1;
>  	anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
>  			pgoff_start, pgoff_end) {
> -		struct vm_area_struct *vma = avc->vma;
> +		struct mm_area *vma = avc->vma;
>  		unsigned long address = vma_address(vma, pgoff_start,
>  				folio_nr_pages(folio));
>
> @@ -2866,7 +2866,7 @@ static void __rmap_walk_file(struct folio *folio, struct address_space *mapping,
>  			     struct rmap_walk_control *rwc, bool locked)
>  {
>  	pgoff_t pgoff_end = pgoff_start + nr_pages - 1;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	VM_WARN_ON_FOLIO(folio && mapping != folio_mapping(folio), folio);
>  	VM_WARN_ON_FOLIO(folio && pgoff_start != folio_pgoff(folio), folio);
> @@ -2958,7 +2958,7 @@ void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
>   * Unlike common anonymous pages, anonymous hugepages have no accounting code
>   * and no lru code, because we handle hugepages differently from common pages.
>   */
> -void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> +void hugetlb_add_anon_rmap(struct folio *folio, struct mm_area *vma,
>  		unsigned long address, rmap_t flags)
>  {
>  	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
> @@ -2973,7 +2973,7 @@ void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>  }
>
>  void hugetlb_add_new_anon_rmap(struct folio *folio,
> -		struct vm_area_struct *vma, unsigned long address)
> +		struct mm_area *vma, unsigned long address)
>  {
>  	VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
>
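
A side note for anyone skimming the rmap hunks above: only the spelling
of the VMA type changes, the calling conventions stay put.  A minimal
caller sketch (illustrative only, assuming the patch is applied; the
locking rule is the one from the kernel-doc quoted above):

	/* Map one PTE of a file folio; caller holds the page table lock. */
	static void example_map_one_pte(struct folio *folio, struct page *page,
					struct mm_area *vma)
	{
		folio_add_file_rmap_ptes(folio, page, 1, vma);
	}
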
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 1b0a214ee558..6fc28aeec966 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -120,7 +120,7 @@ static int secretmem_release(struct inode *inode, struct file *file)
>  	return 0;
>  }
>
> -static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
> +static int secretmem_mmap(struct file *file, struct mm_area *vma)
>  {
>  	unsigned long len = vma->vm_end - vma->vm_start;
>
> @@ -136,7 +136,7 @@ static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -bool vma_is_secretmem(struct vm_area_struct *vma)
> +bool vma_is_secretmem(struct mm_area *vma)
>  {
>  	return vma->vm_ops == &secretmem_vm_ops;
>  }
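
The secretmem hunks also show the usual "identify a VMA by its vm_ops"
idiom, which the rename leaves intact.  A hedged example of a caller,
modelled on the gup path (if I remember the current tree correctly, the
check lives in check_vma_flags()):

	/* Refuse to pin pages backed by a secretmem mapping. */
	if (vma_is_secretmem(vma))
		return -EFAULT;
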
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 99327c30507c..c7535853a324 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -160,7 +160,7 @@ static unsigned long shmem_default_max_inodes(void)
>
>  static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  			struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
> -			struct vm_area_struct *vma, vm_fault_t *fault_type);
> +			struct mm_area *vma, vm_fault_t *fault_type);
>
>  static inline struct shmem_sb_info *SHMEM_SB(struct super_block *sb)
>  {
> @@ -281,12 +281,12 @@ bool shmem_mapping(struct address_space *mapping)
>  }
>  EXPORT_SYMBOL_GPL(shmem_mapping);
>
> -bool vma_is_anon_shmem(struct vm_area_struct *vma)
> +bool vma_is_anon_shmem(struct mm_area *vma)
>  {
>  	return vma->vm_ops == &shmem_anon_vm_ops;
>  }
>
> -bool vma_is_shmem(struct vm_area_struct *vma)
> +bool vma_is_shmem(struct mm_area *vma)
>  {
>  	return vma_is_anon_shmem(vma) || vma->vm_ops == &shmem_vm_ops;
>  }
> @@ -614,7 +614,7 @@ static unsigned int shmem_get_orders_within_size(struct inode *inode,
>
>  static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>  					      loff_t write_end, bool shmem_huge_force,
> -					      struct vm_area_struct *vma,
> +					      struct mm_area *vma,
>  					      unsigned long vm_flags)
>  {
>  	unsigned int maybe_pmd_order = HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ?
> @@ -861,7 +861,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
>
>  static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>  					      loff_t write_end, bool shmem_huge_force,
> -					      struct vm_area_struct *vma,
> +					      struct mm_area *vma,
>  					      unsigned long vm_flags)
>  {
>  	return 0;
> @@ -1003,7 +1003,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>   * This is safe to call without i_rwsem or the i_pages lock thanks to RCU,
>   * as long as the inode doesn't go away and racy results are not a problem.
>   */
> -unsigned long shmem_swap_usage(struct vm_area_struct *vma)
> +unsigned long shmem_swap_usage(struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(vma->vm_file);
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> @@ -1755,7 +1755,7 @@ bool shmem_hpage_pmd_enabled(void)
>  }
>
>  unsigned long shmem_allowable_huge_orders(struct inode *inode,
> -				struct vm_area_struct *vma, pgoff_t index,
> +				struct mm_area *vma, pgoff_t index,
>  				loff_t write_end, bool shmem_huge_force)
>  {
>  	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
> @@ -1802,7 +1802,7 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
>  					   struct address_space *mapping, pgoff_t index,
>  					   unsigned long orders)
>  {
> -	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
> +	struct mm_area *vma = vmf ? vmf->vma : NULL;
>  	pgoff_t aligned_index;
>  	unsigned long pages;
>  	int order;
> @@ -1959,7 +1959,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  }
>
>  static struct folio *shmem_swap_alloc_folio(struct inode *inode,
> -		struct vm_area_struct *vma, pgoff_t index,
> +		struct mm_area *vma, pgoff_t index,
>  		swp_entry_t entry, int order, gfp_t gfp)
>  {
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> @@ -2036,7 +2036,7 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
>
>  static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
>  				struct shmem_inode_info *info, pgoff_t index,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	struct folio *new, *old = *foliop;
>  	swp_entry_t entry = old->swap;
> @@ -2231,7 +2231,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
>   */
>  static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  			     struct folio **foliop, enum sgp_type sgp,
> -			     gfp_t gfp, struct vm_area_struct *vma,
> +			     gfp_t gfp, struct mm_area *vma,
>  			     vm_fault_t *fault_type)
>  {
>  	struct address_space *mapping = inode->i_mapping;
> @@ -2434,7 +2434,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>  		loff_t write_end, struct folio **foliop, enum sgp_type sgp,
>  		gfp_t gfp, struct vm_fault *vmf, vm_fault_t *fault_type)
>  {
> -	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
> +	struct mm_area *vma = vmf ? vmf->vma : NULL;
>  	struct mm_struct *fault_mm;
>  	struct folio *folio;
>  	int error;
> @@ -2853,13 +2853,13 @@ unsigned long shmem_get_unmapped_area(struct file *file,
>  }
>
>  #ifdef CONFIG_NUMA
> -static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
> +static int shmem_set_policy(struct mm_area *vma, struct mempolicy *mpol)
>  {
>  	struct inode *inode = file_inode(vma->vm_file);
>  	return mpol_set_shared_policy(&SHMEM_I(inode)->policy, vma, mpol);
>  }
>
> -static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
> +static struct mempolicy *shmem_get_policy(struct mm_area *vma,
>  					  unsigned long addr, pgoff_t *ilx)
>  {
>  	struct inode *inode = file_inode(vma->vm_file);
> @@ -2924,7 +2924,7 @@ int shmem_lock(struct file *file, int lock, struct ucounts *ucounts)
>  	return retval;
>  }
>
> -static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
> +static int shmem_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>
> @@ -3148,7 +3148,7 @@ static inline struct inode *shmem_get_inode(struct mnt_idmap *idmap,
>
>  #ifdef CONFIG_USERFAULTFD
>  int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
> -			   struct vm_area_struct *dst_vma,
> +			   struct mm_area *dst_vma,
>  			   unsigned long dst_addr,
>  			   unsigned long src_addr,
>  			   uffd_flags_t flags,
> @@ -5880,7 +5880,7 @@ EXPORT_SYMBOL_GPL(shmem_file_setup_with_mnt);
>   * shmem_zero_setup - setup a shared anonymous mapping
>   * @vma: the vma to be mmapped is prepared by do_mmap
>   */
> -int shmem_zero_setup(struct vm_area_struct *vma)
> +int shmem_zero_setup(struct mm_area *vma)
>  {
>  	struct file *file;
>  	loff_t size = vma->vm_end - vma->vm_start;
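
On shmem_zero_setup(): as its comment says, this is what gives a
MAP_SHARED | MAP_ANONYMOUS region a shmem file to back it.  A condensed
sketch of the call site's shape (the function name here is hypothetical;
the logic follows __mmap_new_vma() further down in this patch):

	static int example_setup_anon(struct mm_area *vma)
	{
		if (vma->vm_flags & VM_SHARED)
			return shmem_zero_setup(vma);	/* attach a shmem file */
		vma_set_anonymous(vma);			/* plain anon mapping */
		return 0;
	}
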
> diff --git a/mm/swap.c b/mm/swap.c
> index 77b2d5997873..e86133c365cc 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -514,7 +514,7 @@ EXPORT_SYMBOL(folio_add_lru);
>   * If the VMA is mlocked, @folio is added to the unevictable list.
>   * Otherwise, it is treated the same way as folio_add_lru().
>   */
> -void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
> +void folio_add_lru_vma(struct folio *folio, struct mm_area *vma)
>  {
>  	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
>
> diff --git a/mm/swap.h b/mm/swap.h
> index 6f4a3f927edb..a2122e9848f5 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -61,12 +61,12 @@ void clear_shadow_from_swap_cache(int type, unsigned long begin,
>  				  unsigned long end);
>  void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
>  struct folio *swap_cache_get_folio(swp_entry_t entry,
> -		struct vm_area_struct *vma, unsigned long addr);
> +		struct mm_area *vma, unsigned long addr);
>  struct folio *filemap_get_incore_folio(struct address_space *mapping,
>  		pgoff_t index);
>
>  struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> -		struct vm_area_struct *vma, unsigned long addr,
> +		struct mm_area *vma, unsigned long addr,
>  		struct swap_iocb **plug);
>  struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
>  		struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated,
> @@ -151,7 +151,7 @@ static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entr
>  }
>
>  static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
> -		struct vm_area_struct *vma, unsigned long addr)
> +		struct mm_area *vma, unsigned long addr)
>  {
>  	return NULL;
>  }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 68fd981b514f..60a1d4571fc8 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -284,7 +284,7 @@ static inline bool swap_use_vma_readahead(void)
>   * Caller must lock the swap device or hold a reference to keep it valid.
>   */
>  struct folio *swap_cache_get_folio(swp_entry_t entry,
> -		struct vm_area_struct *vma, unsigned long addr)
> +		struct mm_area *vma, unsigned long addr)
>  {
>  	struct folio *folio;
>
> @@ -481,7 +481,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
>   * swap cache folio lock.
>   */
>  struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> -		struct vm_area_struct *vma, unsigned long addr,
> +		struct mm_area *vma, unsigned long addr,
>  		struct swap_iocb **plug)
>  {
>  	struct swap_info_struct *si;
> @@ -677,7 +677,7 @@ void exit_swap_address_space(unsigned int type)
>  static int swap_vma_ra_win(struct vm_fault *vmf, unsigned long *start,
>  			   unsigned long *end)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	unsigned long ra_val;
>  	unsigned long faddr, prev_faddr, left, right;
>  	unsigned int max_win, hits, prev_win, win;
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 2eff8b51a945..fb46d0ea6aec 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1971,7 +1971,7 @@ static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
>   * just let do_wp_page work it out if a write is requested later - to
>   * force COW, vm_page_prot omits write permission from any private vma.
>   */
> -static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> +static int unuse_pte(struct mm_area *vma, pmd_t *pmd,
>  		unsigned long addr, swp_entry_t entry, struct folio *folio)
>  {
>  	struct page *page;
> @@ -2072,7 +2072,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  	return ret;
>  }
>
> -static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> +static int unuse_pte_range(struct mm_area *vma, pmd_t *pmd,
>  			unsigned long addr, unsigned long end,
>  			unsigned int type)
>  {
> @@ -2145,7 +2145,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  	return 0;
>  }
>
> -static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
> +static inline int unuse_pmd_range(struct mm_area *vma, pud_t *pud,
>  				unsigned long addr, unsigned long end,
>  				unsigned int type)
>  {
> @@ -2164,7 +2164,7 @@ static inline int unuse_pmd_range(struct vm_area_struct *vma, pud_t *pud,
>  	return 0;
>  }
>
> -static inline int unuse_pud_range(struct vm_area_struct *vma, p4d_t *p4d,
> +static inline int unuse_pud_range(struct mm_area *vma, p4d_t *p4d,
>  				unsigned long addr, unsigned long end,
>  				unsigned int type)
>  {
> @@ -2184,7 +2184,7 @@ static inline int unuse_pud_range(struct vm_area_struct *vma, p4d_t *p4d,
>  	return 0;
>  }
>
> -static inline int unuse_p4d_range(struct vm_area_struct *vma, pgd_t *pgd,
> +static inline int unuse_p4d_range(struct mm_area *vma, pgd_t *pgd,
>  				unsigned long addr, unsigned long end,
>  				unsigned int type)
>  {
> @@ -2204,7 +2204,7 @@ static inline int unuse_p4d_range(struct vm_area_struct *vma, pgd_t *pgd,
>  	return 0;
>  }
>
> -static int unuse_vma(struct vm_area_struct *vma, unsigned int type)
> +static int unuse_vma(struct mm_area *vma, unsigned int type)
>  {
>  	pgd_t *pgd;
>  	unsigned long addr, end, next;
> @@ -2227,7 +2227,7 @@ static int unuse_vma(struct vm_area_struct *vma, unsigned int type)
>
>  static int unuse_mm(struct mm_struct *mm, unsigned int type)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int ret = 0;
>  	VMA_ITERATOR(vmi, mm, 0);
>
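
The unuse_{vma,p4d,pud,pmd,pte}_range chain renamed above is the usual
five-level page-table walk; the bodies are untouched by this patch.  For
context, a condensed sketch of one level, written from memory, so treat
the details as approximate rather than authoritative:

	static inline int unuse_pmd_range(struct mm_area *vma, pud_t *pud,
			unsigned long addr, unsigned long end, unsigned int type)
	{
		pmd_t *pmd = pmd_offset(pud, addr);
		unsigned long next;
		int ret;

		do {
			next = pmd_addr_end(addr, end);
			if (pmd_none_or_clear_bad(pmd))
				continue;
			/* Descend; the PTE level does the actual swap-in. */
			ret = unuse_pte_range(vma, pmd, addr, next, type);
			if (ret)
				return ret;
		} while (pmd++, addr = next, addr != end);
		return 0;
	}
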
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index fbf2cf62ab9f..ed1f47504327 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -21,7 +21,7 @@
>  #include "swap.h"
>
>  static __always_inline
> -bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
> +bool validate_dst_vma(struct mm_area *dst_vma, unsigned long dst_end)
>  {
>  	/* Make sure that the dst range is fully within dst_vma. */
>  	if (dst_end > dst_vma->vm_end)
> @@ -39,10 +39,10 @@ bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
>  }
>
>  static __always_inline
> -struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
> +struct mm_area *find_vma_and_prepare_anon(struct mm_struct *mm,
>  						 unsigned long addr)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mmap_assert_locked(mm);
>  	vma = vma_lookup(mm, addr);
> @@ -66,10 +66,10 @@ struct vm_area_struct *find_vma_and_prepare_anon(struct mm_struct *mm,
>   * Return: A locked vma containing @address, -ENOENT if no vma is found, or
>   * -ENOMEM if anon_vma couldn't be allocated.
>   */
> -static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
> +static struct mm_area *uffd_lock_vma(struct mm_struct *mm,
>  				       unsigned long address)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = lock_vma_under_rcu(mm, address);
>  	if (vma) {
> @@ -96,11 +96,11 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_struct *mm,
>  	return vma;
>  }
>
> -static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
> +static struct mm_area *uffd_mfill_lock(struct mm_struct *dst_mm,
>  					      unsigned long dst_start,
>  					      unsigned long len)
>  {
> -	struct vm_area_struct *dst_vma;
> +	struct mm_area *dst_vma;
>
>  	dst_vma = uffd_lock_vma(dst_mm, dst_start);
>  	if (IS_ERR(dst_vma) || validate_dst_vma(dst_vma, dst_start + len))
> @@ -110,18 +110,18 @@ static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
>  	return ERR_PTR(-ENOENT);
>  }
>
> -static void uffd_mfill_unlock(struct vm_area_struct *vma)
> +static void uffd_mfill_unlock(struct mm_area *vma)
>  {
>  	vma_end_read(vma);
>  }
>
>  #else
>
> -static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
> +static struct mm_area *uffd_mfill_lock(struct mm_struct *dst_mm,
>  					      unsigned long dst_start,
>  					      unsigned long len)
>  {
> -	struct vm_area_struct *dst_vma;
> +	struct mm_area *dst_vma;
>
>  	mmap_read_lock(dst_mm);
>  	dst_vma = find_vma_and_prepare_anon(dst_mm, dst_start);
> @@ -137,14 +137,14 @@ static struct vm_area_struct *uffd_mfill_lock(struct mm_struct *dst_mm,
>  	return dst_vma;
>  }
>
> -static void uffd_mfill_unlock(struct vm_area_struct *vma)
> +static void uffd_mfill_unlock(struct mm_area *vma)
>  {
>  	mmap_read_unlock(vma->vm_mm);
>  }
>  #endif
>
>  /* Check if dst_addr is outside of file's size. Must be called with ptl held. */
> -static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
> +static bool mfill_file_over_size(struct mm_area *dst_vma,
>  				 unsigned long dst_addr)
>  {
>  	struct inode *inode;
> @@ -166,7 +166,7 @@ static bool mfill_file_over_size(struct vm_area_struct *dst_vma,
>   * and anon, and for both shared and private VMAs.
>   */
>  int mfill_atomic_install_pte(pmd_t *dst_pmd,
> -			     struct vm_area_struct *dst_vma,
> +			     struct mm_area *dst_vma,
>  			     unsigned long dst_addr, struct page *page,
>  			     bool newly_allocated, uffd_flags_t flags)
>  {
> @@ -235,7 +235,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
>  }
>
>  static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
> -				 struct vm_area_struct *dst_vma,
> +				 struct mm_area *dst_vma,
>  				 unsigned long dst_addr,
>  				 unsigned long src_addr,
>  				 uffd_flags_t flags,
> @@ -311,7 +311,7 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
>  }
>
>  static int mfill_atomic_pte_zeroed_folio(pmd_t *dst_pmd,
> -					 struct vm_area_struct *dst_vma,
> +					 struct mm_area *dst_vma,
>  					 unsigned long dst_addr)
>  {
>  	struct folio *folio;
> @@ -343,7 +343,7 @@ static int mfill_atomic_pte_zeroed_folio(pmd_t *dst_pmd,
>  }
>
>  static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
> -				     struct vm_area_struct *dst_vma,
> +				     struct mm_area *dst_vma,
>  				     unsigned long dst_addr)
>  {
>  	pte_t _dst_pte, *dst_pte;
> @@ -378,7 +378,7 @@ static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
>
>  /* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
>  static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
> -				     struct vm_area_struct *dst_vma,
> +				     struct mm_area *dst_vma,
>  				     unsigned long dst_addr,
>  				     uffd_flags_t flags)
>  {
> @@ -422,7 +422,7 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
>
>  /* Handles UFFDIO_POISON for all non-hugetlb VMAs. */
>  static int mfill_atomic_pte_poison(pmd_t *dst_pmd,
> -				   struct vm_area_struct *dst_vma,
> +				   struct mm_area *dst_vma,
>  				   unsigned long dst_addr,
>  				   uffd_flags_t flags)
>  {
> @@ -487,7 +487,7 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
>   */
>  static __always_inline ssize_t mfill_atomic_hugetlb(
>  					      struct userfaultfd_ctx *ctx,
> -					      struct vm_area_struct *dst_vma,
> +					      struct mm_area *dst_vma,
>  					      unsigned long dst_start,
>  					      unsigned long src_start,
>  					      unsigned long len,
> @@ -643,7 +643,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
>  #else /* !CONFIG_HUGETLB_PAGE */
>  /* fail at build time if gcc attempts to use this */
>  extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
> -				    struct vm_area_struct *dst_vma,
> +				    struct mm_area *dst_vma,
>  				    unsigned long dst_start,
>  				    unsigned long src_start,
>  				    unsigned long len,
> @@ -651,7 +651,7 @@ extern ssize_t mfill_atomic_hugetlb(struct userfaultfd_ctx *ctx,
>  #endif /* CONFIG_HUGETLB_PAGE */
>
>  static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
> -						struct vm_area_struct *dst_vma,
> +						struct mm_area *dst_vma,
>  						unsigned long dst_addr,
>  						unsigned long src_addr,
>  						uffd_flags_t flags,
> @@ -701,7 +701,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
>  					    uffd_flags_t flags)
>  {
>  	struct mm_struct *dst_mm = ctx->mm;
> -	struct vm_area_struct *dst_vma;
> +	struct mm_area *dst_vma;
>  	ssize_t err;
>  	pmd_t *dst_pmd;
>  	unsigned long src_addr, dst_addr;
> @@ -897,7 +897,7 @@ ssize_t mfill_atomic_poison(struct userfaultfd_ctx *ctx, unsigned long start,
>  			    uffd_flags_set_mode(flags, MFILL_ATOMIC_POISON));
>  }
>
> -long uffd_wp_range(struct vm_area_struct *dst_vma,
> +long uffd_wp_range(struct mm_area *dst_vma,
>  		   unsigned long start, unsigned long len, bool enable_wp)
>  {
>  	unsigned int mm_cp_flags;
> @@ -932,7 +932,7 @@ int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
>  	struct mm_struct *dst_mm = ctx->mm;
>  	unsigned long end = start + len;
>  	unsigned long _start, _end;
> -	struct vm_area_struct *dst_vma;
> +	struct mm_area *dst_vma;
>  	unsigned long page_mask;
>  	long err;
>  	VMA_ITERATOR(vmi, dst_mm, start);
> @@ -1027,8 +1027,8 @@ static inline bool is_pte_pages_stable(pte_t *dst_pte, pte_t *src_pte,
>  }
>
>  static int move_present_pte(struct mm_struct *mm,
> -			    struct vm_area_struct *dst_vma,
> -			    struct vm_area_struct *src_vma,
> +			    struct mm_area *dst_vma,
> +			    struct mm_area *src_vma,
>  			    unsigned long dst_addr, unsigned long src_addr,
>  			    pte_t *dst_pte, pte_t *src_pte,
>  			    pte_t orig_dst_pte, pte_t orig_src_pte,
> @@ -1073,7 +1073,7 @@ static int move_present_pte(struct mm_struct *mm,
>  	return err;
>  }
>
> -static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
> +static int move_swap_pte(struct mm_struct *mm, struct mm_area *dst_vma,
>  			 unsigned long dst_addr, unsigned long src_addr,
>  			 pte_t *dst_pte, pte_t *src_pte,
>  			 pte_t orig_dst_pte, pte_t orig_src_pte,
> @@ -1107,8 +1107,8 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
>  }
>
>  static int move_zeropage_pte(struct mm_struct *mm,
> -			     struct vm_area_struct *dst_vma,
> -			     struct vm_area_struct *src_vma,
> +			     struct mm_area *dst_vma,
> +			     struct mm_area *src_vma,
>  			     unsigned long dst_addr, unsigned long src_addr,
>  			     pte_t *dst_pte, pte_t *src_pte,
>  			     pte_t orig_dst_pte, pte_t orig_src_pte,
> @@ -1140,8 +1140,8 @@ static int move_zeropage_pte(struct mm_struct *mm,
>   * in moving the page.
>   */
>  static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> -			  struct vm_area_struct *dst_vma,
> -			  struct vm_area_struct *src_vma,
> +			  struct mm_area *dst_vma,
> +			  struct mm_area *src_vma,
>  			  unsigned long dst_addr, unsigned long src_addr,
>  			  __u64 mode)
>  {
> @@ -1445,15 +1445,15 @@ static inline bool move_splits_huge_pmd(unsigned long dst_addr,
>  }
>  #endif
>
> -static inline bool vma_move_compatible(struct vm_area_struct *vma)
> +static inline bool vma_move_compatible(struct mm_area *vma)
>  {
>  	return !(vma->vm_flags & (VM_PFNMAP | VM_IO |  VM_HUGETLB |
>  				  VM_MIXEDMAP | VM_SHADOW_STACK));
>  }
>
>  static int validate_move_areas(struct userfaultfd_ctx *ctx,
> -			       struct vm_area_struct *src_vma,
> -			       struct vm_area_struct *dst_vma)
> +			       struct mm_area *src_vma,
> +			       struct mm_area *dst_vma)
>  {
>  	/* Only allow moving if both have the same access and protection */
>  	if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
> @@ -1491,10 +1491,10 @@ static __always_inline
>  int find_vmas_mm_locked(struct mm_struct *mm,
>  			unsigned long dst_start,
>  			unsigned long src_start,
> -			struct vm_area_struct **dst_vmap,
> -			struct vm_area_struct **src_vmap)
> +			struct mm_area **dst_vmap,
> +			struct mm_area **src_vmap)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mmap_assert_locked(mm);
>  	vma = find_vma_and_prepare_anon(mm, dst_start);
> @@ -1518,10 +1518,10 @@ int find_vmas_mm_locked(struct mm_struct *mm,
>  static int uffd_move_lock(struct mm_struct *mm,
>  			  unsigned long dst_start,
>  			  unsigned long src_start,
> -			  struct vm_area_struct **dst_vmap,
> -			  struct vm_area_struct **src_vmap)
> +			  struct mm_area **dst_vmap,
> +			  struct mm_area **src_vmap)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int err;
>
>  	vma = uffd_lock_vma(mm, dst_start);
> @@ -1581,8 +1581,8 @@ static int uffd_move_lock(struct mm_struct *mm,
>  	return err;
>  }
>
> -static void uffd_move_unlock(struct vm_area_struct *dst_vma,
> -			     struct vm_area_struct *src_vma)
> +static void uffd_move_unlock(struct mm_area *dst_vma,
> +			     struct mm_area *src_vma)
>  {
>  	vma_end_read(src_vma);
>  	if (src_vma != dst_vma)
> @@ -1594,8 +1594,8 @@ static void uffd_move_unlock(struct vm_area_struct *dst_vma,
>  static int uffd_move_lock(struct mm_struct *mm,
>  			  unsigned long dst_start,
>  			  unsigned long src_start,
> -			  struct vm_area_struct **dst_vmap,
> -			  struct vm_area_struct **src_vmap)
> +			  struct mm_area **dst_vmap,
> +			  struct mm_area **src_vmap)
>  {
>  	int err;
>
> @@ -1606,8 +1606,8 @@ static int uffd_move_lock(struct mm_struct *mm,
>  	return err;
>  }
>
> -static void uffd_move_unlock(struct vm_area_struct *dst_vma,
> -			     struct vm_area_struct *src_vma)
> +static void uffd_move_unlock(struct mm_area *dst_vma,
> +			     struct mm_area *src_vma)
>  {
>  	mmap_assert_locked(src_vma->vm_mm);
>  	mmap_read_unlock(dst_vma->vm_mm);
> @@ -1694,7 +1694,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
>  		   unsigned long src_start, unsigned long len, __u64 mode)
>  {
>  	struct mm_struct *mm = ctx->mm;
> -	struct vm_area_struct *src_vma, *dst_vma;
> +	struct mm_area *src_vma, *dst_vma;
>  	unsigned long src_addr, dst_addr;
>  	pmd_t *src_pmd, *dst_pmd;
>  	long err = -EINVAL;
> @@ -1865,7 +1865,7 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
>  	return moved ? moved : err;
>  }
>
> -static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
> +static void userfaultfd_set_vm_flags(struct mm_area *vma,
>  				     vm_flags_t flags)
>  {
>  	const bool uffd_wp_changed = (vma->vm_flags ^ flags) & VM_UFFD_WP;
> @@ -1880,7 +1880,7 @@ static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
>  		vma_set_page_prot(vma);
>  }
>
> -static void userfaultfd_set_ctx(struct vm_area_struct *vma,
> +static void userfaultfd_set_ctx(struct mm_area *vma,
>  				struct userfaultfd_ctx *ctx,
>  				unsigned long flags)
>  {
> @@ -1890,18 +1890,18 @@ static void userfaultfd_set_ctx(struct vm_area_struct *vma,
>  				 (vma->vm_flags & ~__VM_UFFD_FLAGS) | flags);
>  }
>
> -void userfaultfd_reset_ctx(struct vm_area_struct *vma)
> +void userfaultfd_reset_ctx(struct mm_area *vma)
>  {
>  	userfaultfd_set_ctx(vma, NULL, 0);
>  }
>
> -struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
> -					     struct vm_area_struct *prev,
> -					     struct vm_area_struct *vma,
> +struct mm_area *userfaultfd_clear_vma(struct vma_iterator *vmi,
> +					     struct mm_area *prev,
> +					     struct mm_area *vma,
>  					     unsigned long start,
>  					     unsigned long end)
>  {
> -	struct vm_area_struct *ret;
> +	struct mm_area *ret;
>
>  	/* Reset ptes for the whole vma range if wr-protected */
>  	if (userfaultfd_wp(vma))
> @@ -1924,13 +1924,13 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
>
>  /* Assumes mmap write lock taken, and mm_struct pinned. */
>  int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
> -			       struct vm_area_struct *vma,
> +			       struct mm_area *vma,
>  			       unsigned long vm_flags,
>  			       unsigned long start, unsigned long end,
>  			       bool wp_async)
>  {
>  	VMA_ITERATOR(vmi, ctx->mm, start);
> -	struct vm_area_struct *prev = vma_prev(&vmi);
> +	struct mm_area *prev = vma_prev(&vmi);
>  	unsigned long vma_end;
>  	unsigned long new_flags;
>
> @@ -1985,7 +1985,7 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
>  void userfaultfd_release_new(struct userfaultfd_ctx *ctx)
>  {
>  	struct mm_struct *mm = ctx->mm;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	/* the various vma->vm_userfaultfd_ctx still points to it */
> @@ -2000,7 +2000,7 @@ void userfaultfd_release_new(struct userfaultfd_ctx *ctx)
>  void userfaultfd_release_all(struct mm_struct *mm,
>  			     struct userfaultfd_ctx *ctx)
>  {
> -	struct vm_area_struct *vma, *prev;
> +	struct mm_area *vma, *prev;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	if (!mmget_not_zero(mm))
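
One thing these userfaultfd hunks make easy to see is the dual locking
scheme: the first uffd_mfill_lock() variant takes a per-VMA read lock
via lock_vma_under_rcu(), and the #else variant falls back to
mmap_read_lock().  Callers only ever see the wrapper pair; a sketch
matching the signatures above:

	struct mm_area *dst_vma;

	dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
	if (IS_ERR(dst_vma))
		return PTR_ERR(dst_vma);
	/* ... install PTEs over [dst_start, dst_start + len) ... */
	uffd_mfill_unlock(dst_vma);
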
> diff --git a/mm/util.c b/mm/util.c
> index 448117da071f..e0ed4f7d00d4 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -314,7 +314,7 @@ void *memdup_user_nul(const void __user *src, size_t len)
>  EXPORT_SYMBOL(memdup_user_nul);
>
>  /* Check if the vma is being used as a stack by this task */
> -int vma_is_stack_for_current(struct vm_area_struct *vma)
> +int vma_is_stack_for_current(struct mm_area *vma)
>  {
>  	struct task_struct * __maybe_unused t = current;
>
> @@ -324,7 +324,7 @@ int vma_is_stack_for_current(struct vm_area_struct *vma)
>  /*
>   * Change backing file, only valid to use during initial VMA setup.
>   */
> -void vma_set_file(struct vm_area_struct *vma, struct file *file)
> +void vma_set_file(struct mm_area *vma, struct file *file)
>  {
>  	/* Changing an anonymous vma with this is illegal */
>  	get_file(file);
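
vma_set_file()'s body is mostly hidden by the diff context here; if
memory serves, the sequence after the get_file() above is:

	swap(vma->vm_file, file);	/* publish the new backing file */
	fput(file);			/* drop the ref on the old one */

i.e. the helper swaps the backing file while keeping both file
refcounts balanced.
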
> diff --git a/mm/vma.c b/mm/vma.c
> index 5cdc5612bfc1..06e6e9c02ab8 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -21,8 +21,8 @@ struct mmap_state {
>  	unsigned long charged;
>  	bool retry_merge;
>
> -	struct vm_area_struct *prev;
> -	struct vm_area_struct *next;
> +	struct mm_area *prev;
> +	struct mm_area *next;
>
>  	/* Unmapping state. */
>  	struct vma_munmap_struct vms;
> @@ -59,7 +59,7 @@ struct mmap_state {
>
>  static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_next)
>  {
> -	struct vm_area_struct *vma = merge_next ? vmg->next : vmg->prev;
> +	struct mm_area *vma = merge_next ? vmg->next : vmg->prev;
>
>  	if (!mpol_equal(vmg->policy, vma_policy(vma)))
>  		return false;
> @@ -83,7 +83,7 @@ static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_nex
>  }
>
>  static inline bool is_mergeable_anon_vma(struct anon_vma *anon_vma1,
> -		 struct anon_vma *anon_vma2, struct vm_area_struct *vma)
> +		 struct anon_vma *anon_vma2, struct mm_area *vma)
>  {
>  	/*
>  	 * The list_is_singular() test is to avoid merging VMA cloned from
> @@ -96,8 +96,8 @@ static inline bool is_mergeable_anon_vma(struct anon_vma *anon_vma1,
>  }
>
>  /* Are the anon_vma's belonging to each VMA compatible with one another? */
> -static inline bool are_anon_vmas_compatible(struct vm_area_struct *vma1,
> -					    struct vm_area_struct *vma2)
> +static inline bool are_anon_vmas_compatible(struct mm_area *vma1,
> +					    struct mm_area *vma2)
>  {
>  	return is_mergeable_anon_vma(vma1->anon_vma, vma2->anon_vma, NULL);
>  }
> @@ -110,11 +110,11 @@ static inline bool are_anon_vmas_compatible(struct vm_area_struct *vma1,
>   *       removal.
>   */
>  static void init_multi_vma_prep(struct vma_prepare *vp,
> -				struct vm_area_struct *vma,
> +				struct mm_area *vma,
>  				struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *adjust;
> -	struct vm_area_struct **remove = &vp->remove;
> +	struct mm_area *adjust;
> +	struct mm_area **remove = &vp->remove;
>
>  	memset(vp, 0, sizeof(struct vma_prepare));
>  	vp->vma = vma;
> @@ -191,7 +191,7 @@ static bool can_vma_merge_after(struct vma_merge_struct *vmg)
>  	return false;
>  }
>
> -static void __vma_link_file(struct vm_area_struct *vma,
> +static void __vma_link_file(struct mm_area *vma,
>  			    struct address_space *mapping)
>  {
>  	if (vma_is_shared_maywrite(vma))
> @@ -205,7 +205,7 @@ static void __vma_link_file(struct vm_area_struct *vma,
>  /*
>   * Requires inode->i_mapping->i_mmap_rwsem
>   */
> -static void __remove_shared_vm_struct(struct vm_area_struct *vma,
> +static void __remove_shared_vm_struct(struct mm_area *vma,
>  				      struct address_space *mapping)
>  {
>  	if (vma_is_shared_maywrite(vma))
> @@ -231,7 +231,7 @@ static void __remove_shared_vm_struct(struct vm_area_struct *vma,
>   * the root anon_vma's mutex.
>   */
>  static void
> -anon_vma_interval_tree_pre_update_vma(struct vm_area_struct *vma)
> +anon_vma_interval_tree_pre_update_vma(struct mm_area *vma)
>  {
>  	struct anon_vma_chain *avc;
>
> @@ -240,7 +240,7 @@ anon_vma_interval_tree_pre_update_vma(struct vm_area_struct *vma)
>  }
>
>  static void
> -anon_vma_interval_tree_post_update_vma(struct vm_area_struct *vma)
> +anon_vma_interval_tree_post_update_vma(struct mm_area *vma)
>  {
>  	struct anon_vma_chain *avc;
>
> @@ -374,7 +374,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
>   * @vp: The vma_prepare struct
>   * @vma: The vma that will be altered once locked
>   */
> -static void init_vma_prep(struct vma_prepare *vp, struct vm_area_struct *vma)
> +static void init_vma_prep(struct vma_prepare *vp, struct mm_area *vma)
>  {
>  	init_multi_vma_prep(vp, vma, NULL);
>  }
> @@ -420,7 +420,7 @@ static bool can_vma_merge_right(struct vma_merge_struct *vmg,
>  /*
>   * Close a vm structure and free it.
>   */
> -void remove_vma(struct vm_area_struct *vma)
> +void remove_vma(struct mm_area *vma)
>  {
>  	might_sleep();
>  	vma_close(vma);
> @@ -435,8 +435,8 @@ void remove_vma(struct vm_area_struct *vma)
>   *
>   * Called with the mm semaphore held.
>   */
> -void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
> -		struct vm_area_struct *prev, struct vm_area_struct *next)
> +void unmap_region(struct ma_state *mas, struct mm_area *vma,
> +		struct mm_area *prev, struct mm_area *next)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  	struct mmu_gather tlb;
> @@ -458,11 +458,11 @@ void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
>   * VMA Iterator will point to the original VMA.
>   */
>  static __must_check int
> -__split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +__split_vma(struct vma_iterator *vmi, struct mm_area *vma,
>  	    unsigned long addr, int new_below)
>  {
>  	struct vma_prepare vp;
> -	struct vm_area_struct *new;
> +	struct mm_area *new;
>  	int err;
>
>  	WARN_ON(vma->vm_start >= addr);
> @@ -544,7 +544,7 @@ __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   * Split a vma into two pieces at address 'addr', a new vma is allocated
>   * either for the first part or the tail.
>   */
> -static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +static int split_vma(struct vma_iterator *vmi, struct mm_area *vma,
>  		     unsigned long addr, int new_below)
>  {
>  	if (vma->vm_mm->map_count >= sysctl_max_map_count)
> @@ -561,8 +561,8 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
>   *
>   * Returns: 0 on success.
>   */
> -static int dup_anon_vma(struct vm_area_struct *dst,
> -			struct vm_area_struct *src, struct vm_area_struct **dup)
> +static int dup_anon_vma(struct mm_area *dst,
> +			struct mm_area *src, struct mm_area **dup)
>  {
>  	/*
>  	 * Easily overlooked: when mprotect shifts the boundary, make sure the
> @@ -589,7 +589,7 @@ void validate_mm(struct mm_struct *mm)
>  {
>  	int bug = 0;
>  	int i = 0;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, 0);
>
>  	mt_validate(&mm->mm_mt);
> @@ -647,7 +647,7 @@ void validate_mm(struct mm_struct *mm)
>   */
>  static void vmg_adjust_set_range(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *adjust;
> +	struct mm_area *adjust;
>  	pgoff_t pgoff;
>
>  	if (vmg->__adjust_middle_start) {
> @@ -670,7 +670,7 @@ static void vmg_adjust_set_range(struct vma_merge_struct *vmg)
>   */
>  static int commit_merge(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct vma_prepare vp;
>
>  	if (vmg->__adjust_next_start) {
> @@ -705,7 +705,7 @@ static int commit_merge(struct vma_merge_struct *vmg)
>  }
>
>  /* We can only remove VMAs when merging if they do not have a close hook. */
> -static bool can_merge_remove_vma(struct vm_area_struct *vma)
> +static bool can_merge_remove_vma(struct mm_area *vma)
>  {
>  	return !vma->vm_ops || !vma->vm_ops->close;
>  }
> @@ -739,13 +739,13 @@ static bool can_merge_remove_vma(struct vm_area_struct *vma)
>   * - The caller must hold a WRITE lock on the mm_struct->mmap_lock.
>   * - vmi must be positioned within [@vmg->middle->vm_start, @vmg->middle->vm_end).
>   */
> -static __must_check struct vm_area_struct *vma_merge_existing_range(
> +static __must_check struct mm_area *vma_merge_existing_range(
>  		struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *middle = vmg->middle;
> -	struct vm_area_struct *prev = vmg->prev;
> -	struct vm_area_struct *next;
> -	struct vm_area_struct *anon_dup = NULL;
> +	struct mm_area *middle = vmg->middle;
> +	struct mm_area *prev = vmg->prev;
> +	struct mm_area *next;
> +	struct mm_area *anon_dup = NULL;
>  	unsigned long start = vmg->start;
>  	unsigned long end = vmg->end;
>  	bool left_side = middle && start == middle->vm_start;
> @@ -974,10 +974,10 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
>   * - The caller must have specified the next vma in @vmg->next.
>   * - The caller must have positioned the vmi at or before the gap.
>   */
> -struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
> +struct mm_area *vma_merge_new_range(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *prev = vmg->prev;
> -	struct vm_area_struct *next = vmg->next;
> +	struct mm_area *prev = vmg->prev;
> +	struct mm_area *next = vmg->next;
>  	unsigned long end = vmg->end;
>  	bool can_merge_left, can_merge_right;
>
> @@ -1053,10 +1053,10 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
>   */
>  int vma_expand(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *anon_dup = NULL;
> +	struct mm_area *anon_dup = NULL;
>  	bool remove_next = false;
> -	struct vm_area_struct *middle = vmg->middle;
> -	struct vm_area_struct *next = vmg->next;
> +	struct mm_area *middle = vmg->middle;
> +	struct mm_area *next = vmg->next;
>
>  	mmap_assert_write_locked(vmg->mm);
>
> @@ -1105,7 +1105,7 @@ int vma_expand(struct vma_merge_struct *vmg)
>   *
>   * Returns: 0 on success, -ENOMEM otherwise
>   */
> -int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +int vma_shrink(struct vma_iterator *vmi, struct mm_area *vma,
>  	       unsigned long start, unsigned long end, pgoff_t pgoff)
>  {
>  	struct vma_prepare vp;
> @@ -1162,7 +1162,7 @@ static inline void vms_clear_ptes(struct vma_munmap_struct *vms,
>  static void vms_clean_up_area(struct vma_munmap_struct *vms,
>  		struct ma_state *mas_detach)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if (!vms->nr_pages)
>  		return;
> @@ -1185,7 +1185,7 @@ static void vms_clean_up_area(struct vma_munmap_struct *vms,
>  static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
>  		struct ma_state *mas_detach)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct mm_struct *mm;
>
>  	mm = current->mm;
> @@ -1231,7 +1231,7 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
>   */
>  static void reattach_vmas(struct ma_state *mas_detach)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	mas_set(mas_detach, 0);
>  	mas_for_each(mas_detach, vma, ULONG_MAX)
> @@ -1253,7 +1253,7 @@ static void reattach_vmas(struct ma_state *mas_detach)
>  static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>  		struct ma_state *mas_detach)
>  {
> -	struct vm_area_struct *next = NULL;
> +	struct mm_area *next = NULL;
>  	int error;
>
>  	/*
> @@ -1356,7 +1356,7 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>  	/* Make sure no VMAs are about to be lost. */
>  	{
>  		MA_STATE(test, mas_detach->tree, 0, 0);
> -		struct vm_area_struct *vma_mas, *vma_test;
> +		struct mm_area *vma_mas, *vma_test;
>  		int test_count = 0;
>
>  		vma_iter_set(vms->vmi, vms->start);
> @@ -1392,14 +1392,14 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
>   * init_vma_munmap() - Initializer wrapper for vma_munmap_struct
>   * @vms: The vma munmap struct
>   * @vmi: The vma iterator
> - * @vma: The first vm_area_struct to munmap
> + * @vma: The first mm_area to munmap
>   * @start: The aligned start address to munmap
>   * @end: The aligned end address to munmap
>   * @uf: The userfaultfd list_head
>   * @unlock: Unlock after the operation.  Only unlocked on success
>   */
>  static void init_vma_munmap(struct vma_munmap_struct *vms,
> -		struct vma_iterator *vmi, struct vm_area_struct *vma,
> +		struct vma_iterator *vmi, struct mm_area *vma,
>  		unsigned long start, unsigned long end, struct list_head *uf,
>  		bool unlock)
>  {
> @@ -1424,7 +1424,7 @@ static void init_vma_munmap(struct vma_munmap_struct *vms,
>  /*
>   * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
>   * @vmi: The vma iterator
> - * @vma: The starting vm_area_struct
> + * @vma: The starting mm_area
>   * @mm: The mm_struct
>   * @start: The aligned start address to munmap.
>   * @end: The aligned end address to munmap.
> @@ -1435,7 +1435,7 @@ static void init_vma_munmap(struct vma_munmap_struct *vms,
>   * Return: 0 on success and drops the lock if so directed, error and leaves the
>   * lock held otherwise.
>   */
> -int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +int do_vmi_align_munmap(struct vma_iterator *vmi, struct mm_area *vma,
>  		struct mm_struct *mm, unsigned long start, unsigned long end,
>  		struct list_head *uf, bool unlock)
>  {
> @@ -1487,7 +1487,7 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
>  		  bool unlock)
>  {
>  	unsigned long end;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
>  		return -EINVAL;
> @@ -1520,12 +1520,12 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
>   * The function returns either the merged VMA, the original VMA if a split was
>   * required instead, or an error if the split failed.
>   */
> -static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
> +static struct mm_area *vma_modify(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *vma = vmg->middle;
> +	struct mm_area *vma = vmg->middle;
>  	unsigned long start = vmg->start;
>  	unsigned long end = vmg->end;
> -	struct vm_area_struct *merged;
> +	struct mm_area *merged;
>
>  	/* First, try to merge. */
>  	merged = vma_merge_existing_range(vmg);
> @@ -1553,9 +1553,9 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
>  	return vma;
>  }
>
> -struct vm_area_struct *vma_modify_flags(
> -	struct vma_iterator *vmi, struct vm_area_struct *prev,
> -	struct vm_area_struct *vma, unsigned long start, unsigned long end,
> +struct mm_area *vma_modify_flags(
> +	struct vma_iterator *vmi, struct mm_area *prev,
> +	struct mm_area *vma, unsigned long start, unsigned long end,
>  	unsigned long new_flags)
>  {
>  	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
> @@ -1565,10 +1565,10 @@ struct vm_area_struct *vma_modify_flags(
>  	return vma_modify(&vmg);
>  }
>
> -struct vm_area_struct
> +struct mm_area
>  *vma_modify_flags_name(struct vma_iterator *vmi,
> -		       struct vm_area_struct *prev,
> -		       struct vm_area_struct *vma,
> +		       struct mm_area *prev,
> +		       struct mm_area *vma,
>  		       unsigned long start,
>  		       unsigned long end,
>  		       unsigned long new_flags,
> @@ -1582,10 +1582,10 @@ struct vm_area_struct
>  	return vma_modify(&vmg);
>  }
>
> -struct vm_area_struct
> +struct mm_area
>  *vma_modify_policy(struct vma_iterator *vmi,
> -		   struct vm_area_struct *prev,
> -		   struct vm_area_struct *vma,
> +		   struct mm_area *prev,
> +		   struct mm_area *vma,
>  		   unsigned long start, unsigned long end,
>  		   struct mempolicy *new_pol)
>  {
> @@ -1596,10 +1596,10 @@ struct vm_area_struct
>  	return vma_modify(&vmg);
>  }
>
> -struct vm_area_struct
> +struct mm_area
>  *vma_modify_flags_uffd(struct vma_iterator *vmi,
> -		       struct vm_area_struct *prev,
> -		       struct vm_area_struct *vma,
> +		       struct mm_area *prev,
> +		       struct mm_area *vma,
>  		       unsigned long start, unsigned long end,
>  		       unsigned long new_flags,
>  		       struct vm_userfaultfd_ctx new_ctx)
> @@ -1616,8 +1616,8 @@ struct vm_area_struct
>   * Expand vma by delta bytes, potentially merging with an immediately adjacent
>   * VMA with identical properties.
>   */
> -struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
> -					struct vm_area_struct *vma,
> +struct mm_area *vma_merge_extend(struct vma_iterator *vmi,
> +					struct mm_area *vma,
>  					unsigned long delta)
>  {
>  	VMG_VMA_STATE(vmg, vmi, vma, vma, vma->vm_end, vma->vm_end + delta);
> @@ -1650,7 +1650,7 @@ static void unlink_file_vma_batch_process(struct unlink_vma_file_batch *vb)
>  }
>
>  void unlink_file_vma_batch_add(struct unlink_vma_file_batch *vb,
> -			       struct vm_area_struct *vma)
> +			       struct mm_area *vma)
>  {
>  	if (vma->vm_file == NULL)
>  		return;
> @@ -1673,7 +1673,7 @@ void unlink_file_vma_batch_final(struct unlink_vma_file_batch *vb)
>   * Unlink a file-based vm structure from its interval tree, to hide
>   * vma from rmap and vmtruncate before freeing its page tables.
>   */
> -void unlink_file_vma(struct vm_area_struct *vma)
> +void unlink_file_vma(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>
> @@ -1686,7 +1686,7 @@ void unlink_file_vma(struct vm_area_struct *vma)
>  	}
>  }
>
> -void vma_link_file(struct vm_area_struct *vma)
> +void vma_link_file(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct address_space *mapping;
> @@ -1699,7 +1699,7 @@ void vma_link_file(struct vm_area_struct *vma)
>  	}
>  }
>
> -int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> +int vma_link(struct mm_struct *mm, struct mm_area *vma)
>  {
>  	VMA_ITERATOR(vmi, mm, 0);
>
> @@ -1719,14 +1719,14 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
>   * Copy the vma structure to a new location in the same mm,
>   * prior to moving page table entries, to effect an mremap move.
>   */
> -struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> +struct mm_area *copy_vma(struct mm_area **vmap,
>  	unsigned long addr, unsigned long len, pgoff_t pgoff,
>  	bool *need_rmap_locks)
>  {
> -	struct vm_area_struct *vma = *vmap;
> +	struct mm_area *vma = *vmap;
>  	unsigned long vma_start = vma->vm_start;
>  	struct mm_struct *mm = vma->vm_mm;
> -	struct vm_area_struct *new_vma;
> +	struct mm_area *new_vma;
>  	bool faulted_in_anon_vma = true;
>  	VMA_ITERATOR(vmi, mm, addr);
>  	VMG_VMA_STATE(vmg, &vmi, NULL, vma, addr, addr + len);
> @@ -1818,7 +1818,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
>   * driver is doing some kind of reference counting. But that doesn't
>   * really matter for the anon_vma sharing case.
>   */
> -static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *b)
> +static int anon_vma_compatible(struct mm_area *a, struct mm_area *b)
>  {
>  	return a->vm_end == b->vm_start &&
>  		mpol_equal(vma_policy(a), vma_policy(b)) &&
> @@ -1849,9 +1849,9 @@ static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *
>   * and with the same memory policies). That's all stable, even with just
>   * a read lock on the mmap_lock.
>   */
> -static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old,
> -					  struct vm_area_struct *a,
> -					  struct vm_area_struct *b)
> +static struct anon_vma *reusable_anon_vma(struct mm_area *old,
> +					  struct mm_area *a,
> +					  struct mm_area *b)
>  {
>  	if (anon_vma_compatible(a, b)) {
>  		struct anon_vma *anon_vma = READ_ONCE(old->anon_vma);
> @@ -1870,10 +1870,10 @@ static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old,
>   * anon_vmas being allocated, preventing vma merge in subsequent
>   * mprotect.
>   */
> -struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
> +struct anon_vma *find_mergeable_anon_vma(struct mm_area *vma)
>  {
>  	struct anon_vma *anon_vma = NULL;
> -	struct vm_area_struct *prev, *next;
> +	struct mm_area *prev, *next;
>  	VMA_ITERATOR(vmi, vma->vm_mm, vma->vm_end);
>
>  	/* Try next first. */
> @@ -1909,13 +1909,13 @@ static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
>  	return vm_ops && (vm_ops->page_mkwrite || vm_ops->pfn_mkwrite);
>  }
>
> -static bool vma_is_shared_writable(struct vm_area_struct *vma)
> +static bool vma_is_shared_writable(struct mm_area *vma)
>  {
>  	return (vma->vm_flags & (VM_WRITE | VM_SHARED)) ==
>  		(VM_WRITE | VM_SHARED);
>  }
>
> -static bool vma_fs_can_writeback(struct vm_area_struct *vma)
> +static bool vma_fs_can_writeback(struct mm_area *vma)
>  {
>  	/* No managed pages to writeback. */
>  	if (vma->vm_flags & VM_PFNMAP)
> @@ -1929,7 +1929,7 @@ static bool vma_fs_can_writeback(struct vm_area_struct *vma)
>   * Does this VMA require the underlying folios to have their dirty state
>   * tracked?
>   */
> -bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
> +bool vma_needs_dirty_tracking(struct mm_area *vma)
>  {
>  	/* Only shared, writable VMAs require dirty tracking. */
>  	if (!vma_is_shared_writable(vma))
> @@ -1952,7 +1952,7 @@ bool vma_needs_dirty_tracking(struct vm_area_struct *vma)
>   * to the private version (using protection_map[] without the
>   * VM_SHARED bit).
>   */
> -bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot)
> +bool vma_wants_writenotify(struct mm_area *vma, pgprot_t vm_page_prot)
>  {
>  	/* If it was private or non-writable, the write bit is already clear */
>  	if (!vma_is_shared_writable(vma))
> @@ -2066,7 +2066,7 @@ static void vm_lock_mapping(struct mm_struct *mm, struct address_space *mapping)
>   */
>  int mm_take_all_locks(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct anon_vma_chain *avc;
>  	VMA_ITERATOR(vmi, mm, 0);
>
> @@ -2162,7 +2162,7 @@ static void vm_unlock_mapping(struct address_space *mapping)
>   */
>  void mm_drop_all_locks(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct anon_vma_chain *avc;
>  	VMA_ITERATOR(vmi, mm, 0);
>
> @@ -2301,7 +2301,7 @@ static int __mmap_prepare(struct mmap_state *map, struct list_head *uf)
>
>
>  static int __mmap_new_file_vma(struct mmap_state *map,
> -			       struct vm_area_struct *vma)
> +			       struct mm_area *vma)
>  {
>  	struct vma_iterator *vmi = map->vmi;
>  	int error;
> @@ -2345,11 +2345,11 @@ static int __mmap_new_file_vma(struct mmap_state *map,
>   *
>   * Returns: Zero on success, or an error.
>   */
> -static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
> +static int __mmap_new_vma(struct mmap_state *map, struct mm_area **vmap)
>  {
>  	struct vma_iterator *vmi = map->vmi;
>  	int error = 0;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * Determine the object being mapped and call the appropriate
> @@ -2415,7 +2415,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
>   * @map: Mapping state.
>   * @vma: Merged or newly allocated VMA for the mmap()'d region.
>   */
> -static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
> +static void __mmap_complete(struct mmap_state *map, struct mm_area *vma)
>  {
>  	struct mm_struct *mm = map->mm;
>  	unsigned long vm_flags = vma->vm_flags;
> @@ -2455,7 +2455,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  		struct list_head *uf)
>  {
>  	struct mm_struct *mm = current->mm;
> -	struct vm_area_struct *vma = NULL;
> +	struct mm_area *vma = NULL;
>  	int error;
>  	VMA_ITERATOR(vmi, mm, addr);
>  	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
> @@ -2480,7 +2480,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
>
>  	/* If flags changed, we might be able to merge, so try again. */
>  	if (map.retry_merge) {
> -		struct vm_area_struct *merged;
> +		struct mm_area *merged;
>  		VMG_MMAP_STATE(vmg, &map, vma);
>
>  		vma_iter_config(map.vmi, map.addr, map.end);
> @@ -2573,7 +2573,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>   * do not match then create a new anonymous VMA.  Eventually we may be able to
>   * do some brk-specific accounting here.
>   */
> -int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +int do_brk_flags(struct vma_iterator *vmi, struct mm_area *vma,
>  		 unsigned long addr, unsigned long len, unsigned long flags)
>  {
>  	struct mm_struct *mm = current->mm;
> @@ -2657,7 +2657,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  {
>  	unsigned long length, gap;
>  	unsigned long low_limit, high_limit;
> -	struct vm_area_struct *tmp;
> +	struct mm_area *tmp;
>  	VMA_ITERATOR(vmi, current->mm, 0);
>
>  	/* Adjust search length to account for worst case alignment overhead */
> @@ -2714,7 +2714,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
>  {
>  	unsigned long length, gap, gap_end;
>  	unsigned long low_limit, high_limit;
> -	struct vm_area_struct *tmp;
> +	struct mm_area *tmp;
>  	VMA_ITERATOR(vmi, current->mm, 0);
>
>  	/* Adjust search length to account for worst case alignment overhead */
> @@ -2757,7 +2757,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
>   * update accounting. This is shared with both the
>   * grow-up and grow-down cases.
>   */
> -static int acct_stack_growth(struct vm_area_struct *vma,
> +static int acct_stack_growth(struct mm_area *vma,
>  			     unsigned long size, unsigned long grow)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> @@ -2796,10 +2796,10 @@ static int acct_stack_growth(struct vm_area_struct *vma,
>   * PA-RISC uses this for its stack.
>   * vma is the last one with address > vma->vm_end.  Have to extend vma.
>   */
> -int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> +int expand_upwards(struct mm_area *vma, unsigned long address)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	struct vm_area_struct *next;
> +	struct mm_area *next;
>  	unsigned long gap_addr;
>  	int error = 0;
>  	VMA_ITERATOR(vmi, mm, vma->vm_start);
> @@ -2882,10 +2882,10 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
>   * vma is the first one with address < vma->vm_start.  Have to extend vma.
>   * mmap_lock held for writing.
>   */
> -int expand_downwards(struct vm_area_struct *vma, unsigned long address)
> +int expand_downwards(struct mm_area *vma, unsigned long address)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	struct vm_area_struct *prev;
> +	struct mm_area *prev;
>  	int error = 0;
>  	VMA_ITERATOR(vmi, mm, vma->vm_start);
>
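
Since mm/vma.c leans on the maple tree iterator throughout
(validate_mm(), mm_take_all_locks(), unmapped_area(), ...), here is the
traversal idiom in one place for anyone reviewing the renamed locals
(sketch, assuming the patch is applied):

	struct mm_area *vma;
	VMA_ITERATOR(vmi, mm, 0);

	mmap_read_lock(mm);
	for_each_vma(vmi, vma) {
		/* visit each mapping in ascending address order */
	}
	mmap_read_unlock(mm);
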
> diff --git a/mm/vma.h b/mm/vma.h
> index 7356ca5a22d3..b488a473fa97 100644
> --- a/mm/vma.h
> +++ b/mm/vma.h
> @@ -11,19 +11,19 @@
>   * VMA lock generalization
>   */
>  struct vma_prepare {
> -	struct vm_area_struct *vma;
> -	struct vm_area_struct *adj_next;
> +	struct mm_area *vma;
> +	struct mm_area *adj_next;
>  	struct file *file;
>  	struct address_space *mapping;
>  	struct anon_vma *anon_vma;
> -	struct vm_area_struct *insert;
> -	struct vm_area_struct *remove;
> -	struct vm_area_struct *remove2;
> +	struct mm_area *insert;
> +	struct mm_area *remove;
> +	struct mm_area *remove2;
>  };
>
>  struct unlink_vma_file_batch {
>  	int count;
> -	struct vm_area_struct *vmas[8];
> +	struct mm_area *vmas[8];
>  };
>
>  /*
> @@ -31,9 +31,9 @@ struct unlink_vma_file_batch {
>   */
>  struct vma_munmap_struct {
>  	struct vma_iterator *vmi;
> -	struct vm_area_struct *vma;     /* The first vma to munmap */
> -	struct vm_area_struct *prev;    /* vma before the munmap area */
> -	struct vm_area_struct *next;    /* vma after the munmap area */
> +	struct mm_area *vma;     /* The first vma to munmap */
> +	struct mm_area *prev;    /* vma before the munmap area */
> +	struct mm_area *next;    /* vma after the munmap area */
>  	struct list_head *uf;           /* Userfaultfd list_head */
>  	unsigned long start;            /* Aligned start addr (inclusive) */
>  	unsigned long end;              /* Aligned end addr (exclusive) */
> @@ -79,11 +79,11 @@ struct vma_merge_struct {
>  	 *
>  	 * next may be assigned by the caller.
>  	 */
> -	struct vm_area_struct *prev;
> -	struct vm_area_struct *middle;
> -	struct vm_area_struct *next;
> +	struct mm_area *prev;
> +	struct mm_area *middle;
> +	struct mm_area *next;
>  	/* This is the VMA we ultimately target to become the merged VMA. */
> -	struct vm_area_struct *target;
> +	struct mm_area *target;
>  	/*
>  	 * Initially, the start, end, pgoff fields are provided by the caller
>  	 * and describe the proposed new VMA range, whether modifying an
> @@ -145,7 +145,7 @@ static inline bool vmg_nomem(struct vma_merge_struct *vmg)
>  }
>
>  /* Assumes addr >= vma->vm_start. */
> -static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
> +static inline pgoff_t vma_pgoff_offset(struct mm_area *vma,
>  				       unsigned long addr)
>  {
>  	return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start);
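Worked example for vma_pgoff_offset(), assuming 4KiB pages: with
vm_start == 0x7f0000400000, vm_pgoff == 0x100 and addr == vm_start + 0x3000,
PHYS_PFN(addr - vm_start) is 3, so the helper returns 0x103, i.e. the page
offset of addr within the backing object.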
> @@ -189,11 +189,11 @@ void validate_mm(struct mm_struct *mm);
>
>  __must_check int vma_expand(struct vma_merge_struct *vmg);
>  __must_check int vma_shrink(struct vma_iterator *vmi,
> -		struct vm_area_struct *vma,
> +		struct mm_area *vma,
>  		unsigned long start, unsigned long end, pgoff_t pgoff);
>
>  static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
> -			struct vm_area_struct *vma, gfp_t gfp)
> +			struct mm_area *vma, gfp_t gfp)
>
>  {
>  	if (vmi->mas.status != ma_start &&
> @@ -210,7 +210,7 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
>  }
>
>  int
> -do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +do_vmi_align_munmap(struct vma_iterator *vmi, struct mm_area *vma,
>  		    struct mm_struct *mm, unsigned long start,
>  		    unsigned long end, struct list_head *uf, bool unlock);
>
> @@ -218,51 +218,51 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
>  		  unsigned long start, size_t len, struct list_head *uf,
>  		  bool unlock);
>
> -void remove_vma(struct vm_area_struct *vma);
> +void remove_vma(struct mm_area *vma);
>
> -void unmap_region(struct ma_state *mas, struct vm_area_struct *vma,
> -		struct vm_area_struct *prev, struct vm_area_struct *next);
> +void unmap_region(struct ma_state *mas, struct mm_area *vma,
> +		struct mm_area *prev, struct mm_area *next);
>
>  /* We are about to modify the VMA's flags. */
> -__must_check struct vm_area_struct
> +__must_check struct mm_area
>  *vma_modify_flags(struct vma_iterator *vmi,
> -		struct vm_area_struct *prev, struct vm_area_struct *vma,
> +		struct mm_area *prev, struct mm_area *vma,
>  		unsigned long start, unsigned long end,
>  		unsigned long new_flags);
>
>  /* We are about to modify the VMA's flags and/or anon_name. */
> -__must_check struct vm_area_struct
> +__must_check struct mm_area
>  *vma_modify_flags_name(struct vma_iterator *vmi,
> -		       struct vm_area_struct *prev,
> -		       struct vm_area_struct *vma,
> +		       struct mm_area *prev,
> +		       struct mm_area *vma,
>  		       unsigned long start,
>  		       unsigned long end,
>  		       unsigned long new_flags,
>  		       struct anon_vma_name *new_name);
>
>  /* We are about to modify the VMA's memory policy. */
> -__must_check struct vm_area_struct
> +__must_check struct mm_area
>  *vma_modify_policy(struct vma_iterator *vmi,
> -		   struct vm_area_struct *prev,
> -		   struct vm_area_struct *vma,
> +		   struct mm_area *prev,
> +		   struct mm_area *vma,
>  		   unsigned long start, unsigned long end,
>  		   struct mempolicy *new_pol);
>
>  /* We are about to modify the VMA's flags and/or uffd context. */
> -__must_check struct vm_area_struct
> +__must_check struct mm_area
>  *vma_modify_flags_uffd(struct vma_iterator *vmi,
> -		       struct vm_area_struct *prev,
> -		       struct vm_area_struct *vma,
> +		       struct mm_area *prev,
> +		       struct mm_area *vma,
>  		       unsigned long start, unsigned long end,
>  		       unsigned long new_flags,
>  		       struct vm_userfaultfd_ctx new_ctx);
>
> -__must_check struct vm_area_struct
> +__must_check struct mm_area
>  *vma_merge_new_range(struct vma_merge_struct *vmg);
>
> -__must_check struct vm_area_struct
> +__must_check struct mm_area
>  *vma_merge_extend(struct vma_iterator *vmi,
> -		  struct vm_area_struct *vma,
> +		  struct mm_area *vma,
>  		  unsigned long delta);
>
>  void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb);
> @@ -270,22 +270,22 @@ void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb);
>  void unlink_file_vma_batch_final(struct unlink_vma_file_batch *vb);
>
>  void unlink_file_vma_batch_add(struct unlink_vma_file_batch *vb,
> -			       struct vm_area_struct *vma);
> +			       struct mm_area *vma);
>
> -void unlink_file_vma(struct vm_area_struct *vma);
> +void unlink_file_vma(struct mm_area *vma);
>
> -void vma_link_file(struct vm_area_struct *vma);
> +void vma_link_file(struct mm_area *vma);
>
> -int vma_link(struct mm_struct *mm, struct vm_area_struct *vma);
> +int vma_link(struct mm_struct *mm, struct mm_area *vma);
>
> -struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> +struct mm_area *copy_vma(struct mm_area **vmap,
>  	unsigned long addr, unsigned long len, pgoff_t pgoff,
>  	bool *need_rmap_locks);
>
> -struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma);
> +struct anon_vma *find_mergeable_anon_vma(struct mm_area *vma);
>
> -bool vma_needs_dirty_tracking(struct vm_area_struct *vma);
> -bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
> +bool vma_needs_dirty_tracking(struct mm_area *vma);
> +bool vma_wants_writenotify(struct mm_area *vma, pgprot_t vm_page_prot);
>
>  int mm_take_all_locks(struct mm_struct *mm);
>  void mm_drop_all_locks(struct mm_struct *mm);
> @@ -294,13 +294,13 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
>  		struct list_head *uf);
>
> -int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *brkvma,
> +int do_brk_flags(struct vma_iterator *vmi, struct mm_area *brkvma,
>  		 unsigned long addr, unsigned long request, unsigned long flags);
>
>  unsigned long unmapped_area(struct vm_unmapped_area_info *info);
>  unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
>
> -static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
> +static inline bool vma_wants_manual_pte_write_upgrade(struct mm_area *vma)
>  {
>  	/*
>  	 * We want to check manually if we can change individual PTEs writable
> @@ -320,7 +320,7 @@ static inline pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags
>  }
>  #endif
>
> -static inline struct vm_area_struct *vma_prev_limit(struct vma_iterator *vmi,
> +static inline struct mm_area *vma_prev_limit(struct vma_iterator *vmi,
>  						    unsigned long min)
>  {
>  	return mas_prev(&vmi->mas, min);
> @@ -370,13 +370,13 @@ static inline void vma_iter_reset(struct vma_iterator *vmi)
>  }
>
>  static inline
> -struct vm_area_struct *vma_iter_prev_range_limit(struct vma_iterator *vmi, unsigned long min)
> +struct mm_area *vma_iter_prev_range_limit(struct vma_iterator *vmi, unsigned long min)
>  {
>  	return mas_prev_range(&vmi->mas, min);
>  }
>
>  static inline
> -struct vm_area_struct *vma_iter_next_range_limit(struct vma_iterator *vmi, unsigned long max)
> +struct mm_area *vma_iter_next_range_limit(struct vma_iterator *vmi, unsigned long max)
>  {
>  	return mas_next_range(&vmi->mas, max);
>  }
> @@ -397,7 +397,7 @@ static inline int vma_iter_area_highest(struct vma_iterator *vmi, unsigned long
>   * VMA Iterator functions shared between nommu and mmap
>   */
>  static inline int vma_iter_prealloc(struct vma_iterator *vmi,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	return mas_preallocate(&vmi->mas, vma, GFP_KERNEL);
>  }
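These iterator helpers are thin wrappers around the maple tree state, and
the prealloc step exists so that the eventual store cannot need memory
while locks are held. Typical call-site shape (a sketch, not a verbatim
caller):

	if (vma_iter_prealloc(&vmi, vma))
		return -ENOMEM;
	/* ... take locks, finish setting up the VMA ... */
	vma_iter_store_new(&vmi, vma);	/* cannot fail: nodes preallocated */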
> @@ -407,14 +407,14 @@ static inline void vma_iter_clear(struct vma_iterator *vmi)
>  	mas_store_prealloc(&vmi->mas, NULL);
>  }
>
> -static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
> +static inline struct mm_area *vma_iter_load(struct vma_iterator *vmi)
>  {
>  	return mas_walk(&vmi->mas);
>  }
>
>  /* Store a VMA with preallocated memory */
>  static inline void vma_iter_store_overwrite(struct vma_iterator *vmi,
> -					    struct vm_area_struct *vma)
> +					    struct mm_area *vma)
>  {
>  	vma_assert_attached(vma);
>
> @@ -442,7 +442,7 @@ static inline void vma_iter_store_overwrite(struct vma_iterator *vmi,
>  }
>
>  static inline void vma_iter_store_new(struct vma_iterator *vmi,
> -				      struct vm_area_struct *vma)
> +				      struct mm_area *vma)
>  {
>  	vma_mark_attached(vma);
>  	vma_iter_store_overwrite(vmi, vma);
> @@ -465,7 +465,7 @@ static inline int vma_iter_bulk_alloc(struct vma_iterator *vmi,
>  }
>
>  static inline
> -struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
> +struct mm_area *vma_iter_prev_range(struct vma_iterator *vmi)
>  {
>  	return mas_prev_range(&vmi->mas, 0);
>  }
> @@ -475,11 +475,11 @@ struct vm_area_struct *vma_iter_prev_range(struct vma_iterator *vmi)
>   * if no previous VMA, to index 0.
>   */
>  static inline
> -struct vm_area_struct *vma_iter_next_rewind(struct vma_iterator *vmi,
> -		struct vm_area_struct **pprev)
> +struct mm_area *vma_iter_next_rewind(struct vma_iterator *vmi,
> +		struct mm_area **pprev)
>  {
> -	struct vm_area_struct *next = vma_next(vmi);
> -	struct vm_area_struct *prev = vma_prev(vmi);
> +	struct mm_area *next = vma_next(vmi);
> +	struct mm_area *prev = vma_prev(vmi);
>
>  	/*
>  	 * Consider the case where no previous VMA exists. We advance to the
> @@ -500,7 +500,7 @@ struct vm_area_struct *vma_iter_next_rewind(struct vma_iterator *vmi,
>
>  #ifdef CONFIG_64BIT
>
> -static inline bool vma_is_sealed(struct vm_area_struct *vma)
> +static inline bool vma_is_sealed(struct mm_area *vma)
>  {
>  	return (vma->vm_flags & VM_SEALED);
>  }
> @@ -509,7 +509,7 @@ static inline bool vma_is_sealed(struct vm_area_struct *vma)
>   * check if a vma is sealed for modification.
>   * return true, if modification is allowed.
>   */
> -static inline bool can_modify_vma(struct vm_area_struct *vma)
> +static inline bool can_modify_vma(struct mm_area *vma)
>  {
>  	if (unlikely(vma_is_sealed(vma)))
>  		return false;
> @@ -517,16 +517,16 @@ static inline bool can_modify_vma(struct vm_area_struct *vma)
>  	return true;
>  }
>
> -bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior);
> +bool can_modify_vma_madv(struct mm_area *vma, int behavior);
>
>  #else
>
> -static inline bool can_modify_vma(struct vm_area_struct *vma)
> +static inline bool can_modify_vma(struct mm_area *vma)
>  {
>  	return true;
>  }
>
> -static inline bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
> +static inline bool can_modify_vma_madv(struct mm_area *vma, int behavior)
>  {
>  	return true;
>  }
> @@ -534,10 +534,10 @@ static inline bool can_modify_vma_madv(struct vm_area_struct *vma, int behavior)
>  #endif
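For anyone checking the sealing logic against the new type: call sites gate
modifications in the usual way (sketch of the pattern, not a verbatim
caller):

	if (!can_modify_vma(vma))
		return -EPERM;	/* mapping was mseal()ed */

and on !CONFIG_64BIT the stubs above make the check compile away entirely.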
>
>  #if defined(CONFIG_STACK_GROWSUP)
> -int expand_upwards(struct vm_area_struct *vma, unsigned long address);
> +int expand_upwards(struct mm_area *vma, unsigned long address);
>  #endif
>
> -int expand_downwards(struct vm_area_struct *vma, unsigned long address);
> +int expand_downwards(struct mm_area *vma, unsigned long address);
>
>  int __vm_munmap(unsigned long start, size_t len, bool unlock);
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 3ed720a787ec..c3ad2c82c0f9 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4450,7 +4450,7 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
>   *
>   * Similar to remap_pfn_range() (see mm/memory.c)
>   */
> -int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
> +int remap_vmalloc_range_partial(struct mm_area *vma, unsigned long uaddr,
>  				void *kaddr, unsigned long pgoff,
>  				unsigned long size)
>  {
> @@ -4510,7 +4510,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
>   *
>   * Similar to remap_pfn_range() (see mm/memory.c)
>   */
> -int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
> +int remap_vmalloc_range(struct mm_area *vma, void *addr,
>  						unsigned long pgoff)
>  {
>  	return remap_vmalloc_range_partial(vma, vma->vm_start,
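remap_vmalloc_range() is the canonical way for a driver to hand a
vmalloc'ed buffer to userspace; the prerequisite worth remembering is that
the buffer must come from vmalloc_user() (or otherwise carry VM_USERMAP) or
the remap is refused. Minimal sketch with a hypothetical driver buffer
`buf`:

	/* buf = vmalloc_user(size); done at probe/init time */
	static int my_mmap(struct file *file, struct mm_area *vma)
	{
		return remap_vmalloc_range(vma, buf, 0);
	}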
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b620d74b0f66..9e629fea2e9a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3322,7 +3322,7 @@ static void reset_batch_size(struct lru_gen_mm_walk *walk)
>  static int should_skip_vma(unsigned long start, unsigned long end, struct mm_walk *args)
>  {
>  	struct address_space *mapping;
> -	struct vm_area_struct *vma = args->vma;
> +	struct mm_area *vma = args->vma;
>  	struct lru_gen_mm_walk *walk = args->private;
>
>  	if (!vma_is_accessible(vma))
> @@ -3391,7 +3391,7 @@ static bool get_next_vma(unsigned long mask, unsigned long size, struct mm_walk
>  	return false;
>  }
>
> -static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned long addr,
> +static unsigned long get_pte_pfn(pte_t pte, struct mm_area *vma, unsigned long addr,
>  				 struct pglist_data *pgdat)
>  {
>  	unsigned long pfn = pte_pfn(pte);
> @@ -3416,7 +3416,7 @@ static unsigned long get_pte_pfn(pte_t pte, struct vm_area_struct *vma, unsigned
>  	return pfn;
>  }
>
> -static unsigned long get_pmd_pfn(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr,
> +static unsigned long get_pmd_pfn(pmd_t pmd, struct mm_area *vma, unsigned long addr,
>  				 struct pglist_data *pgdat)
>  {
>  	unsigned long pfn = pmd_pfn(pmd);
> @@ -3569,7 +3569,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
>  	return suitable_to_scan(total, young);
>  }
>
> -static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area_struct *vma,
> +static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct mm_area *vma,
>  				  struct mm_walk *args, unsigned long *bitmap, unsigned long *first)
>  {
>  	int i;
> @@ -3664,7 +3664,7 @@ static void walk_pmd_range(pud_t *pud, unsigned long start, unsigned long end,
>  	pmd_t *pmd;
>  	unsigned long next;
>  	unsigned long addr;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	DECLARE_BITMAP(bitmap, MIN_LRU_BATCH);
>  	unsigned long first = -1;
>  	struct lru_gen_mm_walk *walk = args->private;
> @@ -4193,7 +4193,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
>  	int young = 1;
>  	pte_t *pte = pvmw->pte;
>  	unsigned long addr = pvmw->address;
> -	struct vm_area_struct *vma = pvmw->vma;
> +	struct mm_area *vma = pvmw->vma;
>  	struct folio *folio = pfn_folio(pvmw->pfn);
>  	struct mem_cgroup *memcg = folio_memcg(folio);
>  	struct pglist_data *pgdat = folio_pgdat(folio);
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 323892066def..7d9b9ea0014d 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3467,7 +3467,7 @@ int sock_no_recvmsg(struct socket *sock, struct msghdr *m, size_t len,
>  }
>  EXPORT_SYMBOL(sock_no_recvmsg);
>
> -int sock_no_mmap(struct file *file, struct socket *sock, struct vm_area_struct *vma)
> +int sock_no_mmap(struct file *file, struct socket *sock, struct mm_area *vma)
>  {
>  	/* Mirror missing mmap method error code */
>  	return -ENODEV;
> diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
> index ea8de00f669d..f51b18d0fac2 100644
> --- a/net/ipv4/tcp.c
> +++ b/net/ipv4/tcp.c
> @@ -1801,7 +1801,7 @@ static const struct vm_operations_struct tcp_vm_ops = {
>  };
>
>  int tcp_mmap(struct file *file, struct socket *sock,
> -	     struct vm_area_struct *vma)
> +	     struct mm_area *vma)
>  {
>  	if (vma->vm_flags & (VM_WRITE | VM_EXEC))
>  		return -EPERM;
> @@ -1997,7 +1997,7 @@ static int tcp_zc_handle_leftover(struct tcp_zerocopy_receive *zc,
>  	return zc->copybuf_len < 0 ? 0 : copylen;
>  }
>
> -static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
> +static int tcp_zerocopy_vm_insert_batch_error(struct mm_area *vma,
>  					      struct page **pending_pages,
>  					      unsigned long pages_remaining,
>  					      unsigned long *address,
> @@ -2045,7 +2045,7 @@ static int tcp_zerocopy_vm_insert_batch_error(struct vm_area_struct *vma,
>  	return err;
>  }
>
> -static int tcp_zerocopy_vm_insert_batch(struct vm_area_struct *vma,
> +static int tcp_zerocopy_vm_insert_batch(struct mm_area *vma,
>  					struct page **pages,
>  					unsigned int pages_to_map,
>  					unsigned long *address,
> @@ -2104,11 +2104,11 @@ static void tcp_zc_finalize_rx_tstamp(struct sock *sk,
>  	}
>  }
>
> -static struct vm_area_struct *find_tcp_vma(struct mm_struct *mm,
> +static struct mm_area *find_tcp_vma(struct mm_struct *mm,
>  					   unsigned long address,
>  					   bool *mmap_locked)
>  {
> -	struct vm_area_struct *vma = lock_vma_under_rcu(mm, address);
> +	struct mm_area *vma = lock_vma_under_rcu(mm, address);
>
>  	if (vma) {
>  		if (vma->vm_ops != &tcp_vm_ops) {
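For readers not following the per-VMA lock work: find_tcp_vma() is the
now-standard lockless-first lookup, i.e. try lock_vma_under_rcu() and fall
back to the mmap lock only if that fails. Roughly (assuming the fallback
here mirrors other users of the API):

	vma = lock_vma_under_rcu(mm, address);	/* per-VMA read lock */
	if (!vma) {
		mmap_read_lock(mm);
		vma = vma_lookup(mm, address);
		*mmap_locked = true;
	}

with vma_end_read() or mmap_read_unlock() on the corresponding release
paths.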
> @@ -2141,7 +2141,7 @@ static int tcp_zerocopy_receive(struct sock *sk,
>  	struct tcp_sock *tp = tcp_sk(sk);
>  	const skb_frag_t *frags = NULL;
>  	unsigned int pages_to_map = 0;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	struct sk_buff *skb = NULL;
>  	u32 seq = tp->copied_seq;
>  	u32 total_bytes_to_map;
> diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
> index 3e9ddf72cd03..c1ac0ed67f71 100644
> --- a/net/packet/af_packet.c
> +++ b/net/packet/af_packet.c
> @@ -4358,7 +4358,7 @@ static __poll_t packet_poll(struct file *file, struct socket *sock,
>   * for user mmaps.
>   */
>
> -static void packet_mm_open(struct vm_area_struct *vma)
> +static void packet_mm_open(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct socket *sock = file->private_data;
> @@ -4368,7 +4368,7 @@ static void packet_mm_open(struct vm_area_struct *vma)
>  		atomic_long_inc(&pkt_sk(sk)->mapped);
>  }
>
> -static void packet_mm_close(struct vm_area_struct *vma)
> +static void packet_mm_close(struct mm_area *vma)
>  {
>  	struct file *file = vma->vm_file;
>  	struct socket *sock = file->private_data;
> @@ -4619,7 +4619,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
>  }
>
>  static int packet_mmap(struct file *file, struct socket *sock,
> -		struct vm_area_struct *vma)
> +		struct mm_area *vma)
>  {
>  	struct sock *sk = sock->sk;
>  	struct packet_sock *po = pkt_sk(sk);
> diff --git a/net/socket.c b/net/socket.c
> index 9a0e720f0859..796d8811c0cc 100644
> --- a/net/socket.c
> +++ b/net/socket.c
> @@ -119,7 +119,7 @@ unsigned int sysctl_net_busy_poll __read_mostly;
>
>  static ssize_t sock_read_iter(struct kiocb *iocb, struct iov_iter *to);
>  static ssize_t sock_write_iter(struct kiocb *iocb, struct iov_iter *from);
> -static int sock_mmap(struct file *file, struct vm_area_struct *vma);
> +static int sock_mmap(struct file *file, struct mm_area *vma);
>
>  static int sock_close(struct inode *inode, struct file *file);
>  static __poll_t sock_poll(struct file *file,
> @@ -1379,7 +1379,7 @@ static __poll_t sock_poll(struct file *file, poll_table *wait)
>  	return ops->poll(file, sock, wait) | flag;
>  }
>
> -static int sock_mmap(struct file *file, struct vm_area_struct *vma)
> +static int sock_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct socket *sock = file->private_data;
>
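All of the net/ conversions in this patch funnel through sock_mmap(),
which, from memory, just forwards to the protocol:

	return sock->ops->mmap(file, sock, vma);

so the tcp/packet/xsk handlers in this patch are the implementations
actually seeing the renamed type.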
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 5696af45bcf7..13d7febb2286 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -1595,7 +1595,7 @@ static int xsk_getsockopt(struct socket *sock, int level, int optname,
>  }
>
>  static int xsk_mmap(struct file *file, struct socket *sock,
> -		    struct vm_area_struct *vma)
> +		    struct mm_area *vma)
>  {
>  	loff_t offset = (loff_t)vma->vm_pgoff << PAGE_SHIFT;
>  	unsigned long size = vma->vm_end - vma->vm_start;
> diff --git a/samples/ftrace/ftrace-direct-too.c b/samples/ftrace/ftrace-direct-too.c
> index 3d0fa260332d..6c77296cf30b 100644
> --- a/samples/ftrace/ftrace-direct-too.c
> +++ b/samples/ftrace/ftrace-direct-too.c
> @@ -7,10 +7,10 @@
>  #include <asm/asm-offsets.h>
>  #endif
>
> -extern void my_direct_func(struct vm_area_struct *vma, unsigned long address,
> +extern void my_direct_func(struct mm_area *vma, unsigned long address,
>  			   unsigned int flags, struct pt_regs *regs);
>
> -void my_direct_func(struct vm_area_struct *vma, unsigned long address,
> +void my_direct_func(struct mm_area *vma, unsigned long address,
>  		    unsigned int flags, struct pt_regs *regs)
>  {
>  	trace_printk("handle mm fault vma=%p address=%lx flags=%x regs=%p\n",
> diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
> index 18623ba666e3..4b6121f12c27 100644
> --- a/samples/vfio-mdev/mbochs.c
> +++ b/samples/vfio-mdev/mbochs.c
> @@ -777,7 +777,7 @@ static void mbochs_put_pages(struct mdev_state *mdev_state)
>
>  static vm_fault_t mbochs_region_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mdev_state *mdev_state = vma->vm_private_data;
>  	pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
>
> @@ -795,7 +795,7 @@ static const struct vm_operations_struct mbochs_region_vm_ops = {
>  	.fault = mbochs_region_vm_fault,
>  };
>
> -static int mbochs_mmap(struct vfio_device *vdev, struct vm_area_struct *vma)
> +static int mbochs_mmap(struct vfio_device *vdev, struct mm_area *vma)
>  {
>  	struct mdev_state *mdev_state =
>  		container_of(vdev, struct mdev_state, vdev);
> @@ -816,7 +816,7 @@ static int mbochs_mmap(struct vfio_device *vdev, struct vm_area_struct *vma)
>
>  static vm_fault_t mbochs_dmabuf_vm_fault(struct vm_fault *vmf)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> +	struct mm_area *vma = vmf->vma;
>  	struct mbochs_dmabuf *dmabuf = vma->vm_private_data;
>
>  	if (WARN_ON(vmf->pgoff >= dmabuf->pagecount))
> @@ -831,7 +831,7 @@ static const struct vm_operations_struct mbochs_dmabuf_vm_ops = {
>  	.fault = mbochs_dmabuf_vm_fault,
>  };
>
> -static int mbochs_mmap_dmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
> +static int mbochs_mmap_dmabuf(struct dma_buf *buf, struct mm_area *vma)
>  {
>  	struct mbochs_dmabuf *dmabuf = buf->priv;
>  	struct device *dev = mdev_dev(dmabuf->mdev_state->mdev);
> diff --git a/samples/vfio-mdev/mdpy.c b/samples/vfio-mdev/mdpy.c
> index 8104831ae125..8f939e826acf 100644
> --- a/samples/vfio-mdev/mdpy.c
> +++ b/samples/vfio-mdev/mdpy.c
> @@ -418,7 +418,7 @@ static ssize_t mdpy_write(struct vfio_device *vdev, const char __user *buf,
>  	return -EFAULT;
>  }
>
> -static int mdpy_mmap(struct vfio_device *vdev, struct vm_area_struct *vma)
> +static int mdpy_mmap(struct vfio_device *vdev, struct mm_area *vma)
>  {
>  	struct mdev_state *mdev_state =
>  		container_of(vdev, struct mdev_state, vdev);
> diff --git a/scripts/coccinelle/api/vma_pages.cocci b/scripts/coccinelle/api/vma_pages.cocci
> index 10511b9bf35e..96c7790dff71 100644
> --- a/scripts/coccinelle/api/vma_pages.cocci
> +++ b/scripts/coccinelle/api/vma_pages.cocci
> @@ -16,7 +16,7 @@ virtual report
>  //----------------------------------------------------------
>
>  @r_context depends on context && !patch && !org && !report@
> -struct vm_area_struct *vma;
> +struct mm_area *vma;
>  @@
>
>  * (vma->vm_end - vma->vm_start) >> PAGE_SHIFT
> @@ -26,7 +26,7 @@ struct vm_area_struct *vma;
>  //----------------------------------------------------------
>
>  @r_patch depends on !context && patch && !org && !report@
> -struct vm_area_struct *vma;
> +struct mm_area *vma;
>  @@
>
>  - ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT)
> @@ -37,7 +37,7 @@ struct vm_area_struct *vma;
>  //----------------------------------------------------------
>
>  @r_org depends on !context && !patch && (org || report)@
> -struct vm_area_struct *vma;
> +struct mm_area *vma;
>  position p;
>  @@
>
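Tangent, since this file had to be touched anyway: the mechanical part of
the rename is itself expressible as a one-rule semantic patch, which might
be an easier way to regenerate (and review) the tree-wide churn. Untested
sketch:

	@@
	@@
	- struct vm_area_struct
	+ struct mm_area

run with something like "spatch --sp-file rename.cocci --in-place --dir .",
plus a follow-up pass for comments and string literals, which spatch will
not rewrite.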
> diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
> index 7952e8cab353..cadd2fdbf01d 100644
> --- a/security/apparmor/lsm.c
> +++ b/security/apparmor/lsm.c
> @@ -585,7 +585,7 @@ static int apparmor_mmap_file(struct file *file, unsigned long reqprot,
>  	return common_mmap(OP_FMMAP, file, prot, flags, GFP_ATOMIC);
>  }
>
> -static int apparmor_file_mprotect(struct vm_area_struct *vma,
> +static int apparmor_file_mprotect(struct mm_area *vma,
>  				  unsigned long reqprot, unsigned long prot)
>  {
>  	return common_mmap(OP_FMPROT, vma->vm_file, prot,
> diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
> index f3e7ac513db3..a6e25bb8dc0b 100644
> --- a/security/integrity/ima/ima_main.c
> +++ b/security/integrity/ima/ima_main.c
> @@ -478,7 +478,7 @@ static int ima_file_mmap(struct file *file, unsigned long reqprot,
>
>  /**
>   * ima_file_mprotect - based on policy, limit mprotect change
> - * @vma: vm_area_struct protection is set to
> + * @vma: mm_area protection is set to
>   * @reqprot: protection requested by the application
>   * @prot: protection that will be applied by the kernel
>   *
> @@ -490,7 +490,7 @@ static int ima_file_mmap(struct file *file, unsigned long reqprot,
>   *
>   * On mprotect change success, return 0.  On failure, return -EACESS.
>   */
> -static int ima_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
> +static int ima_file_mprotect(struct mm_area *vma, unsigned long reqprot,
>  			     unsigned long prot)
>  {
>  	struct ima_template_desc *template = NULL;
> diff --git a/security/ipe/hooks.c b/security/ipe/hooks.c
> index d0323b81cd8f..5882e26563be 100644
> --- a/security/ipe/hooks.c
> +++ b/security/ipe/hooks.c
> @@ -77,7 +77,7 @@ int ipe_mmap_file(struct file *f, unsigned long reqprot __always_unused,
>   * * %0		- Success
>   * * %-EACCES	- Did not pass IPE policy
>   */
> -int ipe_file_mprotect(struct vm_area_struct *vma,
> +int ipe_file_mprotect(struct mm_area *vma,
>  		      unsigned long reqprot __always_unused,
>  		      unsigned long prot)
>  {
> diff --git a/security/ipe/hooks.h b/security/ipe/hooks.h
> index 38d4a387d039..3b4b2f502809 100644
> --- a/security/ipe/hooks.h
> +++ b/security/ipe/hooks.h
> @@ -27,7 +27,7 @@ int ipe_bprm_check_security(struct linux_binprm *bprm);
>  int ipe_mmap_file(struct file *f, unsigned long reqprot, unsigned long prot,
>  		  unsigned long flags);
>
> -int ipe_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
> +int ipe_file_mprotect(struct mm_area *vma, unsigned long reqprot,
>  		      unsigned long prot);
>
>  int ipe_kernel_read_file(struct file *file, enum kernel_read_file_id id,
> diff --git a/security/security.c b/security/security.c
> index fb57e8fddd91..1026b02ee7cf 100644
> --- a/security/security.c
> +++ b/security/security.c
> @@ -3006,7 +3006,7 @@ int security_mmap_addr(unsigned long addr)
>   *
>   * Return: Returns 0 if permission is granted.
>   */
> -int security_file_mprotect(struct vm_area_struct *vma, unsigned long reqprot,
> +int security_file_mprotect(struct mm_area *vma, unsigned long reqprot,
>  			   unsigned long prot)
>  {
>  	return call_int_hook(file_mprotect, vma, reqprot, prot);
> diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
> index e7a7dcab81db..28b458a22af8 100644
> --- a/security/selinux/hooks.c
> +++ b/security/selinux/hooks.c
> @@ -3848,7 +3848,7 @@ static int selinux_mmap_file(struct file *file,
>  				   (flags & MAP_TYPE) == MAP_SHARED);
>  }
>
> -static int selinux_file_mprotect(struct vm_area_struct *vma,
> +static int selinux_file_mprotect(struct mm_area *vma,
>  				 unsigned long reqprot __always_unused,
>  				 unsigned long prot)
>  {
> diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
> index 47480eb2189b..84ed683ce903 100644
> --- a/security/selinux/selinuxfs.c
> +++ b/security/selinux/selinuxfs.c
> @@ -240,7 +240,7 @@ static ssize_t sel_read_handle_status(struct file *filp, char __user *buf,
>  }
>
>  static int sel_mmap_handle_status(struct file *filp,
> -				  struct vm_area_struct *vma)
> +				  struct mm_area *vma)
>  {
>  	struct page    *status = filp->private_data;
>  	unsigned long	size = vma->vm_end - vma->vm_start;
> @@ -465,7 +465,7 @@ static const struct vm_operations_struct sel_mmap_policy_ops = {
>  	.page_mkwrite = sel_mmap_policy_fault,
>  };
>
> -static int sel_mmap_policy(struct file *filp, struct vm_area_struct *vma)
> +static int sel_mmap_policy(struct file *filp, struct mm_area *vma)
>  {
>  	if (vma->vm_flags & VM_SHARED) {
>  		/* do not allow mprotect to make mapping writable */
> diff --git a/sound/core/compress_offload.c b/sound/core/compress_offload.c
> index 840bb9cfe789..84e86bd99ead 100644
> --- a/sound/core/compress_offload.c
> +++ b/sound/core/compress_offload.c
> @@ -389,7 +389,7 @@ static ssize_t snd_compr_read(struct file *f, char __user *buf,
>  	return retval;
>  }
>
> -static int snd_compr_mmap(struct file *f, struct vm_area_struct *vma)
> +static int snd_compr_mmap(struct file *f, struct mm_area *vma)
>  {
>  	return -ENXIO;
>  }
> diff --git a/sound/core/hwdep.c b/sound/core/hwdep.c
> index 09200df2932c..ac5cf0c98ec4 100644
> --- a/sound/core/hwdep.c
> +++ b/sound/core/hwdep.c
> @@ -253,7 +253,7 @@ static long snd_hwdep_ioctl(struct file * file, unsigned int cmd,
>  	return -ENOTTY;
>  }
>
> -static int snd_hwdep_mmap(struct file * file, struct vm_area_struct * vma)
> +static int snd_hwdep_mmap(struct file * file, struct mm_area * vma)
>  {
>  	struct snd_hwdep *hw = file->private_data;
>  	if (hw->ops.mmap)
> diff --git a/sound/core/info.c b/sound/core/info.c
> index 1f5b8a3d9e3b..2d80eb13ab7e 100644
> --- a/sound/core/info.c
> +++ b/sound/core/info.c
> @@ -211,7 +211,7 @@ static long snd_info_entry_ioctl(struct file *file, unsigned int cmd,
>  				   file, cmd, arg);
>  }
>
> -static int snd_info_entry_mmap(struct file *file, struct vm_area_struct *vma)
> +static int snd_info_entry_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct inode *inode = file_inode(file);
>  	struct snd_info_private_data *data;
> diff --git a/sound/core/init.c b/sound/core/init.c
> index 114fb87de990..6c357c892dc4 100644
> --- a/sound/core/init.c
> +++ b/sound/core/init.c
> @@ -451,7 +451,7 @@ static long snd_disconnect_ioctl(struct file *file,
>  	return -ENODEV;
>  }
>
> -static int snd_disconnect_mmap(struct file *file, struct vm_area_struct *vma)
> +static int snd_disconnect_mmap(struct file *file, struct mm_area *vma)
>  {
>  	return -ENODEV;
>  }
> diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
> index b3853583d2ae..2c5f64a1c8fe 100644
> --- a/sound/core/memalloc.c
> +++ b/sound/core/memalloc.c
> @@ -25,7 +25,7 @@ struct snd_malloc_ops {
>  	struct page *(*get_page)(struct snd_dma_buffer *dmab, size_t offset);
>  	unsigned int (*get_chunk_size)(struct snd_dma_buffer *dmab,
>  				       unsigned int ofs, unsigned int size);
> -	int (*mmap)(struct snd_dma_buffer *dmab, struct vm_area_struct *area);
> +	int (*mmap)(struct snd_dma_buffer *dmab, struct mm_area *area);
>  	void (*sync)(struct snd_dma_buffer *dmab, enum snd_dma_sync_mode mode);
>  };
>
> @@ -189,7 +189,7 @@ EXPORT_SYMBOL_GPL(snd_devm_alloc_dir_pages);
>   * Return: zero if successful, or a negative error code
>   */
>  int snd_dma_buffer_mmap(struct snd_dma_buffer *dmab,
> -			struct vm_area_struct *area)
> +			struct mm_area *area)
>  {
>  	const struct snd_malloc_ops *ops;
>
> @@ -334,7 +334,7 @@ static void snd_dma_continuous_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_continuous_mmap(struct snd_dma_buffer *dmab,
> -				   struct vm_area_struct *area)
> +				   struct mm_area *area)
>  {
>  	return remap_pfn_range(area, area->vm_start,
>  			       dmab->addr >> PAGE_SHIFT,
> @@ -362,7 +362,7 @@ static void snd_dma_vmalloc_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_vmalloc_mmap(struct snd_dma_buffer *dmab,
> -				struct vm_area_struct *area)
> +				struct mm_area *area)
>  {
>  	return remap_vmalloc_range(area, dmab->area, 0);
>  }
> @@ -451,7 +451,7 @@ static void snd_dma_iram_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_iram_mmap(struct snd_dma_buffer *dmab,
> -			     struct vm_area_struct *area)
> +			     struct mm_area *area)
>  {
>  	area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
>  	return remap_pfn_range(area, area->vm_start,
> @@ -481,7 +481,7 @@ static void snd_dma_dev_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_dev_mmap(struct snd_dma_buffer *dmab,
> -			    struct vm_area_struct *area)
> +			    struct mm_area *area)
>  {
>  	return dma_mmap_coherent(dmab->dev.dev, area,
>  				 dmab->area, dmab->addr, dmab->bytes);
> @@ -520,7 +520,7 @@ static void snd_dma_wc_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_wc_mmap(struct snd_dma_buffer *dmab,
> -			   struct vm_area_struct *area)
> +			   struct mm_area *area)
>  {
>  	area->vm_page_prot = pgprot_writecombine(area->vm_page_prot);
>  	return dma_mmap_coherent(dmab->dev.dev, area,
> @@ -538,7 +538,7 @@ static void snd_dma_wc_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_wc_mmap(struct snd_dma_buffer *dmab,
> -			   struct vm_area_struct *area)
> +			   struct mm_area *area)
>  {
>  	return dma_mmap_wc(dmab->dev.dev, area,
>  			   dmab->area, dmab->addr, dmab->bytes);
> @@ -585,7 +585,7 @@ static void snd_dma_noncontig_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_noncontig_mmap(struct snd_dma_buffer *dmab,
> -				  struct vm_area_struct *area)
> +				  struct mm_area *area)
>  {
>  	return dma_mmap_noncontiguous(dmab->dev.dev, area,
>  				      dmab->bytes, dmab->private_data);
> @@ -789,7 +789,7 @@ static void snd_dma_sg_fallback_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_sg_fallback_mmap(struct snd_dma_buffer *dmab,
> -				    struct vm_area_struct *area)
> +				    struct mm_area *area)
>  {
>  	struct snd_dma_sg_fallback *sgbuf = dmab->private_data;
>
> @@ -849,7 +849,7 @@ static void snd_dma_noncoherent_free(struct snd_dma_buffer *dmab)
>  }
>
>  static int snd_dma_noncoherent_mmap(struct snd_dma_buffer *dmab,
> -				    struct vm_area_struct *area)
> +				    struct mm_area *area)
>  {
>  	area->vm_page_prot = vm_get_page_prot(area->vm_flags);
>  	return dma_mmap_pages(dmab->dev.dev, area,
> diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
> index 4683b9139c56..884e96ea9cca 100644
> --- a/sound/core/oss/pcm_oss.c
> +++ b/sound/core/oss/pcm_oss.c
> @@ -2867,7 +2867,7 @@ static __poll_t snd_pcm_oss_poll(struct file *file, poll_table * wait)
>  	return mask;
>  }
>
> -static int snd_pcm_oss_mmap(struct file *file, struct vm_area_struct *area)
> +static int snd_pcm_oss_mmap(struct file *file, struct mm_area *area)
>  {
>  	struct snd_pcm_oss_file *pcm_oss_file;
>  	struct snd_pcm_substream *substream = NULL;
> diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
> index 6c2b6a62d9d2..415c3dec027f 100644
> --- a/sound/core/pcm_native.c
> +++ b/sound/core/pcm_native.c
> @@ -3668,7 +3668,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_status =
>  };
>
>  static int snd_pcm_mmap_status(struct snd_pcm_substream *substream, struct file *file,
> -			       struct vm_area_struct *area)
> +			       struct mm_area *area)
>  {
>  	long size;
>  	if (!(area->vm_flags & VM_READ))
> @@ -3706,7 +3706,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_control =
>  };
>
>  static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file *file,
> -				struct vm_area_struct *area)
> +				struct mm_area *area)
>  {
>  	long size;
>  	if (!(area->vm_flags & VM_READ))
> @@ -3762,12 +3762,12 @@ static bool pcm_control_mmap_allowed(struct snd_pcm_file *pcm_file)
>  #define pcm_control_mmap_allowed(pcm_file)	false
>
>  static int snd_pcm_mmap_status(struct snd_pcm_substream *substream, struct file *file,
> -			       struct vm_area_struct *area)
> +			       struct mm_area *area)
>  {
>  	return -ENXIO;
>  }
>  static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file *file,
> -				struct vm_area_struct *area)
> +				struct mm_area *area)
>  {
>  	return -ENXIO;
>  }
> @@ -3776,7 +3776,7 @@ static int snd_pcm_mmap_control(struct snd_pcm_substream *substream, struct file
>  /*
>   * snd_pcm_mmap_data_open - increase the mmap counter
>   */
> -static void snd_pcm_mmap_data_open(struct vm_area_struct *area)
> +static void snd_pcm_mmap_data_open(struct mm_area *area)
>  {
>  	struct snd_pcm_substream *substream = area->vm_private_data;
>
> @@ -3786,7 +3786,7 @@ static void snd_pcm_mmap_data_open(struct vm_area_struct *area)
>  /*
>   * snd_pcm_mmap_data_close - decrease the mmap counter
>   */
> -static void snd_pcm_mmap_data_close(struct vm_area_struct *area)
> +static void snd_pcm_mmap_data_close(struct mm_area *area)
>  {
>  	struct snd_pcm_substream *substream = area->vm_private_data;
>
> @@ -3852,7 +3852,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
>   * Return: zero if successful, or a negative error code
>   */
>  int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
> -			     struct vm_area_struct *area)
> +			     struct mm_area *area)
>  {
>  	vm_flags_set(area, VM_DONTEXPAND | VM_DONTDUMP);
>  	if (!substream->ops->page &&
> @@ -3880,7 +3880,7 @@ EXPORT_SYMBOL_GPL(snd_pcm_lib_default_mmap);
>   * Return: zero if successful, or a negative error code
>   */
>  int snd_pcm_lib_mmap_iomem(struct snd_pcm_substream *substream,
> -			   struct vm_area_struct *area)
> +			   struct mm_area *area)
>  {
>  	struct snd_pcm_runtime *runtime = substream->runtime;
>
> @@ -3894,7 +3894,7 @@ EXPORT_SYMBOL(snd_pcm_lib_mmap_iomem);
>   * mmap DMA buffer
>   */
>  int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file,
> -		      struct vm_area_struct *area)
> +		      struct mm_area *area)
>  {
>  	struct snd_pcm_runtime *runtime;
>  	long size;
> @@ -3937,7 +3937,7 @@ int snd_pcm_mmap_data(struct snd_pcm_substream *substream, struct file *file,
>  }
>  EXPORT_SYMBOL(snd_pcm_mmap_data);
>
> -static int snd_pcm_mmap(struct file *file, struct vm_area_struct *area)
> +static int snd_pcm_mmap(struct file *file, struct mm_area *area)
>  {
>  	struct snd_pcm_file * pcm_file;
>  	struct snd_pcm_substream *substream;
> diff --git a/sound/soc/fsl/fsl_asrc_m2m.c b/sound/soc/fsl/fsl_asrc_m2m.c
> index f46881f71e43..32356e92f2ae 100644
> --- a/sound/soc/fsl/fsl_asrc_m2m.c
> +++ b/sound/soc/fsl/fsl_asrc_m2m.c
> @@ -401,7 +401,7 @@ static int fsl_asrc_m2m_comp_set_params(struct snd_compr_stream *stream,
>  	return 0;
>  }
>
> -static int fsl_asrc_m2m_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +static int fsl_asrc_m2m_mmap(struct dma_buf *dmabuf, struct mm_area *vma)
>  {
>  	struct snd_dma_buffer *dmab = dmabuf->priv;
>
> diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
> index dac463390da1..d595f2ef22a8 100644
> --- a/sound/soc/intel/avs/pcm.c
> +++ b/sound/soc/intel/avs/pcm.c
> @@ -1240,7 +1240,7 @@ avs_component_pointer(struct snd_soc_component *component, struct snd_pcm_substr
>
>  static int avs_component_mmap(struct snd_soc_component *component,
>  			      struct snd_pcm_substream *substream,
> -			      struct vm_area_struct *vma)
> +			      struct mm_area *vma)
>  {
>  	return snd_pcm_lib_default_mmap(substream, vma);
>  }
> diff --git a/sound/soc/loongson/loongson_dma.c b/sound/soc/loongson/loongson_dma.c
> index 20e4a0641340..2e05bc1683bd 100644
> --- a/sound/soc/loongson/loongson_dma.c
> +++ b/sound/soc/loongson/loongson_dma.c
> @@ -295,7 +295,7 @@ static int loongson_pcm_close(struct snd_soc_component *component,
>
>  static int loongson_pcm_mmap(struct snd_soc_component *component,
>  			     struct snd_pcm_substream *substream,
> -			     struct vm_area_struct *vma)
> +			     struct mm_area *vma)
>  {
>  	return remap_pfn_range(vma, vma->vm_start,
>  			substream->dma_buffer.addr >> PAGE_SHIFT,
> diff --git a/sound/soc/pxa/mmp-sspa.c b/sound/soc/pxa/mmp-sspa.c
> index 73f36c9dd35c..bbb0f3a15c39 100644
> --- a/sound/soc/pxa/mmp-sspa.c
> +++ b/sound/soc/pxa/mmp-sspa.c
> @@ -402,7 +402,7 @@ static const struct snd_dmaengine_pcm_config mmp_pcm_config = {
>
>  static int mmp_pcm_mmap(struct snd_soc_component *component,
>  			struct snd_pcm_substream *substream,
> -			struct vm_area_struct *vma)
> +			struct mm_area *vma)
>  {
>  	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> diff --git a/sound/soc/qcom/lpass-platform.c b/sound/soc/qcom/lpass-platform.c
> index 9946f12254b3..bf8cd80fcf5a 100644
> --- a/sound/soc/qcom/lpass-platform.c
> +++ b/sound/soc/qcom/lpass-platform.c
> @@ -894,7 +894,7 @@ static snd_pcm_uframes_t lpass_platform_pcmops_pointer(
>  }
>
>  static int lpass_platform_cdc_dma_mmap(struct snd_pcm_substream *substream,
> -				       struct vm_area_struct *vma)
> +				       struct mm_area *vma)
>  {
>  	struct snd_pcm_runtime *runtime = substream->runtime;
>  	unsigned long size, offset;
> @@ -910,7 +910,7 @@ static int lpass_platform_cdc_dma_mmap(struct snd_pcm_substream *substream,
>
>  static int lpass_platform_pcmops_mmap(struct snd_soc_component *component,
>  				      struct snd_pcm_substream *substream,
> -				      struct vm_area_struct *vma)
> +				      struct mm_area *vma)
>  {
>  	struct snd_soc_pcm_runtime *soc_runtime = snd_soc_substream_to_rtd(substream);
>  	struct snd_soc_dai *cpu_dai = snd_soc_rtd_to_cpu(soc_runtime, 0);
> diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
> index 2cd522108221..6a9ef02b5ab6 100644
> --- a/sound/soc/qcom/qdsp6/q6apm-dai.c
> +++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
> @@ -739,7 +739,7 @@ static int q6apm_dai_compr_set_metadata(struct snd_soc_component *component,
>
>  static int q6apm_dai_compr_mmap(struct snd_soc_component *component,
>  				struct snd_compr_stream *stream,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	struct snd_compr_runtime *runtime = stream->runtime;
>  	struct q6apm_dai_rtd *prtd = runtime->private_data;
> diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
> index a400c9a31fea..7d382c459845 100644
> --- a/sound/soc/qcom/qdsp6/q6asm-dai.c
> +++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
> @@ -1114,7 +1114,7 @@ static int q6asm_compr_copy(struct snd_soc_component *component,
>
>  static int q6asm_dai_compr_mmap(struct snd_soc_component *component,
>  				struct snd_compr_stream *stream,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	struct snd_compr_runtime *runtime = stream->runtime;
>  	struct q6asm_dai_rtd *prtd = runtime->private_data;
> diff --git a/sound/soc/samsung/idma.c b/sound/soc/samsung/idma.c
> index 402ccadad46c..618cc682b223 100644
> --- a/sound/soc/samsung/idma.c
> +++ b/sound/soc/samsung/idma.c
> @@ -240,7 +240,7 @@ idma_pointer(struct snd_soc_component *component,
>
>  static int idma_mmap(struct snd_soc_component *component,
>  		     struct snd_pcm_substream *substream,
> -	struct vm_area_struct *vma)
> +	struct mm_area *vma)
>  {
>  	struct snd_pcm_runtime *runtime = substream->runtime;
>  	unsigned long size, offset;
> diff --git a/sound/soc/soc-component.c b/sound/soc/soc-component.c
> index 25f5e543ae8d..019eabf1f618 100644
> --- a/sound/soc/soc-component.c
> +++ b/sound/soc/soc-component.c
> @@ -1095,7 +1095,7 @@ struct page *snd_soc_pcm_component_page(struct snd_pcm_substream *substream,
>  }
>
>  int snd_soc_pcm_component_mmap(struct snd_pcm_substream *substream,
> -			       struct vm_area_struct *vma)
> +			       struct mm_area *vma)
>  {
>  	struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
>  	struct snd_soc_component *component;
> diff --git a/sound/soc/uniphier/aio-dma.c b/sound/soc/uniphier/aio-dma.c
> index 265d61723e99..e930c48f3ac2 100644
> --- a/sound/soc/uniphier/aio-dma.c
> +++ b/sound/soc/uniphier/aio-dma.c
> @@ -193,7 +193,7 @@ static snd_pcm_uframes_t uniphier_aiodma_pointer(
>
>  static int uniphier_aiodma_mmap(struct snd_soc_component *component,
>  				struct snd_pcm_substream *substream,
> -				struct vm_area_struct *vma)
> +				struct mm_area *vma)
>  {
>  	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
>
> diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
> index 6bcf8b859ebb..818228d5e3a2 100644
> --- a/sound/usb/usx2y/us122l.c
> +++ b/sound/usb/usx2y/us122l.c
> @@ -173,7 +173,7 @@ static int usb_stream_hwdep_release(struct snd_hwdep *hw, struct file *file)
>  }
>
>  static int usb_stream_hwdep_mmap(struct snd_hwdep *hw,
> -				 struct file *filp, struct vm_area_struct *area)
> +				 struct file *filp, struct mm_area *area)
>  {
>  	unsigned long	size = area->vm_end - area->vm_start;
>  	struct us122l	*us122l = hw->private_data;
> diff --git a/sound/usb/usx2y/usX2Yhwdep.c b/sound/usb/usx2y/usX2Yhwdep.c
> index 9fd6a86cc08e..f53ab11ba825 100644
> --- a/sound/usb/usx2y/usX2Yhwdep.c
> +++ b/sound/usb/usx2y/usX2Yhwdep.c
> @@ -37,7 +37,7 @@ static const struct vm_operations_struct us428ctls_vm_ops = {
>  	.fault = snd_us428ctls_vm_fault,
>  };
>
> -static int snd_us428ctls_mmap(struct snd_hwdep *hw, struct file *filp, struct vm_area_struct *area)
> +static int snd_us428ctls_mmap(struct snd_hwdep *hw, struct file *filp, struct mm_area *area)
>  {
>  	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
>  	struct usx2ydev	*us428 = hw->private_data;
> diff --git a/sound/usb/usx2y/usx2yhwdeppcm.c b/sound/usb/usx2y/usx2yhwdeppcm.c
> index 1b1496adb47e..acf7d36dc4e9 100644
> --- a/sound/usb/usx2y/usx2yhwdeppcm.c
> +++ b/sound/usb/usx2y/usx2yhwdeppcm.c
> @@ -667,11 +667,11 @@ static int snd_usx2y_hwdep_pcm_release(struct snd_hwdep *hw, struct file *file)
>  	return err;
>  }
>
> -static void snd_usx2y_hwdep_pcm_vm_open(struct vm_area_struct *area)
> +static void snd_usx2y_hwdep_pcm_vm_open(struct mm_area *area)
>  {
>  }
>
> -static void snd_usx2y_hwdep_pcm_vm_close(struct vm_area_struct *area)
> +static void snd_usx2y_hwdep_pcm_vm_close(struct mm_area *area)
>  {
>  }
>
> @@ -693,7 +693,7 @@ static const struct vm_operations_struct snd_usx2y_hwdep_pcm_vm_ops = {
>  	.fault = snd_usx2y_hwdep_pcm_vm_fault,
>  };
>
> -static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep *hw, struct file *filp, struct vm_area_struct *area)
> +static int snd_usx2y_hwdep_pcm_mmap(struct snd_hwdep *hw, struct file *filp, struct mm_area *area)
>  {
>  	unsigned long	size = (unsigned long)(area->vm_end - area->vm_start);
>  	struct usx2ydev	*usx2y = hw->private_data;
> diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h
> index 72ea363d434d..3c3285b1bb05 100644
> --- a/tools/include/linux/btf_ids.h
> +++ b/tools/include/linux/btf_ids.h
> @@ -205,7 +205,7 @@ extern u32 btf_sock_ids[];
>  #define BTF_TRACING_TYPE_xxx	\
>  	BTF_TRACING_TYPE(BTF_TRACING_TYPE_TASK, task_struct)	\
>  	BTF_TRACING_TYPE(BTF_TRACING_TYPE_FILE, file)		\
> -	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, vm_area_struct)
> +	BTF_TRACING_TYPE(BTF_TRACING_TYPE_VMA, mm_area)
>
>  enum {
>  #define BTF_TRACING_TYPE(name, type) name,
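Worth flagging: BTF_TRACING_TYPE() records the *name* of the C type in the
kernel's BTF, so this hunk is where the rename becomes visible to
out-of-tree BPF programs. Anything doing CO-RE relocations against the old
type via vmlinux.h stops resolving and needs the same one-line rename,
which is what the selftest updates below are doing:

	/* out-of-tree program, hypothetical: */
	struct vm_area_struct *vma = ctx->vma;	/* no longer in vmlinux BTF */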
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 28705ae67784..7894f9c2ae9b 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -5368,7 +5368,7 @@ union bpf_attr {
>   *
>   *		The expected callback signature is
>   *
> - *		long (\*callback_fn)(struct task_struct \*task, struct vm_area_struct \*vma, void \*callback_ctx);
> + *		long (\*callback_fn)(struct task_struct \*task, struct mm_area \*vma, void \*callback_ctx);
>   *
>   *	Return
>   *		0 on success.
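For context, this callback is what bpf_find_vma() invokes; from the program
side the call looks roughly like this (sketch, names taken from the
selftest below):

	struct callback_ctx data = {};

	bpf_find_vma(task, addr, check_vma, &data, 0);

and the vma pointer handed to the callback is only valid for the duration
of the callback.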
> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> index 6535c8ae3c46..08d8c40b8546 100644
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h
> @@ -164,7 +164,7 @@ struct bpf_iter_task_vma;
>  extern int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
>  				 struct task_struct *task,
>  				 __u64 addr) __ksym;
> -extern struct vm_area_struct *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it) __ksym;
> +extern struct mm_area *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it) __ksym;
>  extern void bpf_iter_task_vma_destroy(struct bpf_iter_task_vma *it) __ksym;
>
>  /* Convenience macro to wrap over bpf_obj_drop_impl */
> diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
> index d64ba7ddaed5..899e6b03c070 100644
> --- a/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
> +++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
> @@ -25,7 +25,7 @@ __u32 one_task_error = 0;
>
>  SEC("iter/task_vma") int proc_maps(struct bpf_iter__task_vma *ctx)
>  {
> -	struct vm_area_struct *vma = ctx->vma;
> +	struct mm_area *vma = ctx->vma;
>  	struct seq_file *seq = ctx->meta->seq;
>  	struct task_struct *task = ctx->task;
>  	struct file *file;
> diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c b/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c
> index 174298e122d3..6a27844ef324 100644
> --- a/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c
> +++ b/tools/testing/selftests/bpf/progs/bpf_iter_vma_offset.c
> @@ -15,7 +15,7 @@ __u32 page_shift = 0;
>  SEC("iter/task_vma")
>  int get_vma_offset(struct bpf_iter__task_vma *ctx)
>  {
> -	struct vm_area_struct *vma = ctx->vma;
> +	struct mm_area *vma = ctx->vma;
>  	struct seq_file *seq = ctx->meta->seq;
>  	struct task_struct *task = ctx->task;
>
> diff --git a/tools/testing/selftests/bpf/progs/find_vma.c b/tools/testing/selftests/bpf/progs/find_vma.c
> index 02b82774469c..75f90cb21179 100644
> --- a/tools/testing/selftests/bpf/progs/find_vma.c
> +++ b/tools/testing/selftests/bpf/progs/find_vma.c
> @@ -20,7 +20,7 @@ __u64 addr = 0;
>  int find_zero_ret = -1;
>  int find_addr_ret = -1;
>
> -static long check_vma(struct task_struct *task, struct vm_area_struct *vma,
> +static long check_vma(struct task_struct *task, struct mm_area *vma,
>  		      struct callback_ctx *data)
>  {
>  	if (vma->vm_file)
> diff --git a/tools/testing/selftests/bpf/progs/find_vma_fail1.c b/tools/testing/selftests/bpf/progs/find_vma_fail1.c
> index 7ba9a428f228..4a5a41997169 100644
> --- a/tools/testing/selftests/bpf/progs/find_vma_fail1.c
> +++ b/tools/testing/selftests/bpf/progs/find_vma_fail1.c
> @@ -10,7 +10,7 @@ struct callback_ctx {
>  	int dummy;
>  };
>
> -static long write_vma(struct task_struct *task, struct vm_area_struct *vma,
> +static long write_vma(struct task_struct *task, struct mm_area *vma,
>  		      struct callback_ctx *data)
>  {
>  	/* writing to vma, which is illegal */
> diff --git a/tools/testing/selftests/bpf/progs/find_vma_fail2.c b/tools/testing/selftests/bpf/progs/find_vma_fail2.c
> index 9bcf3203e26b..1117fc0475f2 100644
> --- a/tools/testing/selftests/bpf/progs/find_vma_fail2.c
> +++ b/tools/testing/selftests/bpf/progs/find_vma_fail2.c
> @@ -9,7 +9,7 @@ struct callback_ctx {
>  	int dummy;
>  };
>
> -static long write_task(struct task_struct *task, struct vm_area_struct *vma,
> +static long write_task(struct task_struct *task, struct mm_area *vma,
>  		       struct callback_ctx *data)
>  {
>  	/* writing to task, which is illegal */
> diff --git a/tools/testing/selftests/bpf/progs/iters_css_task.c b/tools/testing/selftests/bpf/progs/iters_css_task.c
> index 9ac758649cb8..bc48b47d1793 100644
> --- a/tools/testing/selftests/bpf/progs/iters_css_task.c
> +++ b/tools/testing/selftests/bpf/progs/iters_css_task.c
> @@ -19,7 +19,7 @@ int css_task_cnt;
>  u64 cg_id;
>
>  SEC("lsm/file_mprotect")
> -int BPF_PROG(iter_css_task_for_each, struct vm_area_struct *vma,
> +int BPF_PROG(iter_css_task_for_each, struct mm_area *vma,
>  	    unsigned long reqprot, unsigned long prot, int ret)
>  {
>  	struct task_struct *cur_task = bpf_get_current_task_btf();
> diff --git a/tools/testing/selftests/bpf/progs/iters_task_vma.c b/tools/testing/selftests/bpf/progs/iters_task_vma.c
> index dc0c3691dcc2..6334a2d0518d 100644
> --- a/tools/testing/selftests/bpf/progs/iters_task_vma.c
> +++ b/tools/testing/selftests/bpf/progs/iters_task_vma.c
> @@ -18,7 +18,7 @@ SEC("raw_tp/sys_enter")
>  int iter_task_vma_for_each(const void *ctx)
>  {
>  	struct task_struct *task = bpf_get_current_task_btf();
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned int seen = 0;
>
>  	if (task->pid != target_pid)
> diff --git a/tools/testing/selftests/bpf/progs/iters_testmod.c b/tools/testing/selftests/bpf/progs/iters_testmod.c
> index 9e4b45201e69..d5303fb6d618 100644
> --- a/tools/testing/selftests/bpf/progs/iters_testmod.c
> +++ b/tools/testing/selftests/bpf/progs/iters_testmod.c
> @@ -14,7 +14,7 @@ int iter_next_trusted(const void *ctx)
>  {
>  	struct task_struct *cur_task = bpf_get_current_task_btf();
>  	struct bpf_iter_task_vma vma_it;
> -	struct vm_area_struct *vma_ptr;
> +	struct mm_area *vma_ptr;
>
>  	bpf_iter_task_vma_new(&vma_it, cur_task, 0);
>
> @@ -34,7 +34,7 @@ int iter_next_trusted_or_null(const void *ctx)
>  {
>  	struct task_struct *cur_task = bpf_get_current_task_btf();
>  	struct bpf_iter_task_vma vma_it;
> -	struct vm_area_struct *vma_ptr;
> +	struct mm_area *vma_ptr;
>
>  	bpf_iter_task_vma_new(&vma_it, cur_task, 0);
>
> diff --git a/tools/testing/selftests/bpf/progs/lsm.c b/tools/testing/selftests/bpf/progs/lsm.c
> index 0c13b7409947..7218621a833a 100644
> --- a/tools/testing/selftests/bpf/progs/lsm.c
> +++ b/tools/testing/selftests/bpf/progs/lsm.c
> @@ -86,7 +86,7 @@ int mprotect_count = 0;
>  int bprm_count = 0;
>
>  SEC("lsm/file_mprotect")
> -int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
> +int BPF_PROG(test_int_hook, struct mm_area *vma,
>  	     unsigned long reqprot, unsigned long prot, int ret)
>  {
>  	if (ret != 0)
> diff --git a/tools/testing/selftests/bpf/progs/test_bpf_cookie.c b/tools/testing/selftests/bpf/progs/test_bpf_cookie.c
> index c83142b55f47..8f803369ad2d 100644
> --- a/tools/testing/selftests/bpf/progs/test_bpf_cookie.c
> +++ b/tools/testing/selftests/bpf/progs/test_bpf_cookie.c
> @@ -125,7 +125,7 @@ int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
>  }
>
>  SEC("lsm/file_mprotect")
> -int BPF_PROG(test_int_hook, struct vm_area_struct *vma,
> +int BPF_PROG(test_int_hook, struct mm_area *vma,
>  	     unsigned long reqprot, unsigned long prot, int ret)
>  {
>  	if (my_tid != (u32)bpf_get_current_pid_tgid())
> diff --git a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
> index 75dd922e4e9f..aa00a677636b 100644
> --- a/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_iterating_callbacks.c
> @@ -14,7 +14,7 @@ struct {
>  	__uint(max_entries, 8);
>  } ringbuf SEC(".maps");
>
> -struct vm_area_struct;
> +struct mm_area;
>  struct bpf_map;
>
>  struct buf_context {
> @@ -146,7 +146,7 @@ int unsafe_ringbuf_drain(void *unused)
>  	return choice_arr[loop_ctx.i];
>  }
>
> -static __u64 find_vma_cb(struct task_struct *task, struct vm_area_struct *vma, void *data)
> +static __u64 find_vma_cb(struct task_struct *task, struct mm_area *vma, void *data)
>  {
>  	return oob_state_machine(data);
>  }
> diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> index 3220f1d28697..b58ebc8ab3b1 100644
> --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
> @@ -198,7 +198,7 @@ __bpf_kfunc void bpf_kfunc_nested_release_test(struct sk_buff *ptr)
>  {
>  }
>
> -__bpf_kfunc void bpf_kfunc_trusted_vma_test(struct vm_area_struct *ptr)
> +__bpf_kfunc void bpf_kfunc_trusted_vma_test(struct mm_area *ptr)
>  {
>  }
>
> diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> index b58817938deb..b28cf00b119b 100644
> --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
> @@ -154,7 +154,7 @@ int bpf_kfunc_st_ops_test_epilogue(struct st_ops_args *args) __ksym;
>  int bpf_kfunc_st_ops_test_pro_epilogue(struct st_ops_args *args) __ksym;
>  int bpf_kfunc_st_ops_inc10(struct st_ops_args *args) __ksym;
>
> -void bpf_kfunc_trusted_vma_test(struct vm_area_struct *ptr) __ksym;
> +void bpf_kfunc_trusted_vma_test(struct mm_area *ptr) __ksym;
>  void bpf_kfunc_trusted_task_test(struct task_struct *ptr) __ksym;
>  void bpf_kfunc_trusted_num_test(int *ptr) __ksym;
>  void bpf_kfunc_rcu_task_test(struct task_struct *ptr) __ksym;
> diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
> index 11f761769b5b..57d129d16596 100644
> --- a/tools/testing/vma/vma.c
> +++ b/tools/testing/vma/vma.c
> @@ -59,13 +59,13 @@ unsigned long rlimit(unsigned int limit)
>  }
>
>  /* Helper function to simply allocate a VMA. */
> -static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
> +static struct mm_area *alloc_vma(struct mm_struct *mm,
>  					unsigned long start,
>  					unsigned long end,
>  					pgoff_t pgoff,
>  					vm_flags_t flags)
>  {
> -	struct vm_area_struct *ret = vm_area_alloc(mm);
> +	struct mm_area *ret = vm_area_alloc(mm);
>
>  	if (ret == NULL)
>  		return NULL;
> @@ -80,7 +80,7 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
>  }
>
>  /* Helper function to allocate a VMA and link it to the tree. */
> -static int attach_vma(struct mm_struct *mm, struct vm_area_struct *vma)
> +static int attach_vma(struct mm_struct *mm, struct mm_area *vma)
>  {
>  	int res;
>
> @@ -91,13 +91,13 @@ static int attach_vma(struct mm_struct *mm, struct vm_area_struct *vma)
>  }
>
>  /* Helper function to allocate a VMA and link it to the tree. */
> -static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
> +static struct mm_area *alloc_and_link_vma(struct mm_struct *mm,
>  						 unsigned long start,
>  						 unsigned long end,
>  						 pgoff_t pgoff,
>  						 vm_flags_t flags)
>  {
> -	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, flags);
> +	struct mm_area *vma = alloc_vma(mm, start, end, pgoff, flags);
>
>  	if (vma == NULL)
>  		return NULL;
> @@ -118,9 +118,9 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
>  }
>
>  /* Helper function which provides a wrapper around a merge new VMA operation. */
> -static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
> +static struct mm_area *merge_new(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	/*
>  	 * For convenience, get prev and next VMAs. Which the new VMA operation
>  	 * requires.
> @@ -140,9 +140,9 @@ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg)
>   * Helper function which provides a wrapper around a merge existing VMA
>   * operation.
>   */
> -static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg)
> +static struct mm_area *merge_existing(struct vma_merge_struct *vmg)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = vma_merge_existing_range(vmg);
>  	if (vma)
> @@ -191,13 +191,13 @@ static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
>   * Update vmg and the iterator for it and try to merge, otherwise allocate a new
>   * VMA, link it to the maple tree and return it.
>   */
> -static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,
> +static struct mm_area *try_merge_new_vma(struct mm_struct *mm,
>  						struct vma_merge_struct *vmg,
>  						unsigned long start, unsigned long end,
>  						pgoff_t pgoff, vm_flags_t flags,
>  						bool *was_merged)
>  {
> -	struct vm_area_struct *merged;
> +	struct mm_area *merged;
>
>  	vmg_set_range(vmg, start, end, pgoff, flags);
>
> @@ -231,7 +231,7 @@ static void reset_dummy_anon_vma(void)
>   */
>  static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	int count = 0;
>
>  	fail_prealloc = false;
> @@ -249,7 +249,7 @@ static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi)
>  }
>
>  /* Helper function to determine if VMA has had vma_start_write() performed. */
> -static bool vma_write_started(struct vm_area_struct *vma)
> +static bool vma_write_started(struct mm_area *vma)
>  {
>  	int seq = vma->vm_lock_seq;
>
> @@ -261,17 +261,17 @@ static bool vma_write_started(struct vm_area_struct *vma)
>  }
>
>  /* Helper function providing a dummy vm_ops->close() method.*/
> -static void dummy_close(struct vm_area_struct *)
> +static void dummy_close(struct mm_area *)
>  {
>  }
>
>  static bool test_simple_merge(void)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
> -	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, flags);
> -	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, flags);
> +	struct mm_area *vma_left = alloc_vma(&mm, 0, 0x1000, 0, flags);
> +	struct mm_area *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, flags);
>  	VMA_ITERATOR(vmi, &mm, 0x1000);
>  	struct vma_merge_struct vmg = {
>  		.mm = &mm,
> @@ -301,10 +301,10 @@ static bool test_simple_merge(void)
>
>  static bool test_simple_modify(void)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
> -	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
> +	struct mm_area *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
>  	VMA_ITERATOR(vmi, &mm, 0x1000);
>
>  	ASSERT_FALSE(attach_vma(&mm, init_vma));
> @@ -363,7 +363,7 @@ static bool test_simple_expand(void)
>  {
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
> -	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, flags);
> +	struct mm_area *vma = alloc_vma(&mm, 0, 0x1000, 0, flags);
>  	VMA_ITERATOR(vmi, &mm, 0);
>  	struct vma_merge_struct vmg = {
>  		.vmi = &vmi,
> @@ -391,7 +391,7 @@ static bool test_simple_shrink(void)
>  {
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
> -	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
> +	struct mm_area *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
>  	VMA_ITERATOR(vmi, &mm, 0);
>
>  	ASSERT_FALSE(attach_vma(&mm, vma));
> @@ -433,7 +433,7 @@ static bool test_merge_new(void)
>  		.close = dummy_close,
>  	};
>  	int count;
> -	struct vm_area_struct *vma, *vma_a, *vma_b, *vma_c, *vma_d;
> +	struct mm_area *vma, *vma_a, *vma_b, *vma_c, *vma_d;
>  	bool merged;
>
>  	/*
> @@ -616,7 +616,7 @@ static bool test_vma_merge_special_flags(void)
>  	vm_flags_t special_flags[] = { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXEDMAP };
>  	vm_flags_t all_special_flags = 0;
>  	int i;
> -	struct vm_area_struct *vma_left, *vma;
> +	struct mm_area *vma_left, *vma;
>
>  	/* Make sure there aren't new VM_SPECIAL flags. */
>  	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
> @@ -688,7 +688,7 @@ static bool test_vma_merge_with_close(void)
>  	const struct vm_operations_struct vm_ops = {
>  		.close = dummy_close,
>  	};
> -	struct vm_area_struct *vma_prev, *vma_next, *vma;
> +	struct mm_area *vma_prev, *vma_next, *vma;
>
>  	/*
>  	 * When merging VMAs we are not permitted to remove any VMA that has a
> @@ -894,12 +894,12 @@ static bool test_vma_merge_new_with_close(void)
>  		.mm = &mm,
>  		.vmi = &vmi,
>  	};
> -	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
> -	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, flags);
> +	struct mm_area *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
> +	struct mm_area *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, flags);
>  	const struct vm_operations_struct vm_ops = {
>  		.close = dummy_close,
>  	};
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	/*
>  	 * We should allow the partial merge of a proposed new VMA if the
> @@ -945,7 +945,7 @@ static bool test_merge_existing(void)
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
>  	VMA_ITERATOR(vmi, &mm, 0);
> -	struct vm_area_struct *vma, *vma_prev, *vma_next;
> +	struct mm_area *vma, *vma_prev, *vma_next;
>  	struct vma_merge_struct vmg = {
>  		.mm = &mm,
>  		.vmi = &vmi,
> @@ -1175,7 +1175,7 @@ static bool test_anon_vma_non_mergeable(void)
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
>  	VMA_ITERATOR(vmi, &mm, 0);
> -	struct vm_area_struct *vma, *vma_prev, *vma_next;
> +	struct mm_area *vma, *vma_prev, *vma_next;
>  	struct vma_merge_struct vmg = {
>  		.mm = &mm,
>  		.vmi = &vmi,
> @@ -1290,7 +1290,7 @@ static bool test_dup_anon_vma(void)
>  	struct anon_vma_chain dummy_anon_vma_chain = {
>  		.anon_vma = &dummy_anon_vma,
>  	};
> -	struct vm_area_struct *vma_prev, *vma_next, *vma;
> +	struct mm_area *vma_prev, *vma_next, *vma;
>
>  	reset_dummy_anon_vma();
>
> @@ -1447,7 +1447,7 @@ static bool test_vmi_prealloc_fail(void)
>  		.mm = &mm,
>  		.vmi = &vmi,
>  	};
> -	struct vm_area_struct *vma_prev, *vma;
> +	struct mm_area *vma_prev, *vma;
>
>  	/*
>  	 * We are merging vma into prev, with vma possessing an anon_vma, which
> @@ -1507,7 +1507,7 @@ static bool test_merge_extend(void)
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
>  	VMA_ITERATOR(vmi, &mm, 0x1000);
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>
>  	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, flags);
>  	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags);
> @@ -1538,7 +1538,7 @@ static bool test_copy_vma(void)
>  	struct mm_struct mm = {};
>  	bool need_locks = false;
>  	VMA_ITERATOR(vmi, &mm, 0);
> -	struct vm_area_struct *vma, *vma_new, *vma_next;
> +	struct mm_area *vma, *vma_new, *vma_next;
>
>  	/* Move backwards and do not merge. */
>
> @@ -1570,7 +1570,7 @@ static bool test_expand_only_mode(void)
>  	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
>  	struct mm_struct mm = {};
>  	VMA_ITERATOR(vmi, &mm, 0);
> -	struct vm_area_struct *vma_prev, *vma;
> +	struct mm_area *vma_prev, *vma;
>  	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, flags, 5);
>
>  	/*
> @@ -1609,7 +1609,7 @@ static bool test_mmap_region_basic(void)
>  {
>  	struct mm_struct mm = {};
>  	unsigned long addr;
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, &mm, 0);
>
>  	current->mm = &mm;
> diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
> index 572ab2cea763..acb90a6ff98a 100644
> --- a/tools/testing/vma/vma_internal.h
> +++ b/tools/testing/vma/vma_internal.h
> @@ -235,7 +235,7 @@ struct file {
>
>  #define VMA_LOCK_OFFSET	0x40000000
>
> -struct vm_area_struct {
> +struct mm_area {
>  	/* The first cache line has the info for VMA tree walking. */
>
>  	union {
> @@ -337,27 +337,27 @@ struct vm_area_struct {
>  struct vm_fault {};
>
>  struct vm_operations_struct {
> -	void (*open)(struct vm_area_struct * area);
> +	void (*open)(struct mm_area * area);
>  	/**
>  	 * @close: Called when the VMA is being removed from the MM.
>  	 * Context: User context.  May sleep.  Caller holds mmap_lock.
>  	 */
> -	void (*close)(struct vm_area_struct * area);
> +	void (*close)(struct mm_area * area);
>  	/* Called any time before splitting to check if it's allowed */
> -	int (*may_split)(struct vm_area_struct *area, unsigned long addr);
> -	int (*mremap)(struct vm_area_struct *area);
> +	int (*may_split)(struct mm_area *area, unsigned long addr);
> +	int (*mremap)(struct mm_area *area);
>  	/*
>  	 * Called by mprotect() to make driver-specific permission
>  	 * checks before mprotect() is finalised.   The VMA must not
>  	 * be modified.  Returns 0 if mprotect() can proceed.
>  	 */
> -	int (*mprotect)(struct vm_area_struct *vma, unsigned long start,
> +	int (*mprotect)(struct mm_area *vma, unsigned long start,
>  			unsigned long end, unsigned long newflags);
>  	vm_fault_t (*fault)(struct vm_fault *vmf);
>  	vm_fault_t (*huge_fault)(struct vm_fault *vmf, unsigned int order);
>  	vm_fault_t (*map_pages)(struct vm_fault *vmf,
>  			pgoff_t start_pgoff, pgoff_t end_pgoff);
> -	unsigned long (*pagesize)(struct vm_area_struct * area);
> +	unsigned long (*pagesize)(struct mm_area * area);
>
>  	/* notification that a previously read-only page is about to become
>  	 * writable, if an error is returned it will cause a SIGBUS */
> @@ -370,13 +370,13 @@ struct vm_operations_struct {
>  	 * for use by special VMAs. See also generic_access_phys() for a generic
>  	 * implementation useful for any iomem mapping.
>  	 */
> -	int (*access)(struct vm_area_struct *vma, unsigned long addr,
> +	int (*access)(struct mm_area *vma, unsigned long addr,
>  		      void *buf, int len, int write);
>
>  	/* Called by the /proc/PID/maps code to ask the vma whether it
>  	 * has a special name.  Returning non-NULL will also cause this
>  	 * vma to be dumped unconditionally. */
> -	const char *(*name)(struct vm_area_struct *vma);
> +	const char *(*name)(struct mm_area *vma);
>
>  #ifdef CONFIG_NUMA
>  	/*
> @@ -386,7 +386,7 @@ struct vm_operations_struct {
>  	 * install a MPOL_DEFAULT policy, nor the task or system default
>  	 * mempolicy.
>  	 */
> -	int (*set_policy)(struct vm_area_struct *vma, struct mempolicy *new);
> +	int (*set_policy)(struct mm_area *vma, struct mempolicy *new);
>
>  	/*
>  	 * get_policy() op must add reference [mpol_get()] to any policy at
> @@ -398,7 +398,7 @@ struct vm_operations_struct {
>  	 * must return NULL--i.e., do not "fallback" to task or system default
>  	 * policy.
>  	 */
> -	struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
> +	struct mempolicy *(*get_policy)(struct mm_area *vma,
>  					unsigned long addr, pgoff_t *ilx);
>  #endif
>  	/*
> @@ -406,7 +406,7 @@ struct vm_operations_struct {
>  	 * page for @addr.  This is useful if the default behavior
>  	 * (using pte_page()) would not find the correct page.
>  	 */
> -	struct page *(*find_special_page)(struct vm_area_struct *vma,
> +	struct page *(*find_special_page)(struct mm_area *vma,
>  					  unsigned long addr);
>  };
>
> @@ -442,12 +442,12 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
>  		(VM_SHARED | VM_MAYWRITE);
>  }
>
> -static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
> +static inline bool vma_is_shared_maywrite(struct mm_area *vma)
>  {
>  	return is_shared_maywrite(vma->vm_flags);
>  }
>
> -static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
> +static inline struct mm_area *vma_next(struct vma_iterator *vmi)
>  {
>  	/*
>  	 * Uses mas_find() to get the first VMA when the iterator starts.
> @@ -461,25 +461,25 @@ static inline struct vm_area_struct *vma_next(struct vma_iterator *vmi)
>   * assertions should be made either under mmap_write_lock or when the object
>   * has been isolated under mmap_write_lock, ensuring no competing writers.
>   */
> -static inline void vma_assert_attached(struct vm_area_struct *vma)
> +static inline void vma_assert_attached(struct mm_area *vma)
>  {
>  	WARN_ON_ONCE(!refcount_read(&vma->vm_refcnt));
>  }
>
> -static inline void vma_assert_detached(struct vm_area_struct *vma)
> +static inline void vma_assert_detached(struct mm_area *vma)
>  {
>  	WARN_ON_ONCE(refcount_read(&vma->vm_refcnt));
>  }
>
> -static inline void vma_assert_write_locked(struct vm_area_struct *);
> -static inline void vma_mark_attached(struct vm_area_struct *vma)
> +static inline void vma_assert_write_locked(struct mm_area *);
> +static inline void vma_mark_attached(struct mm_area *vma)
>  {
>  	vma_assert_write_locked(vma);
>  	vma_assert_detached(vma);
>  	refcount_set_release(&vma->vm_refcnt, 1);
>  }
>
> -static inline void vma_mark_detached(struct vm_area_struct *vma)
> +static inline void vma_mark_detached(struct mm_area *vma)
>  {
>  	vma_assert_write_locked(vma);
>  	vma_assert_attached(vma);
> @@ -496,7 +496,7 @@ extern const struct vm_operations_struct vma_dummy_vm_ops;
>
>  extern unsigned long rlimit(unsigned int limit);
>
> -static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
> +static inline void vma_init(struct mm_area *vma, struct mm_struct *mm)
>  {
>  	memset(vma, 0, sizeof(*vma));
>  	vma->vm_mm = mm;
> @@ -505,9 +505,9 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
>  	vma->vm_lock_seq = UINT_MAX;
>  }
>
> -static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> +static inline struct mm_area *vm_area_alloc(struct mm_struct *mm)
>  {
> -	struct vm_area_struct *vma = calloc(1, sizeof(struct vm_area_struct));
> +	struct mm_area *vma = calloc(1, sizeof(struct mm_area));
>
>  	if (!vma)
>  		return NULL;
> @@ -517,9 +517,9 @@ static inline struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
>  	return vma;
>  }
>
> -static inline struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> +static inline struct mm_area *vm_area_dup(struct mm_area *orig)
>  {
> -	struct vm_area_struct *new = calloc(1, sizeof(struct vm_area_struct));
> +	struct mm_area *new = calloc(1, sizeof(struct mm_area));
>
>  	if (!new)
>  		return NULL;
> @@ -576,7 +576,7 @@ static inline void mapping_allow_writable(struct address_space *mapping)
>  	atomic_inc(&mapping->i_mmap_writable);
>  }
>
> -static inline void vma_set_range(struct vm_area_struct *vma,
> +static inline void vma_set_range(struct mm_area *vma,
>  				 unsigned long start, unsigned long end,
>  				 pgoff_t pgoff)
>  {
> @@ -586,7 +586,7 @@ static inline void vma_set_range(struct vm_area_struct *vma,
>  }
>
>  static inline
> -struct vm_area_struct *vma_find(struct vma_iterator *vmi, unsigned long max)
> +struct mm_area *vma_find(struct vma_iterator *vmi, unsigned long max)
>  {
>  	return mas_find(&vmi->mas, max - 1);
>  }
> @@ -603,7 +603,7 @@ static inline int vma_iter_clear_gfp(struct vma_iterator *vmi,
>  }
>
>  static inline void mmap_assert_locked(struct mm_struct *);
> -static inline struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
> +static inline struct mm_area *find_vma_intersection(struct mm_struct *mm,
>  						unsigned long start_addr,
>  						unsigned long end_addr)
>  {
> @@ -614,12 +614,12 @@ static inline struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
>  }
>
>  static inline
> -struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
> +struct mm_area *vma_lookup(struct mm_struct *mm, unsigned long addr)
>  {
>  	return mtree_load(&mm->mm_mt, addr);
>  }
>
> -static inline struct vm_area_struct *vma_prev(struct vma_iterator *vmi)
> +static inline struct mm_area *vma_prev(struct vma_iterator *vmi)
>  {
>  	return mas_prev(&vmi->mas, 0);
>  }
> @@ -629,7 +629,7 @@ static inline void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
>  	mas_set(&vmi->mas, addr);
>  }
>
> -static inline bool vma_is_anonymous(struct vm_area_struct *vma)
> +static inline bool vma_is_anonymous(struct mm_area *vma)
>  {
>  	return !vma->vm_ops;
>  }
> @@ -638,11 +638,11 @@ static inline bool vma_is_anonymous(struct vm_area_struct *vma)
>  #define vma_iter_load(vmi) \
>  	mas_walk(&(vmi)->mas)
>
> -static inline struct vm_area_struct *
> +static inline struct mm_area *
>  find_vma_prev(struct mm_struct *mm, unsigned long addr,
> -			struct vm_area_struct **pprev)
> +			struct mm_area **pprev)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	VMA_ITERATOR(vmi, mm, addr);
>
>  	vma = vma_iter_load(&vmi);
> @@ -662,12 +662,12 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
>
>  /* Stubbed functions. */
>
> -static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> +static inline struct anon_vma_name *anon_vma_name(struct mm_area *vma)
>  {
>  	return NULL;
>  }
>
> -static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> +static inline bool is_mergeable_vm_userfaultfd_ctx(struct mm_area *vma,
>  					struct vm_userfaultfd_ctx vm_ctx)
>  {
>  	return true;
> @@ -683,7 +683,7 @@ static inline void might_sleep(void)
>  {
>  }
>
> -static inline unsigned long vma_pages(struct vm_area_struct *vma)
> +static inline unsigned long vma_pages(struct mm_area *vma)
>  {
>  	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
>  }
> @@ -696,7 +696,7 @@ static inline void mpol_put(struct mempolicy *)
>  {
>  }
>
> -static inline void vm_area_free(struct vm_area_struct *vma)
> +static inline void vm_area_free(struct mm_area *vma)
>  {
>  	free(vma);
>  }
> @@ -718,7 +718,7 @@ static inline void update_hiwater_vm(struct mm_struct *)
>  }
>
>  static inline void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
> -		      struct vm_area_struct *vma, unsigned long start_addr,
> +		      struct mm_area *vma, unsigned long start_addr,
>  		      unsigned long end_addr, unsigned long tree_end,
>  		      bool mm_wr_locked)
>  {
> @@ -732,7 +732,7 @@ static inline void unmap_vmas(struct mmu_gather *tlb, struct ma_state *mas,
>  }
>
>  static inline void free_pgtables(struct mmu_gather *tlb, struct ma_state *mas,
> -		   struct vm_area_struct *vma, unsigned long floor,
> +		   struct mm_area *vma, unsigned long floor,
>  		   unsigned long ceiling, bool mm_wr_locked)
>  {
>  	(void)tlb;
> @@ -760,12 +760,12 @@ static inline struct file *get_file(struct file *f)
>  	return f;
>  }
>
> -static inline int vma_dup_policy(struct vm_area_struct *, struct vm_area_struct *)
> +static inline int vma_dup_policy(struct mm_area *, struct mm_area *)
>  {
>  	return 0;
>  }
>
> -static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
> +static inline int anon_vma_clone(struct mm_area *dst, struct mm_area *src)
>  {
>  	/* For testing purposes. We indicate that an anon_vma has been cloned. */
>  	if (src->anon_vma != NULL) {
> @@ -776,16 +776,16 @@ static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_stru
>  	return 0;
>  }
>
> -static inline void vma_start_write(struct vm_area_struct *vma)
> +static inline void vma_start_write(struct mm_area *vma)
>  {
>  	/* Used to indicate to tests that a write operation has begun. */
>  	vma->vm_lock_seq++;
>  }
>
> -static inline void vma_adjust_trans_huge(struct vm_area_struct *vma,
> +static inline void vma_adjust_trans_huge(struct mm_area *vma,
>  					 unsigned long start,
>  					 unsigned long end,
> -					 struct vm_area_struct *next)
> +					 struct mm_area *next)
>  {
>  	(void)vma;
>  	(void)start;
> @@ -799,7 +799,7 @@ static inline void vma_iter_free(struct vma_iterator *vmi)
>  }
>
>  static inline
> -struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi)
> +struct mm_area *vma_iter_next_range(struct vma_iterator *vmi)
>  {
>  	return mas_next_range(&vmi->mas, ULONG_MAX);
>  }
> @@ -808,12 +808,12 @@ static inline void vm_acct_memory(long pages)
>  {
>  }
>
> -static inline void vma_interval_tree_insert(struct vm_area_struct *,
> +static inline void vma_interval_tree_insert(struct mm_area *,
>  					    struct rb_root_cached *)
>  {
>  }
>
> -static inline void vma_interval_tree_remove(struct vm_area_struct *,
> +static inline void vma_interval_tree_remove(struct mm_area *,
>  					    struct rb_root_cached *)
>  {
>  }
> @@ -832,11 +832,11 @@ static inline void anon_vma_interval_tree_remove(struct anon_vma_chain*,
>  {
>  }
>
> -static inline void uprobe_mmap(struct vm_area_struct *)
> +static inline void uprobe_mmap(struct mm_area *)
>  {
>  }
>
> -static inline void uprobe_munmap(struct vm_area_struct *vma,
> +static inline void uprobe_munmap(struct mm_area *vma,
>  				 unsigned long start, unsigned long end)
>  {
>  	(void)vma;
> @@ -852,11 +852,11 @@ static inline void anon_vma_lock_write(struct anon_vma *)
>  {
>  }
>
> -static inline void vma_assert_write_locked(struct vm_area_struct *)
> +static inline void vma_assert_write_locked(struct mm_area *)
>  {
>  }
>
> -static inline void unlink_anon_vmas(struct vm_area_struct *vma)
> +static inline void unlink_anon_vmas(struct mm_area *vma)
>  {
>  	/* For testing purposes, indicate that the anon_vma was unlinked. */
>  	vma->anon_vma->was_unlinked = true;
> @@ -870,12 +870,12 @@ static inline void i_mmap_unlock_write(struct address_space *)
>  {
>  }
>
> -static inline void anon_vma_merge(struct vm_area_struct *,
> -				  struct vm_area_struct *)
> +static inline void anon_vma_merge(struct mm_area *,
> +				  struct mm_area *)
>  {
>  }
>
> -static inline int userfaultfd_unmap_prep(struct vm_area_struct *vma,
> +static inline int userfaultfd_unmap_prep(struct mm_area *vma,
>  					 unsigned long start,
>  					 unsigned long end,
>  					 struct list_head *unmaps)
> @@ -934,7 +934,7 @@ static inline bool mpol_equal(struct mempolicy *, struct mempolicy *)
>  	return true;
>  }
>
> -static inline void khugepaged_enter_vma(struct vm_area_struct *vma,
> +static inline void khugepaged_enter_vma(struct mm_area *vma,
>  			  unsigned long vm_flags)
>  {
>  	(void)vma;
> @@ -946,17 +946,17 @@ static inline bool mapping_can_writeback(struct address_space *)
>  	return true;
>  }
>
> -static inline bool is_vm_hugetlb_page(struct vm_area_struct *)
> +static inline bool is_vm_hugetlb_page(struct mm_area *)
>  {
>  	return false;
>  }
>
> -static inline bool vma_soft_dirty_enabled(struct vm_area_struct *)
> +static inline bool vma_soft_dirty_enabled(struct mm_area *)
>  {
>  	return false;
>  }
>
> -static inline bool userfaultfd_wp(struct vm_area_struct *)
> +static inline bool userfaultfd_wp(struct mm_area *)
>  {
>  	return false;
>  }
> @@ -998,63 +998,63 @@ static inline bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long)
>  	return true;
>  }
>
> -static inline void vm_flags_init(struct vm_area_struct *vma,
> +static inline void vm_flags_init(struct mm_area *vma,
>  				 vm_flags_t flags)
>  {
>  	vma->__vm_flags = flags;
>  }
>
> -static inline void vm_flags_set(struct vm_area_struct *vma,
> +static inline void vm_flags_set(struct mm_area *vma,
>  				vm_flags_t flags)
>  {
>  	vma_start_write(vma);
>  	vma->__vm_flags |= flags;
>  }
>
> -static inline void vm_flags_clear(struct vm_area_struct *vma,
> +static inline void vm_flags_clear(struct mm_area *vma,
>  				  vm_flags_t flags)
>  {
>  	vma_start_write(vma);
>  	vma->__vm_flags &= ~flags;
>  }
>
> -static inline int call_mmap(struct file *, struct vm_area_struct *)
> +static inline int call_mmap(struct file *, struct mm_area *)
>  {
>  	return 0;
>  }
>
> -static inline int shmem_zero_setup(struct vm_area_struct *)
> +static inline int shmem_zero_setup(struct mm_area *)
>  {
>  	return 0;
>  }
>
> -static inline void vma_set_anonymous(struct vm_area_struct *vma)
> +static inline void vma_set_anonymous(struct mm_area *vma)
>  {
>  	vma->vm_ops = NULL;
>  }
>
> -static inline void ksm_add_vma(struct vm_area_struct *)
> +static inline void ksm_add_vma(struct mm_area *)
>  {
>  }
>
> -static inline void perf_event_mmap(struct vm_area_struct *)
> +static inline void perf_event_mmap(struct mm_area *)
>  {
>  }
>
> -static inline bool vma_is_dax(struct vm_area_struct *)
> +static inline bool vma_is_dax(struct mm_area *)
>  {
>  	return false;
>  }
>
> -static inline struct vm_area_struct *get_gate_vma(struct mm_struct *)
> +static inline struct mm_area *get_gate_vma(struct mm_struct *)
>  {
>  	return NULL;
>  }
>
> -bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
> +bool vma_wants_writenotify(struct mm_area *vma, pgprot_t vm_page_prot);
>
>  /* Update vma->vm_page_prot to reflect vma->vm_flags. */
> -static inline void vma_set_page_prot(struct vm_area_struct *vma)
> +static inline void vma_set_page_prot(struct mm_area *vma)
>  {
>  	unsigned long vm_flags = vma->vm_flags;
>  	pgprot_t vm_page_prot;
> @@ -1076,16 +1076,16 @@ static inline bool arch_validate_flags(unsigned long)
>  	return true;
>  }
>
> -static inline void vma_close(struct vm_area_struct *)
> +static inline void vma_close(struct mm_area *)
>  {
>  }
>
> -static inline int mmap_file(struct file *, struct vm_area_struct *)
> +static inline int mmap_file(struct file *, struct mm_area *)
>  {
>  	return 0;
>  }
>
> -static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
> +static inline unsigned long stack_guard_start_gap(struct mm_area *vma)
>  {
>  	if (vma->vm_flags & VM_GROWSDOWN)
>  		return stack_guard_gap;
> @@ -1097,7 +1097,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
> +static inline unsigned long vm_start_gap(struct mm_area *vma)
>  {
>  	unsigned long gap = stack_guard_start_gap(vma);
>  	unsigned long vm_start = vma->vm_start;
> @@ -1108,7 +1108,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
>  	return vm_start;
>  }
>
> -static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
> +static inline unsigned long vm_end_gap(struct mm_area *vma)
>  {
>  	unsigned long vm_end = vma->vm_end;
>
> @@ -1126,7 +1126,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
>  	return 0;
>  }
>
> -static inline bool vma_is_accessible(struct vm_area_struct *vma)
> +static inline bool vma_is_accessible(struct mm_area *vma)
>  {
>  	return vma->vm_flags & VM_ACCESS_FLAGS;
>  }
> @@ -1153,7 +1153,7 @@ static inline bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
>  	return locked_pages <= limit_pages;
>  }
>
> -static inline int __anon_vma_prepare(struct vm_area_struct *vma)
> +static inline int __anon_vma_prepare(struct mm_area *vma)
>  {
>  	struct anon_vma *anon_vma = calloc(1, sizeof(struct anon_vma));
>
> @@ -1166,7 +1166,7 @@ static inline int __anon_vma_prepare(struct vm_area_struct *vma)
>  	return 0;
>  }
>
> -static inline int anon_vma_prepare(struct vm_area_struct *vma)
> +static inline int anon_vma_prepare(struct mm_area *vma)
>  {
>  	if (likely(vma->anon_vma))
>  		return 0;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index e85b33a92624..419e641a79a8 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2618,7 +2618,7 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_is_visible_gfn);
>
>  unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	unsigned long addr, size;
>
>  	size = PAGE_SIZE;
> @@ -2860,7 +2860,7 @@ static int hva_to_pfn_slow(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
>  	return npages;
>  }
>
> -static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
> +static bool vma_is_valid(struct mm_area *vma, bool write_fault)
>  {
>  	if (unlikely(!(vma->vm_flags & VM_READ)))
>  		return false;
> @@ -2871,7 +2871,7 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
>  	return true;
>  }
>
> -static int hva_to_pfn_remapped(struct vm_area_struct *vma,
> +static int hva_to_pfn_remapped(struct mm_area *vma,
>  			       struct kvm_follow_pfn *kfp, kvm_pfn_t *p_pfn)
>  {
>  	struct follow_pfnmap_args args = { .vma = vma, .address = kfp->hva };
> @@ -2919,7 +2919,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
>
>  kvm_pfn_t hva_to_pfn(struct kvm_follow_pfn *kfp)
>  {
> -	struct vm_area_struct *vma;
> +	struct mm_area *vma;
>  	kvm_pfn_t pfn;
>  	int npages, r;
>
> @@ -3997,7 +3997,7 @@ static const struct vm_operations_struct kvm_vcpu_vm_ops = {
>  	.fault = kvm_vcpu_fault,
>  };
>
> -static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma)
> +static int kvm_vcpu_mmap(struct file *file, struct mm_area *vma)
>  {
>  	struct kvm_vcpu *vcpu = file->private_data;
>  	unsigned long pages = vma_pages(vma);
> @@ -4613,7 +4613,7 @@ static long kvm_vcpu_compat_ioctl(struct file *filp,
>  }
>  #endif
>
> -static int kvm_device_mmap(struct file *filp, struct vm_area_struct *vma)
> +static int kvm_device_mmap(struct file *filp, struct mm_area *vma)
>  {
>  	struct kvm_device *dev = filp->private_data;
>
> --
> 2.47.2
>


* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 12:25 [PATCH] mm: Rename vm_area_struct to mm_area Matthew Wilcox (Oracle)
  2025-04-01 12:35 ` Lorenzo Stoakes
@ 2025-04-01 14:17 ` Liam R. Howlett
  2025-04-01 14:21   ` Vlastimil Babka
  2025-04-01 15:11 ` David Hildenbrand
  2 siblings, 1 reply; 11+ messages in thread
From: Liam R. Howlett @ 2025-04-01 14:17 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Andrew Morton, Lorenzo Stoakes, Vlastimil Babka, Jann Horn, linux-mm

* Matthew Wilcox (Oracle) <willy@infradead.org> [250401 08:26]:
> We don't need to put "_struct" on the end of the name.  It's obviously
> a struct.  Just look at the word "struct" before the name.  The acronym
> "vm" tends to mean "virtual machine" rather than "virtual memory" these
> days, so use "mm_area" instead of "vm_area".  I decided not to rename
> the variables (typically "vma") of type "struct mm_area *" as that would
> be a fair bit more disruptive.

I'm not sure I like this idea.  I mean, we should be more clear about
the type.  It's not even saying it is _in_ a struct.

Maybe we should go another direction and change vm_area_struct to
"memory_area_virtual_in_struct" to really clarify what we are talking
about.

Obviously the variables "vma" should be updated (over time, as code is
changed..) to "mavis" to match the new struct name, for type clarity -
like a beacon.  I really like the mavis beacon idea, it makes typing
easier.

Another added benefit to this naming convention is that the virtual
machine code is free to use "machine_address_under_virtual_enrichment".
This will provide colour to the code, especially in the variable
names.

Thanks,
Liam


* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 14:17 ` Liam R. Howlett
@ 2025-04-01 14:21   ` Vlastimil Babka
  2025-04-01 14:46     ` Johannes Weiner
  2025-04-01 14:54     ` Harry Yoo
  0 siblings, 2 replies; 11+ messages in thread
From: Vlastimil Babka @ 2025-04-01 14:21 UTC (permalink / raw)
  To: Liam R. Howlett, Matthew Wilcox (Oracle),
	Andrew Morton, Lorenzo Stoakes, Jann Horn, linux-mm

On 4/1/25 16:17, Liam R. Howlett wrote:
> * Matthew Wilcox (Oracle) <willy@infradead.org> [250401 08:26]:
>> We don't need to put "_struct" on the end of the name.  It's obviously
>> a struct.  Just look at the word "struct" before the name.  The acronym
>> "vm" tends to mean "virtual machine" rather than "virtual memory" these
>> days, so use "mm_area" instead of "vm_area".  I decided not to rename
>> the variables (typically "vma") of type "struct mm_area *" as that would
>> be a fair bit more disruptive.
> 
> I'm not sure I like this idea.  I mean, we should be more clear about
> the type.  It's not even saying it is _in_ a struct.
> 
> Maybe we should go another direction and change vm_area_struct to
> "memory_area_virtual_in_struct" to really clarify what we are talking
> about.
> 
> Obviously the variables "vma" should be updated (over time, as code is
> changed..) to "mavis" to match the new struct name, for type clarity -
> like a beacon.  I really like the mavis beacon idea, it makes typing
> easier.

I agree with this direction. We should also rename "struct address_space" to
"struct address_space_struct" and rename folio.mapping accordingly.

> Another added benefit to this naming convention is that the virtual
> machine code is free to use "machine_address_under_virtual_enrichment".
> This will provide colour to the code, especially in the variable
> names.
> 
> Thanks,
> Liam



* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 14:21   ` Vlastimil Babka
@ 2025-04-01 14:46     ` Johannes Weiner
  2025-04-01 14:54     ` Harry Yoo
  1 sibling, 0 replies; 11+ messages in thread
From: Johannes Weiner @ 2025-04-01 14:46 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Liam R. Howlett, Matthew Wilcox (Oracle),
	Andrew Morton, Lorenzo Stoakes, Jann Horn, linux-mm

On Tue, Apr 01, 2025 at 04:21:53PM +0200, Vlastimil Babka wrote:
> On 4/1/25 16:17, Liam R. Howlett wrote:
> > * Matthew Wilcox (Oracle) <willy@infradead.org> [250401 08:26]:
> >> We don't need to put "_struct" on the end of the name.  It's obviously
> >> a struct.  Just look at the word "struct" before the name.  The acronym
> >> "vm" tends to mean "virtual machine" rather than "virtual memory" these
> >> days, so use "mm_area" instead of "vm_area".  I decided not to rename
> >> the variables (typically "vma") of type "struct mm_area *" as that would
> >> be a fair bit more disruptive.
> > 
> > I'm not sure I like this idea.  I mean, we should be more clear about
> > the type.  It's not even saying it is _in_ a struct.
> > 
> > Maybe we should go another direction and change vm_area_struct to
> > "memory_area_virtual_in_struct" to really clarify what we are talking
> > about.
> > 
> > Obviously the variables "vma" should be updated (over time, as code is
> > changed..) to "mavis" to match the new struct name, for type clarity -
> > like a beacon.  I really like the mavis beacon idea, it makes typing
> > easier.
> 
> I agree with this direction. We should also rename "struct address_space" to
> "struct address_space_struct" and rename folio.mapping accordingly.

folio.paddress_space_struct has a nice ring to it.

It's a bit of a mouthful, but that shouldn't matter as much anymore
with Copilot doing most of the writing at this point.


* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 14:21   ` Vlastimil Babka
  2025-04-01 14:46     ` Johannes Weiner
@ 2025-04-01 14:54     ` Harry Yoo
  2025-04-01 15:17       ` Mike Rapoport
  1 sibling, 1 reply; 11+ messages in thread
From: Harry Yoo @ 2025-04-01 14:54 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Liam R. Howlett, Matthew Wilcox (Oracle),
	Andrew Morton, Lorenzo Stoakes, Jann Horn, linux-mm

On Tue, Apr 01, 2025 at 04:21:53PM +0200, Vlastimil Babka wrote:
> On 4/1/25 16:17, Liam R. Howlett wrote:
> > * Matthew Wilcox (Oracle) <willy@infradead.org> [250401 08:26]:
> >> We don't need to put "_struct" on the end of the name.  It's obviously
> >> a struct.  Just look at the word "struct" before the name.  The acronym
> >> "vm" tends to mean "virtual machine" rather than "virtual memory" these
> >> days, so use "mm_area" instead of "vm_area".  I decided not to rename
> >> the variables (typically "vma") of type "struct mm_area *" as that would
> >> be a fair bit more disruptive.
> > 
> > I'm not sure I like this idea.  I mean, we should be more clear about
> > the type.  It's not even saying it is _in_ a struct.
> > 
> > Maybe we should go another direction and change vm_area_struct to
> > "memory_area_virtual_in_struct" to really clarify what we are talking
> > about.
> > 
> > Obviously the variables "vma" should be updated (over time, as code is
> > changed..) to "mavis" to match the new struct name, for type clarity -
> > like a beacon.  I really like the mavis beacon idea, it makes typing
> > easier.
> 
> I agree with this direction. We should also rename "struct address_space" to
> "struct address_space_struct" and rename folio.mapping accordingly.

I'm not sure if that abbreviation is compliant with the CoC. :P

> > Another added benefit to this naming convention is that the virtual
> > machine code is free to use "machine_address_under_virtual_enrichment".
> > This will provide colour to the code, especially in the variable
> > names.
> > 
> > Thanks,
> > Liam

-- 
Cheers,
Harry (formerly known as Hyeonggon)


* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 12:25 [PATCH] mm: Rename vm_area_struct to mm_area Matthew Wilcox (Oracle)
  2025-04-01 12:35 ` Lorenzo Stoakes
  2025-04-01 14:17 ` Liam R. Howlett
@ 2025-04-01 15:11 ` David Hildenbrand
  2025-04-01 15:20   ` Lorenzo Stoakes
  2 siblings, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2025-04-01 15:11 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton
  Cc: Liam R . Howlett, Lorenzo Stoakes, Vlastimil Babka, Jann Horn, linux-mm

On 01.04.25 14:25, Matthew Wilcox (Oracle) wrote:
> We don't need to put "_struct" on the end of the name.  It's obviously
> a struct.  Just look at the word "struct" before the name.  The acronym
> "vm" tends to mean "virtual machine" rather than "virtual memory" these
> days, so use "mm_area" instead of "vm_area".  I decided not to rename
> the variables (typically "vma") of type "struct mm_area *" as that would
> be a fair bit more disruptive.

I almost fell for it, until I looked at the calendar :)

On a serious note: "struct vma" ;)

-- 
Cheers,

David / dhildenb



* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 14:54     ` Harry Yoo
@ 2025-04-01 15:17       ` Mike Rapoport
  0 siblings, 0 replies; 11+ messages in thread
From: Mike Rapoport @ 2025-04-01 15:17 UTC (permalink / raw)
  To: Harry Yoo
  Cc: Vlastimil Babka, Liam R. Howlett, Matthew Wilcox (Oracle),
	Andrew Morton, Lorenzo Stoakes, Jann Horn, linux-mm

On Tue, Apr 01, 2025 at 11:54:08PM +0900, Harry Yoo wrote:
> On Tue, Apr 01, 2025 at 04:21:53PM +0200, Vlastimil Babka wrote:
> > On 4/1/25 16:17, Liam R. Howlett wrote:
> > > * Matthew Wilcox (Oracle) <willy@infradead.org> [250401 08:26]:
> > >> We don't need to put "_struct" on the end of the name.  It's obviously
> > >> a struct.  Just look at the word "struct" before the name.  The acronym
> > >> "vm" tends to mean "virtual machine" rather than "virtual memory" these
> > >> days, so use "mm_area" instead of "vm_area".  I decided not to rename
> > >> the variables (typically "vma") of type "struct mm_area *" as that would
> > >> be a fair bit more disruptive.
> > > 
> > > I'm not sure I like this idea.  I mean, we should be more clear about
> > > the type.  It's not even saying it is _in_ a struct.
> > > 
> > > Maybe we should go another direction and change vm_area_struct to
> > > "memory_area_virtual_in_struct" to really clarify what we are talking
> > > about.
> > > 
> > > Obviously the variables "vma" should be updated (over time, as code is
> > > changed..) to "mavis" to match the new struct name, for type clarity -
> > > like a beacon.  I really like the mavis beacon idea, it makes typing
> > > easier.
> > 
> > I agree with this direction. We should also rename "struct address_space" to
> > "struct address_space_struct" and rename folio.mapping accordingly.
> 
> I'm not sure if that abbreviation is compliant with the CoC. :P

With that in mind, the Address Space Isolation series should actually be
named Address Space Separation.
 
> > > Another added benefit to this naming convention is that the virtual
> > > machine code is free to use "machine_address_under_virtual_enrichment".
> > > This will provide colour to the code, especially in the variable
> > > names.
> > > 
> > > Thanks,
> > > Liam
> 
> -- 
> Cheers,
> Harry (formerly known as Hyeonggon)
> 

-- 
Sincerely yours,
Mike.


* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 15:11 ` David Hildenbrand
@ 2025-04-01 15:20   ` Lorenzo Stoakes
  2025-04-01 15:26     ` David Hildenbrand
  0 siblings, 1 reply; 11+ messages in thread
From: Lorenzo Stoakes @ 2025-04-01 15:20 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Matthew Wilcox (Oracle),
	Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	linux-mm

On Tue, Apr 01, 2025 at 05:11:58PM +0200, David Hildenbrand wrote:
> On 01.04.25 14:25, Matthew Wilcox (Oracle) wrote:
> > We don't need to put "_struct" on the end of the name.  It's obviously
> > a struct.  Just look at the word "struct" before the name.  The acronym
> > "vm" tends to mean "virtual machine" rather than "virtual memory" these
> > days, so use "mm_area" instead of "vm_area".  I decided not to rename
> > the variables (typically "vma") of type "struct mm_area *" as that would
> > be a fair bit more disruptive.
>
> I almost fell for it, until I looked at the calendar :)
>
> On a serious note: "struct vma" ;)

We should put this in a helper struct

Maybe:

struct vmb {
	struct vma *vma;
	bool is_file;
};

I think this would solve a lot of our problems and probably eliminate
anon_vma or something

>
> --
> Cheers,
>
> David / dhildenb
>
>


* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 15:20   ` Lorenzo Stoakes
@ 2025-04-01 15:26     ` David Hildenbrand
  2025-04-01 23:53       ` John Hubbard
  0 siblings, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2025-04-01 15:26 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Matthew Wilcox (Oracle),
	Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	linux-mm

On 01.04.25 17:20, Lorenzo Stoakes wrote:
> On Tue, Apr 01, 2025 at 05:11:58PM +0200, David Hildenbrand wrote:
>> On 01.04.25 14:25, Matthew Wilcox (Oracle) wrote:
>>> We don't need to put "_struct" on the end of the name.  It's obviously
>>> a struct.  Just look at the word "struct" before the name.  The acronym
>>> "vm" tends to mean "virtual machine" rather than "virtual memory" these
>>> days, so use "mm_area" instead of "vm_area".  I decided not to rename
>>> the variables (typically "vma") of type "struct mm_area *" as that would
>>> be a fair bit more disruptive.
>>
>> I almost fell for it, until I looked at the calendar :)
>>
>> On a serious note: "struct vma" ;)
> 
> We should put this in a helper struct
> 
> Maybe:
> 
> struct vmb {
> 	struct vma *vma;
> 	bool is_file;
> };
> 
> I think this would solve a lot of our problems and probably eliminate
> anon_vma or something

If that solves most of our problems, imagine what a "struct vmc" could do :)

-- 
Cheers,

David / dhildenb



* Re: [PATCH] mm: Rename vm_area_struct to mm_area
  2025-04-01 15:26     ` David Hildenbrand
@ 2025-04-01 23:53       ` John Hubbard
  0 siblings, 0 replies; 11+ messages in thread
From: John Hubbard @ 2025-04-01 23:53 UTC (permalink / raw)
  To: David Hildenbrand, Lorenzo Stoakes
  Cc: Matthew Wilcox (Oracle),
	Andrew Morton, Liam R . Howlett, Vlastimil Babka, Jann Horn,
	linux-mm

On 4/1/25 8:26 AM, David Hildenbrand wrote:
...
>>>> be a fair bit more disruptive.
>>>
>>> I almost fell for it, until I looked at the calendar :)
>>>
>>> On a serious note: "struct vma" ;)
>>
>> We should put this in a helper struct
>>
>> Maybe:
>>
>> struct vmb {
>>     struct vma *vma;
>>     bool is_file;
>> };
>>
>> I think this would solve a lot of our problems and probably eliminate
>> anon_vma or something
> 
> If that solves most our problems, imagine what a "struct vmc" could do :)
> 

I was almost ready to ack all of the above, until I realized that
it was too disruptive because it's written in C.

In order to be as non-contentious as possible, the whole thing should
be done in Rust:

struct Vmb {
     vma: *mut Vma,
     is_file: bool,
}


thanks,
-- 
John Hubbard


