* [PATCH v2 0/3] mm/memory: cleanly support zeropage in vm_insert_page*(), vm_map_pages*() and vmf_insert_mixed()
@ 2024-05-22 12:57 David Hildenbrand
From: David Hildenbrand @ 2024-05-22 12:57 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, David Hildenbrand, Andrew Morton, Vincent Donnefort,
	Dan Williams

There is interest in mapping zeropages via vm_insert_pages() [1] into
MAP_SHARED mappings.
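
To illustrate that use case, here is a rough sketch (the function is made
up for illustration and is not code from [1]) of how a driver could back
part of a MAP_SHARED VMA with the shared zeropage via vm_insert_pages():

#include <linux/mm.h>

/* Sketch only: back a range of a MAP_SHARED VMA with the shared zeropage. */
static int zero_fill_range(struct vm_area_struct *vma, unsigned long addr,
			   unsigned long nr_pages)
{
	struct page *pages[8];
	unsigned long chunk, left, i;
	int err;

	while (nr_pages) {
		chunk = min_t(unsigned long, nr_pages, ARRAY_SIZE(pages));
		for (i = 0; i < chunk; i++)
			pages[i] = ZERO_PAGE(0);
		left = chunk;
		/*
		 * With this series, the shared zeropage is accepted (and not
		 * refcounted) here; so far, that was not cleanly supported.
		 */
		err = vm_insert_pages(vma, addr, pages, &left);
		if (err)
			return err;
		addr += chunk * PAGE_SIZE;
		nr_pages -= chunk;
	}
	return 0;
}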

For now, we only get zeropages in MAP_SHARED mappings via
vmf_insert_mixed() from FSDAX code, and I think it's a bit shaky in some
cases: we refcount the zeropage when mapping it but not always when
unmapping it ... and we should actually never refcount it.
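
In other words, the invariant ought to be: never take a reference on the
shared zeropage when inserting it, because the unmap paths treat it as
special and will never drop that reference again. A minimal sketch of such
a check (the helper is made up for illustration, not the actual patch):

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Sketch only: take a reference on anything except the shared zeropage. */
static void get_page_unless_zeropage(struct page *page)
{
	if (!is_zero_pfn(page_to_pfn(page)))
		get_page(page);
}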

It's all a bit tricky, especially how zeropages in MAP_SHARED mappings
interact with GUP (FOLL_LONGTERM), mprotect(), write faults and s390x
forbidding the shared zeropage (the rewrite in [2] is now upstream).

This series tries to take the careful approach of only allowing the
zeropage where it is likely safe to use (which should cover the existing
FSDAX use case and [1]), preventing it from accidentally getting mapped
writable during a write fault, mprotect() etc., and preventing issues
with FOLL_LONGTERM in the future with other users.
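
Concretely, the idea is to accept the shared zeropage only when the VMA can
never become writable, or when the owner intercepts write faults and can
replace the zeropage first. A minimal sketch of such a check (the helper
name and the exact conditions are assumptions, not necessarily what the
patches implement):

#include <linux/mm.h>

/* Sketch only: is it safe to map the shared zeropage into this VMA? */
static bool zeropage_insertion_ok(struct vm_area_struct *vma)
{
	/* s390x may forbid the shared zeropage in some processes (see [2]). */
	if (mm_forbids_zeropage(vma->vm_mm))
		return false;
	/* Never-writable VMAs are always fine (see the v2 changelog below). */
	if (!(vma->vm_flags & (VM_WRITE | VM_MAYWRITE)))
		return true;
	/*
	 * Otherwise, only accept the zeropage if the owner intercepts write
	 * faults (e.g., FSDAX via ->pfn_mkwrite), so the zeropage can never
	 * accidentally become writable.
	 */
	return vma->vm_ops && vma->vm_ops->pfn_mkwrite;
}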

Tested with a patch from Vincent that uses the zeropage in the context of
[1]; Vincent will post that patch, based on this series, soon. (Not tested
with FSDAX, but I don't expect surprises.)

[1] https://lkml.kernel.org/r/20240430111354.637356-1-vdonnefort@google.com
[2] https://lkml.kernel.org/r/20240411161441.910170-1-david@redhat.com

v1 -> v2:
* "mm/memory: move page_count() check into validate_page_before_insert()"
 -> Added
* "mm/memory: cleanly support zeropage in vm_insert_page*(), ..."
 -> Fixed "return true;" for never-writable VMAs

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vincent Donnefort <vdonnefort@google.com>
Cc: Dan Williams <dan.j.williams@intel.com>

David Hildenbrand (3):
  mm/memory: move page_count() check into validate_page_before_insert()
  mm/memory: cleanly support zeropage in vm_insert_page*(),
    vm_map_pages*() and vmf_insert_mixed()
  mm/rmap: sanity check that zeropages are not passed to RMAP

 include/linux/rmap.h |  3 ++
 mm/memory.c          | 97 ++++++++++++++++++++++++++++++++------------
 mm/mprotect.c        |  2 +
 3 files changed, 77 insertions(+), 25 deletions(-)


base-commit: 29c73fc794c83505066ee6db893b2a83ac5fac63
-- 
2.45.0


