linux-mm.kvack.org archive mirror
* [PATCH v3 0/5] batched remove rmap in try_to_unmap_one()
@ 2023-03-06  9:22 Yin Fengwei
  2023-03-06  9:22 ` [PATCH v3 1/5] rmap: move hugetlb try_to_unmap to dedicated function Yin Fengwei
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Yin Fengwei @ 2023-03-06  9:22 UTC (permalink / raw)
  To: linux-mm, akpm, willy, mike.kravetz, sidhartha.kumar,
	naoya.horiguchi, jane.chu, david
  Cc: fengwei.yin

This series brings batched rmap removal to try_to_unmap_one().
Removing the rmap for a whole batch of pages at once is expected
to perform better than removing it one page at a time.

This series restructures try_to_unmap_one() from:
  loop:
     clear and update PTE
     unmap one page
     goto loop
to:
  loop:
     clear and update PTE
     goto loop
  unmap the range of folio in one call
This is one step toward always mapping/unmapping the entire folio
in one call, which can simplify folio mapcount handling by avoiding
per-page map/unmap accounting.


The changes are organized as follows:

Patch 1/2 move the hugetlb and normal page unmap paths into
dedicated functions, making the try_to_unmap_one() logic clearer
and easier to extend with batched rmap removal. To keep review
simple, there is no functional change.

Patch 3 cleans up the exit path of try_to_unmap_one_page() and
removes some duplicated function calls.

Patch 4 adds folio_remove_rmap_range(), which removes the rmap for
a range of pages in one batch.

Patch 5 makes try_to_unmap_one() use the batched rmap removal.

Testing done with the v3 patchset in a qemu guest
with 4G mem:
  - kernel mm selftests to trigger vmscan() and eventually hit
    try_to_unmap_one().
  - Injected hwpoison into a hugetlb page to trigger the
    try_to_unmap_one() call against hugetlb.
  - 8 hours of stress testing: Firefox + kernel mm selftests +
    kernel build.

This series is based on next-20230303.

Changes from v2:
  - General
    - Rebase the patch to next-20230303
    - Update cover letter about the preparation to unmap
      the entire folio in one call
- No code change compared to v2, but fixed a patch-apply
  conflict caused by wrong patch order in v2.

Changes from v1:
  - General
    - Rebase the patch to next-20230228

  - Patch1
- Removed the if (PageHWPoison(page) && !(flags & TTU_HWPOISON))
  check as suggested by Mike Kravetz and HORIGUCHI NAOYA
- Removed the mlock_drain_local() as suggested by Mike Kravetz
- Removed the comments about the mm counter change as suggested
  by Mike Kravetz

Yin Fengwei (5):
  rmap: move hugetlb try_to_unmap to dedicated function
  rmap: move page unmap operation to dedicated function
  rmap: cleanup exit path of try_to_unmap_one_page()
  rmap: add folio_remove_rmap_range()
  try_to_unmap_one: batched remove rmap, update folio refcount

 include/linux/rmap.h |   5 +
 mm/page_vma_mapped.c |  30 +++
 mm/rmap.c            | 623 +++++++++++++++++++++++++------------------
 3 files changed, 398 insertions(+), 260 deletions(-)

-- 
2.30.2





Thread overview: 13+ messages
2023-03-06  9:22 [PATCH v3 0/5] batched remove rmap in try_to_unmap_one() Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 1/5] rmap: move hugetlb try_to_unmap to dedicated function Yin Fengwei
2023-03-08 21:38   ` Mike Kravetz
2023-03-09  5:13     ` Yin, Fengwei
2023-03-06  9:22 ` [PATCH v3 2/5] rmap: move page unmap operation " Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 3/5] rmap: cleanup exit path of try_to_unmap_one_page() Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 4/5] rmap: add folio_remove_rmap_range() Yin Fengwei
2023-03-06  9:22 ` [PATCH v3 5/5] try_to_unmap_one: batched remove rmap, update folio refcount Yin Fengwei
2023-03-06 12:39   ` haoxin
2023-03-07  2:45     ` Yin, Fengwei
2023-03-06 21:12 ` [PATCH v3 0/5] batched remove rmap in try_to_unmap_one() Andrew Morton
2023-03-07  2:44   ` Yin, Fengwei
2023-03-09 13:56   ` Yin, Fengwei
