linux-mm.kvack.org archive mirror
From: Zhenyu Zhang <zhenyzha@redhat.com>
To: Guowen Shan <gshan@redhat.com>, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	 william.kucharski@oracle.com, ziy@nvidia.com,
	kirill.shutemov@linux.intel.com,  apopple@nvidia.com,
	hughd@google.com, willy@infradead.org,  shan.gavin@gmail.com,
	David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH v2] mm: migrate: Fix THP's mapcount on isolation
Date: Fri, 25 Nov 2022 15:40:03 +0800	[thread overview]
Message-ID: <CAJFLiBJQBaXkdr+siDhQVRTze88SSfb-nSp_+5cBXAtqjKzdqQ@mail.gmail.com> (raw)
In-Reply-To: <31947f33-cd9e-adbb-2dcc-106a464438df@redhat.com>

With the patch applied, I'm unable to hit memory hot-remove failure in
the environment where the issue was initially found.

Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>

On Thu, Nov 24, 2022 at 10:09 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 24.11.22 14:22, David Hildenbrand wrote:
> > On 24.11.22 13:55, Gavin Shan wrote:
> >> On 11/24/22 6:43 PM, David Hildenbrand wrote:
> >>> On 24.11.22 11:21, Gavin Shan wrote:
> >>>> On 11/24/22 6:09 PM, David Hildenbrand wrote:
> >>>>> On 24.11.22 10:55, Gavin Shan wrote:
> >>>>>> The issue was reported when removing memory through a virtio_mem
> >>>>>> device. A transparent huge page that has experienced a copy-on-write
> >>>>>> fault is wrongly regarded as pinned, so it escapes isolation in
> >>>>>> isolate_migratepages_block(). As a result, the transparent huge page
> >>>>>> can't be migrated and the corresponding memory block can't be put
> >>>>>> into the offline state.
> >>>>>>
> >>>>>> Fix it by replacing page_mapcount() with total_mapcount(). With this,
> >>>>>> the transparent huge page can be isolated and migrated, and the
> >>>>>> memory block can be put into the offline state. Besides, the page's
> >>>>>> refcount is increased a bit earlier to avoid the page being released
> >>>>>> while the check is executed.
> >>>>>
> >>>>> Did you look into handling pages that are in the swapcache case as well?
> >>>>>
> >>>>> See is_refcount_suitable() in mm/khugepaged.c.
> >>>>>
> >>>>> Should be easy to reproduce, let me know if you need inspiration.
> >>>>>
> >>>>
> >>>> Nope, I didn't look into that case. Please elaborate on the details
> >>>> so that I can reproduce it first.
> >>>
> >>>
> >>> A simple reproducer would be (on a system with ordinary swap (not zram))
> >>>
> >>> 1) mmap a region (MAP_ANON|MAP_PRIVATE) that can hold a THP
> >>>
> >>> 2) Enable THP for that region (MADV_HUGEPAGE)
> >>>
> >>> 3) Populate a THP (e.g., write access)
> >>>
> >>> 4) PTE-map the THP, for example, using MADV_FREE on the last subpage
> >>>
> >>> 5) Trigger swapout of the THP, for example, using MADV_PAGEOUT
> >>>
> >>> 6) Read-access to some subpages to fault them in from the swapcache
> >>>
> >>>
> >>> Now you'd have a THP, which
> >>>
> >>> 1) Is partially PTE-mapped into the page table
> >>> 2) Is in the swapcache (each subpage should have one reference from the swapcache)
> >>>
> >>>
> >>> Now we could test, if alloc_contig_range() will still succeed (e.g., using virtio-mem).
> >>>
> >>
> >> Thanks for the details. Steps (4) and (5) can actually be combined:
> >> swapping out part of the THP (e.g. one subpage) will force the THP to be split.
> >>
> >> I followed your steps in the attached program, there is no issue to do memory hot-remove
> >> through virtio-mem with or without this patch.
> >
> > Interesting. But I don't really see how we could pass this check with a
> > page that's in the swapcache, maybe I'm missing something else.
> >
> > I'll try to see if I can reproduce it.
> >
>
> After some unsuccessful attempts and many head-scratches, I realized
> that it's quite simple why we don't have to worry about swapcache pages
> here:
>
> page_mapping() is != NULL for pages in the swapcache: folio_mapping()
> makes this rather obvious:
>
> if (unlikely(folio_test_swapcache(folio)))
>         return swap_address_space(folio_swap_entry(folio));
>
>
> I think the get_page_unless_zero() might also be a fix for the
> page_mapping() call, smells like something could blow up on concurrent
> page freeing. (what about concurrent removal from the swapcache? nobody
> knows :) )
>
>
> Thanks Gavin!
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
>
> --
> Thanks,
>
> David / dhildenb
>



Thread overview: 10+ messages
2022-11-24  9:55 Gavin Shan
2022-11-24 10:09 ` David Hildenbrand
2022-11-24 10:21   ` Gavin Shan
2022-11-24 10:43     ` David Hildenbrand
2022-11-24 12:38       ` Zi Yan
2022-11-24 13:20         ` David Hildenbrand
2022-11-24 12:55       ` Gavin Shan
2022-11-24 13:22         ` David Hildenbrand
2022-11-24 14:09           ` David Hildenbrand
2022-11-25  7:40             ` Zhenyu Zhang [this message]
