From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>,
	Alistair Popple <apopple@nvidia.com>
Cc: Hugh Dickins <hughd@google.com>, Gavin Shan <gshan@redhat.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, william.kucharski@oracle.com,
	ziy@nvidia.com, kirill.shutemov@linux.intel.com,
	zhenyzha@redhat.com, shan.gavin@gmail.com, riel@surriel.com
Subject: Re: [PATCH] mm: migrate: Fix THP's mapcount on isolation
Date: Thu, 24 Nov 2022 09:49:44 +0100	[thread overview]
Message-ID: <51ffd399-7fa3-b2f2-b6e5-61a8b609e350@redhat.com> (raw)
In-Reply-To: <Y37mC1+LQscJaOk4@casper.infradead.org>

On 24.11.22 04:33, Matthew Wilcox wrote:
> On Thu, Nov 24, 2022 at 12:06:56PM +1100, Alistair Popple wrote:
>>
>> David Hildenbrand <david@redhat.com> writes:
>>
>>> On 23.11.22 06:14, Hugh Dickins wrote:
>>>> On Wed, 23 Nov 2022, Gavin Shan wrote:
>>>>
>>>>> The issue is reported when removing memory through a virtio_mem device.
>>>>> A transparent huge page that has gone through a copy-on-write fault is
>>>>> wrongly regarded as pinned, so it escapes isolation in
>>>>> isolate_migratepages_block(). The transparent huge page can't be
>>>>> migrated and the corresponding memory block can't be put into the
>>>>> offline state.
>>>>>
>>>>> Fix it by replacing page_mapcount() with total_mapcount(). With this,
>>>>> the transparent huge page can be isolated and migrated, and the memory
>>>>> block can be put into the offline state.
>>>>>
>>>>> Fixes: 3917c80280c9 ("thp: change CoW semantics for anon-THP")
>>>>> Cc: stable@vger.kernel.org   # v5.8+
>>>>> Reported-by: Zhenyu Zhang <zhenyzha@redhat.com>
>>>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>>>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>>>> Interesting, good catch, looked right to me: except for the Fixes
>>>> line
>>>> and mention of v5.8.  That CoW change may have added a case which easily
>>>> demonstrates the problem, but it would have been the wrong test on a THP
>>>> for long before then - but only in v5.7 were compound pages allowed
>>>> through at all to reach that test, so I think it should be
>>>> Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for
>>>> CMA allocations")
>>>> Cc: stable@vger.kernel.org   # v5.7+
>>>> Oh, no, stop: this is not so easy, even in the latest tree.
>>>> Because at the time of that "admittedly racy check", we have no hold
>>>> at all on the page in question: and if it's PageLRU or PageCompound
>>>> at one instant, it may be different the next instant.  Which leaves it
>>>> vulnerable to whatever BUG_ON()s there may be in the total_mapcount()
>>>> path - needs research.  *Perhaps* there are no more BUG_ON()s in the
>>>> total_mapcount() path than in the existing page_mapcount() path.
>>>> I suspect that for this to be safe (before your patch and more so
>>>> after),
>>>> it will be necessary to shift the "admittedly racy check" down after the
>>>> get_page_unless_zero() (and check the sequence of operations when a
>>>> compound page is initialized).
>>>
>>> Grabbing a reference first sounds like the right approach to me.
>>
>> I think you're right. Without a page reference I don't think it is even
>> safe to look at struct page, at least not without synchronisation
>> against memory hot unplug which could remove the struct page. From a
>> quick glance I didn't see anything here that obviously did that though.
> 
> Memory hotplug is the offending party here.  It has to make sure that
> everything else is definitely quiescent before removing the struct pages.
> Otherwise you can't even try_get a refcount.

At least alloc_contig_range() and memory offlining are mutually 
exclusive due to MIGRATE_ISOLATE. I recall that ordinary memory 
compaction similarly deals with isolated pageblocks (or some other 
mechanism I forgot) so that it doesn't race with memory offlining. 
Wouldn't worry about that for now.
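
To make the idea concrete, here is a rough sketch (paraphrasing the flow
discussed above, not the exact isolate_migratepages_block() code) of
taking the reference before the "admittedly racy check" and comparing
against total_mapcount(). The helper name sketch_try_isolate() is made
up for illustration; page_mapping(), page_count(), total_mapcount(),
get_page_unless_zero() and put_page() are existing kernel helpers.

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Rough sketch only: paraphrases the approach under discussion, not the
 * actual compaction code.
 */
static bool sketch_try_isolate(struct page *page)
{
	struct address_space *mapping;

	/*
	 * Take a reference first so the page cannot be freed (or
	 * reinitialized as part of a different compound page) while we
	 * inspect it.  The check below stays racy, but the struct page
	 * is at least stable enough to look at.
	 */
	if (unlikely(!get_page_unless_zero(page)))
		return false;

	/*
	 * Migration will fail if an anonymous page is pinned.  For a
	 * PTE-mapped THP after a CoW fault, page_mapcount() only counts
	 * mappings of this subpage, so comparing it with the
	 * compound-wide refcount falsely flags the page as pinned;
	 * total_mapcount() counts all mappings of the compound page.
	 * The "- 1" discounts the reference taken above.
	 */
	mapping = page_mapping(page);
	if (!mapping && (page_count(page) - 1) > total_mapcount(page)) {
		put_page(page);
		return false;	/* looks pinned, skip isolation */
	}

	/* ... PageLRU checks and the actual isolation would follow ... */
	return true;
}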

-- 
Thanks,

David / dhildenb




Thread overview: 16+ messages
2022-11-23  0:57 Gavin Shan
2022-11-23  4:26 ` Alistair Popple
2022-11-23  5:06   ` Gavin Shan
2022-11-23  5:14 ` Hugh Dickins
2022-11-23  8:56   ` David Hildenbrand
2022-11-23 16:07     ` Matthew Wilcox
2022-11-24  8:50       ` David Hildenbrand
2022-11-24  0:14     ` Gavin Shan
2022-11-24  8:46       ` David Hildenbrand
2022-11-24  9:44         ` Gavin Shan
2022-11-24  1:06     ` Alistair Popple
2022-11-24  3:33       ` Matthew Wilcox
2022-11-24  8:49         ` David Hildenbrand [this message]
2022-11-25  0:58           ` Alistair Popple
2022-11-25  8:54             ` David Hildenbrand
2022-12-01 22:35               ` Alistair Popple
