From: Khalid Aziz <khalid.aziz@oracle.com>
To: David Hildenbrand <david@redhat.com>, akpm@linux-foundation.org
Cc: willy@infradead.org, steven.sistare@oracle.com,
ying.huang@intel.com, mgorman@techsingularity.net,
khalid@kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm, compaction: Skip all non-migratable pages during scan
Date: Wed, 17 May 2023 16:33:51 -0600
Message-ID: <9704cc32-3bb6-bcc8-896c-688d2112102c@oracle.com>
In-Reply-To: <c34e3768-8a01-d155-1970-8eada8c80ba7@redhat.com>
On 5/17/23 12:32, David Hildenbrand wrote:
> On 17.05.23 18:15, Khalid Aziz wrote:
>> Pages pinned in memory through extra refcounts cannot be migrated.
>> Currently, as isolate_migratepages_block() scans pages for
>> compaction, it skips only pinned anonymous pages. All non-migratable
>> pages should be skipped, not just pinned anonymous pages. This patch
>> adds a check for extra refcounts on a page to determine whether the
>> page can be migrated. This was seen as a real issue on a customer
>> workload where a large number of pages were pinned by vfio on the
>> host, and any attempt to allocate hugepages resulted in a significant
>> amount of CPU time spent in either direct compaction or in kcompactd,
>> repeatedly scanning vfio-pinned pages that cannot be migrated.
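
[ For readers following the thread: the check described above is roughly of
  the following shape. This is only a sketch of the idea with a hypothetical
  helper name, not the literal patch, and like the existing anonymous-page
  check it is inherently racy and only meant as a heuristic to skip wasted
  work. ]

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/page-flags.h>

/*
 * Hypothetical sketch (not the actual patch): estimate how many
 * references a page is expected to hold from its mappings and caches,
 * and treat anything beyond that as an extra pin (e.g. a long-term
 * vfio pin) that makes migration pointless to attempt.
 */
static bool page_has_extra_refs(struct page *page,
                                struct address_space *mapping)
{
        unsigned long expected = 0;

        /* Each page table mapping of the page holds one reference. */
        expected += page_mapcount(page);

        /* A page cache or swap cache entry holds one more reference. */
        if (mapping || PageSwapCache(page))
                expected++;

        /* Filesystem private data (e.g. buffer heads) also pins the page. */
        if (page_has_private(page))
                expected++;

        /*
         * Racy by design: the refcount can change right after we look.
         * The scan only uses this to decide whether isolating the page
         * is worth the effort, so a stale answer is tolerable.
         */
        return page_count(page) > expected;
}
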
>
> How will this change affect alloc_contig_range(), as used for CMA allocations or virtio-mem? alloc_contig_range()
> ends up calling isolate_migratepages_range() -> isolate_migratepages_block().
>
> We don't want to fail early in case there is a short-term pin that might go away any moment after we have isolated
> the page; that will make the situation worse for these use cases, especially if MIGRATE_CMA or ZONE_MOVABLE is
> involved.
>
You are right that transitory conditions can be problematic, but wouldn't the same apply to the pinned anonymous pages
we already skip today? I think a retry is the right way to handle transitory conditions. At the same time, by not
repeatedly scanning long-term pinned non-anonymous pages, alloc_contig_range() would be helped as well, right?

Nevertheless, we certainly do not want a change that makes overall system behavior worse. Do you see system behavior
getting worse, or would the retry in cma_alloc() be sufficient to deal with transitory pins?
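
For context, the retry I am referring to: cma_alloc() already treats -EBUSY
from alloc_contig_range() as a transient failure and tries again with a
different free range from its bitmap. The snippet below is a hypothetical,
simplified helper to show the shape of that -EBUSY handling; it is not the
code in mm/cma.c, and the function name, bounded retry count and reuse of the
same pfn range are all illustrative assumptions.

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical illustration only: retry a contiguous allocation a few
 * times when it fails with -EBUSY, the error a short-term pin hit
 * during isolation typically produces.  The real cma_alloc() moves on
 * to the next free range in its bitmap instead of re-trying the same
 * pfns.
 */
static struct page *alloc_range_with_retry(unsigned long pfn,
                                           unsigned long count,
                                           int max_tries)
{
        int ret = -EBUSY;

        while (max_tries-- > 0 && ret == -EBUSY)
                ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
                                         GFP_KERNEL | __GFP_NOWARN);

        return ret ? NULL : pfn_to_page(pfn);
}
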
Thanks,
Khalid
Thread overview: 13+ messages
2023-05-17 16:15 Khalid Aziz
2023-05-17 18:32 ` David Hildenbrand
2023-05-17 22:33 ` Khalid Aziz [this message]
2023-05-18 1:09 ` Huang, Ying
2023-05-19 9:51 ` David Hildenbrand
2023-05-22 5:55 ` Huang, Ying
2023-05-22 15:12 ` Khalid Aziz
2023-05-23 1:23 ` Huang, Ying
2023-05-18 1:21 ` Huang, Ying
2023-05-18 15:07 ` Khalid Aziz
2023-05-19 0:19 ` Huang, Ying
2023-05-23 3:42 ` Baolin Wang
2023-05-23 20:54 ` Khalid Aziz