From: Jason Gunthorpe <jgg@ziepe.ca>
To: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com,
david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com,
sashal@kernel.org, tyhicks@linux.microsoft.com,
iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com,
rostedt@goodmis.org, mingo@redhat.com, peterz@infradead.org,
mgorman@suse.de, willy@infradead.org, rientjes@google.com,
jhubbard@nvidia.com, linux-doc@vger.kernel.org,
ira.weiny@intel.com, linux-kselftest@vger.kernel.org
Subject: Re: [PATCH v6 08/14] mm/gup: do not migrate zero page
Date: Wed, 20 Jan 2021 09:14:00 -0400 [thread overview]
Message-ID: <20210120131400.GF4605@ziepe.ca> (raw)
In-Reply-To: <20210120014333.222547-9-pasha.tatashin@soleen.com>
On Tue, Jan 19, 2021 at 08:43:27PM -0500, Pavel Tatashin wrote:
> On some platforms, ZERO_PAGE(0) might end up in a movable zone. Do not
> migrate the zero page in gup during longterm pinning, as migration of the
> zero page is not allowed.
>
> For example, in x86 QEMU with 16G of memory and kernelcore=5G parameter, I
> see the following:
>
> Boot#1: zero_pfn 0x48a8d zero_pfn zone: ZONE_DMA32
> Boot#2: zero_pfn 0x20168d zero_pfn zone: ZONE_MOVABLE
>
> On x86, empty_zero_page is declared in .bss and depending on the loader
> may end up in different physical locations during boots.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> ---
>  include/linux/mmzone.h | 4 ++++
>  mm/gup.c               | 2 ++
>  2 files changed, 6 insertions(+)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index fc99e9241846..f67427a8f22b 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -427,6 +427,10 @@ enum zone_type {
> * techniques might use alloc_contig_range() to hide previously
> * exposed pages from the buddy again (e.g., to implement some sort
> * of memory unplug in virtio-mem).
> + * 6. ZERO_PAGE(0): kernelcore/movablecore setups might create
> + * situations where ZERO_PAGE(0), which is allocated differently
> + * on different platforms, may end up in a movable zone. ZERO_PAGE(0)
> + * cannot be migrated.
> *
> * In general, no unmovable allocations that degrade memory offlining
> * should end up in ZONE_MOVABLE. Allocators (like alloc_contig_range())
> diff --git a/mm/gup.c b/mm/gup.c
> index 857b273e32ac..fdd5cda30a07 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1580,6 +1580,8 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
> * of the CMA zone if possible.
> */
> if (is_migrate_cma_page(head)) {
> + if (is_zero_pfn(page_to_pfn(head)))
> + continue;
I think you should put this logic in is_pinnable_page()
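As a rough illustration of that suggestion, a single pinnability predicate could report the zero page as pinnable even when it sits in a movable zone, since it can never be migrated anyway. This is only a sketch: the `struct page` fields and zone bookkeeping are stand-ins invented here so the example is self-contained, not the kernel's real data structures, and the actual `is_pinnable_page()` helper would be written against the kernel's page flags.

```c
#include <stdbool.h>

/* Illustrative stand-ins for kernel state; in the kernel this
 * information comes from page flags and zone bookkeeping. */
struct page {
	unsigned long pfn;
	bool in_movable_zone;
	bool is_cma;
};

/* Hypothetical zero-page pfn for this sketch (the real value is
 * chosen at boot, as the commit message's two boots show). */
static const unsigned long zero_pfn = 0x20168d;

static bool is_zero_pfn(unsigned long pfn)
{
	return pfn == zero_pfn;
}

/* Sketch of the suggestion: one predicate that longterm-pin paths
 * consult.  The zero page is treated as pinnable regardless of its
 * zone, so gup never tries to migrate it; other pages in movable or
 * CMA regions are not pinnable and must be migrated first. */
static bool is_pinnable_page(const struct page *page)
{
	if (is_zero_pfn(page->pfn))
		return true;
	return !(page->in_movable_zone || page->is_cma);
}
```

With a predicate like this, call sites such as check_and_migrate_cma_pages() would not need their own zero-page special case; they would simply skip pages for which the predicate returns true.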
Jason
Thread overview: 25+ messages
2021-01-20 1:43 [PATCH v6 00/14] prohibit pinning pages in ZONE_MOVABLE Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 01/14] mm/gup: don't pin migrated cma pages in movable zone Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 02/14] mm/gup: check every subpage of a compound page during isolation Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 03/14] mm/gup: return an error on migration failure Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 04/14] mm/gup: check for isolation errors Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 05/14] mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 06/14] mm: apply per-task gfp constraints in fast path Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 07/14] mm: honor PF_MEMALLOC_PIN for all movable pages Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 08/14] mm/gup: do not migrate zero page Pavel Tatashin
2021-01-20 13:14 ` Jason Gunthorpe [this message]
2021-01-20 14:26 ` Pavel Tatashin
2021-01-25 14:28 ` Jason Gunthorpe
2021-01-25 15:38 ` Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 09/14] mm/gup: migrate pinned pages out of movable zone Pavel Tatashin
2021-01-20 17:50 ` kernel test robot
2021-01-20 21:31 ` Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 10/14] memory-hotplug.rst: add a note about ZONE_MOVABLE and page pinning Pavel Tatashin
2021-01-20 13:22 ` Jason Gunthorpe
2021-01-20 14:28 ` Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 11/14] mm/gup: change index type to long as it counts pages Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 12/14] mm/gup: longterm pin migration cleaup Pavel Tatashin
2021-01-20 13:19 ` Jason Gunthorpe
2021-01-20 14:17 ` Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 13/14] selftests/vm: test flag is broken Pavel Tatashin
2021-01-20 1:43 ` [PATCH v6 14/14] selftests/vm: test faulting in kernel, and verify pinnable pages Pavel Tatashin