From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavel Tatashin <pasha.tatashin@soleen.com>
To: pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, akpm@linux-foundation.org, vbabka@suse.cz,
	mhocko@suse.com, david@redhat.com, osalvador@suse.de,
	dan.j.williams@intel.com, sashal@kernel.org,
	tyhicks@linux.microsoft.com, iamjoonsoo.kim@lge.com,
	mike.kravetz@oracle.com, rostedt@goodmis.org, mingo@redhat.com,
	jgg@ziepe.ca, peterz@infradead.org, mgorman@suse.de,
	willy@infradead.org, rientjes@google.com, jhubbard@nvidia.com,
	linux-doc@vger.kernel.org
Subject: [PATCH v2 7/8] mm/gup: migrate pinned pages out of movable zone
Date: Wed, 9 Dec 2020 19:43:34 -0500
Message-Id: <20201210004335.64634-8-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201210004335.64634-1-pasha.tatashin@soleen.com>
References: <20201210004335.64634-1-pasha.tatashin@soleen.com>
MIME-Version: 1.0

We should not pin pages in ZONE_MOVABLE. Currently, CMA pages are the
only movable pages we avoid pinning. Generalize the function that
migrates CMA pages so that it migrates all movable pages, and use
is_pinnable_page() to decide which pages need to be migrated.

Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/migrate.h        |  1 +
 include/linux/mmzone.h         | 11 ++++--
 include/trace/events/migrate.h |  3 +-
 mm/gup.c                       | 65 ++++++++++++++--------------------
 4 files changed, 37 insertions(+), 43 deletions(-)
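Note: the new checks rely on is_pinnable_page(), which is introduced
earlier in this series. For readers of this patch in isolation, here is
a rough sketch of what that helper is assumed to check -- an
illustration only, not necessarily the exact definition from the
earlier patch: a page is pinnable only if it sits outside ZONE_MOVABLE
and off MIGRATE_CMA pageblocks, since both kinds of pages must remain
migratable.

	#include <linux/mm.h>
	#include <linux/mmzone.h>

	/*
	 * Illustrative sketch only; the authoritative definition is
	 * added by an earlier patch in this series.
	 */
	static inline bool is_pinnable_page(struct page *page)
	{
		/* ZONE_MOVABLE pages must stay migratable for hot-unplug */
		if (zone_idx(page_zone(page)) == ZONE_MOVABLE)
			return false;
		/* CMA pageblocks must stay migratable for CMA allocations */
		if (is_migrate_cma_page(page))
			return false;
		return true;
	}

With such a predicate, check_and_migrate_movable_pages() below subsumes
the old CMA-only logic: any page that fails is_pinnable_page() is
isolated and migrated out before the long-term pin is taken.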
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 0f8d1583fa8e..00bab23d1ee5 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_LONGTERM_PIN,
 	MR_TYPES
 };
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fb3bf696c05e..87a7321b4252 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -405,9 +405,14 @@ enum zone_type {
 	 * likely to succeed, and to locally limit unmovable allocations - e.g.,
 	 * to increase the number of THP/huge pages. Notable special cases are:
 	 *
-	 * 1. Pinned pages: (long-term) pinning of movable pages might
-	 *    essentially turn such pages unmovable. Memory offlining might
-	 *    retry a long time.
+	 * 1. Pinned pages: (long-term) pinning of movable pages is avoided
+	 *    when pages are pinned and faulted, but it is still possible that
+	 *    the address space already has pages in ZONE_MOVABLE at the time
+	 *    the pages are pinned (i.e. the user has touched that memory
+	 *    before pinning). In such a case, we try to migrate them to a
+	 *    different zone, but if migration fails the pages can still end
+	 *    up pinned in ZONE_MOVABLE. Memory offlining might then retry a
+	 *    long time and will only succeed once the user unpins the pages.
 	 * 2. memblock allocations: kernelcore/movablecore setups might create
 	 *    situations where ZONE_MOVABLE contains unmovable allocations
 	 *    after boot. Memory offlining and allocations fail early.
diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
index 4d434398d64d..363b54ce104c 100644
--- a/include/trace/events/migrate.h
+++ b/include/trace/events/migrate.h
@@ -20,7 +20,8 @@
 	EM( MR_SYSCALL,		"syscall_or_cpuset")		\
 	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")		\
 	EM( MR_NUMA_MISPLACED,	"numa_misplaced")		\
-	EMe(MR_CONTIG_RANGE,	"contig_range")
+	EM( MR_CONTIG_RANGE,	"contig_range")			\
+	EMe(MR_LONGTERM_PIN,	"longterm_pin")
 
 /*
  * First define the enums in the above macros to be exported to userspace
diff --git a/mm/gup.c b/mm/gup.c
index 0eb8a85fb704..e575237d4c67 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -88,11 +88,12 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 	int orig_refs = refs;
 
 	/*
-	 * Can't do FOLL_LONGTERM + FOLL_PIN with CMA in the gup fast
-	 * path, so fail and let the caller fall back to the slow path.
+	 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if the page is
+	 * not in the right zone, so fail and let the caller fall back to
+	 * the slow path.
 	 */
-	if (unlikely(flags & FOLL_LONGTERM) &&
-	    is_migrate_cma_page(page))
+	if (unlikely((flags & FOLL_LONGTERM) &&
+		     !is_pinnable_page(page)))
 		return NULL;
 
 	/*
@@ -1593,19 +1594,18 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }
 #endif
 
-#ifdef CONFIG_CMA
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
+static long check_and_migrate_movable_pages(struct mm_struct *mm,
+					    unsigned long start,
+					    unsigned long nr_pages,
+					    struct page **pages,
+					    struct vm_area_struct **vmas,
+					    unsigned int gup_flags)
 {
 	unsigned long i;
 	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
-	LIST_HEAD(cma_page_list);
+	LIST_HEAD(movable_page_list);
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
 		.nid = NUMA_NO_NODE,
@@ -1623,13 +1623,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		 */
 		step = compound_nr(head) - (pages[i] - head);
 		/*
-		 * If we get a page from the CMA zone, since we are going to
-		 * be pinning these entries, we might as well move them out
-		 * of the CMA zone if possible.
+		 * If we get a movable page, since we are going to be pinning
+		 * these entries, try to move them out if possible.
 		 */
-		if (is_migrate_cma_page(head)) {
+		if (!is_pinnable_page(head)) {
 			if (PageHuge(head))
-				isolate_huge_page(head, &cma_page_list);
+				isolate_huge_page(head, &movable_page_list);
 			else {
 				if (!PageLRU(head) && drain_allow) {
 					lru_add_drain_all();
@@ -1637,7 +1636,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 				}
 
 				if (!isolate_lru_page(head)) {
-					list_add_tail(&head->lru, &cma_page_list);
+					list_add_tail(&head->lru, &movable_page_list);
 					mod_node_page_state(page_pgdat(head),
 							    NR_ISOLATED_ANON +
 							    page_is_file_lru(head),
@@ -1649,7 +1648,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		i += step;
 	}
 
-	if (!list_empty(&cma_page_list)) {
+	if (!list_empty(&movable_page_list)) {
 		/*
 		 * drop the above get_user_pages reference.
 		 */
@@ -1659,7 +1658,7 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
+		if (migrate_pages(&movable_page_list, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
@@ -1667,17 +1666,16 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 			 */
 			migrate_allow = false;
 
-			if (!list_empty(&cma_page_list))
-				putback_movable_pages(&cma_page_list);
+			if (!list_empty(&movable_page_list))
+				putback_movable_pages(&movable_page_list);
 		}
 		/*
 		 * We did migrate all the pages, Try to get the page references
-		 * again migrating any new CMA pages which we failed to isolate
-		 * earlier.
+		 * again migrating any pages which we failed to isolate earlier.
 		 */
 		ret = __get_user_pages_locked(mm, start, nr_pages,
-					      pages, vmas, NULL,
-					      gup_flags);
+						   pages, vmas, NULL,
+						   gup_flags);
 
 		if ((ret > 0) && migrate_allow) {
 			nr_pages = ret;
@@ -1688,17 +1686,6 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
 
 	return ret;
 }
-#else
-static long check_and_migrate_cma_pages(struct mm_struct *mm,
-					unsigned long start,
-					unsigned long nr_pages,
-					struct page **pages,
-					struct vm_area_struct **vmas,
-					unsigned int gup_flags)
-{
-	return nr_pages;
-}
-#endif /* CONFIG_CMA */
 
 /*
  * __gup_longterm_locked() is a wrapper for __get_user_pages_locked which
@@ -1746,8 +1733,8 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		goto out;
 	}
 
-	rc = check_and_migrate_cma_pages(mm, start, rc, pages,
-					 vmas_tmp, gup_flags);
+	rc = check_and_migrate_movable_pages(mm, start, rc, pages,
+					     vmas_tmp, gup_flags);
 out:
 	memalloc_pin_restore(flags);
 }
-- 
2.25.1