From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@lge.com,
    Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz,
    Naoya Horiguchi, Michal Hocko, Joonsoo Kim
Subject: [PATCH v3 6/8] mm/gup: use a standard migration target allocation callback
Date: Tue, 23 Jun 2020 15:13:46 +0900
Message-Id: <1592892828-1934-7-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com>

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There is a well-defined migration target allocation callback. It is
mostly similar to new_non_cma_page() except for the handling of CMA
pages. This patch adds CMA handling to the standard migration target
allocation callback and uses it in gup.c.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/gup.c      | 57 ++++++++------------------------------------------
 mm/internal.h |  1 +
 mm/migrate.c  |  4 +++-
 3 files changed, 12 insertions(+), 50 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 15be281..f6124e3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1608,56 +1608,15 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
 }
 
 #ifdef CONFIG_CMA
-static struct page *new_non_cma_page(struct page *page, unsigned long private)
+static struct page *alloc_migration_target_non_cma(struct page *page, unsigned long private)
 {
-	/*
-	 * We want to make sure we allocate the new page from the same node
-	 * as the source page.
-	 */
-	int nid = page_to_nid(page);
-	/*
-	 * Trying to allocate a page for migration. Ignore allocation
-	 * failure warnings. We don't force __GFP_THISNODE here because
-	 * this node here is the node where we have CMA reservation and
-	 * in some case these nodes will have really less non movable
-	 * allocation memory.
-	 */
-	gfp_t gfp_mask = GFP_USER | __GFP_NOWARN;
-
-	if (PageHighMem(page))
-		gfp_mask |= __GFP_HIGHMEM;
-
-#ifdef CONFIG_HUGETLB_PAGE
-	if (PageHuge(page)) {
-		struct hstate *h = page_hstate(page);
-
-		/*
-		 * We don't want to dequeue from the pool because pool pages will
-		 * mostly be from the CMA region.
-		 */
-		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask, true);
-	}
-#endif
-	if (PageTransHuge(page)) {
-		struct page *thp;
-		/*
-		 * ignore allocation failure warnings
-		 */
-		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
-
-		/*
-		 * Remove the movable mask so that we don't allocate from
-		 * CMA area again.
-		 */
-		thp_gfpmask &= ~__GFP_MOVABLE;
-		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
-		if (!thp)
-			return NULL;
-		prep_transhuge_page(thp);
-		return thp;
-	}
+	struct migration_target_control mtc = {
+		.nid = page_to_nid(page),
+		.gfp_mask = GFP_USER | __GFP_NOWARN,
+		.skip_cma = true,
+	};
 
-	return __alloc_pages_node(nid, gfp_mask, 0);
+	return alloc_migration_target(page, (unsigned long)&mtc);
 }
 
 static long check_and_migrate_cma_pages(struct task_struct *tsk,
@@ -1719,7 +1678,7 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
 		for (i = 0; i < nr_pages; i++)
 			put_page(pages[i]);
 
-		if (migrate_pages(&cma_page_list, new_non_cma_page,
+		if (migrate_pages(&cma_page_list, alloc_migration_target_non_cma,
 				  NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
diff --git a/mm/internal.h b/mm/internal.h
index f725aa8..fb7f7fe 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -619,6 +619,7 @@ struct migration_target_control {
 	int nid;		/* preferred node id */
 	nodemask_t *nmask;
 	gfp_t gfp_mask;
+	bool skip_cma;
 };
 
 #endif	/* __MM_INTERNAL_H */
diff --git a/mm/migrate.c b/mm/migrate.c
index 3afff59..7c4cd74 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1550,7 +1550,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 	if (PageHuge(page)) {
 		return alloc_huge_page_nodemask(
 				page_hstate(compound_head(page)), mtc->nid,
-				mtc->nmask, gfp_mask, false);
+				mtc->nmask, gfp_mask, mtc->skip_cma);
 	}
 
 	if (PageTransHuge(page)) {
@@ -1561,6 +1561,8 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 	zidx = zone_idx(page_zone(page));
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
 		gfp_mask |= __GFP_HIGHMEM;
+	if (mtc->skip_cma)
+		gfp_mask &= ~__GFP_MOVABLE;
 
 	new_page = __alloc_pages_nodemask(gfp_mask, order, mtc->nid,
 				mtc->nmask);
-- 
2.7.4