From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 15 Jul 2020 10:36:05 +0200
From: Michal Hocko
To: js1304@gmail.com
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@lge.com, Vlastimil Babka, Christoph Hellwig,
	Roman Gushchin, Mike Kravetz, Naoya Horiguchi,
	"Aneesh Kumar K . V", Joonsoo Kim
Subject: Re: [PATCH 4/4] mm/gup: use a standard migration target allocation callback
Message-ID: <20200715083605.GF5451@dhcp22.suse.cz>
References: <1594789529-6206-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1594789529-6206-4-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1594789529-6206-4-git-send-email-iamjoonsoo.kim@lge.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Sender: owner-linux-mm@kvack.org
Precedence: bulk
List-ID: <linux-mm.kvack.org>

On Wed 15-07-20 14:05:29, Joonsoo Kim wrote:
> From: Joonsoo Kim
>
> There is a well-defined migration target allocation callback. Use it.
>
> Acked-by: Vlastimil Babka
> Signed-off-by: Joonsoo Kim

Acked-by: Michal Hocko

> ---
>  mm/gup.c | 54 ++++++------------------------------------------------
>  1 file changed, 6 insertions(+), 48 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 4ba822a..628ca4c 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1608,52 +1608,6 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
>  }
>
>  #ifdef CONFIG_CMA
> -static struct page *new_non_cma_page(struct page *page, unsigned long private)
> -{
> -	/*
> -	 * We want to make sure we allocate the new page from the same node
> -	 * as the source page.
> -	 */
> -	int nid = page_to_nid(page);
> -	/*
> -	 * Trying to allocate a page for migration. Ignore allocation
> -	 * failure warnings. We don't force __GFP_THISNODE here because
> -	 * this node here is the node where we have CMA reservation and
> -	 * in some case these nodes will have really less non CMA
> -	 * allocation memory.
> -	 *
> -	 * Note that CMA region is prohibited by allocation scope.
> -	 */
> -	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN;
> -
> -	if (PageHighMem(page))
> -		gfp_mask |= __GFP_HIGHMEM;
> -
> -#ifdef CONFIG_HUGETLB_PAGE
> -	if (PageHuge(page)) {
> -		struct hstate *h = page_hstate(page);
> -
> -		gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
> -		return alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);
> -	}
> -#endif
> -	if (PageTransHuge(page)) {
> -		struct page *thp;
> -		/*
> -		 * ignore allocation failure warnings
> -		 */
> -		gfp_t thp_gfpmask = GFP_TRANSHUGE | __GFP_NOWARN;
> -
> -		thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
> -		if (!thp)
> -			return NULL;
> -		prep_transhuge_page(thp);
> -		return thp;
> -	}
> -
> -	return __alloc_pages_node(nid, gfp_mask, 0);
> -}
> -
>  static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  					struct mm_struct *mm,
>  					unsigned long start,
> @@ -1668,6 +1622,10 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  	bool migrate_allow = true;
>  	LIST_HEAD(cma_page_list);
>  	long ret = nr_pages;
> +	struct migration_target_control mtc = {
> +		.nid = NUMA_NO_NODE,
> +		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_NOWARN,
> +	};
>
>  check_again:
>  	for (i = 0; i < nr_pages;) {
> @@ -1713,8 +1671,8 @@ static long check_and_migrate_cma_pages(struct task_struct *tsk,
>  		for (i = 0; i < nr_pages; i++)
>  			put_page(pages[i]);
>
> -		if (migrate_pages(&cma_page_list, new_non_cma_page,
> -				  NULL, 0, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
> +		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
> +			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
>  			/*
>  			 * some of the pages failed migration. Do get_user_pages
>  			 * without migration.
> --
> 2.7.4
>

--
Michal Hocko
SUSE Labs