Date: Mon, 7 Dec 2020 16:13:36 +0900
From: Joonsoo Kim
To: Pavel Tatashin
Cc: LKML, linux-mm, Andrew Morton, Vlastimil Babka, Michal Hocko,
	David Hildenbrand, Oscar Salvador, Dan Williams, Sasha Levin,
	Tyler Hicks, mike.kravetz@oracle.com, Steven Rostedt, Ingo Molnar,
	Jason Gunthorpe, Peter Zijlstra, Mel Gorman, Matthew Wilcox,
	David Rientjes, John Hubbard
Subject: Re: [PATCH 6/6] mm/gup: migrate pinned pages out of movable zone
Message-ID: <20201207071335.GB10731@js1304-desktop>
References: <20201202052330.474592-1-pasha.tatashin@soleen.com>
	<20201202052330.474592-7-pasha.tatashin@soleen.com>
	<20201204041358.GB17056@js1304-desktop>

On Fri, Dec 04, 2020 at 12:43:29PM -0500, Pavel Tatashin wrote:
> On Thu, Dec 3, 2020 at 11:14 PM Joonsoo Kim wrote:
> >
> > On Wed, Dec 02, 2020 at 12:23:30AM -0500, Pavel Tatashin wrote:
> > > We do not allocate pinned pages in ZONE_MOVABLE, but if pages were
> > > already allocated before pinning they need to be migrated to a
> > > different zone. Currently, we migrate movable CMA pages only.
> > > Generalize the function that migrates CMA pages to migrate all
> > > movable pages.
> > >
> > > Signed-off-by: Pavel Tatashin
> > > ---
> > >  include/linux/migrate.h        |  1 +
> > >  include/trace/events/migrate.h |  3 +-
> > >  mm/gup.c                       | 56 +++++++++++++---------------------
> > >  3 files changed, 24 insertions(+), 36 deletions(-)
> > >
> > > diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> > > index 0f8d1583fa8e..00bab23d1ee5 100644
> > > --- a/include/linux/migrate.h
> > > +++ b/include/linux/migrate.h
> > > @@ -27,6 +27,7 @@ enum migrate_reason {
> > >  	MR_MEMPOLICY_MBIND,
> > >  	MR_NUMA_MISPLACED,
> > >  	MR_CONTIG_RANGE,
> > > +	MR_LONGTERM_PIN,
> > >  	MR_TYPES
> > >  };
> > >
> > > diff --git a/include/trace/events/migrate.h b/include/trace/events/migrate.h
> > > index 4d434398d64d..363b54ce104c 100644
> > > --- a/include/trace/events/migrate.h
> > > +++ b/include/trace/events/migrate.h
> > > @@ -20,7 +20,8 @@
> > >  	EM( MR_SYSCALL,		"syscall_or_cpuset")	\
> > >  	EM( MR_MEMPOLICY_MBIND,	"mempolicy_mbind")	\
> > >  	EM( MR_NUMA_MISPLACED,	"numa_misplaced")	\
> > > -	EMe(MR_CONTIG_RANGE,	"contig_range")
> > > +	EM( MR_CONTIG_RANGE,	"contig_range")		\
> > > +	EMe(MR_LONGTERM_PIN,	"longterm_pin")
> > >
> > >  /*
> > >   * First define the enums in the above macros to be exported to userspace
> > > diff --git a/mm/gup.c b/mm/gup.c
> > > index 724d8a65e1df..1d511f65f8a7 100644
> > > --- a/mm/gup.c
> > > +++ b/mm/gup.c
> > > @@ -1593,19 +1593,18 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
> > >  }
> > >  #endif
> > >
> > > -#ifdef CONFIG_CMA
> > > -static long check_and_migrate_cma_pages(struct mm_struct *mm,
> > > -					unsigned long start,
> > > -					unsigned long nr_pages,
> > > -					struct page **pages,
> > > -					struct vm_area_struct **vmas,
> > > -					unsigned int gup_flags)
> > > +static long check_and_migrate_movable_pages(struct mm_struct *mm,
> > > +					    unsigned long start,
> > > +					    unsigned long nr_pages,
> > > +					    struct page **pages,
> > > +					    struct vm_area_struct **vmas,
> > > +					    unsigned int gup_flags)
> > >  {
> > >  	unsigned long i;
> > >  	unsigned long step;
> > >  	bool drain_allow = true;
> > >  	bool migrate_allow = true;
> > > -	LIST_HEAD(cma_page_list);
> > > +	LIST_HEAD(page_list);
> > >  	long ret = nr_pages;
> > >  	struct migration_target_control mtc = {
> > >  		.nid = NUMA_NO_NODE,
> > > @@ -1623,13 +1622,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
> > >  		 */
> > >  		step = compound_nr(head) - (pages[i] - head);
> > >  		/*
> > > -		 * If we get a page from the CMA zone, since we are going to
> > > -		 * be pinning these entries, we might as well move them out
> > > -		 * of the CMA zone if possible.
> > > +		 * If we get a movable page, since we are going to be pinning
> > > +		 * these entries, try to move them out if possible.
> > >  		 */
> > > -		if (is_migrate_cma_page(head)) {
> > > +		if (is_migrate_movable(get_pageblock_migratetype(head))) {
> >
> > is_migrate_movable() isn't a check for the ZONE. It's a check for the
> > MIGRATE_TYPE. MIGRATE_TYPE doesn't require a hard guarantee of
> > migration, and most of memory, including ZONE_NORMAL, is
> > MIGRATE_MOVABLE. With this code, long term gup would always fail due
> > to not enough memory. I think the correct check would be
> > "is_migrate_cma_page(head) && zone == ZONE_MOVABLE".
>
> Good point. The above should be OR not AND.
>
> zone_idx(page_zone(head)) == ZONE_MOVABLE || is_migrate_cma_page(head)

Yep! Thanks.