Date: Wed, 2 Dec 2020 12:35:07 -0400
From: Jason Gunthorpe
To: Pavel Tatashin
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	akpm@linux-foundation.org, vbabka@suse.cz, mhocko@suse.com,
	david@redhat.com, osalvador@suse.de, dan.j.williams@intel.com,
	sashal@kernel.org, tyhicks@linux.microsoft.com,
	iamjoonsoo.kim@lge.com, mike.kravetz@oracle.com,
	rostedt@goodmis.org, mingo@redhat.com, peterz@infradead.org,
	mgorman@suse.de, willy@infradead.org, rientjes@google.com,
	jhubbard@nvidia.com
Subject: Re: [PATCH 6/6] mm/gup: migrate pinned pages out of movable zone
Message-ID: <20201202163507.GL5487@ziepe.ca>
References: <20201202052330.474592-1-pasha.tatashin@soleen.com>
 <20201202052330.474592-7-pasha.tatashin@soleen.com>
In-Reply-To: <20201202052330.474592-7-pasha.tatashin@soleen.com>

On Wed, Dec 02, 2020 at 12:23:30AM -0500, Pavel Tatashin wrote:

>  /*
>   * First define the enums in the above macros to be exported to userspace
> diff --git a/mm/gup.c b/mm/gup.c
> index 724d8a65e1df..1d511f65f8a7 100644
> +++ b/mm/gup.c
> @@ -1593,19 +1593,18 @@ static bool check_dax_vmas(struct vm_area_struct **vmas, long nr_pages)
>  }
>  #endif
>
> -#ifdef CONFIG_CMA
> -static long check_and_migrate_cma_pages(struct mm_struct *mm,
> -					unsigned long start,
> -					unsigned long nr_pages,
> -					struct page **pages,
> -					struct vm_area_struct **vmas,
> -					unsigned int gup_flags)
> +static long check_and_migrate_movable_pages(struct mm_struct *mm,
> +					    unsigned long start,
> +					    unsigned long nr_pages,
> +					    struct page **pages,
> +					    struct vm_area_struct **vmas,
> +					    unsigned int gup_flags)
>  {
>  	unsigned long i;
>  	unsigned long step;
>  	bool drain_allow = true;
>  	bool migrate_allow = true;
> -	LIST_HEAD(cma_page_list);
> +	LIST_HEAD(page_list);
>  	long ret = nr_pages;
>  	struct migration_target_control mtc = {
>  		.nid = NUMA_NO_NODE,
> @@ -1623,13 +1622,12 @@ static long check_and_migrate_cma_pages(struct mm_struct *mm,
>  		 */
>  		step = compound_nr(head) - (pages[i] - head);
>  		/*
> -		 * If we get a page from the CMA zone, since we are going to
> -		 * be pinning these entries, we might as well move them out
> -		 * of the CMA zone if possible.
> +		 * If we get a movable page, since we are going to be pinning
> +		 * these entries, try to move them out if possible.
>  		 */
> -		if (is_migrate_cma_page(head)) {
> +		if (is_migrate_movable(get_pageblock_migratetype(head))) {
>  			if (PageHuge(head))

It is a good moment to say: I really dislike how this was implemented
in the first place. Scanning the output of gup just to do the
is_migrate_movable() test is kind of nonsense and slow. It would be
better/faster to handle this directly while gup is scanning the page
tables and adding pages to the list.

Now that this is becoming more general, can you take a moment to see
if a better implementation is possible?

Also, does something take care of the gup fast path too?

Jason
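[Editor's note: to make the restructuring suggested above concrete, here is a rough, purely illustrative sketch in pseudocode. It is not against any real tree; the hook point inside the gup walk and the isolate_and_queue_for_migration() helper are invented for the example. The idea is to run the is_migrate_movable() test at the moment gup collects each page, so the separate post-scan pass over the output array disappears.]

```
/*
 * Illustrative pseudocode only: check movability at the point where
 * gup adds a page to the output array, instead of re-scanning the
 * whole array afterwards in check_and_migrate_movable_pages().
 */
while (gup walks the page tables) {
	head = compound_head(page);

	if ((gup_flags & FOLL_LONGTERM) &&
	    is_migrate_movable(get_pageblock_migratetype(head))) {
		/* hypothetical helper: isolate now, migrate in one batch */
		isolate_and_queue_for_migration(head, &page_list);
		retry_needed = true;
	} else {
		pages[i++] = page;
	}
}

/* migrate page_list once, then re-fault the ranges that were queued */
```

Under this shape the migration set is built as a side effect of the walk the code already performs, rather than by a second O(nr_pages) scan; the fast path (get_user_pages_fast) would need an equivalent check or a fallback to the slow path.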