From: Dan Williams
Date: Wed, 18 Dec 2019 21:27:43 -0800
Subject: Re: [PATCH v11 04/25] mm: devmap: refactor 1-based refcounting for ZONE_DEVICE pages
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
	Björn Töpel, Christoph Hellwig, Daniel Vetter, Dave Chinner,
	David Airlie, "David S. Miller", Ira Weiny, Jan Kara,
	Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
	Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
	Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
	Vlastimil Babka, bpf@vger.kernel.org,
	Mailing list - DRI developers, KVM list,
	linux-block@vger.kernel.org, Linux Doc Mailing List,
	linux-fsdevel, linux-kselftest@vger.kernel.org,
	Linux-media@vger.kernel.org, linux-rdma, linuxppc-dev, Netdev,
	Linux MM, LKML, Christoph Hellwig
In-Reply-To: <20191216222537.491123-5-jhubbard@nvidia.com>
References: <20191216222537.491123-1-jhubbard@nvidia.com> <20191216222537.491123-5-jhubbard@nvidia.com>

On Mon, Dec 16, 2019 at 2:26 PM John Hubbard wrote:
>
> An upcoming patch changes and complicates the refcounting and
> especially the "put page" aspects of it. In order to keep
> everything clean, refactor the devmap page release routines:
>
> * Rename put_devmap_managed_page() to page_is_devmap_managed(),
>   and limit the functionality to "read only": return a bool,
>   with no side effects.
>
> * Add a new routine, put_devmap_managed_page(), to handle checking
>   what kind of page it is, and what kind of refcount handling it
>   requires.
>
> * Rename __put_devmap_managed_page() to free_devmap_managed_page(),
>   and limit the functionality to unconditionally freeing a devmap
>   page.
>
> This is originally based on a separate patch by Ira Weiny, which
> applied to an early version of the put_user_page() experiments.
> Since then, Jérôme Glisse suggested the refactoring described above.
>
> Cc: Christoph Hellwig
> Suggested-by: Jérôme Glisse
> Reviewed-by: Dan Williams
> Reviewed-by: Jan Kara
> Signed-off-by: Ira Weiny
> Signed-off-by: John Hubbard
> ---
>  include/linux/mm.h | 17 +++++++++++++----
>  mm/memremap.c      | 16 ++--------------
>  mm/swap.c          | 24 ++++++++++++++++++++++++
>  3 files changed, 39 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c97ea3b694e6..77a4df06c8a7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -952,9 +952,10 @@ static inline bool is_zone_device_page(const struct page *page)
>  #endif
>
>  #ifdef CONFIG_DEV_PAGEMAP_OPS
> -void __put_devmap_managed_page(struct page *page);
> +void free_devmap_managed_page(struct page *page);
>  DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
> -static inline bool put_devmap_managed_page(struct page *page)
> +
> +static inline bool page_is_devmap_managed(struct page *page)
>  {
>         if (!static_branch_unlikely(&devmap_managed_key))
>                 return false;
> @@ -963,7 +964,6 @@ static inline bool put_devmap_managed_page(struct page *page)
>         switch (page->pgmap->type) {
>         case MEMORY_DEVICE_PRIVATE:
>         case MEMORY_DEVICE_FS_DAX:
> -               __put_devmap_managed_page(page);
>                 return true;
>         default:
>                 break;
> @@ -971,7 +971,14 @@ static inline bool put_devmap_managed_page(struct page *page)
>         return false;
>  }
>
> +bool put_devmap_managed_page(struct page *page);
> +
>  #else /* CONFIG_DEV_PAGEMAP_OPS */
> +static inline bool page_is_devmap_managed(struct page *page)
> +{
> +       return false;
> +}
> +
>  static inline bool put_devmap_managed_page(struct page *page)
>  {
>         return false;
> @@ -1028,8 +1035,10 @@ static inline void put_page(struct page *page)
>          * need to inform the device driver through callback. See
>          * include/linux/memremap.h and HMM for details.
>          */
> -       if (put_devmap_managed_page(page))
> +       if (page_is_devmap_managed(page)) {
> +               put_devmap_managed_page(page);
>                 return;
> +       }
>
>         if (put_page_testzero(page))
>                 __put_page(page);
> diff --git a/mm/memremap.c b/mm/memremap.c
> index e899fa876a62..2ba773859031 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -411,20 +411,8 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
>  EXPORT_SYMBOL_GPL(get_dev_pagemap);
>
>  #ifdef CONFIG_DEV_PAGEMAP_OPS
> -void __put_devmap_managed_page(struct page *page)
> +void free_devmap_managed_page(struct page *page)
>  {
> -       int count = page_ref_dec_return(page);
> -
> -       /* still busy */
> -       if (count > 1)
> -               return;
> -
> -       /* only triggered by the dev_pagemap shutdown path */
> -       if (count == 0) {
> -               __put_page(page);
> -               return;
> -       }
> -
>         /* notify page idle for dax */
>         if (!is_device_private_page(page)) {
>                 wake_up_var(&page->_refcount);
> @@ -461,5 +449,5 @@ void __put_devmap_managed_page(struct page *page)
>         page->mapping = NULL;
>         page->pgmap->ops->page_free(page);
>  }
> -EXPORT_SYMBOL(__put_devmap_managed_page);
> +EXPORT_SYMBOL(free_devmap_managed_page);

This patch does not have a module consumer for
free_devmap_managed_page(), so the export should move to the patch
that needs the new export. Also, the only reason that
put_devmap_managed_page() is EXPORT_SYMBOL instead of
EXPORT_SYMBOL_GPL is that there was no practical way to hide the
devmap details from every module in the kernel that did put_page(). I
would expect free_devmap_managed_page() to be EXPORT_SYMBOL_GPL if it
is not inlined into an existing exported static inline API.
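
For anyone reading along in the archive: the mm/swap.c hunk that adds
the new out-of-line put_devmap_managed_page() is not quoted above. A
sketch of what it presumably looks like, reconstructed from the bool
prototype added to include/linux/mm.h and the refcount logic removed
from __put_devmap_managed_page() (the WARN_ON_ONCE sanity check is an
assumption, not something visible in the quoted hunks):

/*
 * Sketch only: reconstruction of the mm/swap.c side of this patch.
 * The refcount bookkeeping that used to live in
 * __put_devmap_managed_page() moves into this out-of-line helper,
 * which put_page() now calls after page_is_devmap_managed() says the
 * page needs special handling.
 */
#ifdef CONFIG_DEV_PAGEMAP_OPS
bool put_devmap_managed_page(struct page *page)
{
	int count;

	/* Assumed sanity check; not visible in the quoted hunks. */
	if (WARN_ON_ONCE(!page_is_devmap_managed(page)))
		return false;

	count = page_ref_dec_return(page);

	/* still busy */
	if (count > 1)
		return true;

	/* only triggered by the dev_pagemap shutdown path */
	if (count == 0) {
		__put_page(page);
		return true;
	}

	/*
	 * devmap page refcounts are 1-based, so count == 1 means the
	 * page is idle: hand it back to the driver for freeing.
	 */
	free_devmap_managed_page(page);
	return true;
}
EXPORT_SYMBOL(put_devmap_managed_page);
#endif /* CONFIG_DEV_PAGEMAP_OPS */

The EXPORT_SYMBOL() at the end of that sketch is the non-GPL export
discussed above: put_page() is a static inline used by effectively
every module, so its devmap slow path cannot practically be GPL-only,
which is why the separate free_devmap_managed_page() export deserves
different treatment.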