Date: Wed, 18 Dec 2019 19:04:20 +0300
From: "Kirill A. Shutemov"
To: John Hubbard
Cc: Andrew Morton, Al Viro, Alex Williamson, Benjamin Herrenschmidt,
	Björn Töpel, Christoph Hellwig, Dan Williams, Daniel Vetter,
	Dave Chinner, David Airlie, "David S . Miller", Ira Weiny, Jan Kara,
	Jason Gunthorpe, Jens Axboe, Jonathan Corbet, Jérôme Glisse,
	Magnus Karlsson, Mauro Carvalho Chehab, Michael Ellerman,
	Michal Hocko, Mike Kravetz, Paul Mackerras, Shuah Khan,
	Vlastimil Babka, bpf@vger.kernel.org, dri-devel@lists.freedesktop.org,
	kvm@vger.kernel.org, linux-block@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kselftest@vger.kernel.org, linux-media@vger.kernel.org,
	linux-rdma@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, linux-mm@kvack.org, LKML, Christoph Hellwig
Subject: Re: [PATCH v11 04/25] mm: devmap: refactor 1-based refcounting for ZONE_DEVICE pages
Message-ID: <20191218160420.gyt4c45e6zsnxqv6@box>
References: <20191216222537.491123-1-jhubbard@nvidia.com>
	<20191216222537.491123-5-jhubbard@nvidia.com>
In-Reply-To: <20191216222537.491123-5-jhubbard@nvidia.com>
User-Agent: NeoMutt/20180716

On Mon, Dec 16, 2019 at 02:25:16PM -0800, John Hubbard wrote:
> An upcoming patch changes and complicates the refcounting and
> especially the "put page" aspects of it. In order to keep
> everything clean, refactor the devmap page release routines:
>
> * Rename put_devmap_managed_page() to page_is_devmap_managed(),
>   and limit the functionality to "read only": return a bool,
>   with no side effects.
>
> * Add a new routine, put_devmap_managed_page(), to handle checking
>   what kind of page it is, and what kind of refcount handling it
>   requires.
>
> * Rename __put_devmap_managed_page() to free_devmap_managed_page(),
>   and limit the functionality to unconditionally freeing a devmap
>   page.

What is the reason for separating put_devmap_managed_page() from
free_devmap_managed_page() if free_devmap_managed_page() has exactly one
caller? Is it preparation for the next patches?

> This is originally based on a separate patch by Ira Weiny, which
> applied to an early version of the put_user_page() experiments.
> Since then, Jérôme Glisse suggested the refactoring described above.
>
> Cc: Christoph Hellwig
> Suggested-by: Jérôme Glisse
> Reviewed-by: Dan Williams
> Reviewed-by: Jan Kara
> Signed-off-by: Ira Weiny
> Signed-off-by: John Hubbard
> ---
>  include/linux/mm.h | 17 +++++++++++++----
>  mm/memremap.c      | 16 ++--------------
>  mm/swap.c          | 24 ++++++++++++++++++++++++
>  3 files changed, 39 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index c97ea3b694e6..77a4df06c8a7 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -952,9 +952,10 @@ static inline bool is_zone_device_page(const struct page *page)
>  #endif
>  
>  #ifdef CONFIG_DEV_PAGEMAP_OPS
> -void __put_devmap_managed_page(struct page *page);
> +void free_devmap_managed_page(struct page *page);
>  DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
> -static inline bool put_devmap_managed_page(struct page *page)
> +
> +static inline bool page_is_devmap_managed(struct page *page)
>  {
>  	if (!static_branch_unlikely(&devmap_managed_key))
>  		return false;
> @@ -963,7 +964,6 @@ static inline bool put_devmap_managed_page(struct page *page)
>  	switch (page->pgmap->type) {
>  	case MEMORY_DEVICE_PRIVATE:
>  	case MEMORY_DEVICE_FS_DAX:
> -		__put_devmap_managed_page(page);
>  		return true;
>  	default:
>  		break;
> @@ -971,7 +971,14 @@ static inline bool put_devmap_managed_page(struct page *page)
>  	return false;
>  }
>  
> +bool put_devmap_managed_page(struct page *page);
> +
>  #else /* CONFIG_DEV_PAGEMAP_OPS */
> +static inline bool page_is_devmap_managed(struct page *page)
> +{
> +	return false;
> +}
> +
>  static inline bool put_devmap_managed_page(struct page *page)
>  {
>  	return false;
> @@ -1028,8 +1035,10 @@ static inline void put_page(struct page *page)
>  	 * need to inform the device driver through callback. See
>  	 * include/linux/memremap.h and HMM for details.
>  	 */
> -	if (put_devmap_managed_page(page))
> +	if (page_is_devmap_managed(page)) {
> +		put_devmap_managed_page(page);

put_devmap_managed_page() has yet another page_is_devmap_managed()
check inside. It looks strange.
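To make the duplication concrete, here is a stripped-down userspace model
of the resulting call path (stub struct page, simplified bodies -- this is
only an illustration of the control flow as I read the patch, not the real
kernel code):

#include <stdbool.h>

/* Stub: only the fields this path cares about. */
struct page {
	bool devmap_managed;	/* stands in for the pgmap->type switch */
	int _refcount;
};

static bool page_is_devmap_managed(struct page *page)
{
	return page->devmap_managed;
}

static void free_devmap_managed_page(struct page *page)
{
	(void)page;	/* driver ->page_free() callback would run here */
}

static bool put_devmap_managed_page(struct page *page)
{
	bool is_devmap = page_is_devmap_managed(page);	/* check #2 */

	if (is_devmap) {
		int count = --page->_refcount;

		if (count == 1)		/* 1-based refcount: page is now free */
			free_devmap_managed_page(page);
		/* count == 0 (dev_pagemap shutdown) case omitted here */
	}
	return is_devmap;
}

static void put_page(struct page *page)
{
	if (page_is_devmap_managed(page)) {	/* check #1 */
		put_devmap_managed_page(page);
		return;
	}
	/* put_page_testzero()/__put_page() path elided */
}

int main(void)
{
	struct page p = { .devmap_managed = true, ._refcount = 2 };

	put_page(&p);	/* page_is_devmap_managed() runs twice on this call */
	return 0;
}

So every put_page() on a devmap page evaluates the static key and the
pgmap->type switch once inline and then again inside the out-of-line
helper.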
>  		return;
> +	}
>  
>  	if (put_page_testzero(page))
>  		__put_page(page);
> diff --git a/mm/memremap.c b/mm/memremap.c
> index e899fa876a62..2ba773859031 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -411,20 +411,8 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
>  EXPORT_SYMBOL_GPL(get_dev_pagemap);
>  
>  #ifdef CONFIG_DEV_PAGEMAP_OPS
> -void __put_devmap_managed_page(struct page *page)
> +void free_devmap_managed_page(struct page *page)
>  {
> -	int count = page_ref_dec_return(page);
> -
> -	/* still busy */
> -	if (count > 1)
> -		return;
> -
> -	/* only triggered by the dev_pagemap shutdown path */
> -	if (count == 0) {
> -		__put_page(page);
> -		return;
> -	}
> -
>  	/* notify page idle for dax */
>  	if (!is_device_private_page(page)) {
>  		wake_up_var(&page->_refcount);
> @@ -461,5 +449,5 @@ void __put_devmap_managed_page(struct page *page)
>  	page->mapping = NULL;
>  	page->pgmap->ops->page_free(page);
>  }
> -EXPORT_SYMBOL(__put_devmap_managed_page);
> +EXPORT_SYMBOL(free_devmap_managed_page);
>  #endif /* CONFIG_DEV_PAGEMAP_OPS */
> diff --git a/mm/swap.c b/mm/swap.c
> index 5341ae93861f..49f7c2eea0ba 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -1102,3 +1102,27 @@ void __init swap_setup(void)
>  	 * _really_ don't want to cluster much more
>  	 */
>  }
> +
> +#ifdef CONFIG_DEV_PAGEMAP_OPS
> +bool put_devmap_managed_page(struct page *page)
> +{
> +	bool is_devmap = page_is_devmap_managed(page);
> +
> +	if (is_devmap) {

Reversing the condition would save you an indentation level.

> +		int count = page_ref_dec_return(page);
> +
> +		/*
> +		 * devmap page refcounts are 1-based, rather than 0-based: if
> +		 * refcount is 1, then the page is free and the refcount is
> +		 * stable because nobody holds a reference on the page.
> +		 */
> +		if (count == 1)
> +			free_devmap_managed_page(page);
> +		else if (!count)
> +			__put_page(page);
> +	}
> +
> +	return is_devmap;
> +}
> +EXPORT_SYMBOL(put_devmap_managed_page);
> +#endif
> -- 
> 2.24.1
> 
> 

-- 
 Kirill A. Shutemov
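For reference, the reversed-condition variant of put_devmap_managed_page()
suggested above would look roughly like this (untested sketch written
against this patch; the behaviour is intended to be identical to the
quoted hunk):

bool put_devmap_managed_page(struct page *page)
{
	int count;

	if (!page_is_devmap_managed(page))
		return false;

	count = page_ref_dec_return(page);

	/*
	 * devmap page refcounts are 1-based: a refcount of 1 means the
	 * page is free; 0 is only reached on the dev_pagemap shutdown path.
	 */
	if (count == 1)
		free_devmap_managed_page(page);
	else if (!count)
		__put_page(page);

	return true;
}
EXPORT_SYMBOL(put_devmap_managed_page);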