Date: Wed, 5 Feb 2020 14:23:43 +0300
From: "Kirill A. Shutemov"
To: John Hubbard
Cc: Andrew Morton, Al Viro, Christoph Hellwig, Dan Williams, Dave Chinner,
 Ira Weiny, Jan Kara, Jason Gunthorpe, Jonathan Corbet, Jérôme Glisse,
 Michal Hocko, Mike Kravetz, Shuah Khan, Vlastimil Babka, Matthew Wilcox,
 linux-doc@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-rdma@vger.kernel.org,
 linux-mm@kvack.org, LKML
Subject: Re: [PATCH v4 04/12] mm: introduce page_ref_sub_return()
Message-ID: <20200205112343.e2vpcylgrobfcxlo@box>
References: <20200204234117.2974687-1-jhubbard@nvidia.com>
 <20200204234117.2974687-5-jhubbard@nvidia.com>
In-Reply-To: <20200204234117.2974687-5-jhubbard@nvidia.com>

On Tue, Feb 04, 2020 at 03:41:09PM -0800, John Hubbard wrote:
> An upcoming patch requires subtracting a large chunk of refcounts from
> a page, and checking what the resulting refcount is. This is a little
> different than the usual "check for zero refcount" that many of the
> page ref functions already do. However, it is similar to a few other
> routines that (like this one) are generally useful for things such as
> 1-based refcounting.
>
> Add page_ref_sub_return(), that subtracts a chunk of refcounts
> atomically, and returns an atomic snapshot of the result.
>
> Signed-off-by: John Hubbard
> ---
>  include/linux/page_ref.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
> index 14d14beb1f7f..a0e171265b79 100644
> --- a/include/linux/page_ref.h
> +++ b/include/linux/page_ref.h
> @@ -102,6 +102,15 @@ static inline void page_ref_sub(struct page *page, int nr)
>  	__page_ref_mod(page, -nr);
>  }
>
> +static inline int page_ref_sub_return(struct page *page, int nr)
> +{
> +	int ret = atomic_sub_return(nr, &page->_refcount);
> +
> +	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))

s/__tracepoint_page_ref_mod/__tracepoint_page_ref_mod_and_return/

> +		__page_ref_mod_and_return(page, -nr, ret);
> +	return ret;
> +}
> +
>  static inline void page_ref_inc(struct page *page)
>  {
>  	atomic_inc(&page->_refcount);
> --
> 2.25.0

-- 
 Kirill A. Shutemov