From: John Hubbard
To: Andrew Morton
CC: Al Viro, Christoph Hellwig, Dan Williams, Dave Chinner, Ira Weiny,
	Jan Kara, Jason Gunthorpe, Jonathan Corbet, Jérôme Glisse,
	Kirill A. Shutemov, Michal Hocko, Mike Kravetz, Shuah Khan,
	Vlastimil Babka, Matthew Wilcox, LKML, John Hubbard
Subject: [PATCH v3 04/12] mm: introduce page_ref_sub_return()
Date: Fri, 31 Jan 2020 19:40:21 -0800
Message-ID: <20200201034029.4063170-5-jhubbard@nvidia.com>
In-Reply-To: <20200201034029.4063170-1-jhubbard@nvidia.com>
References: <20200201034029.4063170-1-jhubbard@nvidia.com>
Shutemov" , Michal Hocko , Mike Kravetz , Shuah Khan , Vlastimil Babka , Matthew Wilcox , , , , , , LKML , John Hubbard Subject: [PATCH v3 04/12] mm: introduce page_ref_sub_return() Date: Fri, 31 Jan 2020 19:40:21 -0800 Message-ID: <20200201034029.4063170-5-jhubbard@nvidia.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20200201034029.4063170-1-jhubbard@nvidia.com> References: <20200201034029.4063170-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Content-Transfer-Encoding: quoted-printable Content-Type: text/plain DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1580528382; bh=VhVwoCFqTx48m7C67hmpKfI2lp8OYGAP7BWu53hCs80=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Transfer-Encoding:Content-Type; b=iDX4gxpQL2C0C9cMl4/0o9eEAC7Hffol4qeTM8bdozImGlWGXqrTALpzFzNxnt84/ RP6GdlgNzoWj4sRyEUs8oyVGWXe+luTZElPfdyPd6vZfePeURd2Ux1fkG4/x2z70pn XJ2izDvgoWSUjZFwNRu1IjGXJZ+3MnE5RojJLRoNlgb+5a8aplp4kvEs3lPpb1Tg+0 EHKeLOExY9j98deZAlBlmkzFh1D2x7PgVif+1weB34UZlIrezOItQEENKUD7tLo9Jt 6TLR7YuU1IULbseyKPQlc2LYomTvJPzqML/kdsxGpziYRoKXXdjiZffwif2bKYuB8k lyJ7bq5UVNBsA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: An upcoming patch requires subtracting a large chunk of refcounts from a page, and checking what the resulting refcount is. This is a little different than the usual "check for zero refcount" that many of the page ref functions already do. However, it is similar to a few other routines that (like this one) are generally useful for things such as 1-based refcounting. Add page_ref_sub_return(), that subtracts a chunk of refcounts atomically, and returns an atomic snapshot of the result. Signed-off-by: John Hubbard --- include/linux/page_ref.h | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h index 14d14beb1f7f..b9cbe553d1e7 100644 --- a/include/linux/page_ref.h +++ b/include/linux/page_ref.h @@ -102,6 +102,16 @@ static inline void page_ref_sub(struct page *page, int= nr) __page_ref_mod(page, -nr); } =20 +static inline int page_ref_sub_return(struct page *page, int nr) +{ + int ret =3D atomic_sub_return(nr, &page->_refcount); + + if (page_ref_tracepoint_active(__tracepoint_page_ref_mod)) + __page_ref_mod(page, -nr); + + return ret; +} + static inline void page_ref_inc(struct page *page) { atomic_inc(&page->_refcount); --=20 2.25.0