Date: Fri, 6 Jan 2023 15:36:02 +0000
From: Matthew Wilcox
To: Jesper Dangaard Brouer
Cc: Jesper Dangaard Brouer, Ilias Apalodimas, brouer@redhat.com,
 netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: Re: [PATCH v2 05/24] page_pool: Start using netmem in allocation path.
References: <20230105214631.3939268-1-willy@infradead.org>
 <20230105214631.3939268-6-willy@infradead.org>

On Fri, Jan 06, 2023 at 02:59:30PM +0100, Jesper Dangaard Brouer wrote:
> On 05/01/2023 22.46, Matthew Wilcox (Oracle) wrote:
> > Convert __page_pool_alloc_page_order() and __page_pool_alloc_pages_slow()
> > to use netmem internally.  This removes a couple of calls
> > to compound_head() that are hidden inside put_page().
> > Convert trace_page_pool_state_hold(), page_pool_dma_map() and
> > page_pool_set_pp_info() to take a netmem argument.
> >
> > Saves 83 bytes of text in __page_pool_alloc_page_order() and 98 in
> > __page_pool_alloc_pages_slow() for a total of 181 bytes.
> >
> > Signed-off-by: Matthew Wilcox (Oracle)
> > ---
> >  include/trace/events/page_pool.h | 14 +++++------
> >  net/core/page_pool.c             | 42 +++++++++++++++++---------------
> >  2 files changed, 29 insertions(+), 27 deletions(-)
>
> Acked-by: Jesper Dangaard Brouer
>
> Question below.
>
> > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > index 437241aba5a7..4e985502c569 100644
> > --- a/net/core/page_pool.c
> > +++ b/net/core/page_pool.c
> [...]
> > @@ -421,7 +422,8 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> >  		page = NULL;
> >  	}
> >
> > -	/* When page just alloc'ed is should/must have refcnt 1. */
> > +	/* When page just allocated it should have refcnt 1 (but may have
> > +	 * speculative references) */
> >  	return page;
>
> What does it mean page may have speculative references ?
>
> And do I/we need to worry about that for page_pool?

An excellent question.  There are two code paths (known to me) which take
speculative references on a page, and there may well be more.  One is in
GUP and the other is in the page cache.  Both take the form of:

	rcu_read_lock();
again:
	look-up-page
	try-get-page-ref
	check-lookup
	if lookup-failed
		drop-page-ref; goto again;
	rcu_read_unlock();

If a page _has been_ in the page tables, then GUP can find it.  If a page
_has been_ in the page cache, then filemap can find it.  Because there's
no RCU grace period between freeing and reallocating a page, this means
that any page can see its refcount temporarily raised.  Usually the
biggest problem is consumers assuming that they will be the last code to
call put_page() / folio_put(), and that they can do their cleanup at that
time (when the last caller of folio_put() may in fact be GUP or filemap,
which knows nothing of what you're using the page for).
I didn't notice any problems with temporarily elevated refcounts while doing the netmem conversion, and it's something I'm fairly sensitive to, so I think you got it all right and there is no need to be concerned. Hope that's helpful!