Date: Tue, 10 Jan 2023 11:28:10 +0200
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: "Matthew Wilcox (Oracle)"
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: Re: [PATCH v2 04/24] page_pool: Convert page_pool_release_page() to page_pool_release_netmem()
References: <20230105214631.3939268-1-willy@infradead.org> <20230105214631.3939268-5-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-5-willy@infradead.org>
Hi Matthew,

On Thu, Jan 05, 2023 at 09:46:11PM +0000, Matthew Wilcox (Oracle) wrote:
> Also convert page_pool_clear_pp_info() and trace_page_pool_state_release()
> to take a netmem. Include a wrapper for page_pool_release_page() to
> avoid converting all callers.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/net/page_pool.h          | 14 ++++++++++----
>  include/trace/events/page_pool.h | 14 +++++++-------
>  net/core/page_pool.c             | 18 +++++++++---------
>  3 files changed, 26 insertions(+), 20 deletions(-)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 196b585763d9..480baa22bc50 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -18,7 +18,7 @@
>   *
>   * API keeps track of in-flight pages, in-order to let API user know
>   * when it is safe to dealloactor page_pool object. Thus, API users
> - * must make sure to call page_pool_release_page() when a page is
> + * must make sure to call page_pool_release_netmem() when a page is
>   * "leaving" the page_pool. Or call page_pool_put_page() where
>   * appropiate. For maintaining correct accounting.
>   *
> @@ -354,7 +354,7 @@ struct xdp_mem_info;
>  void page_pool_destroy(struct page_pool *pool);
>  void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
>  			   struct xdp_mem_info *mem);
> -void page_pool_release_page(struct page_pool *pool, struct page *page);
> +void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem);
>  void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  			     int count);
>  #else
> @@ -367,8 +367,8 @@ static inline void page_pool_use_xdp_mem(struct page_pool *pool,
>  					 struct xdp_mem_info *mem)
>  {
>  }
> -static inline void page_pool_release_page(struct page_pool *pool,
> -					  struct page *page)
> +static inline void page_pool_release_netmem(struct page_pool *pool,
> +					    struct netmem *nmem)
>  {
>  }
>
> @@ -378,6 +378,12 @@ static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  }
>  #endif
>

I think it's worth commenting here that page_pool_release_page() is
eventually going to be removed once we convert all the drivers, and that
it shouldn't be used in new code.

> +static inline void page_pool_release_page(struct page_pool *pool,
> +					  struct page *page)
> +{
> +	page_pool_release_netmem(pool, page_netmem(page));
> +}
> +
>  void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
>  				  unsigned int dma_sync_size,
>  				  bool allow_direct);
> diff --git a/include/trace/events/page_pool.h b/include/trace/events/page_pool.h
> index ca534501158b..113aad0c9e5b 100644
> --- a/include/trace/events/page_pool.h
> +++ b/include/trace/events/page_pool.h
> @@ -42,26 +42,26 @@ TRACE_EVENT(page_pool_release,
>  TRACE_EVENT(page_pool_state_release,
>
>  	TP_PROTO(const struct page_pool *pool,
> -		 const struct page *page, u32 release),
> +		 const struct netmem *nmem, u32 release),
>
> -	TP_ARGS(pool, page, release),
> +	TP_ARGS(pool, nmem, release),
>
>  	TP_STRUCT__entry(
>  		__field(const struct page_pool *, pool)
> -		__field(const struct page *, page)
> +		__field(const struct netmem *, nmem)
>  		__field(u32, release)
>  		__field(unsigned long, pfn)
>  	),
>
>  	TP_fast_assign(
>  		__entry->pool = pool;
> -		__entry->page = page;
> +		__entry->nmem = nmem;
>  		__entry->release = release;
> -		__entry->pfn = page_to_pfn(page);
> +		__entry->pfn = netmem_pfn(nmem);
>  	),
>
> -	TP_printk("page_pool=%p page=%p pfn=0x%lx release=%u",
> -		  __entry->pool, __entry->page, __entry->pfn, __entry->release)
> +	TP_printk("page_pool=%p nmem=%p pfn=0x%lx release=%u",
> +		  __entry->pool, __entry->nmem, __entry->pfn, __entry->release)
> );
>
> TRACE_EVENT(page_pool_state_hold,
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 9b203d8660e4..437241aba5a7 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -336,10 +336,10 @@ static void page_pool_set_pp_info(struct page_pool *pool,
>  		pool->p.init_callback(page, pool->p.init_arg);
>  }
>
> -static void page_pool_clear_pp_info(struct page *page)
> +static void page_pool_clear_pp_info(struct netmem *nmem)
>  {
> -	page->pp_magic = 0;
> -	page->pp = NULL;
> +	nmem->pp_magic = 0;
> +	nmem->pp = NULL;
>  }
>
>  static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
> @@ -467,7 +467,7 @@ static s32 page_pool_inflight(struct page_pool *pool)
>   * a regular page (that will eventually be returned to the normal
>   * page-allocator via put_page).
>   */
> -void page_pool_release_page(struct page_pool *pool, struct page *page)
> +void page_pool_release_netmem(struct page_pool *pool, struct netmem *nmem)
>  {
>  	dma_addr_t dma;
>  	int count;
> @@ -478,23 +478,23 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
>  		 */
>  		goto skip_dma_unmap;
>
> -	dma = page_pool_get_dma_addr(page);
> +	dma = netmem_get_dma_addr(nmem);
>
>  	/* When page is unmapped, it cannot be returned to our pool */
>  	dma_unmap_page_attrs(pool->p.dev, dma,
>  			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
>  			     DMA_ATTR_SKIP_CPU_SYNC);
> -	page_pool_set_dma_addr(page, 0);
> +	netmem_set_dma_addr(nmem, 0);
> skip_dma_unmap:
> -	page_pool_clear_pp_info(page);
> +	page_pool_clear_pp_info(nmem);
>
>  	/* This may be the last page returned, releasing the pool, so
>  	 * it is not safe to reference pool afterwards.
>  	 */
>  	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
> -	trace_page_pool_state_release(pool, page, count);
> +	trace_page_pool_state_release(pool, nmem, count);
>  }
> -EXPORT_SYMBOL(page_pool_release_page);
> +EXPORT_SYMBOL(page_pool_release_netmem);
>
>  /* Return a page to the page allocator, cleaning up our state */
>  static void page_pool_return_page(struct page_pool *pool, struct page *page)
> --
> 2.35.1
>

Reviewed-by: Ilias Apalodimas