Date: Tue, 10 Jan 2023 13:36:59 +0200
From: Ilias Apalodimas
To: "Matthew Wilcox (Oracle)"
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: Re: [PATCH v2 18/24] page_pool: Convert frag_page to frag_nmem
References: <20230105214631.3939268-1-willy@infradead.org> <20230105214631.3939268-19-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-19-willy@infradead.org>
On Thu, Jan 05, 2023 at 09:46:25PM +0000, Matthew Wilcox (Oracle) wrote:
> Remove page_pool_defrag_page() and page_pool_return_page() as they have
> no more callers.
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/net/page_pool.h | 17 ++++++---------
>  net/core/page_pool.c    | 47 ++++++++++++++++++-----------------------
>  2 files changed, 26 insertions(+), 38 deletions(-)
> 
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 126c04315929..a9dae4b5f2f7 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -262,7 +262,7 @@ struct page_pool {
>  
>          u32 pages_state_hold_cnt;
>          unsigned int frag_offset;
> -        struct page *frag_page;
> +        struct netmem *frag_nmem;
>          long frag_users;
>  
>  #ifdef CONFIG_PAGE_POOL_STATS
> @@ -334,8 +334,8 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
>          return page_pool_alloc_pages(pool, gfp);
>  }
>  
> -struct page *page_pool_alloc_frag(struct page_pool *pool, unsigned int *offset,
> -                unsigned int size, gfp_t gfp);
> +struct netmem *page_pool_alloc_frag(struct page_pool *pool,
> +                unsigned int *offset, unsigned int size, gfp_t gfp);
>  
>  static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
>                                                      unsigned int *offset,
> @@ -343,7 +343,7 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
>  {
>          gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
>  
> -        return page_pool_alloc_frag(pool, offset, size, gfp);
> +        return netmem_page(page_pool_alloc_frag(pool, offset, size, gfp));
>  }
>  
>  /* get the stored dma direction. A driver might decide to treat this locally and
> @@ -399,9 +399,9 @@ void page_pool_put_defragged_netmem(struct page_pool *pool, struct netmem *nmem,
>                                      unsigned int dma_sync_size,
>                                      bool allow_direct);
>  
> -static inline void page_pool_fragment_page(struct page *page, long nr)
> +static inline void page_pool_fragment_netmem(struct netmem *nmem, long nr)
>  {
> -        atomic_long_set(&page->pp_frag_count, nr);
> +        atomic_long_set(&nmem->pp_frag_count, nr);
>  }
>  
>  static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr)
> @@ -425,11 +425,6 @@ static inline long page_pool_defrag_netmem(struct netmem *nmem, long nr)
>          return ret;
>  }
>  
> -static inline long page_pool_defrag_page(struct page *page, long nr)
> -{
> -        return page_pool_defrag_netmem(page_netmem(page), nr);
> -}
> -
>  static inline bool page_pool_is_last_frag(struct page_pool *pool,
>                                            struct netmem *nmem)
>  {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ddf9f2bb85f7..5624cdae1f4e 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -222,12 +222,6 @@ EXPORT_SYMBOL(page_pool_create);
>  
>  static void page_pool_return_netmem(struct page_pool *pool, struct netmem *nm);
>  
> -static inline
> -void page_pool_return_page(struct page_pool *pool, struct page *page)
> -{
> -        page_pool_return_netmem(pool, page_netmem(page));
> -}
> -
>  noinline
>  static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
>  {
> @@ -665,10 +659,9 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  }
>  EXPORT_SYMBOL(page_pool_put_page_bulk);
>  
> -static struct page *page_pool_drain_frag(struct page_pool *pool,
> -                                         struct page *page)
> +static struct netmem *page_pool_drain_frag(struct page_pool *pool,
> +                                           struct netmem *nmem)
>  {
> -        struct netmem *nmem = page_netmem(page);
>          long drain_count = BIAS_MAX - pool->frag_users;
>  
>          /* Some user is still using the page frag */
> @@ -679,7 +672,7 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
>                  if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
>                          page_pool_dma_sync_for_device(pool, nmem, -1);
>  
> -                return page;
> +                return nmem;
>          }
>  
>          page_pool_return_netmem(pool, nmem);
> @@ -689,22 +682,22 @@ static struct page *page_pool_drain_frag(struct page_pool *pool,
>  static void page_pool_free_frag(struct page_pool *pool)
>  {
>          long drain_count = BIAS_MAX - pool->frag_users;
> -        struct page *page = pool->frag_page;
> +        struct netmem *nmem = pool->frag_nmem;
>  
> -        pool->frag_page = NULL;
> +        pool->frag_nmem = NULL;
>  
> -        if (!page || page_pool_defrag_page(page, drain_count))
> +        if (!nmem || page_pool_defrag_netmem(nmem, drain_count))
>                  return;
>  
> -        page_pool_return_page(pool, page);
> +        page_pool_return_netmem(pool, nmem);
>  }
>  
> -struct page *page_pool_alloc_frag(struct page_pool *pool,
> +struct netmem *page_pool_alloc_frag(struct page_pool *pool,
>                                    unsigned int *offset,
>                                    unsigned int size, gfp_t gfp)
>  {
>          unsigned int max_size = PAGE_SIZE << pool->p.order;
> -        struct page *page = pool->frag_page;
> +        struct netmem *nmem = pool->frag_nmem;
>  
>          if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
>                      size > max_size))
> @@ -713,35 +706,35 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
>          size = ALIGN(size, dma_get_cache_alignment());
>          *offset = pool->frag_offset;
>  
> -        if (page && *offset + size > max_size) {
> -                page = page_pool_drain_frag(pool, page);
> -                if (page) {
> +        if (nmem && *offset + size > max_size) {
> +                nmem = page_pool_drain_frag(pool, nmem);
> +                if (nmem) {
>                          alloc_stat_inc(pool, fast);
>                          goto frag_reset;
>                  }
>          }
>  
> -        if (!page) {
> -                page = page_pool_alloc_pages(pool, gfp);
> -                if (unlikely(!page)) {
> -                        pool->frag_page = NULL;
> +        if (!nmem) {
> +                nmem = page_pool_alloc_netmem(pool, gfp);
> +                if (unlikely(!nmem)) {
> +                        pool->frag_nmem = NULL;
>                          return NULL;
>                  }
>  
> -                pool->frag_page = page;
> +                pool->frag_nmem = nmem;
>  
> frag_reset:
>                  pool->frag_users = 1;
>                  *offset = 0;
>                  pool->frag_offset = size;
> -                page_pool_fragment_page(page, BIAS_MAX);
> -                return page;
> +                page_pool_fragment_netmem(nmem, BIAS_MAX);
> +                return nmem;
>          }
>  
>          pool->frag_users++;
>          pool->frag_offset = *offset + size;
>          alloc_stat_inc(pool, fast);
> -        return page;
> +        return nmem;
>  }
>  EXPORT_SYMBOL(page_pool_alloc_frag);
> 
> -- 
> 2.35.1
> 

Reviewed-by: Ilias Apalodimas