From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Jesper Dangaard Brouer, Ilias Apalodimas
Cc: "Matthew Wilcox (Oracle)", netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: [PATCH v2 12/24] page_pool: Convert page_pool_alloc_pages() to page_pool_alloc_netmem()
Date: Thu, 5 Jan 2023 21:46:19 +0000
Message-Id: <20230105214631.3939268-13-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230105214631.3939268-1-willy@infradead.org>
References: <20230105214631.3939268-1-willy@infradead.org>

Add wrappers for page_pool_alloc_pages() and page_pool_dev_alloc_netmem().
Also convert __page_pool_alloc_pages_slow() to __page_pool_alloc_netmem_slow()
and __page_pool_alloc_page_order() to __page_pool_alloc_netmem().
__page_pool_get_cached() now returns a netmem.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/net/page_pool.h | 13 ++++++++++++-
 net/core/page_pool.c    | 39 +++++++++++++++++++--------------------
 2 files changed, 31 insertions(+), 21 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 8b826da3b8b0..fbb653c9f1da 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -314,7 +314,18 @@ struct page_pool {
 	u64 destroy_cnt;
 };
 
-struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
+struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp);
+
+static inline struct netmem *page_pool_dev_alloc_netmem(struct page_pool *pool)
+{
+	return page_pool_alloc_netmem(pool, GFP_ATOMIC | __GFP_NOWARN);
+}
+
+static inline
+struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
+{
+	return netmem_page(page_pool_alloc_netmem(pool, gfp));
+}
 
 static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 {
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 0212244e07e7..c7ea487acbaa 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -282,7 +282,7 @@ static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
 }
 
 /* fast path */
-static struct page *__page_pool_get_cached(struct page_pool *pool)
+static struct netmem *__page_pool_get_cached(struct page_pool *pool)
 {
 	struct netmem *nmem;
 
@@ -295,7 +295,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
 		nmem = page_pool_refill_alloc_cache(pool);
 	}
 
-	return netmem_page(nmem);
+	return nmem;
 }
 
 static void page_pool_dma_sync_for_device(struct page_pool *pool,
@@ -349,8 +349,8 @@ static void page_pool_clear_pp_info(struct netmem *nmem)
 	nmem->pp = NULL;
 }
 
-static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
-						 gfp_t gfp)
+static
+struct netmem *__page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
 {
 	struct netmem *nmem;
 
@@ -371,27 +371,27 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
 	/* Track how many pages are held 'in-flight' */
 	pool->pages_state_hold_cnt++;
 	trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt);
-	return netmem_page(nmem);
+	return nmem;
 }
 
 /* slow path */
 noinline
-static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
+static struct netmem *__page_pool_alloc_netmem_slow(struct page_pool *pool,
 						 gfp_t gfp)
 {
 	const int bulk = PP_ALLOC_CACHE_REFILL;
 	unsigned int pp_flags = pool->p.flags;
 	unsigned int pp_order = pool->p.order;
-	struct page *page;
+	struct netmem *nmem;
 	int i, nr_pages;
 
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
-		return __page_pool_alloc_page_order(pool, gfp);
+		return __page_pool_alloc_netmem(pool, gfp);
 
 	/* Unnecessary as alloc cache is empty, but guarantees zero count */
 	if (unlikely(pool->alloc.count > 0))
-		return netmem_page(pool->alloc.cache[--pool->alloc.count]);
+		return pool->alloc.cache[--pool->alloc.count];
 
 	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
 	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
@@ -422,34 +422,33 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
 
 	/* Return last page */
 	if (likely(pool->alloc.count > 0)) {
-		page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
+		nmem = pool->alloc.cache[--pool->alloc.count];
 		alloc_stat_inc(pool, slow);
 	} else {
-		page = NULL;
+		nmem = NULL;
 	}
 
 	/* When page just allocated it should have refcnt 1 (but may have
 	 * speculative references)
 	 */
-	return page;
+	return nmem;
 }
 
 /* For using page_pool replace: alloc_pages() API calls, but provide
 * synchronization guarantee for allocation side.
 */
-struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
+struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
 {
-	struct page *page;
+	struct netmem *nmem;
 
 	/* Fast-path: Get a page from cache */
-	page = __page_pool_get_cached(pool);
-	if (page)
-		return page;
+	nmem = __page_pool_get_cached(pool);
+	if (nmem)
+		return nmem;
 
 	/* Slow-path: cache empty, do real allocation */
-	page = __page_pool_alloc_pages_slow(pool, gfp);
-	return page;
+	return __page_pool_alloc_netmem_slow(pool, gfp);
 }
-EXPORT_SYMBOL(page_pool_alloc_pages);
+EXPORT_SYMBOL(page_pool_alloc_netmem);
 
 /* Calculate distance between two u32 values, valid if distance is below 2^(31)
 * https://en.wikipedia.org/wiki/Serial_number_arithmetic#General_Solution
-- 
2.35.1