Date: Tue, 10 Jan 2023 12:45:58 +0200
From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: "Matthew Wilcox (Oracle)"
Cc: Jesper Dangaard Brouer, netdev@vger.kernel.org, linux-mm@kvack.org, Shakeel Butt
Subject: Re: [PATCH v2 12/24] page_pool: Convert page_pool_alloc_pages() to page_pool_alloc_netmem()
References: <20230105214631.3939268-1-willy@infradead.org> <20230105214631.3939268-13-willy@infradead.org>
In-Reply-To: <20230105214631.3939268-13-willy@infradead.org>
On Thu, Jan 05, 2023 at 09:46:19PM +0000, Matthew Wilcox (Oracle) wrote:
> Add wrappers for page_pool_alloc_pages() and
> page_pool_dev_alloc_netmem().  Also convert __page_pool_alloc_pages_slow()
> to __page_pool_alloc_netmem_slow() and __page_pool_alloc_page_order()
> to __page_pool_alloc_netmem().  __page_pool_get_cached() now returns
> a netmem.
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/net/page_pool.h | 13 ++++++++++++-
>  net/core/page_pool.c    | 39 +++++++++++++++++++--------------------
>  2 files changed, 31 insertions(+), 21 deletions(-)
> 
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index 8b826da3b8b0..fbb653c9f1da 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -314,7 +314,18 @@ struct page_pool {
>  	u64 destroy_cnt;
>  };
>  
> -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp);
> +struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp);
> +
> +static inline struct netmem *page_pool_dev_alloc_netmem(struct page_pool *pool)
> +{
> +	return page_pool_alloc_netmem(pool, GFP_ATOMIC | __GFP_NOWARN);
> +}
> +
> +static inline
> +struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
> +{
> +	return netmem_page(page_pool_alloc_netmem(pool, gfp));
> +}
>  
>  static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
>  {
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 0212244e07e7..c7ea487acbaa 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -282,7 +282,7 @@ static struct netmem *page_pool_refill_alloc_cache(struct page_pool *pool)
>  }
>  
>  /* fast path */
> -static struct page *__page_pool_get_cached(struct page_pool *pool)
> +static struct netmem *__page_pool_get_cached(struct page_pool *pool)
>  {
>  	struct netmem *nmem;
>  
> @@ -295,7 +295,7 @@ static struct page *__page_pool_get_cached(struct page_pool *pool)
>  		nmem = page_pool_refill_alloc_cache(pool);
>  	}
>  
> -	return netmem_page(nmem);
> +	return nmem;
>  }
>  
>  static void page_pool_dma_sync_for_device(struct page_pool *pool,
> @@ -349,8 +349,8 @@ static void page_pool_clear_pp_info(struct netmem *nmem)
>  	nmem->pp = NULL;
>  }
>  
> -static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
> -						 gfp_t gfp)
> +static
> +struct netmem *__page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
>  {
>  	struct netmem *nmem;
>  
> @@ -371,27 +371,27 @@ static struct page *__page_pool_alloc_page_order(struct page_pool *pool,
>  	/* Track how many pages are held 'in-flight' */
>  	pool->pages_state_hold_cnt++;
>  	trace_page_pool_state_hold(pool, nmem, pool->pages_state_hold_cnt);
> -	return netmem_page(nmem);
> +	return nmem;
>  }
>  
>  /* slow path */
>  noinline
> -static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> +static struct netmem *__page_pool_alloc_netmem_slow(struct page_pool *pool,
>  						 gfp_t gfp)
>  {
>  	const int bulk = PP_ALLOC_CACHE_REFILL;
>  	unsigned int pp_flags = pool->p.flags;
>  	unsigned int pp_order = pool->p.order;
> -	struct page *page;
> +	struct netmem *nmem;
>  	int i, nr_pages;
>  
>  	/* Don't support bulk alloc for high-order pages */
>  	if (unlikely(pp_order))
> -		return __page_pool_alloc_page_order(pool, gfp);
> +		return __page_pool_alloc_netmem(pool, gfp);
>  
>  	/* Unnecessary as alloc cache is empty, but guarantees zero count */
>  	if (unlikely(pool->alloc.count > 0))
> -		return netmem_page(pool->alloc.cache[--pool->alloc.count]);
> +		return pool->alloc.cache[--pool->alloc.count];
>  
>  	/* Mark empty alloc.cache slots "empty" for alloc_pages_bulk_array */
>  	memset(&pool->alloc.cache, 0, sizeof(void *) * bulk);
> @@ -422,34 +422,33 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  
>  	/* Return last page */
>  	if (likely(pool->alloc.count > 0)) {
> -		page = netmem_page(pool->alloc.cache[--pool->alloc.count]);
> +		nmem = pool->alloc.cache[--pool->alloc.count];
>  		alloc_stat_inc(pool, slow);
>  	} else {
> -		page = NULL;
> +		nmem = NULL;
>  	}
>  
>  	/* When page just allocated it should have refcnt 1 (but may have
>  	 * speculative references) */
> -	return page;
> +	return nmem;
>  }
>  
>  /* For using page_pool replace: alloc_pages() API calls, but provide
>   * synchronization guarantee for allocation side.
>   */
> -struct page *page_pool_alloc_pages(struct page_pool *pool, gfp_t gfp)
> +struct netmem *page_pool_alloc_netmem(struct page_pool *pool, gfp_t gfp)
>  {
> -	struct page *page;
> +	struct netmem *nmem;
>  
>  	/* Fast-path: Get a page from cache */
> -	page = __page_pool_get_cached(pool);
> -	if (page)
> -		return page;
> +	nmem = __page_pool_get_cached(pool);
> +	if (nmem)
> +		return nmem;
>  
>  	/* Slow-path: cache empty, do real allocation */
> -	page = __page_pool_alloc_pages_slow(pool, gfp);
> -	return page;
> +	return __page_pool_alloc_netmem_slow(pool, gfp);
>  }
> -EXPORT_SYMBOL(page_pool_alloc_pages);
> +EXPORT_SYMBOL(page_pool_alloc_netmem);
>  
>  /* Calculate distance between two u32 values, valid if distance is below 2^(31)
>   * https://en.wikipedia.org/wiki/Serial_number_arithmetic#General_Solution
> -- 
> 2.35.1
> 

Reviewed-by: Ilias Apalodimas