From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: Alexander Lobakin <aleksander.lobakin@intel.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>
Cc: Alexander Duyck <alexanderduyck@fb.com>,
Yunsheng Lin <linyunsheng@huawei.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
Christoph Lameter <cl@linux.com>,
Vlastimil Babka <vbabka@suse.cz>,
Andrew Morton <akpm@linux-foundation.org>,
<nex.sw.ncis.osdt.itp.upstreaming@intel.com>,
<netdev@vger.kernel.org>, <intel-wired-lan@lists.osuosl.org>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next v9 7/9] libeth: add Rx buffer management
Date: Fri, 5 Apr 2024 12:32:55 +0200 [thread overview]
Message-ID: <45eb2bf1-e7b0-4045-82b3-93b9f81b7988@intel.com> (raw)
In-Reply-To: <20240404154402.3581254-8-aleksander.lobakin@intel.com>
On 4/4/24 17:44, Alexander Lobakin wrote:
> Add a couple of intuitive helpers to hide Rx buffer implementation details
> in the library instead of duplicating them across drivers. The settings are
> somewhat optimized for 100G+ NICs, but nothing here is really HW-specific.
> Use the new page_pool_dev_alloc() to dynamically switch between split-page
> and full-page modes depending on MTU, page size, required headroom, etc.
> For example, on x86_64 with the default driver settings, each page is
> shared between 2 buffers. Turning on XDP (not in this series) increases
> the headroom requirement, which pushes truesize past the 2048-byte
> boundary, so each buffer then gets a full page. The "ceiling" limit is
> %PAGE_SIZE, as only order-0 pages are used to avoid compound-page
> overhead. For the above architecture, this means a maximum linear frame
> size of 3712 bytes w/o XDP.
> Note that &libeth_fq is not a complete queue/ring structure for now, but
> rather a shim; eventually, the libeth-enabled drivers will move to it,
> with iavf being the first one.
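
Just to double-check the sizing math for readers following along, here is a
quick userspace back-of-the-envelope check. The 64 and 320 constants are my
assumptions for NET_SKB_PAD and SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
on x86_64; they are not taken from this patch:

#include <stdio.h>

int main(void)
{
	const unsigned int page_size = 4096;	/* order-0 page, x86_64 */
	const unsigned int headroom = 64;	/* NET_SKB_PAD; NET_IP_ALIGN == 0 */
	const unsigned int tailroom = 320;	/* SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */

	/* Split-page mode: two 2048-byte buffers per 4K page. */
	printf("buffers per page: %u\n", page_size / 2048);

	/* Full-page mode: 4096 - 64 - 320 == 3712, as quoted above. */
	printf("max linear frame w/o XDP: %u\n",
	       page_size - headroom - tailroom);

	return 0;
}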
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> drivers/net/ethernet/intel/libeth/Kconfig | 1 +
> include/net/libeth/rx.h | 117 ++++++++++++++++++++++
> drivers/net/ethernet/intel/libeth/rx.c | 98 ++++++++++++++++++
> 3 files changed, 216 insertions(+)
>
[...]
> +/**
> + * struct libeth_fqe - structure representing an Rx buffer
> + * @page: page holding the buffer
> + * @offset: offset from the page start (to the headroom)
> + * @truesize: total space occupied by the buffer (w/ headroom and tailroom)
> + *
> + * Depending on the MTU, the API switches between the one-page-per-frame
> + * and shared-page models (to conserve memory on bigger-page platforms).
> + * In case of the former, @offset is always 0 and @truesize is always
> + * %PAGE_SIZE.
> + */
> +struct libeth_fqe {
> + struct page *page;
> + u32 offset;
> + u32 truesize;
> +} __aligned_largest;
> +
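
A side note for other readers, as I understand the fields: a driver would
attach a filled buffer to an skb roughly like below. This is an illustrative
sketch only (the function name and @hr headroom parameter are mine, not
from the patch):

#include <linux/skbuff.h>

/* Hypothetical consumer: add a buffer described by a &libeth_fqe to an
 * skb. @hr is the driver's headroom, @len the HW-written length.
 */
static void example_fqe_add_frag(struct sk_buff *skb,
				 const struct libeth_fqe *fqe,
				 u32 hr, u32 len)
{
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, fqe->page,
			fqe->offset + hr, len, fqe->truesize);
}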
> +/**
> + * struct libeth_fq - structure representing a buffer queue
> + * @fp: hotpath part of the structure
> + * @pp: &page_pool for buffer management
> + * @fqes: array of Rx buffers
> + * @truesize: size to allocate per buffer, w/overhead
> + * @count: number of descriptors/buffers the queue has
> + * @buf_len: HW-writeable length per buffer
> + * @nid: ID of the closest NUMA node with memory
> + */
> +struct libeth_fq {
> + struct_group_tagged(libeth_fq_fp, fp,
> + struct page_pool *pp;
> + struct libeth_fqe *fqes;
> +
> + u32 truesize;
> + u32 count;
> + );
> +
> + /* Cold fields */
> + u32 buf_len;
> + int nid;
> +};
[...]
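For the fill path, here is my mental model of how the hotpath group is meant
to be used with the new allocator, sketched from the commit message; the
helper name and bool return convention are mine, not necessarily what rx.c
actually does:

#include <net/page_pool/helpers.h>

/* Hypothetical fill helper: allocate one buffer for slot @i. */
static bool example_fq_refill(const struct libeth_fq_fp *fq, u32 i)
{
	struct libeth_fqe *fqe = &fq->fqes[i];

	/* Request the pre-computed truesize; the pool may adjust it. */
	fqe->truesize = fq->truesize;
	fqe->page = page_pool_dev_alloc(fq->pp, &fqe->offset,
					&fqe->truesize);

	return !!fqe->page;
}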
Could you please unpack the meaning of the `fq` and `fqe` acronyms here?

Otherwise the whole series looks very good to me, thank you very much!
Thread overview: 28+ messages
2024-04-04 15:43 [PATCH net-next v9 0/9] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2024-04-04 15:43 ` [PATCH net-next v9 1/9] net: intel: introduce {,Intel} Ethernet common library Alexander Lobakin
2024-04-04 15:43 ` [PATCH net-next v9 2/9] iavf: kill "legacy-rx" for good Alexander Lobakin
2024-04-05 10:15 ` Przemek Kitszel
2024-04-04 15:43 ` [PATCH net-next v9 3/9] iavf: drop page splitting and recycling Alexander Lobakin
2024-04-04 15:43 ` [PATCH net-next v9 4/9] slab: introduce kvmalloc_array_node() and kvcalloc_node() Alexander Lobakin
2024-04-05 10:12 ` Przemek Kitszel
2024-04-05 10:44 ` Vlastimil Babka
2024-04-04 15:43 ` [PATCH net-next v9 5/9] page_pool: constify some read-only function arguments Alexander Lobakin
2024-04-04 15:43 ` [PATCH net-next v9 6/9] page_pool: add DMA-sync-for-CPU inline helper Alexander Lobakin
2024-04-04 15:44 ` [PATCH net-next v9 7/9] libeth: add Rx buffer management Alexander Lobakin
2024-04-05 10:32 ` Przemek Kitszel [this message]
2024-04-08 9:09 ` Alexander Lobakin
2024-04-09 10:58 ` Przemek Kitszel
2024-04-10 11:49 ` Alexander Lobakin
2024-04-10 13:01 ` Przemek Kitszel
2024-04-10 13:01 ` Alexander Lobakin
2024-04-10 13:12 ` Przemek Kitszel
2024-04-06 4:25 ` Jakub Kicinski
2024-04-08 9:11 ` Alexander Lobakin
2024-04-08 9:45 ` Alexander Lobakin
2024-04-09 16:17 ` Kees Cook
2024-04-10 13:36 ` Alexander Lobakin
2024-04-11 0:54 ` Jakub Kicinski
2024-04-11 9:07 ` Alexander Lobakin
2024-04-11 13:45 ` Jakub Kicinski
2024-04-04 15:44 ` [PATCH net-next v9 8/9] iavf: pack iavf_ring more efficiently Alexander Lobakin
2024-04-04 15:44 ` [PATCH net-next v9 9/9] iavf: switch to Page Pool Alexander Lobakin