From: Yunsheng Lin <linyunsheng@huawei.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>,
Matteo Croce <mcroce@linux.microsoft.com>,
Networking <netdev@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>,
Ayush Sawal <ayush.sawal@chelsio.com>,
"Vinay Kumar Yadav" <vinay.yadav@chelsio.com>,
Rohit Maheshwari <rohitm@chelsio.com>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>,
Thomas Petazzoni <thomas.petazzoni@bootlin.com>,
Marcin Wojtas <mw@semihalf.com>,
Russell King <linux@armlinux.org.uk>,
Mirko Lindner <mlindner@marvell.com>,
Stephen Hemminger <stephen@networkplumber.org>,
"Tariq Toukan" <tariqt@nvidia.com>,
Jesper Dangaard Brouer <hawk@kernel.org>,
"Alexei Starovoitov" <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
"John Fastabend" <john.fastabend@gmail.com>,
Boris Pismenny <borisp@nvidia.com>, Arnd Bergmann <arnd@arndb.de>,
Andrew Morton <akpm@linux-foundation.org>,
"Peter Zijlstra (Intel)" <peterz@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>, Yu Zhao <yuzhao@google.com>,
Will Deacon <will@kernel.org>,
Michel Lespinasse <walken@google.com>,
Fenghua Yu <fenghua.yu@intel.com>, Roman Gushchin <guro@fb.com>,
Hugh Dickins <hughd@google.com>, Peter Xu <peterx@redhat.com>,
Jason Gunthorpe <jgg@ziepe.ca>,
Jonathan Lemon <jonathan.lemon@gmail.com>,
Alexander Lobakin <alobakin@pm.me>,
Cong Wang <cong.wang@bytedance.com>, wenxu <wenxu@ucloud.cn>,
Kevin Hao <haokexin@gmail.com>,
Jakub Sitnicki <jakub@cloudflare.com>,
Marco Elver <elver@google.com>,
Willem de Bruijn <willemb@google.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Guillaume Nault <gnault@redhat.com>,
open list <linux-kernel@vger.kernel.org>,
<linux-rdma@vger.kernel.org>, bpf <bpf@vger.kernel.org>,
Eric Dumazet <edumazet@google.com>,
David Ahern <dsahern@gmail.com>,
Lorenzo Bianconi <lorenzo@kernel.org>,
Saeed Mahameed <saeedm@nvidia.com>, Andrew Lunn <andrew@lunn.ch>,
Paolo Abeni <pabeni@redhat.com>,
Sven Auhagen <sven.auhagen@voleatech.de>
Subject: Re: [PATCH net-next v4 1/4] mm: add a signature in struct page
Date: Thu, 13 May 2021 11:25:22 +0800
Message-ID: <8f815871-e384-3e65-56a8-39e379dea4ce@huawei.com>
In-Reply-To: <YJyQYCj3UUk5Sp4Z@casper.infradead.org>
On 2021/5/13 10:35, Matthew Wilcox wrote:
> On Thu, May 13, 2021 at 10:15:26AM +0800, Yunsheng Lin wrote:
>> On 2021/5/12 23:57, Matthew Wilcox wrote:
>>> You'll need something like this because of the current use of
>>> page->index to mean "pfmemalloc".
>>>
>>> @@ -1682,12 +1684,12 @@ static inline bool page_is_pfmemalloc(const struct page *page)
>>> */
>>> static inline void set_page_pfmemalloc(struct page *page)
>>> {
>>> - page->index = -1UL;
>>> + page->compound_head = 2;
>>
>> Is there any reason not to use "page->compound_head |= 2", to match the
>> "page->compound_head & 2" check in page_is_pfmemalloc() above?
>>
>> Also, this may mean we need to make sure only a head or base page is
>> passed to set_page_pfmemalloc() when using "page->compound_head = 2",
>> because for a tail page it would also clear bit 0 and the head page
>> pointer, right?
>
> I think what you're missing here is that this page is freshly allocated.
> This is information being passed from the page allocator to any user
> who cares to look at it. By definition, it's set on the head/base page, and
> there is nothing else present in the page->compound_head. Doing an OR
> is more expensive than just setting it to 2.
Thanks for clarifying.
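
So if I read the suggestion correctly, the helpers would end up roughly
like the sketch below (just my summary of the intent, not the actual
patch): bit 0 of compound_head still marks a tail page, and bit 1 carries
the pfmemalloc hint on a freshly allocated head/base page, so a plain
store of 2 is enough and cheaper than an OR:

static inline void set_page_pfmemalloc(struct page *page)
{
	/* fresh from the allocator, so compound_head is known to be 0 */
	page->compound_head = 2;
}

static inline void clear_page_pfmemalloc(struct page *page)
{
	page->compound_head = 0;
}

static inline bool page_is_pfmemalloc(const struct page *page)
{
	/* only meaningful while the allocator's caller owns the page */
	return page->compound_head & 2;
}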
>
> I'm not really sure why set/clear page_pfmemalloc are defined in mm.h.
> They should probably be in mm/page_alloc.c where nobody else would ever
> think that they could or should be calling them.
>
>>> struct { /* page_pool used by netstack */
>>> - /**
>>> - * @dma_addr: might require a 64-bit value on
>>> - * 32-bit architectures.
>>> - */
>>> + unsigned long pp_magic;
>>> + struct page_pool *pp;
>>> + unsigned long _pp_mapping_pad;
>>> unsigned long dma_addr[2];
>>
>> It seems the dma_addr[1] aliases with page->private, and
>> page_private() is used in skb_copy_ubufs()?
>>
>> It seems we could avoid using page_private() in skb_copy_ubufs()
>> by using a dynamically allocated array to store the page pointers?
>
> This is why I hate it when people use page_private() instead of
> documenting what they're doing in struct page. There is no way to know
> (as an outsider to networking) whether the page in skb_copy_ubufs()
> comes from page_pool. I looked at it, and thought it didn't:
>
> page = alloc_page(gfp_mask);
>
> but if you say those pages can come from page_pool, I believe you.
The page_private() use in skb_copy_ubufs() does indeed seem ok here:
page_private() is only used on pages that were freshly allocated with
alloc_page(), so they can never be page_pool pages.
Sorry for the confusion.
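
For the record, the relevant pattern in skb_copy_ubufs() is roughly the
following (simplified and quoted from memory, not verbatim): the freshly
allocated pages are chained through page->private only for later cleanup,
so the dma_addr[1]/page->private aliasing in the new page_pool layout
never applies to them:

	page = alloc_page(gfp_mask);
	if (!page) {
		/* unwind the chain built through page->private */
		while (head) {
			struct page *next = (struct page *)page_private(head);

			put_page(head);
			head = next;
		}
		return -ENOMEM;
	}
	/* remember the previously allocated page in page->private */
	set_page_private(page, (unsigned long)head);
	head = page;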