Subject: Re: [PATCH net-next v12 06/13] page_pool: devmem support
Message-ID: <439590d4-0f05-4f5e-80ec-e7fdf214e307@gmail.com>
Date: Mon, 17 Jun 2024 15:16:56 +0100
From: Pavel Begunkov
To: Mina Almasry, netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-alpha@vger.kernel.org,
 linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-renesas-soc@vger.kernel.org,
 linux-trace-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
 bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
 linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org
Cc: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
 Donald Hunter, Jonathan Corbet, Richard Henderson, Ivan Kokshaysky,
 Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
 Andreas Larsson, Sergey Shtylyov, Jesper Dangaard Brouer,
 Ilias Apalodimas, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
 Arnd Bergmann, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song,
 John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
 Steffen Klassert, Herbert Xu, David Ahern, Willem de Bruijn,
 Shuah Khan, Sumit Semwal, Christian König, Bagas Sanjaya,
 Christoph Hellwig, Nikolay Aleksandrov, David Wei, Jason Gunthorpe,
 Yunsheng Lin, Shailend Chand, Harshitha Ramamurthy, Shakeel Butt,
 Jeroen de Borst, Praveen Kaligineedi, linux-mm@kvack.org,
 Matthew Wilcox
In-Reply-To: <20240613013557.1169171-7-almasrymina@google.com>
References: <20240613013557.1169171-1-almasrymina@google.com>
 <20240613013557.1169171-7-almasrymina@google.com>

On 6/13/24 02:35, Mina Almasry wrote:
> Convert netmem to be a union of struct page and struct netmem. Overload
> the LSB of struct netmem* to indicate that it's a net_iov, otherwise
> it's a page.
> 
> Currently these entries in struct page are rented by the page_pool and
> used exclusively by the net stack:
> 
> struct {
> 	unsigned long pp_magic;
> 	struct page_pool *pp;
> 	unsigned long _pp_mapping_pad;
> 	unsigned long dma_addr;
> 	atomic_long_t pp_ref_count;
> };
> 
> Mirror these (and only these) entries into struct net_iov and implement
> netmem helpers that can access these common fields regardless of
> whether the underlying type is page or net_iov.
> 
> Implement checks for net_iov in netmem helpers which delegate to mm
> APIs, to ensure net_iov are never passed to the mm stack.
> 
> Signed-off-by: Mina Almasry

Apart from the small comments below,

Reviewed-by: Pavel Begunkov

> ---
>  include/net/netmem.h            | 137 ++++++++++++++++++++++++++++++--
>  include/net/page_pool/helpers.h |  25 +++---
>  net/core/devmem.c               |   3 +
>  net/core/page_pool.c            |  26 +++---
>  net/core/skbuff.c               |  22 +++--
>  5 files changed, 168 insertions(+), 45 deletions(-)
> 
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index 664df8325ece5..35ad237fdf29e 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h

...

> -/* Converting from page to netmem is always safe, because a page can always be
> - * a netmem.
> - */
>  static inline netmem_ref page_to_netmem(struct page *page)
>  {
>  	return (__force netmem_ref)page;
> @@ -68,17 +107,103 @@ static inline netmem_ref page_to_netmem(struct page *page)
>  
>  static inline int netmem_ref_count(netmem_ref netmem)
>  {
> +	/* The non-pp refcount of net_iov is always 1. On net_iov, we only
> +	 * support pp refcounting which uses the pp_ref_count field.
> +	 */
> +	if (netmem_is_net_iov(netmem))
> +		return 1;
> +
>  	return page_ref_count(netmem_to_page(netmem));
>  }
>  
>  static inline unsigned long netmem_to_pfn(netmem_ref netmem)
>  {
> +	if (netmem_is_net_iov(netmem))
> +		return 0;

IIRC 0 is a valid pfn. Not much of a concern since it's used only for
tracing, but it might make sense to pass some invalid pfn here, if
there is one.

> +
>  	return page_to_pfn(netmem_to_page(netmem));
>  }
> 

...

>  static inline netmem_ref netmem_compound_head(netmem_ref netmem)
>  {
> +	/* niov are never compounded */
> +	if (netmem_is_net_iov(netmem))
> +		return netmem;
> +
>  	return page_to_netmem(compound_head(netmem_to_page(netmem)));
>  }
>  
> +static inline void *netmem_address(netmem_ref netmem)

I don't think it's used anywhere, am I missing it?

> +{
> +	if (netmem_is_net_iov(netmem))
> +		return NULL;
> +
> +	return page_address(netmem_to_page(netmem));
> +}
> +

...

> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a5957d3359762..1152e3547795a 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -26,6 +26,8 @@

...

> 
>  /* If the page refcnt == 1, this will try to recycle the page.
> @@ -714,7 +713,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
>   * refcnt == 1 means page_pool owns page, and can recycle it.
>   *
>   * page is NOT reusable when allocated when system is under
> - * some pressure. (page_is_pfmemalloc)
> + * some pressure. (page_pool_page_is_pfmemalloc)

There is no page_pool_page_is_pfmemalloc().

>  	 */
>  	if (likely(__page_pool_page_can_be_recycled(netmem))) {
>  		/* Read barrier done in page_ref_count / READ_ONCE */
> @@ -727,6 +726,7 @@ __page_pool_put_page(struct page_pool *pool, netmem_ref netmem,
>  		/* Page found as candidate for recycling */
>  		return netmem;
>  	}

-- 
Pavel Begunkov
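
As an aside, a minimal, self-contained sketch of the LSB pointer-tagging
scheme the commit message describes. All names here (netmem_ref_t,
fake_page, fake_net_iov, ref_is_net_iov, ...) are hypothetical stand-ins
for illustration, not the patch's actual definitions:

/* Sketch of LSB tagging, assuming the untagged pointers are at least
 * 2-byte aligned so the low bit is always free to carry a type flag.
 */
#include <stdbool.h>
#include <stdint.h>

#define REF_IS_NET_IOV	1UL	/* LSB set => net_iov, clear => page */

typedef uintptr_t netmem_ref_t;

struct fake_page    { unsigned long pp_magic; /* ... */ };
struct fake_net_iov { unsigned long pp_magic; /* mirrors the pp fields */ };

static inline bool ref_is_net_iov(netmem_ref_t ref)
{
	return ref & REF_IS_NET_IOV;
}

static inline netmem_ref_t net_iov_to_ref(struct fake_net_iov *niov)
{
	/* tag the pointer with the type bit */
	return (uintptr_t)niov | REF_IS_NET_IOV;
}

static inline netmem_ref_t page_to_ref(struct fake_page *page)
{
	return (uintptr_t)page;		/* LSB is already clear */
}

static inline struct fake_net_iov *ref_to_net_iov(netmem_ref_t ref)
{
	return (struct fake_net_iov *)(ref & ~REF_IS_NET_IOV);
}

static inline struct fake_page *ref_to_page(netmem_ref_t ref)
{
	return (struct fake_page *)ref;	/* only valid when the LSB is clear */
}

/* The mirrored fields sit at the same offsets in both types, so a
 * helper can strip the tag and read through either one.
 */
static inline unsigned long ref_pp_magic(netmem_ref_t ref)
{
	if (ref_is_net_iov(ref))
		return ref_to_net_iov(ref)->pp_magic;
	return ref_to_page(ref)->pp_magic;
}

The point of the layout is that callers never need to know which type
they hold: the type check costs one bit test, and the shared fields are
reachable through either branch because they are mirrored at identical
offsets.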