Date: Tue, 7 Jan 2025 15:26:35 +0100
Subject: Re: [PATCH net-next v6 0/8] fix two bugs related to page_pool
From: Jesper Dangaard Brouer <hawk@kernel.org>
To: Yunsheng Lin <linyunsheng@huawei.com>, davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com
Cc: liuyonglong@huawei.com, fanghaiqing@huawei.com, zhangkun09@huawei.com, Alexander Lobakin, Robin Murphy, Alexander Duyck, Andrew Morton, IOMMU, MM, Alexei Starovoitov, Daniel Borkmann, John Fastabend, Matthias Brugger, AngeloGioacchino Del Regno, netdev@vger.kernel.org, intel-wired-lan@lists.osuosl.org, bpf@vger.kernel.org, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mediatek@lists.infradead.org
References: <20250106130116.457938-1-linyunsheng@huawei.com>
In-Reply-To: <20250106130116.457938-1-linyunsheng@huawei.com>
On 06/01/2025 14.01, Yunsheng Lin wrote:
> This patchset fixes a possible time-window problem for page_pool and
> the DMA API misuse problem mentioned in [1], and tries to avoid the
> overhead of the fix through some optimizations.
>
> From the performance data below, the overhead is not very pronounced,
> due to performance variation in time_bench_page_pool01_fast_path()
> and time_bench_page_pool02_ptr_ring(); there is about 20 ns of
> overhead in time_bench_page_pool03_slow() from fixing the bug.
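As a quick sanity check on the "about 20 ns" figure: time_bench reports the per-element cost as time_interval (in ns) divided by the invoke count, so the slow-path overhead can be recomputed directly from the no-softirq-page_pool03 numbers in the before/after runs quoted in this mail (a sketch, not output of the benchmark itself):

```python
# Recompute the slow-path overhead from the time_bench log values.
# Per-elem ns = time_interval(ns) / invoke_count; numbers are copied
# from the before/after no-softirq-page_pool03 lines in this thread.
before_ns = 1_818_495_880 / 10_000_000  # before patchset: 181.849 ns
after_ns = 2_015_141_210 / 10_000_000   # after patchset:  201.514 ns
overhead_ns = after_ns - before_ns      # ~19.66 ns, i.e. "about 20 ns"
print(f"before={before_ns:.3f} ns  after={after_ns:.3f} ns  "
      f"overhead={overhead_ns:.3f} ns")
```

This agrees with the cover letter's claim: the delta is roughly 19.7 ns.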
>
> Before this patchset:
> root@(none)$ insmod bench_page_pool_simple.ko
> [ 323.367627] bench_page_pool_simple: Loaded
> [ 323.448747] time_bench: Type:for_loop Per elem: 0 cycles(tsc) 0.769 ns (step:0) - (measurement period time:0.076997150 sec time_interval:76997150) - (invoke count:100000000 tsc_interval:7699707)
> [ 324.812884] time_bench: Type:atomic_inc Per elem: 1 cycles(tsc) 13.468 ns (step:0) - (measurement period time:1.346855130 sec time_interval:1346855130) - (invoke count:100000000 tsc_interval:134685507)
> [ 324.980875] time_bench: Type:lock Per elem: 1 cycles(tsc) 15.010 ns (step:0) - (measurement period time:0.150101270 sec time_interval:150101270) - (invoke count:10000000 tsc_interval:15010120)
> [ 325.652195] time_bench: Type:rcu Per elem: 0 cycles(tsc) 6.542 ns (step:0) - (measurement period time:0.654213000 sec time_interval:654213000) - (invoke count:100000000 tsc_interval:65421294)
> [ 325.669215] bench_page_pool_simple: time_bench_page_pool01_fast_path(): Cannot use page_pool fast-path
> [ 325.974848] time_bench: Type:no-softirq-page_pool01 Per elem: 2 cycles(tsc) 29.633 ns (step:0) - (measurement period time:0.296338200 sec time_interval:296338200) - (invoke count:10000000 tsc_interval:29633814)

(this is the line referred to below)

> [ 325.993517] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): Cannot use page_pool fast-path
> [ 326.576636] time_bench: Type:no-softirq-page_pool02 Per elem: 5 cycles(tsc) 57.391 ns (step:0) - (measurement period time:0.573911820 sec time_interval:573911820) - (invoke count:10000000 tsc_interval:57391174)
> [ 326.595307] bench_page_pool_simple: time_bench_page_pool03_slow(): Cannot use page_pool fast-path
> [ 328.422661] time_bench: Type:no-softirq-page_pool03 Per elem: 18 cycles(tsc) 181.849 ns (step:0) - (measurement period time:1.818495880 sec time_interval:1818495880) - (invoke count:10000000 tsc_interval:181849581)
> [ 328.441681] bench_page_pool_simple: pp_tasklet_handler(): in_serving_softirq fast-path
> [ 328.449584] bench_page_pool_simple: time_bench_page_pool01_fast_path(): in_serving_softirq fast-path
> [ 328.755031] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 2 cycles(tsc) 29.632 ns (step:0) - (measurement period time:0.296327910 sec time_interval:296327910) - (invoke count:10000000 tsc_interval:29632785)

It is strange that the fast-path "tasklet_page_pool01_fast_path" isn't
faster than "no-softirq-page_pool01" above; both are around 29.63 ns.

What hardware is this? E.g. the cycle count of 2 cycles(tsc) seems
strange. On my testlab hardware, an Intel CPU E5-1650 v4 @3.60GHz, my
fast-path number is 5.202 ns (18 cycles) for
"tasklet_page_pool01_fast_path". The raw data look like this:

[Tue Jan 7 15:15:18 2025] bench_page_pool_simple: pp_tasklet_handler(): in_serving_softirq fast-path
[Tue Jan 7 15:15:18 2025] bench_page_pool_simple: time_bench_page_pool01_fast_path(): in_serving_softirq fast-path
[Tue Jan 7 15:15:18 2025] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 18 cycles(tsc) 5.202 ns (step:0) - (measurement period time:0.052020430 sec time_interval:52020430) - (invoke count:10000000 tsc_interval:187272981)
[Tue Jan 7 15:15:18 2025] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): in_serving_softirq fast-path
[Tue Jan 7 15:15:19 2025] time_bench: Type:tasklet_page_pool02_ptr_ring Per elem: 55 cycles(tsc) 15.343 ns (step:0) - (measurement period time:0.153438301 sec time_interval:153438301) - (invoke count:10000000 tsc_interval:552378168)
[Tue Jan 7 15:15:19 2025] bench_page_pool_simple: time_bench_page_pool03_slow(): in_serving_softirq fast-path
[Tue Jan 7 15:15:19 2025] time_bench: Type:tasklet_page_pool03_slow Per elem: 243 cycles(tsc) 67.725 ns (step:0) - (measurement period time:0.677255574 sec time_interval:677255574) - (invoke count:10000000 tsc_interval:2438124315)

> [ 328.774308] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): in_serving_softirq fast-path
> [ 329.578579] time_bench: Type:tasklet_page_pool02_ptr_ring Per elem: 7 cycles(tsc) 79.523 ns (step:0) - (measurement period time:0.795236560 sec time_interval:795236560) - (invoke count:10000000 tsc_interval:79523650)
> [ 329.597769] bench_page_pool_simple: time_bench_page_pool03_slow(): in_serving_softirq fast-path
> [ 331.507501] time_bench: Type:tasklet_page_pool03_slow Per elem: 19 cycles(tsc) 190.104 ns (step:0) - (measurement period time:1.901047510 sec time_interval:1901047510) - (invoke count:10000000 tsc_interval:190104743)
>
> After this patchset:
> root@(none)$ insmod bench_page_pool_simple.ko
> [ 138.634758] bench_page_pool_simple: Loaded
> [ 138.715879] time_bench: Type:for_loop Per elem: 0 cycles(tsc) 0.769 ns (step:0) - (measurement period time:0.076972720 sec time_interval:76972720) - (invoke count:100000000 tsc_interval:7697265)
> [ 140.079897] time_bench: Type:atomic_inc Per elem: 1 cycles(tsc) 13.467 ns (step:0) - (measurement period time:1.346735370 sec time_interval:1346735370) - (invoke count:100000000 tsc_interval:134673531)
> [ 140.247841] time_bench: Type:lock Per elem: 1 cycles(tsc) 15.005 ns (step:0) - (measurement period time:0.150055080 sec time_interval:150055080) - (invoke count:10000000 tsc_interval:15005497)
> [ 140.919072] time_bench: Type:rcu Per elem: 0 cycles(tsc) 6.541 ns (step:0) - (measurement period time:0.654125000 sec time_interval:654125000) - (invoke count:100000000 tsc_interval:65412493)
> [ 140.936091] bench_page_pool_simple: time_bench_page_pool01_fast_path(): Cannot use page_pool fast-path
> [ 141.246985] time_bench: Type:no-softirq-page_pool01 Per elem: 3 cycles(tsc) 30.159 ns (step:0) - (measurement period time:0.301598160 sec time_interval:301598160) - (invoke count:10000000 tsc_interval:30159812)
> [ 141.265654] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): Cannot use page_pool fast-path
> [ 141.976265] time_bench: Type:no-softirq-page_pool02 Per elem: 7 cycles(tsc) 70.140 ns (step:0) - (measurement period time:0.701405780 sec time_interval:701405780) - (invoke count:10000000 tsc_interval:70140573)
> [ 141.994933] bench_page_pool_simple: time_bench_page_pool03_slow(): Cannot use page_pool fast-path
> [ 144.018945] time_bench: Type:no-softirq-page_pool03 Per elem: 20 cycles(tsc) 201.514 ns (step:0) - (measurement period time:2.015141210 sec time_interval:2015141210) - (invoke count:10000000 tsc_interval:201514113)
> [ 144.037966] bench_page_pool_simple: pp_tasklet_handler(): in_serving_softirq fast-path
> [ 144.045870] bench_page_pool_simple: time_bench_page_pool01_fast_path(): in_serving_softirq fast-path
> [ 144.205045] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 1 cycles(tsc) 15.005 ns (step:0) - (measurement period time:0.150056510 sec time_interval:150056510) - (invoke count:10000000 tsc_interval:15005645)

This 15.005 ns looks like a significant improvement over 29.633 ns.

> [ 144.224320] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): in_serving_softirq fast-path
> [ 144.916044] time_bench: Type:tasklet_page_pool02_ptr_ring Per elem: 6 cycles(tsc) 68.269 ns (step:0) - (measurement period time:0.682693070 sec time_interval:682693070) - (invoke count:10000000 tsc_interval:68269300)
> [ 144.935234] bench_page_pool_simple: time_bench_page_pool03_slow(): in_serving_softirq fast-path
> [ 146.997684] time_bench: Type:tasklet_page_pool03_slow Per elem: 20 cycles(tsc) 205.376 ns (step:0) - (measurement period time:2.053766310 sec time_interval:2053766310) - (invoke count:10000000 tsc_interval:205376624)
>

Looks like I should also try out this patchset on my testlab, as this
hardware seems significantly different from mine...

> 1. https://lore.kernel.org/lkml/8067f204-1380-4d37-8ffd-007fc6f26738@kernel.org/T/
>
> CC: Alexander Lobakin
> CC: Robin Murphy
> CC: Alexander Duyck
> CC: Andrew Morton
> CC: IOMMU
> CC: MM
>
> Change log:
> V6:
> 1. Repost based on latest net-next.
> 2. Rename page_pool_to_pp() to page_pool_get_pp().
>
> V5:
> 1. Support unlimited inflight pages.
> 2. Add some optimization to avoid the overhead of fixing the bug.
>
> V4:
> 1. Use scanning to do the unmapping.
> 2. Split dma sync skipping into a separate patch.
>
> V3:
> 1. Target net-next tree instead of net tree.
> 2. Narrow the rcu lock as per the discussion in v2.
> 3. Check the unmapping cnt against the inflight cnt.
>
> V2:
> 1. Add an item_full stat.
> 2. Use container_of() for page_pool_to_pp().
>
> Yunsheng Lin (8):
>   page_pool: introduce page_pool_get_pp() API
>   page_pool: fix timing for checking and disabling napi_local
>   page_pool: fix IOMMU crash when driver has already unbound
>   page_pool: support unlimited number of inflight pages
>   page_pool: skip dma sync operation for inflight pages
>   page_pool: use list instead of ptr_ring for ring cache
>   page_pool: batch refilling pages to reduce atomic operation
>   page_pool: use list instead of array for alloc cache
>
>  drivers/net/ethernet/freescale/fec_main.c     |   8 +-
>  .../ethernet/google/gve/gve_buffer_mgmt_dqo.c |   2 +-
>  drivers/net/ethernet/intel/iavf/iavf_txrx.c   |   6 +-
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c   |  14 +-
>  drivers/net/ethernet/intel/libeth/rx.c        |   2 +-
>  .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |   3 +-
>  drivers/net/netdevsim/netdev.c                |   6 +-
>  drivers/net/wireless/mediatek/mt76/mt76.h     |   2 +-
>  include/linux/mm_types.h                      |   2 +-
>  include/linux/skbuff.h                        |   1 +
>  include/net/libeth/rx.h                       |   3 +-
>  include/net/netmem.h                          |  24 +-
>  include/net/page_pool/helpers.h               |  11 +
>  include/net/page_pool/types.h                 |  63 +-
>  net/core/devmem.c                             |   4 +-
>  net/core/netmem_priv.h                        |   5 +-
>  net/core/page_pool.c                          | 660 ++++++++++++++----
>  net/core/page_pool_priv.h                     |  12 +-
>  18 files changed, 664 insertions(+), 164 deletions(-)
>
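Coming back to the odd 2 cycles(tsc) counts above: time_bench prints both a counter delta (tsc_interval) and a wall-clock measurement period, so the effective frequency of whatever counter it is reading can be recovered as tsc_interval / period. A sketch using the tasklet_page_pool01_fast_path numbers quoted in this mail (my own reading of the logs, not something the benchmark prints; the ~100 MHz result is merely consistent with a slow system counter, e.g. an arm64 generic timer, rather than an x86 TSC):

```python
# Infer the effective "tsc" counter frequency from time_bench output:
# counter Hz = tsc_interval / measurement period (seconds).
# Values copied from the tasklet_page_pool01_fast_path lines above.
def counter_hz(tsc_interval, period_sec):
    return tsc_interval / period_sec

yunsheng_hz = counter_hz(29_632_785, 0.296327910)   # cover-letter test box
jesper_hz = counter_hz(187_272_981, 0.052020430)    # E5-1650 v4 testlab

# ~100 MHz vs ~3.6 GHz: with a ~100 MHz counter, a ~30 ns operation is
# only ~2-3 counter ticks, which explains the tiny cycles(tsc) column.
print(f"{yunsheng_hz / 1e6:.1f} MHz vs {jesper_hz / 1e9:.2f} GHz")
```

That would also explain why the two fast-path variants look identical at 29.63 ns: the counter simply lacks the resolution to separate them.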