Subject: Re: [RFC PATCH net-next] page_pool: Track DMA-mapped pages and unmap them when destroying the pool
From: Yunsheng Lin <linyunsheng@huawei.com>
To: Toke Høiland-Jørgensen, Yunsheng Lin, Andrew Morton, Jesper Dangaard Brouer, Ilias Apalodimas, "David S. Miller"
CC: Yonglong Liu, Mina Almasry, Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman
Date: Wed, 12 Mar 2025 20:04:26 +0800
In-Reply-To: <87wmcvitq8.fsf@toke.dk>
References: <20250308145500.14046-1-toke@redhat.com> <87cyepxn7n.fsf@toke.dk> <2c363f6a-f9e4-4dd2-941d-db446c501885@huawei.com> <875xkgykmi.fsf@toke.dk> <136f1d94-2cdd-43f6-a195-b87c55df2110@huawei.com> <87wmcvitq8.fsf@toke.dk>

On 2025/3/11 21:26, Toke Høiland-Jørgensen wrote:
> Yunsheng Lin writes:
>
>> On 2025/3/10 23:24,
>> Toke Høiland-Jørgensen wrote:
>>
>>>>
>>>> I guess that is one of the disadvantages of using an advanced struct
>>>> like Xarray :(
>>>
>>> Sure, there will be some overhead from using xarray, but I think the
>>> simplicity makes up for it; especially since we can limit this to the
>>
>> As I understand it, it is not less complicated; it is just that the
>> complexity is hidden behind xarray now.
>
> Yes, which encapsulates the complexity into a shared abstraction that is
> widely used in the kernel, so it does not add new complexity to the
> kernel as a whole. Whereas your series adds a whole bunch of new
> complexity to the kernel in the form of a new slab allocator.
>
>> Even if there is no space in 'struct page' to store the id, the
>> 'struct page' pointer itself could be used as the id if the xarray
>> could use a pointer as an id. But that might mean the memory
>> utilization would not be as efficient as it should be, and performance
>> would hurt too if there is more memory to be allocated and freed.
>
> I don't think it can. But sure, there can be other ways around this.
>
> FWIW, I don't think your idea of allocating page_pool_items to use as an
> indirection is totally crazy, but all the complexity around it (the
> custom slab allocator etc) is way too much. And if we can avoid the item
> indirection that is obviously better.
>
>> It seems it is just a matter of choice between using a tailor-made
>> page_pool specific optimization and using some generic advanced
>> struct like xarray.
>
> Yup, basically.
>
>> I chose the tailor-made one because it ensures the least overhead
>> possible from a performance and memory utilization perspective;
>> for example, the 'single producer, multiple consumer' guarantee
>> offered by NAPI context can avoid some locks and atomic operations.
>
> Right, and my main point is that the complexity of this is not worth it :)

Without that complexity, xarray has about 400 ns of overhead per element, compared to about 10 ns for the tailor-made one, which translates into more than 200% performance degradation for the page_pool03_slow test case:

[ 9190.217338] bench_page_pool_simple: Loaded
[ 9190.298495] time_bench: Type:for_loop Per elem: 0 cycles(tsc) 0.770 ns (step:0) - (measurement period time:0.077040570 sec time_interval:77040570) - (invoke count:100000000 tsc_interval:7704049)
[ 9190.893294] time_bench: Type:atomic_inc Per elem: 0 cycles(tsc) 5.775 ns (step:0) - (measurement period time:0.577582060 sec time_interval:577582060) - (invoke count:100000000 tsc_interval:57758202)
[ 9191.061026] time_bench: Type:lock Per elem: 1 cycles(tsc) 15.017 ns (step:0) - (measurement period time:0.150170250 sec time_interval:150170250) - (invoke count:10000000 tsc_interval:15017020)
[ 9191.771113] time_bench: Type:rcu Per elem: 0 cycles(tsc) 6.930 ns (step:0) - (measurement period time:0.693045930 sec time_interval:693045930) - (invoke count:100000000 tsc_interval:69304585)
[ 9231.309907] time_bench: Type:xarray Per elem: 39 cycles(tsc) 395.218 ns (step:0) - (measurement period time:39.521827650 sec time_interval:39521827650) - (invoke count:100000000 tsc_interval:3952182703)
[ 9231.327827] bench_page_pool_simple: time_bench_page_pool01_fast_path(): Cannot use page_pool fast-path
[ 9231.640260] time_bench: Type:no-softirq-page_pool01 Per elem: 3 cycles(tsc) 30.316 ns (step:0) - (measurement period time:0.303162870 sec time_interval:303162870) - (invoke count:10000000 tsc_interval:30316281)
[ 9231.658866] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): Cannot use page_pool fast-path
[ 9232.244955] time_bench: Type:no-softirq-page_pool02 Per elem: 5 cycles(tsc) 57.691 ns (step:0) - (measurement period time:0.576910280 sec time_interval:576910280) - (invoke count:10000000
tsc_interval:57691015)
[ 9232.263567] bench_page_pool_simple: time_bench_page_pool03_slow(): Cannot use page_pool fast-path
[ 9233.990491] time_bench: Type:no-softirq-page_pool03 Per elem: 17 cycles(tsc) 171.809 ns (step:0) - (measurement period time:1.718090950 sec time_interval:1718090950) - (invoke count:10000000 tsc_interval:171809088)
[ 9234.011402] bench_page_pool_simple: pp_tasklet_handler(): in_serving_softirq fast-path
[ 9234.019286] bench_page_pool_simple: time_bench_page_pool01_fast_path(): in_serving_softirq fast-path
[ 9234.328952] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 3 cycles(tsc) 30.057 ns (step:0) - (measurement period time:0.300574060 sec time_interval:300574060) - (invoke count:10000000 tsc_interval:30057400)
[ 9234.348155] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): in_serving_softirq fast-path
[ 9234.898627] time_bench: Type:tasklet_page_pool02_ptr_ring Per elem: 5 cycles(tsc) 54.146 ns (step:0) - (measurement period time:0.541466850 sec time_interval:541466850) - (invoke count:10000000 tsc_interval:54146680)
[ 9234.917742] bench_page_pool_simple: time_bench_page_pool03_slow(): in_serving_softirq fast-path
[ 9236.691076] time_bench: Type:tasklet_page_pool03_slow Per elem: 17 cycles(tsc) 176.467 ns (step:0) - (measurement period time:1.764675290 sec time_interval:1764675290) - (invoke count:10000000 tsc_interval:176467523)

Tested using the below diff:

+++ b/kernel/lib/bench_page_pool_simple.c
@@ -149,6 +149,48 @@ static int time_bench_rcu(
 	return loops_cnt;
 }
 
+static int time_bench_xarray(
+	struct time_bench_record *rec, void *data)
+{
+	uint64_t loops_cnt = 0;
+	struct xarray dma_mapped;
+	int i, err;
+	u32 id;
+	void *old;
+
+	xa_init_flags(&dma_mapped, XA_FLAGS_ALLOC1);
+
+	time_bench_start(rec);
+	/** Loop to measure **/
+	for (i = 0; i < rec->loops; i++) {
+
+		if (in_softirq())
+			err = xa_alloc(&dma_mapped, &id, &dma_mapped,
+				       XA_LIMIT(1, UINT_MAX), GFP_ATOMIC);
+		else
+			err = xa_alloc_bh(&dma_mapped, &id,
+					  &dma_mapped,
+					  XA_LIMIT(1, UINT_MAX), GFP_KERNEL);
+
+		if (err)
+			break;
+
+		loops_cnt++;
+		barrier(); /* avoid compiler optimizing away this loop */
+
+		if (in_softirq())
+			old = xa_cmpxchg(&dma_mapped, id, &dma_mapped,
+					 NULL, 0);
+		else
+			old = xa_cmpxchg_bh(&dma_mapped, id, &dma_mapped,
+					    NULL, 0);
+
+		if (old != &dma_mapped)
+			break;
+	}
+	time_bench_stop(rec, loops_cnt);
+
+	xa_destroy(&dma_mapped);
+	return loops_cnt;
+}
+
 /* Helper for filling some page's into ptr_ring */
 static void pp_fill_ptr_ring(struct page_pool *pp, int elems)
 {
@@ -334,6 +376,8 @@ static int run_benchmark_tests(void)
 			"lock", NULL, time_bench_lock);
 	time_bench_loop(nr_loops*10, 0,
 			"rcu", NULL, time_bench_rcu);
+	time_bench_loop(nr_loops*10, 0,
+			"xarray", NULL, time_bench_xarray);
 }
 
 /* This test cannot activate correct code path, due to no-softirq ctx */

>
>>> cases where it's absolutely needed.
>>
>> The above can also be done for page_pool_item too, as the lower
>> 2 bits can be used to indicate whether the pointer in 'struct page'
>> is a 'page_pool_item' or a 'page_pool'. I just don't think it is
>> necessary yet, as it might add more checking in the fast path.
>
> Yup, did think about using the lower bits to distinguish if it does turn
> out that we can't avoid an indirection. See above; it's not actually the

The 'memdesc' seems like an indirection to me when it is used to shrink
'struct page' to a smaller size.

> page_pool_item concept that is my main issue with your series.
>
> -Toke