From: Yunsheng Lin <linyunsheng@huawei.com>
Date: Wed, 30 Oct 2024 19:30:59 +0800
Subject: Re: [PATCH net-next v3 3/3] page_pool: fix IOMMU crash when driver has already unbound
To: Toke Høiland-Jørgensen, Jesper Dangaard Brouer
Cc: Robin Murphy, Alexander Duyck, IOMMU, Andrew Morton, Eric Dumazet,
 Ilias Apalodimas, kernel-team
Message-ID: <1eac33ae-e8e1-4437-9403-57291ba4ced6@huawei.com>
In-Reply-To: <878qu7c8om.fsf@toke.dk>
References: <20241022032214.3915232-1-linyunsheng@huawei.com>
 <20241022032214.3915232-4-linyunsheng@huawei.com>
 <113c9835-f170-46cf-92ba-df4ca5dfab3d@huawei.com>
 <878qudftsn.fsf@toke.dk>
 <87r084e8lc.fsf@toke.dk>
 <878qu7c8om.fsf@toke.dk>
On 2024/10/29 21:58, Toke Høiland-Jørgensen wrote:
> Yunsheng Lin writes:
>
>>>> I would prefer the waiting too if simple waiting fixed the test cases
>>>> that Yonglong and Haiqing were reporting and I did not look into the
>>>> rabbit hole of possible caching in networking.
>>>>
>>>> As mentioned in the commit log and [1]:
>>>> 1. ipv4 packet defragmentation timeout: this seems to cause delays of
>>>>    up to 30 secs, which was reported by Haiqing.
>>>> 2. skb_defer_free_flush(): this may cause infinite delay if there is
>>>>    no triggering of net_rx_action(), which was reported by Yonglong.
>>>>
>>>> For case 1, is it really ok to stall the driver unbind for up to 30
>>>> secs for the default setting of the defragmentation timeout?
>>>>
>>>> For case 2, it is possible to add a timeout for that kind of caching,
>>>> like the defragmentation timeout, but as mentioned in [2], this kind
>>>> of caching seems to be a normal thing in networking:
>>>
>>> Both 1 and 2 seem to be cases where the netdev teardown code can just
>>> make sure to kick the respective queues and make sure there's nothing
>>> outstanding (for (1), walk the defrag cache and clear out anything
>>> related to the netdev going away, for (2) make sure to kick
>>> net_rx_action() as part of the teardown).
>>
>> It would be good to be more specific about the 'kick' here; does it mean
>> taking the lock and doing one of the actions below for each cache instance?
>> 1. flush all the cache of each cache instance.
>> 2. scan for the page_pool owned pages and do fine-grained flushing.
>
> Depends on the context. The page pool is attached to a device, so it
> should be possible to walk the skb frags queue and just remove any skbs
> that refer to that netdevice, or something like that.

I am not sure the netdevice is still the same after the skb has passed
through all sorts of software netdevices; checking whether the page is a
page_pool owned page seems safer?

The scanning/flushing seems complicated and hard to get right if it
depends on the internal details of another subsystem's cache
implementation.

>
> As for the lack of net_rx_action(), this is related to the deferred
> freeing of skbs, so it seems like just calling skb_defer_free_flush() on
> teardown could be an option.

That was my initial thinking about the above case too, if we knew which
percpu sd to pass to skb_defer_free_flush() or which cpu's net_rx_action()
to trigger.

But it seems hard to tell which CPU the NAPI is running on before the NAPI
is disabled, which means skb_defer_free_flush() might need to be called
for every CPU with softirq disabled, as skb_defer_free_flush() calls
napi_consume_skb() with a budget of 1, or kick_defer_list_purge() might
need to be called for each CPU.

>
>>>> "Eric pointed out/predicted there's no guarantee that applications will
>>>> read / close their sockets so a page pool page may be stuck in a socket
>>>> (but not leaked) forever."
>>>
>>> As for this one, I would put that in the "well, let's see if this
>>> becomes a problem in practice" bucket.
>>
>> As per the commit log in [2], it seems it is already happening.
>>
>> Those caches are mostly per-cpu and per-socket, and there may be hundreds
>> of CPUs and thousands of sockets in one system; are you really sure we
>> need to take the lock of each cache instance, which may be thousands of
>> them, and do the flushing/scanning of memory used in networking, which
>> may be as large as the '24 GiB' mentioned by Jesper?
>
> Well, as above, the two issues you mentioned are per-netns (or possibly
> per-CPU), so those seem to be manageable to do on device teardown if the
> wait is really a problem.

As above, I am not sure it is still the same netns if the skb has passed
through all sorts of software netdevices?

>
> But, well, I'm not sure it is? You seem to be taking it as axiomatic
> that the wait in itself is bad. Why? It's just a bit memory being held
> on to while it is still in use, and so what?

Actually, I thought about adding some sort of timeout or kicking based on
Jakub's waiting patch too.
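For the deferred freeing case, the 'kick' would probably have to look
roughly like the sketch below. It is untested and only meant to show the
shape of it: the helper name is made up, and it assumes
skb_defer_free_flush() (today local to net/core/dev.c) or something
equivalent is callable from the unbind path:

#include <linux/netdevice.h>

/* Untested sketch: flush every CPU's deferred-free skb list with
 * softirqs disabled, so no skb (and no page_pool page it holds) is
 * left waiting for a net_rx_action() that may never be triggered.
 */
static void flush_all_skb_defer_lists(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct softnet_data *sd = per_cpu_ptr(&softnet_data, cpu);

		local_bh_disable();
		skb_defer_free_flush(sd);	/* frees via napi_consume_skb(skb, 1) */
		local_bh_enable();
	}
}

The other option would be calling kick_defer_list_purge() for each CPU so
the flushing happens on the CPU that owns the list, but either way it is
one more per-subsystem hook that the unbind path has to know about.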
But after looking at more of the caching in networking, waiting and
kicking/flushing seem harder than recording the inflight pages, mainly
because kicking/flushing needs every subsystem using page_pool owned pages
to provide a kicking/flushing mechanism for it to work, not to mention how
much time it takes to do all that kicking/flushing.

It seems the rdma subsystem uses a similar mechanism:
https://lwn.net/Articles/989087/

>
> -Toke
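FWIW, the 'page_pool owned page' check mentioned earlier in this mail is
at least cheap by itself. Below is a rough illustration (the helper name
is made up; the test itself mirrors the pp_magic/PP_SIGNATURE check that
net/core/skbuff.c already does when deciding whether a page should go back
to its pool), rather than trying to match the page against a particular
netdevice or netns:

#include <linux/mm.h>

/* Illustrative only: page_pool stamps PP_SIGNATURE into page->pp_magic
 * when it takes ownership of a page, so "is this a page_pool page?"
 * does not depend on knowing which netdevice or netns the skb has
 * passed through.  The low bits are masked off as they can be reused.
 */
static bool page_is_pp_owned(struct page *page)
{
	page = compound_head(page);

	return (page->pp_magic & ~0x3UL) == PP_SIGNATURE;
}

Of course, knowing a page is page_pool owned still does not tell us which
pool it belongs to without following page->pp, which is part of why
recording the inflight pages in the pool itself looks more tractable to me.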