From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Wed, 6 Nov 2024 11:55:07 -0800
Subject: Re: [PATCH net-next v3 3/3] page_pool: fix IOMMU crash when driver has already unbound
To: Jesper Dangaard Brouer
Cc: Yunsheng Lin, Toke Høiland-Jørgensen, davem@davemloft.net, kuba@kernel.org,
 pabeni@redhat.com, zhangkun09@huawei.com, fanghaiqing@huawei.com,
 liuyonglong@huawei.com, Robin Murphy, IOMMU, Andrew Morton, Eric Dumazet,
 Ilias Apalodimas, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 netdev@vger.kernel.org, kernel-team, Viktor Malik
References: <20241022032214.3915232-1-linyunsheng@huawei.com>
 <20241022032214.3915232-4-linyunsheng@huawei.com>
 <113c9835-f170-46cf-92ba-df4ca5dfab3d@huawei.com>
 <878qudftsn.fsf@toke.dk> <87r084e8lc.fsf@toke.dk>
 <0c146fb8-4c95-4832-941f-dfc3a465cf91@kernel.org>
 <204272e7-82c3-4437-bb0d-2c3237275d1f@huawei.com>
 <18ba4489-ad30-423e-9c54-d4025f74c193@kernel.org>
On Wed, Nov 6, 2024 at 7:57 AM Jesper Dangaard Brouer wrote:
>
>
>
> On 06/11/2024 14.25, Jesper Dangaard Brouer wrote:
> >
> > On 26/10/2024 09.33, Yunsheng Lin wrote:
> >> On 2024/10/25 22:07, Jesper Dangaard Brouer wrote:
> >>
> >> ...
> >>
> >>>
> >>>>> You and Jesper seem to be mentioning a possible fact that there might
> >>>>> be 'hundreds of gigs of memory' needed for inflight pages. It would
> >>>>> be nice to provide more info or reasoning about why 'hundreds of gigs
> >>>>> of memory' is needed here, so that we don't do an over-designed thing
> >>>>> to support recording unlimited in-flight pages if the driver unbound
> >>>>> stalling turns out impossible and the inflight pages do need to be
> >>>>> recorded.
> >>>>
> >>>> I don't have a concrete example of a use that will blow the limit you
> >>>> are setting (but maybe Jesper does), I am simply objecting to the
> >>>> arbitrary imposing of any limit at all. It smells a lot like "640k ought
> >>>> to be enough for anyone".
> >>>>
> >>>
> >>> As I wrote before, in *production* I'm seeing TCP memory reach 24 GiB
> >>> (on machines with 384 GiB memory). I have attached a grafana screenshot
> >>> to prove what I'm saying.
> >>>
> >>> As my co-worker Mike Freemon has explained to me (and in more detail in
> >>> the blogpost[1]), it is no coincidence that the graph has a strange
> >>> "ceiling" close to 24 GiB (on machines with 384 GiB total memory). This
> >>> is because the TCP network stack goes into a memory "under pressure"
> >>> state when 6.25% of total memory is used by the TCP stack. (Detail: the
> >>> system will stay in that mode until allocated TCP memory falls below
> >>> 4.68% of total memory.)
> >>>
> >>> [1]
> >>> https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/
> >>
> >> Thanks for the info.
> >
> > Some more info from production servers.
> >
> > (I'm amazed at what we can do with a simple bpftrace script, Cc Viktor)
> >
> > In the bpftrace script/oneliner below I'm extracting the inflight count for
> > all page_pools in the system, and storing that in a histogram hash.
> >
> > sudo bpftrace -e '
> > rawtracepoint:page_pool_state_release { @cnt[probe]=count();
> >   @cnt_total[probe]=count();
> >   $pool=(struct page_pool*)arg0;
> >   $release_cnt=(uint32)arg2;
> >   $hold_cnt=$pool->pages_state_hold_cnt;
> >   $inflight_cnt=(int32)($hold_cnt - $release_cnt);
> >   @inflight=hist($inflight_cnt);
> > }
> > interval:s:1 {time("\n%H:%M:%S\n");
> >   print(@cnt); clear(@cnt);
> >   print(@inflight);
> >   print(@cnt_total);
> > }'
> >
> > The page_pool behavior depends on how the NIC driver uses it, so I've run
> > this on two prod servers with the bnxt and mlx5 drivers, on a 6.6.51 kernel.
> >
> > Driver: bnxt_en
> > - kernel 6.6.51
> >
> > @cnt[rawtracepoint:page_pool_state_release]: 8447
> > @inflight:
> > [0]                 507 |                                        |
> > [1]                 275 |                                        |
> > [2, 4)              261 |                                        |
> > [4, 8)              215 |                                        |
> > [8, 16)             259 |                                        |
> > [16, 32)            361 |                                        |
> > [32, 64)            933 |                                        |
> > [64, 128)          1966 |                                        |
> > [128, 256)       937052 |@@@@@@@@@                               |
> > [256, 512)      5178744 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > [512, 1K)         73908 |                                        |
> > [1K, 2K)        1220128 |@@@@@@@@@@@@                            |
> > [2K, 4K)        1532724 |@@@@@@@@@@@@@@@                         |
> > [4K, 8K)        1849062 |@@@@@@@@@@@@@@@@@@                      |
> > [8K, 16K)       1466424 |@@@@@@@@@@@@@@                          |
> > [16K, 32K)       858585 |@@@@@@@@                                |
> > [32K, 64K)       693893 |@@@@@@                                  |
> > [64K, 128K)      170625 |@                                       |
> >
> > Driver: mlx5_core
> > - Kernel: 6.6.51
> >
> > @cnt[rawtracepoint:page_pool_state_release]: 1975
> > @inflight:
> > [128, 256)        28293 |@@@@                                |
> > [256, 512)       184312 |@@@@@@@@@@@@@@@@@@@@@@@@@@@         |
> > [512, 1K)             0 |                                    |
> > [1K, 2K)           4671 |                                    |
> > [2K, 4K)         342571 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> > [4K, 8K)         180520 |@@@@@@@@@@@@@@@@@@@@@@@@@@@         |
> > [8K, 16K)         96483 |@@@@@@@@@@@@@@                      |
> > [16K, 32K)        25133 |@@@                                 |
> > [32K, 64K)         8274 |@                                   |
> >
> >
> > The key thing to notice is that we have up to 128,000 pages in flight on
> > these random production servers. The NIC has 64 RX queues configured,
> > thus also 64 page_pool objects.
> >
>
> I realized that we primarily want to know the maximum in-flight pages.
> So, I modified the bpftrace oneliner to track the max for each page_pool
> in the system.
>
> sudo bpftrace -e '
> rawtracepoint:page_pool_state_release { @cnt[probe]=count();
>   @cnt_total[probe]=count();
>   $pool=(struct page_pool*)arg0;
>   $release_cnt=(uint32)arg2;
>   $hold_cnt=$pool->pages_state_hold_cnt;
>   $inflight_cnt=(int32)($hold_cnt - $release_cnt);
>   $cur=@inflight_max[$pool];
>   if ($inflight_cnt > $cur) {
>     @inflight_max[$pool]=$inflight_cnt;}
> }
> interval:s:1 {time("\n%H:%M:%S\n");
>   print(@cnt); clear(@cnt);
>   print(@inflight_max);
>   print(@cnt_total);
> }'
>
> I've attached the output from the script.
> For unknown reasons this system had 199 page_pool objects.
>
> The 20 top users:
>
> $ cat out02.inflight-max | grep inflight_max | tail -n 20
> @inflight_max[0xffff88829133d800]: 26473
> @inflight_max[0xffff888293c3e000]: 27042
> @inflight_max[0xffff888293c3b000]: 27709
> @inflight_max[0xffff8881076f2800]: 29400
> @inflight_max[0xffff88818386e000]: 29690
> @inflight_max[0xffff8882190b1800]: 29813
> @inflight_max[0xffff88819ee83800]: 30067
> @inflight_max[0xffff8881076f4800]: 30086
> @inflight_max[0xffff88818386b000]: 31116
> @inflight_max[0xffff88816598f800]: 36970
> @inflight_max[0xffff8882190b7800]: 37336
> @inflight_max[0xffff888293c38800]: 39265
> @inflight_max[0xffff888293c3c800]: 39632
> @inflight_max[0xffff888293c3b800]: 43461
> @inflight_max[0xffff888293c3f000]: 43787
> @inflight_max[0xffff88816598f000]: 44557
> @inflight_max[0xffff888132ce9000]: 45037
> @inflight_max[0xffff888293c3f800]: 51843
> @inflight_max[0xffff888183869800]: 62612
> @inflight_max[0xffff888113d08000]: 73203
>
> Adding all values together:
>
> grep inflight_max out02.inflight-max | awk 'BEGIN {tot=0} {tot+=$2;
>   printf "total:" tot "\n"}' | tail -n 1
>
> total:1707129
>
> Worst case we need a data structure holding 1,707,129 pages.
> Fortunately, we don't need a single data structure as this will be split
> between 199 page_pools.
>
> --Jesper

Is there any specific reason why we need to store the pages instead of
just scanning the page tables to look for them? We should already know
how many we need to look for and free.
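(For reference, the count in question is simply the difference between the
pool's hold and release counters, which is the same arithmetic the bpftrace
scripts above perform. Below is a simplified C sketch of that computation,
assuming 6.6-era struct page_pool field names; the kernel keeps an
equivalent internal helper, page_pool_inflight(), in net/core/page_pool.c.)

#include <net/page_pool/types.h>

/* Simplified sketch: how many pages a pool still has outstanding.
 * Name pool_inflight_estimate() is made up for illustration.
 */
static s32 pool_inflight_estimate(const struct page_pool *pool)
{
	u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
	u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);

	/* Both counters wrap, so take the signed distance between them. */
	return (s32)(hold_cnt - release_cnt);
}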
If we were to just scan the page structs and identify the page_pool pages
that are pointing to our pool, we should be able to go through and clean
them up. It won't be the fastest approach, but this should be an
exceptional case for handling things like hot-plug removal of a device,
where we can essentially run this in the background before we free the
device. Then it would just be a matter of modifying the pool so that it
drops support for doing DMA unmapping and essentially just becomes a place
for the freed pages to go to die.
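A very rough sketch of what such a scan could look like is below. This is
illustration only: the function name page_pool_scan_unmap() is made up, the
walk ignores sparse-memory corner cases, and synchronization against pages
being returned to the pool concurrently is left out.

#include <linux/mm.h>
#include <linux/memblock.h>
#include <linux/dma-mapping.h>
#include <linux/poison.h>
#include <linux/sched.h>
#include <net/page_pool/types.h>
#include <net/page_pool/helpers.h>

/* Illustration only, not actual kernel code: walk the memmap and
 * DMA-unmap every page still owned by the given page_pool, so the
 * device can be unbound without waiting for in-flight pages.
 */
static void page_pool_scan_unmap(struct page_pool *pool)
{
	unsigned long pfn;

	for (pfn = 0; pfn < max_pfn; pfn++) {
		struct page *page;

		if (!pfn_valid(pfn))
			continue;
		page = pfn_to_page(pfn);

		/* page_pool pages carry PP_SIGNATURE in pp_magic and a
		 * back-pointer to their owning pool in page->pp.
		 */
		if ((page->pp_magic & ~0x3UL) != PP_SIGNATURE)
			continue;
		if (page->pp != pool)
			continue;

		dma_unmap_page_attrs(pool->p.dev,
				     page_pool_get_dma_addr(page),
				     PAGE_SIZE << pool->p.order,
				     pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
		/* The stored dma_addr would also need to be cleared so a
		 * later return of this page does not unmap it twice
		 * (omitted here).
		 */

		cond_resched();	/* the full memmap walk can take a while */
	}
}

After such a scan the pool itself would need a flag so the normal
page-return path skips the dma_unmap it would otherwise perform, which is
the "drop support for doing DMA unmapping" part mentioned above.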