From: Barry Song <21cnbao@gmail.com>
Date: Fri, 10 Jan 2025 23:47:40 +1300
Subject: Re: [LSF/MM/BPF TOPIC] Large folio (z)swapin
To: Usama Arif
Cc: lsf-pc@lists.linux-foundation.org, Linux Memory Management List <linux-mm@kvack.org>,
 Johannes Weiner, Yosry Ahmed, Shakeel Butt, Yu Zhao

On Fri, Jan 10, 2025 at 11:40 PM Usama Arif wrote:
>
> On 10/01/2025 10:30, Barry Song wrote:
> > On Fri, Jan 10, 2025 at 11:26 PM Usama Arif wrote:
> >>
> >> On 10/01/2025 10:09, Barry Song wrote:
> >>> Hi Usama,
> >>>
> >>> Please include me in the discussion. I'll try to attend, at least remotely.
> >>>
> >>> On Fri, Jan 10, 2025 at 9:06 AM Usama Arif wrote:
> >>>>
> >>>> I would like to propose a session to discuss the ongoing work around
> >>>> large folio swapin, whether it's traditional swap, zswap, or zram.
> >>>>
> >>>> Large folios have obvious advantages that have been discussed before,
> >>>> like fewer page faults, batched PTE and rmap manipulation, shorter LRU
> >>>> lists, and TLB coalescing (on arm64 and AMD). However, swapping in
> >>>> large folios has its own drawbacks, like higher swap thrashing.
> >>>> I initially sent an RFC for zswapin of large folios in [1], but it
> >>>> causes a kernel build time regression due to swap thrashing, which I
> >>>> am confident is happening with zram large folio swapin as well (which
> >>>> is already merged in the kernel).
> >>>>
> >>>> Some of the points we could discuss in the session:
> >>>>
> >>>> - What is the right (preferably open source) benchmark to test swapin
> >>>> of large folios? Kernel build time in a memory-limited cgroup shows a
> >>>> regression, microbenchmarks show a massive improvement, and maybe
> >>>> there are benchmarks where TLB misses are a big factor and show an
> >>>> improvement.
> >>>
> >>> My understanding is that it largely depends on the workload. In interactive
> >>> scenarios, such as on a phone, swap thrashing is not an issue because
> >>> there is minimal to no thrashing for the app occupying the screen
> >>> (foreground). In such cases, swap bandwidth becomes the most critical factor
> >>> in improving app switching speed, especially when multiple applications
> >>> are switching between background and foreground states.
> >>>
> >>>> - We could have something like
> >>>> /sys/kernel/mm/transparent_hugepage/hugepages-*kB/swapin_enabled
> >>>> to enable/disable swapin, but it's going to be difficult to tune, might
> >>>> have different optimum values based on workloads, and is likely to be
> >>>> left at its default values. Is there some dynamic way to decide when to
> >>>> swap in large folios and when to fall back to smaller folios? The
> >>>> swapin_readahead swapcache path, which only supports 4K folios at the
> >>>> moment, has a readahead window based on hits. However, readahead is a
> >>>> folio flag, not a page flag, so this method can't be used: once a large
> >>>> folio is swapped in, we won't get a fault, and subsequent hits on other
> >>>> pages of the large folio won't be recorded.
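[Illustration: such a per-size knob would presumably mirror the existing
hugepages-*kB/enabled controls. A minimal sketch of the intended usage,
assuming a swapin_enabled file that does not exist upstream today:]

#!/bin/bash
# Hypothetical usage of the proposed per-size swapin knob, modeled on the
# existing hugepages-*kB/enabled interface; swapin_enabled is part of the
# proposal above, not a real sysfs file yet.
base=/sys/kernel/mm/transparent_hugepage
echo always > $base/hugepages-16kB/swapin_enabled   # allow 16kB mTHP swapin
echo never  > $base/hugepages-64kB/swapin_enabled   # force 4K fallback
cat $base/hugepages-16kB/swapin_enabled             # e.g. "[always] never"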
> >>>>
> >>>> - For zswap and zram, it might be that doing larger block compression/
> >>>> decompression offsets the regression from swap thrashing, but it
> >>>> brings its own issues. For example, once a large folio is swapped out,
> >>>> it could fail to swap in as a large folio and fall back to 4K,
> >>>> resulting in redundant decompressions.
> >>>
> >>> That's correct. My current workaround involves swapping four small folios,
> >>> and zsmalloc will compress and decompress in chunks of four pages,
> >>> regardless of the actual size of the mTHP - the improvement in compression
> >>> ratio and speed becomes less significant beyond four pages, even though
> >>> there is still some increase.
> >>>
> >>> Our recent experiments on phones also show that enabling direct reclamation
> >>> for do_swap_page() to allocate order-2 mTHP results in a 0% allocation
> >>> failure rate - this probably removes the need to fall back to four small
> >>> folios. (Note that our experiments include Yu's TAO - Android GKI has
> >>> already merged it. However, since 2 is less than PAGE_ALLOC_COSTLY_ORDER,
> >>> we might achieve similar results even without Yu's TAO, although I have
> >>> not confirmed this.)
> >>
> >> Hi Barry,
> >>
> >> Thanks for the comments!
> >>
> >> I haven't seen any activity on TAO on the mailing list recently. Do you know
> >> if there are any plans for it to be sent for upstream review?
> >> Have cc-ed Yu Zhao as well.
> >>
> >>>> Will this also mean swapin of large folios from traditional swap
> >>>> isn't something we should proceed with?
> >>>>
> >>>> - Should we even support large folio swapin? You often have high swap
> >>>> activity when the system/cgroup is close to running out of memory; at
> >>>> this point, maybe the best way forward is to just swap in 4K pages and
> >>>> let khugepaged [2], [3] collapse them if the surrounding pages are
> >>>> swapped in as well.
> >>>
> >>> This approach might be suitable for non-interactive scenarios, such as
> >>> building a kernel within a memory control group (memcg) or running other
> >>> server applications. However, performing collapse in interactive and
> >>> power-sensitive scenarios would be unnecessary and could lead to wasted
> >>> power due to memory migration and unmap/map operations.
> >>>
> >>> However, it is quite challenging to automatically determine the type of
> >>> workloads the system is running. I feel we still need a global control
> >>> to decide whether to enable mTHP swap-in - not necessarily per size, but
> >>> at least at a global level. That said, there is evident resistance to
> >>> introducing additional controls to enable or disable mTHP features.
> >>>
> >>> By the way, Usama, have you ever tried switching between mglru and the
> >>> traditional active/inactive LRU? My experience shows a significant
> >>> difference in swap thrashing: active/inactive LRU exhibits much less
> >>> swap thrashing in my local kernel build tests.
> >>
> >> I never tried with MGLRU enabled, so I am probably seeing the lowest
> >> amount of swap thrashing.
> >
> > Are you sure, Usama, since mglru is enabled by default? I have to echo 0
> > manually to disable it.
>
> Yes, I don't have CONFIG_LRU_GEN set in my defconfig. I don't think it is
> set by default either, at least on x86:
>
> $ make defconfig
> $ grep LRU_GEN .config
> # CONFIG_LRU_GEN is not set

Okay, it's likely because I'm using the Ubuntu distribution for x86 and
Android GKI for arm64, where mglru is enabled by default in both cases. But
regardless, I'd appreciate it if you could enable it and check whether you
observe the same phenomena as I did :-)
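[For anyone reproducing this: a quick runtime check of which LRU is active.
The lru_gen sysfs entry only exists on kernels built with CONFIG_LRU_GEN,
and it reads back a bitmask such as 0x0007 when MGLRU is fully on:]

#!/bin/bash
# Report whether this kernel has MGLRU built in and whether it is active.
f=/sys/kernel/mm/lru_gen/enabled
if [ -f "$f" ]; then
    echo "MGLRU built in; enabled components: $(cat $f)"   # 0x0000 = off
else
    echo "CONFIG_LRU_GEN not set: classic active/inactive LRU"
fi
# Toggle at runtime (as root), as done in the tests below:
#   echo y > /sys/kernel/mm/lru_gen/enabled   # enable MGLRU
#   echo 0 > /sys/kernel/mm/lru_gen/enabled   # fall back to active/inactive LRU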
>
> Thanks,
> Usama
>
> >> Thanks,
> >> Usama
> >>
> >>> Results on the latest mm-unstable:
> >>>
> >>> *********** default mglru: ***********
> >>>
> >>> root@barry-desktop:/home/barry/develop/linux# ./build.sh
> >>> *** Executing round 1 ***
> >>> real 6m44.561s
> >>> user 46m53.274s
> >>> sys 3m48.585s
> >>> pswpin: 1286081
> >>> pswpout: 3147936
> >>> 64kB-swpout: 0
> >>> 32kB-swpout: 0
> >>> 16kB-swpout: 714580
> >>> 64kB-swpin: 0
> >>> 32kB-swpin: 0
> >>> 16kB-swpin: 286881
> >>> pgpgin: 17199072
> >>> pgpgout: 21493892
> >>> swpout_zero: 229163
> >>> swpin_zero: 84353
> >>>
> >>> ******** disable mglru ********
> >>>
> >>> root@barry-desktop:/home/barry/develop/linux# echo 0 > /sys/kernel/mm/lru_gen/enabled
> >>>
> >>> root@barry-desktop:/home/barry/develop/linux# ./build.sh
> >>> *** Executing round 1 ***
> >>> real 6m27.944s
> >>> user 46m41.832s
> >>> sys 3m30.635s
> >>> pswpin: 474036
> >>> pswpout: 1434853
> >>> 64kB-swpout: 0
> >>> 32kB-swpout: 0
> >>> 16kB-swpout: 331755
> >>> 64kB-swpin: 0
> >>> 32kB-swpin: 0
> >>> 16kB-swpin: 106333
> >>> pgpgin: 11763720
> >>> pgpgout: 14551524
> >>> swpout_zero: 145050
> >>> swpin_zero: 87981
> >>>
> >>> my build script:
> >>>
> >>> #!/bin/bash
> >>> # Enable only 16kB mTHP for this test.
> >>> echo never > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
> >>> echo never > /sys/kernel/mm/transparent_hugepage/hugepages-32kB/enabled
> >>> echo always > /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
> >>> echo never > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
> >>>
> >>> vmstat_path="/proc/vmstat"
> >>> thp_base_path="/sys/kernel/mm/transparent_hugepage"
> >>>
> >>> # Snapshot swap/paging counters from vmstat and the per-size mTHP stats.
> >>> read_values() {
> >>>     pswpin=$(grep "pswpin" $vmstat_path | awk '{print $2}')
> >>>     pswpout=$(grep "pswpout" $vmstat_path | awk '{print $2}')
> >>>     pgpgin=$(grep "pgpgin" $vmstat_path | awk '{print $2}')
> >>>     pgpgout=$(grep "pgpgout" $vmstat_path | awk '{print $2}')
> >>>     swpout_zero=$(grep "swpout_zero" $vmstat_path | awk '{print $2}')
> >>>     swpin_zero=$(grep "swpin_zero" $vmstat_path | awk '{print $2}')
> >>>     swpout_64k=$(cat $thp_base_path/hugepages-64kB/stats/swpout 2>/dev/null || echo 0)
> >>>     swpout_32k=$(cat $thp_base_path/hugepages-32kB/stats/swpout 2>/dev/null || echo 0)
> >>>     swpout_16k=$(cat $thp_base_path/hugepages-16kB/stats/swpout 2>/dev/null || echo 0)
> >>>     swpin_64k=$(cat $thp_base_path/hugepages-64kB/stats/swpin 2>/dev/null || echo 0)
> >>>     swpin_32k=$(cat $thp_base_path/hugepages-32kB/stats/swpin 2>/dev/null || echo 0)
> >>>     swpin_16k=$(cat $thp_base_path/hugepages-16kB/stats/swpin 2>/dev/null || echo 0)
> >>>     echo "$pswpin $pswpout $swpout_64k $swpout_32k $swpout_16k $swpin_64k $swpin_32k $swpin_16k $pgpgin $pgpgout $swpout_zero $swpin_zero"
> >>> }
> >>>
> >>> for ((i=1; i<=1; i++))   # single round; raise the bound for more
> >>> do
> >>>     echo
> >>>     echo "*** Executing round $i ***"
> >>>     make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- clean 1>/dev/null 2>/dev/null
> >>>     echo 3 > /proc/sys/vm/drop_caches
> >>>
> >>>     # kernel build, capped at 1G of memory to force swapping
> >>>     initial_values=($(read_values))
> >>>     time systemd-run --scope -p MemoryMax=1G make ARCH=arm64 \
> >>>         CROSS_COMPILE=aarch64-linux-gnu- vmlinux -j10 1>/dev/null 2>/dev/null
> >>>     final_values=($(read_values))
> >>>
> >>>     echo "pswpin: $((final_values[0] - initial_values[0]))"
> >>>     echo "pswpout: $((final_values[1] - initial_values[1]))"
> >>>     echo "64kB-swpout: $((final_values[2] - initial_values[2]))"
> >>>     echo "32kB-swpout: $((final_values[3] - initial_values[3]))"
> >>>     echo "16kB-swpout: $((final_values[4] - initial_values[4]))"
> >>>     echo "64kB-swpin: $((final_values[5] - initial_values[5]))"
> >>>     echo "32kB-swpin: $((final_values[6] - initial_values[6]))"
> >>>     echo "16kB-swpin: $((final_values[7] - initial_values[7]))"
> >>>     echo "pgpgin: $((final_values[8] - initial_values[8]))"
> >>>     echo "pgpgout: $((final_values[9] - initial_values[9]))"
> >>>     echo "swpout_zero: $((final_values[10] - initial_values[10]))"
> >>>     echo "swpin_zero: $((final_values[11] - initial_values[11]))"
> >>>     sync
> >>>     sleep 10
> >>> done
> >>>
> >>>> [1] https://lore.kernel.org/all/20241018105026.2521366-1-usamaarif642@gmail.com/
> >>>> [2] https://lore.kernel.org/all/20250108233128.14484-1-npache@redhat.com/
> >>>> [3] https://lore.kernel.org/lkml/20241216165105.56185-1-dev.jain@arm.com/
> >>>>
> >>>> Thanks,
> >>>> Usama

Thanks
Barry