From: "Huang, Ying" <ying.huang@intel.com>
To: Kairui Song
Cc: Chris Li, Andrew Morton, Ryan Roberts, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Barry Song
Subject: Re: [PATCH 0/2] mm: swap: mTHP swap allocator base on swap cluster order
In-Reply-To: (Kairui Song's message of "Fri, 31 May 2024 20:40:11 +0800")
References: <20240524-swap-allocator-v1-0-47861b423b26@kernel.org>
 <87cyp5575y.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <875xuw1062.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87o78mzp24.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 04 Jun 2024 15:27:23 +0800
Message-ID: <87frttcgmc.fsf@yhuang6-desk2.ccr.corp.intel.com>

Kairui Song writes:

> On Fri, May 31, 2024 at 10:37 AM Huang, Ying wrote:
>>
>> Chris Li writes:
>>
>> > On Wed, May 29, 2024 at 7:54 PM Huang, Ying wrote:
>> >>
>> >> Chris Li writes:
>> >>
>> >> > Hi Ying,
>> >> >
>> >> > On Wed, May 29, 2024 at 1:57 AM Huang, Ying wrote:
>> >> >>
>> >> >> Chris Li writes:
>> >> >>
>> >> >> > I am spinning a new version of this series to address two issues
>> >> >> > found in it:
>> >> >> >
>> >> >> > 1) Oppo discovered a bug in the following line:
>> >> >> >    + ci = si->cluster_info + tmp;
>> >> >> > It should be "tmp / SWAPFILE_CLUSTER" instead of "tmp".
>> >> >> > That is a serious bug, but trivial to fix.
>> >> >> >
>> >> >> > 2) Order 0 allocation currently blindly scans swap_map[],
>> >> >> > disregarding cluster->order.
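(For issue 1, here is a minimal user-space sketch of the indexing bug
and its fix. The structures below are simplified stand-ins for the
kernel's swap_info_struct and swap_cluster_info, not the real
definitions, and the SWAPFILE_CLUSTER value is only illustrative.)

#include <stdio.h>

#define SWAPFILE_CLUSTER 256	/* entries per cluster; illustrative value */

/* Simplified stand-ins for the kernel structures. */
struct swap_cluster_info { unsigned int order; };
struct swap_info { struct swap_cluster_info *cluster_info; };	/* one element per cluster */

int main(void)
{
	struct swap_cluster_info clusters[8] = { { 0 } };
	struct swap_info si = { .cluster_info = clusters };
	unsigned long tmp = 600;	/* a swap entry offset, i.e. an index into swap_map[] */
	struct swap_cluster_info *ci;

	/*
	 * Buggy: "ci = si.cluster_info + tmp;" treats the entry offset as
	 * a cluster index and points past the end of cluster_info[] for
	 * any offset >= the number of clusters.
	 *
	 * Fixed: convert the entry offset to its cluster index first.
	 */
	ci = si.cluster_info + tmp / SWAPFILE_CLUSTER;

	printf("offset %lu -> cluster %ld\n", tmp, (long)(ci - si.cluster_info));
	return 0;
}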
instead of "tmp". >> >> >> > That is a serious bug but trivial to fix. >> >> >> > >> >> >> > 2) order 0 allocation currently blindly scans swap_map disregard= ing >> >> >> > the cluster->order. >> >> >> >> >> >> IIUC, now, we only scan swap_map[] only if >> >> >> !list_empty(&si->free_clusters) && !list_empty(&si->nonfull_cluste= rs[order]). >> >> >> That is, if you doesn't run low swap free space, you will not do t= hat. >> >> > >> >> > You can still swap space in order 0 clusters while order 4 runs out= of >> >> > free_cluster >> >> > or nonfull_clusters[order]. For Android that is a common case. >> >> >> >> When we fail to allocate order 4, we will fallback to order 0. Still >> >> don't need to scan swap_map[]. But after looking at your below reply= , I >> >> realized that the swap space is almost full at most times in your cas= es. >> >> Then, it's possible that we run into scanning swap_map[]. >> >> list_empty(&si->free_clusters) && >> >> list_empty(&si->nonfull_clusters[order]) will become true, if we put = too >> >> many clusters in si->percpu_cluster. So, if we want to avoid to scan >> >> swap_map[], we can stop add clusters in si->percpu_cluster when swap >> >> space runs low. And maybe take clusters out of si->percpu_cluster >> >> sometimes. >> > >> > One idea after reading your reply. If we run out of the >> > nonfull_cluster[order], we should be able to use other cpu's >> > si->percpu_cluster[] as well. That is a very small win for Android, >> >> This will be useful in general. The number CPU may be large, and >> multiple orders may be used. >> >> > because android does not have too many cpu. We are talking about a >> > handful of clusters, which might not justify the code complexity. It >> > does not change the behavior that order 0 can pollut higher order. >> >> I have a feeling that you don't really know why swap_map[] is scanned. >> I suggest you to do more test and tracing to find out the reason. I >> suspect that there are some non-full cluster collection issues. >> >> >> Another issue is nonfull_cluster[order1] cannot be used for >> >> nonfull_cluster[order2]. In definition, we should not fail order 0 >> >> allocation, we need to steal nonfull_cluster[order>0] for order 0 >> >> allocation. This can avoid to scan swap_map[] too. This may be not >> >> perfect, but it is the simplest first step implementation. You can >> >> optimize based on it further. >> > >> > Yes, that is listed as the limitation of this cluster order approach. >> > Initially we need to support one order well first. We might choose >> > what order that is, 16K or 64K folio. 4K pages are too small, 2M pages >> > are too big. The sweet spot might be some there in between. If we can >> > support one order well, we can demonstrate the value of the mTHP. We >> > can worry about other mix orders later. >> > >> > Do you have any suggestions for how to prevent the order 0 polluting >> > the higher order cluster? If we allow that to happen, then it defeats >> > the goal of being able to allocate higher order swap entries. The >> > tricky question is we don't know how much swap space we should reserve >> > for each order. We can always break higher order clusters to lower >> > order, but can't do the reserves. The current patch series lets the >> > actual usage determine the percentage of the cluster for each order. >> > However that seems not enough for the test case Barry has. 
>> > Yes, that is listed as the limitation of this cluster order approach.
>> > Initially we need to support one order well first. We might choose
>> > which order that is: a 16K or 64K folio. 4K pages are too small, and
>> > 2M pages are too big; the sweet spot might be somewhere in between. If
>> > we can support one order well, we can demonstrate the value of mTHP.
>> > We can worry about other mixed orders later.
>> >
>> > Do you have any suggestions for how to prevent order 0 from polluting
>> > the higher-order clusters? If we allow that to happen, it defeats the
>> > goal of being able to allocate higher-order swap entries. The tricky
>> > question is that we don't know how much swap space we should reserve
>> > for each order. We can always break higher-order clusters into lower
>> > orders, but we can't do the reverse. The current patch series lets the
>> > actual usage determine the percentage of the clusters for each order.
>> > However, that seems not enough for the test case Barry has. When the
>> > app gets an OOM kill, that is where a large swing of order 0 swap
>> > shows up with not enough higher-order usage for that brief moment. The
>> > order 0 swap entries will pollute the high-order clusters. We are
>> > currently debating a "knob" to reserve a certain % of swap space for a
>> > certain order. Those reservations would be guaranteed, and order 0
>> > swap entries couldn't pollute them even when swap space runs out. That
>> > can make mTHP at least usable for the Android case.
>>
>> IMO, the bottom line is that order-0 allocation is the first-class
>> citizen; we must keep it optimized. And OOM with free swap space isn't
>> acceptable. Please consider the policy we used for page allocation.
>>
>> > Do you see another way to protect the high-order clusters from being
>> > polluted by lower-order ones?
>>
>> If we use high-order page allocation as a reference, we need something
>> like compaction to guarantee high-order allocation eventually. But we
>> are too far from that.
>>
>> For a specific configuration, I believe that we can get a reasonable
>> high-order swap entry allocation success rate for specific use cases.
>> For example, if we only do a limited maximum number of order-0 swap
>> entry allocations, can we keep high-order clusters?

> Isn't limiting order-0 allocation breaking the bottom line that order-0
> allocation is the first-class citizen, and should not fail if there is
> space?

Sorry for the confusing words. I mean limiting the maximum number of
order-0 swap entry allocations in the workloads, instead of limiting it
in the kernel.

> Just my two cents...
>
> I had a try locally based on Chris's work, allowing order 0 to use
> nonfull_clusters as Ying has suggested, and starting with a low order
> and increasing the order until nonfull_clusters[order] is not empty.
> That way, higher orders are better protected, because unless we run out
> of free_clusters and nonfull_clusters, a direct scan won't happen.
>
> More concretely, I applied the following changes, which didn't change
> the code much:
> - In scan_swap_map_try_ssd_cluster, check nonfull_clusters first, then
>   free_clusters, then discard_clusters.
> - If it's order 0, also check every order's nonfull_clusters[i]
>   (for (int i = 0; i < SWAP_NR_ORDERS; ++i)) before
>   scan_swap_map_try_ssd_cluster returns false.
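(A sketch of those two changes, using the same kind of simplified list
model as above. The function name follows the series, but this body is
a hypothetical reconstruction, not the actual diff; the order-0
cross-order fallback is the same idea as in the earlier sketch.)

#include <stdbool.h>
#include <stddef.h>

#define SWAP_NR_ORDERS 8	/* illustrative */

struct cluster { struct cluster *next; };

struct swap_info {
	struct cluster *free_clusters;
	struct cluster *discard_clusters;	/* clusters pending discard */
	struct cluster *nonfull_clusters[SWAP_NR_ORDERS];
};

/*
 * Reordered preference: nonfull clusters of this order first, then free
 * clusters, then clusters pending discard. For order 0, additionally
 * try every order's nonfull list before returning false, since a false
 * return is what triggers the direct swap_map[] scan.
 */
static bool scan_swap_map_try_ssd_cluster(struct swap_info *si, int order,
					  struct cluster **out)
{
	struct cluster *ci = si->nonfull_clusters[order];

	if (!ci)
		ci = si->free_clusters;
	if (!ci)
		ci = si->discard_clusters;
	if (!ci && order == 0) {
		for (int i = 0; i < SWAP_NR_ORDERS; i++) {
			if (si->nonfull_clusters[i]) {
				ci = si->nonfull_clusters[i];
				break;
			}
		}
	}
	*out = ci;
	return ci != NULL;
}

int main(void)
{
	struct cluster c = { NULL };
	struct swap_info si = { NULL };
	struct cluster *got;

	si.nonfull_clusters[3] = &c;
	/* An order-0 request falls through to the cross-order check. */
	return scan_swap_map_try_ssd_cluster(&si, 0, &got) && got == &c ? 0 : 1;
}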
> A quick test, still using the memtier test, but with the swap device
> size decreased from 10G to 8G for higher pressure:
>
> Before:
> hugepages-32kB/stats/swpout:34013
> hugepages-32kB/stats/swpout_fallback:266
> hugepages-512kB/stats/swpout:0
> hugepages-512kB/stats/swpout_fallback:77
> hugepages-2048kB/stats/swpout:0
> hugepages-2048kB/stats/swpout_fallback:1
> hugepages-1024kB/stats/swpout:0
> hugepages-1024kB/stats/swpout_fallback:0
> hugepages-64kB/stats/swpout:35088
> hugepages-64kB/stats/swpout_fallback:66
> hugepages-16kB/stats/swpout:31848
> hugepages-16kB/stats/swpout_fallback:402
> hugepages-256kB/stats/swpout:390
> hugepages-256kB/stats/swpout_fallback:7244
> hugepages-128kB/stats/swpout:28573
> hugepages-128kB/stats/swpout_fallback:474
>
> After:
> hugepages-32kB/stats/swpout:31448
> hugepages-32kB/stats/swpout_fallback:3354
> hugepages-512kB/stats/swpout:30
> hugepages-512kB/stats/swpout_fallback:33
> hugepages-2048kB/stats/swpout:2
> hugepages-2048kB/stats/swpout_fallback:0
> hugepages-1024kB/stats/swpout:0
> hugepages-1024kB/stats/swpout_fallback:0
> hugepages-64kB/stats/swpout:31255
> hugepages-64kB/stats/swpout_fallback:3112
> hugepages-16kB/stats/swpout:29931
> hugepages-16kB/stats/swpout_fallback:3397
> hugepages-256kB/stats/swpout:5223
> hugepages-256kB/stats/swpout_fallback:2351
> hugepages-128kB/stats/swpout:25600
> hugepages-128kB/stats/swpout_fallback:2194
>
> The high-order (256kB) swapout rate is significantly higher, and 512kB
> swapout is now possible, which indicates that high orders are better
> protected; lower orders are sacrificed, but that seems worth it.

Yes. I think this reflects another aspect of the problem. In some
situations, it's better to steal one high-order cluster and use it for
order-0 allocations instead of scattering order-0 allocations across
random high-order clusters.

--
Best Regards,
Huang, Ying