From: "Huang, Ying" <ying.huang@intel.com>
To: Chris Li
Cc: Andrew Morton, Kairui Song, Ryan Roberts, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Barry Song
Subject: Re: [PATCH 0/2] mm: swap: mTHP swap allocator base on swap cluster order
In-Reply-To: (Chris Li's message of "Thu, 30 May 2024 14:44:33 -0700")
References: <20240524-swap-allocator-v1-0-47861b423b26@kernel.org>
 <87cyp5575y.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <875xuw1062.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 31 May 2024 10:35:15 +0800
Message-ID: <87o78mzp24.fsf@yhuang6-desk2.ccr.corp.intel.com>

Chris Li writes:

> On Wed, May 29, 2024 at 7:54 PM Huang, Ying wrote:
>>
>> Chris Li writes:
>>
>> > Hi Ying,
>> >
>> > On Wed, May 29, 2024 at 1:57 AM Huang, Ying wrote:
>> >>
>> >> Chris Li writes:
>> >>
>> >> > I am spinning a new version of this series to address two issues
>> >> > found in it:
>> >> >
>> >> > 1) Oppo discovered a bug in the following line:
>> >> >     + ci = si->cluster_info + tmp;
>> >> > It should be "tmp / SWAPFILE_CLUSTER" instead of "tmp".
>> >> > That is a serious bug, but trivial to fix.
>> >> >
>> >> > 2) Order 0 allocation currently blindly scans swap_map,
>> >> > disregarding cluster->order.
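For reference, the fix in 1) is an indexing-unit mismatch: "tmp" is a
swap offset counted in pages, while si->cluster_info has one element
per cluster of SWAPFILE_CLUSTER pages. A minimal sketch of the before
and after (simplified context, not the literal patch):

	struct swap_cluster_info *ci;

	/* Buggy: a page offset is used directly as a cluster index. */
	ci = si->cluster_info + tmp;

	/* Fixed: scale the page offset down to a cluster index first. */
	ci = si->cluster_info + tmp / SWAPFILE_CLUSTER;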
>> >> IIUC, now we scan swap_map[] only if list_empty(&si->free_clusters)
>> >> && list_empty(&si->nonfull_clusters[order]). That is, if you don't
>> >> run low on free swap space, you will not do that.
>> >
>> > You can still have swap space in order 0 clusters while order 4 runs
>> > out of free_clusters or nonfull_clusters[order]. For Android that is
>> > a common case.
>>
>> When we fail to allocate order 4, we will fall back to order 0, still
>> without needing to scan swap_map[]. But after looking at your reply
>> below, I realized that the swap space is almost full most of the time
>> in your cases. Then, it's possible that we run into scanning
>> swap_map[]. list_empty(&si->free_clusters) &&
>> list_empty(&si->nonfull_clusters[order]) will become true if we put
>> too many clusters in si->percpu_cluster. So, if we want to avoid
>> scanning swap_map[], we can stop adding clusters to si->percpu_cluster
>> when swap space runs low, and maybe take clusters out of
>> si->percpu_cluster sometimes.
>
> One idea after reading your reply: if we run out of
> nonfull_clusters[order], we should be able to use other CPUs'
> si->percpu_cluster[] as well. That is a very small win for Android,

This will be useful in general. The number of CPUs may be large, and
multiple orders may be used.

> because Android does not have too many CPUs. We are talking about a
> handful of clusters, which might not justify the code complexity. It
> does not change the behavior that order 0 can pollute higher orders.

I have a feeling that you don't really know why swap_map[] is scanned.
I suggest you do more testing and tracing to find out the reason. I
suspect that there are some non-full cluster collection issues.

>> Another issue is that nonfull_clusters[order1] cannot be used for
>> nonfull_clusters[order2]. By definition, we should not fail order 0
>> allocation; we need to steal from nonfull_clusters[order>0] for
>> order 0 allocation. This can avoid scanning swap_map[] too. It may
>> not be perfect, but it is the simplest first-step implementation.
>> You can optimize further on top of it.
>
> Yes, that is listed as a limitation of this cluster order approach.
> Initially we need to support one order well first. We might choose
> which order that is: a 16K or 64K folio. 4K pages are too small, 2M
> pages are too big; the sweet spot might be somewhere in between. If we
> can support one order well, we can demonstrate the value of mTHP. We
> can worry about other mixed orders later.
>
> Do you have any suggestions for how to prevent order 0 from polluting
> the higher-order clusters? If we allow that to happen, it defeats the
> goal of being able to allocate higher-order swap entries. The tricky
> question is that we don't know how much swap space we should reserve
> for each order. We can always break higher-order clusters into lower
> orders, but we can't do the reverse. The current patch series lets the
> actual usage determine the percentage of clusters used for each order.
> However, that seems not to be enough for the test case Barry has: when
> an app gets OOM-killed, a large swing of order 0 swap shows up with
> not enough higher-order usage for that brief moment, and the order 0
> swap entries pollute the high-order clusters. We are currently
> debating a "knob" to be able to reserve a certain % of swap space for
> a certain order. Those reservations would be guaranteed, and order 0
> swap entries couldn't pollute them even when swap space runs out. That
> can make mTHP at least usable for the Android case.
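To make the nonfull_clusters[order>0] stealing suggested above a bit
more concrete, a minimal sketch; the helper name, SWAP_NR_ORDERS, and
the exact list handling here are illustrative only, and locking
(si->lock) is omitted:

	static struct swap_cluster_info *
	steal_cluster_for_order0(struct swap_info_struct *si)
	{
		int order;

		/*
		 * Prefer a cluster already holding order-0 entries;
		 * otherwise take one dedicated to a higher order rather
		 * than failing and falling back to the swap_map[] scan.
		 */
		for (order = 0; order < SWAP_NR_ORDERS; order++) {
			if (!list_empty(&si->nonfull_clusters[order]))
				return list_first_entry(&si->nonfull_clusters[order],
							struct swap_cluster_info, list);
		}
		return NULL;	/* caller falls back to scanning swap_map[] */
	}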
IMO, the bottom line is that order-0 allocation is the first-class
citizen, and we must keep it optimized. And OOM while free swap space
remains isn't acceptable. Please consider the policy we use for page
allocation.

> Do you see another way to protect the high-order clusters from being
> polluted by lower-order ones?

If we take high-order page allocation as the reference, we would need
something like compaction to guarantee high-order allocation
eventually. But we are too far from that.

For a specific configuration, I believe that we can get a reasonable
high-order swap entry allocation success rate for specific use cases.
For example, if we only allow a limited maximum number of order-0 swap
entry allocations, can we keep high-order clusters?

>> And, I checked your code again. It appears that si->percpu_cluster
>> may be put on si->nonfull_clusters[], then be used by another CPU.
>> Please check it.
>
> Ah, good point. I think it does. Let me take a closer look.

--
Best Regards,
Huang, Ying