From: "Huang, Ying"
To: Chris Li
Cc: Andrew Morton, Kairui Song, Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Barry Song
Subject: Re: [PATCH 0/2] mm: swap: mTHP swap allocator base on swap cluster order
In-Reply-To: (Chris Li's message of "Wed, 29 May 2024 18:13:33 -0700")
References: <20240524-swap-allocator-v1-0-47861b423b26@kernel.org> <87cyp5575y.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 30 May 2024 10:52:21 +0800
Message-ID: <875xuw1062.fsf@yhuang6-desk2.ccr.corp.intel.com>
Chris Li writes:

> Hi Ying,
>
> On Wed, May 29, 2024 at 1:57 AM Huang, Ying wrote:
>>
>> Chris Li writes:
>>
>> > I am spinning a new version for this series to address two issues
>> > found in this series:
>> >
>> > 1) Oppo discovered a bug in the following line:
>> >    + ci = si->cluster_info + tmp;
>> >    Should be "tmp / SWAPFILE_CLUSTER" instead of "tmp".
>> >    That is a serious bug but trivial to fix.
>> >
>> > 2) Order 0 allocation currently blindly scans swap_map[],
>> > disregarding cluster->order.
>>
>> IIUC, now we scan swap_map[] only if list_empty(&si->free_clusters) &&
>> list_empty(&si->nonfull_clusters[order]). That is, if you don't run
>> low on free swap space, you will not do that.
>
> You can still have swap space in order 0 clusters while order 4 runs
> out of free_clusters or nonfull_clusters[order]. For Android that is
> a common case.

When we fail to allocate order 4, we will fall back to order 0, so we still don't need to scan swap_map[]. But after looking at your reply below, I realized that in your cases the swap space is almost full most of the time. Then it's possible that we run into scanning swap_map[].

list_empty(&si->free_clusters) && list_empty(&si->nonfull_clusters[order]) will become true if we put too many clusters in si->percpu_cluster. So, if we want to avoid scanning swap_map[], we can stop adding clusters to si->percpu_cluster when swap space runs low, and maybe take clusters out of si->percpu_cluster sometimes.

Another issue is that nonfull_clusters[order1] cannot be used for nonfull_clusters[order2].
By definition, order 0 allocation should not fail, so we need to steal from nonfull_clusters[order > 0] for order 0 allocation. This can avoid scanning swap_map[] too. It may not be perfect, but it is the simplest first-step implementation; you can optimize further based on it.

And, I checked your code again. It appears that a cluster in si->percpu_cluster may be put in si->nonfull_clusters[] and then be used by another CPU. Please check it.

--
Best Regards,
Huang, Ying

[snip]