From: "Huang, Ying"
To: Chris Li
Cc: Ryan Roberts, Andrew Morton, Kairui Song, Hugh Dickins, Kalesh Singh,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Barry Song
Subject: Re: [PATCH v4 2/3] mm: swap: mTHP allocate swap entries from nonfull list
In-Reply-To: (Chris Li's message of "Fri, 26 Jul 2024 00:10:31 -0700")
References: <20240711-swap-allocator-v4-0-0295a4d4c7aa@kernel.org>
 <20240711-swap-allocator-v4-2-0295a4d4c7aa@kernel.org>
 <874j8nxhiq.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87o76qjhqs.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <43f73463-af42-4a00-8996-5f63bdf264a3@arm.com>
 <87jzhdkdzv.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87sew0ei84.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <4ec149fc-7c13-4777-bc97-58ee455a3d7e@arm.com>
 <87le1q6jyo.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87zfq43o4n.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87o76k3dkt.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 26 Jul 2024 15:18:00 +0800
Message-ID: <87bk2k39lj.fsf@yhuang6-desk2.ccr.corp.intel.com>

Chris Li writes:

> On Thu, Jul 25, 2024 at 10:55 PM Huang, Ying wrote:
>>
>> Chris Li writes:
>>
>> > On Thu, Jul 25, 2024 at 7:07 PM Huang, Ying wrote:
>> >> > If the freeing of swap entries follows a random distribution, you
>> >> > need 16 contiguous swap entries to be free at the same time, at a
>> >> > 16-aligned base location.  The total order-4 free swap space, added
>> >> > up, is much smaller than the order-0 allocatable swap space.  If one
>> >> > entry being free has probability 50% (swapfile half full), then the
>> >> > probability that 16 contiguous entries are all free is 0.5^16, about
>> >> > 1.5E-5.  If the swapfile is 80% full, that number drops to 6.5E-12.
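[Editor's note: a minimal, self-contained sketch of the arithmetic behind
the 1.5E-5 and 6.5E-12 figures quoted above, assuming each swap slot is
free independently with the given probability, i.e. a fully random free
pattern with no spatial locality.  The program and its names are
illustrative only and are not part of the patch.]

/* prob.c: build with "gcc prob.c -lm" */
#include <math.h>
#include <stdio.h>

int main(void)
{
	const double p_free[] = { 0.5, 0.2 };	/* swapfile 50% full and 80% full */
	const int entries = 16;			/* one aligned order-4 allocation */
	int i;

	for (i = 0; i < 2; i++) {
		/* Probability that all 16 slots of one aligned run are free. */
		double p_run = pow(p_free[i], entries);

		printf("p_free = %.1f -> aligned 16-entry run free with p = %.1e\n",
		       p_free[i], p_run);
	}
	return 0;
}

[It prints roughly 1.5e-05 and 6.6e-12, in line with the figures quoted above.]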
>> >>
>> >> This depends on workloads.  Quite a few workloads will show some
>> >> degree of spatial locality.  For a workload with no spatial locality
>> >> at all, as above, mTHP may not be a good choice in the first place.
>> >
>> > The fragmentation comes from the order-0 entries, not from the mTHP.
>> > mTHP has its own valid use cases and should be kept separate from how
>> > you use the order-0 entries.  That is why I consider this kind of
>> > strategy to work only in the lucky case.  I would much prefer a
>> > strategy that is guaranteed to work and does not depend on luck.
>>
>> It seems that you have some perfect solution.  Will learn it when you
>> post it.
>
> No, I don't have a perfect solution.  I see putting a limit on order-0
> swap usage and writing out discontiguous swap entries from a folio as
> more deterministic and not dependent on luck.  Both have their price to
> pay as well.
>
>> >> >> - Order-4 pages need to be swapped out, but not enough order-4
>> >> >>   non-full clusters are available.
>> >> >
>> >> > Exactly.
>> >> >
>> >> >> So, we need a way to migrate non-full clusters among orders to
>> >> >> adjust to the various situations automatically.
>> >> >
>> >> > There is no easy way to migrate swap entries to different locations.
>> >> > That is why I would like to have discontiguous swap entry allocation
>> >> > for mTHP.
>> >>
>> >> We suggest migrating non-full swap clusters among different lists, not
>> >> swap entries.
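[Editor's note: a toy, user-space model of the idea in the preceding
exchange -- keeping partially full clusters on per-order lists and moving
whole clusters between those lists, rather than migrating individual swap
entries.  All names (toy_cluster, toy_swapinfo, ...) are invented for
illustration; this is not the kernel's data structure or the patch's code.]

#include <stdio.h>

#define TOY_NR_ORDERS 5			/* orders 0..4, up to 16 entries */

struct toy_cluster {
	struct toy_cluster *prev, *next;
	int order;			/* order this cluster currently serves */
	int used;			/* entries already allocated from it */
};

struct toy_swapinfo {
	struct toy_cluster partial[TOY_NR_ORDERS];	/* per-order list heads */
};

static void toy_list_init(struct toy_cluster *head)
{
	head->prev = head->next = head;
}

static void toy_list_del(struct toy_cluster *c)
{
	c->prev->next = c->next;
	c->next->prev = c->prev;
}

static void toy_list_add_tail(struct toy_cluster *c, struct toy_cluster *head)
{
	c->prev = head->prev;
	c->next = head;
	head->prev->next = c;
	head->prev = c;
}

/*
 * Move a whole partially-full cluster to the list of another order,
 * instead of moving any of the swap entries stored in it.
 */
static void toy_migrate_cluster(struct toy_swapinfo *si,
				struct toy_cluster *c, int new_order)
{
	toy_list_del(c);
	c->order = new_order;
	toy_list_add_tail(c, &si->partial[new_order]);
}

int main(void)
{
	struct toy_swapinfo si;
	struct toy_cluster c = { .order = 4, .used = 3 };
	int i;

	for (i = 0; i < TOY_NR_ORDERS; i++)
		toy_list_init(&si.partial[i]);

	/* The cluster starts life on the order-4 partial list ... */
	toy_list_add_tail(&c, &si.partial[4]);
	/* ... and is later handed over, as a whole, to order-0 allocations. */
	toy_migrate_cluster(&si, &c, 0);

	printf("cluster now serves order %d, %d entries already used\n",
	       c.order, c.used);
	return 0;
}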
>> >
>> > Then you have the downside of reducing the number of total high-order
>> > clusters.  By chance, it is much easier to fragment a cluster than to
>> > anti-fragment one.  The orders of clusters have a natural tendency to
>> > move down rather than up, given a long enough period of random access.
>> > The system will likely run out of high-order clusters in the long run
>> > if we don't have any separation of orders.
>>
>> As in my example above, you may have almost 0 high-order clusters
>> forever.  So, your solution only works for very specific use cases.
>> It's not a general solution.
>
> One simple solution is having an optional limitation on order-0 swap
> usage.  I understand you don't like that option, but there is no other
> easy solution that achieves the same effectiveness, so far.  If there
> is, I would like to hear it.

Just as you said, it's optional, so it's not a general solution.  It may
trigger OOM when used as a general solution.

>> >> But yes, data is needed for any performance-related change.
>> >>
>> >> BTW: I think non-full cluster isn't a good name.  Partial cluster is
>> >> much better and follows the same convention as partial slab.
>> >
>> > I am not opposed to it.  The only reason I am holding off on the rename
>> > is that there are patches from Kairui I am testing that depend on it.
>> > Let's finish up the V5 patch with the swap cache reclaim code path,
>> > then do the renaming as one batch job.  We actually have more than one
>> > list holding partially full clusters; they help reduce repeated scans
>> > of clusters that are not full but still cannot serve an allocation of
>> > this order.  Naming just one of them "partial" would not be precise
>> > either, because the other lists are also partially full.  We'd better
>> > give them precise meanings systematically.
>>
>> I don't think that it's hard to do a search/replace before the next
>> version.
>
> The overhead is on the other internal experimental patches.  Again, I am
> not opposed to renaming it; I just want to do it in one batch, not many
> times, including the other list names.

--
Best Regards,
Huang, Ying