From: "Huang, Ying" <ying.huang@intel.com>
To: David Hildenbrand
Cc: Ryan Roberts, Andrew Morton, Matthew Wilcox, Gao Xiang, Yu Zhao,
	Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
	Chris Li, Lance Yang, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 4/6] mm: swap: Allow storage of all mTHP orders
In-Reply-To: (David Hildenbrand's message of "Fri, 5 Apr 2024 12:38:10 +0200")
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
	<20240403114032.1162100-5-ryan.roberts@arm.com>
Date: Sun, 07 Apr 2024 14:02:16 +0800
Message-ID: <87edbhaexj.fsf@yhuang6-desk2.ccr.corp.intel.com>

David Hildenbrand writes:

> On 03.04.24 13:40, Ryan Roberts wrote:
>> Multi-size THP enables performance improvements by allocating large,
>> pte-mapped folios for anonymous memory. However I've observed that on an
>> arm64 system running a parallel workload (e.g. kernel compilation)
>> across many cores, under high memory pressure, the speed regresses. This
>> is due to bottlenecking on the increased number of TLBIs added due to
>> all the extra folio splitting when the large folios are swapped out.
>>
>> Therefore, solve this regression by adding support for swapping out mTHP
>> without needing to split the folio, just like is already done for
>> PMD-sized THP.
>> This change only applies when CONFIG_THP_SWAP is enabled,
>> and when the swap backing store is a non-rotating block device. These
>> are the same constraints as for the existing PMD-sized THP swap-out
>> support.
>>
>> Note that no attempt is made to swap-in (m)THP here - this is still done
>> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
>> prerequisite for swapping-in mTHP.
>>
>> The main change here is to improve the swap entry allocator so that it
>> can allocate any power-of-2 number of contiguous entries between [1, (1
>> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
>> order and allocating sequentially from it until the cluster is full.
>> This ensures that we don't need to search the map and we get no
>> fragmentation due to alignment padding for different orders in the
>> cluster. If there is no current cluster for a given order, we attempt to
>> allocate a free cluster from the list. If there are no free clusters, we
>> fail the allocation and the caller can fall back to splitting the folio
>> and allocate individual entries (as per existing PMD-sized THP
>> fallback).
>>
>> The per-order current clusters are maintained per-cpu using the existing
>> infrastructure. This is done to avoid interleaving pages from different
>> tasks, which would prevent IO being batched. This is already done for
>> the order-0 allocations so we follow the same pattern.
>>
>> As is done for order-0 per-cpu clusters, the scanner now can steal
>> order-0 entries from any per-cpu-per-order reserved cluster. This
>> ensures that when the swap file is getting full, space doesn't get tied
>> up in the per-cpu reserves.
>>
>> This change only modifies swap to be able to accept any order mTHP. It
>> doesn't change the callers to elide doing the actual split. That will be
>> done in separate changes.
>>
>> Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>   include/linux/swap.h |  10 ++-
>>   mm/swap_slots.c      |   6 +-
>>   mm/swapfile.c        | 175 ++++++++++++++++++++++++-------------------
>>   3 files changed, 109 insertions(+), 82 deletions(-)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index 5e1e4f5bf0cb..11c53692f65f 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -268,13 +268,19 @@ struct swap_cluster_info {
>>    */
>>   #define SWAP_NEXT_INVALID	0
>>
>> +#ifdef CONFIG_THP_SWAP
>> +#define SWAP_NR_ORDERS		(PMD_ORDER + 1)
>> +#else
>> +#define SWAP_NR_ORDERS		1
>> +#endif
>> +
>>   /*
>>    * We assign a cluster to each CPU, so each CPU can allocate swap entry from
>>    * its own cluster and swapout sequentially. The purpose is to optimize swapout
>>    * throughput.
>>    */
>>   struct percpu_cluster {
>> -	unsigned int next; /* Likely next allocation offset */
>> +	unsigned int next[SWAP_NR_ORDERS]; /* Likely next allocation offset */
>>   };
>>
>>   struct swap_cluster_list {
>> @@ -471,7 +477,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio);
>>   bool folio_free_swap(struct folio *folio);
>>   void put_swap_folio(struct folio *folio, swp_entry_t entry);
>>   extern swp_entry_t get_swap_page_of_type(int);
>> -extern int get_swap_pages(int n, swp_entry_t swp_entries[], int entry_size);
>> +extern int get_swap_pages(int n, swp_entry_t swp_entries[], int order);
>>   extern int add_swap_count_continuation(swp_entry_t, gfp_t);
>>   extern void swap_shmem_alloc(swp_entry_t);
>>   extern int swap_duplicate(swp_entry_t);
>> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
>> index 53abeaf1371d..13ab3b771409 100644
>> --- a/mm/swap_slots.c
>> +++ b/mm/swap_slots.c
>> @@ -264,7 +264,7 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
>>   	cache->cur = 0;
>>   	if (swap_slot_cache_active)
>>   		cache->nr = get_swap_pages(SWAP_SLOTS_CACHE_SIZE,
>> -					   cache->slots, 1);
>> +					   cache->slots, 0);
>>
>>   	return cache->nr;
>>   }
>> @@ -311,7 +311,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
>>
>>   	if (folio_test_large(folio)) {
>>   		if (IS_ENABLED(CONFIG_THP_SWAP))
>> -			get_swap_pages(1, &entry, folio_nr_pages(folio));
>> +			get_swap_pages(1, &entry, folio_order(folio));
>
> The only comment I have is that this nr_pages -> order conversion adds
> a bit of noise to this patch.
>
> AFAIKS, it's primarily only required for "cluster->next[order]",
> everything else doesn't really require the order.
>
> I'd just have split that out into a separate patch, or simply
> converted nr_pages -> order where required.
>
> Nothing jumped at me, but I'm not an expert on that code, so I'm
> mostly trusting the others ;)

The nr_pages -> order conversion replaces ilog2(nr_pages) with
(1 << order).
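For anyone following along, here is a standalone sketch of that
trade-off (illustrative userspace code, not the kernel implementation;
the kernel's ilog2() is approximated with a compiler builtin):

/*
 * Illustration only: with nr_pages as the interface, the allocator must
 * derive the order via ilog2(); with order as the interface, the page
 * count is recovered with a single shift.
 */
#include <stdio.h>

static unsigned int ilog2_stand_in(unsigned int x)
{
	/* valid for x > 0; matches ilog2() for the power-of-2 sizes here */
	return 31u - (unsigned int)__builtin_clz(x);
}

int main(void)
{
	unsigned int nr_pages = 16;	/* e.g. a 64KiB mTHP of 4KiB pages */
	unsigned int order = ilog2_stand_in(nr_pages);	/* count -> order */

	printf("nr_pages=%u -> order=%u -> 1<<order=%u\n",
	       nr_pages, order, 1u << order);
	return 0;
}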
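And for readers new to the thread, the per-order per-cpu cluster scheme
described in the quoted commit message can be sketched roughly as below.
Every name and constant is a simplified stand-in, not the real
mm/swapfile.c code (which adds locking, cluster scanning, and the
order-0 stealing mentioned above); clusters are assumed aligned to
SWAPFILE_CLUSTER entries, as in the kernel:

/* Toy model of per-order swap cluster allocation. */
#include <stddef.h>
#include <stdio.h>

#define SWAPFILE_CLUSTER	512	/* entries per cluster; HPAGE_PMD_NR on x86-64/4K */
#define SWAP_NR_ORDERS		10	/* PMD_ORDER + 1 on x86-64/4K */
#define SWAP_NEXT_INVALID	0	/* offset 0 holds the swap header */

struct toy_cluster {
	unsigned int base;		/* first swap offset of this cluster */
	struct toy_cluster *next_free;	/* link on the free-cluster list */
};

struct toy_percpu_cluster {
	unsigned int next[SWAP_NR_ORDERS];	/* per-order allocation cursor */
};

static struct toy_cluster *toy_free_list;

/* Allocate (1 << order) contiguous entries; 0 means "fall back to splitting". */
static unsigned int toy_alloc(struct toy_percpu_cluster *pcp, int order)
{
	unsigned int nr = 1u << order;
	unsigned int offset = pcp->next[order];

	if (offset == SWAP_NEXT_INVALID) {
		/* No current cluster for this order: take a free one. */
		if (toy_free_list == NULL)
			return 0;
		offset = toy_free_list->base;
		toy_free_list = toy_free_list->next_free;
	}

	/*
	 * Hand out entries sequentially; every allocation in this cluster
	 * has the same order, so no space is lost to alignment padding.
	 */
	if ((offset + nr) % SWAPFILE_CLUSTER == 0)
		pcp->next[order] = SWAP_NEXT_INVALID;	/* cluster now full */
	else
		pcp->next[order] = offset + nr;

	return offset;
}

int main(void)
{
	struct toy_percpu_cluster pcp = { { SWAP_NEXT_INVALID } };
	struct toy_cluster c = { .base = 512, .next_free = NULL };

	toy_free_list = &c;
	printf("first order-4 allocation at offset %u\n", toy_alloc(&pcp, 4));
	printf("next  order-4 allocation at offset %u\n", toy_alloc(&pcp, 4));
	return 0;
}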