From: "Huang, Ying"
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Tim Chen
Subject: Re: [PATCH v2 2/2] mm: swap: Swap-out small-sized THP without splitting
In-Reply-To: (Ryan Roberts's message of "Wed, 18 Oct 2023 15:07:55 +0100")
References: <20231017161302.2518826-1-ryan.roberts@arm.com>
 <20231017161302.2518826-3-ryan.roberts@arm.com>
 <87r0ls773p.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 19 Oct 2023 13:49:04 +0800
Message-ID: <87mswfuppr.fsf@yhuang6-desk2.ccr.corp.intel.com>

Ryan Roberts writes:

> On 18/10/2023 07:55, Huang, Ying wrote:
>> Ryan Roberts writes:
>>
>> [snip]
>>
>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>> index a073366a227c..35cbbe6509a9 100644
>>> --- a/include/linux/swap.h
>>> +++ b/include/linux/swap.h
>>> @@ -268,6 +268,12 @@ struct swap_cluster_info {
>>>  struct percpu_cluster {
>>>  	struct swap_cluster_info index; /* Current cluster index */
>>>  	unsigned int next; /* Likely next allocation offset */
>>> +	unsigned int large_next[]; /*
>>> +				    * next free offset within current
>>> +				    * allocation cluster for large folios,
>>> +				    * or UINT_MAX if no current cluster.
>>> +				    * Index is (order - 1).
>>> +				    */
>>>  };
>>>
>>>  struct swap_cluster_list {
>>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>>> index b83ad77e04c0..625964e53c22 100644
>>> --- a/mm/swapfile.c
>>> +++ b/mm/swapfile.c
>>> @@ -987,35 +987,70 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>>  	return n_ret;
>>>  }
>>>
>>> -static int swap_alloc_cluster(struct swap_info_struct *si, swp_entry_t *slot)
>>> +static int swap_alloc_large(struct swap_info_struct *si, swp_entry_t *slot,
>>> +			    unsigned int nr_pages)
>>
>> This looks hacky.  IMO, we should put the allocation logic inside the
>> percpu_cluster framework.  If the percpu_cluster framework doesn't
>> work for you, just refactor it first.
>
> I'm not sure I really understand what you are suggesting - could you elaborate?
> What "framework"? I only see a per-cpu data structure and
> scan_swap_map_try_ssd_cluster(), which is very much geared towards order-0
> allocations.

I suggest sharing as much code as possible between order-0 and
order > 0 swap entry allocation.  I think that we can make
scan_swap_map_try_ssd_cluster() work for order > 0 swap entry
allocation too.
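For example, something like the untested sketch below: keep one
"likely next offset" per order in struct percpu_cluster and pass the
order into the common helper.  This is illustrative only, not a patch;
locking, cluster accounting (alloc_cluster() etc.) and the scan
fallback are omitted, and SWAP_NR_ORDERS is a hypothetical constant.

	#define SWAP_NR_ORDERS	10	/* hypothetical: orders 0..9 */

	struct percpu_cluster {
		struct swap_cluster_info index;	/* Current cluster index */
		/* Likely next allocation offset, one per order */
		unsigned int next[SWAP_NR_ORDERS];
	};

	static bool try_ssd_cluster(struct swap_info_struct *si,
				    unsigned long *offset, int order)
	{
		struct percpu_cluster *cluster;
		unsigned int nr = 1 << order;
		unsigned int tmp;

		cluster = this_cpu_ptr(si->percpu_cluster);
		tmp = cluster->next[order];

		/*
		 * No current cluster for this order, or no room left in
		 * it: take a fresh cluster from the free list.
		 */
		if (tmp == UINT_MAX ||
		    tmp % SWAPFILE_CLUSTER + nr > SWAPFILE_CLUSTER) {
			if (cluster_list_empty(&si->free_clusters))
				return false;
			tmp = cluster_list_first(&si->free_clusters) *
			      SWAPFILE_CLUSTER;
		}

		/*
		 * Only entries of one order are allocated from each
		 * cluster, so every allocation is naturally aligned.
		 */
		*offset = tmp;
		tmp += nr;
		/* Cluster exhausted: force a fresh one next time. */
		cluster->next[order] = tmp % SWAPFILE_CLUSTER ? tmp : UINT_MAX;
		return true;
	}

Then the order-0 path is just the order == 0 case of the same helper,
and swap_alloc_large() doesn't need its own copy of the logic.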
> Are you suggesting you want to allocate large entries (> order-0) from the same
> cluster that is used for small (order-0) entries? The problem with this approach
> is that there may not be enough space left in the current cluster for the large
> entry that you are trying to allocate. Then you would need to take a new cluster
> from the free list and presumably leave the previous cluster with unused entries
> (which will now only be accessible by scanning). That unused space could be
> considerable.
>
> That's why I am currently reserving a "current cluster" per order; that way, all
> allocations are the same order, they are all naturally aligned and there is no
> waste.

I am fine with using one swap cluster per order per CPU.  I just think
that we should share code.

> Perhaps I could implement "stealing" between cpus so that a cpu trying to
> allocate a specific order, but which doesn't have a current cluster for that
> order and the free list is empty, could allocate from another cpu's current
> cluster? I don't think it's a good idea to mix different orders in the same
> cluster though.

I think we can start with a simple solution, that is, just fall back to
splitting the large folio.  Then, we can optimize it step by step.
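That is, much like what shrink_folio_list() already does for PMD-sized
THP when swap entry allocation fails (rough sketch only; the real code
and labels in mm/vmscan.c differ in detail):

	/*
	 * Illustrative sketch: if we cannot allocate swap entries for
	 * a large folio, split it and swap out the base pages instead.
	 */
	if (!add_to_swap(folio)) {
		if (!folio_test_large(folio))
			goto activate_locked;
		/* Fallback to swap out the folio as base pages. */
		if (split_folio_to_list(folio, folio_list))
			goto activate_locked;
		if (!add_to_swap(folio))
			goto activate_locked;
	}

With that fallback, an allocation failure just degrades to the existing
order-0 behavior, so the stealing optimization can come later.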
> I guess if really low, I could remove a current cluster from a cpu and allow it
> to be scanned for order-0 allocations at least?

Better to have the same behavior for order-0 and order > 0.  Perhaps
begin with the current solution (allow swap entries in the per-CPU
cluster to be stolen).  Then we can optimize based on this.

Not directly related to this patchset: maybe we can replace
swap_slots_cache with the per-CPU cluster in the future.  That would
reduce the code complexity.

> Any opinions gratefully received!

Thanks!

>>
>>> {
>>> +	int order_idx;
>>>  	unsigned long idx;
>>>  	struct swap_cluster_info *ci;
>>> +	struct percpu_cluster *cluster;
>>>  	unsigned long offset;
>>>
>>>  	/*
>>>  	 * Should not even be attempting cluster allocations when huge
>>>  	 * page swap is disabled. Warn and fail the allocation.
>>>  	 */
>>> -	if (!IS_ENABLED(CONFIG_THP_SWAP)) {
>>> +	if (!IS_ENABLED(CONFIG_THP_SWAP) ||
>>> +	    nr_pages < 4 || nr_pages > SWAPFILE_CLUSTER ||
>>> +	    !is_power_of_2(nr_pages)) {
>>>  		VM_WARN_ON_ONCE(1);
>>>  		return 0;
>>>  	}
>>>
>>> -	if (cluster_list_empty(&si->free_clusters))
>>> +	/*
>>> +	 * Not using clusters so unable to allocate large entries.
>>> +	 */
>>> +	if (!si->cluster_info)
>>>  		return 0;
>>>
>>> -	idx = cluster_list_first(&si->free_clusters);
>>> -	offset = idx * SWAPFILE_CLUSTER;
>>> -	ci = lock_cluster(si, offset);
>>> -	alloc_cluster(si, idx);
>>> -	cluster_set_count(ci, SWAPFILE_CLUSTER);
>>> +	order_idx = ilog2(nr_pages) - 2;
>>> +	cluster = this_cpu_ptr(si->percpu_cluster);
>>> +	offset = cluster->large_next[order_idx];
>>> +
>>> +	if (offset == UINT_MAX) {
>>> +		if (cluster_list_empty(&si->free_clusters))
>>> +			return 0;
>>> +
>>> +		idx = cluster_list_first(&si->free_clusters);
>>> +		offset = idx * SWAPFILE_CLUSTER;
>>>
>>> -	memset(si->swap_map + offset, SWAP_HAS_CACHE, SWAPFILE_CLUSTER);
>>> +		ci = lock_cluster(si, offset);
>>> +		alloc_cluster(si, idx);
>>> +		cluster_set_count(ci, SWAPFILE_CLUSTER);
>>> +
>>> +		/*
>>> +		 * If scan_swap_map_slots() can't find a free cluster, it will
>>> +		 * check si->swap_map directly. To make sure this standby
>>> +		 * cluster isn't taken by scan_swap_map_slots(), mark the swap
>>> +		 * entries bad (occupied). (same approach as discard).
>>> +		 */
>>> +		memset(si->swap_map + offset + nr_pages, SWAP_MAP_BAD,
>>> +			SWAPFILE_CLUSTER - nr_pages);
>>
>> There's an issue with this solution.  If the free space of the swap
>> device runs low, it's possible that
>>
>> - some clusters are put in the percpu_cluster of some CPUs, and the
>>   swap entries there are marked as used;
>>
>> - there are no free swap entries elsewhere;
>>
>> - nr_swap_pages isn't 0.
>>
>> So, we will still scan the LRU, but swap allocation will fail although
>> there's still free swap space.
>
> Ahh yes, good spot.
>
>>
>> I think that we should follow the method we used for the original
>> percpu_cluster.  That is, if all free swap entries are in the
>> percpu_cluster, we will start to allocate from the percpu_cluster.
>
> As I suggested above, I think I could do this by removing a cpu's current
> cluster for a given order from the percpu_cluster and making it generally
> available for scanning. Does that work for you?

Replied above.

>>
>>> +	} else {
>>> +		idx = offset / SWAPFILE_CLUSTER;
>>> +		ci = lock_cluster(si, offset);
>>> +	}
>>> +
>>> +	memset(si->swap_map + offset, SWAP_HAS_CACHE, nr_pages);
>>>  	unlock_cluster(ci);
>>> -	swap_range_alloc(si, offset, SWAPFILE_CLUSTER);
>>> +	swap_range_alloc(si, offset, nr_pages);
>>>  	*slot = swp_entry(si->type, offset);
>>>
>>> +	offset += nr_pages;
>>> +	if (idx != offset / SWAPFILE_CLUSTER)
>>> +		offset = UINT_MAX;
>>> +	cluster->large_next[order_idx] = offset;
>>> +
>>>  	return 1;
>>>  }
>>>
>>
>> [snip]

--
Best Regards,
Huang, Ying