linux-mm.kvack.org archive mirror
From: "Huang, Ying" <ying.huang@intel.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	 David Hildenbrand <david@redhat.com>,
	 Matthew Wilcox <willy@infradead.org>,
	 Gao Xiang <xiang@kernel.org>,  Yu Zhao <yuzhao@google.com>,
	 Yang Shi <shy828301@gmail.com>,  Michal Hocko <mhocko@suse.com>,
	 Kefeng Wang <wangkefeng.wang@huawei.com>,
	 Barry Song <21cnbao@gmail.com>,  Chris Li <chrisl@kernel.org>,
	 <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v4 4/6] mm: swap: Allow storage of all mTHP orders
Date: Thu, 21 Mar 2024 12:39:36 +0800	[thread overview]
Message-ID: <8734skryev.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <d6ac1097-2ca3-4e6d-902d-1b942cacf0fb@arm.com> (Ryan Roberts's message of "Wed, 20 Mar 2024 12:22:18 +0000")

Ryan Roberts <ryan.roberts@arm.com> writes:

> Hi Huang, Ying,
>
>
> On 12/03/2024 07:51, Huang, Ying wrote:
>> Ryan Roberts <ryan.roberts@arm.com> writes:
>> 
>>> Multi-size THP enables performance improvements by allocating large,
>>> pte-mapped folios for anonymous memory. However I've observed that on an
>>> arm64 system running a parallel workload (e.g. kernel compilation)
>>> across many cores, under high memory pressure, the speed regresses. This
>>> is due to bottlenecking on the increased number of TLBIs added due to
>>> all the extra folio splitting when the large folios are swapped out.
>>>
>>> Therefore, solve this regression by adding support for swapping out mTHP
>>> without needing to split the folio, just like is already done for
>>> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is enabled,
>>> and when the swap backing store is a non-rotating block device. These
>>> are the same constraints as for the existing PMD-sized THP swap-out
>>> support.
>>>
>>> Note that no attempt is made to swap-in (m)THP here - this is still done
>>> page-by-page, like for PMD-sized THP. But swapping-out mTHP is a
>>> prerequisite for swapping-in mTHP.
>>>
>>> The main change here is to improve the swap entry allocator so that it
>>> can allocate any power-of-2 number of contiguous entries between [1, (1
>>> << PMD_ORDER)]. This is done by allocating a cluster for each distinct
>>> order and allocating sequentially from it until the cluster is full.
>>> This ensures that we don't need to search the map and we get no
>>> fragmentation due to alignment padding for different orders in the
>>> cluster. If there is no current cluster for a given order, we attempt to
>>> allocate a free cluster from the list. If there are no free clusters, we
>>> fail the allocation and the caller can fall back to splitting the folio
>>> and allocating individual entries (as per the existing PMD-sized THP
>>> fallback).
>>>
>>> The per-order current clusters are maintained per-cpu using the existing
>>> infrastructure. This is done to avoid interleaving pages from different
>>> tasks, which would prevent IO being batched. This is already done for
>>> the order-0 allocations so we follow the same pattern.
>>>
>>> As is done for order-0 per-cpu clusters, the scanner now can steal
>>> order-0 entries from any per-cpu-per-order reserved cluster. This
>>> ensures that when the swap file is getting full, space doesn't get tied
>>> up in the per-cpu reserves.
>>>
>>> This change only modifies swap to be able to accept any order mTHP. It
>>> doesn't change the callers to elide doing the actual split. That will be
>>> done in separate changes.
>
> [...]
>
>>> @@ -905,17 +961,18 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>>  	}
>>>  
>>>  	if (si->swap_map[offset]) {
>>> +		VM_WARN_ON(order > 0);
>>>  		unlock_cluster(ci);
>>>  		if (!n_ret)
>>>  			goto scan;
>>>  		else
>>>  			goto done;
>>>  	}
>>> -	WRITE_ONCE(si->swap_map[offset], usage);
>>> -	inc_cluster_info_page(si, si->cluster_info, offset);
>>> +	memset(si->swap_map + offset, usage, nr_pages);
>> 
>> Should we add a barrier() here corresponding to the original WRITE_ONCE()?
>> unlock_cluster(ci) may be a NOP for some swap devices.
>
> Looking at this a bit more closely, I'm not sure this is needed. Even if there
> is no cluster, the swap_info is still locked, so unlocking that will act as a
> barrier. There are a number of other call sites that memset() si->swap_map without
> an explicit barrier and with the swap_info locked.
>
> Looking at the original commit that added the WRITE_ONCE(), it was worried about
> a race with reading swap_map in _swap_info_get(). But that site is now annotated
> with a data_race(), which will suppress the warning. And I don't believe there
> are any places that read swap_map locklessly and depend upon observing ordering
> between it and other state? So I think the si unlock is sufficient?
>
> I'm not planning to add barrier() here. Let me know if you disagree.

swap_map[] may be read locklessly in swap_offset_available_and_locked()
in parallel.  IIUC, the WRITE_ONCE() here is to make the write visible
there as early as possible.

>
>> 
>>> +	add_cluster_info_page(si, si->cluster_info, offset, nr_pages);
>>>  	unlock_cluster(ci);

--
Best Regards,
Huang, Ying


