From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Hugh Dickins <hughd@google.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: willy@infradead.org, david@redhat.com,
	wangkefeng.wang@huawei.com, chrisl@kernel.org,
	ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
	shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
	da.gomez@samsung.com, p.raghav@samsung.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 8/9] mm: shmem: split large entry if the swapin folio is not large
Date: Tue, 27 Aug 2024 14:46:40 +0800	[thread overview]
Message-ID: <39c71ccf-669b-4d9f-923c-f6b9c4ceb8df@linux.alibaba.com> (raw)
In-Reply-To: <e2a2ba5d-864c-50aa-7579-97cba1c7dd0c@google.com>



On 2024/8/26 06:31, Hugh Dickins wrote:
> On Mon, 12 Aug 2024, Baolin Wang wrote:
> 
>> Now the swap device can only swap-in order 0 folio, even though a large
>> folio is swapped out. This requires us to split the large entry previously
>> saved in the shmem pagecache to support the swap in of small folios.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/shmem.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 100 insertions(+)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 345e25425e37..996062dc196b 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1990,6 +1990,81 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
>>   	swap_free_nr(swap, nr_pages);
>>   }
>>   
>> +static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
>> +				   swp_entry_t swap, int new_order, gfp_t gfp)
>> +{
>> +	struct address_space *mapping = inode->i_mapping;
>> +	XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
>> +	void *alloced_shadow = NULL;
>> +	int alloced_order = 0, i;
> 
> gfp needs to be adjusted: see fix patch below.

Ah, good catch. Thank you Hugh.

>> +
>> +	for (;;) {
>> +		int order = -1, split_order = 0;
>> +		void *old = NULL;
>> +
>> +		xas_lock_irq(&xas);
>> +		old = xas_load(&xas);
>> +		if (!xa_is_value(old) || swp_to_radix_entry(swap) != old) {
>> +			xas_set_err(&xas, -EEXIST);
>> +			goto unlock;
>> +		}
>> +
>> +		order = xas_get_order(&xas);
>> +
>> +		/* Swap entry may have changed before we re-acquire the lock */
>> +		if (alloced_order &&
>> +		    (old != alloced_shadow || order != alloced_order)) {
>> +			xas_destroy(&xas);
>> +			alloced_order = 0;
>> +		}
>> +
>> +		/* Try to split large swap entry in pagecache */
>> +		if (order > 0 && order > new_order) {
> 
> I have not even attempted to understand all the manipulations of order and
> new_order and alloced_order and split_order.  And further down it turns out
> that this is only ever called with new_order 0.
> 
> You may be wanting to cater for more generality in future, but for now
> please cut this down to the new_order 0 case, and omit that parameter.
> It will be easier for us to think about the xa_get_order() races if
> the possibilities are more limited.

Sure. I will drop 'new_order' with the following fix.

> 
>> +			if (!alloced_order) {
>> +				split_order = order;
>> +				goto unlock;
>> +			}
>> +			xas_split(&xas, old, order);
>> +
>> +			/*
>> +			 * Re-set the swap entry after splitting, and the swap
>> +			 * offset of the original large entry must be continuous.
>> +			 */
>> +			for (i = 0; i < 1 << order; i += (1 << new_order)) {
>> +				pgoff_t aligned_index = round_down(index, 1 << order);
>> +				swp_entry_t tmp;
>> +
>> +				tmp = swp_entry(swp_type(swap), swp_offset(swap) + i);
>> +				__xa_store(&mapping->i_pages, aligned_index + i,
>> +					   swp_to_radix_entry(tmp), 0);
>> +			}
> 
> So that is done under xas lock: good. But is the intermediate state
> visible to RCU readers, and could that be a problem?

In xas_split(), the multi-index entry is split into smaller entries, 
and each of these smaller entries is initially set to the old swap 
value. The subsequent __xa_store() loop then re-sets them to the new 
swap values. So RCU readers might observe the old swap value in the 
interim, but I have not seen any problem with that so far (maybe I 
missed something).
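
To make the intermediate state concrete, here is an illustrative
timeline (my own sketch, not code from the patch) for an order-2
large entry at index 0 holding swap offset S:

/*
 * Before xas_split():   xa_load() at any index 0..3 returns the one
 *                       multi-index entry swp_to_radix_entry(S).
 * After xas_split():    indices 0..3 each hold their own order-0
 *                       entry, but all still carry the old value S.
 * After __xa_store():   index i holds swp_to_radix_entry(S + i),
 *                       i.e. the swap offsets are continuous again.
 */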

For concurrent shmem swap-in, shmem_swapin_folio() performs several 
checks (including the folio->swap.val and shmem_confirm_swap() 
validations) to ensure the swap values are correct.
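
For reference, a minimal sketch of what shmem_confirm_swap() checks
(simplified; the real helper in mm/shmem.c may differ in detail):

static bool shmem_confirm_swap(struct address_space *mapping,
			       pgoff_t index, swp_entry_t swap)
{
	/* Does the slot still hold exactly the expected swap entry? */
	return xa_load(&mapping->i_pages, index) == swp_to_radix_entry(swap);
}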

For shmem_partial_swap_usage(), we may get racy swap usage counts, but 
per its comments that is not a problem:
" * This is safe to call without i_rwsem or the i_pages lock thanks to RCU,
  * as long as the inode doesn't go away and racy results are not a problem."

For shmem truncation, when removing a possibly racy swap entry from the 
shmem page cache, xa_cmpxchg_irq() is used to synchronize on the 
correct swap state.
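
For illustration, a hedged sketch of that compare-and-exchange pattern
in shmem_free_swap() (simplified from this series; details may differ):

static long shmem_free_swap(struct address_space *mapping,
			    pgoff_t index, void *radswap)
{
	int order = xa_get_order(&mapping->i_pages, index);
	void *old;

	/* Remove the entry only if it has not changed under us. */
	old = xa_cmpxchg_irq(&mapping->i_pages, index, radswap, NULL, 0);
	if (old != radswap)
		return 0;
	free_swap_and_cache_nr(radix_to_swp_entry(radswap), 1 << order);

	return 1 << order;
}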


[PATCH] mm: shmem: split large entry if the swapin folio is not large fix 2

Now we only split the large swap entry to order 0, so drop the
'new_order' parameter.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
  mm/shmem.c | 10 +++++-----
  1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index d8038a66b110..f00b7b99ad09 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1998,10 +1998,10 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
 }
 
 static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
-				   swp_entry_t swap, int new_order, gfp_t gfp)
+				   swp_entry_t swap, gfp_t gfp)
 {
 	struct address_space *mapping = inode->i_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, index, new_order);
+	XA_STATE_ORDER(xas, &mapping->i_pages, index, 0);
 	void *alloced_shadow = NULL;
 	int alloced_order = 0, i;
 
@@ -2026,7 +2026,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
 		}
 
 		/* Try to split large swap entry in pagecache */
-		if (order > 0 && order > new_order) {
+		if (order > 0) {
 			if (!alloced_order) {
 				split_order = order;
 				goto unlock;
@@ -2037,7 +2037,7 @@ static int shmem_split_large_entry(struct inode *inode, pgoff_t index,
 			 * Re-set the swap entry after splitting, and the swap
 			 * offset of the original large entry must be continuous.
 			 */
-			for (i = 0; i < 1 << order; i += (1 << new_order)) {
+			for (i = 0; i < 1 << order; i++) {
 				pgoff_t aligned_index = round_down(index, 1 << order);
 				swp_entry_t tmp;
 
@@ -2123,7 +2123,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
 		 * should split the large swap entry stored in the pagecache
 		 * if necessary.
 		 */
-		split_order = shmem_split_large_entry(inode, index, swap, 0, gfp);
+		split_order = shmem_split_large_entry(inode, index, swap, gfp);
 		if (split_order < 0) {
 			error = split_order;
 			goto failed;


Thread overview: 23+ messages
2024-08-12  7:42 [PATCH v5 0/9] support large folio swap-out and swap-in for shmem Baolin Wang
2024-08-12  7:42 ` [PATCH v5 1/9] mm: swap: extend swap_shmem_alloc() to support batch SWAP_MAP_SHMEM flag setting Baolin Wang
2024-08-12  7:42 ` [PATCH v5 2/9] mm: shmem: extend shmem_partial_swap_usage() to support large folio swap Baolin Wang
2024-08-12  7:42 ` [PATCH v5 3/9] mm: shmem: return number of pages beeing freed in shmem_free_swap Baolin Wang
2024-08-12  7:42 ` [PATCH v5 4/9] mm: filemap: use xa_get_order() to get the swap entry order Baolin Wang
2024-08-25 21:55   ` Hugh Dickins
2024-08-25 23:28     ` Matthew Wilcox
2024-08-27 10:10     ` Baolin Wang
2024-08-29  8:07       ` Hugh Dickins
2024-08-29 12:40         ` Baolin Wang
2024-08-30 10:18           ` Hugh Dickins
2024-08-12  7:42 ` [PATCH v5 5/9] mm: shmem: use swap_free_nr() to free shmem swap entries Baolin Wang
2024-08-12  7:42 ` [PATCH v5 6/9] mm: shmem: support large folio allocation for shmem_replace_folio() Baolin Wang
2024-08-25 22:05   ` Hugh Dickins
2024-08-27  3:06     ` Baolin Wang
2024-08-12  7:42 ` [PATCH v5 7/9] mm: shmem: drop folio reference count using 'nr_pages' in shmem_delete_from_page_cache() Baolin Wang
2024-08-12  7:42 ` [PATCH v5 8/9] mm: shmem: split large entry if the swapin folio is not large Baolin Wang
2024-08-25 22:31   ` Hugh Dickins
2024-08-27  6:46     ` Baolin Wang [this message]
2024-08-12  7:42 ` [PATCH v5 9/9] mm: shmem: support large folio swap out Baolin Wang
2024-08-25 23:14   ` Hugh Dickins
2024-08-27  6:58     ` Baolin Wang
2024-08-28  8:28     ` [PATCH] mm: shmem: support large folio swap out fix 2 Baolin Wang
