From: Rongwei Wang <rongwei.wang@linux.alibaba.com>
To: Aaron Lu <aaron.lu@intel.com>
Cc: akpm@linux-foundation.org, bagasdotme@gmail.com,
willy@infradead.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v2] mm/swap: fix swap_info_struct race between swapoff and get_swap_pages()
Date: Fri, 7 Apr 2023 10:20:48 +0800 [thread overview]
Message-ID: <97551857-b2fe-eb26-88a0-780951b873d7@linux.alibaba.com> (raw)
In-Reply-To: <20230406145754.GA440657@ziqianlu-desk2>
On 2023/4/6 22:57, Aaron Lu wrote:
> On Thu, Apr 06, 2023 at 10:04:16PM +0800, Aaron Lu wrote:
>> On Tue, Apr 04, 2023 at 11:47:16PM +0800, Rongwei Wang wrote:
>>> The si->lock must be held when deleting the si from
>>> the available list. Otherwise, another thread can
>>> re-add the si to the available list, which can lead
>>> to memory corruption. The only place we have found
>>> where this happens is in the swapoff path. This case
>>> can be described as below:
>>>
>>> core 0                       core 1
>>> swapoff
>>>
>>> del_from_avail_list(si)      waiting
>>>
>>> try lock si->lock            acquire swap_avail_lock
>>>                              and re-add si into
>>>                              swap_avail_head
>> confused here.
>>
>> If del_from_avail_list(si) finished in the swapoff path, then this si should
>> not exist in any of the per-node avail lists and core 1 should not be
>> able to re-add it.
> I think a possible sequence could be like this:
>
> cpuX                                cpuY
> swapoff                             put_swap_folio()
>
> del_from_avail_list(si)
>                                     taken si->lock
> spin_lock(&si->lock);
>
>                                     swap_range_free()
>                                       was_full && SWP_WRITEOK -> re-add!
>                                     drop si->lock
>
> taken si->lock
> proceed removing si
>
> End result: si left on avail_list after being swapped off.
>
> The problem is that add_to_avail_list() has no idea this si is being
> swapped off; taking si->lock before del_from_avail_list() avoids this
> problem, so I think this patch did the right thing, but the changelog
> about how this happened needs updating, and after that:
Hi Aaron
That's my fault. Actually, I didn't refer specifically to the
swap_range_free() path in the commit message, mainly because cpuY can
stand for any thread that is waiting on swap_avail_lock and then calls
add_to_avail_list(), not only swap_range_free(); it could also be
swapon(), for example.
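
For anyone following along, below is a simplified standalone userspace
model of the ordering being discussed. It is only a sketch, not the
kernel patch itself: the struct, the on_avail_list field and the helper
bodies are stand-ins, and the SWP_WRITEOK check is folded into
add_to_avail_list() for brevity (in the kernel the swap_range_free()
caller checks it, as in Aaron's diagram above). The point it models is
that once SWP_WRITEOK is cleared and the deletion is done under
si->lock, a freer that takes si->lock afterwards can no longer re-add
the device:

/*
 * Simplified userspace model of the locking order discussed above.
 * Not kernel code: names and helpers are stand-ins for illustration.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define SWP_WRITEOK 0x1

struct swap_info {                    /* stand-in for swap_info_struct */
        pthread_mutex_t lock;         /* models si->lock */
        unsigned int flags;
        bool on_avail_list;           /* models presence on swap_avail_heads */
};

static pthread_mutex_t swap_avail_lock = PTHREAD_MUTEX_INITIALIZER;

/* In this model the SWP_WRITEOK check lives here; in the kernel the
 * caller (swap_range_free()) checks it before re-adding. */
static void add_to_avail_list(struct swap_info *si)
{
        pthread_mutex_lock(&swap_avail_lock);
        if (si->flags & SWP_WRITEOK)
                si->on_avail_list = true;
        pthread_mutex_unlock(&swap_avail_lock);
}

static void del_from_avail_list(struct swap_info *si)
{
        pthread_mutex_lock(&swap_avail_lock);
        si->on_avail_list = false;
        pthread_mutex_unlock(&swap_avail_lock);
}

/* The ordering the fix enforces: clear SWP_WRITEOK and delete from the
 * avail list while holding si->lock. */
static void swapoff_path(struct swap_info *si)
{
        pthread_mutex_lock(&si->lock);
        si->flags &= ~SWP_WRITEOK;
        del_from_avail_list(si);
        pthread_mutex_unlock(&si->lock);
}

/* Models cpuY: any thread that frees swap slots (or otherwise calls
 * add_to_avail_list()) after taking si->lock. */
static void free_path(struct swap_info *si)
{
        pthread_mutex_lock(&si->lock);
        add_to_avail_list(si);        /* no-op once SWP_WRITEOK is gone */
        pthread_mutex_unlock(&si->lock);
}

int main(void)
{
        static struct swap_info si = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .flags = SWP_WRITEOK,
                .on_avail_list = true,
        };

        swapoff_path(&si);
        free_path(&si);
        printf("on_avail_list after swapoff: %d\n", si.on_avail_list);
        return 0;
}

It builds with "gcc -pthread" and prints "on_avail_list after swapoff: 0",
i.e. the device stays off the list.
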
>
> Reviewed-by: Aaron Lu <aaron.lu@intel.com>
Thanks for your time.
-wrw
>
> Thanks,
> Aaron
>