Subject: Re: [PATCH v1] mm: swap: Fix race between free_swap_and_cache() and swapoff()
From: Miaohe Lin <linmiaohe@huawei.com>
To: Ryan Roberts
CC: Andrew Morton, David Hildenbrand, linux-mm@kvack.org, "Huang, Ying"
References: <20240305151349.3781428-1-ryan.roberts@arm.com>
 <875xy0842q.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 7 Mar 2024 10:38:47 +0800

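[Context for readers joining the thread: as I read the v1 patch, the
proposed change makes free_swap_and_cache() look roughly like the sketch
below. This is my paraphrase rather than a verbatim copy of the patch, so
details such as the extra validity check and the exact TTRS_* flags may
differ from what was actually posted:]

	int free_swap_and_cache(swp_entry_t entry)
	{
		struct swap_info_struct *p;
		unsigned char count;

		if (non_swap_entry(entry))
			return 1;

		/*
		 * Pin the swap device: swapoff() cannot free
		 * si->swap_map[] until we call put_swap_device() below,
		 * which makes the swap_page_trans_huge_swapped() call
		 * safe against a concurrent swapoff().
		 */
		p = get_swap_device(entry);
		if (p) {
			count = __swap_entry_free(p, entry);
			if (count == SWAP_HAS_CACHE &&
			    !swap_page_trans_huge_swapped(p, entry))
				__try_to_reclaim_swap(p, swp_offset(entry),
						      TTRS_UNMAPPED | TTRS_FULL);
			put_swap_device(p);
		}
		return p != NULL;
	}
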
On 2024/3/6 17:31, Ryan Roberts wrote:
> On 06/03/2024 08:51, Miaohe Lin wrote:
>> On 2024/3/6 10:52, Huang, Ying wrote:
>>> Ryan Roberts writes:
>>>
>>>> There was previously a theoretical window where swapoff() could run
>>>> and tear down a swap_info_struct while a call to
>>>> free_swap_and_cache() was running in another thread. This could
>>>> cause, amongst other bad possibilities,
>>>> swap_page_trans_huge_swapped() (called by free_swap_and_cache()) to
>>>> access the freed memory for swap_map.
>>>>
>>>> This is a theoretical problem and I haven't been able to provoke it
>>>> from a test case. But there has been agreement based on code review
>>>> that this is possible (see link below).
>>>>
>>>> Fix it by using get_swap_device()/put_swap_device(), which will stall
>>>> swapoff(). There was an extra check in _swap_info_get() to confirm
>>>> that the swap entry was valid. This wasn't present in
>>>> get_swap_device(), so I've added it. I couldn't find any existing
>>>> get_swap_device() call sites where this extra check would cause any
>>>> false alarms.
>>>>
>>>> Details of how to provoke one possible issue (thanks to David
>>>> Hildenbrand for deriving this):
>>>>
>>>> --8<-----
>>>>
>>>> __swap_entry_free() might be the last user and result in
>>>> "count == SWAP_HAS_CACHE".
>>>>
>>>> swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
>>>>
>>>> So the question is: could someone reclaim the folio and turn
>>>> si->inuse_pages==0 before we complete swap_page_trans_huge_swapped()?
>>>>
>>>> Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages
>>>> are still referenced by swap entries.
>>>>
>>>> Process 1 still references subpage 0 via swap entry.
>>>> Process 2 still references subpage 1 via swap entry.
>>>>
>>>> Process 1 quits. Calls free_swap_and_cache().
>>>> -> count == SWAP_HAS_CACHE
>>>> [then, preempted in the hypervisor etc.]
>>>>
>>>> Process 2 quits. Calls free_swap_and_cache().
>>>> -> count == SWAP_HAS_CACHE
>>>>
>>>> Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and
>>>> calls __try_to_reclaim_swap().
>>>>
>>>> __try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
>>>> put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
>>>> swap_entry_free()->swap_range_free()->
>>>> ...
>>>> WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>>>>
>>>> What stops swapoff from succeeding after process 2 reclaimed the swap
>>>> cache but before process 1 finished its call to
>>>> swap_page_trans_huge_swapped()?
>>>>
>>>> --8<-----
>>>
>>> I think that this can be simplified. Even for a 4K folio, this could
>>> happen.
>>>
>>> CPU0                                CPU1
>>> ----                                ----
>>>
>>> zap_pte_range
>>>   free_swap_and_cache
>>>     __swap_entry_free
>>>     /* swap count becomes 0 */
>>>                                     swapoff
>>>                                       try_to_unuse
>>>                                         filemap_get_folio
>>>                                         folio_free_swap
>>>                                         /* remove swap cache */
>>>                                       /* free si->swap_map[] */
>>>
>>>     swap_page_trans_huge_swapped <-- access freed si->swap_map !!!
>>
>> Sorry for jumping into the discussion here. IMHO, free_swap_and_cache()
>> is called with the pte lock held.
>
> I don't believe it has the PTL when called by shmem.

In the case of shmem, folio_lock is used to guard against the race.

>
>> So synchronize_rcu (called by swapoff) will wait for zap_pte_range to
>> release the pte lock. So this theoretical problem can't happen. Or am I
>> missing something?
>
> For Huang Ying's example, I agree this can't happen because
> try_to_unuse() will be waiting for the PTL (see the reply I just sent).

Do you mean the below message?

"
I don't think si->inuse_pages is decremented until __try_to_reclaim_swap()
is called (per David, above), which is called after
swap_page_trans_huge_swapped() has executed. So in CPU1, try_to_unuse()
wouldn't see si->inuse_pages being zero until after CPU0 has completed
accessing si->swap_map, so if swapoff starts where you have put it, it
would get stalled waiting for the PTL which CPU0 has.
"

I agree try_to_unuse() will wait for si->inuse_pages to reach zero. But why
would it wait for the PTL? It seems the PTL is not used to protect
si->inuse_pages. Or am I missing something?

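[For reference, my understanding of what swapoff() actually waits on -- a
rough sketch from my reading of mm/swapfile.c, heavily abridged and from
memory, so the exact lines and ordering may differ:]

	SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
	{
		struct swap_info_struct *p;
		...
		/* Returns once si->inuse_pages has dropped to zero. */
		err = try_to_unuse(p->type);
		...
		/*
		 * Make new get_swap_device() calls fail, then wait for
		 * existing holders to drop their references. With the
		 * proposed patch, a concurrent free_swap_and_cache()
		 * would hold such a reference and stall swapoff() here
		 * until it calls put_swap_device().
		 */
		percpu_ref_kill(&p->users);
		synchronize_rcu();
		wait_for_completion(&p->comp);
		...
		vfree(swap_map);	/* the free that must not be raced with */
		...
	}
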
>
>> CPU0                                CPU1
>> ----                                ----
>>
>> zap_pte_range
>>   pte_offset_map_lock -- spin_lock is held.
>>   free_swap_and_cache
>>     __swap_entry_free
>>     /* swap count becomes 0 */
>>                                     swapoff
>>                                       try_to_unuse
>>                                         filemap_get_folio
>>                                         folio_free_swap
>>                                         /* remove swap cache */
>>                                       percpu_ref_kill(&p->users);
>>     swap_page_trans_huge_swapped
>>   pte_unmap_unlock -- spin_lock is released.
>>                                       synchronize_rcu(); --> will this wait
>>                                         for pte_unmap_unlock to be called?
>>                                       /* free si->swap_map[] */
>
> Perhaps you can educate me here; I thought that synchronize_rcu() will
> only wait for RCU critical sections to complete. The PTL is a spin lock,
> so why would synchronize_rcu() wait for the PTL to become unlocked?

I assume the PTL always disables preemption, which prevents a grace period
from completing until the PTL is released. But this might be fragile and
I'm not really sure. I might be wrong.
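[To spell out that assumption -- a toy illustration only, nothing from the
patch; toy_ptl etc. are made-up names. It relies on spin_lock() disabling
preemption and on the RCU implementation treating preemption-disabled
regions as read-side critical sections; on PREEMPT_RT neither is
guaranteed, which is part of why it feels fragile:]

	#include <linux/spinlock.h>
	#include <linux/rcupdate.h>

	static DEFINE_SPINLOCK(toy_ptl);	/* stand-in for the page table lock */

	/* Runs with the "PTL" held, like zap_pte_range() in the diagram. */
	static void toy_reader(void)
	{
		spin_lock(&toy_ptl);	/* disables preemption (not on PREEMPT_RT) */
		/*
		 * While preemption is disabled, this CPU cannot pass through
		 * a quiescent state, so a concurrent synchronize_rcu() on
		 * another CPU cannot return until the unlock below. That is
		 * a side effect of the RCU implementation, not something
		 * synchronize_rcu() knows about this particular lock.
		 */
		spin_unlock(&toy_ptl);	/* quiescent state possible again */
	}

	/* Runs on another CPU, like swapoff() in the diagram. */
	static void toy_updater(void)
	{
		synchronize_rcu();	/* waits out toy_reader()'s locked region */
	}

Thanks.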