Subject: Re: [PATCH v1] mm: swap: Fix race between free_swap_and_cache() and swapoff()
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Wed, 6 Mar 2024 09:31:01 +0000
To: Miaohe Lin, "Huang, Ying"
Cc: Andrew Morton, David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
References: <20240305151349.3781428-1-ryan.roberts@arm.com> <875xy0842q.fsf@yhuang6-desk2.ccr.corp.intel.com>
On 06/03/2024 08:51, Miaohe Lin wrote:
> On 2024/3/6 10:52, Huang, Ying wrote:
>> Ryan Roberts writes:
>>
>>> There was previously a theoretical window where swapoff() could run and
>>> tear down a swap_info_struct while a call to free_swap_and_cache() was
>>> running in another thread. This could cause, amongst other bad
>>> possibilities, swap_page_trans_huge_swapped() (called by
>>> free_swap_and_cache()) to access the freed memory for swap_map.
>>>
>>> This is a theoretical problem and I haven't been able to provoke it from
>>> a test case. But there has been agreement based on code review that this
>>> is possible (see link below).
>>>
>>> Fix it by using get_swap_device()/put_swap_device(), which will stall
>>> swapoff(). There was an extra check in _swap_info_get() to confirm that
>>> the swap entry was valid. This wasn't present in get_swap_device() so
>>> I've added it.
>>> I couldn't find any existing get_swap_device() call sites
>>> where this extra check would cause any false alarms.
>>>
>>> Details of how to provoke one possible issue (thanks to David Hildenbrand
>>> for deriving this):
>>>
>>> --8<-----
>>>
>>> __swap_entry_free() might be the last user and result in
>>> "count == SWAP_HAS_CACHE".
>>>
>>> swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
>>>
>>> So the question is: could someone reclaim the folio and turn
>>> si->inuse_pages==0, before we completed swap_page_trans_huge_swapped()?
>>>
>>> Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are
>>> still referenced by swap entries.
>>>
>>> Process 1 still references subpage 0 via swap entry.
>>> Process 2 still references subpage 1 via swap entry.
>>>
>>> Process 1 quits. Calls free_swap_and_cache().
>>> -> count == SWAP_HAS_CACHE
>>> [then, preempted in the hypervisor etc.]
>>>
>>> Process 2 quits. Calls free_swap_and_cache().
>>> -> count == SWAP_HAS_CACHE
>>>
>>> Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
>>> __try_to_reclaim_swap().
>>>
>>> __try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
>>> put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
>>> swap_entry_free()->swap_range_free()->
>>> ...
>>> WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>>>
>>> What stops swapoff from succeeding after process 2 reclaimed the swap
>>> cache but before process 1 finished its call to
>>> swap_page_trans_huge_swapped()?
>>>
>>> --8<-----
>>
>> I think that this can be simplified. Even for a 4K folio, this could
>> happen:
>>
>> CPU0                                CPU1
>> ----                                ----
>>
>> zap_pte_range
>>   free_swap_and_cache
>>     __swap_entry_free
>>     /* swap count becomes 0 */
>>                                     swapoff
>>                                       try_to_unuse
>>                                         filemap_get_folio
>>                                         folio_free_swap
>>                                         /* remove swap cache */
>>                                       /* free si->swap_map[] */
>>   swap_page_trans_huge_swapped <-- access freed si->swap_map !!!
>
> Sorry for jumping into the discussion here.
> IMHO, free_swap_and_cache is called with pte lock held.

I don't believe it has the PTL when called by shmem.

> So synchronize_rcu (called by swapoff) will wait for zap_pte_range to
> release the pte lock. So this theoretical problem can't happen. Or am I
> missing something?

For Huang Ying's example, I agree this can't happen because try_to_unuse() will be waiting for the PTL (see the reply I just sent).

>
> CPU0                                CPU1
> ----                                ----
>
> zap_pte_range
>   pte_offset_map_lock -- spin_lock is held.
>     free_swap_and_cache
>       __swap_entry_free
>       /* swap count becomes 0 */
>                                     swapoff
>                                       try_to_unuse
>                                         filemap_get_folio
>                                         folio_free_swap
>                                         /* remove swap cache */
>                                       percpu_ref_kill(&p->users);
>     swap_page_trans_huge_swapped
>   pte_unmap_unlock -- spin_lock is released.
>                                       synchronize_rcu(); --> Will wait for pte_unmap_unlock to be called?

Perhaps you can educate me here; I thought that synchronize_rcu() will only wait for RCU critical sections to complete. The PTL is a spin lock, so why would synchronize_rcu() wait for the PTL to become unlocked?

> /* free si->swap_map[] */
>
> Thanks.