Subject: Re: [PATCH] mm,swap: add document about RCU read lock and swapoff interaction
To: Huang Ying
Cc: , , Ryan Roberts, David Hildenbrand, Hugh Dickins, Minchan Kim, Andrew Morton
References: <20240407065450.498821-1-ying.huang@intel.com>
From: Miaohe Lin
Message-ID: <62d09dae-39b9-f235-9786-6e12302425a0@huawei.com>
Date: Wed, 10 Apr 2024 15:58:30 +0800
In-Reply-To: <20240407065450.498821-1-ying.huang@intel.com>

On 2024/4/7 14:54, Huang Ying wrote:
> During reviewing a patch to fix the race condition between
> free_swap_and_cache() and swapoff() [1], it was found that the
> document about how to prevent racing with swapoff isn't clear enough.
> Especially RCU read lock can prevent swapoff from freeing data
> structures. So, the document is added as comments.
>
> [1] https://lore.kernel.org/linux-mm/c8fe62d0-78b8-527a-5bef-ee663ccdc37a@huawei.com/
>
> Signed-off-by: "Huang, Ying"
> Cc: Ryan Roberts
> Cc: David Hildenbrand
> Cc: Miaohe Lin
> Cc: Hugh Dickins
> Cc: Minchan Kim

Thanks for your work.
Reviewed-by: Miaohe Lin

> ---
>  mm/swapfile.c | 26 +++++++++++++-------------
>  1 file changed, 13 insertions(+), 13 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 4919423cce76..6925462406fa 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1226,16 +1226,15 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p,
>
>  /*
>   * When we get a swap entry, if there aren't some other ways to
> - * prevent swapoff, such as the folio in swap cache is locked, page
> - * table lock is held, etc., the swap entry may become invalid because
> - * of swapoff. Then, we need to enclose all swap related functions
> - * with get_swap_device() and put_swap_device(), unless the swap
> - * functions call get/put_swap_device() by themselves.
> + * prevent swapoff, such as the folio in swap cache is locked, RCU
> + * reader side is locked, etc., the swap entry may become invalid
> + * because of swapoff. Then, we need to enclose all swap related
> + * functions with get_swap_device() and put_swap_device(), unless the
> + * swap functions call get/put_swap_device() by themselves.
>   *
> - * Note that when only holding the PTL, swapoff might succeed immediately
> - * after freeing a swap entry. Therefore, immediately after
> - * __swap_entry_free(), the swap info might become stale and should not
> - * be touched without a prior get_swap_device().
> + * RCU reader side lock (including any spinlock) is sufficient to
> + * prevent swapoff, because synchronize_rcu() is called in swapoff()
> + * before freeing data structures.
>   *
>   * Check whether swap entry is valid in the swap device. If so,
>   * return pointer to swap_info_struct, and keep the swap entry valid
> @@ -2495,10 +2494,11 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>
>  	/*
>  	 * Wait for swap operations protected by get/put_swap_device()
> -	 * to complete.
> -	 *
> -	 * We need synchronize_rcu() here to protect the accessing to
> -	 * the swap cache data structure.
> +	 * to complete. Because of synchronize_rcu() here, all swap
> +	 * operations protected by RCU reader side lock (including any
> +	 * spinlock) will be waited too. This makes it easy to
> +	 * prevent folio_test_swapcache() and the following swap cache
> +	 * operations from racing with swapoff.
>  	 */
>  	percpu_ref_kill(&p->users);
>  	synchronize_rcu();
>
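One small addition from my side, in case it helps future readers of the new
comment: below is a minimal, purely illustrative sketch of the caller-side
pattern the first hunk documents. example_swap_user() is a hypothetical
function; get_swap_device()/put_swap_device() are the existing helpers.

#include <linux/swap.h>

/* Hypothetical caller, for illustration only. */
static void example_swap_user(swp_entry_t entry)
{
	struct swap_info_struct *si;

	/* Pin the swap device; returns NULL if the entry is no longer valid. */
	si = get_swap_device(entry);
	if (!si)
		return;

	/*
	 * Between get_swap_device() and put_swap_device(), swapoff() cannot
	 * free the swap data structures backing @entry.
	 */
	/* ... access the swap cache / swap map for @entry here ... */

	put_swap_device(si);
}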
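And a sketch of the RCU reader-side alternative described by the new text,
again illustrative only (example_rcu_reader() is hypothetical):

#include <linux/rcupdate.h>

/* Hypothetical reader, for illustration only. */
static void example_rcu_reader(void)
{
	rcu_read_lock();
	/*
	 * Per the updated comments: swapoff() calls synchronize_rcu() after
	 * percpu_ref_kill(&p->users) and before freeing its data structures,
	 * so those structures cannot disappear inside this RCU read-side
	 * critical section. The swap entry itself may still become invalid;
	 * that has to be checked separately (e.g. via the locked swap cache
	 * folio).
	 */
	/* ... read swap data structures here ... */
	rcu_read_unlock();
}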