From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v1] mm: swap: Fix race between free_swap_and_cache() and swapoff()
From: Miaohe Lin <linmiaohe@huawei.com>
To: "Huang, Ying", Ryan Roberts
CC: Andrew Morton, David Hildenbrand, linux-mm@kvack.org
Date: Wed, 6 Mar 2024 16:51:22 +0800
In-Reply-To: <875xy0842q.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20240305151349.3781428-1-ryan.roberts@arm.com> <875xy0842q.fsf@yhuang6-desk2.ccr.corp.intel.com>
Content-Type: text/plain; charset="windows-1252"
On 2024/3/6 10:52, Huang, Ying wrote:
> Ryan Roberts writes:
>
>> There was previously a theoretical window where swapoff() could run and
>> tear down a swap_info_struct while a call to free_swap_and_cache() was
>> running in another thread.
>> This could cause, amongst other bad
>> possibilities, swap_page_trans_huge_swapped() (called by
>> free_swap_and_cache()) to access the freed memory for swap_map.
>>
>> This is a theoretical problem and I haven't been able to provoke it from
>> a test case. But there has been agreement based on code review that this
>> is possible (see link below).
>>
>> Fix it by using get_swap_device()/put_swap_device(), which will stall
>> swapoff(). There was an extra check in _swap_info_get() to confirm that
>> the swap entry was valid. This wasn't present in get_swap_device() so
>> I've added it. I couldn't find any existing get_swap_device() call sites
>> where this extra check would cause any false alarms.
>>
>> Details of how to provoke one possible issue (thanks to David Hildenbrand
>> for deriving this):
>>
>> --8<-----
>>
>> __swap_entry_free() might be the last user and result in
>> "count == SWAP_HAS_CACHE".
>>
>> swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
>>
>> So the question is: could someone reclaim the folio and turn
>> si->inuse_pages==0, before we complete swap_page_trans_huge_swapped()?
>>
>> Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are
>> still referenced by swap entries.
>>
>> Process 1 still references subpage 0 via swap entry.
>> Process 2 still references subpage 1 via swap entry.
>>
>> Process 1 quits. Calls free_swap_and_cache().
>> -> count == SWAP_HAS_CACHE
>> [then, preempted in the hypervisor etc.]
>>
>> Process 2 quits. Calls free_swap_and_cache().
>> -> count == SWAP_HAS_CACHE
>>
>> Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
>> __try_to_reclaim_swap().
>>
>> __try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
>> put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
>> swap_entry_free()->swap_range_free()->
>> ...
>> WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>>
>> What stops swapoff from succeeding after process 2 reclaimed the swap
>> cache but before process 1 finished its call to
>> swap_page_trans_huge_swapped()?
>>
>> --8<-----
>
> I think that this can be simplified. Even for a 4K folio, this could
> happen.
>
> CPU0                               CPU1
> ----                               ----
>
> zap_pte_range
>   free_swap_and_cache
>     __swap_entry_free
>     /* swap count becomes 0 */
>                                    swapoff
>                                      try_to_unuse
>                                        filemap_get_folio
>                                          folio_free_swap
>                                        /* remove swap cache */
>                                      /* free si->swap_map[] */
>
>     swap_page_trans_huge_swapped <-- access freed si->swap_map !!!

Sorry for jumping into the discussion here. IMHO, free_swap_and_cache() is
called with the pte lock held, so synchronize_rcu() (called by swapoff())
will wait for zap_pte_range() to release the pte lock. So this theoretical
problem can't happen. Or am I missing something?

CPU0                                 CPU1
----                                 ----

zap_pte_range
  pte_offset_map_lock -- spin_lock is held.
  free_swap_and_cache
    __swap_entry_free
    /* swap count becomes 0 */
                                     swapoff
                                       try_to_unuse
                                         filemap_get_folio
                                           folio_free_swap
                                         /* remove swap cache */
                                       percpu_ref_kill(&p->users);
    swap_page_trans_huge_swapped
  pte_unmap_unlock -- spin_lock is released.
                                       synchronize_rcu();
                                       --> will wait for pte_unmap_unlock
                                           to be called?
                                       /* free si->swap_map[] */

Thanks.