From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 4 Mar 2024 21:55:24 +0000
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Message-ID: <6cfc022a-0c7a-4fe6-aaa4-3d28aeacc982@arm.com>
In-Reply-To: <65a66eb9-41f8-4790-8db2-0c70ea15979f@redhat.com>

On 04/03/2024 20:50, David Hildenbrand wrote:
>>>>
>>>> This is the existing free_swap_and_cache(). I think _swap_info_get() would
>>>> break if this could race with swapoff(), and __swap_entry_free() looks up
>>>> the cluster from an array, which would also be freed by swapoff if racing:
>>>>
>>>> int free_swap_and_cache(swp_entry_t entry)
>>>> {
>>>>     struct swap_info_struct *p;
>>>>     unsigned char count;
>>>>
>>>>     if (non_swap_entry(entry))
>>>>         return 1;
>>>>
>>>>     p = _swap_info_get(entry);
>>>>     if (p) {
>>>>         count = __swap_entry_free(p, entry);
>>>
>>> If count dropped to 0 and
>>>
>>>>         if (count == SWAP_HAS_CACHE)
>>>
>>> count is now SWAP_HAS_CACHE, there is in fact no swap entry anymore. We
>>> removed it. That one would have to be reclaimed asynchronously.
>>>
>>> With the existing code, we would call swap_page_trans_huge_swapped() with
>>> the SI obtained via _swap_info_get().
>>>
>>> I also don't see what should be left protecting the SI. It's not locked
>>> anymore, the swapcounts are at 0. We don't hold the folio lock.
>>>
>>> try_to_unuse() will stop as soon as si->inuse_pages is at 0. Hm ...
>>
>> But, assuming the caller of free_swap_and_cache() acquires the PTL first, I
>> think this all works out ok? While free_swap_and_cache() is running,
>> try_to_unuse() will wait for the PTL. Or if try_to_unuse() runs first, then
>> free_swap_and_cache() will never be called because the swap entry will have
>> been removed from the PTE?
>
> But can't try_to_unuse() run, detect !si->inuse_pages and not even bother
> about scanning any further page tables?
>
> But my head hurts from digging through that code.

Yep, glad I'm not the only one that gets headaches from swapfile.c.

>
> Let me try again:
>
> __swap_entry_free() might be the last user and result in
> "count == SWAP_HAS_CACHE".
>
> swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.
>
> So the question is: could someone reclaim the folio and turn
> si->inuse_pages==0, before we completed swap_page_trans_huge_swapped().
>
> Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are
> still referenced by swap entries.
>
> Process 1 still references subpage 0 via swap entry.
> Process 2 still references subpage 1 via swap entry.
>
> Process 1 quits. Calls free_swap_and_cache().
> -> count == SWAP_HAS_CACHE
> [then, preempted in the hypervisor etc.]
>
> Process 2 quits. Calls free_swap_and_cache().
> -> count == SWAP_HAS_CACHE
>
> Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls
> __try_to_reclaim_swap().
>
> __try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->
> put_swap_folio()->free_swap_slot()->swapcache_free_entries()->
> swap_entry_free()->swap_range_free()->
> ...
> WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>
> What stops swapoff from succeeding after process 2 reclaimed the swap cache
> but before process 1 finished its call to swap_page_trans_huge_swapped()?

Assuming you are talking about anonymous memory, process 1 holds the PTL while
it's executing free_swap_and_cache(). try_to_unuse() iterates over every vma in
every mm, and it swaps in a page for every PTE that holds a swap entry for the
device being swapoff'ed. It takes the PTL while converting the swap entry to a
present PTE - see unuse_pte(). Process 1 must have beaten try_to_unuse() to the
particular PTE, because if try_to_unuse() had got there first, it would have
converted it from a swap entry to a present PTE and process 1 would never even
have called free_swap_and_cache(). So try_to_unuse() will eventually wait on
the PTL until process 1 has released it after free_swap_and_cache() completes.

Am I missing something? Because that part feels pretty clear to me. It's the
shmem case that I'm struggling to explain.

>
>>
>> That just leaves shmem... I suspected there might be some serialization
>> between shmem_unuse() (called from try_to_unuse()) and the shmem
>> free_swap_and_cache() callsites, but I can't see it. Hmm...
>>
>>> Would performing the overall operation under lock_cluster_or_swap_info
>>> help? Not so sure :(
>>
>> No - that function relies on being able to access the cluster from the
>> array in the swap_info and lock it. And I think that array has the same
>> lifetime as swap_map, so same problem. You'd need
>> get_swap_device()/put_swap_device() and a bunch of refactoring for the
>> internals not to take the locks, I guess. I think it's doable, just not
>> sure if necessary...
>
> Agreed.
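
For completeness, the get_swap_device()/put_swap_device() shape I have in mind
is roughly the below. It's only an untested sketch to show the direction, not a
tested patch; I'm assuming the existing internals stay as they are, and I'm
quoting the TTRS_UNMAPPED | TTRS_FULL reclaim flags from memory:

int free_swap_and_cache(swp_entry_t entry)
{
        struct swap_info_struct *p;
        unsigned char count;

        if (non_swap_entry(entry))
                return 1;

        /*
         * Pin the swap device: swapoff() waits for the references taken
         * here to be dropped, so si, swap_map and the cluster array stay
         * alive until put_swap_device().
         */
        p = get_swap_device(entry);
        if (p) {
                count = __swap_entry_free(p, entry);
                if (count == SWAP_HAS_CACHE &&
                    !swap_page_trans_huge_swapped(p, entry))
                        __try_to_reclaim_swap(p, swp_offset(entry),
                                              TTRS_UNMAPPED | TTRS_FULL);
                put_swap_device(p);
        }
        return p != NULL;
}

That would pin the si for the duration without the internals needing to take
any extra locks; whether it's actually needed is the open question, per the
above.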