Message-ID: <851940cd-64f1-9e59-3de9-b50701a99281@redhat.com>
Date: Tue, 16 May 2023 14:35:23 +0200
Subject: Re: [PATCH 1/3] mm: Move arch_do_swap_page() call to before swap_free()
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Peter Collingbourne, Catalin Marinas
Cc: Qun-wei Lin, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, surenb@google.com, Chinwen Chang,
 kasan-dev@googlegroups.com, Kuan-Ying Lee, Casper Li,
 gregkh@linuxfoundation.org, vincenzo.frascino@arm.com, Alexandru Elisei,
 will@kernel.org, eugenis@google.com, Steven Price, stable@vger.kernel.org
References: <20230512235755.1589034-1-pcc@google.com>
 <20230512235755.1589034-2-pcc@google.com>
 <7471013e-4afb-e445-5985-2441155fc82c@redhat.com>
On 16.05.23 01:40, Peter Collingbourne wrote:
> On Mon, May 15, 2023 at 06:34:30PM +0100, Catalin Marinas wrote:
>> On Sat, May 13, 2023 at 05:29:53AM +0200, David Hildenbrand wrote:
>>> On 13.05.23 01:57, Peter Collingbourne wrote:
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 01a23ad48a04..83268d287ff1 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -3914,19 +3914,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  		}
>>>>  	}
>>>> -	/*
>>>> -	 * Remove the swap entry and conditionally try to free up the swapcache.
>>>> -	 * We're already holding a reference on the page but haven't mapped it
>>>> -	 * yet.
>>>> -	 */
>>>> -	swap_free(entry);
>>>> -	if (should_try_to_free_swap(folio, vma, vmf->flags))
>>>> -		folio_free_swap(folio);
>>>> -
>>>> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>>>> -	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
>>>>  	pte = mk_pte(page, vma->vm_page_prot);
>>>> -
>>>>  	/*
>>>>  	 * Same logic as in do_wp_page(); however, optimize for pages that are
>>>>  	 * certainly not shared either because we just allocated them without
>>>> @@ -3946,8 +3934,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  		pte = pte_mksoft_dirty(pte);
>>>>  	if (pte_swp_uffd_wp(vmf->orig_pte))
>>>>  		pte = pte_mkuffd_wp(pte);
>>>> +	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>>>>  	vmf->orig_pte = pte;
>>>> +	/*
>>>> +	 * Remove the swap entry and conditionally try to free up the swapcache.
>>>> +	 * We're already holding a reference on the page but haven't mapped it
>>>> +	 * yet.
>>>> +	 */
>>>> +	swap_free(entry);
>>>> +	if (should_try_to_free_swap(folio, vma, vmf->flags))
>>>> +		folio_free_swap(folio);
>>>> +
>>>> +	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>>>> +	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
>>>> +
>>>>  	/* ksm created a completely new copy */
>>>>  	if (unlikely(folio != swapcache && swapcache)) {
>>>>  		page_add_new_anon_rmap(page, vma, vmf->address);
>>>> @@ -3959,7 +3960,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>  	VM_BUG_ON(!folio_test_anon(folio) ||
>>>>  		  (pte_write(pte) && !PageAnonExclusive(page)));
>>>>  	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
>>>> -	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>>>>  	folio_unlock(folio);
>>>>  	if (folio != swapcache && swapcache) {
>>>
>>> You are moving the folio_free_swap() call after the folio_ref_count(folio)
>>> == 1 check, which means that such (previously) swapped pages that are
>>> exclusive cannot be detected as exclusive.
>>>
>>> There must be a better way to handle MTE here.
>>>
>>> Where are the tags stored, how is the location identified, and when are
>>> they effectively restored right now?
>>
>> I haven't gone through Peter's patches yet but a pretty good description
>> of the problem is here:
>> https://lore.kernel.org/all/5050805753ac469e8d727c797c2218a9d780d434.camel@mediatek.com/.
>> I couldn't reproduce it with my swap setup but both Qun-wei and Peter
>> triggered it.
>
> In order to reproduce this bug it is necessary for the swap slot cache
> to be disabled, which is unlikely to occur during normal operation.
> I was only able to reproduce the bug by disabling it forcefully with the
> following patch:
>
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 0bec1f705f8e0..25afba16980c7 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -79,7 +79,7 @@ void disable_swap_slots_cache_lock(void)
>
>  static void __reenable_swap_slots_cache(void)
>  {
> -	swap_slot_cache_enabled = has_usable_swap();
> +	swap_slot_cache_enabled = false;
>  }
>
>  void reenable_swap_slots_cache_unlock(void)
>
> With that I can trigger the bug on an MTE-utilizing process by running
> a program that enumerates the process's private anonymous mappings and
> calls process_madvise(MADV_PAGEOUT) on all of them.
>
>> When a tagged page is swapped out, the arm64 code stores the metadata
>> (tags) in a local xarray indexed by the swap pte. When restoring from
>> swap, the arm64 set_pte_at() checks this xarray using the old swap pte
>> and spills the tags onto the new page. Apparently something changed in
>> the kernel recently that causes swap_range_free() to be called before
>> set_pte_at(). The arm64 arch_swap_invalidate_page() frees the metadata
>> from the xarray and the subsequent set_pte_at() won't find it.
>>
>> If we have the page, the metadata can be restored before set_pte_at()
>> and I guess that's what Peter is trying to do (again, I haven't looked
>> at the details yet; leaving it for tomorrow).
>>
>> Is there any other way of handling this? E.g. not release the metadata
>> in arch_swap_invalidate_page() but later in set_pte_at() once it was
>> restored. But then we may leak this metadata if there's no set_pte_at()
>> (the process mapping the swap entry died).
>
> Another problem that I can see with this approach is that it does not
> respect reference counts for swap entries, and it's unclear whether that
> can be done in a non-racy fashion.
>
> Another approach that I considered was to move the hook to swap_readpage()
> as in the patch below (sorry, it only applies to an older version
> of Android's android14-6.1 branch and not mainline, but you get the
> idea). But during a stress test (running the aforementioned program that
> calls process_madvise(MADV_PAGEOUT) in a loop during an Android "monkey"
> test) I discovered the following racy use-after-free that can occur when
> two tasks T1 and T2 concurrently restore the same page:
>
> T1:                  | T2:
> arch_swap_readpage() |
>                      | arch_swap_readpage() -> mte_restore_tags() -> xe_load()
> swap_free()          |
>                      | arch_swap_readpage() -> mte_restore_tags() -> mte_restore_page_tags()
>
> We can avoid it by taking the swap_info_struct::lock spinlock in
> mte_restore_tags(), but it seems like it would lead to lock contention.

Would the idea be to fail swap_readpage() on the one that comes last,
simply retrying to lookup the page?

This might be a naive question, but how does MTE play along with shared
anonymous pages?

-- 
Thanks,

David / dhildenb