Date: Wed, 20 Apr 2022 19:10:27 +0200
Subject: Re: [PATCH v2 1/8] mm/swap: remember PG_anon_exclusive via a swp pte bit
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Hugh Dickins, Linus Torvalds, David Rientjes, Shakeel Butt,
    John Hubbard, Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Yang Shi,
    "Kirill A. Shutemov", Matthew Wilcox, Jann Horn, Michal Hocko, Nadav Amit,
    Rik van Riel, Roman Gushchin, Andrea Arcangeli, Peter Xu, Donald Dutile,
    Christoph Hellwig, Oleg Nesterov, Jan Kara, Liang Zhang, Pedro Gomes,
    Oded Gabbay, Catalin Marinas, Will Deacon, Michael Ellerman,
    Benjamin Herrenschmidt, Paul Mackerras, Heiko Carstens, Vasily Gorbik,
    Alexander Gordeev, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    Dave Hansen, Gerald Schaefer, linux-mm@kvack.org, x86@kernel.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    linux-s390@vger.kernel.org
References: <20220329164329.208407-1-david@redhat.com>
 <20220329164329.208407-2-david@redhat.com>
From: Vlastimil Babka
In-Reply-To: <20220329164329.208407-2-david@redhat.com>

On 3/29/22 18:43, David Hildenbrand wrote:
> Currently, we clear PG_anon_exclusive in try_to_unmap() and forget about
> it. We do this to keep fork() logic on swap entries easy and efficient:
> for example, if we wouldn't clear it when unmapping, we'd have to look up
> the page in the swapcache for each and every swap entry during fork() and
> clear PG_anon_exclusive if set.
>
> Instead, we want to store that information directly in the swap pte,
> protected by the page table lock, similarly to how we handle
> SWP_MIGRATION_READ_EXCLUSIVE for migration entries. However, for actual
> swap entries, we don't want to mess with the swap type (e.g., still one
> bit) because it overcomplicates swap code.
>
> In try_to_unmap(), we already refuse to unmap in case the page might be
> pinned, because we must not lose PG_anon_exclusive on pinned pages ever.
> Checking if there are other unexpected references reliably *before*
> completely unmapping a page is unfortunately not really possible: THP
> heavily overcomplicates the situation. Once fully unmapped it's easier --
> we, for example, make sure that there are no unexpected references
> *after* unmapping a page before starting writeback on that page.
>
> So, we currently might end up unmapping a page and clearing
> PG_anon_exclusive if that page has additional references, for example,
> due to a FOLL_GET.
>
> do_swap_page() has to re-determine if a page is exclusive, which will
> easily fail if there are other references on a page, most prominently
> GUP references via FOLL_GET. This can currently result in memory
> corruptions when taking a FOLL_GET | FOLL_WRITE reference on a page even
> when fork() is never involved: try_to_unmap() will succeed, and when
> refaulting the page, it cannot be marked exclusive and will get replaced
> by a copy in the page tables on the next write access, resulting in writes
> via the GUP reference to the page being lost.
>
> In an ideal world, everybody that uses GUP and wants to modify page
> content, such as O_DIRECT, would properly use FOLL_PIN. However, that
> conversion will take a while. It's easier to fix what used to work in the
> past (FOLL_GET | FOLL_WRITE) by remembering PG_anon_exclusive. In addition,
> by remembering PG_anon_exclusive we can further reduce unnecessary COW
> in some cases, so it's the natural thing to do.
>
> So let's transfer the PG_anon_exclusive information to the swap pte and
> store it via an architecture-dependent pte bit; use that information when
> restoring the swap pte in do_swap_page() and unuse_pte(). During fork(), we
> simply have to clear the pte bit and are done.
>
> Of course, there is one corner case to handle: swap backends that don't
> support concurrent page modifications while the page is under writeback.
> Special case these, and drop the exclusive marker. Add a comment why that
> is just fine (also, reuse_swap_page() would have done the same in the
> past).
>
> In the future, we'll hopefully have all architectures support
> __HAVE_ARCH_PTE_SWP_EXCLUSIVE, such that we can get rid of the empty
> stubs and the define completely. Then, we can also convert
> SWP_MIGRATION_READ_EXCLUSIVE. For architectures it's fairly easy to
> support: either simply use a yet unused pte bit that can be used for swap
> entries, steal one from the arch type bits if they exceed 5, or steal one
> from the offset bits.
>
> Note: R/O FOLL_GET references were never really reliable, especially
> when taking one on a shared page and then writing to the page (e.g., GUP
> after fork()). FOLL_GET, including R/W references, was never really
> reliable once fork() was involved (e.g., GUP before fork(),
> GUP during fork()). KSM steps back in case it stumbles over unexpected
> references and is, therefore, fine.
>
> Signed-off-by: David Hildenbrand

With the fixup as reported by Miaohe Lin

Acked-by: Vlastimil Babka

(sent a separate mm-commits mail to inquire about the fix going missing
from mmotm)
https://lore.kernel.org/mm-commits/c3195d8a-2931-0749-973a-1d04e4baec94@suse.cz/T/#m4e98ccae6f747e11f45e4d0726427ba2fef740eb
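
As an aside, for anyone skimming the thread: the per-pte part of the scheme
described above is simple enough to model outside the kernel. A swap pte only
encodes a swap type and offset plus a few software bits, so an otherwise unused
bit can remember "was exclusive", be tested again on refault, and simply be
cleared when the pte is copied at fork() time. Below is a toy, userspace-only
sketch of that idea; the bit layout and helper names are made up for
illustration (the real kernel side hides the arch-specific bit behind
__HAVE_ARCH_PTE_SWP_EXCLUSIVE) and are not taken from the patch itself.

/* Toy model of "remember PG_anon_exclusive via a swp pte bit".
 * NOT kernel code; it only illustrates the bit-stealing idea. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t swp_pte_t;                  /* stand-in for a non-present pte */

#define SWP_TYPE_MASK      0x1fULL           /* toy layout: bits 0-4 = swap type   */
#define SWP_EXCLUSIVE_BIT  (1ULL << 5)       /* an otherwise unused software bit   */
#define SWP_OFFSET_SHIFT   8                 /* toy layout: bits 8+ = swap offset  */

static swp_pte_t swp_entry_to_pte(unsigned type, uint64_t offset)
{
	return (offset << SWP_OFFSET_SHIFT) | (type & SWP_TYPE_MASK);
}

/* loosely analogous to the pte_swp_*exclusive() helpers the series adds */
static swp_pte_t swp_pte_mkexclusive(swp_pte_t pte)     { return pte | SWP_EXCLUSIVE_BIT;  }
static bool      swp_pte_exclusive(swp_pte_t pte)       { return pte & SWP_EXCLUSIVE_BIT;  }
static swp_pte_t swp_pte_clear_exclusive(swp_pte_t pte) { return pte & ~SWP_EXCLUSIVE_BIT; }

int main(void)
{
	/* unmap time: the page was exclusive, remember that in the swap pte */
	swp_pte_t parent = swp_pte_mkexclusive(swp_entry_to_pte(1, 0x1234));

	/* fork(): the child must never treat the page as exclusive, so the
	 * copied pte simply has the bit cleared -- no swapcache lookup needed */
	swp_pte_t child = swp_pte_clear_exclusive(parent);

	/* refault time: the bit says whether the page may be mapped writable
	 * again without re-deriving exclusivity from reference counts */
	printf("parent exclusive: %d\n", swp_pte_exclusive(parent)); /* prints 1 */
	printf("child  exclusive: %d\n", swp_pte_exclusive(child));  /* prints 0 */
	return 0;
}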