Date: Fri, 11 Mar 2022 21:02:11 +0000
From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Hugh Dickins,
	Linus Torvalds, David Rientjes, Shakeel Butt, John Hubbard,
	Jason Gunthorpe, Mike Kravetz, Mike Rapoport, Yang Shi,
	Kirill A. Shutemov, Vlastimil Babka, Jann Horn, Michal Hocko,
	Nadav Amit, Rik van Riel, Roman Gushchin, Andrea Arcangeli,
	Peter Xu, Donald Dutile, Christoph Hellwig, Oleg Nesterov,
	Jan Kara, Liang Zhang, Pedro Gomes, Oded Gabbay,
	linux-mm@kvack.org
Subject: Re: [PATCH v1 10/15] mm/page-flags: reuse PG_slab as PG_anon_exclusive for PageAnon() pages
References: <20220308141437.144919-1-david@redhat.com>
 <20220308141437.144919-11-david@redhat.com>

On Fri, Mar 11, 2022 at 07:46:39PM +0100, David Hildenbrand wrote:
> I'm currently testing with the following. My tests with a swapfile on
> all kinds of weird filesystems (excluding networking fs, though) have
> revealed no surprises so far:

I like this a lot better than reusing PG_swap.  Thanks!  I'm somewhat
reluctant to introduce a new flag that can be set on tail pages,
though.  Do we lose much if it's always set only on the head page?

> +++ b/include/linux/page-flags.h
> @@ -142,6 +142,60 @@ enum pageflags {
>  
>  	PG_readahead = PG_reclaim,
>  
> +	/*
> +	 * Depending on the way an anonymous folio can be mapped into a page
> +	 * table (e.g., single PMD/PUD/CONT of the head page vs. PTE-mapped
> +	 * THP), PG_anon_exclusive may be set only for the head page or for
> +	 * subpages of an anonymous folio.
> +	 *
> +	 * PG_anon_exclusive is *usually* only expressive in combination with
> +	 * a page table entry. Depending on the page table entry type it might
> +	 * store the following information:
> +	 *
> +	 *   Is what's mapped via this page table entry exclusive to the
> +	 *   single process and can be mapped writable without further
> +	 *   checks? If not, it might be shared and we might have to COW.
> +	 *
> +	 * For now, we only expect PTE-mapped THPs to make use of
> +	 * PG_anon_exclusive in subpages. For other anonymous compound
> +	 * folios (i.e., hugetlb), only the head page is logically mapped and
> +	 * holds this information.
> +	 *
> +	 * For example, an exclusive, PMD-mapped THP only has
> +	 * PG_anon_exclusive set on the head page. When replacing the PMD by
> +	 * a page table full of PTEs, PG_anon_exclusive, if set on the head
> +	 * page, will be set on all tail pages accordingly. Note that
> +	 * converting from a PTE mapping to a PMD mapping using the same
> +	 * compound page is currently not possible and consequently doesn't
> +	 * require care.
> +	 *
> +	 * If GUP wants to take a reliable pin (FOLL_PIN) on an anonymous
> +	 * page, it should only pin if the relevant PG_anon_exclusive bit is
> +	 * set. In that case, the pin will be fully reliable and stay
> +	 * consistent with the pages mapped into the page table, as the bit
> +	 * cannot get cleared (e.g., by fork(), KSM) while the page is
> +	 * pinned. For anonymous pages that are mapped R/W, PG_anon_exclusive
> +	 * can be assumed to always be set because such pages cannot possibly
> +	 * be shared.
> +	 *
> +	 * The page table lock protecting the page table entry is the primary
> +	 * synchronization mechanism for PG_anon_exclusive; GUP-fast, which
> +	 * does not take the PT lock, needs special care when trying to clear
> +	 * the flag.
> +	 *
> +	 * Page table entry types and PG_anon_exclusive:
> +	 * * Present: PG_anon_exclusive applies.
> +	 * * Swap: the information is lost. PG_anon_exclusive was cleared.
> +	 * * Migration: the entry holds this information instead.
> +	 *              PG_anon_exclusive was cleared.
> +	 * * Device private: PG_anon_exclusive applies.
> +	 * * Device exclusive: PG_anon_exclusive applies.
> +	 * * HW Poison: PG_anon_exclusive is stale and not changed.
> +	 *
> +	 * If the page may be pinned (FOLL_PIN), clearing PG_anon_exclusive
> +	 * is not allowed and the flag will stick around until the page is
> +	 * freed and folio->mapping is cleared.
> +	 */

...

I also don't think this is the right place for this comment.  Not sure
where it should go.

> +static __always_inline void SetPageAnonExclusive(struct page *page)
> +{
> +	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);

hm.  It seems to me we should have a PageAnonNotKsm() which just does

	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
			PAGE_MAPPING_ANON;

because the open-coded !PageAnon(page) || PageKsm(page) is "a bit"
inefficient (it loads page->mapping twice).  OK, that's just a
VM_BUG_ON, but we have other users in real code:

mm/migrate.c:   if (PageAnon(page) && !PageKsm(page))
mm/page_idle.c: need_lock = !PageAnon(page) || PageKsm(page);
mm/rmap.c:      if (!is_locked && (!PageAnon(page) || PageKsm(page))) {
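To make that concrete, here is a minimal, untested sketch of the
helper; it assumes it would live next to PageAnon() in
include/linux/page-flags.h and that it should mirror PageAnon()'s
compound_head() handling:

/* Untested sketch of the PageAnonNotKsm() idea above. */
static __always_inline int PageAnonNotKsm(struct page *page)
{
	/* Match PageAnon()/PageKsm(): the type bits live in the head page. */
	page = compound_head(page);
	/*
	 * Anon pages have PAGE_MAPPING_ANON set in the low bits of
	 * page->mapping; KSM pages carry PAGE_MAPPING_KSM, which is
	 * PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE.  Comparing the masked
	 * bits against PAGE_MAPPING_ANON alone therefore answers
	 * "anon and not KSM" with a single load of page->mapping.
	 */
	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
			PAGE_MAPPING_ANON;
}

The VM_BUG_ON above would then become
VM_BUG_ON_PGFLAGS(!PageAnonNotKsm(page), page), and the three callers
listed would simplify to PageAnonNotKsm(page) or !PageAnonNotKsm(page).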