References: <20230821160849.531668-1-david@redhat.com> <20230821160849.531668-2-david@redhat.com>
In-Reply-To: <20230821160849.531668-2-david@redhat.com>
From: Yosry Ahmed
Date: Wed, 23 Aug 2023 08:12:05 -0700
Subject: Re: [PATCH mm-unstable v1 1/4] mm/swap: stop using page->private on tail pages for THP_SWAP
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, Andrew Morton, Matthew Wilcox,
 Peter Xu, Catalin Marinas, Will Deacon, Hugh Dickins, Seth Jennings,
 Dan Streetman, Vitaly Wool
Content-Type: text/plain; charset="UTF-8"
On Mon, Aug 21, 2023 at 9:09 AM David Hildenbrand wrote:
>
> Let's stop using page->private on tail pages, making it possible to
> just unconditionally reuse that field in the tail pages of large folios.
>
> The remaining usage of the private field for THP_SWAP is in the THP
> splitting code (mm/huge_memory.c), that we'll handle separately later.
>
> Update the THP_SWAP documentation and sanity checks in mm_types.h and
> __split_huge_page_tail().
>
> Signed-off-by: David Hildenbrand

The mm part looks good to me (with the added fixup):

Reviewed-by: Yosry Ahmed

Minor nit below, not worth a respin, but perhaps if you respin anyway
for something else.

> ---
>  arch/arm64/mm/mteswap.c  |  5 +++--
>  include/linux/mm_types.h | 12 +-----------
>  include/linux/swap.h     |  9 +++++++++
>  mm/huge_memory.c         | 15 ++++++---------
>  mm/memory.c              |  2 +-
>  mm/rmap.c                |  2 +-
>  mm/swap_state.c          |  5 +++--
>  mm/swapfile.c            |  4 ++--
>  8 files changed, 26 insertions(+), 28 deletions(-)
>
> diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
> index cd508ba80ab1..a31833e3ddc5 100644
> --- a/arch/arm64/mm/mteswap.c
> +++ b/arch/arm64/mm/mteswap.c
> @@ -33,8 +33,9 @@ int mte_save_tags(struct page *page)
>
>         mte_save_page_tags(page_address(page), tag_storage);
>
> -       /* page_private contains the swap entry.val set in do_swap_page */
> -       ret = xa_store(&mte_pages, page_private(page), tag_storage, GFP_KERNEL);
> +       /* lookup the swap entry.val from the page */
> +       ret = xa_store(&mte_pages, page_swap_entry(page).val, tag_storage,
> +                      GFP_KERNEL);
>         if (WARN(xa_is_err(ret), "Failed to store MTE tags")) {
>                 mte_free_tag_storage(tag_storage);
>                 return xa_err(ret);
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index b9b6c88875b9..61361f1750c3 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -333,11 +333,8 @@ struct folio {
>                         atomic_t _pincount;
>  #ifdef CONFIG_64BIT
>                         unsigned int _folio_nr_pages;
> -                       /* 4 byte gap here */
> -                       /* private: the union with struct page is transitional */
> -                       /* Fix THP_SWAP to not use tail->private */
> -                       unsigned long _private_1;
>  #endif
> +                       /* private: the union with struct page is transitional */
>                 };
>                 struct page __page_1;
>         };
> @@ -358,9 +355,6 @@ struct folio {
>                         /* public: */
>                         struct list_head _deferred_list;
>                         /* private: the union with struct page is transitional */
> -                       unsigned long _avail_2a;
> -                       /* Fix THP_SWAP to not use tail->private */
> -                       unsigned long _private_2a;
>                 };
>                 struct page __page_2;
>         };
> @@ -385,9 +379,6 @@ FOLIO_MATCH(memcg_data, memcg_data);
>                         offsetof(struct page, pg) + sizeof(struct page))
>  FOLIO_MATCH(flags, _flags_1);
>  FOLIO_MATCH(compound_head, _head_1);
> -#ifdef CONFIG_64BIT
> -FOLIO_MATCH(private, _private_1);
> -#endif
>  #undef FOLIO_MATCH
>  #define FOLIO_MATCH(pg, fl)                                             \
>         static_assert(offsetof(struct folio, fl) ==                     \
> @@ -396,7 +387,6 @@ FOLIO_MATCH(flags, _flags_2);
>  FOLIO_MATCH(compound_head, _head_2);
>  FOLIO_MATCH(flags, _flags_2a);
>  FOLIO_MATCH(compound_head, _head_2a);
> -FOLIO_MATCH(private, _private_2a);
>  #undef FOLIO_MATCH
>
>  /**
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index bb5adc604144..84fe0e94f5cd 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -339,6 +339,15 @@ static inline swp_entry_t folio_swap_entry(struct folio *folio)
>         return entry;
>  }
>
> +static inline swp_entry_t page_swap_entry(struct page *page)
> +{
> +       struct folio *folio = page_folio(page);
> +       swp_entry_t entry = folio_swap_entry(folio);
> +
> +       entry.val += page - &folio->page;
> +       return entry;
> +}
> +
>  static inline void folio_set_swap_entry(struct folio *folio, swp_entry_t entry)
>  {
>         folio->private = (void *)entry.val;
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index cc2f65f8cc62..c04702ae71d2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2446,18 +2446,15 @@ static void __split_huge_page_tail(struct page *head, int tail,
>         page_tail->index = head->index + tail;
>
>         /*
> -        * page->private should not be set in tail pages with the exception
> -        * of swap cache pages that store the swp_entry_t in tail pages.
> -        * Fix up and warn once if private is unexpectedly set.
> -        *
> -        * What of 32-bit systems, on which folio->_pincount overlays
> -        * head[1].private?  No problem: THP_SWAP is not enabled on 32-bit, and
> -        * pincount must be 0 for folio_ref_freeze() to have succeeded.
> +        * page->private should not be set in tail pages. Fix up and warn once
> +        * if private is unexpectedly set.
>          */
> -       if (!folio_test_swapcache(page_folio(head))) {
> -               VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, page_tail);
> +       if (unlikely(page_tail->private)) {
> +               VM_WARN_ON_ONCE_PAGE(true, page_tail);
>                 page_tail->private = 0;
>         }

Could probably save a couple of lines here:

if (VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, page_tail))
        page_tail->private = 0;

> +       if (PageSwapCache(head))
> +               set_page_private(page_tail, (unsigned long)head->private + tail);
>
>         /* Page flags must be visible before we make the page non-compound. */
>         smp_wmb();
> diff --git a/mm/memory.c b/mm/memory.c
> index d003076b218d..ff13242c1589 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3882,7 +3882,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>          * changed.
>          */
>         if (unlikely(!folio_test_swapcache(folio) ||
> -                    page_private(page) != entry.val))
> +                    page_swap_entry(page).val != entry.val))
>                 goto out_page;
>
>         /*
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 1f04debdc87a..ec7f8e6c9e48 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1647,7 +1647,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>                          */
>                         dec_mm_counter(mm, mm_counter(&folio->page));
>                 } else if (folio_test_anon(folio)) {
> -                       swp_entry_t entry = { .val = page_private(subpage) };
> +                       swp_entry_t entry = page_swap_entry(subpage);
>                         pte_t swp_pte;
>                         /*
>                          * Store the swap location in the pte.
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 01f15139b7d9..2f2417810052 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -100,6 +100,7 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
>
>         folio_ref_add(folio, nr);
>         folio_set_swapcache(folio);
> +       folio_set_swap_entry(folio, entry);
>
>         do {
>                 xas_lock_irq(&xas);
> @@ -113,7 +114,6 @@ int add_to_swap_cache(struct folio *folio, swp_entry_t entry,
>                         if (shadowp)
>                                 *shadowp = old;
>                 }
> -               set_page_private(folio_page(folio, i), entry.val + i);
>                 xas_store(&xas, folio);
>                 xas_next(&xas);
>         }
> @@ -154,9 +154,10 @@ void __delete_from_swap_cache(struct folio *folio,
>         for (i = 0; i < nr; i++) {
>                 void *entry = xas_store(&xas, shadow);
>                 VM_BUG_ON_PAGE(entry != folio, entry);
> -               set_page_private(folio_page(folio, i), 0);
>                 xas_next(&xas);
>         }
> +       entry.val = 0;
> +       folio_set_swap_entry(folio, entry);
>         folio_clear_swapcache(folio);
>         address_space->nrpages -= nr;
>         __node_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index d46933adf789..bd9d904671b9 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3369,7 +3369,7 @@ struct swap_info_struct *swp_swap_info(swp_entry_t entry)
>
>  struct swap_info_struct *page_swap_info(struct page *page)
>  {
> -       swp_entry_t entry = { .val = page_private(page) };
> +       swp_entry_t entry = page_swap_entry(page);
>         return swp_swap_info(entry);
>  }
>
> @@ -3384,7 +3384,7 @@ EXPORT_SYMBOL_GPL(swapcache_mapping);
>
>  pgoff_t __page_file_index(struct page *page)
>  {
> -       swp_entry_t swap = { .val = page_private(page) };
> +       swp_entry_t swap = page_swap_entry(page);
>         return swp_offset(swap);
>  }
>  EXPORT_SYMBOL_GPL(__page_file_index);
> --
> 2.41.0
>
>