Message-ID: <3acd2e94-7ae4-4272-8e43-b496c0d26e55@arm.com>
Date: Mon, 11 Dec 2023 16:15:21 +0000
Subject: Re: [PATCH v1 02/39] mm/rmap: introduce and use hugetlb_remove_rmap()
From: Ryan Roberts
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
References: <20231211155652.131054-1-david@redhat.com> <20231211155652.131054-3-david@redhat.com>
In-Reply-To: <20231211155652.131054-3-david@redhat.com>
Content-Type: text/plain; charset=UTF-8
On 11/12/2023 15:56, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code. For
> example, hugetlb currently only supports entire mappings, and treats any
> mapping as mapped using a single "logical PTE". Let's move it out of the
> way so we can overhaul our "ordinary" rmap implementation/interface.
>
> Let's introduce and use hugetlb_remove_rmap() and remove the hugetlb
> code from page_remove_rmap().
> This effectively removes one check on the small-folio path as well.
>
> Note: all possible candidates that need care are page_remove_rmap() calls
> that pass compound=true.
>
> Reviewed-by: Yin Fengwei
> Signed-off-by: David Hildenbrand

Reviewed-by: Ryan Roberts

> ---
>  include/linux/rmap.h |  5 +++++
>  mm/hugetlb.c         |  4 ++--
>  mm/rmap.c            | 17 ++++++++---------
>  3 files changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 0bfea866f39b..d85bd1d4de04 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -213,6 +213,11 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
>  void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
>  		unsigned long address);
>  
> +static inline void hugetlb_remove_rmap(struct folio *folio)
> +{
> +	atomic_dec(&folio->_entire_mapcount);
> +}
> +
>  static inline void __page_dup_rmap(struct page *page, bool compound)
>  {
>  	if (compound) {
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 305f3ca1dee6..ef48ae673890 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5676,7 +5676,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  					make_pte_marker(PTE_MARKER_UFFD_WP),
>  					sz);
>  		hugetlb_count_sub(pages_per_huge_page(h), mm);
> -		page_remove_rmap(page, vma, true);
> +		hugetlb_remove_rmap(page_folio(page));
>  
>  		spin_unlock(ptl);
>  		tlb_remove_page_size(tlb, page, huge_page_size(h));
> @@ -5987,7 +5987,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  
>  	/* Break COW or unshare */
>  	huge_ptep_clear_flush(vma, haddr, ptep);
> -	page_remove_rmap(&old_folio->page, vma, true);
> +	hugetlb_remove_rmap(old_folio);
>  	hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
>  	if (huge_pte_uffd_wp(pte))
>  		newpte = huge_pte_mkuffd_wp(newpte);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 80d42c31281a..4e60c1f38eaa 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1482,13 +1482,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>  
>  	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>  
> -	/* Hugetlb pages are not counted in NR_*MAPPED */
> -	if (unlikely(folio_test_hugetlb(folio))) {
> -		/* hugetlb pages are always mapped with pmds */
> -		atomic_dec(&folio->_entire_mapcount);
> -		return;
> -	}
> -
>  	/* Is page being unmapped by PTE? Is this its last map to be removed? */
>  	if (likely(!compound)) {
>  		last = atomic_add_negative(-1, &page->_mapcount);
> @@ -1846,7 +1839,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			dec_mm_counter(mm, mm_counter_file(&folio->page));
>  	}
>  discard:
> -	page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> +	if (unlikely(folio_test_hugetlb(folio)))
> +		hugetlb_remove_rmap(folio);
> +	else
> +		page_remove_rmap(subpage, vma, false);
>  	if (vma->vm_flags & VM_LOCKED)
>  		mlock_drain_local();
>  	folio_put(folio);
> @@ -2199,7 +2195,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  		 */
>  	}
>  
> -	page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> +	if (unlikely(folio_test_hugetlb(folio)))
> +		hugetlb_remove_rmap(folio);
> +	else
> +		page_remove_rmap(subpage, vma, false);
>  	if (vma->vm_flags & VM_LOCKED)
>  		mlock_drain_local();
>  	folio_put(folio);