Subject: Re: [PATCH RFC 15/39] mm/huge_memory: batch rmap operations in
 __split_huge_pmd_locked()
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)",
 Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
Date: Tue, 5 Dec 2023 12:22:37 +0000
Message-ID: <3c03d009-6a46-4321-a38b-9707b4558618@arm.com>
References: <20231204142146.91437-1-david@redhat.com>
 <20231204142146.91437-16-david@redhat.com>
In-Reply-To: <20231204142146.91437-16-david@redhat.com>

On 04/12/2023 14:21, David Hildenbrand wrote:
> Let's use folio_add_anon_rmap_ptes(), batching the rmap operations.
>
> While at it, use more folio operations (but only in the code branch we're
> touching), use VM_WARN_ON_FOLIO(), and pass RMAP_COMPOUND instead of

You mean RMAP_EXCLUSIVE?

> manually setting PageAnonExclusive.
>
> We should never see non-anon pages on that branch: otherwise, the
> existing page_add_anon_rmap() call would have been flawed already.
>
> Signed-off-by: David Hildenbrand
> ---
>  mm/huge_memory.c | 23 +++++++++++++++--------
>  1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index cb33c6e0404cf..2c037ab3f4916 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2099,6 +2099,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		unsigned long haddr, bool freeze)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> +	struct folio *folio;
>  	struct page *page;
>  	pgtable_t pgtable;
>  	pmd_t old_pmd, _pmd;
> @@ -2194,16 +2195,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		uffd_wp = pmd_swp_uffd_wp(old_pmd);
>  	} else {
>  		page = pmd_page(old_pmd);
> +		folio = page_folio(page);
>  		if (pmd_dirty(old_pmd)) {
>  			dirty = true;
> -			SetPageDirty(page);
> +			folio_set_dirty(folio);
>  		}
>  		write = pmd_write(old_pmd);
>  		young = pmd_young(old_pmd);
>  		soft_dirty = pmd_soft_dirty(old_pmd);
>  		uffd_wp = pmd_uffd_wp(old_pmd);
>
> -		VM_BUG_ON_PAGE(!page_count(page), page);
> +		VM_WARN_ON_FOLIO(!folio_ref_count(folio), folio);
> +		VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
>
>  		/*
>  		 * Without "freeze", we'll simply split the PMD, propagating the
> @@ -2220,11 +2223,18 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		 *
>  		 * See page_try_share_anon_rmap(): invalidate PMD first.
>  		 */
> -		anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
> +		anon_exclusive = PageAnonExclusive(page);
>  		if (freeze && anon_exclusive && page_try_share_anon_rmap(page))
>  			freeze = false;
> -		if (!freeze)
> -			page_ref_add(page, HPAGE_PMD_NR - 1);
> +		if (!freeze) {
> +			rmap_t rmap_flags = RMAP_NONE;
> +
> +			folio_ref_add(folio, HPAGE_PMD_NR - 1);
> +			if (anon_exclusive)
> +				rmap_flags = RMAP_EXCLUSIVE;

nit: I'd be inclined to make this |= since you're accumulating optional
flags (see the sketch below the diff). Yes, it's the only one so it still
works as is...

> +			folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
> +						 vma, haddr, rmap_flags);
> +		}
>  	}
>
>  	/*
> @@ -2267,8 +2277,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		entry = mk_pte(page + i, READ_ONCE(vma->vm_page_prot));
>  		if (write)
>  			entry = pte_mkwrite(entry, vma);
> -		if (anon_exclusive)
> -			SetPageAnonExclusive(page + i);
>  		if (!young)
>  			entry = pte_mkold(entry);
>  		/* NOTE: this may set soft-dirty too on some archs */
> @@ -2278,7 +2286,6 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		entry = pte_mksoft_dirty(entry);
>  		if (uffd_wp)
>  			entry = pte_mkuffd_wp(entry);
> -		page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
>  	}
>  	VM_BUG_ON(!pte_none(ptep_get(pte)));
>  	set_pte_at(mm, addr, pte, entry);
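
To illustrate the nit above, here is a minimal sketch of the accumulation
pattern I have in mind. RMAP_SOMETHING and some_condition are made up for
illustration; RMAP_EXCLUSIVE is the only optional flag in this series:

	rmap_t rmap_flags = RMAP_NONE;

	folio_ref_add(folio, HPAGE_PMD_NR - 1);
	if (anon_exclusive)
		rmap_flags |= RMAP_EXCLUSIVE;	/* accumulate, don't assign */
	if (some_condition)
		rmap_flags |= RMAP_SOMETHING;	/* hypothetical later flag composes for free */
	folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
				 vma, haddr, rmap_flags);

No functional change today; it just avoids a subtle bug if a second
optional flag is added later.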
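
And for anyone skimming the series: as I read it, the batching described in
the commit message boils down to replacing the per-PTE rmap call inside the
split loop with a single call covering all HPAGE_PMD_NR pages, roughly (a
sketch of the shape of the change, not the literal code):

	/* Before: once per subpage, with manual exclusive marking. */
	for (i = 0; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
		if (anon_exclusive)
			SetPageAnonExclusive(page + i);
		page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
	}

	/* After: one batched call up front; passing RMAP_EXCLUSIVE replaces
	 * the manual SetPageAnonExclusive() on each subpage. */
	folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
				 vma, haddr, rmap_flags);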