From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 8 Apr 2024 13:07:42 +0100
Subject: Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
 Chris Li, Lance Yang
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <4110bb1d-65e5-4cf0-91ad-62749975829d@arm.com>
In-Reply-To: <051052af-3b56-4290-98d3-fd5a1eb11ce1@redhat.com>
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
 <20240403114032.1162100-3-ryan.roberts@arm.com>
 <051052af-3b56-4290-98d3-fd5a1eb11ce1@redhat.com>

[...]

>
> [...]
>
>> +
>> +/**
>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>> + * @start_ptep: Page table pointer for the first entry.
>> + * @max_nr: The maximum number of table entries to consider.
>> + * @entry: Swap entry recovered from the first table entry.
>> + *
>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>> + * containing swap entries all with consecutive offsets and targeting the same
>> + * swap type.
>> + *
>
> Likely you should document that any swp pte bits are ignored?

Now that I understand what swp pte bits are, I think the simplest thing is to
just make this function always consider the pte bits by using pte_same() as
you suggest below? I don't think there is ever a case for ignoring the swp
pte bits? And then I don't need to do anything special for uffd-wp either
(below you suggested not doing batching when the VMA has uffd enabled). Any
concerns?

>
>> + * max_nr must be at least one and must be limited by the caller so scanning
>> + * cannot exceed a single page table.
>> + *
>> + * Return: the number of table entries in the batch.
>> + */
>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>> +                 swp_entry_t entry)
>> +{
>> +    const pte_t *end_ptep = start_ptep + max_nr;
>> +    unsigned long expected_offset = swp_offset(entry) + 1;
>> +    unsigned int expected_type = swp_type(entry);
>> +    pte_t *ptep = start_ptep + 1;
>> +
>> +    VM_WARN_ON(max_nr < 1);
>> +    VM_WARN_ON(non_swap_entry(entry));
>> +
>> +    while (ptep < end_ptep) {
>> +        pte_t pte = ptep_get(ptep);
>> +
>> +        if (pte_none(pte) || pte_present(pte))
>> +            break;
>> +
>> +        entry = pte_to_swp_entry(pte);
>> +
>> +        if (non_swap_entry(entry) ||
>> +            swp_type(entry) != expected_type ||
>> +            swp_offset(entry) != expected_offset)
>> +            break;
>> +
>> +        expected_offset++;
>> +        ptep++;
>> +    }
>> +
>> +    return ptep - start_ptep;
>> +}
>
> Looks very clean :)
>
> I was wondering whether we could similarly construct the expected swp PTE and
> only check pte_same.
>
> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));

So planning to do this.
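To check we mean the same thing, the below (completely untested) sketch is
roughly what I have in mind. Note pte_next_swp_offset() is a made-up helper
name, and the function would now take the original pte rather than the
swp_entry_t so the swp pte bits are visible to it:

static inline pte_t pte_next_swp_offset(pte_t pte)
{
	swp_entry_t entry = pte_to_swp_entry(pte);
	pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
						   swp_offset(entry) + 1));

	/* Carry over the swp pte bits so that pte_same() can match them. */
	if (pte_swp_soft_dirty(pte))
		new = pte_swp_mksoft_dirty(new);
	if (pte_swp_exclusive(pte))
		new = pte_swp_mkexclusive(new);
	if (pte_swp_uffd_wp(pte))
		new = pte_swp_mkuffd_wp(new);

	return new;
}

static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
{
	pte_t expected_pte = pte_next_swp_offset(pte);
	const pte_t *end_ptep = start_ptep + max_nr;
	pte_t *ptep = start_ptep + 1;

	VM_WARN_ON(max_nr < 1);
	VM_WARN_ON(!is_swap_pte(pte));
	VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));

	while (ptep < end_ptep) {
		pte = ptep_get(ptep);

		/*
		 * pte_same() requires type, offset and all swp pte bits to
		 * match; a none or present pte can never compare equal to a
		 * swap pte, so the explicit checks fall away.
		 */
		if (!pte_same(pte, expected_pte))
			break;

		expected_pte = pte_next_swp_offset(expected_pte);
		ptep++;
	}

	return ptep - start_ptep;
}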
>
> ... or have a variant to increase only the swp offset for an existing pte. But
> non-trivial due to the arch-dependent format.

Not this - I agree this will be difficult due to per-arch changes. I'd rather
just do the generic version and leave the compiler to do the best it can to
simplify and optimize.

>
> But then, we'd fail on mismatch of other swp pte bits.
>
> On swapin, when reusing this function (likely!), we might want to make sure
> that the PTE bits match as well.
>
> See below regarding uffd-wp.
>
>>   #endif /* CONFIG_MMU */
>>
>>   void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 1f77a51baaac..070bedb4996e 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>       struct folio *folio;
>>       int nr_swap = 0;
>>       unsigned long next;
>> +    int nr, max_nr;
>>
>>       next = pmd_addr_end(addr, end);
>>       if (pmd_trans_huge(*pmd))
>> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>           return 0;
>>       flush_tlb_batched_pending(mm);
>>       arch_enter_lazy_mmu_mode();
>> -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>> +    for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
>> +        nr = 1;
>>           ptent = ptep_get(pte);
>>
>>           if (pte_none(ptent))
>> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>
>>               entry = pte_to_swp_entry(ptent);
>>               if (!non_swap_entry(entry)) {
>> -                nr_swap--;
>> -                free_swap_and_cache(entry);
>> -                pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>> +                max_nr = (end - addr) / PAGE_SIZE;
>> +                nr = swap_pte_batch(pte, max_nr, entry);
>> +                nr_swap -= nr;
>> +                free_swap_and_cache_nr(entry, nr);
>> +                clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>               } else if (is_hwpoison_entry(entry) ||
>>                      is_poisoned_swp_entry(entry)) {
>>                   pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 7dc6c3d9fa83..ef2968894718 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>                   folio_remove_rmap_pte(folio, page, vma);
>>               folio_put(folio);
>>           } else if (!non_swap_entry(entry)) {
>> -            /* Genuine swap entry, hence a private anon page */
>> +            max_nr = (end - addr) / PAGE_SIZE;
>> +            nr = swap_pte_batch(pte, max_nr, entry);
>> +            /* Genuine swap entries, hence private anon pages */
>>               if (!should_zap_cows(details))
>>                   continue;
>> -            rss[MM_SWAPENTS]--;
>> -            if (unlikely(!free_swap_and_cache(entry)))
>> -                print_bad_pte(vma, addr, ptent, NULL);
>> +            rss[MM_SWAPENTS] -= nr;
>> +            free_swap_and_cache_nr(entry, nr);
>>           } else if (is_migration_entry(entry)) {
>>               folio = pfn_swap_entry_folio(entry);
>>               if (!should_zap_folio(details, folio))
>> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>               pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>>               WARN_ON_ONCE(1);
>>           }
>> -        pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>> -        zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
>> +        clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>
> For zap_install_uffd_wp_if_needed(), the uffd-wp bit has to match.
>
> zap_install_uffd_wp_if_needed() will use the uffd-wp information in
> ptent->pteval to make a decision whether to place PTE_MARKER_UFFD_WP markers.
>
> On mixture, you either lose some or place too many markers.
>
> A simple workaround would be to disable any such batching if the VMA does have
> uffd-wp enabled.

Rather than this, I'll just consider all the swp pte bits when batching; see
the worked example at the end of this mail.

>
>> +        zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
>>       } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);

[...]
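PS: to spell out why matching all the swp pte bits sorts out the uffd-wp
case, here is a worked example with a hypothetical pte layout, run through
the pte_same()-based sketch above:

  pte[0]: swap entry (type 0, offset 100), swp uffd-wp bit set
  pte[1]: swap entry (type 0, offset 101), swp uffd-wp bit set
  pte[2]: swap entry (type 0, offset 102), swp uffd-wp bit clear

expected_pte after pte[0] has offset 101 with uffd-wp set, so pte[1] matches
via pte_same(). The next expected_pte has offset 102 with uffd-wp still set,
which differs from pte[2], so the batch stops at nr = 2. Every pte in a batch
therefore agrees with ptent's uffd-wp state, and
zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent) neither
loses markers nor places too many.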