From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 8 Apr 2024 13:47:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched free_swap_and_cache()
Content-Language: en-GB
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
 Chris Li, Lance Yang
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
 <20240403114032.1162100-3-ryan.roberts@arm.com>
 <051052af-3b56-4290-98d3-fd5a1eb11ce1@redhat.com>
 <4110bb1d-65e5-4cf0-91ad-62749975829d@arm.com>
In-Reply-To: <4110bb1d-65e5-4cf0-91ad-62749975829d@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 08/04/2024 13:07, Ryan Roberts wrote:
> [...]
>>
>> [...]
>>
>>> +
>>> +/**
>>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>>> + * @start_ptep: Page table pointer for the first entry.
>>> + * @max_nr: The maximum number of table entries to consider.
>>> + * @entry: Swap entry recovered from the first table entry.
>>> + *
>>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>>> + * containing swap entries all with consecutive offsets and targeting the same
>>> + * swap type.
>>> + *
>>
>> Likely you should document that any swp pte bits are ignored? ()
>
> Now that I understand what swp pte bits are, I think the simplest thing is to
> just make this function always consider the pte bits by using pte_same() as you
> suggest below? I don't think there is ever a case for ignoring the swp pte bits?
> And then I don't need to do anything special for uffd-wp either (below you
> suggested not doing batching when the VMA has uffd enabled).
>
> Any concerns?
>
>>
>>> + * max_nr must be at least one and must be limited by the caller so scanning
>>> + * cannot exceed a single page table.
>>> + *
>>> + * Return: the number of table entries in the batch.
>>> + */
>>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>>> +                 swp_entry_t entry)
>>> +{
>>> +    const pte_t *end_ptep = start_ptep + max_nr;
>>> +    unsigned long expected_offset = swp_offset(entry) + 1;
>>> +    unsigned int expected_type = swp_type(entry);
>>> +    pte_t *ptep = start_ptep + 1;
>>> +
>>> +    VM_WARN_ON(max_nr < 1);
>>> +    VM_WARN_ON(non_swap_entry(entry));
>>> +
>>> +    while (ptep < end_ptep) {
>>> +        pte_t pte = ptep_get(ptep);
>>> +
>>> +        if (pte_none(pte) || pte_present(pte))
>>> +            break;
>>> +
>>> +        entry = pte_to_swp_entry(pte);
>>> +
>>> +        if (non_swap_entry(entry) ||
>>> +            swp_type(entry) != expected_type ||
>>> +            swp_offset(entry) != expected_offset)
>>> +            break;
>>> +
>>> +        expected_offset++;
>>> +        ptep++;
>>> +    }
>>> +
>>> +    return ptep - start_ptep;
>>> +}
>>
>> Looks very clean :)
>>
>> I was wondering whether we could similarly construct the expected swp PTE and
>> only check pte_same.
>>
>> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
>
> So planning to do this.

Of course this clears all the swp pte bits in expected_pte, so we need to do
something a bit more complex. If we can safely assume all offset bits are
contiguous in every per-arch representation then we can do:

static inline pte_t pte_next_swp_offset(pte_t pte)
{
        pte_t offset_inc = __swp_entry_to_pte(__swp_entry(0, 1));

        return __pte(pte_val(pte) + pte_val(offset_inc));
}

Or if not:

static inline pte_t pte_next_swp_offset(pte_t pte)
{
        swp_entry_t entry = pte_to_swp_entry(pte);
        pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry),
                                                   swp_offset(entry) + 1));

        if (pte_swp_soft_dirty(pte))
                new = pte_swp_mksoft_dirty(new);
        if (pte_swp_exclusive(pte))
                new = pte_swp_mkexclusive(new);
        if (pte_swp_uffd_wp(pte))
                new = pte_swp_mkuffd_wp(new);

        return new;
}

Then swap_pte_batch() becomes:

static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
{
        pte_t expected_pte = pte_next_swp_offset(pte);
        const pte_t *end_ptep = start_ptep + max_nr;
        pte_t *ptep = start_ptep + 1;

        VM_WARN_ON(max_nr < 1);
        VM_WARN_ON(!is_swap_pte(pte));
        VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));

        while (ptep < end_ptep) {
                pte = ptep_get(ptep);

                if (!pte_same(pte, expected_pte))
                        break;

                expected_pte = pte_next_swp_offset(expected_pte);
                ptep++;
        }

        return ptep - start_ptep;
}

Would you be happy with either of these? I'll go look if we can assume the
offset bits are always contiguous.

>
>>
>> ... or have a variant to increase only the swp offset for an existing pte. But
>> non-trivial due to the arch-dependent format.
>
> not this - I agree this will be difficult due to per-arch changes. I'd rather
> just do the generic version and leave the compiler to do the best it can to
> simplify and optimize.
>
>>
>> But then, we'd fail on mismatch of other swp pte bits.
>>
>>
>> On swapin, when reusing this function (likely!), we'll need to make sure that
>> the PTE bits match as well.
>>
>> See below regarding uffd-wp.
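
Right - and with the pte_same() approach, a mismatch in any of the other swp
pte bits simply terminates the batch, which is what the uffd-wp discussion
below needs: a batch can never span a change in the uffd-wp state. To spell
that out, a throwaway user-space sketch (the toy_* helpers and the bit layout
are made up purely for illustration, not any real arch's encoding):

/*
 * Toy model: made-up swp pte encoding with one software "uffd-wp" bit, a
 * type field and the offset in the high bits. Not kernel code.
 */
#include <assert.h>
#include <stdint.h>

#define TOY_UFFD_WP             (1ull << 0)
#define TOY_TYPE_SHIFT          1
#define TOY_OFFSET_SHIFT        6

static uint64_t toy_swp_pte(uint64_t type, uint64_t offset, int uffd_wp)
{
        return (offset << TOY_OFFSET_SHIFT) | (type << TOY_TYPE_SHIFT) |
               (uffd_wp ? TOY_UFFD_WP : 0);
}

/* Model of pte_next_swp_offset(): bump the offset, keep type + sw bits. */
static uint64_t toy_next_swp_offset(uint64_t pte)
{
        return pte + (1ull << TOY_OFFSET_SHIFT);
}

/* Model of the pte_same()-based swap_pte_batch(). */
static int toy_swap_pte_batch(const uint64_t *ptep, int max_nr)
{
        uint64_t expected = toy_next_swp_offset(ptep[0]);
        int nr = 1;

        while (nr < max_nr && ptep[nr] == expected) {
                expected = toy_next_swp_offset(expected);
                nr++;
        }
        return nr;
}

int main(void)
{
        /* Same type, consecutive offsets, uffd-wp set only on the last two. */
        uint64_t ptes[4] = {
                toy_swp_pte(3, 100, 0), toy_swp_pte(3, 101, 0),
                toy_swp_pte(3, 102, 1), toy_swp_pte(3, 103, 1),
        };

        /* The batch stops at the uffd-wp transition... */
        assert(toy_swap_pte_batch(ptes, 4) == 2);
        /* ...and the remainder forms its own (uniform) batch. */
        assert(toy_swap_pte_batch(ptes + 2, 2) == 2);
        return 0;
}

So the worst case for a uffd-wp transition is just a shorter batch, never a
lost or extra marker.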
>>
>>
>>>   #endif /* CONFIG_MMU */
>>>     void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>> index 1f77a51baaac..070bedb4996e 100644
>>> --- a/mm/madvise.c
>>> +++ b/mm/madvise.c
>>> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned
>>> long addr,
>>>       struct folio *folio;
>>>       int nr_swap = 0;
>>>       unsigned long next;
>>> +    int nr, max_nr;
>>>         next = pmd_addr_end(addr, end);
>>>       if (pmd_trans_huge(*pmd))
>>> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned
>>> long addr,
>>>           return 0;
>>>       flush_tlb_batched_pending(mm);
>>>       arch_enter_lazy_mmu_mode();
>>> -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>> +    for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
>>> +        nr = 1;
>>>           ptent = ptep_get(pte);
>>>             if (pte_none(ptent))
>>> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned
>>> long addr,
>>>                 entry = pte_to_swp_entry(ptent);
>>>               if (!non_swap_entry(entry)) {
>>> -                nr_swap--;
>>> -                free_swap_and_cache(entry);
>>> -                pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>> +                max_nr = (end - addr) / PAGE_SIZE;
>>> +                nr = swap_pte_batch(pte, max_nr, entry);
>>> +                nr_swap -= nr;
>>> +                free_swap_and_cache_nr(entry, nr);
>>> +                clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>>               } else if (is_hwpoison_entry(entry) ||
>>>                      is_poisoned_swp_entry(entry)) {
>>>                   pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 7dc6c3d9fa83..ef2968894718 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather
>>> *tlb,
>>>                   folio_remove_rmap_pte(folio, page, vma);
>>>               folio_put(folio);
>>>           } else if (!non_swap_entry(entry)) {
>>> -            /* Genuine swap entry, hence a private anon page */
>>> +            max_nr = (end - addr) / PAGE_SIZE;
>>> +            nr = swap_pte_batch(pte, max_nr, entry);
>>> +            /* Genuine swap entries, hence a private anon pages */
>>>               if (!should_zap_cows(details))
>>>                   continue;
>>> -            rss[MM_SWAPENTS]--;
>>> -            if (unlikely(!free_swap_and_cache(entry)))
>>> -                print_bad_pte(vma, addr, ptent, NULL);
>>> +            rss[MM_SWAPENTS] -= nr;
>>> +            free_swap_and_cache_nr(entry, nr);
>>>           } else if (is_migration_entry(entry)) {
>>>               folio = pfn_swap_entry_folio(entry);
>>>               if (!should_zap_folio(details, folio))
>>> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>               pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>>>               WARN_ON_ONCE(1);
>>>           }
>>> -        pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>> -        zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
>>> +        clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>
>> For zap_install_uffd_wp_if_needed(), the uffd-wp bit has to match.
>>
>> zap_install_uffd_wp_if_needed() will use the uffd-wp information in
>> ptent->pteval to make a decision whether to place PTE_MARKER_UFFD_WP markers.
>>
>> On mixture, you either lose some or place too many markers.
>>
>> A simple workaround would be to disable any such batching if the VMA does have
>> uffd-wp enabled.
>
> Rather than this, I'll just consider all the swp pte bits when batching.
>
>>
>>> +        zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
>>>       } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
>
> [...]
>
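
P.S. On the "are the offset bits always contiguous?" question above: that is
the property the add-based pte_next_swp_offset() variant relies on. Another
throwaway user-space sketch with a made-up layout (again, not a real
architecture's encoding), just to make the assumption explicit:

#include <assert.h>
#include <stdint.h>

/* Made-up layout: 5 type bits at the bottom, offset in one contiguous field. */
#define TOY_TYPE_BITS   5

static uint64_t toy_swp_pte(uint64_t type, uint64_t offset)
{
        return (offset << TOY_TYPE_BITS) | type;
}

int main(void)
{
        uint64_t pte = toy_swp_pte(3, 100);
        uint64_t offset_inc = toy_swp_pte(0, 1);

        /*
         * Because the offset occupies a single contiguous run of bits (and we
         * never increment past its top), adding the encoding of offset == 1
         * is identical to decoding and re-encoding with offset + 1.
         */
        assert(pte + offset_inc == toy_swp_pte(3, 101));
        return 0;
}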