From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang,
 Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
 Chris Li, Lance Yang
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 2/6] mm: swap: free_swap_and_cache_nr() as batched
 free_swap_and_cache()
Date: Mon, 8 Apr 2024 14:27:36 +0100
Message-ID: <2cfa542a-ae38-4867-a64b-621e7778fdf7@arm.com>
References: <20240403114032.1162100-1-ryan.roberts@arm.com>
 <20240403114032.1162100-3-ryan.roberts@arm.com>
 <051052af-3b56-4290-98d3-fd5a1eb11ce1@redhat.com>
 <4110bb1d-65e5-4cf0-91ad-62749975829d@arm.com>

On 08/04/2024 13:47, Ryan Roberts wrote:
> On 08/04/2024 13:07, Ryan Roberts wrote:
>> [...]
>>>
>>> [...]
>>>
>>>> +
>>>> +/**
>>>> + * swap_pte_batch - detect a PTE batch for a set of contiguous swap entries
>>>> + * @start_ptep: Page table pointer for the first entry.
>>>> + * @max_nr: The maximum number of table entries to consider.
>>>> + * @entry: Swap entry recovered from the first table entry.
>>>> + *
>>>> + * Detect a batch of contiguous swap entries: consecutive (non-present) PTEs
>>>> + * containing swap entries all with consecutive offsets and targeting the same
>>>> + * swap type.
>>>> + *
>>>
>>> Likely you should document that any swp pte bits are ignored?
>>
>> Now that I understand what swp pte bits are, I think the simplest thing is to
>> just make this function always consider the pte bits by using pte_same() as you
>> suggest below? I don't think there is ever a case for ignoring the swp pte bits?
>> And then I don't need to do anything special for uffd-wp either (below you
>> suggested not doing batching when the VMA has uffd enabled).
>>
>> Any concerns?
>>
>>>
>>>> + * max_nr must be at least one and must be limited by the caller so scanning
>>>> + * cannot exceed a single page table.
>>>> + *
>>>> + * Return: the number of table entries in the batch.
>>>> + */
>>>> +static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
>>>> +                 swp_entry_t entry)
>>>> +{
>>>> +    const pte_t *end_ptep = start_ptep + max_nr;
>>>> +    unsigned long expected_offset = swp_offset(entry) + 1;
>>>> +    unsigned int expected_type = swp_type(entry);
>>>> +    pte_t *ptep = start_ptep + 1;
>>>> +
>>>> +    VM_WARN_ON(max_nr < 1);
>>>> +    VM_WARN_ON(non_swap_entry(entry));
>>>> +
>>>> +    while (ptep < end_ptep) {
>>>> +        pte_t pte = ptep_get(ptep);
>>>> +
>>>> +        if (pte_none(pte) || pte_present(pte))
>>>> +            break;
>>>> +
>>>> +        entry = pte_to_swp_entry(pte);
>>>> +
>>>> +        if (non_swap_entry(entry) ||
>>>> +            swp_type(entry) != expected_type ||
>>>> +            swp_offset(entry) != expected_offset)
>>>> +            break;
>>>> +
>>>> +        expected_offset++;
>>>> +        ptep++;
>>>> +    }
>>>> +
>>>> +    return ptep - start_ptep;
>>>> +}
>>>
>>> Looks very clean :)
>>>
>>> I was wondering whether we could similarly construct the expected swp PTE and
>>> only check pte_same.
>>>
>>> expected_pte = __swp_entry_to_pte(__swp_entry(expected_type, expected_offset));
>>
>> So planning to do this.
>
> Of course this clears all the swp pte bits in expected_pte. So we need to do something a bit more complex.
>
> If we can safely assume all offset bits are contiguous in every per-arch representation then we can do:

Looks like at least csky and hexagon store the offset in discontiguous
regions. So it will have to be the second approach if we want to avoid
anything arch-specific. I'll assume that for now; we can always specialize
pte_next_swp_offset() per-arch in the future if needed.

>
> static inline pte_t pte_next_swp_offset(pte_t pte)
> {
>     pte_t offset_inc = __swp_entry_to_pte(__swp_entry(0, 1));
>
>     return __pte(pte_val(pte) + pte_val(offset_inc));
> }
>
> Or if not:
>
> static inline pte_t pte_next_swp_offset(pte_t pte)
> {
>     swp_entry_t entry = pte_to_swp_entry(pte);
>     pte_t new = __swp_entry_to_pte(__swp_entry(swp_type(entry), swp_offset(entry) + 1));
>
>     if (pte_swp_soft_dirty(pte))
>         new = pte_swp_mksoft_dirty(new);
>     if (pte_swp_exclusive(pte))
>         new = pte_swp_mkexclusive(new);
>     if (pte_swp_uffd_wp(pte))
>         new = pte_swp_mkuffd_wp(new);
>
>     return new;
> }
>
> Then swap_pte_batch() becomes:
>
> static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
> {
>     pte_t expected_pte = pte_next_swp_offset(pte);
>     const pte_t *end_ptep = start_ptep + max_nr;
>     pte_t *ptep = start_ptep + 1;
>
>     VM_WARN_ON(max_nr < 1);
>     VM_WARN_ON(!is_swap_pte(pte));
>     VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
>
>     while (ptep < end_ptep) {
>         pte = ptep_get(ptep);
>
>         if (!pte_same(pte, expected_pte))
>             break;
>
>         expected_pte = pte_next_swp_offset(expected_pte);
>         ptep++;
>     }
>
>     return ptep - start_ptep;
> }
>
> Would you be happy with either of these? I'll go look if we can assume the
> offset bits are always contiguous.
>
>
>>
>>>
>>> ... or have a variant to increase only the swp offset for an existing pte. But
>>> non-trivial due to the arch-dependent format.
>>
>> not this - I agree this will be difficult due to per-arch changes. I'd rather
>> just do the generic version and leave the compiler to do the best it can to
>> simplify and optimize.
>>
>>>
>>> But then, we'd fail on mismatch of other swp pte bits.
>>>
>>>
>>> On swapin, when reusing this function (likely!), we'll need to make sure that
>>> the PTE bits match as well.
>>>
>>> See below regarding uffd-wp.
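
(Coming back to the offset-contiguity question above: here is a minimal
user-space sketch of why discontiguous offset bits defeat the single-addition
variant. The layout below is purely hypothetical, not csky's or hexagon's
actual encoding; the point is only that a carry out of the low fragment can
never propagate into the high fragment.)

/*
 * Hypothetical swap pte layout: offset bits [2:0] live at pte bits [4:2],
 * offset bits [5:3] at pte bits [10:8]. With such a split field, adding a
 * pre-encoded "offset == 1" pte value does not always produce the pte for
 * offset + 1, because the carry out of the low fragment is lost.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t encode_offset(uint64_t off)
{
    return ((off & 0x7) << 2) | (((off >> 3) & 0x7) << 8);
}

int main(void)
{
    uint64_t inc = encode_offset(1);    /* what __swp_entry(0, 1) would give */
    uint64_t off;

    for (off = 0; off < 16; off++) {
        uint64_t want = encode_offset(off + 1);
        uint64_t got = encode_offset(off) + inc;

        if (want != got)
            printf("off %2llu: naive add yields %#llx, want %#llx\n",
                   (unsigned long long)off,
                   (unsigned long long)got,
                   (unsigned long long)want);
    }
    return 0;
}

With a contiguous offset field the addition carries cleanly through the whole
field, which is all the first variant relies on; with a split field like the
above, offsets 7 and 15 already misencode.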
>>>
>>>
>>>>  #endif /* CONFIG_MMU */
>>>>
>>>>  void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>>>> diff --git a/mm/madvise.c b/mm/madvise.c
>>>> index 1f77a51baaac..070bedb4996e 100644
>>>> --- a/mm/madvise.c
>>>> +++ b/mm/madvise.c
>>>> @@ -628,6 +628,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>      struct folio *folio;
>>>>      int nr_swap = 0;
>>>>      unsigned long next;
>>>> +    int nr, max_nr;
>>>>
>>>>      next = pmd_addr_end(addr, end);
>>>>      if (pmd_trans_huge(*pmd))
>>>> @@ -640,7 +641,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>          return 0;
>>>>      flush_tlb_batched_pending(mm);
>>>>      arch_enter_lazy_mmu_mode();
>>>> -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>>> +    for (; addr != end; pte += nr, addr += PAGE_SIZE * nr) {
>>>> +        nr = 1;
>>>>          ptent = ptep_get(pte);
>>>>
>>>>          if (pte_none(ptent))
>>>> @@ -655,9 +657,11 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
>>>>
>>>>              entry = pte_to_swp_entry(ptent);
>>>>              if (!non_swap_entry(entry)) {
>>>> -                nr_swap--;
>>>> -                free_swap_and_cache(entry);
>>>> -                pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>>> +                max_nr = (end - addr) / PAGE_SIZE;
>>>> +                nr = swap_pte_batch(pte, max_nr, entry);
>>>> +                nr_swap -= nr;
>>>> +                free_swap_and_cache_nr(entry, nr);
>>>> +                clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>>>              } else if (is_hwpoison_entry(entry) ||
>>>>                     is_poisoned_swp_entry(entry)) {
>>>>                  pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 7dc6c3d9fa83..ef2968894718 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -1637,12 +1637,13 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>>                  folio_remove_rmap_pte(folio, page, vma);
>>>>              folio_put(folio);
>>>>          } else if (!non_swap_entry(entry)) {
>>>> -            /* Genuine swap entry, hence a private anon page */
>>>> +            max_nr = (end - addr) / PAGE_SIZE;
>>>> +            nr = swap_pte_batch(pte, max_nr, entry);
>>>> +            /* Genuine swap entries, hence private anon pages */
>>>>              if (!should_zap_cows(details))
>>>>                  continue;
>>>> -            rss[MM_SWAPENTS]--;
>>>> -            if (unlikely(!free_swap_and_cache(entry)))
>>>> -                print_bad_pte(vma, addr, ptent, NULL);
>>>> +            rss[MM_SWAPENTS] -= nr;
>>>> +            free_swap_and_cache_nr(entry, nr);
>>>>          } else if (is_migration_entry(entry)) {
>>>>              folio = pfn_swap_entry_folio(entry);
>>>>              if (!should_zap_folio(details, folio))
>>>> @@ -1665,8 +1666,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>>>              pr_alert("unrecognized swap entry 0x%lx\n", entry.val);
>>>>              WARN_ON_ONCE(1);
>>>>          }
>>>> -        pte_clear_not_present_full(mm, addr, pte, tlb->fullmm);
>>>> -        zap_install_uffd_wp_if_needed(vma, addr, pte, 1, details, ptent);
>>>> +        clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
>>>
>>> For zap_install_uffd_wp_if_needed(), the uffd-wp bit has to match.
>>>
>>> zap_install_uffd_wp_if_needed() will use the uffd-wp information in
>>> ptent->pteval to make a decision whether to place PTE_MARKER_UFFD_WP markers.
>>>
>>> With a mixture, you either lose some markers or place too many.
>>>
>>> A simple workaround would be to disable any such batching if the VMA does have
>>> uffd-wp enabled.
>>
>> Rather than this, I'll just consider all the swp pte bits when batching.
>>
>>>
>>>> +        zap_install_uffd_wp_if_needed(vma, addr, pte, nr, details, ptent);
>>>>      } while (pte += nr, addr += PAGE_SIZE * nr, addr != end);
>>
>> [...]
>>
>
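
For completeness, a quick user-space mock of the pte_same()-based batching
sketched above (simplified, hypothetical encoding; a single "exclusive" bit
stands in for all the swp pte bits, and the offset field is contiguous). It
sanity-checks that a batch terminates when a swp pte bit differs even though
the offset is still consecutive. This is not the kernel implementation, just
an illustration of the comparison logic:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;

/* Hypothetical encoding: bit 0 = exclusive, bits 5:1 = type, bits 63:6 = offset */
#define MK_PTE(type, off, excl) \
    ((pte_t)(excl) | ((pte_t)(type) << 1) | ((pte_t)(off) << 6))

static pte_t pte_next_swp_offset(pte_t pte)
{
    return pte + MK_PTE(0, 1, 0);    /* contiguous offset field, so +1 carries */
}

static int swap_pte_batch(const pte_t *start_ptep, int max_nr)
{
    pte_t expected = pte_next_swp_offset(start_ptep[0]);
    int nr = 1;

    while (nr < max_nr && start_ptep[nr] == expected) {    /* pte_same() */
        expected = pte_next_swp_offset(expected);
        nr++;
    }
    return nr;
}

int main(void)
{
    /* offsets 8..10 batch; the 4th entry differs only in the swp pte bit */
    pte_t ptes[] = {
        MK_PTE(2, 8, 0), MK_PTE(2, 9, 0), MK_PTE(2, 10, 0), MK_PTE(2, 11, 1),
    };

    assert(swap_pte_batch(ptes, 4) == 3);
    printf("batch length: %d\n", swap_pte_batch(ptes, 4));
    return 0;
}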