From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4255e71a-63c9-b2f9-5e97-e46834f7837c@arm.com>
Date: Thu, 3 Aug 2023 14:56:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0)
 Gecko/20100101 Thunderbird/102.13.0
Subject: Re: [PATCH v4 3/3] mm: Batch-zap large anonymous folio PTE mappings
To: David Hildenbrand, Andrew Morton, Matthew Wilcox, Yin Fengwei,
 Yu Zhao, Yang Shi, "Huang, Ying", Zi Yan, Nathan Chancellor,
 Alexander Gordeev, Gerald Schaefer
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
References:
 <20230727141837.3386072-1-ryan.roberts@arm.com>
 <20230727141837.3386072-4-ryan.roberts@arm.com>
 <6cda91b3-bb7a-4c4c-a618-2572b9c8bbf9@redhat.com>
From: Ryan Roberts
In-Reply-To: <6cda91b3-bb7a-4c4c-a618-2572b9c8bbf9@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

As per yesterday's discussion, I'm going to rework this series into a more
generic version that covers pagecache folios too. But your comments will
still be relevant there, so answers below.

On 03/08/2023 14:38, David Hildenbrand wrote:
> On 27.07.23 16:18, Ryan Roberts wrote:
>> This allows batching the rmap removal with folio_remove_rmap_range(),
>> which means we avoid spuriously adding a partially unmapped folio to the
>> deferred split queue in the common case, which reduces split queue lock
>> contention.
>>
>> Previously each page was removed from the rmap individually with
>> page_remove_rmap(). If the first page belonged to a large folio, this
>> would cause page_remove_rmap() to conclude that the folio was now
>> partially mapped and add the folio to the deferred split queue. But
>> subsequent calls would cause the folio to become fully unmapped, meaning
>> there is no value to adding it to the split queue.
>>
>> A complicating factor is that for platforms where MMU_GATHER_NO_GATHER
>> is enabled (e.g. s390), __tlb_remove_page() drops a reference to the
>> page. This means that the folio reference count could drop to zero while
>> still in use (i.e. before folio_remove_rmap_range() is called). This
>> does not happen on other platforms because the actual page freeing is
>> deferred.
>>
>> Solve this by appropriately getting/putting the folio to guarantee it
>> does not get freed early. Given the need to get/put the folio in the
>> batch path, we stick to the non-batched path if the folio is not large.
>> While the batched path is functionally correct for a folio with 1 page,
>> it is unlikely to be as efficient as the existing non-batched path in
>> this case.
>>
>> Signed-off-by: Ryan Roberts
>> ---
>>   mm/memory.c | 132 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 132 insertions(+)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 01f39e8144ef..d35bd8d2b855 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1391,6 +1391,99 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
>>       pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
>>   }
>>
>> +static inline unsigned long page_cont_mapped_vaddr(struct page *page,
>> +                struct page *anchor, unsigned long anchor_vaddr)
>> +{
>> +    unsigned long offset;
>> +    unsigned long vaddr;
>> +
>> +    offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
>> +    vaddr = anchor_vaddr + offset;
>> +
>> +    if (anchor > page) {
>> +        if (vaddr > anchor_vaddr)
>> +            return 0;
>> +    } else {
>> +        if (vaddr < anchor_vaddr)
>> +            return ULONG_MAX;
>> +    }
>> +
>> +    return vaddr;
>> +}
>> +
>> +static int folio_nr_pages_cont_mapped(struct folio *folio,
>> +                      struct page *page, pte_t *pte,
>> +                      unsigned long addr, unsigned long end)
>> +{
>> +    pte_t ptent;
>> +    int floops;
>> +    int i;
>> +    unsigned long pfn;
>> +    struct page *folio_end;
>> +
>> +    if (!folio_test_large(folio))
>> +        return 1;
>> +
>> +    folio_end = &folio->page + folio_nr_pages(folio);
>> +    end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
>> +    floops = (end - addr) >> PAGE_SHIFT;
>> +    pfn = page_to_pfn(page);
>> +    pfn++;
>> +    pte++;
>> +
>> +    for (i = 1; i < floops; i++) {
>> +        ptent = ptep_get(pte);
>> +
>> +        if (!pte_present(ptent) || pte_pfn(ptent) != pfn)
>> +            break;
>> +
>> +        pfn++;
>> +        pte++;
>> +    }
>> +
>> +    return i;
>> +}
>> +
>> +static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
>> +                        struct vm_area_struct *vma,
>> +                        struct folio *folio,
>> +                        struct page *page, pte_t *pte,
>> +                        unsigned long addr, int nr_pages,
>> +                        struct zap_details *details)
>> +{
>> +    struct mm_struct *mm = tlb->mm;
>> +    pte_t ptent;
>> +    bool full;
>> +    int i;
>> +
>> +    /* __tlb_remove_page may drop a ref; prevent going to 0 while in use. */
>> +    folio_get(folio);
>
> Is there no way around that? It feels wrong and IMHO a bit ugly.

I haven't found a good one, but I'm all ears. On the non-batched path,
__tlb_remove_page() is called before page_remove_rmap(). On this path, the
whole point is that we just call folio_remove_rmap_range() for the whole
batch. If I'm right, we must only remove from the rmap *after* doing the
pte clear to avoid races. And we need to call __tlb_remove_page() as we go,
because it might run out of space at any time.

If I knew how many pages the tlb could accept ahead of time, I could defer
the __tlb_remove_page() calls to after folio_remove_rmap_range(). But there
is no accessor for that as far as I can see. It looks fairly complicated to
calculate it too.

>
> With this patch, you'll might suddenly have mapcount > refcount for a folio, or
> am I wrong?

Yes you would. Does that break things?
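
To be concrete about the window where that can happen, here is a rough
sketch of the ordering I intend on the batched path. This is illustrative
only, not the actual hunk from the patch: the wrapper name is invented for
the example, the early bailout when the mmu_gather fills up, rss accounting
and the small-folio fallback are omitted, and the folio_remove_rmap_range()
argument list is assumed rather than quoted.

/*
 * Illustrative only; imagine this sitting in mm/memory.c next to the
 * helpers quoted above, so the usual mm headers and the static helpers
 * are in scope.
 */
static void zap_anon_folio_batch(struct mmu_gather *tlb,
				 struct vm_area_struct *vma,
				 struct folio *folio, struct page *page,
				 pte_t *pte, unsigned long addr,
				 unsigned long end,
				 struct zap_details *details)
{
	/* How many pages of this folio are contiguously mapped from here. */
	int nr = folio_nr_pages_cont_mapped(folio, page, pte, addr, end);

	/*
	 * Takes its own folio reference, clears the PTEs and hands each page
	 * to the mmu_gather. No rmap state has been touched yet, so from
	 * here until folio_remove_rmap_range() the mapcount can exceed the
	 * refcount on MMU_GATHER_NO_GATHER configs.
	 */
	try_zap_anon_pte_range(tlb, vma, folio, page, pte, addr, nr, details);

	/*
	 * All PTEs in the batch are gone, so drop the whole range from the
	 * rmap in one call; a fully unmapped folio is not spuriously added
	 * to the deferred split queue.
	 */
	folio_remove_rmap_range(folio, page, nr, vma);

	/* Balance the folio_get() taken inside try_zap_anon_pte_range(). */
	folio_put(folio);
}

The extra reference is only held across the window where pages have already
been handed to the mmu_gather but the rmap has not yet been updated.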

>
>> +
>> +    for (i = 0; i < nr_pages;) {
>> +        ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
>> +        tlb_remove_tlb_entry(tlb, pte, addr);
>> +        zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
>> +        full = __tlb_remove_page(tlb, page, 0);
>> +
>> +        if (unlikely(page_mapcount(page) < 1))
>> +            print_bad_pte(vma, addr, ptent, page);
>
> Can we avoid new users of page_mapcount() outside rmap code, please? :)

Sure. This is just trying to replicate the same diagnostics that are done on
the non-batched path. I'm happy to remove it.

>
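
For reference, the diagnostic I was mirroring is the one in the existing
non-batched zap path. Paraphrasing from memory (so not an exact quote of
current mm/memory.c), the comparison looks roughly like this; the threshold
differs because the existing check runs after page_remove_rmap() has
already dropped the mapcount, whereas in the batched path the rmap removal
is deferred to the end of the batch:

	/* Existing non-batched path in zap_pte_range(), from memory: */
	page_remove_rmap(page, vma, false);
	if (unlikely(page_mapcount(page) < 0))
		print_bad_pte(vma, addr, ptent, page);

	/*
	 * Batched path above: the rmap is only dropped later via
	 * folio_remove_rmap_range(), so the check runs while the page is
	 * still expected to be mapped at least once, hence "< 1" rather
	 * than "< 0".
	 */
	if (unlikely(page_mapcount(page) < 1))
		print_bad_pte(vma, addr, ptent, page);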