From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 5 Dec 2023 13:37:31 +0000
Subject: Re: [PATCH RFC 23/39] mm/rmap: introduce folio_remove_rmap_[pte|ptes|pmd]()
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
Message-ID: <17d64bc7-3a84-4562-821c-439950e1da91@arm.com>
In-Reply-To: <58be47c1-c34e-46a2-a32b-a993f7dee664@redhat.com>
References: <20231204142146.91437-1-david@redhat.com> <20231204142146.91437-24-david@redhat.com> <58be47c1-c34e-46a2-a32b-a993f7dee664@redhat.com>

On 05/12/2023 13:09, David Hildenbrand wrote:
>>> +static __always_inline void __folio_remove_rmap(struct folio *folio,
>>> +        struct page *page, unsigned int nr_pages,
>>> +        struct vm_area_struct *vma, enum rmap_mode mode)
>>> +{
>>>      atomic_t *mapped = &folio->_nr_pages_mapped;
>>> -    int nr = 0, nr_pmdmapped = 0;
>>> -    bool last;
>>> +    int last, nr = 0, nr_pmdmapped = 0;
>>
>> nit: you're being inconsistent across the functions with signed vs unsigned for
>> page counts (e.g. nr, nr_pmdmapped) - see __folio_add_rmap(),
>> __folio_add_file_rmap(), __folio_add_anon_rmap().
>>
>
> Definitely.
>
>> I suggest picking one and sticking to it. Personally I'd go with signed int
>> (since that's what all the counters in struct folio that we are manipulating
>> are, underneath the atomic_t), then check that nr_pages > 0 in
>> __folio_rmap_sanity_checks().
>
> Can do, but note that the counters are signed to detect underflows. It doesn't
> make sense here to pass a negative number.

I agree it doesn't make sense to pass a negative number - hence the check.

These two functions are inconsistent on size, but agree on signedness:

long folio_nr_pages(struct folio *folio)
int folio_nr_pages_mapped(struct folio *folio)

I don't have a strong opinion.
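
FWIW, a rough, untested sketch of what I'm suggesting - signed int for the
count everywhere, with the range check done once in the sanity helper this
series already adds (exactly where it sits among the existing checks is a
guess on my part):

static __always_inline void __folio_rmap_sanity_checks(struct folio *folio,
        struct page *page, int nr_pages, enum rmap_mode mode)
{
    /* A zero or negative count is a caller bug, not an underflow. */
    VM_WARN_ON_FOLIO(nr_pages <= 0, folio);

    /* ... the other checks from this patch stay as they are ... */
}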
>
>>
>>>      enum node_stat_item idx;
>>>
>>> -    VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
>>> -    VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>>> +    __folio_rmap_sanity_checks(folio, page, nr_pages, mode);
>>>
>>>      /* Is page being unmapped by PTE? Is this its last map to be removed? */
>>> -    if (likely(!compound)) {
>>> -        last = atomic_add_negative(-1, &page->_mapcount);
>>> -        nr = last;
>>> -        if (last && folio_test_large(folio)) {
>>> -            nr = atomic_dec_return_relaxed(mapped);
>>> -            nr = (nr < COMPOUND_MAPPED);
>>> -        }
>>> -    } else if (folio_test_pmd_mappable(folio)) {
>>> -        /* That test is redundant: it's for safety or to optimize out */
>>> +    if (likely(mode == RMAP_MODE_PTE)) {
>>> +        do {
>>> +            last = atomic_add_negative(-1, &page->_mapcount);
>>> +            if (last && folio_test_large(folio)) {
>>> +                last = atomic_dec_return_relaxed(mapped);
>>> +                last = (last < COMPOUND_MAPPED);
>>> +            }
>>> +
>>> +            if (last)
>>> +                nr++;
>>> +        } while (page++, --nr_pages > 0);
>>> +    } else if (mode == RMAP_MODE_PMD) {
>>>          last = atomic_add_negative(-1, &folio->_entire_mapcount);
>>>          if (last) {
>>>              nr = atomic_sub_return_relaxed(COMPOUND_MAPPED, mapped);
>>> @@ -1517,7 +1528,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>>>           * is still mapped.
>>>           */
>>>          if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
>>
>> folio_test_pmd_mappable() -> folio_test_large()
>>
>> Since you're converting this to support batched PTE removal, it might as well
>> support smaller-than-PMD folios too?
>>
>
> I remember that you have a patch for that, right? :)
>
>>
>> I currently have a patch to do this same change in the multi-size THP series.
>>
>
> Ah, yes, and that should go in first.
>
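
As a concrete illustration of the folio_test_large() substitution suggested
above, the end result would look roughly like this (untested sketch; the
inner condition is the pre-existing "!compound || nr < nr_pmdmapped" check,
adapted to this patch's RMAP_MODE_* naming):

        /*
         * Queue any large anon folio for deferred split if at least one
         * page of the folio is no longer mapped and at least one page is
         * still mapped. folio_test_pmd_mappable() would skip large folios
         * smaller than PMD size; folio_test_large() covers them too.
         */
        if (folio_test_large(folio) && folio_test_anon(folio))
            if (mode == RMAP_MODE_PTE || nr < nr_pmdmapped)
                deferred_split_folio(folio);

That way deferred splitting also covers multi-size THP folios once they
become partially unmapped.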