Date: Mon, 18 Dec 2023 16:07:27 +0000
Subject: Re: [PATCH v1 13/39] mm/rmap: factor out adding folio mappings into __folio_add_rmap()
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
References: <20231211155652.131054-1-david@redhat.com> <20231211155652.131054-14-david@redhat.com>
From: Ryan Roberts <ryan.roberts@arm.com>
In-Reply-To: <20231211155652.131054-14-david@redhat.com>

On 11/12/2023 15:56, David Hildenbrand wrote:
> Let's factor it out to prepare for reuse as we convert
> page_add_anon_rmap() to folio_add_anon_rmap_[pte|ptes|pmd]().
>
> Make the compiler always special-case on the granularity by using
> __always_inline.
>
> Reviewed-by: Yin Fengwei
> Signed-off-by: David Hildenbrand
> ---
>  mm/rmap.c | 81 ++++++++++++++++++++++++++++++-------------------------
>  1 file changed, 45 insertions(+), 36 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2ff2f11275e5..c5761986a411 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1157,6 +1157,49 @@ int folio_total_mapcount(struct folio *folio)
>  	return mapcount;
>  }
>
> +static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
> +		struct page *page, int nr_pages, enum rmap_mode mode,
> +		unsigned int *nr_pmdmapped)
> +{
> +	atomic_t *mapped = &folio->_nr_pages_mapped;
> +	int first, nr = 0;
> +
> +	__folio_rmap_sanity_checks(folio, page, nr_pages, mode);
> +
> +	/* Is page being mapped by PTE? Is this its first map to be added? */

I suspect this comment is left over from the old version? It sounds a bit
odd in its new context.

> +	switch (mode) {
> +	case RMAP_MODE_PTE:
> +		do {
> +			first = atomic_inc_and_test(&page->_mapcount);
> +			if (first && folio_test_large(folio)) {
> +				first = atomic_inc_return_relaxed(mapped);
> +				first = (first < COMPOUND_MAPPED);
> +			}
> +
> +			if (first)
> +				nr++;
> +		} while (page++, --nr_pages > 0);
> +		break;
> +	case RMAP_MODE_PMD:
> +		first = atomic_inc_and_test(&folio->_entire_mapcount);
> +		if (first) {
> +			nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
> +			if (likely(nr < COMPOUND_MAPPED + COMPOUND_MAPPED)) {
> +				*nr_pmdmapped = folio_nr_pages(folio);
> +				nr = *nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
> +				/* Raced ahead of a remove and another add? */
> +				if (unlikely(nr < 0))
> +					nr = 0;
> +			} else {
> +				/* Raced ahead of a remove of COMPOUND_MAPPED */
> +				nr = 0;
> +			}
> +		}
> +		break;
> +	}
> +	return nr;
> +}
> +
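As an aside, just to check my understanding of the __always_inline point in
the commit message: since every caller passes mode as a compile-time
constant, each specialised wrapper should collapse to a single arm of the
switch above after inlining. Something like the below is what I have in
mind (example_add_rmap_pte() is hypothetical, not from this patch, purely
to illustrate):

	/*
	 * Hypothetical caller, not from this patch: with mode constant-folded
	 * to RMAP_MODE_PTE, the whole RMAP_MODE_PMD arm above should be dead
	 * code after inlining.
	 */
	static unsigned int example_add_rmap_pte(struct folio *folio,
			struct page *page)
	{
		unsigned int nr_pmdmapped = 0;

		/* nr_pmdmapped stays 0 on this path; only the PTE arm runs. */
		return __folio_add_rmap(folio, page, 1, RMAP_MODE_PTE,
					&nr_pmdmapped);
	}

Is that the intent?
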
>  /**
>   * folio_move_anon_rmap - move a folio to our anon_vma
>   * @folio:	The folio to move to our anon_vma
> @@ -1380,45 +1423,11 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
>  		struct page *page, int nr_pages, struct vm_area_struct *vma,
>  		enum rmap_mode mode)
>  {
> -	atomic_t *mapped = &folio->_nr_pages_mapped;
> -	unsigned int nr_pmdmapped = 0, first;
> -	int nr = 0;
> +	unsigned int nr, nr_pmdmapped = 0;

You're still being inconsistent with signed/unsigned here. Is there a
reason these can't be signed like nr_pages in the interface?

>
>  	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
> -	__folio_rmap_sanity_checks(folio, page, nr_pages, mode);
> -
> -	/* Is page being mapped by PTE? Is this its first map to be added? */
> -	switch (mode) {
> -	case RMAP_MODE_PTE:
> -		do {
> -			first = atomic_inc_and_test(&page->_mapcount);
> -			if (first && folio_test_large(folio)) {
> -				first = atomic_inc_return_relaxed(mapped);
> -				first = (first < COMPOUND_MAPPED);
> -			}
> -
> -			if (first)
> -				nr++;
> -		} while (page++, --nr_pages > 0);
> -		break;
> -	case RMAP_MODE_PMD:
> -		first = atomic_inc_and_test(&folio->_entire_mapcount);
> -		if (first) {
> -			nr = atomic_add_return_relaxed(COMPOUND_MAPPED, mapped);
> -			if (likely(nr < COMPOUND_MAPPED + COMPOUND_MAPPED)) {
> -				nr_pmdmapped = folio_nr_pages(folio);
> -				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
> -				/* Raced ahead of a remove and another add? */
> -				if (unlikely(nr < 0))
> -					nr = 0;
> -			} else {
> -				/* Raced ahead of a remove of COMPOUND_MAPPED */
> -				nr = 0;
> -			}
> -		}
> -		break;
> -	}
>
> +	nr = __folio_add_rmap(folio, page, nr_pages, mode, &nr_pmdmapped);
>  	if (nr_pmdmapped)
>  		__lruvec_stat_mod_folio(folio, folio_test_swapbacked(folio) ?
>  				NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
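
To be clear about what I mean by consistent, the below is roughly the shape
I was imagining (untested, purely illustrative; the only difference from
your version is the types):

	static __always_inline int __folio_add_rmap(struct folio *folio,
			struct page *page, int nr_pages, enum rmap_mode mode,
			int *nr_pmdmapped);

and correspondingly in __folio_add_file_rmap():

	int nr, nr_pmdmapped = 0;

	nr = __folio_add_rmap(folio, page, nr_pages, mode, &nr_pmdmapped);

i.e. nr, nr_pmdmapped, the return value and nr_pages would all be signed
ints, so nothing changes signedness across the call. But if you prefer to
keep them unsigned, it would be good to at least do it consistently.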