From: "Huang, Ying" <ying.huang@intel.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
	Yu Zhao, Yang Shi, Zi Yan, linux-mm@kvack.org
Subject: Re: [PATCH v1 2/3] mm: Implement folio_remove_rmap_range()
References: <20230717143110.260162-1-ryan.roberts@arm.com>
	<20230717143110.260162-3-ryan.roberts@arm.com>
Date: Tue, 18 Jul 2023 14:22:19 +0800
In-Reply-To: <20230717143110.260162-3-ryan.roberts@arm.com> (Ryan Roberts's
	message of "Mon, 17 Jul 2023 15:31:09 +0100")
Message-ID: <874jm1d9ic.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Ryan Roberts writes:

> Like page_remove_rmap() but batch-removes the rmap for a range of pages
> belonging to a folio. This can provide a small speedup due to less
> manipulation of the various counters. But more crucially, if removing the
> rmap for all pages of a folio in a batch, there is no need to
> (spuriously) add it to the deferred split list, which saves significant
> cost when there is contention for the split queue lock.
>
> All contained pages are accounted using the order-0 folio (or base page)
> scheme.
>
> Signed-off-by: Ryan Roberts
> ---
>  include/linux/rmap.h |  2 ++
>  mm/rmap.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 67 insertions(+)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b87d01660412..f578975c12c0 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -200,6 +200,8 @@ void page_add_file_rmap(struct page *, struct vm_area_struct *,
>  		bool compound);
>  void page_remove_rmap(struct page *, struct vm_area_struct *,
>  		bool compound);
> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> +		int nr, struct vm_area_struct *vma);
>
>  void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
>  		unsigned long address, rmap_t flags);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2baf57d65c23..1da05aca2bb1 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1359,6 +1359,71 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>  	mlock_vma_folio(folio, vma, compound);
>  }
>
> +/*
> + * folio_remove_rmap_range - take down pte mappings from a range of pages
> + * belonging to a folio. All pages are accounted as small pages.
> + * @folio: folio that all pages belong to
> + * @page: first page in range to remove mapping from
> + * @nr: number of pages in range to remove mapping from
> + * @vma: the vm area from which the mapping is removed
> + *
> + * The caller needs to hold the pte lock.
> + */
> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> +		int nr, struct vm_area_struct *vma)
> +{
> +	atomic_t *mapped = &folio->_nr_pages_mapped;
> +	int nr_unmapped = 0;
> +	int nr_mapped;
> +	bool last;
> +	enum node_stat_item idx;
> +
> +	if (unlikely(folio_test_hugetlb(folio))) {
> +		VM_WARN_ON_FOLIO(1, folio);
> +		return;
> +	}
> +
> +	if (!folio_test_large(folio)) {
> +		/* Is this the page's last map to be removed? */
> +		last = atomic_add_negative(-1, &page->_mapcount);
> +		nr_unmapped = last;
> +	} else {
> +		for (; nr != 0; nr--, page++) {
> +			/* Is this the page's last map to be removed? */
> +			last = atomic_add_negative(-1, &page->_mapcount);
> +			if (last) {
> +				/* Page still mapped if folio mapped entirely */
> +				nr_mapped = atomic_dec_return_relaxed(mapped);
> +				if (nr_mapped < COMPOUND_MAPPED)
> +					nr_unmapped++;
> +			}
> +		}
> +	}
> +
> +	if (nr_unmapped) {
> +		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
> +		__lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
> +
> +		/*
> +		 * Queue anon THP for deferred split if we have just unmapped at

Just some nitpicks.  So feel free to ignore.

s/anon THP/large folio/ ?

> +		 * least 1 page, while at least 1 page remains mapped.
> +		 */
> +		if (folio_test_large(folio) && folio_test_anon(folio))
> +			if (nr_mapped)

	if (folio_test_large(folio) && folio_test_anon(folio) && nr_mapped) ?

> +				deferred_split_folio(folio);
> +	}
> +
> +	/*
> +	 * It would be tidy to reset folio_test_anon mapping when fully
> +	 * unmapped, but that might overwrite a racing page_add_anon_rmap
> +	 * which increments mapcount after us but sets mapping before us:
> +	 * so leave the reset to free_pages_prepare, and remember that
> +	 * it's only reliable while mapped.
> +	 */
> +
> +	munlock_vma_folio(folio, vma, false);
> +}
> +
>  /**
>   * page_remove_rmap - take down pte mapping from a page
>   * @page: page to remove mapping from

Best Regards,
Huang, Ying