From: "Huang, Ying" <ying.huang@intel.com>
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
	Yu Zhao, Yang Shi, Zi Yan, linux-mm@kvack.org
Subject: Re: [PATCH v1 2/3] mm: Implement folio_remove_rmap_range()
References: <20230717143110.260162-1-ryan.roberts@arm.com>
	<20230717143110.260162-3-ryan.roberts@arm.com>
Date: Tue, 18 Jul 2023 15:12:03 +0800
In-Reply-To: <20230717143110.260162-3-ryan.roberts@arm.com> (Ryan Roberts's
	message of "Mon, 17 Jul 2023 15:31:09 +0100")
Message-ID: <87zg3tbsn0.fsf@yhuang6-desk2.ccr.corp.intel.com>

Ryan Roberts <ryan.roberts@arm.com> writes:

> Like page_remove_rmap() but batch-removes the rmap for a range of pages
> belonging to a folio. This can provide a small speedup due to less
> manipulation of the various counters. But more crucially, if removing the
> rmap for all pages of a folio in a batch, there is no need to
> (spuriously) add it to the deferred split list, which saves significant
> cost when there is contention for the split queue lock.
>
> All contained pages are accounted using the order-0 folio (or base page)
> scheme.
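If I understand the intent correctly, a caller would clear a run of PTEs
that map a contiguous range of pages of one folio and then make a single
batched rmap call, roughly like the sketch below (just my own sketch to
check my understanding; the caller shape and names are made up, not taken
from this series):

static void zap_folio_range_sketch(struct vm_area_struct *vma,
		struct folio *folio, struct page *page, pte_t *pte,
		unsigned long addr, int nr)
{
	int i;

	/* Clear the PTEs covering the range (illustrative only). */
	for (i = 0; i < nr; i++)
		ptep_get_and_clear(vma->vm_mm, addr + i * PAGE_SIZE, pte + i);

	/*
	 * One batched rmap update for the whole range, instead of nr
	 * calls to page_remove_rmap().
	 */
	folio_remove_rmap_range(folio, page, nr, vma);
}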
>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  include/linux/rmap.h |  2 ++
>  mm/rmap.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 67 insertions(+)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b87d01660412..f578975c12c0 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -200,6 +200,8 @@ void page_add_file_rmap(struct page *, struct vm_area_struct *,
>  		bool compound);
>  void page_remove_rmap(struct page *, struct vm_area_struct *,
>  		bool compound);
> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> +		int nr, struct vm_area_struct *vma);
>
>  void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
>  		unsigned long address, rmap_t flags);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 2baf57d65c23..1da05aca2bb1 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1359,6 +1359,71 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
>  	mlock_vma_folio(folio, vma, compound);
>  }
>
> +/*
> + * folio_remove_rmap_range - take down pte mappings from a range of pages
> + * belonging to a folio. All pages are accounted as small pages.
> + * @folio:	folio that all pages belong to
> + * @page:	first page in range to remove mapping from
> + * @nr:		number of pages in range to remove mapping from
> + * @vma:	the vm area from which the mapping is removed
> + *
> + * The caller needs to hold the pte lock.
> + */
> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> +					int nr, struct vm_area_struct *vma)

Can we call folio_remove_rmap_range() in page_remove_rmap() if
!compound?  This could give us an opportunity to reduce code duplication
(a rough sketch of what I mean is at the end of this mail).

Best Regards,
Huang, Ying

> +{
> +	atomic_t *mapped = &folio->_nr_pages_mapped;
> +	int nr_unmapped = 0;
> +	int nr_mapped;
> +	bool last;
> +	enum node_stat_item idx;
> +
> +	if (unlikely(folio_test_hugetlb(folio))) {
> +		VM_WARN_ON_FOLIO(1, folio);
> +		return;
> +	}
> +
> +	if (!folio_test_large(folio)) {
> +		/* Is this the page's last map to be removed? */
> +		last = atomic_add_negative(-1, &page->_mapcount);
> +		nr_unmapped = last;
> +	} else {
> +		for (; nr != 0; nr--, page++) {
> +			/* Is this the page's last map to be removed? */
> +			last = atomic_add_negative(-1, &page->_mapcount);
> +			if (last) {
> +				/* Page still mapped if folio mapped entirely */
> +				nr_mapped = atomic_dec_return_relaxed(mapped);
> +				if (nr_mapped < COMPOUND_MAPPED)
> +					nr_unmapped++;
> +			}
> +		}
> +	}
> +
> +	if (nr_unmapped) {
> +		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
> +		__lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
> +
> +		/*
> +		 * Queue anon THP for deferred split if we have just unmapped at
> +		 * least 1 page, while at least 1 page remains mapped.
> +		 */
> +		if (folio_test_large(folio) && folio_test_anon(folio))
> +			if (nr_mapped)
> +				deferred_split_folio(folio);
> +	}
> +
> +	/*
> +	 * It would be tidy to reset folio_test_anon mapping when fully
> +	 * unmapped, but that might overwrite a racing page_add_anon_rmap
> +	 * which increments mapcount after us but sets mapping before us:
> +	 * so leave the reset to free_pages_prepare, and remember that
> +	 * it's only reliable while mapped.
> +	 */
> +
> +	munlock_vma_folio(folio, vma, false);
> +}
> +
>  /**
>   * page_remove_rmap - take down pte mapping from a page
>   * @page: page to remove mapping from
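To illustrate the suggestion above: a minimal, untested sketch of how the
!compound path of page_remove_rmap() could delegate to
folio_remove_rmap_range() (only an assumption about the shape of the
refactoring, not something taken from this patch):

void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
		bool compound)
{
	struct folio *folio = page_folio(page);

	if (!compound) {
		/* PTE-mapped single page: reuse the batched helper. */
		folio_remove_rmap_range(folio, page, 1, vma);
		return;
	}

	/* ... existing compound (PMD-mapped) handling unchanged ... */
}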