From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Tue, 27 Feb 2024 10:48:27 +0000
Subject: Re: [PATCH v2] mm: make folio_pte_batch available outside of mm/memory.c
To: Barry Song <21cnbao@gmail.com>, akpm@linux-foundation.org, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Barry Song, David Hildenbrand, Lance Yang, Yin Fengwei
In-Reply-To: <20240227104201.337988-1-21cnbao@gmail.com>
References: <20240227104201.337988-1-21cnbao@gmail.com>
Content-Type: text/plain; charset=UTF-8
On 27/02/2024 10:42, Barry Song wrote:
> From: Barry Song
>
> madvise, mprotect and some others might need folio_pte_batch to check if
> a range of PTEs are completely mapped to a large folio with contiguous
> physical addresses. Let's make it available in mm/internal.h.
>
> Suggested-by: David Hildenbrand
> Cc: Lance Yang
> Cc: Ryan Roberts
> Cc: Yin Fengwei
> [david@redhat.com: improve the doc for the exported func]
> Signed-off-by: David Hildenbrand
> Signed-off-by: Barry Song

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>

> ---
> -v2:
>  * inline folio_pte_batch according to Ryan and David;
>  * improve the doc, thanks to David's work on this;
>  * fix tags of David and add David's s-o-b;
> -v1:
>  https://lore.kernel.org/all/20240227024050.244567-1-21cnbao@gmail.com/
>
>  mm/internal.h | 90 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/memory.c   | 76 -------------------------------------------
>  2 files changed, 90 insertions(+), 76 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 13b59d384845..fa9e2f7db506 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -83,6 +83,96 @@ static inline void *folio_raw_mapping(struct folio *folio)
>  	return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
>  }
>
> +/* Flags for folio_pte_batch(). */
> +typedef int __bitwise fpb_t;
> +
> +/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
> +#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
> +
> +/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
> +#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
> +
> +static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> +{
> +	if (flags & FPB_IGNORE_DIRTY)
> +		pte = pte_mkclean(pte);
> +	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
> +		pte = pte_clear_soft_dirty(pte);
> +	return pte_wrprotect(pte_mkold(pte));
> +}
> +
> +/**
> + * folio_pte_batch - detect a PTE batch for a large folio
> + * @folio: The large folio to detect a PTE batch for.
> + * @addr: The user virtual address the first page is mapped at.
> + * @start_ptep: Page table pointer for the first entry.
> + * @pte: Page table entry for the first page.
> + * @max_nr: The maximum number of table entries to consider.
> + * @flags: Flags to modify the PTE batch semantics.
> + * @any_writable: Optional pointer to indicate whether any entry except the
> + *		  first one is writable.
> + *
> + * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> + * pages of the same large folio.
> + *
> + * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
> + * the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
> + * soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
> + *
> + * start_ptep must map any page of the folio. max_nr must be at least one and
> + * must be limited by the caller so scanning cannot exceed a single page table.
> + *
> + * Return: the number of table entries in the batch.
> + */
> +static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> +		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> +		bool *any_writable)
> +{
> +	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> +	const pte_t *end_ptep = start_ptep + max_nr;
> +	pte_t expected_pte, *ptep;
> +	bool writable;
> +	int nr;
> +
> +	if (any_writable)
> +		*any_writable = false;
> +
> +	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> +	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> +	VM_WARN_ON_FOLIO(page_folio(pfn_to_page(pte_pfn(pte))) != folio, folio);
> +
> +	nr = pte_batch_hint(start_ptep, pte);
> +	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
> +	ptep = start_ptep + nr;
> +
> +	while (ptep < end_ptep) {
> +		pte = ptep_get(ptep);
> +		if (any_writable)
> +			writable = !!pte_write(pte);
> +		pte = __pte_batch_clear_ignored(pte, flags);
> +
> +		if (!pte_same(pte, expected_pte))
> +			break;
> +
> +		/*
> +		 * Stop immediately once we reached the end of the folio. In
> +		 * corner cases the next PFN might fall into a different
> +		 * folio.
> +		 */
> +		if (pte_pfn(pte) >= folio_end_pfn)
> +			break;
> +
> +		if (any_writable)
> +			*any_writable |= writable;
> +
> +		nr = pte_batch_hint(ptep, pte);
> +		expected_pte = pte_advance_pfn(expected_pte, nr);
> +		ptep += nr;
> +	}
> +
> +	return min(ptep - start_ptep, max_nr);
> +}
> +
>  void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
>  		int nr_throttled);
>  static inline void acct_reclaim_writeback(struct folio *folio)
> diff --git a/mm/memory.c b/mm/memory.c
> index 1c45b6a42a1b..a7bcc39de56b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -953,82 +953,6 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
>  	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
>  }
>
> -/* Flags for folio_pte_batch(). */
> -typedef int __bitwise fpb_t;
> -
> -/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
> -#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
> -
> -/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
> -#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
> -
> -static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> -{
> -	if (flags & FPB_IGNORE_DIRTY)
> -		pte = pte_mkclean(pte);
> -	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
> -		pte = pte_clear_soft_dirty(pte);
> -	return pte_wrprotect(pte_mkold(pte));
> -}
> -
> -/*
> - * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> - * pages of the same folio.
> - *
> - * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
> - * the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
> - * soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
> - *
> - * If "any_writable" is set, it will indicate if any other PTE besides the
> - * first (given) PTE is writable.
> - */
> -static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> -		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> -		bool *any_writable)
> -{
> -	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> -	const pte_t *end_ptep = start_ptep + max_nr;
> -	pte_t expected_pte, *ptep;
> -	bool writable;
> -	int nr;
> -
> -	if (any_writable)
> -		*any_writable = false;
> -
> -	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> -
> -	nr = pte_batch_hint(start_ptep, pte);
> -	expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
> -	ptep = start_ptep + nr;
> -
> -	while (ptep < end_ptep) {
> -		pte = ptep_get(ptep);
> -		if (any_writable)
> -			writable = !!pte_write(pte);
> -		pte = __pte_batch_clear_ignored(pte, flags);
> -
> -		if (!pte_same(pte, expected_pte))
> -			break;
> -
> -		/*
> -		 * Stop immediately once we reached the end of the folio. In
> -		 * corner cases the next PFN might fall into a different
> -		 * folio.
> -		 */
> -		if (pte_pfn(pte) >= folio_end_pfn)
> -			break;
> -
> -		if (any_writable)
> -			*any_writable |= writable;
> -
> -		nr = pte_batch_hint(ptep, pte);
> -		expected_pte = pte_advance_pfn(expected_pte, nr);
> -		ptep += nr;
> -	}
> -
> -	return min(ptep - start_ptep, max_nr);
> -}
> -
>  /*
>   * Copy one present PTE, trying to batch-process subsequent PTEs that map
>   * consecutive pages of the same folio by copying them as well.