Date: Wed, 16 Jul 2025 00:08:57 +0200
Subject: Re: [PATCH v5 13/14] mm: memory: support clearing page-extents
To: Ankur Arora, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Cc: akpm@linux-foundation.org, bp@alien8.de, dave.hansen@linux.intel.com,
    hpa@zytor.com, mingo@redhat.com, mjguzik@gmail.com, luto@kernel.org,
    peterz@infradead.org, acme@kernel.org, namhyung@kernel.org,
    tglx@linutronix.de, willy@infradead.org, raghavendra.kt@amd.com,
    boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
References: <20250710005926.1159009-1-ankur.a.arora@oracle.com>
 <20250710005926.1159009-14-ankur.a.arora@oracle.com>
From: David Hildenbrand
Organization: Red Hat
In-Reply-To: <20250710005926.1159009-14-ankur.a.arora@oracle.com>

On 10.07.25 02:59, Ankur Arora wrote:
> folio_zero_user() is constrained to clear in a page-at-a-time
> fashion because it supports CONFIG_HIGHMEM which means that kernel
> mappings for pages in a folio are not guaranteed to be contiguous.
>
> We don't have this problem when running under configurations with
> CONFIG_CLEAR_PAGE_EXTENT (implies !CONFIG_HIGHMEM), so zero in
> longer page-extents.
>
> This is expected to be faster because the processor can now optimize
> the clearing based on the knowledge of the extent.
>
> However, clearing in larger chunks can have two other problems:
>
>  - cache locality when clearing small folios (< MAX_ORDER_NR_PAGES)
>    (larger folios don't have any expectation of cache locality).
>
>  - preemption latency when clearing large folios.
>
> Handle the first by splitting the clearing in three parts: the
> faulting page and its immediate locality, its left and right
> regions; the local neighbourhood is cleared last.
>
> The second problem is relevant only when running under cooperative
> preemption models. Limit the worst case preemption latency by clearing
> in architecture specified ARCH_CLEAR_PAGE_EXTENT units.
>
> Signed-off-by: Ankur Arora
> ---
>  mm/memory.c | 86 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 85 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index b0cda5aab398..c52806270375 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -7034,6 +7034,7 @@ static inline int process_huge_page(
>          return 0;
>  }
>
> +#ifndef CONFIG_CLEAR_PAGE_EXTENT
>  static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
>                                  unsigned int nr_pages)
>  {
> @@ -7058,7 +7059,10 @@ static int clear_subpage(unsigned long addr, int idx, void *arg)
>  /**
>   * folio_zero_user - Zero a folio which will be mapped to userspace.
>   * @folio: The folio to zero.
> - * @addr_hint: The address will be accessed or the base address if uncelar.
> + * @addr_hint: The address accessed by the user or the base address.
> + *
> + * folio_zero_user() uses clear_gigantic_page() or process_huge_page() to
> + * do page-at-a-time zeroing because it needs to handle CONFIG_HIGHMEM.
>   */
>  void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>  {
> @@ -7070,6 +7074,86 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>          process_huge_page(addr_hint, nr_pages, clear_subpage, folio);
>  }
>
> +#else /* CONFIG_CLEAR_PAGE_EXTENT */
> +
> +static void clear_pages_resched(void *addr, int npages)
> +{
> +        int i, remaining;
> +
> +        if (preempt_model_preemptible()) {
> +                clear_pages(addr, npages);
> +                goto out;
> +        }
> +
> +        for (i = 0; i < npages/ARCH_CLEAR_PAGE_EXTENT; i++) {
> +                clear_pages(addr + i * ARCH_CLEAR_PAGE_EXTENT * PAGE_SIZE,
> +                            ARCH_CLEAR_PAGE_EXTENT);
> +                cond_resched();
> +        }
> +
> +        remaining = npages % ARCH_CLEAR_PAGE_EXTENT;
> +
> +        if (remaining)
> +                clear_pages(addr + i * ARCH_CLEAR_PAGE_EXTENT * PAGE_SHIFT,
> +                            remaining);
> +out:
> +        cond_resched();
> +}
> +
> +/*
> + * folio_zero_user - Zero a folio which will be mapped to userspace.
> + * @folio: The folio to zero.
> + * @addr_hint: The address accessed by the user or the base address.
> + *
> + * Uses architectural support for clear_pages() to zero page extents
> + * instead of clearing page-at-a-time.
> + *
> + * Clearing of small folios (< MAX_ORDER_NR_PAGES) is split in three parts:
> + * pages in the immediate locality of the faulting page, and its left, right
> + * regions; the local neighbourhood cleared last in order to keep cache
> + * lines of the target region hot.
> + *
> + * For larger folios we assume that there is no expectation of cache locality
> + * and just do a straight zero.
> + */
> +void folio_zero_user(struct folio *folio, unsigned long addr_hint)
> +{
> +        unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
> +        const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
> +        const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> +        const int width = 2; /* number of pages cleared last on either side */
> +        struct range r[3];
> +        int i;
> +
> +        if (folio_nr_pages(folio) > MAX_ORDER_NR_PAGES) {
> +                clear_pages_resched(page_address(folio_page(folio, 0)), folio_nr_pages(folio));
> +                return;
> +        }
> +
> +        /*
> +         * Faulting page and its immediate neighbourhood. Cleared at the end to
> +         * ensure it sticks around in the cache.
> +         */
> +        r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - width, pg.start, pg.end),
> +                            clamp_t(s64, fault_idx + width, pg.start, pg.end));
> +
> +        /* Region to the left of the fault */
> +        r[1] = DEFINE_RANGE(pg.start,
> +                            clamp_t(s64, r[2].start-1, pg.start-1, r[2].start));
> +
> +        /* Region to the right of the fault: always valid for the common fault_idx=0 case. */
> +        r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end+1, r[2].end, pg.end+1),
> +                            pg.end);
> +
> +        for (i = 0; i <= 2; i++) {
> +                int npages = range_len(&r[i]);
> +
> +                if (npages > 0)
> +                        clear_pages_resched(page_address(folio_page(folio, r[i].start)), npages);
> +        }
> +}
> +#endif /* CONFIG_CLEAR_PAGE_EXTENT */
> +
>  static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
>                                     unsigned long addr_hint,
>                                     struct vm_area_struct *vma,

So, folio_zero_user() is only compiled with THP | HUGETLB already. What we
should probably do is scrap the whole new kconfig option and do something
like this in here:

diff --git a/mm/memory.c b/mm/memory.c
index 3dd6c57e6511e..64b6bd3e7657a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7009,19 +7009,53 @@ static inline int process_huge_page(
         return 0;
 }
 
-static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
-                                unsigned int nr_pages)
+#ifdef CONFIG_ARCH_HAS_CLEAR_PAGES
+static void clear_user_highpages_resched(struct page *page,
+                unsigned int nr_pages, unsigned long addr)
+{
+        void *addr = page_address(page);
+        int i, remaining;
+
+        /*
+         * CONFIG_ARCH_HAS_CLEAR_PAGES is not expected to be set on systems
+         * with HIGHMEM, so we can safely use clear_pages().
+         */
+        BUILD_BUG_ON(IS_ENABLED(CONFIG_HIGHMEM));
+
+        if (preempt_model_preemptible()) {
+                clear_pages(addr, npages);
+                goto out;
+        }
+
+        for (i = 0; i < npages/ARCH_CLEAR_PAGE_EXTENT; i++) {
+                clear_pages(addr + i * ARCH_CLEAR_PAGE_EXTENT * PAGE_SIZE,
+                            ARCH_CLEAR_PAGE_EXTENT);
+                cond_resched();
+        }
+
+        remaining = npages % ARCH_CLEAR_PAGE_EXTENT;
+
+        if (remaining)
+                clear_pages(addr + i * ARCH_CLEAR_PAGE_EXTENT * PAGE_SHIFT,
+                            remaining);
+out:
+        cond_resched();
+}
+#else
+static void clear_user_highpages_resched(struct page *page,
+                unsigned int nr_pages, unsigned long addr)
 {
-        unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
         int i;
 
         might_sleep();
         for (i = 0; i < nr_pages; i++) {
                 cond_resched();
-                clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
+                clear_user_highpage(nth_page(page, i), addr + i * PAGE_SIZE);
         }
 }
+#endif /* CONFIG_ARCH_HAS_CLEAR_PAGES */
+
 static int clear_subpage(unsigned long addr, int idx, void *arg)
 {
         struct folio *folio = arg;
@@ -7030,19 +7064,76 @@ static int clear_subpage(unsigned long addr, int idx, void *arg)
         return 0;
 }
 
-/**
+static void folio_zero_user_huge(struct folio *folio, unsigned long addr_hint)
+{
+        const unsigned int nr_pages = folio_nr_pages(folio);
+        const unsigned long addr = ALIGN_DOWN(addr_hint, nr_pages * PAGE_SIZE);
+        const long fault_idx = (addr_hint - addr) / PAGE_SIZE;
+        const struct range pg = DEFINE_RANGE(0, nr_pages - 1);
+        const int width = 2; /* number of pages cleared last on either side */
+        struct range r[3];
+        int i;
+
+        /*
+         * Without an optimized clear_user_highpages_resched(), we'll perform
+         * some extra magic dance around the faulting address.
+         */
+        if (!IS_ENABLED(CONFIG_ARCH_HAS_CLEAR_PAGES)) {
+                process_huge_page(addr_hint, nr_pages, clear_subpage, folio);
+                return;
+        }
+
+        /*
+         * Faulting page and its immediate neighbourhood. Cleared at the end to
+         * ensure it sticks around in the cache.
+         */
+        r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - width, pg.start, pg.end),
+                            clamp_t(s64, fault_idx + width, pg.start, pg.end));
+
+        /* Region to the left of the fault */
+        r[1] = DEFINE_RANGE(pg.start,
+                            clamp_t(s64, r[2].start-1, pg.start-1, r[2].start));
+
+        /* Region to the right of the fault: always valid for the common fault_idx=0 case. */
+        r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end+1, r[2].end, pg.end+1),
+                            pg.end);
+
+        for (i = 0; i <= 2; i++) {
+                unsigned int cur_nr_pages = range_len(&r[i]);
+                struct page *cur_page = folio_page(folio, r[i].start);
+                unsigned long cur_addr = addr + folio_page_idx(folio, cur_page) * PAGE_SIZE;
+
+                if (cur_nr_pages > 0)
+                        clear_user_highpages_resched(cur_page, cur_nr_pages, cur_addr);
+        }
+}
+
+/*
  * folio_zero_user - Zero a folio which will be mapped to userspace.
  * @folio: The folio to zero.
- * @addr_hint: The address will be accessed or the base address if uncelar.
+ * @addr_hint: The address accessed by the user or the base address.
+ *
+ * Uses architectural support for clear_pages() to zero page extents
+ * instead of clearing page-at-a-time.
+ *
+ * Clearing of small folios (< MAX_ORDER_NR_PAGES) is split in three parts:
+ * pages in the immediate locality of the faulting page, and its left, right
+ * regions; the local neighbourhood cleared last in order to keep cache
+ * lines of the target region hot.
+ *
+ * For larger folios we assume that there is no expectation of cache locality
+ * and just do a straight zero.
  */
 void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 {
-        unsigned int nr_pages = folio_nr_pages(folio);
+        const unsigned int nr_pages = folio_nr_pages(folio);
+        const unsigned long addr = ALIGN_DOWN(addr_hint, nr_pages * PAGE_SIZE);
 
-        if (unlikely(nr_pages > MAX_ORDER_NR_PAGES))
-                clear_gigantic_page(folio, addr_hint, nr_pages);
-        else
-                process_huge_page(addr_hint, nr_pages, clear_subpage, folio);
+        if (unlikely(nr_pages >= MAX_ORDER_NR_PAGES)) {
+                clear_user_highpages_resched(folio_page(folio, 0), nr_pages, addr);
+                return;
+        }
+        folio_zero_user_huge(folio, addr_hint);
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
-- 
2.50.1


Note that this is probably completely broken in various ways, just to give
you an idea.

*maybe* we could change clear_user_highpages_resched() to something like
folio_zero_user_range(), consuming a folio + idx instead of a page. That
might or might not be better here.
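Purely as an illustration (completely untested; reusing the
ARCH_CLEAR_PAGE_EXTENT chunking from above, parameter names made up),
such a folio + index variant could look roughly like:

/*
 * Rough sketch only: folio + index based variant of
 * clear_user_highpages_resched(). Assumes CONFIG_ARCH_HAS_CLEAR_PAGES
 * (and therefore !CONFIG_HIGHMEM), so page_address() gives a contiguous
 * kernel mapping for the whole range. The preempt_model_preemptible()
 * fast path from above is omitted for brevity.
 */
static void folio_zero_user_range(struct folio *folio, unsigned long idx,
                unsigned int nr_pages)
{
        void *kaddr = page_address(folio_page(folio, idx));
        unsigned int i, remaining;

        /* Clear in ARCH_CLEAR_PAGE_EXTENT chunks to bound preemption latency. */
        for (i = 0; i < nr_pages / ARCH_CLEAR_PAGE_EXTENT; i++) {
                clear_pages(kaddr + i * ARCH_CLEAR_PAGE_EXTENT * PAGE_SIZE,
                            ARCH_CLEAR_PAGE_EXTENT);
                cond_resched();
        }

        remaining = nr_pages % ARCH_CLEAR_PAGE_EXTENT;
        if (remaining)
                clear_pages(kaddr + i * ARCH_CLEAR_PAGE_EXTENT * PAGE_SIZE,
                            remaining);
        cond_resched();
}

The !CONFIG_ARCH_HAS_CLEAR_PAGES fallback would still need the user
address for clear_user_highpage(), so callers would have to pass that
along as well.

-- 
Cheers,

David / dhildenb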