Date: Fri, 7 Feb 2025 09:39:30 +0530
Subject: Re: [PATCH v1 07/16] arm64: hugetlb: Use ___set_ptes() and ___ptep_get_and_clear()
From: Anshuman Khandual
To: Ryan Roberts, Catalin Marinas, Will Deacon, Muchun Song, Pasha Tatashin,
 Andrew Morton, Uladzislau Rezki, Christoph Hellwig, Mark Rutland,
 Ard Biesheuvel, Dev Jain, Alexandre Ghiti, Steve Capper, Kevin Brodsky
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20250205151003.88959-1-ryan.roberts@arm.com> <20250205151003.88959-8-ryan.roberts@arm.com>
In-Reply-To: <20250205151003.88959-8-ryan.roberts@arm.com>
On 2/5/25 20:39, Ryan Roberts wrote:
> Refactor the huge_pte helpers to use the new generic ___set_ptes() and
> ___ptep_get_and_clear() APIs.
>
> This provides 2 benefits; First, when page_table_check=on, hugetlb is
> now properly/fully checked. Previously only the first page of a hugetlb

PAGE_TABLE_CHECK will now be fully supported in hugetlb irrespective of
the page table level. This is definitely an improvement.

> folio was checked.
> Second, instead of having to call __set_ptes(nr=1)
> for each pte in a loop, the whole contiguous batch can now be set in one
> go, which enables some efficiencies and cleans up the code.

Improvements made to the common __set_ptes() will automatically become
available for hugetlb pages as well. This also converges all batch updates
into a single helper, i.e. __set_ptes(), which can then be optimized
further in one place. Makes sense.

> One detail to note is that huge_ptep_clear_flush() was previously
> calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd
> block mapping). This has a couple of disadvantages; first
> ptep_clear_flush() calls ptep_get_and_clear() which transparently
> handles contpte. Given we only call for non-contiguous ptes, it would be
> safe, but a waste of effort. It's preferable to go stright to the layer

A small nit - typo s/stright/straight

> below. However, more problematic is that ptep_get_and_clear() is for
> PAGE_SIZE entries so it calls page_table_check_pte_clear() and would not
> clear the whole hugetlb folio. So let's stop special-casing the non-cont
> case and just rely on get_clear_contig_flush() to do the right thing for
> non-cont entries.

As before, this change is unrelated to the conversions done above for the
set and clear paths using the new helpers. Hence it should ideally be
separated out into a different patch.
>
> Signed-off-by: Ryan Roberts
> ---
>  arch/arm64/mm/hugetlbpage.c | 50 ++++++++----------------------------
>  1 file changed, 11 insertions(+), 39 deletions(-)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index e870d01d12ea..02afee31444e 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -166,12 +166,12 @@ static pte_t get_clear_contig(struct mm_struct *mm,
>  	pte_t pte, tmp_pte;
>  	bool present;
>
> -	pte = __ptep_get_and_clear(mm, addr, ptep);
> +	pte = ___ptep_get_and_clear(mm, ptep, pgsize);
>  	present = pte_present(pte);
>  	while (--ncontig) {
>  		ptep++;
>  		addr += pgsize;
> -		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
> +		tmp_pte = ___ptep_get_and_clear(mm, ptep, pgsize);
>  		if (present) {
>  			if (pte_dirty(tmp_pte))
>  				pte = pte_mkdirty(pte);
> @@ -215,7 +215,7 @@ static void clear_flush(struct mm_struct *mm,
>  	unsigned long i, saddr = addr;
>
>  	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
> -		__ptep_get_and_clear(mm, addr, ptep);
> +		___ptep_get_and_clear(mm, ptep, pgsize);
>
>  	__flush_hugetlb_tlb_range(&vma, saddr, addr, pgsize, true);
>  }

___ptep_get_and_clear() will now have the opportunity to call the
appropriate page_table_check_pxx_clear() depending on the page size
passed, unlike in the current scenario.

> @@ -226,32 +226,20 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>  	size_t pgsize;
>  	int i;
>  	int ncontig;
> -	unsigned long pfn, dpfn;
> -	pgprot_t hugeprot;
>
>  	ncontig = num_contig_ptes(sz, &pgsize);
>
>  	if (!pte_present(pte)) {
>  		for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
> -			__set_ptes(mm, addr, ptep, pte, 1);
> +			___set_ptes(mm, ptep, pte, 1, pgsize);

IIUC the __set_ptes() wrapper is still around in the header. So what's
the benefit of converting this into ___set_ptes()? Does __set_ptes()
get dropped eventually?
>  		return;
>  	}
>
> -	if (!pte_cont(pte)) {
> -		__set_ptes(mm, addr, ptep, pte, 1);
> -		return;
> -	}
> -
> -	pfn = pte_pfn(pte);
> -	dpfn = pgsize >> PAGE_SHIFT;
> -	hugeprot = pte_pgprot(pte);
> -
>  	/* Only need to "break" if transitioning valid -> valid. */
> -	if (pte_valid(__ptep_get(ptep)))
> +	if (pte_cont(pte) && pte_valid(__ptep_get(ptep)))
>  		clear_flush(mm, addr, ptep, pgsize, ncontig);
>
> -	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
> -		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
> +	___set_ptes(mm, ptep, pte, ncontig, pgsize);
>  }

Similarly, ___set_ptes() will now have the opportunity to call the
appropriate page_table_check_pxx_set() depending on the page size
passed, unlike in the current scenario.

>
>  pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
> @@ -441,11 +429,9 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  				unsigned long addr, pte_t *ptep,
>  				pte_t pte, int dirty)
>  {
> -	int ncontig, i;
> +	int ncontig;
>  	size_t pgsize = 0;
> -	unsigned long pfn = pte_pfn(pte), dpfn;
>  	struct mm_struct *mm = vma->vm_mm;
> -	pgprot_t hugeprot;
>  	pte_t orig_pte;
>
>  	VM_WARN_ON(!pte_present(pte));
>
> @@ -454,7 +440,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  		return __ptep_set_access_flags(vma, addr, ptep, pte, dirty);
>
>  	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
> -	dpfn = pgsize >> PAGE_SHIFT;
>
>  	if (!__cont_access_flags_changed(ptep, pte, ncontig))
>  		return 0;
>
> @@ -469,19 +454,14 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>  	if (pte_young(orig_pte))
>  		pte = pte_mkyoung(pte);
>
> -	hugeprot = pte_pgprot(pte);
> -	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
> -		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
> -
> +	___set_ptes(mm, ptep, pte, ncontig, pgsize);
>  	return 1;
>  }

This makes huge_ptep_set_access_flags() cleaner and simpler as well.
>
>  void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  			     unsigned long addr, pte_t *ptep)
>  {
> -	unsigned long pfn, dpfn;
> -	pgprot_t hugeprot;
> -	int ncontig, i;
> +	int ncontig;
>  	size_t pgsize;
>  	pte_t pte;
>
> @@ -494,16 +474,11 @@ void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  	}
>
>  	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
> -	dpfn = pgsize >> PAGE_SHIFT;
>
>  	pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
>  	pte = pte_wrprotect(pte);
>
> -	hugeprot = pte_pgprot(pte);
> -	pfn = pte_pfn(pte);
> -
> -	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
> -		__set_ptes(mm, addr, ptep, pfn_pte(pfn, hugeprot), 1);
> +	___set_ptes(mm, ptep, pte, ncontig, pgsize);
>  }

This makes huge_ptep_set_wrprotect() cleaner and simpler as well.

>
>  pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
> @@ -517,10 +492,7 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
>  	pte = __ptep_get(ptep);
>  	VM_WARN_ON(!pte_present(pte));
>
> -	if (!pte_cont(pte))
> -		return ptep_clear_flush(vma, addr, ptep);
> -
> -	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
> +	ncontig = num_contig_ptes(page_size(pte_page(pte)), &pgsize);

A VMA argument is available in huge_ptep_clear_flush(). Why not just use
that to get the huge page size here, which might be safer than deriving
it from the PFN stored in the page table entry?

s/page_size(pte_page(pte))/huge_page_size(hstate_vma(vma))

>  	return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
>  }
>