Date: Sun, 7 Apr 2024 17:12:42 +0800
From: Muchun Song <muchun.song@linux.dev>
To: "Vishal Moola (Oracle)"
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, willy@infradead.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 3/3] hugetlb: Convert hugetlb_wp() to use struct vm_fault
Message-ID: <7d001108-157d-4139-bfa9-5b4102166f17@linux.dev>
In-Reply-To: <20240401202651.31440-4-vishal.moola@gmail.com>
References: <20240401202651.31440-1-vishal.moola@gmail.com> <20240401202651.31440-4-vishal.moola@gmail.com>

On 2024/4/2 04:26, Vishal Moola (Oracle) wrote:
> hugetlb_wp() can use the struct vm_fault passed in from hugetlb_fault().
> This alleviates the stack by consolidating 5 variables into a single
> struct.
>
> Signed-off-by: Vishal Moola (Oracle)
> ---
>  mm/hugetlb.c | 61 ++++++++++++++++++++++++++--------------------------
>  1 file changed, 30 insertions(+), 31 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index aca2f11b4138..d4f26947173e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5918,18 +5918,16 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
>   * Keep the pte_same checks anyway to make transition from the mutex easier.
>   */
>  static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
> -		unsigned long address, pte_t *ptep, unsigned int flags,
> -		struct folio *pagecache_folio, spinlock_t *ptl,
> +		struct folio *pagecache_folio,

The same as the comment in the previous thread.

Muchun,
Thanks.

>  		struct vm_fault *vmf)
>  {
> -	const bool unshare = flags & FAULT_FLAG_UNSHARE;
> -	pte_t pte = huge_ptep_get(ptep);
> +	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
> +	pte_t pte = huge_ptep_get(vmf->pte);
>  	struct hstate *h = hstate_vma(vma);
>  	struct folio *old_folio;
>  	struct folio *new_folio;
>  	int outside_reserve = 0;
>  	vm_fault_t ret = 0;
> -	unsigned long haddr = address & huge_page_mask(h);
>  	struct mmu_notifier_range range;
>
>  	/*
> @@ -5952,7 +5950,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>
>  	/* Let's take out MAP_SHARED mappings first. */
>  	if (vma->vm_flags & VM_MAYSHARE) {
> -		set_huge_ptep_writable(vma, haddr, ptep);
> +		set_huge_ptep_writable(vma, vmf->address, vmf->pte);
>  		return 0;
>  	}
>
> @@ -5971,7 +5969,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  			SetPageAnonExclusive(&old_folio->page);
>  		}
>  		if (likely(!unshare))
> -			set_huge_ptep_writable(vma, haddr, ptep);
> +			set_huge_ptep_writable(vma, vmf->address, vmf->pte);
>
>  		delayacct_wpcopy_end();
>  		return 0;
> @@ -5998,8 +5996,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * Drop page table lock as buddy allocator may be called. It will
>  	 * be acquired again before returning to the caller, as expected.
>  	 */
> -	spin_unlock(ptl);
> -	new_folio = alloc_hugetlb_folio(vma, haddr, outside_reserve);
> +	spin_unlock(vmf->ptl);
> +	new_folio = alloc_hugetlb_folio(vma, vmf->address, outside_reserve);
>
>  	if (IS_ERR(new_folio)) {
>  		/*
> @@ -6024,19 +6022,21 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  			 *
>  			 * Reacquire both after unmap operation.
>  			 */
> -			idx = vma_hugecache_offset(h, vma, haddr);
> +			idx = vma_hugecache_offset(h, vma, vmf->address);
>  			hash = hugetlb_fault_mutex_hash(mapping, idx);
>  			hugetlb_vma_unlock_read(vma);
>  			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>
> -			unmap_ref_private(mm, vma, &old_folio->page, haddr);
> +			unmap_ref_private(mm, vma, &old_folio->page,
> +					vmf->address);
>
>  			mutex_lock(&hugetlb_fault_mutex_table[hash]);
>  			hugetlb_vma_lock_read(vma);
> -			spin_lock(ptl);
> -			ptep = hugetlb_walk(vma, haddr, huge_page_size(h));
> -			if (likely(ptep &&
> -				   pte_same(huge_ptep_get(ptep), pte)))
> +			spin_lock(vmf->ptl);
> +			vmf->pte = hugetlb_walk(vma, vmf->address,
> +					huge_page_size(h));
> +			if (likely(vmf->pte &&
> +				   pte_same(huge_ptep_get(vmf->pte), pte)))
>  				goto retry_avoidcopy;
>  			/*
>  			 * race occurs while re-acquiring page table
> @@ -6058,37 +6058,38 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  	if (unlikely(ret))
>  		goto out_release_all;
>
> -	if (copy_user_large_folio(new_folio, old_folio, address, vma)) {
> +	if (copy_user_large_folio(new_folio, old_folio, vmf->real_address, vma)) {
>  		ret = VM_FAULT_HWPOISON_LARGE;
>  		goto out_release_all;
>  	}
>  	__folio_mark_uptodate(new_folio);
>
> -	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
> -				haddr + huge_page_size(h));
> +	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, vmf->address,
> +				vmf->address + huge_page_size(h));
>  	mmu_notifier_invalidate_range_start(&range);
>
>  	/*
>  	 * Retake the page table lock to check for racing updates
>  	 * before the page tables are altered
>  	 */
> -	spin_lock(ptl);
> -	ptep = hugetlb_walk(vma, haddr, huge_page_size(h));
> -	if (likely(ptep && pte_same(huge_ptep_get(ptep), pte))) {
> +	spin_lock(vmf->ptl);
> +	vmf->pte = hugetlb_walk(vma, vmf->address, huge_page_size(h));
> +	if (likely(vmf->pte && pte_same(huge_ptep_get(vmf->pte), pte))) {
>  		pte_t newpte = make_huge_pte(vma, &new_folio->page, !unshare);
>
>  		/* Break COW or unshare */
> -		huge_ptep_clear_flush(vma, haddr, ptep);
> +		huge_ptep_clear_flush(vma, vmf->address, vmf->pte);
>  		hugetlb_remove_rmap(old_folio);
> -		hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
> +		hugetlb_add_new_anon_rmap(new_folio, vma, vmf->address);
>  		if (huge_pte_uffd_wp(pte))
>  			newpte = huge_pte_mkuffd_wp(newpte);
> -		set_huge_pte_at(mm, haddr, ptep, newpte, huge_page_size(h));
> +		set_huge_pte_at(mm, vmf->address, vmf->pte, newpte,
> +				huge_page_size(h));
>  		folio_set_hugetlb_migratable(new_folio);
>  		/* Make the old page be freed below */
>  		new_folio = old_folio;
>  	}
> -	spin_unlock(ptl);
> +	spin_unlock(vmf->ptl);
>  	mmu_notifier_invalidate_range_end(&range);
> out_release_all:
>  	/*
> @@ -6096,12 +6097,12 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * unshare)
>  	 */
>  	if (new_folio != old_folio)
> -		restore_reserve_on_error(h, vma, haddr, new_folio);
> +		restore_reserve_on_error(h, vma, vmf->address, new_folio);
>  	folio_put(new_folio);
> out_release_old:
>  	folio_put(old_folio);
>
> -	spin_lock(ptl); /* Caller expects lock to be held */
> +	spin_lock(vmf->ptl); /* Caller expects lock to be held */
>
>  	delayacct_wpcopy_end();
>  	return ret;
> @@ -6365,8 +6366,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
>  		hugetlb_count_add(pages_per_huge_page(h), mm);
>  	if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
>  		/* Optimization, do the COW without a second fault */
> -		ret = hugetlb_wp(mm, vma, vmf->real_address, vmf->pte,
> -				vmf->flags, folio, vmf->ptl, vmf);
> +		ret = hugetlb_wp(mm, vma, folio, vmf);
>  	}
>
>  	spin_unlock(vmf->ptl);
> @@ -6579,8 +6579,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>
>  	if (flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
>  		if (!huge_pte_write(vmf.orig_pte)) {
> -			ret = hugetlb_wp(mm, vma, address, vmf.pte, flags,
> -					pagecache_folio, vmf.ptl, &vmf);
> +			ret = hugetlb_wp(mm, vma, pagecache_folio, &vmf);
>  			goto out_put_page;
>  		} else if (likely(flags & FAULT_FLAG_WRITE)) {
>  			vmf.orig_pte = huge_pte_mkdirty(vmf.orig_pte);