From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <61b16640-49e0-4f84-8587-ae9b90a78887@huawei.com>
Date: Tue, 24 Sep 2024 19:20:03 +0800
Subject: Re: [PATCH v5 1/2] mm: Abstract THP allocation
To: Dev Jain
References: <20240924101654.1777697-1-dev.jain@arm.com>
 <20240924101654.1777697-2-dev.jain@arm.com>
From: Kefeng Wang
In-Reply-To: <20240924101654.1777697-2-dev.jain@arm.com>

On 2024/9/24 18:16, Dev Jain wrote:
> In preparation for the second patch, abstract away the THP allocation
> logic present in the create_huge_pmd() path, which corresponds to the
> faulting case when no page is present.
>
> There should be no functional change as a result of applying this patch,
> except that, as David notes at [1], a PMD-aligned address should
> be passed to update_mmu_cache_pmd().
>
> [1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
>
> Acked-by: David Hildenbrand
> Signed-off-by: Dev Jain
> ---
>  mm/huge_memory.c | 98 ++++++++++++++++++++++++++++--------------------
>  1 file changed, 57 insertions(+), 41 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4e34b7f89daf..bdbf67c18f6c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1148,47 +1148,81 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>
> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> -			struct page *page, gfp_t gfp)
> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
> +		unsigned long addr)
>  {
> -	struct vm_area_struct *vma = vmf->vma;
> -	struct folio *folio = page_folio(page);
> -	pgtable_t pgtable;
> -	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> -	vm_fault_t ret = 0;
> +	unsigned long haddr = addr & HPAGE_PMD_MASK;
> +	gfp_t gfp = vma_thp_gfp_mask(vma);
> +	const int order = HPAGE_PMD_ORDER;
> +	struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);

There is a warning without NUMA,

../mm/huge_memory.c: In function ‘vma_alloc_anon_folio_pmd’:
../mm/huge_memory.c:1154:16: warning: unused variable ‘haddr’ [-Wunused-variable]
 1154 |         unsigned long haddr = addr & HPAGE_PMD_MASK;
      |                       ^~~~~

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c584e77efe10..147a6e069c71 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1151,11 +1151,11 @@ EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
 					       unsigned long addr)
 {
-	unsigned long haddr = addr & HPAGE_PMD_MASK;
 	gfp_t gfp = vma_thp_gfp_mask(vma);
 	const int order = HPAGE_PMD_ORDER;
-	struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
+	struct folio *folio;

+	folio = vma_alloc_folio(gfp, order, vma, addr & HPAGE_PMD_MASK, true);
 	if (unlikely(!folio)) {
 		count_vm_event(THP_FAULT_FALLBACK);
 		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);

>
> -	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
> +	if (unlikely(!folio)) {
> +		count_vm_event(THP_FAULT_FALLBACK);
> +		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
> +		goto out;

Maybe return NULL to omit the out?

Reviewed-by: Kefeng Wang

> +	}
>
> +	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>  	if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>  		folio_put(folio);
>  		count_vm_event(THP_FAULT_FALLBACK);
>  		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
> -		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
> -		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> -		return VM_FAULT_FALLBACK;
> +		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
> +		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> +		return NULL;
>  	}
>  	folio_throttle_swaprate(folio, gfp);
>
> -	pgtable = pte_alloc_one(vma->vm_mm);
> -	if (unlikely(!pgtable)) {
> -		ret = VM_FAULT_OOM;
> -		goto release;
> -	}
> -
> -	folio_zero_user(folio, vmf->address);
> +	folio_zero_user(folio, addr);
>  	/*
>  	 * The memory barrier inside __folio_mark_uptodate makes sure that
>  	 * folio_zero_user writes become visible before the set_pmd_at()
>  	 * write.
>  	 */
>  	__folio_mark_uptodate(folio);
> +out:
> +	return folio;
> +}
> +
> +static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
> +		struct vm_area_struct *vma, unsigned long haddr)
> +{
> +	pmd_t entry;
> +
> +	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
> +	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> +	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
> +	folio_add_lru_vma(folio, vma);
> +	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
> +	update_mmu_cache_pmd(vma, haddr, pmd);
> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> +	count_vm_event(THP_FAULT_ALLOC);
> +	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
> +	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
> +}
> +
> +static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
> +{
> +	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> +	struct vm_area_struct *vma = vmf->vma;
> +	struct folio *folio;
> +	pgtable_t pgtable;
> +	vm_fault_t ret = 0;
> +
> +	folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
> +	if (unlikely(!folio))
> +		return VM_FAULT_FALLBACK;
> +
> +	pgtable = pte_alloc_one(vma->vm_mm);
> +	if (unlikely(!pgtable)) {
> +		ret = VM_FAULT_OOM;
> +		goto release;
> +	}
>
>  	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>  	if (unlikely(!pmd_none(*vmf->pmd))) {
>  		goto unlock_release;
>  	} else {
> -		pmd_t entry;
> -
>  		ret = check_stable_address_space(vma->vm_mm);
>  		if (ret)
>  			goto unlock_release;
> @@ -1202,21 +1236,11 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>  			VM_BUG_ON(ret & VM_FAULT_FALLBACK);
>  			return ret;
>  		}
> -
> -		entry = mk_huge_pmd(page, vma->vm_page_prot);
> -		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> -		folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
> -		folio_add_lru_vma(folio, vma);
>  		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
> -		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
> -		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
> -		add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> +		map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
>  		mm_inc_nr_ptes(vma->vm_mm);
>  		deferred_split_folio(folio, false);
>  		spin_unlock(vmf->ptl);
> -		count_vm_event(THP_FAULT_ALLOC);
> -		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
> -		count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>  	}
>
>  	return 0;
> @@ -1283,8 +1307,6 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
>  vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> -	gfp_t gfp;
> -	struct folio *folio;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>  	vm_fault_t ret;
>
> @@ -1335,14 +1357,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>  		}
>  		return ret;
>  	}
> -	gfp = vma_thp_gfp_mask(vma);
> -	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
> -	if (unlikely(!folio)) {
> -		count_vm_event(THP_FAULT_FALLBACK);
> -		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
> -		return VM_FAULT_FALLBACK;
> -	}
> -	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
> +
> +	return __do_huge_pmd_anonymous_page(vmf);
>  }
>
>  static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
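
For reference, a minimal sketch of how vma_alloc_anon_folio_pmd() could look
with both suggestions above applied (the unused 'haddr' local dropped and the
'out' label replaced by an early return of NULL). This is only an illustration
assembled from the quoted patch, not a tested change:

static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
					       unsigned long addr)
{
	gfp_t gfp = vma_thp_gfp_mask(vma);
	const int order = HPAGE_PMD_ORDER;
	struct folio *folio;

	/* Mask the address at the call site so no 'haddr' local is needed. */
	folio = vma_alloc_folio(gfp, order, vma, addr & HPAGE_PMD_MASK, true);
	if (unlikely(!folio)) {
		count_vm_event(THP_FAULT_FALLBACK);
		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
		return NULL;	/* early return instead of 'goto out' */
	}

	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
	if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
		folio_put(folio);
		count_vm_event(THP_FAULT_FALLBACK);
		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
		count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
		return NULL;
	}
	folio_throttle_swaprate(folio, gfp);
	folio_zero_user(folio, addr);
	/*
	 * The memory barrier inside __folio_mark_uptodate makes sure that
	 * folio_zero_user writes become visible before the set_pmd_at()
	 * write.
	 */
	__folio_mark_uptodate(folio);
	return folio;
}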