From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Zi Yan, linux-mm@kvack.org
Cc: Andrew Morton, David Hildenbrand, "Huang, Ying", Kefeng Wang, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/migrate: move common code to numa_migrate_check (was numa_migrate_prep)
Date: Fri, 9 Aug 2024 11:43:24 +0800
Message-ID: <585f2459-674b-494f-9036-8b2474ffa73d@linux.alibaba.com>
In-Reply-To: <20240808233728.1477034-1-ziy@nvidia.com>
References: <20240808233728.1477034-1-ziy@nvidia.com>

On 2024/8/9 07:37, Zi Yan wrote:
> do_numa_page() and do_huge_pmd_numa_page() share a lot of common code. To
> reduce redundancy, move common code to numa_migrate_prep() and rename
> the function to numa_migrate_check() to reflect its functionality.
>
> Now do_huge_pmd_numa_page() also checks shared folios to set TNF_SHARED
> flag.
>
> Suggested-by: David Hildenbrand
> Signed-off-by: Zi Yan

LGTM.
Feel free to add:
Reviewed-by: Baolin Wang

> ---
>  mm/huge_memory.c | 29 +++++++++-------------
>  mm/internal.h    |  5 ++--
>  mm/memory.c      | 63 +++++++++++++++++++++++++-----------------
>  3 files changed, 47 insertions(+), 50 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4e4364a17e6d..96a52e71d167 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1669,22 +1669,23 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
>  vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> -	pmd_t oldpmd = vmf->orig_pmd;
> -	pmd_t pmd;
>  	struct folio *folio;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>  	int nid = NUMA_NO_NODE;
> -	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
> +	int target_nid, last_cpupid;
> +	pmd_t pmd, old_pmd;
>  	bool writable = false;
>  	int flags = 0;
>
>  	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> -	if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
> +	old_pmd = pmdp_get(vmf->pmd);
> +
> +	if (unlikely(!pmd_same(old_pmd, vmf->orig_pmd))) {
>  		spin_unlock(vmf->ptl);
>  		return 0;
>  	}
>
> -	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
> +	pmd = pmd_modify(old_pmd, vma->vm_page_prot);
>
>  	/*
>  	 * Detect now whether the PMD could be writable; this information
> @@ -1699,18 +1700,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  	if (!folio)
>  		goto out_map;
>
> -	/* See similar comment in do_numa_page for explanation */
> -	if (!writable)
> -		flags |= TNF_NO_GROUP;
> -
>  	nid = folio_nid(folio);
> -	/*
> -	 * For memory tiering mode, cpupid of slow memory page is used
> -	 * to record page access time. So use default value.
> -	 */
> -	if (!folio_use_access_time(folio))
> -		last_cpupid = folio_last_cpupid(folio);
> -	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
> +
> +	target_nid = numa_migrate_check(folio, vmf, haddr, &flags, writable,
> +					&last_cpupid);
>  	if (target_nid == NUMA_NO_NODE)
>  		goto out_map;
>  	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
> @@ -1728,7 +1721,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  	} else {
>  		flags |= TNF_MIGRATE_FAIL;
>  		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> -		if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
> +		if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd))) {
>  			spin_unlock(vmf->ptl);
>  			return 0;
>  		}
> @@ -1736,7 +1729,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>
>  out_map:
>  	/* Restore the PMD */
> -	pmd = pmd_modify(oldpmd, vma->vm_page_prot);
> +	pmd = pmd_modify(pmdp_get(vmf->pmd), vma->vm_page_prot);
>  	pmd = pmd_mkyoung(pmd);
>  	if (writable)
>  		pmd = pmd_mkwrite(pmd, vma);
> diff --git a/mm/internal.h b/mm/internal.h
> index 52f7fc4e8ac3..fb16e18c9761 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1191,8 +1191,9 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
>
>  void __vunmap_range_noflush(unsigned long start, unsigned long end);
>
> -int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
> -		      unsigned long addr, int page_nid, int *flags);
> +int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
> +		       unsigned long addr, int *flags, bool writable,
> +		       int *last_cpupid);
>
>  void free_zone_device_folio(struct folio *folio);
>  int migrate_device_coherent_page(struct page *page);
> diff --git a/mm/memory.c b/mm/memory.c
> index d9b1dff9dc57..3441f60d54ef 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5368,16 +5368,43 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
> -		      unsigned long addr, int page_nid, int *flags)
> +int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
> +		       unsigned long addr, int *flags,
> +		       bool writable, int *last_cpupid)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>
> +	/*
> +	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
> +	 * much anyway since they can be in shared cache state. This misses
> +	 * the case where a mapping is writable but the process never writes
> +	 * to it but pte_write gets cleared during protection updates and
> +	 * pte_dirty has unpredictable behaviour between PTE scan updates,
> +	 * background writeback, dirty balancing and application behaviour.
> +	 */
> +	if (!writable)
> +		*flags |= TNF_NO_GROUP;
> +
> +	/*
> +	 * Flag if the folio is shared between multiple address spaces. This
> +	 * is later used when determining whether to group tasks together
> +	 */
> +	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
> +		*flags |= TNF_SHARED;
> +	/*
> +	 * For memory tiering mode, cpupid of slow memory page is used
> +	 * to record page access time. So use default value.
> +	 */
> +	if (folio_use_access_time(folio))
> +		*last_cpupid = (-1 & LAST_CPUPID_MASK);
> +	else
> +		*last_cpupid = folio_last_cpupid(folio);
> +
>  	/* Record the current PID acceesing VMA */
>  	vma_set_access_pid_bit(vma);
>
>  	count_vm_numa_event(NUMA_HINT_FAULTS);
> -	if (page_nid == numa_node_id()) {
> +	if (folio_nid(folio) == numa_node_id()) {
>  		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
>  		*flags |= TNF_FAULT_LOCAL;
>  	}
> @@ -5479,35 +5506,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	if (!folio || folio_is_zone_device(folio))
>  		goto out_map;
>
> -	/*
> -	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
> -	 * much anyway since they can be in shared cache state. This misses
> -	 * the case where a mapping is writable but the process never writes
> -	 * to it but pte_write gets cleared during protection updates and
> -	 * pte_dirty has unpredictable behaviour between PTE scan updates,
> -	 * background writeback, dirty balancing and application behaviour.
> -	 */
> -	if (!writable)
> -		flags |= TNF_NO_GROUP;
> -
> -	/*
> -	 * Flag if the folio is shared between multiple address spaces. This
> -	 * is later used when determining whether to group tasks together
> -	 */
> -	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
> -		flags |= TNF_SHARED;
> -
>  	nid = folio_nid(folio);
>  	nr_pages = folio_nr_pages(folio);
> -	/*
> -	 * For memory tiering mode, cpupid of slow memory page is used
> -	 * to record page access time. So use default value.
> -	 */
> -	if (folio_use_access_time(folio))
> -		last_cpupid = (-1 & LAST_CPUPID_MASK);
> -	else
> -		last_cpupid = folio_last_cpupid(folio);
> -	target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags);
> +
> +	target_nid = numa_migrate_check(folio, vmf, vmf->address, &flags,
> +					writable, &last_cpupid);
>  	if (target_nid == NUMA_NO_NODE)
>  		goto out_map;
>  	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
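
To make the consolidation easy to see at a glance, here is a small
standalone C sketch of the checks that numa_migrate_check() now performs
for both the PTE and PMD fault paths. The TNF_* values, the helper name
and its parameters below are simplified stand-ins for illustration, not
the kernel definitions, and the memory-policy lookup that picks the
target node is omitted; the diff above is the authoritative code.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's TNF_* NUMA-hinting fault flags. */
#define TNF_NO_GROUP	0x1
#define TNF_SHARED	0x2
#define TNF_FAULT_LOCAL	0x4

/*
 * Model of the logic both fault handlers now delegate: derive the fault
 * flags and last_cpupid from the folio state in one place, instead of
 * duplicating these checks in do_numa_page() and do_huge_pmd_numa_page().
 */
static void numa_migrate_check_model(bool writable, bool mapped_shared,
				     bool use_access_time, int folio_cpupid,
				     int folio_nid, int local_nid,
				     int *flags, int *last_cpupid)
{
	if (!writable)		/* avoid grouping on read-only pages */
		*flags |= TNF_NO_GROUP;
	if (mapped_shared)	/* folio visible to several address spaces */
		*flags |= TNF_SHARED;
	/* Tiering mode reuses the cpupid field for access time: default it. */
	*last_cpupid = use_access_time ? -1 : folio_cpupid;
	if (folio_nid == local_nid)
		*flags |= TNF_FAULT_LOCAL;
}

int main(void)
{
	int flags = 0, last_cpupid = 0;

	/* A read-only, shared folio faulting on a remote node. */
	numa_migrate_check_model(false, true, false, 42, 1, 0,
				 &flags, &last_cpupid);
	printf("flags=%#x last_cpupid=%d\n", flags, last_cpupid);
	return 0;
}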