From: "Huang, Ying"
To: Zi Yan
Cc: David Hildenbrand, linux-mm@kvack.org, Zi Yan, Andrew Morton, Baolin Wang, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 3/3] mm/migrate: move common code to numa_migrate_check (was numa_migrate_prep)
In-Reply-To: <20240712024455.163543-4-zi.yan@sent.com> (Zi Yan's message of "Thu, 11 Jul 2024 22:44:55 -0400")
References: <20240712024455.163543-1-zi.yan@sent.com> <20240712024455.163543-4-zi.yan@sent.com>
Date: Thu, 18 Jul 2024 16:36:39 +0800
Message-ID: <87zfqfw0yw.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13)

Zi Yan writes:

> From: Zi Yan
>
> do_numa_page() and do_huge_pmd_numa_page() share a lot of common code. To
> reduce redundancy, move common code to numa_migrate_prep() and rename
> the function to numa_migrate_check() to reflect its functionality.
>
> There is some code difference between do_numa_page() and
> do_huge_pmd_numa_page() before the code move:
>
> 1. do_huge_pmd_numa_page() did not check shared folios to set TNF_SHARED.
> 2. do_huge_pmd_numa_page() did not check and skip zone device folios.
>
> Signed-off-by: Zi Yan
> ---
>  mm/huge_memory.c | 28 ++++++-----------
>  mm/internal.h    |  5 +--
>  mm/memory.c      | 81 +++++++++++++++++++++++-------------------------
>  3 files changed, 52 insertions(+), 62 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 8c11d6da4b36..66d67d13e0dc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1670,10 +1670,10 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  	pmd_t pmd;
>  	struct folio *folio;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> -	int nid = NUMA_NO_NODE;
> -	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
> +	int target_nid = NUMA_NO_NODE;
> +	int last_cpupid = (-1 & LAST_CPUPID_MASK);
>  	bool writable = false;
> -	int flags = 0;
> +	int flags = 0, nr_pages;
>
>  	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>  	if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
> @@ -1693,21 +1693,13 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  		writable = true;
>
>  	folio = vm_normal_folio_pmd(vma, haddr, pmd);
> -	if (!folio)
> +	if (!folio || folio_is_zone_device(folio))

This change appears unrelated. Can we put it in a separate patch?

IIUC, this isn't necessary even in do_numa_page(), because
folio_is_zone_device() has been checked already in change_pte_range().
But it doesn't hurt either.

>  		goto out_map;
>
> -	/* See similar comment in do_numa_page for explanation */
> -	if (!writable)
> -		flags |= TNF_NO_GROUP;
> +	nr_pages = folio_nr_pages(folio);
>
> -	nid = folio_nid(folio);
> -	/*
> -	 * For memory tiering mode, cpupid of slow memory page is used
> -	 * to record page access time. So use default value.
> -	 */
> -	if (folio_has_cpupid(folio))
> -		last_cpupid = folio_last_cpupid(folio);
> -	target_nid = numa_migrate_prep(folio, vmf, haddr, nid, &flags);
> +	target_nid = numa_migrate_check(folio, vmf, haddr, writable,
> +					&flags, &last_cpupid);
>  	if (target_nid == NUMA_NO_NODE)
>  		goto out_map;
>  	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
> @@ -1720,8 +1712,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>
>  	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
>  		flags |= TNF_MIGRATED;
> -		nid = target_nid;
>  	} else {
> +		target_nid = NUMA_NO_NODE;
>  		flags |= TNF_MIGRATE_FAIL;
>  		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
>  		if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
> @@ -1732,8 +1724,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
>  	}
>
>  out:
> -	if (nid != NUMA_NO_NODE)
> -		task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);
> +	if (target_nid != NUMA_NO_NODE)
> +		task_numa_fault(last_cpupid, target_nid, nr_pages, flags);

This appears to be a behavior change. IIUC, there are 2 possible issues.

1) If migrate_misplaced_folio() fails, folio_nid() should be used as the
nid here. "target_nid" is a confusing variable name for that, because
what is really needed is folio_nid().

2) If !pmd_same(), task_numa_fault() should be skipped. The original
code is buggy here.

Similar issues exist for do_numa_page(). If my understanding is correct,
we should implement a separate patch to fix 2) above, and that may need
to be backported.
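Just to illustrate 2), something like the below (a completely untested
sketch on top of the code before this patch, for the PMD path only) is
what I have in mind,

	} else {
		flags |= TNF_MIGRATE_FAIL;
		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
		if (unlikely(!pmd_same(oldpmd, *vmf->pmd))) {
			spin_unlock(vmf->ptl);
			/* PMD changed under us, don't account a NUMA fault */
			return 0;
		}
		goto out_map;
	}

That is, when the PMD has been changed, return directly instead of
"goto out", so that task_numa_fault() isn't called for a fault we didn't
really handle. And for 1), "nid" can keep holding folio_nid(folio), so
that a failed migration is still accounted against the folio's current
node.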
>
>  	return 0;
>
> diff --git a/mm/internal.h b/mm/internal.h
> index b4d86436565b..7782b7bb3383 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -1217,8 +1217,9 @@ void vunmap_range_noflush(unsigned long start, unsigned long end);
>
>  void __vunmap_range_noflush(unsigned long start, unsigned long end);
>
> -int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
> -		      unsigned long addr, int page_nid, int *flags);
> +int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
> +		       unsigned long addr, bool writable,
> +		       int *flags, int *last_cpupid);
>
>  void free_zone_device_folio(struct folio *folio);
>  int migrate_device_coherent_page(struct page *page);
> diff --git a/mm/memory.c b/mm/memory.c
> index 96c2f5b3d796..a252c0f13755 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5209,16 +5209,42 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
>  	return ret;
>  }
>
> -int numa_migrate_prep(struct folio *folio, struct vm_fault *vmf,
> -		      unsigned long addr, int page_nid, int *flags)
> +int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
> +		       unsigned long addr, bool writable,
> +		       int *flags, int *last_cpupid)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>
> +	/*
> +	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
> +	 * much anyway since they can be in shared cache state. This misses
> +	 * the case where a mapping is writable but the process never writes
> +	 * to it but pte_write gets cleared during protection updates and
> +	 * pte_dirty has unpredictable behaviour between PTE scan updates,
> +	 * background writeback, dirty balancing and application behaviour.
> +	 */
> +	if (!writable)
> +		*flags |= TNF_NO_GROUP;
> +
> +	/*
> +	 * Flag if the folio is shared between multiple address spaces. This
> +	 * is later used when determining whether to group tasks together
> +	 */
> +	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
> +		*flags |= TNF_SHARED;
> +
> +	/*
> +	 * For memory tiering mode, cpupid of slow memory page is used
> +	 * to record page access time.
> +	 */
> +	if (folio_has_cpupid(folio))
> +		*last_cpupid = folio_last_cpupid(folio);
> +
>  	/* Record the current PID acceesing VMA */
>  	vma_set_access_pid_bit(vma);
>
>  	count_vm_numa_event(NUMA_HINT_FAULTS);
> -	if (page_nid == numa_node_id()) {
> +	if (folio_nid(folio) == numa_node_id()) {
>  		count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
>  		*flags |= TNF_FAULT_LOCAL;
>  	}
> @@ -5284,12 +5310,11 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct folio *folio = NULL;
> -	int nid = NUMA_NO_NODE;
> +	int target_nid = NUMA_NO_NODE;
>  	bool writable = false, ignore_writable = false;
>  	bool pte_write_upgrade = vma_wants_manual_pte_write_upgrade(vma);
> -	int last_cpupid;
> -	int target_nid;
> -	pte_t pte, old_pte;
> +	int last_cpupid = (-1 & LAST_CPUPID_MASK);
> +	pte_t pte, old_pte = vmf->orig_pte;
>  	int flags = 0, nr_pages;
>
>  	/*
> @@ -5297,10 +5322,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	 * table lock, that its contents have not changed during fault handling.
>  	 */
>  	spin_lock(vmf->ptl);
> -	/* Read the live PTE from the page tables: */
> -	old_pte = ptep_get(vmf->pte);
> -
> -	if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
> +	if (unlikely(!pte_same(old_pte, *vmf->pte))) {
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
>  		goto out;
>  	}
> @@ -5320,35 +5342,10 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	if (!folio || folio_is_zone_device(folio))
>  		goto out_map;
>
> -	/*
> -	 * Avoid grouping on RO pages in general. RO pages shouldn't hurt as
> -	 * much anyway since they can be in shared cache state. This misses
> -	 * the case where a mapping is writable but the process never writes
> -	 * to it but pte_write gets cleared during protection updates and
> -	 * pte_dirty has unpredictable behaviour between PTE scan updates,
> -	 * background writeback, dirty balancing and application behaviour.
> -	 */
> -	if (!writable)
> -		flags |= TNF_NO_GROUP;
> -
> -	/*
> -	 * Flag if the folio is shared between multiple address spaces. This
> -	 * is later used when determining whether to group tasks together
> -	 */
> -	if (folio_likely_mapped_shared(folio) && (vma->vm_flags & VM_SHARED))
> -		flags |= TNF_SHARED;
> -
> -	nid = folio_nid(folio);
>  	nr_pages = folio_nr_pages(folio);
> -	/*
> -	 * For memory tiering mode, cpupid of slow memory page is used
> -	 * to record page access time. So use default value.
> -	 */
> -	if (!folio_has_cpupid(folio))
> -		last_cpupid = (-1 & LAST_CPUPID_MASK);
> -	else
> -		last_cpupid = folio_last_cpupid(folio);
> -	target_nid = numa_migrate_prep(folio, vmf, vmf->address, nid, &flags);
> +
> +	target_nid = numa_migrate_check(folio, vmf, vmf->address, writable,
> +					&flags, &last_cpupid);
>  	if (target_nid == NUMA_NO_NODE)
>  		goto out_map;
>  	if (migrate_misplaced_folio_prepare(folio, vma, target_nid)) {
> @@ -5362,9 +5359,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>
>  	/* Migrate to the requested node */
>  	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
> -		nid = target_nid;
>  		flags |= TNF_MIGRATED;
>  	} else {
> +		target_nid = NUMA_NO_NODE;
>  		flags |= TNF_MIGRATE_FAIL;
>  		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
>  					       vmf->address, &vmf->ptl);
> @@ -5378,8 +5375,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  	}
>
>  out:
> -	if (nid != NUMA_NO_NODE)
> -		task_numa_fault(last_cpupid, nid, nr_pages, flags);
> +	if (target_nid != NUMA_NO_NODE)
> +		task_numa_fault(last_cpupid, target_nid, nr_pages, flags);
>  	return 0;
>  out_map:
>  	/*

--
Best Regards,
Huang, Ying