From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 24 May 2023 21:49:17 +0300
From: Mike Rapoport <rppt@kernel.org>
To: Ryan Roberts
Cc: Andrew Morton, SeongJae Park, Christoph Hellwig,
	"Matthew Wilcox (Oracle)", "Kirill A. Shutemov", Lorenzo Stoakes,
	Uladzislau Rezki, Zi Yan, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, damon@lists.linux.dev
Subject: Re: [PATCH v2 2/5] mm: damon must atomically clear young on ptes and pmds
Message-ID: <20230524184917.GP4967@kernel.org>
References: <20230518110727.2106156-1-ryan.roberts@arm.com>
	<20230518110727.2106156-3-ryan.roberts@arm.com>
In-Reply-To: <20230518110727.2106156-3-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Thu, May 18, 2023 at 12:07:24PM +0100, Ryan Roberts wrote:
> It is racy to non-atomically read a pte, then clear the young bit, then
> write it back as this could discard dirty information. Further, it is
> bad practice to directly set a pte entry within a table. Instead
> clearing young must go through the arch-provided helper,
> ptep_test_and_clear_young() to ensure it is modified atomically and to
> give the arch code visibility and allow it to check (and potentially
> modify) the operation.
> 
> Fixes: 46c3a0accdc4 ("mm/damon/vaddr: separate commonly usable functions")
> Signed-off-by: Ryan Roberts
> Reviewed-by: Zi Yan

Reviewed-by: Mike Rapoport (IBM)

> ---
>  mm/damon/ops-common.c | 16 ++++++----------
>  mm/damon/ops-common.h |  4 ++--
>  mm/damon/paddr.c      |  4 ++--
>  mm/damon/vaddr.c      |  4 ++--
>  4 files changed, 12 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
> index cc63cf953636..acc264b97903 100644
> --- a/mm/damon/ops-common.c
> +++ b/mm/damon/ops-common.c
> @@ -37,7 +37,7 @@ struct folio *damon_get_folio(unsigned long pfn)
>  	return folio;
>  }
> 
> -void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
> +void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr)
>  {
>  	bool referenced = false;
>  	struct folio *folio = damon_get_folio(pte_pfn(*pte));
> @@ -45,13 +45,11 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
>  	if (!folio)
>  		return;
> 
> -	if (pte_young(*pte)) {
> +	if (ptep_test_and_clear_young(vma, addr, pte))
>  		referenced = true;
> -		*pte = pte_mkold(*pte);
> -	}
> 
>  #ifdef CONFIG_MMU_NOTIFIER
> -	if (mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE))
> +	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE))
>  		referenced = true;
>  #endif /* CONFIG_MMU_NOTIFIER */
> 
> @@ -62,7 +60,7 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
>  	folio_put(folio);
>  }
> 
> -void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
> +void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	bool referenced = false;
> @@ -71,13 +69,11 @@ void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
>  	if (!folio)
>  		return;
> 
> -	if (pmd_young(*pmd)) {
> +	if (pmdp_test_and_clear_young(vma, addr, pmd))
>  		referenced = true;
> -		*pmd = pmd_mkold(*pmd);
> -	}
> 
>  #ifdef CONFIG_MMU_NOTIFIER
> -	if (mmu_notifier_clear_young(mm, addr, addr + HPAGE_PMD_SIZE))
> +	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + HPAGE_PMD_SIZE))
>  		referenced = true;
>  #endif /* CONFIG_MMU_NOTIFIER */
> 
> diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
> index 14f4bc69f29b..18d837d11bce 100644
> --- a/mm/damon/ops-common.h
> +++ b/mm/damon/ops-common.h
> @@ -9,8 +9,8 @@
> 
>  struct folio *damon_get_folio(unsigned long pfn);
> 
> -void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
> -void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
> +void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
> +void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
> 
>  int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
>  		struct damos *s);
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 467b99166b43..5b3a3463d078 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -24,9 +24,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
>  	while (page_vma_mapped_walk(&pvmw)) {
>  		addr = pvmw.address;
>  		if (pvmw.pte)
> -			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
> +			damon_ptep_mkold(pvmw.pte, vma, addr);
>  		else
> -			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
> +			damon_pmdp_mkold(pvmw.pmd, vma, addr);
>  	}
>  	return true;
>  }
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 1fec16d7263e..37994fb6120c 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -311,7 +311,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
>  	}
> 
>  	if (pmd_trans_huge(*pmd)) {
> -		damon_pmdp_mkold(pmd, walk->mm, addr);
> +		damon_pmdp_mkold(pmd, walk->vma, addr);
>  		spin_unlock(ptl);
>  		return 0;
>  	}
> @@ -323,7 +323,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
>  	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
>  	if (!pte_present(*pte))
>  		goto out;
> -	damon_ptep_mkold(pte, walk->mm, addr);
> +	damon_ptep_mkold(pte, walk->vma, addr);
> out:
>  	pte_unmap_unlock(pte, ptl);
>  	return 0;
> -- 
> 2.25.1
> 

-- 
Sincerely yours,
Mike.