Date: Thu, 28 Apr 2022 13:55:50 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: akpm@linux-foundation.org, mike.kravetz@oracle.com, almasrymina@google.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/3] mm: rmap: Move the cache flushing to the correct place for hugetlb PMD sharing
In-Reply-To: <4f7ae6dfdc838ab71e1655188b657c032ff1f28f.1651056365.git.baolin.wang@linux.alibaba.com>
On Wed, Apr 27, 2022 at 06:52:06PM +0800, Baolin Wang wrote:
> The cache level flush will always be first when changing an existing
> virtual->physical mapping to a new value, since this allows us to
> properly handle systems whose caches are strict and require a
> virtual->physical translation to exist for a virtual address. So we
> should move the cache flushing before huge_pmd_unshare().
>

Right.

> As Muchun pointed out[1], the architectures that currently support
> hugetlb PMD sharing have no cache flush issues in practice. But I
> think we should still follow the cache/TLB flushing rules when
> changing a valid virtual address mapping, in case of potential issues
> in the future.

Right. One point I need to clarify: I do not object to this change, but
I want you to state in the commit log that this is not an issue in
practice, so that others know they do not need to backport it.

>
> [1] https://lore.kernel.org/all/YmT%2F%2FhuUbFX+KHcy@FVFYT0MHHV2J.usts.net/
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/rmap.c | 40 ++++++++++++++++++++++------------------
>  1 file changed, 22 insertions(+), 18 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 61e63db..4f0d115 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1535,15 +1535,16 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			 * do this outside rmap routines.
>  			 */
>  			VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
> +			/*
> +			 * huge_pmd_unshare may unmap an entire PMD page.
> +			 * There is no way of knowing exactly which PMDs may
> +			 * be cached for this mm, so we must flush them all.
> +			 * start/end were already adjusted above to cover this
> +			 * range.
> +			 */
> +			flush_cache_range(vma, range.start, range.end);
> +

flush_cache_range() is always called even if we do not need to flush.
How about introducing a new helper like hugetlb_pmd_shared() which
returns true for a shared PMD? Then:

	if (hugetlb_pmd_shared(mm, vma, pvmw.pte)) {
		flush_cache_range(vma, range.start, range.end);
		huge_pmd_unshare(mm, vma, &address, pvmw.pte);
		flush_tlb_range(vma, range.start, range.end);
	}

The code could be a little simpler. Right?

Thanks.