From: Miaohe Lin
To: Muchun Song
Cc: Andrew Morton, Mike Kravetz, Muchun Song, Linux MM
Subject: Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before set_pte_at()
Date: Wed, 17 Aug 2022 16:41:40 +0800
References: <20220816130553.31406-1-linmiaohe@huawei.com> <20220816130553.31406-5-linmiaohe@huawei.com>
On 2022/8/17 10:53, Muchun Song wrote:
>
>
>> On Aug 16, 2022, at 21:05, Miaohe Lin wrote:
>>
>> The memory barrier smp_wmb() is needed to make sure that preceding stores
>> to the page contents become visible before the below set_pte_at() write.
>
> I’m not sure if you are right. I think it is set_pte_at()’s responsibility.

Maybe not. There are many call sites that do similar things:

  hugetlb_mcopy_atomic_pte
  __do_huge_pmd_anonymous_page
  collapse_huge_page
  do_anonymous_page
  migrate_vma_insert_page
  mcopy_atomic_pte

Take do_anonymous_page as an example:

        /*
         * The memory barrier inside __SetPageUptodate makes sure that
         * preceding stores to the page contents become visible before
         * the set_pte_at() write.
         */
        __SetPageUptodate(page);

So I think a memory barrier is needed before the set_pte_at() write. Or am I
missing something? (See the sketch at the end of this mail for the intended
ordering.)

Thanks,
Miaohe Lin

> Take arm64 (since it is a Relaxed Memory Order model) as an example (the
> following code snippet is set_pte()), and I see a barrier guarantee. So I am
> curious what issues you are facing, and what the basis for this change is.
>
> static inline void set_pte(pte_t *ptep, pte_t pte)
> {
>         *ptep = pte;
>
>         /*
>          * Only if the new pte is valid and kernel, otherwise TLB maintenance
>          * or update_mmu_cache() have the necessary barriers.
>          */
>         if (pte_valid_not_user(pte)) {
>                 dsb(ishst);
>                 isb();
>         }
> }
>
> Thanks.
>
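
For reference, here is a minimal sketch of the ordering pattern being argued
for above. The helper name, parameters, and includes are illustrative (they
follow the shape of a vmemmap remap path, not the actual hunk from this
patch); the only point is where the barrier sits between the page-content
stores and the PTE install:

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Illustrative only: a hypothetical helper, not kernel code from this patch. */
static void example_remap_pte(pte_t *ptep, unsigned long addr,
                              struct page *new_page, void *src)
{
        /* Fill the new page before it becomes reachable through the PTE. */
        copy_page(page_to_virt(new_page), src);

        /*
         * Order the stores to the page contents before the PTE store,
         * mirroring the __SetPageUptodate() comment in do_anonymous_page().
         */
        smp_wmb();

        set_pte_at(&init_mm, addr, ptep, mk_pte(new_page, PAGE_KERNEL));
}

With the smp_wmb() in place, any walker that observes the new PTE also
observes the copied page contents; without it, a weakly ordered CPU could
make the PTE visible first.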