From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Huang, Ying" <ying.huang@intel.com>
To: Marco Elver
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	syzbot+aa5bebed695edaccf0df@syzkaller.appspotmail.com, Nadav Amit,
	Mel Gorman, Andrea Arcangeli, Andy Lutomirski, Dave Hansen,
	Will Deacon, Yu Zhao
Subject: Re: [PATCH] mm/rmap: fix potential batched TLB flush race
References: <20211123074344.1877731-1-ying.huang@intel.com>
	<8735nm9vkw.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Wed, 24 Nov 2021 16:41:18 +0800
In-Reply-To: (Marco Elver's message of "Wed, 24 Nov 2021 09:10:52 +0100")
Message-ID: <87v90i6j4h.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

Marco Elver writes:

> On Wed, 24 Nov 2021 at 02:44, Huang, Ying wrote:
>>
>> Marco Elver writes:
>>
>> > On Tue, 23 Nov 2021 at 08:44, Huang Ying wrote:
> [...]
>> >> --- a/mm/rmap.c
>> >> +++ b/mm/rmap.c
>> >> @@ -633,7 +633,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>> >>  	 * before the PTE is cleared.
>> >>  	 */
>> >>  	barrier();
>> >> -	mm->tlb_flush_batched = true;
>> >> +	atomic_inc(&mm->tlb_flush_batched);
>> >
>> > The use of barrier() and atomic needs some clarification.
>>
>> There are some comments above barrier() to describe why it is needed.
>> For atomic, because the type of mm->tlb_flush_batched is atomic_t, do
>> we need extra clarification?
>
> Apologies, maybe I wasn't clear enough: the existing comment tells me
> the clearing of the PTE should never happen after tlb_flush_batched is
> set, but only the compiler is considered. However, I become suspicious
> when I see barrier() paired with an atomic. barrier() is purely a
> compiler barrier and does not prevent the CPU from reordering things.
> atomic_inc() does not return anything and is therefore unordered per
> Documentation/atomic_t.txt.
>
>> > Is there a
>> > requirement that the CPU also doesn't reorder anything after this
>> > atomic_inc() (which is unordered)? I.e. should this be
>> > atomic_inc_return_release() and remove barrier()?
>>
>> We don't have an atomic_xx_acquire() to pair with this. So I guess we
>> don't need atomic_inc_return_release()?
>
> You have 2 things stronger than unordered: atomic_read(), whose result
> is used in a conditional branch, thus creating a control dependency
> ordering later dependent writes; and atomic_cmpxchg(), which is fully
> ordered.
>
> But before all that, I'd still want to understand what ordering
> requirements you have. The current comments say only the compiler
> needs taming, but does that mean we're fine with the CPU wildly
> reordering things?

Per my understanding, atomic_cmpxchg() is fully ordered, so we have
strong ordering in flush_tlb_batched_pending(). And we use xchg() in
ptep_get_and_clear() (at least on x86), which is called before
set_tlb_ubc_flush_pending(), so we have strong ordering there too.

So at least on x86, the barrier() in set_tlb_ubc_flush_pending()
appears unnecessary. Is it needed by other architectures?

Best Regards,
Huang, Ying