From: "Huang, Ying" <ying.huang@intel.com>
To: Marco Elver
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	syzbot+aa5bebed695edaccf0df@syzkaller.appspotmail.com, Nadav Amit,
	Mel Gorman, Andrea Arcangeli, Andy Lutomirski, Dave Hansen,
	Will Deacon, Yu Zhao, Linux ARM
Subject: Re: [PATCH] mm/rmap: fix potential batched TLB flush race
In-Reply-To: (Marco Elver's message of "Wed, 24 Nov 2021 09:49:57 +0100")
References: <20211123074344.1877731-1-ying.huang@intel.com>
	<8735nm9vkw.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<87v90i6j4h.fsf@yhuang6-desk2.ccr.corp.intel.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
Date: Thu, 25 Nov 2021 14:36:49 +0800
Message-ID: <87pmqoaghq.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

Marco Elver writes:

> On Wed, 24 Nov 2021 at 09:41, Huang, Ying wrote:
>>
>> Marco Elver writes:
>>
>> > On Wed, 24 Nov 2021 at 02:44, Huang, Ying wrote:
>> >>
>> >> Marco Elver writes:
>> >>
>> >> > On Tue, 23 Nov 2021 at 08:44, Huang Ying wrote:
>> > [...]
>> >> >> --- a/mm/rmap.c
>> >> >> +++ b/mm/rmap.c
>> >> >> @@ -633,7 +633,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>> >> >>  	 * before the PTE is cleared.
>> >> >>  	 */
>> >> >>  	barrier();
>> >> >> -	mm->tlb_flush_batched = true;
>> >> >> +	atomic_inc(&mm->tlb_flush_batched);
>> >> >
>> >> > The use of barrier() and atomic needs some clarification.
>> >>
>> >> There are some comments above barrier() to describe why it is needed.
>> >> For atomic, because the type of mm->tlb_flush_batched is atomic_t, do
>> >> we need extra clarification?
>> >
>> > Apologies, maybe I wasn't clear enough: the existing comment tells me
>> > the clearing of the PTE should never happen after tlb_flush_batched is
>> > set, but only the compiler is considered. However, I became suspicious
>> > when I saw barrier() paired with an atomic. barrier() is purely a
>> > compiler barrier and does not prevent the CPU from reordering things.
>> > atomic_inc() does not return anything and is therefore unordered per
>> > Documentation/atomic_t.txt.
>> >
>> >> > Is there a requirement that the CPU also doesn't reorder anything
>> >> > after this atomic_inc() (which is unordered)? I.e. should this be
>> >> > atomic_inc_return_release() and remove barrier()?
>> >>
>> >> We don't have an atomic_xx_acquire() to pair with this. So I guess we
>> >> don't need atomic_inc_return_release()?
>> >
>> > You have 2 things stronger than unordered: atomic_read(), whose result
>> > is used in a conditional branch, thus creating a control dependency
>> > that orders later dependent writes; and atomic_cmpxchg(), which is
>> > fully ordered.
>> >
>> > But before all that, I'd still want to understand what ordering
>> > requirements you have. The current comments say only the compiler
>> > needs taming, but does that mean we're fine with the CPU wildly
>> > reordering things?
>>
>> Per my understanding, atomic_cmpxchg() is fully ordered, so we have
>> strong ordering in flush_tlb_batched_pending(). And we use xchg() in
>> ptep_get_and_clear() (at least for x86), which is called before
>> set_tlb_ubc_flush_pending(). So we have strong ordering there too.
>>
>> So at least for x86, barrier() in set_tlb_ubc_flush_pending() appears
>> unnecessary. Is it needed by other architectures?
>
> Hmm, this is not arch/ code -- this code needs to be portable.
> atomic_t accessors provide arch-independent guarantees. But do the
> other operations here provide any guarantees? If they don't, then I
> think we have to assume unordered.

Yes, the analysis above is for x86 only. For other architectures, we
need to check the ordering of ptep_get_and_clear(). But anyway, that
should be another patch. This patch doesn't make the original ordering
weaker.

Best Regards,
Huang, Ying
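
For readers who want to experiment with the ordering question discussed
above, here is a minimal user-space sketch using C11 <stdatomic.h>. It
is not the kernel code: "pte_cleared" and "tlb_flush_batched" are
stand-ins for the PTE cleared by ptep_get_and_clear() and for
mm->tlb_flush_batched, and the chosen memory orders only model the
x86 reasoning in the thread (the PTE clear and the cmpxchg are fully
ordered RMWs, while the increment itself is unordered).

/*
 * Minimal sketch of the batched-TLB-flush ordering pattern discussed
 * above, in user-space C11 atomics.  Names and memory orders are
 * illustrative stand-ins, not the kernel implementation.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pte_cleared;       /* stand-in for the PTE            */
static atomic_int tlb_flush_batched; /* stand-in for mm->tlb_flush_batched */

/* Unmap side: PTE clear followed by batching, as in the path that calls
 * ptep_get_and_clear() and then set_tlb_ubc_flush_pending(). */
static void unmap_side(void)
{
	/* On x86, ptep_get_and_clear() uses xchg(), a fully ordered RMW. */
	atomic_exchange(&pte_cleared, 0);

	/* Like atomic_inc(): the increment itself is unordered; it relies
	 * on the fully ordered RMW above for ordering against the clear. */
	atomic_fetch_add_explicit(&tlb_flush_batched, 1,
				  memory_order_relaxed);
}

/* Flush side, as in flush_tlb_batched_pending(). */
static void flush_side(void)
{
	int batch = atomic_load_explicit(&tlb_flush_batched,
					 memory_order_relaxed);

	if (batch) {
		/* flush_tlb_mm(mm) would go here. */

		/* Like atomic_cmpxchg(): a fully ordered RMW, so clearing
		 * the batch count cannot be reordered before the flush. */
		atomic_compare_exchange_strong(&tlb_flush_batched,
					       &batch, 0);
	}
}

int main(void)
{
	unmap_side();
	flush_side();
	printf("tlb_flush_batched = %d\n", atomic_load(&tlb_flush_batched));
	return 0;
}

This only mirrors the x86-specific argument made above; whether other
architectures need more than barrier() around the increment is exactly
the open question in the thread.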