From: Marco Elver
Date: Wed, 24 Nov 2021 09:49:57 +0100
Subject: Re: [PATCH] mm/rmap: fix potential batched TLB flush race
To: "Huang, Ying"
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	syzbot+aa5bebed695edaccf0df@syzkaller.appspotmail.com, Nadav Amit,
	Mel Gorman, Andrea Arcangeli, Andy Lutomirski, Dave Hansen,
	Will Deacon, Yu Zhao, Linux ARM
In-Reply-To: <87v90i6j4h.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20211123074344.1877731-1-ying.huang@intel.com> <8735nm9vkw.fsf@yhuang6-desk2.ccr.corp.intel.com> <87v90i6j4h.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Wed, 24 Nov 2021 at 09:41, Huang, Ying wrote:
>
> Marco Elver writes:
>
> > On Wed, 24 Nov 2021 at 02:44, Huang, Ying wrote:
> >>
> >> Marco Elver writes:
> >>
> >> > On Tue, 23 Nov 2021 at 08:44, Huang Ying wrote:
> > [...]
> >> >> --- a/mm/rmap.c
> >> >> +++ b/mm/rmap.c
> >> >> @@ -633,7 +633,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
> >> >>  	 * before the PTE is cleared.
> >> >>  	 */
> >> >>  	barrier();
> >> >> -	mm->tlb_flush_batched = true;
> >> >> +	atomic_inc(&mm->tlb_flush_batched);
> >> >
> >> > The use of barrier() and atomic needs some clarification.
> >>
> >> There are some comments above barrier() to describe why it is needed.
> >> As for the atomic, since the type of mm->tlb_flush_batched is atomic_t,
> >> do we need extra clarification?
> >
> > Apologies, maybe I wasn't clear enough: the existing comment tells me
> > the clearing of the PTE should never happen after tlb_flush_batched is
> > set, but only the compiler is considered. However, I become suspicious
> > when I see barrier() paired with an atomic. barrier() is purely a
> > compiler barrier and does not prevent the CPU from reordering things.
> > atomic_inc() does not return anything and is therefore unordered per
> > Documentation/atomic_t.txt.
> >
> >> > Is there a requirement that the CPU also doesn't reorder anything
> >> > after this atomic_inc() (which is unordered)? I.e. should this be
> >> > atomic_inc_return_release() and remove barrier()?
> >>
> >> We don't have an atomic_xx_acquire() to pair with this. So I guess we
> >> don't need atomic_inc_return_release()?
> >
> > You have 2 things stronger than unordered: atomic_read(), whose
> > result is used in a conditional branch, thus creating a control
> > dependency ordering later dependent writes; and atomic_cmpxchg(),
> > which is fully ordered.
> >
> > But before all that, I'd still want to understand what ordering
> > requirements you have. The current comments say only the compiler
> > needs taming, but does that mean we're fine with the CPU wildly
> > reordering things?
>
> Per my understanding, atomic_cmpxchg() is fully ordered, so we have
> strong ordering in flush_tlb_batched_pending(). And we use xchg() in
> ptep_get_and_clear() (at least for x86), which is called before
> set_tlb_ubc_flush_pending(). So we have strong ordering there too.
>
> So at least for x86, the barrier() in set_tlb_ubc_flush_pending()
> appears unnecessary. Is it needed by other architectures?

Hmm, this is not arch/ code -- it needs to be portable. atomic_t
accessors provide arch-independent guarantees. But do the other
operations here provide any guarantees? If they don't, then I think we
have to assume they are unordered.