From: Marco Elver <elver@google.com>
Date: Tue, 23 Nov 2021 10:33:41 +0100
Subject: Re: [PATCH] mm/rmap: fix potential batched TLB flush race
To: Huang Ying <ying.huang@intel.com>
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 syzbot+aa5bebed695edaccf0df@syzkaller.appspotmail.com, Nadav Amit,
 Mel Gorman, Andrea Arcangeli, Andy Lutomirski, Dave Hansen,
 Will Deacon, Yu Zhao
On Tue, 23 Nov 2021 at 08:44, Huang Ying wrote:
>
> In theory, the following race is possible for batched TLB flushing.
>
> CPU0                               CPU1
> ----                               ----
> shrink_page_list()
>   unmap
>     zap_pte_range()
>       flush_tlb_batched_pending()
>         flush_tlb_mm()
>                                    try_to_unmap()
>                                      set_tlb_ubc_flush_pending()
>                                        mm->tlb_flush_batched = true
>         mm->tlb_flush_batched = false
>
> After the TLB is flushed on CPU1 via flush_tlb_mm() and before
> mm->tlb_flush_batched is set to false, some PTE is unmapped on CPU0
> and the TLB flushing is pended. Then the pended TLB flushing will be
> lost. Although both set_tlb_ubc_flush_pending() and
> flush_tlb_batched_pending() are called with the PTL locked, different
> PTL instances may be used.
>
> Because the race window is really small, and the lost TLB flushing
> will cause a problem only if a TLB entry is inserted before the
> unmapping in the race window, the race is only theoretical. But the
> fix is simple and cheap too.

Thanks for fixing this!

> Syzbot has reported this too as follows,
>
> ==================================================================
> BUG: KCSAN: data-race in flush_tlb_batched_pending / try_to_unmap_one
[...]
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index c3a6e6209600..789778067db9 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -632,7 +632,7 @@ struct mm_struct {
>  		atomic_t tlb_flush_pending;
>  #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
>  		/* See flush_tlb_batched_pending() */
> -		bool tlb_flush_batched;
> +		atomic_t tlb_flush_batched;
>  #endif
>  		struct uprobes_state uprobes_state;
>  #ifdef CONFIG_PREEMPT_RT
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 163ac4e6bcee..60902c3cfb4a 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -633,7 +633,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>  	 * before the PTE is cleared.
>  	 */
>  	barrier();
> -	mm->tlb_flush_batched = true;
> +	atomic_inc(&mm->tlb_flush_batched);

The use of barrier() and atomic needs some clarification. Is there a
requirement that the CPU also doesn't reorder anything after this
atomic_inc() (which is unordered)? I.e. should this be
atomic_inc_return_release() and remove barrier()?

>  	/*
>  	 * If the PTE was dirty then it's best to assume it's writable. The
> @@ -680,15 +680,16 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
>  	 */
>  void flush_tlb_batched_pending(struct mm_struct *mm)
>  {
> -	if (data_race(mm->tlb_flush_batched)) {
> -		flush_tlb_mm(mm);
> +	int batched = atomic_read(&mm->tlb_flush_batched);
>
> +	if (batched) {
> +		flush_tlb_mm(mm);
>  		/*
> -		 * Do not allow the compiler to re-order the clearing of
> -		 * tlb_flush_batched before the tlb is flushed.
> +		 * If a new TLB flush is pended during flushing, leave
> +		 * mm->tlb_flush_batched as is, to avoid losing the flush.
>  		 */
> -		barrier();
> -		mm->tlb_flush_batched = false;
> +		atomic_cmpxchg(&mm->tlb_flush_batched, batched, 0);
>  	}
>  }
>  #else
> --
> 2.30.2
>