From: Axel Rasmussen
Date: Tue, 1 Dec 2020 09:36:10 -0800
Subject: Re: [PATCH] mm: mmap_lock: fix use-after-free race and css ref leak in tracepoints
References: <20201130233504.3725241-1-axelrasmussen@google.com>
To: Shakeel Butt
Cc: Andrew Morton, Chinwen Chang, Daniel Jordan, David Rientjes,
 Davidlohr Bueso, Ingo Molnar, Jann Horn, Laurent Dufour,
 Michel Lespinasse, Stephen Rothwell, Steven Rostedt, Vlastimil Babka,
 Yafang Shao, "David S. Miller", dsahern@kernel.org,
 Greg Kroah-Hartman, Jakub Kicinski, liuhangbin@gmail.com, Tejun Heo,
 LKML, Linux MM

On Mon, Nov 30, 2020 at 5:34 PM Shakeel Butt wrote:
>
> On Mon, Nov 30, 2020 at 3:43 PM Axel Rasmussen wrote:
> >
> > syzbot reported[1] a use-after-free introduced in 0f818c4bc1f3. The bug
> > is that an ongoing trace event might race with the tracepoint being
> > disabled (and therefore the _unreg() callback being called). Consider
> > this ordering:
> >
> > T1: trace event fires, get_mm_memcg_path() is called
> > T1: get_memcg_path_buf() returns a buffer pointer
> > T2: trace_mmap_lock_unreg() is called, buffers are freed
> > T1: cgroup_path() is called with the now-freed buffer
>
> Any reason to use cgroup_path instead of cgroup_ino? There are other
> examples of tracepoints using cgroup_ino, with no need to allocate
> buffers. Also, cgroup namespaces might complicate the path usage.

Hmm, so in general I would love to use a numeric identifier instead of
a string. I did some reading, and it looks like cgroup_ino() mainly has
to do with writeback, rather than being just a general identifier?
https://www.kernel.org/doc/Documentation/cgroup-v2.txt

There is cgroup_id(), which I think is almost what I'd want, but there
are a couple of problems with it:

- I don't know of a way for userspace to translate IDs -> paths, to
  make them human readable?
- Also, I think the ID implementation we use for this is "dense",
  meaning that if a cgroup is removed, its ID is likely to be quickly
  reused.

> >
> > The solution in this commit is to modify trace_mmap_lock_unreg() to
> > first stop new buffers from being handed out, and then to wait (spin)
> > until any existing buffer references are dropped (i.e., those trace
> > events complete).
> >
> > I have a simple reproducer program which spins up two pools of threads,
> > doing the following in a tight loop:
> >
> >   Pool 1:
> >     mmap(NULL, 4096, PROT_READ | PROT_WRITE,
> >          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
> >     munmap()
> >
> >   Pool 2:
> >     echo 1 > /sys/kernel/debug/tracing/events/mmap_lock/enable
> >     echo 0 > /sys/kernel/debug/tracing/events/mmap_lock/enable
> >
> > This triggers the use-after-free very quickly. With this patch, I let
> > it run for an hour without any BUGs.
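(For reference, each reproducer thread boils down to roughly the
following. This is a from-memory sketch, not the exact program: error
handling is omitted and the thread counts are arbitrary.)

  #include <fcntl.h>
  #include <pthread.h>
  #include <sys/mman.h>
  #include <unistd.h>

  /* Pool 1: fire the mmap_lock trace events via mmap()/munmap(). */
  static void *mmap_worker(void *arg)
  {
          for (;;) {
                  void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                  if (p != MAP_FAILED)
                          munmap(p, 4096);
          }
          return NULL;
  }

  /* Pool 2: toggle the events, racing _reg()/_unreg() against pool 1. */
  static void *toggle_worker(void *arg)
  {
          int fd = open("/sys/kernel/debug/tracing/events/mmap_lock/enable",
                        O_WRONLY);
          for (;;) {
                  write(fd, "1", 1);
                  write(fd, "0", 1);
          }
          return NULL;
  }

  int main(void)
  {
          pthread_t t;
          int i;

          for (i = 0; i < 4; i++)
                  pthread_create(&t, NULL, mmap_worker, NULL);
          for (i = 0; i < 4; i++)
                  pthread_create(&t, NULL, toggle_worker, NULL);
          pause(); /* let the pools spin; watch dmesg for KASAN reports */
          return 0;
  }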
> >
> > While fixing this, I also noticed and fixed a css ref leak. Previously
> > we called get_mem_cgroup_from_mm(), but we never called css_put() to
> > release that reference. get_mm_memcg_path() now does this properly.
> >
> > [1]: https://syzkaller.appspot.com/bug?extid=19e6dd9943972fa1c58a
> >
> > Fixes: 0f818c4bc1f3 ("mm: mmap_lock: add tracepoints around lock acquisition")
>
> The original patch is in the mm tree, so the SHA1 is not stabilized.
> Usually Andrew squashes such fixes into the original patch.

Ah, I added this because the commit also shows up in linux-next, under
the next-20201130 tag. I'll remove it in v2; squashing is fine. :)

> > Signed-off-by: Axel Rasmussen
> > ---
> >  mm/mmap_lock.c | 100 +++++++++++++++++++++++++++++++++++++++++--------
> >  1 file changed, 85 insertions(+), 15 deletions(-)
> >
> > diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c
> > index 12af8f1b8a14..be38dc58278b 100644
> > --- a/mm/mmap_lock.c
> > +++ b/mm/mmap_lock.c
> > @@ -3,6 +3,7 @@
> >  #include <trace/events/mmap_lock.h>
> >
> >  #include <linux/mm.h>
> > +#include <linux/atomic.h>
> >  #include <linux/cgroup.h>
> >  #include <linux/memcontrol.h>
> >  #include <linux/mmap_lock.h>
> > @@ -18,13 +19,28 @@ EXPORT_TRACEPOINT_SYMBOL(mmap_lock_released);
> >
> >  #ifdef CONFIG_MEMCG
> >
> >  /*
> > - * Our various events all share the same buffer (because we don't want or need
> > - * to allocate a set of buffers *per event type*), so we need to protect against
> > - * concurrent _reg() and _unreg() calls, and count how many _reg() calls have
> > - * been made.
> > + * This is unfortunately complicated... _reg() and _unreg() may be called
> > + * in parallel, separately for each of our three event types. To save memory,
> > + * all of the event types share the same buffers. Furthermore, trace events
> > + * might happen in parallel with _unreg(); we need to ensure we don't free the
> > + * buffers before all inflights have finished. Because these events happen
> > + * "frequently", we also want to prevent new inflights from starting once the
> > + * _unreg() process begins. And, for performance reasons, we want to avoid any
> > + * locking in the trace event path.
> > + *
> > + * So:
> > + *
> > + * - Use a spinlock to serialize _reg() and _unreg() calls.
> > + * - Keep track of nested _reg() calls with a lock-protected counter.
> > + * - Define a flag indicating whether or not unregistration has begun (and
> > + *   therefore that there should be no new buffer uses going forward).
> > + * - Keep track of inflight buffer users with a reference count.
> >   */
> >  static DEFINE_SPINLOCK(reg_lock);
> > -static int reg_refcount;
> > +static int reg_types_rc; /* Protected by reg_lock. */
> > +static bool unreg_started; /* Doesn't need synchronization. */
> > +/* atomic_t instead of refcount_t, as we want ordered inc without locks. */
> > +static atomic_t inflight_rc = ATOMIC_INIT(0);
> >
> >  /*
> >   * Size of the buffer for memcg path names. Ignoring stack trace support,
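(Side note, in case the scheme reads as denser than it is: inflight_rc
is just the classic "increment-unless-zero" teardown pattern. A
userspace C11-atomics analogue, purely illustrative and not part of the
patch, would look something like this:)

  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_int inflight_rc = 1;       /* starts at the registration +1 */
  static atomic_bool unreg_started = false;

  /* Stand-in for atomic_inc_not_zero(): take a reference unless zero. */
  static bool tracer_get(void)
  {
          int old;

          if (atomic_load(&unreg_started))
                  return false;           /* racy check; just an optimization */
          old = atomic_load(&inflight_rc);
          while (old != 0) {
                  if (atomic_compare_exchange_weak(&inflight_rc, &old, old + 1))
                          return true;    /* got a reference; buffer is safe */
          }
          return false;                   /* teardown already drained the count */
  }

  static void tracer_put(void)
  {
          atomic_fetch_sub(&inflight_rc, 1);
  }

  static void unreg(void)
  {
          atomic_store(&unreg_started, true); /* stop new inflights */
          atomic_fetch_sub(&inflight_rc, 1);  /* drop the registration +1 */
          while (atomic_load(&inflight_rc))
                  ;                           /* spin until inflights drain */
          /* Now it is safe to free the buffers. */
  }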
> > @@ -46,9 +62,14 @@ int trace_mmap_lock_reg(void)
> >  	unsigned long flags;
> >  	int cpu;
> >
> > +	/*
> > +	 * Serialize _reg() and _unreg(). Without this, e.g. _unreg() might
> > +	 * start cleaning up while _reg() is only partially completed.
> > +	 */
> >  	spin_lock_irqsave(&reg_lock, flags);
> >
> > -	if (reg_refcount++)
> > +	/* If the refcount is going 0->1, proceed with allocating buffers. */
> > +	if (reg_types_rc++)
> >  		goto out;
> >
> >  	for_each_possible_cpu(cpu) {
> > @@ -62,6 +83,11 @@ int trace_mmap_lock_reg(void)
> >  		per_cpu(memcg_path_buf_idx, cpu) = 0;
> >  	}
> >
> > +	/* Reset unreg_started flag, allowing new trace events. */
> > +	WRITE_ONCE(unreg_started, false);
> > +	/* Add the registration +1 to the inflight refcount. */
> > +	atomic_inc(&inflight_rc);
> > +
> >  out:
> >  	spin_unlock_irqrestore(&reg_lock, flags);
> >  	return 0;
> > @@ -74,7 +100,8 @@ int trace_mmap_lock_reg(void)
> >  		break;
> >  	}
> >
> > -	--reg_refcount;
> > +	/* Since we failed, undo the earlier increment. */
> > +	--reg_types_rc;
> >
> >  	spin_unlock_irqrestore(&reg_lock, flags);
> >  	return -ENOMEM;
> > @@ -87,9 +114,23 @@ void trace_mmap_lock_unreg(void)
> >
> >  	spin_lock_irqsave(&reg_lock, flags);
> >
> > -	if (--reg_refcount)
> > +	/* If the refcount is going 1->0, proceed with freeing buffers. */
> > +	if (--reg_types_rc)
> >  		goto out;
> >
> > +	/* This was the last registration; start preventing new events... */
> > +	WRITE_ONCE(unreg_started, true);
> > +	/* Remove the registration +1 from the inflight refcount. */
> > +	atomic_dec(&inflight_rc);
> > +	/*
> > +	 * Wait for inflight refcount to be zero (all inflights stopped). Since
> > +	 * we have a spinlock we can't sleep, so just spin. Because trace events
> > +	 * are "fast", and because we stop new inflights from starting at this
> > +	 * point with unreg_started, this should be a short spin.
> > +	 */
> > +	while (atomic_read(&inflight_rc))
> > +		barrier();
> > +
> >  	for_each_possible_cpu(cpu) {
> >  		kfree(per_cpu(memcg_path_buf, cpu));
> >  	}
> > @@ -102,6 +143,20 @@ static inline char *get_memcg_path_buf(void)
> >  {
> >  	int idx;
> >
> > +	/*
> > +	 * If unregistration is happening, stop. Yes, this check is racy;
> > +	 * that's fine. It just means _unreg() might spin waiting for an extra
> > +	 * event or two. Use-after-free is actually prevented by the refcount.
> > +	 */
> > +	if (READ_ONCE(unreg_started))
> > +		return NULL;
> > +	/*
> > +	 * Take a reference, unless the registration +1 has been released
> > +	 * and there aren't already existing inflights (refcount is zero).
> > +	 */
> > +	if (!atomic_inc_not_zero(&inflight_rc))
> > +		return NULL;
> > +
> >  	idx = this_cpu_add_return(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE) -
> >  	      MEMCG_PATH_BUF_SIZE;
> >  	return &this_cpu_read(memcg_path_buf)[idx];
> > @@ -110,27 +165,42 @@ static inline char *get_memcg_path_buf(void)
> >  static inline void put_memcg_path_buf(void)
> >  {
> >  	this_cpu_sub(memcg_path_buf_idx, MEMCG_PATH_BUF_SIZE);
> > +	/* We're done with this buffer; drop the reference. */
> > +	atomic_dec(&inflight_rc);
> >  }
> >
> >  /*
> >   * Write the given mm_struct's memcg path to a percpu buffer, and return a
> > - * pointer to it. If the path cannot be determined, NULL is returned.
> > + * pointer to it. If the path cannot be determined, or no buffer was available
> > + * (because the trace event is being unregistered), NULL is returned.
> >   *
> >   * Note: buffers are allocated per-cpu to avoid locking, so preemption must be
> >   * disabled by the caller before calling us, and re-enabled only after the
> >   * caller is done with the pointer.
> > + *
> > + * The caller must call put_memcg_path_buf() once the buffer is no longer
> > + * needed. This must be done while preemption is still disabled.
> >   */
> >  static const char *get_mm_memcg_path(struct mm_struct *mm)
> >  {
> > +	char *buf = NULL;
> >  	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
> >
> > -	if (memcg != NULL && likely(memcg->css.cgroup != NULL)) {
> > -		char *buf = get_memcg_path_buf();
> > +	if (memcg == NULL)
> > +		goto out;
> > +	if (unlikely(memcg->css.cgroup == NULL))
> > +		goto out_put;
> >
> > -		cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
> > -		return buf;
> > -	}
> > -	return NULL;
> > +	buf = get_memcg_path_buf();
> > +	if (buf == NULL)
> > +		goto out_put;
> > +
> > +	cgroup_path(memcg->css.cgroup, buf, MEMCG_PATH_BUF_SIZE);
> > +
> > +out_put:
> > +	css_put(&memcg->css);
> > +out:
> > +	return buf;
> >  }
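(To make the calling contract above concrete: a trace event call site
is expected to follow this get/put pattern. This is a sketch, not code
from the patch; mmap_lock_trace_start() is a made-up wrapper name, and
I'm assuming the (mm, memcg_path, write) event signature from the
original tracepoints patch:)

  static void mmap_lock_trace_start(struct mm_struct *mm, bool write)
  {
          const char *memcg_path;

          preempt_disable();                  /* pin the percpu buffer */
          memcg_path = get_mm_memcg_path(mm); /* may return NULL */
          trace_mmap_lock_start_locking(mm,
                          memcg_path != NULL ? memcg_path : "", write);
          if (memcg_path != NULL)
                  put_memcg_path_buf();       /* drop inflight reference */
          preempt_enable();
  }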
> >
> >  #define TRACE_MMAP_LOCK_EVENT(type, mm, ...) \
> > --
> > 2.29.2.454.gaff20da3a2-goog