From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 9 Sep 2024 19:09:15 -0700
Subject: Re: [PATCH 1/2] mm: introduce mmap_lock_speculation_{start|end}
To: Jann Horn
Cc: Andrii Nakryiko, linux-trace-kernel@vger.kernel.org, peterz@infradead.org,
 oleg@redhat.com, rostedt@goodmis.org, mhiramat@kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org, jolsa@kernel.org, paulmck@kernel.org,
 willy@infradead.org, akpm@linux-foundation.org, linux-mm@kvack.org,
 mjguzik@gmail.com, brauner@kernel.org
References: <20240906051205.530219-1-andrii@kernel.org> <20240906051205.530219-2-andrii@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Mon, Sep 9, 2024 at 5:35 AM Jann Horn wrote:
>
> On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko wrote:
> > +static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
> > +{
> > +       /* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
> > +       return seq == smp_load_acquire(&mm->mm_lock_seq);
> > +}
>
> A load-acquire can't provide "end of locked section" semantics - a
> load-acquire is a one-way barrier; you can basically use it for
> "acquire lock" semantics but not for "release lock" semantics, because
> the CPU will prevent reordering the load with *later* loads but not
> with *earlier* loads. So if you do:
>
> mmap_lock_speculation_start()
> [locked reads go here]
> mmap_lock_speculation_end()
>
> then the CPU is allowed to reorder your instructions like this:
>
> mmap_lock_speculation_start()
> mmap_lock_speculation_end()
> [locked reads go here]
>
> so the lock is broken.

Hi Jann,
Thanks for the review!

Yeah, you are right: we do need an smp_rmb() before we compare
mm->mm_lock_seq with the stored seq. Otherwise the reads might get
reordered like this:

CPU1                                                   CPU2
mmap_lock_speculation_start() // seq = mm->mm_lock_seq
reloaded_seq = mm->mm_lock_seq; // reordered read
                                                       mmap_write_lock() // inc_mm_lock_seq(mm)
                                                       vma->vm_file = ...;
                                                       mmap_write_unlock() // inc_mm_lock_seq(mm)
mmap_lock_speculation_end() // return (reloaded_seq == seq)

>
> >  static inline void mmap_write_lock(struct mm_struct *mm)
> >  {
> >         __mmap_lock_trace_start_locking(mm, true);
> >         down_write(&mm->mmap_lock);
> > +       inc_mm_lock_seq(mm);
> >         __mmap_lock_trace_acquire_returned(mm, true, true);
> >  }
>
> Similarly, inc_mm_lock_seq(), which does a store-release, can only
> provide "release lock" semantics, not "take lock" semantics, because
> the CPU can reorder it with later stores; for example, this code:
>
> inc_mm_lock_seq()
> [locked stuff goes here]
> inc_mm_lock_seq()
>
> can be reordered into this:
>
> [locked stuff goes here]
> inc_mm_lock_seq()
> inc_mm_lock_seq()
>
> so the lock is broken.

Ugh, yes. We do need an smp_wmb() after the inc_mm_lock_seq().
Whenever we use inc_mm_lock_seq() for "take lock" semantics, it is
preceded by a down_write(&mm->mmap_lock) with implied ACQUIRE
ordering, so I thought we could rely on that, but I realize now that
this reordering is still possible:

CPU1                                                   CPU2
mmap_write_lock()
    down_write(&mm->mmap_lock);
vma->vm_file = ...; // reordered before inc_mm_lock_seq(mm)
                                                       mmap_lock_speculation_start() // seq = mm->mm_lock_seq
                                                       mmap_lock_speculation_end() // return (mm->mm_lock_seq == seq)
    inc_mm_lock_seq(mm);
mmap_write_unlock() // inc_mm_lock_seq(mm)

Is that what you were describing? (A sketch combining both barrier
placements is appended below the quoted text at the end of this mail.)

Thanks,
Suren.

>
> For "taking a lock" with a memory store, or "dropping a lock" with a
> memory load, you need heavier memory barriers; see
> Documentation/memory-barriers.txt.
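
For reference, a minimal sketch of the barrier placement discussed above.
This is illustrative only, not the actual patch: it assumes inc_mm_lock_seq()
bumps mm->mm_lock_seq with store-release semantics, as in the patch under
review, and it combines the smp_rmb() and smp_wmb() suggestions from this
mail with the helpers quoted above.

static inline bool mmap_lock_speculation_end(struct mm_struct *mm, int seq)
{
        /*
         * Order all speculative reads before the re-read of the sequence
         * counter, so none of them can be hoisted past the check below.
         */
        smp_rmb();
        /* Pairs with RELEASE semantics in inc_mm_lock_seq(). */
        return seq == smp_load_acquire(&mm->mm_lock_seq);
}

static inline void mmap_write_lock(struct mm_struct *mm)
{
        __mmap_lock_trace_start_locking(mm, true);
        down_write(&mm->mmap_lock);
        inc_mm_lock_seq(mm);
        /*
         * Order the sequence bump before any stores made under the lock,
         * so a speculating reader that still observes the old sequence
         * number cannot also observe those stores.
         */
        smp_wmb();
        __mmap_lock_trace_acquire_returned(mm, true, true);
}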