From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 5 Jul 2023 17:42:54 -0700
Subject: Re: [PATCH v3 1/2] fork: lock VMAs of the parent process when forking
To: "Liam R. Howlett", Suren Baghdasaryan, David Hildenbrand, akpm@linux-foundation.org, jirislaby@kernel.org, jacobly.alt@gmail.com, holger@applied-asynchrony.com, hdegoede@redhat.com, michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org, peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org, mingo@redhat.com, will@kernel.org, luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com, chriscli@google.com, axelrasmussen@google.com, joelaf@google.com, minchan@google.com, rppt@kernel.org, jannh@google.com, shakeelb@google.com, tatashin@google.com, edumazet@google.com, gthelen@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
In-Reply-To: <20230706003252.sj57tjmqb77yflqq@revolver>
References: <20230705171213.2843068-1-surenb@google.com> <20230705171213.2843068-2-surenb@google.com> <10c8fe17-fa9b-bf34-cb88-c758e07c9d72@redhat.com> <20230705230647.twq3n5nb2iabr7uk@revolver> <20230706003252.sj57tjmqb77yflqq@revolver>
Content-Type: text/plain; charset="UTF-8"
On Wed, Jul 5, 2023 at 5:33 PM Liam R. Howlett wrote:
>
> * Suren Baghdasaryan [230705 20:20]:
> > On Wed, Jul 5, 2023 at 4:07 PM Liam R.
> > Howlett wrote:
> > >
> > > * Suren Baghdasaryan [230705 13:24]:
> > > > On Wed, Jul 5, 2023 at 10:14 AM David Hildenbrand wrote:
> > > > >
> > > > > On 05.07.23 19:12, Suren Baghdasaryan wrote:
> > > > > > When forking a child process, the parent write-protects an anonymous page
> > > > > > and COW-shares it with the child being forked using copy_present_pte().
> > > > > > The parent's TLB is flushed right before we drop the parent's mmap_lock in
> > > > > > dup_mmap(). If we get a write-fault before that TLB flush in the parent,
> > > > > > and we end up replacing that anonymous page in the parent process in
> > > > > > do_wp_page() (because it is COW-shared with the child), this might lead to
> > > > > > some stale writable TLB entries targeting the wrong (old) page.
> > > > > > A similar issue happened in the past with userfaultfd (see the
> > > > > > flush_tlb_page() call inside do_wp_page()).
> > > > > > Lock the VMAs of the parent process when forking a child, which prevents
> > > > > > concurrent page faults during the fork operation and avoids this issue.
> > > > > > This fix can potentially regress some fork-heavy workloads. Kernel build
> > > > > > time did not show a noticeable regression on a 56-core machine, while a
> > > > > > stress test mapping 10000 VMAs and forking 5000 times in a tight loop
> > > > > > shows a ~5% regression. If such a fork-time regression is unacceptable,
> > > > > > disabling CONFIG_PER_VMA_LOCK should restore its performance. Further
> > > > > > optimizations are possible if this regression proves to be problematic.
> > > > > >
> > > > > > Suggested-by: David Hildenbrand
> > > > > > Reported-by: Jiri Slaby
> > > > > > Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
> > > > > > Reported-by: Holger Hoffstätte
> > > > > > Closes: https://lore.kernel.org/all/b198d649-f4bf-b971-31d0-e8433ec2a34c@applied-asynchrony.com/
> > > > > > Reported-by: Jacob Young
> > > > > > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
> > > > > > Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
> > > > > > Cc: stable@vger.kernel.org
> > > > > > Signed-off-by: Suren Baghdasaryan
> > > > > > ---
> > > > > >  kernel/fork.c | 6 ++++++
> > > > > >  1 file changed, 6 insertions(+)
> > > > > >
> > > > > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > > > > index b85814e614a5..403bc2b72301 100644
> > > > > > --- a/kernel/fork.c
> > > > > > +++ b/kernel/fork.c
> > > > > > @@ -658,6 +658,12 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
> > > > > >  		retval = -EINTR;
> > > > > >  		goto fail_uprobe_end;
> > > > > >  	}
> > > > > > +#ifdef CONFIG_PER_VMA_LOCK
> > > > > > +	/* Disallow any page faults before calling flush_cache_dup_mm */
> > > > > > +	for_each_vma(old_vmi, mpnt)
> > > > > > +		vma_start_write(mpnt);
> > > > > > +	vma_iter_init(&old_vmi, oldmm, 0);
> > >
> > > vma_iter_set(&old_vmi, 0) is probably what you want here.
> >
> > Ok, I'll send another version with that.
> > > > > >
> > > > > > +#endif
> > > > > >  	flush_cache_dup_mm(oldmm);
> > > > > >  	uprobe_dup_mmap(oldmm, mm);
> > > > > >  	/*
> > > > > >
> > > > >
> > > > > The old version was most probably fine as well, but this certainly looks
> > > > > even safer.
> > > > >
> > > > > Acked-by: David Hildenbrand
> > >
> > > I think this is overkill and believe setting the vma_start_write() will
> > > synchronize with any readers since it's using the per-vma rw semaphore
> > > in write mode.
> > > Anything faulting will need to finish before the fork
> > > continues, and faults during the fork will fall back to a read lock of
> > > the mmap_lock. Is there a possibility of populate happening outside the
> > > mmap_write lock/vma_lock?
> >
> > Yes, I think we understand the loss of concurrency in the parent's
> > ability to fault pages while forking. Is that a real problem though?
>
> No, I don't think that part is an issue at all. I wanted to be sure I
> didn't miss something.
>
> > >
> > > Was your benchmarking done with this loop at the start?
> >
> > No, it was done with the initial version where the lock was inside the
> > existing loop. I just reran the benchmark and, while kernel compilation
> > times did not change, the stress test shows a ~7% regression now,
> > probably due to that additional tree walk. I'll update that number in
> > the new patch.
>
> ...but I expected a performance hit and didn't understand why you updated
> the patch this way. It would probably only happen on really big trees
> though and, ah, the largest trees I see are from the Android side. I'd
> wager the impact will be felt more when larger trees encounter smaller
> CPU caches.

My test has 10000 VMAs, and even for Android that's a stretch (the highest
number I've seen was ~4000). We can think of a less restrictive solution
if this proves to be a problem for some workloads, but for now I would
prefer to fix this in a safe way and possibly improve it later. The
alternative is to revert this completely, and then we get no more testing
until the next release.

> Thanks,
> Liam