From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 20 Jan 2023 09:50:01 -0800
Subject: Re: [PATCH 39/41] kernel/fork: throttle call_rcu() calls in vm_area_free
To: Matthew Wilcox
Cc: "Liam R. Howlett", Michal Hocko, akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com, vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net, peterz@infradead.org, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, paulmck@kernel.org, luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com, dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de, kent.overstreet@linux.dev, punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com, axelrasmussen@google.com, joelaf@google.com, minchan@google.com, jannh@google.com, shakeelb@google.com, tatashin@google.com, edumazet@google.com, gthelen@google.com, gurua@google.com, arjunroy@google.com, soheil@google.com, hughlynch@google.com, leewalsh@google.com, posk@google.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230109205336.3665937-1-surenb@google.com> <20230109205336.3665937-40-surenb@google.com> <20230120170815.yuylbs27r6xcjpq5@revolver>

On Fri, Jan 20, 2023 at 9:32 AM Matthew Wilcox wrote:
>
> On Fri, Jan 20, 2023 at 09:17:46AM -0800, Suren Baghdasaryan wrote:
> > On Fri, Jan 20, 2023 at 9:08 AM Liam R. Howlett wrote:
> > >
> > > * Matthew Wilcox [230120 11:50]:
> > > > On Fri, Jan 20, 2023 at 08:45:21AM -0800, Suren Baghdasaryan wrote:
> > > > > On Fri, Jan 20, 2023 at 8:20 AM Suren Baghdasaryan wrote:
> > > > > >
> > > > > > On Fri, Jan 20, 2023 at 12:52 AM Michal Hocko wrote:
> > > > > > >
> > > > > > > On Thu 19-01-23 10:52:03, Suren Baghdasaryan wrote:
> > > > > > > > On Thu, Jan 19, 2023 at 4:59 AM Michal Hocko wrote:
> > > > > > > > >
> > > > > > > > > On Mon 09-01-23 12:53:34, Suren Baghdasaryan wrote:
> > > > > > > > > > call_rcu() can take a long time when callback offloading is enabled.
> > > > > > > > > > Its use in vm_area_free can cause regressions in the exit path when
> > > > > > > > > > multiple VMAs are being freed. To minimize that impact, place VMAs into
> > > > > > > > > > a list and free them in groups using one call_rcu() call per group.
> > > > > > > > >
> > > > > > > > > After some more clarification I can understand how call_rcu might not be
> > > > > > > > > super happy about thousands of callbacks to be invoked and I do agree
> > > > > > > > > that this is not really optimal.
> > > > > > > > >
> > > > > > > > > On the other hand I do not like this solution much either.
> > > > > > > > > VM_AREA_FREE_LIST_MAX is arbitrary and it won't really help all that
> > > > > > > > > much with processes with a huge number of vmas either. It would still
> > > > > > > > > be thousands of callbacks to be scheduled without a good reason.
> > > > > > > > >
> > > > > > > > > Instead, are there any other cases than remove_vma that need this
> > > > > > > > > batching? We could easily just link all the vmas into a linked list and
> > > > > > > > > use a single call_rcu instead, no? This would both simplify the
> > > > > > > > > implementation and remove the scaling issue, and we would not have to
> > > > > > > > > argue whether VM_AREA_FREE_LIST_MAX should be epsilon or epsilon + 1.
> > > > > > > >
> > > > > > > > Yes, I agree the solution is not stellar. I wanted something simple
> > > > > > > > but this is probably too simple. OTOH keeping all dead vm_area_structs
> > > > > > > > on the list without hooking up a shrinker (additional complexity) does
> > > > > > > > not sound too appealing either.
> > > > > > >
> > > > > > > I suspect you have missed my idea. I do not really want to keep the list
> > > > > > > around or any shrinker. It is dead simple. Collect all vmas in
> > > > > > > remove_vma and then call_rcu the whole list at once after the whole list
> > > > > > > is gathered (be it from exit_mmap or remove_mt). See?
> > > > > >
> > > > > > Yes, I understood your idea but keeping dead objects until the process
> > > > > > exits even when the system is low on memory (no shrinkers attached)
> > > > > > seems too wasteful. If we do this I would advocate for attaching a
> > > > > > shrinker.
> > > > >
> > > > > Maybe even simpler: since we are hit with this VMA freeing flood
> > > > > during exit_mmap (when all VMAs are destroyed), we pass a hint to
> > > > > vm_area_free to batch the destruction, and all other cases call
> > > > > call_rcu()? I don't think there will be other cases of VMA destruction
> > > > > floods.
> > > >
> > > > ... or have two different call_rcu functions; one for munmap() and
> > > > one for exit. It'd be nice to use kmem_cache_free_bulk().
> > >
> > > Do we even need a call_rcu on exit? At the point of freeing the VMAs we
> > > have set the MMF_OOM_SKIP bit and unmapped the vmas under the read lock.
> > > Once we have obtained the write lock again, I think it's safe to say we
> > > can just go ahead and free the VMAs directly.
> >
> > I think that would still be racy if the page fault handler found that
> > VMA under read-RCU protection but did not lock it yet (no locks are
> > held yet). If it's preempted, the VMA can be freed and destroyed from
> > under it without an RCU grace period.
>
> The page fault handler (or whatever other reader -- ptrace, proc, etc)
> should have a refcount on the mm_struct, so we can't be in this path
> trying to free VMAs. Right?

Hmm. That sounds right. I checked process_mrelease() as well, which
operates on an mm with only mmgrab()+mmap_read_lock(), but it only
unmaps VMAs without freeing them, so we are still good. Michal, do you
agree this is ok?

lock_vma_under_rcu() receives mm as a parameter, so I guess it's implied
that the caller should either mmget() it or operate on current->mm, so
there is no need to document this requirement?
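
To make the batching ideas above concrete, a few sketches follow. They
are illustrations only: the field, helper, and constant names are made
up here and are not code from this series.

First, Michal's "link the dead vmas and do a single call_rcu()" idea,
assuming vm_area_struct gains a vm_free_next link next to its rcu_head
(vm_area_cachep is the existing slab cache for vm_area_struct in
kernel/fork.c):

static void vm_area_free_list_cb(struct rcu_head *head)
{
	struct vm_area_struct *vma = container_of(head, struct vm_area_struct,
						  vm_rcu);

	/* Walk the chain collected during tear-down and free every vma. */
	while (vma) {
		struct vm_area_struct *next = vma->vm_free_next;

		kmem_cache_free(vm_area_cachep, vma);
		vma = next;
	}
}

/* Queued once per tear-down (exit_mmap() or remove_mt()), after all
 * dead vmas have been linked through vm_free_next. */
static void vm_area_free_batch(struct vm_area_struct *list)
{
	if (list)
		call_rcu(&list->vm_rcu, vm_area_free_list_cb);
}

This keeps the callback count at one per tear-down regardless of how
many vmas the process had, which is the scaling property Michal is
after.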
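
Second, one way to read Matthew's and Liam's suggestions together: keep
the RCU-deferred free for munmap(), but let the exit path, where readers
are excluded by the mm refcount, hand the dead vmas straight back to the
slab with kmem_cache_free_bulk(). Again only a sketch with assumed
names:

static void vm_area_free_rcu_cb(struct rcu_head *head)
{
	kmem_cache_free(vm_area_cachep,
			container_of(head, struct vm_area_struct, vm_rcu));
}

/* munmap() path: readers may still find the vma, defer to a grace
 * period. */
static void vm_area_free_deferred(struct vm_area_struct *vma)
{
	call_rcu(&vma->vm_rcu, vm_area_free_rcu_cb);
}

/* exit path: readers hold mm_users references, so no grace period is
 * needed; release the gathered vmas in one bulk slab operation. */
static void vm_area_free_exit(struct vm_area_struct **vmas, size_t nr)
{
	kmem_cache_free_bulk(vm_area_cachep, nr, (void **)vmas);
}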
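
Finally, the caller contract from the last paragraph, spelled out. This
assumes the lock_vma_under_rcu()/vma_end_read() API from this series;
inspect_vma() is a made-up caller:

/* A reader that is not operating on current->mm. mmgrab() alone keeps
 * the mm_struct allocated but does not stop exit_mmap() from freeing
 * the vmas, so mm_users must be pinned with mmget(). */
static void inspect_vma(struct mm_struct *mm, unsigned long address)
{
	struct vm_area_struct *vma;

	if (!mmget_not_zero(mm))	/* pin mm_users */
		return;

	vma = lock_vma_under_rcu(mm, address);
	if (vma) {
		/* ... use vma under the per-vma read lock ... */
		vma_end_read(vma);
	}

	mmput(mm);
}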