MIME-Version: 1.0
References: <20230109205336.3665937-1-surenb@google.com>
 <20230109205336.3665937-40-surenb@google.com>
In-Reply-To:
From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 19 Jan 2023 10:52:03 -0800
Message-ID:
Subject: Re: [PATCH 39/41] kernel/fork: throttle call_rcu() calls in vm_area_free
To: Michal Hocko
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
 vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
 dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
 peterz@infradead.org, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com,
 paulmck@kernel.org, luto@kernel.org, songliubraving@fb.com,
 peterx@redhat.com, david@redhat.com, dhowells@redhat.com, hughd@google.com,
 bigeasy@linutronix.de, kent.overstreet@linux.dev,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, peterjung1337@gmail.com,
 rientjes@google.com, axelrasmussen@google.com, joelaf@google.com,
 minchan@google.com, jannh@google.com, shakeelb@google.com,
 tatashin@google.com, edumazet@google.com, gthelen@google.com,
 gurua@google.com, arjunroy@google.com, soheil@google.com,
 hughlynch@google.com, leewalsh@google.com, posk@google.com,
 linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org,
 kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 19, 2023 at 4:59 AM Michal Hocko wrote:
>
> On Mon 09-01-23 12:53:34, Suren Baghdasaryan wrote:
> > call_rcu() can take a long time when callback offloading is enabled.
> > Its use in the vm_area_free can cause regressions in the exit path when
> > multiple VMAs are being freed. To minimize that impact, place VMAs into
> > a list and free them in groups using one call_rcu() call per group.
>
> After some more clarification I can understand how call_rcu might not be
> super happy about thousands of callbacks to be invoked and I do agree
> that this is not really optimal.
>
> On the other hand I do not like this solution much either.
> VM_AREA_FREE_LIST_MAX is arbitrary and it won't really help all that
> much with processes with a huge number of vmas either. It would still be
> in thousands of callbacks to be scheduled without a good reason.
>
> Instead, are there any other cases than remove_vma that need this
> batching? We could easily just link all the vmas into linked list and
> use a single call_rcu instead, no? This would both simplify the
> implementation, remove the scaling issue as well and we do not have to
> argue whether VM_AREA_FREE_LIST_MAX should be epsilon or epsilon + 1.

Yes, I agree the solution is not stellar. I wanted something simple but
this is probably too simple. OTOH keeping all dead vm_area_structs on
the list without hooking up a shrinker (additional complexity) does not
sound too appealing either. WDYT about time domain throttling to limit
draining the list to, say, once per second, like this:

void vm_area_free(struct vm_area_struct *vma)
{
        struct mm_struct *mm = vma->vm_mm;
        bool drain;

        free_anon_vma_name(vma);
        spin_lock(&mm->vma_free_list.lock);
        list_add(&vma->vm_free_list, &mm->vma_free_list.head);
        mm->vma_free_list.size++;
-       drain = mm->vma_free_list.size > VM_AREA_FREE_LIST_MAX;
+       drain = jiffies > mm->last_drain_tm + HZ;
        spin_unlock(&mm->vma_free_list.lock);

-       if (drain)
+       if (drain) {
                drain_free_vmas(mm);
+               mm->last_drain_tm = jiffies;
+       }
}

Ultimately we want to prevent very frequent call_rcu() calls, so
throttling in the time domain seems appropriate. That's the simplest
way I can think of to address your concern about a quick spike in VMA
freeing. It does not place any restriction on the list size, and we
might have excessive dead vm_area_structs if after a large spike there
are no vm_area_free() calls, but I don't know if that's a real problem,
so I'm not sure we should be addressing it at this time. WDYT?

>
> --
> Michal Hocko
> SUSE Labs
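
P.S. To make the shape of the proposal above easier to see in isolation,
here is a minimal, single-threaded userspace sketch of the same pattern:
objects are queued on a free list and the drain runs at most once per
interval. It is only an illustration of the time-domain throttling idea;
the names (free_list, last_drain, drain_free_list, throttled_free) are
made up and do not correspond to the kernel patch, locking and RCU are
left out, and a real implementation would also need a final
unconditional drain when the process exits.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define DRAIN_INTERVAL_SEC 1

struct node {
        struct node *next;
        int id;
};

static struct node *free_list;
static size_t free_list_size;
static time_t last_drain;

/*
 * Reclaim the whole batch at once. In the kernel this is where a single
 * deferred (call_rcu-style) free for the accumulated batch would be
 * issued instead of freeing each object individually.
 */
static void drain_free_list(void)
{
        struct node *n = free_list;

        free_list = NULL;
        free_list_size = 0;
        while (n) {
                struct node *next = n->next;

                printf("reclaiming node %d\n", n->id);
                free(n);
                n = next;
        }
}

/* Queue one object and drain at most once per DRAIN_INTERVAL_SEC. */
static void throttled_free(struct node *n)
{
        time_t now = time(NULL);

        n->next = free_list;
        free_list = n;
        free_list_size++;

        if (now > last_drain + DRAIN_INTERVAL_SEC) {
                drain_free_list();
                last_drain = now;
        }
}

int main(void)
{
        for (int i = 0; i < 5; i++) {
                struct node *n = malloc(sizeof(*n));

                if (!n)
                        return 1;
                n->id = i;
                throttled_free(n);
        }
        drain_free_list();      /* final drain, independent of the timer */
        return 0;
}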