From mboxrd@z Thu Jan  1 00:00:00 1970
From: Qi Zheng <zhengqi.arch@bytedance.com>
Date: Fri, 24 Feb 2023 18:12:31 +0800
Subject: Re: [PATCH v2 2/7] mm: vmscan: make global slab shrink lockless
To: Sultan Alsawaf
Cc: akpm@linux-foundation.org, tkhai@ya.ru, hannes@cmpxchg.org,
    shakeelb@google.com, mhocko@kernel.org, roman.gushchin@linux.dev,
    muchun.song@linux.dev, david@redhat.com, shy828301@gmail.com,
    dave@stgolabs.net, penguin-kernel@i-love.sakura.ne.jp,
    paulmck@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20230223132725.11685-1-zhengqi.arch@bytedance.com>
    <20230223132725.11685-3-zhengqi.arch@bytedance.com>
    <8049b6ed-435f-b518-f947-5516a514aec2@bytedance.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2023/2/24 16:20, Sultan Alsawaf wrote:
> On Fri, Feb 24, 2023 at 12:00:21PM +0800, Qi Zheng wrote:
>>
>>
>> On 2023/2/24 02:24, Sultan Alsawaf wrote:
>>> On Thu, Feb 23, 2023 at 09:27:20PM +0800, Qi Zheng wrote:
>>>> The shrinker_rwsem is a global lock in shrinkers subsystem,
>>>> it is easy to cause blocking in the following cases:
>>>>
>>>> a. the write lock of shrinker_rwsem was held for too long.
>>>>    For example, there are many memcgs in the system, which
>>>>    causes some paths to hold locks and traverse it for too
>>>>    long. (e.g. expand_shrinker_info())
>>>> b. the read lock of shrinker_rwsem was held for too long,
>>>>    and a writer came at this time. Then this writer will be
>>>>    forced to wait and block all subsequent readers.
>>>>    For example:
>>>>    - be scheduled when the read lock of shrinker_rwsem is
>>>>      held in do_shrink_slab()
>>>>    - some shrinker are blocked for too long. Like the case
>>>>      mentioned in the patchset[1].
>>>>
>>>> Therefore, many times in history ([2],[3],[4],[5]), some
>>>> people wanted to replace shrinker_rwsem reader with SRCU,
>>>> but they all gave up because SRCU was not unconditionally
>>>> enabled.
>>>>
>>>> But now, since commit 1cd0bd06093c ("rcu: Remove CONFIG_SRCU"),
>>>> the SRCU is unconditionally enabled. So it's time to use
>>>> SRCU to protect readers who previously held shrinker_rwsem.
>>>>
>>>> [1]. https://lore.kernel.org/lkml/20191129214541.3110-1-ptikhomirov@virtuozzo.com/
>>>> [2]. https://lore.kernel.org/all/1437080113.3596.2.camel@stgolabs.net/
>>>> [3]. https://lore.kernel.org/lkml/1510609063-3327-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp/
>>>> [4]. https://lore.kernel.org/lkml/153365347929.19074.12509495712735843805.stgit@localhost.localdomain/
>>>> [5]. https://lore.kernel.org/lkml/20210927074823.5825-1-sultan@kerneltoast.com/
>>>>
>>>> Signed-off-by: Qi Zheng
>>>> ---
>>>>   mm/vmscan.c | 27 +++++++++++----------------
>>>>   1 file changed, 11 insertions(+), 16 deletions(-)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 9f895ca6216c..02987a6f95d1 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -202,6 +202,7 @@ static void set_task_reclaim_state(struct task_struct *task,
>>>>   LIST_HEAD(shrinker_list);
>>>>   DECLARE_RWSEM(shrinker_rwsem);
>>>> +DEFINE_SRCU(shrinker_srcu);
>>>>   #ifdef CONFIG_MEMCG
>>>>   static int shrinker_nr_max;
>>>> @@ -706,7 +707,7 @@ void free_prealloced_shrinker(struct shrinker *shrinker)
>>>>   void register_shrinker_prepared(struct shrinker *shrinker)
>>>>   {
>>>>   	down_write(&shrinker_rwsem);
>>>> -	list_add_tail(&shrinker->list, &shrinker_list);
>>>> +	list_add_tail_rcu(&shrinker->list, &shrinker_list);
>>>>   	shrinker->flags |= SHRINKER_REGISTERED;
>>>>   	shrinker_debugfs_add(shrinker);
>>>>   	up_write(&shrinker_rwsem);
>>>> @@ -760,13 +761,15 @@ void unregister_shrinker(struct shrinker *shrinker)
>>>>   		return;
>>>>   	down_write(&shrinker_rwsem);
>>>> -	list_del(&shrinker->list);
>>>> +	list_del_rcu(&shrinker->list);
>>>>   	shrinker->flags &= ~SHRINKER_REGISTERED;
>>>>   	if (shrinker->flags & SHRINKER_MEMCG_AWARE)
>>>>   		unregister_memcg_shrinker(shrinker);
>>>>   	debugfs_entry = shrinker_debugfs_remove(shrinker);
>>>>   	up_write(&shrinker_rwsem);
>>>> +	synchronize_srcu(&shrinker_srcu);
>>>> +
>>>>   	debugfs_remove_recursive(debugfs_entry);
>>>>   	kfree(shrinker->nr_deferred);
>>>> @@ -786,6 +789,7 @@ void synchronize_shrinkers(void)
>>>>   {
>>>>   	down_write(&shrinker_rwsem);
>>>>   	up_write(&shrinker_rwsem);
>>>> +	synchronize_srcu(&shrinker_srcu);
>>>>   }
>>>>   EXPORT_SYMBOL(synchronize_shrinkers);
>>>> @@ -996,6 +1000,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>>>   {
>>>>   	unsigned long ret, freed = 0;
>>>>   	struct shrinker *shrinker;
>>>> +	int srcu_idx;
>>>>   	/*
>>>>   	 * The root memcg might be allocated even though memcg is disabled
>>>> @@ -1007,10 +1012,10 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>>>   	if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
>>>>   		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>>>> -	if (!down_read_trylock(&shrinker_rwsem))
>>>> -		goto out;
>>>> +	srcu_idx = srcu_read_lock(&shrinker_srcu);
>>>> -	list_for_each_entry(shrinker, &shrinker_list, list) {
>>>> +	list_for_each_entry_srcu(shrinker, &shrinker_list, list,
>>>> +				 srcu_read_lock_held(&shrinker_srcu)) {
>>>>   		struct shrink_control sc = {
>>>>   			.gfp_mask = gfp_mask,
>>>>   			.nid = nid,
>>>> @@ -1021,19 +1026,9 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>>>   		if (ret == SHRINK_EMPTY)
>>>>   			ret = 0;
>>>>   		freed += ret;
>>>> -		/*
>>>> -		 * Bail out if someone want to register a new shrinker to
>>>> -		 * prevent the registration from being stalled for long periods
>>>> -		 * by parallel ongoing shrinking.
>>>> -		 */
>>>> -		if (rwsem_is_contended(&shrinker_rwsem)) {
>>>> -			freed = freed ? : 1;
>>>> -			break;
>>>> -		}
>>>>   	}
>>>> -	up_read(&shrinker_rwsem);
>>>> -out:
>>>> +	srcu_read_unlock(&shrinker_srcu, srcu_idx);
>>>>   	cond_resched();
>>>>   	return freed;
>>>>   }
>>>> -- 
>>>> 2.20.1
>>>>
>>>>
>>>
>>> Hi Qi,
>>>
>>> A different problem I realized after my old attempt to use SRCU was that the
>>> unregister_shrinker() path became quite slow due to the heavy synchronize_srcu()
>>> call. Both register_shrinker() *and* unregister_shrinker() are called frequently
>>> these days, and SRCU is too unfair to the unregister path IMO.
>>
>> Hi Sultan,
>>
>> IIUC, for unregister_shrinker(), the wait time is hardly longer with
>> SRCU than with shrinker_rwsem before.
>
> The wait time can be quite different because with shrinker_rwsem, the
> rwsem_is_contended() bailout would cause unregister_shrinker() to wait for only
> one random shrinker to finish at worst rather than waiting for *all* shrinkers
> to finish.
Yes, to be exact, unregister_shrinker() needs to wait for all the
shrinkers that entered the grace period before it. But the benefit in
exchange is that the slab shrink becomes completely lock-free; I think
that is more worthwhile than letting unregister_shrinker() wait a
little longer.

>
>> And I just did a simple test. After using the script in the cover letter
>> to increase the shrink_slab hotspot, I did umount 1k times at the same
>> time, and then I used bpftrace to measure the time consumption of
>> unregister_shrinker() as follows:
>>
>> bpftrace -e 'kprobe:unregister_shrinker { @start[tid] = nsecs; }
>>              kretprobe:unregister_shrinker /@start[tid]/
>>              { @ns[comm] = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>>
>> @ns[umount]:
>> [16K, 32K)     3 |                                                    |
>> [32K, 64K)    66 |@@@@@@@@@@                                          |
>> [64K, 128K)   32 |@@@@@                                               |
>> [128K, 256K)  22 |@@@                                                 |
>> [256K, 512K)  48 |@@@@@@@                                             |
>> [512K, 1M)    19 |@@@                                                 |
>> [1M, 2M)     131 |@@@@@@@@@@@@@@@@@@@@@                               |
>> [2M, 4M)     313 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>> [4M, 8M)     302 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
>> [8M, 16M)     55 |@@@@@@@@@                                           |
>>
>> I see that the highest time-consuming of unregister_shrinker() is between
>> 8ms and 16ms, which feels tolerable?
>
> If you've got a fast x86 machine then I'd say that's a bit slow. :)

Nope, I tested it on a qemu virtual machine.
And I just tested it on a physical machine (Intel(R) Xeon(R) Platinum
8260 CPU @ 2.40GHz) and the results are as follows:

1) use synchronize_srcu():

@ns[umount]:
[8K, 16K)     83 |@@@@@@@                                             |
[16K, 32K)   578 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[32K, 64K)    78 |@@@@@@@                                             |
[64K, 128K)    6 |                                                    |
[128K, 256K)   7 |                                                    |
[256K, 512K)  29 |@@                                                  |
[512K, 1M)    51 |@@@@                                                |
[1M, 2M)      90 |@@@@@@@@                                            |
[2M, 4M)      70 |@@@@@@                                              |
[4M, 8M)       8 |                                                    |

2) use synchronize_srcu_expedited():

@ns[umount]:
[8K, 16K)     31 |@@                                                  |
[16K, 32K)   803 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[32K, 64K)   158 |@@@@@@@@@@                                          |
[64K, 128K)    4 |                                                    |
[128K, 256K)   2 |                                                    |
[256K, 512K)   2 |                                                    |

Thanks,
Qi

>
> This depends a lot on which shrinkers are active on your system and how much
> work each one does upon running. If a driver's shrinker doesn't have much to do
> because there's nothing it can shrink further, then it'll run fast. Conversely,
> if a driver is stressed in a way that constantly creates a lot of potential work
> for its shrinker, then the shrinker will run longer.
>
> Since shrinkers are allowed to sleep, the delays can really add up when waiting
> for all of them to finish running. In the past, I recall observing delays of
> 100ms+ in unregister_shrinker() on slower arm64 hardware when I stress tested
> the SRCU approach.
>
> If your GPU driver has a shrinker (such as i915), I suggest testing again under
> heavy GPU load. The GPU shrinkers can be pretty heavy IIRC.
>
> Thanks,
> Sultan
>
>> Thanks,
>> Qi
>>
>>>
>>> Although I never got around to submitting it, I made a non-SRCU solution [1]
>>> that uses fine-grained locking instead, which is fair to both the register path
>>> and unregister path. (The patch I've linked is a version of this adapted to an
>>> older 4.14 kernel FYI, but it can be reworked for the current kernel.)
>>>
>>> What do you think about the fine-grained locking approach?
>>>
>>> Thanks,
>>> Sultan
>>>
>>> [1] https://github.com/kerneltoast/android_kernel_google_floral/commit/012378f3173a82d2333d3ae7326691544301e76a
>>>
>>
>> --
>> Thanks,
>> Qi

-- 
Thanks,
Qi