Date: Sat, 25 Feb 2023 00:14:58 +0300
Subject: Re: [PATCH v2 2/7] mm: vmscan: make global slab shrink lockless
From: Kirill Tkhai
To: Qi Zheng, Sultan Alsawaf
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, shakeelb@google.com,
 mhocko@kernel.org, roman.gushchin@linux.dev, muchun.song@linux.dev,
 david@redhat.com, shy828301@gmail.com, dave@stgolabs.net,
 penguin-kernel@i-love.sakura.ne.jp, paulmck@kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20230223132725.11685-1-zhengqi.arch@bytedance.com>
 <20230223132725.11685-3-zhengqi.arch@bytedance.com>
 <8049b6ed-435f-b518-f947-5516a514aec2@bytedance.com>

On 25.02.2023 00:02, Kirill Tkhai wrote:
> On 24.02.2023 07:00, Qi Zheng wrote:
>>
>>
>> On 2023/2/24 02:24, Sultan Alsawaf wrote:
>>> On Thu, Feb 23, 2023 at 09:27:20PM +0800, Qi Zheng wrote:
>>>> The shrinker_rwsem is a global lock in shrinkers subsystem,
>>>> it is easy to cause blocking in the following cases:
>>>>
>>>> a. the write lock of shrinker_rwsem was held for too long.
>>>>    For example, there are many memcgs in the system, which
>>>>    causes some paths to hold locks and traverse it for too
>>>>    long. (e.g. expand_shrinker_info())
>>>> b. the read lock of shrinker_rwsem was held for too long,
>>>>    and a writer came at this time. Then this writer will be
>>>>    forced to wait and block all subsequent readers.
>>>>    For example:
>>>>    - be scheduled when the read lock of shrinker_rwsem is
>>>>      held in do_shrink_slab()
>>>>    - some shrinker are blocked for too long. Like the case
>>>>      mentioned in the patchset[1].
>>>>
>>>> Therefore, many times in history ([2],[3],[4],[5]), some
>>>> people wanted to replace shrinker_rwsem reader with SRCU,
>>>> but they all gave up because SRCU was not unconditionally
>>>> enabled.
>>>>
>>>> But now, since commit 1cd0bd06093c ("rcu: Remove CONFIG_SRCU"),
>>>> the SRCU is unconditionally enabled. So it's time to use
>>>> SRCU to protect readers who previously held shrinker_rwsem.
>>>>
>>>> [1]. https://lore.kernel.org/lkml/20191129214541.3110-1-ptikhomirov@virtuozzo.com/
>>>> [2]. https://lore.kernel.org/all/1437080113.3596.2.camel@stgolabs.net/
>>>> [3]. https://lore.kernel.org/lkml/1510609063-3327-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp/
>>>> [4]. https://lore.kernel.org/lkml/153365347929.19074.12509495712735843805.stgit@localhost.localdomain/
>>>> [5]. https://lore.kernel.org/lkml/20210927074823.5825-1-sultan@kerneltoast.com/
>>>>
>>>> Signed-off-by: Qi Zheng
>>>> ---
>>>>   mm/vmscan.c | 27 +++++++++++----------------
>>>>   1 file changed, 11 insertions(+), 16 deletions(-)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 9f895ca6216c..02987a6f95d1 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -202,6 +202,7 @@ static void set_task_reclaim_state(struct task_struct *task,
>>>>
>>>>   LIST_HEAD(shrinker_list);
>>>>   DECLARE_RWSEM(shrinker_rwsem);
>>>> +DEFINE_SRCU(shrinker_srcu);
>>>>
>>>>   #ifdef CONFIG_MEMCG
>>>>   static int shrinker_nr_max;
>>>> @@ -706,7 +707,7 @@ void free_prealloced_shrinker(struct shrinker *shrinker)
>>>>   void register_shrinker_prepared(struct shrinker *shrinker)
>>>>   {
>>>>       down_write(&shrinker_rwsem);
>>>> -    list_add_tail(&shrinker->list, &shrinker_list);
>>>> +    list_add_tail_rcu(&shrinker->list, &shrinker_list);
>>>>       shrinker->flags |= SHRINKER_REGISTERED;
>>>>       shrinker_debugfs_add(shrinker);
>>>>       up_write(&shrinker_rwsem);
>>>> @@ -760,13 +761,15 @@ void unregister_shrinker(struct shrinker *shrinker)
>>>>           return;
>>>>
>>>>       down_write(&shrinker_rwsem);
>>>> -    list_del(&shrinker->list);
>>>> +    list_del_rcu(&shrinker->list);
>>>>       shrinker->flags &= ~SHRINKER_REGISTERED;
>>>>       if (shrinker->flags & SHRINKER_MEMCG_AWARE)
>>>>           unregister_memcg_shrinker(shrinker);
>>>>       debugfs_entry = shrinker_debugfs_remove(shrinker);
>>>>       up_write(&shrinker_rwsem);
>>>>
>>>> +    synchronize_srcu(&shrinker_srcu);
>>>> +
>>>>       debugfs_remove_recursive(debugfs_entry);
>>>>
>>>>       kfree(shrinker->nr_deferred);
>>>> @@ -786,6 +789,7 @@ void synchronize_shrinkers(void)
>>>>   {
>>>>       down_write(&shrinker_rwsem);
>>>>       up_write(&shrinker_rwsem);
>>>> +    synchronize_srcu(&shrinker_srcu);
>>>>   }
>>>>   EXPORT_SYMBOL(synchronize_shrinkers);
>>>>
>>>> @@ -996,6 +1000,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>>>   {
>>>>       unsigned long ret, freed = 0;
>>>>       struct shrinker *shrinker;
>>>> +    int srcu_idx;
>>>>
>>>>       /*
>>>>        * The root memcg might be allocated even though memcg is disabled
>>>> @@ -1007,10 +1012,10 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>>>       if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
>>>>           return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>>>>
>>>> -    if (!down_read_trylock(&shrinker_rwsem))
>>>> -        goto out;
>>>> +    srcu_idx = srcu_read_lock(&shrinker_srcu);
>>>>
>>>> -    list_for_each_entry(shrinker, &shrinker_list, list) {
>>>> +    list_for_each_entry_srcu(shrinker, &shrinker_list, list,
>>>> +                 srcu_read_lock_held(&shrinker_srcu)) {
>>>>           struct shrink_control sc = {
>>>>               .gfp_mask = gfp_mask,
>>>>               .nid = nid,
>>>> @@ -1021,19 +1026,9 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>>>>           if (ret == SHRINK_EMPTY)
>>>>               ret = 0;
>>>>           freed += ret;
>>>> -        /*
>>>> -         * Bail out if someone want to register a new shrinker to
>>>> -         * prevent the registration from being stalled for long periods
>>>> -         * by parallel ongoing shrinking.
>>>> -         */
>>>> -        if (rwsem_is_contended(&shrinker_rwsem)) {
>>>> -            freed = freed ? : 1;
>>>> -            break;
>>>> -        }
>>>>       }
>>>>
>>>> -    up_read(&shrinker_rwsem);
>>>> -out:
>>>> +    srcu_read_unlock(&shrinker_srcu, srcu_idx);
>>>>       cond_resched();
>>>>       return freed;
>>>>   }
>>>> -- 
>>>> 2.20.1
>>>>
>>>>
>>>
>>> Hi Qi,
>>>
>>> A different problem I realized after my old attempt to use SRCU was that the
>>> unregister_shrinker() path became quite slow due to the heavy synchronize_srcu()
>>> call. Both register_shrinker() *and* unregister_shrinker() are called frequently
>>> these days, and SRCU is too unfair to the unregister path IMO.
>>
>> Hi Sultan,
>>
>> IIUC, for unregister_shrinker(), the wait time is hardly longer with
>> SRCU than with shrinker_rwsem before.
>>
>> And I just did a simple test. After using the script in the cover letter to
>> increase the shrink_slab hotspot, I did umount 1k times at the same
>> time, and then I used bpftrace to measure the time consumption of
>> unregister_shrinker() as follows:
>>
>> bpftrace -e 'kprobe:unregister_shrinker { @start[tid] = nsecs; } kretprobe:unregister_shrinker /@start[tid]/ { @ns[comm] = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>>
>> @ns[umount]:
>> [16K, 32K)             3 |                                                    |
>> [32K, 64K)            66 |@@@@@@@@@@                                          |
>> [64K, 128K)           32 |@@@@@                                               |
>> [128K, 256K)          22 |@@@                                                 |
>> [256K, 512K)          48 |@@@@@@@                                             |
>> [512K, 1M)            19 |@@@                                                 |
>> [1M, 2M)             131 |@@@@@@@@@@@@@@@@@@@@@                               |
>> [2M, 4M)             313 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>> [4M, 8M)             302 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  |
>> [8M, 16M)             55 |@@@@@@@@@                                           |
>>
>> I see that the worst case for unregister_shrinker() is between 8ms and 16ms, which feels tolerable?
>
> The fundamental difference is that before the patchset this for_each_set_bit() iteration could be broken between two
> do_shrink_slab() calls, while after the patchset we can leave for_each_set_bit() only after visiting all set bits.
>
> Using only synchronize_srcu_expedited() won't help here.
>
> My opinion is we should restore a check similar to the rwsem_is_contended() check that we had before. Something like
> the below on top of your patchset merged into appropriate patch:
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 27ef9946ae8a..50e7812468ec 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -204,6 +204,7 @@ static void set_task_reclaim_state(struct task_struct *task,
>  LIST_HEAD(shrinker_list);
>  DEFINE_MUTEX(shrinker_mutex);
>  DEFINE_SRCU(shrinker_srcu);
> +static atomic_t shrinker_srcu_generation = ATOMIC_INIT(0);
>
>  #ifdef CONFIG_MEMCG
>  static int shrinker_nr_max;
> @@ -782,6 +783,7 @@ void unregister_shrinker(struct shrinker *shrinker)
>      debugfs_entry = shrinker_debugfs_remove(shrinker);
>      mutex_unlock(&shrinker_mutex);
>
> +    atomic_inc(&shrinker_srcu_generation);
>      synchronize_srcu(&shrinker_srcu);
>
>      debugfs_remove_recursive(debugfs_entry);
> @@ -799,6 +801,7 @@ EXPORT_SYMBOL(unregister_shrinker);
>   */
>  void synchronize_shrinkers(void)
>  {
> +    atomic_inc(&shrinker_srcu_generation);
>      synchronize_srcu(&shrinker_srcu);
>  }
>  EXPORT_SYMBOL(synchronize_shrinkers);
> @@ -908,7 +911,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>  {
>      struct shrinker_info *info;
>      unsigned long ret, freed = 0;
> -    int srcu_idx;
> +    int srcu_idx, generation;
>      int i;
>
>      if (!mem_cgroup_online(memcg))
> @@ -919,6 +922,7 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>      if (unlikely(!info))
>          goto unlock;
>
> +    generation = atomic_read(&shrinker_srcu_generation);
>      for_each_set_bit(i, info->map, info->map_nr_max) {
>          struct shrink_control sc = {
>              .gfp_mask = gfp_mask,
>              .nid = nid,
> @@ -965,6 +969,11 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
>              set_shrinker_bit(memcg, nid, i);
>          }
>          freed += ret;
> +
> +        if (atomic_read(&shrinker_srcu_generation) != generation) {
> +            freed = freed ? : 1;
> +            break;
> +        }
>      }
>  unlock:
>      srcu_read_unlock(&shrinker_srcu, srcu_idx);
> @@ -1004,7 +1013,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  {
>      unsigned long ret, freed = 0;
>      struct shrinker *shrinker;
> -    int srcu_idx;
> +    int srcu_idx, generation;
>
>      /*
>       * The root memcg might be allocated even though memcg is disabled
> @@ -1017,6 +1026,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>          return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>
>      srcu_idx = srcu_read_lock(&shrinker_srcu);
> +    generation = atomic_read(&shrinker_srcu_generation);
>
>      list_for_each_entry_srcu(shrinker, &shrinker_list, list,
>                   srcu_read_lock_held(&shrinker_srcu)) {
> @@ -1030,6 +1040,11 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>          if (ret == SHRINK_EMPTY)
>              ret = 0;
>          freed += ret;
> +
> +        if (atomic_read(&shrinker_srcu_generation) != generation) {
> +            freed = freed ? : 1;
> +            break;
> +        }
>      }
>
>      srcu_read_unlock(&shrinker_srcu, srcu_idx);

Even more, for memcg shrinkers we may unlock SRCU and continue iterations from the same shrinker id:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 27ef9946ae8a..0b197bba1257 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -204,6 +204,7 @@ static void set_task_reclaim_state(struct task_struct *task,
 LIST_HEAD(shrinker_list);
 DEFINE_MUTEX(shrinker_mutex);
 DEFINE_SRCU(shrinker_srcu);
+static atomic_t shrinker_srcu_generation = ATOMIC_INIT(0);

 #ifdef CONFIG_MEMCG
 static int shrinker_nr_max;
@@ -782,6 +783,7 @@ void unregister_shrinker(struct shrinker *shrinker)
     debugfs_entry = shrinker_debugfs_remove(shrinker);
     mutex_unlock(&shrinker_mutex);

+    atomic_inc(&shrinker_srcu_generation);
     synchronize_srcu(&shrinker_srcu);

     debugfs_remove_recursive(debugfs_entry);
@@ -799,6 +801,7 @@ EXPORT_SYMBOL(unregister_shrinker);
  */
 void synchronize_shrinkers(void)
 {
+    atomic_inc(&shrinker_srcu_generation);
     synchronize_srcu(&shrinker_srcu);
 }
 EXPORT_SYMBOL(synchronize_shrinkers);
@@ -908,18 +911,19 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 {
     struct shrinker_info *info;
     unsigned long ret, freed = 0;
-    int srcu_idx;
-    int i;
+    int srcu_idx, generation;
+    int i = 0;

     if (!mem_cgroup_online(memcg))
         return 0;
-
+again:
     srcu_idx = srcu_read_lock(&shrinker_srcu);
     info = shrinker_info_srcu(memcg, nid);
     if (unlikely(!info))
         goto unlock;

-    for_each_set_bit(i, info->map, info->map_nr_max) {
+    generation = atomic_read(&shrinker_srcu_generation);
+    for_each_set_bit_from(i, info->map, info->map_nr_max) {
         struct shrink_control sc = {
             .gfp_mask = gfp_mask,
             .nid = nid,
@@ -965,6 +969,11 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
             set_shrinker_bit(memcg, nid, i);
         }
         freed += ret;
+
+        if (atomic_read(&shrinker_srcu_generation) != generation) {
+            srcu_read_unlock(&shrinker_srcu, srcu_idx);
+            goto again;
+        }
     }
 unlock:
     srcu_read_unlock(&shrinker_srcu, srcu_idx);
@@ -1004,7 +1013,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 {
     unsigned long ret, freed = 0;
     struct shrinker *shrinker;
-    int srcu_idx;
+    int srcu_idx, generation;

     /*
      * The root memcg might be allocated even though memcg is disabled
@@ -1017,6 +1026,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
         return shrink_slab_memcg(gfp_mask, nid, memcg, priority);

     srcu_idx = srcu_read_lock(&shrinker_srcu);
+    generation = atomic_read(&shrinker_srcu_generation);

     list_for_each_entry_srcu(shrinker, &shrinker_list, list,
                  srcu_read_lock_held(&shrinker_srcu)) {
@@ -1030,6 +1040,11 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
         if (ret == SHRINK_EMPTY)
             ret = 0;
         freed += ret;
+
+        if (atomic_read(&shrinker_srcu_generation) != generation) {
+            freed = freed ? : 1;
+            break;
+        }
     }

     srcu_read_unlock(&shrinker_srcu, srcu_idx);
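For reference, the control flow both diffs rely on can be reduced to the standalone sketch below. This is only an illustration under simplifying assumptions, not kernel code: the SRCU read side is modeled by a plain mutex, and names such as shrink_one() and NR_IDS are invented for the example. A writer bumps a global generation counter before waiting for readers; a scanner that notices the counter change drops its read lock (so the waiter can finish) and resumes from the index it stopped at.

/*
 * Standalone sketch of the generation-counter restart pattern (illustration
 * only; shrink_one(), NR_IDS and the mutex stand in for do_shrink_slab(),
 * the shrinker bitmap and the SRCU read lock).
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_IDS 8

static atomic_int generation;                 /* bumped by the "unregister" side */
static pthread_mutex_t read_lock = PTHREAD_MUTEX_INITIALIZER;

static long shrink_one(int id)                /* stand-in for do_shrink_slab() */
{
    return id;
}

static long scan_all(void)
{
    long freed = 0;
    int i = 0, gen;

again:
    pthread_mutex_lock(&read_lock);
    gen = atomic_load(&generation);
    for (; i < NR_IDS; i++) {
        freed += shrink_one(i);

        /*
         * A concurrent unregister bumped the counter: drop the read lock
         * so the waiter can make progress, then resume from the same id.
         */
        if (atomic_load(&generation) != gen) {
            pthread_mutex_unlock(&read_lock);
            goto again;
        }
    }
    pthread_mutex_unlock(&read_lock);
    return freed;
}

static void unregister_one(void)
{
    atomic_fetch_add(&generation, 1);         /* ask scanners to back off */
    pthread_mutex_lock(&read_lock);           /* wait for the current scanner, */
    pthread_mutex_unlock(&read_lock);         /* like synchronize_srcu() does  */
}

int main(void)
{
    unregister_one();
    printf("freed %ld\n", scan_all());
    return 0;
}

As in the second diff, the scan index is intentionally not reset on restart, which mirrors for_each_set_bit_from() continuing from the same shrinker id after the SRCU read lock is retaken.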