From: Yang Shi
Date: Fri, 29 Jan 2021 09:22:04 -0800
Subject: Re: [v5 PATCH 08/11] mm: vmscan: use per memcg nr_deferred of shrinker
To: Kirill Tkhai
Cc: Roman Gushchin, Shakeel Butt, Dave Chinner, Johannes Weiner, Michal Hocko,
    Andrew Morton, Linux MM, Linux FS-devel Mailing List,
    Linux Kernel Mailing List
In-Reply-To: <7c0152a2-f846-c696-4dec-63f285d20ae5@virtuozzo.com>
References: <20210127233345.339910-1-shy828301@gmail.com>
    <20210127233345.339910-9-shy828301@gmail.com>
    <7c0152a2-f846-c696-4dec-63f285d20ae5@virtuozzo.com>

On Fri, Jan 29, 2021 at 6:59 AM Kirill Tkhai wrote:
>
> On 29.01.2021 17:55, Kirill Tkhai wrote:
> > On 28.01.2021 02:33, Yang Shi wrote:
> >> Use per memcg's nr_deferred for memcg aware shrinkers. The shrinker's nr_deferred
> >> will be used in the following cases:
> >>   1. Non memcg aware shrinkers
> >>   2. !CONFIG_MEMCG
> >>   3. memcg is disabled by boot parameter
> >>
> >> Signed-off-by: Yang Shi
> >> ---
> >>  mm/vmscan.c | 87 ++++++++++++++++++++++++++++++++++++++++++++---------
> >>  1 file changed, 73 insertions(+), 14 deletions(-)
> >>
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 20be0db291fe..e1f8960f5cf6 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -205,7 +205,8 @@ static int expand_one_shrinker_info(struct mem_cgroup *memcg,
> >>
> >>  	for_each_node(nid) {
> >>  		old = rcu_dereference_protected(
> >> -			mem_cgroup_nodeinfo(memcg, nid)->shrinker_info, true);
> >> +			mem_cgroup_nodeinfo(memcg, nid)->shrinker_info,
> >> +			lockdep_is_held(&shrinker_rwsem));
> >
> > Wouldn't it be better to pack this repeating pattern into a helper function, e.g.:
> >
> > static struct shrinker_info *memcg_shrinker_info(struct mem_cgroup *memcg, int nid)
> > {
> > 	return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
> > 					 lockdep_is_held(&shrinker_rwsem));
> > }
> >
> > ?
> >
> > Even shrink_slab_memcg() may want to use it.
>
> Hm, I see you already introduced a helper in [10/11], but it is used in only one place.
> Then, we should use it for all places (introduce the helper earlier).

Yes, good point. Will fix in v6.
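Something along these lines is what I have in mind (just an untested sketch to
confirm the direction; the final helper name and the patch it is introduced in
will be settled in v6):

static struct shrinker_info *memcg_shrinker_info(struct mem_cgroup *memcg,
						 int nid)
{
	/* Sketch: one place that pairs the deref with the lockdep check */
	return rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
					 lockdep_is_held(&shrinker_rwsem));
}

static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker,
				    struct mem_cgroup *memcg)
{
	struct shrinker_info *info = memcg_shrinker_info(memcg, nid);

	return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
}

static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
				  struct mem_cgroup *memcg)
{
	struct shrinker_info *info = memcg_shrinker_info(memcg, nid);

	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
}

The other lookups done under shrinker_rwsem could then reuse the same helper,
as you suggest.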
>
> >> 		/* Not yet online memcg */
> >> 		if (!old)
> >> 			return 0;
> >> @@ -239,7 +240,8 @@ void free_shrinker_info(struct mem_cgroup *memcg)
> >>
> >> 	for_each_node(nid) {
> >> 		pn = mem_cgroup_nodeinfo(memcg, nid);
> >> -		info = rcu_dereference_protected(pn->shrinker_info, true);
> >> +		info = rcu_dereference_protected(pn->shrinker_info,
> >> +						 lockdep_is_held(&shrinker_rwsem));
> >> 		if (info)
> >> 			kvfree(info);
> >> 		rcu_assign_pointer(pn->shrinker_info, NULL);
> >> @@ -360,6 +362,27 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
> >> 	up_write(&shrinker_rwsem);
> >> }
> >>
> >> +static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker,
> >> +				    struct mem_cgroup *memcg)
> >> +{
> >> +	struct shrinker_info *info;
> >> +
> >> +	info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
> >> +					 lockdep_is_held(&shrinker_rwsem));
> >> +	return atomic_long_xchg(&info->nr_deferred[shrinker->id], 0);
> >> +}
> >> +
> >> +static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
> >> +				   struct mem_cgroup *memcg)
> >> +{
> >> +	struct shrinker_info *info;
> >> +
> >> +	info = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_info,
> >> +					 lockdep_is_held(&shrinker_rwsem));
> >> +
> >> +	return atomic_long_add_return(nr, &info->nr_deferred[shrinker->id]);
> >> +}
> >> +
> >> static bool cgroup_reclaim(struct scan_control *sc)
> >> {
> >> 	return sc->target_mem_cgroup;
> >> @@ -398,6 +421,18 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
> >> {
> >> }
> >>
> >> +static long count_nr_deferred_memcg(int nid, struct shrinker *shrinker,
> >> +				    struct mem_cgroup *memcg)
> >> +{
> >> +	return 0;
> >> +}
> >> +
> >> +static long set_nr_deferred_memcg(long nr, int nid, struct shrinker *shrinker,
> >> +				   struct mem_cgroup *memcg)
> >> +{
> >> +	return 0;
> >> +}
> >> +
> >> static bool cgroup_reclaim(struct scan_control *sc)
> >> {
> >> 	return false;
> >> @@ -409,6 +444,39 @@ static bool writeback_throttling_sane(struct scan_control *sc)
> >> }
> >> #endif
> >>
> >> +static long count_nr_deferred(struct shrinker *shrinker,
> >> +			      struct shrink_control *sc)
> >> +{
> >> +	int nid = sc->nid;
> >> +
> >> +	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> >> +		nid = 0;
> >> +
> >> +	if (sc->memcg &&
> >> +	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
> >> +		return count_nr_deferred_memcg(nid, shrinker,
> >> +					       sc->memcg);
> >> +
> >> +	return atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
> >> +}
> >> +
> >> +
> >> +static long set_nr_deferred(long nr, struct shrinker *shrinker,
> >> +			    struct shrink_control *sc)
> >> +{
> >> +	int nid = sc->nid;
> >> +
> >> +	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> >> +		nid = 0;
> >> +
> >> +	if (sc->memcg &&
> >> +	    (shrinker->flags & SHRINKER_MEMCG_AWARE))
> >> +		return set_nr_deferred_memcg(nr, nid, shrinker,
> >> +					     sc->memcg);
> >> +
> >> +	return atomic_long_add_return(nr, &shrinker->nr_deferred[nid]);
> >> +}
> >> +
> >> /*
> >>  * This misses isolated pages which are not accounted for to save counters.
> >>  * As the data only determines if reclaim or compaction continues, it is
> >> @@ -545,14 +613,10 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >> 	long freeable;
> >> 	long nr;
> >> 	long new_nr;
> >> -	int nid = shrinkctl->nid;
> >> 	long batch_size = shrinker->batch ? shrinker->batch
> >> 					  : SHRINK_BATCH;
> >> 	long scanned = 0, next_deferred;
> >>
> >> -	if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
> >> -		nid = 0;
> >> -
> >> 	freeable = shrinker->count_objects(shrinker, shrinkctl);
> >> 	if (freeable == 0 || freeable == SHRINK_EMPTY)
> >> 		return freeable;
> >> @@ -562,7 +626,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >> 	 * and zero it so that other concurrent shrinker invocations
> >> 	 * don't also do this scanning work.
> >> 	 */
> >> -	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
> >> +	nr = count_nr_deferred(shrinker, shrinkctl);
> >>
> >> 	total_scan = nr;
> >> 	if (shrinker->seeks) {
> >> @@ -653,14 +717,9 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >> 		next_deferred = 0;
> >> 	/*
> >> 	 * move the unused scan count back into the shrinker in a
> >> -	 * manner that handles concurrent updates. If we exhausted the
> >> -	 * scan, there is no need to do an update.
> >> +	 * manner that handles concurrent updates.
> >> 	 */
> >> -	if (next_deferred > 0)
> >> -		new_nr = atomic_long_add_return(next_deferred,
> >> -						&shrinker->nr_deferred[nid]);
> >> -	else
> >> -		new_nr = atomic_long_read(&shrinker->nr_deferred[nid]);
> >> +	new_nr = set_nr_deferred(next_deferred, shrinker, shrinkctl);
> >>
> >> 	trace_mm_shrink_slab_end(shrinker, shrinkctl->nid, freed, nr, new_nr, total_scan);
> >> 	return freed;
> >>
> >
> >