From: Yafang Shao
Date: Mon, 3 Aug 2020 22:18:52 +0800
Subject: Re: [PATCH] mm, memcg: do full scan initially in force_empty
To: Michal Hocko, longman@redhat.com
Cc: Johannes Weiner, Andrew Morton, Linux MM
In-Reply-To: <20200803135636.GN5174@dhcp22.suse.cz>
References: <20200728074032.1555-1-laoar.shao@gmail.com>
 <20200730112620.GH18727@dhcp22.suse.cz>
 <20200803101226.GH5174@dhcp22.suse.cz>
 <20200803135636.GN5174@dhcp22.suse.cz>

On Mon, Aug 3, 2020 at 9:56 PM Michal Hocko wrote:
>
> On Mon 03-08-20 21:20:44, Yafang Shao wrote:
> > On Mon, Aug 3, 2020 at 6:12 PM Michal Hocko wrote:
> > >
> > > On Fri 31-07-20 09:50:04, Yafang Shao wrote:
> > > > On Thu, Jul 30, 2020 at 7:26 PM Michal Hocko wrote:
> > > > >
> > > > > On Tue 28-07-20 03:40:32, Yafang Shao wrote:
> > > > > > Sometimes we use memory.force_empty to drop pages in a memcg to work
> > > > > > around some memory pressure issues. When we use force_empty, we want
> > > > > > the pages to be reclaimed ASAP; however, force_empty reclaims pages
> > > > > > like a regular reclaimer, which scans the page cache LRUs starting at
> > > > > > DEF_PRIORITY and only drops to priority 0 for a full scan at the end.
> > > > > > That is a waste of time, so we'd better do the full scan initially in
> > > > > > force_empty.
> > > > >
> > > > > Do you have any numbers please?
> > > > >
> > > >
> > > > Unfortunately the numbers don't improve noticeably, as they are
> > > > directly proportional to the total number of pages to be scanned.
> > >
> > > Your changelog claims an optimization and that should be backed by some
> > > numbers. It is true that reclaim at a higher priority behaves slightly
> > > and subtly differently, but that urges for even more details in the
> > > changelog.
> > >
> >
> > With the additional change below (nr_to_scan also changed), the elapsed
> > time of force_empty can be reduced by 10%.
> >
> > @@ -3208,6 +3211,7 @@ static inline bool memcg_has_children(struct mem_cgroup *memcg)
> >  static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> >  {
> >         int nr_retries = MEM_CGROUP_RECLAIM_RETRIES;
> > +       unsigned long size;
> >
> >         /* we call try-to-free pages for make this cgroup empty */
> >         lru_add_drain_all();
> > @@ -3215,14 +3219,15 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
> >                 drain_all_stock(memcg);
> >
> >         /* try to free all pages in this cgroup */
> > -       while (nr_retries && page_counter_read(&memcg->memory)) {
> > +       while (nr_retries && (size = page_counter_read(&memcg->memory))) {
> >                 int progress;
> >
> >                 if (signal_pending(current))
> >                         return -EINTR;
> >
> > -               progress = try_to_free_mem_cgroup_pages(memcg, 1,
> > -                                                       GFP_KERNEL, true);
> > +               progress = try_to_free_mem_cgroup_pages(memcg, size,
> > +                                                       GFP_KERNEL, true,
> > +                                                       0);
>
> Have you tried this change without changing the reclaim priority?
>

I tried it again. It seems the improvement is mostly due to the change of
nr_to_reclaim rather than the reclaim priority, i.e. this hunk:

-               progress = try_to_free_mem_cgroup_pages(memcg, 1,
+               progress = try_to_free_mem_cgroup_pages(memcg, size,

> > Below are the numbers for a 16G memcg filled with clean pagecache.
> >
> > Without these changes:
> >         $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> >         real    0m2.247s
> >         user    0m0.000s
> >         sys     0m1.722s
> >
> > With these changes:
> >         $ time echo 1 > /sys/fs/cgroup/memory/foo/memory.force_empty
> >         real    0m2.053s
> >         user    0m0.000s
> >         sys     0m1.529s
> >
> > But I'm not sure whether we should make this improvement, because
> > force_empty is not a critical path.
>
> Well, an isolated change to force_empty would be more acceptable, but it
> is worth noting that a very large reclaim target might affect the
> userspace triggering this path because it will potentially increase the
> latency to process any signals. I do not expect this to be a huge
> problem in practice, because even reclaim for a smaller target can take
> quite long if the memory is not really reclaimable and it has to do
> the full world scan. Moreover, most userspace will simply do
>         echo 1 > $MEMCG_PAGE/force_empty
> and only care about killing that if it takes too long.
>

We may do it in a script that force-empties many memcgs at the same time.
Of course we could measure the time it takes to force empty each of them,
but that would be complicated.
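
For illustration only, a minimal sketch of such a script might look like the
one below. It assumes the cgroup v1 memory controller is mounted at
/sys/fs/cgroup/memory; the memcg names are placeholders, not taken from this
thread.

        #!/bin/sh
        # Force-empty a list of memcgs and report how long each one takes.
        # The write to memory.force_empty is synchronous, so timing the write
        # gives the per-memcg reclaim latency.
        for cg in foo bar baz; do
                knob=/sys/fs/cgroup/memory/$cg/memory.force_empty
                [ -f "$knob" ] || continue      # skip memcgs that do not exist
                start=$(date +%s.%N)
                echo 1 > "$knob"
                end=$(date +%s.%N)
                echo "$cg: $(echo "$end - $start" | bc) s"
        done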
> > > > But then I noticed that force_empty will try to write dirty pages, which
> > > > is not expected by us, because this behavior may be dangerous in a
> > > > production environment.
> > >
> > > I do not understand your claim here. Direct reclaim doesn't write dirty
> > > page cache pages directly.
> >
> > It will write dirty pages once sc->priority drops to a very low number:
> >
> >         if (sc->priority < DEF_PRIORITY - 2)
> >                 sc->may_writepage = 1;
>
> OK, I see what you mean now. Please have a look above that check:
>
>                 /*
>                  * Only kswapd can writeback filesystem pages
>                  * to avoid risk of stack overflow. But avoid
>                  * injecting inefficient single-page IO into
>                  * flusher writeback as much as possible: only
>                  * write pages when we've encountered many
>                  * dirty pages, and when we've already scanned
>                  * the rest of the LRU for clean pages and see
>                  * the same dirty pages again (PageReclaim).
>                  */
>
> > > And it is even less clear why that would be dangerous if it did.
> > >
> >
> > It will generate many IOs, which may block the others.
> >
> > > > What do you think about introducing a per memcg drop_cache?
> > >
> > > I do not like the global drop_cache and per memcg is not very much
> > > different. This all shouldn't be really necessary because we do have
> > > means to reclaim memory in a memcg.
> > > --
> >
> > We used to find an issue that there were many negative dentries in some
> > memcgs.
>
> Yes, negative dentries can build up, but the memory reclaim should be
> pretty effective at reclaiming them.
>
> > These negative dentries were introduced by some specific workload in
> > these memcgs, and we want to drop them as soon as possible.
> > But unfortunately there is no good way to drop them except
> > force_empty or the global drop_caches.
>
> You can use memcg limits (e.g. memory.high) to pro-actively reclaim
> excess memory. Have you tried that?
>
> > force_empty will also drop the pagecache pages, which is not
> > expected by us.
>
> force_empty is intended to reclaim _all_ pages.
>
> > The global drop_caches can't work either, because it will drop slabs in
> > other memcgs.
> > That is why I want to introduce a per memcg drop_caches.
>
> Problems with negative dentries have already been discussed in the past.
> I believe there was no conclusion so far. Please try to dig into the
> archives.

I have read Waiman's proposal, but it seems there isn't a conclusion yet.

If the kernel can't fix this issue perfectly, then giving the user a
chance to work around it would be a possible solution - drop_caches is
that kind of workaround.

[ adding Waiman to CC ]

--
Thanks
Yafang
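
A rough sketch of the "memcg limits" approach Michal suggests above, assuming
a cgroup v2 hierarchy where memory.high is available (the cgroup path and the
1G target are placeholders, not anything from this thread):

        #!/bin/sh
        # Pro-actively reclaim a memcg by temporarily lowering memory.high.
        # Writing a value below the current usage makes the kernel try to
        # reclaim the cgroup down toward that value before the write returns;
        # restoring the old value afterwards leaves the configuration as it was.
        CG=/sys/fs/cgroup/foo

        old=$(cat "$CG/memory.high")            # remember the configured limit
        echo 1G > "$CG/memory.high"             # trigger reclaim toward 1G
        echo "$old" > "$CG/memory.high"         # restore the original limit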