From: Michal Hocko
To: Muchun Song
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Cgroups,
 Linux Memory Management List, LKML
Subject: Re: [External] Re: [PATCH] mm: memcontrol: fix missing wakeup oom task
Date: Fri, 5 Feb 2021 17:04:07 +0100
References: <20210205062310.74268-1-songmuchun@bytedance.com>

On Fri 05-02-21 23:30:36, Muchun Song wrote:
> On Fri, Feb 5, 2021 at 8:20 PM Michal Hocko wrote:
> >
> > On Fri 05-02-21 19:04:19, Muchun Song wrote:
> > > On Fri, Feb 5, 2021 at 6:21 PM Michal Hocko wrote:
> > > >
> > > > On Fri 05-02-21 17:55:10, Muchun Song wrote:
> > > > > On Fri, Feb 5, 2021 at 4:24 PM Michal Hocko wrote:
> > > > > >
> > > > > > On Fri 05-02-21 14:23:10, Muchun Song wrote:
> > > > > > > We call memcg_oom_recover() in uncharge_batch() to wake up the OOM
> > > > > > > task when a page is uncharged, but for slab pages we do not do this
> > > > > > > when the page is uncharged.
> > > > > >
> > > > > > How does the patch deal with this?
> > > > >
> > > > > When we uncharge a slab page via __memcg_kmem_uncharge,
> > > > > actually, this path forgets to do this for us compared to
> > > > > uncharge_batch(). Right?
> > > >
> > > > Yes, this was more or less clear (still, it would have been nicer to be
> > > > explicit). But you still haven't replied to my question, I believe. I
> > > > assume you rely on refill_stock doing the draining, but how does this
> > > > address the problem? Is it sufficient to do wakeups in the batched way?
> > >
> > > Sorry, the subject title may not be suitable. IIUC, memcg_oom_recover
> > > aims to wake up the OOM task when we uncharge the page.
> >
> > Yes, your understanding is correct. This is a way to pro-actively wake
> > up oom victims when the memcg oom handling is outsourced to
> > userspace. Please note that I haven't objected to the problem statement.
> >
> > I was questioning the fix for the problem.
> >
> > > I see uncharge_batch always does this. I am confused why
> > > __memcg_kmem_uncharge does not.
> >
> > Very likely an omission. I haven't checked closely but I suspect this
> > has been introduced by the recent kmem accounting changes.
> >
> > Why didn't you simply do the same thing and call memcg_oom_recover
> > unconditionally, instead of depending on the draining? I suspect this was
> > because you wanted to recover also when draining, which is not necessary
> > as pointed out in the other email.
>
> Thanks for your explanations. You are right. It was my mistake to depend
> on the draining. I should call memcg_oom_recover directly in
> __memcg_kmem_uncharge. Right?

Yes.
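To make that concrete, something along these lines is what I would expect
(an illustrative, untested sketch only; the body of __memcg_kmem_uncharge
below is approximated and may not match the current tree exactly):

/*
 * Untested sketch: mirror what uncharge_batch() already does and wake up
 * any oom victim waiting on this memcg once the kmem charge is returned.
 */
void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
{
        if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
                page_counter_uncharge(&memcg->kmem, nr_pages);

        refill_stock(memcg, nr_pages);

        /* the call uncharge_batch() already has and this path was missing */
        memcg_oom_recover(memcg);
}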

> > [...]
> > > > > > Does this lead to any code generation improvements? I would expect
> > > > > > the compiler to be clever enough to inline static functions if that
> > > > > > pays off. If yes, make this a patch on its own.
> > > > >
> > > > > I have disassembled the code and I see memcg_oom_recover is not
> > > > > inlined. Maybe because memcg_oom_recover has a lot of callers.
> > > > > Just a guess.
> > > > >
> > > > > (gdb) disassemble uncharge_batch
> > > > > [...]
> > > > >    0xffffffff81341c73 <+227>:  callq  0xffffffff8133c420
> > > > >    0xffffffff81341c78 <+232>:  jmpq   0xffffffff81341bc0
> > > > >    0xffffffff81341c7d <+237>:  callq  0xffffffff8133e2c0
> > > >
> > > > So does it really help to do the inlining?
> > >
> > > I just think memcg_oom_recover is very small, so inlining may be
> > > a good choice. Maybe I am wrong.
> >
> > In general I am not overly keen on changes without a proper
> > justification. In this particular case I would understand that a
> > function call that will almost never do anything but the test (because
> > oom_disabled is rarely used) is just a waste of cycles in some hot
> > paths (e.g. kmem uncharge). Maybe this even has some visible performance
> > benefit. If this is really the case, then would it make sense to guard
> > this test by the existing cgroup_subsys_on_dfl(memory_cgrp_subsys)?
>
> Agree. I think it can improve performance when this
> function is inlined. Guarding the test should also be
> an improvement on cgroup v2.

I would be surprised if this was measurable, but you can give it a try. A
static key would be a reasonable argument for inlining on its own.

--
Michal Hocko
SUSE Labs
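For illustration, the guarded variant floated above could look roughly like
the following (an untested sketch of the idea only; the field and wait-queue
names follow the existing mm/memcontrol.c, but the exact placement and
semantics would need to be double-checked):

/*
 * Untested sketch: keep the common "nobody is waiting" case a cheap inline
 * test and skip it entirely on the default (v2) hierarchy, where memcg oom
 * handling is not delegated to userspace.  cgroup_subsys_on_dfl() is backed
 * by a static key, so the added guard is nearly free.
 */
static inline void memcg_oom_recover(struct mem_cgroup *memcg)
{
        if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
                return;

        if (memcg && memcg->under_oom)
                __wake_up(&memcg_oom_waitq, TASK_NORMAL, 0, memcg);
}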