Subject: Re: [PATCH] memcg: avoid dead loop when setting memory.max
From: Chen Ridong <chenridong@huaweicloud.com>
Date: Tue, 11 Feb 2025 19:29:42 +0800
To: Michal Hocko
Cc: hannes@cmpxchg.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev,
 muchun.song@linux.dev, akpm@linux-foundation.org, cgroups@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, chenridong@huawei.com,
 wangweiyang2@huawei.com
Message-ID: <10f34835-b604-4fbe-8bca-8f7d762d4419@huaweicloud.com>
References: <20250211081819.33307-1-chenridong@huaweicloud.com>
On 2025/2/11 17:02, Michal Hocko wrote:
> On Tue 11-02-25 08:18:19, Chen Ridong wrote:
>> From: Chen Ridong
>>
>> A softlockup issue was found with stress test:
>>  watchdog: BUG: soft lockup - CPU#27 stuck for 26s! [migration/27:181]
>>  CPU: 27 UID: 0 PID: 181 Comm: migration/27 6.14.0-rc2-next-20250210 #1
>>  Stopper: multi_cpu_stop <- stop_machine_from_inactive_cpu
>>  RIP: 0010:stop_machine_yield+0x2/0x10
>>  RSP: 0000:ff4a0dcecd19be48 EFLAGS: 00000246
>>  RAX: ffffffff89c0108f RBX: ff4a0dcec03afe44 RCX: 0000000000000000
>>  RDX: ff1cdaaf6eba5808 RSI: 0000000000000282 RDI: ff1cda80c1775a40
>>  RBP: 0000000000000001 R08: 00000011620096c6 R09: 7fffffffffffffff
>>  R10: 0000000000000001 R11: 0000000000000100 R12: ff1cda80c1775a40
>>  R13: 0000000000000000 R14: 0000000000000001 R15: ff4a0dcec03afe20
>>  FS:  0000000000000000(0000) GS:ff1cdaaf6eb80000(0000)
>>  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>  CR2: 0000000000000000 CR3: 00000025e2c2a001 CR4: 0000000000773ef0
>>  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>  PKRU: 55555554
>>  Call Trace:
>>   multi_cpu_stop+0x8f/0x100
>>   cpu_stopper_thread+0x90/0x140
>>   smpboot_thread_fn+0xad/0x150
>>   kthread+0xc2/0x100
>>   ret_from_fork+0x2d/0x50
>>
>> The stress test involves CPU hotplug operations and memory control group
>> (memcg) operations. The scenario can be described as follows:
>>
>>  echo xx > memory.max    cache_ap_online                  oom_reaper
>>       (CPU23)                (CPU50)
>>  xx < usage        stop_machine_from_inactive_cpu
>>  for(;;)           // all active cpus
>>    trigger OOM         queue_stop_cpus_work
>>    // waiting oom_reaper
>>                        multi_cpu_stop(migration/xx)
>>                        // sync all active cpus ack
>>                        // waiting cpu23 ack
>>                        // CPU50 loops in multi_cpu_stop
>>                                                       waiting cpu50
>>
>> Detailed explanation:
>> 1. When the usage is larger than xx, an OOM may be triggered. If the
>>    process does not handle the kill signal immediately, it will loop
>>    in memory_max_write.
>
> Do I get it right that the issue is that mem_cgroup_out_of_memory
> doesn't have any cond_resched, so it cannot yield to the stopped kthread?
> oom itself cannot make any progress because the oom victim is blocked
> as per 3).
>

Yes, the same task was evaluated as the victim, and it is blocked as
described in point 3). Consequently, oom_evaluate_task set
oc->chosen = (void *)-1UL (abort), out_of_memory returned !!oc->chosen,
and no cond_resched() was ever invoked:

  for(;;) {
      ...
      mem_cgroup_out_of_memory
        out_of_memory
          select_bad_process
            oom_evaluate_task
              oc->chosen = (void *)-1UL;
        return !!oc->chosen;
  }

>> 2. When cache_ap_online is triggered, multi_cpu_stop is queued to the
>>    active cpus. Within the multi_cpu_stop function, it attempts to
>>    synchronize the CPU states. However, CPU23 does not acknowledge
>>    because it is stuck in the for(;;) loop.
>> 3. The oom_reaper process is blocked because CPU50 is in a loop, waiting
>>    for CPU23 to acknowledge the synchronization request.
>> 4. Finally, this forms a cyclic dependency, leading to a softlockup and
>>    dead loop.
>>
>> To fix this issue, add cond_resched() in memory_max_write, so that
>> it will not block the migration task.
>
> My first question was why this is not a problem in other
> allocation/charge paths, but this one is different because it doesn't
> ever try to reclaim after MAX_RECLAIM_RETRIES reclaim rounds.
> We do have scheduling points in the reclaim path which are no longer
> triggered after we hit the oom situation in this case.
>
> I was thinking about having a guaranteed cond_resched when the oom
> killer fails to find a victim, but it seems the simplest fix for this
> particular corner case is to add cond_resched as you did here.
> Hopefully we will get rid of it very soon when !PREEMPT is removed.
>
> Btw. this could be a problem on a single CPU machine even without CPU
> hotplug, as the oom reaper won't run until memory_max_write yields the
> cpu.
>
>> Fixes: b6e6edcfa405 ("mm: memcontrol: reclaim and OOM kill when shrinking memory.max below usage")
>> Signed-off-by: Chen Ridong
>
> Acked-by: Michal Hocko
>

Thank you very much.
Best regards,
Ridong

>> ---
>>  mm/memcontrol.c | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 8d21c1a44220..16f3bdbd37d8 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -4213,6 +4213,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
>>  		memcg_memory_event(memcg, MEMCG_OOM);
>>  		if (!mem_cgroup_out_of_memory(memcg, GFP_KERNEL, 0))
>>  			break;
>> +		cond_resched();
>>  	}
>>
>>  	memcg_wb_domain_size_changed(memcg);
>> --
>> 2.34.1
>