Date: Thu, 11 Jan 2024 14:49:57 -0500
From: Johannes Weiner <hannes@cmpxchg.org>
To: Roman Gushchin
Cc: Andrew Morton, Michal Hocko, Shakeel Butt, Muchun Song, Tejun Heo,
	Dan Schatzberg, cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: memcontrol: don't throttle dying tasks on memory.high
Message-ID: <20240111194957.GA440376@cmpxchg.org>
References: <20240111132902.389862-1-hannes@cmpxchg.org>
	<20240111192807.GA424308@cmpxchg.org>

On Thu, Jan 11, 2024 at 11:38:09AM -0800, Roman Gushchin wrote:
> On Thu, Jan 11, 2024 at 02:28:07PM -0500, Johannes Weiner wrote:
> > On Thu, Jan 11, 2024 at 09:59:11AM -0800, Roman Gushchin wrote:
> > > On Thu, Jan 11, 2024 at 08:29:02AM -0500, Johannes Weiner wrote:
> > > > While investigating hosts with high cgroup memory pressures, Tejun
> > > > found culprit zombie tasks that were holding on to a lot of
> > > > memory, had SIGKILL pending, but were stuck in memory.high reclaim.
> > > >
> > > > In the past, we used to always force-charge allocations from tasks
> > > > that were exiting in order to accelerate them dying and freeing up
> > > > their rss. This changed for memory.max in a4ebf1b6ca1e ("memcg:
> > > > prohibit unconditional exceeding the limit of dying tasks"); it noted
> > > > that this can cause (userspace inducible) containment failures, so it
> > > > added a mandatory reclaim and OOM kill cycle before forcing charges.
> > > > At the time, memory.high enforcement was handled in the userspace
> > > > return path, which isn't reached by dying tasks, and so memory.high
> > > > was still never enforced by dying tasks.
> > > >
> > > > When c9afe31ec443 ("memcg: synchronously enforce memory.high for large
> > > > overcharges") added synchronous reclaim for memory.high, it added
> > > > unconditional memory.high enforcement for dying tasks as well. The
> > > > callstack shows that this is the path where the zombie is stuck.
> > > >
> > > > We need to accelerate dying tasks getting past memory.high, but we
> > > > cannot do it quite the same way as we do for memory.max: memory.max is
> > > > enforced strictly, and tasks aren't allowed to move past it without
> > > > FIRST reclaiming and OOM killing if necessary. This ensures very small
> > > > levels of excess. With memory.high, though, enforcement happens lazily
> > > > after the charge, and OOM killing is never triggered. A lot of
> > > > concurrent threads could have pushed, or could actively be pushing,
> > > > the cgroup into excess. The dying task will enter reclaim on every
> > > > allocation attempt, with little hope of restoring balance.
> > > >
> > > > To fix this, skip synchronous memory.high enforcement on dying tasks
> > > > altogether again. Update memory.high path documentation while at it.
> > >
> > > It makes total sense to me.
> > > Acked-by: Roman Gushchin
> >
> > Thanks
> >
> > > However if tasks can get stuck for a long time in the "high reclaim" state,
> > > shouldn't we also handle the case when tasks are being killed during the
> > > reclaim? E.g. something like this (completely untested):
> >
> > Yes, that's probably a good idea.
> >
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index c4c422c81f93..9f971fc6aae8 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -2465,6 +2465,9 @@ static unsigned long reclaim_high(struct mem_cgroup *memcg,
> > >                      READ_ONCE(memcg->memory.high))
> > >                         continue;
> > >
> > > +               if (task_is_dying())
> > > +                       break;
> > > +
> > >                 memcg_memory_event(memcg, MEMCG_HIGH);
> > >
> > >                 psi_memstall_enter(&pflags);
> >
> > I think we can skip this one. The loop is for traversing from the
> > charging cgroup to the one that has memory.high set and breached, and
> > then reclaim it. It's not expected to run multiple reclaims.
>
> Yes, the next one is probably enough (hard to say for me without knowing
> exactly where those dying processes are getting stuck - you should have
> actual stacktraces I guess).

A bit tricky to say. Tejun managed to get a trace from a crashdump, but
you can't tell where exactly it's looping:

#0  arch_atomic_dec_and_test (./arch/x86/include/asm/atomic.h:123:9)
#1  atomic_dec_and_test (./include/linux/atomic/atomic-instrumented.h:576:9)
#2  page_ref_dec_and_test (./include/linux/page_ref.h:210:12)
#3  put_page_testzero (./include/linux/mm.h:999:9)
#4  folio_put_testzero (./include/linux/mm.h:1004:9)
#5  move_folios_to_lru (mm/vmscan.c:2495:7)
#6  shrink_inactive_list (mm/vmscan.c:2594:2)
#7  shrink_list (mm/vmscan.c:2835:9)
#8  shrink_lruvec (mm/vmscan.c:6271:21)
#9  shrink_node_memcgs (mm/vmscan.c:6458:3)
#10 shrink_node (mm/vmscan.c:6493:2)
#11 shrink_zones (mm/vmscan.c:6728:3)
#12 do_try_to_free_pages (mm/vmscan.c:6790:3)
#13 try_to_free_mem_cgroup_pages (mm/vmscan.c:7105:17)
#14 reclaim_high (mm/memcontrol.c:2451:19)
#15 mem_cgroup_handle_over_high (mm/memcontrol.c:2670:17)
#16 try_charge_memcg (mm/memcontrol.c:2887:3)
#17 try_charge (mm/memcontrol.c:2898:9)
#18 charge_memcg (mm/memcontrol.c:7062:8)
#19 __mem_cgroup_charge (mm/memcontrol.c:7083:8)
#20 mem_cgroup_charge (./include/linux/memcontrol.h:682:9)
#21 __filemap_add_folio (mm/filemap.c:860:15)
#22 filemap_add_folio (mm/filemap.c:942:8)
#23 page_cache_ra_unbounded (mm/readahead.c:251:7)
#24 do_sync_mmap_readahead (mm/filemap.c:0)
#25 filemap_fault (mm/filemap.c:3288:10)
#26 __do_fault (mm/memory.c:4184:8)
#27 do_read_fault (mm/memory.c:4538:8)
#28 do_fault (mm/memory.c:4667:9)
#29 do_pte_missing (mm/memory.c:3648:10)
#30 handle_pte_fault (mm/memory.c:4955:10)
#31 __handle_mm_fault (mm/memory.c:5097:9)
#32 handle_mm_fault (mm/memory.c:5251:9)
#33 do_user_addr_fault (arch/x86/mm/fault.c:1392:10)
#34 handle_page_fault (arch/x86/mm/fault.c:1486:3)
#35 exc_page_fault (arch/x86/mm/fault.c:1542:2)
#36 asm_exc_page_fault+0x22/0x27 (./arch/x86/include/asm/idtentry.h:570)

There is some circumstantial evidence: this thread has SIGKILL set and
memory pressure is high in its cgroup. When memory.high is reset, the
task exits and pressure drops. So the most likely culprit is the
somewhat vaguely bounded loop in mem_cgroup_handle_over_high().

> > > @@ -2645,6 +2648,9 @@ void mem_cgroup_handle_over_high(gfp_t gfp_mask)
> > >         current->memcg_nr_pages_over_high = 0;
> > >
> > >  retry_reclaim:
> > > +       if (task_is_dying())
> > > +               return;
> > > +
> > >         /*
> > >          * The allocating task should reclaim at least the batch size, but for
> > >          * subsequent retries we only want to do what's necessary to prevent oom
> >
> > Yeah this is the better place for this check.
> >
> > How about this?
>
> Looks really good to me!
>
> I actually thought about moving the check into mem_cgroup_handle_over_high(),
> and you already did it in this version.

Excellent, thanks for your input.
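
For reference, here is a minimal consolidated sketch of the check discussed
above, assembled from the hunks quoted in this thread. The task_is_dying()
helper shown is the existing one in mm/memcontrol.c; the placement and the
comments are illustrative only, not the final patch:

/*
 * Existing helper in mm/memcontrol.c (uses <linux/oom.h> and
 * <linux/sched/signal.h>): a task counts as dying if it is an OOM
 * victim, has a fatal signal pending, or is already exiting.
 */
static bool task_is_dying(void)
{
        return tsk_is_oom_victim(current) ||
               fatal_signal_pending(current) ||
               (current->flags & PF_EXITING);
}

/*
 * Illustrative placement only: bail out of the memory.high reclaim
 * path early for dying tasks. A dying task never returns to
 * userspace, so throttling it here only delays its exit and the
 * freeing of its rss.
 */
void mem_cgroup_handle_over_high(gfp_t gfp_mask)
{
        if (task_is_dying())
                return;

        /* ... reclaim loop as in the second hunk quoted above ... */
}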