Date: Thu, 12 Dec 2024 23:42:13 -0500
From: Johannes Weiner
To: Roman Gushchin
Cc: Yosry Ahmed, Rik van Riel, Balbir Singh, Michal Hocko, Shakeel Butt,
	Muchun Song, Andrew Morton, cgroups@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@meta.com, Nhat Pham
Subject: Re: [PATCH v2] memcg: allow exiting tasks to write back data to swap
Message-ID: <20241213044213.GA6910@cmpxchg.org>
References: <20241212115754.38f798b3@fangorn> <20241212183012.GB1026@cmpxchg.org>

On Fri, Dec 13, 2024 at 12:32:11AM +0000, Roman Gushchin wrote:
> On Thu, Dec 12, 2024 at 01:30:12PM -0500, Johannes Weiner wrote:
> > On Thu, Dec 12, 2024 at 09:06:25AM -0800, Yosry Ahmed wrote:
> > > On Thu, Dec 12, 2024 at 8:58 AM Rik van Riel wrote:
> > > >
> > > > A task already in exit can get stuck trying to allocate pages, if its
> > > > cgroup is at the memory.max limit, the cgroup is using zswap, but
> > > > zswap writeback is disabled, and the remaining memory in the cgroup is
> > > > not compressible.
> > > >
> > > > This seems like an unlikely confluence of events, but it can happen
> > > > quite easily if a cgroup is OOM killed due to exceeding its memory.max
> > > > limit, and all the tasks in the cgroup are trying to exit simultaneously.
> > > >
> > > > When this happens, it can sometimes take hours for tasks to exit,
> > > > as they are all trying to squeeze things into zswap to bring the group's
> > > > memory consumption below memory.max.
> > > >
> > > > Allowing these exiting programs to push some memory from their own
> > > > cgroup into swap allows them to quickly bring the cgroup's memory
> > > > consumption below memory.max, and exit in seconds rather than hours.
> > > >
> > > > Signed-off-by: Rik van Riel
> > >
> > > Thanks for sending a v2.
> > >
> > > I still think maybe this needs to be fixed on the memcg side, at least
> > > by not making exiting tasks try really hard to reclaim memory to the
> > > point where this becomes a problem. IIUC there could be other reasons
> > > why reclaim may take too long, but maybe not as pathological as this
> > > case to be fair. I will let the memcg maintainers chime in for this.
> > >
> > > If there's a fundamental reason why this cannot be fixed on the memcg
> > > side, I don't object to this change.
> > >
> > > Nhat, any objections on your end? I think your fleet workloads were
> > > the first users of this interface. Does this break their expectations?
> >
> > Yes, I don't think we can do this, unfortunately :( There can be a
> > variety of reasons why a user might want to prohibit disk swap for
> > a certain cgroup, and we can't assume it's okay to make exceptions.
> >
> > There might also not *be* any disk swap to overflow into after Nhat's
> > virtual swap patches. Presumably zram would still have the issue too.
> >
> > So I'm also inclined to think this needs a reclaim/memcg-side fix. We
> > have a somewhat tumultuous history of policy in that space:
> >
> > commit 7775face207922ea62a4e96b9cd45abfdc7b9840
> > Author: Tetsuo Handa
> > Date:   Tue Mar 5 15:46:47 2019 -0800
> >
> >     memcg: killed threads should not invoke memcg OOM killer
> >
> > allowed dying tasks to simply force all charges and move on. This
> > turned out to be too aggressive; there were instances of exiting,
> > uncontained memcg tasks causing global OOMs. This led to that:
> >
> > commit a4ebf1b6ca1e011289677239a2a361fde4a88076
> > Author: Vasily Averin
> > Date:   Fri Nov 5 13:38:09 2021 -0700
> >
> >     memcg: prohibit unconditional exceeding the limit of dying tasks
> >
> > which reverted the bypass rather thoroughly. Now NO dying tasks, *not
> > even OOM victims*, can force charges. I am not sure this is correct,
> > either:
> >
> > If we return -ENOMEM to an OOM victim in a fault, the fault handler
> > will re-trigger OOM, which will find the existing OOM victim and do
> > nothing, then restart the fault. This is a memory deadlock. The page
> > allocator gives OOM victims access to reserves for that reason.
> >
> > Actually, it looks even worse. For some reason we're not triggering
> > OOM from dying tasks:
> >
> > 	ret = task_is_dying() || out_of_memory(&oc);
> >
> > Even though dying tasks are in no way privileged or allowed to exit
> > expediently. Why shouldn't they trigger the OOM killer like anybody
> > else trying to allocate memory?
> >
> > As it stands, it seems we have dying tasks getting trapped in an
> > endless fault->reclaim cycle, with no access to the OOM killer and no
> > access to reserves. Presumably this is what's going on here?
> >
> > I think we want something like this:
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 53db98d2c4a1..be6b6e72bde5 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1596,11 +1596,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >  	if (mem_cgroup_margin(memcg) >= (1 << order))
> >  		goto unlock;
> >
> > -	/*
> > -	 * A few threads which were not waiting at mutex_lock_killable() can
> > -	 * fail to bail out. Therefore, check again after holding oom_lock.
> > -	 */
> > -	ret = task_is_dying() || out_of_memory(&oc);
> > +	ret = out_of_memory(&oc);
>
> I like the idea, but at first glance it might reintroduce the problem
> fixed by 7775face2079 ("memcg: killed threads should not invoke memcg OOM killer").
The race and warning pointed out in the changelog might have been
sufficiently mitigated by this more recent commit:

commit 1378b37d03e8147c67fde60caf0474ea879163d8
Author: Yafang Shao
Date:   Thu Aug 6 23:22:08 2020 -0700

    memcg, oom: check memcg margin for parallel oom

If not, another possibility would be this:

	ret = tsk_is_oom_victim(current) || out_of_memory(&oc);

But we should probably first restore reliable forward progress on
dying tasks, then worry about the spurious printk later.
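
To make the failure mode discussed above concrete, here is a minimal
user-space sketch of the charge retry loop. It is illustrative only,
not kernel code: the names charge(), reclaim_progress(), and the
DYING_* policies are made up for this sketch, and reclaim is assumed
to never make progress (the incompressible, writeback-disabled zswap
case from the changelog).

/*
 * Toy model of the memcg charge retry loop discussed in this thread.
 * Assumptions: reclaim never makes progress, and the OOM killer, when
 * invoked, always frees enough memory for the charge to succeed.
 */
#include <stdbool.h>
#include <stdio.h>

enum dying_policy {
	DYING_FORCES_CHARGE,	/* pre-a4ebf1b6ca1e: dying tasks overrun the limit */
	DYING_SKIPS_OOM,	/* current behavior per the discussion: no OOM, no reserves */
	DYING_INVOKES_OOM,	/* behavior proposed in the diff above */
};

static bool reclaim_progress(void) { return false; }	/* nothing reclaimable */
static bool invoke_oom_killer(void) { return true; }	/* a kill frees memory */

static bool charge(enum dying_policy policy, bool task_is_dying, int max_retries)
{
	for (int i = 0; i < max_retries; i++) {
		if (reclaim_progress())
			return true;		/* charge now fits under the limit */

		if (task_is_dying) {
			if (policy == DYING_FORCES_CHARGE)
				return true;	/* bypass the limit and exit */
			if (policy == DYING_SKIPS_OOM)
				continue;	/* no OOM, no reserves: retry forever */
		}

		if (invoke_oom_killer())
			return true;		/* victim killed, memory freed */
	}
	return false;				/* stuck: the reported hang */
}

int main(void)
{
	printf("dying task forces charge: %d\n", charge(DYING_FORCES_CHARGE, true, 1000));
	printf("dying task skips OOM:     %d\n", charge(DYING_SKIPS_OOM, true, 1000));
	printf("dying task invokes OOM:   %d\n", charge(DYING_INVOKES_OOM, true, 1000));
	return 0;
}

Under these assumptions, only the "skips OOM" policy exhausts its
retry budget without making progress, which matches the hours-long
exits reported in the patch; both the force-charge and invoke-OOM
policies let the dying task proceed.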