Date: Mon, 18 Jul 2022 14:11:44 +0200
From: Michal Hocko <mhocko@suse.com>
To: Abel Wu
Cc: Gang Li, akpm@linux-foundation.org, surenb@google.com, hca@linux.ibm.com,
 gor@linux.ibm.com, agordeev@linux.ibm.com, borntraeger@linux.ibm.com,
 svens@linux.ibm.com, viro@zeniv.linux.org.uk, ebiederm@xmission.com,
 keescook@chromium.org, rostedt@goodmis.org, mingo@redhat.com,
 peterz@infradead.org, acme@kernel.org, mark.rutland@arm.com,
 alexander.shishkin@linux.intel.com, jolsa@kernel.org, namhyung@kernel.org,
 david@redhat.com, imbrenda@linux.ibm.com, adobriyan@gmail.com,
 yang.yang29@zte.com.cn, brauner@kernel.org, stephen.s.brennan@oracle.com,
 zhengqi.arch@bytedance.com, haolee.swjtu@gmail.com, xu.xin16@zte.com.cn,
 Liam.Howlett@oracle.com, ohoono.kwon@samsung.com, peterx@redhat.com,
 arnd@arndb.de, shy828301@gmail.com, alex.sierra@amd.com,
 xianting.tian@linux.alibaba.com, willy@infradead.org, ccross@google.com,
 vbabka@suse.cz, sujiaxun@uniontech.com, sfr@canb.auug.org.au,
 vasily.averin@linux.dev, mgorman@suse.de, vvghjk1234@gmail.com,
 tglx@linutronix.de, luto@kernel.org, bigeasy@linutronix.de,
 fenghua.yu@intel.com, linux-s390@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-perf-users@vger.kernel.org,
 hezhongkun.hzk@bytedance.com
Subject: Re: [PATCH v2 0/5] mm, oom: Introduce per numa node oom for
 CONSTRAINT_{MEMORY_POLICY,CPUSET}
In-Reply-To: <6f6a2257-3b60-e312-3ee3-fb08b972dbf2@bytedance.com>
References: <20220708082129.80115-1-ligang.bdlg@bytedance.com>
 <41ae31a7-6998-be88-858c-744e31a76b2a@bytedance.com>
 <6f6a2257-3b60-e312-3ee3-fb08b972dbf2@bytedance.com>
On Tue 12-07-22 23:00:55, Abel Wu wrote:
>
> On 7/12/22 9:35 PM, Michal Hocko wrote:
> > On Tue 12-07-22 19:12:18, Abel Wu wrote:
> > [...]
> > > I was just going through the mailing list and happened to see this.
> > > There is another use case for us around per-NUMA memory usage.
> >
> > > Say we have several important latency-critical services sitting inside
> > > different NUMA nodes without intersection. The need for memory of these
> > > LC services varies, so the free memory of each node also differs. We
> > > then launch several background containers without cpuset constraints
> > > to consume the leftover resources. The problem now is that there does
> > > not seem to be a proper memory policy available to balance the usage
> > > between the nodes, which can leave memory-heavy LC services suffering
> > > from high memory pressure and failing to meet their SLOs.
> >
> > I do agree that cpusets would be rather clumsy, if usable at all, in a
> > scenario where you are trying to mix NUMA-bound workloads with those
> > that do not have any NUMA preferences. Could you be more specific about
> > the requirements here though?
>
> Yes, these LC services are highly sensitive to memory access latency
> and bandwidth, so they are provisioned at NUMA-node granularity to meet
> their performance requirements. On the other hand, they usually do not
> make full use of their cpu/mem resources, which increases the TCO of
> our IDCs, so we have to co-locate them with background tasks.
>
> Some of these LC services are memory-bound but leave much of the cpu's
> capacity unused. In this case we would like the co-located background
> tasks to consume some of the leftover without introducing noticeable mm
> overhead to the LC services.

These are some tough requirements, and I am afraid they are far from any
typical usage. So I believe that you need careful tuning much more than a
policy, whose semantics I honestly have a hard time imagining.

> > Let's say you run those latency-critical services with "simple" memory
> > policies and mix them with the other workload without any policies in
> > place, so they compete over memory. It is not really clear to me how
> > you can achieve any reasonable QoS in such an environment.
> > Your latency-critical services will be more constrained than the
> > non-critical ones, yet they are more demanding AFAIU.
>
> Yes, the QoS over memory is the biggest blocker in the way (the other
> resources are relatively easier). For now, we have hacked up a new mpol
> to achieve weighted-interleave behavior to balance the memory usage
> across NUMA nodes, and we only set memcg protections on the LC services.
> If the memory pressure is still high, the background tasks are killed.
> Ideas? Thanks!

It is not really clear from your description what the new memory policy
does and what its semantics are. Memory protection (via memcg) of your
sensitive workload makes sense, but it would require proper settings for
the background jobs as well. As soon as you hit global direct reclaim,
the memory protection will not save your sensitive workload.
-- 
Michal Hocko
SUSE Labs
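[Editorial note: the co-location setup discussed in this thread (memcg
protection for the LC services plus interleaved placement for the
background tasks) can be sketched with cgroup v2 and numactl roughly as
follows. The cgroup names, byte sizes, and binary paths are illustrative
assumptions, and plain interleave (numactl --interleave) is only a
stand-in for the out-of-tree weighted-interleave policy mentioned above.]

```shell
#!/bin/bash
# Illustrative sketch only: cgroup names, sizes and binaries are
# hypothetical. Requires root, a cgroup v2 hierarchy mounted at
# /sys/fs/cgroup, and the numactl utility.

# Best-effort reclaim protection for the LC service's working set
# (memory usage below memory.low is skipped by reclaim when possible).
mkdir -p /sys/fs/cgroup/lc-service
echo $((48 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/lc-service/memory.low

# Run the LC service inside its cgroup, bound to the CPUs and memory
# of NUMA node 0.
( echo $BASHPID > /sys/fs/cgroup/lc-service/cgroup.procs
  exec numactl --cpunodebind=0 --membind=0 /opt/lc-service ) &

# Throttle the background jobs well below total RAM so that global
# direct reclaim (where memory.low no longer helps) is unlikely.
mkdir -p /sys/fs/cgroup/background
echo $((16 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/background/memory.high

# Spread background allocations across all nodes; plain interleave
# stands in for the weighted-interleave behavior discussed here.
( echo $BASHPID > /sys/fs/cgroup/background/cgroup.procs
  exec numactl --interleave=all /opt/batch-job ) &
```

Note that memory.low only biases reclaim away from the protected cgroup;
once the system falls into global direct reclaim, the protection no
longer guarantees the LC service's latency, which is exactly the caveat
raised at the end of the mail.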