From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 22 Aug 2023 10:09:02 +0200
From: Michal Hocko <mhocko@suse.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Christoph Lameter, Mel Gorman, Vlastimil Babka
Subject: Re: [PATCH] mm: fix draining remote pageset
References: <20230811090819.60845-1-ying.huang@intel.com>
	<87r0o6bcyw.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<87jztv79co.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<87v8d8dch1.fsf@yhuang6-desk2.ccr.corp.intel.com>
	<87msykc9ip.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87msykc9ip.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
On Tue 22-08-23 06:31:42, Huang, Ying wrote:
> Michal Hocko writes:
> 
> > On Mon 21-08-23 16:30:18, Huang, Ying wrote:
> >> Michal Hocko writes:
> >> 
> >> > On Wed 16-08-23 15:08:23, Huang, Ying wrote:
> >> >> Michal Hocko writes:
> >> >> 
> >> >> > On Mon 14-08-23 09:59:51, Huang, Ying wrote:
> >> >> >> Hi, Michal,
> >> >> >> 
> >> >> >> Michal Hocko writes:
> >> >> >> 
> >> >> >> > On Fri 11-08-23 17:08:19, Huang Ying wrote:
> >> >> >> >> If there is no memory allocation/freeing in the remote pageset after
> >> >> >> >> some time (3 seconds for now), the remote pageset will be drained to
> >> >> >> >> avoid memory wastage.
> >> >> >> >> 
> >> >> >> >> But in the current implementation, vmstat updater worker may not be
> >> >> >> >> re-queued when we are waiting for the timeout (pcp->expire != 0) if
> >> >> >> >> there are no vmstat changes, for example, when CPU goes idle.
> >> >> >> > 
> >> >> >> > Why is that a problem?
> >> >> >> 
> >> >> >> The pages of the remote zone may be kept in the local per-CPU pageset
> >> >> >> for long time as long as there's no page allocation/freeing on the
> >> >> >> logical CPU. In addition to the logical CPU goes idle, this is also
> >> >> >> possible if the logical CPU is busy in the user space.
> >> >> > 
> >> >> > But why is this a problem? Is the scale of the problem sufficient to
> >> >> > trigger out of memory situations or be otherwise harmful?
> >> >> 
> >> >> This may trigger premature page reclaiming. The pages in the PCP of the
> >> >> remote zone would have been freed to satisfy the page allocation for the
> >> >> remote zone to avoid page reclaiming. It's highly possible that the
> >> >> local CPU just allocate/free from/to the remote zone temporarily.
> >> > 
> >> > I am slightly confused here but I suspect by zone you mean remote pcp.
> >> > But more importantly is this a concern seen in real workload? Can you
> >> > quantify it in some manner? E.g. with this patch we have X more kswapd
> >> > scanning or even hit direct reclaim much less often.
> >> 
> >> >> So,
> >> >> we should free PCP pages of the remote zone if there is no page
> >> >> allocation/freeing from/to the remote zone for 3 seconds.
> >> > 
> >> > Well, I would argue this depends a lot. There are workloads which really
> >> > like to have CPUs idle and yet they would like to benefit from the
> >> > allocator fast path after that CPU goes out of idle because idling is
> >> > their power saving opportunity while workloads want to act quickly after
> >> > there is something to run.
> >> > 
> >> > That being said, we really need some numbers (ideally from real world)
> >> > that proves this is not just a theoretical concern.
> >> 
> >> The behavior to drain the PCP of the remote zone (that is, remote PCP)
> >> was introduced in commit 4ae7c03943fc ("[PATCH] Periodically drain non
> >> local pagesets"). The goal of draining was well documented in the
> >> change log. IIUC, some of your questions can be answered there?
> >> 
> >> This patch just restores the original behavior changed by commit
> >> 7cc36bbddde5 ("vmstat: on-demand vmstat workers V8").
> > 
> > Let me repeat. You need some numbers to show this is needed.
> 
> I have done some test for this patch as follows,
> 
> - Run some workloads, use `numactl` to bind CPU to node 0 and memory to
>   node 1. So the PCP of the CPU on node 0 for zone on node 1 will be
>   filled.
> 
> - After workloads finish, idle for 60s
> 
> - Check /proc/zoneinfo
> 
> With the original kernel, the number of pages in the PCP of the CPU on
> node 0 for zone on node 1 is non-zero after idle. With the patched
> kernel, that becomes 0 after idle. We avoid to keep pages in the remote
> PCP during idle.
> 
> This is the number I have. If you think it isn't enough to justify the
> patch, then I'm OK too (although I think it's enough). Because the
> remote PCP will be drained later when some pages are allocated/freed on
> the CPU.

Yes, this doesn't really show any actual correctness problem so I do not
think this is sufficient to change the code. You would need to show that
the existing behavior is actively harmful.
-- 
Michal Hocko
SUSE Labs