Date: Tue, 14 Mar 2023 09:59:37 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Michal Hocko
Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Russell King,
 Huacai Chen, Heiko Carstens, x86@kernel.org, Vlastimil Babka
Subject: Re: [PATCH v5 00/12] fold per-CPU vmstats remotely
References: <20230313162507.032200398@redhat.com>
On Tue, Mar 14, 2023 at 01:25:53PM +0100, Michal Hocko wrote:
> On Mon 13-03-23 13:25:07, Marcelo Tosatti wrote:
> > This patch series addresses the following two problems:
> >
> > 1. A customer provided some evidence which indicates that
> >    the idle tick was stopped; albeit, CPU-specific vmstat
> >    counters still remained populated.
> >
> >    Thus one can only assume quiet_vmstat() was not
> >    invoked on return to the idle loop. If I understand
> >    correctly, I suspect this divergence might erroneously
> >    prevent a reclaim attempt by kswapd. If the number of
> >    zone specific free pages are below their per-cpu drift
> >    value then zone_page_state_snapshot() is used to
> >    compute a more accurate view of the aforementioned
> >    statistic. Thus any task blocked on the NUMA node
> >    specific pfmemalloc_wait queue will be unable to make
> >    significant progress via direct reclaim unless it is
> >    killed after being woken up by kswapd
> >    (see throttle_direct_reclaim())
>
> I have hard time to follow the actual problem described above. Are you
> suggesting that a lack of pcp vmstat counters update has led to
> reclaim issues? What is the said "evidence"? Could you share more of the
> story please?

- The process was trapped in throttle_direct_reclaim().
  wait_event_killable() was called to wait for the condition
  allow_direct_reclaim(pgdat) to become true for the current node.
  allow_direct_reclaim(pgdat) examined the number of free pages on the
  node via zone_page_state(), which simply returns the value stored in
  zone->vm_stat[NR_FREE_PAGES].

- On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
  However, the freelist on this node was not empty.
- This inconsistency in the vmstat value was caused by the per-CPU
  vmstat counters on nohz_full CPUs. Every increment/decrement of a
  vmstat counter is first performed on the per-CPU counter, and the
  pooled diffs are then folded into the zone's vmstat counter in a
  timely manner. However, on nohz_full CPUs (48 of the 52 CPUs on this
  customer's system), the pooled diffs were no longer folded once a CPU
  had no events and went to sleep indefinitely. I checked the per-CPU
  vmstat and found a total of 69 counts not yet folded into the zone's
  vmstat counter.

- In this situation, kswapd did not help the trapped process. In
  pgdat_balanced(), zone_watermark_ok_safe() examined the number of
  free pages on the node via zone_page_state_snapshot(), which also
  accounts for the pending per-CPU diffs. kswapd could therefore see
  the 69 free pages correctly. Since zone->_watermark = {8, 20, 32},
  kswapd did not act, because 69 was greater than the high watermark
  of 32.

- As a result:
  - The process waited for allow_direct_reclaim(pgdat) to become true.
  - allow_direct_reclaim() saw 0 via zone_page_state(). It woke kswapd,
    since 0 was lower than the min watermark.
  - kswapd did nothing: it saw 69 via zone_page_state_snapshot() and
    woke the waiters without performing memory reclaim, since 69 was
    greater than the high watermark.
  - The process woken by kswapd soon restarted waiting for kswapd.
  - allow_direct_reclaim() again saw 0 via zone_page_state() and again
    woke kswapd, since 0 was lower than the min watermark.

> > 2. With a SCHED_FIFO task that busy loops on a given CPU,
> >    and kworker for that CPU at SCHED_OTHER priority,
> >    queuing work to sync per-vmstats will either cause that
> >    work to never execute, or stalld (i.e. stall daemon)
> >    boosts kworker priority which causes a latency
> >    violation
>
> Why is that a problem? Out-of-sync stats shouldn't cause major problems.
> Or can they?

Consider a SCHED_FIFO task that polls the network queue (say, testpmd):
	do {
		if (net_registers->state & DATA_AVAILABLE)
			process_data();
	} while (!stopped);

Since this task runs at SCHED_FIFO priority, the kworker won't be
scheduled to run, and therefore the per-CPU vmstats won't be flushed
to the global vmstats.

Or, if testpmd runs at SCHED_OTHER, then the work item to flush the
per-CPU vmstats causes:

	testpmd -> kworker    (context switch)
	kworker: flush per-CPU vmstats
	kworker -> testpmd    (context switch)

And this might cause undesired latencies to the packets being processed
by the testpmd task.