Date: Tue, 14 Mar 2023 21:29:32 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Michal Hocko
Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Russell King,
	Huacai Chen, Heiko Carstens, x86@kernel.org, Vlastimil Babka
Subject: Re: [PATCH v5 00/12] fold per-CPU vmstats remotely
References: <20230313162507.032200398@redhat.com>

On Tue, Mar 14, 2023 at 10:01:06PM +0100, Michal Hocko wrote:
> On Tue 14-03-23 15:49:09, Marcelo Tosatti wrote:
> > On Tue, Mar 14, 2023 at 03:31:21PM +0100, Michal Hocko wrote:
> > > On Tue 14-03-23 09:59:37, Marcelo Tosatti wrote:
> > > > On Tue, Mar 14, 2023 at 01:25:53PM +0100, Michal Hocko wrote:
> > > > > On Mon 13-03-23 13:25:07, Marcelo Tosatti wrote:
> > > > > > This patch series addresses the following two problems:
> > > > > >
> > > > > > 1. A customer provided evidence indicating that the idle tick
> > > > > >    was stopped, yet the CPU-specific vmstat counters still
> > > > > >    remained populated.
> > > > > >
> > > > > >    Thus one can only assume quiet_vmstat() was not invoked on
> > > > > >    return to the idle loop. If I understand correctly, I
> > > > > >    suspect this divergence might erroneously prevent a reclaim
> > > > > >    attempt by kswapd. If the number of zone-specific free
> > > > > >    pages is below the per-CPU drift value, then
> > > > > >    zone_page_state_snapshot() is used to compute a more
> > > > > >    accurate view of that statistic. Thus any task blocked on
> > > > > >    the NUMA-node-specific pfmemalloc_wait queue will be unable
> > > > > >    to make significant progress via direct reclaim unless it
> > > > > >    is killed after being woken up by kswapd
> > > > > >    (see throttle_direct_reclaim()).
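
[Editorial note: for readers outside the thread, the throttling path
referenced above looks roughly like the following paraphrase of
mm/vmscan.c from this era (simplified; details vary by kernel version).
Stale per-CPU diffs make the zone_page_state() read undercount free
pages, so the wait condition can fail to become true:]

    /* throttle_direct_reclaim(), paraphrased: reclaimers sleep here */
    wait_event_killable(pgdat->pfmemalloc_wait,
                        allow_direct_reclaim(pgdat));

    /* allow_direct_reclaim(), heavily simplified */
    static bool allow_direct_reclaim(pg_data_t *pgdat)
    {
            unsigned long pfmemalloc_reserve = 0, free_pages = 0;
            struct zone *zone;
            int i;

            for (i = 0; i <= ZONE_NORMAL; i++) {
                    zone = &pgdat->node_zones[i];
                    if (!managed_zone(zone) ||
                        !zone_reclaimable_pages(zone))
                            continue;
                    pfmemalloc_reserve += min_wmark_pages(zone);
                    /* reads only zone->vm_stat[]; misses any
                     * per-CPU diffs not yet folded back */
                    free_pages += zone_page_state(zone, NR_FREE_PAGES);
            }
            return free_pages > pfmemalloc_reserve / 2;
    }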
> > > > > I have a hard time following the actual problem described
> > > > > above. Are you suggesting that a lack of pcp vmstat counter
> > > > > updates has led to reclaim issues? What is the said "evidence"?
> > > > > Could you share more of the story please?
> > > >
> > > > - The process was trapped in throttle_direct_reclaim().
> > > >   The function wait_event_killable() was called to wait for the
> > > >   condition allow_direct_reclaim(pgdat) to become true for the
> > > >   current node. allow_direct_reclaim(pgdat) examined the number
> > > >   of free pages on the node via zone_page_state(), which just
> > > >   returns the value in zone->vm_stat[NR_FREE_PAGES].
> > > >
> > > > - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
> > > >   However, the freelist on this node was not empty.
> > > >
> > > > - This inconsistency of the vmstat value was caused by the
> > > >   per-CPU vmstat counters on nohz_full cpus. Every
> > > >   increment/decrement of a vmstat counter is performed on a
> > > >   per-CPU counter first, and the pooled diffs are then
> > > >   accumulated into the zone's vmstat counter in a timely manner.
> > > >   However, on nohz_full cpus (in this customer's system, 48 of
> > > >   52 cpus) these pooled diffs were no longer accumulated once a
> > > >   cpu had no events on it, so the cpu started sleeping
> > > >   indefinitely. I checked the per-CPU vmstat counters and found
> > > >   a total of 69 counts not yet accumulated into the zone's
> > > >   vmstat counter.
> > > >
> > > > - In this situation, kswapd did not help the trapped process.
> > > >   In pgdat_balanced(), zone_watermark_ok_safe() examined the
> > > >   number of free pages on the node via
> > > >   zone_page_state_snapshot(), which checks the pending counts on
> > > >   the per-CPU vmstat counters. Therefore kswapd correctly knew
> > > >   there were 69 free pages. Since zone->_watermark = {8, 20, 32},
> > > >   kswapd did not run, because 69 was greater than the high
> > > >   watermark of 32.
> > >
> > > If the imprecision of allow_direct_reclaim is the underlying
> > > problem, why haven't you used zone_page_state_snapshot instead?
> >
> > It might have dealt with problem #1 for this particular case.
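
[Editorial note: the difference between the two readers, paraphrased
from include/linux/vmstat.h of this era (per-CPU field names vary
across kernel versions):]

    /* zone_page_state(): reads only the zone-wide atomic counter */
    static unsigned long zone_page_state(struct zone *zone,
                                         enum zone_stat_item item)
    {
            long x = atomic_long_read(&zone->vm_stat[item]);

            return x < 0 ? 0 : x;
    }

    /* zone_page_state_snapshot(): also folds in the per-CPU diffs
     * that vmstat_work has not flushed yet, at the cost of touching
     * every online CPU's counters */
    static unsigned long zone_page_state_snapshot(struct zone *zone,
                                                  enum zone_stat_item item)
    {
            long x = atomic_long_read(&zone->vm_stat[item]);
            int cpu;

            for_each_online_cpu(cpu)
                    x += per_cpu_ptr(zone->per_cpu_zonestats,
                                     cpu)->vm_stat_diff[item];
            return x < 0 ? 0 : x;
    }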
> > However, looking at the callers of zone_page_state:
> >
> >   5  2227  mm/compaction.c <>
> >            zone_page_state(zone, NR_FREE_PAGES));
> >   6   124  mm/highmem.c <<__nr_free_highpages>>
> >            pages += zone_page_state(zone, NR_FREE_PAGES);
> >   7   283  mm/page-writeback.c <>
> >            nr_pages += zone_page_state(zone, NR_FREE_PAGES);
> >   8   318  mm/page-writeback.c <>
> >            nr_pages = zone_page_state(z, NR_FREE_PAGES);
> >   9   321  mm/page-writeback.c <>
> >            nr_pages += zone_page_state(z, NR_ZONE_INACTIVE_FILE);
> >  10   322  mm/page-writeback.c <>
> >            nr_pages += zone_page_state(z, NR_ZONE_ACTIVE_FILE);
> >  11  3091  mm/page_alloc.c <<__rmqueue>>
> >            zone_page_state(zone, NR_FREE_CMA_PAGES) >
> >  12  3092  mm/page_alloc.c <<__rmqueue>>
> >            zone_page_state(zone, NR_FREE_PAGES) / 2) {
> >
> > The suggested patchset fixes the problem where, due to nohz_full,
> > the delayed timer for vmstat_work can be armed but never executed
> > (which means the per-CPU counters can be out of sync for as long as
> > the CPU stays idle in nohz_full mode).
> >
> > You probably do not want to convert all callers of zone_page_state
> > into zone_page_state_snapshot (as a justification for the proposed
> > patchset).
>
> Yes, I do not really think we want or even need to convert all of them.

OK.

> But it seems that your direct reclaim throttling example really
> requires that. The thing with the remote flushing is that it would
> suffer from a similar imprecision, as the flushing could be deferred
> and under certain conditions really starved. So it is definitely worth
> fixing the issue you are seeing without such a complex scheme.

The scheme is necessary for other reasons.

> > > Anyway, this is the kind of information that is really helpful to
> > > have in the patch description.
> >
> > Agree: resending a new version with an updated commit message.
>
> I would really recommend trying out the simple fix and seeing if it
> changes the behavior.
>
> > > [...]
> > > > > > 2. With a SCHED_FIFO task that busy loops on a given CPU,
> > > > > >    and the kworker for that CPU at SCHED_OTHER priority,
> > > > > >    queuing work to sync the per-CPU vmstats will either
> > > > > >    cause that work to never execute, or stalld (i.e. the
> > > > > >    stall daemon) boosts the kworker's priority, which causes
> > > > > >    a latency violation.
> > > > >
> > > > > Why is that a problem? Out-of-sync stats shouldn't cause major
> > > > > problems. Or can they?
> > > >
> > > > Consider a SCHED_FIFO task that is polling the network queue
> > > > (say testpmd):
> > > >
> > > >     do {
> > > >             if (net_registers->state & DATA_AVAILABLE) {
> > > >                     process_data();
> > > >             }
> > > >     } while (!stopped);
> > > >
> > > > Since this task runs at SCHED_FIFO priority, the kworker won't
> > > > be scheduled to run (and therefore the per-CPU vmstats won't be
> > > > flushed to the global vmstats).
> > >
> > > Yes, that is certainly possible. But my main point is that vmstat
> > > imprecision shouldn't cause functional problems. That is why we
> > > have _snapshot readers to get an exact value where it matters for
> > > consistency.
> >
> > Understood. Perhaps allow_direct_reclaim should use
> > zone_page_state_snapshot, as otherwise it is only precise at
> > sysctl_stat_interval intervals?
>
> Or even much less than that. The flusher uses the WQ infrastructure,
> and even when a WQ_MEM_RECLAIM one is used, this doesn't mean the
> workers cannot be jammed.
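
[Editorial note: a hypothetical stand-alone version of the polling
scenario above -- not code from the thread. It pins a SCHED_FIFO busy
loop to one CPU; any SCHED_OTHER kworker bound to that CPU, including
the vmstat_work flusher, will not run while the loop spins:]

    /* Build: cc -o fifo_poll fifo_poll.c; run as root (CAP_SYS_NICE) */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
            /* target CPU, e.g. a nohz_full/isolated one */
            int cpu = argc > 1 ? atoi(argv[1]) : 1;
            struct sched_param sp = { .sched_priority = 1 };
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(cpu, &set);
            if (sched_setaffinity(0, sizeof(set), &set))
                    perror("sched_setaffinity");
            if (sched_setscheduler(0, SCHED_FIFO, &sp))
                    perror("sched_setscheduler");

            for (;;)
                    ;       /* stand-in for the testpmd poll loop above */
            return 0;
    }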
> > > > Or, if testpmd runs at SCHED_OTHER, then the work item to flush
> > > > the per-CPU vmstats causes:
> > > >
> > > >     testpmd -> kworker
> > > >     kworker: flush per-CPU vmstats
> > > >     kworker -> testpmd
> > >
> > > And this might cause undesired latencies to the packets being
> > > processed by the testpmd task. Right, but can you have any latency
> > > expectations in a situation like that?
> >
> > Not sure I understand what you mean. Example:
> >
> > https://www.gabrieleara.it/assets/documents/papers/conferences/2021-ieee-nfv-sdn.pdf
> >
> >     In general, UDPDK exhibits a much lower latency than the
> >     in-kernel UDP stack used through the POSIX API (e.g., a 69%
> >     reduction from 95 µs down to 29 µs), thanks to its ability to
> >     bypass the kernel entirely, which in turn outperforms the
> >     in-kernel TCP stack as available through the POSIX API, as
> >     expected.
> >     ...
> >     Alternatively, application processes can use UDPDK with the
> >     non-blocking API calls (using the O_NONBLOCK flag) and perform
> >     some other action while waiting for packets to be ready to be
> >     sent/received to/from the UDPDK Process, instead of performing
> >     continuous busy-loops on packet queues. However, in this case
> >     the cost of a single CPU fully busy due to the UDPDK Process
> >     itself is anyway unavoidable.
>
> If the userspace workload avoids the kernel completely then it is
> quite unlikely that there is any pcp work to be flushed for in-kernel
> counters.

This particular workload avoids the kernel. Others (where latency is
still a concern) don't.

> That being said, I am not saying remote flushing is not useful. I just
> think that the issue you are reporting here could be fixed by a much
> simpler fix that doesn't change the way the flushing is performed.

OK. Must change the flushing anyway, but fixing allow_direct_reclaim to
use zone_page_state_snapshot won't hurt.

> Such a large rework should be justified by performance numbers.

OK.

> It should also be explained how we end up doing a lot of work on
> isolated cpus or for a pure user space workload.

Again, a pure user space workload is one example where latency matters,
in response to the "can you have any latency expectations in a
situation like that?" question.

Will resend -v8 with the allow_direct_reclaim fix.
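
[Editorial note: the allow_direct_reclaim fix mentioned above amounts
to switching the reader in mm/vmscan.c; a sketch of the change under
discussion, not the actual committed patch:]

    --- a/mm/vmscan.c
    +++ b/mm/vmscan.c
    @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
                    pfmemalloc_reserve += min_wmark_pages(zone);
    -               free_pages += zone_page_state(zone, NR_FREE_PAGES);
    +               free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);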