Date: Wed, 22 Nov 2023 08:23:51 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Michal Hocko
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka,
 Andrew Morton, David Hildenbrand, Peter Xu
Subject: Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
References: <20231113233420.446465795@redhat.com>
On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > Hi Michal,
> > 
> > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > A customer reported seeing processes hung at too_many_isolated,
> > > > while analysis indicated that the problem occurred due to out
> > > > of sync per-CPU stats (see below).
> > > > 
> > > > The fix is to use node_page_state_snapshot to avoid the stale values.
> > > > 
> > > > 2136 static unsigned long
> > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > 2138                      struct scan_control *sc, enum lru_list lru)
> > > > 2139 {
> > > >    :
> > > > 2145         bool file = is_file_lru(lru);
> > > >    :
> > > > 2147         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > >    :
> > > > 2150         while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > 2151                 if (stalled)
> > > > 2152                         return 0;
> > > > 2153 
> > > > 2154                 /* wait a bit for the reclaimer. */
> > > > 2155                 msleep(100);  <--- some processes were sleeping here, with pending SIGKILL
> > > > 2156                 stalled = true;
> > > > 2157 
> > > > 2158                 /* We are about to die and free our memory. Return now. */
> > > > 2159                 if (fatal_signal_pending(current))
> > > > 2160                         return SWAP_CLUSTER_MAX;
> > > > 2161         }
> > > > 
> > > > msleep() must be called only when there are too many isolated pages:
> > > 
> > > What do you mean here?
> > 
> > That msleep() must not be called when
> > 
> > 	isolated > inactive
> > 
> > is false.
> 
> Well, but the code is structured in a way that this is simply true.
> too_many_isolated might be a false positive because it is a very loose
> interface and the number of isolated pages can fluctuate depending on
> the number of direct reclaimers.
> 
> > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > 2020                              struct scan_control *sc)
> > > > 2021 {
> > > >    :
> > > > 2030         if (file) {
> > > > 2031                 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > 2032                 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > 2033         } else {
> > > >    :
> > > > 2046         return isolated > inactive;
> > > > 
> > > > The return value was true since:
> > > > 
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > $8 = {
> > > >   counter = 1
> > > > }
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > $9 = {
> > > >   counter = 2
> > > > }
> > > > 
> > > > while per-CPU stats had:
> > > > 
> > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > $86 = 0xffff00917fcc32e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $87 = -1 '\377'
> > > > 
> > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > $89 = 0xffff00917fe032e0
> > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > $91 = -1 '\377'
> > > 
> > > This doesn't really tell much. How far out of sync are they really,
> > > cumulatively over all cpus?
> > 
> > This is the cumulative value over all CPUs (offsets for other CPUs
> > have been omitted since they are zero).
> 
> OK, so that means the NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
> correct?
> If that is the case then the value is indeed outdated, but it also means
> that NR_INACTIVE_FILE is so small that all but 1 (resp. 2, as kswapd is
> never throttled) reclaimers will be stalled anyway. So does the exact
> snapshot really help? Do you have any means to reproduce this behavior
> and see that the patch actually changed it?
> 
> [...]
> 
> > > With a very low NR_FREE_PAGES and many contending allocations the
> > > system could easily be stuck in reclaim. What are the other reclaim
> > > characteristics?
> > 
> > I can ask. What information in particular do you want to know?
> 
> When I am dealing with issues like this I rely heavily on /proc/vmstat
> counters, and on the pgscan and pgsteal counters to see whether there is
> any progress over time.
> 
> > > Is the direct reclaim successful?
> > 
> > Processes are stuck in too_many_isolated (unnecessarily). What do you
> > mean, precisely, when you ask "Is the direct reclaim successful"?
> 
> With such a small LRU list it is quite likely that many processes will
> be competing over the last pages on the list while the rest will be
> throttled because there is nothing to reclaim. It is quite possible that
> all reclaimers will be waiting for a single reclaimer (either kswapd or
> another direct reclaimer). I would like to understand whether the system
> is stuck in an unproductive state where everybody just waits until the
> counter is synced, or whether everything simply progresses very slowly
> because of the small LRU.
> -- 
> Michal Hocko
> SUSE Labs

Michal,

I think this provides the data you are looking for:

It seems the situation was one of invoking memory-consuming user
programs in parallel, expecting that the system would kick the OOM
killer at the end. Nodes 0-3 are small, containing system data and
almost all files. Nodes 4-7 are large, prepared to contain user data
only. The issue described above was observed on nodes 4-7, which had
very little memory for files. Nodes 4-7 have more CPUs than nodes 0-3.
Only the CPUs on nodes 4-7 are configured to be nohz_full, so we often
found unflushed per-CPU vmstat counters on the CPUs of nodes 4-7.