Date: Wed, 22 Nov 2023 08:26:02 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Michal Hocko
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Vlastimil Babka, Andrew Morton, David Hildenbrand, Peter Xu
Subject: Re: [patch 0/2] mm: too_many_isolated can stall due to out of sync VM counters
References: <20231113233420.446465795@redhat.com>

On Wed, Nov 22, 2023 at 08:23:51AM -0300, Marcelo Tosatti wrote:
> On Tue, Nov 14, 2023 at 01:46:41PM +0100, Michal Hocko wrote:
> > On Tue 14-11-23 09:26:53, Marcelo Tosatti wrote:
> > > Hi Michal,
> > >
> > > On Tue, Nov 14, 2023 at 09:20:09AM +0100, Michal Hocko wrote:
> > > > On Mon 13-11-23 20:34:20, Marcelo Tosatti wrote:
> > > > > A customer reported seeing processes hung at too_many_isolated,
> > > > > while analysis indicated that the problem occurred due to out
> > > > > of sync per-CPU stats (see below).
> > > > >
> > > > > The fix is to use node_page_state_snapshot to avoid the use of
> > > > > stale values.
> > > > >
> > > > > 2136 static unsigned long
> > > > > 2137 shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > > > > 2138                      struct scan_control *sc, enum lru_list lru)
> > > > > 2139 {
> > > > >    :
> > > > > 2145         bool file = is_file_lru(lru);
> > > > >    :
> > > > > 2147         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > > > >    :
> > > > > 2150         while (unlikely(too_many_isolated(pgdat, file, sc))) {
> > > > > 2151                 if (stalled)
> > > > > 2152                         return 0;
> > > > > 2153
> > > > > 2154                 /* wait a bit for the reclaimer. */
> > > > > 2155                 msleep(100);   <--- some processes were sleeping here, with pending SIGKILL.
> > > > > 2156                 stalled = true;
> > > > > 2157
> > > > > 2158                 /* We are about to die and free our memory. Return now. */
> > > > > 2159                 if (fatal_signal_pending(current))
> > > > > 2160                         return SWAP_CLUSTER_MAX;
> > > > > 2161         }
> > > > >
> > > > > msleep() must be called only when there are too many isolated pages:
> > > >
> > > > What do you mean here?
> > >
> > > That msleep() must not be called when
> > >
> > >     isolated > inactive
> > >
> > > is false.
> >
> > Well, but the code is structured in a way that this is simply true.
> > too_many_isolated might be a false positive because it is a very loose
> > interface and the number of isolated pages can fluctuate depending on
> > the number of direct reclaimers.
> >
> > > > > 2019 static int too_many_isolated(struct pglist_data *pgdat, int file,
> > > > > 2020                              struct scan_control *sc)
> > > > > 2021 {
> > > > >    :
> > > > > 2030         if (file) {
> > > > > 2031                 inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> > > > > 2032                 isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> > > > > 2033         } else {
> > > > >    :
> > > > > 2046         return isolated > inactive;
> > > > >
> > > > > The return value was true since:
> > > > >
> > > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_INACTIVE_FILE]
> > > > > $8 = {
> > > > >   counter = 1
> > > > > }
> > > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->vm_stat[NR_ISOLATED_FILE]
> > > > > $9 = {
> > > > >   counter = 2
> > > > > }
> > > > >
> > > > > while the per-CPU stats had:
> > > > >
> > > > > crash> p ((struct pglist_data *) 0xffff00817fffe580)->per_cpu_nodestats
> > > > > $85 = (struct per_cpu_nodestat *) 0xffff8000118832e0
> > > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[42]
> > > > > $86 = 0xffff00917fcc32e0
> > > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fcc32e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > > $87 = -1 '\377'
> > > > >
> > > > > crash> p/x 0xffff8000118832e0 + __per_cpu_offset[44]
> > > > > $89 = 0xffff00917fe032e0
> > > > > crash> p ((struct per_cpu_nodestat *) 0xffff00917fe032e0)->vm_node_stat_diff[NR_ISOLATED_FILE]
> > > > > $91 = -1 '\377'
> > > >
> > > > This doesn't really tell us much. How much out of sync are they
> > > > really, cumulatively over all CPUs?
> > >
> > > This is the cumulative value over all CPUs (offsets for other CPUs
> > > have been omitted since they are zero).
> >
> > OK, so that means the NR_ISOLATED_FILE is 0 while NR_INACTIVE_FILE is 1,
> > correct? If that is the case then the value is indeed outdated, but it
> > also means that NR_INACTIVE_FILE is so small that all but 1 (resp. 2,
> > as kswapd is never throttled) reclaimers will be stalled anyway. So does
> > the exact snapshot really help? Do you have any means to reproduce this
> > behavior and see that the patch actually changed the behavior?
> >
> > [...]
> >
> > > > With a very low NR_FREE_PAGES and many contending allocations the
> > > > system could easily be stuck in reclaim. What are the other reclaim
> > > > characteristics?
> > >
> > > I can ask. What information in particular do you want to know?
> >
> > When I am dealing with issues like this I rely heavily on the
> > /proc/vmstat counters, and on the pgscan and pgsteal counters in
> > particular, to see whether there is any progress over time.
> >
> > > > Is the direct reclaim successful?
> > >
> > > Processes are stuck in too_many_isolated (unnecessarily). What do you
> > > mean, precisely, when you ask "Is the direct reclaim successful"?
> >
> > With such a small LRU list it is quite likely that many processes will
> > be competing over the last pages on the list while the rest will be
> > throttled because there is nothing to reclaim. It is quite possible that
> > all reclaimers will be waiting for a single reclaimer (either kswapd or
> > another direct reclaimer). I would like to understand whether the system
> > is stuck in an unproductive state where everybody just waits until the
> > counter is synced, or whether everything progresses very slowly because
> > of the small LRU.
> > --
> > Michal Hocko
> > SUSE Labs
>
> Michal,
>
> I think this provides the data you are looking for:
>
> It seems that the situation was invoking memory-consuming user programs
> in parallel, expecting that the system would kick the OOM killer at the
> end.
>
> Nodes 0-3 are small, containing system data and almost all of the files.
> Nodes 4-7 are large, prepared to contain user data only. The issue
> described above was observed on nodes 4-7, which had very little memory
> for files.
>
> Nodes 4-7 have more CPUs than nodes 0-3. Only the CPUs on nodes 4-7 are
> configured to be nohz_full, so we often found unflushed per-CPU vmstat
> counters on the CPUs of nodes 4-7.

Michal, let me know if you have any objections to the patch. Thanks.