Date: Fri, 24 Nov 2023 18:54:54 +0100
From: Dmytro Maluka <dmaluka@chromium.org>
To: Michal Hocko
Cc: Liu Shixin, Andrew Morton, Greg Kroah-Hartman, huang ying, Aaron Lu,
	Dave Hansen, Jesper Dangaard Brouer, Vlastimil Babka, Kemi Wang,
	Kefeng Wang, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH -next v2] mm, proc: collect percpu free pages into the free pages
References: <20220822023311.909316-1-liushixin2@huawei.com>
	<20220822033354.952849-1-liushixin2@huawei.com>
	<20220822141207.24ff7252913a62f80ea55e90@linux-foundation.org>
	<6b2977fc-1e4a-f3d4-db24-7c4699e0773f@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Tue, Aug 23, 2022 at 03:37:52PM +0200, Michal Hocko wrote:
> On Tue 23-08-22 20:46:43, Liu Shixin wrote:
> > On 2022/8/23 15:50, Michal Hocko wrote:
> > > On Mon 22-08-22 14:12:07, Andrew Morton wrote:
> > >> On Mon, 22 Aug 2022 11:33:54 +0800 Liu Shixin wrote:
> > >>
> > >>> The pages on the pcplists could be used, but are not counted into
> > >>> memory free or available, and pcp_free is only
> > >>> shown by show_mem() for now. Since commit
> > >>> d8a759b57035 ("mm, page_alloc: double zone's batchsize"), there is
> > >>> a significant decrease in the displayed free memory. With a large
> > >>> number of CPUs and zones, the number of pages in the percpu lists
> > >>> can be very large, so it is better to let the user know the pcp
> > >>> count.
> > >>>
> > >>> On a machine with 3 zones and 72 CPUs, before commit d8a759b57035
> > >>> the maximum amount of pages in the pcp lists was theoretically
> > >>> 162MB (3*72*768KB). After the patch, the lists can hold 324MB. In
> > >>> practice, 114MB has been observed in the idle state after system
> > >>> startup (an increase of 80MB).
> > >>>
> > >> Seems reasonable.
> > > I have asked in the previous incarnation of the patch but haven't
> > > really received any answer [1]. Is this a _real_ problem? The
> > > absolute amount of memory could be perceived as a lot, but is it
> > > really noticeable wrt the overall memory on those systems?

Let me provide some other numbers, from the desktop side. On a low-end
chromebook with 4GB RAM and a dual-core CPU, after commit b92ca18e8ca5
("mm/page_alloc: disassociate the pcp->high from pcp->batch") the max
amount of PCP pages increased 56x: from 2.9MB (1.45MB per CPU) to 165MB
(82.5MB per CPU). On such a system memory pressure conditions are not a
rare occurrence, so several dozen MB make a lot of difference.

(The reason it increased so much is that the maximum now corresponds to
the low watermark, which is 165MB. And the low watermark, in turn, is so
high because of khugepaged, which bumps up min_free_kbytes to 132MB
regardless of the total amount of memory.)

> > This may not be obvious when memory is sufficient. However, products
> > monitor memory in order to plan its use, and this change has caused
> > warnings.
>
> Is it possible that the said monitor is over-sensitive and looking at
> the wrong numbers? Overall free memory doesn't really tell much TBH.
> MemAvailable is a very rough estimation as well.
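(As an aside, for anyone who wants to reproduce the figures above, here
is the back-of-the-envelope arithmetic as a small Python sketch. The
"low watermark split across CPUs" model is my simplification for
illustration, not the kernel's exact pcp->high heuristic.)

```python
# Back-of-the-envelope arithmetic behind the PCP sizing figures quoted
# in this thread. Deliberate simplifications, not the exact kernel code.

KB = 1024
MB = 1024 * 1024

def pcp_max_batch_based(zones: int, cpus: int, per_list_kb: int = 768) -> int:
    # Pre-d8a759b57035 scheme: capacity scales with the zone batch size;
    # 768KB per (zone, CPU) list is the figure quoted for the 72-CPU box.
    return zones * cpus * per_list_kb * KB

def pcp_per_cpu_watermark_based(low_watermark: int, cpus: int) -> float:
    # Rough model of the post-b92ca18e8ca5 behavior: total PCP capacity
    # tracks the zone low watermark, shared across the local CPUs.
    return low_watermark / cpus

# 3 zones, 72 CPUs: 162MB theoretical maximum before the batch size was
# doubled (hence 324MB after).
assert pcp_max_batch_based(3, 72) == 162 * MB

# 4GB chromebook: 165MB low watermark over 2 CPUs -> 82.5MB per CPU,
# versus 1.45MB per CPU before.
print(pcp_per_cpu_watermark_based(165 * MB, 2) / MB)  # 82.5
```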
>
> In reality, what really matters much more is whether the memory is
> readily available when it is required, and none of MemFree/MemAvailable
> gives you that information in the general case.
>
> > We have also considered using /proc/zoneinfo to calculate the total
> > number of pcplist pages. However, we think it is more appropriate to
> > add the total number of pcplist pages to the free and available
> > pages. After all, this part is also free pages.
>
> Those free pages are not generally available, as explained. They are
> available to a specific CPU, and drained under memory pressure and
> other events, but there is still no guarantee that a specific process
> can harvest that memory, because the pcp caches are replenished all
> the time. So in a sense it is semi-hidden memory.

I was intuitively assuming that per-CPU pages should always be
available for allocation without resorting to paging out allocated
pages (and that it should thus uncontroversially be a good idea to
include per-CPU pages in MemFree, to make it more accurate). But
looking at the code in __alloc_pages() and around it, I see you are
right: we don't try draining other CPUs' PCP lists *before* resorting
to direct reclaim, compaction, etc. BTW, why not? Shouldn't draining
PCP lists be cheaper than pageout() in any case?

> That being said, I am still not convinced this is actually going to
> help all that much. You will see slightly different numbers which do
> not tell much one way or another, and if the sole reason for tweaking
> these numbers is that some monitor is complaining because X became
> X-epsilon, then this sounds like a weak justification to me. That
> epsilon happens all the time, because there are quite a few hidden
> caches that are released under memory pressure. I am not sure it is
> maintainable to consider each one of them and pretend that
> MemFree/MemAvailable is somehow precise. It has never been and likely
> never will be.
> --
> Michal Hocko
> SUSE Labs
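P.S. As a footnote to the /proc/zoneinfo option mentioned above: summing
the per-CPU pageset counters is straightforward. A minimal sketch,
assuming the usual "cpu:"/"count:" pageset layout of /proc/zoneinfo
(indentation and spacing vary between kernel versions):

```python
# Sum the "count:" field of every per-CPU pageset section in
# /proc/zoneinfo, giving the total number of pages currently sitting
# on pcplists (i.e. the "semi-hidden" memory discussed above).

def pcp_pages(zoneinfo_text: str) -> int:
    total = 0
    in_cpu = False
    for line in zoneinfo_text.splitlines():
        s = line.strip()
        if s.startswith("cpu:"):
            # Entering a per-CPU pageset block; the next "count:" line
            # belongs to it.
            in_cpu = True
        elif in_cpu and s.startswith("count:"):
            total += int(s.split(":")[1])
            in_cpu = False
    return total

# Usage on a live system (4K pages assumed):
#   with open("/proc/zoneinfo") as f:
#       print(pcp_pages(f.read()) * 4096 / 2**20, "MB on pcplists")
```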