From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 22 Aug 2024 08:24:49 +0200
From: Michal Hocko
To: Zhongkun He
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, hannes@cmpxchg.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, lizefan.x@bytedance.com
Subject: Re: [External] Re: [PATCH] mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath()
Message-ID:
References: <20240821135900.2199983-1-hezhongkun.hzk@bytedance.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To:
On Thu 22-08-24 11:15:34, Zhongkun He wrote:
> On Wed, Aug 21, 2024 at 11:00 PM Michal Hocko wrote:
> >
> > On Wed 21-08-24 21:59:00, Zhongkun He wrote:
> > > I found a problem on my test machine: should_reclaim_retry() does
> > > not get the right node if I set cpuset.mems.
> > >
> > > 1. Test step and the machines.
> > > ------------
> > > root@vm:/sys/fs/cgroup/test# numactl -H | grep size
> > > node 0 size: 9477 MB
> > > node 1 size: 10079 MB
> > > node 2 size: 10079 MB
> > > node 3 size: 10078 MB
> > >
> > > root@vm:/sys/fs/cgroup/test# cat cpuset.mems
> > > 2
> > >
> > > root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g --vm-keep
> > > stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
> > > stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
> > > stress: WARN: [33430] (427) now reaping child worker processes
> > > stress: FAIL: [33430] (461) failed run completed in 2s
> >
> > OK, so the test gets killed as expected.
> >
> > > 2. reclaim_retry_zone info:
> > >
> > > We can only alloc pages from node=2, but the reclaim_retry_zone is
> > > node=0 and returns true.
> > >
> > > root@vm:/sys/kernel/debug/tracing# cat trace
> > > stress-33431 [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
> > > stress-33431 [001] ..... 13223.617682: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=2 wmark_check=1
> > > stress-33431 [001] ..... 13223.618103: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=3 wmark_check=1
> > > stress-33431 [001] ..... 13223.618454: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=4 wmark_check=1
> > > stress-33431 [001] ..... 13223.618770: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=5 wmark_check=1
> > > stress-33431 [001] ..... 13223.619150: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=6 wmark_check=1
> > > stress-33431 [001] ..... 13223.619510: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=7 wmark_check=1
> > > stress-33431 [001] ..... 13223.619850: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=8 wmark_check=1
> > > stress-33431 [001] ..... 13223.620171: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=9 wmark_check=1
> > > stress-33431 [001] ..... 13223.620533: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=10 wmark_check=1
> > > stress-33431 [001] ..... 13223.620894: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=11 wmark_check=1
> > > stress-33431 [001] ..... 13223.621224: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=12 wmark_check=1
> > > stress-33431 [001] ..... 13223.621551: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=13 wmark_check=1
> > > stress-33431 [001] ..... 13223.621847: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=14 wmark_check=1
> > > stress-33431 [001] ..... 13223.622200: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=15 wmark_check=1
> > > stress-33431 [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
> >
> > Are you suggesting that the problem is that should_reclaim_retry is
> > iterating nodes which are not allowed by cpusets, and that makes the
> > retry loop run more times than necessary?
>
> Yes, exactly.
> >
> > Is there any reason why you haven't done the same as the page
> > allocator does in this case?
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 28f80daf5c04..cbf09c0e3b8a 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4098,6 +4098,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
> >  		unsigned long min_wmark = min_wmark_pages(zone);
> >  		bool wmark;
> >
> > +		if (cpusets_enabled() &&
> > +		    (alloc_flags & ALLOC_CPUSET) &&
> > +		    !__cpuset_zone_allowed(zone, gfp_mask))
> > +			continue;
> > +
> >  		available = reclaimable = zone_reclaimable_pages(zone);
> >  		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
>
> That was my original version, but I found that the problem exists in
> other places. Please see the function flow below.
>
> __alloc_pages_slowpath:
>
>   get_page_from_freelist
>     __cpuset_zone_allowed  /* checks the node */
>
>   __alloc_pages_direct_reclaim
>     shrink_zones
>       cpuset_zone_allowed()  /* checks the node */
>
>   __alloc_pages_direct_compact
>     try_to_compact_pages
>       /* does not check cpuset_zone_allowed() */
>
>   should_reclaim_retry
>     /* does not check cpuset_zone_allowed() */
>
>   should_compact_retry
>     compaction_zonelist_suitable
>       /* does not check cpuset_zone_allowed() */
>
> Should we add __cpuset_zone_allowed() checks in the three functions
> listed above, or should we set the nodemask in __alloc_pages_slowpath()
> if it is empty and the request comes from user space?

cpuset integration into the page allocator is rather complex (check the
ALLOC_CPUSET use). Reviewing your change to make sure all the subtlety
is preserved is not an easy task. Therefore I would suggest addressing
the specific issue you have found.

-- 
Michal Hocko
SUSE Labs