Date: Mon, 15 May 2023 13:14:26 +0200
From: Michal Hocko <mhocko@suse.com>
To: "Huang, Ying"
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven, Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen, Pavel Tatashin, Matthew Wilcox
Subject: Re: [RFC 0/6] mm: improve page allocator scalability via splitting zones
In-Reply-To: <87r0rm8die.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20230511065607.37407-1-ying.huang@intel.com> <87r0rm8die.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Fri 12-05-23 10:55:21, Huang, Ying wrote:
> Hi, Michal,
>
> Thanks for the comments!
>
> Michal Hocko writes:
>
> > On Thu 11-05-23 14:56:01, Huang Ying wrote:
> >> The patchset is based on upstream v6.3.
> >>
> >> More and more cores are put into one physical CPU (usually one NUMA
> >> node too). In 2023, a high-end server CPU has 56, 64, or more cores,
> >> and even more cores per physical CPU are planned for future CPUs.
> >> Yet in most cases all cores in one physical CPU contend for page
> >> allocation on a single zone. This causes heavy zone lock contention
> >> in some workloads, and the situation will only get worse.
> >>
> >> For example, on a 2-socket Intel server machine with 224 logical
> >> CPUs, if the kernel is built with `make -j224`, the zone lock
> >> contention cycles% can reach about 12.7%.
> >>
> >> To improve the scalability of page allocation, this series creates
> >> one zone instance for roughly every 256 GB of memory of a zone
> >> type. That is, one large zone type is split into multiple zone
> >> instances, and different logical CPUs prefer different zone
> >> instances based on the logical CPU number. This reduces the total
> >> number of logical CPUs contending on any one zone and thus improves
> >> scalability.
> >
> > It is not really clear to me why you need a new zone for all this
> > rather than partitioning the free lists internally within the zone?
> > Essentially to increase the current two-level system to three:
> > per-CPU caches, per-CPU arenas, and global fallback.
>
> Sorry, I didn't get your idea here. What is a per-CPU arena? What's
> the difference between it and the per-CPU caches (PCP)?

Sorry, I didn't give this much more thought beyond the above.
Essentially, we have a two-level system right now. The PCP caches
should reduce contention at the per-CPU level, and AFAIK that should
work reasonably well if you manage to align the batch sizes to the
workload. If this is not sufficient, then why add a full zone rather
than another level that caches across a unit larger than a single CPU?
Maybe a core? I might be going about this the wrong way, but there is
not much performance analysis about the source of the lock contention,
so I am mostly guessing.

> > I am also missing some information on why tuning the pcp caches is
> > not sufficient.
>
> PCP does improve page allocation scalability greatly! But it doesn't
> help much for workloads that allocate pages on one CPU and free them
> on different CPUs. PCP tuning can greatly improve page allocation
> scalability for a given workload, but it's not trivial to find the
> best tuning parameters for various workloads and workload run-time
> states (workloads may have different loads and memory requirements at
> different times). And we may run different workloads on different
> logical CPUs of the system, which also makes it hard to find the best
> PCP tuning globally.

Yes, this makes sense. Does that mean that the global PCP tuning is
not keeping up and we need to be able to do more auto-tuning on a
local basis rather than globally?

> It would be better to find a solution that improves page allocation
> scalability out of the box or automatically. Do you agree?

Yes.

-- 
Michal Hocko
SUSE Labs
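As a rough sketch of the zone-splitting idea discussed in this thread: the cover letter says each logical CPU prefers a zone instance based on its CPU number. The identifiers below (`NR_ZONE_INSTANCES`, `preferred_zone_instance`, `fallback_zone_instance`) are hypothetical illustrations, not the patchset's actual API:

```c
/*
 * Illustrative sketch only -- NOT the actual kernel code from the
 * patchset. It models how logical CPUs could spread across zone
 * instances after one zone type is split.
 */
#include <assert.h>

/*
 * Hypothetical: number of instances one zone type was split into,
 * e.g. 1 TiB of ZONE_NORMAL split into four 256 GiB instances.
 */
#define NR_ZONE_INSTANCES 4

/* Each logical CPU prefers one instance based on its CPU number. */
static inline int preferred_zone_instance(int cpu)
{
	return cpu % NR_ZONE_INSTANCES;
}

/*
 * If the preferred instance cannot satisfy an allocation, fall back
 * round-robin through the remaining instances of the same zone type.
 */
static inline int fallback_zone_instance(int preferred, int attempt)
{
	return (preferred + attempt) % NR_ZONE_INSTANCES;
}
```

With 224 logical CPUs and 4 instances, only about 56 CPUs would prefer any single zone lock, which is the contention reduction the cover letter is aiming for.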