From: David Hildenbrand <david@redhat.com>
To: "Huang, Ying" <ying.huang@intel.com>, Michal Hocko
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Arjan Van De Ven,
 Andrew Morton, Mel Gorman, Vlastimil Babka, Johannes Weiner,
 Dave Hansen, Pavel Tatashin, Matthew Wilcox
Date: Tue, 16 May 2023 12:30:21 +0200
Subject: Re: [RFC 0/6] mm: improve page allocator scalability via splitting zones
Message-ID: <3d77ca46-6256-7996-b0f5-67c414d2a8dc@redhat.com>
In-Reply-To: <87jzx87h1d.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20230511065607.37407-1-ying.huang@intel.com>
 <87r0rm8die.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <87jzx87h1d.fsf@yhuang6-desk2.ccr.corp.intel.com>
Organization: Red Hat

On 16.05.23 11:38, Huang, Ying wrote:
> Michal Hocko writes:
>
>> On Fri 12-05-23 10:55:21, Huang, Ying wrote:
>>> Hi, Michal,
>>>
>>> Thanks for the comments!
>>>
>>> Michal Hocko writes:
>>>
>>>> On Thu 11-05-23 14:56:01, Huang Ying wrote:
>>>>> The patchset is based on upstream v6.3.
>>>>>
>>>>> More and more cores are put in one physical CPU (usually one NUMA
>>>>> node too). In 2023, a high-end server CPU has 56, 64, or more
>>>>> cores, and even more cores per physical CPU are planned for future
>>>>> CPUs. Yet in most cases all cores in one physical CPU contend for
>>>>> page allocation on one zone. This causes heavy zone lock
>>>>> contention in some workloads, and the situation will only get
>>>>> worse in the future.
>>>>>
>>>>> For example, on a 2-socket Intel server machine with 224 logical
>>>>> CPUs, if the kernel is built with `make -j224`, the zone lock
>>>>> contention can reach up to about 12.7% of cycles.
>>>>>
>>>>> To improve the scalability of page allocation, this series
>>>>> generally creates one zone instance for roughly every 256 GB of
>>>>> memory of a zone type. That is, one large zone type is split into
>>>>> multiple zone instances, and different logical CPUs prefer
>>>>> different zone instances based on the logical CPU number. This
>>>>> reduces the number of logical CPUs contending on any one zone and
>>>>> thus improves scalability.
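
[To make the proposed mapping concrete: as I read the cover letter, the
CPU-to-instance spread is essentially static, by logical CPU number --
conceptually something like the sketch below. All names and constants
here are invented for illustration; this is not the actual patch code.

/* ~256 GB per zone instance, counted in 4 KiB pages (2^38 / 2^12). */
#define PAGES_PER_INSTANCE      (64UL << 20)

/* Round up so every zone type gets at least one instance. */
static inline unsigned int nr_zone_instances(unsigned long spanned_pages)
{
        unsigned int n = (spanned_pages + PAGES_PER_INSTANCE - 1) /
                         PAGES_PER_INSTANCE;
        return n ? n : 1;
}

/*
 * Static spread: with I instances, roughly nr_cpus / I logical CPUs
 * end up contending on each instance's zone lock.
 */
static inline unsigned int preferred_zone_instance(unsigned int cpu,
                                                   unsigned int nr_instances)
{
        return cpu % nr_instances;
}

Whether the series uses exactly cpu % nr_instances I can't say without
reading the patches; the point is just that the mapping itself is cheap,
while each instance duplicates the whole zone machinery.]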
>>>> It is not really clear to me why you need a new zone for all this
>>>> rather than partitioning free lists internally within the zone --
>>>> essentially increasing the current two-level system to three:
>>>> per-CPU caches, per-CPU arenas, and global fallback.
>>>
>>> Sorry, I didn't get your idea here. What are per-CPU arenas? What's
>>> the difference between them and the per-CPU caches (PCP)?
>>
>> Sorry, I didn't give this much more thought than the above.
>> Essentially, we have a two-level system right now. PCP caches should
>> reduce the contention on the per-CPU level, and that should work
>> reasonably well if you manage to align batch sizes to the workload,
>> AFAIK. If this is not sufficient, then why add a full zone rather
>> than another level that caches across a larger-than-a-CPU unit --
>> maybe a core?
>>
>> This might be the wrong way around to go about this, but there is
>> not much performance analysis about the source of the lock
>> contention, so I am mostly guessing.
>
> I guess that page allocation scalability will improve if we put more
> pages in the per-CPU caches, or add another level of cache for
> multiple logical CPUs, because more page allocation requests can then
> be satisfied without acquiring the zone lock.
>
> As with any caching system, there are always cases where the caches
> are drained and too many requests go to the underlying slow layer
> (the zone here). For example, if a workload needs to allocate a huge
> number of pages (larger than the cache size) in parallel, it will
> eventually run into zone lock contention. The situation becomes worse
> and worse as one zone is shared by more and more logical CPUs, which
> is the trend in industry now. Per my understanding, that is why we
> observe high zone lock contention cycles in the kbuild test.
>
> So, per my understanding, to improve page allocation scalability in
> the bad situations (that is, when caching doesn't work well enough),
> we need to restrict the number of logical CPUs that share one zone.
> This series is an attempt at that. Better caching can increase the
> good situations and reduce the bad situations, but it seems hard to
> eliminate all bad situations.
>
> From another perspective, we don't install more and more memory per
> logical CPU. This makes it hard to enlarge the default per-CPU cache
> size.
>
>>>> I am also missing some information on why pcp cache tuning is not
>>>> sufficient.
>>>
>>> PCP does improve page allocation scalability greatly! But it
>>> doesn't help much for workloads that allocate pages on one CPU and
>>> free them on different CPUs. PCP tuning can improve page allocation
>>> scalability for a given workload greatly, but it's not trivial to
>>> find the best tuning parameters for various workloads and workload
>>> run-time states (workloads may have different loads and memory
>>> requirements at different times). And we may run different
>>> workloads on different logical CPUs of the system. This also makes
>>> it hard to find the best PCP tuning globally.
>>
>> Yes, this makes sense. Does that mean that the global pcp tuning is
>> not keeping up and we need to be able to do more auto-tuning on a
>> local basis rather than globally?
>
> Similar to the above, I think that PCP greatly helps performance in
> the good situations, while splitting zones helps scalability in the
> bad situations. They work at different levels.
>
> As for PCP auto-tuning, I think it's hard to implement it in a way
> that resolves all problems (that is, ensures the PCP is never
> drained).
>
> And auto-tuning doesn't sound easy. Do you have some idea of how to
> do that?

If we could avoid instantiating more zones and rather improve existing
mechanisms (PCP), that would be much preferred IMHO. I'm sure it's not
easy, but that shouldn't stop us from trying ;)
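
[To sketch what such auto-tuning could look like, purely as a thought
experiment: the per-CPU high watermark could grow while a CPU keeps
spilling to the zone lock and shrink while its cache sits idle. All
names and thresholds below are invented; nothing like this exists in
the tree.

/* Hypothetical per-CPU sampling state. */
struct pcp_autotune_state {
        unsigned int high;         /* current PCP high watermark, in pages */
        unsigned int lock_refills; /* refills that took the zone lock */
        unsigned int idle_pages;   /* cached pages left unused this window */
};

#define PCP_HIGH_MIN    64
#define PCP_HIGH_MAX    4096
#define REFILL_GROW_AT  8

/*
 * Run once per sampling window: grow the cache while this CPU keeps
 * falling back to the zone lock, shrink it when the cache sits idle,
 * so hot CPUs get bigger caches without one global knob.
 */
static void pcp_autotune(struct pcp_autotune_state *st)
{
        if (st->lock_refills > REFILL_GROW_AT && st->high < PCP_HIGH_MAX)
                st->high *= 2;
        else if (st->idle_pages > st->high / 2 && st->high > PCP_HIGH_MIN)
                st->high /= 2;
        st->lock_refills = 0;
        st->idle_pages = 0;
}

The hard part Ying points at remains, of course: any such heuristic can
be defeated by a burst larger than whatever the cache grew to, and
alloc-on-one-CPU/free-on-another patterns don't refill the right cache.]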
I did not look into the details of this proposal, but seeing the change
in include/linux/page-flags-layout.h scares me. Further, I'm not so
sure how that change really interacts with hot(un)plug of memory ... on
a quick glimpse I feel like this series hacks the code such that the
split works based on the boot memory size ...

I agree with Michal that looking into auto-tuning PCP would be
preferred. If that can't be done, adding another layer might end up
cleaner and eventually cover more use cases.

[I recall there was once a proposal to add a 3rd layer to limit
fragmentation to individual memory blocks; but the granularity was
rather small and there were also some concerns that I don't recall
anymore]

--
Thanks,

David / dhildenb