Date: Wed, 6 Sep 2023 04:14:30 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Kefeng Wang
Cc: Andrew Morton, Mike Kravetz, Muchun Song, linux-mm@kvack.org, Yuan Can
Subject: Re: [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
References: <20230905103508.2996474-1-wangkefeng.wang@huawei.com>
On Wed, Sep 06, 2023 at 11:13:27AM +0800, Kefeng Wang wrote:
> On 2023/9/6 10:47, Matthew Wilcox wrote:
> > On Tue, Sep 05, 2023 at 06:35:08PM +0800, Kefeng Wang wrote:
> > > alloc_vmemmap_page_list() needs to allocate 4095 pages (1G) or 7
> > > pages (2M) at once, so let's add a bulk allocator variant,
> > > alloc_pages_bulk_list_node(), and switch alloc_vmemmap_page_list()
> > > to use it to accelerate page allocation.
> >
> > Argh, no, please don't do this.
> >
> > Iterating a linked list is _expensive_.  It is about 10x quicker to
> > iterate an array than a linked list.  Adding the list_head option
> > to __alloc_pages_bulk() was a colossal mistake.  Don't perpetuate it.
> >
> > These pages are going into an array anyway.  Don't put them on a list
> > first.
>
> struct vmemmap_remap_walk - walk vmemmap page table
>
>  * @vmemmap_pages:  the list head of the vmemmap pages that can be freed
>  *                  or is mapped from.
>
> At present, struct vmemmap_remap_walk uses a list for the vmemmap page
> table walk, so do you mean we need to change vmemmap_pages from a list
> to an array first and then use the array bulk API, and even kill the
> list bulk API?

That would be better, yes.