Date: Wed, 6 Sep 2023 11:13:27 +0800
Subject: Re: [PATCH resend] mm: hugetlb_vmemmap: use bulk allocator in alloc_vmemmap_page_list()
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Matthew Wilcox
Cc: Andrew Morton, Mike Kravetz, Muchun Song, linux-mm@kvack.org, Yuan Can
References: <20230905103508.2996474-1-wangkefeng.wang@huawei.com>
On 2023/9/6 10:47, Matthew Wilcox wrote:
> On Tue, Sep 05, 2023 at 06:35:08PM +0800, Kefeng Wang wrote:
>> alloc_vmemmap_page_list() needs to allocate 4095 pages (1G hugepage)
>> or 7 pages (2M hugepage) at once, so let's add a bulk allocator
>> variant, alloc_pages_bulk_list_node(), and switch
>> alloc_vmemmap_page_list() to use it to accelerate page allocation.
>
> Argh, no, please don't do this.
>
> Iterating a linked list is _expensive_.  It is about 10x quicker to
> iterate an array than a linked list.  Adding the list_head option
> to __alloc_pages_bulk() was a colossal mistake.  Don't perpetuate it.
>
> These pages are going into an array anyway.  Don't put them on a list
> first.

Quoting the kerneldoc of struct vmemmap_remap_walk:

 * struct vmemmap_remap_walk - walk vmemmap page table
 * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
 *			or is mapped from.

At present, struct vmemmap_remap_walk uses a list for the vmemmap page
table walk. So do you mean we need to change vmemmap_pages from a list
to an array first, then use the array bulk API, and even kill the list
bulk API?
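
That is, something like the following sketch (hypothetical, untested,
written against the 6.5-era alloc_pages_bulk_array_node() API; the
array name and retry handling are my own, not from the patch):

	/*
	 * Array-based variant of alloc_vmemmap_page_list(): let the bulk
	 * allocator fill a struct page * array, and only splice the pages
	 * onto the caller's list once they are all in hand.
	 */
	static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
					   struct list_head *list)
	{
		gfp_t gfp_mask = GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_THISNODE;
		unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
		int nid = page_to_nid((struct page *)start);
		unsigned long nr_allocated = 0, i;
		struct page **page_array;

		/* At most 4095 entries (1G hugepage), so kvcalloc() is cheap. */
		page_array = kvcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
		if (!page_array)
			return -ENOMEM;

		/*
		 * The bulk allocator may return fewer pages than requested; it
		 * only fills NULL slots, so retrying with the same array keeps
		 * the pages allocated so far.
		 */
		while (nr_allocated < nr_pages) {
			unsigned long got;

			got = alloc_pages_bulk_array_node(gfp_mask, nid,
							  nr_pages, page_array);
			if (got == nr_allocated)	/* no progress, give up */
				goto err;
			nr_allocated = got;
		}

		for (i = 0; i < nr_pages; i++)
			list_add_tail(&page_array[i]->lru, list);

		kvfree(page_array);
		return 0;
	err:
		for (i = 0; i < nr_allocated; i++)
			__free_pages(page_array[i], 0);
		kvfree(page_array);
		return -ENOMEM;
	}

The temporary array costs one kvcalloc() per call, but the allocation
fast path becomes a sequential array fill instead of per-page
list_add_tail() pointer chasing.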