Message-ID: <1ffd72f1-7345-1d31-ea6f-77bec83cb570@linux.dev>
Date: Tue, 19 Sep 2023 14:48:54 +0800
Subject: Re: [PATCH v4 8/8] hugetlb: batch TLB flushes when restoring vmemmap
To: Mike Kravetz, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand, Miaohe Lin,
 David Rientjes, Anshuman Khandual, Naoya Horiguchi, Barry Song <21cnbao@gmail.com>,
 Michal Hocko, Matthew Wilcox, Xiongchun Duan, Andrew Morton
References: <20230918230202.254631-1-mike.kravetz@oracle.com> <20230918230202.254631-9-mike.kravetz@oracle.com>
From: Muchun Song <muchun.song@linux.dev>
In-Reply-To: <20230918230202.254631-9-mike.kravetz@oracle.com>

On 2023/9/19 07:02, Mike Kravetz wrote:
> Update the internal hugetlb restore vmemmap code path such that TLB
> flushing can be batched.  Use the existing mechanism of passing the
> VMEMMAP_REMAP_NO_TLB_FLUSH flag to indicate flushing should not be
> performed for individual pages.
> The routine hugetlb_vmemmap_restore_folios
> is the only user of this new mechanism, and it will perform a global
> flush after all vmemmap is restored.
>
> Signed-off-by: Joao Martins
> Signed-off-by: Mike Kravetz
> ---
>   mm/hugetlb_vmemmap.c | 39 ++++++++++++++++++++++++---------------
>   1 file changed, 24 insertions(+), 15 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index a6c356acb1fc..ae2229f19158 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -460,18 +460,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
>    * @end: end address of the vmemmap virtual address range that we want to
>    *       remap.
>    * @reuse: reuse address.
>    * @flags: modify behavior for bulk operations

Please keep the comment consistent with vmemmap_remap_split(), which says:
"@flags: modifications to vmemmap_remap_walk flags". Thanks.

>    *
>    * Return: %0 on success, negative error code otherwise.
>    */
>   static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> -			       unsigned long reuse)
> +			       unsigned long reuse, unsigned long flags)
>   {
>   	LIST_HEAD(vmemmap_pages);
>   	struct vmemmap_remap_walk walk = {
>   		.remap_pte	= vmemmap_restore_pte,
>   		.reuse_addr	= reuse,
>   		.vmemmap_pages	= &vmemmap_pages,
> -		.flags		= 0,
> +		.flags		= flags,
>   	};
>
>   	/* See the comment in the vmemmap_remap_free(). */
> @@ -493,17 +494,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>   static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>   core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -/**
> - * hugetlb_vmemmap_restore - restore previously optimized (by
> - *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> - *			     will be reallocated and remapped.
> - * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be restored.
> - *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> - * negative error code otherwise.
> - */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, unsigned long flags)
>   {
>   	int ret;
>   	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> @@ -524,7 +515,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>   	 * When a HugeTLB page is freed to the buddy allocator, previously
>   	 * discarded vmemmap pages must be allocated and remapping.
>   	 */
> -	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
> +	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
>   	if (!ret) {
>   		ClearHPageVmemmapOptimized(head);
>   		static_branch_dec(&hugetlb_optimize_vmemmap_key);
> @@ -533,6 +524,21 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>   	return ret;
>   }
>
> +/**
> + * hugetlb_vmemmap_restore - restore previously optimized (by
> + *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> + *			     will be reallocated and remapped.
> + * @h:		struct hstate.
> + * @head:	the head page whose vmemmap pages will be restored.
> + *
> + * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * negative error code otherwise.
> + */
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +{
> +	return __hugetlb_vmemmap_restore(h, head, 0);
> +}
> +
>   /**
>    * hugetlb_vmemmap_restore_folios - restore vmemmap for every folio on the list.
>    * @h: struct hstate.
> @@ -557,7 +563,8 @@ int hugetlb_vmemmap_restore_folios(const struct hstate *h,
>   	num_restored = 0;
>   	list_for_each_entry(folio, folio_list, lru) {
>   		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
> -			t_ret = hugetlb_vmemmap_restore(h, &folio->page);
> +			t_ret = __hugetlb_vmemmap_restore(h, &folio->page,
> +							  VMEMMAP_REMAP_NO_TLB_FLUSH);
>   			if (t_ret)
>   				ret = t_ret;
>   			else
> @@ -565,6 +572,8 @@ int hugetlb_vmemmap_restore_folios(const struct hstate *h,
>   		}
>   	}
>
> +	flush_tlb_all();
> +
>   	if (*restored)
>   		*restored = num_restored;
>   	return ret;