Subject: Re: [PATCH 12/12] hugetlb: batch TLB flushes when restoring vmemmap
From: Muchun Song <muchun.song@linux.dev>
Date: Wed, 30 Aug 2023 16:47:18 +0800
To: Mike Kravetz
Cc: Muchun Song, Joao Martins, Oscar Salvador, David Hildenbrand,
    Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi,
    Barry Song, Michal Hocko, Matthew Wilcox, Xiongchun Duan, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20230825190436.55045-1-mike.kravetz@oracle.com>
    <20230825190436.55045-13-mike.kravetz@oracle.com>
In-Reply-To: <20230825190436.55045-13-mike.kravetz@oracle.com>

On 2023/8/26 03:04, Mike Kravetz wrote:
> Update the hugetlb_vmemmap_restore path to take a 'batch' parameter that
> indicates restoration is happening on a batch of pages.  When set, use
> the existing mechanism (VMEMMAP_REMAP_BULK_PAGES) to delay TLB flushing.
> The routine hugetlb_vmemmap_restore_folios is the only user of this new
> batch parameter and it will perform a global flush after all vmemmap is
> restored.
>
> Signed-off-by: Mike Kravetz
> ---
>  mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++--------------
>  1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index a2fc7b03ac6b..d6e7440b9507 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -479,17 +479,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
>   * @end: end address of the vmemmap virtual address range that we want to
>   *       remap.
>   * @reuse: reuse address.
> + * @bulk: bulk operation, batch TLB flushes
>   *
>   * Return: %0 on success, negative error code otherwise.
>   */
>  static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> -			       unsigned long reuse)
> +			       unsigned long reuse, bool bulk)

I'd like the caller to pass VMEMMAP_REMAP_BULK_PAGES into
vmemmap_remap_alloc() directly; that way we would not need to change this
function again if another flag is introduced in the future. That is,
change "bool bulk" to "unsigned long flags".
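Something like this untested sketch is what I have in mind (the exact
plumbing is of course up to you):

	static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
				       unsigned long reuse, unsigned long flags)
	{
		LIST_HEAD(vmemmap_pages);
		struct vmemmap_remap_walk walk = {
			.remap_pte	= vmemmap_restore_pte,
			.reuse_addr	= reuse,
			.vmemmap_pages	= &vmemmap_pages,
			/* The caller decides whether TLB flushes are batched. */
			.flags		= flags,
		};

		/* ... rest of the function unchanged ... */
	}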
>  {
>  	LIST_HEAD(vmemmap_pages);
>  	struct vmemmap_remap_walk walk = {
>  		.remap_pte	= vmemmap_restore_pte,
>  		.reuse_addr	= reuse,
>  		.vmemmap_pages	= &vmemmap_pages,
> +		.flags		= !bulk ? 0 : VMEMMAP_REMAP_BULK_PAGES,
>  	};
>
>  	/* See the comment in the vmemmap_remap_free(). */
> @@ -511,17 +513,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>  static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>  core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -/**
> - * hugetlb_vmemmap_restore - restore previously optimized (by
> - *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> - *			     will be reallocated and remapped.
> - * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be restored.
> - *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> - * negative error code otherwise.
> - */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, bool bulk)

The same comment applies here.

>  {
>  	int ret;
>  	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> @@ -541,7 +533,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	 * When a HugeTLB page is freed to the buddy allocator, previously
>  	 * discarded vmemmap pages must be allocated and remapping.
>  	 */
> -	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
> +	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, bulk);
>  	if (!ret) {
>  		ClearHPageVmemmapOptimized(head);
>  		static_branch_dec(&hugetlb_optimize_vmemmap_key);
> @@ -550,12 +542,29 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	return ret;
>  }
>
> +/**
> + * hugetlb_vmemmap_restore - restore previously optimized (by
> + *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> + *			     will be reallocated and remapped.
> + * @h:		struct hstate.
> + * @head:	the head page whose vmemmap pages will be restored.
> + *
> + * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * negative error code otherwise.
> + */
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +{
> +	return __hugetlb_vmemmap_restore(h, head, false);
> +}
> +
>  void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *folio_list)
>  {
>  	struct folio *folio;
>
>  	list_for_each_entry(folio, folio_list, lru)
> -		hugetlb_vmemmap_restore(h, &folio->page);
> +		(void)__hugetlb_vmemmap_restore(h, &folio->page, true);

Pass VMEMMAP_REMAP_BULK_PAGES directly here as well (see the sketch at the
end of this mail).

Thanks.

> +
> +	flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
>  }
>
>  /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
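To put the suggestion together, here is a rough and untested sketch of what
I mean for the restore path (not a finished patch; the details are up to
you):

	int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
				      unsigned long flags)
	{
		/* ... body as in your patch, only the flags are forwarded ... */
		ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end,
					  vmemmap_reuse, flags);
		/* ... */
	}

	int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
	{
		/* A single page: no batching, the TLB is flushed as before. */
		return __hugetlb_vmemmap_restore(h, head, 0);
	}

	void hugetlb_vmemmap_restore_folios(const struct hstate *h,
					    struct list_head *folio_list)
	{
		struct folio *folio;

		list_for_each_entry(folio, folio_list, lru)
			(void)__hugetlb_vmemmap_restore(h, &folio->page,
							VMEMMAP_REMAP_BULK_PAGES);

		/* One global flush after the whole batch has been restored. */
		flush_tlb_kernel_range(0, TLB_FLUSH_ALL);
	}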