Date: Thu, 7 Sep 2023 14:58:23 +0800
Subject: Re: [PATCH v2 11/11] hugetlb: batch TLB flushes when restoring vmemmap
To: Mike Kravetz, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Joao Martins
Cc: Muchun Song, Oscar Salvador, David Hildenbrand, Miaohe Lin, David Rientjes, Anshuman Khandual, Naoya Horiguchi, Michal Hocko, Matthew Wilcox, Xiongchun Duan, Andrew Morton
References: <20230905214412.89152-1-mike.kravetz@oracle.com> <20230905214412.89152-12-mike.kravetz@oracle.com>
From: Muchun Song
In-Reply-To: <20230905214412.89152-12-mike.kravetz@oracle.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 2023/9/6 05:44, Mike Kravetz wrote:
> Update the hugetlb_vmemmap_restore path to take a 'batch' parameter that

s/batch/flags/g

And this commit message should be reworked accordingly, since the
parameter has been renamed to 'flags'.

> indicates restoration is happening on a batch of pages.  When set, use
> the existing mechanism (VMEMMAP_NO_TLB_FLUSH) to delay TLB flushing.
> The routine hugetlb_vmemmap_restore_folios is the only user of this new
> batch parameter and it will perform a global flush after all vmemmap is
> restored.
>
> Signed-off-by: Joao Martins
> Signed-off-by: Mike Kravetz
> ---
>  mm/hugetlb_vmemmap.c | 37 +++++++++++++++++++++++--------------
>  1 file changed, 23 insertions(+), 14 deletions(-)
>
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 8c85e2c38538..11fda9d061eb 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -458,17 +458,19 @@ static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
>   * @end: end address of the vmemmap virtual address range that we want to
>   *       remap.
>   * @reuse: reuse address.
> + * @flags: modify behavior for bulk operations
>   *
>   * Return: %0 on success, negative error code otherwise.
>   */
>  static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> -			       unsigned long reuse)
> +			       unsigned long reuse, unsigned long flags)
>  {
>  	LIST_HEAD(vmemmap_pages);
>  	struct vmemmap_remap_walk walk = {
>  		.remap_pte	= vmemmap_restore_pte,
>  		.reuse_addr	= reuse,
>  		.vmemmap_pages	= &vmemmap_pages,
> +		.flags		= flags,
>  	};
>
>  	/* See the comment in the vmemmap_remap_free(). */
> @@ -490,17 +492,7 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>  static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>  core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -/**
> - * hugetlb_vmemmap_restore - restore previously optimized (by
> - *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> - *			     will be reallocated and remapped.
> - * @h:		struct hstate.
> - * @head:	the head page whose vmemmap pages will be restored.
> - *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> - * negative error code otherwise.
> - */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, unsigned long flags)
>  {
>  	int ret;
>  	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
> @@ -521,7 +513,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	 * When a HugeTLB page is freed to the buddy allocator, previously
>  	 * discarded vmemmap pages must be allocated and remapping.
>  	 */
> -	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse);
> +	ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
>  	if (!ret) {
>  		ClearHPageVmemmapOptimized(head);
>  		static_branch_dec(&hugetlb_optimize_vmemmap_key);
> @@ -530,6 +522,21 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
>  	return ret;
>  }
>
> +/**
> + * hugetlb_vmemmap_restore - restore previously optimized (by
> + *			     hugetlb_vmemmap_optimize()) vmemmap pages which
> + *			     will be reallocated and remapped.
> + * @h:		struct hstate.
> + * @head:	the head page whose vmemmap pages will be restored.
> + *
> + * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * negative error code otherwise.
> + */
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +{
> +	return __hugetlb_vmemmap_restore(h, head, 0UL);

The UL suffix could be dropped.

Thanks.

> +}
> +
>  /*
>   * This function will attempt to resore vmemmap for a list of folios.  There
>   * is no guarantee that restoration will be successful for all or any folios.
> @@ -540,7 +547,9 @@ void hugetlb_vmemmap_restore_folios(const struct hstate *h, struct list_head *fo
>  	struct folio *folio;
>
>  	list_for_each_entry(folio, folio_list, lru)
> -		(void)hugetlb_vmemmap_restore(h, &folio->page);
> +		(void)__hugetlb_vmemmap_restore(h, &folio->page, VMEMMAP_NO_TLB_FLUSH);
> +
> +	flush_tlb_all();
>  }
>
>  /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */