Subject: Re: [PATCH 7/7] mm/khugepaged: try to free transhuge swapcache when possible
From: Miaohe Lin <linmiaohe@huawei.com>
To: Yang Shi
CC: Andrew Morton, Andrea Arcangeli, Matthew Wilcox, Vlastimil Babka, David Howells, NeilBrown, Alistair Popple, David Hildenbrand, Suren Baghdasaryan, Peter Xu, Linux MM, Linux Kernel Mailing List
Date: Thu, 16 Jun 2022 15:42:31 +0800
Message-ID: <87617483-7945-30e2-471e-578da4f4d9c7@huawei.com>
References: <20220611084731.55155-1-linmiaohe@huawei.com> <20220611084731.55155-8-linmiaohe@huawei.com>
On 2022/6/16 7:58, Yang Shi wrote:
> On Sat, Jun 11, 2022 at 1:47 AM Miaohe Lin wrote:
>>
>> Transhuge swapcaches won't be freed in __collapse_huge_page_copy().
>> It's because release_pte_page() is not called for these pages and
>> thus free_page_and_swap_cache can't grab the page lock. These pages
>> won't be freed from swap cache even if we are the only user until
>> next time reclaim. It shouldn't hurt indeed, but we could try to
>> free these pages to save more memory for system.
>
>
>>
>> Signed-off-by: Miaohe Lin
>> ---
>>  include/linux/swap.h | 5 +++++
>>  mm/khugepaged.c      | 1 +
>>  mm/swap.h            | 5 -----
>>  3 files changed, 6 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index 8672a7123ccd..ccb83b12b724 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -456,6 +456,7 @@ static inline unsigned long total_swapcache_pages(void)
>>  	return global_node_page_state(NR_SWAPCACHE);
>>  }
>>
>> +extern void free_swap_cache(struct page *page);
>>  extern void free_page_and_swap_cache(struct page *);
>>  extern void free_pages_and_swap_cache(struct page **, int);
>>  /* linux/mm/swapfile.c */
>> @@ -540,6 +541,10 @@ static inline void put_swap_device(struct swap_info_struct *si)
>>  /* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
>>  #define free_swap_and_cache(e) is_pfn_swap_entry(e)
>>
>> +static inline void free_swap_cache(struct page *page)
>> +{
>> +}
>> +
>>  static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
>>  {
>>  	return 0;
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index ee0a719c8be9..52109ad13f78 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -756,6 +756,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>>  	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
>>  		list_del(&src_page->lru);
>>  		release_pte_page(src_page);
>> +		free_swap_cache(src_page);
>
> Will this really work? The free_swap_cache() will just dec refcounts
> without putting the page back to buddy. So the hugepage is not
> actually freed at all. Am I missing something?

Thanks for catching this! If the page is on the percpu lru_pvecs cache, it
will be released when the lru_pvecs are drained. But if not,
free_swap_cache() won't free the page, as it assumes the caller holds a
reference on the page and thus only does page_ref_sub(). Does the change
below make sense to you?

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 52109ad13f78..b8c96e33591d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -755,8 +755,12 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,

 	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
 		list_del(&src_page->lru);
-		release_pte_page(src_page);
+		mod_node_page_state(page_pgdat(src_page),
+				    NR_ISOLATED_ANON + page_is_file_lru(src_page),
+				    -compound_nr(src_page));
+		unlock_page(src_page);
 		free_swap_cache(src_page);
+		putback_lru_page(src_page);
 	}
 }

Thanks!
>
>>  	}
>>  }
>>
>> diff --git a/mm/swap.h b/mm/swap.h
>> index 0193797b0c92..863f6086c916 100644
>> --- a/mm/swap.h
>> +++ b/mm/swap.h
>> @@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct page *page,
>>  void delete_from_swap_cache(struct page *page);
>>  void clear_shadow_from_swap_cache(int type, unsigned long begin,
>>  				  unsigned long end);
>> -void free_swap_cache(struct page *page);
>>  struct page *lookup_swap_cache(swp_entry_t entry,
>>  			       struct vm_area_struct *vma,
>>  			       unsigned long addr);
>> @@ -81,10 +80,6 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
>>  	return NULL;
>>  }
>>
>> -static inline void free_swap_cache(struct page *page)
>> -{
>> -}
>> -
>>  static inline void show_swap_cache_info(void)
>>  {
>>  }
>> --
>> 2.23.0
>>
>>
> .
>