From: Yang Shi <shy828301@gmail.com>
Date: Thu, 16 Jun 2022 08:53:29 -0700
Subject: Re: [PATCH 7/7] mm/khugepaged: try to free transhuge swapcache when possible
To: Miaohe Lin
Cc: Andrew Morton, Andrea Arcangeli, Matthew Wilcox, Vlastimil Babka,
 David Howells, NeilBrown, Alistair Popple, David Hildenbrand,
 Suren Baghdasaryan, Peter Xu, Linux MM, Linux Kernel Mailing List
In-Reply-To: <87617483-7945-30e2-471e-578da4f4d9c7@huawei.com>
References: <20220611084731.55155-1-linmiaohe@huawei.com>
 <20220611084731.55155-8-linmiaohe@huawei.com>
 <87617483-7945-30e2-471e-578da4f4d9c7@huawei.com>
Content-Type: text/plain; charset="UTF-8"
List-ID: <linux-mm.kvack.org>

On Thu, Jun 16, 2022 at 12:42 AM Miaohe Lin wrote:
>
> On 2022/6/16 7:58, Yang Shi wrote:
> > On Sat, Jun 11, 2022 at 1:47 AM Miaohe Lin wrote:
> >>
> >> Transhuge swapcaches won't be freed in __collapse_huge_page_copy().
> >> It's because release_pte_page() is not called for these pages and
> >> thus free_page_and_swap_cache can't grab the page lock. These pages
> >> won't be freed from swap cache even if we are the only user until
> >> next time reclaim. It shouldn't hurt indeed, but we could try to
> >> free these pages to save more memory for system.
> >
> >
> >>
> >> Signed-off-by: Miaohe Lin
> >> ---
> >>  include/linux/swap.h | 5 +++++
> >>  mm/khugepaged.c      | 1 +
> >>  mm/swap.h            | 5 -----
> >>  3 files changed, 6 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/include/linux/swap.h b/include/linux/swap.h
> >> index 8672a7123ccd..ccb83b12b724 100644
> >> --- a/include/linux/swap.h
> >> +++ b/include/linux/swap.h
> >> @@ -456,6 +456,7 @@ static inline unsigned long total_swapcache_pages(void)
> >>         return global_node_page_state(NR_SWAPCACHE);
> >>  }
> >>
> >> +extern void free_swap_cache(struct page *page);
> >>  extern void free_page_and_swap_cache(struct page *);
> >>  extern void free_pages_and_swap_cache(struct page **, int);
> >>  /* linux/mm/swapfile.c */
> >> @@ -540,6 +541,10 @@ static inline void put_swap_device(struct swap_info_struct *si)
> >>  /* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
> >>  #define free_swap_and_cache(e) is_pfn_swap_entry(e)
> >>
> >> +static inline void free_swap_cache(struct page *page)
> >> +{
> >> +}
> >> +
> >>  static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
> >>  {
> >>         return 0;
> >> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> >> index ee0a719c8be9..52109ad13f78 100644
> >> --- a/mm/khugepaged.c
> >> +++ b/mm/khugepaged.c
> >> @@ -756,6 +756,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> >>         list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> >>                 list_del(&src_page->lru);
> >>                 release_pte_page(src_page);
> >> +               free_swap_cache(src_page);
> >
> > Will this really work? The free_swap_cache() will just dec refcounts
> > without putting the page back to buddy. So the hugepage is not
> > actually freed at all. Am I missing something?
>
> Thanks for catching this! If page is on percpu lru_pvecs cache, page will
> be released when lru_pvecs are drained.
> But if not, free_swap_cache() won't
> free the page as it assumes the caller has a reference on the page and thus
> only does page_ref_sub(). Does the below change make sense to you?

THP gets drained immediately so they won't stay in pagevecs.

>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 52109ad13f78..b8c96e33591d 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -755,8 +755,12 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>
>         list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
>                 list_del(&src_page->lru);
> -               release_pte_page(src_page);
> +               mod_node_page_state(page_pgdat(src_page),
> +                               NR_ISOLATED_ANON + page_is_file_lru(src_page),
> +                               -compound_nr(src_page));
> +               unlock_page(src_page);
>                 free_swap_cache(src_page);
> +               putback_lru_page(src_page);

I'm not sure if it is worth it or not for a rare corner case since THP
should not stay in swapcache unless try_to_unmap() in vmscan fails IIUC.
And it is not guaranteed that free_swap_cache() will get the page lock.

>         }
> }
>
> Thanks!
>
> >
> >>         }
> >>  }
> >>
> >> diff --git a/mm/swap.h b/mm/swap.h
> >> index 0193797b0c92..863f6086c916 100644
> >> --- a/mm/swap.h
> >> +++ b/mm/swap.h
> >> @@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct page *page,
> >>  void delete_from_swap_cache(struct page *page);
> >>  void clear_shadow_from_swap_cache(int type, unsigned long begin,
> >>                 unsigned long end);
> >> -void free_swap_cache(struct page *page);
> >>  struct page *lookup_swap_cache(swp_entry_t entry,
> >>                 struct vm_area_struct *vma,
> >>                 unsigned long addr);
> >> @@ -81,10 +80,6 @@ static inline struct address_space *swap_address_space(swp_entry_t entry)
> >>         return NULL;
> >>  }
> >>
> >> -static inline void free_swap_cache(struct page *page)
> >> -{
> >> -}
> >> -
> >>  static inline void show_swap_cache_info(void)
> >>  {
> >>  }
> >> --
> >> 2.23.0
> >>
> >>
> > .
> >