Date: Tue, 1 Jun 2021 11:00:15 -0400
From: Johannes Weiner
To: Matthew Wilcox
Cc: Huang Ying, Andrew Morton, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Linus Torvalds, Peter Xu,
	Hugh Dickins, Mel Gorman, Rik van Riel, Andrea Arcangeli,
	Michal Hocko, Dave Hansen, Tim Chen
Subject: Re: [PATCH] mm: free idle swap cache page after COW
References: <20210601053143.1380078-1-ying.huang@intel.com>

On Tue, Jun 01, 2021 at 12:48:15PM +0100, Matthew Wilcox wrote:
> On Tue, Jun 01, 2021 at 01:31:43PM +0800, Huang Ying wrote:
> > With commit 09854ba94c6a ("mm: do_wp_page() simplification"), after
> > COW, the idle swap cache page (neither the page nor the corresponding
> > swap entry is mapped by any process) will be left in the LRU list,
> > even if it's in the active list or the head of the inactive list. So,
> > the page reclaimer may take quite some overhead to reclaim these
> > actually unused pages.
> >
> > To help the page reclaiming, in this patch, after COW, the idle swap
> > cache page will be tried to be freed. To avoid to introduce much
> > overhead to the hot COW code path,
> >
> > a) there's almost zero overhead for non-swap case via checking
> > PageSwapCache() firstly.
> >
> > b) the page lock is acquired via trylock only.
> >
> > To test the patch, we used pmbench memory accessing benchmark with
> > working-set larger than available memory on a 2-socket Intel server
> > with a NVMe SSD as swap device. Test results shows that the pmbench
> > score increases up to 23.8% with the decreased size of swap cache and
> > swapin throughput.
>
> So 2 percentage points better than my original idea? Sweet.
>
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 2b7ffcbca175..d44425820240 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3104,6 +3104,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
> >  				munlock_vma_page(old_page);
> >  			unlock_page(old_page);
> >  		}
> > +		if (page_copied)
> > +			free_swap_cache(old_page);
> >  		put_page(old_page);
> >  	}
> >  	return page_copied ? VM_FAULT_WRITE : 0;
>
> Why not ...
>
> 	if (page_copied)
> 		free_page_and_swap_cache(old_page);
> 	else
> 		put_page(old_page);
>
> then you don't need to expose free_swap_cache().  Or does the test for
> huge_zero_page mess this up?

It's free_page[s]_and_swap_cache() we should reconsider, IMO.
free_swap_cache() makes for a clean API function that does one thing,
and does it right.
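For reference, a rough sketch of free_swap_cache() as I remember it
from mm/swap_state.c (paraphrased, not copied verbatim):

	static inline void free_swap_cache(struct page *page)
	{
		/*
		 * Cheap PageSwapCache() test up front and only a
		 * trylock on the page, so hot callers (like the COW
		 * path here) pay almost nothing in the common case.
		 */
		if (PageSwapCache(page) && !page_mapped(page) &&
		    trylock_page(page)) {
			try_to_free_swap(page);
			unlock_page(page);
		}
	}

That's exactly the cheap "drop the swap cache if nobody else needs it"
operation the changelog describes in a) and b), and nothing else.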
free_page_and_swap_cache() combines two independent operations, which
has the habit of accumulating special-case handling for some callers
that is unnecessary overhead for others (Abstraction Inversion
anti-pattern).

For example, free_page_and_swap_cache() adds an is_huge_zero_page()
check around the put_page() for the tlb batching code (rough sketch
below). This isn't needed here. AFAICS it is also unnecessary for the
other callsite, __collapse_huge_page_copy(), where context rules out
zero pages.

The common put_page() in Huang's version also makes it slightly easier
to follow the lifetime of old_page.

So I'd say exposing free_swap_cache() is a good move, for this patch
and in general.
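The combined helper, again roughly from memory rather than quoted
verbatim, looks like:

	void free_page_and_swap_cache(struct page *page)
	{
		free_swap_cache(page);
		/*
		 * Special case for the tlb batching caller: don't
		 * drop a reference on the huge zero page here.
		 */
		if (!is_huge_zero_page(page))
			put_page(page);
	}

A caller like wp_page_copy(), which can never see the huge zero page
in old_page, would be paying for that branch without needing it; that's
the special-case creep I mean.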