From: Yu Zhao <yuzhao@google.com>
Date: Sun, 8 Aug 2021 11:49:33 -0600
Subject: Re: [PATCH 2/3] mm: free zapped tail pages when splitting isolated thp
To: Yang Shi
Cc: Linux MM, Andrew Morton, Hugh Dickins, Kirill A. Shutemov,
 Matthew Wilcox, Vlastimil Babka, Zi Yan,
 Linux Kernel Mailing List, Shuang Zhai
References: <20210731063938.1391602-1-yuzhao@google.com>
 <20210731063938.1391602-3-yuzhao@google.com>

On Wed, Aug 4, 2021 at 6:13 PM Yang Shi wrote:
>
> On Fri, Jul 30, 2021 at 11:39 PM Yu Zhao wrote:
> >
> > If a tail page has only two references left, one inherited from the
> > isolation of its head and the other from lru_add_page_tail() which we
> > are about to drop, it means this tail page was concurrently zapped.
> > Then we can safely free it and save page reclaim or migration the
> > trouble of trying it.
> >
> > Signed-off-by: Yu Zhao
> > Tested-by: Shuang Zhai
> > ---
> >  include/linux/vm_event_item.h |  1 +
> >  mm/huge_memory.c              | 28 ++++++++++++++++++++++++++++
> >  mm/vmstat.c                   |  1 +
> >  3 files changed, 30 insertions(+)
> >
> > diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> > index ae0dd1948c2b..829eeac84094 100644
> > --- a/include/linux/vm_event_item.h
> > +++ b/include/linux/vm_event_item.h
> > @@ -99,6 +99,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> >  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >  	THP_SPLIT_PUD,
> >  #endif
> > +	THP_SPLIT_FREE,
> >  	THP_ZERO_PAGE_ALLOC,
> >  	THP_ZERO_PAGE_ALLOC_FAILED,
> >  	THP_SWPOUT,
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index d8b655856e79..5120478bca41 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2432,6 +2432,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >  	struct address_space *swap_cache = NULL;
> >  	unsigned long offset = 0;
> >  	unsigned int nr = thp_nr_pages(head);
> > +	LIST_HEAD(pages_to_free);
> > +	int nr_pages_to_free = 0;
> >  	int i;
> >
> >  	VM_BUG_ON_PAGE(list && PageLRU(head), head);
> > @@ -2506,6 +2508,25 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >  			continue;
> >  		unlock_page(subpage);
> >
> > +		/*
> > +		 * If a tail page has only two references left, one inherited
> > +		 * from the isolation of its head and the other from
> > +		 * lru_add_page_tail() which we are about to drop, it means this
> > +		 * tail page was concurrently zapped. Then we can safely free it
> > +		 * and save page reclaim or migration the trouble of trying it.
> > +		 */
> > +		if (list && page_ref_freeze(subpage, 2)) {
> > +			VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
> > +			VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
> > +			VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
> > +
> > +			ClearPageActive(subpage);
> > +			ClearPageUnevictable(subpage);
> > +			list_move(&subpage->lru, &pages_to_free);
> > +			nr_pages_to_free++;
> > +			continue;
> > +		}
>
> Yes, such a page could be freed instead of swapped out. But I'm
> wondering if we could have a simpler implementation. Since such pages
> will be re-added to the page list, we should be able to check their
> refcount in shrink_page_list(). If the refcount is 1, the reference
> taken by lru_add_page_tail() has already been dropped by the later
> put_page(), so we know the page was freed under us: the only remaining
> reference comes from the isolation. We could then just jump to "keep"
> (the label in shrink_page_list()), and such pages would be freed later
> by shrink_inactive_list().
>
> For MADV_PAGEOUT, I think we could add some logic to handle such pages
> after shrink_page_list(), just like what shrink_inactive_list() does.
>
> Migration already handles refcount == 1 pages, so it should not need
> any change.
>
> Is this idea feasible?

Yes, but then we would have to loop over the tail pages twice, here and
in shrink_page_list(), right?

In addition, if we tried to freeze the refcount of a page in
shrink_page_list(), we couldn't be certain whether that page used to be
a tail page, so we would have to test every page. If a page wasn't a
tail page, its refcount is unlikely to drop unless there is a race, and
this patch isn't really intended to optimize such a race. It's mainly
for the next patch in the series; that is, we know there is a good
chance of dropping tail pages (~10% on our systems).

Does that sound reasonable? Thanks.

> > +
> >  		/*
> >  		 * Subpages may be freed if there wasn't any mapping
> >  		 * like if add_to_swap() is running on a lru page that
> > @@ -2515,6 +2536,13 @@ static void __split_huge_page(struct page *page, struct list_head *list,
> >  		 */
> >  		put_page(subpage);
> >  	}
> > +
> > +	if (!nr_pages_to_free)
> > +		return;
> > +
> > +	mem_cgroup_uncharge_list(&pages_to_free);
> > +	free_unref_page_list(&pages_to_free);
> > +	count_vm_events(THP_SPLIT_FREE, nr_pages_to_free);
> >  }
> >
> >  int total_mapcount(struct page *page)
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index b0534e068166..f486e5d98d96 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -1300,6 +1300,7 @@ const char * const vmstat_text[] = {
> >  #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> >  	"thp_split_pud",
> >  #endif
> > +	"thp_split_free",
> >  	"thp_zero_page_alloc",
> >  	"thp_zero_page_alloc_failed",
> >  	"thp_swpout",
> > --
> > 2.32.0.554.ge1b32706d8-goog
> >
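
For readers skimming the thread, below is a minimal userspace analogue
of the page_ref_freeze() check the patch relies on, written with C11
atomics; ref_freeze() and the hard-coded counts are illustrative
stand-ins, not kernel code. It shows why freezing at exactly 2
identifies a concurrently zapped tail page: the compare-and-swap
succeeds only when no reference beyond the isolation one and the
lru_add_page_tail() one remains.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the kernel's page_ref_freeze(): atomically replace an
 * expected refcount with zero, failing if any other reference showed
 * up in the meantime. */
static bool ref_freeze(atomic_int *refcount, int expected)
{
	int old = expected;

	return atomic_compare_exchange_strong(refcount, &old, 0);
}

int main(void)
{
	/* One reference inherited from isolating the head, one from
	 * lru_add_page_tail(); a zapped tail page holds no others. */
	atomic_int refs = 2;

	if (ref_freeze(&refs, 2))
		printf("only our two refs remained: safe to free\n");
	else
		printf("another reference exists: leave the page alone\n");

	return 0;
}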
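
And to make the trade-off under discussion concrete, here is a
compilable mock (invented names, not mm/vmscan.c) of the alternative
Yang Shi sketches: the reclaim loop itself skips candidates whose
refcount has already dropped to 1 and leaves them for the final putback
to free. The cost Yu Zhao points out is that this re-tests every page
on the reclaim side, whereas the patch frees known-zapped tail pages
once, at split time.

#include <stdio.h>

/* Mock reclaim candidate; in the kernel this would be struct page. */
struct mock_page {
	int refcount;	/* 1 == only the isolation reference is left */
	const char *tag;
};

/* Mock of the shrink_page_list() flow suggested above: a page that was
 * freed under us stays on the list (the kernel's "goto keep") so the
 * final putback drops the last reference. */
static void mock_shrink_page_list(struct mock_page *pages, int n)
{
	for (int i = 0; i < n; i++) {
		if (pages[i].refcount == 1) {
			printf("%s: concurrently zapped, keep for putback\n",
			       pages[i].tag);
			continue;
		}
		printf("%s: attempt reclaim\n", pages[i].tag);
	}
}

int main(void)
{
	struct mock_page pages[] = {
		{ .refcount = 1, .tag = "zapped tail" },
		{ .refcount = 3, .tag = "still-mapped tail" },
	};

	mock_shrink_page_list(pages, 2);
	return 0;
}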