From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 25 May 2021 10:34:32 -0700
From: Minchan Kim
To: Yang Shi
Cc: Hugh Dickins, Zi Yan, "Kirill A. Shutemov", HORIGUCHI NAOYA(堀口 直也),
	Wang Yugui, Andrew Morton, Linux MM, Linux Kernel Mailing List
Subject: Re: [v3 PATCH 1/2] mm: rmap: make try_to_unmap() void function
Message-ID:
References: <20210525162145.3510-1-shy828301@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Tue, May 25, 2021 at 10:07:05AM -0700, Yang Shi wrote:
> On Tue, May 25, 2021 at 9:46 AM Minchan Kim wrote:
> >
> > On Tue, May 25, 2021 at 09:21:44AM -0700, Yang Shi wrote:
> > > Currently try_to_unmap() returns a bool by checking page_mapcount(),
> > > but this may return a false positive since page_mapcount() doesn't
> > > check all subpages of a compound page.  total_mapcount() could be
> > > used instead, but its cost is higher since it traverses all subpages.
> > >
> > > Actually, most callers of try_to_unmap() don't care about the return
> > > value at all, so callers that do care just need to check whether the
> > > page is still mapped with page_mapped() when necessary.  And
> > > page_mapped() does bail out early when it finds a mapped subpage.
> > >
> > > Suggested-by: Hugh Dickins
> > > Signed-off-by: Yang Shi
> > > ---
> > >  include/linux/rmap.h |  2 +-
> > >  mm/huge_memory.c     |  4 +---
> > >  mm/memory-failure.c  | 13 ++++++-------
> > >  mm/rmap.c            |  6 +-----
> > >  mm/vmscan.c          |  3 ++-
> > >  5 files changed, 11 insertions(+), 17 deletions(-)
> > >
> > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > > index def5c62c93b3..116cb193110a 100644
> > > --- a/include/linux/rmap.h
> > > +++ b/include/linux/rmap.h
> > > @@ -194,7 +194,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
> > >  int page_referenced(struct page *, int is_locked,
> > >  			struct mem_cgroup *memcg, unsigned long *vm_flags);
> > >
> > > -bool try_to_unmap(struct page *, enum ttu_flags flags);
> > > +void try_to_unmap(struct page *, enum ttu_flags flags);
> > >
> > >  /* Avoid racy checks */
> > >  #define PVMW_SYNC		(1 << 0)
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 19195fca1aee..80fe642d742d 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -2336,15 +2336,13 @@ static void unmap_page(struct page *page)
> > >  {
> > >  	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
> > >  		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
> > > -	bool unmap_success;
> > >
> > >  	VM_BUG_ON_PAGE(!PageHead(page), page);
> > >
> > >  	if (PageAnon(page))
> > >  		ttu_flags |= TTU_SPLIT_FREEZE;
> > >
> > > -	unmap_success = try_to_unmap(page, ttu_flags);
> > > -	VM_BUG_ON_PAGE(!unmap_success, page);
> > > +	try_to_unmap(page, ttu_flags);
> > >  }
> > >
> > >  static void remap_page(struct page *page, unsigned int nr)
> > > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > > index 9dcc9bcea731..6dd53ff34825 100644
> > > --- a/mm/memory-failure.c
> > > +++ b/mm/memory-failure.c
> > > @@ -1126,7 +1126,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
> > >  	collect_procs(hpage, &tokill, flags & MF_ACTION_REQUIRED);
> > >
> > >  	if (!PageHuge(hpage)) {
> > > -		unmap_success = try_to_unmap(hpage, ttu);
> > > +		try_to_unmap(hpage, ttu);
> > >  	} else {
> > >  		if (!PageAnon(hpage)) {
> > >  			/*
> > > @@ -1138,17 +1138,16 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
> > >  			 */
> > >  			mapping = hugetlb_page_mapping_lock_write(hpage);
> > >  			if (mapping) {
> > > -				unmap_success = try_to_unmap(hpage,
> > > -						ttu|TTU_RMAP_LOCKED);
> > > +				try_to_unmap(hpage, ttu|TTU_RMAP_LOCKED);
> > >  				i_mmap_unlock_write(mapping);
> > > -			} else {
> > > +			} else
> > >  				pr_info("Memory failure: %#lx: could not lock mapping for mapped huge page\n", pfn);
> > > -				unmap_success = false;
> > > -			}
> > >  		} else {
> > > -			unmap_success = try_to_unmap(hpage, ttu);
> > > +			try_to_unmap(hpage, ttu);
> > >  		}
> > >  	}
> > > +
> > > +	unmap_success = !page_mapped(hpage);
> > >  	if (!unmap_success)
> > >  		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
> > >  		       pfn, page_mapcount(hpage));
> > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > index a35cbbbded0d..728de421e43a 100644
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -1748,10 +1748,8 @@ static int page_not_mapped(struct page *page)
> > >   *
> > >   * Tries to remove all the page table entries which are mapping this
> > >   * page, used in the pageout path.  Caller must hold the page lock.
> > > - *
> > > - * If unmap is successful, return true. Otherwise, false.
> > >   */
> > > -bool try_to_unmap(struct page *page, enum ttu_flags flags)
> > > +void try_to_unmap(struct page *page, enum ttu_flags flags)
> > >  {
> > >  	struct rmap_walk_control rwc = {
> > >  		.rmap_one = try_to_unmap_one,
> > > @@ -1776,8 +1774,6 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
> > >  		rmap_walk_locked(page, &rwc);
> > >  	else
> > >  		rmap_walk(page, &rwc);
> > > -
> > > -	return !page_mapcount(page) ? true : false;
> >
> > Couldn't we use page_mapped instead of page_mapcount here?
>
> Yes, of course. Actually this has been discussed in v2 review. Most
> (or half of the) callers actually don't check the return value of
> try_to_unmap(), except hwpoison, vmscan and THP split. It sounds
> suboptimal to have everyone pay the cost. So I thought Hugh's
> suggestion made sense to me.

I am not sure most callers ignore the return value; I see only
do_migrate_range ignoring it. The others check for success with
page_mapped in the end. With a void return, it no longer feels like a
"try"-semantic function to me. If you still want to go with this, I
suggest adding a comment on how callers should check for success, in
the comment block you removed above.

>
> Quoted the discussion below:
>
> > @@ -1777,7 +1779,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
> >  	else
> >  		rmap_walk(page, &rwc);
> >
> > -	return !page_mapcount(page) ? true : false;
> > +	return !total_mapcount(page) ? true : false;
>
> That always made me wince: "return !total_mapcount(page);" surely.
>
> Or slightly better, "return !page_mapped(page);", since at least that
> one breaks out as soon as it sees a mapcount. Though I guess I'm
> being silly there, since that case should never occur, so both
> total_mapcount() and page_mapped() scan through all pages.
>
> Or better, change try_to_unmap() to void: most callers ignore its
> return value anyway, and make their own decisions; the remaining
> few could be changed to do the same. Though again, I may be
> being silly, since the expensive THP case is not the common case.
>
> >
> > With a boolean return, the try semantic looks more reasonable to me
> > than void.
> >
> > > }
> > >
> > > /**
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index f96d62159720..fa5052ace415 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -1499,7 +1499,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
> > >  			if (unlikely(PageTransHuge(page)))
> > >  				flags |= TTU_SPLIT_HUGE_PMD;
> > >
> > > -			if (!try_to_unmap(page, flags)) {
> > > +			try_to_unmap(page, flags);
> > > +			if (page_mapped(page)) {
> > >  				stat->nr_unmap_fail += nr_pages;
> > >  				if (!was_swapbacked && PageSwapBacked(page))
> > >  					stat->nr_lazyfree_fail += nr_pages;
> > > --
> > > 2.26.2
> > >
> > >
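
For illustration only, not part of the thread above: one possible shape for
the comment Minchan asks for, replacing the removed "If unmap is successful,
return true" line in the try_to_unmap() kerneldoc. The existing description
is quoted from the hunk above; the final sentence is suggested wording, not
from the patch.

/**
 * try_to_unmap - try to remove all page table mappings to a page
 * @page: the page to get unmapped
 * @flags: action and flags
 *
 * Tries to remove all the page table entries which are mapping this
 * page, used in the pageout path.  Caller must hold the page lock.
 *
 * Returns nothing.  Callers that need to know whether the unmap
 * succeeded should check page_mapped(page) afterwards.
 */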
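
And a minimal caller-side sketch of the pattern the changelog and the
vmscan.c/memory-failure.c hunks describe, assuming the void-return version
goes in; handle_unmap_failure() is a hypothetical stand-in for whatever the
caller does when pages remain mapped.

	/*
	 * Sketch only, not from the patch: once try_to_unmap() returns
	 * void, a caller that cares about the outcome re-checks with
	 * page_mapped(), which bails out as soon as it finds a mapped
	 * subpage, rather than paying for total_mapcount().
	 */
	try_to_unmap(page, flags);
	if (page_mapped(page)) {
		/* some PTE/PMD still maps the page or one of its subpages */
		handle_unmap_failure(page);	/* hypothetical helper */
	}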