From: "Kirill A. Shutemov"
To: akpm@linux-foundation.org, Andrea Arcangeli
Cc: Zi Yan, Yang Shi, Ralph Campbell, John Hubbard, William Kucharski, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv4 2/8] khugepaged: Do not stop collapse if less than half PTEs are referenced
Date: Thu, 16 Apr 2020 19:00:20 +0300
Message-Id: <20200416160026.16538-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200416160026.16538-1-kirill.shutemov@linux.intel.com>
References: <20200416160026.16538-1-kirill.shutemov@linux.intel.com>

__collapse_huge_page_swapin() checks the number of referenced PTEs to
decide if the memory range is hot enough to justify swap-in. There are
a few problems with this approach:

- It happens way too late: we can do the check much earlier and save
  time. khugepaged_scan_pmd() already knows whether there are any pages
  to swap in and how many pages are referenced.
- It stops the collapse altogether if there are not enough referenced
  pages, instead of only skipping the swap-in.

Fix it by doing the right check early. We can also avoid the additional
page table scan if khugepaged_scan_pmd() hasn't found any swap entries.

Signed-off-by: Kirill A. Shutemov
Fixes: 0db501f7a34c ("mm, thp: convert from optimistic swapin collapsing to conservative")
Reviewed-by: William Kucharski
Reviewed-and-Tested-by: Zi Yan
Acked-by: Yang Shi
---
 mm/khugepaged.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 99d77ffb79c2..6e69dc9a9fb1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -899,11 +899,6 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 		.pgoff = linear_page_index(vma, address),
 	};
 
-	/* we only decide to swapin, if there is enough young ptes */
-	if (referenced < HPAGE_PMD_NR/2) {
-		trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, 0);
-		return false;
-	}
 	vmf.pte = pte_offset_map(pmd, address);
 	for (; vmf.address < address + HPAGE_PMD_NR*PAGE_SIZE;
 			vmf.pte++, vmf.address += PAGE_SIZE) {
@@ -943,7 +938,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 
 static void collapse_huge_page(struct mm_struct *mm,
 				   unsigned long address,
 				   struct page **hpage,
-				   int node, int referenced)
+				   int node, int referenced, int unmapped)
 {
 	pmd_t *pmd, _pmd;
 	pte_t *pte;
@@ -1000,7 +995,8 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 * If it fails, we release mmap_sem and jump out_nolock.
 	 * Continuing to collapse causes inconsistency.
 	 */
-	if (!__collapse_huge_page_swapin(mm, vma, address, pmd, referenced)) {
+	if (unmapped && !__collapse_huge_page_swapin(mm, vma, address,
+						     pmd, referenced)) {
 		mem_cgroup_cancel_charge(new_page, memcg, true);
 		up_read(&mm->mmap_sem);
 		goto out_nolock;
@@ -1233,22 +1229,21 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		    mmu_notifier_test_young(vma->vm_mm, address))
 			referenced++;
 	}
-	if (writable) {
-		if (referenced) {
-			result = SCAN_SUCCEED;
-			ret = 1;
-		} else {
-			result = SCAN_LACK_REFERENCED_PAGE;
-		}
-	} else {
+	if (!writable) {
 		result = SCAN_PAGE_RO;
+	} else if (!referenced || (unmapped && referenced < HPAGE_PMD_NR/2)) {
+		result = SCAN_LACK_REFERENCED_PAGE;
+	} else {
+		result = SCAN_SUCCEED;
+		ret = 1;
 	}
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (ret) {
 		node = khugepaged_find_target_node();
 		/* collapse_huge_page will return with the mmap_sem released */
-		collapse_huge_page(mm, address, hpage, node, referenced);
+		collapse_huge_page(mm, address, hpage, node,
+				   referenced, unmapped);
 	}
 out:
 	trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced,
-- 
2.26.1