From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Zach O'Keefe"
Date: Thu, 9 Jun 2022 18:02:29 -0700
Subject: Re: [v3 PATCH 5/7] mm: thp: kill transparent_hugepage_active()
To: Yang Shi
Cc: vbabka@suse.cz, kirill.shutemov@linux.intel.com, willy@infradead.org, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20220606214414.736109-6-shy828301@gmail.com>
References: <20220606214414.736109-1-shy828301@gmail.com> <20220606214414.736109-6-shy828301@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Mon, Jun 6, 2022 at 2:44 PM Yang Shi wrote:
>
> The transparent_hugepage_active() was introduced to show THP eligibility
> bit in smaps in proc, smaps is the only user. But it actually does the
> similar check as hugepage_vma_check() which is used by khugepaged. We
> definitely don't have to maintain two similar checks, so kill
> transparent_hugepage_active().

I never realized smaps was the only user! Great!

> Also move hugepage_vma_check() to huge_memory.c and huge_mm.h since it
> is not only for khugepaged anymore.
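
For anyone else following along: the consolidated helper grows an
explicit "smaps" argument, and the two kinds of callers end up looking
like this (a minimal sketch pieced together from the hunks below):

	/* fs/proc/task_mmu.c: smaps only reports eligibility */
	seq_printf(m, "THPeligible: %d\n",
		   hugepage_vma_check(vma, vma->vm_flags, true));

	/* khugepaged paths: actual collapse decisions pass smaps == false */
	if (hugepage_vma_check(vma, vm_flags, false))
		__khugepaged_enter(vma->vm_mm);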
>
> Signed-off-by: Yang Shi
> ---
>  fs/proc/task_mmu.c         |  2 +-
>  include/linux/huge_mm.h    | 16 +++++++-----
>  include/linux/khugepaged.h |  4 +--
>  mm/huge_memory.c           | 50 ++++++++++++++++++++++++++++++++-----
>  mm/khugepaged.c            | 51 +++-----------------------------------
>  5 files changed, 60 insertions(+), 63 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 2dd8c8a66924..fd79566e204c 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -860,7 +860,7 @@ static int show_smap(struct seq_file *m, void *v)
>  	__show_smap(m, &mss, false);
>
>  	seq_printf(m, "THPeligible: %d\n",
> -		   transparent_hugepage_active(vma));
> +		   hugepage_vma_check(vma, vma->vm_flags, true));
>
>  	if (arch_pkeys_enabled())
>  		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 79d5919beb83..f561c3e16def 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -209,7 +209,9 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
>  		!inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
>  }
>
> -bool transparent_hugepage_active(struct vm_area_struct *vma);
> +bool hugepage_vma_check(struct vm_area_struct *vma,
> +			unsigned long vm_flags,
> +			bool smaps);
>
>  #define transparent_hugepage_use_zero_page()				\
>  	(transparent_hugepage_flags &					\
> @@ -358,11 +360,6 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
> -{
> -	return false;
> -}
> -
>  static inline bool transhuge_vma_size_ok(struct vm_area_struct *vma)
>  {
>  	return false;
> @@ -380,6 +377,13 @@ static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
>  	return false;
>  }
>
> +static inline bool hugepage_vma_check(struct vm_area_struct *vma,
> +				      unsigned long vm_flags,
> +				      bool smaps)
> +{
> +	return false;
> +}
> +
>  static inline void prep_transhuge_page(struct page *page) {}
>
>  #define transparent_hugepage_flags 0UL
> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> index 392d34c3c59a..8a6452e089ca 100644
> --- a/include/linux/khugepaged.h
> +++ b/include/linux/khugepaged.h
> @@ -10,8 +10,6 @@ extern struct attribute_group khugepaged_attr_group;
>  extern int khugepaged_init(void);
>  extern void khugepaged_destroy(void);
>  extern int start_stop_khugepaged(void);
> -extern bool hugepage_vma_check(struct vm_area_struct *vma,
> -			       unsigned long vm_flags);
>  extern void __khugepaged_enter(struct mm_struct *mm);
>  extern void __khugepaged_exit(struct mm_struct *mm);
>  extern void khugepaged_enter_vma(struct vm_area_struct *vma,
> @@ -57,7 +55,7 @@ static inline void khugepaged_enter(struct vm_area_struct *vma,
>  {
>  	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
>  	    khugepaged_enabled()) {
> -		if (hugepage_vma_check(vma, vm_flags))
> +		if (hugepage_vma_check(vma, vm_flags, false))
>  			__khugepaged_enter(vma->vm_mm);
>  	}
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 36ada544e494..bc8370856e85 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -69,18 +69,56 @@ static atomic_t huge_zero_refcount;
>  struct page *huge_zero_page __read_mostly;
>  unsigned long huge_zero_pfn __read_mostly = ~0UL;
>
> -bool transparent_hugepage_active(struct vm_area_struct *vma)
> +bool hugepage_vma_check(struct vm_area_struct *vma,
> +			unsigned long vm_flags,
> +			bool smaps)
>  {
> +	if (!transhuge_vma_enabled(vma, vm_flags))
> +		return false;
> +
> +	if (vm_flags & VM_NO_KHUGEPAGED)
> +		return false;
> +
> +	/* Don't run khugepaged against DAX vma */
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
> +				vma->vm_pgoff, HPAGE_PMD_NR))
> +		return false;
> +
>  	if (!transhuge_vma_size_ok(vma))
>  		return false;
> -	if (vma_is_anonymous(vma))
> -		return __transparent_hugepage_enabled(vma);
> -	if (vma_is_shmem(vma))
> +
> +	/* Enabled via shmem mount options or sysfs settings. */
> +	if (shmem_file(vma->vm_file))
>  		return shmem_huge_enabled(vma);
> -	if (transhuge_vma_enabled(vma, vma->vm_flags) && file_thp_enabled(vma))
> +
> +	if (!khugepaged_enabled())
> +		return false;
> +
> +	/* THP settings require madvise. */
> +	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
> +		return false;
> +
> +	/* Only regular file is valid */
> +	if (file_thp_enabled(vma))
>  		return true;
>
> -	return false;
> +	if (!vma_is_anonymous(vma))
> +		return false;
> +
> +	if (vma_is_temporary_stack(vma))
> +		return false;
> +
> +	/*
> +	 * THPeligible bit of smaps should show 1 for proper VMAs even
> +	 * though anon_vma is not initialized yet.
> +	 */
> +	if (!vma->anon_vma)
> +		return smaps;
> +
> +	return true;
>  }

There are a few cases where the smaps return value will differ from
before. I presume this won't be an issue, and that any difference
resulting from this change is actually a positive one, given that it
more accurately reflects the THP eligibility of the VMA? For example,
a VM_NO_KHUGEPAGED-marked VMA might now show 0 where it previously
showed 1.
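
Concretely, my reading of the before/after for that example (a sketch
of the two paths only, not the full checks):

	/*
	 * Anonymous VMA with VM_NO_KHUGEPAGED set:
	 *
	 * before: transparent_hugepage_active()
	 *           -> __transparent_hugepage_enabled(vma), which never
	 *              tests VM_NO_KHUGEPAGED, so it may return true
	 *           => smaps prints "THPeligible: 1"
	 *
	 * after:  hugepage_vma_check(vma, vma->vm_flags, true)
	 *           -> bails early on (vm_flags & VM_NO_KHUGEPAGED)
	 *           => smaps prints "THPeligible: 0"
	 */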
>
>  static bool get_huge_zero_page(void)
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index ca1754d3a827..aa0769e3b0d9 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -437,49 +437,6 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
>  	return atomic_read(&mm->mm_users) == 0;
>  }
>
> -bool hugepage_vma_check(struct vm_area_struct *vma,
> -			unsigned long vm_flags)
> -{
> -	if (!transhuge_vma_enabled(vma, vm_flags))
> -		return false;
> -
> -	if (vm_flags & VM_NO_KHUGEPAGED)
> -		return false;
> -
> -	/* Don't run khugepaged against DAX vma */
> -	if (vma_is_dax(vma))
> -		return false;
> -
> -	if (vma->vm_file && !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) -
> -				vma->vm_pgoff, HPAGE_PMD_NR))
> -		return false;
> -
> -	if (!transhuge_vma_size_ok(vma))
> -		return false;
> -
> -	/* Enabled via shmem mount options or sysfs settings. */
> -	if (shmem_file(vma->vm_file))
> -		return shmem_huge_enabled(vma);
> -
> -	if (!khugepaged_enabled())
> -		return false;
> -
> -	/* THP settings require madvise. */
> -	if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always())
> -		return false;
> -
> -	/* Only regular file is valid */
> -	if (file_thp_enabled(vma))
> -		return true;
> -
> -	if (!vma->anon_vma || !vma_is_anonymous(vma))
> -		return false;
> -	if (vma_is_temporary_stack(vma))
> -		return false;
> -
> -	return true;
> -}
> -
>  void __khugepaged_enter(struct mm_struct *mm)
>  {
>  	struct mm_slot *mm_slot;
> @@ -516,7 +473,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
>  {
>  	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
>  	    khugepaged_enabled()) {
> -		if (hugepage_vma_check(vma, vm_flags))
> +		if (hugepage_vma_check(vma, vm_flags, false))
>  			__khugepaged_enter(vma->vm_mm);
>  	}
>  }
> @@ -961,7 +918,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>
>  	if (!transhuge_vma_suitable(vma, address))
>  		return SCAN_ADDRESS_RANGE;
> -	if (!hugepage_vma_check(vma, vma->vm_flags))
> +	if (!hugepage_vma_check(vma, vma->vm_flags, false))
>  		return SCAN_VMA_CHECK;
>  	return 0;
>  }
> @@ -1442,7 +1399,7 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
>  	 * the valid THP. Add extra VM_HUGEPAGE so hugepage_vma_check()
>  	 * will not fail the vma for missing VM_HUGEPAGE
>  	 */
> -	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
> +	if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE, false))
>  		return;
>
>  	/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
> @@ -2132,7 +2089,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>  			progress++;
>  			break;
>  		}
> -		if (!hugepage_vma_check(vma, vma->vm_flags)) {
> +		if (!hugepage_vma_check(vma, vma->vm_flags, false)) {
> skip:
>  			progress++;
>  			continue;
> --
> 2.26.3
>
>
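
One last observation, in case it helps other readers: the new
"!vma->anon_vma" case looks like the only place the smaps argument
changes the result, and it deliberately keeps the two callers apart
(restating the mm/huge_memory.c hunk above):

	/*
	 * Anon VMA that hasn't faulted yet (no anon_vma):
	 *  - smaps callers (smaps == true) report it eligible, keeping
	 *    the old "THPeligible: 1" for proper VMAs;
	 *  - khugepaged callers (smaps == false) still skip it, matching
	 *    the old "!vma->anon_vma || !vma_is_anonymous(vma)" bail-out.
	 */
	if (!vma->anon_vma)
		return smaps;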