From: Yang Shi <yang.shi@linux.alibaba.com>
Subject: [PATCH] mm: thp: remove use_zero_page sysfs knob
Date: Sat, 21 Jul 2018 02:13:50 +0800
Message-Id: <1532110430-115278-1-git-send-email-yang.shi@linux.alibaba.com>
Sender: owner-linux-mm@kvack.org
To: kirill@shutemov.name, hughd@google.com, rientjes@google.com, aaron.lu@intel.com, akpm@linux-foundation.org
Cc: yang.shi@linux.alibaba.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org

Digging into the original review shows that the use_zero_page sysfs knob
was added for ease of testing and to give users a way to mitigate the
refcounting overhead of the huge zero page.

It has been a few years since the knob was added in the first place, so
I think we can be confident that it is stable enough by now. And, since
commit 6fcb52a56ff60 ("thp: reduce usage of huge zero page's atomic
counter"), the refcounting overhead has been reduced significantly.

Other than the above, the knob is set to 1 (enabled) by default, and I
suppose very few people ever turn it off. So it does not seem worth
keeping this knob around.

Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Aaron Lu <aaron.lu@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
 Documentation/admin-guide/mm/transhuge.rst |  7 -------
 include/linux/huge_mm.h                    |  4 ----
 mm/huge_memory.c                           | 22 ++--------------------
 3 files changed, 2 insertions(+), 31 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 7ab93a8..d471ce8 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -148,13 +148,6 @@ madvise
 never
 	should be self-explanatory.
 
-By default kernel tries to use huge zero page on read page fault to
-anonymous mapping. It's possible to disable huge zero page by writing 0
-or enable it back by writing 1::
-
-	echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
-	echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
-
 Some userspace (such as a test program, or an optimized memory allocation
 library) may want to know the size (in bytes) of a transparent hugepage::
 
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a8a1262..0ea7808 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -58,7 +58,6 @@
 	TRANSPARENT_HUGEPAGE_DEFRAG_KSWAPD_OR_MADV_FLAG,
 	TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
 	TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG,
-	TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG,
 #ifdef CONFIG_DEBUG_VM
 	TRANSPARENT_HUGEPAGE_DEBUG_COW_FLAG,
 #endif
@@ -116,9 +115,6 @@ static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
 	return false;
 }
 
-#define transparent_hugepage_use_zero_page()				\
-	(transparent_hugepage_flags &					\
-	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
[...]
@@ ... @@
 	if (unlikely(khugepaged_enter(vma, vma->vm_flags)))
 		return VM_FAULT_OOM;
 	if (!(vmf->flags & FAULT_FLAG_WRITE) &&
-			!mm_forbids_zeropage(vma->vm_mm) &&
-			transparent_hugepage_use_zero_page()) {
+			!mm_forbids_zeropage(vma->vm_mm)) {
 		pgtable_t pgtable;
 		struct page *zero_page;
 		bool set;
-- 
1.8.3.1