From: Pankaj Gupta
Date: Wed, 10 Feb 2021 06:18:11 +0100
Subject: Re: [PATCH] mm/pmem: Avoid inserting hugepage PTE entry with fsdax if hugepage support is disabled
To: "Aneesh Kumar K.V"
Cc: linux-nvdimm, Dan Williams, "Kirill A . Shutemov", Jan Kara, Linux MM, linuxppc-dev@lists.ozlabs.org
In-Reply-To: <20210205023956.417587-1-aneesh.kumar@linux.ibm.com>

> Differentiate between hardware not supporting hugepages and the user
> disabling THP via
> 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'.
>
> For the devdax namespace, the kernel handles this via the
> supported_alignment attribute, failing to initialize the namespace
> if its align value is not supported on the platform.
>
> For the fsdax namespace, the kernel will continue to initialize
> the namespace. This can result in the kernel creating a huge pte
> entry even though the hardware doesn't support hugepages.
>
> We do want hugepage support with pmem even if the end user disabled THP
> via the sysfs file (/sys/kernel/mm/transparent_hugepage/enabled). Hence
> differentiate between hardware/firmware lacking support and a
> user-controlled disable of THP, and prevent a huge fault only when the
> hardware lacks hugepage support.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
>  include/linux/huge_mm.h | 15 +++++++++------
>  mm/huge_memory.c        |  6 +++++-
>  2 files changed, 14 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 6a19f35f836b..ba973efcd369 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -78,6 +78,7 @@ static inline vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn,
>  }
>
>  enum transparent_hugepage_flag {
> +	TRANSPARENT_HUGEPAGE_NEVER_DAX,
>  	TRANSPARENT_HUGEPAGE_FLAG,
>  	TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
>  	TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
> @@ -123,6 +124,13 @@ extern unsigned long transparent_hugepage_flags;
>   */
>  static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  {
> +
> +	/*
> +	 * If the hardware/firmware marked hugepage support disabled.
> +	 */
> +	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_NEVER_DAX))
> +		return false;
> +
>  	if (vma->vm_flags & VM_NOHUGEPAGE)
>  		return false;
>
> @@ -134,12 +142,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>
>  	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
>  		return true;
> -	/*
> -	 * For dax vmas, try to always use hugepage mappings. If the kernel does
> -	 * not support hugepages, fsdax mappings will fallback to PAGE_SIZE
> -	 * mappings, and device-dax namespaces, that try to guarantee a given
> -	 * mapping size, will fail to enable
> -	 */
> +
>  	if (vma_is_dax(vma))
>  		return true;
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9237976abe72..d698b7e27447 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -386,7 +386,11 @@ static int __init hugepage_init(void)
>  	struct kobject *hugepage_kobj;
>
>  	if (!has_transparent_hugepage()) {
> -		transparent_hugepage_flags = 0;
> +		/*
> +		 * Hardware doesn't support hugepages, hence disable
> +		 * DAX PMD support.
> +		 */
> +		transparent_hugepage_flags = 1 << TRANSPARENT_HUGEPAGE_NEVER_DAX;
>  		return -EINVAL;
>  	}

Reviewed-by: Pankaj Gupta