Date: Thu, 16 Jan 2020 10:56:14 +0100
From: Michal Hocko
To: Li Xinhai
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, Mike Kravetz
Subject: Re: [PATCH v4] mm/mempolicy,hugetlb: Checking hstate for hugetlbfs page in vma_migratable
Message-ID: <20200116095614.GO19428@dhcp22.suse.cz>
References: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com>
In-Reply-To: <1579147885-23511-1-git-send-email-lixinhai.lxh@gmail.com>

On Thu 16-01-20 04:11:25, Li Xinhai wrote:
> Checking hstate at early phase when isolating page, instead of during
> unmap and move phase, to avoid useless isolation.

Could you be more specific about what you mean by isolation, and why it
matters? The patch description should really explain _why_ the change is
needed or desirable.
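Just so we are talking about the same thing: my reading is that, before this
patch, an unmigratable hstate is only noticed during the unmap-and-move step,
after the page has already been isolated, whereas the patched vma_migratable()
rejects the VMA before anything is isolated. A minimal toy sketch of that
ordering (plain C with made-up names, not the actual kernel call chain):

/*
 * Toy sketch only: contrasts checking migratability during the
 * "unmap and move" phase, after the page was already isolated, with
 * checking it up front so the isolation is skipped entirely.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_vma { bool is_hugetlb; bool hstate_migratable; };
struct toy_page { int id; };

/* "Isolation" phase: pull the page onto a private list for migration. */
static void toy_isolate(struct toy_page *p)
{
        printf("isolated page %d\n", p->id);
}

/* "Unmap and move" phase: where the hstate check used to happen. */
static bool toy_unmap_and_move(const struct toy_vma *vma,
                               const struct toy_page *p)
{
        if (vma->is_hugetlb && !vma->hstate_migratable) {
                printf("page %d: hstate not migratable, isolation wasted\n",
                       p->id);
                return false;
        }
        printf("moved page %d\n", p->id);
        return true;
}

/* With the patch the same condition is checked before isolating anything. */
static bool toy_vma_migratable(const struct toy_vma *vma)
{
        return !(vma->is_hugetlb && !vma->hstate_migratable);
}

int main(void)
{
        struct toy_vma vma = { .is_hugetlb = true, .hstate_migratable = false };
        struct toy_page page = { .id = 1 };

        /* Old ordering: isolate first, only fail later. */
        toy_isolate(&page);
        toy_unmap_and_move(&vma, &page);

        /* New ordering: the early vma_migratable() check avoids the work. */
        if (toy_vma_migratable(&vma))
                toy_isolate(&page);
        else
                printf("page %d: VMA not migratable, nothing isolated\n",
                       page.id);

        return 0;
}

If that is the point of the patch, please spell it out in the changelog.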
> Signed-off-by: Li Xinhai
> Cc: Michal Hocko
> Cc: Mike Kravetz
> ---
>  include/linux/hugetlb.h   | 10 ++++++++++
>  include/linux/mempolicy.h | 29 +----------------------------
>  mm/mempolicy.c            | 28 ++++++++++++++++++++++++++++
>  3 files changed, 39 insertions(+), 28 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 31d4920..c9d871d 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -598,6 +598,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>  	return arch_hugetlb_migration_supported(h);
>  }
>
> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
> +{
> +	return hugepage_migration_supported(hstate_vma(vma));
> +}
> +
>  /*
>   * Movability check is different as compared to migration check.
>   * It determines whether or not a huge page should be placed on
> @@ -809,6 +814,11 @@ static inline bool hugepage_migration_supported(struct hstate *h)
>  	return false;
>  }
>
> +static inline bool vm_hugepage_migration_supported(struct vm_area_struct *vma)
> +{
> +	return false;
> +}
> +
>  static inline bool hugepage_movable_supported(struct hstate *h)
>  {
>  	return false;
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index 5228c62..8165278 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -173,34 +173,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
>  extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
>
>  /* Check if a vma is migratable */
> -static inline bool vma_migratable(struct vm_area_struct *vma)
> -{
> -	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
> -		return false;
> -
> -	/*
> -	 * DAX device mappings require predictable access latency, so avoid
> -	 * incurring periodic faults.
> -	 */
> -	if (vma_is_dax(vma))
> -		return false;
> -
> -#ifndef CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION
> -	if (vma->vm_flags & VM_HUGETLB)
> -		return false;
> -#endif
> -
> -	/*
> -	 * Migration allocates pages in the highest zone. If we cannot
> -	 * do so then migration (at least from node to node) is not
> -	 * possible.
> -	 */
> -	if (vma->vm_file &&
> -		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
> -		< policy_zone)
> -		return false;
> -	return true;
> -}
> +extern bool vma_migratable(struct vm_area_struct *vma);
>
>  extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
>  extern void mpol_put_task_policy(struct task_struct *);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 067cf7d..8a01fb1 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1714,6 +1714,34 @@ static int kernel_get_mempolicy(int __user *policy,
>
>  #endif /* CONFIG_COMPAT */
>
> +bool vma_migratable(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & (VM_IO | VM_PFNMAP))
> +		return false;
> +
> +	/*
> +	 * DAX device mappings require predictable access latency, so avoid
> +	 * incurring periodic faults.
> +	 */
> +	if (vma_is_dax(vma))
> +		return false;
> +
> +	if (is_vm_hugetlb_page(vma) &&
> +		!vm_hugepage_migration_supported(vma))
> +		return false;
> +
> +	/*
> +	 * Migration allocates pages in the highest zone. If we cannot
> +	 * do so then migration (at least from node to node) is not
> +	 * possible.
> +	 */
> +	if (vma->vm_file &&
> +		gfp_zone(mapping_gfp_mask(vma->vm_file->f_mapping))
> +		< policy_zone)
> +		return false;
> +	return true;
> +}
> +
>  struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
>  				   unsigned long addr)
>  {
> --
> 1.8.3.1
>

--
Michal Hocko
SUSE Labs