From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, Michal Hocko, Alexey Kardashevskiy, mpe@ellerman.id.au, paulus@samba.org, David Gibson
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V"
Subject: [PATCH V5 0/3] mm/kvm/vfio/ppc64: Migrate compound pages out of CMA region
Date: Wed, 19 Dec 2018 09:10:44 +0530
Message-Id: <20181219034047.16305-1-aneesh.kumar@linux.ibm.com>

ppc64 uses a CMA area for the allocation of the guest page table (hash
page table). We cannot start a guest if we fail to allocate the hash
page table. We have observed hash page table allocation failures
because we could not migrate pages out of the CMA region; the pages
were pinned. This happens when we are using VFIO: VFIO on ppc64 pins
the entire guest RAM, so if guest RAM pages get allocated out of the
CMA region, we cannot migrate those pages. The pages also stay pinned
for the lifetime of the guest.

Currently we only support migration of non-compound pages. With THP,
and with the addition of hugetlb migration, we can end up allocating
compound pages from the CMA region. This patch series adds support for
migrating compound pages. The first patch adds the helper
get_user_pages_cma_migrate(), which pins the pages while making sure
they are migrated out of the CMA region before the reference count is
incremented. A rough sketch of the idea is included after the diffstat
below.

Changes from V4:
* Use __GFP_NOWARN when allocating pages to avoid page allocation
  failure warnings.

Changes from V3:
* Move the hugetlb check before the transhuge check.
* Use the compound head page when isolating a hugetlb page.

Aneesh Kumar K.V (3):
  mm: Add get_user_pages_cma_migrate
  powerpc/mm/iommu: Allow migration of cma allocated pages during
    mm_iommu_get
  powerpc/mm/iommu: Allow large IOMMU page size only for hugetlb backing

 arch/powerpc/mm/mmu_context_iommu.c | 140 ++++++++--------------------
 include/linux/hugetlb.h             |   2 +
 include/linux/migrate.h             |   3 +
 mm/hugetlb.c                        |   4 +-
 mm/migrate.c                        | 139 +++++++++++++++++++++++++++
 5 files changed, 186 insertions(+), 102 deletions(-)

-- 
2.19.2
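
For reference, a minimal sketch of the idea behind
get_user_pages_cma_migrate(). This is illustrative only, not the patch
itself: the exact signature and the full THP/hugetlb handling are in
patch 1, and new_non_cma_page() below is a hypothetical stand-in for
the real replacement-page allocator.

#include <linux/mm.h>
#include <linux/migrate.h>

/*
 * Hypothetical allocation callback for migrate_pages(). Avoiding
 * __GFP_MOVABLE keeps the replacement page out of CMA pageblocks;
 * __GFP_NOWARN suppresses allocation-failure warnings (the V4->V5
 * change). The real patch also allocates THP and hugetlb targets.
 */
static struct page *new_non_cma_page(struct page *page,
				     unsigned long private)
{
	return alloc_page(GFP_HIGHUSER | __GFP_NOWARN);
}

int get_user_pages_cma_migrate(unsigned long start, int nr_pages,
			       int write, struct page **pages)
{
	LIST_HEAD(cma_list);
	bool drain = false;
	int i, ret;

	ret = get_user_pages_fast(start, nr_pages, write, pages);
	if (ret <= 0)
		return ret;

	for (i = 0; i < ret; i++) {
		/* check hugetlb/THP pages via their compound head */
		struct page *head = compound_head(pages[i]);

		if (is_migrate_cma_page(head)) {
			/* a hugetlb page would need isolate_huge_page() */
			if (!isolate_lru_page(head))
				list_add_tail(&head->lru, &cma_list);
			drain = true;
		}
	}

	if (!drain)
		return ret;

	/* drop all the pins so the CMA pages become migratable */
	for (i = 0; i < ret; i++)
		put_page(pages[i]);

	if (!list_empty(&cma_list) &&
	    migrate_pages(&cma_list, new_non_cma_page, NULL, 0,
			  MIGRATE_SYNC, MR_CONTIG_RANGE))
		putback_movable_pages(&cma_list);

	/* re-pin; the pages now live outside the CMA region */
	return get_user_pages_fast(start, nr_pages, write, pages);
}

Dropping all the pins before calling migrate_pages() matters because
migration fails on pages with an elevated reference count; the helper
takes the long-term references again only after the pages have left
the CMA region.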