Message-ID: <54F89B61.308@parallels.com>
Date: Thu, 5 Mar 2015 21:07:29 +0300
From: Pavel Emelyanov
Subject: Re: [PATCH 14/21] userfaultfd: mcopy_atomic|mfill_zeropage: UFFDIO_COPY|UFFDIO_ZEROPAGE preparation
In-Reply-To: <1425575884-2574-15-git-send-email-aarcange@redhat.com>
References: <1425575884-2574-1-git-send-email-aarcange@redhat.com> <1425575884-2574-15-git-send-email-aarcange@redhat.com>
To: Andrea Arcangeli, qemu-devel@nongnu.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-api@vger.kernel.org, Android Kernel Team
Cc: "Kirill A. Shutemov", Sanidhya Kashyap, zhang.zhanghailiang@huawei.com, Linus Torvalds, Andres Lagar-Cavilla, Dave Hansen, Paolo Bonzini, Rik van Riel, Mel Gorman, Andy Lutomirski, Andrew Morton, Sasha Levin, Hugh Dickins, Peter Feiner, "Dr. David Alan Gilbert", Christopher Covington, Johannes Weiner, Robert Love, Dmitry Adamushko, Neil Brown, Mike Hommey, Taras Glek, Jan Kara, KOSAKI Motohiro, Michel Lespinasse, Minchan Kim, Keith Packard, "Huangpeng (Peter)", Anthony Liguori, Stefan Hajnoczi, Wenchao Xia, Andrew Jones, Juan Quintela

> +static int mcopy_atomic_pte(struct mm_struct *dst_mm,
> +			    pmd_t *dst_pmd,
> +			    struct vm_area_struct *dst_vma,
> +			    unsigned long dst_addr,
> +			    unsigned long src_addr)
> +{
> +	struct mem_cgroup *memcg;
> +	pte_t _dst_pte, *dst_pte;
> +	spinlock_t *ptl;
> +	struct page *page;
> +	void *page_kaddr;
> +	int ret;
> +
> +	ret = -ENOMEM;
> +	page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
> +	if (!page)
> +		goto out;

Not a fatal thing, but still quite inconvenient: alloc_page_vma() always allocates a fresh page in the destination mm, so if two tasks have anonymous private VMAs that are not yet COW-ed from each other, it will be impossible to keep those pages shared with userfault. Thus if we do post-copy memory migration for such tasks, their memory will get COW-ed.

Thanks,
Pavel
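
P.S. To make the above concrete, this is roughly how a resolver would drive this path from userspace. A minimal illustrative sketch, not code from the series: it assumes the uapi proposed here (__NR_userfaultfd, <linux/userfaultfd.h> with UFFDIO_API, UFFDIO_REGISTER and UFFDIO_COPY) lands as posted, and error handling is mostly trimmed.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);

	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	/* Region whose missing pages userspace will resolve. */
	char *area = mmap(NULL, page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = page },
		.mode  = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* Source buffer the kernel copies from (mcopy_atomic path). */
	char *src = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(src, 0xaa, page);

	/*
	 * UFFDIO_COPY ends up in mcopy_atomic_pte(): the kernel
	 * alloc_page_vma()s a fresh page in *this* mm and copies the
	 * data into it, so the resolved page is private to this mm.
	 */
	struct uffdio_copy copy = {
		.dst = (unsigned long)area,
		.src = (unsigned long)src,
		.len = page,
	};
	if (ioctl(uffd, UFFDIO_COPY, &copy))
		perror("UFFDIO_COPY");

	printf("area[0] after copy: 0x%02x\n", (unsigned char)area[0]);
	return 0;
}

Every UFFDIO_COPY takes that alloc_page_vma() path, so a not-yet-COW-ed sibling after fork() has to issue its own UFFDIO_COPY into its own mm and ends up with a distinct physical page, which is exactly the sharing loss described above.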