From: "Li, Liang Z" <liang.z.li@intel.com>
Subject: RE: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
Date: Fri, 4 Mar 2016 09:12:12 +0000
References: <1457001868-15949-1-git-send-email-liang.z.li@intel.com>
 <20160303174615.GF2115@work-vm> <20160304075538.GC9100@rkaganb.sw.ru>
 <20160304083550.GE9100@rkaganb.sw.ru> <20160304090820.GA2149@work-vm>
In-Reply-To: <20160304090820.GA2149@work-vm>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Roman Kagan <rkagan@virtuozzo.com>, ehabkost@redhat.com,
 kvm@vger.kernel.org, mst@redhat.com, quintela@redhat.com,
 linux-kernel@vger.kernel.org, qemu-devel@nongnu.org, linux-mm@kvack.org,
 amit.shah@redhat.com, pbonzini@redhat.com, akpm@linux-foundation.org,
 virtualization@lists.linux-foundation.org, rth@twiddle.net

> * Roman Kagan (rkagan@virtuozzo.com) wrote:
> > On Fri, Mar 04, 2016 at 08:23:09AM +0000, Li, Liang Z wrote:
> > > > On Thu, Mar 03, 2016 at 05:46:15PM +0000, Dr. David Alan Gilbert wrote:
> > > > > * Liang Li (liang.z.li@intel.com) wrote:
> > > > > > The current QEMU live migration implementation marks all of
> > > > > > the guest's RAM pages as dirtied in the ram bulk stage; all of
> > > > > > these pages will be processed, and that takes quite a lot of
> > > > > > CPU cycles.
> > > > > >
> > > > > > From the guest's point of view, the content of free pages
> > > > > > doesn't matter. We can make use of this fact and skip
> > > > > > processing the free pages in the ram bulk stage; this saves a
> > > > > > lot of CPU cycles, reduces network traffic significantly, and
> > > > > > noticeably speeds up the live migration process.
> > > > > >
> > > > > > This patch set is the QEMU-side implementation.
> > > > > >
> > > > > > The virtio-balloon is extended so that QEMU can get the free
> > > > > > pages information from the guest through virtio.
> > > > > >
> > > > > > After getting the free pages information (a bitmap), QEMU can
> > > > > > use it to filter out the guest's free pages in the ram bulk
> > > > > > stage. This makes the live migration process much more
> > > > > > efficient.
> > > > >
> > > > > Hi,
> > > > >   An interesting solution; I know a few different people have
> > > > > been looking at how to speed up ballooned VM migration.
> > > > >
> > > > >   I wonder if it would be possible to avoid the kernel changes
> > > > > by parsing /proc/self/pagemap - if that can be used to detect
> > > > > unmapped/zero mapped pages in the guest RAM, would it achieve
> > > > > the same result?
> > > >
> > > > Yes, I was about to suggest the same thing: it's simple and makes
> > > > use of the existing infrastructure. And you wouldn't need to care
> > > > whether the pages were unmapped by ballooning or anything else
> > > > (alternative balloon implementations, not yet touched by the
> > > > guest, etc.). Besides, you wouldn't need to synchronize with the
> > > > guest.
> > > >
> > > > Roman.
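[For reference, a minimal sketch of the /proc/self/pagemap scan being
discussed, assuming 4 KiB pages. Each pagemap entry is 64 bits, with
bit 63 = page present and bit 62 = swapped (see
Documentation/vm/pagemap.txt); an entry with neither bit set is an
unmapped page. The function name is illustrative, not taken from any
patch:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12                  /* assumed 4 KiB pages */
    #define PM_PRESENT (1ULL << 63)
    #define PM_SWAPPED (1ULL << 62)

    /* Count pages in [start, start + len) that are neither present nor
     * swapped, i.e. unmapped (ballooned out or never touched). */
    static size_t count_unmapped(const void *start, size_t len)
    {
        FILE *f = fopen("/proc/self/pagemap", "rb");
        size_t pages = len >> PAGE_SHIFT, unmapped = 0;
        uint64_t entry;

        if (!f)
            return 0;
        /* One 64-bit entry per page, indexed by virtual page number. */
        fseek(f, (long)((uintptr_t)start >> PAGE_SHIFT) * sizeof(entry),
              SEEK_SET);
        for (size_t i = 0; i < pages; i++) {
            if (fread(&entry, sizeof(entry), 1, f) != 1)
                break;
            if (!(entry & (PM_PRESENT | PM_SWAPPED)))
                unmapped++;
        }
        fclose(f);
        return unmapped;
    }

As the reply below points out, this only finds pages that are not mapped
at all; pages the guest allocated and later freed still look mapped to
the host.]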
> > > The unmapped/zero mapped pages can be detected by parsing
> > > /proc/self/pagemap, but the free pages can't be detected by this.
> > > Imagine an application that allocates a large amount of memory;
> > > after using it, it frees the memory, and then live migration
> > > happens. All these free pages will be processed and sent to the
> > > destination, which is not optimal.
> >
> > First, the likelihood of such a situation is marginal; there's no
> > point optimizing for it specifically.
> >
> > And second, even if that happens, you inflate the balloon right
> > before the migration and the free memory will get unmapped very
> > quickly, so this case is covered nicely by the same technique that
> > works for more realistic cases, too.
>
> Although I wonder which is cheaper; that would be fairly expensive for
> the guest, wouldn't it? And you'd somehow have to kick the guest before
> migration to do the ballooning - and how long would you wait for it to
> finish?

About 5 seconds for an 8G guest ballooned down to 1G. Getting the free
page bitmap takes about 20ms for an idle 8G guest.

Liang

>
> Dave
>
> > Roman.
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
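[For reference, the filtering described in the cover letter amounts to
one bitmap test per page during the ram bulk stage. A rough sketch with
illustrative names (free_bitmap, migrate_bulk_stage, and the send_page
callback are hypothetical, not the actual QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    /* One bit per guest page; a set bit means the guest reported the
     * page as free when the bitmap was taken. */
    static inline bool page_is_free(const uint64_t *free_bitmap,
                                    uint64_t page_nr)
    {
        return free_bitmap[page_nr / 64] & (1ULL << (page_nr % 64));
    }

    static void migrate_bulk_stage(const uint64_t *free_bitmap,
                                   uint64_t total_pages,
                                   void (*send_page)(uint64_t page_nr))
    {
        for (uint64_t page_nr = 0; page_nr < total_pages; page_nr++) {
            if (!page_is_free(free_bitmap, page_nr))
                send_page(page_nr); /* only non-free pages go on the wire */
        }
    }

A page the guest reuses after the bitmap is taken would be written to,
and so should be picked up by the normal dirty tracking in later
migration iterations.]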