From: Dan Williams
Date: Tue, 5 Nov 2019 15:02:40 -0800
Subject: Re: [PATCH v1 03/10] KVM: Prepare kvm_is_reserved_pfn() for PG_reserved changes
To: David Hildenbrand
Cc: Sean Christopherson, Linux Kernel Mailing List, Linux MM, Michal Hocko,
    Andrew Morton, kvm-ppc@vger.kernel.org, linuxppc-dev, KVM list,
    linux-hyperv@vger.kernel.org, devel@driverdev.osuosl.org, xen-devel,
    X86 ML, Alexander Duyck, Alex Williamson, Allison Randal,
    Andy Lutomirski, "Aneesh Kumar K.V", Anshuman Khandual, Anthony Yznaga,
    Benjamin Herrenschmidt, Borislav Petkov, Boris Ostrovsky,
    Christophe Leroy, Cornelia Huck, Dave Hansen, Haiyang Zhang,
    "H. Peter Anvin", Ingo Molnar, "Isaac J. Manjarres", Jim Mattson,
    Joerg Roedel, Johannes Weiner, Juergen Gross, KarimAllah Ahmed,
    Kees Cook, "K. Y. Srinivasan", "Matthew Wilcox (Oracle)", Matt Sickler,
    Mel Gorman, Michael Ellerman, Mike Rapoport, Nicholas Piggin,
    Oscar Salvador, Paolo Bonzini, Paul Mackerras, Pavel Tatashin,
    Peter Zijlstra, Qian Cai, Radim Krčmář, Sasha Levin,
    Stefano Stabellini, Stephen Hemminger, Thomas Gleixner,
    Vitaly Kuznetsov, Vlastimil Babka, Wanpeng Li, YueHaibing,
    Adam Borowski

On Tue, Nov 5, 2019 at 12:31 PM David Hildenbrand wrote:
>
> >>> I think I know what's going wrong:
> >>>
> >>> Pages that are pinned via gfn_to_pfn() and friends take a reference,
> >>> but are often released via
> >>> kvm_release_pfn_clean()/kvm_release_pfn_dirty()/kvm_release_page_clean()...
> >>>
> >>> E.g., in arch/x86/kvm/x86.c:reexecute_instruction()
> >>>
> >>> ...
> >>> pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
> >>> ...
> >>> kvm_release_pfn_clean(pfn);
> >>>
> >>> void kvm_release_pfn_clean(kvm_pfn_t pfn)
> >>> {
> >>>         if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
> >>>                 put_page(pfn_to_page(pfn));
> >>> }
> >>>
> >>> This function makes perfect sense as the counterpart for kvm_get_pfn():
> >>>
> >>> void kvm_get_pfn(kvm_pfn_t pfn)
> >>> {
> >>>         if (!kvm_is_reserved_pfn(pfn))
> >>>                 get_page(pfn_to_page(pfn));
> >>> }
> >>>
> >>> As all ZONE_DEVICE pages are currently reserved, pages pinned via
> >>> gfn_to_pfn() and friends will often not see a put_page() AFAIKS.
> >
> > Assuming gup() takes a reference for ZONE_DEVICE pages, yes, this is a
> > KVM bug.
>
> Yes, it does take a reference AFAIKs. E.g.,
>
> mm/gup.c:gup_pte_range():
> ...
>                 if (pte_devmap(pte)) {
>                         if (unlikely(flags & FOLL_LONGTERM))
>                                 goto pte_unmap;
>
>                         pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
>                         if (unlikely(!pgmap)) {
>                                 undo_dev_pagemap(nr, nr_start, pages);
>                                 goto pte_unmap;
>                         }
>                 } else if (pte_special(pte))
>                         goto pte_unmap;
>
>                 VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
>                 page = pte_page(pte);
>
>                 head = try_get_compound_head(page, 1);
>
> try_get_compound_head() will increment the reference count.
>
> >
> >>> Now, my patch does not change that, the result of
> >>> kvm_is_reserved_pfn(pfn) will be unchanged. A proper fix for that would
> >>> probably be
> >>>
> >>> a) To drop the reference to ZONE_DEVICE pages in gfn_to_pfn() and
> >>> friends, after you successfully pinned the pages. (not sure if that's
> >>> the right thing to do but you're the expert)
> >>>
> >>> b) To not use kvm_release_pfn_clean() and friends on pages that were
> >>> definitely pinned.
> >
> > This is already KVM's intent, i.e. the purpose of the PageReserved() check
> > is simply to avoid putting a non-existent reference. The problem is that
> > KVM assumes pages with PG_reserved set are never pinned, which AFAICT was
> > true when the code was first added.
> >
> >> (talking to myself, sorry)
> >>
> >> Thinking again, dropping this patch from this series could effectively also
> >> fix that issue. E.g., kvm_release_pfn_clean() and friends would always do a
> >> put_page() if "pfn_valid() and !PageReserved()", so after patch 9 also on
> >> ZONE_DEVICE pages.
> >
> > Yeah, this appears to be the correct fix.
> >
> >> But it would have side effects that might not be desired. E.g.:
> >>
> >> 1. kvm_pfn_to_page() would also return ZONE_DEVICE pages (might even be the
> >> right thing to do).
> >
> > This should be ok, at least on x86. There are only three users of
> > kvm_pfn_to_page(). Two of those are on allocations that are controlled by
> > KVM and are guaranteed to be vanilla MAP_ANONYMOUS. The third is on guest
> > memory when running a nested guest, and in that case supporting ZONE_DEVICE
> > memory is desirable, i.e. KVM should play nice with a guest that is backed
> > by ZONE_DEVICE memory.
> >
> >> 2. kvm_set_pfn_dirty() would also set ZONE_DEVICE pages dirty (might be
> >> okay)
> >
> > This is ok from a KVM perspective.
>
> What about
>
> void kvm_get_pfn(kvm_pfn_t pfn)
> {
>         if (!kvm_is_reserved_pfn(pfn))
>                 get_page(pfn_to_page(pfn));
> }
>
> Is a pure get_page() sufficient in case of ZONE_DEVICE?
> (asking because the live references obtained via
> get_dev_pagemap(pte_pfn(pte), pgmap) in mm/gup.c:gup_pte_range()
> somewhat confuse me :) )

It is not sufficient. PTE_DEVMAP is there to tell the gup path "be
careful, this pfn has device-lifetime, make sure the device is pinned
and not actively in the process of dying before pinning any pages
associated with this device".

However, if kvm_get_pfn() honors kvm_is_reserved_pfn(), which returns
true for ZONE_DEVICE, I'm missing how it is messing up the reference
counts.

> > The scarier code (for me) is transparent_hugepage_adjust() and
> > kvm_mmu_zap_collapsible_spte(), as I don't at all understand the
> > interaction between THP and _PAGE_DEVMAP.
>
> The x86 KVM MMU code is some of the ugliest code I know (sorry, but it
> had to be said :/ ). Luckily, this should be independent of the
> PG_reserved thingy AFAIKs.

Both transparent_hugepage_adjust() and kvm_mmu_zap_collapsible_spte()
are honoring kvm_is_reserved_pfn(), so again I'm missing where the
page count gets mismanaged and leads to the reported hang.
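
If the conclusion does end up being that the right fix is to stop
treating ZONE_DEVICE pages as reserved (per the "drop this patch"
discussion above), a minimal, untested sketch of that direction could
look like the following. The kvm_is_zone_device_pfn() helper name is
made up here purely for illustration; is_zone_device_page() is the
existing mm helper it leans on:

/* Untested sketch, not a real patch. */
static bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
{
        if (!pfn_valid(pfn))
                return false;

        return is_zone_device_page(pfn_to_page(pfn));
}

bool kvm_is_reserved_pfn(kvm_pfn_t pfn)
{
        if (pfn_valid(pfn))
                /*
                 * ZONE_DEVICE pages currently carry PG_reserved, but they
                 * are pinned and unpinned with normal page refcounts (gup()
                 * takes a reference via try_get_compound_head() above), so
                 * don't treat them as reserved here.
                 */
                return PageReserved(pfn_to_page(pfn)) &&
                       !kvm_is_zone_device_pfn(pfn);

        return true;
}

With something along those lines, kvm_release_pfn_clean() and friends
would drop the reference that gup() took on a ZONE_DEVICE page instead
of leaking it, and kvm_get_pfn()/kvm_release_pfn_*() would stay
balanced.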