From: David Hildenbrand
Organization: Red Hat
Date: Thu, 23 Jun 2022 20:21:54 +0200
Subject: Re: [PATCH v5 01/13] mm: add zone device coherent type memory support
To: "Sierra Guiza, Alejandro (Alex)", Alistair Popple, akpm@linux-foundation.org
Cc: Felix Kuehling, jgg@nvidia.com, linux-mm@kvack.org, rcampbell@nvidia.com,
 linux-ext4@vger.kernel.org, linux-xfs@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, hch@lst.de,
 jglisse@redhat.com, willy@infradead.org
Message-ID: <1ee41224-1095-7fb6-97c0-bf5add2e467b@redhat.com>

On 23.06.22 20:20, Sierra Guiza, Alejandro (Alex) wrote:
>
> On 6/23/2022 2:57 AM, David Hildenbrand wrote:
>> On 23.06.22 01:16, Sierra Guiza, Alejandro (Alex) wrote:
>>> On 6/21/2022 11:16 AM, David Hildenbrand wrote:
>>>> On 21.06.22 18:08, Sierra Guiza, Alejandro (Alex) wrote:
>>>>> On 6/21/2022 7:25 AM, David Hildenbrand wrote:
>>>>>> On 21.06.22 13:55, Alistair Popple wrote:
>>>>>>> David Hildenbrand writes:
>>>>>>>
>>>>>>>> On 21.06.22 13:25, Felix Kuehling wrote:
>>>>>>>>> Am 6/17/22 um 23:19 schrieb David Hildenbrand:
>>>>>>>>>> On 17.06.22 21:27, Sierra Guiza, Alejandro (Alex) wrote:
>>>>>>>>>>> On 6/17/2022 12:33 PM, David Hildenbrand wrote:
>>>>>>>>>>>> On 17.06.22 19:20, Sierra Guiza, Alejandro (Alex) wrote:
>>>>>>>>>>>>> On 6/17/2022 4:40 AM, David Hildenbrand wrote:
>>>>>>>>>>>>>> On 31.05.22 22:00, Alex Sierra wrote:
>>>>>>>>>>>>>>> Device memory that is cache coherent from device and CPU point of view.
>>>>>>>>>>>>>>> This is used on platforms that have an advanced system bus (like CAPI
>>>>>>>>>>>>>>> or CXL). Any page of a process can be migrated to such memory. However,
>>>>>>>>>>>>>>> no one should be allowed to pin such memory so that it can always be
>>>>>>>>>>>>>>> evicted.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Signed-off-by: Alex Sierra
>>>>>>>>>>>>>>> Acked-by: Felix Kuehling
>>>>>>>>>>>>>>> Reviewed-by: Alistair Popple
>>>>>>>>>>>>>>> [hch: rebased ontop of the refcount changes,
>>>>>>>>>>>>>>>       removed is_dev_private_or_coherent_page]
>>>>>>>>>>>>>>> Signed-off-by: Christoph Hellwig
>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>  include/linux/memremap.h | 19 +++++++++++++++++++
>>>>>>>>>>>>>>>  mm/memcontrol.c          |  7 ++++---
>>>>>>>>>>>>>>>  mm/memory-failure.c      |  8 ++++++--
>>>>>>>>>>>>>>>  mm/memremap.c            | 10 ++++++++++
>>>>>>>>>>>>>>>  mm/migrate_device.c      | 16 +++++++---------
>>>>>>>>>>>>>>>  mm/rmap.c                |  5 +++--
>>>>>>>>>>>>>>>  6 files changed, 49 insertions(+), 16 deletions(-)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
>>>>>>>>>>>>>>> index 8af304f6b504..9f752ebed613 100644
>>>>>>>>>>>>>>> --- a/include/linux/memremap.h
>>>>>>>>>>>>>>> +++ b/include/linux/memremap.h
>>>>>>>>>>>>>>> @@ -41,6 +41,13 @@ struct vmem_altmap {
>>>>>>>>>>>>>>>   * A more complete discussion of unaddressable memory may be found in
>>>>>>>>>>>>>>>   * include/linux/hmm.h and Documentation/vm/hmm.rst.
>>>>>>>>>>>>>>>   *
>>>>>>>>>>>>>>> + * MEMORY_DEVICE_COHERENT:
>>>>>>>>>>>>>>> + * Device memory that is cache coherent from device and CPU point of view. This
>>>>>>>>>>>>>>> + * is used on platforms that have an advanced system bus (like CAPI or CXL). A
>>>>>>>>>>>>>>> + * driver can hotplug the device memory using ZONE_DEVICE and with that memory
>>>>>>>>>>>>>>> + * type. Any page of a process can be migrated to such memory. However no one
>>>>>>>>>>>>>> Any page might not be right, I'm pretty sure. ... just thinking about special pages
>>>>>>>>>>>>>> like vdso, shared zeropage, ... pinned pages ...
>>>>>>>>>>>> Well, you cannot migrate long term pages, that's what I meant :)
>>>>>>>>>>>>
>>>>>>>>>>>>>>> + * should be allowed to pin such memory so that it can always be evicted.
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>   * MEMORY_DEVICE_FS_DAX:
>>>>>>>>>>>>>>>   * Host memory that has similar access semantics as System RAM i.e. DMA
>>>>>>>>>>>>>>>   * coherent and supports page pinning. In support of coordinating page
>>>>>>>>>>>>>>> @@ -61,6 +68,7 @@ struct vmem_altmap {
>>>>>>>>>>>>>>>  enum memory_type {
>>>>>>>>>>>>>>>  	/* 0 is reserved to catch uninitialized type fields */
>>>>>>>>>>>>>>>  	MEMORY_DEVICE_PRIVATE = 1,
>>>>>>>>>>>>>>> +	MEMORY_DEVICE_COHERENT,
>>>>>>>>>>>>>>>  	MEMORY_DEVICE_FS_DAX,
>>>>>>>>>>>>>>>  	MEMORY_DEVICE_GENERIC,
>>>>>>>>>>>>>>>  	MEMORY_DEVICE_PCI_P2PDMA,
>>>>>>>>>>>>>>> @@ -143,6 +151,17 @@ static inline bool folio_is_device_private(const struct folio *folio)
>>>>>>>>>>>>>> In general, this LGTM, and it should be correct with PageAnonExclusive I think.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> However, where exactly is pinning forbidden?
>>>>>>>>>>>>> Long-term pinning is forbidden since it would interfere with the device
>>>>>>>>>>>>> memory manager owning the
>>>>>>>>>>>>> device-coherent pages (e.g. evictions in TTM). However, normal pinning
>>>>>>>>>>>>> is allowed on this device type.
>>>>>>>>>>>> I don't see updates to folio_is_pinnable() in this patch.
>>>>>>>>>>> Device coherent type pages should return true here, as they are pinnable
>>>>>>>>>>> pages.
>>>>>>>>>> That function is only called for long-term pinnings in try_grab_folio().
>>>>>>>>>>
>>>>>>>>>>>> So wouldn't try_grab_folio() simply pin these pages? What am I missing?
>>>>>>>>>>> As far as I understand this return NULL for long term pin pages.
>>>>>>>>>>> Otherwise they get refcount incremented.
>>>>>>>>>> I don't follow.
>>>>>>>>>>
>>>>>>>>>> You're saying
>>>>>>>>>>
>>>>>>>>>> a) folio_is_pinnable() returns true for device coherent pages
>>>>>>>>>>
>>>>>>>>>> and that
>>>>>>>>>>
>>>>>>>>>> b) device coherent pages don't get long-term pinned
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Yet, the code says
>>>>>>>>>>
>>>>>>>>>> struct folio *try_grab_folio(struct page *page, int refs, unsigned int flags)
>>>>>>>>>> {
>>>>>>>>>> 	if (flags & FOLL_GET)
>>>>>>>>>> 		return try_get_folio(page, refs);
>>>>>>>>>> 	else if (flags & FOLL_PIN) {
>>>>>>>>>> 		struct folio *folio;
>>>>>>>>>>
>>>>>>>>>> 		/*
>>>>>>>>>> 		 * Can't do FOLL_LONGTERM + FOLL_PIN gup fast path if not in a
>>>>>>>>>> 		 * right zone, so fail and let the caller fall back to the slow
>>>>>>>>>> 		 * path.
>>>>>>>>>> 		 */
>>>>>>>>>> 		if (unlikely((flags & FOLL_LONGTERM) &&
>>>>>>>>>> 			     !is_pinnable_page(page)))
>>>>>>>>>> 			return NULL;
>>>>>>>>>> ...
>>>>>>>>>> 		return folio;
>>>>>>>>>> 	}
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> What prevents these pages from getting long-term pinned as stated in this patch?
>>>>>>>>> Long-term pinning is handled by __gup_longterm_locked, which migrates
>>>>>>>>> pages returned by __get_user_pages_locked that cannot be long-term
>>>>>>>>> pinned. try_grab_folio is OK to grab the pages. Anything that can't be
>>>>>>>>> long-term pinned will be migrated afterwards, and
>>>>>>>>> __get_user_pages_locked will be retried. The migration of
>>>>>>>>> DEVICE_COHERENT pages was implemented by Alistair in patch 5/13
>>>>>>>>> ("mm/gup: migrate device coherent pages when pinning instead of failing").
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>> __gup_longterm_locked()->check_and_migrate_movable_pages()
>>>>>>>>
>>>>>>>> Which checks folio_is_pinnable() and doesn't do anything if set.
>>>>>>>>
>>>>>>>> Sorry to be dense here, but I don't see how what's stated in this patch
>>>>>>>> works without adjusting folio_is_pinnable().
>>>>>>> Ugh, I think you might be right about try_grab_folio().
>>>>>>>
>>>>>>> We didn't update folio_is_pinnable() to include device coherent pages
>>>>>>> because device coherent pages are pinnable. It is really just
>>>>>>> FOLL_LONGTERM that we want to prevent here.
>>>>>>>
>>>>>>> For normal PUP that is done by my change in
>>>>>>> check_and_migrate_movable_pages() which migrates pages being pinned with
>>>>>>> FOLL_LONGTERM. But I think I incorrectly assumed we would take the
>>>>>>> pte_devmap() path in gup_pte_range(), which we don't for coherent pages.
>>>>>>> So I think the check in try_grab_folio() needs to be:
>>>>>> I think I said it already (and I might be wrong without reading the
>>>>>> code), but folio_is_pinnable() is *only* called for long-term pinnings.
>>>>>>
>>>>>> It should actually be called folio_is_longterm_pinnable().
>>>>>>
>>>>>> That's where that check should go, no?
>>>>> David, I think you're right. We didn't catch this since the LONGTERM gup
>>>>> test we added to hmm-test only calls to pin_user_pages. Apparently
>>>>> try_grab_folio is called only from fast callers (ex.
>>>>> pin_user_pages_fast/get_user_pages_fast). I have added a conditional
>>>>> similar to what Alistair has proposed to return null on LONGTERM &&
>>>>> (coherent_pages || folio_is_pinnable) at try_grab_folio. Also a new gup
>>>>> test was added with LONGTERM set that calls pin_user_pages_fast.
>>>>> Returning null under this condition it does causes the migration from
>>>>> dev to system memory.
>>>>>
>>>> Why can't coherent memory simply put its checks into
>>>> folio_is_pinnable()? I don't get it why we have to do things differently
>>>> here.
>>>>
>>>>> Actually, Im having different problems with a call to PageAnonExclusive
>>>>> from try_to_migrate_one during page fault from a HMM test that first
>>>>> migrate pages to device private and forks to mark as COW these pages.
>>>>> Apparently is catching the first BUG VM_BUG_ON_PGFLAGS(!PageAnon(page),
>>>>> page)
>>>> With or without this series? A backtrace would be great.
>>> Here's the back trace. This happens in a hmm-test added in this patch
>>> series. However, I have tried to isolate this BUG by just adding the COW
>>> test with private device memory only. This is only present as follows.
>>> Allocate anonymous mem->Migrate to private device memory->fork->try to
>>> access to parent's anonymous memory (which will suppose to trigger a
>>> page fault and migration to system mem). Just for the record, if the
>>> child is terminated before the parent's memory is accessed, this problem
>>> is not present.
>>
>> The only usage of PageAnonExclusive() in try_to_migrate_one() is:
>>
>> 	anon_exclusive = folio_test_anon(folio) &&
>> 			 PageAnonExclusive(subpage);
>>
>> Which can only possibly fail if subpage is not actually part of the folio.
>>
>>
>> I see some controversial code in the the if (folio_is_zone_device(folio)) case later:
>>
>> 	 * The assignment to subpage above was computed from a
>> 	 * swap PTE which results in an invalid pointer.
>> 	 * Since only PAGE_SIZE pages can currently be
>> 	 * migrated, just set it to page. This will need to be
>> 	 * changed when hugepage migrations to device private
>> 	 * memory are supported.
>> 	 */
>> 	subpage = &folio->page;
>>
>> There we have our invalid pointer hint.
>>
>> I don't see how it could have worked if the child quit, though? Maybe
>> just pure luck?
>>
>>
>> Does the following fix your issue:
>
> Yes, it fixed the issue. Thanks. Should we include this patch in this
> patch series or separated?
>
> Regards,
> Alex Sierra

I'll send it right away "officially" so we can get it into 5.19. Can I
add your tested-by?

-- 
Thanks,

David / dhildenb
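
[Aside, not from the thread itself: the placement David argues for above would
mean teaching the helper that GUP consults only under FOLL_LONGTERM about
device-coherent pages. A rough sketch of that idea follows; it is not the patch
discussed here, it assumes the is_device_coherent_page() checker introduced by
this series, and it abbreviates the existing checks for ordinary pages.]

/*
 * Sketch only: device-coherent pages may be pinned, but never long-term.
 * Failing the long-term-pinnability check makes the GUP slow path
 * (check_and_migrate_movable_pages()) migrate such pages to system memory
 * instead of long-term pinning them in device memory.
 */
static inline bool is_pinnable_page(struct page *page)
{
	if (is_device_coherent_page(page))	/* helper assumed from this series */
		return false;
	/* ...existing ZONE_MOVABLE / CMA / zero-pfn checks for ordinary pages... */
	return !is_zone_movable_page(page);
}

With a check along these lines, both try_grab_folio() and
check_and_migrate_movable_pages() would treat FOLL_LONGTERM pins of
device-coherent pages as non-pinnable and fall back to migration, while
ordinary short-term pins keep working.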