From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 12 Aug 2024 15:05:48 -0400
From: Peter Xu
To: David Hildenbrand
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sean Christopherson,
	Oscar Salvador, Jason Gunthorpe, Axel Rasmussen,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org, Will Deacon,
	Gavin Shan, Paolo Bonzini, Zi Yan, Andrew Morton, Catalin Marinas,
	Ingo Molnar, Alistair Popple, Borislav Petkov, Thomas Gleixner,
	kvm@vger.kernel.org, Dave Hansen, Alex Williamson, Yan Zhao
Subject: Re: [PATCH 07/19] mm/fork: Accept huge pfnmap entries
Message-ID:
References: <20240809160909.1023470-1-peterx@redhat.com>
	<20240809160909.1023470-8-peterx@redhat.com>
	<8ef394e6-a964-41c4-b33c-0e940b6b9bd8@redhat.com>
	<9155deaa-b6c5-4e6c-95a7-9a5311b7085a@redhat.com>
MIME-Version: 1.0
In-Reply-To: <9155deaa-b6c5-4e6c-95a7-9a5311b7085a@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Mon, Aug 12, 2024 at 08:50:12PM +0200, David Hildenbrand wrote:
> On 12.08.24 20:29, Peter Xu wrote:
> > On Fri, Aug 09, 2024 at 07:59:58PM +0200, David Hildenbrand wrote:
> > > On 09.08.24 19:15, Peter Xu wrote:
> > > > On Fri, Aug 09, 2024 at 06:32:44PM +0200, David Hildenbrand wrote:
> > > > > On 09.08.24 18:08, Peter Xu wrote:
> > > > > > Teach the fork code
> > > > > > to properly copy pfnmaps for pmd/pud levels. Pud is
> > > > > > much easier, the write bit needs to be persisted though for writable and
> > > > > > shared pud mappings like PFNMAP ones, otherwise a follow up write in either
> > > > > > parent or child process will trigger a write fault.
> > > > > >
> > > > > > Do the same for pmd level.
> > > > > >
> > > > > > Signed-off-by: Peter Xu
> > > > > > ---
> > > > > >  mm/huge_memory.c | 27 ++++++++++++++++++++++++---
> > > > > >  1 file changed, 24 insertions(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > > > index 6568586b21ab..015c9468eed5 100644
> > > > > > --- a/mm/huge_memory.c
> > > > > > +++ b/mm/huge_memory.c
> > > > > > @@ -1375,6 +1375,22 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > > >  	pgtable_t pgtable = NULL;
> > > > > >  	int ret = -ENOMEM;
> > > > > > +	pmd = pmdp_get_lockless(src_pmd);
> > > > > > +	if (unlikely(pmd_special(pmd))) {
> > > > > > +		dst_ptl = pmd_lock(dst_mm, dst_pmd);
> > > > > > +		src_ptl = pmd_lockptr(src_mm, src_pmd);
> > > > > > +		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
> > > > > > +		/*
> > > > > > +		 * No need to recheck the pmd, it can't change with write
> > > > > > +		 * mmap lock held here.
> > > > > > +		 */
> > > > > > +		if (is_cow_mapping(src_vma->vm_flags) && pmd_write(pmd)) {
> > > > > > +			pmdp_set_wrprotect(src_mm, addr, src_pmd);
> > > > > > +			pmd = pmd_wrprotect(pmd);
> > > > > > +		}
> > > > > > +		goto set_pmd;
> > > > > > +	}
> > > > > > +
> > > > >
> > > > > I strongly assume we should be using vm_normal_page_pmd() instead of
> > > > > pmd_page() further below. pmd_special() should be mostly limited to GUP-fast
> > > > > and vm_normal_page_pmd().
> > > >
> > > > One thing to mention that it has this:
> > > >
> > > > 	if (!vma_is_anonymous(dst_vma))
> > > > 		return 0;
> > >
> > > Another obscure thing in this function.
> > > It's not the job of copy_huge_pmd()
> > > to make the decision whether to copy, it's the job of vma_needs_copy() in
> > > copy_page_range().
> > >
> > > And now I have to suspect that uffd-wp is broken with this function, because
> > > as vma_needs_copy() clearly states, we must copy, and we don't do that for
> > > PMDs. Ugh.
> > >
> > > What a mess, we should just do what we do for PTEs and we will be fine ;)
> >
> > IIUC it's not a problem: file uffd-wp is different from anonymous, in that
> > it pushes everything down to ptes.
> >
> > It means if we skipped one huge pmd here for file, then it's destined to
> > have nothing to do with uffd-wp, otherwise it should have already been
> > split at the first attempt to wr-protect.
>
> Is that also true for UFFD_FEATURE_WP_ASYNC, when we call
> pagemap_scan_thp_entry()->make_uffd_wp_pmd() ?
>
> I'm not immediately finding the code that does the "pushes everything down
> to ptes", so I might miss that part.

UFFDIO_WRITEPROTECT should have all those covered, while I guess you're
right, looks like the pagemap ioctl is overlooked..

> > >
> > > Also, we call copy_huge_pmd() only if "is_swap_pmd(*src_pmd) ||
> > > pmd_trans_huge(*src_pmd) || pmd_devmap(*src_pmd)"
> > >
> > > Would that even be the case with PFNMAP? I suspect that pmd_trans_huge()
> > > would return "true" for special pfnmap, which is rather "surprising", but
> > > fortunate for us.
> >
> > It's definitely not surprising to me as that's the plan.. and I thought it
> > shouldn't be surprising to you - if you remember before I sent this one, I
> > tried to decouple that here with the "thp agnostic" series:
> >
> > https://lore.kernel.org/r/20240717220219.3743374-1-peterx@redhat.com
> >
> > in which you reviewed it (which I appreciated).
> >
> > So yes, pfnmap on pmd so far will report pmd_trans_huge==true.
>
> I review way too much stuff to remember everything :) That certainly screams
> for a cleanup ...

Definitely.
> >
> > >
> > > Likely we should be calling copy_huge_pmd() if pmd_leaf() ... cleanup for
> > > another day.
> >
> > Yes, ultimately it should really be a pmd_leaf(), but since I didn't get
> > much feedback there, and that can further postpone this series from being
> > posted I'm afraid, then I decided to just move on with "taking pfnmap as
> > THPs". The corresponding change on this path is here in that series:
> >
> > https://lore.kernel.org/all/20240717220219.3743374-7-peterx@redhat.com/
> >
> > @@ -1235,8 +1235,7 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> >  	src_pmd = pmd_offset(src_pud, addr);
> >  	do {
> >  		next = pmd_addr_end(addr, end);
> > -		if (is_swap_pmd(*src_pmd) || pmd_trans_huge(*src_pmd)
> > -		    || pmd_devmap(*src_pmd)) {
> > +		if (is_swap_pmd(*src_pmd) || pmd_is_leaf(*src_pmd)) {
> >  			int err;
> >  			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
> >  			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
>
> Ah, good.
>
> [...]
>
> > > Yes, as stated above, likely broken with UFFD-WP ...
> > >
> > > I really think we should make this code just behave like it would with PTEs,
> > > instead of throwing in more "different" handling.
> >
> > So it could simply be because file / anon uffd-wp work very differently.
>
> Or because nobody wants to clean up that code ;)

I think in this case maybe the fork() part is all fine? As long as we can
switch the pagemap ioctl to do proper break-downs when necessary, or even
try to reuse what UFFDIO_WRITEPROTECT does if still possible in some way.

In all cases, definitely sounds like another separate effort.

Thanks,

-- 
Peter Xu