Date: Mon, 29 Jul 2024 12:26:00 -0400
From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
    Muchun Song, Oscar Salvador, Qi Zheng
Subject: Re: [PATCH v1 1/2] mm: let pte_lockptr() consume a pte_t pointer
References: <20240725183955.2268884-1-david@redhat.com>
            <20240725183955.2268884-2-david@redhat.com>
            <9e671388-a5c6-4de1-8c85-b7af8aee7f44@redhat.com>
In-Reply-To: <9e671388-a5c6-4de1-8c85-b7af8aee7f44@redhat.com>

On Fri, Jul 26, 2024 at 11:48:01PM +0200, David Hildenbrand wrote:
> On 26.07.24 23:28, Peter Xu wrote:
> > On Fri, Jul 26, 2024 at 06:02:17PM +0200, David Hildenbrand wrote:
> > > On 26.07.24 17:36, Peter Xu wrote:
> > > > On Thu, Jul 25, 2024 at 08:39:54PM +0200, David Hildenbrand wrote:
> > > > > pte_lockptr() is the only *_lockptr() function that doesn't consume
> > > > > what would be expected: it consumes a pmd_t pointer instead of a
> > > > > pte_t pointer.
> > > > >
> > > > > Let's change that. The two callers in pgtable-generic.c are easily
> > > > > adjusted. Adjust khugepaged.c:retract_page_tables() to simply do a
> > > > > pte_offset_map_nolock() to obtain the lock, even though we won't
> > > > > actually be traversing the page table.
> > > > >
> > > > > This makes the code more similar to the other variants and avoids
> > > > > other hacks to make the new pte_lockptr() version happy.
> > > > > pte_lockptr() users now reside only in pgtable-generic.c.
> > > > >
> > > > > Maybe using pte_offset_map_nolock() is the right thing to do because
> > > > > the PTE table could have been removed in the meantime? At least it
> > > > > sounds more future proof if we ever have other means of page table
> > > > > reclaim.
> > > >
> > > > I think it can't change, because anyone who wants to race against
> > > > this should try to take the pmd lock first (which was held already)?
> > >
> > > That doesn't explain why it is safe for us to assume that after we took
> > > the PMD lock the PMD actually still points at a completely empty page
> > > table. Likely it currently works by accident, because we only have a
> > > single such user that makes this assumption. It might certainly be
> > > different once we asynchronously reclaim page tables.
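To make the discussion easier to follow, the change being debated is
roughly the following. This is only a sketch based on the patch description
above, not the actual diff, so names and the error-handling path are
assumptions:

```c
/*
 * Sketch of the new helper: derive the PTE table's lock from the pte_t
 * pointer itself, like the other *_lockptr() helpers do.
 */
static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pte_t *pte)
{
	/* A PTE page table never spans more than a single page. */
	return ptlock_ptr(virt_to_ptdesc(pte));
}

/*
 * Sketch of the khugepaged.c:retract_page_tables() adjustment: map the PTE
 * table only to learn its lock; the entries themselves are never walked.
 * The "continue on failure" handling is an assumption, not the real diff.
 */
pte_t *pte;
spinlock_t *ptl;

pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
if (!pte)
	continue;	/* PTE table already gone: skip this mapping */
pte_unmap(pte);
/* ... then take ptl nested under the pmd lock and retract the table ... */
```

The old form did the same lookup from the pmd side, via
ptlock_ptr(page_ptdesc(pmd_page(*pmd))), which is the open-coded line that
comes up again below.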
> >
> > I think it's safe because find_pmd_or_thp_or_none() returned SUCCEED,
> > and we're holding the i_mmap lock for read. I don't see any way that
> > this pmd can become a non-pgtable-page.
> >
> > I meant, AFAIU tearing down a pgtable in whatever sane way will need to
> > at least take both the mmap write lock and the i_mmap write lock (in
> > this case, a file mapping), no?
>
> Skimming over [1], where I still owe a review, I think we can now do it
> purely under the read locks, with the PMD lock held.

Err, how did I miss that.. yeah, you're definitely right, and that's the
context here where we're collapsing. I think I somehow forgot all of Hugh's
work when I replied there, sorry.

> I think this is also what collapse_pte_mapped_thp() ends up doing: replace
> a PTE table that maps a folio by a PMD (present or none, depends) that
> maps a folio, only while holding the mmap lock in read mode. Of course,
> here the table is not empty, but we need similar ways of making PT walkers
> aware of concurrent page table retraction.
>
> IIRC, that was the magic added to __pte_offset_map(), such that
> pte_offset_map_nolock/pte_offset_map_lock can fail on races.

That said, I still think the current code (before this patch) is safe, and
the same goes for a hard-coded line taking the pte pgtable lock. Again, I'm
fine if you prefer pte_offset_map_nolock(); I just think the rcu read lock
and so on can be avoided.

I think that's because such a collapse so far can only happen in a path
where we hold the large folio (PMD-level) lock first. It means anyone who
could change this pmd entry into something that is not a pte pgtable is
blocked already, hence it must keep being a pte pgtable page even if we
don't take any rcu read lock.

>
> But if we hold the PMD lock, nothing should actually change (so far my
> understanding) -- we cannot suddenly rip out a page table.
>
> [1] https://lkml.kernel.org/r/cover.1719570849.git.zhengqi.arch@bytedance.com
>
> >
> > >
> > > But yes, the PMD cannot get modified while we hold the PMD lock,
> > > otherwise we'd be in trouble.
> > >
> > > >
> > > > I wonder whether an open coded
> > > > "ptlock_ptr(page_ptdesc(pmd_page(*pmd)))" would be nicer here, but
> > > > only if my understanding is correct.
> > >
> > > I really don't like open-coding that. Fortunately we were able to
> > > limit the use of ptlock_ptr to a single user outside of
> > > arch/x86/xen/mmu_pv.c so far.
> >
> > I'm fine if you prefer it like that; I don't see it as a huge deal.
>
> Let's keep it like that, unless we can come up with something neater. At
> least it makes the code more consistent with similar code in that file,
> and the overhead should be minimal.
>
> I was briefly thinking about actually testing whether the PT is full of
> pte_none(), either as a debugging check or to also handle what is
> currently handled via:
>
>	if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
>
> It seems wasteful to let all other parts of a VMA suffer just because some
> part of it has a private page mapped / uffd-wp active.
>
> Will think about whether that is really worth it.
>
> ... also because I still want to understand why the PTL of the PMD table
> is required at all. What if we lock it first and somebody else wants to
> lock it after us while we already ripped it out? Sure, there must be some
> reason for the lock; I just don't understand it yet :/.

IIUC the pte pgtable lock will be needed for checking anon_vma safely.

E.g., consider the case where we don't take the pte pgtable lock: I think
it's possible for some thread to inject a private pte (preparing anon_vma
before that) concurrently with this thread trying to collapse the pgtable
into a huge pmd.

I mean, without the pte pgtable lock held, I think it's racy to check the
first condition of this line:

	if (unlikely(vma->anon_vma || userfaultfd_wp(vma))) {
		...
	}
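To spell out the interleaving I have in mind -- this is a sketch of my
reading, not code lifted from the tree, and the fault-side steps are
simplified:

```c
/*
 * Sketch: why checking vma->anon_vma without the pte pgtable lock would be
 * racy. The fault side prepares anon_vma first and only then installs the
 * private pte under the PTE table lock, so re-checking under that same
 * lock closes the window:
 *
 *   retract_page_tables()                 concurrent write fault
 *   ---------------------                 ----------------------
 *   sees vma->anon_vma == NULL
 *                                         anon_vma_prepare(vma)
 *                                         pte_offset_map_lock(...)
 *                                         installs a private (anon) pte
 *                                         pte_unmap_unlock(...)
 *   clears the pmd, frees the PTE table
 *     -> the freshly installed pte is lost
 *
 * Re-checking under both locks, roughly as the existing code does (sketch
 * of the pre-patch pattern, details may differ):
 */
pml = pmd_lock(mm, pmd);
ptl = pte_lockptr(mm, pmd);
if (ptl != pml)
	spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
if (likely(!vma->anon_vma && !userfaultfd_wp(vma))) {
	/* no private ptes or uffd-wp markers can appear: safe to retract */
}
```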
Thanks,

--
Peter Xu