Date: Wed, 4 Jun 2025 11:34:37 -0400
From: Peter Xu <peterx@redhat.com>
To: Kairui Song <ryncsn@gmail.com>
Cc: linux-mm@kvack.org, Andrew Morton, Barry Song <21cnbao@gmail.com>,
    Suren Baghdasaryan, Andrea Arcangeli, David Hildenbrand,
    Lokesh Gidra, stable@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4] mm: userfaultfd: fix race of userfaultfd_move and swap cache
In-Reply-To: <20250604151038.21968-1-ryncsn@gmail.com>
References: <20250604151038.21968-1-ryncsn@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
On Wed, Jun 04, 2025 at 11:10:38PM +0800, Kairui Song wrote:
> From: Kairui Song <ryncsn@gmail.com>
>
> On seeing a swap entry PTE, userfaultfd_move does a lockless swap
> cache lookup, and tries to move the found folio to the faulting vma.
> Currently, it relies on checking the PTE value to ensure that the moved
> folio still belongs to the src swap entry and that no new folio has
> been added to the swap cache, which turns out to be unreliable.
>
> While working on and reviewing the swap table series with Barry, the
> following existing races were observed and reproduced [1]:
>
> In the example below, move_pages_pte is moving src_pte to dst_pte,
> where src_pte is a swap entry PTE holding swap entry S1, and S1
> is not in the swap cache:
>
> CPU1                               CPU2
> userfaultfd_move
>   move_pages_pte()
>     entry = pte_to_swp_entry(orig_src_pte);
>     // Here it got entry = S1
>   ... < interrupted > ...
>                                    <swap in src_pte, installing folio A>
>                                    // folio A is a newly allocated folio
>                                    // and gets installed into src_pte
>                                    <free swap entry S1>
>                                    // src_pte now points to folio A, S1
>                                    // has swap count == 0, it can be freed
>                                    // by folio_free_swap or the swap
>                                    // allocator's reclaim.
>                                    <swap out folio B, reusing S1>
>                                    // folio B is a folio in another VMA.
>                                    // S1 is freed, folio B can use it
>                                    // for swap out with no problem.
>   ...
>   folio = filemap_get_folio(S1)
>   // Got folio B here !!!
>   ... < interrupted again > ...
>                                    <swap in folio B, freeing S1>
>                                    // Now S1 is free to be used again.
>                                    <swap out folio A, reusing S1>
>                                    // Now src_pte is a swap entry PTE
>                                    // holding S1 again.
>   folio_trylock(folio)
>   move_swap_pte
>     double_pt_lock
>     is_pte_pages_stable
>     // Check passed because src_pte == S1
>     folio_move_anon_rmap(...)
>     // Moved invalid folio B here !!!
>
> The race window is very short and requires multiple collisions of
> multiple rare events, so it's very unlikely to happen, but with a
> deliberately constructed reproducer and increased time window, it
> can be reproduced easily.
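>
> For context, this path is driven from userspace by the UFFDIO_MOVE
> ioctl. A minimal caller sketch (illustrative only, not part of this
> patch; it assumes a userfaultfd with UFFD_FEATURE_MOVE negotiated and
> registered over the source range, and the helper name is made up):
>
>	#include <stdint.h>
>	#include <sys/ioctl.h>
>	#include <linux/userfaultfd.h>
>
>	/* Hypothetical helper: move 'len' bytes from src to dst with
>	 * UFFDIO_MOVE. On success the kernel reports the number of
>	 * bytes moved in mv.move; on failure mv.move holds a negative
>	 * errno instead. */
>	static int uffd_move(int uffd, void *dst, void *src, size_t len)
>	{
>		struct uffdio_move mv = {
>			.dst = (uintptr_t)dst,
>			.src = (uintptr_t)src,
>			.len = len,
>			.mode = 0,
>		};
>
>		if (ioctl(uffd, UFFDIO_MOVE, &mv))
>			return -1;	/* inspect mv.move / errno */
>		return 0;
>	}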
>
> The race above can be fixed by checking whether the folio returned by
> filemap is the valid swap cache folio after acquiring the folio lock.
>
> Another similar race is possible: filemap_get_folio may return NULL, but
> folio (A) could be swapped in and then swapped out again using the same
> swap entry after the lookup. In such a case, folio (A) may remain in the
> swap cache, so it must be moved too:
>
> CPU1                               CPU2
> userfaultfd_move
>   move_pages_pte()
>     entry = pte_to_swp_entry(orig_src_pte);
>     // Here it got entry = S1, and S1 is not in swap cache
>     folio = filemap_get_folio(S1)
>     // Got NULL
>     ... < interrupted again > ...
>                                    <swap in folio A, then swap it
>                                     out again, reusing S1>
>                                    // folio A stays in swap cache
>     move_swap_pte
>       double_pt_lock
>       is_pte_pages_stable
>       // Check passed because src_pte == S1
>       folio_move_anon_rmap(...)
>       // folio A is ignored !!!
>
> Fix this by checking the swap cache again after acquiring the src_pte
> lock. To avoid the filemap overhead, check swap_map directly [2].
>
> The SWP_SYNCHRONOUS_IO path does make the problem more complex, but so
> far we don't need to worry about that, since folios can only be exposed
> to the swap cache in the swap out path, and that is also covered by this
> patch, by checking the swap cache again after acquiring the src_pte lock.
>
> Testing with a simple C program that allocates and moves several GB of
> memory did not show any observable performance change.
>
> Cc: <stable@vger.kernel.org>
> Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
> Closes: https://lore.kernel.org/linux-mm/CAMgjq7B1K=6OOrK2OUZ0-tqCzi+EJt+2_K97TPGoSt=9+JwP7Q@mail.gmail.com/ [1]
> Link: https://lore.kernel.org/all/CAGsJ_4yJhJBo16XhiC-nUzSheyX-V3-nFE+tAi=8Y560K8eT=A@mail.gmail.com/ [2]
> Signed-off-by: Kairui Song <ryncsn@gmail.com>
> Reviewed-by: Lokesh Gidra
>
> ---
>
> V1: https://lore.kernel.org/linux-mm/20250530201710.81365-1-ryncsn@gmail.com/
> Changes:
> - Check swap_map instead of doing a filemap lookup after acquiring the
>   PTE lock to minimize critical section overhead [ Barry Song, Lokesh Gidra ]
>
> V2: https://lore.kernel.org/linux-mm/20250601200108.23186-1-ryncsn@gmail.com/
> Changes:
> - Move the folio and swap check inside move_swap_pte to avoid skipping
>   the check and potential overhead [ Lokesh Gidra ]
> - Add a READ_ONCE for the swap_map read to ensure it reads an
>   up-to-date value.
>
> V3: https://lore.kernel.org/all/20250602181419.20478-1-ryncsn@gmail.com/
> Changes:
> - Add more comments and more context in the commit message.
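>
> As background for the swap_map check in the diff below: each swap slot
> has one count byte in si->swap_map, and SWAP_HAS_CACHE is set in that
> byte while a swap cache folio exists (or is being set up) for that
> slot, so the recheck under the PTL costs a single byte read. A
> conceptual sketch (hypothetical wrapper; the patch open-codes this
> test inside move_swap_pte):
>
>	/* Hypothetical wrapper: true if 'entry' currently has (or is
>	 * getting) a swap cache folio. The caller must hold the PTL so
>	 * this read is ordered against swap cache updates; a false
>	 * positive only costs an -EAGAIN retry. */
>	static inline bool uffd_swap_entry_cached(struct swap_info_struct *si,
>						  swp_entry_t entry)
>	{
>		return READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE;
>	}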
>
>  mm/userfaultfd.c | 33 +++++++++++++++++++++++++++++++--
>  1 file changed, 31 insertions(+), 2 deletions(-)
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index bc473ad21202..8253978ee0fb 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -1084,8 +1084,18 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
>  			 pte_t orig_dst_pte, pte_t orig_src_pte,
>  			 pmd_t *dst_pmd, pmd_t dst_pmdval,
>  			 spinlock_t *dst_ptl, spinlock_t *src_ptl,
> -			 struct folio *src_folio)
> +			 struct folio *src_folio,
> +			 struct swap_info_struct *si, swp_entry_t entry)
>  {
> +	/*
> +	 * Check if the folio still belongs to the target swap entry after
> +	 * acquiring the lock. Folio can be freed in the swap cache while
> +	 * not locked.
> +	 */
> +	if (src_folio && unlikely(!folio_test_swapcache(src_folio) ||
> +				  entry.val != src_folio->swap.val))
> +		return -EAGAIN;
> +
>  	double_pt_lock(dst_ptl, src_ptl);
>
>  	if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
> @@ -1102,6 +1112,25 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
>  	if (src_folio) {
>  		folio_move_anon_rmap(src_folio, dst_vma);
>  		src_folio->index = linear_page_index(dst_vma, dst_addr);
> +	} else {
> +		/*
> +		 * Check if the swap entry is cached after acquiring the src_pte
> +		 * lock. Otherwise, we might miss a newly loaded swap cache folio.
> +		 *
> +		 * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
> +		 * We are trying to catch newly added swap cache, the only possible case is
> +		 * when a folio is swapped in and out again staying in swap cache, using the
> +		 * same entry before the PTE check above. The PTL is acquired and released
> +		 * twice, each time after updating the swap_map's flag. So holding
> +		 * the PTL here ensures we see the updated value. False positive is possible,
> +		 * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the
> +		 * cache, or during the tiny synchronization window between swap cache and
> +		 * swap_map, but it will be gone very quickly, worst result is retry jitters.
> +		 */

The comment above may not be the best I can think of, but I think I'm
already being too harsh. :)  It's good enough for me.  It's also great
to mention the 2nd race in the commit log, as Barry suggested.

Thank you!

Acked-by: Peter Xu <peterx@redhat.com>

> +		if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
> +			double_pt_unlock(dst_ptl, src_ptl);
> +			return -EAGAIN;
> +		}
>  	}
>
>  	orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
> @@ -1412,7 +1441,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
>  		}
>  		err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
>  				    orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
> -				    dst_ptl, src_ptl, src_folio);
> +				    dst_ptl, src_ptl, src_folio, si, entry);
>  	}
>
>  out:
> --
> 2.49.0
>

-- 
Peter Xu