From: Chris Li <chrisl@kernel.org>
Date: Thu, 5 Jun 2025 17:14:59 -0700
Subject: Re: [PATCH v4] mm: userfaultfd: fix race of userfaultfd_move and swap cache
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Barry Song <21cnbao@gmail.com>,
    Peter Xu, Suren Baghdasaryan, Andrea Arcangeli, David Hildenbrand,
    Lokesh Gidra, stable@vger.kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <20250604151038.21968-1-ryncsn@gmail.com>
References: <20250604151038.21968-1-ryncsn@gmail.com>

On Wed, Jun 4, 2025 at 8:10 AM Kairui Song wrote:
>
> From: Kairui Song
>
> On seeing a swap entry PTE, userfaultfd_move does a lockless swap
> cache lookup, and tries to move the found folio to the faulting vma.
> Currently, it relies on checking the PTE value to ensure that the moved
> folio still belongs to the src swap entry and that no new folio has
> been added to the swap cache, which turns out to be unreliable.
>
> While working on and reviewing the swap table series with Barry, the
> following existing races were observed and reproduced [1]:
>
> In the example below, move_pages_pte is moving src_pte to dst_pte,
> where src_pte is a swap entry PTE holding swap entry S1, and S1
> is not in the swap cache:
>
> CPU1                               CPU2
> userfaultfd_move
> move_pages_pte()
>  entry = pte_to_swp_entry(orig_src_pte);
>  // Here it got entry = S1
>  ... < interrupted> ...
>                                    <swapin src_pte, alloc and use folio A>
>                                    // folio A is a new allocated folio
>                                    // and get installed into src_pte
>                                    <frees swap entry S1>
>                                    // src_pte now points to folio A, S1
>                                    // has swap count == 0, it can be freed
>                                    // by folio_free_swap or swap
>                                    // allocator's reclaim.
>                                    <try to swap out another folio B>
>                                    // folio B is a folio in another VMA.
>                                    <put folio B to swap cache using S1>
>                                    // S1 is freed, folio B can use it
>                                    // for swap out with no problem.
>                                    ...
>  folio = filemap_get_folio(S1)
>  // Got folio B here !!!
>  ... < interrupted again> ...
>                                    <swapin folio B and free S1>
>                                    // Now S1 is free to be used again.
>                                    <swapout src_pte & folio A using S1>
>                                    // Now src_pte is a swap entry PTE
>                                    // holding S1 again.
>  folio_trylock(folio)
>  move_swap_pte
>   double_pt_lock
>   is_pte_pages_stable
>   // Check passed because src_pte == S1
>   folio_move_anon_rmap(...)
>   // Moved invalid folio B here !!!
>
> The race window is very short and requires multiple collisions of
> multiple rare events, so it's very unlikely to happen, but with a
> deliberately constructed reproducer and increased time window, it
> can be reproduced easily.

Thanks for the fix. Please spell out clearly what the consequence of
the race is when it triggers. I assume possible data loss? That should
be mentioned in the first few sentences of the commit message as the
user-visible impact.

> This can be fixed by checking if the folio returned by filemap is the
> valid swap cache folio after acquiring the folio lock.
>
> Another similar race is possible: filemap_get_folio may return NULL, but
> folio (A) could be swapped in and then swapped out again using the same
> swap entry after the lookup. In such a case, folio (A) may remain in the
> swap cache, so it must be moved too:
>
> CPU1                               CPU2
> userfaultfd_move
> move_pages_pte()
>  entry = pte_to_swp_entry(orig_src_pte);
>  // Here it got entry = S1, and S1 is not in swap cache
>  folio = filemap_get_folio(S1)
>  // Got NULL
>  ... < interrupted again> ...
>                                    <swapin folio A and swapout using S1>
> move_swap_pte
>  double_pt_lock
>  is_pte_pages_stable
>  // Check passed because src_pte == S1
>  folio_move_anon_rmap(...)
>  // folio A is ignored !!!
>
> Fix this by checking the swap cache again after acquiring the src_pte
> lock. And to avoid the filemap overhead, we check swap_map directly [2].
>
> The SWP_SYNCHRONOUS_IO path does make the problem more complex, but so
> far we don't need to worry about that, since folios can only be exposed
> to the swap cache in the swap out path, and this is covered in this
> patch by checking the swap cache again after acquiring the src_pte lock.
>
> Testing with a simple C program that allocates and moves several GB of
> memory did not show any observable performance change.
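Out of curiosity, I tried to picture that test. Something along these
lines would be my guess (purely hypothetical, not the actual program
from the patch; error handling trimmed; needs a kernel with
UFFD_FEATURE_MOVE, i.e. 6.8+):

/*
 * Hypothetical sketch: populate a large anonymous range and move it
 * to a userfaultfd-registered destination with UFFDIO_MOVE.
 */
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#define LEN (1UL << 30) /* 1 GB per round */

int main(void)
{
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
        struct uffdio_api api = { .api = UFFD_API, .features = UFFD_FEATURE_MOVE };
        char *src = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *dst = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct uffdio_register reg = {
                .range = { .start = (unsigned long)dst, .len = LEN },
                .mode = UFFDIO_REGISTER_MODE_MISSING,
        };
        struct uffdio_move mv = {
                .src = (unsigned long)src,
                .dst = (unsigned long)dst,
                .len = LEN,
        };

        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
                return 1;
        memset(src, 1, LEN);            /* fault in the src pages */
        /* UFFDIO_MOVE requires dst to be registered with userfaultfd */
        if (ioctl(uffd, UFFDIO_REGISTER, &reg))
                return 1;
        /* move the whole range; the kernel reports bytes moved in mv.move */
        return ioctl(uffd, UFFDIO_MOVE, &mv) ? 1 : 0;
}

Timing the UFFDIO_MOVE ioctl in a loop before and after this patch
should confirm the "no observable change" claim.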
>
> Cc: stable@vger.kernel.org
> Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
> Closes: https://lore.kernel.org/linux-mm/CAMgjq7B1K=6OOrK2OUZ0-tqCzi+EJt+2_K97TPGoSt=9+JwP7Q@mail.gmail.com/ [1]
> Link: https://lore.kernel.org/all/CAGsJ_4yJhJBo16XhiC-nUzSheyX-V3-nFE+tAi=8Y560K8eT=A@mail.gmail.com/ [2]
> Signed-off-by: Kairui Song
> Reviewed-by: Lokesh Gidra
>
> ---
>
> V1: https://lore.kernel.org/linux-mm/20250530201710.81365-1-ryncsn@gmail.com/
> Changes:
> - Check swap_map instead of doing a filemap lookup after acquiring the
>   PTE lock to minimize critical section overhead [ Barry Song, Lokesh Gidra ]
>
> V2: https://lore.kernel.org/linux-mm/20250601200108.23186-1-ryncsn@gmail.com/
> Changes:
> - Move the folio and swap check inside move_swap_pte to avoid skipping
>   the check and potential overhead [ Lokesh Gidra ]
> - Add a READ_ONCE for the swap_map read to ensure it reads an
>   up-to-date value.
>
> V3: https://lore.kernel.org/all/20250602181419.20478-1-ryncsn@gmail.com/
> Changes:
> - Add more comments and more context in commit message.
>
>  mm/userfaultfd.c | 33 +++++++++++++++++++++++++++++++--
>  1 file changed, 31 insertions(+), 2 deletions(-)
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index bc473ad21202..8253978ee0fb 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -1084,8 +1084,18 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
>                          pte_t orig_dst_pte, pte_t orig_src_pte,
>                          pmd_t *dst_pmd, pmd_t dst_pmdval,
>                          spinlock_t *dst_ptl, spinlock_t *src_ptl,
> -                        struct folio *src_folio)
> +                        struct folio *src_folio,
> +                        struct swap_info_struct *si, swp_entry_t entry)
>  {
> +       /*
> +        * Check if the folio still belongs to the target swap entry after
> +        * acquiring the lock. Folio can be freed in the swap cache while
> +        * not locked.
> +        */
> +       if (src_folio && unlikely(!folio_test_swapcache(src_folio) ||
> +                                 entry.val != src_folio->swap.val))
> +               return -EAGAIN;
> +
>         double_pt_lock(dst_ptl, src_ptl);
>
>         if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
> @@ -1102,6 +1112,25 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
>         if (src_folio) {
>                 folio_move_anon_rmap(src_folio, dst_vma);
>                 src_folio->index = linear_page_index(dst_vma, dst_addr);
> +       } else {
> +               /*
> +                * Check if the swap entry is cached after acquiring the src_pte
> +                * lock. Otherwise, we might miss a newly loaded swap cache folio.
> +                *
> +                * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
> +                * We are trying to catch newly added swap cache, the only possible case is
> +                * when a folio is swapped in and out again staying in swap cache, using the
> +                * same entry before the PTE check above. The PTL is acquired and released
> +                * twice, each time after updating the swap_map's flag. So holding
> +                * the PTL here ensures we see the updated value. False positive is possible,
> +                * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the
> +                * cache, or during the tiny synchronization window between swap cache and
> +                * swap_map, but it will be gone very quickly, worst result is retry jitters.
> +                */
> +               if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {

Nit: You can use "} else if {" to save one level of indentation.

Reviewed-by: Chris Li <chrisl@kernel.org>

Chris
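PS: to be concrete about the nit, I mean collapsing the else into the
check, roughly like below (untested sketch; I'm assuming the branch
body truncated in the quote above is the usual unlock-and-return
-EAGAIN):

        if (src_folio) {
                folio_move_anon_rmap(src_folio, dst_vma);
                src_folio->index = linear_page_index(dst_vma, dst_addr);
        } else if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
                /* same comment block as in the patch, one indent level up */
                double_pt_unlock(dst_ptl, src_ptl);
                return -EAGAIN;
        }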