From: Suren Baghdasaryan
Date: Wed, 4 Jun 2025 08:39:32 -0700
Subject: Re: [PATCH v4] mm: userfaultfd: fix race of userfaultfd_move and swap cache
To: Peter Xu
Cc: Kairui Song, linux-mm@kvack.org, Andrew Morton, Barry Song <21cnbao@gmail.com>, Andrea Arcangeli, David Hildenbrand, Lokesh Gidra, stable@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20250604151038.21968-1-ryncsn@gmail.com>

On Wed, Jun 4, 2025 at 8:34 AM Peter Xu wrote:
>
> On Wed, Jun 04, 2025 at 11:10:38PM +0800, Kairui Song wrote:
> > From: Kairui Song
> >
> > On seeing a swap entry PTE, userfaultfd_move does a lockless swap
> > cache lookup, and tries to move the found folio to the faulting vma.
> > Currently, it relies on checking the PTE value to ensure that the moved
> > folio still belongs to the src swap entry and that no new folio has
> > been added to the swap cache, which turns out to be unreliable.
> >
> > While working on and reviewing the swap table series with Barry, the
> > following existing races were observed and reproduced [1]:
> >
> > In the example below, move_pages_pte is moving src_pte to dst_pte,
> > where src_pte is a swap entry PTE holding swap entry S1, and S1
> > is not in the swap cache:
> >
> > CPU1                               CPU2
> > userfaultfd_move
> >   move_pages_pte()
> >     entry = pte_to_swp_entry(orig_src_pte);
> >     // Here it got entry = S1
> >   ... < interrupted> ...
> >                                    <swapin src_pte, alloc and use folio A>
> >                                    // folio A is a newly allocated folio
> >                                    // and gets installed into src_pte
> >                                    <frees swap entry S1>
> >                                    // src_pte now points to folio A, S1
> >                                    // has swap count == 0, it can be freed
> >                                    // by folio_free_swap or the swap
> >                                    // allocator's reclaim.
> >                                    <try to swap out another folio B>
> >                                    // folio B is a folio in another VMA.
> >                                    <put folio B into swap cache using S1>
> >                                    // S1 is freed, folio B can use it
> >                                    // for swap out with no problem.
> >   ...
> >     folio = filemap_get_folio(S1)
> >     // Got folio B here !!!
> >   ... < interrupted again> ...
> >                                    <swapin folio B and free S1>
> >                                    // Now S1 is free to be used again.
> >                                    <swapout src_pte & folio A using S1>
> >                                    // Now src_pte is a swap entry PTE
> >                                    // holding S1 again.
> >   folio_trylock(folio)
> >   move_swap_pte
> >     double_pt_lock
> >     is_pte_pages_stable
> >     // Check passed because src_pte == S1
> >     folio_move_anon_rmap(...)
> >     // Moved invalid folio B here !!!
> >
> > The race window is very short and requires multiple collisions of
> > multiple rare events, so it's very unlikely to happen, but with a
> > deliberately constructed reproducer and an increased time window, it
> > can be reproduced easily.
> >
> > This can be fixed by checking whether the folio returned by filemap is
> > still the valid swap cache folio for the entry after acquiring the
> > folio lock.
> >
> > Another similar race is possible: filemap_get_folio may return NULL, but
> > folio (A) could be swapped in and then swapped out again using the same
> > swap entry after the lookup. In such a case, folio (A) may remain in the
> > swap cache, so it must be moved too:
> >
> > CPU1                               CPU2
> > userfaultfd_move
> >   move_pages_pte()
> >     entry = pte_to_swp_entry(orig_src_pte);
> >     // Here it got entry = S1, and S1 is not in swap cache
> >     folio = filemap_get_folio(S1)
> >     // Got NULL
> >   ... < interrupted again> ...
> >                                    <swapin folio A and free S1>
> >                                    <swapout src_pte & folio A using S1>
> >   move_swap_pte
> >     double_pt_lock
> >     is_pte_pages_stable
> >     // Check passed because src_pte == S1
> >     folio_move_anon_rmap(...)
> >     // folio A is ignored !!!
> >
> > Fix this by checking the swap cache again after acquiring the src_pte
> > lock. To avoid the filemap overhead, check swap_map directly [2].
> >
> > The SWP_SYNCHRONOUS_IO path does make the problem more complex, but so
> > far we don't need to worry about that, since folios can only be exposed
> > to the swap cache in the swap-out path, and this is covered in this
> > patch by checking the swap cache again after acquiring the src_pte lock.
> >
> > Testing with a simple C program that allocates and moves several GB of
> > memory did not show any observable performance change.
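
The test program itself isn't included in the thread. A minimal sketch of
what such an allocate-and-move benchmark could look like (hypothetical,
not the author's actual program; assumes a v6.8+ kernel with UFFDIO_MOVE
support and permission to create a userfaultfd, i.e. CAP_SYS_PTRACE or
vm.unprivileged_userfaultfd=1):

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 4UL << 30; /* move 4 GiB in one call */
	struct timespec t0, t1;

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	if (uffd < 0) { perror("userfaultfd"); return 1; }

	/* Negotiate the API and request the MOVE feature. */
	struct uffdio_api api = { .api = UFFD_API, .features = UFFD_FEATURE_MOVE };
	if (ioctl(uffd, UFFDIO_API, &api) < 0) { perror("UFFDIO_API"); return 1; }

	char *src = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *dst = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED || dst == MAP_FAILED) { perror("mmap"); return 1; }

	/* UFFDIO_MOVE requires the destination range to be registered. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg) < 0) { perror("UFFDIO_REGISTER"); return 1; }

	memset(src, 0xaa, len); /* fault in the source pages */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	struct uffdio_move mv = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = len,
		.mode = 0,
	};
	if (ioctl(uffd, UFFDIO_MOVE, &mv) < 0) { perror("UFFDIO_MOVE"); return 1; }
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("moved %lld bytes in %.3f s\n", (long long)mv.move,
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}

Timing a single large UFFDIO_MOVE keeps the measurement dominated by the
per-PTE move cost, which is where the extra swap_map check would show up
if it had a measurable overhead.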
> >
> > Cc: stable@vger.kernel.org
> > Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
> > Closes: https://lore.kernel.org/linux-mm/CAMgjq7B1K=6OOrK2OUZ0-tqCzi+EJt+2_K97TPGoSt=9+JwP7Q@mail.gmail.com/ [1]
> > Link: https://lore.kernel.org/all/CAGsJ_4yJhJBo16XhiC-nUzSheyX-V3-nFE+tAi=8Y560K8eT=A@mail.gmail.com/ [2]
> > Signed-off-by: Kairui Song
> > Reviewed-by: Lokesh Gidra

Very interesting races. Thanks for the fix!

Reviewed-by: Suren Baghdasaryan

> >
> > ---
> >
> > V1: https://lore.kernel.org/linux-mm/20250530201710.81365-1-ryncsn@gmail.com/
> > Changes:
> > - Check swap_map instead of doing a filemap lookup after acquiring the
> >   PTE lock to minimize critical section overhead [ Barry Song, Lokesh Gidra ]
> >
> > V2: https://lore.kernel.org/linux-mm/20250601200108.23186-1-ryncsn@gmail.com/
> > Changes:
> > - Move the folio and swap check inside move_swap_pte to avoid skipping
> >   the check and potential overhead [ Lokesh Gidra ]
> > - Add a READ_ONCE for the swap_map read to ensure it reads an up-to-date
> >   value.
> >
> > V3: https://lore.kernel.org/all/20250602181419.20478-1-ryncsn@gmail.com/
> > Changes:
> > - Add more comments and more context in the commit message.
> >
> >  mm/userfaultfd.c | 33 +++++++++++++++++++++++++++++++--
> >  1 file changed, 31 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index bc473ad21202..8253978ee0fb 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -1084,8 +1084,18 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
> >                       pte_t orig_dst_pte, pte_t orig_src_pte,
> >                       pmd_t *dst_pmd, pmd_t dst_pmdval,
> >                       spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > -                     struct folio *src_folio)
> > +                     struct folio *src_folio,
> > +                     struct swap_info_struct *si, swp_entry_t entry)
> >  {
> > +       /*
> > +        * Check if the folio still belongs to the target swap entry after
> > +        * acquiring the lock. Folio can be freed in the swap cache while
> > +        * not locked.
> > +        */
> > +       if (src_folio && unlikely(!folio_test_swapcache(src_folio) ||
> > +                                 entry.val != src_folio->swap.val))
> > +               return -EAGAIN;
> > +
> >         double_pt_lock(dst_ptl, src_ptl);
> >
> >         if (!is_pte_pages_stable(dst_pte, src_pte, orig_dst_pte, orig_src_pte,
> > @@ -1102,6 +1112,25 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
> >         if (src_folio) {
> >                 folio_move_anon_rmap(src_folio, dst_vma);
> >                 src_folio->index = linear_page_index(dst_vma, dst_addr);
> > +       } else {
> > +               /*
> > +                * Check if the swap entry is cached after acquiring the src_pte
> > +                * lock. Otherwise, we might miss a newly loaded swap cache folio.
> > +                *
> > +                * Check swap_map directly to minimize overhead; READ_ONCE is sufficient.
> > +                * We are trying to catch newly added swap cache; the only possible case
> > +                * is when a folio is swapped in and out again staying in swap cache,
> > +                * using the same entry before the PTE check above. The PTL is acquired
> > +                * and released twice, each time after updating the swap_map's flag, so
> > +                * holding the PTL here ensures we see the updated value. A false
> > +                * positive is possible, e.g. SWP_SYNCHRONOUS_IO swapin may set the flag
> > +                * without touching the cache, or during the tiny synchronization window
> > +                * between swap cache and swap_map, but it will be gone very quickly;
> > +                * the worst result is retry jitters.
> > +                */
>
> The comment above may not be the best I can think of, but I think I'm
> already too harsh. :)  That's good enough to me.  It's also great to
> mention the 2nd race too as Barry suggested in the commit log.
>
> Thank you!
>
> Acked-by: Peter Xu
>
> > +               if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
> > +                       double_pt_unlock(dst_ptl, src_ptl);
> > +                       return -EAGAIN;
> > +               }
> >         }
> >
> >         orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
> > @@ -1412,7 +1441,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
> >                 }
> >                 err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
> >                                     orig_dst_pte, orig_src_pte, dst_pmd, dst_pmdval,
> > -                                   dst_ptl, src_ptl, src_folio);
> > +                                   dst_ptl, src_ptl, src_folio, si, entry);
> >         }
> >
> > out:
> > --
> > 2.49.0
> >
>
> --
> Peter Xu
>
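
The fix above is an instance of a general pattern: a lockless lookup
result must be revalidated after the relevant lock is taken, because the
entry may have been freed and reused (ABA-style) in between. A toy
userspace model of that pattern — illustrative names only, none of this
is a kernel API:

#include <pthread.h>
#include <stdio.h>

struct slot {
	pthread_mutex_t lock;
	unsigned long key;	/* plays the role of the swap entry */
	unsigned long data;	/* plays the role of the folio */
};

static struct slot cache = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.key = 1,
	.data = 100,
};

/* Return data for @key, or 0 to signal "retry" (the -EAGAIN path). */
static unsigned long lookup_then_act(unsigned long key)
{
	/* Lockless peek, like the filemap_get_folio() call before the PTL. */
	unsigned long data = __atomic_load_n(&cache.data, __ATOMIC_RELAXED);

	pthread_mutex_lock(&cache.lock);
	if (cache.key != key || cache.data != data) {
		/* Entry was reused under us: back off and let the caller retry. */
		pthread_mutex_unlock(&cache.lock);
		return 0;
	}
	/* Only here is it safe to treat data as "what key maps to". */
	pthread_mutex_unlock(&cache.lock);
	return data;
}

int main(void)
{
	printf("lookup(1) -> %lu\n", lookup_then_act(1));
	return 0;
}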