From mboxrd@z Thu Jan 1 00:00:00 1970
From: Zhaoyang Huang <huangzhaoyang@gmail.com>
Date: Sun, 24 Mar 2024 19:14:27 +0800
Subject: Re: summarize all information again at bottom//reply: reply: [PATCH] mm: fix a race scenario in folio_isolate_lru
To: Matthew Wilcox
Cc: 黄朝阳 (Zhaoyang Huang), Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, 康纪滨 (Steve Kang)
Content-Type: text/plain; charset="UTF-8"
On Fri, Mar 22, 2024 at 11:20 AM Matthew Wilcox wrote:
>
> On Fri, Mar 22, 2024 at 09:52:36AM +0800, Zhaoyang Huang wrote:
> > Thanks for the comments. The typo is fixed and the timing sequence is
> > updated below, amending the possible preemption points so that the
> > refcnt makes sense.
> >
> > 0. Thread_bad gets the folio by find_get_entry and is preempted before
> > taking the refcnt (could be the second-round scan of
> > truncate_inode_pages_range)
> > refcnt == 1 (page_cache), PG_lru == true, PG_lock == false
> >     find_get_entry
> >         folio = xas_find
> >
> >         folio_try_get_rcu
> >
> > 1. Thread_filemap gets the folio via
> > filemap_map_pages->next_uptodate_folio->xas_next_entry and is preempted
> > refcnt == 1 (page_cache), PG_lru == true, PG_lock == false
> >     filemap_map_pages
> >         next_uptodate_folio
> >             xas_next_entry
> >
> >             folio_try_get_rcu
> >
> > 2. Thread_truncate gets the folio via
> > truncate_inode_pages_range->find_lock_entries
> > refcnt == 2 (page_cache, fbatch_truncate), PG_lru == true, PG_lock == true
> >
> > 3. Thread_truncate proceeds to truncate_cleanup_folio
> > refcnt == 2 (page_cache, fbatch_truncate), PG_lru == true, PG_lock == true
> >
> > 4. Thread_truncate proceeds to delete_from_page_cache_batch
> > refcnt == 1 (fbatch_truncate), PG_lru == true, PG_lock == true
> >
> > 4.1 folio_unlock
> > refcnt == 1 (fbatch_truncate), PG_lru == true, PG_lock == false
>
> OK, so by the time we get to folio_unlock(), the folio has been removed
> from the i_pages xarray.
>
> > 5. Thread_filemap schedules back from '1', proceeds to set up a pte
> > and ends with folio->_mapcnt == 0 & folio->refcnt += 1
> > refcnt == 1->2(+fbatch_filemap)->3->2(pte, fbatch_truncate),
> > PG_lru == true, PG_lock == true->false
>
> This line succeeds (in next_uptodate_folio):
>
>         if (!folio_try_get_rcu(folio))
>                 continue;
>
> but then this fails:
>
>         if (unlikely(folio != xas_reload(xas)))
>                 goto skip;
> skip:
>         folio_put(folio);
>
> because xas_reload() will return NULL due to the folio being deleted
> in step 4.  So we never get to the point where we set up a PTE.
>
> There should be no way to create a new PTE for a folio which has been
> removed from the page cache.  Bugs happen, of course, but I don't see
> one yet.
>
> > 6. Thread_madv clears the folio's PG_lru via
> > madvise_xxx_pte_range->folio_isolate_lru->folio_test_clear_lru
> > refcnt == 2 (pte, fbatch_truncate), PG_lru == false, PG_lock == false
> >
> > 7. Thread_truncate calls folio_batch_release and fails to free the
> > folio as the refcnt does not reach 0
> > refcnt == 1 (pte), PG_lru == false, PG_lock == false
> > ******** the folio becomes an orphan here: it is no longer in the
> > page cache but still lives in the task's VM ********
> >
> > 8. Thread_bad schedules back from '0' and the folio gets collected
> > into fbatch_bad
> > refcnt == 2 (pte, fbatch_bad), PG_lru == false, PG_lock == true
> >
> > 9. Thread_bad wrongly drops one refcnt in filemap_remove_folio,
> > taking that refcnt for the page-cache reference
> > refcnt == 1 (fbatch_bad), PG_lru == false, PG_lock == true->false
> >     truncate_inode_folio
> >         filemap_remove_folio
> >             filemap_free_folio
> > ****** refcnt wrongly decreased here, mistaken for the page-cache
> > reference ******
> >
> > 10. Thread_bad calls release_pages(fbatch_bad) and has the folio
> > trigger the bug
> >     release_pages
> >         folio_put_testzero == true
> >         folio_test_lru == false
> >         list_add(folio->lru, pages_to_free)

OK, it seems madvise is robust enough to leave no BUGs there. Below I
present another two scenarios that reach folio_isolate_lru by ways other
than a PTE. Besides, scenario 2 reminds me of a previous bug I reported,
where find_get_entry entered a livelock: the folio's refcnt == 0 but it
remained in the xarray, which made the reset->retry loop spin forever. I
will reply in that thread with more details.

Scenario 1:
0. Thread_bad gets the folio by find_get_entry and is preempted before
folio_lock (could be the second-round scan of truncate_inode_pages_range)
refcnt == 2 (page_cache, fbatch_bad), PG_lru == true, PG_lock == false
    folio = find_get_entry
    folio_try_get_rcu
    folio_trylock

1. Thread_truncate gets the folio via
truncate_inode_pages_range->find_lock_entries
refcnt == 3 (page_cache, fbatch_bad, fbatch_truncate), PG_lru == true,
PG_lock == true

2. Thread_truncate proceeds to truncate_cleanup_folio
refcnt == 3 (page_cache, fbatch_bad, fbatch_truncate), PG_lru == true,
PG_lock == true

3. Thread_truncate proceeds to delete_from_page_cache_batch
refcnt == 2 (fbatch_bad, fbatch_truncate), PG_lru == true, PG_lock == true

4. folio_unlock
refcnt == 2 (fbatch_bad, fbatch_truncate), PG_lru == true, PG_lock == false

5. Thread_bad schedules back from step 0 and wrongly drops one refcnt in
truncate_inode_folio->filemap_remove_folio, taking that refcnt for the
page-cache reference
refcnt == 1' (fbatch_truncate), PG_lru == true, PG_lock == true
    folio = find_get_entry
    folio_try_get_rcu
    folio_trylock
    truncate_inode_folio
        filemap_remove_folio

6. Thread_isolate takes one refcnt and calls folio_isolate_lru (could be
any process)
refcnt == 2' (fbatch_truncate, thread_isolate), PG_lru == true,
PG_lock == true

7. Thread_isolate proceeds to clear PG_lru and is preempted before folio_get
refcnt == 2' (fbatch_truncate, thread_isolate), PG_lru == false,
PG_lock == true
    folio_test_clear_lru

    folio_get

8. Thread_bad schedules back from step 5 and proceeds to drop one refcnt
refcnt == 1' (thread_isolate), PG_lru == false, PG_lock == true
    folio = find_get_entry
    folio_try_get_rcu
    folio_trylock
    truncate_inode_folio
        filemap_remove_folio
    folio_unlock

9. Thread_truncate schedules back from step 3 and proceeds to drop one
refcnt via release_pages, hitting the BUG
refcnt == 0' (thread_isolate), PG_lru == false, PG_lock == false

Scenario 2:
0. Thread_bad gets the folio by find_get_entry and is preempted before
folio_lock (could be the second-round scan of truncate_inode_pages_range)
refcnt == 2 (page_cache, fbatch_bad), PG_lru == true, PG_lock == false
    folio = find_get_entry
    folio_try_get_rcu
    folio_trylock

1. Thread_readahead removes the folio from the page cache and drops one
refcnt via filemap_remove_folio (getting rid of the folios which failed
to launch IO during readahead)
refcnt == 1 (fbatch_bad), PG_lru == true, PG_lock == true

2. folio_unlock
refcnt == 1 (fbatch_bad), PG_lru == true, PG_lock == false

3. Thread_isolate takes one refcnt and calls folio_isolate_lru (could be
any process)
refcnt == 2 (fbatch_bad, thread_isolate), PG_lru == true, PG_lock == false

4. Thread_bad schedules back from step 0 and wrongly drops one refcnt in
truncate_inode_folio->filemap_remove_folio, taking that refcnt for the
page-cache reference
refcnt == 1' (thread_isolate), PG_lru == true, PG_lock == false
    find_get_entries
        folio = find_get_entry
        folio_try_get_rcu
        folio_lock
        <no "mapping != mapping" recheck here, unlike find_lock_entries>
        truncate_inode_folio
            filemap_remove_folio

5. Thread_isolate proceeds to clear PG_lru and is preempted before folio_get
refcnt == 1' (thread_isolate), PG_lru == false, PG_lock == false
    folio_test_clear_lru

    folio_get

6. Thread_bad schedules back from step 4 and proceeds to drop one refcnt
via release_pages, hitting the BUG
refcnt == 0' (thread_isolate), PG_lru == false, PG_lock == false