From: "Huang, Ying"
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Chris Li, Minchan Kim, Yu Zhao, Barry Song, SeongJae Park, Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed, David Hildenbrand, stable@vger.kernel.org, linux-kernel@vger.kernel.org, Aaron Lu
Subject: Re: [PATCH v3] mm/swap: fix race when skipping swapcache
In-Reply-To: <87jzn1s2xe.fsf@yhuang6-desk2.ccr.corp.intel.com> (Ying Huang's message of "Mon, 19 Feb 2024 08:39:09 +0800")
References: <20240216095105.14502-1-ryncsn@gmail.com> <87wmr2rx4a.fsf@yhuang6-desk2.ccr.corp.intel.com> <87jzn1s2xe.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 19 Feb 2024 10:33:51 +0800
Message-ID: <87frxprxm8.fsf@yhuang6-desk2.ccr.corp.intel.com>
"Huang, Ying" writes:

> Kairui Song writes:
>
>> On Sun, Feb 18, 2024 at 4:34 PM Huang, Ying wrote:
>>>
>>> Kairui Song writes:
>>>
>>> > From: Kairui Song
>>> >
>>> > When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
>>> > swapin the same entry at the same time, they get different pages (A, B).
>>> > Before one thread (T0) finishes the swapin and installs page (A)
>>> > to the PTE, another thread (T1) could finish swapin of page (B),
>>> > swap_free the entry, then swap out the possibly modified page
>>> > reusing the same entry. It breaks the pte_same check in (T0) because
>>> > the PTE value is unchanged, causing an ABA problem. Thread (T0) will
>>> > install a stale page (A) into the PTE and cause data corruption.
>>> >
>>> > One possible callstack is like this:
>>> >
>>> > CPU0                                 CPU1
>>> > ----                                 ----
>>> > do_swap_page()                       do_swap_page() with same entry
>>> > <direct swapin path>                 <direct swapin path>
>>> > <alloc page A>                       <alloc page B>
>>> > swap_read_folio() <- read to page A  swap_read_folio() <- read to page B
>>> > <slow on later locks or interrupt>   <finished swapin first>
>>> > ...                                  set_pte_at()
>>> >                                      swap_free() <- entry is free
>>> >                                      <write to page B, now page A stale>
>>> >                                      <swap out page B to same swap entry>
>>> > pte_same() <- Check passes, PTE seems
>>> >               unchanged, but page A
>>> >               is stale!
>>> > swap_free() <- page B content lost!
>>> > set_pte_at() <- stale page A installed!
>>> >
>>> > Besides, for ZRAM, swap_free() allows the swap device to discard
>>> > the entry content, so even if page (B) is not modified, if
>>> > swap_read_folio() on CPU0 happens later than swap_free() on CPU1,
>>> > it may also cause data loss.
>>> >
>>> > To fix this, reuse swapcache_prepare, which pins the swap entry using
>>> > the cache flag and allows only one thread to pin it. Release the pin
>>> > after the PT is unlocked. Racers will simply wait, since it's a rare and
>>> > very short event. A schedule() call is added to avoid wasting too much
>>> > CPU or adding too much noise to perf statistics.
>>> >
>>> > Other methods like increasing the swap count don't seem to be a good
>>> > idea after some tests; they would cause racers to fall back to using the
>>> > swap cache again. Parallel swapin using different methods leads to
>>> > a much more complex scenario.
>>>
>>> The swap entry may be put in the swap cache by some parallel code path
>>> anyway, so we always need to consider that when reasoning about the code.
>>>
>>> > Reproducer:
>>> >
>>> > This race issue can be triggered easily using a well-constructed
>>> > reproducer and a patched brd (with a delay in the read path) [1]:
>>> >
>>> > With the latest 6.8 mainline, data loss caused by the race can be
>>> > observed easily:
>>> > $ gcc -g -lpthread test-thread-swap-race.c && ./a.out
>>> > Polulating 32MB of memory region...
>>> > Keep swapping out...
>>> > Starting round 0...
>>> > Spawning 65536 workers...
>>> > 32746 workers spawned, wait for done...
>>> > Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
>>> > Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
>>> > Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
>>> > Round 0 Failed, 15 data loss!
>>> >
>>> > This reproducer spawns multiple threads sharing the same memory region
>>> > using a small swap device. Every two threads update mapped pages one by
>>> > one in opposite directions, trying to create a race, with one dedicated
>>> > thread that keeps swapping the data out using madvise.
>>> >
>>> > The reproducer achieved a reproduction rate of about once every 5
>>> > minutes, so the race should be entirely possible in production.
>>> >
>>> > After this patch, I ran the reproducer for a few hundred rounds and
>>> > observed no data loss.
>>> >
>>> > Performance overhead is minimal. Microbenchmark: swapping in 10G from a
>>> > 32G zram:
>>> >
>>> > Before:     10934698 us
>>> > After:      11157121 us
>>> > Non-direct: 13155355 us (dropping the SWP_SYNCHRONOUS_IO flag)
>>> >
>>> > Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
>>> > Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
>>> > Reported-by: "Huang, Ying"
>>> > Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.com/
>>> > Signed-off-by: Kairui Song
>>> > Cc: stable@vger.kernel.org
>>> >
>>> > ---
>>> > Update from V2:
>>> > - Add a schedule() if raced, to prevent repeated page faults wasting CPU
>>> >   and adding noise to perf statistics.
>>> > - Use a bool to state the special case instead of reusing existing
>>> >   variables, fixing error handling [Minchan Kim].
>>> >
>>> > V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
>>> >
>>> > Update from V1:
>>> > - Add some words on the ZRAM case; it discards swap content on swap_free,
>>> >   so the race window is a bit different but the cure is the same. [Barry Song]
>>> > - Update comments to make them cleaner. [Huang, Ying]
>>> > - Add a function placeholder to fix the CONFIG_SWAP=n build. [SeongJae Park]
>>> > - Update the commit message and summary; refer to SWP_SYNCHRONOUS_IO
>>> >   instead of "direct swapin path". [Yu Zhao]
>>> > - Update the commit message.
>>> > - Collect Reviews and Acks.
>>> >
>>> > V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
>>> >
>>> >  include/linux/swap.h | 5 +++++
>>> >  mm/memory.c          | 20 ++++++++++++++++++++
>>> >  mm/swap.h            | 5 +++++
>>> >  mm/swapfile.c        | 13 +++++++++++++
>>> >  4 files changed, 43 insertions(+)
>>> >
>>> > diff --git a/include/linux/swap.h b/include/linux/swap.h
>>> > index 4db00ddad261..8d28f6091a32 100644
>>> > --- a/include/linux/swap.h
>>> > +++ b/include/linux/swap.h
>>> > @@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
>>> >  	return 0;
>>> >  }
>>> >
>>> > +static inline int swapcache_prepare(swp_entry_t swp)
>>> > +{
>>> > +	return 0;
>>> > +}
>>> > +
>>> >  static inline void swap_free(swp_entry_t swp)
>>> >  {
>>> >  }
>>> > diff --git a/mm/memory.c b/mm/memory.c
>>> > index 7e1f4849463a..7059230d0a54 100644
>>> > --- a/mm/memory.c
>>> > +++ b/mm/memory.c
>>> > @@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>> >  	struct page *page;
>>> >  	struct swap_info_struct *si = NULL;
>>> >  	rmap_t rmap_flags = RMAP_NONE;
>>> > +	bool need_clear_cache = false;
>>> >  	bool exclusive = false;
>>> >  	swp_entry_t entry;
>>> >  	pte_t pte;
>>> > @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>> >  	if (!folio) {
>>> >  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>>> >  		    __swap_count(entry) == 1) {
>>> > +			/*
>>> > +			 * Prevent parallel swapin from proceeding with
>>> > +			 * the cache flag. Otherwise, another thread may
>>> > +			 * finish swapin first, free the entry, and swapout
>>> > +			 * reusing the same entry. It's undetectable as
>>> > +			 * pte_same() returns true due to entry reuse.
>>> > +			 */
>>> > +			if (swapcache_prepare(entry)) {
>>> > +				/* Relax a bit to prevent rapid repeated page faults */
>>> > +				schedule();
>>>
>>> The current task may be chosen again in schedule(). So, I think we
>>> should use cond_resched() here.
>>>
>> I think if we are worried about the current task being chosen again, we
>> can use schedule_timeout_uninterruptible(1) here. Isn't cond_resched()
>> still __schedule(), and it can even get omitted, so it should be
>> "weaker", IIUC?
>
> schedule_timeout_uninterruptible(1) will introduce 1ms latency for the
> second task. That may kill the performance of some workloads.

I just found that the cond_resched() in __read_swap_cache_async() was
changed to schedule_timeout_uninterruptible(1) to fix a livelock.
Details are in the description of commit 029c4628b2eb ("mm: swap: get
rid of livelock in swapin readahead"). I think a similar issue may
happen here too. So, we must use schedule_timeout_uninterruptible(1)
here until some other better idea becomes available.

--
Best Regards,
Huang, Ying