Date: Wed, 14 Feb 2024 16:44:14 -0800
From: Minchan Kim
To: Kairui Song
Cc: Chris Li, "Huang, Ying", Barry Song <21cnbao@gmail.com>, linux-mm@kvack.org,
	Andrew Morton, Yu Zhao, Barry Song, SeongJae Park, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
	David Hildenbrand, stable@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/swap: fix race when skipping swapcache
References: <87eddnxy47.fsf@yhuang6-desk2.ccr.corp.intel.com>
Hi Kairui,

On Tue, Feb 13, 2024 at 03:53:09AM +0800, Kairui Song wrote:
> On Fri, Feb 9, 2024 at 1:30 PM Kairui Song wrote:
> >
> > On Fri, Feb 9, 2024 at 3:42 AM Chris Li wrote:
> > >
> > > On Thu, Feb 8, 2024 at 11:01 AM Kairui Song wrote:
> > > >
> > > > On Thu, Feb 8, 2024 at 2:36 PM Huang, Ying wrote:
> > > > >
> > > > > Kairui Song writes:
> > > > >
> > > > > > On Thu, Feb 8, 2024 at 2:31 AM Minchan Kim wrote:
> > > > > >>
> > > > > >> On Wed, Feb 07, 2024 at 12:06:15PM +0800, Kairui Song wrote:
> > > > >
> > > > > [snip]
> > > > >
> > > > > >> >
> > > > > >> > So I think the thing is, it's getting complex because this patch
> > > > > >> > wanted to make it simple and just reuse the swap cache flags.
> > > > > >>
> > > > > >> I agree that a simple fix is important at this point.
> > > > > >>
> > > > > >> Considering your description, here's my understanding of the other
> > > > > >> idea: other methods, such as increasing the swap count, haven't
> > > > > >> proven effective in your tests. That approach risks forcing racers
> > > > > >> to rely on the swap cache again, with a potential performance loss
> > > > > >> in the race scenario.
> > > > > >>
> > > > > >> While I understand that simplicity is important, and the performance
> > > > > >> loss in this case may be infrequent, I believe the swap_count
> > > > > >> approach could be a suitable solution. What do you think?
> > > > > >
> > > > > > Hi Minchan,
> > > > > >
> > > > > > Yes, my main concern was about simplicity and performance.
> > > > > >
> > > > > > Increasing the swap_count here will also race with another process
> > > > > > releasing the swap_count to 0 (the swapcache was able to sync callers
> > > > > > in other call paths, but we skipped the swapcache here).
> > > > >
> > > > > What is the consequence of the race condition?
> > > >
> > > > Hi Ying,
> > > >
> > > > It will increase the swap count of an already freed entry. This races
> > > > with the swap free/alloc logic that checks whether count ==
> > > > SWAP_HAS_CACHE or sets the count to zero, or with repeated frees of
> > > > an entry, all resulting in random corruption of the swap map. This
> > > > happens a lot during stress testing.
> > >
> > > In theory, the original code before your patch can get into a
> > > situation similar to the one you are trying to avoid.
> > > CPU1 enters do_swap_page() with entry swap count == 1 and continues
> > > handling the swap fault without the swap cache. Then some operation
> > > bumps up the swap entry count, and CPU2 enters do_swap_page() racing
> > > with CPU1 with swap count == 2. CPU2 will need to go through the swap
> > > cache case. We still need to handle this situation correctly.
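(For readers following the thread, my paraphrase of the gate in
do_swap_page() that the scenario above is about; a simplified sketch,
not the exact upstream code:)

	/*
	 * SWP_SYNCHRONOUS_IO bypass gate (simplified): only a swap
	 * count of exactly 1 takes the no-swapcache fast path.
	 */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
	    __swap_count(entry) == 1) {
		/* CPU1 above: sees count == 1, bypasses the swap cache */
	} else {
		/* CPU2 above: sees count == 2, goes through the swap cache */
	}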
> >
> > Hi Chris,
> >
> > This won't happen: nothing can bump the swap entry count unless the
> > entry is swapped in and freed. There are only two places that call
> > swap_duplicate(): unmap and fork. Unmap needs the page mapped and the
> > entry allocated, so it won't happen unless we hit the entry reuse
> > issue. Fork needs the VMA lock, which we hold during the page fault.
> >
> > > So the complexity is already there.
> > >
> > > If we can make sure the above case works correctly, then one way to
> > > avoid the problem is just to send CPU2 through the swap cache
> > > (without the swap cache bypassing).
> >
> > Yes, more auditing of the existing code and more explanation are
> > needed to ensure things won't go wrong; that's the reason I tried to
> > keep things from getting too complex...
> >
> > > > > > So the right steps are: 1. Lock the cluster/swap lock; 2. Check that
> > > > > > we still have swap_count == 1, bail out if not; 3. Set it to 2.
> > > > > > __swap_duplicate() can be modified to support this; it's similar to
> > > > > > the existing logic for SWAP_HAS_CACHE.
> > > > > >
> > > > > > And the swap freeing path will have to do more: swapcache cleanup
> > > > > > needs to be handled even in the bypassing path, since the racer may
> > > > > > add the folio to the swapcache.
> > > > > >
> > > > > > Reusing SWAP_HAS_CACHE seems to make it much simpler and avoids much
> > > > > > of that overhead, so I went that way in this patch; the only issue
> > > > > > now is potentially repeated page faults.
> > > > > >
> > > > > > I'm currently trying to add a SWAP_MAP_LOCK (or SWAP_MAP_SYNC, I'm
> > > > > > bad at naming it) special value, so any racer can just spin on it to
> > > > > > avoid all these problems. What do you think about this?
> > > > >
> > > > > Let's try some simpler method first.
> > > >
> > > > Another simpler idea is to add a schedule() or
> > > > schedule_timeout_uninterruptible(1) in the swapcache_prepare() failure
> > > > path before the goto out (just like __read_swap_cache_async()). I
> > > > think this should ensure that in almost all cases the PTE is ready
> > > > after it returns, and it also yields more CPU.
> > >
> > > I mentioned this in my earlier email, and Ying pointed it out as
> > > well: waiting in a loop inside do_swap_page() is bad because it is
> > > holding other locks.
> >
> > It's not looping here though, just a tiny delay. Since
> > SWP_SYNCHRONOUS_IO devices are supposed to be super fast, a tiny delay
> > should be enough.
> >
> > > Sorry I started this idea, but it seems no good.
> >
> > Not at all, more reviewing helps to find a better solution :)
> >
> > > If we could have CPU2 make forward progress without retrying the page
> > > fault, that would be best, if possible.
> >
> > Yes, making CPU2 fall back to the cached swapin path is doable after
> > careful auditing. But CPU2 is usually slower than CPU1 due to cache
> > and timing, so whatever it does will most likely be in vain and need
> > to be reverted, causing more work for both the code logic and the CPU.
> > The case where the race goes the other way (CPU2 being faster) is very
> > rare.
> >
> > I'm not against the idea of bumping the count; it's better if it can
> > be done without introducing too much noise. I will come back after
> > more tests and work on this.
>
> Hi,
>
> After some more testing, I still think bumping the swap count is a bad
> idea here. Besides the issues I mentioned above, adding the folio into
> the swap cache is fundamentally in conflict with the problem we are
> trying to solve: a folio in the swapcache can be swapped out without
> allocating a new entry, so the entry reuse issue becomes much more
> serious.
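(For reference, the bump-count variant in the quoted steps 1-3 above
would look roughly like this. A sketch only, modeled on the existing
__swap_duplicate() locking; the helper name is made up and it is
untested:)

	/* Hypothetical: claim an entry for cache-bypassing swapin. */
	static bool swap_claim_exclusive(swp_entry_t entry)
	{
		struct swap_info_struct *si = swp_swap_info(entry);
		unsigned long offset = swp_offset(entry);
		struct swap_cluster_info *ci;
		bool claimed = false;

		ci = lock_cluster_or_swap_info(si, offset);	/* step 1 */
		if (si->swap_map[offset] == 1) {		/* step 2 */
			si->swap_map[offset] = 2;		/* step 3 */
			claimed = true;
		}
		unlock_cluster_or_swap_info(si, ci);
		return claimed;
	}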
>
> If we want to avoid this, we have to release the cache unconditionally
> before the folio unlock, because we can't tell whether a parallel
> cache-bypassing swapin is going on or not. This is unacceptable; it
> breaks normal usage of the swap cache. There are also other potential
> issues, like readahead, or the race leaving a stale folio in the swap
> cache, so I don't think this is a good approach.

I also played with the swap refcount ideas and encountered a lot of
problems (e.g., kernel crashes and/or missing data). Now I realize your
original solution would be the best fix given how small the change is.

Folks, please chime in if you have another idea.

>
> Instead, if we add a schedule() or schedule_timeout_uninterruptible(1),

How much do your workload numbers change if we use schedule() instead
of schedule_timeout_uninterruptible(1)? If that doesn't increase the
statistics a lot, I prefer the schedule(). (TBH, I don't care much
about the stat since the race would be very rare.)

> it seems quite enough to avoid repeated page faults. I did the
> following test with the reproducer I provided in the commit message:
>
> Using ZRAM instead of brd for a more realistic workload:
> $ perf stat -e minor-faults,major-faults -I 30000 ./a.out
>
> Unpatched kernel:
> #           time      counts unit events
>     30.000096504      33,089      minor-faults:u
>     30.000096504     958,745      major-faults:u
>     60.000375308      22,150      minor-faults:u
>     60.000375308   1,221,665      major-faults:u
>     90.001062176      24,840      minor-faults:u
>     90.001062176   1,005,427      major-faults:u
>    120.004233268      23,634      minor-faults:u
>    120.004233268   1,045,596      major-faults:u
>    150.004530091      22,358      minor-faults:u
>    150.004530091   1,097,871      major-faults:u
>
> Patched kernel:
> #           time      counts unit events
>     30.000069753     280,094      minor-faults:u
>     30.000069753   1,198,871      major-faults:u
>     60.000415915     436,862      minor-faults:u
>     60.000415915     919,860      major-faults:u
>     90.000651670     359,176      minor-faults:u
>     90.000651670     955,340      major-faults:u
>    120.001088135     289,137      minor-faults:u
>    120.001088135     992,317      major-faults:u
>
> Indeed there is a huge increase in minor page faults, which indicates
> that the raced path returned without handling the fault. This
> reproducer tries to maximize the race chance, so the reading is more
> terrifying than usual.
>
> But after adding a schedule()/schedule_timeout_uninterruptible(1) after
> swapcache_prepare() fails, still using the same test:
>
> Patched kernel (adding a schedule() before goto out):
> #           time      counts unit events
>     30.000335901      43,066      minor-faults:u
>     30.000335901   1,163,232      major-faults:u
>     60.034629687      35,652      minor-faults:u
>     60.034629687     844,188      major-faults:u
>     90.034941472      34,957      minor-faults:u
>     90.034941472   1,218,174      major-faults:u
>    120.035255386      36,241      minor-faults:u
>    120.035255386   1,073,395      major-faults:u
>    150.035535786      33,057      minor-faults:u
>    150.035535786   1,171,684      major-faults:u
>
> Patched kernel (adding a schedule_timeout_uninterruptible(1) before
> goto out):
> #           time      counts unit events
>     30.000044452      26,748      minor-faults:u
>     30.000044452   1,202,064      major-faults:u
>     60.000317379      19,883      minor-faults:u
>     60.000317379   1,186,152      major-faults:u
>     90.000568779      18,333      minor-faults:u
>     90.000568779   1,151,015      major-faults:u
>    120.000787253      22,844      minor-faults:u
>    120.000787253     887,531      major-faults:u
>    150.001309065      18,900      minor-faults:u
>    150.001309065   1,211,894      major-faults:u
>
> The minor faults are basically the same as before, or even lower since
> other races are also reduced, with no observable performance
> regression so far.
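(For context, a sketch of the change being measured above, as I
understand it: the SWP_SYNCHRONOUS_IO bypass path in do_swap_page(),
not the literal patch:)

	/*
	 * Reuse SWAP_HAS_CACHE as an exclusive marker for the
	 * cache-bypassing swapin. If a parallel swapin already owns
	 * the entry, back off briefly and retry the fault instead of
	 * spinning while holding other locks.
	 */
	if (swapcache_prepare(entry)) {
		/* a plain schedule() also works; see the numbers above */
		schedule_timeout_uninterruptible(1);
		goto out;
	}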
>
> If the vague margin of this schedule call is still concerning, I think
> another approach is just a new swap map value for a parallel
> cache-bypassed swapin to loop on.

Long term, the swap map value approach would be good, but for now I
prefer your original solution with a small delay via schedule.

Thanks, Kairui!
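P.S. For the archive, here is how I picture the long-term swap map
value idea: a pure sketch, untested, and both SWAP_MAP_SYNC_IO and the
helper are made-up names, not an existing API. A real patch would have
to pick an unused encoding in the swap map.

	#define SWAP_MAP_SYNC_IO	0x3e	/* hypothetical marker value */

	/* Racer side: wait for the bypassing swapin to publish the PTE. */
	static void swap_wait_sync_io(struct swap_info_struct *si,
				      unsigned long offset)
	{
		while (READ_ONCE(si->swap_map[offset]) == SWAP_MAP_SYNC_IO)
			schedule();
	}

This would let the racer make forward progress without retrying the
whole fault, at the cost of one more special value in the swap map.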