From: Chris Li
Date: Tue, 27 Feb 2024 15:01:32 -0800
Subject: Re: [PATCH v4] mm/swap: fix race when skipping swapcache
To: Kairui Song
Cc: Barry Song <21cnbao@gmail.com>, Andrew Morton, linux-mm,
 "Huang, Ying", Minchan Kim, Barry Song, Yu Zhao, SeongJae Park,
 David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox,
 Michal Hocko, Yosry Ahmed, Aaron Lu, stable@vger.kernel.org, LKML
References: <20240219082040.7495-1-ryncsn@gmail.com>
 <20240219173147.3f4b50b7c9ae554008f50b66@linux-foundation.org>
Content-Type: text/plain; charset="UTF-8"
On Tue, Feb 27, 2024 at 10:14 AM Kairui Song wrote:
>
> On Wed, Feb 21, 2024 at 12:32 AM Chris Li wrote:
> >
> > On Mon, Feb 19, 2024 at 8:56 PM Kairui Song wrote:
> > >
> > > > Hi Barry,
> > > >
> > > > it might not be a problem for throughput. but for real-time and
> > > > tail latency, this hurts.
> > > > For example, this might increase dropping frames of UI which
> > > > is an important parameter to evaluate performance :-)
> > > >
> > >
> > > That's a true issue, as Chris mentioned before I think we need to
> > > think of some clever data struct to solve this more naturally in the
> > > future, similar issue exists for cached swapin as well and it has been
> > > there for a while. On the other hand I think maybe applications that
> > > are extremely latency sensitive should try to avoid swap on fault? A
> > > swapin could cause other issues like reclaim, throttled or contention
> > > with many other things, these seem to have a higher chance than this
> > > race.
> > >
> >
> > Yes, I do think the best long term solution is to have some clever
> > data structure to solve the synchronization issue and allow racing
> > threads to make forward progress at the same time.
> >
> > I have also explored some (failed) synchronization ideas, for example
> > having the run time swap entry refcount separate from the swap_map
> > count. BTW, the zswap entry->refcount behaves like that, it is
> > separate from the swap entry and manages the temporary run time usage
> > count held by the function. However that idea has its own problem as
> > well, it needs to have an xarray to track the swap entry run time
> > refcount (only stored in the xarray when a CPU fails to get the
> > SWAP_HAS_CACHE bit.) When we are done with the page fault, we still
> > need to look up the xarray to make sure there is no racing CPU and
> > put the refcount into the xarray. That kind of defeats the purpose of
> > avoiding the swap cache in the first place. We still need to do the
> > xarray lookup in the normal path.
> >
> > I came to realize that, while this current fix is not perfect (I
> > still wish we had a better solution not pausing the racing CPU), this
> > patch stands better than not fixing this data corruption issue, and the
> > patch remains relatively simple.
> > Yes it has latency issues but still
> > better than data corruption. It also doesn't stop us from coming up
> > with better solutions later on. If we want to address the
> > synchronization in a way not blocking other CPUs, it will likely
> > require a much bigger change.
> >
> > Unless we have a better suggestion, it seems the better one among the
> > alternatives so far.
> >
>
> Hi,
>
> Thanks for the comments. I've been trying some ideas locally, I think a
> simple and straightforward solution exists: we just don't skip the swap
> cache xarray.

Yes, I have been pondering about that as well.

Notice that __read_swap_cache_async() has a similar
"schedule_timeout_uninterruptible(1)" when swapcache_prepare(entry)
fails to grab the SWAP_HAS_CACHE bit. So falling back to using the swap
cache does not automatically solve the latency issue; a similar delay
exists in the swap cache case as well.

> The current reason we are skipping it is for performance, but with some
> optimization, the performance should be as good as skipping it (in the
> current behavior). Notice that even in the swap cache bypass path, we
> need to do one lookup and one modify (delete the shadow). That can't be
> skipped. So the usage of the swap cache can be better organized and
> optimized.

> After all, swapin makes use of the swap cache, so swapin can insert the
> folio into the swap cache xarray first, then set the swap map cache
> bit. I'm thinking about reusing the folio lock, or having an
> intermediate value in the xarray, so raced swapins can wait properly.
> There are some tricky parts syncing with the swap maps though.

Inserting into the swap cache xarray first and setting the
SWAP_HAS_CACHE bit later will need more audit on the races. I assume you
take the swap device/cluster lock before the folio is inserted into the
swap cache xarray?

Chris

>
> Currently working on a series, will send in a few weeks if it works.