From: Kairui Song <ryncsn@gmail.com>
Date: Wed, 7 Feb 2024 11:21:52 +0800
Subject: Re: [PATCH v2] mm/swap: fix race when skipping swapcache
To: Barry Song <21cnbao@gmail.com>
Cc: Chris Li, linux-mm@kvack.org, Andrew Morton, "Huang, Ying",
    Minchan Kim, Yu Zhao, Barry Song, SeongJae Park, Hugh Dickins,
    Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
    David Hildenbrand, stable@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20240206182559.32264-1-ryncsn@gmail.com>

On Wed, Feb 7, 2024 at 10:55 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Wed, Feb 7, 2024 at 10:21 AM Kairui Song wrote:
> >
> > On Wed, Feb 7, 2024 at 10:03 AM Chris Li wrote:
> > >
> > > On Tue, Feb 6, 2024 at 4:43 PM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > On Wed, Feb 7, 2024 at 7:18 AM Chris Li wrote:
> > > > >
> > > > > Hi Kairui,
> > > > >
> > > > > Sorry for replying to your patch V1 late, I will reply on the V2 thread.
> > > > >
> > > > > On Tue, Feb 6, 2024 at 10:28 AM Kairui Song wrote:
> > > > > >
> > > > > > From: Kairui Song
> > > > > >
> > > > > > When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
> > > > > > swapin the same entry at the same time, they get different pages (A, B).
> > > > > > Before one thread (T0) finishes the swapin and installs page (A)
> > > > > > to the PTE, another thread (T1) could finish swapin of page (B),
> > > > > > swap_free the entry, then swap out the possibly modified page
> > > > > > reusing the same entry. It breaks the pte_same check in (T0) because
> > > > > > the PTE value is unchanged, causing an ABA problem. Thread (T0) will
> > > > > > install a stale page (A) into the PTE and cause data corruption.
> > > > > >
> > > > > > One possible callstack is like this:
> > > > > >
> > > > > > CPU0                                 CPU1
> > > > > > ----                                 ----
> > > > > > do_swap_page()                       do_swap_page() with same entry
> > > > > >
> > > > > > swap_read_folio() <- read to page A  swap_read_folio() <- read to page B
> > > > > >
> > > > > > ...                                  set_pte_at()
> > > > > >                                      swap_free() <- entry is free
> > > > > >
> > > > > >
> > > > > > pte_same() <- Check pass, PTE seems
> > > > > >               unchanged, but page A
> > > > > >               is stale!
> > > > > > swap_free() <- page B content lost!
> > > > > > set_pte_at() <- stale page A installed!
> > > > > >
> > > > > > And besides, for ZRAM, swap_free() allows the swap device to discard
> > > > > > the entry content, so even if page (B) is not modified, if
> > > > > > swap_read_folio() on CPU0 happens later than swap_free() on CPU1,
> > > > > > it may also cause data loss.
> > > > > >
> > > > > > To fix this, reuse swapcache_prepare which will pin the swap entry using
> > > > > > the cache flag, and allow only one thread to pin it. Release the pin
> > > > > > after the PT is unlocked. Racers will simply busy wait since it's a rare
> > > > > > and very short event.
> > > > > >
> > > > > > Other methods like increasing the swap count don't seem to be a good
> > > > > > idea after some tests; that will cause racers to fall back to using the
> > > > > > swap cache again. Parallel swapin using different methods leads to
> > > > > > a much more complex scenario.
> > > > > >
> > > > > > Reproducer:
> > > > > >
> > > > > > This race issue can be triggered easily using a well constructed
> > > > > > reproducer and patched brd (with a delay in read path) [1]:
> > > > > >
> > > > > > With latest 6.8 mainline, race-caused data loss can be observed easily:
> > > > > > $ gcc -g -lpthread test-thread-swap-race.c && ./a.out
> > > > > >   Populating 32MB of memory region...
> > > > > >   Keep swapping out...
> > > > > >   Starting round 0...
> > > > > >   Spawning 65536 workers...
> > > > > >   32746 workers spawned, wait for done...
> > > > > >   Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
> > > > > >   Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
> > > > > >   Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
> > > > > >   Round 0 Failed, 15 data loss!
> > > > > >
> > > > > > This reproducer spawns multiple threads sharing the same memory region
> > > > > > using a small swap device. Every two threads update mapped pages one by
> > > > > > one in opposite directions, trying to create a race, with one dedicated
> > > > > > thread keeping the data swapped out using madvise.
> > > > > >
> > > > > > The reproducer created a reproduce rate of about once every 5 minutes,
> > > > > > so the race should be totally possible in production.
> > > > > >
> > > > > > After this patch, I ran the reproducer for over a few hundred rounds
> > > > > > and no data loss was observed.
> > > > > >
> > > > > > Performance overhead is minimal, microbenchmark swapin 10G from 32G
> > > > > > zram:
> > > > > >
> > > > > > Before:     10934698 us
> > > > > > After:      11157121 us
> > > > > > Non-direct: 13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
> > > > > >
> > > > > > Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
> > > > > > Reported-by: "Huang, Ying"
> > > > > > Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.com/
> > > > > > Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
> > > > > > Signed-off-by: Kairui Song
> > > > > > Reviewed-by: "Huang, Ying"
> > > > > > Acked-by: Yu Zhao
> > > > > >
> > > > > > ---
> > > > > > Update from V1:
> > > > > > - Add some words on the ZRAM case, it will discard swap content on
> > > > > >   swap_free so the race window is a bit different but the cure is the
> > > > > >   same. [Barry Song]
> > > > > > - Update comments to make them cleaner. [Huang, Ying]
> > > > > > - Add a function placeholder to fix the CONFIG_SWAP=n build. [SeongJae Park]
> > > > > > - Update the commit message and summary, refer to SWP_SYNCHRONOUS_IO
> > > > > >   instead of "direct swapin path". [Yu Zhao]
> > > > > > - Update commit message.
> > > > > > - Collect Review and Acks.
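For readers without access to the linked repository, a minimal sketch of the
pattern the commit message describes might look like the program below. This is
not the actual test-thread-swap-race.c: the region size, the two long-lived
worker threads (instead of many short-lived ones), the verification loop, and
the assumption of an already-configured small zram or patched-brd swap device
are all illustrative.

/*
 * Illustrative sketch only: threads of one process keep touching a private
 * anonymous region while a dedicated thread keeps pushing it out to swap
 * with MADV_PAGEOUT, so concurrent faults race on the skip-swapcache
 * swapin path of a SWP_SYNCHRONOUS_IO device.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21
#endif

#define REGION_SIZE (32UL << 20)	/* 32MB shared by all threads */
#define PAGE_SZ 4096UL

static unsigned char *region;
static int round_id;

static void *swapout_worker(void *arg)
{
	for (;;)	/* keep evicting the region so faults hit the swapin path */
		madvise(region, REGION_SIZE, MADV_PAGEOUT);
	return NULL;
}

static void *touch_forward(void *arg)
{
	for (unsigned long off = 0; off < REGION_SIZE; off += PAGE_SZ)
		region[off] = (unsigned char)round_id;	/* fault in and dirty each page */
	return NULL;
}

static void *touch_backward(void *arg)
{
	/* walk in the opposite direction to widen the race window */
	for (unsigned long off = REGION_SIZE; off > 0; off -= PAGE_SZ)
		region[off - PAGE_SZ] = (unsigned char)round_id;
	return NULL;
}

int main(void)
{
	pthread_t swapper, t1, t2;

	region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (region == MAP_FAILED)
		return 1;
	memset(region, 0, REGION_SIZE);
	pthread_create(&swapper, NULL, swapout_worker, NULL);

	for (round_id = 1; ; round_id++) {
		pthread_create(&t1, NULL, touch_forward, NULL);
		pthread_create(&t2, NULL, touch_backward, NULL);
		pthread_join(t1, NULL);
		pthread_join(t2, NULL);
		/* every page should now carry this round's value */
		for (unsigned long off = 0; off < REGION_SIZE; off += PAGE_SZ)
			if (region[off] != (unsigned char)round_id)
				printf("Round %d: data loss at 0x%lx\n", round_id, off);
	}
	return 0;
}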
> > > > > >
> > > > > >  include/linux/swap.h |  5 +++++
> > > > > >  mm/memory.c          | 15 +++++++++++++++
> > > > > >  mm/swap.h            |  5 +++++
> > > > > >  mm/swapfile.c        | 13 +++++++++++++
> > > > > >  4 files changed, 38 insertions(+)
> > > > > >
> > > > > > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > > > > > index 4db00ddad261..8d28f6091a32 100644
> > > > > > --- a/include/linux/swap.h
> > > > > > +++ b/include/linux/swap.h
> > > > > > @@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
> > > > > >         return 0;
> > > > > >  }
> > > > > >
> > > > > > +static inline int swapcache_prepare(swp_entry_t swp)
> > > > > > +{
> > > > > > +       return 0;
> > > > > > +}
> > > > > > +
> > > > > >  static inline void swap_free(swp_entry_t swp)
> > > > > >  {
> > > > > >  }
> > > > > > diff --git a/mm/memory.c b/mm/memory.c
> > > > > > index 7e1f4849463a..1749c700823d 100644
> > > > > > --- a/mm/memory.c
> > > > > > +++ b/mm/memory.c
> > > > > > @@ -3867,6 +3867,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > > > > >         if (!folio) {
> > > > > >                 if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> > > > > >                     __swap_count(entry) == 1) {
> > > > > > +                       /*
> > > > > > +                        * Prevent parallel swapin from proceeding with
> > > > > > +                        * the cache flag. Otherwise, another thread may
> > > > > > +                        * finish swapin first, free the entry, and swapout
> > > > > > +                        * reusing the same entry. It's undetectable as
> > > > > > +                        * pte_same() returns true due to entry reuse.
> > > > > > +                        */
> > > > > > +                       if (swapcache_prepare(entry))
> > > > > > +                               goto out;
> > > > > > +
> > > > >
> > > > > I am puzzled by this "goto out". If I understand this correctly, you
> > > > > have two threads CPU1 and CPU2 racing to set the flag SWAP_HAS_CACHE.
> > > > > CPU1 will succeed in adding the flag and CPU2 will get
> > > > > "-EEXIST" from "swapcache_prepare(entry)". Am I understanding it
> > > > > correctly so far?
> > > > >
> > > > > Then the goto out seems wrong to me. For CPU2, the page fault will
> > > > > return *unhandled*. Even worse, the "-EEXIST" error is not preserved,
> > > > > so CPU2 does not even know the page fault is not handled; it will resume
> > > > > from the page fault instruction, possibly generating another page fault
> > > > > at the exact same location. That page fault loop will repeat until
> > > > > CPU1 installs the new pte on that faulting virtual address and it is
> > > > > picked up by CPU2.
> > > > >
> > > > > Am I missing something obvious there?
> > > > I feel you are right. Any concurrent page faults at the same pte
> > > > will increase the count of page faults a couple of times now.
> > > > >
> > > > > I just re-read your comment: "Racers will simply busy wait since it's
> > > > > a rare and very short event." That might be referring to the above
> > > > > CPU2 page fault looping situation. I consider the page fault looping
> > > > > on CPU2 not acceptable. For one, it will mess up the page fault
> > > > > statistics.
> > > > > In my mind, having an explicit loop for CPU2 waiting for the PTE to
> > > > > show up is still better than this page fault loop. You can have more
> > > > > CPU-power-friendly loops.
> > > > I assume you mean something like
> > > >
> > > > while (!pte_same())
> > > >         cpu_relax();
> > > >
> > > > then we still have a chance to miss the change of B.
> > > >
> > > > For example, another thread is changing the pte A->B->A; our loop can
> > > > miss B. Thus we will trap into an infinite loop. This is even worse.
> > > Yes. You are right, it is worse. Thanks for catching that. That is why
> > > I say this needs more discussion, I haven't fully thought it through
> > > :-)
> >
> > Hi Chris and Barry,
> >
> > Thanks for the comments!
>
> Hi Kairui,
>
> > The worst thing I know of returning in do_swap_page without handling
> > the swap is an increase of some statistic counters; note it will not
> > cause major page fault counters to grow, only things like the perf counter
> > and vma lock statistics are affected.
>
> I don't understand :-) if it is goto out, userspace may re-execute the
> instruction.
> What is going to happen is a totally new page fault.

Right, it's a new page fault. I mean, if the same PTE is still not
present, it's handled in the same way again.

> >
> > And actually there are multiple already existing return points in
> > do_swap_page that will return without handling it, which may
> > re-trigger the page fault.
>
> I feel this case is different from other returns.
> Other returns have probably held the ptl or page lock before checking the
> entry/pte, and another thread has likely changed the PTE/swap entry before
> those locks are released.
> Then it is likely there is no following page fault.

Right, at the other return points, they expect the next attempt to simply
succeed without triggering another fault. But here SWP_SYNCHRONOUS_IO is
usually very fast, so the next attempt here will also likely succeed without
triggering a fault. So, to ensure that, maybe we can just wait a bit before
returning? That way the following page fault should be very unlikely to
happen.

I'm not sure if there is any better way to wait than simply adding a
cpu_relax; waiting on the swap cache flag may cause it to wait forever,
since it's totally possible some other thread added the entry to the
swapcache. Waiting for the PTE to change will result in the same ABA issue.
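For concreteness, one possible shape of the "wait a bit before returning" idea,
inside the SWP_SYNCHRONOUS_IO branch of do_swap_page() quoted above, is a short
sleep in the bail-out path. This is only a sketch of the option being discussed
in this thread, not necessarily what a later revision will do;
schedule_timeout_uninterruptible(1) is just one way to yield briefly instead of
spinning on the PTE:

                        if (swapcache_prepare(entry)) {
                                /*
                                 * A racer holds the SWAP_HAS_CACHE pin and its
                                 * synchronous swapin should finish very soon.
                                 * Back off briefly so the retried fault almost
                                 * certainly finds the PTE already installed,
                                 * instead of spinning on pte_same() and
                                 * possibly missing an A->B->A change.
                                 */
                                schedule_timeout_uninterruptible(1);
                                goto out;
                        }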