Date: Wed, 12 Jan 2022 14:37:20 -0800
From: Minchan Kim
To: Mauricio Faria de Oliveira
Cc: "Huang, Ying", Yu Zhao, Andrew Morton, linux-mm@kvack.org,
 linux-block@vger.kernel.org, Miaohe Lin, Yang Shi
Subject: Re: [PATCH v2] mm: fix race between MADV_FREE reclaim and blkdev direct IO read
References: <20220105233440.63361-1-mfo@canonical.com>
 <87v8ypybdc.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Wed, Jan 12, 2022 at 06:53:07PM -0300, Mauricio Faria de Oliveira wrote:
> Hi Minchan Kim,
>
> Thanks for handling the hard questions! :)
>
> On Wed, Jan 12, 2022 at 2:33 PM Minchan Kim wrote:
> >
> > On Wed, Jan 12, 2022 at 09:46:23AM +0800, Huang, Ying wrote:
> > > Yu Zhao writes:
> > >
> > > > On Wed, Jan 05, 2022 at 08:34:40PM -0300, Mauricio Faria de Oliveira wrote:
> > > >> diff --git a/mm/rmap.c b/mm/rmap.c
> > > >> index 163ac4e6bcee..8671de473c25 100644
> > > >> --- a/mm/rmap.c
> > > >> +++ b/mm/rmap.c
> > > >> @@ -1570,7 +1570,20 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > > >>
> > > >>  			/* MADV_FREE page check */
> > > >>  			if (!PageSwapBacked(page)) {
> > > >> -				if (!PageDirty(page)) {
> > > >> +				int ref_count = page_ref_count(page);
> > > >> +				int map_count = page_mapcount(page);
> > > >> +
> > > >> +				/*
> > > >> +				 * The only page refs must be from the isolation
> > > >> +				 * (checked by the caller shrink_page_list() too)
> > > >> +				 * and one or more rmap's (dropped by discard:).
> > > >> +				 *
> > > >> +				 * Check the reference count before dirty flag
> > > >> +				 * with memory barrier; see __remove_mapping().
> > > >> +				 */
> > > >> +				smp_rmb();
> > > >> +				if ((ref_count - 1 == map_count) &&
> > > >> +				    !PageDirty(page)) {
> > > >>  					/* Invalidate as we cleared the pte */
> > > >>  					mmu_notifier_invalidate_range(mm,
> > > >>  						address, address + PAGE_SIZE);
> > > >
> > > > Out of curiosity, how does it work with COW in terms of reordering?
> > > > Specifically, it seems to me get_page() and page_dup_rmap() in
> > > > copy_present_pte() can happen in any order, and if page_dup_rmap()
> > > > is seen first, and direct io is holding a refcnt, this check can still
> > > > pass?
> > >
> > > I think that you are correct.
> > >
> > > After more thoughts, it appears very tricky to compare page count and
> > > map count. Even if we have added smp_rmb() between page_ref_count() and
> > > page_mapcount(), an interrupt may happen between them. During the
> > > interrupt, the page count and map count may be changed, for example,
> > > unmapped, or do_swap_page().
> >
> > Yeah, it happens but what specific problem are you concerning from the
> > count change under race? The fork case Yu pointed out was already known
> > for breaking DIO so user should take care not to fork under DIO(Please
> > look at O_DIRECT section in man 2 open). If you could give a specific
> > example, it would be great to think over the issue.
> >
> > I agree it's little tricky but it seems to be way other place has used
> > for a long time(Please look at write_protect_page in ksm.c).
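
[For reference, the ksm.c check pointed at above is along these lines; this
is a condensed, from-memory sketch of write_protect_page() in mm/ksm.c, not
the verbatim code. `swapped` is PageSwapCache(page).]

	/*
	 * Condensed sketch of write_protect_page() (mm/ksm.c): clear the
	 * pte and flush the TLB *before* the check, so no O_DIRECT/gup
	 * write can begin in the middle of it; then verify the mappings,
	 * the caller's reference and the swap cache account for the
	 * whole page count.
	 */
	entry = ptep_clear_flush(vma, pvmw.address, pvmw.pte);
	if (page_mapcount(page) + 1 + swapped != page_count(page)) {
		/* extra ref: O_DIRECT or similar I/O may be in flight */
		set_pte_at(mm, pvmw.address, pvmw.pte, entry);
		goto out_unlock;
	}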
>
> Ah, that's great to see it's being used elsewhere, for DIO particularly!
>
> > So, here what we missing is tlb flush before the checking.
>
> That shouldn't be required for this particular issue/case, IIUIC.
> One of the things we checked early on was disabling deferred TLB flush
> (similarly to what you've done), and it didn't help with the issue; also, the
> issue happens on uniprocessor mode too (thus no remote CPU involved.)

I guess you didn't try it with page_mapcount + 1 == page_count at that time?

Anyway, I agree we don't need a TLB flush here, unlike KSM. I think the
reason KSM does a TLB flush before the check is to make sure a trap
triggers on a write from a user process on another core. In this
MADV_FREE case, however, the HW already guarantees the trap. Please see
below.

>
> >
> > Something like this.
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index b0fd9dc19eba..b4ad9faa17b2 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1599,18 +1599,8 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >
> >  			/* MADV_FREE page check */
> >  			if (!PageSwapBacked(page)) {
> > -				int refcount = page_ref_count(page);
> > -
> > -				/*
> > -				 * The only page refs must be from the isolation
> > -				 * (checked by the caller shrink_page_list() too)
> > -				 * and the (single) rmap (dropped by discard:).
> > -				 *
> > -				 * Check the reference count before dirty flag
> > -				 * with memory barrier; see __remove_mapping().
> > -				 */
> > -				smp_rmb();
> > -				if (refcount == 2 && !PageDirty(page)) {
> > +				if (!PageDirty(page) &&
> > +				    page_mapcount(page) + 1 == page_count(page)) {
>
> In the interest of avoiding a different race/bug, it seemed worth following the
> suggestion outlined in __remove_mapping(), i.e., checking PageDirty()
> after the page's reference count, with a memory barrier in between.

True, so it means your patch as-is is good for me.

>
> I'm not familiar with the details of the original issue behind that code change,
> but it seemed to be possible here too, particularly as writes from user-space
> can happen asynchronously / after try_to_unmap_one() checked PTE clean
> and didn't set PageDirty, and if the page's PTE is present, there's no fault?

Yeah, it was discussed. For a clean pte, the CPU has to fetch and update
the actual pte entry, not just the TLB, so the trap triggers for a
MADV_FREE page.

https://lkml.org/lkml/2015/4/15/565
https://lkml.org/lkml/2015/4/16/136
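
To make the barrier pairing above concrete, here is a minimal userspace
model of the ordering (illustrative only: the names are made up, and C11
atomics stand in for the kernel's put_page() / smp_rmb(); this is not
kernel code):

	/*
	 * Userspace model of the ordering in the patch. dio_complete()
	 * plays the DIO/gup side (dirty the page, then drop its pin);
	 * can_discard() plays try_to_unmap_one()'s MADV_FREE check.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_int ref_count = 3;     /* isolation + 1 rmap + DIO pin */
	static atomic_bool page_dirty = false;

	static void dio_complete(void)
	{
		/* set_page_dirty(), then put_page(): the release order
		 * makes the dirty flag visible before the unref is. */
		atomic_store_explicit(&page_dirty, true, memory_order_relaxed);
		atomic_fetch_sub_explicit(&ref_count, 1, memory_order_release);
	}

	static bool can_discard(int map_count)
	{
		int refs = atomic_load_explicit(&ref_count, memory_order_relaxed);

		/* plays the role of smp_rmb(): the reference count is
		 * read before the dirty flag, cf. __remove_mapping() */
		atomic_thread_fence(memory_order_acquire);

		bool dirty = atomic_load_explicit(&page_dirty, memory_order_relaxed);

		/*
		 * Either the extra DIO reference is still visible
		 * (refs - 1 != map_count), or the fence pairing
		 * guarantees the dirty flag is visible too; either way
		 * the page is kept.
		 */
		return refs - 1 == map_count && !dirty;
	}

	int main(void)
	{
		dio_complete();
		printf("discard MADV_FREE page? %s\n",
		       can_discard(1) ? "yes" : "no");
		return 0;
	}

Run sequentially as above it prints "no": the counts match after the DIO
pin is dropped, but the acquire/release pairing ensures the dirty flag is
then seen as well, which is exactly the property the smp_rmb() in the
patch relies on.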