From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 11 Jan 2022 12:20:13 -0800
From: Minchan Kim <minchan.kim@gmail.com>
To: John Hubbard
Cc: Yu Zhao, Mauricio Faria de Oliveira, Andrew Morton, linux-mm@kvack.org,
	linux-block@vger.kernel.org, Huang Ying, Miaohe Lin, Yang Shi
Subject: Re: [PATCH v2] mm: fix race between MADV_FREE reclaim and blkdev direct IO read
References: <20220105233440.63361-1-mfo@canonical.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
On Tue, Jan 11, 2022 at 11:29:36AM -0800, John Hubbard wrote:
> On 1/11/22 10:54, Minchan Kim wrote:
> ...
> > Hi Yu,
> >
> > I think you're correct. I don't think we want a memory barrier
> > there in page_dup_rmap. Then, how about making gup_fast aware
> > of FOLL_TOUCH?
> >
> > FOLL_TOUCH means it's going to write something so the page
>
> Actually, my understanding of FOLL_TOUCH is that it does *not* mean that
> data will be written to the page. That is what FOLL_WRITE is for.
> FOLL_TOUCH means: update the "accessed" metadata, without actually
> writing to the memory that the page represents.

Exactly. I should have mentioned FOLL_TOUCH together with FOLL_WRITE.
What I wanted to hit with FOLL_TOUCH was this block in follow_page_pte:

	if (flags & FOLL_TOUCH) {
		if ((flags & FOLL_WRITE) &&
		    !pte_dirty(pte) && !PageDirty(page))
			set_page_dirty(page);
		mark_page_accessed(page);
	}

> > should be dirty. Currently, get_user_pages works like that.
> > However, the problem is get_user_pages_fast, since it looks
> > like lockless_pages_from_mm doesn't support FOLL_TOUCH.
> >
> > So the idea is: if the param in internal_get_user_pages_fast
> > includes FOLL_TOUCH, gup_{pmd,pte}_range tries to make the
> > page dirty under trylock_page (if the lock fails, it goes
>
> Marking a page dirty solely because FOLL_TOUCH is specified would
> be an API-level mistake. That's why it isn't "supported". Or at least,
> that's how I'm reading things.
>
> Hope that helps!
>
> > down the slow path with __gup_longterm_unlocked and set_dirty_pages
> > for them).
> >
> > This approach would solve other cases where we map userspace
> > pages into kernel space and then write to them. Since the write
> > doesn't go through the process's page table, we lose the dirty
> > bit in the page table of the process and it turns out to be the
> > same problem. That's why I'd like to take this approach.
> >
> > If it doesn't work, the other option to fix this specific
> > case is: can't we make the pages dirty in advance in the DIO
> > read case?
> >
> > When I look at the DIO code, it's already doing that in the
> > async case. Couldn't we do the same thing for the other cases?
> > I guess the worst case we would see is more page writeback,
> > since the pages become dirty unnecessarily.
>
> Marking pages dirty after pinning them is a pre-existing area of
> problems. See the long-running LWN articles about get_user_pages() [1].

Oh, do you mean marking pages dirty in the DIO path is already a
problem? Let me read the pages in the link. Thanks!

>
> [1] https://lwn.net/Kernel/Index/#Memory_management-get_user_pages
>
> thanks,
> --
> John Hubbard
> NVIDIA
>