From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 10 Jan 2022 21:10:36 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Jan Kara
Cc: John Hubbard, linux-mm@kvack.org, Andrew Morton, Christoph Hellwig
Subject: Re: [PATCH 14/17] gup: Convert for_each_compound_head() to gup_for_each_folio()
Message-ID:
References: <20220102215729.2943705-1-willy@infradead.org>
 <20220102215729.2943705-15-willy@infradead.org>
 <20c2d9d3-bbbe-2f11-f6bf-a0e3578c6a71@nvidia.com>
 <20220110152208.w3tj5hjnbwjd6n2l@quack3.lan>
 <20220110203611.7s2lg4cyejj5l5ah@quack3.lan>
In-Reply-To: <20220110203611.7s2lg4cyejj5l5ah@quack3.lan>

On Mon, Jan 10, 2022 at 09:36:11PM +0100, Jan Kara wrote:
> On Mon 10-01-22 15:52:51, Matthew Wilcox wrote:
> > On Mon, Jan 10, 2022 at 04:22:08PM +0100, Jan Kara wrote:
> > > On Sun 09-01-22 00:01:49, John Hubbard wrote:
> > > > On 1/8/22 20:39, Matthew Wilcox wrote:
> > > > > On Wed, Jan 05, 2022 at 12:17:46AM -0800, John Hubbard wrote:
> > > > > > > +		if (!folio_test_dirty(folio)) {
> > > > > > > +			folio_lock(folio);
> > > > > > > +			folio_mark_dirty(folio);
> > > > > > > +			folio_unlock(folio);
> > > > > >
> > > > > > At some point, maybe even here, I suspect that creating the folio
> > > > > > version of set_page_dirty_lock() would help. I'm sure you have
> > > > > > a better feel for whether it helps, after doing all of this conversion
> > > > > > work, but it just sort of jumped out at me as surprising to see it
> > > > > > in this form.
> > > > >
> > > > > I really hate set_page_dirty_lock(). It smacks of "there is a locking
> > > > > rule here which we're violating, so we'll just take the lock to fix it"
> > > > > without understanding why there's a locking problem here.
> > > > >
> > > > > As far as I can tell, originally, the intent was that you would lock
> > > > > the page before modifying any of the data in the page. ie you would
> > > > > do:
> > > > >
> > > > > gup()
> > > > > lock_page()
> > > > > addr = kmap_page()
> > > > > *addr = 1;
> > > > > kunmap_page()
> > > > > set_page_dirty()
> > > > > unlock_page()
> > > > > put_page()
> > > > >
> > > > > and that would prevent races between modifying the page and (starting)
> > > > > writeback, not to mention truncate() and various other operations.
> > > > >
> > > > > Clearly we can't do that for DMA-pinned pages. There's only one lock
> > > > > bit. But do we even need to take the lock if we have the page pinned?
> > > > > What are we protecting against?
> > > >
> > > > This is a fun question, because you're asking it at a point when the
> > > > overall problem remains unsolved. That is, the interaction between
> > > > file-backed pages and gup/pup is still completely broken.
> > > >
> > > > And I don't have an answer for you: it does seem like lock_page() is
> > > > completely pointless here. Looking back, there are some 25 callers of
> > > > unpin_user_pages_dirty_lock(), and during all those patch reviews, no
> > > > one noticed this point!
> > >
> > > I'd say it is underdocumented but not obviously pointless :) AFAIR (and
> > > Christoph or Andrew may well correct me) the page lock in
> > > set_page_dirty_lock() is there to protect metadata associated with the page
> > > through page->private. Otherwise truncate could free these (e.g.
> > > block_invalidatepage()) while ->set_page_dirty() callback (e.g.
> > > __set_page_dirty_buffers()) works on this metadata.
> >
> > Yes, but ... we have an inconsistency between DMA writes to the page and
> > CPU writes to the page.
> >
> > fd = open(file)
> > write(fd, 1024 * 1024)
> > mmap(NULL, 1024 * 1024, PROT_RW, MAP_SHARED, fd, 0)
> > register-memory-with-RDMA
> > ftruncate(fd, 0); // page is removed from page cache
> > ftruncate(fd, 1024 * 1024)
> >
> > Now if we do a store from the CPU, we instantiate a new page in the
> > page cache and the store will be written back to the file. If we do
> > an RDMA-write, the write goes to the old page and will be lost. Indeed,
> > it's no longer visible to the CPU (but is visible to other RDMA reads!)
> >
> > Which is fine if the program did it itself because it's doing something
> > clearly bonkers, but another program might be the one doing the
> > two truncate() steps, and this would surprise an innocent program.
> >
> > I still favour blocking the truncate-down (or holepunch) until there
> > are no pinned pages in the inode.
> > But I know this is a change in behaviour since, for some reason,
> > truncate() gets to override mmap().
>
> I agree, although this is unrelated to the page lock discussion above. In
> principle we can consider such a change (after all, we chose this solution
> for DAX) but it has some consequences - e.g. that disk space cannot be
> reclaimed when someone has pagecache pages pinned (which may be unexpected
> from a sysadmin POV), or that we have to be careful, or an eager
> application doing DIO (once it is converted to pinning) can block truncate
> indefinitely.

It's not unrelated ... the set_page_dirty() call happens while the page is
still DMA-pinned, so once we figure out how to solve this problem, any
solution can be applicable to both places.  Maybe the solution to truncate
vs DMA-pin won't be applicable to both ...

As far as badly behaved applications doing DMA-pinning blocking truncate()
goes, have we considered the possibility of declining the DMA pin if the
process does not own the mmapped file?  That would limit the amount of
trouble it can cause, but maybe it would break some interesting use cases.
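
To make that last suggestion slightly more concrete, the sort of check I
have in mind would sit somewhere in the FOLL_PIN/FOLL_LONGTERM path before
we take the pin on a page in a shared file mapping.  Entirely hypothetical
sketch; may_pin_file_folio() is a made-up name, the user-namespace and ACL
details are hand-waved, and the policy itself is obviously up for debate:

/*
 * Hypothetical policy check: only allow a long-term pin of a shared
 * file mapping if the caller owns the file (or has CAP_FOWNER), so an
 * unprivileged pin can't hold up truncate for everybody else.
 */
static bool may_pin_file_folio(struct vm_area_struct *vma)
{
	struct file *file = vma->vm_file;

	if (!file || !(vma->vm_flags & VM_SHARED))
		return true;	/* anon / private mappings aren't the problem */

	return uid_eq(current_fsuid(), file_inode(file)->i_uid) ||
	       capable(CAP_FOWNER);
}

That would limit the blast radius, but as I said, it probably breaks the
legitimate case of pinning a shared file you can write to but don't own.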
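
And going back to John's point about a folio version of
set_page_dirty_lock(): if we decide we do want a helper rather than
open-coding the lock/dirty/unlock dance in gup, I'd expect it to be
nothing more than the trivial wrapper below (sketch only; the
folio_mark_dirty_lock() name is mine, not something in this series):

/*
 * Sketch of a folio analogue of set_page_dirty_lock(): take the folio
 * lock around folio_mark_dirty() so that the filesystem's dirtying
 * callback (e.g. __set_page_dirty_buffers()) can't race with truncate
 * freeing the buffers attached at folio->private, per Jan above.
 */
bool folio_mark_dirty_lock(struct folio *folio)
{
	bool ret;

	folio_lock(folio);
	ret = folio_mark_dirty(folio);
	folio_unlock(folio);
	return ret;
}

But that just codifies taking the lock; it still doesn't answer the
question of what, exactly, the lock buys us once the page is DMA-pinned.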