Date: Mon, 3 Feb 2025 16:49:34 +0100
From: Simona Vetter <simona.vetter@ffwll.ch>
To: Lorenzo Stoakes
Cc: Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller,
	linux-fbdev@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Matthew Wilcox,
	David Hildenbrand, Kajtar Zsolt, Maira Canal
Subject: Re: [PATCH 2/3] mm: provide mapping_wrprotect_page() function

On Fri, Jan 31, 2025 at 06:28:57PM +0000, Lorenzo Stoakes wrote:
> In the fb_defio video driver, page dirty state is used to determine when
> frame buffer pages have been changed, allowing for batched, deferred I/O
> to be performed for efficiency.
>
> This implementation had only one means of doing so effectively - the use
> of the folio_mkclean() function.
>
> However, this use of the function is inappropriate, as the fb_defio
> implementation allocates kernel memory to back the framebuffer, and is
> then forced to specify the page->index and page->mapping fields so that
> the folio_mkclean() rmap traversal can proceed correctly.
>
> It is not correct to specify these fields on kernel-allocated memory;
> moreover, since these are not folios, page->index and page->mapping are
> deprecated fields, soon to be removed.
>
> We therefore need a means by which we can correctly traverse the reverse
> mapping and write-protect mappings for a page backing an address_space
> page cache object at a given offset.
>
> This patch provides this - mapping_wrprotect_page() allows this operation
> to be performed for a specified address_space, offset and page, without
> requiring a folio nor, of course, an inappropriate use of page->index and
> page->mapping.
>
> With this provided, we can subsequently adjust the fb_defio
> implementation to make use of this function and avoid the incorrect
> invocation of folio_mkclean() and, more importantly, the incorrect
> manipulation of the page->index and page->mapping fields.
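As a usage sketch for readers following along (the helper below is
hypothetical and not from this series - the real conversion comes in the
later fb_defio patch), the driver-side call would be roughly:

/*
 * Sketch only: write-protect every userspace PTE mapping one framebuffer
 * page so the next write faults and re-dirties it. @mapping and @pgoff
 * come from the driver's own bookkeeping rather than the deprecated
 * page->mapping/page->index fields.
 */
static void defio_wrprotect_one(struct address_space *mapping, pgoff_t pgoff,
				struct page *page)
{
	int cleaned = mapping_wrprotect_page(mapping, pgoff,
					     /* nr_pages = */ 1, page);

	if (cleaned < 0)
		pr_warn("fb_defio: wrprotect failed: %d\n", cleaned);
}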
>
> Signed-off-by: Lorenzo Stoakes
> ---
>  include/linux/rmap.h |  3 ++
>  mm/rmap.c            | 73 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 76 insertions(+)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 683a04088f3f..0bf5f64884df 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -739,6 +739,9 @@ unsigned long page_address_in_vma(const struct folio *folio,
>   */
>  int folio_mkclean(struct folio *);
>
> +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff,
> +		unsigned long nr_pages, struct page *page);
> +
>  int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
>  		      struct vm_area_struct *vma);
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index a2ff20c2eccd..bb5a42d95c48 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1127,6 +1127,79 @@ int folio_mkclean(struct folio *folio)
>  }
>  EXPORT_SYMBOL_GPL(folio_mkclean);
>
> +struct wrprotect_file_state {
> +	int cleaned;
> +	pgoff_t pgoff;
> +	unsigned long pfn;
> +	unsigned long nr_pages;
> +};
> +
> +static bool mapping_wrprotect_page_one(struct folio *folio,
> +		struct vm_area_struct *vma, unsigned long address, void *arg)
> +{
> +	struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg;
> +	struct page_vma_mapped_walk pvmw = {
> +		.pfn = state->pfn,
> +		.nr_pages = state->nr_pages,
> +		.pgoff = state->pgoff,
> +		.vma = vma,
> +		.address = address,
> +		.flags = PVMW_SYNC,
> +	};
> +
> +	state->cleaned += page_vma_mkclean_one(&pvmw);
> +
> +	return true;
> +}
> +
> +static void __rmap_walk_file(struct folio *folio, struct address_space *mapping,
> +		pgoff_t pgoff_start, unsigned long nr_pages,
> +		struct rmap_walk_control *rwc, bool locked);
> +
> +/**
> + * mapping_wrprotect_page() - Write-protect all mappings of this page.
> + *
> + * @mapping: The mapping whose reverse mapping should be traversed.
> + * @pgoff: The page offset at which @page is mapped within @mapping.
> + * @nr_pages: The number of physically contiguous base pages spanned.
> + * @page: The page mapped in @mapping at @pgoff.
> + *
> + * Traverses the reverse mapping, finding all VMAs which contain a shared
> + * mapping of the single @page in @mapping at offset @pgoff and
> + * write-protecting the mappings.
> + *
> + * The page does not have to be a folio, but rather can be a kernel
> + * allocation that is mapped into userland. We therefore do not require
> + * that the page maps to a folio with a valid mapping or index field;
> + * these are instead specified in @mapping and @pgoff.
> + *
> + * Return: the number of write-protected PTEs, or an error.
> + */
> +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff,
> +		unsigned long nr_pages, struct page *page)
> +{
> +	struct wrprotect_file_state state = {
> +		.cleaned = 0,
> +		.pgoff = pgoff,
> +		.pfn = page_to_pfn(page),

Could we go one step further and entirely drop the struct page? Similar to
unmap_mapping_range() for VM_SPECIAL mappings, except it only updates the
write protection.

The reason is that ideally we'd like fbdev defio to get rid of any struct
page usage entirely, because for some dma_alloc() memory regions there's
simply no struct page (it's a carveout). See e.g. commit 5a498d4d06d6
("drm/fbdev-dma: Only install deferred I/O if necessary") for some of the
pain this has caused.

So an entirely struct-page-less way to write-protect a pfn would be best.
And it doesn't look like you need the page here at all?
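Concretely, I'd imagine something like the sketch below - the same walk as
your version, with the caller handing in the pfn directly (the function
name is a placeholder only, not part of this series):

/*
 * Hypothetical struct-page-less variant: identical to
 * mapping_wrprotect_page() except that the caller supplies the pfn, so it
 * also works for carveout memory that has no struct page backing it.
 */
int mapping_wrprotect_range(struct address_space *mapping, pgoff_t pgoff,
			    unsigned long pfn, unsigned long nr_pages)
{
	struct wrprotect_file_state state = {
		.cleaned = 0,
		.pgoff = pgoff,
		.pfn = pfn,	/* was: page_to_pfn(page) */
		.nr_pages = nr_pages,
	};
	struct rmap_walk_control rwc = {
		.arg = (void *)&state,
		.rmap_one = mapping_wrprotect_page_one,
		.invalid_vma = invalid_mkclean_vma,
	};

	if (!mapping)
		return 0;

	__rmap_walk_file(/* folio = */ NULL, mapping, pgoff, nr_pages, &rwc,
			 /* locked = */ false);

	return state.cleaned;
}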
Cheers, Sima

> +		.nr_pages = nr_pages,
> +	};
> +	struct rmap_walk_control rwc = {
> +		.arg = (void *)&state,
> +		.rmap_one = mapping_wrprotect_page_one,
> +		.invalid_vma = invalid_mkclean_vma,
> +	};
> +
> +	if (!mapping)
> +		return 0;
> +
> +	__rmap_walk_file(/* folio = */ NULL, mapping, pgoff, nr_pages, &rwc,
> +			/* locked = */ false);
> +
> +	return state.cleaned;
> +}
> +EXPORT_SYMBOL_GPL(mapping_wrprotect_page);
> +
>  /**
>   * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
>   * [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
> --
> 2.48.1
>

--
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch