* [PATCH 0/3] expose mapping wrprotect, fix fb_defio use
@ 2025-01-31 18:28 Lorenzo Stoakes
2025-01-31 18:28 ` [PATCH 1/3] mm: refactor rmap_walk_file() to separate out traversal logic Lorenzo Stoakes
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Lorenzo Stoakes @ 2025-01-31 18:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel,
linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Kajtar Zsolt, Maira Canal
Right now the only means by which we can write-protect a range using the
reverse mapping is via folio_mkclean().
However, this is not always the appropriate means of doing so, specifically
in the case of the framebuffer deferred I/O logic (fb_defio, enabled by
CONFIG_FB_DEFERRED_IO). There, kernel pages are mapped read-only and
write-protect faults are used to batch up I/O operations.
Each time the deferred work is done, folio_mkclean() is used to mark the
framebuffer page as having had its I/O performed, write-protecting it again
so that subsequent writes can be detected. However, doing so requires the
kernel page (perhaps allocated via vmalloc()) to have its page->mapping and
page->index fields set so the rmap can find everything that maps it in
order to write-protect it.
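Concretely, this is roughly the pattern the current fb_defio code is forced
into (condensed from drivers/video/fbdev/core/fb_defio.c; see patch 3/3 for
the full context):

  /* In the fault handler: set page->mapping, index on kernel memory. */
  page->mapping = vmf->vma->vm_file->f_mapping;
  page->index = vmf->pgoff;	/* for folio_mkclean() */

  /* Later, in the deferred work: rely on those fields via the rmap. */
  folio_lock(folio);
  folio_mkclean(folio);
  folio_unlock(folio);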
This is problematic as, firstly, these fields should not be set for
kernel-allocated memory and, secondly, these are not folios (this is not
user memory), and the page->index and page->mapping fields are now
deprecated and soon to be removed.
The removal of these fields is imminent, rendering this series more urgent
than it might first appear.
The implementers cannot be blamed for having used this approach, however,
as there was simply no other way of performing this operation correctly.
This series fixes this - we provide the mapping_wrprotect_page() function
to allow the reverse mapping to be used to look up mappings from the page
cache object (i.e. its address_space pointer) at a specific offset.
The fb_defio logic already stores this offset, and can simply be expanded
to keep track of the page cache object, so the change then becomes
straightforward.
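Condensed from patch 3/3, the deferred work loop then reduces to the
following, with no folio lock and no page->index, mapping manipulation
anywhere:

  list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
          struct page *page = pageref->page;
          pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;

          mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page);
  }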
This series should have no functional change.
non-RFC:
* Kajtar kindly smoke-tested the defio side of this change and confirmed
that it appears to work correctly. I am therefore stripping the RFC and
putting it forward as a non-RFC series.
RFC v2:
* Updated Jaya Kumar's email on cc - the MAINTAINERS section is apparently
incorrect.
* Corrected rmap_walk_file() comment to refer to folios as per Matthew.
* Reference folio->mapping rather than folio_mapping(folio) in
rmap_walk_file() as per Matthew.
* Reference folio->index rather than folio_pgoff(folio) in rmap_walk_file()
as per Matthew.
* Renamed rmap_wrprotect_file_page() to mapping_wrprotect_page() as per
Matthew.
* Fixed kerneldoc and moved to implementation as per Matthew.
* Updated mapping_wrprotect_page() to take a struct page pointer as per
David.
* Removed folio lock when invoking mapping_wrprotect_page() in
fb_deferred_io_work() as per Matthew.
* Removed compound_nr() in fb_deferred_io_work() as per Matthew.
RFC v1:
https://lore.kernel.org/all/1e452b5b65f15a9a5d0c2ed3f5f812fdd1367603.1736352361.git.lorenzo.stoakes@oracle.com/
Lorenzo Stoakes (3):
mm: refactor rmap_walk_file() to separate out traversal logic
mm: provide mapping_wrprotect_page() function
fb_defio: do not use deprecated page->mapping, index fields
drivers/video/fbdev/core/fb_defio.c | 38 ++-----
include/linux/fb.h | 1 +
include/linux/rmap.h | 3 +
mm/rmap.c | 152 +++++++++++++++++++++++-----
4 files changed, 141 insertions(+), 53 deletions(-)
--
2.48.1
^ permalink raw reply [flat|nested] 13+ messages in thread* [PATCH 1/3] mm: refactor rmap_walk_file() to separate out traversal logic 2025-01-31 18:28 [PATCH 0/3] expose mapping wrprotect, fix fb_defio use Lorenzo Stoakes @ 2025-01-31 18:28 ` Lorenzo Stoakes 2025-01-31 18:28 ` [PATCH 2/3] mm: provide mapping_wrprotect_page() function Lorenzo Stoakes 2025-01-31 18:28 ` [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields Lorenzo Stoakes 2 siblings, 0 replies; 13+ messages in thread From: Lorenzo Stoakes @ 2025-01-31 18:28 UTC (permalink / raw) To: Andrew Morton Cc: Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal In order to permit the traversal of the reverse mapping at a specified mapping and offset rather than those specified by an input folio, we need to separate out the portion of the rmap file logic which deals with this traversal from those parts of the logic which interact with the folio. This patch achieves this by adding a new static __rmap_walk_file() function which rmap_walk_file() invokes. This function permits the ability to pass NULL folio, on the assumption that the caller has provided for this correctly in the callbacks specified in the rmap_walk_control object. Though it provides for this, and adds debug asserts to ensure that, should a folio be specified, these are equal to the mapping and offset specified in the folio, there should be no functional change as a result of this patch. The reason for adding this is to enable for future changes to permit users to be able to traverse mappings of userland-mapped kernel memory, write-protecting those mappings to enable page_mkwrite() or pfn_mkwrite() fault handlers to be retriggered on subsequent dirty. Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> --- mm/rmap.c | 79 +++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 53 insertions(+), 26 deletions(-) diff --git a/mm/rmap.c b/mm/rmap.c index c6c4d4ea29a7..a2ff20c2eccd 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -2653,35 +2653,37 @@ static void rmap_walk_anon(struct folio *folio, anon_vma_unlock_read(anon_vma); } -/* - * rmap_walk_file - do something to file page using the object-based rmap method - * @folio: the folio to be handled - * @rwc: control variable according to each walk type - * @locked: caller holds relevant rmap lock +/** + * __rmap_walk_file() - Traverse the reverse mapping for a file-backed mapping + * of a page mapped within a specified page cache object at a specified offset. * - * Find all the mappings of a folio using the mapping pointer and the vma chains - * contained in the address_space struct it points to. + * @folio: Either the folio whose mappings to traverse, or if NULL, + * the callbacks specified in @rwc will be configured such + * as to be able to look up mappings correctly. + * @mapping: The page cache object whose mapping VMAs we intend to + * traverse. If @folio is non-NULL, this should be equal to + * folio_mapping(folio). + * @pgoff_start: The offset within @mapping of the page which we are + * looking up. If @folio is non-NULL, this should be equal + * to folio_pgoff(folio). + * @nr_pages: The number of pages mapped by the mapping. If @folio is + * non-NULL, this should be equal to folio_nr_pages(folio). + * @rwc: The reverse mapping walk control object describing how + * the traversal should proceed. + * @locked: Is the @mapping already locked? If not, we acquire the + * lock. 
*/ -static void rmap_walk_file(struct folio *folio, - struct rmap_walk_control *rwc, bool locked) +static void __rmap_walk_file(struct folio *folio, struct address_space *mapping, + pgoff_t pgoff_start, unsigned long nr_pages, + struct rmap_walk_control *rwc, bool locked) { - struct address_space *mapping = folio_mapping(folio); - pgoff_t pgoff_start, pgoff_end; + pgoff_t pgoff_end = pgoff_start + nr_pages - 1; struct vm_area_struct *vma; - /* - * The page lock not only makes sure that page->mapping cannot - * suddenly be NULLified by truncation, it makes sure that the - * structure at mapping cannot be freed and reused yet, - * so we can safely take mapping->i_mmap_rwsem. - */ - VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); - - if (!mapping) - return; + VM_WARN_ON_FOLIO(folio && mapping != folio_mapping(folio), folio); + VM_WARN_ON_FOLIO(folio && pgoff_start != folio_pgoff(folio), folio); + VM_WARN_ON_FOLIO(folio && nr_pages != folio_nr_pages(folio), folio); - pgoff_start = folio_pgoff(folio); - pgoff_end = pgoff_start + folio_nr_pages(folio) - 1; if (!locked) { if (i_mmap_trylock_read(mapping)) goto lookup; @@ -2696,8 +2698,7 @@ static void rmap_walk_file(struct folio *folio, lookup: vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff_start, pgoff_end) { - unsigned long address = vma_address(vma, pgoff_start, - folio_nr_pages(folio)); + unsigned long address = vma_address(vma, pgoff_start, nr_pages); VM_BUG_ON_VMA(address == -EFAULT, vma); cond_resched(); @@ -2710,12 +2711,38 @@ static void rmap_walk_file(struct folio *folio, if (rwc->done && rwc->done(folio)) goto done; } - done: if (!locked) i_mmap_unlock_read(mapping); } +/* + * rmap_walk_file - do something to file page using the object-based rmap method + * @folio: the folio to be handled + * @rwc: control variable according to each walk type + * @locked: caller holds relevant rmap lock + * + * Find all the mappings of a folio using the mapping pointer and the vma chains + * contained in the address_space struct it points to. + */ +static void rmap_walk_file(struct folio *folio, + struct rmap_walk_control *rwc, bool locked) +{ + /* + * The folio lock not only makes sure that folio->mapping cannot + * suddenly be NULLified by truncation, it makes sure that the structure + * at mapping cannot be freed and reused yet, so we can safely take + * mapping->i_mmap_rwsem. + */ + VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio); + + if (!folio->mapping) + return; + + __rmap_walk_file(folio, folio->mapping, folio->index, + folio_nr_pages(folio), rwc, locked); +} + void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc) { if (unlikely(folio_test_ksm(folio))) -- 2.48.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 2/3] mm: provide mapping_wrprotect_page() function
2025-01-31 18:28 [PATCH 0/3] expose mapping wrprotect, fix fb_defio use Lorenzo Stoakes
2025-01-31 18:28 ` [PATCH 1/3] mm: refactor rmap_walk_file() to separate out traversal logic Lorenzo Stoakes
@ 2025-01-31 18:28 ` Lorenzo Stoakes
2025-02-03 15:49 ` Simona Vetter
2025-01-31 18:28 ` [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields Lorenzo Stoakes
2 siblings, 1 reply; 13+ messages in thread
From: Lorenzo Stoakes @ 2025-01-31 18:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel,
linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Kajtar Zsolt, Maira Canal
In the fb_defio video driver, page dirty state is used to determine when
frame buffer pages have been changed, allowing for batched, deferred I/O to
be performed for efficiency.
This implementation had only one means of doing so effectively - the use of
the folio_mkclean() function.
However, this use of the function is inappropriate, as the fb_defio
implementation allocates kernel memory to back the framebuffer, and is then
forced to set the page->index and page->mapping fields in order to permit
the folio_mkclean() rmap traversal to proceed correctly.
It is not correct to specify these fields on kernel-allocated memory, and
moreover, since these are not folios, page->index and page->mapping are
deprecated fields, soon to be removed.
We therefore need to provide a means by which we can correctly traverse the
reverse mapping and write-protect mappings for a page backing an
address_space page cache object at a given offset.
This patch provides this - mapping_wrprotect_page() allows this operation
to be performed for a specified address_space, offset and page, without
requiring a folio nor, of course, an inappropriate use of page->index and
page->mapping.
With this provided, we can subsequently adjust the fb_defio implementation
to make use of this function and avoid incorrect invocation of
folio_mkclean() and, more importantly, incorrect manipulation of the
page->index and page->mapping fields.
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> --- include/linux/rmap.h | 3 ++ mm/rmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 76 insertions(+) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 683a04088f3f..0bf5f64884df 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -739,6 +739,9 @@ unsigned long page_address_in_vma(const struct folio *folio, */ int folio_mkclean(struct folio *); +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, + unsigned long nr_pages, struct page *page); + int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff, struct vm_area_struct *vma); diff --git a/mm/rmap.c b/mm/rmap.c index a2ff20c2eccd..bb5a42d95c48 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1127,6 +1127,79 @@ int folio_mkclean(struct folio *folio) } EXPORT_SYMBOL_GPL(folio_mkclean); +struct wrprotect_file_state { + int cleaned; + pgoff_t pgoff; + unsigned long pfn; + unsigned long nr_pages; +}; + +static bool mapping_wrprotect_page_one(struct folio *folio, + struct vm_area_struct *vma, unsigned long address, void *arg) +{ + struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg; + struct page_vma_mapped_walk pvmw = { + .pfn = state->pfn, + .nr_pages = state->nr_pages, + .pgoff = state->pgoff, + .vma = vma, + .address = address, + .flags = PVMW_SYNC, + }; + + state->cleaned += page_vma_mkclean_one(&pvmw); + + return true; +} + +static void __rmap_walk_file(struct folio *folio, struct address_space *mapping, + pgoff_t pgoff_start, unsigned long nr_pages, + struct rmap_walk_control *rwc, bool locked); + +/** + * mapping_wrprotect_page() - Write protect all mappings of this page. + * + * @mapping: The mapping whose reverse mapping should be traversed. + * @pgoff: The page offset at which @page is mapped within @mapping. + * @nr_pages: The number of physically contiguous base pages spanned. + * @page: The page mapped in @mapping at @pgoff. + * + * Traverses the reverse mapping, finding all VMAs which contain a shared + * mapping of the single @page in @mapping at offset @pgoff and write-protecting + * the mappings. + * + * The page does not have to be a folio, but rather can be a kernel allocation + * that is mapped into userland. We therefore do not require that the page maps + * to a folio with a valid mapping or index field, rather these are specified in + * @mapping and @pgoff. + * + * Return: the number of write-protected PTEs, or an error. + */ +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, + unsigned long nr_pages, struct page *page) +{ + struct wrprotect_file_state state = { + .cleaned = 0, + .pgoff = pgoff, + .pfn = page_to_pfn(page), + .nr_pages = nr_pages, + }; + struct rmap_walk_control rwc = { + .arg = (void *)&state, + .rmap_one = mapping_wrprotect_page_one, + .invalid_vma = invalid_mkclean_vma, + }; + + if (!mapping) + return 0; + + __rmap_walk_file(/* folio = */NULL, mapping, pgoff, nr_pages, &rwc, + /* locked = */false); + + return state.cleaned; +} +EXPORT_SYMBOL_GPL(mapping_wrprotect_page); + /** * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of * [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff) -- 2.48.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 2/3] mm: provide mapping_wrprotect_page() function 2025-01-31 18:28 ` [PATCH 2/3] mm: provide mapping_wrprotect_page() function Lorenzo Stoakes @ 2025-02-03 15:49 ` Simona Vetter 2025-02-03 16:30 ` Lorenzo Stoakes 2025-02-04 5:36 ` Christoph Hellwig 0 siblings, 2 replies; 13+ messages in thread From: Simona Vetter @ 2025-02-03 15:49 UTC (permalink / raw) To: Lorenzo Stoakes Cc: Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal On Fri, Jan 31, 2025 at 06:28:57PM +0000, Lorenzo Stoakes wrote: > in the fb_defio video driver, page dirty state is used to determine when > frame buffer pages have been changed, allowing for batched, deferred I/O to > be performed for efficiency. > > This implementation had only one means of doing so effectively - the use of > the folio_mkclean() function. > > However, this use of the function is inappropriate, as the fb_defio > implementation allocates kernel memory to back the framebuffer, and then is > forced to specified page->index, mapping fields in order to permit the > folio_mkclean() rmap traversal to proceed correctly. > > It is not correct to specify these fields on kernel-allocated memory, and > moreover since these are not folios, page->index, mapping are deprecated > fields, soon to be removed. > > We therefore need to provide a means by which we can correctly traverse the > reverse mapping and write-protect mappings for a page backing an > address_space page cache object at a given offset. > > This patch provides this - mapping_wrprotect_page() allows for this > operation to be performed for a specified address_space, offset and page, > without requiring a folio nor, of course, an inappropriate use of > page->index, mapping. > > With this provided, we can subequently adjust the fb_defio implementation > to make use of this function and avoid incorrect invocation of > folio_mkclean() and more importantly, incorrect manipulation of > page->index, mapping fields. 
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > --- > include/linux/rmap.h | 3 ++ > mm/rmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++ > 2 files changed, 76 insertions(+) > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h > index 683a04088f3f..0bf5f64884df 100644 > --- a/include/linux/rmap.h > +++ b/include/linux/rmap.h > @@ -739,6 +739,9 @@ unsigned long page_address_in_vma(const struct folio *folio, > */ > int folio_mkclean(struct folio *); > > +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, > + unsigned long nr_pages, struct page *page); > + > int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff, > struct vm_area_struct *vma); > > diff --git a/mm/rmap.c b/mm/rmap.c > index a2ff20c2eccd..bb5a42d95c48 100644 > --- a/mm/rmap.c > +++ b/mm/rmap.c > @@ -1127,6 +1127,79 @@ int folio_mkclean(struct folio *folio) > } > EXPORT_SYMBOL_GPL(folio_mkclean); > > +struct wrprotect_file_state { > + int cleaned; > + pgoff_t pgoff; > + unsigned long pfn; > + unsigned long nr_pages; > +}; > + > +static bool mapping_wrprotect_page_one(struct folio *folio, > + struct vm_area_struct *vma, unsigned long address, void *arg) > +{ > + struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg; > + struct page_vma_mapped_walk pvmw = { > + .pfn = state->pfn, > + .nr_pages = state->nr_pages, > + .pgoff = state->pgoff, > + .vma = vma, > + .address = address, > + .flags = PVMW_SYNC, > + }; > + > + state->cleaned += page_vma_mkclean_one(&pvmw); > + > + return true; > +} > + > +static void __rmap_walk_file(struct folio *folio, struct address_space *mapping, > + pgoff_t pgoff_start, unsigned long nr_pages, > + struct rmap_walk_control *rwc, bool locked); > + > +/** > + * mapping_wrprotect_page() - Write protect all mappings of this page. > + * > + * @mapping: The mapping whose reverse mapping should be traversed. > + * @pgoff: The page offset at which @page is mapped within @mapping. > + * @nr_pages: The number of physically contiguous base pages spanned. > + * @page: The page mapped in @mapping at @pgoff. > + * > + * Traverses the reverse mapping, finding all VMAs which contain a shared > + * mapping of the single @page in @mapping at offset @pgoff and write-protecting > + * the mappings. > + * > + * The page does not have to be a folio, but rather can be a kernel allocation > + * that is mapped into userland. We therefore do not require that the page maps > + * to a folio with a valid mapping or index field, rather these are specified in > + * @mapping and @pgoff. > + * > + * Return: the number of write-protected PTEs, or an error. > + */ > +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, > + unsigned long nr_pages, struct page *page) > +{ > + struct wrprotect_file_state state = { > + .cleaned = 0, > + .pgoff = pgoff, > + .pfn = page_to_pfn(page), Could we go one step further and entirely drop the struct page? Similar to unmap_mapping_range for VM_SPECIAL mappings, except it only updates the write protection. The reason is that ideally we'd like fbdev defio to entirely get rid of any struct page usage, because with some dma_alloc() memory regions there's simply no struct page for them (it's a carveout). See e.g. Sa498d4d06d6 ("drm/fbdev-dma: Only install deferred I/O if necessary") for some of the pain this has caused. So entirely struct page less way to write protect a pfn would be best. And it doesn't look like you need the page here at all? 
Cheers, Sima > + .nr_pages = nr_pages, > + }; > + struct rmap_walk_control rwc = { > + .arg = (void *)&state, > + .rmap_one = mapping_wrprotect_page_one, > + .invalid_vma = invalid_mkclean_vma, > + }; > + > + if (!mapping) > + return 0; > + > + __rmap_walk_file(/* folio = */NULL, mapping, pgoff, nr_pages, &rwc, > + /* locked = */false); > + > + return state.cleaned; > +} > +EXPORT_SYMBOL_GPL(mapping_wrprotect_page); > + > /** > * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of > * [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff) > -- > 2.48.1 > -- Simona Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 2/3] mm: provide mapping_wrprotect_page() function 2025-02-03 15:49 ` Simona Vetter @ 2025-02-03 16:30 ` Lorenzo Stoakes 2025-02-04 10:19 ` Simona Vetter 2025-02-04 5:36 ` Christoph Hellwig 1 sibling, 1 reply; 13+ messages in thread From: Lorenzo Stoakes @ 2025-02-03 16:30 UTC (permalink / raw) To: Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal On Mon, Feb 03, 2025 at 04:49:34PM +0100, Simona Vetter wrote: > On Fri, Jan 31, 2025 at 06:28:57PM +0000, Lorenzo Stoakes wrote: > > in the fb_defio video driver, page dirty state is used to determine when > > frame buffer pages have been changed, allowing for batched, deferred I/O to > > be performed for efficiency. > > > > This implementation had only one means of doing so effectively - the use of > > the folio_mkclean() function. > > > > However, this use of the function is inappropriate, as the fb_defio > > implementation allocates kernel memory to back the framebuffer, and then is > > forced to specified page->index, mapping fields in order to permit the > > folio_mkclean() rmap traversal to proceed correctly. > > > > It is not correct to specify these fields on kernel-allocated memory, and > > moreover since these are not folios, page->index, mapping are deprecated > > fields, soon to be removed. > > > > We therefore need to provide a means by which we can correctly traverse the > > reverse mapping and write-protect mappings for a page backing an > > address_space page cache object at a given offset. > > > > This patch provides this - mapping_wrprotect_page() allows for this > > operation to be performed for a specified address_space, offset and page, > > without requiring a folio nor, of course, an inappropriate use of > > page->index, mapping. > > > > With this provided, we can subequently adjust the fb_defio implementation > > to make use of this function and avoid incorrect invocation of > > folio_mkclean() and more importantly, incorrect manipulation of > > page->index, mapping fields. 
> > > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > > --- > > include/linux/rmap.h | 3 ++ > > mm/rmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++ > > 2 files changed, 76 insertions(+) > > > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h > > index 683a04088f3f..0bf5f64884df 100644 > > --- a/include/linux/rmap.h > > +++ b/include/linux/rmap.h > > @@ -739,6 +739,9 @@ unsigned long page_address_in_vma(const struct folio *folio, > > */ > > int folio_mkclean(struct folio *); > > > > +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, > > + unsigned long nr_pages, struct page *page); > > + > > int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff, > > struct vm_area_struct *vma); > > > > diff --git a/mm/rmap.c b/mm/rmap.c > > index a2ff20c2eccd..bb5a42d95c48 100644 > > --- a/mm/rmap.c > > +++ b/mm/rmap.c > > @@ -1127,6 +1127,79 @@ int folio_mkclean(struct folio *folio) > > } > > EXPORT_SYMBOL_GPL(folio_mkclean); > > > > +struct wrprotect_file_state { > > + int cleaned; > > + pgoff_t pgoff; > > + unsigned long pfn; > > + unsigned long nr_pages; > > +}; > > + > > +static bool mapping_wrprotect_page_one(struct folio *folio, > > + struct vm_area_struct *vma, unsigned long address, void *arg) > > +{ > > + struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg; > > + struct page_vma_mapped_walk pvmw = { > > + .pfn = state->pfn, > > + .nr_pages = state->nr_pages, > > + .pgoff = state->pgoff, > > + .vma = vma, > > + .address = address, > > + .flags = PVMW_SYNC, > > + }; > > + > > + state->cleaned += page_vma_mkclean_one(&pvmw); > > + > > + return true; > > +} > > + > > +static void __rmap_walk_file(struct folio *folio, struct address_space *mapping, > > + pgoff_t pgoff_start, unsigned long nr_pages, > > + struct rmap_walk_control *rwc, bool locked); > > + > > +/** > > + * mapping_wrprotect_page() - Write protect all mappings of this page. > > + * > > + * @mapping: The mapping whose reverse mapping should be traversed. > > + * @pgoff: The page offset at which @page is mapped within @mapping. > > + * @nr_pages: The number of physically contiguous base pages spanned. > > + * @page: The page mapped in @mapping at @pgoff. > > + * > > + * Traverses the reverse mapping, finding all VMAs which contain a shared > > + * mapping of the single @page in @mapping at offset @pgoff and write-protecting > > + * the mappings. > > + * > > + * The page does not have to be a folio, but rather can be a kernel allocation > > + * that is mapped into userland. We therefore do not require that the page maps > > + * to a folio with a valid mapping or index field, rather these are specified in > > + * @mapping and @pgoff. > > + * > > + * Return: the number of write-protected PTEs, or an error. > > + */ > > +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, > > + unsigned long nr_pages, struct page *page) > > +{ > > + struct wrprotect_file_state state = { > > + .cleaned = 0, > > + .pgoff = pgoff, > > + .pfn = page_to_pfn(page), > > Could we go one step further and entirely drop the struct page? Similar to > unmap_mapping_range for VM_SPECIAL mappings, except it only updates the > write protection. The reason is that ideally we'd like fbdev defio to > entirely get rid of any struct page usage, because with some dma_alloc() > memory regions there's simply no struct page for them (it's a carveout). > See e.g. 
Sa498d4d06d6 ("drm/fbdev-dma: Only install deferred I/O if > necessary") for some of the pain this has caused. > > So entirely struct page less way to write protect a pfn would be best. And > it doesn't look like you need the page here at all? In the original version [1] we did indeed take a PFN, so this shouldn't be a problem to change. Since we make it possible here to explicitly reference the address_space object mapping the range, and from that can find all the VMAs that map the page range [pgoff, pgoff + nr_pages), I don't think we do need to think about a struct page here at all. The defio code does seem to have some questionable assumptions in place, or at least ones I couldn't explain away re: attempting to folio-lock (the non-folios...), so there'd need to be changes on that side, which I suggest would probably be best for a follow-up series given this one's urgency. But I'm more than happy to make this interface work with that by doing another revision where we export PFN only, I think something like: int mapping_wrprotect_range(struct address_space *mapping, pgoff_t pgoff, unsigned long pfn, unsigned long nr_pages); Should work? [1]:https://lore.kernel.org/all/cover.1736352361.git.lorenzo.stoakes@oracle.com/ > > Cheers, Sima Thanks! > > > > + .nr_pages = nr_pages, > > + }; > > + struct rmap_walk_control rwc = { > > + .arg = (void *)&state, > > + .rmap_one = mapping_wrprotect_page_one, > > + .invalid_vma = invalid_mkclean_vma, > > + }; > > + > > + if (!mapping) > > + return 0; > > + > > + __rmap_walk_file(/* folio = */NULL, mapping, pgoff, nr_pages, &rwc, > > + /* locked = */false); > > + > > + return state.cleaned; > > +} > > +EXPORT_SYMBOL_GPL(mapping_wrprotect_page); > > + > > /** > > * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of > > * [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff) > > -- > > 2.48.1 > > > > -- > Simona Vetter > Software Engineer, Intel Corporation > http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 13+ messages in thread
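As a sketch only of where this might lead: with the proposed pfn-based
variant, the defio deferred work loop could look something like the below.
mapping_wrprotect_range() is merely proposed at this point in the thread,
and the pfn field on the pageref is hypothetical - today pagerefs store a
struct page pointer:

  /* Deferred work, with no struct page usage at all. */
  list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
          pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;

          /* pageref->pfn: hypothetical field replacing pageref->page. */
          mapping_wrprotect_range(fbdefio->mapping, pgoff,
                                  pageref->pfn, 1);
  }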
* Re: [PATCH 2/3] mm: provide mapping_wrprotect_page() function 2025-02-03 16:30 ` Lorenzo Stoakes @ 2025-02-04 10:19 ` Simona Vetter 0 siblings, 0 replies; 13+ messages in thread From: Simona Vetter @ 2025-02-04 10:19 UTC (permalink / raw) To: Lorenzo Stoakes Cc: Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal On Mon, Feb 03, 2025 at 04:30:04PM +0000, Lorenzo Stoakes wrote: > On Mon, Feb 03, 2025 at 04:49:34PM +0100, Simona Vetter wrote: > > On Fri, Jan 31, 2025 at 06:28:57PM +0000, Lorenzo Stoakes wrote: > > > in the fb_defio video driver, page dirty state is used to determine when > > > frame buffer pages have been changed, allowing for batched, deferred I/O to > > > be performed for efficiency. > > > > > > This implementation had only one means of doing so effectively - the use of > > > the folio_mkclean() function. > > > > > > However, this use of the function is inappropriate, as the fb_defio > > > implementation allocates kernel memory to back the framebuffer, and then is > > > forced to specified page->index, mapping fields in order to permit the > > > folio_mkclean() rmap traversal to proceed correctly. > > > > > > It is not correct to specify these fields on kernel-allocated memory, and > > > moreover since these are not folios, page->index, mapping are deprecated > > > fields, soon to be removed. > > > > > > We therefore need to provide a means by which we can correctly traverse the > > > reverse mapping and write-protect mappings for a page backing an > > > address_space page cache object at a given offset. > > > > > > This patch provides this - mapping_wrprotect_page() allows for this > > > operation to be performed for a specified address_space, offset and page, > > > without requiring a folio nor, of course, an inappropriate use of > > > page->index, mapping. > > > > > > With this provided, we can subequently adjust the fb_defio implementation > > > to make use of this function and avoid incorrect invocation of > > > folio_mkclean() and more importantly, incorrect manipulation of > > > page->index, mapping fields. 
> > > > > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > > > --- > > > include/linux/rmap.h | 3 ++ > > > mm/rmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++ > > > 2 files changed, 76 insertions(+) > > > > > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h > > > index 683a04088f3f..0bf5f64884df 100644 > > > --- a/include/linux/rmap.h > > > +++ b/include/linux/rmap.h > > > @@ -739,6 +739,9 @@ unsigned long page_address_in_vma(const struct folio *folio, > > > */ > > > int folio_mkclean(struct folio *); > > > > > > +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, > > > + unsigned long nr_pages, struct page *page); > > > + > > > int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff, > > > struct vm_area_struct *vma); > > > > > > diff --git a/mm/rmap.c b/mm/rmap.c > > > index a2ff20c2eccd..bb5a42d95c48 100644 > > > --- a/mm/rmap.c > > > +++ b/mm/rmap.c > > > @@ -1127,6 +1127,79 @@ int folio_mkclean(struct folio *folio) > > > } > > > EXPORT_SYMBOL_GPL(folio_mkclean); > > > > > > +struct wrprotect_file_state { > > > + int cleaned; > > > + pgoff_t pgoff; > > > + unsigned long pfn; > > > + unsigned long nr_pages; > > > +}; > > > + > > > +static bool mapping_wrprotect_page_one(struct folio *folio, > > > + struct vm_area_struct *vma, unsigned long address, void *arg) > > > +{ > > > + struct wrprotect_file_state *state = (struct wrprotect_file_state *)arg; > > > + struct page_vma_mapped_walk pvmw = { > > > + .pfn = state->pfn, > > > + .nr_pages = state->nr_pages, > > > + .pgoff = state->pgoff, > > > + .vma = vma, > > > + .address = address, > > > + .flags = PVMW_SYNC, > > > + }; > > > + > > > + state->cleaned += page_vma_mkclean_one(&pvmw); > > > + > > > + return true; > > > +} > > > + > > > +static void __rmap_walk_file(struct folio *folio, struct address_space *mapping, > > > + pgoff_t pgoff_start, unsigned long nr_pages, > > > + struct rmap_walk_control *rwc, bool locked); > > > + > > > +/** > > > + * mapping_wrprotect_page() - Write protect all mappings of this page. > > > + * > > > + * @mapping: The mapping whose reverse mapping should be traversed. > > > + * @pgoff: The page offset at which @page is mapped within @mapping. > > > + * @nr_pages: The number of physically contiguous base pages spanned. > > > + * @page: The page mapped in @mapping at @pgoff. > > > + * > > > + * Traverses the reverse mapping, finding all VMAs which contain a shared > > > + * mapping of the single @page in @mapping at offset @pgoff and write-protecting > > > + * the mappings. > > > + * > > > + * The page does not have to be a folio, but rather can be a kernel allocation > > > + * that is mapped into userland. We therefore do not require that the page maps > > > + * to a folio with a valid mapping or index field, rather these are specified in > > > + * @mapping and @pgoff. > > > + * > > > + * Return: the number of write-protected PTEs, or an error. > > > + */ > > > +int mapping_wrprotect_page(struct address_space *mapping, pgoff_t pgoff, > > > + unsigned long nr_pages, struct page *page) > > > +{ > > > + struct wrprotect_file_state state = { > > > + .cleaned = 0, > > > + .pgoff = pgoff, > > > + .pfn = page_to_pfn(page), > > > > Could we go one step further and entirely drop the struct page? Similar to > > unmap_mapping_range for VM_SPECIAL mappings, except it only updates the > > write protection. 
The reason is that ideally we'd like fbdev defio to > > entirely get rid of any struct page usage, because with some dma_alloc() > > memory regions there's simply no struct page for them (it's a carveout). > > See e.g. Sa498d4d06d6 ("drm/fbdev-dma: Only install deferred I/O if > > necessary") for some of the pain this has caused. > > > > So entirely struct page less way to write protect a pfn would be best. And > > it doesn't look like you need the page here at all? > > In the original version [1] we did indeed take a PFN, so this shouldn't be > a problem to change. > > Since we make it possible here to explicitly reference the address_space > object mapping the range, and from that can find all the VMAs that map the > page range [pgoff, pgoff + nr_pages), I don't think we do need to think > about a struct page here at all. > > The defio code does seem to have some questionable assumptions in place, or > at least ones I couldn't explain away re: attempting to folio-lock (the > non-folios...), so there'd need to be changes on that side, which I suggest > would probably be best for a follow-up series given this one's urgency. Yeah there's a bunch more things we need to do to get there. It was the lack of a pfn-based core mm function that stopped us from doing that thus far, plus also fbdev defio being very low priority. But it would definitely avoid a bunch of corner cases and duplication in fbdev emulation code in drivers/gpu/drm. > But I'm more than happy to make this interface work with that by doing > another revision where we export PFN only, I think something like: > > int mapping_wrprotect_range(struct address_space *mapping, pgoff_t pgoff, > unsigned long pfn, unsigned long nr_pages); > > Should work? > > [1]:https://lore.kernel.org/all/cover.1736352361.git.lorenzo.stoakes@oracle.com/ Yup that looks like the thing we'll need to wean defio of all that questionable folio/page wrangling. But like you say, should be easy to add/update when we get there. Thanks, Sima > > > > > Cheers, Sima > > Thanks! > > > > > > > > + .nr_pages = nr_pages, > > > + }; > > > + struct rmap_walk_control rwc = { > > > + .arg = (void *)&state, > > > + .rmap_one = mapping_wrprotect_page_one, > > > + .invalid_vma = invalid_mkclean_vma, > > > + }; > > > + > > > + if (!mapping) > > > + return 0; > > > + > > > + __rmap_walk_file(/* folio = */NULL, mapping, pgoff, nr_pages, &rwc, > > > + /* locked = */false); > > > + > > > + return state.cleaned; > > > +} > > > +EXPORT_SYMBOL_GPL(mapping_wrprotect_page); > > > + > > > /** > > > * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of > > > * [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff) > > > -- > > > 2.48.1 > > > > > > > -- > > Simona Vetter > > Software Engineer, Intel Corporation > > http://blog.ffwll.ch -- Simona Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 2/3] mm: provide mapping_wrprotect_page() function
2025-02-03 15:49 ` Simona Vetter
2025-02-03 16:30 ` Lorenzo Stoakes
@ 2025-02-04 5:36 ` Christoph Hellwig
2025-02-04 8:16 ` Thomas Zimmermann
1 sibling, 1 reply; 13+ messages in thread
From: Christoph Hellwig @ 2025-02-04 5:36 UTC (permalink / raw)
To: Lorenzo Stoakes, Andrew Morton, Jaya Kumar, Simona Vetter,
Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm,
Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal,
Thomas Zimmermann
Hi Simona,
On Mon, Feb 03, 2025 at 04:49:34PM +0100, Simona Vetter wrote:
>
> Could we go one step further and entirely drop the struct page? Similar to
> unmap_mapping_range for VM_SPECIAL mappings, except it only updates the
> write protection. The reason is that ideally we'd like fbdev defio to
> entirely get rid of any struct page usage, because with some dma_alloc()
> memory regions there's simply no struct page for them (it's a carveout).
Umm, for dma_alloc* where * is not _pages you never can get a page or
PFN from them. They are black boxes and drivers must not attempt to
translate them into either a page or PFN or things will go wrong.
Only the kernel virtual address and dma_address may be used.
> See e.g. Sa498d4d06d6 ("drm/fbdev-dma: Only install deferred I/O if
> necessary") for some of the pain this has caused.
The commit hash is corrupted, I guess this is 5a498d4d06d6 as the
subject line matches. And that commit (just like the code it is trying
to fix) is completely broken as it violates the above.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 2/3] mm: provide mapping_wrprotect_page() function 2025-02-04 5:36 ` Christoph Hellwig @ 2025-02-04 8:16 ` Thomas Zimmermann 0 siblings, 0 replies; 13+ messages in thread From: Thomas Zimmermann @ 2025-02-04 8:16 UTC (permalink / raw) To: Christoph Hellwig, Lorenzo Stoakes, Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal Hi Am 04.02.25 um 06:36 schrieb Christoph Hellwig: > Hi Simona, > > > On Mon, Feb 03, 2025 at 04:49:34PM +0100, Simona Vetter wrote: >> Could we go one step further and entirely drop the struct page? Similar to >> unmap_mapping_range for VM_SPECIAL mappings, except it only updates the >> write protection. The reason is that ideally we'd like fbdev defio to >> entirely get rid of any struct page usage, because with some dma_alloc() >> memory regions there's simply no struct page for them (it's a carveout). > Umm, for dma_alloc* where * is not _pages you never can get a page or > PFN form them. They are block boxes and drivers must not attempt to > translated them into either a page or PFN or things will go wrong. > Only the kernel virtual address and dma_address may be used. > >> See e.g. Sa498d4d06d6 ("drm/fbdev-dma: Only install deferred I/O if >> necessary") for some of the pain this has caused. > The commit hash is corrupted, I guess this is 5a498d4d06d6 as the > subject line matches. And that commit (just like the code it is trying > to fix) is completely broken as it violates the above. As the author of these commits, I have no doubt that the code would break on some systems. It works in practice for the current use cases though. Is it possible to create something that tracks framebuffer writes without requiring pages? Best regards Thomas > > -- -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Frankenstrasse 146, 90461 Nuernberg, Germany GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman HRB 36809 (AG Nuernberg) ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields
2025-01-31 18:28 [PATCH 0/3] expose mapping wrprotect, fix fb_defio use Lorenzo Stoakes
2025-01-31 18:28 ` [PATCH 1/3] mm: refactor rmap_walk_file() to separate out traversal logic Lorenzo Stoakes
2025-01-31 18:28 ` [PATCH 2/3] mm: provide mapping_wrprotect_page() function Lorenzo Stoakes
@ 2025-01-31 18:28 ` Lorenzo Stoakes
2025-02-01 17:06 ` Lorenzo Stoakes
2025-02-04 8:21 ` Thomas Zimmermann
2 siblings, 2 replies; 13+ messages in thread
From: Lorenzo Stoakes @ 2025-01-31 18:28 UTC (permalink / raw)
To: Andrew Morton
Cc: Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel,
linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand,
Kajtar Zsolt, Maira Canal
With the introduction of mapping_wrprotect_page() there is no need to use
folio_mkclean() in order to write-protect mappings of frame buffer pages,
and therefore no need to inappropriately set the page->index and
page->mapping fields on kernel-allocated memory to permit this operation.
Instead, store the pointer to the page cache object for the mapped driver
in the fb_deferred_io object, and use the already stored page offset from
the pageref object to look up mappings in order to write-protect them.
This is justified: for the page objects to store a mapping pointer at the
point of assignment of pages, they must all reference the same underlying
address_space object. Since the lifetime of the pagerefs is also the
lifetime of the fb_deferred_io object, storing the pointer here makes
sense.
This eliminates the need for all of the logic around setting and
maintaining page->index and page->mapping, which we remove.
This eliminates the use of folio_mkclean() entirely but otherwise should
have no functional change.
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Tested-by: Kajtar Zsolt <soci@c64.rulez.org>
---
drivers/video/fbdev/core/fb_defio.c | 38 +++++++++--------------------
include/linux/fb.h | 1 +
2 files changed, 12 insertions(+), 27 deletions(-)
diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index 65363df8e81b..b9bab27a8c0f 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -69,14 +69,6 @@ static struct fb_deferred_io_pageref *fb_deferred_io_pageref_lookup(struct fb_in
         return pageref;
 }
-static void fb_deferred_io_pageref_clear(struct fb_deferred_io_pageref *pageref)
-{
-        struct page *page = pageref->page;
-
-        if (page)
-                page->mapping = NULL;
-}
-
 static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info,
                                                                  unsigned long offset,
                                                                  struct page *page)
@@ -140,13 +132,10 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
         if (!page)
                 return VM_FAULT_SIGBUS;
-        if (vmf->vma->vm_file)
-                page->mapping = vmf->vma->vm_file->f_mapping;
-        else
+        if (!vmf->vma->vm_file)
                 printk(KERN_ERR "no mapping available\n");
-        BUG_ON(!page->mapping);
-        page->index = vmf->pgoff; /* for folio_mkclean() */
+        BUG_ON(!info->fbdefio->mapping);
         vmf->page = page;
         return 0;
@@ -194,9 +183,9 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
         /*
          * We want the page to remain locked from ->page_mkwrite until
-         * the PTE is marked dirty to avoid folio_mkclean() being called
-         * before the PTE is updated, which would leave the page ignored
-         * by defio.
+         * the PTE is marked dirty to avoid mapping_wrprotect_page()
+         * being called before the PTE is updated, which would leave
+         * the page ignored by defio.
          * Do this by locking the page here and informing the caller
          * about it with VM_FAULT_LOCKED.
          */
@@ -274,14 +263,13 @@ static void fb_deferred_io_work(struct work_struct *work)
         struct fb_deferred_io_pageref *pageref, *next;
         struct fb_deferred_io *fbdefio = info->fbdefio;
-        /* here we mkclean the pages, then do all deferred IO */
+        /* here we wrprotect the page's mappings, then do all deferred IO. */
         mutex_lock(&fbdefio->lock);
         list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
-                struct folio *folio = page_folio(pageref->page);
+                struct page *page = pageref->page;
+                pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;
-                folio_lock(folio);
-                folio_mkclean(folio);
-                folio_unlock(folio);
+                mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page);
         }
         /* driver's callback with pagereflist */
@@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info,
 {
         struct fb_deferred_io *fbdefio = info->fbdefio;
+        fbdefio->mapping = file->f_mapping;
         file->f_mapping->a_ops = &fb_deferred_io_aops;
         fbdefio->open_count++;
 }
@@ -344,13 +333,7 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_open);
 static void fb_deferred_io_lastclose(struct fb_info *info)
 {
-        unsigned long i;
-
         flush_delayed_work(&info->deferred_work);
-
-        /* clear out the mapping that we setup */
-        for (i = 0; i < info->npagerefs; ++i)
-                fb_deferred_io_pageref_clear(&info->pagerefs[i]);
 }
 void fb_deferred_io_release(struct fb_info *info)
@@ -370,5 +353,6 @@ void fb_deferred_io_cleanup(struct fb_info *info)
         kvfree(info->pagerefs);
         mutex_destroy(&fbdefio->lock);
+        fbdefio->mapping = NULL;
 }
 EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
diff --git a/include/linux/fb.h b/include/linux/fb.h
index 5ba187e08cf7..cd653862ab99 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -225,6 +225,7 @@ struct fb_deferred_io {
         int open_count; /* number of opened files; protected by fb_info lock */
         struct mutex lock; /* mutex that protects the pageref list */
         struct list_head pagereflist; /* list of pagerefs for touched pages */
+        struct address_space *mapping; /* page cache object for fb device */
         /* callback */
         struct page *(*get_page)(struct fb_info *info, unsigned long offset);
         void (*deferred_io)(struct fb_info *info, struct list_head *pagelist);
--
2.48.1
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields 2025-01-31 18:28 ` [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields Lorenzo Stoakes @ 2025-02-01 17:06 ` Lorenzo Stoakes 2025-02-04 8:21 ` Thomas Zimmermann 1 sibling, 0 replies; 13+ messages in thread From: Lorenzo Stoakes @ 2025-02-01 17:06 UTC (permalink / raw) To: Andrew Morton Cc: Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal (This time sent in reply to the correct series...) On Fri, Jan 31, 2025 at 06:28:58PM +0000, Lorenzo Stoakes wrote: > With the introduction of mapping_wrprotect_page() there is no need to use > folio_mkclean() in order to write-protect mappings of frame buffer pages, > and therefore no need to inappropriately set kernel-allocated page->index, > mapping fields to permit this operation. > > Instead, store the pointer to the page cache object for the mapped driver > in the fb_deferred_io object, and use the already stored page offset from > the pageref object to look up mappings in order to write-protect them. > > This is justified, as for the page objects to store a mapping pointer at > the point of assignment of pages, they must all reference the same > underlying address_space object. Since the life time of the pagerefs is > also the lifetime of the fb_deferred_io object, storing the pointer here > makes snese. > > This eliminates the need for all of the logic around setting and > maintaining page->index,mapping which we remove. > > This eliminates the use of folio_mkclean() entirely but otherwise should > have no functional change. > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > Tested-by: Kajtar Zsolt <soci@c64.rulez.org> Andrew - Sorry to be a pain but could you please apply the attached fix-patch to avoid build bot failures when randconfig generates invalid configurations. The defio mechanism entirely relies upon the page faulting mechanism, and thus an MMU to function. This was previously masked, because folio_mkclean() happens to have a !CONFIG_MMU stub. It really doesn't make sense to apply such a stub for mapping_wrprotect_page() for architectures without an MMU. Instead, correctly express the actual dependency in Kconfig, which should prevent randconfig from doing the wrong thing and also helps document this fact about defio. Thanks! ----8<---- From 32abcfbb8dea92d9a8a99e6a86f45a1823a75c59 Mon Sep 17 00:00:00 2001 From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Date: Sat, 1 Feb 2025 16:56:02 +0000 Subject: [PATCH] fbdev: have CONFIG_FB_DEFERRED_IO depend on CONFIG_MMU Frame buffer deferred I/O is entirely reliant on the page faulting mechanism (and thus, an MMU) to function. Express this dependency in the Kconfig, as otherwise randconfig could generate invalid configurations resulting in build errors. 
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202502020030.MnEJ847Z-lkp@intel.com/ --- drivers/video/fbdev/core/Kconfig | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/video/fbdev/core/Kconfig b/drivers/video/fbdev/core/Kconfig index d554d8c543d4..154804914680 100644 --- a/drivers/video/fbdev/core/Kconfig +++ b/drivers/video/fbdev/core/Kconfig @@ -135,6 +135,7 @@ config FB_SYSMEM_FOPS config FB_DEFERRED_IO bool depends on FB_CORE + depends on MMU config FB_DMAMEM_HELPERS bool -- 2.48.1 ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields 2025-01-31 18:28 ` [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields Lorenzo Stoakes 2025-02-01 17:06 ` Lorenzo Stoakes @ 2025-02-04 8:21 ` Thomas Zimmermann 2025-02-04 8:37 ` Lorenzo Stoakes 1 sibling, 1 reply; 13+ messages in thread From: Thomas Zimmermann @ 2025-02-04 8:21 UTC (permalink / raw) To: Lorenzo Stoakes, Andrew Morton Cc: Jaya Kumar, Simona Vetter, Helge Deller, linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox, David Hildenbrand, Kajtar Zsolt, Maira Canal Hi Am 31.01.25 um 19:28 schrieb Lorenzo Stoakes: > With the introduction of mapping_wrprotect_page() there is no need to use > folio_mkclean() in order to write-protect mappings of frame buffer pages, > and therefore no need to inappropriately set kernel-allocated page->index, > mapping fields to permit this operation. > > Instead, store the pointer to the page cache object for the mapped driver > in the fb_deferred_io object, and use the already stored page offset from > the pageref object to look up mappings in order to write-protect them. > > This is justified, as for the page objects to store a mapping pointer at > the point of assignment of pages, they must all reference the same > underlying address_space object. Since the life time of the pagerefs is > also the lifetime of the fb_deferred_io object, storing the pointer here > makes snese. > > This eliminates the need for all of the logic around setting and > maintaining page->index,mapping which we remove. > > This eliminates the use of folio_mkclean() entirely but otherwise should > have no functional change. > > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> > Tested-by: Kajtar Zsolt <soci@c64.rulez.org> > --- > drivers/video/fbdev/core/fb_defio.c | 38 +++++++++-------------------- > include/linux/fb.h | 1 + > 2 files changed, 12 insertions(+), 27 deletions(-) > > diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c > index 65363df8e81b..b9bab27a8c0f 100644 > --- a/drivers/video/fbdev/core/fb_defio.c > +++ b/drivers/video/fbdev/core/fb_defio.c > @@ -69,14 +69,6 @@ static struct fb_deferred_io_pageref *fb_deferred_io_pageref_lookup(struct fb_in > return pageref; > } > > -static void fb_deferred_io_pageref_clear(struct fb_deferred_io_pageref *pageref) > -{ > - struct page *page = pageref->page; > - > - if (page) > - page->mapping = NULL; > -} > - > static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info, > unsigned long offset, > struct page *page) > @@ -140,13 +132,10 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf) > if (!page) > return VM_FAULT_SIGBUS; > > - if (vmf->vma->vm_file) > - page->mapping = vmf->vma->vm_file->f_mapping; > - else > + if (!vmf->vma->vm_file) > printk(KERN_ERR "no mapping available\n"); fb_err() here. > > - BUG_ON(!page->mapping); > - page->index = vmf->pgoff; /* for folio_mkclean() */ > + BUG_ON(!info->fbdefio->mapping); > > vmf->page = page; > return 0; > @@ -194,9 +183,9 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long > > /* > * We want the page to remain locked from ->page_mkwrite until > - * the PTE is marked dirty to avoid folio_mkclean() being called > - * before the PTE is updated, which would leave the page ignored > - * by defio. > + * the PTE is marked dirty to avoid mapping_wrprotect_page() > + * being called before the PTE is updated, which would leave > + * the page ignored by defio. 
> * Do this by locking the page here and informing the caller > * about it with VM_FAULT_LOCKED. > */ > @@ -274,14 +263,13 @@ static void fb_deferred_io_work(struct work_struct *work) > struct fb_deferred_io_pageref *pageref, *next; > struct fb_deferred_io *fbdefio = info->fbdefio; > > - /* here we mkclean the pages, then do all deferred IO */ > + /* here we wrprotect the page's mappings, then do all deferred IO. */ > mutex_lock(&fbdefio->lock); > list_for_each_entry(pageref, &fbdefio->pagereflist, list) { > - struct folio *folio = page_folio(pageref->page); > + struct page *page = pageref->page; > + pgoff_t pgoff = pageref->offset >> PAGE_SHIFT; > > - folio_lock(folio); > - folio_mkclean(folio); > - folio_unlock(folio); > + mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page); > } > > /* driver's callback with pagereflist */ > @@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info, > { > struct fb_deferred_io *fbdefio = info->fbdefio; > > + fbdefio->mapping = file->f_mapping; Does this still work if more than one program opens the file? Best regard Thomas > file->f_mapping->a_ops = &fb_deferred_io_aops; > fbdefio->open_count++; > } > @@ -344,13 +333,7 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_open); > > static void fb_deferred_io_lastclose(struct fb_info *info) > { > - unsigned long i; > - > flush_delayed_work(&info->deferred_work); > - > - /* clear out the mapping that we setup */ > - for (i = 0; i < info->npagerefs; ++i) > - fb_deferred_io_pageref_clear(&info->pagerefs[i]); > } > > void fb_deferred_io_release(struct fb_info *info) > @@ -370,5 +353,6 @@ void fb_deferred_io_cleanup(struct fb_info *info) > > kvfree(info->pagerefs); > mutex_destroy(&fbdefio->lock); > + fbdefio->mapping = NULL; > } > EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup); > diff --git a/include/linux/fb.h b/include/linux/fb.h > index 5ba187e08cf7..cd653862ab99 100644 > --- a/include/linux/fb.h > +++ b/include/linux/fb.h > @@ -225,6 +225,7 @@ struct fb_deferred_io { > int open_count; /* number of opened files; protected by fb_info lock */ > struct mutex lock; /* mutex that protects the pageref list */ > struct list_head pagereflist; /* list of pagerefs for touched pages */ > + struct address_space *mapping; /* page cache object for fb device */ > /* callback */ > struct page *(*get_page)(struct fb_info *info, unsigned long offset); > void (*deferred_io)(struct fb_info *info, struct list_head *pagelist); -- -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Frankenstrasse 146, 90461 Nuernberg, Germany GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman HRB 36809 (AG Nuernberg) ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields
  2025-02-04  8:21       ` Thomas Zimmermann
@ 2025-02-04  8:37         ` Lorenzo Stoakes
  2025-02-04  8:57           ` Thomas Zimmermann
  0 siblings, 1 reply; 13+ messages in thread
From: Lorenzo Stoakes @ 2025-02-04 8:37 UTC (permalink / raw)
To: Thomas Zimmermann
Cc: Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller,
    linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox,
    David Hildenbrand, Kajtar Zsolt, Maira Canal

On Tue, Feb 04, 2025 at 09:21:55AM +0100, Thomas Zimmermann wrote:
> Hi
>
> Am 31.01.25 um 19:28 schrieb Lorenzo Stoakes:

[snip]

> > -	if (vmf->vma->vm_file)
> > -		page->mapping = vmf->vma->vm_file->f_mapping;
> > -	else
> > +	if (!vmf->vma->vm_file)
> >  		printk(KERN_ERR "no mapping available\n");
>
> fb_err() here.

Ack, will fix on respin.

[snip]

> > @@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info,
> >  {
> >  	struct fb_deferred_io *fbdefio = info->fbdefio;
> >
> > +	fbdefio->mapping = file->f_mapping;
>
> Does this still work if more than one program opens the file?

Yes, the mapping (address_space) pointer will remain the same across the
board. The way defio is implemented absolutely relies on this assumption.

[snip]

^ permalink raw reply	[flat|nested] 13+ messages in thread
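As an aside, the reason the answer above holds can be sketched in a few
lines (a simplification for illustration, not code from the series): every
open of the same device node resolves to the same inode, and the VFS
initialises each struct file's f_mapping from that inode's i_mapping, so
every opener hands fb_deferred_io_open() the same address_space.

	/*
	 * Simplified illustration of the invariant relied upon here.
	 * On each open, the VFS does (roughly, see alloc_file() in
	 * fs/file_table.c):
	 */
	file->f_mapping = inode->i_mapping;

	/*
	 * Hence for any number of concurrent opens of the framebuffer
	 * device, this assignment in fb_deferred_io_open() stores the
	 * same pointer every time and is effectively idempotent:
	 */
	fbdefio->mapping = file->f_mapping;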
* Re: [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields
  2025-02-04  8:37         ` Lorenzo Stoakes
@ 2025-02-04  8:57           ` Thomas Zimmermann
  0 siblings, 0 replies; 13+ messages in thread
From: Thomas Zimmermann @ 2025-02-04 8:57 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Andrew Morton, Jaya Kumar, Simona Vetter, Helge Deller,
    linux-fbdev, dri-devel, linux-kernel, linux-mm, Matthew Wilcox,
    David Hildenbrand, Kajtar Zsolt, Maira Canal

Hi

Am 04.02.25 um 09:37 schrieb Lorenzo Stoakes:
> On Tue, Feb 04, 2025 at 09:21:55AM +0100, Thomas Zimmermann wrote:

[snip]

>>> @@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info,
>>>  {
>>>  	struct fb_deferred_io *fbdefio = info->fbdefio;
>>>
>>> +	fbdefio->mapping = file->f_mapping;
>> Does this still work if more than one program opens the file?
> Yes, the mapping (address_space) pointer will remain the same across the
> board. The way defio is implemented absolutely relies on this assumption.

Great. With the fb_err() fixed, you can add

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

to the patch.

Best regards
Thomas

[snip]

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)

^ permalink raw reply	[flat|nested] 13+ messages in thread
end of thread, other threads:[~2025-02-04 10:19 UTC | newest]

Thread overview: 13+ messages
2025-01-31 18:28 [PATCH 0/3] expose mapping wrprotect, fix fb_defio use Lorenzo Stoakes
2025-01-31 18:28 ` [PATCH 1/3] mm: refactor rmap_walk_file() to separate out traversal logic Lorenzo Stoakes
2025-01-31 18:28 ` [PATCH 2/3] mm: provide mapping_wrprotect_page() function Lorenzo Stoakes
2025-02-03 15:49   ` Simona Vetter
2025-02-03 16:30     ` Lorenzo Stoakes
2025-02-04 10:19       ` Simona Vetter
2025-02-04  5:36   ` Christoph Hellwig
2025-02-04  8:16     ` Thomas Zimmermann
2025-01-31 18:28 ` [PATCH 3/3] fb_defio: do not use deprecated page->mapping, index fields Lorenzo Stoakes
2025-02-01 17:06   ` Lorenzo Stoakes
2025-02-04  8:21   ` Thomas Zimmermann
2025-02-04  8:37     ` Lorenzo Stoakes
2025-02-04  8:57       ` Thomas Zimmermann