* [bug report] mm/huge_memory: add two new (not yet used) functions for folio_split()
@ 2025-02-12 15:13 Dan Carpenter
From: Dan Carpenter @ 2025-02-12 15:13 UTC
To: Zi Yan; +Cc: linux-mm
Hello Zi Yan,
Commit 1b7b0bed44e4 ("mm/huge_memory: add two new (not yet used)
functions for folio_split()") from Feb 4, 2025 (linux-next), leads to
the following Smatch static checker warning:
mm/huge_memory.c:3611 __split_unmapped_folio()
error: we previously assumed 'mapping' could be null (see line 3512)
mm/huge_memory.c
3459 static int __split_unmapped_folio(struct folio *folio, int new_order,
3460 struct page *page, struct list_head *list, pgoff_t end,
3461 struct xa_state *xas, struct address_space *mapping,
3462 bool uniform_split)
3463 {
3464 struct lruvec *lruvec;
3465 struct address_space *swap_cache = NULL;
3466 struct folio *origin_folio = folio;
3467 struct folio *next_folio = folio_next(folio);
3468 struct folio *new_folio;
3469 struct folio *next;
3470 int order = folio_order(folio);
3471 int split_order;
3472 int start_order = uniform_split ? new_order : order - 1;
3473 int nr_dropped = 0;
3474 int ret = 0;
3475 bool stop_split = false;
3476
3477 if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
3478 /* a swapcache folio can only be uniformly split to order-0 */
3479 if (!uniform_split || new_order != 0)
3480 return -EINVAL;
3481
3482 swap_cache = swap_address_space(folio->swap);
3483 xa_lock(&swap_cache->i_pages);
3484 }
3485
3486 if (folio_test_anon(folio))
3487 mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
3488
3489 /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
3490 lruvec = folio_lruvec_lock(folio);
3491
3492 folio_clear_has_hwpoisoned(folio);
3493
3494 /*
3495 * split to new_order one order at a time. For uniform split,
3496 * folio is split to new_order directly.
3497 */
3498 for (split_order = start_order;
3499 split_order >= new_order && !stop_split;
3500 split_order--) {
3501 int old_order = folio_order(folio);
3502 struct folio *release;
3503 struct folio *end_folio = folio_next(folio);
3504 int status;
3505
3506 /* order-1 anonymous folio is not supported */
3507 if (folio_test_anon(folio) && split_order == 1)
3508 continue;
3509 if (uniform_split && split_order != new_order)
3510 continue;
3511
3512 if (mapping) {
^^^^^^^
Here we assume mapping can be NULL.
3513 /*
3514 * uniform split has xas_split_alloc() called before
3515 * irq is disabled, since xas_nomem() might not be
3516 * able to allocate enough memory.
3517 */
3518 if (uniform_split)
3519 xas_split(xas, folio, old_order);
3520 else {
3521 xas_set_order(xas, folio->index, split_order);
3522 xas_split_alloc(xas, folio, folio_order(folio),
3523 GFP_NOWAIT);
3524 if (xas_error(xas)) {
3525 ret = xas_error(xas);
3526 stop_split = true;
3527 goto after_split;
3528 }
3529 xas_split(xas, folio, old_order);
3530 }
3531 }
3532
3533 /* complete memcg works before add pages to LRU */
3534 split_page_memcg(&folio->page, old_order, split_order);
3535 split_page_owner(&folio->page, old_order, split_order);
3536 pgalloc_tag_split(folio, old_order, split_order);
3537
3538 status = __split_folio_to_order(folio, split_order);
3539
3540 if (status < 0) {
3541 stop_split = true;
3542 ret = -EINVAL;
3543 }
3544
3545 after_split:
3546 /*
3547 * Iterate through after-split folios and perform related
3548 * operations. But in buddy allocator like split, the folio
3549 * containing the specified page is skipped until its order
3550 * is new_order, since the folio will be worked on in next
3551 * iteration.
3552 */
3553 for (release = folio, next = folio_next(folio);
3554 release != end_folio;
3555 release = next, next = folio_next(next)) {
3556 /*
3557 * for buddy allocator like split, the folio containing
3558 * page will be split next and should not be released,
3559 * until the folio's order is new_order or stop_split
3560 * is set to true by the above xas_split() failure.
3561 */
3562 if (release == page_folio(page)) {
3563 folio = release;
3564 if (split_order != new_order && !stop_split)
3565 continue;
3566 }
3567 if (folio_test_anon(release)) {
3568 mod_mthp_stat(folio_order(release),
3569 MTHP_STAT_NR_ANON, 1);
3570 }
3571
3572 /*
3573 * Unfreeze refcount first. Additional reference from
3574 * page cache.
3575 */
3576 folio_ref_unfreeze(release,
3577 1 + ((!folio_test_anon(origin_folio) ||
3578 folio_test_swapcache(origin_folio)) ?
3579 folio_nr_pages(release) : 0));
3580
3581 if (release != origin_folio)
3582 lru_add_page_tail(origin_folio, &release->page,
3583 lruvec, list);
3584
3585 /* Some pages can be beyond EOF: drop them from page cache */
3586 if (release->index >= end) {
3587 if (shmem_mapping(origin_folio->mapping))
3588 nr_dropped += folio_nr_pages(release);
3589 else if (folio_test_clear_dirty(release))
3590 folio_account_cleaned(release,
3591 inode_to_wb(origin_folio->mapping->host));
3592 __filemap_remove_folio(release, NULL);
3593 folio_put(release);
3594 } else if (!folio_test_anon(release)) {
3595 __xa_store(&origin_folio->mapping->i_pages,
3596 release->index, &release->page, 0);
3597 } else if (swap_cache) {
3598 __xa_store(&swap_cache->i_pages,
3599 swap_cache_index(release->swap),
3600 &release->page, 0);
3601 }
3602 }
3603 }
3604
3605 unlock_page_lruvec(lruvec);
3606
3607 if (folio_test_anon(origin_folio)) {
3608 if (folio_test_swapcache(origin_folio))
3609 xa_unlock(&swap_cache->i_pages);
3610 } else
--> 3611 xa_unlock(&mapping->i_pages);
Dereferenced without checking.
3612
3613 /* Caller disabled irqs, so they are still disabled here */
3614 local_irq_enable();
3615
3616 if (nr_dropped)
3617 shmem_uncharge(mapping->host, nr_dropped);
^^^^^^^^^^^^^
Here too.
3618
3619 remap_page(origin_folio, 1 << order,
3620 folio_test_anon(origin_folio) ?
3621 RMP_USE_SHARED_ZEROPAGE : 0);
regards,
dan carpenter
* Re: [bug report] mm/huge_memory: add two new (not yet used) functions for folio_split()
From: Zi Yan @ 2025-02-12 15:30 UTC
To: Dan Carpenter; +Cc: linux-mm
Hi Dan,
Thanks for reporting, but based on __split_unmapped_folio()’s call site,
mapping cannot be NULL when it is dereferenced. Is there a proper way to
tell Smatch that? VM_BUG_ON(folio_test_anon(folio) || !mapping) might
help the first case, but I am not sure about the second one.
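For the first case, something along these lines might work; this is an
untested sketch, not part of the posted patch. Since the else branch is
only taken for non-anonymous folios, the check reduces to
VM_BUG_ON(!mapping) there:

	} else {
		/* untested sketch: make the invariant visible to checkers */
		VM_BUG_ON(folio_test_anon(origin_folio) || !mapping);
		xa_unlock(&mapping->i_pages);
	}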
Thanks.
More comments below:
On 12 Feb 2025, at 10:13, Dan Carpenter wrote:
> Hello Zi Yan,
>
> Commit 1b7b0bed44e4 ("mm/huge_memory: add two new (not yet used)
> functions for folio_split()") from Feb 4, 2025 (linux-next), leads to
> the following Smatch static checker warning:
>
> mm/huge_memory.c:3611 __split_unmapped_folio()
> error: we previously assumed 'mapping' could be null (see line 3512)
>
> mm/huge_memory.c
> 3459 static int __split_unmapped_folio(struct folio *folio, int new_order,
> 3460 struct page *page, struct list_head *list, pgoff_t end,
> 3461 struct xa_state *xas, struct address_space *mapping,
> 3462 bool uniform_split)
> 3463 {
> 3464 struct lruvec *lruvec;
> 3465 struct address_space *swap_cache = NULL;
> 3466 struct folio *origin_folio = folio;
> 3467 struct folio *next_folio = folio_next(folio);
> 3468 struct folio *new_folio;
> 3469 struct folio *next;
> 3470 int order = folio_order(folio);
> 3471 int split_order;
> 3472 int start_order = uniform_split ? new_order : order - 1;
> 3473 int nr_dropped = 0;
> 3474 int ret = 0;
> 3475 bool stop_split = false;
> 3476
> 3477 if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
> 3478 /* a swapcache folio can only be uniformly split to order-0 */
> 3479 if (!uniform_split || new_order != 0)
> 3480 return -EINVAL;
> 3481
> 3482 swap_cache = swap_address_space(folio->swap);
> 3483 xa_lock(&swap_cache->i_pages);
> 3484 }
> 3485
> 3486 if (folio_test_anon(folio))
> 3487 mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> 3488
> 3489 /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> 3490 lruvec = folio_lruvec_lock(folio);
> 3491
> 3492 folio_clear_has_hwpoisoned(folio);
> 3493
> 3494 /*
> 3495 * split to new_order one order at a time. For uniform split,
> 3496 * folio is split to new_order directly.
> 3497 */
> 3498 for (split_order = start_order;
> 3499 split_order >= new_order && !stop_split;
> 3500 split_order--) {
> 3501 int old_order = folio_order(folio);
> 3502 struct folio *release;
> 3503 struct folio *end_folio = folio_next(folio);
> 3504 int status;
> 3505
> 3506 /* order-1 anonymous folio is not supported */
> 3507 if (folio_test_anon(folio) && split_order == 1)
> 3508 continue;
> 3509 if (uniform_split && split_order != new_order)
> 3510 continue;
> 3511
> 3512 if (mapping) {
> ^^^^^^^
> Here we assume mapping can be NULL.
>
> 3513 /*
> 3514 * uniform split has xas_split_alloc() called before
> 3515 * irq is disabled, since xas_nomem() might not be
> 3516 * able to allocate enough memory.
> 3517 */
> 3518 if (uniform_split)
> 3519 xas_split(xas, folio, old_order);
> 3520 else {
> 3521 xas_set_order(xas, folio->index, split_order);
> 3522 xas_split_alloc(xas, folio, folio_order(folio),
> 3523 GFP_NOWAIT);
> 3524 if (xas_error(xas)) {
> 3525 ret = xas_error(xas);
> 3526 stop_split = true;
> 3527 goto after_split;
> 3528 }
> 3529 xas_split(xas, folio, old_order);
> 3530 }
> 3531 }
> 3532
> 3533 /* complete memcg works before add pages to LRU */
> 3534 split_page_memcg(&folio->page, old_order, split_order);
> 3535 split_page_owner(&folio->page, old_order, split_order);
> 3536 pgalloc_tag_split(folio, old_order, split_order);
> 3537
> 3538 status = __split_folio_to_order(folio, split_order);
> 3539
> 3540 if (status < 0) {
> 3541 stop_split = true;
> 3542 ret = -EINVAL;
> 3543 }
> 3544
> 3545 after_split:
> 3546 /*
> 3547 * Iterate through after-split folios and perform related
> 3548 * operations. But in buddy allocator like split, the folio
> 3549 * containing the specified page is skipped until its order
> 3550 * is new_order, since the folio will be worked on in next
> 3551 * iteration.
> 3552 */
> 3553 for (release = folio, next = folio_next(folio);
> 3554 release != end_folio;
> 3555 release = next, next = folio_next(next)) {
> 3556 /*
> 3557 * for buddy allocator like split, the folio containing
> 3558 * page will be split next and should not be released,
> 3559 * until the folio's order is new_order or stop_split
> 3560 * is set to true by the above xas_split() failure.
> 3561 */
> 3562 if (release == page_folio(page)) {
> 3563 folio = release;
> 3564 if (split_order != new_order && !stop_split)
> 3565 continue;
> 3566 }
> 3567 if (folio_test_anon(release)) {
> 3568 mod_mthp_stat(folio_order(release),
> 3569 MTHP_STAT_NR_ANON, 1);
> 3570 }
> 3571
> 3572 /*
> 3573 * Unfreeze refcount first. Additional reference from
> 3574 * page cache.
> 3575 */
> 3576 folio_ref_unfreeze(release,
> 3577 1 + ((!folio_test_anon(origin_folio) ||
> 3578 folio_test_swapcache(origin_folio)) ?
> 3579 folio_nr_pages(release) : 0));
> 3580
> 3581 if (release != origin_folio)
> 3582 lru_add_page_tail(origin_folio, &release->page,
> 3583 lruvec, list);
> 3584
> 3585 /* Some pages can be beyond EOF: drop them from page cache */
> 3586 if (release->index >= end) {
> 3587 if (shmem_mapping(origin_folio->mapping))
> 3588 nr_dropped += folio_nr_pages(release);
> 3589 else if (folio_test_clear_dirty(release))
> 3590 folio_account_cleaned(release,
> 3591 inode_to_wb(origin_folio->mapping->host));
> 3592 __filemap_remove_folio(release, NULL);
> 3593 folio_put(release);
> 3594 } else if (!folio_test_anon(release)) {
> 3595 __xa_store(&origin_folio->mapping->i_pages,
> 3596 release->index, &release->page, 0);
> 3597 } else if (swap_cache) {
> 3598 __xa_store(&swap_cache->i_pages,
> 3599 swap_cache_index(release->swap),
> 3600 &release->page, 0);
> 3601 }
> 3602 }
> 3603 }
> 3604
> 3605 unlock_page_lruvec(lruvec);
> 3606
> 3607 if (folio_test_anon(origin_folio)) {
> 3608 if (folio_test_swapcache(origin_folio))
> 3609 xa_unlock(&swap_cache->i_pages);
> 3610 } else
> --> 3611 xa_unlock(&mapping->i_pages);
>
> Dereferenced without checking.
Only __folio_split() calls __split_unmapped_folio(), and __folio_split()
makes sure mapping is not NULL when !folio_test_anon(origin_folio) and
locks mapping to prevent it from going away. So mapping is not NULL here.
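Paraphrasing the call site from memory (the exact code may differ
slightly), the non-anonymous path does roughly:

	/* in __folio_split(), before __split_unmapped_folio() runs */
	if (!folio_test_anon(folio)) {
		mapping = folio->mapping;
		if (!mapping) {		/* truncated in the meantime? */
			ret = -EBUSY;
			goto out;
		}
		i_mmap_lock_read(mapping);	/* keeps mapping from going away */
	}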
>
> 3612
> 3613 /* Caller disabled irqs, so they are still disabled here */
> 3614 local_irq_enable();
> 3615
> 3616 if (nr_dropped)
> 3617 shmem_uncharge(mapping->host, nr_dropped);
> ^^^^^^^^^^^^^
> Here too.
nr_dropped is initialized to 0 and can only become non-zero when
mapping is not NULL (see line 3587).
Best Regards,
Yan, Zi
* Re: [bug report] mm/huge_memory: add two new (not yet used) functions for folio_split()
From: Dan Carpenter @ 2025-02-12 15:32 UTC
To: Zi Yan; +Cc: linux-mm
On Wed, Feb 12, 2025 at 10:30:26AM -0500, Zi Yan wrote:
> Hi Dan,
>
> Thanks for reporting, but based on __split_unmapped_folio()’s call site,
> mapping cannot be NULL when it is dereferenced. Is there a proper way to
> tell Smatch that? VM_BUG_ON(folio_test_anon(folio) || !mapping) might
> help the first case, but I am not sure about the second one.
>
No, just ignore the warning. I'll probably send another email if you
ever rename the function, but I try not to... Thanks for taking a look
at this.
regards,
dan carpenter
* Re: [bug report] mm/huge_memory: add two new (not yet used) functions for folio_split()
From: Zi Yan @ 2025-02-12 15:41 UTC
To: Dan Carpenter; +Cc: linux-mm
On 12 Feb 2025, at 10:32, Dan Carpenter wrote:
> On Wed, Feb 12, 2025 at 10:30:26AM -0500, Zi Yan wrote:
>> Hi Dan,
>>
>> Thanks for reporting, but based on __split_unmapped_folio()’s call site,
>> mapping cannot be NULL when it is dereferenced. Is there a proper way to
>> tell Smatch that? VM_BUG_ON(folio_test_anon(folio) || !mapping) might
>> help the first case, but I am not sure about the second one.
>>
>
> No, just ignore the warning. I'll probably send another email if you
> ever rename the function, but I try not to... Thanks for taking a look
> at this.
Thanks. I do think these reports are helpful, since they force me to
spell out the implications.
For the second case, when mapping is NULL, __folio_split() sets end to -1,
which is the maximum value of pgoff_t, and folio->index can never reach
that value. So in __split_unmapped_folio(), at line 3586 in the report,
release->index can only be >= end when mapping is not NULL. This means the
check at line 3587 can only be true, and nr_dropped (line 3588) can only
become non-zero, when mapping is not NULL. I am writing this down for the
record.
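To sketch the anon path at the call site (paraphrased, not verbatim):

	/* in __folio_split(): anonymous folios have no mapping */
	if (folio_test_anon(folio)) {
		mapping = NULL;
		end = -1;	/* pgoff_t is unsigned, so this is its maximum */
	}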
Best Regards,
Yan, Zi