* Pages don't belong to same large order folio in block IO path
@ 2024-02-05  6:33 Kundan Kumar
From: Kundan Kumar @ 2024-02-05  6:33 UTC (permalink / raw)
  To: linux-mm; +Cc: ryan.roberts

Hi All,

I am using the patch "Multi-size THP for anonymous memory"
https://lore.kernel.org/all/20231214160251.3574571-1-ryan.roberts@arm.com/T/#u


I enabled mTHP using the sysfs interface:
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-32kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-128kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-256kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-512kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-1024kB/enabled
echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled

I can see that this patch allocates multi-order folios for anonymous memory.

With large-order folios being allocated, I tried direct block I/O using fio:

fio -iodepth=1 -rw=write -ioengine=io_uring -direct=1 -bs=16K
-numjobs=1 -size=16k -group_reporting -filename=/dev/nvme0n1
-name=io_uring_test

The memory fio mallocs is allocated from a multi-order folio in
alloc_anon_folio(). The block I/O path takes the fio-allocated memory and
maps it in the kernel in iov_iter_extract_user_pages(). Since the pages are
mapped using large folios, I check whether the pages belong to the same
folio using page_folio(page) in __bio_iov_iter_get_pages().

To my surprise, I see that the pages belong to different folios.
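For reference, the check I describe is roughly the following (a simplified
sketch, not my actual debug patch; the helper name and the printk placement
are illustrative only):

	#include <linux/mm.h>
	#include <linux/printk.h>

	/*
	 * Illustrative helper (hypothetical): after the pages have been
	 * extracted from the iov_iter, compare each page's folio against
	 * the folio of the first page.
	 */
	static void check_pages_share_folio(struct page **pages,
					    unsigned int nr_pages)
	{
		struct folio *first = page_folio(pages[0]);
		unsigned int i;

		for (i = 0; i < nr_pages; i++) {
			struct folio *folio = page_folio(pages[i]);

			pr_info("page = %p folio = %p same_as_first = %d\n",
				pages[i], folio, folio == first);
		}
	}

In my runs the malloc-backed case prints a different folio for every page,
while the mmaphuge case below prints the same folio for all four pages.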

Feb  5 10:34:33 kernel: [244413.315660] 1603
iov_iter_extract_user_pages addr = 5593b252a000
Feb  5 10:34:33 kernel: [244413.315680] 1610
iov_iter_extract_user_pages nr_pages = 4
Feb  5 10:34:33 kernel: [244413.315700] 1291 __bio_iov_iter_get_pages
page = ffffea000d4bb9c0 folio = ffffea000d4bb9c0
Feb  5 10:34:33 kernel: [244413.315749] 1291 __bio_iov_iter_get_pages
page = ffffea000d796200 folio = ffffea000d796200
Feb  5 10:34:33 kernel: [244413.315796] 1291 __bio_iov_iter_get_pages
page = ffffea000d796240 folio = ffffea000d796240
Feb  5 10:34:33 kernel: [244413.315852] 1291 __bio_iov_iter_get_pages
page = ffffea000d7b2b80 folio = ffffea000d7b2b80

I repeated the same experiment with fio using HUGE pages:
fio -iodepth=1 -iomem=mmaphuge -rw=write -ioengine=io_uring -direct=1
-bs=16K -numjobs=1 -size=16k -group_reporting -filename=/dev/nvme0n1
-name=io_uring_test

This time, with the memory mmapped from HUGE pages, I see that the pages
belong to the same folio.

Feb  5 10:51:50 kernel: [245450.439817] 1603
iov_iter_extract_user_pages addr = 7f66e4c00000
Feb  5 10:51:50 kernel: [245450.439825] 1610
iov_iter_extract_user_pages nr_pages = 4
Feb  5 10:51:50 kernel: [245450.439834] 1291 __bio_iov_iter_get_pages
page = ffffea0005bc8000 folio = ffffea0005bc8000
Feb  5 10:51:50 kernel: [245450.439858] 1291 __bio_iov_iter_get_pages
page = ffffea0005bc8040 folio = ffffea0005bc8000
Feb  5 10:51:50 kernel: [245450.439880] 1291 __bio_iov_iter_get_pages
page = ffffea0005bc8080 folio = ffffea0005bc8000
Feb  5 10:51:50 kernel: [245450.439903] 1291 __bio_iov_iter_get_pages
page = ffffea0005bc80c0 folio = ffffea0005bc8000

Please let me know if you have any clue as to why the pages of fio's
malloced memory don't belong to the same folio.

--
Kundan Kumar

