linux-mm.kvack.org archive mirror
* [LSF/MM/BPF TOPIC] shmem/tmpfs: large folios adoption, regression tracking and performance testing
       [not found] <CGME20240223231618eucas1p1a885347603558c5d6185274b6bd7fc31@eucas1p1.samsung.com>
@ 2024-02-23 23:16 ` Daniel Gomez
  0 siblings, 0 replies; only message in thread
From: Daniel Gomez @ 2024-02-23 23:16 UTC (permalink / raw)
  To: lsf-pc, hughd, willy, david, mcgrof, akpm, brauner, yosryahmed,
	jack, Pankaj Raghav
  Cc: linux-fsdevel, linux-mm

Hi,

I want to propose a session to discuss how we should address large folios in
shmem. I have explored the write and fallocate paths [1] (for which I will soon
post an updated version), but there are additional aspects that still need to be
covered, such as the read and swap paths.

I have started an RFC [2] to track blocks for huge pages (aiming to avoid any
regressions with large folios), but it seems to be going in the wrong direction,
according to Hugh's comments. However, there are still some open questions for
which I have not yet received a clear answer.

In addition, I've been testing tmpfs with kdevops on recent kernels and have
encountered some known issues. However, I have not found a clear way to identify
them other than through these threads [3][4]. Therefore, having a clear and
updated status of all the test profiles and the list of known issues would help
provide a better understanding. Note that this aligns with kdevops' goal.
Additionally, I recently received some patches from Hugh [5] aimed at addressing
the issues in xfstests-dev.
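For reference, running xfstests against tmpfs needs only a minimal local.config;
the sketch below is illustrative (the pseudo-device names and mount points are
my assumptions, not the exact kdevops profile):

```
# local.config sketch for xfstests against tmpfs.
# tmpfs needs no real block device, so TEST_DEV/SCRATCH_DEV are
# placeholder names (illustrative assumptions).
export FSTYP=tmpfs
export TEST_DEV=tmpfs1
export TEST_DIR=/mnt/test
export SCRATCH_DEV=tmpfs2
export SCRATCH_MNT=/mnt/scratch
```

With something like this in place, a quick pass would be run as
"./check -g quick" from the xfstests-dev tree.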

Luis has also started a thread [6] to collaborate with the 0-day team on
tracking regressions in tmpfs (among other efforts). It would be beneficial to
discuss the progress made so far, the potential next steps, and to gather
insights on this collaborative effort between 0-day and kdevops.

Finally, I would like to explore alternative methods for performance testing
tmpfs aside from fio and kernel-build benchmarks. What are the possible
approaches for this? Luis recently reported some findings at the last LBS
cabal, where XFS on pmem DAX showed significantly better results than tmpfs,
regardless of whether huge pages were used. It would be beneficial to share
these latest findings and to consider implementing methods, possibly integrated
into kdevops, to continuously monitor for potential regressions.
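As a baseline for comparison, the fio runs mentioned above can be sketched as a
simple job file like the one below; the directory, sizes, and block size are
illustrative assumptions, not the configuration used in the kdevops or LBS
tests (the target would be mounted with e.g. "mount -t tmpfs -o huge=always"):

```
; Sketch of a fio job measuring tmpfs sequential-write throughput.
; All parameters here are illustrative assumptions.
[global]
directory=/mnt/tmpfs    ; assumed tmpfs mount point
ioengine=psync
size=4G

[seqwrite]
rw=write
bs=2M                   ; large block size to exercise huge-page-sized folios
```

Comparing the same job on tmpfs with huge=always vs. huge=never, and against
XFS on pmem DAX, would reproduce the kind of comparison Luis reported.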

[1] shmem: high order folios support in write path
v1: https://lore.kernel.org/all/20230915095042.1320180-1-da.gomez@samsung.com/
v2: https://lore.kernel.org/all/20230919135536.2165715-1-da.gomez@samsung.com/
v3 (RFC): https://lore.kernel.org/all/20231028211518.3424020-1-da.gomez@samsung.com/
[2] shmem: fix llseek in hugepages
RFC: https://lore.kernel.org/all/20240209142901.126894-1-da.gomez@samsung.com/
[3] https://lore.kernel.org/all/alpine.LSU.2.11.2104211723580.3299@eggly.anvils/
[4] https://lore.kernel.org/all/20230713-mgctime-v5-3-9eb795d2ae37@kernel.org/
[5] xfstests-dev patches from Hugh:
https://gitlab.com/dagmcr/xfstests-dev/-/commits/hughd/tmpfs-fixes/?ref_type=heads
[6] https://lore.kernel.org/all/CAB=NE6VRZFn+jxmxADGb3j7fLzBG9rAJ-9RCddEwz0HtwvtHxg@mail.gmail.com/

Are there any other related topics that folks would like to discuss further?

Daniel
