* [PATCH 0/6] Some trivial cleanups for shmem
@ 2025-02-07 9:44 Baolin Wang
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
Hi,
Patches 1-5 do some trivial cleanups and refactoring for shmem.
Patch 6 adds myself as shmem reviewer.
Baolin Wang (6):
mm: shmem: drop the unused macro
mm: shmem: remove 'fadvise()' comments
mm: shmem: remove duplicate error validation
mm: shmem: change the return value of shmem_find_swap_entries()
mm: shmem: factor out the within_size logic into a new helper
MAINTAINERS: add myself as shmem reviewer
MAINTAINERS | 1 +
mm/shmem.c | 76 ++++++++++++++++++++++++++---------------------------
2 files changed, 39 insertions(+), 38 deletions(-)
--
2.39.3
* [PATCH 1/6] mm: shmem: drop the unused macro
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
Drop the unused 'BLOCKS_PER_PAGE' macro.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 745f130bfb4c..ddf800357e7a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -86,7 +86,6 @@ static struct vfsmount *shm_mnt __ro_after_init;
#include "internal.h"
-#define BLOCKS_PER_PAGE (PAGE_SIZE/512)
#define VM_ACCT(size) (PAGE_ALIGN(size) >> PAGE_SHIFT)
/* Pretend that each entry is of this size in directory's i_size */
--
2.39.3
* [PATCH 2/6] mm: shmem: remove 'fadvise()' comments
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
Similar to commit 255ff62d1586 ("docs: tmpfs: drop 'fadvise()' from the
documentation"), fadvise() currently has no HUGEPAGE advice. Remove the
confusing fadvise() references from these comments.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index ddf800357e7a..b7aef4f0a427 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -525,9 +525,9 @@ static bool shmem_confirm_swap(struct address_space *mapping,
* enables huge pages for the mount;
* SHMEM_HUGE_WITHIN_SIZE:
* only allocate huge pages if the page will be fully within i_size,
- * also respect fadvise()/madvise() hints;
+ * also respect madvise() hints;
* SHMEM_HUGE_ADVISE:
- * only allocate huge pages if requested with fadvise()/madvise();
+ * only allocate huge pages if requested with madvise();
*/
#define SHMEM_HUGE_NEVER 0
--
2.39.3
* [PATCH 3/6] mm: shmem: remove duplicate error validation
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
Remove the duplicate error-code checks for 'start' and 'end', as
get_order_from_str() only returns -EINVAL when the cmdline string is
configured incorrectly, so checking for any negative value is sufficient.
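To illustrate why the two checks collapse into one, here is a hedged
userspace sketch: this is not the kernel's get_order_from_str() but a
hypothetical stand-in with the same contract, i.e. a non-negative order on
success and -EINVAL as the only possible negative return, which makes
"< 0" and "== -EINVAL" equivalent tests.

```c
#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-in for the kernel's get_order_from_str(): parse a
 * size string such as "64K" and return its order, or -EINVAL for a
 * malformed string. -EINVAL is the only negative value ever returned. */
static int get_order_from_str(const char *str)
{
    char *end;
    long kb = strtol(str, &end, 10);
    int order = 0;

    /* accept only "<power-of-two>K" sizes, e.g. "64K" (illustration only) */
    if (end == str || *end != 'K' || end[1] != '\0' ||
        kb <= 0 || (kb & (kb - 1)))
        return -EINVAL;

    while ((1L << order) < kb)
        order++;
    return order;
}
```

Since no other error value can come back, a caller checking `ret < 0`
catches exactly the same failures as one checking `ret == -EINVAL`.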
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index b7aef4f0a427..b764ad336598 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -5650,19 +5650,19 @@ static int __init setup_thp_shmem(char *str)
THP_ORDERS_ALL_FILE_DEFAULT);
}
- if (start == -EINVAL) {
+ if (start < 0) {
pr_err("invalid size %s in thp_shmem boot parameter\n",
start_size);
goto err;
}
- if (end == -EINVAL) {
+ if (end < 0) {
pr_err("invalid size %s in thp_shmem boot parameter\n",
end_size);
goto err;
}
- if (start < 0 || end < 0 || start > end)
+ if (start > end)
goto err;
nr = end - start + 1;
--
2.39.3
* [PATCH 4/6] mm: shmem: change the return value of shmem_find_swap_entries()
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
shmem_find_swap_entries() originally returned the index corresponding to
the swap entry, but no callers used this return value. Like similar
functions, it should instead return the number of entries found, which
the callers can actually use.
No functional changes.
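The calling-convention change can be sketched as follows; the names,
batch size, and "entry" representation here are illustrative stand-ins,
not the kernel's folio_batch machinery. Returning the count lets the
caller test the result directly instead of re-counting the batch.

```c
#include <stddef.h>

#define BATCH_MAX 15 /* assumption, mirrors small fixed-size batching */

struct batch {
    unsigned int nr;
    int entries[BATCH_MAX];
};

/* Before the change: a last-scanned index was returned and ignored.
 * After: return how many entries landed in the batch, so callers can
 * write "if (!find_swap_entries(...)) break;" directly. */
static unsigned int find_swap_entries(struct batch *b,
                                      const int *src, size_t n)
{
    size_t i;

    b->nr = 0;
    for (i = 0; i < n && b->nr < BATCH_MAX; i++)
        if (src[i] != 0) /* treat nonzero as a "swap entry" */
            b->entries[b->nr++] = src[i];
    return b->nr;        /* count found, not an index */
}
```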
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index b764ad336598..c243d814f2b0 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1379,9 +1379,9 @@ static void shmem_evict_inode(struct inode *inode)
#endif
}
-static int shmem_find_swap_entries(struct address_space *mapping,
- pgoff_t start, struct folio_batch *fbatch,
- pgoff_t *indices, unsigned int type)
+static unsigned int shmem_find_swap_entries(struct address_space *mapping,
+ pgoff_t start, struct folio_batch *fbatch,
+ pgoff_t *indices, unsigned int type)
{
XA_STATE(xas, &mapping->i_pages, start);
struct folio *folio;
@@ -1414,7 +1414,7 @@ static int shmem_find_swap_entries(struct address_space *mapping,
}
rcu_read_unlock();
- return xas.xa_index;
+ return folio_batch_count(fbatch);
}
/*
@@ -1461,8 +1461,8 @@ static int shmem_unuse_inode(struct inode *inode, unsigned int type)
do {
folio_batch_init(&fbatch);
- shmem_find_swap_entries(mapping, start, &fbatch, indices, type);
- if (folio_batch_count(&fbatch) == 0) {
+ if (!shmem_find_swap_entries(mapping, start, &fbatch,
+ indices, type)) {
ret = 0;
break;
}
--
2.39.3
* [PATCH 5/6] mm: shmem: factor out the within_size logic into a new helper
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
Factor out the within_size logic into a new helper to remove duplicate
code.
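The factored-out logic can be sketched in userspace as below. The bit
helpers are hypothetical stand-ins for the kernel's highest_order(),
next_order(), and round_up(). Orders form a bitmask; the helper returns
the remaining mask as soon as the highest still-set order fits entirely
within the rounded-up i_size, since every lower order then fits too.

```c
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* stand-in: index of the highest set bit */
static unsigned long highest_order(unsigned long orders)
{
    unsigned long o = 0;

    while (orders >> (o + 1))
        o++;
    return o;
}

/* stand-in: clear 'order' from the mask, return the next highest */
static unsigned long next_order(unsigned long *orders, unsigned long order)
{
    *orders &= ~(1UL << order);
    return *orders ? highest_order(*orders) : 0;
}

static unsigned long round_up_ul(unsigned long x, unsigned long step)
{
    return ((x + step - 1) / step) * step;
}

/* sketch of the new helper: which of 'orders' keep the folio ending at
 * 'index' fully within i_size? */
static unsigned long orders_within_size(unsigned long i_size_bytes,
                                        unsigned long orders,
                                        unsigned long index)
{
    unsigned long order = highest_order(orders);

    while (orders) {
        unsigned long aligned_index = round_up_ul(index + 1, 1UL << order);
        unsigned long i_size = round_up_ul(i_size_bytes, PAGE_SIZE);

        if (i_size >> PAGE_SHIFT >= aligned_index)
            return orders; /* all remaining (lower) orders fit too */
        order = next_order(&orders, order);
    }
    return 0;
}
```

Both former call sites then reduce to a single call, one masking the
result into its allowed-orders bitmap and the other returning it directly.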
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 53 +++++++++++++++++++++++++++--------------------------
1 file changed, 27 insertions(+), 26 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index c243d814f2b0..671f63063fd4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -590,6 +590,28 @@ shmem_mapping_size_orders(struct address_space *mapping, pgoff_t index, loff_t w
return order > 0 ? BIT(order + 1) - 1 : 0;
}
+static unsigned int shmem_get_orders_within_size(struct inode *inode,
+ unsigned long within_size_orders, pgoff_t index,
+ loff_t write_end)
+{
+ pgoff_t aligned_index;
+ unsigned long order;
+ loff_t i_size;
+
+ order = highest_order(within_size_orders);
+ while (within_size_orders) {
+ aligned_index = round_up(index + 1, 1 << order);
+ i_size = max(write_end, i_size_read(inode));
+ i_size = round_up(i_size, PAGE_SIZE);
+ if (i_size >> PAGE_SHIFT >= aligned_index)
+ return within_size_orders;
+
+ order = next_order(&within_size_orders, order);
+ }
+
+ return 0;
+}
+
static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
loff_t write_end, bool shmem_huge_force,
struct vm_area_struct *vma,
@@ -598,9 +620,6 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
unsigned int maybe_pmd_order = HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ?
0 : BIT(HPAGE_PMD_ORDER);
unsigned long within_size_orders;
- unsigned int order;
- pgoff_t aligned_index;
- loff_t i_size;
if (!S_ISREG(inode->i_mode))
return 0;
@@ -634,16 +653,11 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
within_size_orders = shmem_mapping_size_orders(inode->i_mapping,
index, write_end);
- order = highest_order(within_size_orders);
- while (within_size_orders) {
- aligned_index = round_up(index + 1, 1 << order);
- i_size = max(write_end, i_size_read(inode));
- i_size = round_up(i_size, PAGE_SIZE);
- if (i_size >> PAGE_SHIFT >= aligned_index)
- return within_size_orders;
+ within_size_orders = shmem_get_orders_within_size(inode, within_size_orders,
+ index, write_end);
+ if (within_size_orders > 0)
+ return within_size_orders;
- order = next_order(&within_size_orders, order);
- }
fallthrough;
case SHMEM_HUGE_ADVISE:
if (vm_flags & VM_HUGEPAGE)
@@ -1756,10 +1770,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
unsigned long mask = READ_ONCE(huge_shmem_orders_always);
unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
unsigned long vm_flags = vma ? vma->vm_flags : 0;
- pgoff_t aligned_index;
unsigned int global_orders;
- loff_t i_size;
- int order;
if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags)))
return 0;
@@ -1785,17 +1796,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
return READ_ONCE(huge_shmem_orders_inherit);
/* Allow mTHP that will be fully within i_size. */
- order = highest_order(within_size_orders);
- while (within_size_orders) {
- aligned_index = round_up(index + 1, 1 << order);
- i_size = round_up(i_size_read(inode), PAGE_SIZE);
- if (i_size >> PAGE_SHIFT >= aligned_index) {
- mask |= within_size_orders;
- break;
- }
-
- order = next_order(&within_size_orders, order);
- }
+ mask |= shmem_get_orders_within_size(inode, within_size_orders, index, 0);
if (vm_flags & VM_HUGEPAGE)
mask |= READ_ONCE(huge_shmem_orders_madvise);
--
2.39.3
* [PATCH 6/6] MAINTAINERS: add myself as shmem reviewer
From: Baolin Wang @ 2025-02-07 9:44 UTC (permalink / raw)
To: akpm, hughd; +Cc: david, baolin.wang, linux-mm, linux-kernel
In the past year, I've primarily focused on shmem and added several
features to it, such as mTHP support, large folio swap-out and swap-in,
mTHP collapse, skipping the swapcache, tmpfs support for large folios,
and so on. Meanwhile, I've also been helping to test and review
shmem-related patches.
I'm willing to continue assisting with testing and reviewing shmem-related
patches, so please Cc me on patches related to shmem.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 577592e3af82..7fe4ea237afe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -23883,6 +23883,7 @@ F: drivers/hwmon/tmp513.c
TMPFS (SHMEM FILESYSTEM)
M: Hugh Dickins <hughd@google.com>
+R: Baolin Wang <baolin.wang@linux.alibaba.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/shmem_fs.h
--
2.39.3