linux-mm.kvack.org archive mirror
* [PATCH 1/2] alloc_tag: fix module allocation tags populated area calculation
@ 2024-11-30  0:14 Suren Baghdasaryan
  2024-11-30  0:14 ` [PATCH 2/2] alloc_tag: fix set_codetag_empty() when !CONFIG_MEM_ALLOC_PROFILING_DEBUG Suren Baghdasaryan
  2024-12-02 22:18 ` [PATCH 1/2] alloc_tag: fix module allocation tags populated area calculation Yu Zhao
  0 siblings, 2 replies; 3+ messages in thread
From: Suren Baghdasaryan @ 2024-11-30  0:14 UTC (permalink / raw)
  To: akpm
  Cc: kent.overstreet, pasha.tatashin, rppt, yuzhao, souravpanda,
	00107082, linux-mm, linux-kernel, surenb, kernel test robot

vm_module_tags_populate()'s calculation of the populated area assumes that
the area starts at a page boundary, and therefore that when new pages are
allocated, the end of the area is page-aligned as well. If the start of the
area is not page-aligned, then allocating a page and incrementing the end
of the area by PAGE_SIZE leaves a region at the end, still within the area
boundary, which is not populated. Accessing this region will lead to a
kernel panic.
Fix the calculation by down-aligning the start of the area and using that
as the location allocated pages are mapped to.

Fixes: 0f9b685626da ("alloc_tag: populate memory for module tags as needed")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202411132111.6a221562-lkp@intel.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Applies over mm-unstable

 lib/alloc_tag.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
index 2414a7ee7ec7..668c4e128fa4 100644
--- a/lib/alloc_tag.c
+++ b/lib/alloc_tag.c
@@ -393,19 +393,20 @@ static bool find_aligned_area(struct ma_state *mas, unsigned long section_size,
 
 static int vm_module_tags_populate(void)
 {
-	unsigned long phys_size = vm_module_tags->nr_pages << PAGE_SHIFT;
+	unsigned long phys_end = ALIGN_DOWN(module_tags.start_addr, PAGE_SIZE) +
+				 (vm_module_tags->nr_pages << PAGE_SHIFT);
+	unsigned long new_end = module_tags.start_addr + module_tags.size;
 
-	if (phys_size < module_tags.size) {
+	if (phys_end < new_end) {
 		struct page **next_page = vm_module_tags->pages + vm_module_tags->nr_pages;
-		unsigned long addr = module_tags.start_addr + phys_size;
 		unsigned long more_pages;
 		unsigned long nr;
 
-		more_pages = ALIGN(module_tags.size - phys_size, PAGE_SIZE) >> PAGE_SHIFT;
+		more_pages = ALIGN(new_end - phys_end, PAGE_SIZE) >> PAGE_SHIFT;
 		nr = alloc_pages_bulk_array_node(GFP_KERNEL | __GFP_NOWARN,
 						 NUMA_NO_NODE, more_pages, next_page);
 		if (nr < more_pages ||
-		    vmap_pages_range(addr, addr + (nr << PAGE_SHIFT), PAGE_KERNEL,
+		    vmap_pages_range(phys_end, phys_end + (nr << PAGE_SHIFT), PAGE_KERNEL,
 				     next_page, PAGE_SHIFT) < 0) {
 			/* Clean up and error out */
 			for (int i = 0; i < nr; i++)

base-commit: 539cd49425a4e9a66d601d9a8124f5c70e238d56
-- 
2.47.0.338.g60cca15819-goog




* [PATCH 2/2] alloc_tag: fix set_codetag_empty() when !CONFIG_MEM_ALLOC_PROFILING_DEBUG
  2024-11-30  0:14 [PATCH 1/2] alloc_tag: fix module allocation tags populated area calculation Suren Baghdasaryan
@ 2024-11-30  0:14 ` Suren Baghdasaryan
  2024-12-02 22:18 ` [PATCH 1/2] alloc_tag: fix module allocation tags populated area calculation Yu Zhao
  1 sibling, 0 replies; 3+ messages in thread
From: Suren Baghdasaryan @ 2024-11-30  0:14 UTC (permalink / raw)
  To: akpm
  Cc: kent.overstreet, pasha.tatashin, rppt, yuzhao, souravpanda,
	00107082, linux-mm, linux-kernel, surenb

It was recently noticed that set_codetag_empty() might be used not only
to mark NULL alloctag references as empty to avoid warnings but also to
reset valid tags (in clear_page_tag_ref()). Since set_codetag_empty() is
defined as NOOP for CONFIG_MEM_ALLOC_PROFILING_DEBUG=n, such use of
set_codetag_empty() leads to subtle bugs.
Fix set_codetag_empty() for CONFIG_MEM_ALLOC_PROFILING_DEBUG=n to reset
the tag reference.

Fixes: a8fc28dad6d5 ("alloc_tag: introduce clear_page_tag_ref() helper function")
Reported-by: David Wang <00107082@163.com>
Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
Applies over mm-unstable

 include/linux/alloc_tag.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
index 7c0786bdf9af..f6a1b73f5663 100644
--- a/include/linux/alloc_tag.h
+++ b/include/linux/alloc_tag.h
@@ -63,7 +63,12 @@ static inline void set_codetag_empty(union codetag_ref *ref)
 #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
 
 static inline bool is_codetag_empty(union codetag_ref *ref) { return false; }
-static inline void set_codetag_empty(union codetag_ref *ref) {}
+
+static inline void set_codetag_empty(union codetag_ref *ref)
+{
+	if (ref)
+		ref->ct = NULL;
+}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
 
-- 
2.47.0.338.g60cca15819-goog




* Re: [PATCH 1/2] alloc_tag: fix module allocation tags populated area calculation
  2024-11-30  0:14 [PATCH 1/2] alloc_tag: fix module allocation tags populated area calculation Suren Baghdasaryan
  2024-11-30  0:14 ` [PATCH 2/2] alloc_tag: fix set_codetag_empty() when !CONFIG_MEM_ALLOC_PROFILING_DEBUG Suren Baghdasaryan
@ 2024-12-02 22:18 ` Yu Zhao
  1 sibling, 0 replies; 3+ messages in thread
From: Yu Zhao @ 2024-12-02 22:18 UTC (permalink / raw)
  To: Suren Baghdasaryan
  Cc: akpm, kent.overstreet, pasha.tatashin, rppt, souravpanda,
	00107082, linux-mm, linux-kernel, kernel test robot

On Fri, Nov 29, 2024 at 5:14 PM Suren Baghdasaryan <surenb@google.com> wrote:
>
> vm_module_tags_populate()'s calculation of the populated area assumes that
> the area starts at a page boundary, and therefore that when new pages are
> allocated, the end of the area is page-aligned as well. If the start of the
> area is not page-aligned, then allocating a page and incrementing the end
> of the area by PAGE_SIZE leaves a region at the end, still within the area
> boundary, which is not populated. Accessing this region will lead to a
> kernel panic.
> Fix the calculation by down-aligning the start of the area and using that
> as the location allocated pages are mapped to.
>
> Fixes: 0f9b685626da ("alloc_tag: populate memory for module tags as needed")
> Reported-by: kernel test robot <oliver.sang@intel.com>
> Closes: https://lore.kernel.org/oe-lkp/202411132111.6a221562-lkp@intel.com
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Acked-by: Yu Zhao <yuzhao@google.com>


