[PATCH 2/2] mm: Fix slab->page _count corruption.
From: Pravin B Shelar @ 2012-05-14 18:41 UTC
To: cl, penberg, mpm; +Cc: linux-kernel, linux-mm, jesse, abhide, Pravin B Shelar
On arches that do not support this_cpu_cmpxchg_double, slab_lock is used
to do an atomic cmpxchg() on the double word that contains page->_count.
The page count can be changed by get_page() or put_page() without taking
slab_lock, which corrupts the page counter.

The following patch fixes this by moving page->_count out of the
cmpxchg_double data, so that slub does not change it while updating slub
metadata in struct page.
Reported-by: Amey Bhide <abhide@nicira.com>
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
---
include/linux/mm_types.h | 25 ++++++++++++++++++++++++-
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index dad95bd..7f0032f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -55,7 +55,8 @@ struct page {
pgoff_t index; /* Our offset within mapping. */
void *freelist; /* slub first free object */
};
-
+#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
+ defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
union {
/* Used for cmpxchg_double in slub */
unsigned long counters;
@@ -90,6 +91,28 @@ struct page {
atomic_t _count; /* Usage count, see below. */
};
};
+#else
+ /* Keep _count separate from the slub cmpxchg_double data, as the rest
+ * of the double word is protected by slab_lock but _count is not. */
+ union {
+ /* Used for cmpxchg_double in slub */
+ unsigned int counters;
+
+ struct {
+
+ union {
+ atomic_t _mapcount;
+
+ struct {
+ unsigned inuse:16;
+ unsigned objects:15;
+ unsigned frozen:1;
+ };
+ };
+ };
+ };
+ atomic_t _count;
+#endif
};
/* Third double word block */
--
1.7.10
Re: [PATCH 2/2] mm: Fix slab->page _count corruption.
From: Christoph Lameter @ 2012-05-14 18:58 UTC
To: Pravin B Shelar; +Cc: penberg, mpm, linux-kernel, linux-mm, jesse, abhide
On Mon, 14 May 2012, Pravin B Shelar wrote:
> On arches that do not support this_cpu_cmpxchg_double, slab_lock is used
> to do an atomic cmpxchg() on the double word that contains page->_count.
> The page count can be changed by get_page() or put_page() without taking
> slab_lock, which corrupts the page counter.
>
> The following patch fixes this by moving page->_count out of the
> cmpxchg_double data, so that slub does not change it while updating slub
> metadata in struct page.
Ugly. Maybe it's best to not touch the count in the page lock case in slub?
You could accomplish that by changing the definition of counters in
mm_types.h. Make it unsigned instead of unsigned long so that it only
covers the first part of the struct (which excludes the refcounter).
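
For illustration, here is a minimal userspace sketch of that suggestion
(simplified layout with a stub atomic_t; the struct name and the stub are
invented, the field names come from mm_types.h above). With a 32-bit
'counters' the overlay ends exactly where _count begins, so the lock-based
whole-word write of counters can no longer touch the refcount.

/* Sketch of the suggested layout (userspace stub types, illustrative only). */
#include <stdio.h>
#include <stddef.h>

typedef struct { int counter; } atomic_t;       /* stub for the kernel type */

struct page_sketch {
	union {
		unsigned counters;              /* was: unsigned long */
		struct {
			union {
				atomic_t _mapcount;
				struct {
					unsigned inuse:16;
					unsigned objects:15;
					unsigned frozen:1;
				};
			};
			atomic_t _count;        /* no longer overlaid by 'counters' */
		};
	};
};

int main(void)
{
	/* On common 64-bit configs this prints 4 and 4: the 'counters'
	 * overlay ends exactly where _count starts. */
	printf("sizeof(counters) = %zu\n",
	       sizeof(((struct page_sketch *)0)->counters));
	printf("offsetof(_count) = %zu\n",
	       offsetof(struct page_sketch, _count));
	return 0;
}

With unsigned long counters the first number would be 8 on a 64-bit arch,
i.e. the overlay would extend across _count, which is the situation the
patch above works around.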