linux-mm.kvack.org archive mirror
* [PATCH 1/3] mm/slab: use list_first_entry_or_null()
@ 2015-12-02 15:46 Geliang Tang
  2015-12-02 15:46 ` [PATCH 2/3] mm/slab: use list_for_each_entry in cache_flusharray Geliang Tang
  2015-12-02 15:57 ` [PATCH 1/3] mm/slab: use list_first_entry_or_null() Christoph Lameter
  0 siblings, 2 replies; 11+ messages in thread
From: Geliang Tang @ 2015-12-02 15:46 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Geliang Tang, linux-mm, linux-kernel

Simplify cache_alloc_refill() and ____cache_alloc_node() by using
list_first_entry_or_null() instead of open-coding the empty-list checks.

Signed-off-by: Geliang Tang <geliangtang@163.com>
---
 mm/slab.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 4765c97..6bb0466 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2791,18 +2791,18 @@ retry:
 	}
 
 	while (batchcount > 0) {
-		struct list_head *entry;
 		struct page *page;
 		/* Get slab alloc is to come from. */
-		entry = n->slabs_partial.next;
-		if (entry == &n->slabs_partial) {
+		page = list_first_entry_or_null(&n->slabs_partial,
+				struct page, lru);
+		if (!page) {
 			n->free_touched = 1;
-			entry = n->slabs_free.next;
-			if (entry == &n->slabs_free)
+			page = list_first_entry_or_null(&n->slabs_free,
+					struct page, lru);
+			if (!page)
 				goto must_grow;
 		}
 
-		page = list_entry(entry, struct page, lru);
 		check_spinlock_acquired(cachep);
 
 		/*
@@ -3085,7 +3085,6 @@ retry:
 static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 				int nodeid)
 {
-	struct list_head *entry;
 	struct page *page;
 	struct kmem_cache_node *n;
 	void *obj;
@@ -3098,15 +3097,16 @@ static void *____cache_alloc_node(struct kmem_cache *cachep, gfp_t flags,
 retry:
 	check_irq_off();
 	spin_lock(&n->list_lock);
-	entry = n->slabs_partial.next;
-	if (entry == &n->slabs_partial) {
+	page = list_first_entry_or_null(&n->slabs_partial,
+			struct page, lru);
+	if (!page) {
 		n->free_touched = 1;
-		entry = n->slabs_free.next;
-		if (entry == &n->slabs_free)
+		page = list_first_entry_or_null(&n->slabs_free,
+				struct page, lru);
+		if (!page)
 			goto must_grow;
 	}
 
-	page = list_entry(entry, struct page, lru);
 	check_spinlock_acquired_node(cachep, nodeid);
 
 	STATS_INC_NODEALLOCS(cachep);
-- 
2.5.0


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
