From: Geliang Tang
Subject: [PATCH 1/2] mm/page_alloc.c: use list_{first,last}_entry instead of list_entry
Date: Wed, 2 Dec 2015 23:12:40 +0800
To: Andrew Morton, Vlastimil Babka, Michal Hocko, Mel Gorman, David Rientjes,
	Joonsoo Kim, "Kirill A. Shutemov", Johannes Weiner, Alexander Duyck
Cc: Geliang Tang, linux-mm@kvack.org, linux-kernel@vger.kernel.org

To make the intention clearer, use list_{first,last}_entry instead of
list_entry.

Signed-off-by: Geliang Tang
---
 mm/page_alloc.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d6d7c97..0d38185 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -830,7 +830,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		do {
 			int mt;	/* migratetype of the to-be-freed page */
 
-			page = list_entry(list->prev, struct page, lru);
+			page = list_last_entry(list, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
@@ -1457,11 +1457,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	/* Find a page of the appropriate size in the preferred list */
 	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
 		area = &(zone->free_area[current_order]);
-		if (list_empty(&area->free_list[migratetype]))
-			continue;
-
-		page = list_entry(area->free_list[migratetype].next,
+		page = list_first_entry_or_null(&area->free_list[migratetype],
 							struct page, lru);
+		if (!page)
+			continue;
 		list_del(&page->lru);
 		rmv_page_order(page);
 		area->nr_free--;
@@ -1740,12 +1739,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 		for (order = 0; order < MAX_ORDER; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 
-			if (list_empty(&area->free_list[MIGRATE_HIGHATOMIC]))
+			page = list_first_entry_or_null(
+					&area->free_list[MIGRATE_HIGHATOMIC],
+					struct page, lru);
+			if (!page)
 				continue;
 
-			page = list_entry(area->free_list[MIGRATE_HIGHATOMIC].next,
-					struct page, lru);
-
 			/*
 			 * It should never happen but changes to locking could
 			 * inadvertently allow a per-cpu drain to add pages
@@ -1793,7 +1792,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 		if (fallback_mt == -1)
 			continue;
 
-		page = list_entry(area->free_list[fallback_mt].next,
+		page = list_first_entry(&area->free_list[fallback_mt],
 						struct page, lru);
 		if (can_steal)
 			steal_suitable_fallback(zone, page, start_migratetype);
@@ -2252,9 +2251,9 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 			}
 
 			if (cold)
-				page = list_entry(list->prev, struct page, lru);
+				page = list_last_entry(list, struct page, lru);
 			else
-				page = list_entry(list->next, struct page, lru);
+				page = list_first_entry(list, struct page, lru);
 
 			list_del(&page->lru);
 			pcp->count--;
-- 
2.5.0
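
For reference, here is how the helpers adopted above relate to the open-coded
list_entry() pattern they replace. This is only a minimal userspace sketch of
the idiom: the list_head, container_of() and list_*_entry definitions below
are simplified stand-ins for the kernel's <linux/list.h> (the real versions
differ, e.g. list_first_entry_or_null() uses READ_ONCE()), and struct page
here is a toy type, not the kernel's.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's <linux/list.h> helpers. */
struct list_head {
	struct list_head *next, *prev;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define list_entry(ptr, type, member) container_of(ptr, type, member)

/* First/last element; the list must be known to be non-empty. */
#define list_first_entry(head, type, member) \
	list_entry((head)->next, type, member)
#define list_last_entry(head, type, member) \
	list_entry((head)->prev, type, member)

/* First element, or NULL if the list is empty. */
#define list_first_entry_or_null(head, type, member) \
	((head)->next != (head) ? list_first_entry(head, type, member) : NULL)

static void list_add_tail(struct list_head *entry, struct list_head *head)
{
	entry->prev = head->prev;
	entry->next = head;
	head->prev->next = entry;
	head->prev = entry;
}

struct page {			/* toy stand-in for the kernel's struct page */
	struct list_head lru;
	int nr;
};

int main(void)
{
	struct list_head free_list = { &free_list, &free_list };
	struct page a = { { NULL, NULL }, 42 };
	struct page *page;

	/* Old style: open-coded list_empty(), then list_entry() on ->next. */
	if (free_list.next != &free_list) {
		page = list_entry(free_list.next, struct page, lru);
		printf("old style found page %d\n", page->nr);
	}

	list_add_tail(&a.lru, &free_list);

	/* New style: the helper names the intent and folds in the check. */
	page = list_first_entry_or_null(&free_list, struct page, lru);
	if (page)
		printf("new style found page %d\n", page->nr);

	/* list_last_entry() replaces list_entry(list->prev, ...). */
	page = list_last_entry(&free_list, struct page, lru);
	printf("last entry is page %d\n", page->nr);

	return 0;
}

The gain in __rmqueue_smallest() and unreserve_highatomic_pageblock() is that
list_first_entry_or_null() folds the list_empty() test and the element lookup
into one call, so the empty case becomes a plain NULL check.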