From: zhou
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	mhocko@kernel.org, mgorman@suse.de, willy@linux.intel.com,
	rostedt@goodmis.org, mingo@redhat.com, vbabka@suse.cz,
	rientjes@google.com, pankaj.gupta.linux@gmail.com, bhe@redhat.com,
	ying.huang@intel.com, iamjoonsoo.kim@lge.com, minchan@kernel.org,
	ruxian.feng@transsion.com, kai.cheng@transsion.com,
	zhao.xu@transsion.com, zhouxianrong@tom.com, zhou xianrong
Subject: [PATCH] kswapd: no need to reclaim cma pages triggered by unmovable allocation
Date: Sat, 13 Mar 2021 16:31:09 +0800
Message-Id: <20210313083109.5410-1-xianrong_zhou@163.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
From: zhou xianrong

For the sake of better migration, CMA pages are allocated only after
movable allocations fail, and are then used normally for file or
anonymous pages. In the reclaim path, if CMA is configured, many CMA
pages are reclaimed from the LRU lists, mainly by kswapd or by direct
reclaim triggered by unmovable or reclaimable allocations. But the
reclaimed CMA pages cannot be used by the original unmovable or
reclaimable allocation, so that reclaim is wasted work.

Therefore unmovable or reclaimable allocations should not trigger
reclaim of CMA pages. This patch adds the allocation's migratetype as a
third factor that kswapd must consider, alongside zone index and order.
The modification follows the existing handling of the zone index, and
simply skips reclaiming CMA pages whenever reclaim was triggered only
by unmovable or reclaimable allocations.

This optimization avoids ~3% of unnecessary isolations from CMA
(CMA isolated / total isolated) with a configuration of 100MB of CMA
pages in total.

Signed-off-by: zhou xianrong
Signed-off-by: feng ruxian
---
 include/linux/mmzone.h        |  6 ++--
 include/trace/events/vmscan.h | 20 +++++++----
 mm/page_alloc.c               |  5 +--
 mm/vmscan.c                   | 63 +++++++++++++++++++++++++++++------
 4 files changed, 73 insertions(+), 21 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..7dd38d7372b9 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -301,6 +301,8 @@ struct lruvec {
 #define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x4)
 /* Isolate unevictable pages */
 #define ISOLATE_UNEVICTABLE	((__force isolate_mode_t)0x8)
+/* Isolate non-CMA pages */
+#define ISOLATE_NONCMA		((__force isolate_mode_t)0x10)
 
 /* LRU Isolation modes. */
 typedef unsigned __bitwise isolate_mode_t;
@@ -756,7 +758,7 @@ typedef struct pglist_data {
 	wait_queue_head_t pfmemalloc_wait;
 	struct task_struct *kswapd;	/* Protected by
					   mem_hotplug_begin/end() */
-	int kswapd_order;
+	int kswapd_order, kswapd_migratetype;
 	enum zone_type kswapd_highest_zoneidx;
 
 	int kswapd_failures;		/* Number of 'reclaimed == 0' runs */
@@ -840,7 +842,7 @@ static inline bool pgdat_is_empty(pg_data_t *pgdat)
 
 void build_all_zonelists(pg_data_t *pgdat);
 void wakeup_kswapd(struct zone *zone, gfp_t gfp_mask, int order,
-		   enum zone_type highest_zoneidx);
+		   int migratetype, enum zone_type highest_zoneidx);
 bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
		int highest_zoneidx, unsigned int alloc_flags,
		long free_pages);
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 2070df64958e..41bbafdfde84 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -51,37 +51,41 @@ TRACE_EVENT(mm_vmscan_kswapd_sleep,
 
 TRACE_EVENT(mm_vmscan_kswapd_wake,
 
-	TP_PROTO(int nid, int zid, int order),
+	TP_PROTO(int nid, int zid, int order, int mt),
 
-	TP_ARGS(nid, zid, order),
+	TP_ARGS(nid, zid, order, mt),
 
 	TP_STRUCT__entry(
		__field(	int,	nid	)
		__field(	int,	zid	)
		__field(	int,	order	)
+		__field(	int,	mt	)
	),
 
	TP_fast_assign(
		__entry->nid	= nid;
		__entry->zid	= zid;
		__entry->order	= order;
+		__entry->mt	= mt;
	),
 
-	TP_printk("nid=%d order=%d",
+	TP_printk("nid=%d order=%d migratetype=%d",
		__entry->nid,
-		__entry->order)
+		__entry->order,
+		__entry->mt)
 );
 
 TRACE_EVENT(mm_vmscan_wakeup_kswapd,
 
-	TP_PROTO(int nid, int zid, int order, gfp_t gfp_flags),
+	TP_PROTO(int nid, int zid, int order, int mt, gfp_t gfp_flags),
 
-	TP_ARGS(nid, zid, order, gfp_flags),
+	TP_ARGS(nid, zid, order, mt, gfp_flags),
 
 	TP_STRUCT__entry(
		__field(	int,	nid	)
		__field(	int,	zid	)
		__field(	int,	order	)
+		__field(	int,	mt	)
		__field(	gfp_t,	gfp_flags	)
	),
 
@@ -89,12 +93,14 @@ TRACE_EVENT(mm_vmscan_wakeup_kswapd,
		__entry->nid	= nid;
		__entry->zid	= zid;
		__entry->order	= order;
+		__entry->mt	= mt;
		__entry->gfp_flags	= gfp_flags;
	),
 
-	TP_printk("nid=%d order=%d gfp_flags=%s",
+	TP_printk("nid=%d order=%d migratetype=%d gfp_flags=%s",
		__entry->nid,
		__entry->order,
+		__entry->mt,
		show_gfp_flags(__entry->gfp_flags))
 );
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 519a60d5b6f7..45ceb15721b8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3517,7 +3517,7 @@ struct page *rmqueue(struct zone *preferred_zone,
	/* Separate test+clear to avoid unnecessary atomics */
	if (test_bit(ZONE_BOOSTED_WATERMARK, &zone->flags)) {
		clear_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);
-		wakeup_kswapd(zone, 0, 0, zone_idx(zone));
+		wakeup_kswapd(zone, 0, 0, migratetype, zone_idx(zone));
	}
 
	VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
@@ -4426,11 +4426,12 @@ static void wake_all_kswapds(unsigned int order, gfp_t gfp_mask,
	struct zone *zone;
	pg_data_t *last_pgdat = NULL;
	enum zone_type highest_zoneidx = ac->highest_zoneidx;
+	int migratetype = ac->migratetype;
 
	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, highest_zoneidx,
					ac->nodemask) {
		if (last_pgdat != zone->zone_pgdat)
-			wakeup_kswapd(zone, gfp_mask, order, highest_zoneidx);
+			wakeup_kswapd(zone, gfp_mask, order, migratetype, highest_zoneidx);
		last_pgdat = zone->zone_pgdat;
	}
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b1b574ad199d..184f0c4c7151 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -99,6 +99,9 @@ struct scan_control {
	/* Can pages be swapped as part of reclaim? */
	unsigned int may_swap:1;
 
+	/* Can cma pages be reclaimed? */
+	unsigned int may_cma:1;
+
	/*
	 * Cgroups are not reclaimed below their configured memory.low,
	 * unless we threaten to OOM. If any cgroups are skipped due to
@@ -286,6 +289,11 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+static bool movable_reclaim(gfp_t gfp_mask)
+{
+	return is_migrate_movable(gfp_migratetype(gfp_mask));
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -1499,6 +1507,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
		.gfp_mask = GFP_KERNEL,
		.priority = DEF_PRIORITY,
		.may_unmap = 1,
+		.may_cma = 1,
	};
	struct reclaim_stat stat;
	unsigned int nr_reclaimed;
@@ -1593,6 +1602,9 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
		return ret;
 
+	if ((mode & ISOLATE_NONCMA) && is_migrate_cma(get_pageblock_migratetype(page)))
+		return ret;
+
	return 0;
 }
 
@@ -1647,7 +1659,10 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
	unsigned long skipped = 0;
	unsigned long scan, total_scan, nr_pages;
	LIST_HEAD(pages_skipped);
-	isolate_mode_t mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED);
+	isolate_mode_t mode;
+
+	mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED);
+	mode |= (sc->may_cma ? 0 : ISOLATE_NONCMA);
 
	total_scan = 0;
	scan = 0;
@@ -2125,6 +2140,7 @@ unsigned long reclaim_pages(struct list_head *page_list)
		.may_writepage = 1,
		.may_unmap = 1,
		.may_swap = 1,
+		.may_cma = 1,
	};
 
	while (!list_empty(page_list)) {
@@ -3253,6 +3269,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
		.may_writepage = !laptop_mode,
		.may_unmap = 1,
		.may_swap = 1,
+		.may_cma = movable_reclaim(gfp_mask),
	};
 
	/*
@@ -3298,6 +3315,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
		.may_unmap = 1,
		.reclaim_idx = MAX_NR_ZONES - 1,
		.may_swap = !noswap,
+		.may_cma = 1,
	};
 
	WARN_ON_ONCE(!current->reclaim_state);
@@ -3341,6 +3359,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
		.may_writepage = !laptop_mode,
		.may_unmap = 1,
		.may_swap = may_swap,
+		.may_cma = 1,
	};
	/*
	 * Traverse the ZONELIST_FALLBACK zonelist of the current node to put
@@ -3548,7 +3567,7 @@ static bool kswapd_shrink_node(pg_data_t *pgdat,
  * or lower is eligible for reclaim until at least one usable zone is
  * balanced.
  */
-static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
+static int balance_pgdat(pg_data_t *pgdat, int order, int migratetype, int highest_zoneidx)
 {
	int i;
	unsigned long nr_soft_reclaimed;
@@ -3650,6 +3669,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
		 */
		sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
		sc.may_swap = !nr_boost_reclaim;
+		sc.may_cma = is_migrate_movable(migratetype);
 
		/*
		 * Do some background aging of the anon list, to give
@@ -3771,8 +3791,15 @@ static enum zone_type kswapd_highest_zoneidx(pg_data_t *pgdat,
	return curr_idx == MAX_NR_ZONES ? prev_highest_zoneidx : curr_idx;
 }
 
+static int kswapd_migratetype(pg_data_t *pgdat, int prev_migratetype)
+{
+	int curr_migratetype = READ_ONCE(pgdat->kswapd_migratetype);
+
+	return curr_migratetype == MIGRATE_TYPES ? prev_migratetype : curr_migratetype;
+}
+
 static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_order,
-				unsigned int highest_zoneidx)
+				int migratetype, unsigned int highest_zoneidx)
 {
	long remaining = 0;
	DEFINE_WAIT(wait);
@@ -3807,8 +3834,8 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_o
		remaining = schedule_timeout(HZ/10);
 
		/*
-		 * If woken prematurely then reset kswapd_highest_zoneidx and
-		 * order. The values will either be from a wakeup request or
+		 * If woken prematurely then reset kswapd_highest_zoneidx, order
+		 * and migratetype. The values will either be from a wakeup request or
		 * the previous request that slept prematurely.
		 */
		if (remaining) {
@@ -3818,6 +3845,10 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_o
 
			if (READ_ONCE(pgdat->kswapd_order) < reclaim_order)
				WRITE_ONCE(pgdat->kswapd_order, reclaim_order);
+
+			if (!is_migrate_movable(READ_ONCE(pgdat->kswapd_migratetype)))
+				WRITE_ONCE(pgdat->kswapd_migratetype,
+					   kswapd_migratetype(pgdat, migratetype));
		}
 
		finish_wait(&pgdat->kswapd_wait, &wait);
@@ -3870,6 +3901,7 @@ static void kswapd_try_to_sleep(pg_data_t *pgdat, int alloc_order, int reclaim_o
  */
 static int kswapd(void *p)
 {
+	int migratetype = 0;
	unsigned int alloc_order, reclaim_order;
	unsigned int highest_zoneidx = MAX_NR_ZONES - 1;
	pg_data_t *pgdat = (pg_data_t*)p;
@@ -3895,23 +3927,27 @@ static int kswapd(void *p)
	set_freezable();
 
	WRITE_ONCE(pgdat->kswapd_order, 0);
+	WRITE_ONCE(pgdat->kswapd_migratetype, MIGRATE_TYPES);
	WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES);
	for ( ; ; ) {
		bool ret;
 
		alloc_order = reclaim_order = READ_ONCE(pgdat->kswapd_order);
+		migratetype = kswapd_migratetype(pgdat, migratetype);
		highest_zoneidx = kswapd_highest_zoneidx(pgdat,
							highest_zoneidx);
 
kswapd_try_sleep:
		kswapd_try_to_sleep(pgdat, alloc_order, reclaim_order,
-					highest_zoneidx);
+					migratetype, highest_zoneidx);
 
		/* Read the new order and highest_zoneidx */
		alloc_order = READ_ONCE(pgdat->kswapd_order);
+		migratetype = kswapd_migratetype(pgdat, migratetype);
		highest_zoneidx = kswapd_highest_zoneidx(pgdat,
							highest_zoneidx);
		WRITE_ONCE(pgdat->kswapd_order, 0);
+		WRITE_ONCE(pgdat->kswapd_migratetype, MIGRATE_TYPES);
		WRITE_ONCE(pgdat->kswapd_highest_zoneidx, MAX_NR_ZONES);
 
		ret = try_to_freeze();
@@ -3934,8 +3970,8 @@ static int kswapd(void *p)
		 * request (alloc_order).
		 */
		trace_mm_vmscan_kswapd_wake(pgdat->node_id, highest_zoneidx,
-						alloc_order);
-		reclaim_order = balance_pgdat(pgdat, alloc_order,
+						alloc_order, migratetype);
+		reclaim_order = balance_pgdat(pgdat, alloc_order, migratetype,
						highest_zoneidx);
		if (reclaim_order < alloc_order)
			goto kswapd_try_sleep;
@@ -3953,11 +3989,12 @@ static int kswapd(void *p)
  * has failed or is not needed, still wake up kcompactd if only compaction is
  * needed.
  */
-void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
+void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order, int migratetype,
		   enum zone_type highest_zoneidx)
 {
	pg_data_t *pgdat;
	enum zone_type curr_idx;
+	int curr_migratetype;
 
	if (!managed_zone(zone))
		return;
@@ -3967,6 +4004,7 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
 
	pgdat = zone->zone_pgdat;
	curr_idx = READ_ONCE(pgdat->kswapd_highest_zoneidx);
+	curr_migratetype = READ_ONCE(pgdat->kswapd_migratetype);
 
	if (curr_idx == MAX_NR_ZONES || curr_idx < highest_zoneidx)
		WRITE_ONCE(pgdat->kswapd_highest_zoneidx, highest_zoneidx);
@@ -3974,6 +4012,9 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
	if (READ_ONCE(pgdat->kswapd_order) < order)
		WRITE_ONCE(pgdat->kswapd_order, order);
 
+	if (curr_migratetype == MIGRATE_TYPES || is_migrate_movable(migratetype))
+		WRITE_ONCE(pgdat->kswapd_migratetype, migratetype);
+
	if (!waitqueue_active(&pgdat->kswapd_wait))
		return;
 
@@ -3994,7 +4035,7 @@ void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
	}
 
	trace_mm_vmscan_wakeup_kswapd(pgdat->node_id, highest_zoneidx, order,
-				      gfp_flags);
+				      migratetype, gfp_flags);
	wake_up_interruptible(&pgdat->kswapd_wait);
 }
 
@@ -4017,6 +4058,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
		.may_writepage = 1,
		.may_unmap = 1,
		.may_swap = 1,
+		.may_cma = 1,
		.hibernation_mode = 1,
	};
	struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
@@ -4176,6 +4218,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
		.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
		.may_swap = 1,
+		.may_cma = movable_reclaim(gfp_mask),
		.reclaim_idx = gfp_zone(gfp_mask),
	};
 
-- 
2.25.1