Subject: Re: [RFC] mm/vmscan.c: avoid possible long latency caused by too_many_isolated()
From: Xing Zhengjun
To: Yu Zhao
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, ying.huang@intel.com, tim.c.chen@linux.intel.com, Shakeel Butt, Michal Hocko, wfg@mail.ustc.edu.cn
Date: Fri, 23 Apr 2021 14:57:07 +0800
Message-ID: <7a0fecab-f9e1-ad39-d55e-01e574a35484@linux.intel.com>
References: <20210416023536.168632-1-zhengjun.xing@linux.intel.com> <7b7a1c09-3d16-e199-15d2-ccea906d4a66@linux.intel.com>
On 4/23/2021 1:13 AM, Yu Zhao wrote:
> On Thu, Apr 22, 2021 at 04:36:19PM +0800, Xing Zhengjun wrote:
>> Hi,
>>
>> On a system with very few file pages (nr_active_file + nr_inactive_file
>> < 100), it is easy to reproduce "nr_isolated_file > nr_inactive_file",
>> which makes too_many_isolated() return true and shrink_inactive_list()
>> enter "msleep(100)", causing long latency.
>>
>> The test case to reproduce it is very simple: allocate many huge pages
>> (near the DRAM size), then free them, and repeat the same operation
>> many times.
>>
>> While running the test case on a system with very few file pages
>> (nr_active_file + nr_inactive_file < 100), I dumped the numbers of
>> active/inactive/isolated file pages during the whole test (see the
>> attachments). In shrink_inactive_list(), too_many_isolated() very
>> easily returns true, and the task enters "msleep(100)". In
>> too_many_isolated(), sc->gfp_mask is 0x342cca ("__GFP_IO" and
>> "__GFP_FS" are both set in the mask), so the "inactive >>= 3" branch is
>> also taken very easily, after which "isolated > inactive" becomes true.
>>
>> So my proposal is to set a threshold number for the total file pages,
>> to detect systems with very few file pages and bypass the 100ms sleep
>> on them. It is hard to pick a perfect number for the threshold, so I
>> just give "256" as an example.
>>
>> I would appreciate your suggestions/comments. Thanks.
> 
> Hi Zhengjun,
> 
> It seems to me that using the number of isolated pages to keep a lid on
> direct reclaimers is not a good solution. We shouldn't keep going in
> that direction if we really want to fix the problem, because migration
> can isolate many pages too, which in turn blocks page reclaim.
> 
> Here is something that works a lot better. Please give it a try. Thanks.

Thanks, I will try it with my test cases.
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 507d216610bf2..9a09f7e76f6b8 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -951,6 +951,8 @@ typedef struct pglist_data {
>  
>  	/* Fields commonly accessed by the page reclaim scanner */
>  
> +	atomic_t nr_reclaimers;
> +
>  	/*
>  	 * NOTE: THIS IS UNUSED IF MEMCG IS ENABLED.
>  	 *
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1c080fafec396..f7278642290a6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1786,43 +1786,6 @@ int isolate_lru_page(struct page *page)
>  	return ret;
>  }
>  
> -/*
> - * A direct reclaimer may isolate SWAP_CLUSTER_MAX pages from the LRU list and
> - * then get rescheduled. When there are massive number of tasks doing page
> - * allocation, such sleeping direct reclaimers may keep piling up on each CPU,
> - * the LRU list will go small and be scanned faster than necessary, leading to
> - * unnecessary swapping, thrashing and OOM.
> - */
> -static int too_many_isolated(struct pglist_data *pgdat, int file,
> -		struct scan_control *sc)
> -{
> -	unsigned long inactive, isolated;
> -
> -	if (current_is_kswapd())
> -		return 0;
> -
> -	if (!writeback_throttling_sane(sc))
> -		return 0;
> -
> -	if (file) {
> -		inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
> -		isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
> -	} else {
> -		inactive = node_page_state(pgdat, NR_INACTIVE_ANON);
> -		isolated = node_page_state(pgdat, NR_ISOLATED_ANON);
> -	}
> -
> -	/*
> -	 * GFP_NOIO/GFP_NOFS callers are allowed to isolate more pages, so they
> -	 * won't get blocked by normal direct-reclaimers, forming a circular
> -	 * deadlock.
> -	 */
> -	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
> -		inactive >>= 3;
> -
> -	return isolated > inactive;
> -}
> -
>  /*
>   * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
>   * On return, @list is reused as a list of pages to be freed by the caller.
> @@ -1924,19 +1887,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>  	bool stalled = false;
>  
> -	while (unlikely(too_many_isolated(pgdat, file, sc))) {
> -		if (stalled)
> -			return 0;
> -
> -		/* wait a bit for the reclaimer. */
> -		msleep(100);
> -		stalled = true;
> -
> -		/* We are about to die and free our memory. Return now. */
> -		if (fatal_signal_pending(current))
> -			return SWAP_CLUSTER_MAX;
> -	}
> -
>  	lru_add_drain();
>  
>  	spin_lock_irq(&lruvec->lru_lock);
> @@ -3302,6 +3252,7 @@ static bool throttle_direct_reclaim(gfp_t gfp_mask, struct zonelist *zonelist,
>  unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>  				gfp_t gfp_mask, nodemask_t *nodemask)
>  {
> +	int nr_cpus;
>  	unsigned long nr_reclaimed;
>  	struct scan_control sc = {
>  		.nr_to_reclaim = SWAP_CLUSTER_MAX,
> @@ -3334,8 +3285,17 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>  	set_task_reclaim_state(current, &sc.reclaim_state);
>  	trace_mm_vmscan_direct_reclaim_begin(order, sc.gfp_mask);
>  
> +	nr_cpus = current_is_kswapd() ? 0 : num_online_cpus();
> +	while (nr_cpus && !atomic_add_unless(&pgdat->nr_reclaimers, 1, nr_cpus)) {
> +		if (schedule_timeout_killable(HZ / 10))
> +			return SWAP_CLUSTER_MAX;
> +	}
> +
>  	nr_reclaimed = do_try_to_free_pages(zonelist, &sc);
>  
> +	if (nr_cpus)
> +		atomic_dec(&pgdat->nr_reclaimers);
> +
>  	trace_mm_vmscan_direct_reclaim_end(nr_reclaimed);
>  	set_task_reclaim_state(current, NULL);

-- 
Zhengjun Xing