Subject: Re: [PATCH 1/3] mm/swap.c: pre-sort pages in pagevec for pagevec_lru_move_fn
To: Michal Hocko
Cc: vbabka@suse.cz, Konstantin Khlebnikov, Hugh Dickins, Yu Zhao, "Matthew Wilcox (Oracle)", Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20201126155553.GT4327@casper.infradead.org> <1606809735-43300-1-git-send-email-alex.shi@linux.alibaba.com> <20201201081031.GQ17338@dhcp22.suse.cz>
From: Alex Shi
Message-ID: <0a679cbb-bd4e-b958-f875-de8350e13c08@linux.alibaba.com>
Date: Tue, 1 Dec 2020 16:20:30 +0800
In-Reply-To: <20201201081031.GQ17338@dhcp22.suse.cz>

On 2020/12/1 4:10 PM, Michal Hocko wrote:
> On Tue 01-12-20 16:02:13, Alex Shi wrote:
>> Pages in a pagevec may belong to different lruvecs, so we have to relock in
>> pagevec_lru_move_fn(); a relock may make the current CPU wait
>> a long time on the same lock because of spinlock fairness.
>>
>> Before the per-memcg lru_lock, we had to bear the relock since the spinlock
>> was the only way to serialize a page's memcg/lruvec. Now TestClearPageLRU
>> can be used to isolate pages exclusively and stabilize the page's
>> lruvec/memcg. So it gives us a chance to sort the pages by lruvec before
>> the move action in pagevec_lru_move_fn(). Then we don't suffer the
>> spinlock's fairness wait.
> Do you have any data to show any improvements from this?
>

Hi Michal,

Thanks for the quick response. Not yet, I am collecting the data. But according to the lru_add result, there should be a big gain for the multiple-memcgs scenario.

Also I don't expect a quick accept; I just sent the idea out for comments while the thread is still warm. :)

Thanks
Alex