Subject: Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add
To: Andrew Morton
Cc: Konstantin Khlebnikov, Hugh Dickins, Yu Zhao, Michal Hocko,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
From: Alex Shi
Date: Mon, 23 Nov 2020 12:46:36 +0800
In-Reply-To: <20201120151948.c3f4175ed18ed74e46760b87@linux-foundation.org>

On 2020/11/21 7:19 AM, Andrew Morton wrote:
> On Fri, 20 Nov 2020 16:27:27 +0800
> Alex Shi wrote:
>
>> The current relock logic changes lru_lock whenever a new lruvec is
>> found, so if two memcgs are reading files or allocating pages at the
>> same time, they can end up holding the lru_lock alternately and
>> waiting on each other because of the fairness property of the ticket
>> spinlock.
>>
>> This patch sorts all the lru_locks and holds each of them only once
>> in the above scenario, which reduces the fairness-induced waiting to
>> re-acquire the lock. With that, vm-scalability/case-lru-file-readtwice
>> gets a ~5% performance gain on my 2P*20core*HT machine.
>
> But what happens when all or most of the pages belong to the same
> lruvec? This sounds like the common case - won't it suffer?

Hi Andrew,

My testing shows no regression in that situation, e.g. on an unmodified
CentOS 7 setup; most of the time is still spent on the lru_lock in
lru-sensitive cases.

Thanks
Alex
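
The batching idea in the quoted patch description -- take each per-lruvec
lru_lock once per batch instead of re-taking a lock every time the next
page belongs to a different lruvec -- can be sketched in plain userspace C.
This is only an illustration under assumed names (struct lruvec_sim,
struct page_sim, add_batch_relock, add_batch_sorted are all hypothetical);
it is not the actual mm/swap.c change.

        /*
         * Illustrative userspace sketch only -- not the real kernel code.
         * Pattern: instead of re-taking a per-group lock every time the
         * next item belongs to a different group, sort the batch by group
         * first so each lock is taken once per run of equal groups.
         */
        #include <pthread.h>
        #include <stdint.h>
        #include <stdlib.h>

        struct lruvec_sim {                 /* stands in for a per-memcg/node lruvec */
                pthread_mutex_t lock;       /* stands in for the per-lruvec lru_lock */
        };

        struct page_sim {
                struct lruvec_sim *lruvec;  /* which lruvec the page belongs to */
        };

        /* Order pages by lruvec pointer so pages of one lruvec become adjacent. */
        static int cmp_by_lruvec(const void *a, const void *b)
        {
                uintptr_t la = (uintptr_t)((const struct page_sim *)a)->lruvec;
                uintptr_t lb = (uintptr_t)((const struct page_sim *)b)->lruvec;

                return (la > lb) - (la < lb);
        }

        /* Naive variant: may drop and re-take a lock on every page of an
         * interleaved batch. */
        static void add_batch_relock(struct page_sim *pages, int n)
        {
                struct lruvec_sim *locked = NULL;

                for (int i = 0; i < n; i++) {
                        struct lruvec_sim *lv = pages[i].lruvec;

                        if (lv != locked) {
                                if (locked)
                                        pthread_mutex_unlock(&locked->lock);
                                pthread_mutex_lock(&lv->lock);
                                locked = lv;
                        }
                        /* ... add pages[i] to lv's LRU list here ... */
                }
                if (locked)
                        pthread_mutex_unlock(&locked->lock);
        }

        /* Sorted variant: each distinct lruvec lock is taken once per batch. */
        static void add_batch_sorted(struct page_sim *pages, int n)
        {
                qsort(pages, n, sizeof(*pages), cmp_by_lruvec);
                add_batch_relock(pages, n); /* relocks only at lruvec boundaries */
        }

        int main(void)
        {
                struct lruvec_sim a, b;
                struct page_sim pages[] = { { &a }, { &b }, { &a }, { &b }, { &a } };

                pthread_mutex_init(&a.lock, NULL);
                pthread_mutex_init(&b.lock, NULL);

                /*
                 * Interleaved input: the naive loop would take a lock five
                 * times; the sorted variant takes each of the two locks
                 * exactly once (two acquisitions total).
                 */
                add_batch_sorted(pages, 5);
                return 0;
        }

Andrew's question in the thread is about the degenerate case where the
whole batch already maps to one lruvec, so the sort is pure overhead;
Alex's reply above is that his testing showed no regression there.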