From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [v2 PATCH 1/2] mm: swap: make page_evictable() inline
To: Yang Shi, shakeelb@google.com, willy@infradead.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1584397455-28701-1-git-send-email-yang.shi@linux.alibaba.com>
From: Vlastimil Babka
Date: Tue, 17 Mar 2020 09:59:05 +0100
In-Reply-To: <1584397455-28701-1-git-send-email-yang.shi@linux.alibaba.com>

On 3/16/20 11:24 PM, Yang Shi wrote:
> When backporting commit 9c4e6b1a7027 ("mm, mlock, vmscan: no more
> skipping pagevecs") to our 4.9 kernel, our test bench noticed a roughly
> 10% regression in a couple of vm-scalability test cases
> (lru-file-readonce, lru-file-readtwice and lru-file-mmap-read). I didn't
> see that much of a drop on my VM (32c-64g-2nodes). It might be caused by
> the test configuration, which is 32c-256g with NUMA disabled and the
> tests run in the root memcg, so the tests actually stress only a single
> pair of inactive and active LRUs. That is not a very typical setup in
> modern production environments.
>
> That commit made two major changes:
> 1. Call page_evictable()
> 2. Use smp_mb() to make the PG_lru flag setting visible
>
> It looks like these contribute most of the overhead. page_evictable()
> is an out-of-line function, so every call pays the prologue/epilogue
> cost, and it used to be called only from the page reclaim path. However,
> LRU add is a very hot path, so it is better to make it inline. It also
> calls page_mapping(), which is not inlined either, but the disassembly
> shows that page_mapping() does no push/pop operations, and inlining it
> is not as straightforward.
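
For reference, the inlined helper would presumably look something like the
following (a sketch based on the page_evictable() implementation in
mm/vmscan.c at the time; the destination header, assumed here to be
include/linux/swap.h, and the exact signature are not taken from the patch
itself):

/*
 * Sketch: page_evictable() moved into a header (assumed:
 * include/linux/swap.h) so that hot callers such as the LRU-add path get
 * it inlined instead of paying for an out-of-line call.  The body mirrors
 * the existing mm/vmscan.c implementation.
 */
static inline bool page_evictable(struct page *page)
{
        bool ret;

        /* Prevent address_space of inode and swap cache from being freed */
        rcu_read_lock();
        ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page);
        rcu_read_unlock();
        return ret;
}
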
> Other than that, smp_mb() does not look necessary on x86, since
> SetPageLRU() is an atomic operation that already implies a full memory
> barrier there; the following patch replaces it with
> smp_mb__after_atomic().

(A sketch of this barrier change follows at the end of this mail.)

> With the two fixes applied, the tests regain around 5% on that test
> bench and get back to normal on my VM. Since the test bench
> configuration is not that usual, and I also saw around a 6% improvement
> on the latest upstream, this sounds good enough IMHO.
>
> Below is the test data (lru-file-readtwice throughput) against v5.6-rc4:
>
>     mainline    w/ inline fix
>     150MB       154MB
>
> With this patch the throughput improves by 2.67%. The data with
> smp_mb__after_atomic() is shown in the following patch.
>
> Fixes: 9c4e6b1a7027 ("mm, mlock, vmscan: no more skipping pagevecs")
> Cc: Shakeel Butt
> Cc: Vlastimil Babka
> Cc: Matthew Wilcox
> Signed-off-by: Yang Shi

Acked-by: Vlastimil Babka

Thanks.
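
For context, the barrier change mentioned above (the actual hunk is in the
follow-up patch) would boil down to roughly the following on the LRU-add
path; __pagevec_lru_add_fn() in mm/swap.c is an assumption about the exact
site, based on the commit being fixed:

/*
 * Sketch of the LRU-add path (assumed: __pagevec_lru_add_fn() in
 * mm/swap.c).  SetPageLRU() is an atomic RMW, so ordering against the
 * page_evictable() check below can rely on smp_mb__after_atomic(), which
 * is only a compiler barrier on x86, instead of a full smp_mb().
 */
SetPageLRU(page);
smp_mb__after_atomic();         /* was: smp_mb() */

if (page_evictable(page)) {
        /* account and add the page to an active/inactive LRU list */
} else {
        /* mark the page unevictable and add it to the unevictable list */
}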