Date: Tue, 26 Jan 2021 16:00:16 +0100
From: Michal Hocko
To: Xing Zhengjun
Cc: linux-mm@kvack.org, LKML, Dave Hansen, Tony, Tim C Chen, "Huang, Ying", "Du, Julie"
Subject: Re: Test report for kernel direct mapping performance
Message-ID: <20210126150016.GT827@dhcp22.suse.cz>
In-Reply-To: <213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com>

On Fri 15-01-21 15:23:07, Xing Zhengjun wrote:
> Hi,
>
> There is currently a bit of a debate about the kernel direct map. Does
> using 2M/1G pages aggressively for the kernel direct map help performance?
> Or is it an old optimization which is not as helpful on modern CPUs as it
> was in the old days? What is the penalty of a kernel feature that heavily
> demotes this mapping from larger to smaller pages?
> We did a set of runs with 1G and 2M pages enabled/disabled and observed
> the changes.
>
> [Conclusions]
>
> Assuming that this was a good representative set of workloads and that the
> data are good, for server usage, we conclude that the existing aggressive
> use of 1G mappings is a good choice since it represents the best in a
> plurality of the workloads. However, in a *majority* of cases, another
> mapping size (2M or 4k) potentially offers a performance improvement. This
> leads us to conclude that although 1G mappings are a good default choice,
> there is no compelling evidence that it must be the only choice, or that
> folks deriving benefits (like hardening) from smaller mapping sizes should
> avoid the smaller mapping sizes.

Thanks for conducting these tests! This is definitely useful, and quite
honestly I would have expected much more noticeable differences. Please
note that I am not really deep into benchmarking, but one thing that popped
into my mind was whether these (micro)benchmarks are really representative
workloads. Some of them tend to be rather narrow in the code paths they
execute or the data structures they use, AFAIU. Is it possible they simply
didn't generate sufficient TLB pressure? Have you looked closer at profiles
of the respective configurations to see where the overhead comes from?

--
Michal Hocko
SUSE Labs
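[Not part of the original thread: as a practical aside, on x86-64 the kernel exposes the current direct-map split in /proc/meminfo, and perf's dTLB events give a first-order view of the TLB pressure Michal asks about. A minimal sketch, assuming an x86-64 Linux machine; `./your_workload` in the commented perf line is a placeholder, and hardware event names vary by CPU:]

```shell
#!/bin/sh
# Show how much of the kernel direct map is currently backed by
# 4k/2M/1G pages. The DirectMap* fields are x86-specific and may be
# absent on other architectures or older kernels.
grep -i '^DirectMap' /proc/meminfo \
    || echo "DirectMap counters not exposed on this kernel/arch"

# First-order TLB-pressure check for a workload (requires perf; event
# names differ between CPU models):
#   perf stat -e dTLB-load-misses,iTLB-load-misses -- ./your_workload
```

Comparing the DirectMap4k/2M/1G counters before and after a fragmenting feature (e.g. one that splits large mappings) shows how much demotion actually occurred, which helps interpret any benchmark delta.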