Date: Mon, 16 Aug 2021 15:27:55 +0100
From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand
Cc: Khalid Aziz, "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)",
	Steven Sistare, Anthony Yznaga, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, "Gonglei (Arei)"
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC
On Mon, Aug 16, 2021 at 04:10:28PM +0200, David Hildenbrand wrote:
> > > > Until recently, CPUs only had 4 1GB TLB entries. I'm sure we
> > > > still have customers using that generation of CPUs. 2MB pages
> > > > perform better than 1GB pages on the previous generation of
> > > > hardware, and I haven't seen numbers for the next generation yet.
> > >
> > > I read that somewhere else before, yet we have heavy 1 GiB page
> > > users, especially in the context of VMs and DPDK.
> >
> > I wonder if those users actually benchmarked. Or whether the memory
> > savings worked out so well for them that the loss of TLB performance
> > didn't matter.
>
> These applications are extremely performance sensitive (i.e., RT
> workloads),

"Real time" does not mean "real fast"; it means predictable latency.

> > > I will rephrase my previous statement: "hugetlbfs just doesn't
> > > raise these problems because we are special casing it all over the
> > > place already". For example, not allowing such pages to be swapped.
> > > Disallowing MADV_DONTNEED. Special hugetlbfs locking.
> >
> > Sure, that's why I want to drag this feature out of "oh this is a
> > hugetlb special case" and into "this is something Linux supports".
>
> I would have understood the move to optimize SHMEM internally - similar
> to how we seem to optimize hugetlbfs SHMEM right now internally
> (although sharing page tables for shmem can still be quite tricky).
>
> I did not follow why we have to play games with MAP_PRIVATE, have
> private anonymous pages shared between processes without COW, introduce
> new syscalls, etc.

It's not about SHMEM, it's about file-backed pages on regular
filesystems. I don't want XFS, ext4 and btrfs each carrying their own
implementation of ARCH_WANT_HUGE_PMD_SHARE.
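
For reference, this is roughly what the heavy 1 GiB users mentioned
above do. A minimal sketch, assuming 1 GiB pages were reserved up front
(e.g. hugepagesz=1G hugepages=2 on the kernel command line) and a glibc
new enough to define MAP_HUGE_1GB (otherwise include <linux/mman.h>):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

#define MAP_SIZE (1UL << 30)	/* one 1 GiB page */

int main(void)
{
	void *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		       MAP_HUGE_1GB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGE_1GB)");	/* e.g. no pages reserved */
		return 1;
	}

	/* First touch faults in a single 1 GiB page; on CPUs with only
	 * four 1 GiB TLB entries, a handful of such mappings already
	 * covers the whole 1 GiB TLB. */
	*(volatile char *)p = 0;

	munmap(p, MAP_SIZE);
	return 0;
}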
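
The MADV_DONTNEED special case above is easy to demonstrate from
userspace. A sketch, assuming 2 MiB hugepages are reserved; on kernels
of this era hugetlb VMAs reject MADV_DONTNEED with EINVAL:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 2UL << 20;		/* one 2 MiB hugepage */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	/* Fails with EINVAL on hugetlb mappings at this point in time:
	 * one of the "special casing it all over the place" examples. */
	if (madvise(p, len, MADV_DONTNEED) != 0)
		perror("madvise(MADV_DONTNEED)");

	return 0;
}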
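
And for readers who haven't seen the RFC: the interface being debated
looks roughly like this. A sketch only - MADV_DOEXEC was never merged,
so the constant below is hypothetical and the program only gets past
the madvise() call on a kernel carrying the RFC patches:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef MADV_DOEXEC
#define MADV_DOEXEC 22	/* hypothetical value; the real one comes from the RFC's headers */
#endif

int main(int argc, char **argv)
{
	size_t len = 2UL << 20;

	if (argc > 1) {
		/* The re-exec'ed image: the mapping set up below should
		 * still be present at the same address. */
		void *addr = (void *)strtoul(argv[1], NULL, 16);
		printf("after exec: %s\n", (char *)addr);
		return 0;
	}

	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return 1;
	strcpy(addr, "preserved across exec");

	/* Ask the kernel to keep this mapping (pages and page tables)
	 * across execve(). On mainline this fails with EINVAL. */
	if (madvise(addr, len, MADV_DOEXEC) != 0) {
		perror("madvise(MADV_DOEXEC)");
		return 1;
	}

	char buf[32];
	snprintf(buf, sizeof(buf), "%lx", (unsigned long)addr);
	execl("/proc/self/exe", argv[0], buf, (char *)NULL);
	return 1;
}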