Date: Tue, 24 Jul 2018 12:08:00 +0300
From: "Kirill A. Shutemov"
Subject: Re: [PATCH] mm: thp: remove use_zero_page sysfs knob
Message-ID: <20180724090800.g43mmfnuuqwczzb2@kshutemo-mobl1>
References: <1532110430-115278-1-git-send-email-yang.shi@linux.alibaba.com> <20180720123243.6dfc95ba061cd06e05c0262e@linux-foundation.org> <3238b5d2-fd89-a6be-0382-027a24a4d3ad@linux.alibaba.com> <20180722035156.GA12125@bombadil.infradead.org>
To: David Rientjes
Cc: Matthew Wilcox, Yang Shi, Andrew Morton, hughd@google.com, aaron.lu@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org

On Mon, Jul 23, 2018 at 02:33:08PM -0700, David Rientjes wrote:
> On Mon, 23 Jul 2018, David Rientjes wrote:
> 
> > > > The huge zero page can be reclaimed under memory pressure and, if it
> > > > is, an attempt is made to allocate it again with gfp flags that
> > > > attempt memory compaction, which can become expensive. If we are
> > > > constantly under memory pressure, it gets freed and reallocated
> > > > millions of times, always trying to compact memory both directly and
> > > > by kicking kcompactd in the background.
> > > > 
> > > > It likely should also be per node.
> > > 
> > > Have you benchmarked making the non-huge zero page per-node?
> > Not since we disable it :) I will, though. The more concerning issue
> > for us, modulo CVE-2017-1000405, is the CPU cost of constantly directly
> > compacting memory to allocate the hzp in real time after it has been
> > reclaimed. We've observed this happening tens or hundreds of thousands
> > of times on some systems. It will be 2MB per node on x86 if the data
> > suggests we should make it NUMA aware; I don't think the cost is too
> > high to leave it persistently available, even under memory pressure, if
> > use_zero_page is enabled.
> 
> Measuring access latency to 4GB of memory on Naples, I observe ~6.7%
> slower access latency intrasocket and ~14% slower intersocket.
> 
> use_zero_page is currently a simple thp flag, meaning it rejects writes
> where val != !!val, so perhaps it would be best to overload it with
> additional options? I can imagine 0x2 defining persistent allocation, so
> that the hzp is not freed when the refcount goes to 0, and 0x4 defining
> whether the hzp should be per node. Implementing persistent allocation
> fixes our concern with it, so I'd like to start there. Comments?

Why not separate files?

-- 
Kirill A. Shutemov