Date: Mon, 29 Aug 2016 14:19:48 +0530
From: Anshuman Khandual
Subject: Re: [PATCH] thp: reduce usage of huge zero page's atomic counter
Message-Id: <57C3F72C.6030405@linux.vnet.ibm.com>
To: Aaron Lu, Linux Memory Management List
Cc: "Kirill A. Shutemov", Dave Hansen, Tim Chen, Huang Ying,
    Andrew Morton, Vlastimil Babka, Jerome Marchand, Andrea Arcangeli,
    Mel Gorman, Ebru Akagunduz, linux-kernel@vger.kernel.org

On 08/29/2016 12:01 PM, Aaron Lu wrote:
> The global zero page is used to satisfy an anonymous read fault. If
> THP (Transparent HugePage) is enabled, the global huge zero page is
> used instead. The global huge zero page uses an atomic counter for
> reference counting and is allocated/freed dynamically according to
> its counter value.
>
> CPU time spent on that counter increases greatly when many processes
> are doing anonymous read faults. This patch proposes a way to reduce
> accesses to the global counter so that the CPU load can be reduced
> accordingly.
>
> To do this, a new mm_struct flag is introduced: MMF_USED_HUGE_ZERO_PAGE.
> With this flag, a process only needs to touch the global counter in
> two cases:
> 1. The first time it uses the global huge zero page;
> 2. When mm_users of its mm_struct reaches zero.
>
> Note that right now the huge zero page is eligible to be freed as soon
> as its last user goes away. With this patch, the page will not be
> eligible to be freed until the last process that ever used it exits.
>
> With the use of mm_users, a kthread is not eligible to use the huge
> zero page either.
> Since no kthread is using the huge zero page today, there is no
> difference after applying this patch. But if that is not desired, I
> can change it to when mm_count reaches zero.
>
> Case used for test on Haswell EP:
> usemem -n 72 --readonly -j 0x200000 100G

Is this benchmark publicly available? It does not seem to be this one,
https://github.com/gnubert/usemem.git, does it?
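
Just to make sure I follow the scheme, below is a rough sketch of how I
read the fast path. This is only an illustration of the description
above, not the actual patch: the mm_get/put_huge_zero_page() wrapper
names are my own, and I am assuming the existing counter-based
get_huge_zero_page()/put_huge_zero_page() helpers underneath.

/*
 * Sketch only: a per-mm flag lets each mm take and drop the global
 * huge zero page reference once, instead of on every read fault.
 */
static bool mm_get_huge_zero_page(struct mm_struct *mm)
{
	/* Fast path: this mm already holds its single reference. */
	if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		return true;

	/* Slow path: take one reference on behalf of the whole mm. */
	if (!get_huge_zero_page())
		return false;

	/* A concurrent fault may have set the flag first; drop the extra ref. */
	if (test_and_set_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();

	return true;
}

/* Called once, when mm_users of the mm drops to zero. */
static void mm_put_huge_zero_page(struct mm_struct *mm)
{
	if (test_bit(MMF_USED_HUGE_ZERO_PAGE, &mm->flags))
		put_huge_zero_page();
}

If that reading is right, the test_and_set_bit() race handling is what
keeps the global counter exact even when several threads of the same
mm fault on the huge zero page at the same time.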