Date: Mon, 9 Mar 2020 18:48:34 +0300
From: "Kirill A. Shutemov"
To: David Rientjes
Cc: Andrew Morton, Yang Shi, "Kirill A. Shutemov", Mike Rapoport,
 Jeremy Cline, Linux Kernel Mailing List, Linux MM
Subject: Re: [patch 2/2] mm, thp: track fallbacks due to failed memcg charges separately
Message-ID: <20200309154834.qrmq566e2kc54ktt@box>
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Fri, Mar 06, 2020 at 02:22:35PM -0800, David Rientjes wrote:
> The thp_fault_fallback and thp_file_fallback vmstats are incremented if
> either the hugepage allocation fails through the page allocator or the
> hugepage charge fails through mem cgroup.
>
> This patch leaves this field untouched but adds two new fields,
> thp_{fault,file}_fallback_charge, which is incremented only when the mem
> cgroup charge fails.
>
> This distinguishes between attempted hugepage allocations that fail due to
> fragmentation (or low memory conditions) and those that fail due to mem
> cgroup limits. That can be used to determine the impact of fragmentation
> on the system by excluding faults that failed due to memcg usage.
>
> Signed-off-by: David Rientjes

Acked-by: Kirill A. Shutemov

> ---
>  Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
>  include/linux/vm_event_item.h              |  3 +++
>  mm/huge_memory.c                           |  2 ++
>  mm/shmem.c                                 |  4 +++-
>  mm/vmstat.c                                |  2 ++
>  5 files changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -310,6 +310,11 @@ thp_fault_fallback
>          is incremented if a page fault fails to allocate
>          a huge page and instead falls back to using small pages.
>
> +thp_fault_fallback_charge
> +        is incremented if a page fault fails to charge a huge page and
> +        instead falls back to using small pages even though the
> +        allocation was successful.
> +
>  thp_collapse_alloc_failed
>          is incremented if khugepaged found a range
>          of pages that should be collapsed into one huge page but failed
> @@ -323,6 +328,11 @@ thp_file_fallback
>          is incremented if a file huge page is attempted to be allocated
>          but fails and instead falls back to using small pages.
>
> +thp_file_fallback_charge
> +        is incremented if a file huge page cannot be charged and instead
> +        falls back to using small pages even though the allocation was
> +        successful.
> +
>  thp_file_mapped
>          is incremented every time a file huge page is mapped into
>          user address space.
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -73,10 +73,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>                  THP_FAULT_ALLOC,
>                  THP_FAULT_FALLBACK,
> +                THP_FAULT_FALLBACK_CHARGE,
>                  THP_COLLAPSE_ALLOC,
>                  THP_COLLAPSE_ALLOC_FAILED,
>                  THP_FILE_ALLOC,
>                  THP_FILE_FALLBACK,
> +                THP_FILE_FALLBACK_CHARGE,
>                  THP_FILE_MAPPED,
>                  THP_SPLIT_PAGE,
>                  THP_SPLIT_PAGE_FAILED,
> @@ -117,6 +119,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
>  #ifndef CONFIG_TRANSPARENT_HUGEPAGE
>  #define THP_FILE_ALLOC ({ BUILD_BUG(); 0; })
>  #define THP_FILE_FALLBACK ({ BUILD_BUG(); 0; })
> +#define THP_FILE_FALLBACK_CHARGE ({ BUILD_BUG(); 0; })
>  #define THP_FILE_MAPPED ({ BUILD_BUG(); 0; })
>  #endif
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>          if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
>                  put_page(page);
>                  count_vm_event(THP_FAULT_FALLBACK);
> +                count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>                  return VM_FAULT_FALLBACK;
>          }
>
> @@ -1406,6 +1407,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
>                  put_page(page);
>                  ret |= VM_FAULT_FALLBACK;
>                  count_vm_event(THP_FAULT_FALLBACK);
> +                count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>                  goto out;
>          }
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1871,8 +1871,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>          error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
>                                              PageTransHuge(page));
>          if (error) {
> -                if (PageTransHuge(page))
> +                if (PageTransHuge(page)) {
>                          count_vm_event(THP_FILE_FALLBACK);
> +                        count_vm_event(THP_FILE_FALLBACK_CHARGE);
> +                }
>                  goto unacct;
>          }
>          error = shmem_add_to_page_cache(page, mapping, hindex,
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1254,10 +1254,12 @@ const char * const vmstat_text[] = {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>          "thp_fault_alloc",
>          "thp_fault_fallback",
> +        "thp_fault_fallback_charge",
>          "thp_collapse_alloc",
>          "thp_collapse_alloc_failed",
>          "thp_file_alloc",
>          "thp_file_fallback",
> +        "thp_file_fallback_charge",
>          "thp_file_mapped",
>          "thp_split_page",
>          "thp_split_page_failed",

-- 
 Kirill A. Shutemov
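A minimal userspace sketch of how the new counters could be consumed once the
patch is applied. It relies only on thp_fault_fallback and
thp_fault_fallback_charge appearing as "name value" pairs in /proc/vmstat (the
names added to vmstat_text above, under CONFIG_TRANSPARENT_HUGEPAGE); the
program itself, its helper function, and its output format are illustrative
and not part of the patch.

/*
 * Sketch: read /proc/vmstat and split THP fault fallbacks into those
 * caused by the page allocator (fragmentation / low memory) and those
 * caused by a failed memcg charge. Assumes the patch above is applied,
 * so "thp_fault_fallback_charge" exists in /proc/vmstat.
 */
#include <stdio.h>
#include <string.h>

/* Return the value of a /proc/vmstat counter, or 0 if it is not found. */
static unsigned long long vmstat_read(const char *name)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char key[128];
	unsigned long long val, ret = 0;

	if (!f)
		return 0;
	while (fscanf(f, "%127s %llu", key, &val) == 2) {
		if (!strcmp(key, name)) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	unsigned long long fallback = vmstat_read("thp_fault_fallback");
	unsigned long long charge = vmstat_read("thp_fault_fallback_charge");

	/*
	 * thp_fault_fallback counts both allocation and charge failures;
	 * thp_fault_fallback_charge counts only charge failures, so the
	 * difference approximates allocator-side (fragmentation) fallbacks.
	 */
	printf("allocator-only thp fault fallbacks: %llu\n", fallback - charge);
	printf("memcg charge thp fault fallbacks:   %llu\n", charge);
	return 0;
}

Run after some THP activity; per the commit message, the difference between
the two counters estimates the impact of fragmentation or low memory, with
memcg-limited faults excluded. The same arithmetic applies to the
thp_file_fallback / thp_file_fallback_charge pair.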