From: Michel Lespinasse
To: Linux-MM
Cc: Laurent Dufour, Peter Zijlstra, Michal Hocko, Matthew Wilcox,
    Rik van Riel, Paul McKenney, Andrew Morton, Suren Baghdasaryan,
    Joel Fernandes, Rom Lemarchand, Linux-Kernel, Michel Lespinasse
Subject: [RFC PATCH 35/37] mm: spf statistics
Date: Tue, 6 Apr 2021 18:45:00 -0700
Message-Id: <20210407014502.24091-36-michel@lespinasse.org>
In-Reply-To: <20210407014502.24091-1-michel@lespinasse.org>
References: <20210407014502.24091-1-michel@lespinasse.org>

Add a new CONFIG_SPECULATIVE_PAGE_FAULT_STATS config option, and dump
extra statistics about executed spf cases and abort reasons when the
option is set.

Signed-off-by: Michel Lespinasse
---
 arch/x86/mm/fault.c           | 19 +++++++---
 include/linux/mmap_lock.h     | 19 +++++++++-
 include/linux/vm_event_item.h | 24 ++++++++++++
 include/linux/vmstat.h        |  6 +++
 mm/Kconfig.debug              |  7 ++++
 mm/memory.c                   | 71 ++++++++++++++++++++++++++++-------
 mm/vmstat.c                   | 24 ++++++++++++
 7 files changed, 149 insertions(+), 21 deletions(-)
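With the option enabled, each new counter is exported as its own
"name value" line in /proc/vmstat, next to the existing
spf_attempt/spf_abort counters. A minimal userspace reader, as a sketch
only: it assumes nothing beyond the one-counter-per-line format of
/proc/vmstat and the spf_/SPF_ name prefixes added by this series.

/* Sketch: dump the SPF counters exported via /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[128];
	unsigned long long val;

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	/* Each line is "name value"; keep only the SPF-related ones. */
	while (fscanf(f, "%127s %llu", name, &val) == 2)
		if (!strncmp(name, "spf_", 4) || !strncmp(name, "SPF_", 4))
			printf("%-28s %llu\n", name, val);
	fclose(f);
	return 0;
}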
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index b1a07ca82d59..e210bbcb8bc5 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1324,22 +1324,31 @@ void do_user_addr_fault(struct pt_regs *regs,
 
 	count_vm_event(SPF_ATTEMPT);
 	seq = mmap_seq_read_start(mm);
-	if (seq & 1)
+	if (seq & 1) {
+		count_vm_spf_event(SPF_ABORT_ODD);
 		goto spf_abort;
+	}
 	rcu_read_lock();
 	vma = find_vma(mm, address);
-	if (!vma || vma->vm_start > address ||
-	    !vma_can_speculate(vma, flags)) {
+	if (!vma || vma->vm_start > address) {
 		rcu_read_unlock();
+		count_vm_spf_event(SPF_ABORT_UNMAPPED);
+		goto spf_abort;
+	}
+	if (!vma_can_speculate(vma, flags)) {
+		rcu_read_unlock();
+		count_vm_spf_event(SPF_ABORT_NO_SPECULATE);
 		goto spf_abort;
 	}
 	pvma = *vma;
 	rcu_read_unlock();
-	if (!mmap_seq_read_check(mm, seq))
+	if (!mmap_seq_read_check(mm, seq, SPF_ABORT_VMA_COPY))
 		goto spf_abort;
 	vma = &pvma;
-	if (unlikely(access_error(error_code, vma)))
+	if (unlikely(access_error(error_code, vma))) {
+		count_vm_spf_event(SPF_ABORT_ACCESS_ERROR);
 		goto spf_abort;
+	}
 	fault = do_handle_mm_fault(vma, address,
 				   flags | FAULT_FLAG_SPECULATIVE, seq, regs);
 
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 8f4eca2d0f43..98f24a9910a9 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 #define MMAP_LOCK_SEQ_INITIALIZER(name) \
@@ -104,12 +105,26 @@ static inline unsigned long mmap_seq_read_start(struct mm_struct *mm)
 	return seq;
 }
 
-static inline bool mmap_seq_read_check(struct mm_struct *mm, unsigned long seq)
+static inline bool __mmap_seq_read_check(struct mm_struct *mm,
+					 unsigned long seq)
 {
 	smp_rmb();
 	return seq == READ_ONCE(mm->mmap_seq);
 }
-#endif
+
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT_STATS
+static inline bool mmap_seq_read_check(struct mm_struct *mm, unsigned long seq,
+				       enum vm_event_item fail_event)
+{
+	if (__mmap_seq_read_check(mm, seq))
+		return true;
+	count_vm_event(fail_event);
+	return false;
+}
+#else
+#define mmap_seq_read_check(mm, seq, fail) __mmap_seq_read_check(mm, seq)
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT_STATS */
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */
 
 static inline void mmap_write_lock(struct mm_struct *mm)
 {
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index cc4f8d14e43f..6d25fd9ce4d1 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -124,6 +124,30 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 		SPF_ATTEMPT,
 		SPF_ABORT,
+#endif
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT_STATS
+		SPF_ABORT_ODD,
+		SPF_ABORT_UNMAPPED,
+		SPF_ABORT_NO_SPECULATE,
+		SPF_ABORT_VMA_COPY,
+		SPF_ABORT_ACCESS_ERROR,
+		SPF_ABORT_PUD,
+		SPF_ABORT_PMD,
+		SPF_ABORT_ANON_VMA,
+		SPF_ABORT_PTE_MAP_LOCK_SEQ1,
+		SPF_ABORT_PTE_MAP_LOCK_PMD,
+		SPF_ABORT_PTE_MAP_LOCK_PTL,
+		SPF_ABORT_PTE_MAP_LOCK_SEQ2,
+		SPF_ABORT_USERFAULTFD,
+		SPF_ABORT_FAULT,
+		SPF_ABORT_NON_SWAP_ENTRY,
+		SPF_ABORT_SWAP_NOPAGE,
+		SPF_ATTEMPT_ANON,
+		SPF_ATTEMPT_FILE,
+		SPF_ATTEMPT_SWAP,
+		SPF_ATTEMPT_NUMA,
+		SPF_ATTEMPT_PTE,
+		SPF_ATTEMPT_WP,
 #endif
 		NR_VM_EVENT_ITEMS
 };
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 506d625163a1..34e05604a93f 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -109,6 +109,12 @@ static inline void vm_events_fold_cpu(int cpu)
 
 #endif /* CONFIG_VM_EVENT_COUNTERS */
 
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT_STATS
+#define count_vm_spf_event(x) count_vm_event(x)
+#else
+#define count_vm_spf_event(x) do {} while (0)
+#endif
+
 #ifdef CONFIG_NUMA_BALANCING
 #define count_vm_numa_event(x)     count_vm_event(x)
 #define count_vm_numa_events(x, y) count_vm_events(x, y)
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 1e73717802f8..6be8ca7950ee 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -150,3 +150,10 @@ config PTDUMP_DEBUGFS
 	  kernel.
 
 	  If in doubt, say N.
+
+config SPECULATIVE_PAGE_FAULT_STATS
+	bool "Additional statistics for speculative page faults"
+	depends on SPECULATIVE_PAGE_FAULT
+	help
+	  Additional statistics for speculative page faults.
+	  If in doubt, say N.
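count_vm_spf_event() follows the same compile-out convention as
count_vm_numa_event(): with the Kconfig option off, the do {} while (0)
form swallows its argument without emitting any code, so call sites stay
free of #ifdefs. A standalone userspace sketch of the idiom, with
illustrative names only (this is not the kernel API):

/* Build with -DSPF_STATS to enable counting; without it, every
 * call site compiles to nothing. */
#include <stdio.h>

#ifdef SPF_STATS
#define count_spf_event(x) printf("counted: %s\n", #x)
#else
#define count_spf_event(x) do {} while (0)
#endif

int main(void)
{
	count_spf_event(SPF_ABORT_ODD);	/* no-op unless -DSPF_STATS */
	return 0;
}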
diff --git a/mm/memory.c b/mm/memory.c
index 074945faf1ab..6165d340e134 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2585,7 +2585,7 @@ bool __pte_map_lock(struct vm_fault *vmf)
 	}
 
 	local_irq_disable();
-	if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq))
+	if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq, SPF_ABORT_PTE_MAP_LOCK_SEQ1))
 		goto fail;
 	/*
 	 * The mmap sequence count check guarantees that the page
@@ -2599,8 +2599,10 @@ bool __pte_map_lock(struct vm_fault *vmf)
 	 * is not a huge collapse operation in progress in our back.
 	 */
 	pmdval = READ_ONCE(*vmf->pmd);
-	if (!pmd_same(pmdval, vmf->orig_pmd))
+	if (!pmd_same(pmdval, vmf->orig_pmd)) {
+		count_vm_spf_event(SPF_ABORT_PTE_MAP_LOCK_PMD);
 		goto fail;
+	}
 #endif
 	ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
 	if (!pte)
@@ -2617,9 +2619,11 @@ bool __pte_map_lock(struct vm_fault *vmf)
 	 * We also don't want to retry until spin_trylock() succeeds,
 	 * because of the starvation potential against a stream of lockers.
 	 */
-	if (unlikely(!spin_trylock(ptl)))
+	if (unlikely(!spin_trylock(ptl))) {
+		count_vm_spf_event(SPF_ABORT_PTE_MAP_LOCK_PTL);
 		goto fail;
-	if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq))
+	}
+	if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq, SPF_ABORT_PTE_MAP_LOCK_SEQ2))
 		goto unlock_fail;
 	local_irq_enable();
 	vmf->pte = pte;
@@ -2891,6 +2895,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 
 	if (unlikely(!vma->anon_vma)) {
 		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			count_vm_spf_event(SPF_ABORT_ANON_VMA);
 			ret = VM_FAULT_RETRY;
 			goto out;
 		}
@@ -3153,10 +3158,15 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		count_vm_spf_event(SPF_ATTEMPT_WP);
+
 	if (userfaultfd_pte_wp(vma, *vmf->pte)) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
-		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			count_vm_spf_event(SPF_ABORT_USERFAULTFD);
 			return VM_FAULT_RETRY;
+		}
 		return handle_userfault(vmf, VM_UFFD_WP);
 	}
 
@@ -3340,6 +3350,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		count_vm_spf_event(SPF_ATTEMPT_SWAP);
+
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
 		/*
@@ -3366,6 +3379,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			count_vm_spf_event(SPF_ABORT_NON_SWAP_ENTRY);
 			ret = VM_FAULT_RETRY;
 		} else if (is_migration_entry(entry)) {
 			migration_entry_wait(vma->vm_mm, vmf->pmd,
@@ -3392,6 +3406,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
 			delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+			count_vm_spf_event(SPF_ABORT_SWAP_NOPAGE);
 			return VM_FAULT_RETRY;
 		}
 
@@ -3598,6 +3613,9 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	vm_fault_t ret = 0;
 	pte_t entry;
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		count_vm_spf_event(SPF_ATTEMPT_ANON);
+
 	/* File mapping without ->vm_ops ? */
 	if (vma->vm_flags & VM_SHARED)
 		return VM_FAULT_SIGBUS;
@@ -3627,8 +3645,10 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	} else {
 		/* Allocate our own private page. */
 		if (unlikely(!vma->anon_vma)) {
-			if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+			if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+				count_vm_spf_event(SPF_ABORT_ANON_VMA);
 				return VM_FAULT_RETRY;
+			}
 			if (__anon_vma_prepare(vma))
 				goto oom;
 		}
@@ -3670,8 +3690,10 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		if (page)
 			put_page(page);
-		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			count_vm_spf_event(SPF_ABORT_USERFAULTFD);
 			return VM_FAULT_RETRY;
+		}
 		return handle_userfault(vmf, VM_UFFD_MISSING);
 	}
 
@@ -3712,7 +3734,8 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 	if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
 		rcu_read_lock();
-		if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq)) {
+		if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq,
+					 SPF_ABORT_FAULT)) {
 			ret = VM_FAULT_RETRY;
 		} else {
 			/*
@@ -4042,7 +4065,8 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 	rcu_read_lock();
 #ifdef CONFIG_SPECULATIVE_PAGE_FAULT
 	if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
-		if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq)) {
+		if (!mmap_seq_read_check(vmf->vma->vm_mm, vmf->seq,
+					 SPF_ABORT_FAULT)) {
 			rcu_read_unlock();
 			return VM_FAULT_RETRY;
 		}
@@ -4091,8 +4115,10 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 	vm_fault_t ret;
 
 	if (unlikely(!vma->anon_vma)) {
-		if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		if (vmf->flags & FAULT_FLAG_SPECULATIVE) {
+			count_vm_spf_event(SPF_ABORT_ANON_VMA);
 			return VM_FAULT_RETRY;
+		}
 		if (__anon_vma_prepare(vma))
 			return VM_FAULT_OOM;
 	}
@@ -4178,6 +4204,9 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
 	struct mm_struct *vm_mm = vma->vm_mm;
 	vm_fault_t ret;
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		count_vm_spf_event(SPF_ATTEMPT_FILE);
+
 	/*
 	 * The VMA was not fully populated on mmap() or missing VM_DONTEXPAND
 	 */
@@ -4251,6 +4280,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	bool was_writable = pte_savedwrite(vmf->orig_pte);
 	int flags = 0;
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		count_vm_spf_event(SPF_ATTEMPT_NUMA);
+
 	/*
 	 * The "pte" at this point cannot be used safely without
 	 * validation through pte_unmap_same(). It's of NUMA type but
@@ -4423,6 +4455,9 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 	if (pte_protnone(vmf->orig_pte) && vma_is_accessible(vmf->vma))
 		return do_numa_page(vmf);
 
+	if (vmf->flags & FAULT_FLAG_SPECULATIVE)
+		count_vm_spf_event(SPF_ATTEMPT_PTE);
+
 	if (!pte_spinlock(vmf))
 		return VM_FAULT_RETRY;
 	entry = vmf->orig_pte;
@@ -4490,20 +4525,26 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	local_irq_disable();
 	pgd = pgd_offset(mm, address);
 	pgdval = READ_ONCE(*pgd);
-	if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval)))
+	if (pgd_none(pgdval) || unlikely(pgd_bad(pgdval))) {
+		count_vm_spf_event(SPF_ABORT_PUD);
 		goto spf_fail;
+	}
 
 	p4d = p4d_offset(pgd, address);
 	p4dval = READ_ONCE(*p4d);
-	if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
+	if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval))) {
+		count_vm_spf_event(SPF_ABORT_PUD);
 		goto spf_fail;
+	}
 
 	vmf.pud = pud_offset(p4d, address);
 	pudval = READ_ONCE(*vmf.pud);
 	if (pud_none(pudval) || unlikely(pud_bad(pudval)) ||
 	    unlikely(pud_trans_huge(pudval)) ||
-	    unlikely(pud_devmap(pudval)))
+	    unlikely(pud_devmap(pudval))) {
+		count_vm_spf_event(SPF_ABORT_PUD);
 		goto spf_fail;
+	}
 
 	vmf.pmd = pmd_offset(vmf.pud, address);
 	vmf.orig_pmd = READ_ONCE(*vmf.pmd);
@@ -4521,8 +4562,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	if (unlikely(pmd_none(vmf.orig_pmd) ||
 		     is_swap_pmd(vmf.orig_pmd) ||
 		     pmd_trans_huge(vmf.orig_pmd) ||
-		     pmd_devmap(vmf.orig_pmd)))
+		     pmd_devmap(vmf.orig_pmd))) {
+		count_vm_spf_event(SPF_ABORT_PMD);
 		goto spf_fail;
+	}
 
 	/*
 	 * The above does not allocate/instantiate page-tables because
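Most of the abort points instrumented above hang off one optimistic
pattern: sample the mmap sequence count, do the lookup and VMA copy
locklessly, then recheck the count before trusting the snapshot; an odd
count means a writer is mid-update. A simplified C11 rendition of that
read side, with made-up type and field names (not the kernel
implementation):

#include <stdatomic.h>
#include <stdbool.h>

struct guarded {
	_Atomic unsigned long seq;	/* odd while a writer is active */
	_Atomic int payload;
};

/* Speculative read: returns false when the caller must fall back
 * to the locked path, mirroring the SPF_ABORT_* cases above. */
static bool read_speculative(struct guarded *g, int *out)
{
	unsigned long seq = atomic_load_explicit(&g->seq, memory_order_acquire);

	if (seq & 1)		/* writer in progress: cf. SPF_ABORT_ODD */
		return false;
	*out = atomic_load_explicit(&g->payload, memory_order_relaxed);
	atomic_thread_fence(memory_order_acquire);	/* cf. smp_rmb() */
	return seq == atomic_load_explicit(&g->seq, memory_order_relaxed);
}

A writer would bump seq to odd, update payload, then bump it back to
even; the final comparison catches any writer that slipped in between
the two loads.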
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 9ae1c27a549e..ac4ff4343a49 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1369,6 +1369,30 @@ const char * const vmstat_text[] = {
 	"spf_attempt",
 	"spf_abort",
 #endif
+#ifdef CONFIG_SPECULATIVE_PAGE_FAULT_STATS
+	"SPF_ABORT_ODD",
+	"SPF_ABORT_UNMAPPED",
+	"SPF_ABORT_NO_SPECULATE",
+	"SPF_ABORT_VMA_COPY",
+	"SPF_ABORT_ACCESS_ERROR",
+	"SPF_ABORT_PUD",
+	"SPF_ABORT_PMD",
+	"SPF_ABORT_ANON_VMA",
+	"SPF_ABORT_PTE_MAP_LOCK_SEQ1",
+	"SPF_ABORT_PTE_MAP_LOCK_PMD",
+	"SPF_ABORT_PTE_MAP_LOCK_PTL",
+	"SPF_ABORT_PTE_MAP_LOCK_SEQ2",
+	"SPF_ABORT_USERFAULTFD",
+	"SPF_ABORT_FAULT",
+	"SPF_ABORT_NON_SWAP_ENTRY",
+	"SPF_ABORT_SWAP_NOPAGE",
+	"SPF_ATTEMPT_ANON",
+	"SPF_ATTEMPT_FILE",
+	"SPF_ATTEMPT_SWAP",
+	"SPF_ATTEMPT_NUMA",
+	"SPF_ATTEMPT_PTE",
+	"SPF_ATTEMPT_WP",
+#endif
 #endif /* CONFIG_VM_EVENT_COUNTERS || CONFIG_MEMCG */
 };
 #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA || CONFIG_MEMCG */
-- 
2.20.1