From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de,
    kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com,
    david@redhat.com, jgg@nvidia.com, tj@kernel.org, dennis@kernel.org,
    ming.lei@redhat.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, songmuchun@bytedance.com,
    zhouchengming@bytedance.com, Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [RFC PATCH 00/18] Try to free user PTE page table pages
Date: Fri, 29 Apr 2022 21:35:34 +0800
Message-Id: <20220429133552.33768-1-zhengqi.arch@bytedance.com>

Hi,

This patch series tries to free user PTE page table pages when no one is
using them.

The story begins with the observation that some malloc libraries (e.g.
jemalloc or tcmalloc) usually allocate large amounts of virtual address
space with mmap() and never unmap it; when they want to release physical
memory they use madvise(MADV_DONTNEED). But madvise() does not free the
page tables, so a process that touches an enormous virtual address space
can accumulate a huge number of page tables. The following figures are a
memory usage snapshot of one process that actually ran on our server:

        VIRT:  55t
        RES:   590g
        VmPTE: 110g

As we can see, the PTE page tables alone take 110g, while RES is 590g. In
theory, the process only needs about 1.2g of PTE page tables to map that
much physical memory. The reason PTE page tables occupy so much memory is
that madvise(MADV_DONTNEED) only clears the PTEs and frees the physical
memory, but does not free the PTE page table pages themselves. So we can
free those empty PTE page tables to save memory. In the case above we
could save about 108g (best case), and the larger the difference between
VIRT and RES, the more memory we save.

In this patch series, we add a pte_ref field to the struct page of a page
table page to track how many users the user PTE page table has. Similar
to the page refcount mechanism, a user of a PTE page table must hold a
refcount on it before accessing it, and the user PTE page table page may
be freed when the last refcount is dropped.

Different from my earlier patchset[1], pte_ref is now a struct percpu_ref,
and we switch it to atomic mode only in cases such as MADV_DONTNEED and
MADV_FREE that may clear user PTE page table entries, releasing the user
PTE page table page once pte_ref drops to 0. The advantage is that there
is essentially no performance overhead in percpu mode, while empty PTE
pages can still be freed. In addition, the implementation of this patchset
is much simpler and more portable than the earlier one[1].

Testing:

The following code snippet shows the effect of the optimization:

        mmap 50G
        while (1) {
                for (; i < 1024 * 25; i++) {
                        touch 2M memory
                        madvise MADV_DONTNEED 2M
                }
        }

As we can see, the memory usage of VmPTE is reduced:

                        before          after
        VIRT          50.0 GB         50.0 GB
        RES            3.1 MB          3.1 MB
        VmPTE       102640 kB           96 kB

I have also tested stability with LTP[2] for several weeks and have not
seen any crash so far.

This series is based on v5.18-rc2.

Comments and suggestions are welcome.

Thanks,
Qi.
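P.S. For anyone who wants to reproduce the VmPTE growth without the series
applied, below is a minimal C version of the pseudocode above. The 50G
mapping size and the 2M touch/MADV_DONTNEED step come from that snippet;
the error handling and the VmPTE read from /proc/self/status are only my
own scaffolding for illustration, not code from the series:

/* Minimal reproducer sketch for the VmPTE growth described above. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define TOTAL	(50UL << 30)	/* mmap 50G                       */
#define STEP	(2UL << 20)	/* touch/madvise 2M at a time     */

int main(void)
{
	char *area = mmap(NULL, TOTAL, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	for (unsigned long off = 0; off < TOTAL; off += STEP) {
		/* touch 2M memory, then free it again right away */
		memset(area + off, 0x5a, STEP);
		madvise(area + off, STEP, MADV_DONTNEED);
	}

	/* Without this series, VmPTE stays large even though RES is tiny. */
	char line[256];
	FILE *f = fopen("/proc/self/status", "r");
	while (f && fgets(line, sizeof(line), f))
		if (!strncmp(line, "VmPTE", 5))
			fputs(line, stdout);
	if (f)
		fclose(f);
	return 0;
}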
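To make the conversion pattern behind the pte_tryget_map{_lock}() patches
a bit more concrete, here is a rough sketch of how a typical PTE walker
might look after conversion. The helper names come from the patch titles
below; their exact signatures and return conventions are my assumptions
for illustration only, not code copied from the series:

/*
 * Illustrative sketch only. Assumption: pte_tryget_map_lock() behaves
 * like pte_offset_map_lock() but first takes a pte_ref reference on the
 * PTE page table page and returns NULL if that page has already been
 * freed; pte_put() drops the reference again (signature assumed).
 */
static void example_touch_ptes(struct mm_struct *mm, pmd_t *pmd,
			       unsigned long addr)
{
	spinlock_t *ptl;
	pte_t *pte;

	/* Before this series: pte = pte_offset_map_lock(mm, pmd, addr, &ptl); */
	pte = pte_tryget_map_lock(mm, pmd, addr, &ptl);
	if (!pte)
		return;		/* the PTE page was freed concurrently */

	/* ... read or modify the PTEs under the lock, as before ... */

	pte_unmap_unlock(pte, ptl);
	pte_put(mm, pmd, addr);	/* assumed signature: drop the pte_ref */
}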
[1] https://patchwork.kernel.org/project/linux-mm/cover/20211110105428.32458-1-zhengqi.arch@bytedance.com/
[2] https://github.com/linux-test-project/ltp

Qi Zheng (18):
  x86/mm/encrypt: add the missing pte_unmap() call
  percpu_ref: make ref stable after percpu_ref_switch_to_atomic_sync() returns
  percpu_ref: make percpu_ref_switch_lock per percpu_ref
  mm: convert to use ptep_clear() in pte_clear_not_present_full()
  mm: split the related definitions of pte_offset_map_lock() into pgtable.h
  mm: introduce CONFIG_FREE_USER_PTE
  mm: add pte_to_page() helper
  mm: introduce percpu_ref for user PTE page table page
  pte_ref: add pte_tryget() and {__,}pte_put() helper
  mm: add pte_tryget_map{_lock}() helper
  mm: convert to use pte_tryget_map_lock()
  mm: convert to use pte_tryget_map()
  mm: add try_to_free_user_pte() helper
  mm: use try_to_free_user_pte() in MADV_DONTNEED case
  mm: use try_to_free_user_pte() in MADV_FREE case
  pte_ref: add track_pte_{set, clear}() helper
  x86/mm: add x86_64 support for pte_ref
  Documentation: add document for pte_ref

 Documentation/vm/index.rst         |   1 +
 Documentation/vm/pte_ref.rst       | 210 ++++++++++++++++++++++++++
 arch/x86/Kconfig                   |   1 +
 arch/x86/include/asm/pgtable.h     |   7 +-
 arch/x86/mm/mem_encrypt_identity.c |  10 +-
 fs/proc/task_mmu.c                 |  16 +-
 fs/userfaultfd.c                   |  10 +-
 include/linux/mm.h                 | 162 ++------------------
 include/linux/mm_types.h           |   1 +
 include/linux/percpu-refcount.h    |   6 +-
 include/linux/pgtable.h            | 196 +++++++++++++++++++++++-
 include/linux/pte_ref.h            |  73 +++++++++
 include/linux/rmap.h               |   2 +
 include/linux/swapops.h            |   4 +-
 kernel/events/core.c               |   5 +-
 lib/percpu-refcount.c              |  86 +++++++----
 mm/Kconfig                         |  10 ++
 mm/Makefile                        |   2 +-
 mm/damon/vaddr.c                   |  30 ++--
 mm/debug_vm_pgtable.c              |   2 +-
 mm/filemap.c                       |   4 +-
 mm/gup.c                           |  20 ++-
 mm/hmm.c                           |   9 +-
 mm/huge_memory.c                   |   4 +-
 mm/internal.h                      |   3 +-
 mm/khugepaged.c                    |  18 ++-
 mm/ksm.c                           |   4 +-
 mm/madvise.c                       |  35 +++--
 mm/memcontrol.c                    |   8 +-
 mm/memory-failure.c                |  15 +-
 mm/memory.c                        | 187 +++++++++++++++--------
 mm/mempolicy.c                     |   4 +-
 mm/migrate.c                       |   8 +-
 mm/migrate_device.c                |  22 ++-
 mm/mincore.c                       |   5 +-
 mm/mlock.c                         |   5 +-
 mm/mprotect.c                      |   4 +-
 mm/mremap.c                        |  10 +-
 mm/oom_kill.c                      |   3 +-
 mm/page_table_check.c              |   2 +-
 mm/page_vma_mapped.c               |  59 +++++++-
 mm/pagewalk.c                      |   6 +-
 mm/pte_ref.c                       | 230 +++++++++++++++++++++++++++++
 mm/rmap.c                          |   9 ++
 mm/swap_state.c                    |   4 +-
 mm/swapfile.c                      |  18 ++-
 mm/userfaultfd.c                   |  11 +-
 mm/vmalloc.c                       |   2 +-
 48 files changed, 1203 insertions(+), 340 deletions(-)
 create mode 100644 Documentation/vm/pte_ref.rst
 create mode 100644 include/linux/pte_ref.h
 create mode 100644 mm/pte_ref.c

-- 
2.20.1