Message-ID: <8d0bc258-58ba-52c5-2e0d-a588489f2572@redhat.com>
Date: Wed, 10 Nov 2021 14:25:50 +0100
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Jason Gunthorpe, Qi Zheng
Cc: akpm@linux-foundation.org, tglx@linutronix.de, kirill.shutemov@linux.intel.com, mika.penttila@nextfour.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, songmuchun@bytedance.com, zhouchengming@bytedance.com
Subject: Re: [PATCH v3 00/15] Free user PTE page table pages
In-Reply-To: <20211110125601.GQ1740502@nvidia.com>
References: <20211110105428.32458-1-zhengqi.arch@bytedance.com> <20211110125601.GQ1740502@nvidia.com>

On 10.11.21 13:56, Jason Gunthorpe wrote:
> On Wed, Nov 10, 2021 at 06:54:13PM +0800, Qi Zheng wrote:
>
>> In this patch series, we add a pte_refcount field to the struct page of a
>> page table to track how many users the PTE page table has.
>> Similar to the mechanism of the page refcount, a user of a PTE page
>> table should hold a refcount on it before accessing it. The PTE page
>> table page will be freed when the last refcount is dropped.
>
> So, this approach basically adds two atomics on every PTE map.
>
> If I have it right, the reason that zap cannot clean the PTEs today is
> because zap cannot obtain the mmap lock due to a lock ordering issue
> with the inode lock vs the mmap lock.

There are different ways to zap: madvise(DONTNEED) vs fallocate(PUNCH_HOLE). It depends on "from where" we're actually coming: a process page table walker or the rmap.

The way locking currently works doesn't allow removing a page table just by holding the mmap lock, not even in write mode. You'd also need to hold the respective rmap locks -- which implies that reclaiming page tables crossing VMAs is "problematic". Take a look at khugepaged, which has to play quite some tricks to remove a page table.

And there are other ways we can create empty page tables via the rmap, like reclaim/writeback, although they are rather a secondary concern mostly.

> If it could obtain the mmap lock, then it could do the zap using the
> write side, as unmapping a VMA does.
>
> Rather than adding a new "lock" to every PTE, I wonder if it would be
> more efficient to break up the mmap lock and introduce a specific
> rwsem for the page table itself, in addition to the PTL. Currently the
> mmap lock is protecting both the VMA list and the page table.

There is the rmap side of things as well. At least the rmap won't alloc/free page tables, but it will walk page tables while holding the respective rmap lock.

> I think that would allow the lock ordering issue to be resolved and
> zap could obtain a page table rwsem.
>
> Compared to two atomics per PTE, this would just be two atomics per
> page table walk operation, it is conceptually a lot simpler, and it
> would allow freeing all the page table levels, not just PTEs.
Another alternative is to not do it in the kernel automatically, but instead have a madvise(MADV_CLEANUP_PGTABLE) mechanism that user space calls explicitly once it's reasonable. While this would work for the obvious madvise(DONTNEED) users that zap memory -- like memory allocators -- it's a bit more complicated once shared memory is involved and we fallocate(PUNCH_HOLE) memory. But it would at least work for many use cases that want to optimize memory consumption for sparse memory mappings.

Note that PTE tables are the biggest memory consumer. On x86-64, a 1 TiB area will consume 2 GiB of PTE tables but only 4 MiB of PMD tables. So PTE tables are most certainly the most important piece.

-- 
Thanks,

David / dhildenb