Date: Sat, 30 Apr 2022 21:35:06 +0800
Subject: Re: [RFC PATCH 13/18] mm: add try_to_free_user_pte() helper
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, tglx@linutronix.de, kirill.shutemov@linux.intel.com, david@redhat.com, jgg@nvidia.com, tj@kernel.org, dennis@kernel.org, ming.lei@redhat.com
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, songmuchun@bytedance.com, zhouchengming@bytedance.com
References: <20220429133552.33768-1-zhengqi.arch@bytedance.com> <20220429133552.33768-14-zhengqi.arch@bytedance.com>
In-Reply-To: <20220429133552.33768-14-zhengqi.arch@bytedance.com>

On 2022/4/29 9:35 PM, Qi Zheng wrote:
> Normally, the percpu_ref of the user PTE page table page is in
> percpu mode. This patch adds try_to_free_user_pte() to switch
> the percpu_ref to atomic mode and check if it is 0. If the
> percpu_ref is 0, which means that no one is using the user PTE
> page table page, then we can safely reclaim it.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> ---
>  include/linux/pte_ref.h |  7 +++
>  mm/pte_ref.c            | 99 ++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 104 insertions(+), 2 deletions(-)
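As context for reviewers: the reclaim check in this patch boils down to the
generic percpu-refcount pattern, i.e. switch the ref to atomic mode so the
per-CPU counters are summed into a single count, then test that count for
zero. A minimal, self-contained sketch of that pattern follows; all names
(demo_ref, demo_release(), demo_setup(), demo_try_reclaim()) are made up for
illustration, only the percpu-refcount API calls themselves are real, and it
leaves out all of the page-table specifics:

#include <linux/gfp.h>
#include <linux/percpu-refcount.h>

static struct percpu_ref demo_ref;

/* Invoked once the reference count has dropped to zero. */
static void demo_release(struct percpu_ref *ref)
{
}

static int demo_setup(void)
{
	/* The ref starts out in the fast per-CPU mode. */
	return percpu_ref_init(&demo_ref, demo_release, 0, GFP_KERNEL);
}

static bool demo_try_reclaim(void)
{
	/* Summing the per-CPU counters requires switching to atomic mode. */
	percpu_ref_switch_to_atomic_sync(&demo_ref);

	if (percpu_ref_is_zero(&demo_ref))
		return true;	/* no users left, safe to reclaim */

	/* Still referenced; optionally go back to the fast per-CPU mode. */
	percpu_ref_switch_to_percpu(&demo_ref);
	return false;
}

percpu_ref_switch_to_atomic_sync() only returns once the mode switch has
completed, so the subsequent percpu_ref_is_zero() sees a stable count.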
> diff --git a/include/linux/pte_ref.h b/include/linux/pte_ref.h
> index bfe620038699..379c3b45a6ab 100644
> --- a/include/linux/pte_ref.h
> +++ b/include/linux/pte_ref.h
> @@ -16,6 +16,8 @@ void free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
>  bool pte_tryget(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
>  void __pte_put(pgtable_t page);
>  void pte_put(pte_t *ptep);
> +void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
> +			  bool switch_back);
>
>  #else /* !CONFIG_FREE_USER_PTE */
>
> @@ -47,6 +49,11 @@ static inline void pte_put(pte_t *ptep)
>  {
>  }
>
> +static inline void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd,
> +					unsigned long addr, bool switch_back)
> +{
> +}
> +
>  #endif /* CONFIG_FREE_USER_PTE */
>
>  #endif /* _LINUX_PTE_REF_H */
> diff --git a/mm/pte_ref.c b/mm/pte_ref.c
> index 5b382445561e..bf9629272c71 100644
> --- a/mm/pte_ref.c
> +++ b/mm/pte_ref.c
> @@ -8,6 +8,9 @@
>  #include
>  #include
>  #include
> +#include
> +#include
> +#include
>
>  #ifdef CONFIG_FREE_USER_PTE
>
> @@ -44,8 +47,6 @@ void pte_ref_free(pgtable_t pte)
>  	kfree(ref);
>  }
>
> -void free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr) {}
> -
>  /*
>   * pte_tryget - try to get the pte_ref of the user PTE page table page
>   * @mm: pointer the target address space
> @@ -102,4 +103,98 @@ void pte_put(pte_t *ptep)
>  }
>  EXPORT_SYMBOL(pte_put);
>
> +#ifdef CONFIG_DEBUG_VM
> +void pte_free_debug(pmd_t pmd)
> +{
> +	pte_t *ptep = (pte_t *)pmd_page_vaddr(pmd);
> +	int i = 0;
> +
> +	for (i = 0; i < PTRS_PER_PTE; i++)
> +		BUG_ON(!pte_none(*ptep++));
> +}
> +#else
> +static inline void pte_free_debug(pmd_t pmd)
> +{
> +}
> +#endif
> +
> +static inline void pte_free_rcu(struct rcu_head *rcu)
> +{
> +	struct page *page = container_of(rcu, struct page, rcu_head);
> +
> +	pgtable_pte_page_dtor(page);
> +	__free_page(page);
> +}
> +
> +/*
> + * free_user_pte - free the user PTE page table page
> + * @mm: pointer the target address space
> + * @pmd: pointer to a PMD
> + * @addr: start address of the tlb range to be flushed
> + *
> + * Context: The pmd range has been unmapped and TLB purged. And the user PTE
> + * page table page will be freed by rcu handler.
> + */
> +void free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
> +{
> +	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
> +	spinlock_t *ptl;
> +	pmd_t pmdval;
> +
> +	ptl = pmd_lock(mm, pmd);
> +	pmdval = *pmd;
> +	if (pmd_none(pmdval) || pmd_leaf(pmdval)) {
> +		spin_unlock(ptl);
> +		return;
> +	}
> +	pmd_clear(pmd);
> +	flush_tlb_range(&vma, addr, addr + PMD_SIZE);
> +	spin_unlock(ptl);
> +
> +	pte_free_debug(pmdval);
> +	mm_dec_nr_ptes(mm);
> +	call_rcu(&pmd_pgtable(pmdval)->rcu_head, pte_free_rcu);
> +}
> +
> +/*
> + * try_to_free_user_pte - try to free the user PTE page table page
> + * @mm: pointer the target address space
> + * @pmd: pointer to a PMD
> + * @addr: virtual address associated with pmd
> + * @switch_back: indicates if switching back to percpu mode is required
> + */
> +void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr,
> +			  bool switch_back)
> +{
> +	pgtable_t pte;
> +
> +	if (&init_mm == mm)
> +		return;
> +
> +	if (!pte_tryget(mm, pmd, addr))
> +		return;
> +	pte = pmd_pgtable(*pmd);
> +	percpu_ref_switch_to_atomic_sync(pte->pte_ref);
> +	rcu_read_lock();
> +	/*
> +	 * Here we can safely put the pte_ref because we already hold the rcu
> +	 * lock, which guarantees that the user PTE page table page will not
> +	 * be released.
> +	 */
> +	__pte_put(pte);
> +	if (percpu_ref_is_zero(pte->pte_ref)) {
> +		rcu_read_unlock();
> +		free_user_pte(mm, pmd, addr & PMD_MASK);
> +		return;
> +	}
> +	rcu_read_unlock();
> +
> +	if (switch_back) {
> +		if (pte_tryget(mm, pmd, addr)) {
> +			percpu_ref_switch_to_percpu(pte->pte_ref);
> +			__pte_put(pte);
> +		}
> +	}

We shouldn't switch back to percpu mode here; it will drastically reduce
performance.

> +}
> +
>  #endif /* CONFIG_FREE_USER_PTE */

-- 
Thanks,
Qi
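To make that suggestion concrete: with the switch-back dropped (and the then
unused switch_back parameter removed), the helper would reduce to something
like the sketch below. This is only an illustration derived from the hunk
quoted above, not a replacement patch:

void try_to_free_user_pte(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
{
	pgtable_t pte;

	if (&init_mm == mm)
		return;

	if (!pte_tryget(mm, pmd, addr))
		return;
	pte = pmd_pgtable(*pmd);
	percpu_ref_switch_to_atomic_sync(pte->pte_ref);
	rcu_read_lock();
	/* The RCU read lock keeps the PTE page from being freed under us. */
	__pte_put(pte);
	if (percpu_ref_is_zero(pte->pte_ref)) {
		rcu_read_unlock();
		free_user_pte(mm, pmd, addr & PMD_MASK);
		return;
	}
	rcu_read_unlock();
	/* Still in use: leave the pte_ref in atomic mode, no switch back. */
}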