Message-ID: <075975db-a59b-483a-95d7-0990442ebe6f@arm.com>
Date: Tue, 30 Jan 2024 09:03:36 +0000
Subject: Re: [PATCH v1 6/9] mm/mmu_gather: define ENCODED_PAGE_FLAG_DELAY_RMAP
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Catalin Marinas,
 Will Deacon, "Aneesh Kumar K.V", Nick Piggin, Peter Zijlstra,
 Michael Ellerman, Christophe Leroy,
 "Naveen N. Rao", Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
 Christian Borntraeger, Sven Schnelle, Arnd Bergmann,
 linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org
References: <20240129143221.263763-1-david@redhat.com>
 <20240129143221.263763-7-david@redhat.com>
From: Ryan Roberts
In-Reply-To: <20240129143221.263763-7-david@redhat.com>

On 29/01/2024 14:32, David Hildenbrand wrote:
> Nowadays, encoded pages are only used in mmu_gather handling. Let's
> update the documentation, and define ENCODED_PAGE_BIT_DELAY_RMAP. While at
> it, rename ENCODE_PAGE_BITS to ENCODED_PAGE_BITS.
>
> If encoded page pointers would ever be used in other context again, we'd
> likely want to change the defines to reflect their context (e.g.,
> ENCODED_PAGE_FLAG_MMU_GATHER_DELAY_RMAP). For now, let's keep it simple.
>
> This is a preparation for using the remaining spare bit to indicate that
> the next item in an array of encoded pages is a "nr_pages" argument and
> not an encoded page.
>
> Signed-off-by: David Hildenbrand

Reviewed-by: Ryan Roberts

> ---
>  include/linux/mm_types.h | 17 +++++++++++------
>  mm/mmu_gather.c          |  5 +++--
>  2 files changed, 14 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 8b611e13153e..1b89eec0d6df 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -210,8 +210,8 @@ struct page {
>   *
>   * An 'encoded_page' pointer is a pointer to a regular 'struct page', but
>   * with the low bits of the pointer indicating extra context-dependent
> - * information. Not super-common, but happens in mmu_gather and mlock
> - * handling, and this acts as a type system check on that use.
> + * information. Only used in mmu_gather handling, and this acts as a type
> + * system check on that use.
>   *
>   * We only really have two guaranteed bits in general, although you could
>   * play with 'struct page' alignment (see CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
> @@ -220,21 +220,26 @@ struct page {
>   * Use the supplied helper functions to endcode/decode the pointer and bits.
>   */
>  struct encoded_page;
> -#define ENCODE_PAGE_BITS	3ul
> +
> +#define ENCODED_PAGE_BITS		3ul
> +
> +/* Perform rmap removal after we have flushed the TLB. */
> +#define ENCODED_PAGE_BIT_DELAY_RMAP	1ul
> +
>  static __always_inline struct encoded_page *encode_page(struct page *page, unsigned long flags)
>  {
> -	BUILD_BUG_ON(flags > ENCODE_PAGE_BITS);
> +	BUILD_BUG_ON(flags > ENCODED_PAGE_BITS);
>  	return (struct encoded_page *)(flags | (unsigned long)page);
>  }
>
>  static inline unsigned long encoded_page_flags(struct encoded_page *page)
>  {
> -	return ENCODE_PAGE_BITS & (unsigned long)page;
> +	return ENCODED_PAGE_BITS & (unsigned long)page;
>  }
>
>  static inline struct page *encoded_page_ptr(struct encoded_page *page)
>  {
> -	return (struct page *)(~ENCODE_PAGE_BITS & (unsigned long)page);
> +	return (struct page *)(~ENCODED_PAGE_BITS & (unsigned long)page);
>  }
>
>  /*
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index ac733d81b112..6540c99c6758 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -53,7 +53,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_
>  	for (int i = 0; i < batch->nr; i++) {
>  		struct encoded_page *enc = batch->encoded_pages[i];
>
> -		if (encoded_page_flags(enc)) {
> +		if (encoded_page_flags(enc) & ENCODED_PAGE_BIT_DELAY_RMAP) {
>  			struct page *page = encoded_page_ptr(enc);
>  			folio_remove_rmap_pte(page_folio(page), page, vma);
>  		}
> @@ -119,6 +119,7 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
>  bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
>  		bool delay_rmap, int page_size)
>  {
> +	int flags = delay_rmap ? ENCODED_PAGE_BIT_DELAY_RMAP : 0;
>  	struct mmu_gather_batch *batch;
>
>  	VM_BUG_ON(!tlb->end);
> @@ -132,7 +133,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
>  	 * Add the page and check if we are full. If so
>  	 * force a flush.
>  	 */
> -	batch->encoded_pages[batch->nr++] = encode_page(page, delay_rmap);
> +	batch->encoded_pages[batch->nr++] = encode_page(page, flags);
>  	if (batch->nr == batch->max) {
>  		if (!tlb_next_batch(tlb))
>  			return true;
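
For anyone less familiar with the encoded_page scheme: it is plain pointer
tagging. struct page is at least 4-byte aligned, so the two low bits of a
page pointer are always zero and can carry per-entry flags such as the
delay-rmap bit above. Below is a stand-alone userspace sketch of the same
idea; the type and helper names mirror the patch, but it is illustrative
only, not the kernel implementation.

/*
 * Stand-alone sketch of the low-bit pointer tagging used by encoded_page.
 * Illustrative only: userspace stand-ins, not the kernel code.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct page { long dummy; };		/* stand-in for the real struct page */
struct encoded_page;			/* opaque: page pointer + low-bit flags */

#define ENCODED_PAGE_BITS		3ul
#define ENCODED_PAGE_BIT_DELAY_RMAP	1ul

static struct encoded_page *encode_page(struct page *page, unsigned long flags)
{
	/* The kernel checks this at compile time with BUILD_BUG_ON(). */
	assert(flags <= ENCODED_PAGE_BITS);
	return (struct encoded_page *)(flags | (uintptr_t)page);
}

static unsigned long encoded_page_flags(struct encoded_page *page)
{
	return ENCODED_PAGE_BITS & (uintptr_t)page;
}

static struct page *encoded_page_ptr(struct encoded_page *page)
{
	return (struct page *)(~ENCODED_PAGE_BITS & (uintptr_t)page);
}

int main(void)
{
	static struct page page;	/* naturally aligned, low bits of &page are zero */
	struct encoded_page *enc = encode_page(&page, ENCODED_PAGE_BIT_DELAY_RMAP);

	/* The flag travels inside the pointer; masking recovers the original. */
	printf("delay rmap: %lu\n",
	       encoded_page_flags(enc) & ENCODED_PAGE_BIT_DELAY_RMAP);
	printf("round trip ok: %d\n", encoded_page_ptr(enc) == &page);
	return 0;
}

The appeal of the scheme is that an array of encoded_page pointers is no
larger than an array of plain page pointers, which is how the mmu_gather
batch carries the delay-rmap information without any extra storage.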