From: Max Kellermann <max.kellermann@ionos.com>
To: akpm@linux-foundation.org, david@redhat.com, axelrasmussen@google.com,
	yuanchu@google.com, willy@infradead.org, hughd@google.com,
	mhocko@suse.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, vbabka@suse.cz,
	rppt@kernel.org, surenb@google.com, vishal.moola@gmail.com,
	linux@armlinux.org.uk, James.Bottomley@HansenPartnership.com,
	deller@gmx.de, agordeev@linux.ibm.com, gerald.schaefer@linux.ibm.com,
	hca@linux.ibm.com, gor@linux.ibm.com, borntraeger@linux.ibm.com,
	svens@linux.ibm.com, davem@davemloft.net, andreas@gaisler.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
	hpa@zytor.com, chris@zankel.net, jcmvbkbc@gmail.com,
	viro@zeniv.linux.org.uk, brauner@kernel.org, jack@suse.cz,
	weixugc@google.com, baolin.wang@linux.alibaba.com, rientjes@google.com,
	shakeel.butt@linux.dev, max.kellermann@ionos.com, thuth@redhat.com,
	broonie@kernel.org, osalvador@suse.de, jfalempe@redhat.com,
	mpe@ellerman.id.au, nysal@linux.ibm.com,
	linux-arm-kernel@lists.infradead.org, linux-parisc@vger.kernel.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v5 12/12] mm: constify highmem related functions for improved const-correctness
Date: Mon, 1 Sep 2025 14:30:28 +0200
Message-ID: <20250901123028.3383461-13-max.kellermann@ionos.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250901123028.3383461-1-max.kellermann@ionos.com>
References: <20250901123028.3383461-1-max.kellermann@ionos.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Lots of functions in mm/highmem.c neither write to the given pointers nor
call functions that take non-const pointers; they can therefore be
constified.

This includes functions like kunmap(), which might be implemented in a way
that writes to the pointer (e.g. to update reference counters or mapping
fields) but currently are not.

kmap(), on the other hand, cannot be made const because it calls
set_page_address(), which is non-const in some
architectures/configurations.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
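Not part of the change itself, just a sketch of the intended benefit: with
kmap_local_page() accepting a const struct page *, a read-only helper such
as the hypothetical peek_page_prefix() below no longer needs to cast away
const in order to create a temporary mapping. The helper name is made up
for this illustration; kmap_local_page(), kunmap_local(), min_t() and
memcpy() are the usual kernel interfaces (<linux/highmem.h>,
<linux/minmax.h>, <linux/string.h>).

	/* Hypothetical caller, for illustration only. */
	static size_t peek_page_prefix(const struct page *page,
				       void *dst, size_t len)
	{
		/* No cast needed anymore: the parameter stays const. */
		void *vaddr = kmap_local_page(page);

		len = min_t(size_t, len, PAGE_SIZE);
		memcpy(dst, vaddr, len);	/* read-only access */
		kunmap_local(vaddr);
		return len;
	}

Before this series, the same code would have needed a (struct page *)
cast at the kmap_local_page() call.
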
 arch/arm/include/asm/highmem.h    |  6 ++---
 arch/xtensa/include/asm/highmem.h |  2 +-
 include/linux/highmem-internal.h  | 44 +++++++++++++++++--------------
 include/linux/highmem.h           |  8 +++---
 mm/highmem.c                      | 10 +++----
 5 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/highmem.h b/arch/arm/include/asm/highmem.h
index b4b66220952d..023be74298f3 100644
--- a/arch/arm/include/asm/highmem.h
+++ b/arch/arm/include/asm/highmem.h
@@ -46,9 +46,9 @@ extern pte_t *pkmap_page_table;
 #endif
 
 #ifdef ARCH_NEEDS_KMAP_HIGH_GET
-extern void *kmap_high_get(struct page *page);
+extern void *kmap_high_get(const struct page *page);
 
-static inline void *arch_kmap_local_high_get(struct page *page)
+static inline void *arch_kmap_local_high_get(const struct page *page)
 {
 	if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !cache_is_vivt())
 		return NULL;
@@ -57,7 +57,7 @@ static inline void *arch_kmap_local_high_get(struct page *page)
 #define arch_kmap_local_high_get arch_kmap_local_high_get
 
 #else /* ARCH_NEEDS_KMAP_HIGH_GET */
-static inline void *kmap_high_get(struct page *page)
+static inline void *kmap_high_get(const struct page *const page)
 {
 	return NULL;
 }
diff --git a/arch/xtensa/include/asm/highmem.h b/arch/xtensa/include/asm/highmem.h
index 34b8b620e7f1..473b622b863b 100644
--- a/arch/xtensa/include/asm/highmem.h
+++ b/arch/xtensa/include/asm/highmem.h
@@ -29,7 +29,7 @@
 #if DCACHE_WAY_SIZE > PAGE_SIZE
 #define get_pkmap_color get_pkmap_color
-static inline int get_pkmap_color(struct page *page)
+static inline int get_pkmap_color(const struct page *const page)
 {
 	return DCACHE_ALIAS(page_to_phys(page));
 }
 
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 36053c3d6d64..442d0efea5c7 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -7,7 +7,7 @@
  */
 #ifdef CONFIG_KMAP_LOCAL
 void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
-void *__kmap_local_page_prot(struct page *page, pgprot_t prot);
+void *__kmap_local_page_prot(const struct page *page, pgprot_t prot);
 void kunmap_local_indexed(const void *vaddr);
 void kmap_local_fork(struct task_struct *tsk);
 void __kmap_local_sched_out(void);
@@ -33,11 +33,11 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif
 
 void *kmap_high(struct page *page);
-void kunmap_high(struct page *page);
+void kunmap_high(const struct page *page);
 void __kmap_flush_unused(void);
 struct page *__kmap_to_page(void *addr);
 
-static inline void *kmap(struct page *page)
+static inline void *kmap(struct page *const page)
 {
 	void *addr;
 
@@ -50,7 +50,7 @@ static inline void *kmap(struct page *page)
 	return addr;
 }
 
-static inline void kunmap(struct page *page)
+static inline void kunmap(const struct page *const page)
 {
 	might_sleep();
 	if (!PageHighMem(page))
@@ -68,12 +68,12 @@ static inline void kmap_flush_unused(void)
 	__kmap_flush_unused();
 }
 
-static inline void *kmap_local_page(struct page *page)
+static inline void *kmap_local_page(const struct page *const page)
 {
 	return __kmap_local_page_prot(page, kmap_prot);
 }
 
-static inline void *kmap_local_page_try_from_panic(struct page *page)
+static inline void *kmap_local_page_try_from_panic(const struct page *const page)
 {
 	if (!PageHighMem(page))
 		return page_address(page);
@@ -81,13 +81,15 @@ static inline void *kmap_local_page_try_from_panic(struct page *page)
 	return NULL;
 }
 
-static inline void *kmap_local_folio(struct folio *folio, size_t offset)
+static inline void *kmap_local_folio(const struct folio *const folio,
+				     const size_t offset)
 {
-	struct page *page = folio_page(folio, offset / PAGE_SIZE);
+	const struct page *page = folio_page(folio, offset / PAGE_SIZE);
 	return __kmap_local_page_prot(page, kmap_prot) + offset % PAGE_SIZE;
 }
 
-static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_local_page_prot(const struct page *const page,
+					 const pgprot_t prot)
 {
 	return __kmap_local_page_prot(page, prot);
 }
@@ -102,7 +104,7 @@ static inline void __kunmap_local(const void *vaddr)
 	kunmap_local_indexed(vaddr);
 }
 
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_atomic_prot(const struct page *const page, const pgprot_t prot)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		migrate_disable();
@@ -113,7 +115,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 	return __kmap_local_page_prot(page, prot);
 }
 
-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic(const struct page *const page)
 {
 	return kmap_atomic_prot(page, kmap_prot);
 }
@@ -167,38 +169,40 @@ static inline struct page *kmap_to_page(void *addr)
 	return virt_to_page(addr);
 }
 
-static inline void *kmap(struct page *page)
+static inline void *kmap(struct page *const page)
 {
 	might_sleep();
 	return page_address(page);
 }
 
-static inline void kunmap_high(struct page *page) { }
+static inline void kunmap_high(const struct page *const page) { }
 static inline void kmap_flush_unused(void) { }
 
-static inline void kunmap(struct page *page)
+static inline void kunmap(const struct page *const page)
 {
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
 }
 
-static inline void *kmap_local_page(struct page *page)
+static inline void *kmap_local_page(const struct page *const page)
 {
 	return page_address(page);
 }
 
-static inline void *kmap_local_page_try_from_panic(struct page *page)
+static inline void *kmap_local_page_try_from_panic(const struct page *const page)
 {
 	return page_address(page);
 }
 
-static inline void *kmap_local_folio(struct folio *folio, size_t offset)
+static inline void *kmap_local_folio(const struct folio *const folio,
+				     const size_t offset)
 {
 	return folio_address(folio) + offset;
 }
 
-static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_local_page_prot(const struct page *const page,
+					 const pgprot_t prot)
 {
 	return kmap_local_page(page);
 }
@@ -215,7 +219,7 @@ static inline void __kunmap_local(const void *addr)
 #endif
 }
 
-static inline void *kmap_atomic(struct page *page)
+static inline void *kmap_atomic(const struct page *const page)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT))
 		migrate_disable();
@@ -225,7 +229,7 @@ static inline void *kmap_atomic(struct page *page)
 	return page_address(page);
 }
 
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_atomic_prot(const struct page *const page, const pgprot_t prot)
 {
 	return kmap_atomic(page);
 }
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 6234f316468c..105cc4c00cc3 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -43,7 +43,7 @@ static inline void *kmap(struct page *page);
  * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of
  * pages in the low memory area.
  */
-static inline void kunmap(struct page *page);
+static inline void kunmap(const struct page *page);
 
 /**
  * kmap_to_page - Get the page for a kmap'ed address
@@ -93,7 +93,7 @@ static inline void kmap_flush_unused(void);
  * disabling migration in order to keep the virtual address stable across
  * preemption. No caller of kmap_local_page() can rely on this side effect.
  */
-static inline void *kmap_local_page(struct page *page);
+static inline void *kmap_local_page(const struct page *page);
 
 /**
  * kmap_local_folio - Map a page in this folio for temporary usage
@@ -129,7 +129,7 @@ static inline void *kmap_local_page(struct page *page);
  * Context: Can be invoked from any context.
  * Return: The virtual address of @offset.
  */
-static inline void *kmap_local_folio(struct folio *folio, size_t offset);
+static inline void *kmap_local_folio(const struct folio *folio, size_t offset);
 
 /**
  * kmap_atomic - Atomically map a page for temporary usage - Deprecated!
@@ -176,7 +176,7 @@ static inline void *kmap_local_folio(struct folio *folio, size_t offset);
  * kunmap_atomic(vaddr2);
  * kunmap_atomic(vaddr1);
  */
-static inline void *kmap_atomic(struct page *page);
+static inline void *kmap_atomic(const struct page *page);
 
 /* Highmem related interfaces for management code */
 static inline unsigned long nr_free_highpages(void);
diff --git a/mm/highmem.c b/mm/highmem.c
index ef3189b36cad..93fa505fcb98 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -61,7 +61,7 @@ static inline int kmap_local_calc_idx(int idx)
 /*
  * Determine color of virtual address where the page should be mapped.
  */
-static inline unsigned int get_pkmap_color(struct page *page)
+static inline unsigned int get_pkmap_color(const struct page *const page)
 {
 	return 0;
 }
@@ -334,7 +334,7 @@ EXPORT_SYMBOL(kmap_high);
  *
  * This can be called from any context.
  */
-void *kmap_high_get(struct page *page)
+void *kmap_high_get(const struct page *const page)
 {
 	unsigned long vaddr, flags;
 
@@ -356,7 +356,7 @@ void *kmap_high_get(struct page *page)
  * If ARCH_NEEDS_KMAP_HIGH_GET is not defined then this may be called
  * only from user context.
  */
-void kunmap_high(struct page *page)
+void kunmap_high(const struct page *const page)
 {
 	unsigned long vaddr;
 	unsigned long nr;
@@ -508,7 +508,7 @@ static inline void kmap_local_idx_pop(void)
 #endif
 
 #ifndef arch_kmap_local_high_get
-static inline void *arch_kmap_local_high_get(struct page *page)
+static inline void *arch_kmap_local_high_get(const struct page *const page)
 {
 	return NULL;
 }
@@ -572,7 +572,7 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot)
 }
 EXPORT_SYMBOL_GPL(__kmap_local_pfn_prot);
 
-void *__kmap_local_page_prot(struct page *page, pgprot_t prot)
+void *__kmap_local_page_prot(const struct page *const page, const pgprot_t prot)
 {
 	void *kmap;
 
-- 
2.47.2