From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Patch "minmax: add in_range() macro" has been added to the 6.1-stable tree
To: 20230206140639.538867-5-fengwei.yin@intel.com, David.Laight@ACULAB.COM,
	Rodrigo.Siqueira@amd.com, Xinhui.Pan@amd.com, adilger.kernel@dilger.ca,
	agk@redhat.com, airlied@gmail.com, akpm@linux-foundation.org,
	alexander.deucher@amd.com, alexandre.torgue@foss.st.com,
amd-gfx@lists.freedesktop.org, andrii@kernel.org, andriy.shevchenko@linux.intel.com, anton.ivanov@cambridgegreys.com, artur.paszkiewicz@intel.com, ast@kernel.org, bp@alien8.de, brian.starkey@arm.com, christian.koenig@amd.com, clm@fb.com, coreteam@netfilter.org, daniel@ffwll.ch, daniel@iogearbox.net, dave.hansen@linux.intel.com, davem@davemloft.net, dm-devel@redhat.com, dmitry.baryshkov@linaro.org, dmitry.torokhov@gmail.com, dri-devel@lists.freedesktop.org, dsahern@kernel.org, dsterba@suse.com, dushistov@mail.ru, edumazet@google.com, evan.quan@amd.com, farbere@amazon.com, fei1.li@intel.com, freedreno@lists.freedesktop.org, fw@strlen.de, gregkh@linuxfoundation.org, haoluo@google.com, harry.wentland@amd.com, hdegoede@redhat.com, herve.codina@bootlin.com, hpa@zytor.co, m@kvack.org, jack@suse.com, james.morse@arm.com, james.qian.wang@arm.com, jdelvare@suse.com, jejb@linux.ibm.com, jernej.skrabec@gmail.com, jmaloy@redhat.com, joabreu@synopsys.com, johannes@sipsolutions.net, john.fastabend@gmail.com, jolsa@kernel.org, josef@toxicpanda.com, kadlec@netfilter.org, keescook@chromium.org, kpsingh@kernel.org, krzysztof.kozlowski@linaro.org, kuba@kernel.org, linus.walleij@linaro.org, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-staging@lists.linux.dev, linux-stm32@st-md-mailman.stormreply.com, linux-sunxi@lists.linux.dev, linux-um@lists.infradead.org, linux@armlinux.org.uk, linux@rasmusvillemoes.dk, linux@roeck-us.net, liviu.dudau@arm.com, luc.vanoostenryck@gmail.com, luto@kernel.org, maarten.lankhorst@linux.intel.com, malattia@linux.it, markgross@kernel.org, martin.lau@linux.dev, martin.petersen@oracle.com, maz@kernel.org, mchehab@kernel.org, mcoquelin.stm32@gmail.com, mhiramat@kernel.org, mihail.atanassov@arm.com, minchan@kernel.org, mingo@redhat.com, mripard@kernel.org, mykolal@fb.com, ngu@kvack.org, pta@vflare.org, pabeni@redhat.com, pablo@netfilter.org, peppe.cavallaro@st.com, peterz@infradead.org, pmladek@suse.com, qiuxu.zhuo@intel.com, 
	quic_abhinavk@quicinc.com, quic_akhilpo@quicinc.com, rajur@chelsio.com,
	richard@nod.at, robdclark@gmail.com, rostedt@goodmis.org, rric@kernel.org,
	ruanjinjie@huawei.com, sakari.ailus@linux.intel.com, samuel@sholland.org,
	sashal@kernel.org, sdf@google.com, sean@poorly.run, senozhatsky@chromium.org,
	shuah@kernel.org, snitzer@kernel.org, song@kernel.org, sunpeng.li@amd.com,
	tglx@linutronix.de, tipc-discussion@lists.sourceforge.net, tony.luck@intel.com,
	tytso@mit.edu, tzimmermann@suse.de, wad@chromium.org, wens@csie.org,
	willy@infradead.org, x86@kernel.org, yhs@fb.com, ying.xue@windriver.com,
	yoshfuji@linux-ipv6.org
Cc:
From:
Date: Mon, 29 Sep 2025 15:47:49 +0200
In-Reply-To: <20250924202320.32333-2-farbere@amazon.com>
Message-ID: <2025092949-untapped-factoid-73cc@gregkh>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
X-stable: commit
X-Patchwork-Hint: ignore
Sender: owner-linux-mm@kvack.org
Precedence: bulk

This is a note to let you know that I've just added the patch titled

    minmax: add in_range() macro

to the 6.1-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    minmax-add-in_range-macro.patch

and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let me know about it.
>From prvs=3555e8f33=farbere@amazon.com Wed Sep 24 22:24:47 2025
From: Eliav Farber
Date: Wed, 24 Sep 2025 20:23:02 +0000
Subject: minmax: add in_range() macro
To:
Message-ID: <20250924202320.32333-2-farbere@amazon.com>

From: "Matthew Wilcox (Oracle)"

[ Upstream commit f9bff0e31881d03badf191d3b0005839391f5f2b ]

Patch series "New page table range API", v6.

This patchset changes the API used by the MM to set up page table entries.
The four APIs are:

    set_ptes(mm, addr, ptep, pte, nr)
    update_mmu_cache_range(vma, addr, ptep, nr)
    flush_dcache_folio(folio)
    flush_icache_pages(vma, page, nr)

flush_dcache_folio() isn't technically new, but no architecture
implemented it, so I've done that for them.  The old APIs remain around
but are mostly implemented by calling the new interfaces.

The new APIs are based around setting up N page table entries at once.
The N entries belong to the same PMD, the same folio and the same VMA, so
ptep++ is a legitimate operation, and locking is taken care of for you.
Some architectures can do a better job of it than just a loop, but I have
hesitated to make too deep a change to architectures I don't understand
well.

One thing I have changed in every architecture is that PG_arch_1 is now a
per-folio bit instead of a per-page bit when used for dcache clean/dirty
tracking.  This was something that would have to happen eventually, and it
makes sense to do it now rather than iterate over every page involved in a
cache flush and figure out if it needs to happen.

The point of all this is better performance, and Fengwei Yin has measured
improvement on x86.  I suspect you'll see improvement on your architecture
too.
Try the new will-it-scale test mentioned here:
https://lore.kernel.org/linux-mm/20230206140639.538867-5-fengwei.yin@intel.com/
You'll need to run it on an XFS filesystem and have
CONFIG_TRANSPARENT_HUGEPAGE set.

This patchset is the basis for much of the anonymous large folio work
being done by Ryan, so it's received quite a lot of testing over the
last few months.

This patch (of 38):

Determine if a value lies within a range more efficiently (subtraction +
comparison vs two comparisons and an AND).  It also has useful (under some
circumstances) behaviour if the range exceeds the maximum value of the
type.  Convert all the conflicting definitions of in_range() within the
kernel; some can use the generic definition while others need their own
definition.

Link: https://lkml.kernel.org/r/20230802151406.3735276-1-willy@infradead.org
Link: https://lkml.kernel.org/r/20230802151406.3735276-2-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
Signed-off-by: Eliav Farber
Signed-off-by: Greg Kroah-Hartman
---
 arch/arm/mm/pageattr.c                                     |    6 +-
 drivers/gpu/drm/arm/display/include/malidp_utils.h         |    2 
 drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c |   24 +++++------
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c                      |    6 --
 drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c            |   18 ++++----
 drivers/virt/acrn/ioreq.c                                  |    4 -
 fs/btrfs/misc.h                                            |    2 
 fs/ext2/balloc.c                                           |    2 
 fs/ext4/ext4.h                                             |    2 
 fs/ufs/util.h                                              |    6 --
 include/linux/minmax.h                                     |   27 +++++++++++++
 lib/logic_pio.c                                            |    3 -
 net/netfilter/nf_nat_core.c                                |    6 +-
 net/tipc/core.h                                            |    2 
 net/tipc/link.c                                            |   10 ++--
 tools/testing/selftests/bpf/progs/get_branch_snapshot.c    |    4 -
 16 files changed, 65 insertions(+), 59 deletions(-)

--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -25,7 +25,7 @@ static int change_page_range(pte_t *ptep
 	return 0;
 }
 
-static bool in_range(unsigned long start, unsigned long size,
+static bool range_in_range(unsigned long start, unsigned long size,
 	unsigned long range_start, unsigned long range_end)
 {
 	return start >= range_start && start < range_end &&
@@ -63,8 +63,8 @@ static int change_memory_common(unsigned
 	if (!size)
 		return 0;
 
-	if (!in_range(start, size, MODULES_VADDR, MODULES_END) &&
-	    !in_range(start, size, VMALLOC_START, VMALLOC_END))
+	if (!range_in_range(start, size, MODULES_VADDR, MODULES_END) &&
+	    !range_in_range(start, size, VMALLOC_START, VMALLOC_END))
 		return -EINVAL;
 
 	return __change_memory_common(start, size, set_mask, clear_mask);
--- a/drivers/gpu/drm/arm/display/include/malidp_utils.h
+++ b/drivers/gpu/drm/arm/display/include/malidp_utils.h
@@ -35,7 +35,7 @@ static inline void set_range(struct mali
 	rg->end = end;
 }
 
-static inline bool in_range(struct malidp_range *rg, u32 v)
+static inline bool malidp_in_range(struct malidp_range *rg, u32 v)
 {
 	return (v >= rg->start) && (v <= rg->end);
 }
--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
@@ -305,12 +305,12 @@ komeda_layer_check_cfg(struct komeda_lay
 	if (komeda_fb_check_src_coords(kfb, src_x, src_y, src_w, src_h))
 		return -EINVAL;
 
-	if (!in_range(&layer->hsize_in, src_w)) {
+	if (!malidp_in_range(&layer->hsize_in, src_w)) {
 		DRM_DEBUG_ATOMIC("invalidate src_w %d.\n", src_w);
 		return -EINVAL;
 	}
 
-	if (!in_range(&layer->vsize_in, src_h)) {
+	if (!malidp_in_range(&layer->vsize_in, src_h)) {
 		DRM_DEBUG_ATOMIC("invalidate src_h %d.\n", src_h);
 		return -EINVAL;
 	}
@@ -452,14 +452,14 @@ komeda_scaler_check_cfg(struct komeda_sc
 	hsize_out = dflow->out_w;
 	vsize_out = dflow->out_h;
 
-	if (!in_range(&scaler->hsize, hsize_in) ||
-	    !in_range(&scaler->hsize, hsize_out)) {
+	if (!malidp_in_range(&scaler->hsize, hsize_in) ||
+	    !malidp_in_range(&scaler->hsize, hsize_out)) {
 		DRM_DEBUG_ATOMIC("Invalid horizontal sizes");
 		return -EINVAL;
 	}
 
-	if (!in_range(&scaler->vsize, vsize_in) ||
-	    !in_range(&scaler->vsize, vsize_out)) {
+	if (!malidp_in_range(&scaler->vsize, vsize_in) ||
+	    !malidp_in_range(&scaler->vsize, vsize_out)) {
 		DRM_DEBUG_ATOMIC("Invalid vertical sizes");
 		return -EINVAL;
 	}
@@ -574,13 +574,13 @@ komeda_splitter_validate(struct komeda_s
 		return -EINVAL;
 	}
 
-	if (!in_range(&splitter->hsize, dflow->in_w)) {
+	if (!malidp_in_range(&splitter->hsize, dflow->in_w)) {
 		DRM_DEBUG_ATOMIC("split in_w:%d is out of the acceptable range.\n",
 				 dflow->in_w);
 		return -EINVAL;
 	}
 
-	if (!in_range(&splitter->vsize, dflow->in_h)) {
+	if (!malidp_in_range(&splitter->vsize, dflow->in_h)) {
 		DRM_DEBUG_ATOMIC("split in_h: %d exceeds the acceptable range.\n",
 				 dflow->in_h);
 		return -EINVAL;
@@ -624,13 +624,13 @@ komeda_merger_validate(struct komeda_mer
 		return -EINVAL;
 	}
 
-	if (!in_range(&merger->hsize_merged, output->out_w)) {
+	if (!malidp_in_range(&merger->hsize_merged, output->out_w)) {
 		DRM_DEBUG_ATOMIC("merged_w: %d is out of the accepted range.\n",
 				 output->out_w);
 		return -EINVAL;
 	}
 
-	if (!in_range(&merger->vsize_merged, output->out_h)) {
+	if (!malidp_in_range(&merger->vsize_merged, output->out_h)) {
 		DRM_DEBUG_ATOMIC("merged_h: %d is out of the accepted range.\n",
 				 output->out_h);
 		return -EINVAL;
@@ -866,8 +866,8 @@ void komeda_complete_data_flow_cfg(struc
 	 * input/output range.
 	 */
 	if (dflow->en_scaling && scaler)
-		dflow->en_split = !in_range(&scaler->hsize, dflow->in_w) ||
-				  !in_range(&scaler->hsize, dflow->out_w);
+		dflow->en_split = !malidp_in_range(&scaler->hsize, dflow->in_w) ||
+				  !malidp_in_range(&scaler->hsize, dflow->out_w);
 }
 
 static bool merger_is_available(struct komeda_pipeline *pipe,
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -680,12 +680,6 @@ struct block_header {
 	u32 data[];
 };
 
-/* this should be a general kernel helper */
-static int in_range(u32 addr, u32 start, u32 size)
-{
-	return addr >= start && addr < start + size;
-}
-
 static bool fw_block_mem(struct a6xx_gmu_bo *bo, const struct block_header *blk)
 {
 	if (!in_range(blk->addr, bo->iova, bo->size))
--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
@@ -2126,7 +2126,7 @@ static const struct ethtool_ops cxgb_eth
 	.set_link_ksettings = set_link_ksettings,
 };
 
-static int in_range(int val, int lo, int hi)
+static int cxgb_in_range(int val, int lo, int hi)
 {
 	return val < 0 || (val <= hi && val >= lo);
 }
@@ -2162,19 +2162,19 @@ static int cxgb_siocdevprivate(struct ne
 			return -EINVAL;
 		if (t.qset_idx >= SGE_QSETS)
 			return -EINVAL;
-		if (!in_range(t.intr_lat, 0, M_NEWTIMER) ||
-		    !in_range(t.cong_thres, 0, 255) ||
-		    !in_range(t.txq_size[0], MIN_TXQ_ENTRIES,
+		if (!cxgb_in_range(t.intr_lat, 0, M_NEWTIMER) ||
+		    !cxgb_in_range(t.cong_thres, 0, 255) ||
+		    !cxgb_in_range(t.txq_size[0], MIN_TXQ_ENTRIES,
 			      MAX_TXQ_ENTRIES) ||
-		    !in_range(t.txq_size[1], MIN_TXQ_ENTRIES,
+		    !cxgb_in_range(t.txq_size[1], MIN_TXQ_ENTRIES,
 			      MAX_TXQ_ENTRIES) ||
-		    !in_range(t.txq_size[2], MIN_CTRL_TXQ_ENTRIES,
+		    !cxgb_in_range(t.txq_size[2], MIN_CTRL_TXQ_ENTRIES,
 			      MAX_CTRL_TXQ_ENTRIES) ||
-		    !in_range(t.fl_size[0], MIN_FL_ENTRIES,
+		    !cxgb_in_range(t.fl_size[0], MIN_FL_ENTRIES,
 			      MAX_RX_BUFFERS) ||
-		    !in_range(t.fl_size[1], MIN_FL_ENTRIES,
+		    !cxgb_in_range(t.fl_size[1], MIN_FL_ENTRIES,
 			      MAX_RX_JUMBO_BUFFERS) ||
-		    !in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
+		    !cxgb_in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
 			      MAX_RSPQ_ENTRIES))
 			return -EINVAL;
--- a/drivers/virt/acrn/ioreq.c
+++ b/drivers/virt/acrn/ioreq.c
@@ -351,7 +351,7 @@ static bool handle_cf8cfc(struct acrn_vm
 	return is_handled;
 }
 
-static bool in_range(struct acrn_ioreq_range *range,
+static bool acrn_in_range(struct acrn_ioreq_range *range,
 		     struct acrn_io_request *req)
 {
 	bool ret = false;
@@ -389,7 +389,7 @@ static struct acrn_ioreq_client *find_io
 	list_for_each_entry(client, &vm->ioreq_clients, list) {
 		read_lock_bh(&client->range_lock);
 		list_for_each_entry(range, &client->range_list, list) {
-			if (in_range(range, req)) {
+			if (acrn_in_range(range, req)) {
 				found = client;
 				break;
 			}
--- a/fs/btrfs/misc.h
+++ b/fs/btrfs/misc.h
@@ -8,8 +8,6 @@
 #include
 #include
 
-#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
-
 static inline void cond_wake_up(struct wait_queue_head *wq)
 {
 	/*
--- a/fs/ext2/balloc.c
+++ b/fs/ext2/balloc.c
@@ -36,8 +36,6 @@
  */
 
-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
-
 struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
 					     unsigned int block_group,
 					     struct buffer_head ** bh)
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3804,8 +3804,6 @@ static inline void set_bitmap_uptodate(s
 	set_bit(BH_BITMAP_UPTODATE, &(bh)->b_state);
 }
 
-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
-
 /* For ioend & aio unwritten conversion wait queues */
 #define EXT4_WQ_HASH_SZ 37
 #define ext4_ioend_wq(v) (&ext4__ioend_wq[((unsigned long)(v)) %\
--- a/fs/ufs/util.h
+++ b/fs/ufs/util.h
@@ -11,12 +11,6 @@
 #include
 #include "swab.h"
 
-
-/*
- * some useful macros
- */
-#define in_range(b,first,len) ((b)>=(first)&&(b)<(first)+(len))
-
 /*
  * functions used for retyping
  */
--- a/include/linux/minmax.h
+++ b/include/linux/minmax.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * min()/max()/clamp() macros must accomplish three things:
@@ -192,6 +193,32 @@
  */
 #define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi)
 
+static inline bool in_range64(u64 val, u64 start, u64 len)
+{
+	return (val - start) < len;
+}
+
+static inline bool in_range32(u32 val, u32 start, u32 len)
+{
+	return (val - start) < len;
+}
+
+/**
+ * in_range - Determine if a value lies within a range.
+ * @val: Value to test.
+ * @start: First value in range.
+ * @len: Number of values in range.
+ *
+ * This is more efficient than "if (start <= val && val < (start + len))".
+ * It also gives a different answer if @start + @len overflows the size of
+ * the type by a sufficient amount to encompass @val.  Decide for yourself
+ * which behaviour you want, or prove that start + len never overflow.
+ * Do not blindly replace one form with the other.
+ */
+#define in_range(val, start, len)					\
+	((sizeof(start) | sizeof(len) | sizeof(val)) <= sizeof(u32) ?	\
+		in_range32(val, start, len) : in_range64(val, start, len))
+
 /**
  * swap - swap values of @a and @b
  * @a: first value
--- a/lib/logic_pio.c
+++ b/lib/logic_pio.c
@@ -20,9 +20,6 @@
 static LIST_HEAD(io_range_list);
 static DEFINE_MUTEX(io_range_mutex);
 
-/* Consider a kernel general helper for this */
-#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
-
 /**
  * logic_pio_register_range - register logical PIO range for a host
  * @new_range: pointer to the IO range to be registered.
--- a/net/netfilter/nf_nat_core.c
+++ b/net/netfilter/nf_nat_core.c
@@ -242,7 +242,7 @@ static bool l4proto_in_range(const struc
 
 /* If we source map this tuple so reply looks like reply_tuple, will
  * that meet the constraints of range.
  */
-static int in_range(const struct nf_conntrack_tuple *tuple,
+static int nf_in_range(const struct nf_conntrack_tuple *tuple,
 		    const struct nf_nat_range2 *range)
 {
 	/* If we are supposed to map IPs, then we must be in the
@@ -291,7 +291,7 @@ find_appropriate_src(struct net *net,
 				       &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
 			result->dst = tuple->dst;
 
-			if (in_range(result, range))
+			if (nf_in_range(result, range))
 				return 1;
 		}
 	}
@@ -523,7 +523,7 @@ get_unique_tuple(struct nf_conntrack_tup
 	if (maniptype == NF_NAT_MANIP_SRC &&
 	    !(range->flags & NF_NAT_RANGE_PROTO_RANDOM_ALL)) {
 		/* try the original tuple first */
-		if (in_range(orig_tuple, range)) {
+		if (nf_in_range(orig_tuple, range)) {
 			if (!nf_nat_used_tuple(orig_tuple, ct)) {
 				*tuple = *orig_tuple;
 				return;
--- a/net/tipc/core.h
+++ b/net/tipc/core.h
@@ -197,7 +197,7 @@ static inline int less(u16 left, u16 rig
 	return less_eq(left, right) && (mod(right) != mod(left));
 }
 
-static inline int in_range(u16 val, u16 min, u16 max)
+static inline int tipc_in_range(u16 val, u16 min, u16 max)
 {
 	return !less(val, min) && !more(val, max);
 }
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1624,7 +1624,7 @@ next_gap_ack:
 					 last_ga->bgack_cnt);
 		}
 		/* Check against the last Gap ACK block */
-		if (in_range(seqno, start, end))
+		if (tipc_in_range(seqno, start, end))
 			continue;
 		/* Update/release the packet peer is acking */
 		bc_has_acked = true;
@@ -2252,12 +2252,12 @@ static int tipc_link_proto_rcv(struct ti
 			strncpy(if_name, data, TIPC_MAX_IF_NAME);
 
 		/* Update own tolerance if peer indicates a non-zero value */
-		if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
+		if (tipc_in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
 			l->tolerance = peers_tol;
 			l->bc_rcvlink->tolerance = peers_tol;
 		}
 		/* Update own priority if peer's priority is higher */
-		if (in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))
+		if (tipc_in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))
 			l->priority = peers_prio;
 
 		/* If peer is going down we want full re-establish cycle */
@@ -2300,13 +2300,13 @@ static int tipc_link_proto_rcv(struct ti
 		l->rcv_nxt_state = msg_seqno(hdr) + 1;
 
 		/* Update own tolerance if peer indicates a non-zero value */
-		if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
+		if (tipc_in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
 			l->tolerance = peers_tol;
 			l->bc_rcvlink->tolerance = peers_tol;
 		}
 		/* Update own prio if peer indicates a different value */
 		if ((peers_prio != l->priority) &&
-		    in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) {
+		    tipc_in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) {
 			l->priority = peers_prio;
 			rc = tipc_link_fsm_evt(l, LINK_FAILURE_EVT);
 		}
--- a/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
+++ b/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
@@ -15,7 +15,7 @@ long total_entries = 0;
 #define ENTRY_CNT 32
 struct perf_branch_entry entries[ENTRY_CNT] = {};
 
-static inline bool in_range(__u64 val)
+static inline bool gbs_in_range(__u64 val)
 {
 	return (val >= address_low) && (val < address_high);
 }
@@ -31,7 +31,7 @@ int BPF_PROG(test1, int n, int ret)
 	for (i = 0; i < ENTRY_CNT; i++) {
 		if (i >= total_entries)
 			break;
-		if (in_range(entries[i].from) && in_range(entries[i].to))
+		if (gbs_in_range(entries[i].from) && gbs_in_range(entries[i].to))
 			test1_hits++;
 		else if (!test1_hits)
 			wasted_entries++;


Patches currently in stable-queue which might be from farbere@amazon.com are

queue-6.1/minmax-fix-indentation-of-__cmp_once-and-__clamp_once.patch
queue-6.1/minmax-add-in_range-macro.patch
queue-6.1/minmax-deduplicate-__unconst_integer_typeof.patch
queue-6.1/minmax-introduce-min-max-_array.patch