From: Anshuman Khandual <anshuman.khandual@arm.com>
Date: Mon, 14 Oct 2024 12:14:48 +0530
Subject: Re: [RFC v3 -next] cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
To: "Ritesh Harjani (IBM)", linux-mm@kvack.org
Cc: linuxppc-dev@lists.ozlabs.org, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand, "Kirill A . Shutemov", Mahesh J Salgaonkar, Michael Ellerman, Madhavan Srinivasan, "Aneesh Kumar K . V", Donet Tom, LKML, Sachin P Bappalige
Message-ID: <62410f7d-2642-4218-8e8e-a384dbe86954@arm.com>
In-Reply-To: <054b416302486c2d3fdd5924b624477929100bf6.1728656994.git.ritesh.list@gmail.com>
References: <054b416302486c2d3fdd5924b624477929100bf6.1728656994.git.ritesh.list@gmail.com>
V" , Donet Tom , LKML , Sachin P Bappalige References: <054b416302486c2d3fdd5924b624477929100bf6.1728656994.git.ritesh.list@gmail.com> Content-Language: en-US From: Anshuman Khandual In-Reply-To: <054b416302486c2d3fdd5924b624477929100bf6.1728656994.git.ritesh.list@gmail.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: C91781C0003 X-Stat-Signature: ps8d8ixzukzwz1azuh6u9arusai6ooen X-Rspam-User: X-HE-Tag: 1728888289-471560 X-HE-Meta: U2FsdGVkX1/V4WKv0QQYp5n9+jGlmUjPfo6vlqD5yJpYfOK8hSMnunLkF7uAUs9w64M1+ZdD5SzOmu1Mii1BoOz1slg2m5Y4IY5ZKNZXe2oztJSPcRnub/SiAgS9veJ2Wc7pbAMub5l+qfXIsiqI5ATNQMUL6coJBqWxsLNDxDxv2muzXm6gPh5zfnnE4toeg2YLSFKafMKigGfvkewiK0ZvGrvPVh9PQUrE3ldUyYnDPvUV4Med0NYjKPpiRJZOYlk0KISzLX+O1snsQ8HvwmUnQVs5cDr6qpTG70DwRchL+oFdj3ALcsYJgWwR66rszlzJD1vSUeYP3J1m2r4zYn5SI4Tblb3BiHKpOHN1dGE79Mw67oHcANBw4ldP47eoah7HPRev+tjFw2IFmPWwPmGFPDK/FIjFFdtCVjf4TInqgDzJKagUpIhM0kjaNsuG+Gw1j911vKbNQ/yGMu8FgQp3iZMuuDMrrcYSX+tZtwO+dfpijXLnUVhf1z4YSOPnkggLfgqNW/VAHCm8v1fePZuzQaS9RzGv+NP0WeAu1eT0pjF/trnXkwDZnjVNuYJmPBb+lIPW/CKdibYkx/HckSkno3Odhx+KW1YQ1mKP84YA+L/V7w06ZAasCcrOY6BN3qG6eu6fKKjq6+t0LJE+ffNhIogeYcZq8/M1qoTc2bxPJLEFb/OPf/U2C4hJLvCGCWHMHeP0di8+KzoivDOzcocwb1lRCh4oPFh0iN975X+4tONzFBLlmbH21YkSV1cSZHe/XP7nJr9KKgHZEv6BXbSFRhH08dkQhjWDTdjaFZxQk9bXbKG6mhis6mgnbDJ8WBUlm+djqjKuG1T6FQ4VjCqu15mrW+IMFeZEtSYcT2nfpq7pi7bL1+f+Uw9WG6j/gBXmbq2Rh5bHquvkflHfWRpyNc5Ve+dT/9rAcpqvFo2ixIkjF0oGn02Tvqu8XEEJ7TAIUuZqz8qcDAQ2d5L /Q5wyvIt 7nEaSRz1QDOkcJaLH42fkPg5OyTFwmyPUs+qTLzfpMevsrleCDl34nXocGuVU/gzTdDBfvJcFKRgE18Or67WkEMuI6AiH1NpIk4lIKC2Ko35glj8RvCDifIs3idNoW4dhTZsr9oGu7eEQkFZtzFA6nPazmzulBRnCefBctCKHgw2Uck/XRMlYc2AiRfjcxIBqs7kEVOr4KsO5j5AGZH+It69fo6Z6uQVChLAQXfkcB3L74Mll8ZAkzS5MJRqiOiiVRy2NkUD1osGdhlMc7bum4u+XPH43tLYE5+AGRJ4vduGWJ20qRxVr1utjmjCD12v/IizRKOSRYv3ZRijF9Y+2RlQ7MZDo3HzIiht4UdG3VWTx84yKVaaZftQDXijir8DSPm9c0Gohc1wIVig= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On 10/11/24 20:26, Ritesh Harjani (IBM) wrote: > cma_init_reserved_mem() checks base and size alignment with > CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during > early boot when pageblock_order is 0. That means if base and size does > not have pageblock_order alignment, it can cause functional failures > during cma activate area. > > So let's enforce pageblock_order to be non-zero during > cma_init_reserved_mem(). > > Acked-by: David Hildenbrand > Signed-off-by: Ritesh Harjani (IBM) > --- > v2 -> v3: Separated the series into 2 as discussed in v2. > [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/ > > mm/cma.c | 9 +++++++++ > 1 file changed, 9 insertions(+) > > diff --git a/mm/cma.c b/mm/cma.c > index 3e9724716bad..36d753e7a0bf 100644 > --- a/mm/cma.c > +++ b/mm/cma.c > @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, > if (!size || !memblock_is_region_reserved(base, size)) > return -EINVAL; > > + /* > + * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which > + * needs pageblock_order to be initialized. Let's enforce it. > + */ > + if (!pageblock_order) { > + pr_err("pageblock_order not yet initialized. 
> +		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> +		return -EINVAL;
> +	}
> +
>  	/* ensure minimal alignment required by mm core */
>  	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
>  		return -EINVAL;
> --
> 2.46.0
>

LGTM. Hopefully the comment about the CMA_MIN_ALIGNMENT_BYTES alignment requirement will also remind us to drop this new check if CMA_MIN_ALIGNMENT_BYTES ever stops depending on pageblock_order.

Reviewed-by: Anshuman Khandual
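
For readers unfamiliar with the failure mode being guarded against, below is a small standalone userspace sketch (not kernel code, and not part of the patch) of the alignment check. The PAGE_SIZE value, the pageblock_order values, the example base/size, and the CMA_MIN_ALIGNMENT_BYTES formula (assumed here to be PAGE_SIZE << pageblock_order) are illustrative assumptions that only loosely mirror the kernel definitions.

/*
 * Standalone userspace sketch: it only demonstrates why the
 * IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES) check degenerates
 * when pageblock_order is still 0.  PAGE_SIZE, the pageblock_order
 * values and the alignment formula are assumptions for illustration;
 * the base/size values are made up.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* assumed stand-in for CMA_MIN_ALIGNMENT_BYTES at a given pageblock_order */
static uint64_t cma_min_alignment_bytes(unsigned int pageblock_order)
{
	return PAGE_SIZE << pageblock_order;
}

/* one test covering both base and size, as in mm/cma.c */
static bool cma_alignment_ok(uint64_t base, uint64_t size,
			     unsigned int pageblock_order)
{
	return IS_ALIGNED(base | size, cma_min_alignment_bytes(pageblock_order));
}

int main(void)
{
	uint64_t base = 0x40100000ULL;	/* only 1 MiB aligned */
	uint64_t size = 0x00100000ULL;	/* 1 MiB */

	/* pageblock_order still 0: requirement collapses to PAGE_SIZE, so it passes */
	printf("pageblock_order 0: %s\n",
	       cma_alignment_ok(base, size, 0) ? "accepted" : "rejected");

	/* pageblock_order 9 (a typical 4K-page config): 2 MiB required, so it fails */
	printf("pageblock_order 9: %s\n",
	       cma_alignment_ok(base, size, 9) ? "accepted" : "rejected");

	return 0;
}

In other words, a reservation accepted while pageblock_order is still 0 can turn out to be mis-aligned for the real requirement once pageblock_order is set, which is the activation-time failure the patch's new -EINVAL check avoids.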