From: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
To: David Hildenbrand, linuxppc-dev@lists.ozlabs.org
Cc: linux-mm@kvack.org, Sourabh Jain, Hari Bathini, Zi Yan,
	"Kirill A . Shutemov", Mahesh J Salgaonkar, Michael Ellerman,
	Madhavan Srinivasan, "Aneesh Kumar K . V", Donet Tom, LKML,
	Sachin P Bappalige
Subject: Re: [RFC 1/2] cma: Fix CMA_MIN_ALIGNMENT_BYTES during early_init
In-Reply-To: <83eb128e-4f06-4725-a843-a4563f246a44@redhat.com>
Date: Thu, 10 Oct 2024 08:49:20 +0530
Message-ID: <871q0ofxvr.fsf@gmail.com>
References: <83eb128e-4f06-4725-a843-a4563f246a44@redhat.com>

David Hildenbrand writes:

> On 08.10.24 15:27, Ritesh Harjani (IBM) wrote:
>> During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since
>> pageblock_order is still zero and it gets initialized later during
>> paging_init(), e.g.
>> paging_init() -> free_area_init() -> set_pageblock_order().
>>
>> One such use case is -
>> early_setup() -> early_init_devtree() -> fadump_reserve_mem()
>>
>> This causes the CMA memory alignment check to be bypassed in
>> cma_init_reserved_mem(). Then later cma_activate_area() can hit
>> a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
>> area was not pageblock_order aligned.
>>
>> Instead of fixing it locally for the fadump case on PowerPC, I believe
>> this should be fixed for CMA_MIN_ALIGNMENT_BYTES.
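
[ For context: CMA_MIN_ALIGNMENT_BYTES is derived from pageblock_order,
  so before set_pageblock_order() has run it degenerates to a single
  page. A sketch of the relevant definitions, paraphrased from
  include/linux/cma.h and include/linux/pageblock-flags.h (the exact
  form can differ between kernel versions):

	/*
	 * With CONFIG_HUGETLB_PAGE_SIZE_VARIABLE (e.g. on powerpc),
	 * pageblock_order is a runtime variable that stays 0 until
	 * set_pageblock_order() runs; elsewhere it is a compile-time
	 * constant.
	 */
	#define pageblock_nr_pages	(1UL << pageblock_order)

	/* minimum alignment the mm core expects from CMA areas */
	#define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages
	#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)

  So while pageblock_order == 0, pageblock_nr_pages == 1 and the
  IS_ALIGNED() check in cma_init_reserved_mem() only enforces PAGE_SIZE
  alignment. ]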
> I think we should add a way to catch the usage of
> CMA_MIN_ALIGNMENT_BYTES before it actually has meaning (before
> pageblock_order was set)

Maybe by enforcing that pageblock_order is not zero where we do the
alignment check then? i.e. in cma_init_reserved_mem():

diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;
 
+	/*
+	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+	 * needs pageblock_order to be initialized. Let's enforce it.
+	 */
+	if (!pageblock_order) {
+		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+		return -EINVAL;
+	}
+
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;

> and fix the PowerPC usage by reshuffling the
> code accordingly.

Ok. I will submit a v2 with the above patch included.

Thanks for the review!

-ritesh