Date: Wed, 4 May 2022 13:12:06 +0200
From: Borislav Petkov <bp@alien8.de>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Shutemov" Cc: Andy Lutomirski , Sean Christopherson , Andrew Morton , Joerg Roedel , Ard Biesheuvel , Andi Kleen , Kuppuswamy Sathyanarayanan , David Rientjes , Vlastimil Babka , Tom Lendacky , Thomas Gleixner , Peter Zijlstra , Paolo Bonzini , Ingo Molnar , Varad Gautam , Dario Faggioli , Dave Hansen , Brijesh Singh , Mike Rapoport , David Hildenbrand , x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev, linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org Subject: Re: [PATCHv5 08/12] x86/mm: Provide helpers for unaccepted memory Message-ID: References: <20220425033934.68551-1-kirill.shutemov@linux.intel.com> <20220425033934.68551-9-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline In-Reply-To: <20220425033934.68551-9-kirill.shutemov@linux.intel.com> X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 1732D40082 X-Stat-Signature: 6f1rwemh5e4w3bfnaqbh7jk6chu5ppwd X-Rspam-User: Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=alien8.de header.s=dkim header.b=p2B1tkQW; spf=pass (imf07.hostedemail.com: domain of bp@alien8.de designates 5.9.137.197 as permitted sender) smtp.mailfrom=bp@alien8.de; dmarc=pass (policy=none) header.from=alien8.de X-HE-Tag: 1651662724-28315 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, Apr 25, 2022 at 06:39:30AM +0300, Kirill A. Shutemov wrote: > +/* Protects unaccepted memory bitmap */ > +static DEFINE_SPINLOCK(unaccepted_memory_lock); > + > +void accept_memory(phys_addr_t start, phys_addr_t end) > +{ > + unsigned long *unaccepted_memory; shorten that name. > + unsigned long flags; > + unsigned long range_start, range_end; The tip-tree preferred ordering of variable declarations at the beginning of a function is reverse fir tree order:: struct long_struct_name *descriptive_name; unsigned long foo, bar; unsigned int tmp; int ret; The above is faster to parse than the reverse ordering:: int ret; unsigned int tmp; unsigned long foo, bar; struct long_struct_name *descriptive_name; And even more so than random ordering:: unsigned long foo, bar; int ret; struct long_struct_name *descriptive_name; unsigned int tmp; > + > + if (!boot_params.unaccepted_memory) > + return; > + > + unaccepted_memory = __va(boot_params.unaccepted_memory); > + range_start = start / PMD_SIZE; > + > + spin_lock_irqsave(&unaccepted_memory_lock, flags); > + for_each_set_bitrange_from(range_start, range_end, unaccepted_memory, > + DIV_ROUND_UP(end, PMD_SIZE)) { > + unsigned long len = range_end - range_start; > + > + /* Platform-specific memory-acceptance call goes here */ > + panic("Cannot accept memory"); Yeah, no, WARN_ON_ONCE() pls. > + bitmap_clear(unaccepted_memory, range_start, len); > + } > + spin_unlock_irqrestore(&unaccepted_memory_lock, flags); > +} > + > +bool memory_is_unaccepted(phys_addr_t start, phys_addr_t end) > +{ > + unsigned long *unaccepted_memory = __va(boot_params.unaccepted_memory); As above, shorten that one. > + unsigned long flags; > + bool ret = false; > + > + spin_lock_irqsave(&unaccepted_memory_lock, flags); > + while (start < end) { > + if (test_bit(start / PMD_SIZE, unaccepted_memory)) { > + ret = true; Wait, what? That thing is lying: it'll return true for *some* PMD which is accepted but not the whole range of [start, end]. -- Regards/Gruss, Boris. https://people.kernel.org/tglx/notes-about-netiquette