From: Oscar Salvador <osalvador@suse.de>
To: Christophe Leroy
Cc: Andrew Morton, Jason Gunthorpe, Peter Xu, Michael Ellerman, Nicholas Piggin, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linuxppc-dev@lists.ozlabs.org
Subject: Re: [RFC PATCH v2 07/20] powerpc/8xx: Rework support for 8M pages using contiguous PTE entries
Date: Fri, 24 May 2024 12:02:09 +0200
In-Reply-To: <71017345495dadf0cb96839d261ffeb904dbfef8.1715971869.git.christophe.leroy@csgroup.eu>
References: <71017345495dadf0cb96839d261ffeb904dbfef8.1715971869.git.christophe.leroy@csgroup.eu>
On Fri, May 17, 2024 at 09:00:01PM +0200, Christophe Leroy wrote:
> In order to fit better with standard Linux page tables layout, add
> support for 8M pages using contiguous PTE entries in a standard
> page table. Page tables will then be populated with 1024 similar
> entries and two PMD entries will point to that page table.
> 
> The PMD entries also get a flag to tell it is addressing an 8M page,
> this is required for the HW tablewalk assistance.
> 
> Signed-off-by: Christophe Leroy

I guess that this will slightly change if you remove patch#1 and
patch#2 as you said you would, so I will not comment on the overall
design because I do not know how it will look afterwards; just some
things that caught my eye:

> --- a/arch/powerpc/include/asm/hugetlb.h
> +++ b/arch/powerpc/include/asm/hugetlb.h
> @@ -41,7 +41,16 @@ void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
>  static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>  					    unsigned long addr, pte_t *ptep)
>  {
> -	return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
> +	pmd_t *pmdp = (pmd_t *)ptep;
> +	pte_t pte;
> +
> +	if (IS_ENABLED(CONFIG_PPC_8xx) && pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M))) {

There are quite some places where you do the "pmd_off" check to see
whether that is an 8MB entry. I think it would make some sense to have
some kind of macro/function that makes clearer what we are checking
against, e.g.:

 #define pmd_is_SZ_8M(mm, addr, pmdp) (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M)))

(or whatever name you see fit). Then you would just need:

 if (IS_ENABLED(CONFIG_PPC_8xx) && pmd_is_SZ_8M(mm, addr, pmdp))

because I see that check is also scattered in the 8xx code.
> +		pte = __pte(pte_update(mm, addr, pte_offset_kernel(pmdp, 0), ~0UL, 0, 1));
> +		pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), ~0UL, 0, 1);

I have this fresh in mind because I recently read about 8xx page
tables, but I am not sure how long my memory will survive, so maybe
throw a little comment in there noting that we are updating the two
PMD entries that point to the area.

Also, the way we pass the parameters to pte_update() here is a bit
awkward. Ideally we should be using some meaningful names:

 unsigned long clr_all_bits = ~0UL;
 unsigned long set_bits = 0;
 bool is_huge = true;

 pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), clr_all_bits, set_bits, is_huge);

or something along those lines.

> -static inline int check_and_get_huge_psize(int shift)
> -{
> -	return shift_to_mmu_psize(shift);
> +	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M)))

Here you could also use the pmd_is_SZ_8M().

> +		ptep = pte_offset_kernel(pmdp, 0);
> +	return ptep_get(ptep);
> }
> 
> #define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
> @@ -53,7 +33,14 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>  static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
>  				  pte_t *ptep, unsigned long sz)
>  {
> -	pte_update(mm, addr, ptep, ~0UL, 0, 1);
> +	pmd_t *pmdp = (pmd_t *)ptep;
> +
> +	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M))) {
> +		pte_update(mm, addr, pte_offset_kernel(pmdp, 0), ~0UL, 0, 1);
> +		pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), ~0UL, 0, 1);
> +	} else {
> +		pte_update(mm, addr, ptep, ~0UL, 0, 1);
> +	}

Could we not leverage this in huge_ptep_get_and_clear()? AFAICS:

 huge_ptep_get_and_clear(mm, addr, pte_t *p)
 {
 	pte_t pte = ptep_get(p);

 	huge_pte_clear(mm, addr, p);
 	return pte;
 }

Or maybe it is not that easy if different powerpc platforms provide
their own. It might be worth checking though.
> }
> 
> #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
> @@ -63,7 +50,14 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
>  	unsigned long clr = ~pte_val(pte_wrprotect(__pte(~0)));
>  	unsigned long set = pte_val(pte_wrprotect(__pte(0)));
> 
> -	pte_update(mm, addr, ptep, clr, set, 1);
> +	pmd_t *pmdp = (pmd_t *)ptep;
> +
> +	if (pmdp == pmd_off(mm, ALIGN_DOWN(addr, SZ_8M))) {
> +		pte_update(mm, addr, pte_offset_kernel(pmdp, 0), clr, set, 1);
> +		pte_update(mm, addr, pte_offset_kernel(pmdp + 1, 0), clr, set, 1);
> +	} else {
> +		pte_update(mm, addr, ptep, clr, set, 1);

I would replace the "1" with "is_huge" or "huge", as is done in
__ptep_set_access_flags; something that makes it clearer without the
need to go check pte_update().

> #endif /* _ASM_POWERPC_PGALLOC_32_H */
> diff --git a/arch/powerpc/include/asm/nohash/32/pte-8xx.h b/arch/powerpc/include/asm/nohash/32/pte-8xx.h
> index 07df6b664861..b05cc4f87713 100644
> --- a/arch/powerpc/include/asm/nohash/32/pte-8xx.h
> +++ b/arch/powerpc/include/asm/nohash/32/pte-8xx.h

...

> - * For other page sizes, we have a single entry in the table.
> + * For 8M pages, we have 1024 entries as if it was
> + * 4M pages, but they are flagged as 8M pages for the hardware.

Maybe drop a comment in there that a single PMD entry is worth 4MB, so
it becomes clear where the 1024 entries come from.

> + * For 4k pages, we have a single entry in the table.
> + */
> -static pmd_t *pmd_off(struct mm_struct *mm, unsigned long addr);
> -static int hugepd_ok(hugepd_t hpd);
> -
>  static inline int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge)
>  {
>  	if (!huge)
>  		return PAGE_SIZE / SZ_4K;
> -	else if (hugepd_ok(*((hugepd_t *)pmd)))
> -		return 1;
> +	else if ((pmd_val(*pmd) & _PMD_PAGE_MASK) == _PMD_PAGE_8M)
> +		return SZ_4M / SZ_4K;

This becomes more intuitive.
> +static inline void pmd_populate_kernel_size(struct mm_struct *mm, pmd_t *pmdp,
> +					    pte_t *pte, unsigned long sz)
> +{
> +	if (sz == SZ_8M)
> +		*pmdp = __pmd(__pa(pte) | _PMD_PRESENT | _PMD_PAGE_8M);
> +	else
> +		*pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
> +}
> +
> +static inline void pmd_populate_size(struct mm_struct *mm, pmd_t *pmdp,
> +				     pgtable_t pte_page, unsigned long sz)
> +{
> +	if (sz == SZ_8M)
> +		*pmdp = __pmd(__pa(pte_page) | _PMD_USER | _PMD_PRESENT | _PMD_PAGE_8M);
> +	else
> +		*pmdp = __pmd(__pa(pte_page) | _PMD_USER | _PMD_PRESENT);
> +}

In patch#1 you mentioned that this will change with the removal of
patch#1 and patch#2.

> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -183,9 +183,6 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>  	if (!hpdp)
>  		return NULL;
> 
> -	if (IS_ENABLED(CONFIG_PPC_8xx) && pshift < PMD_SHIFT)
> -		return pte_alloc_huge(mm, (pmd_t *)hpdp, addr, sz);
> -
>  	BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp));
> 
>  	if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr,
> @@ -198,10 +195,18 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>  pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
>  			unsigned long addr, unsigned long sz)
>  {
> +	pmd_t *pmd = pmd_off(mm, addr);
> +
>  	if (sz < PMD_SIZE)
> -		return pte_alloc_huge(mm, pmd_off(mm, addr), addr, sz);
> +		return pte_alloc_huge(mm, pmd, addr, sz);
> 
> -	return NULL;
> +	if (sz != SZ_8M)
> +		return NULL;
> +	if (!pte_alloc_huge(mm, pmd, addr, sz))
> +		return NULL;
> +	if (!pte_alloc_huge(mm, pmd + 1, addr, sz))
> +		return NULL;
> +	return (pte_t *)pmd;

I think that having the check for invalid huge page sizes upfront
would make more sense; maybe it is just a matter of taste:

 /* Unsupported size */
 if (sz > PMD_SIZE && sz != SZ_8M)
 	return NULL;

 if (sz < PMD_SIZE)
 	...

 /* 8MB huge pages */
 ...
 return (pte_t *)pmd;

Also, I am not a big fan of the two separate pte_alloc_huge() calls
for pmd#0 and pmd#1; I am thinking we might want to hide that within a
function and drop a comment in there explaining why we update both
PMDs.

> diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
> index d93433e26ded..99f656b3f9f3 100644
> --- a/arch/powerpc/mm/nohash/8xx.c
> +++ b/arch/powerpc/mm/nohash/8xx.c
> @@ -48,20 +48,6 @@ unsigned long p_block_mapped(phys_addr_t pa)
>  	return 0;
>  }
> 
> -static pte_t __init *early_hugepd_alloc_kernel(hugepd_t *pmdp, unsigned long va)
> -{
> -	if (hpd_val(*pmdp) == 0) {
> -		pte_t *ptep = memblock_alloc(sizeof(pte_basic_t), SZ_4K);
> -
> -		if (!ptep)
> -			return NULL;
> -
> -		hugepd_populate_kernel((hugepd_t *)pmdp, ptep, PAGE_SHIFT_8M);
> -		hugepd_populate_kernel((hugepd_t *)pmdp + 1, ptep, PAGE_SHIFT_8M);
> -	}
> -	return hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
> -}
> -
>  static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
>  					     pgprot_t prot, int psize, bool new)

Am I blind or do we never use the 'new' parameter? I checked the tree
and it seems we always end up passing 'true':

 arch/powerpc/mm/nohash/8xx.c: err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
 arch/powerpc/mm/nohash/8xx.c: err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new);
 arch/powerpc/mm/nohash/8xx.c: err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
 arch/powerpc/mm/nohash/8xx.c: __early_map_kernel_hugepage(VIRT_IMMR_BASE, PHYS_IMMR_BASE, PAGE_KERNEL_NCG, MMU_PAGE_512K, true);

I think we can drop the 'new' parameter and the block of code that
tries to handle it?
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index acdf64c9b93e..59f0d7706d2f 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> +void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
> +		     pte_t pte, unsigned long sz)
> +{
> +	pmd_t *pmdp = pmd_off(mm, addr);
> +
> +	pte = set_pte_filter(pte, addr);
> +
> +	if (sz == SZ_8M) {
> +		__set_huge_pte_at(pmdp, pte_offset_kernel(pmdp, 0), pte_val(pte));
> +		__set_huge_pte_at(pmdp, pte_offset_kernel(pmdp + 1, 0), pte_val(pte) + SZ_4M);

You also mentioned that this would slightly change after you drop
patch#1 and patch#2. The only comment I have right now would be to add
a little comment explaining the layout (the replication of the 1024
entries), or just something like "see comment in
number_of_cells_per_pte".

-- 
Oscar Salvador
SUSE Labs