From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 14 Jun 2023 16:21:37 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "Vishal Moola (Oracle)"
Cc: Andrew Morton, Matthew Wilcox, linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
	linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
	linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
	linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, Hugh Dickins, David Hildenbrand, Claudio Imbrenda
Subject: Re: [PATCH v4 03/34] s390: Use pt_frag_refcount for pagetables
Message-ID: <20230614132137.GB52412@kernel.org>
References: <20230612210423.18611-1-vishal.moola@gmail.com>
	<20230612210423.18611-4-vishal.moola@gmail.com>
In-Reply-To: <20230612210423.18611-4-vishal.moola@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Jun 12, 2023 at 02:03:52PM -0700, Vishal Moola (Oracle) wrote:
> s390 currently uses _refcount to identify fragmented page tables.
> The page table struct already has a member pt_frag_refcount used by
> powerpc, so have s390 use that instead of the _refcount field as well.
> This improves the safety for _refcount and the page table tracking.
> 
> This also allows us to simplify the tracking since we can once again use
> the lower byte of pt_frag_refcount instead of the upper byte of _refcount.
> 
> Signed-off-by: Vishal Moola (Oracle)

One nit below, otherwise

Acked-by: Mike Rapoport (IBM)

> ---
>  arch/s390/mm/pgalloc.c | 38 +++++++++++++++-----------------------
>  1 file changed, 15 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
> index 66ab68db9842..6b99932abc66 100644
> --- a/arch/s390/mm/pgalloc.c
> +++ b/arch/s390/mm/pgalloc.c
> @@ -182,20 +182,17 @@ void page_table_free_pgste(struct page *page)
>   * As follows from the above, no unallocated or fully allocated parent
>   * pages are contained in mm_context_t::pgtable_list.
>   *
> - * The upper byte (bits 24-31) of the parent page _refcount is used
> + * The lower byte (bits 0-7) of the parent page pt_frag_refcount is used
>   * for tracking contained 2KB-pgtables and has the following format:
>   *
>   *   PP  AA
> - * 01234567    upper byte (bits 24-31) of struct page::_refcount
> + * 01234567    upper byte (bits 0-7) of struct page::pt_frag_refcount

Nit: lower

>   *   ||  ||
>   *   ||  |+--- upper 2KB-pgtable is allocated
>   *   ||  +---- lower 2KB-pgtable is allocated
>   *   |+------- upper 2KB-pgtable is pending for removal
>   *   +-------- lower 2KB-pgtable is pending for removal
>   *
> - * (See commit 620b4e903179 ("s390: use _refcount for pgtables") on why
> - * using _refcount is possible).
> - *
>   * When 2KB-pgtable is allocated the corresponding AA bit is set to 1.
>   * The parent page is either:
>   *   - added to mm_context_t::pgtable_list in case the second half of the
> @@ -243,11 +240,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
>  		if (!list_empty(&mm->context.pgtable_list)) {
>  			page = list_first_entry(&mm->context.pgtable_list,
>  						struct page, lru);
> -			mask = atomic_read(&page->_refcount) >> 24;
> +			mask = atomic_read(&page->pt_frag_refcount);
>  			/*
>  			 * The pending removal bits must also be checked.
>  			 * Failure to do so might lead to an impossible
> -			 * value of (i.e 0x13 or 0x23) written to _refcount.
> +			 * value of (i.e 0x13 or 0x23) written to
> +			 * pt_frag_refcount.
>  			 * Such values violate the assumption that pending and
>  			 * allocation bits are mutually exclusive, and the rest
>  			 * of the code unrails as result. That could lead to
> @@ -259,8 +257,8 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
>  			bit = mask & 1;		/* =1 -> second 2K */
>  			if (bit)
>  				table += PTRS_PER_PTE;
> -			atomic_xor_bits(&page->_refcount,
> -					0x01U << (bit + 24));
> +			atomic_xor_bits(&page->pt_frag_refcount,
> +					0x01U << bit);
>  			list_del(&page->lru);
>  		}
>  	}
> @@ -281,12 +279,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
>  	table = (unsigned long *) page_to_virt(page);
>  	if (mm_alloc_pgste(mm)) {
>  		/* Return 4K page table with PGSTEs */
> -		atomic_xor_bits(&page->_refcount, 0x03U << 24);
> +		atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
>  		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
>  		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
>  	} else {
>  		/* Return the first 2K fragment of the page */
> -		atomic_xor_bits(&page->_refcount, 0x01U << 24);
> +		atomic_xor_bits(&page->pt_frag_refcount, 0x01U);
>  		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
>  		spin_lock_bh(&mm->context.lock);
>  		list_add(&page->lru, &mm->context.pgtable_list);
> @@ -323,22 +321,19 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
>  		 * will happen outside of the critical section from this
>  		 * function or from __tlb_remove_table()
>  		 */
> -		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
> -		mask >>= 24;
> +		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
>  		if (mask & 0x03U)
>  			list_add(&page->lru, &mm->context.pgtable_list);
>  		else
>  			list_del(&page->lru);
>  		spin_unlock_bh(&mm->context.lock);
> -		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
> -		mask >>= 24;
> +		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit);
>  		if (mask != 0x00U)
>  			return;
>  		half = 0x01U << bit;
>  	} else {
>  		half = 0x03U;
> -		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
> -		mask >>= 24;
> +		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
>  	}
> 
>  	page_table_release_check(page, table, half, mask);
> @@ -368,8 +363,7 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
>  	 * outside of the critical section from __tlb_remove_table() or from
>  	 * page_table_free()
>  	 */
> -	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
> -	mask >>= 24;
> +	mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit);
>  	if (mask & 0x03U)
>  		list_add_tail(&page->lru, &mm->context.pgtable_list);
>  	else
> @@ -391,14 +385,12 @@ void __tlb_remove_table(void *_table)
>  		return;
>  	case 0x01U:	/* lower 2K of a 4K page table */
>  	case 0x02U:	/* higher 2K of a 4K page table */
> -		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
> -		mask >>= 24;
> +		mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4);
>  		if (mask != 0x00U)
>  			return;
>  		break;
>  	case 0x03U:	/* 4K page table with pgstes */
> -		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
> -		mask >>= 24;
> +		mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U);
>  		break;
>  	}
> 
> -- 
> 2.40.1
> 

-- 
Sincerely yours,
Mike.