Date: Tue, 5 Oct 2021 19:48:07 +0100
From: Matthew Wilcox
To: David Hildenbrand
Cc: linux-mm@kvack.org
Subject: Re: [PATCH 03/62] mm: Split slab into its own type
References: <20211004134650.4031813-1-willy@infradead.org> <20211004134650.4031813-4-willy@infradead.org> <02a055cd-19d6-6e1d-59bb-e9e5f9f1da5b@redhat.com>
In-Reply-To: <02a055cd-19d6-6e1d-59bb-e9e5f9f1da5b@redhat.com>
On Tue, Oct 05, 2021 at 06:10:24PM +0200, David Hildenbrand wrote:
> My 2 cents just from reading the first 3 mails:
>
> I'm not particularly happy about the "/* Reuses the bits in struct
> page */" part of things here, essentially having to pay attention,
> whenever we change something in "struct page", not to mess up all the
> other special types we have. And I wasn't particularly happy scanning
> patch #1 and #2 for the same reason. Can't we avoid that?

I've tried to mitigate that with the compile-time assertions. They're
actually a bit stronger than what we have now.

> What I can see is that we want to (and, right now, for generic
> infrastructure, must) keep some members of "struct page" (e.g., flags,
> _refcount) at the very same place, because generic infrastructure
> relies on them.
>
> Maybe that has already been discussed somewhere deep down in the folio
> mail threads, but I would have expected that we keep what's generic
> inside "struct page" and only have inside "struct slab" what's special
> for "struct slab".
>
> I would have thought that we want something like this (but absolutely
> not this):
>
> struct page_header {
> 	unsigned long flags;
> };
>
> struct page_footer {
> 	atomic_t _refcount;
> #ifdef CONFIG_MEMCG
> 	unsigned long memcg_data;
> #endif
> };
>
> struct page {
> 	struct page_header header;
> 	uint8_t reserved[$DO_THE_MATH];
> 	struct page_footer footer;
> };

The problem with this definition is the number of places which refer to
page->flags and would now have to be churned to page->header.flags.
_refcount is rather better encapsulated, and I'm not entirely sure how
much we'd have to do for memcg_data. Maybe that was what you meant by
"this but absolutely not this"? I don't quite understand what that was
supposed to mean.

> struct slab {
> 	...
> };
>
> struct slab_page {
> 	struct page_header header;
> 	struct slab slab;
> 	struct page_footer footer;
> };
>
> Instead of providing helpers for struct slab_page, simply cast to
> struct page and replace the structs in struct slab_page by simple
> placeholders with the same size.
>
> That would, to me, look like a nice cleanup in itself, ignoring all
> the other parallel discussions that are going on. But I imagine the
> problem is more involved, and a simple header/footer might not be
> sufficient.

Yes, exactly, the problems are more involved. The location/contents of
page->mapping are special, the contents of bit zero of the second word
are special (it determines compound_head or not), and slub particularly
relies on the 128-bit alignment of the { freelist, counters } pair.

The ultimate destination (and I think Kent/Johannes/I all agree on
this) is to dynamically allocate struct slab. At _that_ point, we can
actually drop _refcount from struct slab and even change how struct
slab is defined based on CONFIG_SLUB / CONFIG_SLOB / CONFIG_SLAB. I
think we'll still need ->flags to be the first element of struct slab,
but bit 0 of the second word stops being special, we won't need to care
about where page->mapping aliases, and the 128-bit alignment becomes
solely the concern of the slub allocator instead of affecting everyone
who uses struct page.
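
[Editor's note: to make the "compile-time assertions" above concrete, here
is a minimal standalone sketch of that style of check. The simplified
layouts, the SLAB_MATCH name, and the field names below are illustrative
stand-ins, not the actual mm/ definitions; the idea is only that any
field of a "struct slab" which reuses bits of "struct page" is pinned to
the same offset, so a layout change breaks the build instead of silently
corrupting the other type.]

	#include <assert.h>	/* static_assert (C11) */
	#include <stddef.h>	/* offsetof */

	/* Simplified stand-ins for the real mm types (illustrative only). */
	struct page {
		unsigned long flags;
		unsigned long compound_head;	/* bit 0 tags tail pages */
		void *mapping;
		unsigned long private;
		int _refcount;
	};

	struct slab {
		unsigned long __page_flags;	/* must alias page->flags */
		void *freelist;			/* aliases page->compound_head */
		void *slab_cache;
		unsigned long counters;
		int __page_refcount;		/* aliases page->_refcount */
	};

	/* Break the build if either struct's layout drifts from the other. */
	#define SLAB_MATCH(pg, sl) \
		static_assert(offsetof(struct page, pg) == \
			      offsetof(struct slab, sl), \
			      "struct slab." #sl " moved relative to struct page")

	SLAB_MATCH(flags, __page_flags);
	SLAB_MATCH(compound_head, freelist);	/* bit 0 of this word is special */
	SLAB_MATCH(_refcount, __page_refcount);
	static_assert(sizeof(struct slab) <= sizeof(struct page),
		      "struct slab must fit within struct page");

This is stronger than a comment like "/* Reuses the bits in struct
page */": reordering either structure fails to compile rather than
relying on a reviewer to notice the aliasing.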
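[Editor's note: a similarly hedged sketch of why "bit zero of the second
word" is special. The kernel marks a tail page by storing the head
page's address in that word with bit 0 set, so any type overlaid on that
word (such as slub's freelist pointer) must never hold a live value with
bit 0 set. The helper names below are made up for illustration.]

	#include <assert.h>
	#include <stdio.h>

	struct page {
		unsigned long flags;
		unsigned long compound_head;	/* head pointer | 1 on tail pages */
	};

	static void make_tail(struct page *tail, struct page *head)
	{
		/* Tag the head pointer with bit 0 to mark this as a tail page. */
		tail->compound_head = (unsigned long)head | 1UL;
	}

	static struct page *compound_head_of(struct page *page)
	{
		unsigned long word = page->compound_head;

		if (word & 1UL)			/* tail: decode the head pointer */
			return (struct page *)(word - 1UL);
		return page;			/* head or order-0 page */
	}

	int main(void)
	{
		struct page head = {0}, tail = {0};

		make_tail(&tail, &head);
		assert(compound_head_of(&tail) == &head);
		assert(compound_head_of(&head) == &head);
		printf("bit 0 of the second word distinguishes tail pages\n");
		return 0;
	}

This is also why a dynamically allocated struct slab would stop caring
about that bit, as the mail above says: once nothing interprets its
second word as a possibly-tagged head pointer, the constraint vanishes
along with the page->mapping aliasing concern.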