Subject: Re: [PATCH 03/62] mm: Split slab into its own type
From: David Hildenbrand
Organization: Red Hat
To: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Date: Tue, 5 Oct 2021 18:10:24 +0200
Message-ID: <02a055cd-19d6-6e1d-59bb-e9e5f9f1da5b@redhat.com>
In-Reply-To: <20211004134650.4031813-4-willy@infradead.org>
References: <20211004134650.4031813-1-willy@infradead.org> <20211004134650.4031813-4-willy@infradead.org>

On 04.10.21 15:45, Matthew Wilcox (Oracle) wrote:
> Make struct slab independent of struct page. It still uses the
> underlying memory in struct page for storing slab-specific data,
> but slab and slub can now be weaned off using struct page directly.
> Some of the wrapper functions (slab_address() and slab_order())
> still need to cast to struct page, but this is a significant
> disentanglement.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/linux/mm_types.h   | 56 +++++++++++++++++++++++++++++
>  include/linux/page-flags.h | 29 +++++++++++++++
>  mm/slab.h                  | 73 ++++++++++++++++++++++++++++++++++++++
>  mm/slub.c                  |  8 ++---
>  4 files changed, 162 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 7f8ee09c711f..c2ea71aba84e 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -239,6 +239,62 @@ struct page {
>  #endif
>  } _struct_page_alignment;
>
> +/* Reuses the bits in struct page */
> +struct slab {
> +	unsigned long flags;
> +	union {
> +		struct list_head slab_list;
> +		struct {	/* Partial pages */
> +			struct slab *next;
> +#ifdef CONFIG_64BIT
> +			int slabs;	/* Nr of slabs left */
> +			int pobjects;	/* Approximate count */
> +#else
> +			short int slabs;
> +			short int pobjects;
> +#endif
> +		};
> +		struct rcu_head rcu_head;
> +	};
> +	struct kmem_cache *slab_cache;	/* not slob */
> +	/* Double-word boundary */
> +	void *freelist;			/* first free object */
> +	union {
> +		void *s_mem;		/* slab: first object */
> +		unsigned long counters;	/* SLUB */
> +		struct {		/* SLUB */
> +			unsigned inuse:16;
> +			unsigned objects:15;
> +			unsigned frozen:1;
> +		};
> +	};
> +
> +	union {
> +		unsigned int active;	/* SLAB */
> +		int units;		/* SLOB */
> +	};
> +	atomic_t _refcount;
> +#ifdef CONFIG_MEMCG
> +	unsigned long memcg_data;
> +#endif
> +};

My 2 cents just from reading the first 3 mails: I'm not particularly
happy about the "/* Reuses the bits in struct page */" part here,
because it essentially means having to pay attention, whenever we
change something in "struct page", not to mess up all the other
special types we have. And I wasn't particularly happy scanning patch
#1 and #2 for the same reason. Can't we avoid that?
What I can see is that we want to (and, for generic infrastructure,
currently must) keep some members of "struct page" (e.g., flags,
_refcount) at the very same place, because generic infrastructure
relies on them.

Maybe that has already been discussed somewhere deep down in the folio
mail threads, but I would have expected that we keep the generic parts
inside "struct page" and only have inside "struct slab" what's special
for "struct slab".

I would have thought that we want something like this (but absolutely
not this):

struct page_header {
	unsigned long flags;
};

struct page_footer {
	atomic_t _refcount;
#ifdef CONFIG_MEMCG
	unsigned long memcg_data;
#endif
};

struct page {
	struct page_header header;
	uint8_t reserved[$DO_THE_MATH];
	struct page_footer footer;
};

struct slab {
	...
};

struct slab_page {
	struct page_header header;
	struct slab slab;
	struct page_footer footer;
};

Instead of providing helpers for struct slab_page, simply cast to
struct page and replace the structs in struct slab_page by simple
placeholders with the same size.

That would to me look like a nice cleanup in itself, ignoring all the
other parallel discussions that are going on. But I imagine the
problem is more involved, and a simple header/footer might not be
sufficient.

-- 
Thanks,

David / dhildenb