Message-ID: <3efb8650-e879-8bb4-92a0-c96281b5a6ea@suse.cz>
Date: Wed, 1 Dec 2021 19:39:19 +0100
To: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim,
    Pekka Enberg, Andrew Morton, Stephen Rothwell
Cc: linux-mm@kvack.org, Linus Torvalds
References: <20211201181510.18784-1-vbabka@suse.cz>
From: Vlastimil Babka
Subject: slab tree for next
In-Reply-To: <20211201181510.18784-1-vbabka@suse.cz>

On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
>
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>
> The plan: as with my SLUB PREEMPT_RT series in 5.15, I would prefer to again
> go the git pull request way of eventually merging this, as it's also not a
> small series. I will thus reply to this mail asking for my branch to be
> included in linux-next.
>
> As stated in the v1/RFC cover letter, I wouldn't mind then continuing to
> maintain a git tree for all slab patches in general. It was apparently
> already done that way before, by Pekka:
> https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/

Hi Stephen,

Please include a new tree in linux-next:

git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git slab-next

i.e. https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-next

which is now identical to the slab-struct_slab-v2r2 branch [1].

When I tried to merge this into next-20211201, there were minor conflicts with
two patches from mmotm:

zsmalloc-move-huge-compressed-obj-from-page-to-zspage.patch
mm-memcg-relocate-mod_objcg_mlstate-get_obj_stock-and-put_obj_stock.patch

Both conflicts appear to be just changes in context.

Thanks,
Vlastimil

[1] https://lore.kernel.org/all/20211201181510.18784-1-vbabka@suse.cz/

> Changes from v1/RFC:
> https://lore.kernel.org/all/20211116001628.24216-1-vbabka@suse.cz/
> - Added virt_to_folio() and folio_address() in the new Patch 1.
> - Addressed feedback from Andrey Konovalov and Matthew Wilcox (Thanks!)
> - Added Tested-by: Marco Elver for the KFENCE parts (Thanks!)
>
> Previous version from Matthew Wilcox:
> https://lore.kernel.org/all/20211004134650.4031813-1-willy@infradead.org/
>
> LWN coverage of the above:
> https://lwn.net/Articles/871982/
>
> This is originally an offshoot of the folio work by Matthew. One of the more
> complex parts of the struct page definition is the part used by the slab
> allocators. It would be good for MM in general if struct slab were its own
> data type, and it also helps to prevent tail pages from slipping in anywhere.
> As Matthew requested in his proof-of-concept series, I have taken over the
> development of this series, so it's a mix of patches from him (often modified
> by me) and my own.
>
> One big difference is the use of coccinelle to perform the relatively trivial
> parts of the conversions automatically and at once, instead of a larger
> number of smaller incremental reviewable steps. Thanks to Julia Lawall and
> Luis Chamberlain for all their help!
>
> Another notable difference (based also on review feedback) is that I don't
> represent the large kmalloc allocations, which are not really slabs but use
> the page allocator directly, with a struct slab. When going from an object
> address to a struct slab, the code first tests the folio slab flag, and only
> if it is set does it convert to a struct slab. This makes the struct slab
> type stronger.
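In other words, the object-to-slab lookup boils down to roughly the following
(a simplified sketch, not necessarily the exact code in the series;
virt_to_folio() is the helper added in Patch 1, and folio_slab() is the cast
helper the series introduces in mm/slab.h):

static inline struct slab *virt_to_slab(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	/* Large kmalloc allocations are not slabs; bail out for those. */
	if (!folio_test_slab(folio))
		return NULL;

	/* Now it's safe to treat the (head) page as a struct slab. */
	return folio_slab(folio);
}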
> Finally, although Matthew's version didn't use any of the folio work, the
> initial folio support has been merged in the meantime, so my version builds
> on top of it where appropriate. This eliminates some of the redundant
> compound_head() calls, e.g. when testing the slab flag.
>
> To sum up, after this series, struct page fields used by the slab allocators
> are moved from struct page to a new struct slab that uses the same physical
> storage. The availability of the fields is further distinguished by the
> selected slab allocator implementation. The advantages include:
>
> - Similar to folios, if the slab is of order > 0, struct slab is always
>   guaranteed to be the head page. Additionally, it's guaranteed to be an
>   actual slab page, not a large kmalloc. This removes uncertainty and
>   potential for bugs.
> - It's not possible to accidentally use fields of a slab implementation that
>   is not configured.
> - Other subsystems cannot use slab's fields in struct page anymore (some
>   existing non-slab usages had to be adjusted in this series), so slab
>   implementations have more freedom in rearranging them in struct slab.
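To give an idea of what "uses the same physical storage" means, here is a toy
illustration of the technique with made-up types (not the actual kernel
definitions; the real struct slab and the corresponding offset assertions live
in mm/slab.h and differ per allocator):

/* Toy stand-ins for struct page / struct slab, for illustration only. */
#include <assert.h>
#include <stddef.h>

struct toy_page {
	unsigned long flags;
	void *payload[5];
	int refcount;
};

struct toy_slab {
	unsigned long __page_flags;	/* overlays toy_page.flags */
	void *freelist;
	void *slab_cache;
	void *unused[3];
	int __page_refcount;		/* overlays toy_page.refcount */
};

/* Fields shared with the generic page type must stay at the same offset. */
static_assert(offsetof(struct toy_page, flags) ==
	      offsetof(struct toy_slab, __page_flags), "flags offset");
static_assert(offsetof(struct toy_page, refcount) ==
	      offsetof(struct toy_slab, __page_refcount), "refcount offset");
static_assert(sizeof(struct toy_slab) <= sizeof(struct toy_page), "size");

/* The reinterpreting cast between the two views is centralized in a helper. */
static inline struct toy_slab *toy_page_slab(struct toy_page *page)
{
	return (struct toy_slab *)page;
}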
> Matthew Wilcox (Oracle) (16):
>   mm: Split slab into its own type
>   mm: Add account_slab() and unaccount_slab()
>   mm: Convert virt_to_cache() to use struct slab
>   mm: Convert __ksize() to struct slab
>   mm: Use struct slab in kmem_obj_info()
>   mm: Convert check_heap_object() to use struct slab
>   mm/slub: Convert detached_freelist to use a struct slab
>   mm/slub: Convert kfree() to use a struct slab
>   mm/slub: Convert print_page_info() to print_slab_info()
>   mm/slub: Convert pfmemalloc_match() to take a struct slab
>   mm/slob: Convert SLOB to use struct slab
>   mm/kasan: Convert to struct folio and struct slab
>   zsmalloc: Stop using slab fields in struct page
>   bootmem: Use page->index instead of page->freelist
>   iommu: Use put_pages_list
>   mm: Remove slab from struct page
>
> Vlastimil Babka (17):
>   mm: add virt_to_folio() and folio_address()
>   mm/slab: Dissolve slab_map_pages() in its caller
>   mm/slub: Make object_err() static
>   mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
>   mm/slub: Convert alloc_slab_page() to return a struct slab
>   mm/slub: Convert __free_slab() to use struct slab
>   mm/slub: Convert most struct page to struct slab by spatch
>   mm/slub: Finish struct page to struct slab conversion
>   mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>   mm/slab: Convert most struct page to struct slab by spatch
>   mm/slab: Finish struct page to struct slab conversion
>   mm: Convert struct page to struct slab in functions used by other subsystems
>   mm/memcg: Convert slab objcgs from struct page to struct slab
>   mm/kfence: Convert kfence_guarded_alloc() to struct slab
>   mm/sl*b: Differentiate struct slab fields by sl*b implementations
>   mm/slub: Simplify struct slab slabs field definition
>   mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
>
>  arch/x86/mm/init_64.c          |    2 +-
>  drivers/iommu/amd/io_pgtable.c |   59 +-
>  drivers/iommu/dma-iommu.c      |   11 +-
>  drivers/iommu/intel/iommu.c    |   89 +--
>  include/linux/bootmem_info.h   |    2 +-
>  include/linux/iommu.h          |    3 +-
>  include/linux/kasan.h          |    9 +-
>  include/linux/memcontrol.h     |   48 --
>  include/linux/mm.h             |   12 +
>  include/linux/mm_types.h       |   38 +-
>  include/linux/page-flags.h     |   37 -
>  include/linux/slab.h           |    8 -
>  include/linux/slab_def.h       |   16 +-
>  include/linux/slub_def.h       |   29 +-
>  mm/bootmem_info.c              |    7 +-
>  mm/kasan/common.c              |   27 +-
>  mm/kasan/generic.c             |    8 +-
>  mm/kasan/kasan.h               |    1 +
>  mm/kasan/quarantine.c          |    2 +-
>  mm/kasan/report.c              |   13 +-
>  mm/kasan/report_tags.c         |   10 +-
>  mm/kfence/core.c               |   17 +-
>  mm/kfence/kfence_test.c        |    6 +-
>  mm/memcontrol.c                |   43 +-
>  mm/slab.c                      |  455 ++++++-------
>  mm/slab.h                      |  322 ++++++++-
>  mm/slab_common.c               |    8 +-
>  mm/slob.c                      |   46 +-
>  mm/slub.c                      | 1164 ++++++++++++++++----------
>  mm/sparse.c                    |    2 +-
>  mm/usercopy.c                  |   13 +-
>  mm/zsmalloc.c                  |   18 +-
>  32 files changed, 1317 insertions(+), 1208 deletions(-)