Date: Wed, 29 Dec 2021 11:22:54 +0000
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Matthew Wilcox, Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg, linux-mm@kvack.org, Andrew Morton, patches@lists.linux.dev, Alexander Potapenko, Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski, Borislav Petkov, cgroups@vger.kernel.org, Dave Hansen, David Woodhouse, Dmitry Vyukov, "H. Peter Anvin", Ingo Molnar, iommu@lists.linux-foundation.org, Joerg Roedel, Johannes Weiner, Julia Lawall, kasan-dev@googlegroups.com, Lu Baolu, Luis Chamberlain, Marco Elver, Michal Hocko, Minchan Kim, Nitin Gupta, Peter Zijlstra, Sergey Senozhatsky, Suravee Suthikulpanit, Thomas Gleixner, Vladimir Davydov, Will Deacon, x86@kernel.org, Roman Gushchin
Subject: Re: [PATCH v2 00/33] Separate struct slab from struct page
References: <20211201181510.18784-1-vbabka@suse.cz> <4c3dfdfa-2e19-a9a7-7945-3d75bc87ca05@suse.cz>

On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them,
> >> and this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> >
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and
> > small tweaks and a new patch from Hyeonggon Yoo on top. To avoid too
> > much spam, here's a range diff:
>
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next.
> I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.

Hello Matthew and Vlastimil. This is part 3 of my review.

# mm: Convert struct page to struct slab in functions used by other subsystems

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slub: Convert most struct page to struct slab by spatch

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

with a question below.

-static int check_slab(struct kmem_cache *s, struct page *page)
+static int check_slab(struct kmem_cache *s, struct slab *slab)
 {
 	int maxobj;

-	if (!PageSlab(page)) {
-		slab_err(s, page, "Not a valid slab page");
+	if (!folio_test_slab(slab_folio(slab))) {
+		slab_err(s, slab, "Not a valid slab page");
 		return 0;
 	}

Can't we guarantee that a struct slab * always points to a slab? For a
struct page *, !PageSlab(page) is possible because a struct page * can
describe something other than a slab, but a struct slab * can only ever
be a slab. The code would be simpler if we guaranteed that a struct
slab * always points to a slab (or is NULL).

# mm/slub: Convert pfmemalloc_match() to take a struct slab

It's confusing to me because the original pfmemalloc_match() is removed,
and pfmemalloc_match_unsafe() was renamed to pfmemalloc_match() and
converted to use the slab_test_pfmemalloc() helper. But I agree with the
resulting code, so:

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slub: Convert alloc_slab_page() to return a struct slab

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slub: Convert print_page_info() to print_slab_info()

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I hope to review the rest of the patches within a week.
Thanks,
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
>   thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
>   renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
>   as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
>   change one caller of slab_objcgs_check() to slab_objcgs() as suggested
>   by Johannes, realize the other caller should be also changed, and remove
>   slab_objcgs_check() completely.