From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 12 Oct 2021 04:16:20 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Johannes Weiner
Cc: Roman Gushchin, linux-mm@kvack.org
Subject: Re: [PATCH 57/62] memcg: Convert object cgroups from struct page to struct slab
References: <20211004134650.4031813-1-willy@infradead.org>
 <20211004134650.4031813-58-willy@infradead.org>

On Mon, Oct 11, 2021 at 01:13:18PM -0400, Johannes Weiner wrote:
> Because right now we can have user pages pointing to a memcg, random
> alloc_page(GFP_ACCOUNT) pages pointing to an objcg, and slab pages
> pointing to an array of objcgs - all in the same memcg_data member.

Ah!  I was missing the possibility that an alloc_page() could point to
an objcg.  I had thought that only slab pages could point to an objcg
and only anon/file pages could point to a memcg.

> After your patch, slab->memcg_data points to an array of objcgs,
> period.  The only time it doesn't is when there is a bug.  Once the
> memcg_data member is no longer physically shared between page and
> slab, we can do:
>
> 	struct slab {
> 		struct obj_cgroup **objcgs;
> 	};
>
> and ditch the accessor function altogether.

Yes.

> > - * page_objcgs_check - get the object cgroups vector associated with a page
> > - * @page: a pointer to the page struct
> > + * slab_objcgs_check - get the object cgroups vector associated with a page
> > + * @slab: a pointer to the slab struct
> >   *
> > - * Returns a pointer to the object cgroups vector associated with the page,
> > - * or NULL. This function is safe to use if the page can be directly associated
> > + * Returns a pointer to the object cgroups vector associated with the slab,
> > + * or NULL. This function is safe to use if the slab can be directly associated
> >   * with a memory cgroup.
> >   */
> > -static inline struct obj_cgroup **page_objcgs_check(struct page *page)
> > +static inline struct obj_cgroup **slab_objcgs_check(struct slab *slab)
> >  {
> > -	unsigned long memcg_data = READ_ONCE(page->memcg_data);
> > +	unsigned long memcg_data = READ_ONCE(slab->memcg_data);
> >
> >  	if (!memcg_data || !(memcg_data & MEMCG_DATA_OBJCGS))
> >  		return NULL;
> >
> > -	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
> > +	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, slab_page(slab));
> >
> >  	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
>
> This is a bit weird.
>
> The function is used in one place, to check whether a random page is a
> slab page.  It's essentially a generic type check on the page!
>
> After your changes, you pass a struct slab that might well be invalid
> if this isn't a slab page, and you rely on the PAGE's memcg_data to
> tell you whether this is the case.  It works because page->memcg_data
> is overlaid with slab->memcg_data, but that won't be the case if we
> allocate struct slab separately.
>
> To avoid that trap down the road, I think it would be better to keep
> the *page* the ambiguous object for now, and only resolve to struct
> slab after the type check.  So that every time you see struct slab,
> you know it's valid.
>
> In fact, I think it would be best to just inline page_objcgs_check()
> into its sole caller.  It would clarify the resolution from wildcard
> page to valid struct slab quite a bit:

Yes.  Every time I read through this, I was wondering if there was
something I was missing.  I mean, there was (the memcg/objcg/objcgs
distinction above), but yes, if we know we have a slab, we don't need
this function.

> > @@ -2819,38 +2819,39 @@ int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
> >   */
> >  struct mem_cgroup *mem_cgroup_from_obj(void *p)
> >  {
> > -	struct page *page;
> > +	struct slab *slab;
> >
> >  	if (mem_cgroup_disabled())
> >  		return NULL;
> >
> > -	page = virt_to_head_page(p);
> > +	slab = virt_to_slab(p);
> >
> >  	/*
> >  	 * Slab objects are accounted individually, not per-page.
> >  	 * Memcg membership data for each individual object is saved in
> > -	 * the page->obj_cgroups.
> > +	 * the slab->obj_cgroups.
> >  	 */
> > -	if (page_objcgs_check(page)) {
> > +	if (slab_objcgs_check(slab)) {
>
> I.e. do this instead:
>
> 	page = virt_to_head_page(p);
>
> 	/* object is backed by slab */
> 	if (page->memcg_data & MEMCG_DATA_OBJCGS) {
> 		struct slab *slab = (struct slab *)page;
>
> 		objcg = slab_objcgs(...)[]
> 		return objcg ? obj_cgroup_memcg(objcg) : NULL;
> 	}
>
> 	/* object is backed by a regular kernel page */
> 	return page_memcg_check(page);

Maybe I'm missing something else, but why not discriminate based on
PageSlab()?  ie:

	slab = virt_to_slab(p);
	if (slab_test_cache(slab)) {
		...
	}
	return page_memcg_check((struct page *)slab);

... but see the response to your other email for why not exactly this.
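
For reference, a slightly fuller sketch of the inlined version along
the lines suggested above, keeping the page as the ambiguous object
until the type check resolves it to a slab.  This is only a sketch:
slab_objcgs(), the struct slab cast and slab->slab_cache are assumed
from this series, and obj_to_index() is used with its current
page-based signature, so the exact names may end up differing.

	struct mem_cgroup *mem_cgroup_from_obj(void *p)
	{
		struct page *page;

		if (mem_cgroup_disabled())
			return NULL;

		page = virt_to_head_page(p);

		/*
		 * Slab object: memcg_data holds the objcgs vector, so look
		 * up this object's objcg by its index within the slab.
		 */
		if (page->memcg_data & MEMCG_DATA_OBJCGS) {
			struct slab *slab = (struct slab *)page;
			struct obj_cgroup *objcg;
			unsigned int off;

			off = obj_to_index(slab->slab_cache, page, p);
			objcg = slab_objcgs(slab)[off];
			return objcg ? obj_cgroup_memcg(objcg) : NULL;
		}

		/*
		 * Regular kernel page (possibly GFP_ACCOUNT): memcg_data
		 * holds either a memcg or an objcg, which
		 * page_memcg_check() sorts out.
		 */
		return page_memcg_check(page);
	}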