Date: Tue, 12 Mar 2024 11:52:46 -0700
From: Roman Gushchin
To: Vlastimil Babka
Cc: Linus Torvalds, Josh Poimboeuf, Jeff Layton, Chuck Lever, Kees Cook,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song,
	Alexander Viro, Christian Brauner, Jan Kara, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH RFC 1/4] mm, slab: move memcg charging to post-alloc hook
References: <20240301-slab-memcg-v1-0-359328a46596@suse.cz>
	<20240301-slab-memcg-v1-1-359328a46596@suse.cz>
In-Reply-To: <20240301-slab-memcg-v1-1-359328a46596@suse.cz>

On Fri, Mar 01, 2024 at 06:07:08PM +0100, Vlastimil Babka wrote:
> The MEMCG_KMEM integration with slab currently relies on two hooks
> during allocation. memcg_slab_pre_alloc_hook() determines the objcg and
> charges it, and memcg_slab_post_alloc_hook() assigns the objcg pointer
> to the allocated object(s).
>
> As Linus pointed out, this is unnecessarily complex. Failing to charge
> due to memcg limits should be rare, so we can optimistically allocate
> the object(s) and do the charging together with assigning the objcg
> pointer in a single post_alloc hook. In the rare case the charging
> fails, we can free the object(s) back.
>
> This simplifies the code (no need to pass around the objcg pointer) and
> potentially allows separating charging from allocation in cases where
> it's common that the allocation would be immediately freed, and the
> memcg handling overhead could be saved.
>
> Suggested-by: Linus Torvalds
> Link: https://lore.kernel.org/all/CAHk-=whYOOdM7jWy5jdrAm8LxcgCMFyk2bt8fYYvZzM4U-zAQA@mail.gmail.com/
> Signed-off-by: Vlastimil Babka

Nice cleanup, Vlastimil!

A couple of small nits below, but otherwise, please add my

Reviewed-by: Roman Gushchin

Thanks!
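
FWIW, my mental model of the new flow, as a sketch (allocate_object()
and free_object() are made-up stand-ins for the slab internals, not
real functions; only __memcg_slab_post_alloc_hook() and its signature
are from the patch):

static void *slab_alloc_sketch(struct kmem_cache *s, struct list_lru *lru,
			       gfp_t flags)
{
	/* allocate optimistically, with no pre-alloc memcg work */
	void *obj = allocate_object(s, flags);

	/*
	 * charging and objcg assignment now happen together, post-alloc;
	 * in the rare case the charge fails, free the object back
	 */
	if (obj && !__memcg_slab_post_alloc_hook(s, lru, flags, 1, &obj)) {
		free_object(s, obj);
		obj = NULL;
	}
	return obj;
}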
> ---
>  mm/slub.c | 180 +++++++++++++++++++++++++++-----------------------------------
>  1 file changed, 77 insertions(+), 103 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..7022a1246bab 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1897,23 +1897,36 @@ static inline size_t obj_full_size(struct kmem_cache *s)
>  	return s->size + sizeof(struct obj_cgroup *);
>  }
>
> -/*
> - * Returns false if the allocation should fail.
> - */
> -static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
> -					struct list_lru *lru,
> -					struct obj_cgroup **objcgp,
> -					size_t objects, gfp_t flags)
> +static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
> +					 struct list_lru *lru,
> +					 gfp_t flags, size_t size,
> +					 void **p)
>  {
> +	struct obj_cgroup *objcg;
> +	struct slab *slab;
> +	unsigned long off;
> +	size_t i;
> +
>  	/*
>  	 * The obtained objcg pointer is safe to use within the current scope,
>  	 * defined by current task or set_active_memcg() pair.
>  	 * obj_cgroup_get() is used to get a permanent reference.
>  	 */
> -	struct obj_cgroup *objcg = current_obj_cgroup();
> +	objcg = current_obj_cgroup();
>  	if (!objcg)
>  		return true;
>
> +	/*
> +	 * slab_alloc_node() avoids the NULL check, so we might be called with a
> +	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
> +	 * the whole requested size.
> +	 * return success as there's nothing to free back
> +	 */
> +	if (unlikely(*p == NULL))
> +		return true;

Probably better to move this check up? The current_obj_cgroup() != NULL
check is more expensive (see the sketch at the end of this mail).

> +
> +	flags &= gfp_allowed_mask;
> +
>  	if (lru) {
>  		int ret;
>  		struct mem_cgroup *memcg;
> @@ -1926,71 +1939,51 @@ static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
>  		return false;
>  	}
>
> -	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s)))
> +	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
>  		return false;
>
> -	*objcgp = objcg;
> +	for (i = 0; i < size; i++) {
> +		slab = virt_to_slab(p[i]);

Not specific to this change, but I wonder if it makes sense to introduce
a virt_to_slab() variant without any extra checks, for this and similar
cases where we know for sure that p resides on a slab page. What do you
think? (A sketch of what I mean follows below.)
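
On the first nit above: concretely, I mean reordering the entry of
__memcg_slab_post_alloc_hook() along these lines (just a sketch against
the quoted hunk, untested):

	/* cheap NULL-object check first: nothing to charge or free back */
	if (unlikely(*p == NULL))
		return true;

	/* only then do the more expensive current_obj_cgroup() lookup */
	objcg = current_obj_cgroup();
	if (!objcg)
		return true;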
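
And for the second nit, something like the following hypothetical helper
next to virt_to_slab() (the _unchecked name is invented here, untested):

/*
 * For callers that know addr is backed by a slab page: skip the
 * folio_test_slab() sanity check that virt_to_slab() performs.
 */
static inline struct slab *virt_to_slab_unchecked(const void *addr)
{
	return folio_slab(virt_to_folio(addr));
}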