Date: Sun, 27 Nov 2022 19:54:30 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: Christoph Lameter, David Rientjes, Joonsoo Kim, Pekka Enberg,
 Roman Gushchin, Andrew Morton, Linus Torvalds, Matthew Wilcox,
 patches@lists.linux.dev, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 09/12] mm, slub: split out allocations from pre/post hooks
References: <20221121171202.22080-1-vbabka@suse.cz> <20221121171202.22080-10-vbabka@suse.cz>
In-Reply-To: <20221121171202.22080-10-vbabka@suse.cz>

On Mon, Nov 21, 2022 at 06:11:59PM +0100, Vlastimil Babka wrote:
> In the following patch we want to introduce CONFIG_SLUB_TINY allocation
> paths that don't use the percpu slab. To prepare, refactor the
> allocation functions:
> 
> Split out __slab_alloc_node() from slab_alloc_node() where the former
> does the actual allocation and the latter calls the pre/post hooks.
> 
> Analogically, split out __kmem_cache_alloc_bulk() from
> kmem_cache_alloc_bulk().
> 
> Signed-off-by: Vlastimil Babka
> ---
>  mm/slub.c | 127 +++++++++++++++++++++++++++++++++---------------------
>  1 file changed, 77 insertions(+), 50 deletions(-)

[...]

> +
> +/* Note that interrupts must be enabled when calling this function. */
> +int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> +			  void **p)
> +{
> +	int i;
> +	struct obj_cgroup *objcg = NULL;
> +
> +	/* memcg and kmem_cache debug support */
> +	s = slab_pre_alloc_hook(s, NULL, &objcg, size, flags);
> +	if (unlikely(!s))
> +		return false;
> +
> +	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
> +
> +	/*
> +	 * memcg and kmem_cache debug support and memory initialization.
> +	 * Done outside of the IRQ disabled fastpath loop.
> +	 */
> +	if (i != 0)
> +		slab_post_alloc_hook(s, objcg, flags, size, p,
> +				     slab_want_init_on_alloc(flags, s));

This patch looks mostly good, but I'm wondering: what happens if someone
calls it with size == 0, so that slab_post_alloc_hook() is never called?
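
To make the concern concrete, here is a toy userspace model of the pattern
(hypothetical names, not the code from mm/slub.c), under the assumption that
the reference taken in the pre hook is only dropped by the post hook:

/*
 * Toy model of kmem_cache_alloc_bulk()'s hook ordering -- hypothetical
 * names; the counter only stands in for whatever the pre hook takes
 * (e.g. the objcg reference) that the post hook is expected to release.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static int ref;				/* reference held between the hooks */

static bool pre_hook(void)		/* models slab_pre_alloc_hook() */
{
	ref++;				/* take a reference */
	return true;
}

static void post_hook(void)		/* models slab_post_alloc_hook() */
{
	ref--;				/* drop the reference */
}

static int alloc_bulk(size_t size)	/* models kmem_cache_alloc_bulk() */
{
	int i;

	if (!pre_hook())
		return 0;

	i = (int)size;			/* pretend every object was allocated */

	if (i != 0)			/* the post hook is skipped when i == 0 */
		post_hook();
	return i;
}

int main(void)
{
	alloc_bulk(0);
	printf("references still held after a size == 0 call: %d\n", ref);
	return 0;
}

If that assumption about the hooks holds, a size == 0 call would leave the
reference from the pre hook held forever; if I'm misreading the memcg hooks,
please ignore this.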
> +	return i;
> }
> EXPORT_SYMBOL(kmem_cache_alloc_bulk);
> 
> -- 
> 2.38.1
> 

-- 
Thanks,
Hyeonggon