From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn, Vlastimil Babka
Subject: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Date: Tue, 25 May 2021 01:39:29 +0200
Message-Id: <20210524233946.20352-10-vbabka@suse.cz>
In-Reply-To: <20210524233946.20352-1-vbabka@suse.cz>
References: <20210524233946.20352-1-vbabka@suse.cz>

Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
includes cases where it is not needed, such as when the allocation ends up in
the page allocator and has to awkwardly re-enable irqs based on the gfp flags.
Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even
when it hits the __slab_alloc() slow path, and long periods with interrupts
disabled are undesirable.

As a first step towards reducing irq-disabled periods, move the irq handling
into ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu
pointer from becoming invalid via migrate_disable(). This does not protect
against access by a preempting task; that protection is still provided by
disabled irqs for most of ___slab_alloc(). As a small immediate benefit, the
slab_out_of_memory() call from ___slab_alloc() is now done with irqs enabled.

kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables
them before calling ___slab_alloc(), which then disables them at its
discretion. The whole kmem_cache_alloc_bulk() operation also disables cpu
migration.

When ___slab_alloc() calls new_slab() to allocate a new page, migration is
re-enabled around the call, because new_slab() will re-enable interrupts in
contexts that allow blocking.

The patch itself thus increases overhead a bit, due to the disabled migration
and the extra disabling/enabling of irqs in kmem_cache_alloc_bulk(), but that
will be gradually improved in the following patches.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)
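As a note for reviewers, the layering described above can be summarised by the
following minimal, hypothetical sketch (not part of the patch; the demo_* names
are made up, and it assumes a kernel build context such as a test module): the
caller only pins the task to a CPU so the per-cpu pointer stays valid, while
the slow path owns the irq disabling and drops both protections around a
potentially blocking page allocation.

/*
 * Illustrative sketch only: demo_pcpu, demo_slowpath() and demo_alloc() are
 * hypothetical names mirroring the layering in this patch, not SLUB itself.
 */
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/irqflags.h>
#include <linux/gfp.h>

struct demo_pcpu {
	void *freelist;
};

static DEFINE_PER_CPU(struct demo_pcpu, demo_pcpu);

/* Slow path: owns the irq disabling and may drop it around blocking calls. */
static void *demo_slowpath(gfp_t gfpflags, struct demo_pcpu *c)
{
	unsigned long flags;
	void *object;

	local_irq_save(flags);		/* protect per-cpu state from irq context */
	object = c->freelist;
	if (!object) {
		/*
		 * The page allocator may block, so re-enable irqs and
		 * migration around it (analogous to what new_slab() allows).
		 */
		local_irq_restore(flags);
		migrate_enable();
		object = (void *)__get_free_page(gfpflags);
		migrate_disable();
		local_irq_save(flags);
		/* We may have moved to another CPU; reload the percpu pointer. */
		c = this_cpu_ptr(&demo_pcpu);
	}
	local_irq_restore(flags);
	return object;
}

/* Caller: only prevents migration so the percpu pointer stays valid. */
static void *demo_alloc(gfp_t gfpflags)
{
	struct demo_pcpu *c;
	void *p;

	migrate_disable();
	c = this_cpu_ptr(&demo_pcpu);
	p = demo_slowpath(gfpflags, c);
	migrate_enable();
	return p;
}

The ordering matters: the percpu pointer must be reloaded after
migrate_disable(), because the task may have migrated to another CPU while
migration was enabled.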
diff --git a/mm/slub.c b/mm/slub.c
index 06f30c9ad361..c5f4f9282496 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2631,7 +2631,7 @@ static inline void *get_freelist(struct kmem_cache *s, struct page *page)
  * we need to allocate a new slab. This is the slowest path since it involves
  * a call to the page allocator and the setup of a new slab.
  *
- * Version of __slab_alloc to use when we know that interrupts are
+ * Version of __slab_alloc to use when we know that preemption is
  * already disabled (which is the case for bulk allocation).
  */
 static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
@@ -2639,9 +2639,11 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 {
 	void *freelist;
 	struct page *page;
+	unsigned long flags;
 
 	stat(s, ALLOC_SLOWPATH);
 
+	local_irq_save(flags);
 	page = c->page;
 	if (!page) {
 		/*
@@ -2704,6 +2706,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	VM_BUG_ON(!c->page->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
+	local_irq_restore(flags);
 	return freelist;
 
 new_slab:
@@ -2721,14 +2724,17 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto check_new_page;
 	}
 
+	migrate_enable();
 	page = new_slab(s, gfpflags, node);
+	migrate_disable();
+	c = this_cpu_ptr(s->cpu_slab);
 
 	if (unlikely(!page)) {
+		local_irq_restore(flags);
 		slab_out_of_memory(s, gfpflags, node);
 		return NULL;
 	}
 
-	c = raw_cpu_ptr(s->cpu_slab);
 	if (c->page)
 		flush_slab(s, c);
 
@@ -2768,6 +2774,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 return_single:
 
 	deactivate_slab(s, page, get_freepointer(s, freelist), c);
+	local_irq_restore(flags);
 	return freelist;
 }
 
@@ -2779,20 +2786,19 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		unsigned long addr, struct kmem_cache_cpu *c)
 {
 	void *p;
-	unsigned long flags;
 
-	local_irq_save(flags);
+	migrate_disable();
 #ifdef CONFIG_PREEMPTION
 	/*
 	 * We may have been preempted and rescheduled on a different
-	 * cpu before disabling interrupts. Need to reload cpu area
+	 * cpu before disabling preemption. Need to reload cpu area
 	 * pointer.
 	 */
 	c = this_cpu_ptr(s->cpu_slab);
 #endif
 
 	p = ___slab_alloc(s, gfpflags, node, addr, c);
-	local_irq_restore(flags);
+	migrate_enable();
 	return p;
 }
 
@@ -3312,8 +3318,9 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 * IRQs, which protects against PREEMPT and interrupts
 	 * handlers invoking normal fastpath.
 	 */
-	local_irq_disable();
+	migrate_disable();
 	c = this_cpu_ptr(s->cpu_slab);
+	local_irq_disable();
 
 	for (i = 0; i < size; i++) {
 		void *object = kfence_alloc(s, s->object_size, flags);
@@ -3334,6 +3341,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			 */
 			c->tid = next_tid(c->tid);
 
+			local_irq_enable();
+
 			/*
 			 * Invoking slow path likely have side-effect
 			 * of re-populating per CPU c->freelist
@@ -3346,6 +3355,8 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 			c = this_cpu_ptr(s->cpu_slab);
 			maybe_wipe_obj_freeptr(s, p[i]);
 
+			local_irq_disable();
+
 			continue; /* goto for-loop */
 		}
 		c->freelist = get_freepointer(s, object);
@@ -3354,6 +3365,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
+	migrate_enable();
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
-- 
2.31.1