From: Vlastimil Babka <vbabka@suse.cz>
To: Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim, Sebastian Andrzej Siewior, Thomas Gleixner, Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn
References: <20210524233946.20352-1-vbabka@suse.cz> <20210524233946.20352-10-vbabka@suse.cz> <20210525123536.GR30378@techsingularity.net>
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Date: Tue, 25 May 2021 14:47:10 +0200
In-Reply-To: <20210525123536.GR30378@techsingularity.net>

On 5/25/21 2:35 PM, Mel Gorman wrote:
> On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
>> Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
>> includes cases where this is not needed, such as when the allocation ends up
>> in the page allocator and has to awkwardly re-enable irqs based on gfp flags.
>> Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even
>> when it hits the __slab_alloc() slow path, and long periods with disabled
>> interrupts are undesirable.
>>
>> As a first step towards reducing irq-disabled periods, move irq handling into
>> ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
>> from becoming invalid via migrate_disable(). This does not protect against
>> preemption of the accesses themselves; that is still done by disabled irqs
>> for most of ___slab_alloc(). As a small immediate benefit, the
>> slab_out_of_memory() call from ___slab_alloc() is now made with irqs enabled.
>>
>> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables
>> them before calling ___slab_alloc(), which then disables them at its
>> discretion. The whole kmem_cache_alloc_bulk() operation also disables cpu
>> migration.
>>
>> When ___slab_alloc() calls new_slab() to allocate a new page, preemption is
>> re-enabled, because new_slab() will re-enable interrupts in contexts that
>> allow blocking.
>>
>> The patch itself will thus increase overhead a bit due to disabled migration
>> and increased disabling/enabling of irqs in kmem_cache_alloc_bulk(), but that
>> will be gradually improved in the following patches.
>>
>> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>
> Why did you use migrate_disable instead of preempt_disable? There is a
> fairly large comment in include/linux/preempt.h on why migrate_disable
> is undesirable, so new users are likely to be put under the microscope
> once Thomas or Peter notice it.

I understood it as: while undesirable, there's nothing better for now.

> I think you are using it so that an allocation request can be preempted by
> a higher-priority task, but given that the code was disabling interrupts,
> there was already some preemption latency.

Yes, and the disabled-interrupt sections will get progressively "smaller"
over the course of the series.

> However, migrate_disable
> is more expensive than preempt_disable (function call versus a simple
> increment).

That's true. I think perhaps it could be reimplemented so that on !PREEMPT_RT,
and with no lockdep/preempt/whatnot debugging, it would just translate to an
inline migrate_disable?
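To make the trade-off concrete, here's a rough sketch of the caller side
(illustrative only, not the actual patch -- example_alloc() is a made-up
name, and ___slab_alloc()'s real signature lives in mm/slub.c):

	static void *example_alloc(struct kmem_cache *s, gfp_t gfpflags)
	{
		struct kmem_cache_cpu *c;
		void *object;

		/*
		 * Pin the task to this CPU so the kmem_cache_cpu looked
		 * up below stays the right one.  Unlike preempt_disable()
		 * (an inlined counter increment), migrate_disable() is a
		 * function call, but a higher-priority task can still
		 * preempt this one.
		 */
		migrate_disable();
		c = this_cpu_ptr(s->cpu_slab);
		object = ___slab_alloc(s, gfpflags, NUMA_NO_NODE, _RET_IP_, c);
		migrate_enable();

		return object;
	}

With preempt_disable() in place of migrate_disable(), the pointer would be
protected just as well and more cheaply, but the whole section would also
become non-preemptible.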
> On that basis, I'd recommend starting with preempt_disable
> and only using migrate_disable if necessary.

That's certainly possible, and you're right that it would be a less disruptive
step. My thinking was that on !PREEMPT_RT it's effectively just a
preempt_disable (although currently with the call overhead), while PREEMPT_RT
would welcome the absence of preempt_disable. I'd be interested to hear the RT
guys' opinion here.

> Bonus points for adding a comment where ___slab_alloc disables IRQs to
> clarify what is protected -- I assume it's protecting kmem_cache_cpu
> from being modified from interrupt context. If so, it's potentially a
> local_lock candidate.

Yeah, that gets cleared up later in the series :)
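For completeness, the local_lock shape you're hinting at would look roughly
like this (a sketch of where the series is heading, not this patch; the field
placement is illustrative):

	struct kmem_cache_cpu {
		local_lock_t lock;	/* protects the fields below */
		void **freelist;
		unsigned long tid;
		struct page *page;
	};

	/* slow path, instead of a bare local_irq_save(): */
	local_lock_irqsave(&s->cpu_slab->lock, flags);
	/* ... manipulate this CPU's freelist/page ... */
	local_unlock_irqrestore(&s->cpu_slab->lock, flags);

On !PREEMPT_RT that maps to local_irq_save() plus lockdep annotations; on
PREEMPT_RT it becomes a per-CPU spinlock, so the section stays preemptible.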