Date: Tue, 25 May 2021 16:10:08 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim,
	Sebastian Andrzej Siewior, Thomas Gleixner,
	Jesper Dangaard Brouer, Peter Zijlstra, Jann Horn
Subject: Re: [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc()
Message-ID: <20210525151008.GV30378@techsingularity.net>
References: <20210524233946.20352-1-vbabka@suse.cz>
 <20210524233946.20352-10-vbabka@suse.cz>
 <20210525123536.GR30378@techsingularity.net>

On Tue, May 25, 2021 at 02:47:10PM +0200, Vlastimil Babka wrote:
> On 5/25/21 2:35 PM, Mel Gorman wrote:
> > On Tue, May 25, 2021 at 01:39:29AM +0200, Vlastimil Babka wrote:
> >> Currently __slab_alloc() disables irqs around the whole ___slab_alloc(). This
> >> includes cases where this is not needed, such as when the allocation ends up in
> >> the page allocator and has to awkwardly enable irqs back based on gfp flags.
> >> Also the whole kmem_cache_alloc_bulk() is executed with irqs disabled even when
> >> it hits the __slab_alloc() slow path, and long periods with disabled interrupts
> >> are undesirable.
> >>
> >> As a first step towards reducing irq disabled periods, move irq handling into
> >> ___slab_alloc(). Callers will instead prevent the s->cpu_slab percpu pointer
> >> from becoming invalid via migrate_disable(). This does not protect against
> >> access preemption, which is still done by disabled irq for most of
> >> ___slab_alloc(). As the small immediate benefit, slab_out_of_memory() call from
> >> ___slab_alloc() is now done with irqs enabled.
> >>
> >> kmem_cache_alloc_bulk() disables irqs for its fastpath and then re-enables them
> >> before calling ___slab_alloc(), which then disables them at its discretion. The
> >> whole kmem_cache_alloc_bulk() operation also disables cpu migration.
> >>
> >> When ___slab_alloc() calls new_slab() to allocate a new page, re-enable
> >> preemption, because new_slab() will re-enable interrupts in contexts that allow
> >> blocking.
> >>
> >> The patch itself will thus increase overhead a bit due to disabled migration
> >> and increased disabling/enabling irqs in kmem_cache_alloc_bulk(), but that will
> >> be gradually improved in the following patches.
> >>
> >> Signed-off-by: Vlastimil Babka
> >
> > Why did you use migrate_disable instead of preempt_disable? There is a
> > fairly large comment in include/linux/preempt.h on why migrate_disable
> > is undesirable so new users are likely to be put under the microscope
> > once Thomas or Peter notice it.
>
> I understood it as while undesirable, there's nothing better for now.
>

I think the "better" option is to reduce preempt_disable sections as
much as possible but you probably have limited options there. It might
be easier to justify if the sections you were protecting need to go to
sleep like what mm/highmem.c needs but that does not appear to be the
case.
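To spell out the distinction I'm drawing, something like the following
(a contrived sketch, not code from the patch or from mm/highmem.c;
demo_lock and demo_counter are made-up names):

#include <linux/preempt.h>
#include <linux/percpu.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);			/* made-up name */
static DEFINE_PER_CPU(int, demo_counter);	/* made-up name */

/* migrate_disable(): pinned to this CPU, but still preemptible */
static void pinned_section(void)
{
	migrate_disable();
	/*
	 * this_cpu accesses keep hitting the same CPU's copy, but
	 * other tasks and interrupts on this CPU can still run and
	 * touch it -- pinning is not exclusion.
	 */
	this_cpu_inc(demo_counter);
	mutex_lock(&demo_lock);		/* sleeping is legal here */
	mutex_unlock(&demo_lock);
	migrate_enable();
}

/* preempt_disable(): atomic wrt other tasks on this CPU, must not sleep */
static void atomic_section(void)
{
	preempt_disable();
	this_cpu_inc(demo_counter);
	/* a mutex_lock() here would be sleeping-while-atomic */
	preempt_enable();
}

mm/highmem.c needs the first shape because the protected section may
take sleeping locks; nothing in ___slab_alloc() seems to need that.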
> > I think you are using it so that an allocation request can be preempted by
> > a higher priority task but given that the code was disabling interrupts,
> > there was already some preemption latency.
>
> Yes, and the disabled interrupts will get progressively "smaller" in the series.
>
> > However, migrate_disable
> > is more expensive than preempt_disable (function call versus a simple
> > increment).
>
> That's true, I think perhaps it could be reimplemented so that on !PREEMPT_RT
> and with no lockdep/preempt/whatnot debugging it could just translate to an
> inline migrate_disable?
>

It might be a bit too large for that.

> > On that basis, I'd recommend starting with preempt_disable
> > and only using migrate_disable if necessary.
>
> That's certainly possible and you're right it would be a less disruptive step.
> My thinking was that on !PREEMPT_RT it's actually just preempt_disable (however
> with the call overhead currently), but PREEMPT_RT would welcome the lack of
> preempt disable. I'd be interested to hear RT guys opinion here.
>

It does more than preempt_disable even on !PREEMPT_RT. It's only on
!SMP that it becomes inline. While it might allow a higher priority
task to preempt, PREEMPT_RT is also not the common case and I think
it's better to use the lighter-weight option for the majority of
configurations.
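Concretely, the two shapes being weighed look roughly like this
(hypothetical wrapper names; the real code keeps this inside
__slab_alloc() in mm/slub.c, so treat it as a sketch rather than the
patch):

/* Option A -- what the RFC does: pin to the CPU so s->cpu_slab stays
 * valid; preemption remains possible, which is what PREEMPT_RT wants,
 * but migrate_disable() is an out-of-line call on SMP.
 */
static void *slab_alloc_pin_migration(struct kmem_cache *s, gfp_t gfpflags,
				      int node, unsigned long addr)
{
	void *p;

	migrate_disable();
	p = ___slab_alloc(s, gfpflags, node, addr, this_cpu_ptr(s->cpu_slab));
	migrate_enable();
	return p;
}

/* Option B -- the suggestion above: preempt_disable() is a simple
 * preempt_count increment on most configs, but the section must not
 * block, so ___slab_alloc() would still have to drop it around
 * new_slab(), as the changelog already describes for migration.
 */
static void *slab_alloc_pin_preemption(struct kmem_cache *s, gfp_t gfpflags,
				       int node, unsigned long addr)
{
	void *p;

	preempt_disable();
	p = ___slab_alloc(s, gfpflags, node, addr, this_cpu_ptr(s->cpu_slab));
	preempt_enable();
	return p;
}

Either way, the irq-disabled region inside ___slab_alloc() still
provides the actual exclusion; the wrapper only keeps the percpu
pointer stable.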
> > Bonus points for adding a comment where ___slab_alloc disables IRQs to
> > clarify what is protected -- I assume it's protecting kmem_cache_cpu
> > from being modified from interrupt context. If so, it's potentially a
> > local_lock candidate.
>
> Yeah that gets cleared up later :)
>

I saw that after glancing through the rest of the series. While I
didn't spot anything major, I'd also like to hear from Peter or Thomas
on whether migrate_disable or preempt_disable would be preferred for
mm/slub.c. The preempt-rt tree does not help answer the question given
that the slub changes there are mostly about deferring some work until
IRQs are enabled.

-- 
Mel Gorman
SUSE Labs