From: Peter Zijlstra
To: "Paul E. McKenney"
Date: Fri, 14 Aug 2020 23:52:06 +0200
McKenney" Cc: Thomas Gleixner , Michal Hocko , Uladzislau Rezki , LKML , RCU , linux-mm@kvack.org, Andrew Morton , Vlastimil Babka , Matthew Wilcox , "Theodore Y . Ts'o" , Joel Fernandes , Sebastian Andrzej Siewior , Oleksiy Avramchenko Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag Message-ID: <20200814215206.GL3982@worktop.programming.kicks-ass.net> References: <20200813185257.GF4295@paulmck-ThinkPad-P72> <20200813220619.GA2674@hirez.programming.kicks-ass.net> <875z9m3xo7.fsf@nanos.tec.linutronix.de> <20200814083037.GD3982@worktop.programming.kicks-ass.net> <20200814141425.GM4295@paulmck-ThinkPad-P72> <20200814161106.GA13853@paulmck-ThinkPad-P72> <20200814174924.GI3982@worktop.programming.kicks-ass.net> <20200814180224.GQ4295@paulmck-ThinkPad-P72> <875z9lkoo4.fsf@nanos.tec.linutronix.de> <20200814204140.GT4295@paulmck-ThinkPad-P72> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20200814204140.GT4295@paulmck-ThinkPad-P72> User-Agent: Mutt/1.10.1 (2018-07-13) X-Rspamd-Queue-Id: BE28118041768 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam01 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Fri, Aug 14, 2020 at 01:41:40PM -0700, Paul E. McKenney wrote: > > And that enforces the GFP_NOLOCK allocation mode or some other solution > > unless you make a new rule that calling call_rcu() is forbidden while > > holding zone lock or any other lock which might be nested inside the > > GFP_NOWAIT zone::lock held region. > > Again, you are correct. Maybe the forecasted weekend heat will cause > my brain to hallucinate a better solution, but in the meantime, the > GFP_NOLOCK approach looks good from this end. So I hate __GFP_NO_LOCKS for a whole number of reasons: - it should be called __GFP_LOCKLESS if anything - it sprinkles a bunch of ugly branches around the allocator fast path - it only works for order==0 Combined I really odn't think this should be a GFP flag. How about a special purpose allocation function, something like so.. --- diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 901a21f61d68..cdec9c99fba7 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -4875,6 +4875,47 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid, } EXPORT_SYMBOL(__alloc_pages_nodemask); +struct page *__rmqueue_lockless(struct zone *zone, struct per_cpu_pages *pcp) +{ + struct list_head *list; + struct page *page; + int migratetype; + + for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++) { + list = &pcp->list[migratetype]; + page = list_first_entry_or_null(list, struct page, lru); + if (page && check_new_pcp(page)) { + list_del(&page->lru); + pcp->count--; + return page; + } + } + + return NULL; +} + +struct page *__alloc_page_lockless(void) +{ + struct zonelist *zonelist = node_zonelist(numa_node_id(), GFP_KERNEL); + struct per_cpu_pages *pcp; + struct page *page = NULL; + unsigned long flags; + struct zoneref *z; + struct zone *zone; + + for_each_zone_zonelist(zone, z, zonelist, ZONE_NORMAL) { + local_irq_save(flags); + pcp = &this_cpu_ptr(zone->pageset)->pcp; + page = __rmqueue_lockless(zone, pcp); + local_irq_restore(flags); + + if (page) + break; + } + + return page; +} + /* * Common helper functions. Never use with __GFP_HIGHMEM because the returned * address cannot represent highmem pages. Use alloc_pages and then kmap if