From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 4 Jan 2023 11:45:02 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka
Cc: Linux-MM, Andrew Morton, Michal Hocko, NeilBrown, Thierry Reding,
	Matthew Wilcox, LKML
Subject: Re: [PATCH 3/6] mm/page_alloc: Explicitly record high-order atomic allocations in alloc_flags
Message-ID: <20230104114502.j4hzzjohxk7bdkcj@techsingularity.net>
References: <20221129151701.23261-1-mgorman@techsingularity.net>
 <20221129151701.23261-4-mgorman@techsingularity.net>
 <915a5034-53e6-9464-3fc7-4d1b5a0aa26d@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <915a5034-53e6-9464-3fc7-4d1b5a0aa26d@suse.cz>

First off, sorry for the long delay getting back to you. I was sick for
a few weeks and am still catching up. I'm still not at 100%.

On Thu, Dec 08, 2022 at 05:51:11PM +0100, Vlastimil Babka wrote:
> On 11/29/22 16:16, Mel Gorman wrote:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index da746e9eb2cf..e2b65767dda0 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3710,7 +3710,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
> >  		 * reserved for high-order atomic allocation, so order-0
> >  		 * request should skip it.
> >  		 */
> > -		if (order > 0 && alloc_flags & ALLOC_HARDER)
> > +		if (alloc_flags & ALLOC_HIGHATOMIC)
> >  			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
> >  		if (!page) {
> >  			page = __rmqueue(zone, order, migratetype, alloc_flags);
> > @@ -4028,8 +4028,10 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
> >  			return true;
> >  		}
> >  #endif
> > -		if (alloc_harder && !free_area_empty(area, MIGRATE_HIGHATOMIC))
> > +		if ((alloc_flags & ALLOC_HIGHATOMIC) &&
> > +		    !free_area_empty(area, MIGRATE_HIGHATOMIC)) {
> >  			return true;
> 
> alloc_harder is defined as
> (alloc_flags & (ALLOC_HARDER|ALLOC_OOM));
> 
> AFAICS this means we no longer allow ALLOC_OOM to use the highatomic
> reserve. Isn't that a risk?
> 

Yes, it is. I intend to apply the patch below on top. I didn't alter the
first check for ALLOC_HIGHATOMIC because I wanted OOM handling to use
the high-order reserves only when there is no other option. While this
is a change in behaviour, it should be a harmless one. I'll add a note
in the changelog.
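To make the before-and-after semantics of the watermark check concrete,
here is a minimal standalone sketch. The flag values below are invented
purely for illustration and are not the kernel's definitions:

/*
 * Illustrative only: a self-contained toy model of which contexts may
 * dip into the MIGRATE_HIGHATOMIC reserve. The flag values are made
 * up for this example and do not match the kernel's.
 */
#include <stdio.h>

#define ALLOC_HARDER		0x1
#define ALLOC_OOM		0x2
#define ALLOC_HIGHATOMIC	0x4

/* Before the series: alloc_harder admitted both HARDER and OOM. */
static int may_use_reserve_old(unsigned int alloc_flags)
{
	return !!(alloc_flags & (ALLOC_HARDER | ALLOC_OOM));
}

/* As posted: only explicitly-flagged high-order atomic allocations. */
static int may_use_reserve_posted(unsigned int alloc_flags)
{
	return !!(alloc_flags & ALLOC_HIGHATOMIC);
}

/* With the fix below: OOM handling regains access as a last resort. */
static int may_use_reserve_fixed(unsigned int alloc_flags)
{
	return !!(alloc_flags & (ALLOC_HIGHATOMIC | ALLOC_OOM));
}

int main(void)
{
	unsigned int oom = ALLOC_OOM;

	/*
	 * Prints "old=1 posted=0 fixed=1": the series as posted locked
	 * OOM contexts out of the reserve; the fix restores access.
	 */
	printf("old=%d posted=%d fixed=%d\n",
	       may_use_reserve_old(oom),
	       may_use_reserve_posted(oom),
	       may_use_reserve_fixed(oom));
	return 0;
}

The fix itself: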
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 50fc1e7cb154..0ef4f3236a5a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3710,6 +3710,16 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 		if (!page) {
 			page = __rmqueue(zone, order, migratetype, alloc_flags);
+
+			/*
+			 * If the allocation fails, allow OOM handling access
+			 * to HIGHATOMIC reserves as failing now is worse than
+			 * failing a high-order atomic allocation in the
+			 * future.
+			 */
+			if (!page && (alloc_flags & ALLOC_OOM))
+				page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
+
 			if (!page) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return NULL;
@@ -4023,7 +4033,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 			return true;
 		}
 #endif
-		if ((alloc_flags & ALLOC_HIGHATOMIC) &&
+		if ((alloc_flags & (ALLOC_HIGHATOMIC|ALLOC_OOM)) &&
 		    !free_area_empty(area, MIGRATE_HIGHATOMIC)) {
 			return true;
 		}

-- 
Mel Gorman
SUSE Labs