Date: Mon, 25 Jul 2022 09:42:04 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Jaewon Kim
Cc: minchan@kernel.org, akpm@linux-foundation.org, bhe@redhat.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mhocko@kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, gh21.hong@samsung.com, ytk.lee@samsung.com,
	jaewon31.kim@gmail.com
Subject: Re: [PATCH] page_alloc: fix invalid watermark check on a negative value
Message-ID: <20220725084204.52kdi6jyjhytzudm@techsingularity.net>
References: <20220725012843.17115-1-jaewon31.kim@samsung.com>
In-Reply-To: <20220725012843.17115-1-jaewon31.kim@samsung.com>
On Mon, Jul 25, 2022 at 10:28:43AM +0900, Jaewon Kim wrote:
> There was a report that a task is waiting at
> throttle_direct_reclaim. The pgscan_direct_throttle count in vmstat was
> increasing.
>
> This is a bug where zone_watermark_fast returns true even when the free
> is very low. The commit f27ce0e14088 ("page_alloc: consider highatomic
> reserve in watermark fast") changed the watermark fast path to consider
> the highatomic reserve. But it did not handle the negative value case,
> which can happen when the reserved_highatomic pageblock is bigger than
> the actual free.
>
> If the watermark is considered ok for the negative value, allocating
> contexts for order-0 will consume all free pages without direct reclaim,
> and finally free pages may become depleted except for the highatomic
> free.
>
> Then allocating contexts may fall into throttle_direct_reclaim. This
> symptom may easily happen in a system where the wmark min is low and
> other reclaimers like kswapd do not make free pages quickly.
>
> To handle the negative value, get the value as a long type like
> __zone_watermark_ok does.
>
> Reported-by: GyeongHwan Hong
> Signed-off-by: Jaewon Kim

Add

Fixes: f27ce0e14088 ("page_alloc: consider highatomic reserve in watermark fast")

The fix is fine as-is, but it's not immediately obvious why this can wrap
negative, as that depends on an implementation detail of
__zone_watermark_unusable_free. The variable copy just to change the sign
could get accidentally "fixed" later as a micro-optimisation (the same
applies if the type of mark was changed), so maybe leave a comment like

	/* unusable may over-estimate high-atomic reserves */

Otherwise

Acked-by: Mel Gorman <mgorman@techsingularity.net>

The problem could also be made explicit with something like the diff
below. I know you are copying the logic of __zone_watermark_ok, but I
don't think min can go negative there.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 934d1b5a5449..f8f50a2aa43e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4048,11 +4048,15 @@ static inline bool zone_watermark_fast(struct zone *z, unsigned int order,
 	 * need to be calculated.
 	 */
 	if (!order) {
-		long fast_free;
+		long usable_free;
+		long reserved;
 
-		fast_free = free_pages;
-		fast_free -= __zone_watermark_unusable_free(z, 0, alloc_flags);
-		if (fast_free > mark + z->lowmem_reserve[highest_zoneidx])
+		usable_free = free_pages;
+		reserved = __zone_watermark_unusable_free(z, 0, alloc_flags);
+
+		/* reserved may over estimate high-atomic reserves. */
+		usable_free -= min(usable_free, reserved);
+		if (usable_free > mark + z->lowmem_reserve[highest_zoneidx])
 			return true;
 	}
 

-- 
Mel Gorman
SUSE Labs