Date: Thu, 3 Apr 2025 17:18:20 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Carlos Song
Cc: baolin.wang@linux.alibaba.com, ying.huang@intel.com, vbabka@suse.cz,
	david@redhat.com, mgorman@techsingularity.net, ziy@nvidia.com,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Brendan Jackman
Subject: Re: Ask help about this patch c0cd6f557b90 "mm: page_alloc: fix
 freelist movement during block conversion"
Message-ID: <20250403211820.GA447372@cmpxchg.org>
References: <20250402194425.GB198651@cmpxchg.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
Hi Carlos,

On Thu, Apr 03, 2025 at 09:23:55AM +0000, Carlos Song wrote:
> Thank you for your quick ack and help! After applied this patch, it improved well.
> I apply this patch at this HEAD:
> f0a16f536332 (tag: next-20250403, origin/master, origin/HEAD) Add linux-next specific files for 20250403
>
> and do 10 times same test like what I have done before in IMX7D:
> The IRQ off tracer shows the irq_off time 7~10ms. Is this what you expected?

This is great, thank you for testing it!

> # irqsoff latency trace v1.1.5 on 6.14.0-next-20250403-00003-gf9e8473ee91a
> # --------------------------------------------------------------------
> # latency: 8111 us, #4323/4323, CPU#0 | (M:NONE VP:0, KP:0, SP:0 HP:0 #P:2)
> #    -----------------
> #    | task: dd-820 (uid:0 nice:0 policy:0 rt_prio:0)
> #    -----------------
> #  => started at: __rmqueue_pcplist
> #  => ended at:   _raw_spin_unlock_irqrestore
> #
> #
> #                  _------=> CPU#
> #                 / _-----=> irqs-off/BH-disabled
> #                | / _----=> need-resched
> #                || / _---=> hardirq/softirq
> #                ||| / _--=> preempt-depth
> #                |||| / _-=> migrate-disable
> #                ||||| /     delay
> #  cmd     pid   |||||| time  |   caller
> #     \   /      ||||||  \    |    /
>       dd-820     0d....    1us : __rmqueue_pcplist
>       dd-820     0d....    3us : _raw_spin_trylock <-__rmqueue_pcplist
>       dd-820     0d....    7us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   11us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   13us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   15us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   17us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   19us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   21us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   23us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d....   25us : __mod_zone_page_state <-__rmqueue_pcplist
> ...
>       dd-820     0d.... 1326us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1328us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1330us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1332us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1334us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1336us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1337us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1339us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1341us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1343us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1345us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1347us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1349us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1351us : __mod_zone_page_state <-__rmqueue_pcplist
> ...
>       dd-820     0d.... 1660us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1662us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1664us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 1666us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1668us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1670us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1672us+: try_to_claim_block <-__rmqueue_pcplist
>       dd-820     0d.... 1727us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1729us+: try_to_claim_block <-__rmqueue_pcplist
>       dd-820     0d.... 1806us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1807us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1809us+: try_to_claim_block <-__rmqueue_pcplist
>       dd-820     0d.... 1854us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1856us+: try_to_claim_block <-__rmqueue_pcplist
>       dd-820     0d.... 1893us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1895us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1896us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1898us+: try_to_claim_block <-__rmqueue_pcplist
>       dd-820     0d.... 1954us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 1956us+: try_to_claim_block <-__rmqueue_pcplist
>       dd-820     0d.... 2012us : find_suitable_fallback <-__rmqueue_pcplist
> ...
>       dd-820     0d.... 8077us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8079us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 8081us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8083us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 8084us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8086us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8088us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8089us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8091us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8093us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 8095us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8097us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 8098us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8100us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8102us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 8104us : find_suitable_fallback <-__rmqueue_pcplist
>       dd-820     0d.... 8105us : __mod_zone_page_state <-__rmqueue_pcplist
>       dd-820     0d.... 8107us : _raw_spin_unlock_irqrestore <-__rmqueue_pcplist
>       dd-820     0d.... 8110us : _raw_spin_unlock_irqrestore
>       dd-820     0d.... 8113us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
>       dd-820     0d.... 8156us :

This pattern looks much better. Once it fails to claim blocks, it goes
straight to single-page stealing.

Another observation is that find_suitable_fallback() is hot. Looking
closer at that function, I think there are a few optimizations we can
do. Attaching another patch below, to go on top of the previous one.

Carlos, would you be able to give this a spin? Thanks!
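To make the intent easier to see at a glance, here is the resulting
contract in a condensed, standalone form. This is only a toy sketch of
the patch below, compilable in userspace: the *_stub helpers, their
thresholds, and the constant values are invented for illustration and
are not the kernel's.

#include <stdbool.h>
#include <stdio.h>

#define NR_PAGE_ORDERS 11	/* illustrative value, not the kernel's */

/* Return values mirroring the patch below */
#define NO_FALLBACK   -1	/* no fallback pages at this order */
#define NOT_CLAIMABLE -2	/* order/migratetype can't whole-block claim */

/* Stub: pretend whole-block claiming needs order >= 3 (made up) */
static bool should_try_claim_block_stub(unsigned int order)
{
	return order >= 3;
}

/* Stub: pretend a fallback list is non-empty only at order 5 (made up) */
static int first_nonempty_fallback_stub(unsigned int order)
{
	return order == 5 ? 1 : NO_FALLBACK;
}

/* Shape of the new helper: the claimability test is a loop invariant,
 * hoisted out of the fallback scan and reported as a distinct value. */
static int find_suitable_fallback_sketch(unsigned int order, bool claimable)
{
	if (claimable && !should_try_claim_block_stub(order))
		return NOT_CLAIMABLE;
	return first_nonempty_fallback_stub(order);
}

int main(void)
{
	/* __rmqueue_claim()-style scan: high orders down, claimable=true */
	for (int order = NR_PAGE_ORDERS - 1; order >= 0; order--) {
		int mt = find_suitable_fallback_sketch((unsigned int)order, true);

		if (mt == NO_FALLBACK)
			continue;	/* no block in that order, keep going */
		if (mt == NOT_CLAIMABLE)
			break;		/* lower orders won't claim either, abort */

		printf("claim order %d from fallback migratetype %d\n", order, mt);
		break;
	}
	return 0;
}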
---
From 621b1842b9fbbb26848296a5feb4daf5b038ba33 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <hannes@cmpxchg.org>
Date: Thu, 3 Apr 2025 16:44:32 -0400
Subject: [PATCH] mm: page_alloc: tighten up find_suitable_fallback()

find_suitable_fallback() is not as efficient as it could be:

1. should_try_claim_block() is a loop invariant. There is no point in
   checking fallback areas if the caller is interested in claimable
   blocks but the order and the migratetype don't allow for that.

2. __rmqueue_steal() doesn't care about claimability, so it shouldn't
   have to run those tests.

Different callers want different things from this helper:

1. __compact_finished() scans orders up until it finds a claimable block
2. __rmqueue_claim() scans orders down as long as blocks are claimable
3. __rmqueue_steal() doesn't care about claimability at all

Move should_try_claim_block() out of the loop. Only test it for the
two callers who care in the first place. Distinguish "no blocks" from
"order + mt are not claimable" in the return value; __rmqueue_claim()
can stop once order becomes unclaimable, __compact_finished() can keep
advancing until order becomes claimable.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/compaction.c |  4 +---
 mm/internal.h   |  2 +-
 mm/page_alloc.c | 31 +++++++++++++------------------
 3 files changed, 15 insertions(+), 22 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 139f00c0308a..7462a02802a5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2348,7 +2348,6 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 	ret = COMPACT_NO_SUITABLE_PAGE;
 	for (order = cc->order; order < NR_PAGE_ORDERS; order++) {
 		struct free_area *area = &cc->zone->free_area[order];
-		bool claim_block;
 
 		/* Job done if page is free of the right migratetype */
 		if (!free_area_empty(area, migratetype))
@@ -2364,8 +2363,7 @@ static enum compact_result __compact_finished(struct compact_control *cc)
 		 * Job done if allocation would steal freepages from
 		 * other migratetype buddy lists.
 		 */
-		if (find_suitable_fallback(area, order, migratetype,
-					   true, &claim_block) != -1)
+		if (find_suitable_fallback(area, order, migratetype, true) >= 0)
 			/*
 			 * Movable pages are OK in any pageblock. If we are
 			 * stealing for a non-movable allocation, make sure
diff --git a/mm/internal.h b/mm/internal.h
index 50c2f590b2d0..55384b9971c3 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -915,7 +915,7 @@ static inline void init_cma_pageblock(struct page *page)
 
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool claim_only, bool *claim_block);
+			int migratetype, bool claimable);
 
 static inline bool free_area_empty(struct free_area *area, int migratetype)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 03b0d45ed45a..1522e3a29b16 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2077,31 +2077,25 @@ static bool should_try_claim_block(unsigned int order, int start_mt)
 
 /*
  * Check whether there is a suitable fallback freepage with requested order.
- * Sets *claim_block to instruct the caller whether it should convert a whole
- * pageblock to the returned migratetype.
- * If only_claim is true, this function returns fallback_mt only if
+ * If claimable is true, this function returns fallback_mt only if
  * we would do this whole-block claiming. This would help to reduce
  * fragmentation due to mixed migratetype pages in one pageblock.
  */
 int find_suitable_fallback(struct free_area *area, unsigned int order,
-			int migratetype, bool only_claim, bool *claim_block)
+			int migratetype, bool claimable)
 {
 	int i;
-	int fallback_mt;
+
+	if (claimable && !should_try_claim_block(order, migratetype))
+		return -2;
 
 	if (area->nr_free == 0)
 		return -1;
 
-	*claim_block = false;
 	for (i = 0; i < MIGRATE_PCPTYPES - 1 ; i++) {
-		fallback_mt = fallbacks[migratetype][i];
-		if (free_area_empty(area, fallback_mt))
-			continue;
+		int fallback_mt = fallbacks[migratetype][i];
 
-		if (should_try_claim_block(order, migratetype))
-			*claim_block = true;
-
-		if (*claim_block || !only_claim)
+		if (!free_area_empty(area, fallback_mt))
 			return fallback_mt;
 	}
 
@@ -2206,7 +2200,6 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 	int min_order = order;
 	struct page *page;
 	int fallback_mt;
-	bool claim_block;
 
 	/*
 	 * Do not steal pages from freelists belonging to other pageblocks
@@ -2225,11 +2218,14 @@ __rmqueue_claim(struct zone *zone, int order, int start_migratetype,
 	     --current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &claim_block);
+				start_migratetype, true);
+
+		/* No block in that order */
 		if (fallback_mt == -1)
 			continue;
 
-		if (!claim_block)
+		/* Advanced into orders too low to claim, abort */
+		if (fallback_mt == -2)
 			break;
 
 		page = get_page_from_free_area(area, fallback_mt);
@@ -2254,12 +2250,11 @@ __rmqueue_steal(struct zone *zone, int order, int start_migratetype)
 	int current_order;
 	struct page *page;
 	int fallback_mt;
-	bool claim_block;
 
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, false, &claim_block);
+				start_migratetype, false);
 		if (fallback_mt == -1)
 			continue;
 
-- 
2.49.0
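P.S. For completeness, the compaction side scans in the opposite
direction after this change. Another toy userspace sketch, again with
made-up stubs and thresholds rather than kernel code, of how a
__compact_finished()-style loop consumes the new return value:

#include <stdbool.h>
#include <stdio.h>

#define NR_PAGE_ORDERS 11	/* illustrative value, not the kernel's */

/* Same toy helper shape as the earlier sketch: -1 means no fallback
 * pages at this order, -2 means order/migratetype can't whole-block
 * claim. The threshold and the "order 6" freelist are made up. */
static int find_suitable_fallback_sketch(unsigned int order, bool claimable)
{
	if (claimable && order < 3)
		return -2;
	return order == 6 ? 1 : -1;
}

int main(void)
{
	/* __compact_finished()-style scan: from the requested order up,
	 * done at the first order where a whole-block claim would work.
	 * The ">= 0" test folds both failure modes (-1 and -2) together,
	 * so compaction keeps advancing until an order is both claimable
	 * and has fallback pages. */
	for (unsigned int order = 2; order < NR_PAGE_ORDERS; order++) {
		if (find_suitable_fallback_sketch(order, true) >= 0) {
			printf("compaction can finish at order %u\n", order);
			return 0;
		}
	}
	printf("no suitable fallback yet, keep compacting\n");
	return 0;
}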