From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 21 Apr 2023 15:25:33 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org, Kaiyang Zhao, Vlastimil Babka, David Rientjes, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [RFC PATCH 11/26] mm: page_alloc: introduce MIGRATE_FREE
Message-ID: <20230421142533.nm44wkmh3wkudlqn@techsingularity.net>
References: <20230418191313.268131-1-hannes@cmpxchg.org> <20230418191313.268131-12-hannes@cmpxchg.org>
In-Reply-To: <20230418191313.268131-12-hannes@cmpxchg.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
On Tue, Apr 18, 2023 at 03:12:58PM -0400, Johannes Weiner wrote:
> To cut down on type mixing, put empty pageblocks on separate freelists
> and make them the first fallback preference before stealing space from
> incompatible blocks.
>
> The neutral block designation will also be handy in subsequent patches
> that: simplify compaction; add per-mt freelist counts and make
> compaction_suitable() more precise; and ultimately make pageblocks the
> basis of free memory management.
>

This patch is a line in the sand for the series.
Patches 1-10 can stand alone with supporting data because this is the
first major change that has a material impact on fragmentation avoidance
and its overhead. Maybe there is something in the later patches that makes
the need for this patch more obvious, but putting the empty pageblocks on
separate freelists is not that helpful in itself.

The main problem is that __rmqueue() starts with __rmqueue_smallest().
For huge pages that is probably fine, because it searches for free
pageblocks first, but it is not fine for SLUB high-order allocations,
because __rmqueue_smallest() for orders < pageblock_order encourages
mixing. Obviously it would also not be fine for contiguous page
allocations for the page cache or anything else that is planned.

If nothing else, this patch highlights that fragmentation avoidance was
originally focused on huge pages, which was fine at the time but is not
any longer. The need for a MIGRATE_FREE type could potentially be avoided
by having __rmqueue() start with __rmqueue_smallest(order ==
pageblock_order) to encourage full block usage first, before mixing.

-- 
Mel Gorman
SUSE Labs