From: Vlastimil Babka <vbabka@suse.cz>
Date: Tue, 26 Mar 2024 12:28:37 +0100
Subject: Re: [PATCH 06/10] mm: page_alloc: fix freelist movement during block conversion
To: Johannes Weiner, Andrew Morton
Cc: Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20240320180429.678181-7-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org> <20240320180429.678181-7-hannes@cmpxchg.org>

On 3/20/24 7:02 PM, Johannes Weiner wrote:
> Currently, page block type conversion during fallbacks, atomic
> reservations and isolation can strand various amounts of free pages on
> incorrect freelists.
> 
> For example, fallback stealing moves free pages in the block to the
> new type's freelists, but then may not actually claim the block for
> that type if there aren't enough compatible pages already allocated.
> 
> In all cases, free page moving might fail if the block straddles more
> than one zone, in which case no free pages are moved at all, but the
> block type is changed anyway.
> 
> This is detrimental to type hygiene on the freelists. It encourages
> incompatible page mixing down the line (ask for one type, get another)
> and thus contributes to long-term fragmentation.
> 
> Split the process into a proper transaction: check first if conversion
> will happen, then try to move the free pages, and only if that was
> successful convert the block to the new type.
> 
> Tested-by: "Huang, Ying"
> Signed-off-by: Johannes Weiner

Reviewed-by: Vlastimil Babka

Nit below:

> @@ -1743,33 +1770,37 @@ static inline bool boost_watermark(struct zone *zone)
>  }
>  
>  /*
> - * This function implements actual steal behaviour. If order is large enough,
> - * we can steal whole pageblock. If not, we first move freepages in this
> - * pageblock to our migratetype and determine how many already-allocated pages
> - * are there in the pageblock with a compatible migratetype. If at least half
> - * of pages are free or compatible, we can change migratetype of the pageblock
> - * itself, so pages freed in the future will be put on the correct free list.
> + * This function implements actual steal behaviour. If order is large enough, we
> + * can claim the whole pageblock for the requested migratetype. If not, we check
> + * the pageblock for constituent pages; if at least half of the pages are free
> + * or compatible, we can still claim the whole block, so pages freed in the
> + * future will be put on the correct free list. Otherwise, we isolate exactly
> + * the order we need from the fallback block and leave its migratetype alone.
>   */
> -static void steal_suitable_fallback(struct zone *zone, struct page *page,
> -		unsigned int alloc_flags, int start_type, bool whole_block)
> +static struct page *
> +steal_suitable_fallback(struct zone *zone, struct page *page,
> +			int current_order, int order, int start_type,
> +			unsigned int alloc_flags, bool whole_block)
>  {
> -	unsigned int current_order = buddy_order(page);
>  	int free_pages, movable_pages, alike_pages;
> -	int old_block_type;
> +	unsigned long start_pfn, end_pfn;
> +	int block_type;
>  
> -	old_block_type = get_pageblock_migratetype(page);
> +	block_type = get_pageblock_migratetype(page);
>  
>  	/*
>  	 * This can happen due to races and we want to prevent broken
>  	 * highatomic accounting.
>  	 */
> -	if (is_migrate_highatomic(old_block_type))
> +	if (is_migrate_highatomic(block_type))
>  		goto single_page;
>  
>  	/* Take ownership for orders >= pageblock_order */
>  	if (current_order >= pageblock_order) {
> +		del_page_from_free_list(page, zone, current_order);
>  		change_pageblock_range(page, current_order, start_type);
> -		goto single_page;
> +		expand(zone, page, order, current_order, start_type);
> +		return page;

Is the exact order here important (AFAIK it shouldn't be?), or could we just do
change_pageblock_range(); block_type = start_type; goto single_page;
here instead? (rough sketch at the end of this mail)

>  	}
>  
>  	/*
> @@ -1784,10 +1815,9 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	if (!whole_block)
>  		goto single_page;
>  
> -	free_pages = move_freepages_block(zone, page, start_type,
> -						&movable_pages);
>  	/* moving whole block can fail due to zone boundary conditions */
> -	if (!free_pages)
> +	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
> +				       &free_pages, &movable_pages))
>  		goto single_page;
>  
>  	/*
> @@ -1805,7 +1835,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	 * vice versa, be conservative since we can't distinguish the
>  	 * exact migratetype of non-movable pages.
>  	 */
> -	if (old_block_type == MIGRATE_MOVABLE)
> +	if (block_type == MIGRATE_MOVABLE)
>  		alike_pages = pageblock_nr_pages
>  					- (free_pages + movable_pages);
>  	else
> @@ -1816,13 +1846,16 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	 * compatible migratability as our allocation, claim the whole block.
>  	 */
>  	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
> -			page_group_by_mobility_disabled)
> +			page_group_by_mobility_disabled) {
> +		move_freepages(zone, start_pfn, end_pfn, start_type);
>  		set_pageblock_migratetype(page, start_type);
> -
> -	return;
> +		return __rmqueue_smallest(zone, order, start_type);
> +	}
>  
>  single_page:
> -	move_to_free_list(page, zone, current_order, start_type);
> +	del_page_from_free_list(page, zone, current_order);
> +	expand(zone, page, order, current_order, block_type);
> +	return page;
>  }
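
To make the nit above concrete, this is roughly what I have in mind for the
current_order >= pageblock_order branch (untested, only meant to illustrate
the idea; it relies on the single_page label below doing the
del_page_from_free_list() + expand() with block_type, which would then be
start_type):

	/* Take ownership for orders >= pageblock_order */
	if (current_order >= pageblock_order) {
		change_pageblock_range(page, current_order, start_type);
		/* let the single_page path do the del + expand for us */
		block_type = start_type;
		goto single_page;
	}

The end result should be the same page being returned, just without
open-coding the del_page_from_free_list()/expand() pair in this branch.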