Date: Sat, 1 Mar 2025 19:32:50 +0100
From: Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH] mm/list_lru: allocate on first insert instead of allocation
To: Jingxiang Zeng, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@kernel.org,
 roman.gushchin@linux.dev, shakeel.butt@linux.dev, david@redhat.com,
 muchun.song@linux.dev, chengming.zhou@linux.dev, kasong@tencent.com,
 lkp@intel.com
References: <20250228113836.136318-1-jingxiangzeng.cas@gmail.com>
In-Reply-To: <20250228113836.136318-1-jingxiangzeng.cas@gmail.com>
Content-Type: text/plain; charset=UTF-8

On 2/28/25 12:38, Jingxiang Zeng wrote:
> From: Zeng Jingxiang
>
> It is observed that each time the memcg_slab_post_alloc_hook function
> and the zswap_store function are executed, xa_load will be executed
> once through the following path, which adds unnecessary overhead.
> This patch optimizes this part of the code. When a new mlru is
> inserted into list_lru, xa_load is only executed once, and other slab
> requests of the same type will not be executed repeatedly.
>
> __memcg_slab_post_alloc_hook
> ->memcg_list_lru_alloc
> ->->memcg_list_lru_allocated
> ->->->xa_load
>
> zswap_store
> ->memcg_list_lru_alloc
> ->->memcg_list_lru_allocated
> ->->->xa_load

How do you know it's xa_load itself that's the issue?

I think you might be able to eliminate some call overhead easily:

- move list_lru_memcg_aware() and memcg_list_lru_allocated() to list_lru.h
- make memcg_list_lru_alloc() also a static inline in list_lru.h, so it
  does the list_lru_memcg_aware() and memcg_list_lru_allocated() checks
  inline (they can even be likely()), and then calls
  __memcg_list_lru_alloc(), which is renamed from the current
  memcg_list_lru_alloc() with the checks moved away.

As a result, callers of memcg_list_lru_alloc() will (in the likely case)
only perform a direct call to xa_load() in xarray.c, and not an
additional call through memcg_list_lru_alloc() in list_lru.c.

> We created 1,000,000 negative dentries on test machines with 10, 1,000,
> and 10,000 cgroups for performance testing, and then used the bcc
> funclatency tool to capture the time consumption of the
> kmem_cache_alloc_lru_noprof function. The performance improvement
> ranged from 3.3% to 6.2%:
>
> 10 cgroups, 3.3% performance improvement.
> without the patch:
> avg = 1375 nsecs, total: 1375684993 nsecs, count: 1000000
> with the patch:
> avg = 1331 nsecs, total: 1331625726 nsecs, count: 1000000
>
> 1000 cgroups, 3.7% performance improvement.
> without the patch:
> avg = 1364 nsecs, total: 1364564848 nsecs, count: 1000000
> with the patch:
> avg = 1315 nsecs, total: 1315150414 nsecs, count: 1000000
>
> 10000 cgroups, 6.2% performance improvement.
> without the patch:
> avg = 1385 nsecs, total: 1385361153 nsecs, count: 1000002
> with the patch:
> avg = 1304 nsecs, total: 1304531155 nsecs, count: 1000000
>
> Signed-off-by: Zeng Jingxiang
> Suggested-by: Kairui Song
> ---
>  include/linux/list_lru.h |  2 --
>  mm/list_lru.c            | 22 +++++++++++++++-------
>  mm/memcontrol.c          | 16 ++--------------
>  mm/slab.h                |  4 ++--
>  mm/slub.c                | 20 +++++++++-----------
>  mm/zswap.c               |  9 ---------
>  6 files changed, 28 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index fe739d35a864..04d4b051f618 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -79,8 +79,6 @@ static inline int list_lru_init_memcg_key(struct list_lru *lru, struct shrinker
>  	return list_lru_init_memcg(lru, shrinker);
>  }
>  
> -int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> -			 gfp_t gfp);
>  void memcg_reparent_list_lrus(struct mem_cgroup *memcg, struct mem_cgroup *parent);
>  
>  /**
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 490473af3122..c5a5d61ac946 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -49,6 +49,8 @@ static int lru_shrinker_id(struct list_lru *lru)
>  	return lru->shrinker_id;
>  }
>  
> +static int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru);
> +
>  static inline struct list_lru_one *
>  list_lru_from_memcg_idx(struct list_lru *lru, int nid, int idx)
>  {
> @@ -84,6 +86,9 @@ lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
>  			spin_unlock_irq(&l->lock);
>  		else
>  			spin_unlock(&l->lock);
> +	} else {
> +		if (!memcg_list_lru_alloc(memcg, lru))
> +			goto again;
>  	}
>  	/*
>  	 * Caller may simply bail out if raced with reparenting or
> @@ -93,7 +98,6 @@ lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
>  		rcu_read_unlock();
>  		return NULL;
>  	}
> -	VM_WARN_ON(!css_is_dying(&memcg->css));
>  	memcg = parent_mem_cgroup(memcg);
>  	goto again;
>  }
> @@ -506,18 +510,16 @@ static inline bool memcg_list_lru_allocated(struct mem_cgroup *memcg,
>  	return idx < 0 || xa_load(&lru->xa, idx);
>  }
>  
> -int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
> -			 gfp_t gfp)
> +static int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru)
>  {
>  	unsigned long flags;
>  	struct list_lru_memcg *mlru = NULL;
> -	struct mem_cgroup *pos, *parent;
> +	struct mem_cgroup *pos, *parent, *cur;
>  	XA_STATE(xas, &lru->xa, 0);
>  
>  	if (!list_lru_memcg_aware(lru) || memcg_list_lru_allocated(memcg, lru))
>  		return 0;
>  
> -	gfp &= GFP_RECLAIM_MASK;
>  	/*
>  	 * Because the list_lru can be reparented to the parent cgroup's
>  	 * list_lru, we should make sure that this cgroup and all its
> @@ -536,11 +538,13 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
>  	}
>  
>  	if (!mlru) {
> -		mlru = memcg_init_list_lru_one(lru, gfp);
> +		mlru = memcg_init_list_lru_one(lru, GFP_KERNEL);
>  		if (!mlru)
>  			return -ENOMEM;
>  	}
>  	xas_set(&xas, pos->kmemcg_id);
> +	/* We could be scanning items in another memcg */
> +	cur = set_active_memcg(pos);
>  	do {
>  		xas_lock_irqsave(&xas, flags);
>  		if (!xas_load(&xas) && !css_is_dying(&pos->css)) {
> @@ -549,12 +553,16 @@ int memcg_list_lru_alloc(struct mem_cgroup *memcg, struct list_lru *lru,
>  			mlru = NULL;
>  		}
>  		xas_unlock_irqrestore(&xas, flags);
> -	} while (xas_nomem(&xas, gfp));
> +	} while (xas_nomem(&xas, GFP_KERNEL));
> +	set_active_memcg(cur);
>  	} while (pos != memcg && !css_is_dying(&pos->css));
>  
>  	if (unlikely(mlru))
>  		kfree(mlru);
>  
> +	if (css_is_dying(&pos->css))
> +		return -EBUSY;
> +
>  	return xas_error(&xas);
>  }
>  #else
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 16f3bdbd37d8..583e2587c17b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2966,8 +2966,8 @@ static inline size_t obj_full_size(struct kmem_cache *s)
>  	return s->size + sizeof(struct obj_cgroup *);
>  }
>  
> -bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
> -				  gfp_t flags, size_t size, void **p)
> +bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> +				  size_t size, void **p)
>  {
>  	struct obj_cgroup *objcg;
>  	struct slab *slab;
> @@ -2994,18 +2994,6 @@ bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
>  
>  	flags &= gfp_allowed_mask;
>  
> -	if (lru) {
> -		int ret;
> -		struct mem_cgroup *memcg;
> -
> -		memcg = get_mem_cgroup_from_objcg(objcg);
> -		ret = memcg_list_lru_alloc(memcg, lru, flags);
> -		css_put(&memcg->css);
> -
> -		if (ret)
> -			return false;
> -	}
> -
>  	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
>  		return false;
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index e9fd9bf0bfa6..3b20298d2ea1 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -598,8 +598,8 @@ static inline enum node_stat_item cache_vmstat_idx(struct kmem_cache *s)
>  }
>  
>  #ifdef CONFIG_MEMCG
> -bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
> -				  gfp_t flags, size_t size, void **p);
> +bool __memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> +				  size_t size, void **p);
>  void __memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
>  			    void **p, int objects, struct slabobj_ext *obj_exts);
>  #endif
> diff --git a/mm/slub.c b/mm/slub.c
> index 184fd2b14758..545c4b5f2bf2 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2153,8 +2153,8 @@ alloc_tagging_slab_free_hook(struct kmem_cache *s, struct slab *slab, void **p,
>  static void memcg_alloc_abort_single(struct kmem_cache *s, void *object);
>  
>  static __fastpath_inline
> -bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
> -				gfp_t flags, size_t size, void **p)
> +bool memcg_slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
> +				size_t size, void **p)
>  {
>  	if (likely(!memcg_kmem_online()))
>  		return true;
> @@ -2162,7 +2162,7 @@ bool memcg_slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
>  	if (likely(!(flags & __GFP_ACCOUNT) && !(s->flags & SLAB_ACCOUNT)))
>  		return true;
>  
> -	if (likely(__memcg_slab_post_alloc_hook(s, lru, flags, size, p)))
> +	if (likely(__memcg_slab_post_alloc_hook(s, flags, size, p)))
>  		return true;
>  
>  	if (likely(size == 1)) {
> @@ -2241,12 +2241,11 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
>  		return true;
>  	}
>  
> -	return __memcg_slab_post_alloc_hook(s, NULL, flags, 1, &p);
> +	return __memcg_slab_post_alloc_hook(s, flags, 1, &p);
>  }
>  
>  #else /* CONFIG_MEMCG */
>  static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
> -					      struct list_lru *lru,
>  					      gfp_t flags, size_t size,
>  					      void **p)
>  {
> @@ -4085,9 +4084,8 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
>  }
>  
>  static __fastpath_inline
> -bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
> -			  gfp_t flags, size_t size, void **p, bool init,
> -			  unsigned int orig_size)
> +bool slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags, size_t size,
> +			  void **p, bool init, unsigned int orig_size)
>  {
>  	unsigned int zero_size = s->object_size;
>  	bool kasan_init = init;
> @@ -4135,7 +4133,7 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
>  		alloc_tagging_slab_alloc_hook(s, p[i], flags);
>  	}
>  
> -	return memcg_slab_post_alloc_hook(s, lru, flags, size, p);
> +	return memcg_slab_post_alloc_hook(s, flags, size, p);
>  }
>  
>  /*
> @@ -4174,7 +4172,7 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
>  	 * In case this fails due to memcg_slab_post_alloc_hook(),
>  	 * object is set to NULL
>  	 */
> -	slab_post_alloc_hook(s, lru, gfpflags, 1, &object, init, orig_size);
> +	slab_post_alloc_hook(s, gfpflags, 1, &object, init, orig_size);
>  
>  	return object;
>  }
> @@ -5135,7 +5133,7 @@ int kmem_cache_alloc_bulk_noprof(struct kmem_cache *s, gfp_t flags, size_t size,
>  	 * memcg and kmem_cache debug support and memory initialization.
>  	 * Done outside of the IRQ disabled fastpath loop.
>  	 */
> -	if (unlikely(!slab_post_alloc_hook(s, NULL, flags, size, p,
> +	if (unlikely(!slab_post_alloc_hook(s, flags, size, p,
>  			slab_want_init_on_alloc(flags, s), s->object_size))) {
>  		return 0;
>  	}
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 10f2a16e7586..178728a936ed 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1562,15 +1562,6 @@ bool zswap_store(struct folio *folio)
>  	if (!pool)
>  		goto put_objcg;
>  
> -	if (objcg) {
> -		memcg = get_mem_cgroup_from_objcg(objcg);
> -		if (memcg_list_lru_alloc(memcg, &zswap_list_lru, GFP_KERNEL)) {
> -			mem_cgroup_put(memcg);
> -			goto put_pool;
> -		}
> -		mem_cgroup_put(memcg);
> -	}
> -
>  	for (index = 0; index < nr_pages; ++index) {
>  		struct page *page = folio_page(folio, index);
> 