Date: Tue, 7 Nov 2023 13:33:41 -0800
From: Roman Gushchin
To: Matthew Wilcox
Cc: Christoph Lameter, linux-mm@kvack.org, cgroups@vger.kernel.org
Subject: Re: cgroups: warning for metadata allocation with GFP_NOFAIL (was Re: folio_alloc_buffers() doing allocations > order 1 with GFP_NOFAIL)
Message-ID:
References: <6b42243e-f197-600a-5d22-56bd728a5ad8@gentwo.org> <8f6d3d89-3632-01a8-80b8-6a788a4ba7a8@linux.com>
In-Reply-To:

On Tue, Nov 07, 2023 at 07:24:08PM +0000, Matthew Wilcox wrote:
> On Mon, Nov 06, 2023 at 06:57:05PM -0800, Christoph Lameter wrote:
> > Right.. Well, let's add the cgroup folks to this.
> >
> > The code simply uses GFP_NOFAIL to allocate cgroup metadata with
> > an order > 1:
> >
> > int memcg_alloc_slab_cgroups(struct slab *slab, struct kmem_cache *s,
> > 			     gfp_t gfp, bool new_slab)
> > {
> > 	unsigned int objects = objs_per_slab(s, slab);
> > 	unsigned long memcg_data;
> > 	void *vec;
> >
> > 	gfp &= ~OBJCGS_CLEAR_MASK;
> > 	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
> > 			   slab_nid(slab));
>
> But, but but, why does this incur an allocation larger than PAGE_SIZE?
>
> sizeof(void *) is 8.  We have N objects allocated from the slab.  I
> happen to know this is used for buffer_head, so:
>
> buffer_head  1369  1560  104  39  1 : tunables 0 0 0 : slabdata 40 40 0
>
> we get 39 objects per slab, and we're only allocating one page per slab.
> 39 * 8 is only 312.
>
> Maybe Christoph is playing with min_slab_order or something, so we're
> getting 8 pages per slab.  That's still only 2496 bytes.  Why are we
> calling into the large kmalloc path?  What's really going on here?

Good question, and I *guess* it's something related to Christoph's hardware
(64k pages or something like this) - otherwise we would have seen it sooner.
I'd like to have the answer too.

Thanks!
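P.S. For anyone following along, here is a minimal user-space sketch of the
arithmetic Matthew walks through above: the size of the obj_cgroup pointer
vector that memcg_alloc_slab_cgroups() kcalloc's, for a few objects-per-slab
values. The 39-object case matches the buffer_head slabinfo line quoted
above; the 312- and 630-object cases are my own illustrative assumptions
(roughly 8 pages per slab, and a 64k-page-sized slab), not measurements from
Christoph's machine.

/*
 * Sketch only: recompute objects * sizeof(pointer) for a few assumed
 * objects-per-slab values and compare against a 4K PAGE_SIZE.
 */
#include <stdio.h>

int main(void)
{
	/* Assumed objects-per-slab values, see note above. */
	unsigned int objs_per_slab[] = { 39, 312, 630 };
	unsigned long page_size = 4096;		/* 4K pages assumed */

	for (unsigned int i = 0;
	     i < sizeof(objs_per_slab) / sizeof(objs_per_slab[0]); i++) {
		/* One obj_cgroup pointer per object in the slab. */
		unsigned long vec_size = objs_per_slab[i] * sizeof(void *);

		printf("%3u objects -> %4lu byte vector (%s PAGE_SIZE)\n",
		       objs_per_slab[i], vec_size,
		       vec_size > page_size ? "larger than" : "within");
	}
	return 0;
}

Compiled and run with "cc sketch.c && ./a.out", it prints 312 and 2496 bytes
for the first two cases (well within a 4K page, as Matthew notes), and only
the 630-object case crosses PAGE_SIZE.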