From: Yu Zhao <yuzhao@google.com>
Date: Tue, 4 Jun 2024 11:53:39 -0600
Subject: Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)
To: Yosry Ahmed
Cc: Erhard Furtner, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Johannes Weiner, Nhat Pham, Chengming Zhou, Sergey Senozhatsky, Minchan Kim
On Tue, Jun 4, 2024 at 11:34 AM Yosry Ahmed wrote:
>
> On Tue, Jun 4, 2024 at 10:19 AM Yu Zhao wrote:
> >
> > On Tue, Jun 4, 2024 at 10:12 AM Yosry Ahmed wrote:
> > >
> > > On Tue, Jun 4, 2024 at 4:45 AM Erhard Furtner wrote:
> > > >
> > > > On Mon, 3 Jun 2024 16:24:02 -0700
> > > > Yosry Ahmed wrote:
> > > >
> > > > > Thanks for bisecting. Taking a look at the thread, it seems like you
> > > > > have a very limited area of memory to allocate kernel memory from. One
> > > > > possible reason why that commit can cause an issue is that we will
> > > > > have multiple instances of the zsmalloc slab caches 'zspage' and
> > > > > 'zs_handle', which may contribute to fragmentation in slab memory.
> > > > >
> > > > > Do you have /proc/slabinfo from a good and a bad run by any chance?
> > > > >
> > > > > Also, could you check if the attached patch helps? It makes sure that
> > > > > even when we use multiple zsmalloc zpools, we will use a single slab
> > > > > cache of each type.
> > > >
> > > > Thanks for looking into this! I got you 'cat /proc/slabinfo' from a
> > > > good HEAD, from a bad HEAD, and from the bad HEAD with your patch
> > > > applied.
> > > >
> > > > Good was 6be3601517d90b728095d70c14f3a04b9adcb166, bad was
> > > > b8cf32dc6e8c75b712cbf638e0fd210101c22f17, both taken from my
> > > > bisect.log. I got the slabinfo shortly after boot and a 2nd time
> > > > shortly before the OOM or the kswapd0 page allocation failure
> > > > happened. I terminated the workload (stress-ng --vm 2 --vm-bytes
> > > > 1930M --verify -v) manually shortly before the 2 GiB of RAM were
> > > > exhausted and got the slabinfo then.
> > > >
> > > > The patch applied on top of b8cf32dc6e8c75b712cbf638e0fd210101c22f17
> > > > unfortunately didn't make a difference; I got the kswapd0 page
> > > > allocation failure nevertheless.
> > >
> > > Thanks for trying this out. The patch reduces the amount of wasted
> > > memory due to the 'zs_handle' and 'zspage' caches by an order of
> > > magnitude, but it was a small number to begin with (~250K).
> > >
> > > I cannot think of other reasons why having multiple zsmalloc pools
> > > would end up using more memory in the 0.25GB zone that kernel
> > > allocations can be made from.
> > >
> > > The number of zpools could be made configurable or determined at
> > > runtime by the size of the machine, but I don't want to do that
> > > without understanding the problem here first. Adding other zswap and
> > > zsmalloc folks in case they have any ideas.
> >
> > Hi Erhard,
> >
> > If it's not too much trouble, could you "grep nr_zspages /proc/vmstat"
> > on kernels before and after the bad commit? It'd be great if you could
> > run the grep command right before the OOM kills.
> >
> > The overall internal fragmentation of multiple zsmalloc pools might be
> > higher than that of a single one. I suspect this might be the cause.
>
> I thought about the internal fragmentation of pools, but zsmalloc
> should have access to highmem, and if I understand correctly the
> problem here is that we are running out of space in the DMA zone when
> making kernel allocations.
>
> Do you suspect zsmalloc is allocating memory from the DMA zone
> initially, even though it has access to highmem?

There was a lot of user memory in the DMA zone. So at some point the
highmem zone was full and allocation fell back to lower zones.
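Roughly, the page allocator walks the zonelist from the preferred zone
downward, so once highmem is full, new pages spill into Normal and then
DMA. A standalone sketch of that fallback idea (names are illustrative
only -- the real logic lives in get_page_from_freelist() in
mm/page_alloc.c and also handles watermarks, cpusets, dirty limits, etc.):

/*
 * Simplified sketch of zonelist fallback, for illustration only.
 */
#include <stdio.h>

struct zone {
	const char *name;
	long free_pages;	/* pages still available in this zone */
};

/* Hypothetical stand-in for the buddy allocator fast path. */
static int try_alloc_from_zone(struct zone *z)
{
	if (z->free_pages <= 0)
		return 0;
	z->free_pages--;
	return 1;
}

int main(void)
{
	/* Preferred zone first: HighMem, then Normal, then DMA. */
	struct zone zones[] = {
		{ "HighMem", 0 },	/* already full of user memory */
		{ "Normal",  1 },
		{ "DMA",     1 },
	};
	int n, i;

	/* Each new allocation spills into the next lower zone. */
	for (n = 0; n < 2; n++)
		for (i = 0; i < 3; i++)
			if (try_alloc_from_zone(&zones[i])) {
				printf("allocated from %s\n", zones[i].name);
				break;
			}
	return 0;
}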
The problem with zone fallback is that recent allocations go into lower
zones, meaning they are further back on the LRU list. This applies to
both user memory and zsmalloc memory -- the latter has a writeback LRU.

On top of this, neither the zswap shrinker nor the zsmalloc shrinker
(compaction) is zone aware. So page reclaim might have trouble hitting
the right target zone.

We can't really tell how zspages are distributed across zones, but the
overall number might be helpful. It'd be great if someone could make
nr_zspages per zone :)
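The accounting itself would be cheap -- bump a per-zone counter wherever
zsmalloc allocates or frees a backing page. A minimal standalone sketch
of the idea (names are illustrative; in the kernel this would be a
per-zone vmstat counter updated through the usual helpers, not this
exact code):

#include <stdio.h>

enum zone_id { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM, NR_ZONES };

static const char * const zone_names[NR_ZONES] = {
	"DMA", "Normal", "HighMem"
};

/* Hypothetical per-zone counter; a kernel version would be a
 * zone_stat_item so it shows up per zone in /proc/zoneinfo. */
static long nr_zspages[NR_ZONES];

/* Called wherever zsmalloc gets or returns a backing page. */
static void zspage_account(enum zone_id zone, int delta)
{
	nr_zspages[zone] += delta;
}

int main(void)
{
	int i;

	/* Simulate a few zspage backing pages landing in different zones. */
	zspage_account(ZONE_HIGHMEM, 1);
	zspage_account(ZONE_HIGHMEM, 1);
	zspage_account(ZONE_DMA, 1);	/* fallback allocation */
	zspage_account(ZONE_DMA, -1);	/* freed again */

	for (i = 0; i < NR_ZONES; i++)
		printf("nr_zspages[%s] = %ld\n", zone_names[i], nr_zspages[i]);
	return 0;
}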