From: Phillip Lougher
Date: Thu, 27 Apr 2023 02:37:55 +0100
Subject: Re: [PATCH 1/1] mm/oom_kill: trigger the oom killer if oom occurs without __GFP_FS
To: Hui Wang, Yang Shi
Cc: Michal Hocko, linux-mm@kvack.org, akpm@linux-foundation.org, surenb@google.com, colin.i.king@gmail.com, hannes@cmpxchg.org, vbabka@suse.cz, hch@infradead.org, mgorman@suse.de
References: <20230426051030.112007-1-hui.wang@canonical.com> <20230426051030.112007-2-hui.wang@canonical.com> <9f827ae2-eaec-8660-35fa-71e218d5a2c5@squashfs.org.uk> <72cf1f14-c02b-033e-6fa9-8558e628ffb6@squashfs.org.uk> <553e6668-bebd-7411-9f69-d62e9658da1d@squashfs.org.uk> <1f181fe6-60f4-6b71-b8ca-4f6365de0b4c@canonical.com>
In-Reply-To: <1f181fe6-60f4-6b71-b8ca-4f6365de0b4c@canonical.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 27/04/2023 01:42, Hui Wang wrote:
>
> On 4/27/23 03:34, Phillip Lougher wrote:
>>
>> On 26/04/2023 20:06, Phillip Lougher wrote:
>>>
>>> On 26/04/2023 19:26, Yang Shi wrote:
>>>> On Wed, Apr 26, 2023 at 10:38 AM Phillip Lougher wrote:
>>>>>
>>>>> On 26/04/2023 17:44, Phillip Lougher wrote:
>>>>>> On 26/04/2023 12:07, Hui Wang wrote:
>>>>>>> On 4/26/23 16:33, Michal Hocko wrote:
>>>>>>>> [CC squashfs maintainer]
>>>>>>>>
>>>>>>>> On Wed 26-04-23 13:10:30, Hui Wang wrote:
>>>>>>>>> If we run stress-ng on a squashfs filesystem, the system ends
>>>>>>>>> up in a hang-like state: stress-ng cannot finish running and
>>>>>>>>> the console stops reacting to user input.
>>>>>>>>>
>>>>>>>>> This issue happens on all arm/arm64 platforms we are working
>>>>>>>>> on. Through debugging, we found it is introduced by the oom
>>>>>>>>> handling in the kernel.
>>>>>>>>>
>>>>>>>>> The fs->readahead() is called between memalloc_nofs_save() and
>>>>>>>>> memalloc_nofs_restore(), and squashfs_readahead() calls
>>>>>>>>> alloc_page(). If there is no memory left, out_of_memory() is
>>>>>>>>> called without __GFP_FS, so the oom killer is not triggered and
>>>>>>>>> the process loops endlessly, waiting for someone else to
>>>>>>>>> trigger the oom killer and release some memory. But on a system
>>>>>>>>> whose whole root filesystem is squashfs, nearly all userspace
>>>>>>>>> processes call out_of_memory() without __GFP_FS, so the system
>>>>>>>>> enters a hang-like state when running stress-ng.
>>>>>>>>>
>>>>>>>>> To fix it, we could trigger a kthread to call page_alloc() with
>>>>>>>>> __GFP_FS before returning from an out_of_memory() that lacked
>>>>>>>>> __GFP_FS.
>>>>>>>> I do not think this is an appropriate way to deal with this
>>>>>>>> issue. Does it even make sense to trigger the OOM killer for
>>>>>>>> something like readahead? Would it be more mindful to fail the
>>>>>>>> allocation instead? That being said, should allocations from
>>>>>>>> squashfs_readahead() use __GFP_RETRY_MAYFAIL instead?
>>>>>>> Thanks for your comment. This issue can hardly be reproduced on
>>>>>>> an ext4 filesystem, because ext4->readahead() doesn't call
>>>>>>> alloc_page().
>>>>>>> If ext4->readahead() is changed as below, the issue becomes
>>>>>>> easy to reproduce on ext4 as well (repeatedly run:
>>>>>>> $ stress-ng --bigheap ${num_of_cpu_threads} --sequential 0
>>>>>>> --timeout 30s --skip-silent --verbose)
>>>>>>>
>>>>>>> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
>>>>>>> index ffbbd9626bd8..8b9db0b9d0b8 100644
>>>>>>> --- a/fs/ext4/inode.c
>>>>>>> +++ b/fs/ext4/inode.c
>>>>>>> @@ -3114,12 +3114,18 @@ static int ext4_read_folio(struct file *file, struct folio *folio)
>>>>>>>   static void ext4_readahead(struct readahead_control *rac)
>>>>>>>   {
>>>>>>>          struct inode *inode = rac->mapping->host;
>>>>>>> +       struct page *tmp_page;
>>>>>>>
>>>>>>>          /* If the file has inline data, no need to do readahead. */
>>>>>>>          if (ext4_has_inline_data(inode))
>>>>>>>                  return;
>>>>>>>
>>>>>>> +       tmp_page = alloc_page(GFP_KERNEL);
>>>>>>> +
>>>>>>>          ext4_mpage_readpages(inode, rac, NULL);
>>>>>>> +
>>>>>>> +       if (tmp_page)
>>>>>>> +               __free_page(tmp_page);
>>>>>>>   }
>>>>>>>
>>>>>>>
>>>>>>> BTW, I applied my patch to linux-next and ran the oom stress-ng
>>>>>>> test cases overnight; there was no hang, oops, or crash, so it
>>>>>>> looks like there is no big problem with using a kthread to
>>>>>>> trigger the oom killer in this case.
>>>>>>>
>>>>>>> And hi squashfs maintainer, I checked the filesystem code, and
>>>>>>> it looks like most filesystems do not call alloc_page() in
>>>>>>> readahead(). Could you please help take a look at this issue?
>>>>>>> Thanks.
>>>>>>
>>>>>> This will be because most filesystems don't need to do so.
>>>>>> Squashfs is a compressed filesystem with large blocks covering
>>>>>> much more than one page, and it decompresses these blocks in
>>>>>> squashfs_readahead().
>>>>>> If __readahead_batch() does not return the full set of pages
>>>>>> covering the Squashfs block, it allocates a temporary page for
>>>>>> the decompressors to decompress into, to "fill in the hole".
>>>>>>
>>>>>> What can be done here as far as Squashfs is concerned ... I
>>>>>> could move the page allocation out of the readahead path (e.g.
>>>>>> do it at mount time).
>>>>>
>>>>> You could try this patch, which does that.  Compile tested only.
>>>> The kmalloc_array() may call alloc_page() to trigger this problem
>>>> too IIUC. Should it be pre-allocated as well?
>>>
>>> That is a much smaller allocation, so it entirely depends on
>>> whether it is an issue or not.  There are also a number of other
>>> small memory allocations in the path.
>>>
>>> The whole point of this patch is to move the *biggest* allocation,
>>> which is the reported issue, and then see what happens.  No point
>>> in making this test patch more involved and complex than necessary
>>> at this point.
>>>
>>> Phillip
>>>
>>
>> Also be aware that this stress-ng-triggered issue is new, and
>> apparently didn't occur last year.  So it is reasonable to assume
>> the issue has been introduced as a side effect of the readahead
>> improvements.  One of these is this allocation of a temporary page
>> to decompress into, rather than falling back to entirely
>> decompressing into a pre-allocated buffer (allocated at mount time).
>> The small memory allocations have been there for many years.
>>
>> Allocating the page at mount time effectively puts the memory
>> allocation situation back to how it was last year, before the
>> readahead work.
>>
>> Phillip
>>
> Thanks Phillip and Yang.
>
> And Phillip,
>
> I tested your change, and it didn't help. According to my debugging,
> the OOM happens when allocating memory for the bio, at the line
> "struct page *page = alloc_page(GFP_NOIO);" in squashfs_bio_read().
> Other filesystems just use the pre-allocated memory in the
> "struct readahead_control" to do the bio, but squashfs allocates a
> new page to do the bio (maybe because squashfs is a compressed
> filesystem).
>

The test patch was a process of elimination: it removed the obvious
change from last year.

It is also because it is a compressed filesystem. In most filesystems,
what is read off disk in I/O is what ends up in the page cache.  In a
compressed filesystem, what is read in isn't what ends up in the page
cache.

> BTW, this is not a new issue for squashfs. We have uc20 (linux-5.4
> kernel) and uc22 (linux-5.15 kernel), and all have this issue. The
> issue already existed in squashfs_readpage() in the 5.4 kernel.

That information would have been rather useful in the initial report,
and would have saved me from wasting my time.  Thanks for that.

Now, in the squashfs_readpage() situation, do processes hang or crash?
In the squashfs_readpage() path __GFP_NOFS should not be in effect.  So
is the OOM killer being invoked in this code path or not?  Does
alloc_page() in the bio code return NULL, and/or invoke the OOM killer,
or does it get stuck?  Don't keep this information to yourself so that
I have to guess.

> I guess that if we could use pre-allocated memory to do the bio, it
> would help.

We'll see. As far as I can see, you've made the system run out of
memory and are now complaining about the result.  There's nothing
unconventional about Squashfs's handling of out of memory, and most
filesystems put into an out-of-memory situation will fail.

Phillip

> Thanks,
>
> Hui.
>
>
>
>>
>>>>>    fs/squashfs/page_actor.c     | 10 +---------
>>>>>    fs/squashfs/page_actor.h     |  1 -
>>>>>    fs/squashfs/squashfs_fs_sb.h |  1 +
>>>>>    fs/squashfs/super.c          | 10 ++++++++++
>>>>>    4 files changed, 12 insertions(+), 10 deletions(-)
>>>>>
>>>>> diff --git a/fs/squashfs/page_actor.c b/fs/squashfs/page_actor.c
>>>>> index 81af6c4ca115..6cce239eca66 100644
>>>>> --- a/fs/squashfs/page_actor.c
>>>>> +++ b/fs/squashfs/page_actor.c
>>>>> @@ -110,15 +110,7 @@ struct squashfs_page_actor *squashfs_page_actor_init_special(struct squashfs_sb_
>>>>>          if (actor == NULL)
>>>>>                  return NULL;
>>>>>
>>>>> -       if (msblk->decompressor->alloc_buffer) {
>>>>> -               actor->tmp_buffer = kmalloc(PAGE_SIZE, GFP_KERNEL);
>>>>> -
>>>>> -               if (actor->tmp_buffer == NULL) {
>>>>> -                       kfree(actor);
>>>>> -                       return NULL;
>>>>> -               }
>>>>> -       } else
>>>>> -               actor->tmp_buffer = NULL;
>>>>> +       actor->tmp_buffer = msblk->actor_page;
>>>>>
>>>>>          actor->length = length ? : pages * PAGE_SIZE;
>>>>>          actor->page = page;
>>>>> diff --git a/fs/squashfs/page_actor.h b/fs/squashfs/page_actor.h
>>>>> index 97d4983559b1..df5e999afa42 100644
>>>>> --- a/fs/squashfs/page_actor.h
>>>>> +++ b/fs/squashfs/page_actor.h
>>>>> @@ -34,7 +34,6 @@ static inline struct page *squashfs_page_actor_free(struct squashfs_page_actor *
>>>>>    {
>>>>>          struct page *last_page = actor->last_page;
>>>>>
>>>>> -       kfree(actor->tmp_buffer);
>>>>>          kfree(actor);
>>>>>          return last_page;
>>>>>    }
>>>>> diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
>>>>> index 72f6f4b37863..8feddc9e6cce 100644
>>>>> --- a/fs/squashfs/squashfs_fs_sb.h
>>>>> +++ b/fs/squashfs/squashfs_fs_sb.h
>>>>> @@ -47,6 +47,7 @@ struct squashfs_sb_info {
>>>>>          struct squashfs_cache *block_cache;
>>>>>          struct squashfs_cache *fragment_cache;
>>>>>          struct squashfs_cache *read_page;
>>>>> +       void *actor_page;
>>>>>          int next_meta_index;
>>>>>          __le64 *id_table;
>>>>>          __le64 *fragment_index;
>>>>> diff --git a/fs/squashfs/super.c b/fs/squashfs/super.c
>>>>> index e090fae48e68..674dc187d961 100644
>>>>> --- a/fs/squashfs/super.c
>>>>> +++ b/fs/squashfs/super.c
>>>>> @@ -329,6 +329,15 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
>>>>>                  goto failed_mount;
>>>>>          }
>>>>>
>>>>> +
>>>>> +       /* Allocate page for squashfs_readahead()/squashfs_read_folio() */
>>>>> +       if (msblk->decompressor->alloc_buffer) {
>>>>> +               msblk->actor_page = kmalloc(PAGE_SIZE, GFP_KERNEL);
>>>>> +
>>>>> +               if (msblk->actor_page == NULL)
>>>>> +                       goto failed_mount;
>>>>> +       }
>>>>> +
>>>>>          msblk->stream = squashfs_decompressor_setup(sb, flags);
>>>>>          if (IS_ERR(msblk->stream)) {
>>>>>                  err = PTR_ERR(msblk->stream);
>>>>> @@ -454,6 +463,7 @@ static int squashfs_fill_super(struct super_block *sb, struct fs_context *fc)
>>>>>          squashfs_cache_delete(msblk->block_cache);
>>>>>          squashfs_cache_delete(msblk->fragment_cache);
>>>>>          squashfs_cache_delete(msblk->read_page);
>>>>> +       kfree(msblk->actor_page);
>>>>>          msblk->thread_ops->destroy(msblk);
>>>>>          kfree(msblk->inode_lookup_table);
>>>>>          kfree(msblk->fragment_index);
>>>>> --
>>>>> 2.35.1
>>>>>
>>>>>> Adding __GFP_RETRY_MAYFAIL so the alloc() can fail will mean
>>>>>> Squashfs returning I/O failures due to no memory.  That will
>>>>>> cause a lot of applications to crash in a low-memory situation.
>>>>>> So a crash rather than a hang.
>>>>>>
>>>>>> Phillip