From: Shakeel Butt
Date: Wed, 17 Feb 2021 06:59:55 -0800
References: <20210216030713.79101-1-eiichi.tsukata@nutanix.com>
Subject: Re: [RFC PATCH] mm, oom: introduce vm.sacrifice_hugepage_on_oom
To: David Rientjes, Johannes Weiner, Tejun Heo
Cc: Michal Hocko, Eiichi Tsukata, Jonathan Corbet, Mike Kravetz,
 mcgrof@kernel.org, Kees Cook, yzaikin@google.com, Andrew Morton,
 linux-doc@vger.kernel.org, LKML, Linux MM, linux-fsdevel,
 felipe.franciosi@nutanix.com

On Tue, Feb 16, 2021 at 5:25 PM David Rientjes wrote:
>
> On Tue, 16 Feb 2021, Michal Hocko wrote:
>
> > > Hugepages can be preallocated to avoid unpredictable allocation latency.
> > > If we run into 4k page shortage, the kernel can trigger OOM even though
> > > there were free hugepages. When OOM is triggered by user address page
> > > fault handler, we can use oom notifier to free hugepages in user space
> > > but if it's triggered by memory allocation for kernel, there is no way
> > > to synchronously handle it in user space.
> >
> > Can you expand some more on what kind of problem do you see?
> > Hugetlb pages are, by definition, a preallocated, unreclaimable and
> > admin controlled pool of pages.
>
> Small nit: true of non-surplus hugetlb pages.
>
> > Under those conditions it is expected
> > and required that the sizing would be done very carefully. Why is that a
> > problem in your particular setup/scenario?
> >
> > If the sizing is really done properly and then a random process can
> > trigger OOM then this can lead to malfunctioning of those workloads
> > which do depend on hugetlb pool, right? So isn't this a kinda DoS
> > scenario?
> >
> > > This patch introduces a new sysctl vm.sacrifice_hugepage_on_oom. If
> > > enabled, it first tries to free a hugepage if available before invoking
> > > the oom-killer. The default value is disabled not to change the current
> > > behavior.
> >
> > Why is this interface not hugepage size aware? It is quite different to
> > release a GB huge page or 2MB one. Or is it expected to release the
> > smallest one? To the implementation...
> >
> > [...]
> > > +static int sacrifice_hugepage(void)
> > > +{
> > > +	int ret;
> > > +
> > > +	spin_lock(&hugetlb_lock);
> > > +	ret = free_pool_huge_page(&default_hstate, &node_states[N_MEMORY], 0);
> >
> > ... no it is going to release the default huge page. This will be 2MB in
> > most cases but this is not given.
> >
> > Unless I am mistaken this will free up also reserved hugetlb pages. This
> > would mean that a page fault would SIGBUS which is very likely not
> > something we want to do right? You also want to use oom nodemask rather
> > than a full one.
> >
> > Overall, I am not really happy about this feature even when above is
> > fixed, but let's hear more the actual problem first.
>
> Shouldn't this behavior be possible as an oomd plugin instead, perhaps
> triggered by psi? I'm not sure if oomd is intended only to kill something
> (oomkilld? lol) or if it can be made to do sysadmin level behavior, such
> as shrinking the hugetlb pool, to solve the oom condition.

The senpai plugin of oomd actually is a proactive reclaimer, so oomd is
being used for more than oom-killing.

>
> If so, it seems like we want to do this at the absolute last minute. In
> other words, reclaim has failed to free memory by other means so we would
> like to shrink the hugetlb pool. (It's the reason why it's implemented as
> a predecessor to oom as opposed to part of reclaim in general.)
>
> Do we have the ability to suppress the oom killer until oomd has a chance
> to react in this scenario?

There is no explicit knob but there are indirect ways to delay the kernel
oom killer. In the presence of reclaimable memory the kernel is very
conservative to trigger the oom-kill.
I think the way Facebook is achieving this in oomd is by using swap to
have good enough reclaimable memory and then using memory.swap.high to
throttle the workload's allocation rates which will increase the PSI as
well. Since oomd polls PSI, it will be able to react before the kernel
oom-killer.