From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 7 Mar 2023 08:58:04 +0800
From: Baoquan He <bhe@redhat.com>
To: Michal Hocko
Cc: Uladzislau Rezki, Gao Xiang, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Mel Gorman, Vlastimil Babka,
 Christoph Hellwig
Subject: Re: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations
References: <20230305053035.1911-1-hsiangkao@linux.alibaba.com>

On 03/06/23 at 03:03pm, Michal Hocko wrote:
> On Mon 06-03-23 13:14:43, Uladzislau Rezki wrote:
> [...]
> > Some questions:
> >
> > 1. Could you please add a comment why you want the bulk_gfp without
> >    the __GFP_NOFAIL (bulk path)?
>
> The bulk allocator is not documented to fully support the __GFP_NOFAIL
> semantic, IIRC. While it uses alloc_pages as a fallback, I didn't want
> to make any assumptions based on the current implementation. At least
> that is my recollection. If we do want the bulk allocator to support
> NOFAIL, then we can drop the special casing here.
>
> > 2. Could you please add a comment why high order pages do not want
> >    __GFP_NOFAIL?
>
> You have already explained. See below.
>
> > 3. Looking at the patch:
> >
> > +	} else {
> > +		alloc_gfp &= ~__GFP_NOFAIL;
> > +		nofail = true;
> >
> > If the user does not want to go with the __GFP_NOFAIL flag, why do
> > you force it when a high order allocation fails and you switch to
> > 0-order allocations?
>
> Not intended. The above should have been else if (gfp & __GFP_NOFAIL).
> Thanks for catching that! This would be the full patch with the
> description:
>
> ---
> From 3ccfaa15bf2587b8998c129533a0404fedf5a484 Mon Sep 17 00:00:00 2001
> From: Michal Hocko
> Date: Mon, 6 Mar 2023 09:15:17 +0100
> Subject: [PATCH] mm, vmalloc: fix high order __GFP_NOFAIL allocations
>
> Gao Xiang has reported that the page allocator complains about high
> order __GFP_NOFAIL requests coming from the vmalloc core:
>
>  __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
>  alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
>  vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
>  __vmalloc_area_node mm/vmalloc.c:3057 [inline]
>  __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
>  kvmalloc_node+0x156/0x1a0 mm/util.c:606
>  kvmalloc include/linux/slab.h:737 [inline]
>  kvmalloc_array include/linux/slab.h:755 [inline]
>  kvcalloc include/linux/slab.h:760 [inline]
>
> It seems that I completely missed the case of high order allocations
> backing vmalloc areas when implementing __GFP_NOFAIL support. This
> means that [k]vmalloc et al. can make higher order allocations with
> __GFP_NOFAIL, which can easily trigger the OOM killer for non-costly
> orders or cause a lot of reclaim/compaction activity if those requests
> cannot be satisfied.
>
> Fix the issue by falling back to zero order allocations for
> __GFP_NOFAIL requests if the high order request fails.
>
> Fixes: 9376130c390a ("mm/vmalloc: add support for __GFP_NOFAIL")
> Reported-by: Gao Xiang
> Signed-off-by: Michal Hocko
> ---
>  mm/vmalloc.c | 28 +++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ef910bf349e1..bef6cf2b4d46 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2883,6 +2883,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>  {
>  	unsigned int nr_allocated = 0;
> +	gfp_t alloc_gfp = gfp;
> +	bool nofail = false;
>  	struct page *page;
>  	int i;
>
> @@ -2893,6 +2895,7 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  	 * more permissive.
>  	 */
>  	if (!order) {
> +		/* bulk allocator doesn't support nofail req. officially */
>  		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
>
>  		while (nr_allocated < nr_pages) {
> @@ -2931,20 +2934,35 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			if (nr != nr_pages_request)
>  				break;
>  		}
> +	} else if (gfp & __GFP_NOFAIL) {
> +		/*
> +		 * Higher order nofail allocations are really expensive and
> +		 * potentially dangerous (pre-mature OOM, disruptive reclaim
> +		 * and compaction etc.
> +		 */
> +		alloc_gfp &= ~__GFP_NOFAIL;
> +		nofail = true;
>  	}
>
>  	/* High-order pages or fallback path if "bulk" fails. */
> -
>  	while (nr_allocated < nr_pages) {
>  		if (fatal_signal_pending(current))
>  			break;
>
>  		if (nid == NUMA_NO_NODE)
> -			page = alloc_pages(gfp, order);
> +			page = alloc_pages(alloc_gfp, order);
>  		else
> -			page = alloc_pages_node(nid, gfp, order);
> -		if (unlikely(!page))
> -			break;
> +			page = alloc_pages_node(nid, alloc_gfp, order);
> +		if (unlikely(!page)) {
> +			if (!nofail)
> +				break;
> +
> +			/* fall back to the zero order allocations */
> +			alloc_gfp |= __GFP_NOFAIL;
> +			order = 0;
> +			continue;
> +		}
> +
>  		/*
>  		 * Higher order allocations must be able to be treated as
>  		 * indepdenent small pages by callers (as they can with

Reviewed-by: Baoquan He <bhe@redhat.com>
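---

As a side note for readers who want to poke at the fallback strategy the
patch implements outside a kernel tree, below is a minimal userspace C
sketch of the same idea: attempt the high order allocation with the
nofail flag masked off, and only fall back to order-0 allocations with
the flag restored when that attempt fails. Everything here (GFP_NOFAIL,
mock_alloc, alloc_with_fallback) is a hypothetical stand-in for
illustration, not a kernel API.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define GFP_NOFAIL 0x1u  /* hypothetical stand-in for __GFP_NOFAIL */

/* Stand-in allocator: pretend high order requests can fail. */
static void *mock_alloc(unsigned int gfp, unsigned int order)
{
	if (order > 0 && !(gfp & GFP_NOFAIL))
		return NULL;  /* simulate a failed high order allocation */
	return malloc((size_t)4096 << order);
}

/*
 * The strategy from the patch: drop nofail for the high order attempt,
 * then retry at order 0 with nofail restored if the attempt fails.
 */
static void *alloc_with_fallback(unsigned int gfp, unsigned int order)
{
	unsigned int alloc_gfp = gfp;
	bool nofail = false;
	void *page;

	if (order > 0 && (gfp & GFP_NOFAIL)) {
		alloc_gfp &= ~GFP_NOFAIL;
		nofail = true;
	}

	for (;;) {
		page = mock_alloc(alloc_gfp, order);
		if (page || !nofail)
			return page;

		/* fall back to the zero order allocations */
		alloc_gfp |= GFP_NOFAIL;
		order = 0;
	}
}

int main(void)
{
	void *page = alloc_with_fallback(GFP_NOFAIL, 3);

	printf("allocated %p after falling back to order 0\n", page);
	free(page);
	return 0;
}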