From: Uladzislau Rezki
Date: Fri, 23 Aug 2024 18:42:47 +0200
To: Michal Hocko
Cc: Hailong Liu, Uladzislau Rezki, Andrew Morton, Barry Song <21cnbao@gmail.com>,
 Christoph Hellwig, Vlastimil Babka, Tangquan Zheng, stable@vger.kernel.org,
 Baoquan He, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RESEND PATCH v1] mm/vmalloc: fix page mapping if vm_area_alloc_pages() with high order fallback to order 0
References: <20240808122019.3361-1-hailong.liu@oppo.com>
 <20240815220709.47f66f200fd0a072777cc348@linux-foundation.org>
 <20240816091232.fsliktqgza5o5x6t@oppo.com>
 <20240816114626.jmhqh5ducbk7qeur@oppo.com>

Hello, Michal.

>
> Let me clarify what I would like to have clarified:
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6b783baf12a1..fea90a39f5c5 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -3510,13 +3510,13 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
>  EXPORT_SYMBOL_GPL(vmap_pfn);
>  #endif /* CONFIG_VMAP_PFN */
>
> +/* GFP_NOFAIL semantic is implemented by __vmalloc_node_range_noprof */
>  static inline unsigned int
>  vm_area_alloc_pages(gfp_t gfp, int nid,
>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>  {
>  	unsigned int nr_allocated = 0;
> -	gfp_t alloc_gfp = gfp;
> -	bool nofail = gfp & __GFP_NOFAIL;
> +	gfp_t alloc_gfp = gfp & ~ __GFP_NOFAIL;
>  	struct page *page;
>  	int i;
>
> @@ -3527,9 +3527,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  	 * more permissive.
>  	 */
>  	if (!order) {
> -		/* bulk allocator doesn't support nofail req. officially */
> -		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
> -
>  		while (nr_allocated < nr_pages) {
>  			unsigned int nr, nr_pages_request;
>
> @@ -3547,12 +3544,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			 * but mempolicy wants to alloc memory by interleaving.
>  			 */
>  			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
> -				nr = alloc_pages_bulk_array_mempolicy_noprof(bulk_gfp,
> +				nr = alloc_pages_bulk_array_mempolicy_noprof(alloc_gfp,
>  							nr_pages_request,
>  							pages + nr_allocated);
>
>  			else
> -				nr = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid,
> +				nr = alloc_pages_bulk_array_node_noprof(alloc_gfp, nid,
>  						nr_pages_request,
>  						pages + nr_allocated);
>
> @@ -3566,13 +3563,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			if (nr != nr_pages_request)
>  				break;
>  		}
> -	} else if (gfp & __GFP_NOFAIL) {
> -		/*
> -		 * Higher order nofail allocations are really expensive and
> -		 * potentially dangerous (pre-mature OOM, disruptive reclaim
> -		 * and compaction etc.
> -		 */
> -		alloc_gfp &= ~__GFP_NOFAIL;
>  	}
>
>  	/* High-order pages or fallback path if "bulk" fails. */
> --
>
See the change below. It makes no functional change; it is rather a small
refactoring which includes the comment I wanted to add and the clarification
you asked for (if I got you correctly):

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3f9b6bd707d2..24fad2e48799 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3531,8 +3531,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 		unsigned int order, unsigned int nr_pages, struct page **pages)
 {
 	unsigned int nr_allocated = 0;
-	gfp_t alloc_gfp = gfp;
-	bool nofail = gfp & __GFP_NOFAIL;
 	struct page *page;
 	int i;

@@ -3543,9 +3541,6 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 	 * more permissive.
 	 */
 	if (!order) {
-		/* bulk allocator doesn't support nofail req. officially */
-		gfp_t bulk_gfp = gfp & ~__GFP_NOFAIL;
-
 		while (nr_allocated < nr_pages) {
 			unsigned int nr, nr_pages_request;

@@ -3563,12 +3558,12 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			 * but mempolicy wants to alloc memory by interleaving.
 			 */
 			if (IS_ENABLED(CONFIG_NUMA) && nid == NUMA_NO_NODE)
-				nr = alloc_pages_bulk_array_mempolicy_noprof(bulk_gfp,
+				nr = alloc_pages_bulk_array_mempolicy_noprof(gfp & ~__GFP_NOFAIL,
 							nr_pages_request,
 							pages + nr_allocated);
-
 			else
-				nr = alloc_pages_bulk_array_node_noprof(bulk_gfp, nid,
+				/* bulk allocator doesn't support nofail req. officially */
+				nr = alloc_pages_bulk_array_node_noprof(gfp & ~__GFP_NOFAIL, nid,
 						nr_pages_request,
 						pages + nr_allocated);

@@ -3582,24 +3577,18 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
 			if (nr != nr_pages_request)
 				break;
 		}
-	} else if (gfp & __GFP_NOFAIL) {
-		/*
-		 * Higher order nofail allocations are really expensive and
-		 * potentially dangerous (pre-mature OOM, disruptive reclaim
-		 * and compaction etc.
-		 */
-		alloc_gfp &= ~__GFP_NOFAIL;
 	}

 	/* High-order pages or fallback path if "bulk" fails. */
 	while (nr_allocated < nr_pages) {
-		if (!nofail && fatal_signal_pending(current))
+		if (!(gfp & __GFP_NOFAIL) && fatal_signal_pending(current))
 			break;

 		if (nid == NUMA_NO_NODE)
-			page = alloc_pages_noprof(alloc_gfp, order);
+			page = alloc_pages_noprof(gfp, order);
 		else
-			page = alloc_pages_node_noprof(nid, alloc_gfp, order);
+			page = alloc_pages_node_noprof(nid, gfp, order);
+
 		if (unlikely(!page))
 			break;

@@ -3666,7 +3655,16 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);

-	area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
+	/*
+	 * Higher order nofail allocations are really expensive and
+	 * potentially dangerous (pre-mature OOM, disruptive reclaim
+	 * and compaction etc.
+	 *
+	 * Please note, the __vmalloc_node_range_noprof() falls-back
+	 * to order-0 pages if high-order attempt has been unsuccessful.
+	 */
+	area->nr_pages = vm_area_alloc_pages(page_order ?
+		gfp_mask &= ~__GFP_NOFAIL : gfp_mask | __GFP_NOWARN,
 		node, page_order, nr_small_pages, area->pages);

 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);

Does that align with what you had in mind?

Thanks!

--
Uladzislau Rezki
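
P.S. To make the intended flag flow easier to see outside of diff context,
below is a minimal, compilable userspace sketch of the policy the refactoring
encodes. It is illustrative only: the demo_* names and bit values are made-up
stand-ins, not kernel API; only the decision logic mirrors the proposal, i.e.
nofail is stripped from high-order (and bulk) attempts and honored only on
the order-0 path.

#include <stdio.h>

typedef unsigned int gfp_t;

/* Hypothetical stand-in bits, not the real kernel flag values. */
#define DEMO_GFP_NOFAIL	(1u << 0)	/* stands in for __GFP_NOFAIL */
#define DEMO_GFP_NOWARN	(1u << 1)	/* stands in for __GFP_NOWARN */

/*
 * Mask the caller's flags for one allocation attempt, the way the
 * __vmalloc_area_node() hunk above does: a high-order attempt never
 * carries nofail (too expensive and potentially dangerous), whereas
 * the order-0 attempt keeps nofail and only suppresses warnings.
 */
static gfp_t demo_attempt_gfp(gfp_t gfp, unsigned int order)
{
	if (order)
		return gfp & ~DEMO_GFP_NOFAIL;

	return gfp | DEMO_GFP_NOWARN;
}

int main(void)
{
	gfp_t gfp = DEMO_GFP_NOFAIL;

	/* High-order attempt: nofail stripped, prints 0x0. */
	printf("high-order gfp: 0x%x\n", demo_attempt_gfp(gfp, 9));

	/* Order-0 fallback: nofail kept, nowarn added, prints 0x3. */
	printf("order-0 gfp:    0x%x\n", demo_attempt_gfp(gfp, 0));

	return 0;
}

The same stripping happens for the bulk-allocator calls inside
vm_area_alloc_pages(), since the bulk allocator officially does not support
nofail requests; the nofail retry itself is left to
__vmalloc_node_range_noprof(), per the comment in the diff above.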