From: Barry Song <21cnbao@gmail.com>
To: wangkefeng.wang@huawei.com
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com, david@redhat.com, hughd@google.com, linux-mm@kvack.org, willy@infradead.org
Subject: Re: [PATCH] mm: shmem: convert to use folio_zero_range()
Date: Wed, 23 Oct 2024 11:56:03 +1300
Message-Id: <20241022225603.10491-1-21cnbao@gmail.com>
In-Reply-To: <06d99b89-17ad-447e-a8f1-8e220b5688ac@huawei.com>
References: <06d99b89-17ad-447e-a8f1-8e220b5688ac@huawei.com>

On Wed, Oct 23, 2024 at 4:10 AM Kefeng Wang wrote:
>
>
>
> On 2024/10/22 4:32, Barry Song wrote:
> > On Tue, Oct 22, 2024 at 4:33 AM Kefeng Wang wrote:
> >>
> >>
> >>
> >> On 2024/10/21 17:17, Barry Song wrote:
> >>> On Mon, Oct 21, 2024 at 9:14 PM Kefeng Wang wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 2024/10/21 15:55, Barry Song wrote:
> >>>>> On Mon, Oct 21, 2024 at 8:47 PM Barry Song <21cnbao@gmail.com> wrote:
> >>>>>>
> >>>>>> On Mon, Oct 21, 2024 at 7:09 PM Kefeng Wang wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On 2024/10/21 13:38, Barry Song wrote:
> >>>>>>>> On Mon, Oct 21, 2024 at 6:16 PM Kefeng Wang wrote:
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On 2024/10/21 12:15, Barry Song wrote:
> >>>>>>>>>> On Fri, Oct 18, 2024 at 8:48 PM Kefeng Wang wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> On 2024/10/18 15:32, Kefeng Wang wrote:
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> On 2024/10/18 13:23, Barry Song wrote:
> >>>>>>>>>>>>> On Fri, Oct 18, 2024 at 6:20 PM Kefeng Wang wrote:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> On 2024/10/17 23:09, Matthew Wilcox wrote:
> >>>>>>>>>>>>>>> On Thu, Oct 17, 2024 at 10:25:04PM +0800, Kefeng Wang wrote:
> >>>>>>>>>>>>>>>> Directly use folio_zero_range() to cleanup code.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Are you sure there's no performance regression introduced by this?
> >>>>>>>>>>>>>>> clear_highpage() is often optimised in ways that we can't optimise for
> >>>>>>>>>>>>>>> a plain memset().  On the other hand, if the folio is large, maybe a
> >>>>>>>>>>>>>>> modern CPU will be able to do better than clear-one-page-at-a-time.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Right, I missed this; clear_page() might be better than memset(). I
> >>>>>>>>>>>>>> changed this one while looking at shmem_writepage(), which was already
> >>>>>>>>>>>>>> converted from clear_highpage() to folio_zero_range(). I also grepped
> >>>>>>>>>>>>>> for folio_zero_range(); there are some other users of it:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> fs/bcachefs/fs-io-buffered.c:  folio_zero_range(folio, 0, folio_size(folio));
> >>>>>>>>>>>>>> fs/bcachefs/fs-io-buffered.c:  folio_zero_range(f, 0, folio_size(f));
> >>>>>>>>>>>>>> fs/bcachefs/fs-io-buffered.c:  folio_zero_range(f, 0, folio_size(f));
> >>>>>>>>>>>>>> fs/libfs.c:                    folio_zero_range(folio, 0, folio_size(folio));
> >>>>>>>>>>>>>> fs/ntfs3/frecord.c:            folio_zero_range(folio, 0, folio_size(folio));
> >>>>>>>>>>>>>> mm/page_io.c:                  folio_zero_range(folio, 0, folio_size(folio));
> >>>>>>>>>>>>>> mm/shmem.c:                    folio_zero_range(folio, 0, folio_size(folio));
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> IOW, what performance testing have you done with this patch?
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> No performance test before, but I wrote a testcase:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> 1) allocate N large folios (folio_alloc(PMD_ORDER))
> >>>>>>>>>>>>>> 2) measure the time (us) to clear all N folios with
> >>>>>>>>>>>>>>    clear_highpage / folio_zero_range / folio_zero_user
> >>>>>>>>>>>>>> 3) release the N folios
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> The results (5 runs each) on my machine:
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> N=1
> >>>>>>>>>>>>>>         clear_highpage   folio_zero_range   folio_zero_user
> >>>>>>>>>>>>>>   1          69                 74               177
> >>>>>>>>>>>>>>   2          57                 62               168
> >>>>>>>>>>>>>>   3          54                 58               234
> >>>>>>>>>>>>>>   4          54                 58               157
> >>>>>>>>>>>>>>   5          56                 62               148
> >>>>>>>>>>>>>> avg          58                 62.8             176.8
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> N=100
> >>>>>>>>>>>>>>         clear_highpage   folio_zero_range   folio_zero_user
> >>>>>>>>>>>>>>   1       11015              11309             32833
> >>>>>>>>>>>>>>   2       10385              11110             49751
> >>>>>>>>>>>>>>   3       10369              11056             33095
> >>>>>>>>>>>>>>   4       10332              11017             33106
> >>>>>>>>>>>>>>   5       10483              11000             49032
> >>>>>>>>>>>>>> avg     10516.8            11098.4           39563.4
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> N=512
> >>>>>>>>>>>>>>         clear_highpage   folio_zero_range   folio_zero_user
> >>>>>>>>>>>>>>   1       55560              60055            156876
> >>>>>>>>>>>>>>   2       55485              60024            157132
> >>>>>>>>>>>>>>   3       55474              60129            156658
> >>>>>>>>>>>>>>   4       55555              59867            157259
> >>>>>>>>>>>>>>   5       55528              59932            157108
> >>>>>>>>>>>>>> avg     55520.4            60001.4          157006.6
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> folio_zero_user() does many cond_resched() calls, so its time fluctuates
> >>>>>>>>>>>>>> a lot; clear_highpage() is better than folio_zero_range(), as you said.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Maybe add a new helper so that all folio_zero_range(folio, 0, folio_size(folio))
> >>>>>>>>>>>>>> callers can be converted to clear_highpage() + flush_dcache_folio()?
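(For illustration only: a minimal sketch of what such a helper could look like, assuming it simply wraps the existing clear_highpage() loop with a single flush_dcache_folio(). The name folio_zero() is the one floated later in this thread; no such helper exists upstream.)

static inline void folio_zero(struct folio *folio)
{
	long i, nr = folio_nr_pages(folio);

	/* clear_highpage() is often architecture-optimised, unlike a plain memset() */
	for (i = 0; i < nr; i++)
		clear_highpage(folio_page(folio, i));
	/* one dcache flush for the whole folio */
	flush_dcache_folio(folio);
}

Callers that currently do folio_zero_range(folio, 0, folio_size(folio)) could then switch to folio_zero(folio).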
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> If this also improves performance for other existing callers of
> >>>>>>>>>>>>> folio_zero_range(), then that's a positive outcome.
> >>>>>>>>>>>>
> >>>>>>>
> >>>>>>> ...
> >>>>>>>
> >>>>>>>>>> hi Kefeng,
> >>>>>>>>>> what's your point? providing a helper like clear_highfolio() or similar?
> >>>>>>>>>
> >>>>>>>>> Yes, from the above test, using clear_highpage()/flush_dcache_folio() is
> >>>>>>>>> better than using folio_zero_range() for folio zeroing (especially for a
> >>>>>>>>> large folio), so I'd like to add a new helper, maybe named folio_zero()
> >>>>>>>>> since it zeroes the whole folio.
> >>>>>>>>
> >>>>>>>> We already have a helper like folio_zero_user()?
> >>>>>>>> Is it not good enough?
> >>>>>>>
> >>>>>>> Since it has many cond_resched() calls, its performance is the worst...
> >>>>>>
> >>>>>> Not exactly? It should have zero cost for a preemptible kernel.
> >>>>>> For a non-preemptible kernel, it helps avoid clearing the folio
> >>>>>> from occupying the CPU and starving other processes, right?
> >>>>>
> >>>>> --- a/mm/shmem.c
> >>>>> +++ b/mm/shmem.c
> >>>>>
> >>>>> @@ -2393,10 +2393,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
> >>>>>  	 * it now, lest undo on failure cancel our earlier guarantee.
> >>>>>  	 */
> >>>>>
> >>>>>  	if (sgp != SGP_WRITE && !folio_test_uptodate(folio)) {
> >>>>> -		long i, n = folio_nr_pages(folio);
> >>>>> -
> >>>>> -		for (i = 0; i < n; i++)
> >>>>> -			clear_highpage(folio_page(folio, i));
> >>>>> +		folio_zero_user(folio, vmf->address);
> >>>>>  		flush_dcache_folio(folio);
> >>>>>  		folio_mark_uptodate(folio);
> >>>>>  	}
> >>>>>
> >>>>> Do we perform better or worse with the following?
> >>>>
> >>>> This path is for SGP_FALLOC, where vmf = NULL, so we could use
> >>>> folio_zero_user(folio, 0). I think the performance is worse; I will
> >>>> retest once I can access the hardware.
> >>>
> >>> Perhaps, since the current code uses clear_highpage(). Does using
> >>> index << PAGE_SHIFT as the addr_hint offer any benefit?
> >>>
> >>
> >> When using folio_zero_user(), the performance is very bad with the above
> >> fallocate test (mount huge=always):
> >>
> >>         folio_zero_range   clear_highpage    folio_zero_user
> >> real    0m1.214s           0m1.111s          0m3.159s
> >> user    0m0.000s           0m0.000s          0m0.000s
> >> sys     0m1.210s           0m1.109s          0m3.152s
> >>
> >> I tried with addr_hint = 0 and index << PAGE_SHIFT; no obvious difference.
> >
> > Interesting. Does your kernel have preemption disabled or
> > preemption_debug enabled?
>
> ARM64 server, CONFIG_PREEMPT_NONE=y

This explains why the performance is much worse.

> >
> > If not, it makes me wonder whether folio_zero_user() in
> > alloc_anon_folio() is actually improving performance as expected,
> > compared to the simpler folio_zero() you plan to implement. :-)
>
> Yes, maybe. folio_zero_user() (formerly clear_huge_page()) comes from
> 47ad8475c000 ("thp: clear_copy_huge_page"); the original clear_huge_page()
> was used for HugeTLB, where clearing a PUD-sized page may take a long time.
> But for PMD-sized or other large folios, cond_resched() is not necessary,
> since we already have some folio_zero_range() calls that clear large folios
> and no issue was reported.

Probably worth an optimization: calling cond_resched() for each page seems
too aggressive and unnecessary.

diff --git a/mm/memory.c b/mm/memory.c
index 0f614523b9f4..5fc38347d782 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6738,6 +6738,19 @@ EXPORT_SYMBOL(__might_fault);
 #endif
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
+/*
+ * To prevent process_huge_page() from starving other processes,
+ * give other processes a chance to run after each batch of subpages.
+ */
+static inline void batched_cond_resched(int *nr)
+{
+#define BATCHED_PROCESS_NR 64
+	if ((*nr)++ < BATCHED_PROCESS_NR)
+		return;
+	cond_resched();
+	*nr = 0;
+}
+
 /*
  * Process all subpages of the specified huge page with the specified
  * operation.  The target subpage will be processed last to keep its
@@ -6748,7 +6761,7 @@ static inline int process_huge_page(
 	int (*process_subpage)(unsigned long addr, int idx, void *arg),
 	void *arg)
 {
-	int i, n, base, l, ret;
+	int i, n, base, l, ret, processed_nr = 0;
 	unsigned long addr = addr_hint &
 		~(((unsigned long)nr_pages << PAGE_SHIFT) - 1);
 
@@ -6761,7 +6774,7 @@ static inline int process_huge_page(
 		l = n;
 		/* Process subpages at the end of huge page */
 		for (i = nr_pages - 1; i >= 2 * n; i--) {
-			cond_resched();
+			batched_cond_resched(&processed_nr);
 			ret = process_subpage(addr + i * PAGE_SIZE, i, arg);
 			if (ret)
 				return ret;
@@ -6772,7 +6785,7 @@ static inline int process_huge_page(
 		l = nr_pages - n;
 		/* Process subpages at the begin of huge page */
 		for (i = 0; i < base; i++) {
-			cond_resched();
+			batched_cond_resched(&processed_nr);
 			ret = process_subpage(addr + i * PAGE_SIZE, i, arg);
 			if (ret)
 				return ret;
@@ -6786,11 +6799,11 @@ static inline int process_huge_page(
 		int left_idx = base + i;
 		int right_idx = base + 2 * l - 1 - i;
 
-		cond_resched();
+		batched_cond_resched(&processed_nr);
 		ret = process_subpage(addr + left_idx * PAGE_SIZE, left_idx, arg);
 		if (ret)
 			return ret;
-		cond_resched();
+		batched_cond_resched(&processed_nr);
 		ret = process_subpage(addr + right_idx * PAGE_SIZE, right_idx, arg);
 		if (ret)
 			return ret;
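(For context on where this batching would take effect: folio_zero_user() clears a large folio one subpage at a time through process_huge_page(), roughly along the lines of the sketch below. This paraphrases the mm/memory.c path and may differ in detail from any given kernel version.)

static int clear_subpage(unsigned long addr, int idx, void *arg)
{
	struct folio *folio = arg;

	/* per-page clear; addr lets the architecture pick a user-targeted variant */
	clear_user_highpage(folio_page(folio, idx), addr);
	return 0;
}

void folio_zero_user(struct folio *folio, unsigned long addr_hint)
{
	unsigned int nr_pages = folio_nr_pages(folio);

	if (unlikely(nr_pages > MAX_ORDER_NR_PAGES))
		clear_gigantic_page(folio, addr_hint, nr_pages);
	else
		/* each subpage goes through process_subpage(), and today also
		 * through a cond_resched() per subpage */
		process_huge_page(addr_hint, nr_pages, clear_subpage, folio);
}

With BATCHED_PROCESS_NR = 64, clearing a PMD-sized folio (512 subpages) would issue roughly 512 / 64 = 8 cond_resched() calls instead of one per subpage.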