Date: Mon, 22 Nov 2021 03:50:47 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Shakeel Butt
Cc: David Hildenbrand, "Kirill A. Shutemov", Yang Shi, Zi Yan,
 Matthew Wilcox, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: split thp synchronously on MADV_DONTNEED
Message-ID: <20211122005047.ufnyvqlqu55c5trt@box>
References: <20211120201230.920082-1-shakeelb@google.com>
In-Reply-To: <20211120201230.920082-1-shakeelb@google.com>

On Sat, Nov 20, 2021 at 12:12:30PM -0800, Shakeel Butt wrote:
> Many applications do sophisticated management of their heap memory for
> better performance but with low cost. We have a bunch of such
> applications running in our production environment; examples include
> caching and data storage services. These applications keep their hot
> data on THPs for better performance and release the cold data through
> MADV_DONTNEED to keep the memory cost low.
>
> The kernel defers the split and release of THPs until there is memory
> pressure. This complicates the memory management of these sophisticated
> applications, which then need to look into the low-level kernel
> handling of THPs to better gauge their headroom for expansion. In
> addition, these applications are very latency sensitive and would
> prefer not to face memory reclaim, given its non-deterministic nature.
>
> This patch lets such applications not worry about the low-level
> handling of THPs in the kernel and splits the THPs synchronously on
> MADV_DONTNEED.

Have you considered the impact on short-lived tasks, where paying the
splitting tax would hurt performance without any benefit? Maybe a
separate madvise operation is needed? I don't know.

> Signed-off-by: Shakeel Butt
> ---
>  include/linux/mmzone.h   |  5 ++++
>  include/linux/sched.h    |  4 ++++
>  include/linux/sched/mm.h | 11 +++++++++
>  kernel/fork.c            |  3 +++
>  mm/huge_memory.c         | 50 ++++++++++++++++++++++++++++++++++++++++
>  mm/madvise.c             |  8 +++++++
>  6 files changed, 81 insertions(+)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..7fa0035128b9 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -795,6 +795,11 @@ struct deferred_split {
>  	struct list_head split_queue;
>  	unsigned long split_queue_len;
>  };
> +void split_local_deferred_list(struct list_head *defer_list);
> +#else
> +static inline void split_local_deferred_list(struct list_head *defer_list)
> +{
> +}
>  #endif
>
>  /*
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 9d27fd0ce5df..a984bb6509d9 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1412,6 +1412,10 @@ struct task_struct {
>  	struct mem_cgroup *active_memcg;
>  #endif
>
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	struct list_head *deferred_split_list;
> +#endif
> +
>  #ifdef CONFIG_BLK_CGROUP
>  	struct request_queue *throttle_queue;
>  #endif

It looks dirty. Do we really have no option to pass it down? Maybe pass
the list down via zap_details and call a new rmap remove helper if the
list is present?

>
> +void split_local_deferred_list(struct list_head *defer_list)
> +{
> +	struct list_head *pos, *next;
> +	struct page *page;
> +
> +	/* First iteration for split. */
> +	list_for_each_safe(pos, next, defer_list) {
> +		page = list_entry((void *)pos, struct page, deferred_list);
> +		page = compound_head(page);
> +
> +		if (!trylock_page(page))
> +			continue;
> +
> +		if (split_huge_page(page)) {
> +			unlock_page(page);
> +			continue;
> +		}
> +		/* split_huge_page() removes page from list on success */
> +		unlock_page(page);
> +
> +		/* corresponding get in deferred_split_huge_page. */
> +		put_page(page);
> +	}
> +
> +	/* Second iteration to put back failed pages. */
> +	list_for_each_safe(pos, next, defer_list) {
> +		struct deferred_split *ds_queue;
> +		unsigned long flags;
> +
> +		page = list_entry((void *)pos, struct page, deferred_list);
> +		page = compound_head(page);
> +		ds_queue = get_deferred_split_queue(page);
> +
> +		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> +		list_move(page_deferred_list(page), &ds_queue->split_queue);
> +		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> +
> +		/* corresponding get in deferred_split_huge_page. */
> +		put_page(page);
> +	}
> +}

Looks like a lot of copy-paste from deferred_split_scan(). Can we get
them consolidated?

-- 
 Kirill A. Shutemov