From: "Huang, Ying" <ying.huang@intel.com>
To: Baolin Wang
Subject: Re: [PATCH v2 0/2] Add a new scheme to support demotion on tiered memory system
Date: Thu, 23 Dec 2021 09:07:05 +0800
In-Reply-To: (Baolin Wang's message of "Wed, 22 Dec 2021 19:14:39 +0800")
Message-ID: <87a6gsceo6.fsf@yhuang6-desk2.ccr.corp.intel.com>

Baolin Wang writes:

> Hi,
>
> Now, on tiered memory systems with
> different memory types, the reclaim path in shrink_page_list() already
> supports demoting pages to a slow memory node instead of discarding them.
> However, at that point the fast memory node's watermark is already tight,
> which increases the memory allocation latency during page demotion. So a
> new method to proactively demote cold pages from user space would be more
> helpful.
>
> We can rely on DAMON in user space to help monitor the cold memory on the
> fast memory node, and demote the cold pages to the slow memory node
> proactively to keep the fast memory node in a healthy state.
>
> This patch set introduces a new scheme named DAMOS_DEMOTE to support this
> feature, and it works well in my testing. Any comments are welcome. Thanks.

As a performance optimization patch, it's better to provide some test
results.

Another question is: why shouldn't we do this in user space?  With DAMON,
it's possible to export cold memory region information to user space, then
we can use move_pages() to migrate those regions from DRAM to PMEM.  What's
the problem with that?

Best Regards,
Huang, Ying

> Changes from v1:
>  - Reuse the demote_page_list() function.
>  - Fix some comment style issues.
>  - Move the DAMOS_DEMOTE definition to the correct place.
>  - Rename some functions.
>  - Change damos_isolate_page() to return void.
>  - Remove an unnecessary PAGE_ALIGN() in damos_demote().
>  - Fix the return value of damos_demote().
>
> Baolin Wang (2):
>   mm: Export the demote_page_list() function
>   mm/damon: Add a new scheme to support demotion on tiered memory system
>
>  include/linux/damon.h |   3 ++
>  mm/damon/dbgfs.c      |   1 +
>  mm/damon/vaddr.c      | 147 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/internal.h         |   2 +
>  mm/vmscan.c           |   4 +-
>  5 files changed, 155 insertions(+), 2 deletions(-)
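
[Editor's note: for illustration only, a minimal user-space sketch of the
alternative suggested above: take a cold region reported by DAMON's user
space interface and migrate it to the slow tier with move_pages(2).  The
PMEM node id, the program arguments, and the demote_region() helper are
assumptions made up for this sketch; in practice the region start/length
would come from DAMON's exported monitoring results.]

/*
 * Hypothetical sketch: demote one cold region of a task from DRAM to a
 * PMEM node using move_pages(2).  Build with: cc -o demote demote.c -lnuma
 */
#include <numaif.h>     /* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PMEM_NODE 1     /* assumed node id of the slow (PMEM) tier */

static int demote_region(pid_t pid, unsigned long start, unsigned long len,
			 unsigned long page_size)
{
	unsigned long i, count = len / page_size;
	void **pages = calloc(count, sizeof(*pages));
	int *nodes = calloc(count, sizeof(*nodes));
	int *status = calloc(count, sizeof(*status));
	long ret = -1;

	if (!pages || !nodes || !status)
		goto out;

	for (i = 0; i < count; i++) {
		/* one entry per page in the cold region */
		pages[i] = (void *)(start + i * page_size);
		nodes[i] = PMEM_NODE;	/* target node for every page */
	}

	/* Move the cold pages of task 'pid' to the PMEM node. */
	ret = move_pages(pid, count, pages, nodes, status, MPOL_MF_MOVE);
	if (ret < 0)
		perror("move_pages");
out:
	free(pages);
	free(nodes);
	free(status);
	return ret < 0 ? -1 : 0;
}

int main(int argc, char *argv[])
{
	unsigned long page_size = sysconf(_SC_PAGESIZE);

	if (argc != 4) {
		fprintf(stderr, "usage: %s <pid> <start> <len>\n", argv[0]);
		return 1;
	}

	/* <start> and <len> would come from a DAMON cold-region report. */
	return demote_region((pid_t)atol(argv[1]),
			     strtoul(argv[2], NULL, 0),
			     strtoul(argv[3], NULL, 0),
			     page_size) ? 1 : 0;
}

[The sketch only shows the migration step; deciding which regions are cold
enough to demote, and how often to do so, would still be policy in the
user-space tool.]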