d="scan'208";a="10953352" Received: from fmviesa010.fm.intel.com ([10.60.135.150]) by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 May 2024 18:44:52 -0700 X-CSE-ConnectionGUID: wYuboV9lREix8TBbEwrU0w== X-CSE-MsgGUID: H/caSaOKTEKyziOlB2FYiA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.08,146,1712646000"; d="scan'208";a="29173330" Received: from unknown (HELO yhuang6-desk2.ccr.corp.intel.com) ([10.238.208.55]) by fmviesa010-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 08 May 2024 18:44:46 -0700 From: "Huang, Ying" To: David Rientjes Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org, Michal Hocko , Dan Williams , John Hubbard , Zi Yan , Bharata B Rao , Dave Jiang , "Aneesh Kumar K.V" , Alistair Popple , Christoph Lameter , Andrew Morton , Linus Torvalds , Dave Hansen , Mel Gorman , Jon Grimm , Gregory Price , Wei Xu , Johannes Weiner , SeongJae Park , David Hildenbrand , peterz@infradead.org, a.manzanares@samsung.com Subject: Re: [LSF/MM/BPF TOPIC] Locally attached memory tiering In-Reply-To: <20240508213918.7ndnrjs6pxnklbpi@offworld> (Davidlohr Bueso's message of "Wed, 8 May 2024 14:39:18 -0700") References: <20240508213918.7ndnrjs6pxnklbpi@offworld> Date: Thu, 09 May 2024 09:42:54 +0800 Message-ID: <87pltviwv5.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii X-Rspam-User: X-Stat-Signature: 9it4xc84n9bdeztknofcacf6ie1zzqag X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: D6D4018000D X-HE-Tag: 1715219127-490878 X-HE-Meta: U2FsdGVkX18oAHS5Pmgnv7NfesRvFdKoKxpzPjmLYbYqcSdl4XcHBe04e+yhC7ZRkX0gkE6pojJcr+ztqBEIiNMBJpDTo9rnfO2yvnSNfF+Q5eFFDEEX/syZvZrpE1u7LHXrDQZkjI7WVPzcl+cDvfkiAmXKV2z4+Wys0LHT9nujH19Yr/DcD+/qMbiPzxebpXUcCXMKf5rz8ACnsykoS2cuNpKyW06mUsinR1D3Vy5e2xOCF4hNDgKIGhoSCkxlluLul3gC8AKn9NwGWZRHfgHsXfbZO8o+f3FDKW38Iz+CaikB8TuoYmcFJwuD7LMGtBI0heSkRWi/S1gCbb7H9itz2vYD82AKYgMHn+ije4ict5c2pPd+ZLCh9JkmGHPeMnGiVlTGpgjPfSulawPU3cy6nHo3rnH5GIVgi2uZo2D1Bc+f7PrFM2i7jAFItG9QzDUVcy0T+xOk1R7sH73DY0nGtsMy1mu5E3uIWPtw9Yu4MMlgW1fg3lKbWORTm/qK3+jRT/ARGMpwQraFJyGSZ9Q0PV28GAtwFepQbjNKd1Hi7uuFRFT6XgSpedfRvIMGG2Ju/uLyP0wvFUqD8QltT/JTgkG8e4rwdQG8AQj2SI4LO0W2UszLffkKxQQtz0hHtg1kZ900ngQ90ekej6ctFGysX7BHy38GpJFMpHTdnacTurPnu7aV6jlgF56pRoLzFFwnQojQ6gq8U78QvniuEmukta+5HKCHyM271K2F9RmBi0J6yEh5kkkxjaivDvSI4FDwifYvsMHa3JSOlUj+OQy7tD6k+/4+FSXDM+jWfVMNVSnf3p3eqqvqm+H54KYd65GMOsJJmFKnm8DjznIrwgcHbaGHUyVHo8Oz0JoI/7dnsA1q2Oru8J3e39jNj3ZuHhFPI1hoymxSoy39Mw6lx8dJq8AQJHw3OQspm0VDtiQz9cyX6XxBpKQe30K/aMq0dtxvIc9g9r0a5Y5/ssO gePV4vcQ ro+3k5OnH/jgTDVlwsVuuvkaEhQAgHWE+foIZfGDYW4iR4RMp9zYqgtlm2hEMg4J4pwMm04fYGeywURGU62nKKvSvOIATKihPfyRWlaOFChyMRFqWoU9GphK2e5r6wtmjOwEucWObZfXDpR6vImE4r/cahTZikDZYXWEF9DO6AgZCuJ4WQI5bCiok7ALsh7bAYNDoNjwienzxgSzzJBkYzCQ1NbDbCqcYaZAuhk5UgkabCz3w9H0pO07w4PKnOdbYo34HHUMKWPqk21eJypM+8lYQx/GOeJxCZ1dPTC7QMNxZ3OcpG941vSabbTZxNbctHC99pcNzdGyT9NdUdOZUFaE3NiDoJz2P2VWy/ECWyEtl6POoKSZAxwujFe4zt+bmOzSpVGQ9dxVNwflJO8dkyEQPEeWGpkIvxWfBuogUacFl/oLHkAyT0gbfTKAExKBlnKOPn0v5BpGqAOQ= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Hi, Davidlohr, Davidlohr Bueso writes: > On Mon, 06 May 2024, David Rientjes wrote: > >>Hi all, >> >>I think it would be very worthwhile to have a block set aside for >>discussion on locally attached memory tiering extensions at LSF/MM/BPF >>2024. 
>
> +1
>
> fyi Adam's proposal which touches on both cxl and tiering:
>
> https://lore.kernel.org/all/9bf86b97-319f-4f58-b658-1fe3ed0b1993@nmtadam.samsung/
>
>>Primarily interested in discussing Linux enlightenment for CXL 1.1 and
>>later type-3 memory expansion devices (CXL.mem). I think we could touch
>>on CXL 2.0 and later memory pooling architectures if we have time and
>>there is interest, but the primary focus here would be locally attached.
>>
>>Based on the premise for a Memory Tiering Working Group[1], there is
>>widespread interest in the foundational topics for generally useful Linux
>>enlightenment:
>>
>> - Decoupling CPU balancing from memory balancing (or obsoleting CPU
>>   balancing entirely)
>>
>>   + John Hubbard notes this would be useful for GPUs:
>>
>>     a) GPUs have their own processors that are invisible to the kernel's
>>        NUMA "which tasks are active on which NUMA nodes" calculations,
>>        and
>>
>>     b) Similar to where CXL is generally going, we have already built
>>        fully memory-coherent hardware, which includes memory-only NUMA
>>        nodes.
>
> +Cc peterz
>
>> - In-kernel hot memory abstraction, informed by hardware hinting drivers
>>   (including some architectures like Power10), usable as a NUMA Balancing
>>   backend for promotion and other areas of the kernel like transparent
>>   hugepage utilization
>>
>> - NUMA and memory tiering enlightenment for accelerators, such as for
>>   optimal use of GPU memory, extremely important for a cloud provider
>>   (hint hint :)
>>
>> - Asynchronous memory promotion independent of task_numa_fault() while
>>   considering the cost of page migration (due to identifying cold memory)
>
> This would be nice for users who like to disable NUMA balancing. But
> overall, when compared to anything hardware can give us (a la ppc, without
> the required kernel overhead of x86-based counters), I fear that software
> solutions will always be found wanting. And, afaik, NUMA balancing based
> promotion is still one of the top pain points in memory tiering.
>
> So, of course, improving the software approach is still a good thing. Fyi,
> along these lines, improving/optimizing the current NUMA balancing approach
> has proven irrelevant at the larger scale of benchmarks, afaik. For
> example, (active) LRU based promotion instead of blindly promoting the
> faulting page, which could be rarely used.

With the default configuration, the current NUMA balancing based promotion
solution will try to promote almost any faulting page. To select hot pages
for promotion and to control thrashing between NUMA nodes, the promotion
rate limit needs to be configured, for example via:

echo 200 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps

With that, at most 200MB of hot pages will be selected and promoted per
second. Can you try it? (A minimal sketch of the relevant knobs follows
below, after the quoted text.)

> Benchmarks show a significant reduction in a lot of the promote/demote
> traffic dealing with ping-pong cases, but unfortunately little to no
> tangible performance wins in actual benchmark numbers. Similarly, the
> proposed migrc[1] shows great TLB flushing benefits but minimal benchmark
> (XSBench) improvement.
>
> ... which brings me to the topic of benchmarking. What are the workloads
> people care about, beyond pmbench? I tend to use oltp based database
> workloads with wss/buffers larger than the total amount of fast memory
> nodes.
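To make the rate limit above concrete, here is a minimal sketch of the two
sysctls involved. This is a sketch only: the tiering mode value and the
sysctl paths follow the kernel's sysctl documentation, and 200 is just an
example value to be tuned per workload.

# Enable NUMA balancing in memory tiering mode (value 2), so that hot
# pages on slow-tier (e.g. CXL) nodes become candidates for promotion.
echo 2 > /proc/sys/kernel/numa_balancing

# Rate-limit promotion: select and promote at most ~200MB of hot pages
# per second per node, filtering out cold pages and limiting thrashing
# between NUMA nodes.
echo 200 > /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps

# Verify both settings.
grep . /proc/sys/kernel/numa_balancing \
       /proc/sys/kernel/numa_balancing_promote_rate_limit_MBps

With mode 2, promotion is driven by hint faults on slow-tier pages, and the
rate limit bounds the per-node promotion throughput.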
>
>> - What role userspace plays in this decision-making and how we can
>>   extend the default policy and mechanisms in the kernel to allow for it
>>   if necessary
>>
>>Additional topics that you find interesting are also very helpful!
>>
>>I'm biased toward a generally useful solution that would leverage the
>>kernel as the ultimate source of truth for page hotness, one that can be
>>extended for multiple use cases, one of which is memory tiering support.
>>But certainly if there are other approaches, we can discuss those as well.
>>
>>A few main goals from this discussion:
>>
>> - Ensure that proposals address, or can be extended to address, the
>>   emerging needs of the various use cases that users may have
>>
>> - Surface any constraints that stakeholders may find to be prohibitive
>>   for support in the core MM subsystem
>>
>> - Alignment and division of work for developers who are actively looking
>>   to contribute to this area
>>
>>As I'm just one of many stakeholders for this discussion, I'd nominate
>>Michal Hocko to moderate it if he's willing to do so. If he's so willing,
>>we'd be in good hands :)
>>
>>[1] https://lore.kernel.org/linux-mm/45d850ec-623b-7c07-c266-e948cdbf1f62@linux.com/T/
>
> Thanks,
> Davidlohr
>
> [1] https://lore.kernel.org/linux-mm/20240226030613.22366-1-byungchul@sk.com/

-- 
Best Regards,
Huang, Ying