From: Wei Xu
Date: Fri, 18 Jun 2021 12:23:24 -0700
Subject: Re: [LSF/MM/BPF TOPIC] Userspace managed memory tiering
To: Zi Yan
Cc: lsf-pc@lists.linux-foundation.org, Linux MM, Dan
Williams, Dave Hansen, Tim Chen, David Rientjes, Greg Thelen, Paul Turner, Shakeel Butt

On Fri, Jun 18, 2021 at 12:13 PM Zi Yan wrote:
>
> On 18 Jun 2021, at 13:50, Wei Xu wrote:
>
> > In this proposal, I'd like to discuss userspace-managed memory tiering
> > and the kernel support that it needs.
> >
> > New memory technologies and interconnect standards make it possible to
> > have memory with different performance and cost on the same machine
> > (e.g. DRAM + PMEM, DRAM + cost-optimized memory attached via CXL.mem).
> > We can expect heterogeneous memory systems with performance
> > implications far beyond classical NUMA to become increasingly common.
> > One important use case for such tiered memory systems is improving
> > data center and cloud efficiency through better performance/TCO.
> >
> > Because different classes of applications (e.g. latency-sensitive vs.
> > latency-tolerant, high-priority vs. low-priority) have different
> > requirements, richer and more flexible memory tiering policies will
> > be needed to achieve the desired performance targets on a tiered
> > memory system. Such policies would be more effectively managed by a
> > userspace agent than by the kernel.
> > Moreover, we (Google) are explicitly trying to avoid adding a ton of
> > heuristics to the kernel to encode the policies we want on
> > multi-tenant machines, when userspace offers more flexibility.
> >
> > To manage memory tiering in userspace, we need kernel support in
> > three key areas:
> >
> > - resource abstraction and control of tiered memory;
> > - an API to monitor page accesses for making memory tiering decisions;
> > - an API to migrate pages (demotion/promotion).
> >
> > Userspace memory tiering can work with just NUMA memory nodes,
> > provided that memory resources from different tiers are abstracted
> > into separate NUMA nodes. The userspace agent can then build a
> > tiering topology among these nodes based on their distances.
> >
> > An explicit memory tiering abstraction in the kernel is preferable,
> > though, because it can not only allow the kernel to react in cases
> > where it is challenging for userspace to act in time (e.g.
> > reclaim-based demotion when the system is under DRAM pressure due to
> > a usage surge), but also enable tiering controls such as per-cgroup
> > memory tier limits. This requirement is mostly aligned with the
> > existing proposals [1] and [2].
> >
> > The userspace agent manages all migratable user memory on the system,
> > and this can be transparent from the point of view of applications.
> > To demote cold pages and promote hot pages, the userspace agent needs
> > page access information. Because this is system-wide tiering of user
> > memory, access information is needed for both mapped and unmapped
> > user pages, along with their physical page addresses. A combination
> > of page table accessed-bit scanning and struct page scanning is
> > likely needed. Such page access monitoring must also be efficient,
> > because the scans can be frequent. To return the page-level access
> > information to userspace, one proposal is to use tracepoint events.
> > The userspace agent can then use BPF programs to collect such data
> > and apply customized filters when necessary.
> >
> > The userspace agent can also make use of hardware PMU events, for
> > which the existing kernel support should be sufficient.
>
> I agree that userspace agents would be more flexible in terms of
> implementing different page migration policies if the OS provided
> interfaces for that, as IRIX did before [1].
>
> > The third area is API support for migrating pages. The existing
> > move_pages() syscall is a candidate, though it is virtual-address
> > based and cannot migrate unmapped pages. Is a physical-address-based
> > variant (e.g. move_pfns()) an acceptable proposal?
>
> PFNs cannot be moved, right? I guess you mean moving the data from one
> page to another based on the given PFN. What are the potential use
> cases for moving unmapped pages? Moving unmapped page cache pages?

Right, move_pfns() is not the best name. The idea is exactly to move
the data from one page to another based on the given PFN. Other than
page cache pages, another example is tmpfs pages that are not mmap-ed.

> Besides all of the above, using a DMA engine or other HW-provided data
> copy engine for page migration instead of CPUs [2], and migrating
> pages asynchronously, are things I am interested in, since they could
> save CPU resources when page migration between nodes becomes more
> frequent.

This is a great point, which is also what we are interested in. The
idea is that the API to migrate pages can be optimized with such HW
acceleration when available. Even with CPUs, we have found that
non-temporal stores are useful for demotions, because they bypass the
caches and work better for hardware such as PMEM.

> [1] https://studies.ac.upc.edu/dso/papers/nikolopoulos00case.pdf
> [2] https://lwn.net/Articles/784925/
>
> --
> Best Regards,
> Yan, Zi

Wei