From: Chris Li <chrisl@kernel.org>
Date: Mon, 17 Jun 2024 21:35:32 -0700
Subject: Re: [PATCH 0/2] mm: swap: mTHP swap allocator base on swap cluster order
To: "Huang, Ying"
Cc: Andrew Morton, Kairui Song, Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Barry Song
In-Reply-To: <87a5jp6xuo.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20240524-swap-allocator-v1-0-47861b423b26@kernel.org> <87cyp5575y.fsf@yhuang6-desk2.ccr.corp.intel.com> <875xuw1062.fsf@yhuang6-desk2.ccr.corp.intel.com> <87o78mzp24.fsf@yhuang6-desk2.ccr.corp.intel.com> <875xum96nn.fsf@yhuang6-desk2.ccr.corp.intel.com> <87wmmw6w9e.fsf@yhuang6-desk2.ccr.corp.intel.com> <87a5jp6xuo.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Thu, Jun 13, 2024 at 1:40 AM Huang, Ying wrote:
>
> Chris Li writes:
>
> > On Mon, Jun 10, 2024 at 7:38 PM Huang, Ying wrote:
> >>
> >> Chris Li writes:
> >>
> >> > On Wed, Jun 5, 2024 at 7:02 PM Huang, Ying wrote:
> >> >>
> >> >> Chris Li writes:
> >> >>
> >> >
> >> >> > On the page allocation side, we have hugetlbfs, which reserves
> >> >> > some memory for high-order pages.
> >> >> > We should have something similar to allow reserving some
> >> >> > high-order swap entries without getting polluted by low-order
> >> >> > ones.
> >> >>
> >> >> TBH, I don't like the idea of high-order swap entry reservation.
> >> > May I know more about why you don't like the idea? I understand this
> >> > can be controversial, because previously we liked to treat THP as a
> >> > best-effort approach: if for some reason we can't make a THP, we use
> >> > order 0 as the fallback.
> >> >
> >> > For discussion purposes, I want to break it down into smaller steps:
> >> >
> >> > First, can we agree that the following usage case is reasonable:
> >> > as Barry has shown, zsmalloc can compress sizes bigger than 4K with
> >> > both a better compression ratio and a CPU performance gain.
> >> > https://lore.kernel.org/linux-mm/20240327214816.31191-1-21cnbao@gmail.com/
> >> >
> >> > So the goal is to give THP/mTHP a reasonable success rate under
> >> > mixed-size swap allocation, even after low-order or high-order swap
> >> > requests have overflowed the swap file size. The allocator can still
> >> > recover from that once some swap entries get freed.
> >> >
> >> > Please let me know if you think the above usage case and goal are
> >> > not reasonable for the kernel.
> >>
> >> I think that it's reasonable to improve the success rate of high-order
> >
> > Glad to hear that.
> >
> >> swap entry allocation. I just think that it's hard to use the
> >> reservation-based method. For example, how much should be reserved?
> >
> > Understood, it is harder to use than a fully transparent method, but
> > still better than no solution at all. The alternative right now is
> > that we can't do it at all.
> >
> > Regarding how much we should reserve: similarly, how do you choose
> > your swap file size? If you choose N, why not N*120% or N*80%? That
> > did not stop us from having a swapfile, right?
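Barry's zsmalloc point quoted above, that one larger compression unit can beat independently compressed 4K pages, can be sketched in userspace. This is a toy model only: plain zlib stands in for the kernel compressor, and the synthetic page data is invented for illustration.

```python
import random
import zlib

random.seed(0)
# Four synthetic 4 KiB "pages" built from a shared vocabulary, the way
# neighboring pages of one process often share byte patterns.
words = [bytes(random.choices(range(97, 123), k=8)) for _ in range(64)]
pages = [b"".join(random.choices(words, k=512)) for _ in range(4)]

separate = sum(len(zlib.compress(p)) for p in pages)  # four 4 KiB units
combined = len(zlib.compress(b"".join(pages)))        # one 16 KiB unit
print(f"separate: {separate} bytes, combined: {combined} bytes")
# The combined stream pays the stream header and dictionary warm-up cost
# once instead of four times, so its total is smaller.
```

Swapping out at mTHP order is what makes such larger compression units possible in the first place, which is why the high-order allocation success rate matters here.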
> >
> >> Why would the system OOM when there's still swap space available? And
> >> so forth.
> >
> > Keep in mind that the reservation is an option. If you prefer the old
> > behavior, you don't have to use the reservation. That shouldn't be a
> > reason to stop others who want to use it. We don't have an alternative
> > solution for long-running mixed-size allocation yet. If there is one,
> > I would like to hear it.
>
> It's not enough to make it optional. When you run into an issue, you
> need to debug it. And you may be debugging an issue on a system that
> was configured by someone else.

That is true of kernel development in general, options or not. If there
is a bug in my patch, I will need to debug and fix it, or the patch might
be reverted. I don't see that as a reason for or against taking the
option path. The option just means the user taking it needs to understand
the trade-off and accept the defined behavior of that option.

>
> >> So, I prefer the transparent methods. Just like THP vs. hugetlbfs.
> >
> > Me too. I prefer transparent over reservation if it can achieve the
> > same goal. Do we have a fully transparent method specced out? How do
> > we achieve full transparency while also avoiding the fragmentation
> > caused by mixed-order allocation/free?
> >
> > Keep in mind that we are still in the early stage of mTHP swap
> > development; I can have the reservation patch ready relatively easily.
> > If you come up with a better transparent-method patch that can achieve
> > the same goal later, we can use it instead.
>
> Because we are still in the early stage, I think that we should try to
> improve the transparent solution first. Personally, what I don't like
> is that we don't work on the transparent solution because we have the
> reservation solution.

Do you have a roadmap or a design for the transparent solution that you
can share? I am interested to know what the short-term step (e.g.
a month) in this transparent solution would be, so we can compare the
different approaches. I can't reason much from the name "transparent
solution" by itself; I need more technical details.

Right now we have a clear usage case we want to support: swapping mTHP
in/out with bigger zsmalloc buffers. We can start with that limited usage
case first, then move to more general ones.

> >> >> that's really important for you, I think that it's better to design
> >> >> something like hugetlbfs vs core mm, that is, be separated from the
> >> >> normal swap subsystem as much as possible.
> >> >
> >> > I am citing hugetlbfs just to make the point of using reservation,
> >> > or isolation of the resource, to prevent the mixing fragmentation
> >> > that exists in core mm.
> >> > I am not suggesting copying the hugetlbfs implementation into the
> >> > swap system. Unlike hugetlbfs, swap allocation is typically done
> >> > from the kernel; it is transparent to the application. I don't
> >> > think separating from the swap subsystem is a good way to go.
> >> >
> >> > This comes down to why you don't like the reservation. E.g. if we
> >> > used two swapfiles, one allocated purely for high order, would that
> >> > be better?
> >>
> >> Sorry, my words weren't accurate. Personally, I just think that it's
> >> better to make the reservation-related code not too intrusive.
> >
> > Yes. I will try to make it not too intrusive.
> >
> >> And, before reservation, we need to consider something else first.
> >> Is it generally good to swap in with the swap-out order? Should we
> >
> > When we have the reservation patch (or other means to sustain
> > mixed-size swap allocation/free), we can test it out to get more data
> > to reason about it.
> > I consider the swap-in size policy an orthogonal issue.
>
> No. I don't think so. If you swap out at a higher order but swap in at
> a lower order, you make the swap clusters fragmented.
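The fragmentation effect described above can be sketched with a toy model. The slot counts, the first-fit policy, and the 60% swap-in rate are all invented for illustration; this is not the kernel's actual cluster allocator.

```python
import random

random.seed(1)
CLUSTER_SLOTS = 16
clusters = [[False] * CLUSTER_SLOTS for _ in range(8)]  # False = free slot

def alloc(order):
    """First-fit: claim a naturally aligned run of 2**order free slots."""
    n = 1 << order
    for cluster in clusters:
        for base in range(0, CLUSTER_SLOTS, n):
            if not any(cluster[base:base + n]):
                for i in range(base, base + n):
                    cluster[i] = True
                return cluster, base
    return None

# Swap out 32 order-2 (16 KiB) folios: this fills all 128 slots.
outs = [alloc(2) for _ in range(32)]
assert None not in outs

# Swap them back in as order-0 pages: each 4 KiB page is freed (or kept)
# individually, so the order-2 runs break apart.
for cluster, base in outs:
    for i in range(base, base + 4):
        if random.random() < 0.6:  # this page was swapped in and freed
            cluster[i] = False

free_slots = sum(not used for c in clusters for used in c)
free_runs = sum(not any(c[b:b + 4])
                for c in clusters for b in range(0, CLUSTER_SLOTS, 4))
print(f"{free_slots} free slots, but only {free_runs} aligned order-2 runs")
```

Most of the free space ends up in runs too short for an order-2 allocation, even though plenty of individual slots are free, which is the fragmentation being debated.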
Sounds like that is the reason to swap in at the same order as the
swap-out.

In any case, my original point still stands. We need the ability to
allocate high-order swap entries with a reasonable success rate *before*
we have the option to choose which size to swap in. If allocating a
high-order swap entry always fails, we are forced to use the low-order
one; there is no option to choose from. We can't evaluate "is it
generally good to swap in with the swap-out order?" by actual runs.

Chris