From: Nico Pache <npache@redhat.com>
Date: Mon, 26 Aug 2024 09:40:09 -0600
Subject: Re: [RFC 0/2] mm: introduce THP deferred setting
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: linux-doc@vger.kernel.org, Andrew Morton, David Hildenbrand,
 Matthew Wilcox, Barry Song, Ryan Roberts, Baolin Wang, Lance Yang,
 Peter Xu, Rafael Aquini, Andrea Arcangeli, Jonathan Corbet,
 "Kirill A. Shutemov", Zi Yan, usamaarif642@gmail.com
Shutemov" , Zi Yan , usamaarif642@gmail.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspamd-Queue-Id: 10196100015 X-Stat-Signature: mxjkaw89afcip91c5dbkosfe1nf7qanz X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1724686837-982633 X-HE-Meta: U2FsdGVkX193dmqWR9Tx1kSK4tPFy+4uLYOTwyPpW9YAsSVXgugDsCpRyDm++gmjHXVkKQRxG1DTkZnii7KBVX5Uh/XSXxzrDWfo7yafX15WoF5e/vrtIdTCJtY8asiasW1Am0sa7axkXMNg/GATn69/AcjtWlOu0LaDEC8mGxPIT19PLW3a4aHbg5vgmFrN2xivfL+8rNkcPG2lddzHxVDS+Tn/mjmKXlhWSlA98LpQP9lRAN/dOtGGrVRoKX+GSs+j00iP7KTUZ5vH3KkzuKSEdKU59CBgaFmeP7v5WJKSERdkUlGX7CE5ODg3iKIWzD4jRkpvgDrCeyBiylBVN4rhti+/LsqKNBlE7gQ8lqDjrP3SyyPq7iIZuCiziKp7zvk6omyKZQvyjQKmtYYtTLeLj0VpgMTid5wjaTBXdvAflHRagQEnHyJkpW9GCQDiDzYZaCYTOleNuz/rryGS8LHhQlFYo9rYMbbRGZd34T4LthtYrcW+c+RmQHWhZOZFtIKsf7HoOJ992qZnbC3kYiFU24gRM+GdxN3T0DxasAcLcGJwWCevmcRmbkQBuAbYVQET1HAWv/qBuBhFfV4lXuR+gbNrBMCRDFfRFlXB4g7Ti4CTknaDDzTAqwjhYqG6f1Ig8boyd7fB7MHuzAjr93HTVnuG8+Hx1HvlCa80UFrnBb7xdSoVMh3DBcX0xgcW9QQQCjqRY5WbClsRq0a4j2lIc1iHnoAARoDQ+i+9nQXJD5UJe8VzZrR+KgnmUJ6nRvyjcqrdVe6GTEPrQ0DMPawsMt7694IrHo855sWpLYxnZ4RITxVme6ClUQcYBWUD7/x1pJIXW/T0yFtxahUFPm/OphqBqDEpdumflAxRVM59okh6z5eODu/LhK16r0qTFsR/u3UF5LkwzA/r6j/301lZy+u2Dhiv4awxskMojiFx90twBz6X2AcD1YY53/9gZGh0H3Nb4T1+pXGCW6A upM8ZoQW yofLqPUs5D7FzJ3pL27Mq29L92ws2nULFkEujtK+4zcA95r9SGjPmooe+FD0fZ6TVyCuGwLDwgZhXyMWDTfngsCp3j5V6TV8ym+nMUAPbI/RPzCzjC4eqquGb7Jdk205DZXVyhjAFHNvT3+XojkCWq46Rhqn+5DPaloSUlNftfqU+Jr/CMp8RDvdskKAVA7VJqEBIH64gClvvu89VWKVlLuYz59JxFUlb1NZwWYaJNso/V/Es1dg9X3DtrKE8bNPX5yvKxNqeatWpB1hAi08uT3UH2uQCcQGJAeGmHi44XluaCA4bTwvr7e9y6jA+/zgZxukC/u/2calN6n1onNOox2YGkOmrfeid000JF4tJ7PkI8Wns1Z2GNicBiiD5ItxWUbIciyAXt00Bhd9aMNjYe9hwVF6ejx60Yn+QKRAx6YJsMMCW6ikB6crKkUrJ/+MCQBcm/Vgpl0qgJqX/Dj1louy/lPPAk1XqKAMKaDNMa04U1FDXWiQQ3nV/JmRkbKo/nvNiYgpYnXB319n6o47Y+aibkCqw1PsmMPpn4LBqHnmetgQgCxJBffB4DAlgnWyS/XaPWe14JFgt9KbBwZn354RcYZDftUX0+13UVvZVbbsAXSg= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Tue, Jul 30, 2024 at 4:37=E2=80=AFPM Nico Pache wrot= e: > > Hi Zi Yan, > On Mon, Jul 29, 2024 at 7:26=E2=80=AFPM Zi Yan wrote: > > > > +Kirill > > > > On 29 Jul 2024, at 18:27, Nico Pache wrote: > > > > > We've seen cases were customers switching from RHEL7 to RHEL8 see a > > > significant increase in the memory footprint for the same workloads. > > > > > > Through our investigations we found that a large contributing factor = to > > > the increase in RSS was an increase in THP usage. > > > > Any knob is changed from RHEL7 to RHEL8 to cause more THP usage? > IIRC, most of the systems tuning is the same. We attributed the > increase in THP usage to a combination of improvements in the kernel, > and improvements in the libraries (better alignments). That allowed > THP allocations to succeed at a higher rate. I can go back and confirm > this tomorrow though. > > > > > > > > For workloads like MySQL, or when using allocators like jemalloc, it = is > > > often recommended to set /transparent_hugepages/enabled=3Dnever. This= is > > > in part due to performance degradations and increased memory waste. > > > > > > This series introduces enabled=3Ddefer, this setting acts as a middle > > > ground between always and madvise. If the mapping is MADV_HUGEPAGE, t= he > > > page fault handler will act normally, making a hugepage if possible. 
> >
> > Why? If the user does not explicitly want huge pages, why bother
> > providing huge pages? Wouldn't it increase the memory footprint?
>
> So we have "always", which will always try to allocate a THP when it
> can. This setting gives good performance in a lot of conditions, but
> tends to waste memory. Additionally, applications DON'T need to be
> modified to take advantage of THPs.
>
> We have "madvise", which will only satisfy allocations that are
> MADV_HUGEPAGE; this gives you granular control, and a lot of the time
> these madvises come from libraries. Unlike "always", you DO need to
> modify your application if you want to use THPs.
>
> Then we have "never", which, of course, never allocates THPs.
>
> OK, back to your question: like "madvise", "defer" gives you the
> benefits of THPs when you specifically know you want them
> (MADV_HUGEPAGE), but it also benefits applications that don't
> specifically ask for them (or can't be modified to ask for them), like
> "always" does. The applications that don't ask for THPs must wait for
> khugepaged to get them (avoiding insertions at page-fault time) -- this
> curbs a lot of memory waste and gives increased tunability over
> "always". Another added benefit is that khugepaged will most likely not
> operate on short-lived allocations, meaning that only long-standing
> memory will be collapsed into THPs.
>
> The memory waste can be tuned with max_ptes_none... let's say you want
> ~90% of your PMD to be full before collapsing into a huge page: simply
> set max_ptes_none=64. For no waste, set max_ptes_none=0, requiring all
> 512 pages to be present before the range is collapsed.
> >
> > > This allows for two things... one, applications specifically designed to
> > > use hugepages will get them, and two, applications that don't use
> > > hugepages can still benefit from them without aggressively inserting
> > > THPs at every possible chance. This curbs the memory waste and defers
> > > the use of hugepages to khugepaged. Khugepaged can then scan the memory
> > > for eligible collapsing.
> >
> > khugepaged would replace application memory with huge pages without a
> > specific goal. Why not use a user-space agent with process_madvise() to
> > collapse huge pages? An admin might have more knobs to tweak than
> > khugepaged.
>
> The benefits of "always" are that no user-space agent is needed and
> applications don't have to be modified to use madvise(MADV_HUGEPAGE) to
> benefit from THPs. This setting hopes to gain some of the same benefits
> without the significant waste of memory, and with increased tunability.
>
> Future changes I have in the works will make khugepaged more "smart",
> moving it away from the round-robin fashion it currently operates in
> and instead making smart, informed decisions about which memory to
> collapse (and potentially split).
>
> Hopefully that helped explain the motivation for this new setting!
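To spell out the max_ptes_none arithmetic quoted above (a worked example
assuming 4 KiB base pages and a 2 MiB PMD, i.e. 512 PTEs per PMD):

  PTEs per PMD:          2 MiB / 4 KiB = 512
  max_ptes_none = 64  -> at least 512 - 64 = 448 PTEs present before
                         collapse, i.e. the PMD range is ~87.5% populated
  max_ptes_none = 0   -> all 512 PTEs must be present (no extra waste)
  max_ptes_none = 511 -> a single present PTE is enough to collapse
                         (the upstream default)

The knob lives at /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none,
and with this series applied the mode itself would be selected with something
like "echo defer > /sys/kernel/mm/transparent_hugepage/enabled".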
Any last comments before I resend this?

I've been made aware of
https://lore.kernel.org/all/20240730125346.1580150-1-usamaarif642@gmail.com/T/#u
which introduces THP splitting. These are both trying to achieve the
same thing through different means. Our approach leverages khugepaged
to promote pages, while Usama's uses the reclaim path to demote
hugepages and shrink the underlying memory.

I will leave it up to the reviewers to determine which is better;
however, we can't have both, as we'd be introducing thrashing
conditions.

Cheers,
-- Nico

>
> Cheers!
> -- Nico
> >
> > >
> > > Admins may want to lower max_ptes_none; if not, khugepaged may
> > > aggressively collapse single allocations into hugepages.
> > >
> > > RFC note
> > > ==========
> > > I'm not sure if I'm missing anything related to the mTHP
> > > changes. I think now that we have hugepage_pmd_enabled in
> > > commit 00f58104202c ("mm: fix khugepaged activation policy"),
> > > everything should work as expected.
> > >
> > > Nico Pache (2):
> > >   mm: defer THP insertion to khugepaged
> > >   mm: document transparent_hugepage=defer usage
> > >
> > >  Documentation/admin-guide/mm/transhuge.rst | 18 ++++++++++---
> > >  include/linux/huge_mm.h                    | 15 +++++++++--
> > >  mm/huge_memory.c                           | 31 +++++++++++++++++++---
> > >  3 files changed, 55 insertions(+), 9 deletions(-)
> > >
> > > Cc: Andrew Morton
> > > Cc: David Hildenbrand
> > > Cc: Matthew Wilcox
> > > Cc: Barry Song
> > > Cc: Ryan Roberts
> > > Cc: Baolin Wang
> > > Cc: Lance Yang
> > > Cc: Peter Xu
> > > Cc: Zi Yan
> > > Cc: Rafael Aquini
> > > Cc: Andrea Arcangeli
> > > Cc: Jonathan Corbet
> > > --
> > > 2.45.2
> >
> > --
> > Best Regards,
> > Yan, Zi