From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 14 May 2025 16:42:14 -0700
In-Reply-To:
Mime-Version: 1.0
References:
X-Mailer: git-send-email 2.49.0.1045.g170613ef41-goog
Message-ID: <2ae41e0d80339da2b57011622ac2288fed65cd01.1747264138.git.ackerleytng@google.com>
Subject: [RFC PATCH v2 35/51] mm: guestmem_hugetlb: Add support for splitting and merging pages
From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
    akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
    anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
    binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
    chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
    david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk,
    erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com,
    haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
    ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
    james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
    jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
    jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
    kent.overstreet@linux.dev, kirill.shutemov@intel.com,
    liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
    mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
    michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
    nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
    palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
    pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
    pgonda@google.com, pvorel@suse.cz, qperret@google.com,
    quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
    quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
    quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
    rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
    rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
    steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com,
    tabba@google.com, thomas.lendacky@amd.com, usama.arif@bytedance.com,
    vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
    vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
    willy@infradead.org, xiaoyao.li@intel.com, yan.y.zhao@intel.com,
    yilun.xu@intel.com, yuzenghui@huawei.com, zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

These functions allow guest_memfd to split and merge HugeTLB pages, and to
clean them up when the pages are freed.

For merging and splitting pages on conversion, guestmem_hugetlb expects the
refcount on the pages to already be 0; the caller must ensure that.

For conversions, guest_memfd ensures that the refcounts are already 0 by
checking that there are no unexpected refcounts and then freezing the
expected refcounts away. If unexpected refcounts are found, guest_memfd
returns an error to userspace. Truncation behaves the same way: unexpected
refcounts cause an error to be returned to userspace.

For truncation on closing, guest_memfd just removes its own refcounts (the
filemap refcounts) and marks split pages with PGTY_guestmem_hugetlb. The
presence of PGTY_guestmem_hugetlb triggers the folio_put() callback to handle
further cleanup.
This cleanup process merges pages (which have refcount 0, since cleanup is
triggered from folio_put()) before returning them to HugeTLB. Because merging
is a lengthy process and folio_put() can be called from atomic context, the
merge is deferred to a worker thread.

Change-Id: Ib04a3236f1e7250fd9af827630c334d40fb09d40
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Co-developed-by: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Vishal Annapurve <vannapurve@google.com>
---
 include/linux/guestmem.h |   3 +
 mm/guestmem_hugetlb.c    | 349 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 347 insertions(+), 5 deletions(-)

diff --git a/include/linux/guestmem.h b/include/linux/guestmem.h
index 4b2d820274d9..3ee816d1dd34 100644
--- a/include/linux/guestmem.h
+++ b/include/linux/guestmem.h
@@ -8,6 +8,9 @@ struct guestmem_allocator_operations {
         void *(*inode_setup)(size_t size, u64 flags);
         void (*inode_teardown)(void *private, size_t inode_size);
         struct folio *(*alloc_folio)(void *private);
+        int (*split_folio)(struct folio *folio);
+        void (*merge_folio)(struct folio *folio);
+        void (*free_folio)(struct folio *folio);
         /*
          * Returns the number of PAGE_SIZE pages in a page that this guestmem
          * allocator provides.
diff --git a/mm/guestmem_hugetlb.c b/mm/guestmem_hugetlb.c
index ec5a188ca2a7..8727598cf18e 100644
--- a/mm/guestmem_hugetlb.c
+++ b/mm/guestmem_hugetlb.c
@@ -11,15 +11,12 @@
 #include
 #include
 #include
+#include
 #include

 #include "guestmem_hugetlb.h"
-
-void guestmem_hugetlb_handle_folio_put(struct folio *folio)
-{
-        WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
-}
+#include "hugetlb_vmemmap.h"

 struct guestmem_hugetlb_private {
         struct hstate *h;
@@ -34,6 +31,339 @@ static size_t guestmem_hugetlb_nr_pages_in_folio(void *priv)
 {
         return pages_per_huge_page(private->h);
 }

+static DEFINE_XARRAY(guestmem_hugetlb_stash);
+
+struct guestmem_hugetlb_metadata {
+        void *_hugetlb_subpool;
+        void *_hugetlb_cgroup;
+        void *_hugetlb_hwpoison;
+        void *private;
+};
+
+struct guestmem_hugetlb_stash_item {
+        struct guestmem_hugetlb_metadata hugetlb_metadata;
+        /* hstate tracks the original size of this folio. */
+        struct hstate *h;
+        /* Count of split pages, individually freed, waiting to be merged. */
+        atomic_t nr_pages_waiting_to_be_merged;
+};
+
+struct workqueue_struct *guestmem_hugetlb_wq __ro_after_init;
+static struct work_struct guestmem_hugetlb_cleanup_work;
+static LLIST_HEAD(guestmem_hugetlb_cleanup_list);
+
+static inline void guestmem_hugetlb_register_folio_put_callback(struct folio *folio)
+{
+        __folio_set_guestmem_hugetlb(folio);
+}
+
+static inline void guestmem_hugetlb_unregister_folio_put_callback(struct folio *folio)
+{
+        __folio_clear_guestmem_hugetlb(folio);
+}
+
+static inline void guestmem_hugetlb_defer_cleanup(struct folio *folio)
+{
+        struct llist_node *node;
+
+        /*
+         * Reuse the folio->mapping pointer as a struct llist_node, since
+         * folio->mapping is NULL at this point.
+         */
+        BUILD_BUG_ON(sizeof(folio->mapping) != sizeof(struct llist_node));
+        node = (struct llist_node *)&folio->mapping;
+
+        /*
+         * Only schedule work if the list was previously empty. Otherwise,
+         * schedule_work() has already been called but the workfn hasn't
+         * retrieved the list yet.
+         */
+        if (llist_add(node, &guestmem_hugetlb_cleanup_list))
+                queue_work(guestmem_hugetlb_wq, &guestmem_hugetlb_cleanup_work);
+}
+
+void guestmem_hugetlb_handle_folio_put(struct folio *folio)
+{
+        guestmem_hugetlb_unregister_folio_put_callback(folio);
+
+        /*
+         * folio_put() can be called in interrupt context, hence do the work
+         * outside of interrupt context.
+         */
+        guestmem_hugetlb_defer_cleanup(folio);
+}
+
+/*
+ * Stash existing hugetlb metadata. Use this function just before splitting a
+ * hugetlb page.
+ */
+static inline void
+__guestmem_hugetlb_stash_metadata(struct guestmem_hugetlb_metadata *metadata,
+                                  struct folio *folio)
+{
+        /*
+         * Fields in (folio->page + 1) don't have to be stashed since they
+         * are known on split/reconstruct and will be reinitialized anyway.
+         */
+
+        /*
+         * subpool is created for every guest_memfd inode, but the folios
+         * will outlive the inode, hence we store the subpool here.
+         */
+        metadata->_hugetlb_subpool = folio->_hugetlb_subpool;
+        /*
+         * _hugetlb_cgroup has to be stored for freeing
+         * later. _hugetlb_cgroup_rsvd does not, since it is NULL for
+         * guest_memfd folios anyway. guest_memfd reservations are handled in
+         * the inode.
+         */
+        metadata->_hugetlb_cgroup = folio->_hugetlb_cgroup;
+        metadata->_hugetlb_hwpoison = folio->_hugetlb_hwpoison;
+
+        /*
+         * HugeTLB flags are stored in folio->private. Stash them so that
+         * ->private can be used by core-mm.
+         */
+        metadata->private = folio->private;
+}
+
+static int guestmem_hugetlb_stash_metadata(struct folio *folio)
+{
+        XA_STATE(xas, &guestmem_hugetlb_stash, 0);
+        struct guestmem_hugetlb_stash_item *stash;
+        void *entry;
+
+        stash = kzalloc(sizeof(*stash), GFP_KERNEL);
+        if (!stash)
+                return -ENOMEM;
+
+        stash->h = folio_hstate(folio);
+        __guestmem_hugetlb_stash_metadata(&stash->hugetlb_metadata, folio);
+
+        xas_set_order(&xas, folio_pfn(folio), folio_order(folio));
+
+        xas_lock(&xas);
+        entry = xas_store(&xas, stash);
+        xas_unlock(&xas);
+
+        if (xa_is_err(entry)) {
+                kfree(stash);
+                return xa_err(entry);
+        }
+
+        return 0;
+}
+
+static inline void
+__guestmem_hugetlb_unstash_metadata(struct guestmem_hugetlb_metadata *metadata,
+                                    struct folio *folio)
+{
+        folio->_hugetlb_subpool = metadata->_hugetlb_subpool;
+        folio->_hugetlb_cgroup = metadata->_hugetlb_cgroup;
+        folio->_hugetlb_cgroup_rsvd = NULL;
+        folio->_hugetlb_hwpoison = metadata->_hugetlb_hwpoison;
+
+        folio_change_private(folio, metadata->private);
+}
+
+static int guestmem_hugetlb_unstash_free_metadata(struct folio *folio)
+{
+        struct guestmem_hugetlb_stash_item *stash;
+        unsigned long pfn;
+
+        pfn = folio_pfn(folio);
+
+        stash = xa_erase(&guestmem_hugetlb_stash, pfn);
+        __guestmem_hugetlb_unstash_metadata(&stash->hugetlb_metadata, folio);
+
+        kfree(stash);
+
+        return 0;
+}
+
+/**
+ * guestmem_hugetlb_split_folio() - Split a HugeTLB @folio to PAGE_SIZE pages.
+ *
+ * @folio: The folio to be split.
+ *
+ * Context: Before splitting, the folio must have a refcount of 0. After
+ *          splitting, each split folio has a refcount of 0.
+ * Return: 0 on success and negative error otherwise.
+ */
+static int guestmem_hugetlb_split_folio(struct folio *folio)
+{
+        long orig_nr_pages;
+        int ret;
+        int i;
+
+        if (folio_size(folio) == PAGE_SIZE)
+                return 0;
+
+        orig_nr_pages = folio_nr_pages(folio);
+        ret = guestmem_hugetlb_stash_metadata(folio);
+        if (ret)
+                return ret;
+
+        /*
+         * hugetlb_vmemmap_restore_folio() has to be called ahead of the rest
+         * because it checks the page type. This doesn't actually split the
+         * folio, so the first few struct pages are still intact.
+         */
+        ret = hugetlb_vmemmap_restore_folio(folio_hstate(folio), folio);
+        if (ret)
+                goto err;
+
+        /*
+         * Can clear without the lock because this will not race with the
+         * folio being mapped. The folio's page type is overlaid with the
+         * mapcount, so in other cases it is necessary to take hugetlb_lock
+         * to prevent races with the mapcount increasing.
+         */
+        __folio_clear_hugetlb(folio);
+
+        /*
+         * Remove the first folio from h->hugepage_activelist since it is no
+         * longer a HugeTLB page. The other split pages should not be on any
+         * lists.
+         */
+        hugetlb_folio_list_del(folio);
+
+        /* Actually split the page by undoing prep_compound_page(). */
+        __folio_clear_head(folio);
+
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+        /*
+         * Zero out _nr_pages, otherwise this overlaps with memcg_data,
+         * resulting in lookups on false memcg_data. _nr_pages doesn't have
+         * to be set to 1 because folio_nr_pages() relies on the presence of
+         * the head flag to return 1 for nr_pages.
+         */
+        folio->_nr_pages = 0;
+#endif
+
+        for (i = 1; i < orig_nr_pages; ++i) {
+                struct page *p = folio_page(folio, i);
+
+                /* Copy flags from the first page to the split pages. */
+                p->flags = folio->flags;
+
+                p->mapping = NULL;
+                clear_compound_head(p);
+        }
+
+        return 0;
+
+err:
+        guestmem_hugetlb_unstash_free_metadata(folio);
+
+        return ret;
+}
+
+/**
+ * guestmem_hugetlb_merge_folio() - Merge a HugeTLB folio from the folio
+ *                                  beginning at @first_folio.
+ *
+ * @first_folio: the first folio in a contiguous block of folios to be merged.
+ *
+ * The size of the contiguous block is tracked in guestmem_hugetlb_stash.
+ *
+ * Context: The first folio is checked to have a refcount of 0 before
+ *          reconstruction. After reconstruction, the reconstructed folio has
+ *          a refcount of 0.
+ */
+static void guestmem_hugetlb_merge_folio(struct folio *first_folio)
+{
+        struct guestmem_hugetlb_stash_item *stash;
+        struct hstate *h;
+
+        stash = xa_load(&guestmem_hugetlb_stash, folio_pfn(first_folio));
+        h = stash->h;
+
+        /*
+         * This is the step that does the merge. prep_compound_page() will
+         * write to pages 1 and 2 as well, so
+         * guestmem_hugetlb_unstash_free_metadata() has to come after this.
+         */
+        prep_compound_page(&first_folio->page, huge_page_order(h));
+
+        WARN_ON(guestmem_hugetlb_unstash_free_metadata(first_folio));
+
+        /*
+         * prep_compound_page() will set up the mapping on tail pages. For
+         * completeness, clear the mapping on the head page.
+         */
+        first_folio->mapping = NULL;
+
+        __folio_set_hugetlb(first_folio);
+
+        hugetlb_folio_list_add(first_folio, &h->hugepage_activelist);
+
+        hugetlb_vmemmap_optimize_folio(h, first_folio);
+}
+
+static struct folio *guestmem_hugetlb_maybe_merge_folio(struct folio *folio)
+{
+        struct guestmem_hugetlb_stash_item *stash;
+        unsigned long first_folio_pfn;
+        struct folio *first_folio;
+        unsigned long pfn;
+        size_t nr_pages;
+
+        pfn = folio_pfn(folio);
+
+        stash = xa_load(&guestmem_hugetlb_stash, pfn);
+        nr_pages = pages_per_huge_page(stash->h);
+        if (atomic_inc_return(&stash->nr_pages_waiting_to_be_merged) < nr_pages)
+                return NULL;
+
+        first_folio_pfn = round_down(pfn, nr_pages);
+        first_folio = pfn_folio(first_folio_pfn);
+
+        guestmem_hugetlb_merge_folio(first_folio);
+
+        return first_folio;
+}
+
+static void guestmem_hugetlb_cleanup_folio(struct folio *folio)
+{
+        struct folio *merged_folio;
+
+        merged_folio = guestmem_hugetlb_maybe_merge_folio(folio);
+        if (merged_folio)
+                __folio_put(merged_folio);
+}
+
+static void guestmem_hugetlb_cleanup_workfn(struct work_struct *work)
+{
+        struct llist_node *node;
+
+        node = llist_del_all(&guestmem_hugetlb_cleanup_list);
+        while (node) {
+                struct folio *folio;
+
+                folio = container_of((struct address_space **)node,
+                                     struct folio, mapping);
+
+                node = node->next;
+                folio->mapping = NULL;
+
+                guestmem_hugetlb_cleanup_folio(folio);
+        }
+}
+
+static int __init guestmem_hugetlb_init(void)
+{
+        INIT_WORK(&guestmem_hugetlb_cleanup_work, guestmem_hugetlb_cleanup_workfn);
+
+        guestmem_hugetlb_wq = alloc_workqueue("guestmem_hugetlb",
+                                              WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
+        if (!guestmem_hugetlb_wq)
+                return -ENOMEM;
+
+        return 0;
+}
+subsys_initcall(guestmem_hugetlb_init);
+
 static void *guestmem_hugetlb_setup(size_t size, u64 flags)
 {
@@ -164,10 +494,19 @@ static struct folio *guestmem_hugetlb_alloc_folio(void *priv)
         return ERR_PTR(-ENOMEM);
 }

+static void guestmem_hugetlb_free_folio(struct folio *folio)
+{
+        if (xa_load(&guestmem_hugetlb_stash, folio_pfn(folio)))
+                guestmem_hugetlb_register_folio_put_callback(folio);
+}
+
 const struct guestmem_allocator_operations guestmem_hugetlb_ops = {
         .inode_setup = guestmem_hugetlb_setup,
         .inode_teardown = guestmem_hugetlb_teardown,
         .alloc_folio = guestmem_hugetlb_alloc_folio,
+        .split_folio = guestmem_hugetlb_split_folio,
+        .merge_folio = guestmem_hugetlb_merge_folio,
+        .free_folio = guestmem_hugetlb_free_folio,
         .nr_pages_in_folio = guestmem_hugetlb_nr_pages_in_folio,
 };
 EXPORT_SYMBOL_GPL(guestmem_hugetlb_ops);
-- 
2.49.0.1045.g170613ef41-goog
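
For readers following the series, below is a minimal caller-side sketch (not
part of this patch) of how a guest_memfd conversion path might drive the new
split hook while honouring the refcount-0 contract described in the commit
message. The helper name guest_memfd_split_for_conversion() and the
expected_refs parameter are hypothetical; how the expected refcounts are
computed, and how references are re-taken on the split folios, is left to the
actual guest_memfd callers elsewhere in the series.

/*
 * Illustrative sketch only: split one huge folio ahead of a private-to-shared
 * conversion. Assumes the caller already holds whatever filemap/invalidation
 * locks guest_memfd requires.
 */
#include <linux/guestmem.h>
#include <linux/mm.h>

static int guest_memfd_split_for_conversion(const struct guestmem_allocator_operations *ops,
                                            struct folio *folio, int expected_refs)
{
        int ret;

        /*
         * The allocator requires a refcount of 0 across the split. Freeze the
         * expected (filemap) references away; any unexpected extra reference
         * means the conversion must fail back to userspace.
         */
        if (!folio_ref_freeze(folio, expected_refs))
                return -EAGAIN;

        ret = ops->split_folio(folio);
        if (ret) {
                /* The original folio is intact; restore its references. */
                folio_ref_unfreeze(folio, expected_refs);
                return ret;
        }

        /*
         * Each PAGE_SIZE folio now has a refcount of 0; the caller is
         * responsible for re-taking its references on the split folios.
         */
        return 0;
}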