From: Jiaqi Yan <jiaqiyan@google.com>
Date: Mon, 20 Jan 2025 21:08:06 -0800
Subject: Re: [RFC PATCH v1 0/2] How HugeTLB handle HWPoison page at truncation
To: jane.chu@oracle.com
Cc: David Hildenbrand, nao.horiguchi@gmail.com, linmiaohe@huawei.com, sidhartha.kumar@oracle.com, muchun.song@linux.dev, akpm@linux-foundation.org, osalvador@suse.de, rientjes@google.com, jthoughton@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <673b0353-ad8c-471b-8670-25d9f06d232b@oracle.com>
References: <20250119180608.2132296-1-jiaqiyan@google.com> <6f97f3b6-3e7b-4ff8-8d67-ef972791cccd@redhat.com> <673b0353-ad8c-471b-8670-25d9f06d232b@oracle.com>
On Mon, Jan 20, 2025 at 9:01 PM wrote:
>
> On 1/20/2025 5:21 PM, Jiaqi Yan wrote:
> > On Mon, Jan 20, 2025 at 2:59 AM David Hildenbrand wrote:
> >> On 19.01.25 19:06, Jiaqi Yan wrote:
> >>> While I was working on userspace MFR via memfd [1], I spent some time to
> >>> understand what the current kernel does when a HugeTLB-backed memfd is
> >>> truncated.
> >>> My expectation is: if there is a HWPoison HugeTLB folio
> >>> mapped via the memfd to userspace, it will be unmapped right away but
> >>> still be kept in the page cache [2]; however, when the memfd is truncated to
> >>> zero or after the memfd is closed, the kernel should dissolve the HWPoison
> >>> folio in the page cache and free only the clean raw pages to the buddy
> >>> allocator, excluding the poisoned raw page.
> >>>
> >>> So I wrote a hugetlb-mfr-base.c selftest and expect:
> >>> 0. Say nr_hugepages is initially 64 as the system configuration.
> >>> 1. After MADV_HWPOISON, nr_hugepages should still be 64, as we keep even
> >>>    the HWPoison huge folio in the page cache. free_hugepages should be
> >>>    nr_hugepages minus whatever amount is in use.
> >>> 2. After truncating the memfd to zero, nr_hugepages should be reduced to 63,
> >>>    as the kernel dissolved and freed the HWPoison huge folio. free_hugepages
> >>>    should also be 63.
> >>>
> >>> However, when testing at the head of mm-stable commit 2877a83e4a0a
> >>> ("mm/hugetlb: use folio->lru int demote_free_hugetlb_folios()"), I found
> >>> that although free_hugepages is reduced to 63, nr_hugepages is not reduced
> >>> and stays at 64.
> >>>
> >>> Is my expectation outdated? Or is this some kind of bug?
> >>>
> >>> I assume this is a bug, and then dug a little bit more. It seems there
> >>> are two issues, or two things I don't really understand.
> >>>
> >>> 1. During try_memory_failure_hugetlb, we increase the target
> >>>    in-use folio's refcount via get_hwpoison_hugetlb_folio. However,
> >>>    until the end of try_memory_failure_hugetlb, this refcount is not put.
> >>>    I can make sense of this given we keep the in-use huge folio in the
> >>>    page cache.
> >> Isn't the general rule that hwpoisoned folios have a raised refcount
> >> such that they won't get freed + reused? At least that's how the buddy
> >> deals with them, and I suspect also hugetlb?
> > Thanks, David.
> >
> > I see, so it is expected that the _entire_ huge folio will always have
> > at least a refcount of 1, even when the folio can become "free".
> >
> > For a *free* huge folio, try_memory_failure_hugetlb dissolves it and
> > frees the clean pages (a lot of them) to the buddy allocator. This made me
> > think the same thing would happen for an *in-use* huge folio _eventually_
> > (i.e. somehow the refcount due to HWPoison can be put). I feel this is
> > a little bit unfortunate for the clean pages, but if it is what it is,
> > that's fair, as it is not a bug.
>
> Agreed with David. For *in use* hugetlb pages, including unused shmget
> pages, hugetlb shouldn't dissolve the page, not until an explicit freeing
> action is taken, like RMID and echo 0 > nr_hugepages.

To clarify myself, I am not asking memory-failure.c to dissolve the
hugepage at the time it is in use, but rather when it becomes free
(truncated, or the process exited).

>
> -jane
>
> >
> >>> [ 1069.320976] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2780000
> >>> [ 1069.320978] head: order:18 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> >>> [ 1069.320980] flags: 0x400000000100044(referenced|head|hwpoison|node=0|zone=1)
> >>> [ 1069.320982] page_type: f4(hugetlb)
> >>> [ 1069.320984] raw: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> >>> [ 1069.320985] raw: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> >>> [ 1069.320987] head: 0400000000100044 ffffffff8760bbc8 ffffffff8760bbc8 0000000000000000
> >>> [ 1069.320988] head: 0000000000000000 0000000000000000 00000001f4000000 0000000000000000
> >>> [ 1069.320990] head: 0400000000000012 ffffdd53de000001 ffffffffffffffff 0000000000000000
> >>> [ 1069.320991] head: 0000000000040000 0000000000000000 00000000ffffffff 0000000000000000
> >>> [ 1069.320992] page dumped because: track hwpoison folio's ref
> >>>
> >>> 2.
> >>> Even if the folio's refcount does drop to zero and we get into
> >>>    free_huge_folio, it is not clear to me which part of free_huge_folio
> >>>    handles the case where the folio is HWPoison. In my test, what I
> >>>    observed is that eventually the folio is enqueue_hugetlb_folio()-ed.
> >> How would we get a refcount of 0 if we assume the raised refcount on a
> >> hwpoisoned hugetlb folio?
> >>
> >> I'm probably missing something: are you saying that you can trigger a
> >> hwpoisoned hugetlb folio to get reallocated again, in upstream code?
> > No, I think it is just my misunderstanding. From what you said, the
> > expectation for a HWPoison hugetlb folio is just that it won't get
> > reallocated again, which is true.
> >
> > My (wrong) expectation is that, in addition to the "won't get reallocated
> > again" part, some (large) portion of the huge folio will be freed to
> > the buddy allocator. On the other hand, is it something worth having /
> > improving? (1G - some_single_digit * 4KB) seems to be valuable to the
> > system, though the pages are all 4K. #1 and #2 above are then what needs
> > to be done if the improvement is worth chasing.
> >
> >>
> >> --
> >> Cheers,
> >>
> >> David / dhildenb
> >>