From: Yu Zhao <yuzhao@google.com>
Date: Wed, 10 Jul 2024 11:12:01 -0600
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
To: Catalin Marinas
Cc: Nanyong Sun, will@kernel.org, mike.kravetz@oracle.com, muchun.song@linux.dev, akpm@linux-foundation.org, anshuman.khandual@arm.com, willy@infradead.org, wangkefeng.wang@huawei.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20240113094436.2506396-1-sunnanyong@huawei.com>

On Wed, Jul 10, 2024 at 10:51 AM Catalin Marinas wrote:
>
> On Fri, Jul 05, 2024 at 11:41:34AM -0600, Yu Zhao wrote:
> > On Fri, Jul 5, 2024 at 9:49 AM Catalin Marinas wrote:
> > > If I did the maths right, for a 2MB hugetlb page, we have about 8
> > > vmemmap pages (32K). Once we split a 2MB vmemmap range,
> >
> > Correct.
> >
> > > whatever else needs to be touched in this range won't require a
> > > stop_machine().
> >
> > There might be some misunderstandings here.
> >
> > To do HVO:
> > 1. we split a PMD into 512 PTEs;
> > 2. for every 8 PTEs:
> >    2a. we allocate an order-0 page for PTE #0;
> >    2b. we remap PTE #0 *RW* to this page;
> >    2c. we remap PTEs #1-7 *RO* to this page;
> >    2d. we free the original order-3 page.
>
> Thanks. I now remember why we reverted such support in 060a2c92d1b6
> ("arm64: mm: hugetlb: Disable HUGETLB_PAGE_OPTIMIZE_VMEMMAP"). The main
> problem is that point 2c also changes the output address of the PTE
> (and the content of the page slightly). The architecture requires a
> break-before-make in such a scenario, though it would have been nice
> if it were more specific about what could go wrong.
>
> We can do point 1 safely if we have FEAT_BBM level 2. For point 2, I
> assume these 8 vmemmap pages may be accessed and that's why we can't
> do a break-before-make safely.

Correct.

> I was wondering whether we could make the PTEs RO first and then
> change the output address, but we have another rule that the content
> of the page should be the same. I don't think entries 1-7 are
> identical to entry 0 (though we could ask the architects for
> clarification here). Also, can we guarantee that nothing writes to
> entry 0 while we do such a remapping?

Yes, that's already guaranteed.

> We know entries 1-7 won't be written since we mapped them as RO, but
> entry 0 contains the head page. Maybe it's OK to map it RO temporarily
> until the newly allocated hugetlb page is returned.

We can do that, but I don't understand how it could elide BBM. After the
above, we would still need to:
3. remap entry 0 from RO to RW, mapping the `struct page` page that will
   be shared with entries 1-7;
4. remap entries 1-7 from their respective `struct page` pages to that of
   entry 0, while they remain RO.

> If we could get the above to work, it would be a lot simpler than
> thinking of stop_machine() or other locks to wait for such remapping.

Would steps 3/4 somehow not require BBM?

> > To do de-HVO:
> > 1. for every 8 PTEs:
> >    1a. we allocate 7 order-0 pages;
> >    1b. we remap PTEs #1-7 *RW* to those pages, respectively.
>
> Similar problem in 1b, changing the output address. Here we could force
> the content to be the same

I don't follow the "force the content to be the same" part. After HVO, we
have:

  Entry 0 -> `struct page` page A, RW
  Entry 1 -> `struct page` page A, RO
  ...
  Entry 7 -> `struct page` page A, RO

To de-HVO, we need to make them:

  Entry 0 -> `struct page` page A, RW
  Entry 1 -> `struct page` page B, RW
  ...
  Entry 7 -> `struct page` page H, RW

I assume "the same content" means PTE_0 == PTE_1/.../7?

> and remap PTEs 1-7 RO first to the new page, turn them RW afterwards,
> and it's all compliant with the architecture (even without FEAT_BBM).

It'd be great if we could do that, though I don't fully understand it at
the moment.
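
For anyone trying to follow the remapping, here is a rough C sketch of the
per-8-PTE transitions discussed in this thread (HVO steps 2a-2d and de-HVO
steps 1a-1b). It is only a sketch, not the kernel's actual HVO code:
remap_vmemmap_pte() and free_vmemmap_pages() are hypothetical placeholder
helpers standing in for the real PTE/TLB plumbing, and the content copies
are an assumption implied by the thread (the struct page contents must
survive the remap).

#include <linux/gfp.h>
#include <linux/mm.h>

/* hypothetical: install a PTE pointing at @page with protection @prot */
void remap_vmemmap_pte(pte_t *ptep, struct page *page, pgprot_t prot);
/* hypothetical: free the vmemmap backing pages of the given order */
void free_vmemmap_pages(struct page *page, unsigned int order);

/* HVO, steps 2a-2d, for one group of 8 PTEs starting at @ptep */
static void hvo_one_group(pte_t *ptep, struct page *old_order3_page)
{
	struct page *walk = alloc_page(GFP_KERNEL);		/* 2a */

	/* keep the head struct page contents (assumed copied before remap) */
	copy_page(page_address(walk), page_address(old_order3_page));

	remap_vmemmap_pte(&ptep[0], walk, PAGE_KERNEL);		/* 2b: RW */
	for (int i = 1; i < 8; i++)				/* 2c: RO */
		remap_vmemmap_pte(&ptep[i], walk, PAGE_KERNEL_RO);

	free_vmemmap_pages(old_order3_page, 3);			/* 2d */
}

/* de-HVO, steps 1a-1b: entries 1-7 get their own pages back, RW */
static void de_hvo_one_group(pte_t *ptep, struct page *shared)
{
	for (int i = 1; i < 8; i++) {
		struct page *p = alloc_page(GFP_KERNEL);	/* 1a */

		/* assumed: seed the new page from the shared one */
		copy_page(page_address(p), page_address(shared));
		remap_vmemmap_pte(&ptep[i], p, PAGE_KERNEL);	/* 1b: RW */
	}
}

The break-before-make question in this thread is about each
remap_vmemmap_pte() call above that changes a live PTE's output address
(2c, and 1b during de-HVO) while other CPUs may be accessing the vmemmap.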