From: David Stevens <stevensd@chromium.org>
Date: Wed, 22 Feb 2023 13:08:19 +0900
Subject: Re: [PATCH v4 1/3] mm/khugepaged: refactor collapse_file control flow
To: Peter Xu
Cc: linux-mm@kvack.org, Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov",
 Yang Shi, David Hildenbrand, Hugh Dickins, linux-kernel@vger.kernel.org
References: <20230217085439.2826375-1-stevensd@google.com>
 <20230217085439.2826375-2-stevensd@google.com>

On Wed, Feb 22, 2023 at 6:54 AM Peter Xu wrote:
>
> On Fri, Feb 17, 2023 at 05:54:37PM +0900, David Stevens wrote:
> > From: David Stevens
> >
> > Add a rollback label to deal with failure, instead of continuously
> > checking for SCAN_SUCCEED, to make it easier to add more failure
> > cases. The refactoring also allows the collapse_file tracepoint to
> > include hpage on success (instead of NULL).
> >
> > Signed-off-by: David Stevens
> > ---
> >  mm/khugepaged.c | 223 ++++++++++++++++++++++++------------------------
> >  1 file changed, 110 insertions(+), 113 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 8dbc39896811..6a3d6d2e25e0 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1885,6 +1885,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> >      if (result != SCAN_SUCCEED)
> >          goto out;
> >
> > +    __SetPageLocked(hpage);
> > +    if (is_shmem)
> > +        __SetPageSwapBacked(hpage);
> > +    hpage->index = start;
> > +    hpage->mapping = mapping;
> > +
> >      /*
> >       * Ensure we have slots for all the pages in the range. This is
> >       * almost certainly a no-op because most of the pages must be present
> > @@ -1897,16 +1903,10 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> >          xas_unlock_irq(&xas);
> >          if (!xas_nomem(&xas, GFP_KERNEL)) {
> >              result = SCAN_FAIL;
> > -            goto out;
> > +            goto rollback;
> >          }
> >      } while (1);
> >
> > -    __SetPageLocked(hpage);
> > -    if (is_shmem)
> > -        __SetPageSwapBacked(hpage);
> > -    hpage->index = start;
> > -    hpage->mapping = mapping;
> > -
> >      /*
> >       * At this point the hpage is locked and not up-to-date.
> >       * It's safe to insert it into the page cache, because nobody would
> > @@ -2123,131 +2123,128 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> >       */
> >      try_to_unmap_flush();
> >
> > -    if (result == SCAN_SUCCEED) {
> > -        /*
> > -         * Replacing old pages with new one has succeeded, now we
> > -         * attempt to copy the contents.
> > -         */
> > -        index = start;
> > -        list_for_each_entry(page, &pagelist, lru) {
> > -            while (index < page->index) {
> > -                clear_highpage(hpage + (index % HPAGE_PMD_NR));
> > -                index++;
> > -            }
> > -            if (copy_mc_page(hpage + (page->index % HPAGE_PMD_NR),
> > -                     page) > 0) {
> > -                result = SCAN_COPY_MC;
> > -                break;
> > -            }
> > -            index++;
> > -        }
> > -        while (result == SCAN_SUCCEED && index < end) {
> > +    if (result != SCAN_SUCCEED)
> > +        goto rollback;
> > +
> > +    /*
> > +     * Replacing old pages with new one has succeeded, now we
> > +     * attempt to copy the contents.
> > +     */
> > +    index = start;
> > +    list_for_each_entry(page, &pagelist, lru) {
> > +        while (index < page->index) {
> >              clear_highpage(hpage + (index % HPAGE_PMD_NR));
> >              index++;
> >          }
> > +        if (copy_mc_page(hpage + (page->index % HPAGE_PMD_NR),
> > +                 page) > 0) {
> > +            result = SCAN_COPY_MC;
> > +            goto rollback;
> > +        }
> > +        index++;
> > +    }
> > +    while (index < end) {
> > +        clear_highpage(hpage + (index % HPAGE_PMD_NR));
> > +        index++;
> > +    }
> >
> > -    if (result == SCAN_SUCCEED) {
> > -        /*
> > -         * Copying old pages to huge one has succeeded, now we
> > -         * need to free the old pages.
> > -         */
> > -        list_for_each_entry_safe(page, tmp, &pagelist, lru) {
> > -            list_del(&page->lru);
> > -            page->mapping = NULL;
> > -            page_ref_unfreeze(page, 1);
> > -            ClearPageActive(page);
> > -            ClearPageUnevictable(page);
> > -            unlock_page(page);
> > -            put_page(page);
> > -        }
> > +    /*
> > +     * Copying old pages to huge one has succeeded, now we
> > +     * need to free the old pages.
> > +     */
> > +    list_for_each_entry_safe(page, tmp, &pagelist, lru) {
> > +        list_del(&page->lru);
> > +        page->mapping = NULL;
> > +        page_ref_unfreeze(page, 1);
> > +        ClearPageActive(page);
> > +        ClearPageUnevictable(page);
> > +        unlock_page(page);
> > +        put_page(page);
> > +    }
> >
> > -        xas_lock_irq(&xas);
> > -        if (is_shmem)
> > -            __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
> > -        else
> > -            __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
> > +    xas_lock_irq(&xas);
> > +    if (is_shmem)
> > +        __mod_lruvec_page_state(hpage, NR_SHMEM_THPS, nr);
> > +    else
> > +        __mod_lruvec_page_state(hpage, NR_FILE_THPS, nr);
> > +
> > +    if (nr_none) {
> > +        __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
> > +        /* nr_none is always 0 for non-shmem. */
> > +        __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
> > +    }
> > +    /* Join all the small entries into a single multi-index entry. */
> > +    xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> > +    xas_store(&xas, hpage);
> > +    xas_unlock_irq(&xas);
> >
> > -        if (nr_none) {
> > -            __mod_lruvec_page_state(hpage, NR_FILE_PAGES, nr_none);
> > -            /* nr_none is always 0 for non-shmem. */
> > -            __mod_lruvec_page_state(hpage, NR_SHMEM, nr_none);
> > -        }
> > -        /* Join all the small entries into a single multi-index entry. */
> > -        xas_set_order(&xas, start, HPAGE_PMD_ORDER);
> > -        xas_store(&xas, hpage);
> > -        xas_unlock_irq(&xas);
> > +    folio = page_folio(hpage);
> > +    folio_mark_uptodate(folio);
> > +    folio_ref_add(folio, HPAGE_PMD_NR - 1);
> >
> > -        folio = page_folio(hpage);
> > -        folio_mark_uptodate(folio);
> > -        folio_ref_add(folio, HPAGE_PMD_NR - 1);
> > +    if (is_shmem)
> > +        folio_mark_dirty(folio);
> > +    folio_add_lru(folio);
> >
> > -        if (is_shmem)
> > -            folio_mark_dirty(folio);
> > -        folio_add_lru(folio);
> > +    /*
> > +     * Remove pte page tables, so we can re-fault the page as huge.
> > +     */
> > +    result = retract_page_tables(mapping, start, mm, addr, hpage,
> > +                     cc);
> > +    unlock_page(hpage);
> > +    goto out;
> > +
> > +rollback:
> > +    /* Something went wrong: roll back page cache changes */
> > +    xas_lock_irq(&xas);
> > +    if (nr_none) {
> > +        mapping->nrpages -= nr_none;
> > +        shmem_uncharge(mapping->host, nr_none);
> > +    }
> >
> > -        /*
> > -         * Remove pte page tables, so we can re-fault the page as huge.
> > -         */
> > -        result = retract_page_tables(mapping, start, mm, addr, hpage,
> > -                         cc);
> > -        unlock_page(hpage);
> > -        hpage = NULL;
> > -    } else {
> > -        /* Something went wrong: roll back page cache changes */
> > -        xas_lock_irq(&xas);
> > -        if (nr_none) {
> > -            mapping->nrpages -= nr_none;
> > -            shmem_uncharge(mapping->host, nr_none);
> > +    xas_set(&xas, start);
> > +    xas_for_each(&xas, page, end - 1) {
> > +        page = list_first_entry_or_null(&pagelist,
> > +                struct page, lru);
> > +        if (!page || xas.xa_index < page->index) {
> > +            if (!nr_none)
> > +                break;
> > +            nr_none--;
> > +            /* Put holes back where they were */
> > +            xas_store(&xas, NULL);
> > +            continue;
> >          }
> >
> > -        xas_set(&xas, start);
> > -        xas_for_each(&xas, page, end - 1) {
> > -            page = list_first_entry_or_null(&pagelist,
> > -                    struct page, lru);
> > -            if (!page || xas.xa_index < page->index) {
> > -                if (!nr_none)
> > -                    break;
> > -                nr_none--;
> > -                /* Put holes back where they were */
> > -                xas_store(&xas, NULL);
> > -                continue;
> > -            }
> > +        VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
> >
> > -            VM_BUG_ON_PAGE(page->index != xas.xa_index, page);
> > +        /* Unfreeze the page. */
> > +        list_del(&page->lru);
> > +        page_ref_unfreeze(page, 2);
> > +        xas_store(&xas, page);
> > +        xas_pause(&xas);
> > +        xas_unlock_irq(&xas);
> > +        unlock_page(page);
> > +        putback_lru_page(page);
> > +        xas_lock_irq(&xas);
> > +    }
> > +    VM_BUG_ON(nr_none);
> > +    /*
> > +     * Undo the updates of filemap_nr_thps_inc for non-SHMEM file only.
> > +     * This undo is not needed unless failure is due to SCAN_COPY_MC.
> > +     */
> > +    if (!is_shmem && result == SCAN_COPY_MC)
> > +        filemap_nr_thps_dec(mapping);
> >
> > -            /* Unfreeze the page. */
> > -            list_del(&page->lru);
> > -            page_ref_unfreeze(page, 2);
> > -            xas_store(&xas, page);
> > -            xas_pause(&xas);
> > -            xas_unlock_irq(&xas);
> > -            unlock_page(page);
> > -            putback_lru_page(page);
> > -            xas_lock_irq(&xas);
> > -        }
> > -        VM_BUG_ON(nr_none);
> > -        /*
> > -         * Undo the updates of filemap_nr_thps_inc for non-SHMEM file only.
> > -         * This undo is not needed unless failure is due to SCAN_COPY_MC.
> > -         */
> > -        if (!is_shmem && result == SCAN_COPY_MC)
> > -            filemap_nr_thps_dec(mapping);
> > +    xas_unlock_irq(&xas);
> >
> > -        xas_unlock_irq(&xas);
> > +    hpage->mapping = NULL;
> >
> > -        hpage->mapping = NULL;
> > -    }
> > +    unlock_page(hpage);
> > +    mem_cgroup_uncharge(page_folio(hpage));
> > +    put_page(hpage);
> >
> > -    if (hpage)
> > -        unlock_page(hpage);
> >  out:
> >      VM_BUG_ON(!list_empty(&pagelist));
> > -    if (hpage) {
> > -        mem_cgroup_uncharge(page_folio(hpage));
> > -        put_page(hpage);
> > -    }
>
> Moving this chunk seems wrong, as it can leak the huge page if
> alloc_charge_hpage() failed on mem charging, iiuc.
>
> And I found that keeping it seems wrong too, because if the mem charge
> failed we'd be uncharging a page that was never charged. But that's not
> something introduced by this patch alone.
>
> Posted a patch for this:
>
> https://lore.kernel.org/all/20230221214344.609226-1-peterx@redhat.com/
>
> I _think_ this patch will make sense after rebasing onto that fix, if
> that's correct, but please review it and double-check.
>

Ah, good catch. I didn't notice that alloc_charge_hpage() could allocate
*hpage even on failure. The failure path should work properly with your
fix.

-David
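
P.S. To make the failure mode concrete for the archive, here is a
stand-alone sketch of the pattern under discussion. The helper names
(alloc_charge, copy_step, collapse) are invented to mirror the shape of
alloc_charge_hpage()/collapse_file(); this is an illustration of the
hazard, not the kernel code itself:

    #include <stdlib.h>

    enum scan_result { SCAN_SUCCEED, SCAN_ALLOC_FAIL, SCAN_CHARGE_FAIL };

    /*
     * Like alloc_charge_hpage(), this can leave *buf pointing at a live
     * allocation even when it returns failure, because the charge step
     * runs after the allocation has already succeeded.
     */
    static enum scan_result alloc_charge(char **buf)
    {
        *buf = malloc(4096);
        if (!*buf)
            return SCAN_ALLOC_FAIL;
        /* Pretend the memcg charge step fails after the allocation: */
        return SCAN_CHARGE_FAIL;    /* *buf is still non-NULL here */
    }

    static enum scan_result copy_step(char *buf)
    {
        (void)buf;
        return SCAN_SUCCEED;        /* pretend the copy works */
    }

    static enum scan_result collapse(void)
    {
        char *buf = NULL;
        enum scan_result result = alloc_charge(&buf);

        if (result != SCAN_SUCCEED)
            goto out;       /* BUG: buf may be live, rollback is skipped */

        result = copy_step(buf);
        if (result != SCAN_SUCCEED)
            goto rollback;  /* post-allocation failures roll back */

        free(buf);          /* success: stands in for handing buf off */
        goto out;

    rollback:
        free(buf);          /* after the patch, cleanup lives only here */
    out:
        return result;
    }

    int main(void)
    {
        return collapse() == SCAN_SUCCEED ? 0 : 1;
    }

With the mem_cgroup_uncharge()/put_page() pair moved from the shared
"out" path to the "rollback" path, the early "goto out" taken when
alloc_charge_hpage() fails can no longer reach the cleanup, which is
exactly the leak pointed out above.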