Date: Sun, 27 Aug 2023 09:01:04 +0100
From: Lorenzo Stoakes
To: "Matthew Wilcox (Oracle)"
Cc: linux-mm@kvack.org, linux-perf-users@vger.kernel.org,
	peterz@infradead.org, mingo@redhat.com, acme@kernel.org,
	urezki@gmail.com, hch@infradead.org
Subject: Re: [RFC PATCH 4/4] perf: Use folios for the aux ringbuffer & pagefault path
Message-ID: <5231893a-e5a5-4544-a6d2-2c98cbebca09@lucifer.local>
References: <20230821202016.2910321-1-willy@infradead.org>
 <20230821202016.2910321-5-willy@infradead.org>
In-Reply-To: <20230821202016.2910321-5-willy@infradead.org>

On Mon, Aug 21, 2023 at 09:20:16PM +0100, Matthew Wilcox (Oracle) wrote:
> Instead of allocating a non-compound page and splitting it, allocate
> a folio and make its refcount the count of the number of pages in it.
> That way, when we free each page in the folio, we'll only actually free
> it when the last page in the folio is freed.
> Keeping the memory intact
> is better for the MM system than allocating it and splitting it.
>
> Now, instead of setting each page->mapping, we only set folio->mapping
> which is better for our cacheline usage, as well as helping towards the
> goal of eliminating page->mapping. We remove the setting of page->index;
> I do not believe this is needed. And we return with the folio locked,
> which the fault handler should have been doing all along.
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  kernel/events/core.c        | 13 +++++++---
>  kernel/events/ring_buffer.c | 51 ++++++++++++++++---------------------
>  2 files changed, 31 insertions(+), 33 deletions(-)
>
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 4c72a41f11af..59d4f7c48c8c 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -29,6 +29,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -6083,6 +6084,7 @@ static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
>  {
>  	struct perf_event *event = vmf->vma->vm_file->private_data;
>  	struct perf_buffer *rb;
> +	struct folio *folio;
>  	vm_fault_t ret = VM_FAULT_SIGBUS;

Since we're explicitly returning VM_FAULT_LOCKED on success, perhaps worth
simply renaming the unlock label to error and returning VM_FAULT_SIGBUS
there? The FAULT_FLAG_MKWRITE branch can simply return vmf->pgoff == 0 ?
0 : VM_FAULT_SIGBUS;

>
>  	if (vmf->flags & FAULT_FLAG_MKWRITE) {
> @@ -6102,12 +6104,15 @@ static vm_fault_t perf_mmap_fault(struct vm_fault *vmf)
>  	vmf->page = perf_mmap_to_page(rb, vmf->pgoff);
>  	if (!vmf->page)
>  		goto unlock;
> +	folio = page_folio(vmf->page);
>
> -	get_page(vmf->page);
> -	vmf->page->mapping = vmf->vma->vm_file->f_mapping;
> -	vmf->page->index = vmf->pgoff;
> +	folio_get(folio);
> +	rcu_read_unlock();
> +	folio_lock(folio);
> +	if (!folio->mapping)
> +		folio->mapping = vmf->vma->vm_file->f_mapping;
>
> -	ret = 0;
> +	return VM_FAULT_LOCKED;
>  unlock:
>  	rcu_read_unlock();
>
> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> index 56939dc3bf33..0a026e5ff4f5 100644
> --- a/kernel/events/ring_buffer.c
> +++ b/kernel/events/ring_buffer.c
> @@ -606,39 +606,28 @@ long perf_output_copy_aux(struct perf_output_handle *aux_handle,
>
>  #define PERF_AUX_GFP  (GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN | __GFP_NORETRY)
>
> -static struct page *rb_alloc_aux_page(int node, int order)
> +static struct folio *rb_alloc_aux_folio(int node, int order)
>  {
> -	struct page *page;
> +	struct folio *folio;
>
>  	if (order > MAX_ORDER)
>  		order = MAX_ORDER;
>
>  	do {
> -		page = alloc_pages_node(node, PERF_AUX_GFP, order);
> -	} while (!page && order--);
> -
> -	if (page && order) {
> -		/*
> -		 * Communicate the allocation size to the driver:
> -		 * if we managed to secure a high-order allocation,
> -		 * set its first page's private to this order;
> -		 * !PagePrivate(page) means it's just a normal page.
> -		 */
> -		split_page(page, order);
> -		SetPagePrivate(page);
> -		set_page_private(page, order);

I'm guessing this was used in conjunction with the page_private() logic
that existed below and can simply be discarded now?

> -	}
> +		folio = __folio_alloc_node(PERF_AUX_GFP, order, node);
> +	} while (!folio && order--);
>
> -	return page;
> +	if (order)
> +		folio_ref_add(folio, (1 << order) - 1);

Can't order go to -1 if we continue to fail to allocate a folio?
> +	return folio;
>  }
>
>  static void rb_free_aux_page(struct perf_buffer *rb, int idx)
>  {
> -	struct page *page = virt_to_page(rb->aux_pages[idx]);
> +	struct folio *folio = virt_to_folio(rb->aux_pages[idx]);
>
> -	ClearPagePrivate(page);
> -	page->mapping = NULL;
> -	__free_page(page);
> +	folio->mapping = NULL;
> +	folio_put(folio);
>  }
>
>  static void __rb_free_aux(struct perf_buffer *rb)
> @@ -672,7 +661,7 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
>  		 pgoff_t pgoff, int nr_pages, long watermark, int flags)
>  {
>  	bool overwrite = !(flags & RING_BUFFER_WRITABLE);
> -	int node = (event->cpu == -1) ? -1 : cpu_to_node(event->cpu);
> +	int node = (event->cpu == -1) ? numa_mem_id() : cpu_to_node(event->cpu);
>  	int ret = -ENOMEM, max_order;
>
>  	if (!has_aux(event))
> @@ -707,17 +696,21 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
>
>  	rb->free_aux = event->pmu->free_aux;
>  	for (rb->aux_nr_pages = 0; rb->aux_nr_pages < nr_pages;) {
> -		struct page *page;
> -		int last, order;
> +		struct folio *folio;
> +		unsigned int i, nr, order;
> +		void *addr;
>
>  		order = min(max_order, ilog2(nr_pages - rb->aux_nr_pages));
> -		page = rb_alloc_aux_page(node, order);
> -		if (!page)
> +		folio = rb_alloc_aux_folio(node, order);
> +		if (!folio)
>  			goto out;
> +		addr = folio_address(folio);
> +		nr = folio_nr_pages(folio);

I was going to raise the unspeakably annoying nit about this function
returning a long, but then that made me wonder why, given
folio->_folio_nr_pages is an unsigned int, folio_nr_pages() returns a long
in the first instance?

>
> -		for (last = rb->aux_nr_pages + (1 << page_private(page));
> -		     last > rb->aux_nr_pages; rb->aux_nr_pages++)
> -			rb->aux_pages[rb->aux_nr_pages] = page_address(page++);
> +		for (i = 0; i < nr; i++) {
> +			rb->aux_pages[rb->aux_nr_pages++] = addr;
> +			addr += PAGE_SIZE;
> +		}
>  	}
>
>  	/*
> --
> 2.40.1
>