From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh, Christoph Hellwig,
	Daniel Vetter, Dan Williams, Dave Chinner, David Airlie,
	"David S. Miller", Ilya Dryomov, Jan Kara, Jason Gunthorpe,
	Jens Axboe, Jérôme Glisse, Johannes Thumshirn, Magnus Karlsson,
	Matthew Wilcox, Miklos Szeredi, Ming Lei, Sage Weil,
	Santosh Shilimkar, Yan Zheng, netdev@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	linux-rdma@vger.kernel.org, bpf@vger.kernel.org, LKML,
	John Hubbard, Ira Weiny
Subject: [PATCH v5 1/3] mm/gup: add make_dirty arg to put_user_pages_dirty_lock()
Date: Thu, 1 Aug 2019 16:47:33 -0700
Message-Id: <20190801234735.2149-2-jhubbard@nvidia.com>
In-Reply-To: <20190801234735.2149-1-jhubbard@nvidia.com>
References: <20190801234735.2149-1-jhubbard@nvidia.com>

From: John Hubbard

Provide a more capable variation of put_user_pages_dirty_lock(), and
delete put_user_pages_dirty(). This is based on the following:

1. Lots of call sites become simpler if a bool is passed into
   put_user_page*(), instead of making the call site choose which
   put_user_page*() variant to call.

2. Christoph Hellwig's observation that set_page_dirty_lock() is
   usually correct, and set_page_dirty() is usually a bug, or at least
   questionable, within a put_user_page*() calling chain.

This leads to the following API choices:

    * put_user_pages_dirty_lock(page, npages, make_dirty)

    * There is no put_user_pages_dirty(). You have to hand code that,
      in the rare case that it's required.
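
To sketch the conversion pattern (illustrative only: "pages", "npages",
"dirty", and "index" are placeholder names, not taken from any one call
site), a caller that previously chose a variant:

	/* Before: the call site picks the put_user_page*() variant. */
	if (dirty)
		put_user_pages_dirty_lock(pages, npages);
	else
		put_user_pages(pages, npages);

now collapses to a single call:

	/* After: one call, with a bool argument. */
	put_user_pages_dirty_lock(pages, npages, dirty);

and the rare caller that genuinely needs the non-_lock set_page_dirty()
hand codes the loop:

	/* Hand-coded equivalent of the deleted put_user_pages_dirty(). */
	for (index = 0; index < npages; index++) {
		struct page *page = compound_head(pages[index]);

		if (!PageDirty(page))
			set_page_dirty(page);
		put_user_page(page);
	}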

Reviewed-by: Christoph Hellwig
Cc: Matthew Wilcox
Cc: Jan Kara
Cc: Ira Weiny
Cc: Jason Gunthorpe
Signed-off-by: John Hubbard
---
 drivers/infiniband/core/umem.c             |   5 +-
 drivers/infiniband/hw/hfi1/user_pages.c    |   5 +-
 drivers/infiniband/hw/qib/qib_user_pages.c |  13 +--
 drivers/infiniband/hw/usnic/usnic_uiom.c   |   5 +-
 drivers/infiniband/sw/siw/siw_mem.c        |  18 +---
 include/linux/mm.h                         |   5 +-
 mm/gup.c                                   | 115 +++++++++------------
 7 files changed, 60 insertions(+), 106 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..965cf9dea71a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
 		page = sg_page_iter_page(&sg_iter);
-		if (umem->writable && dirty)
-			put_user_pages_dirty_lock(&page, 1);
-		else
-			put_user_page(page);
+		put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
 	}

 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index b89a9b9aef7a..469acb961fbd 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, npages);
-	else
-		put_user_pages(p, npages);
+	put_user_pages_dirty_lock(p, npages, dirty);

 	if (mm) { /* during close after signal, mm can be NULL */
 		atomic64_sub(npages, &mm->pinned_vm);
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index bfbfbb7e0ff4..26c1fb8d45cc 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -37,15 +37,6 @@

 #include "qib.h"

-static void __qib_release_user_pages(struct page **p, size_t num_pages,
-				     int dirty)
-{
-	if (dirty)
-		put_user_pages_dirty_lock(p, num_pages);
-	else
-		put_user_pages(p, num_pages);
-}
-
 /**
  * qib_map_page - a safety wrapper around pci_map_page()
  *
@@ -124,7 +115,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 	return 0;
 bail_release:
-	__qib_release_user_pages(p, got, 0);
+	put_user_pages_dirty_lock(p, got, false);
 bail:
 	atomic64_sub(num_pages, &current->mm->pinned_vm);
 	return ret;
@@ -132,7 +123,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,

 void qib_release_user_pages(struct page **p, size_t num_pages)
 {
-	__qib_release_user_pages(p, num_pages, 1);
+	put_user_pages_dirty_lock(p, num_pages, true);

 	/* during close after signal, mm can be NULL */
 	if (current->mm)
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..62e6ffa9ad78 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (dirty)
-				put_user_pages_dirty_lock(&page, 1);
-			else
-				put_user_page(page);
+			put_user_pages_dirty_lock(&page, 1, dirty);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 67171c82b0c4..2284966e4499 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -60,20 +60,6 @@ struct siw_mem *siw_mem_id2obj(struct siw_device *sdev, int stag_index)
 	return NULL;
 }

-static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
-			   bool dirty)
-{
-	struct page **p = chunk->plist;
-
-	while (num_pages--) {
-		if (!PageDirty(*p) && dirty)
-			put_user_pages_dirty_lock(p, 1);
-		else
-			put_user_page(*p);
-		p++;
-	}
-}
-
 void siw_umem_release(struct siw_umem *umem, bool dirty)
 {
 	struct mm_struct *mm_s = umem->owning_mm;
@@ -82,8 +68,8 @@ void siw_umem_release(struct siw_umem *umem, bool dirty)
 	for (i = 0; num_pages; i++) {
 		int to_free = min_t(int, PAGES_PER_CHUNK, num_pages);

-		siw_free_plist(&umem->page_chunk[i], to_free,
-			       umem->writable && dirty);
+		put_user_pages_dirty_lock(umem->page_chunk[i].plist, to_free,
+					  umem->writable && dirty);
 		kfree(umem->page_chunk[i].plist);
 		num_pages -= to_free;
 	}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..9759b6a24420 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1057,8 +1057,9 @@ static inline void put_user_page(struct page *page)
 	put_page(page);
 }

-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty);
+
 void put_user_pages(struct page **pages, unsigned long npages);

 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..7fefd7ab02c4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,85 +29,70 @@ struct follow_page_context {
 	unsigned int page_mask;
 };

-typedef int (*set_dirty_func_t)(struct page *page);
-
-static void __put_user_pages_dirty(struct page **pages,
-				   unsigned long npages,
-				   set_dirty_func_t sdf)
-{
-	unsigned long index;
-
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
-		/*
-		 * Checking PageDirty at this point may race with
-		 * clear_page_dirty_for_io(), but that's OK. Two key cases:
-		 *
-		 * 1) This code sees the page as already dirty, so it skips
-		 * the call to sdf(). That could happen because
-		 * clear_page_dirty_for_io() called page_mkclean(),
-		 * followed by set_page_dirty(). However, now the page is
-		 * going to get written back, which meets the original
-		 * intention of setting it dirty, so all is well:
-		 * clear_page_dirty_for_io() goes on to call
-		 * TestClearPageDirty(), and write the page back.
-		 *
-		 * 2) This code sees the page as clean, so it calls sdf().
-		 * The page stays dirty, despite being written back, so it
-		 * gets written back again in the next writeback cycle.
-		 * This is harmless.
-		 */
-		if (!PageDirty(page))
-			sdf(page);
-
-		put_user_page(page);
-	}
-}
-
 /**
- * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
- * @pages:  array of pages to be marked dirty and released.
+ * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * @pages:  array of pages to be maybe marked dirty, and definitely released.
  * @npages: number of pages in the @pages array.
+ * @make_dirty: whether to mark the pages dirty
  *
  * "gup-pinned page" refers to a page that has had one of the get_user_pages()
  * variants called on that page.
  *
  * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
+ * compound page) dirty, if @make_dirty is true, and if the page was previously
+ * listed as clean. In any case, releases all pages using put_user_page(),
+ * possibly via put_user_pages(), for the non-dirty case.
  *
  * Please see the put_user_page() documentation for details.
  *
- * set_page_dirty(), which does not lock the page, is used here.
- * Therefore, it is the caller's responsibility to ensure that this is
- * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
+ * required, then the caller should a) verify that this is really correct,
+ * because _lock() is usually required, and b) hand code it:
+ * set_page_dirty_lock(), put_user_page().
  *
  */
-void put_user_pages_dirty(struct page **pages, unsigned long npages)
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty)
 {
-	__put_user_pages_dirty(pages, npages, set_page_dirty);
-}
-EXPORT_SYMBOL(put_user_pages_dirty);
+	unsigned long index;

-/**
- * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
- * @pages:  array of pages to be marked dirty and released.
- * @npages: number of pages in the @pages array.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
- *
- * Please see the put_user_page() documentation for details.
- *
- * This is just like put_user_pages_dirty(), except that it invokes
- * set_page_dirty_lock(), instead of set_page_dirty().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
-{
-	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+	/*
+	 * TODO: this can be optimized for huge pages: if a series of pages is
+	 * physically contiguous and part of the same compound page, then a
+	 * single operation to the head page should suffice.
+	 */
+
+	if (!make_dirty) {
+		put_user_pages(pages, npages);
+		return;
+	}
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+		/*
+		 * Checking PageDirty at this point may race with
+		 * clear_page_dirty_for_io(), but that's OK. Two key
+		 * cases:
+		 *
+		 * 1) This code sees the page as already dirty, so it
+		 * skips the call to set_page_dirty(). That could happen
+		 * because clear_page_dirty_for_io() called
+		 * page_mkclean(), followed by set_page_dirty().
+		 * However, now the page is going to get written back,
+		 * which meets the original intention of setting it
+		 * dirty, so all is well: clear_page_dirty_for_io() goes
+		 * on to call TestClearPageDirty(), and write the page
+		 * back.
+		 *
+		 * 2) This code sees the page as clean, so it calls
+		 * set_page_dirty(). The page stays dirty, despite being
+		 * written back, so it gets written back again in the
+		 * next writeback cycle. This is harmless.
+		 */
+		if (!PageDirty(page))
+			set_page_dirty_lock(page);
+		put_user_page(page);
+	}
 }
 EXPORT_SYMBOL(put_user_pages_dirty_lock);
--
2.22.0