From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh, Christoph Hellwig,
    Daniel Vetter, Dan Williams, Dave Chinner, David Airlie,
Miller" , Ilya Dryomov , Jan Kara , Jason Gunthorpe , Jens Axboe , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Johannes Thumshirn , Magnus Karlsson , Matthew Wilcox , Miklos Szeredi , Ming Lei , Sage Weil , Santosh Shilimkar , Yan Zheng , netdev@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org, LKML , John Hubbard , Ira Weiny Subject: [PATCH v3 1/3] mm/gup: add make_dirty arg to put_user_pages_dirty_lock() Date: Tue, 23 Jul 2019 21:45:35 -0700 Message-Id: <20190724044537.10458-2-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190724044537.10458-1-jhubbard@nvidia.com> References: <20190724044537.10458-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Content-Transfer-Encoding: 8bit X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: John Hubbard Provide more capable variation of put_user_pages_dirty_lock(), and delete put_user_pages_dirty(). This is based on the following: 1. Lots of call sites become simpler if a bool is passed into put_user_page*(), instead of making the call site choose which put_user_page*() variant to call. 2. Christoph Hellwig's observation that set_page_dirty_lock() is usually correct, and set_page_dirty() is usually a bug, or at least questionable, within a put_user_page*() calling chain. This leads to the following API choices: * put_user_pages_dirty_lock(page, npages, make_dirty) * There is no put_user_pages_dirty(). You have to hand code that, in the rare case that it's required. Cc: Matthew Wilcox Cc: Jan Kara Cc: Christoph Hellwig Cc: Ira Weiny Cc: Jason Gunthorpe Signed-off-by: John Hubbard --- drivers/infiniband/core/umem.c | 5 +- drivers/infiniband/hw/hfi1/user_pages.c | 5 +- drivers/infiniband/hw/qib/qib_user_pages.c | 5 +- drivers/infiniband/hw/usnic/usnic_uiom.c | 5 +- drivers/infiniband/sw/siw/siw_mem.c | 8 +- include/linux/mm.h | 5 +- mm/gup.c | 115 +++++++++------------ 7 files changed, 58 insertions(+), 90 deletions(-) diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 08da840ed7ee..965cf9dea71a 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) { page = sg_page_iter_page(&sg_iter); - if (umem->writable && dirty) - put_user_pages_dirty_lock(&page, 1); - else - put_user_page(page); + put_user_pages_dirty_lock(&page, 1, umem->writable && dirty); } sg_free_table(&umem->sg_head); diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c index b89a9b9aef7a..469acb961fbd 100644 --- a/drivers/infiniband/hw/hfi1/user_pages.c +++ b/drivers/infiniband/hw/hfi1/user_pages.c @@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np void hfi1_release_user_pages(struct mm_struct *mm, struct page **p, size_t npages, bool dirty) { - if (dirty) - put_user_pages_dirty_lock(p, npages); - else - put_user_pages(p, npages); + put_user_pages_dirty_lock(p, npages, dirty); if (mm) { /* during close after signal, mm can be NULL */ atomic64_sub(npages, &mm->pinned_vm); diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index bfbfbb7e0ff4..6bf764e41891 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ 
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..965cf9dea71a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 
 	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
 		page = sg_page_iter_page(&sg_iter);
-		if (umem->writable && dirty)
-			put_user_pages_dirty_lock(&page, 1);
-		else
-			put_user_page(page);
+		put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index b89a9b9aef7a..469acb961fbd 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, npages);
-	else
-		put_user_pages(p, npages);
+	put_user_pages_dirty_lock(p, npages, dirty);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		atomic64_sub(npages, &mm->pinned_vm);
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index bfbfbb7e0ff4..6bf764e41891 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,10 +40,7 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, num_pages);
-	else
-		put_user_pages(p, num_pages);
+	put_user_pages_dirty_lock(p, num_pages, dirty);
 }
 
 /**
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..62e6ffa9ad78 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (dirty)
-				put_user_pages_dirty_lock(&page, 1);
-			else
-				put_user_page(page);
+			put_user_pages_dirty_lock(&page, 1, dirty);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 67171c82b0c4..358d440efa11 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -65,13 +65,7 @@ static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
 {
 	struct page **p = chunk->plist;
 
-	while (num_pages--) {
-		if (!PageDirty(*p) && dirty)
-			put_user_pages_dirty_lock(p, 1);
-		else
-			put_user_page(*p);
-		p++;
-	}
+	put_user_pages_dirty_lock(chunk->plist, num_pages, dirty);
 }
 
 void siw_umem_release(struct siw_umem *umem, bool dirty)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..9759b6a24420 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1057,8 +1057,9 @@ static inline void put_user_page(struct page *page)
 	put_page(page);
 }
 
-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty);
+
 void put_user_pages(struct page **pages, unsigned long npages);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..7fefd7ab02c4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,85 +29,70 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-typedef int (*set_dirty_func_t)(struct page *page);
-
-static void __put_user_pages_dirty(struct page **pages,
-				   unsigned long npages,
-				   set_dirty_func_t sdf)
-{
-	unsigned long index;
-
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
-		/*
-		 * Checking PageDirty at this point may race with
-		 * clear_page_dirty_for_io(), but that's OK. Two key cases:
-		 *
-		 * 1) This code sees the page as already dirty, so it skips
-		 * the call to sdf(). That could happen because
-		 * clear_page_dirty_for_io() called page_mkclean(),
-		 * followed by set_page_dirty(). However, now the page is
-		 * going to get written back, which meets the original
-		 * intention of setting it dirty, so all is well:
-		 * clear_page_dirty_for_io() goes on to call
-		 * TestClearPageDirty(), and write the page back.
-		 *
-		 * 2) This code sees the page as clean, so it calls sdf().
-		 * The page stays dirty, despite being written back, so it
-		 * gets written back again in the next writeback cycle.
-		 * This is harmless.
-		 */
-		if (!PageDirty(page))
-			sdf(page);
-
-		put_user_page(page);
-	}
-}
-
 /**
- * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
- * @pages: array of pages to be marked dirty and released.
+ * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * @pages: array of pages to be maybe marked dirty, and definitely released.
  * @npages: number of pages in the @pages array.
+ * @make_dirty: whether to mark the pages dirty
  *
  * "gup-pinned page" refers to a page that has had one of the get_user_pages()
  * variants called on that page.
  *
  * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
+ * compound page) dirty, if @make_dirty is true, and if the page was previously
+ * listed as clean. In any case, releases all pages using put_user_page(),
+ * possibly via put_user_pages(), for the non-dirty case.
  *
  * Please see the put_user_page() documentation for details.
  *
- * set_page_dirty(), which does not lock the page, is used here.
- * Therefore, it is the caller's responsibility to ensure that this is
- * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
+ * required, then the caller should a) verify that this is really correct,
+ * because _lock() is usually required, and b) hand code it:
+ * set_page_dirty_lock(), put_user_page().
  *
  */
-void put_user_pages_dirty(struct page **pages, unsigned long npages)
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty)
 {
-	__put_user_pages_dirty(pages, npages, set_page_dirty);
-}
-EXPORT_SYMBOL(put_user_pages_dirty);
+	unsigned long index;
 
-/**
- * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
- * @pages: array of pages to be marked dirty and released.
- * @npages: number of pages in the @pages array.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
- *
- * Please see the put_user_page() documentation for details.
- *
- * This is just like put_user_pages_dirty(), except that it invokes
- * set_page_dirty_lock(), instead of set_page_dirty().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
-{
-	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+	/*
+	 * TODO: this can be optimized for huge pages: if a series of pages is
+	 * physically contiguous and part of the same compound page, then a
+	 * single operation to the head page should suffice.
+	 */
+
+	if (!make_dirty) {
+		put_user_pages(pages, npages);
+		return;
+	}
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+		/*
+		 * Checking PageDirty at this point may race with
+		 * clear_page_dirty_for_io(), but that's OK. Two key
+		 * cases:
+		 *
+		 * 1) This code sees the page as already dirty, so it
+		 * skips the call to set_page_dirty(). That could happen
+		 * because clear_page_dirty_for_io() called
+		 * page_mkclean(), followed by set_page_dirty().
+		 * However, now the page is going to get written back,
+		 * which meets the original intention of setting it
+		 * dirty, so all is well: clear_page_dirty_for_io() goes
+		 * on to call TestClearPageDirty(), and write the page
+		 * back.
+		 *
+		 * 2) This code sees the page as clean, so it calls
+		 * set_page_dirty(). The page stays dirty, despite being
+		 * written back, so it gets written back again in the
+		 * next writeback cycle. This is harmless.
+		 */
+		if (!PageDirty(page))
+			set_page_dirty_lock(page);
+		put_user_page(page);
+	}
 }
 EXPORT_SYMBOL(put_user_pages_dirty_lock);
-- 
2.22.0
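For the rare case called out in the changelog, where the non-locking
set_page_dirty() really is needed, the hand-coded replacement for the
deleted put_user_pages_dirty() would look roughly like the following
sketch. This is hypothetical caller-side code, not part of the patch (the
name my_put_user_pages_dirty() is made up), and the caller must first
verify that dirtying the pages without the page lock is safe in its
context, per the comment in mm/gup.c:

	static void my_put_user_pages_dirty(struct page **pages,
					    unsigned long npages)
	{
		unsigned long index;

		for (index = 0; index < npages; index++) {
			struct page *page = compound_head(pages[index]);

			/*
			 * Non-locking variant: subject to the same benign
			 * race with clear_page_dirty_for_io() described in
			 * put_user_pages_dirty_lock().
			 */
			if (!PageDirty(page))
				set_page_dirty(page);

			put_user_page(page);
		}
	}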