From: Christoph Hellwig
To: Ralph Campbell
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
	Shuah Khan, linux-rdma@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: Re: [PATCH v4 2/2] mm/hmm/test: add self tests for HMM
Date: Tue, 12 Nov 2019 16:25:21 +0100
Message-ID: <20191112152521.GC12550@lst.de>
References: <20191104222141.5173-1-rcampbell@nvidia.com>
	<20191104222141.5173-3-rcampbell@nvidia.com>
In-Reply-To: <20191104222141.5173-3-rcampbell@nvidia.com>

Shouldn't this go into mm/ instead?  It certainly doesn't seem like
a library.

> +static int dmirror_bounce_copy_from(struct dmirror_bounce *bounce,
> +				    unsigned long addr)
> +{
> +	unsigned long end = addr + bounce->size;
> +	char __user *uptr = (void __user *)addr;
> +	void *ptr = bounce->ptr;
> +
> +	for (; addr < end; addr += PAGE_SIZE, ptr += PAGE_SIZE,
> +				uptr += PAGE_SIZE) {
> +		int ret;
> +
> +		ret = copy_from_user(ptr, uptr, PAGE_SIZE);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}

Why does this iterate in page-sized chunks?  I don't remember a page
size limit on copy_{from,to}_user.
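A single call should do, something like this (untested sketch keeping
the calling convention from the patch).  Note also that copy_from_user
returns the number of bytes not copied, not an errno, so the original
loop returns a positive byte count on failure; translating to -EFAULT
fixes that as well:

static int dmirror_bounce_copy_from(struct dmirror_bounce *bounce,
				    unsigned long addr)
{
	/* copy_from_user handles arbitrary sizes and partial faults */
	if (copy_from_user(bounce->ptr, (void __user *)addr,
			   bounce->size))
		return -EFAULT;
	return 0;
}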
> +static int dmirror_invalidate_range_start(struct mmu_notifier *mn,
> +			const struct mmu_notifier_range *update)
> +{
> +	struct dmirror *dmirror = container_of(mn, struct dmirror, notifier);
> +
> +	if (mmu_notifier_range_blockable(update))
> +		mutex_lock(&dmirror->mutex);
> +	else if (!mutex_trylock(&dmirror->mutex))
> +		return -EAGAIN;
> +
> +	dmirror_do_update(dmirror, update->start, update->end);
> +	mutex_unlock(&dmirror->mutex);
> +	return 0;
> +}

Can we adapt this to Jason's new interval tree invalidate?

> +static int dmirror_fops_open(struct inode *inode, struct file *filp)
> +{
> +	struct cdev *cdev = inode->i_cdev;
> +	struct dmirror_device *mdevice;
> +	struct dmirror *dmirror;
> +
> +	/* No exclusive opens. */
> +	if (filp->f_flags & O_EXCL)
> +		return -EINVAL;

Device files usually just ignore O_EXCL; I don't see why this one
should be any different.

> +	mdevice = container_of(cdev, struct dmirror_device, cdevice);
> +	dmirror = dmirror_new(mdevice);
> +	if (!dmirror)
> +		return -ENOMEM;
> +
> +	/* Only the first open registers the address space. */
> +	mutex_lock(&mdevice->devmem_lock);
> +	if (filp->private_data)
> +		goto err_busy;
> +	filp->private_data = dmirror;
> +	mutex_unlock(&mdevice->devmem_lock);

->open is only called for the first (and only) open of a given file
structure, so filp->private_data can never be set here and the check
and the locking around it can go.

> +static int dmirror_fops_release(struct inode *inode, struct file *filp)
> +{
> +	struct dmirror *dmirror = filp->private_data;
> +
> +	if (!dmirror)
> +		return 0;

This can't happen if your ->open never returns 0 without setting the
private data.

> +	filp->private_data = NULL;

The file is freed afterwards, no need to clear the private data.
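For reference, with the interval tree notifier the invalidation would
look something like this (rough sketch against the mmu_interval_notifier
API, assuming dmirror->notifier becomes a struct mmu_interval_notifier
registered with mmu_interval_notifier_insert()):

static bool dmirror_interval_invalidate(struct mmu_interval_notifier *mni,
				const struct mmu_notifier_range *range,
				unsigned long cur_seq)
{
	struct dmirror *dmirror = container_of(mni, struct dmirror, notifier);

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&dmirror->mutex);
	else if (!mutex_trylock(&dmirror->mutex))
		return false;

	/* record the new sequence number while holding the driver lock */
	mmu_interval_set_seq(mni, cur_seq);
	dmirror_do_update(dmirror, range->start, range->end);
	mutex_unlock(&dmirror->mutex);
	return true;
}

static const struct mmu_interval_notifier_ops dmirror_min_ops = {
	.invalidate = dmirror_interval_invalidate,
};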
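And with the O_EXCL check and the private_data juggling dropped, the
open/release pair could shrink to something like this (sketch only;
dmirror_free() stands in for whatever teardown the driver actually
does in release):

static int dmirror_fops_open(struct inode *inode, struct file *filp)
{
	struct cdev *cdev = inode->i_cdev;
	struct dmirror *dmirror;

	dmirror = dmirror_new(container_of(cdev, struct dmirror_device,
					   cdevice));
	if (!dmirror)
		return -ENOMEM;
	filp->private_data = dmirror;
	return 0;
}

static int dmirror_fops_release(struct inode *inode, struct file *filp)
{
	/* dmirror_free is a placeholder for the driver's teardown */
	dmirror_free(filp->private_data);
	return 0;
}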