From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v4 2/2] mm/hmm/test: add self tests for HMM
To: Christoph Hellwig , Andrew Morton
CC: Jerome Glisse , John Hubbard , Jason Gunthorpe , Shuah Khan
References: <20191104222141.5173-1-rcampbell@nvidia.com> <20191104222141.5173-3-rcampbell@nvidia.com> <20191112152521.GC12550@lst.de>
From: Ralph Campbell
Message-ID: <07589a71-3984-b2a6-b24b-6b9a23e1b60d@nvidia.com>
Date: Tue, 12 Nov 2019 13:51:11 -0800
In-Reply-To: <20191112152521.GC12550@lst.de>
On 11/12/19 7:25 AM, Christoph Hellwig wrote:
> Shouldn't this go into mm/ instead? It certainly doesn't seem
> like a library.

I was following the convention for the other vm test kernel modules.
I see a couple of modules in mm/ but I don't have a personal preference
for where to place it. Andrew, do you have a preference?

>> +static int dmirror_bounce_copy_from(struct dmirror_bounce *bounce,
>> +				    unsigned long addr)
>> +{
>> +	unsigned long end = addr + bounce->size;
>> +	char __user *uptr = (void __user *)addr;
>> +	void *ptr = bounce->ptr;
>> +
>> +	for (; addr < end; addr += PAGE_SIZE, ptr += PAGE_SIZE,
>> +				uptr += PAGE_SIZE) {
>> +		int ret;
>> +
>> +		ret = copy_from_user(ptr, uptr, PAGE_SIZE);
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
>> +	return 0;
>> +}
>
> Why does this iterate in page sized chunks? I don't remember a page
> size limit on copy_{from,to}_user.

Good point. I'll fix that.

>> +static int dmirror_invalidate_range_start(struct mmu_notifier *mn,
>> +			const struct mmu_notifier_range *update)
>> +{
>> +	struct dmirror *dmirror = container_of(mn, struct dmirror, notifier);
>> +
>> +	if (mmu_notifier_range_blockable(update))
>> +		mutex_lock(&dmirror->mutex);
>> +	else if (!mutex_trylock(&dmirror->mutex))
>> +		return -EAGAIN;
>> +
>> +	dmirror_do_update(dmirror, update->start, update->end);
>> +	mutex_unlock(&dmirror->mutex);
>> +	return 0;
>> +}
>
> Can we adapt this to Jason's new interval tree invalidate?

Well, it would mean registering for the whole process address space.
I'll give it a try.
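For reference, the locking rule in that callback boils down to the pattern below. This is a userspace sketch only: a pthread mutex stands in for the kernel mutex API, and `invalidate_locked` is a made-up name, not anything from the patch.

```c
#include <errno.h>
#include <pthread.h>

/*
 * Userspace sketch of the invalidate-callback locking pattern:
 * if the notifier range is blockable we may sleep on the lock;
 * otherwise we trylock and return -EAGAIN so the core can retry
 * the invalidation later.  (pthread mutex stands in for the
 * kernel's struct mutex; this is illustrative only.)
 */
static int invalidate_locked(pthread_mutex_t *lock, int blockable)
{
	if (blockable)
		pthread_mutex_lock(lock);
	else if (pthread_mutex_trylock(lock) != 0)
		return -EAGAIN;
	/* ... the range update would run here, under the lock ... */
	pthread_mutex_unlock(lock);
	return 0;
}
```

The point is that the non-blockable path must never sleep, so failing with -EAGAIN is the only correct fallback when the lock is contended.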
>> +static int dmirror_fops_open(struct inode *inode, struct file *filp)
>> +{
>> +	struct cdev *cdev = inode->i_cdev;
>> +	struct dmirror_device *mdevice;
>> +	struct dmirror *dmirror;
>> +
>> +	/* No exclusive opens. */
>> +	if (filp->f_flags & O_EXCL)
>> +		return -EINVAL;
>
> Device files usually just ignore O_EXCL, I don't see why this one
> would be any different.

OK, I'll remove that test.

>> +	mdevice = container_of(cdev, struct dmirror_device, cdevice);
>> +	dmirror = dmirror_new(mdevice);
>> +	if (!dmirror)
>> +		return -ENOMEM;
>> +
>> +	/* Only the first open registers the address space. */
>> +	mutex_lock(&mdevice->devmem_lock);
>> +	if (filp->private_data)
>> +		goto err_busy;
>> +	filp->private_data = dmirror;
>> +	mutex_unlock(&mdevice->devmem_lock);
>
> ->open is only called for the first open of a given file structure..
>
>> +static int dmirror_fops_release(struct inode *inode, struct file *filp)
>> +{
>> +	struct dmirror *dmirror = filp->private_data;
>> +
>> +	if (!dmirror)
>> +		return 0;
>
> This can't happen if your ->open never returns 0 without setting the
> private data.
>
>> +	filp->private_data = NULL;
>
> The file is freed afterwards, no need to clear the private data.

OK, I'll clean that up.
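With those comments applied, the open/release pair reduces to something like the sketch below. Again userspace-only: `struct file` here is a minimal stand-in for the kernel's, `calloc` stands in for dmirror_new(), and the mdevice lookup and address-space registration are omitted.

```c
#include <errno.h>
#include <stdlib.h>

/*
 * Userspace sketch of the simplified open/release pair: ->open runs
 * exactly once per struct file, so private_data can be set
 * unconditionally with no busy check, and ->release need not NULL it
 * because the file is freed immediately afterwards.
 */
struct file {
	void *private_data;	/* minimal stand-in for the kernel's struct file */
};

static int dmirror_open_sketch(struct file *filp)
{
	void *dmirror = calloc(1, 64);	/* stands in for dmirror_new() */

	if (!dmirror)
		return -ENOMEM;
	filp->private_data = dmirror;
	return 0;
}

static int dmirror_release_sketch(struct file *filp)
{
	free(filp->private_data);	/* no need to clear it first */
	return 0;
}
```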