Date: Mon, 4 May 2020 18:22:42 +0200
From: Jean-Philippe Brucker
To: Lu Baolu
Cc: iommu@lists.linux-foundation.org, devicetree@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-pci@vger.kernel.org,
 linux-mm@kvack.org, joro@8bytes.org, catalin.marinas@arm.com,
 will@kernel.org, robin.murphy@arm.com, kevin.tian@intel.com,
 Jonathan.Cameron@huawei.com, jacob.jun.pan@linux.intel.com,
 christian.koenig@amd.com, felix.kuehling@amd.com, zhangfei.gao@linaro.org,
 jgg@ziepe.ca, xuzaibo@huawei.com, fenghua.yu@intel.com, hch@infradead.org
Subject: Re: [PATCH v6 04/25] iommu: Add a page fault handler
Message-ID: <20200504162242.GF170104@myrica>
References: <20200430143424.2787566-1-jean-philippe@linaro.org>
 <20200430143424.2787566-5-jean-philippe@linaro.org>
 <9a8ec004-0a9c-d772-8e7a-f839002a40b5@linux.intel.com>
In-Reply-To: <9a8ec004-0a9c-d772-8e7a-f839002a40b5@linux.intel.com>

On Sun, May 03, 2020 at 01:49:01PM +0800, Lu Baolu wrote:
> > +static void iopf_handle_group(struct work_struct *work)
> > +{
> > +        struct iopf_group *group;
> > +        struct iopf_fault *iopf, *next;
> > +        enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
> > +
> > +        group = container_of(work, struct iopf_group, work);
> > +
> > +        list_for_each_entry_safe(iopf, next, &group->faults, head) {
> > +                /*
> > +                 * For the moment, errors are sticky: don't handle subsequent
> > +                 * faults in the group if there is an error.
> > +                 */
> > +                if (status == IOMMU_PAGE_RESP_SUCCESS)
> > +                        status = iopf_handle_single(iopf);
> > +
> > +                if (!(iopf->fault.prm.flags &
> > +                      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
> > +                        kfree(iopf);
>
> The iopf is freed, but not removed from the list. This will cause a wild
> pointer in the code.

We free the list with the group below, so this one is fine.
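To make that concrete, here is roughly how the structures fit together as I
read the patch (a sketch from memory, not the literal definitions). The last
fault is embedded in the group, so kfree(group) releases it together with the
list head; only the partial faults, which were allocated separately, need the
explicit kfree() above.

struct iopf_fault {
        struct iommu_fault      fault;
        struct list_head        head;           /* entry in group->faults */
};

struct iopf_group {
        struct iopf_fault       last_fault;     /* embedded: freed with the group */
        struct list_head        faults;         /* partial faults + last_fault.head */
        struct work_struct      work;
        struct device           *dev;
};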
>
> > +        }
> > +
> > +        iopf_complete_group(group->dev, &group->last_fault, status);
> > +        kfree(group);
> > +}
> > +
>
> [...]
>
> > +/**
> > + * iopf_queue_flush_dev - Ensure that all queued faults have been processed
> > + * @dev: the endpoint whose faults need to be flushed.
> > + * @pasid: the PASID affected by this flush
> > + *
> > + * The IOMMU driver calls this before releasing a PASID, to ensure that all
> > + * pending faults for this PASID have been handled, and won't hit the address
> > + * space of the next process that uses this PASID. The driver must make sure
> > + * that no new fault is added to the queue. In particular it must flush its
> > + * low-level queue before calling this function.
> > + *
> > + * Return: 0 on success and <0 on error.
> > + */
> > +int iopf_queue_flush_dev(struct device *dev, int pasid)
> > +{
> > +        int ret = 0;
> > +        struct iopf_device_param *iopf_param;
> > +        struct dev_iommu *param = dev->iommu;
> > +
> > +        if (!param)
> > +                return -ENODEV;
> > +
> > +        mutex_lock(&param->lock);
> > +        iopf_param = param->iopf_param;
> > +        if (iopf_param)
> > +                flush_workqueue(iopf_param->queue->wq);
>
> There may be iopf for other pasids in the workqueue. Flushing all tasks in
> the workqueue will hurt other pasids. Or I might be missing some context.

Granted this isn't optimal because we don't take the PASID argument into
account (I think I'll remove it, since I don't know how to use it). But I
don't think it affects other PASIDs, because all flush_workqueue() does is
wait until all faults currently in the workqueue are processed. So it only
blocks the current thread, but nothing is lost.

>
> > +        else
> > +                ret = -ENODEV;
> > +        mutex_unlock(&param->lock);
> > +
> > +        return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(iopf_queue_flush_dev);
> > +
> > +/**
> > + * iopf_queue_discard_partial - Remove all pending partial fault
> > + * @queue: the queue whose partial faults need to be discarded
> > + *
> > + * When the hardware queue overflows, last page faults in a group may have been
> > + * lost and the IOMMU driver calls this to discard all partial faults. The
> > + * driver shouldn't be adding new faults to this queue concurrently.
> > + *
> > + * Return: 0 on success and <0 on error.
> > + */
> > +int iopf_queue_discard_partial(struct iopf_queue *queue)
> > +{
> > +        struct iopf_fault *iopf, *next;
> > +        struct iopf_device_param *iopf_param;
> > +
> > +        if (!queue)
> > +                return -EINVAL;
> > +
> > +        mutex_lock(&queue->lock);
> > +        list_for_each_entry(iopf_param, &queue->devices, queue_list) {
> > +                list_for_each_entry_safe(iopf, next, &iopf_param->partial, head)
> > +                        kfree(iopf);
>
> iopf is freed but not removed from the list.

Ouch, yes this is wrong, will fix.
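Something like the following should do it (an untested sketch of the fix,
reusing the locals above): delete each partial fault from its list before
freeing it.

        list_for_each_entry(iopf_param, &queue->devices, queue_list) {
                list_for_each_entry_safe(iopf, next, &iopf_param->partial,
                                         head) {
                        list_del(&iopf->head);
                        kfree(iopf);
                }
        }

(Alternatively, since the whole list is being discarded, the entries could be
freed as before and the head reset afterwards with a single
INIT_LIST_HEAD(&iopf_param->partial).)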
>
> > +        }
> > +        mutex_unlock(&queue->lock);
> > +        return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(iopf_queue_discard_partial);
> > +
> > +/**
> > + * iopf_queue_add_device - Add producer to the fault queue
> > + * @queue: IOPF queue
> > + * @dev: device to add
> > + *
> > + * Return: 0 on success and <0 on error.
> > + */
> > +int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev)
> > +{
> > +        int ret = -EBUSY;
> > +        struct iopf_device_param *iopf_param;
> > +        struct dev_iommu *param = dev->iommu;
> > +
> > +        if (!param)
> > +                return -ENODEV;
> > +
> > +        iopf_param = kzalloc(sizeof(*iopf_param), GFP_KERNEL);
> > +        if (!iopf_param)
> > +                return -ENOMEM;
> > +
> > +        INIT_LIST_HEAD(&iopf_param->partial);
> > +        iopf_param->queue = queue;
> > +        iopf_param->dev = dev;
> > +
> > +        mutex_lock(&queue->lock);
> > +        mutex_lock(&param->lock);
> > +        if (!param->iopf_param) {
> > +                list_add(&iopf_param->queue_list, &queue->devices);
> > +                param->iopf_param = iopf_param;
> > +                ret = 0;
> > +        }
> > +        mutex_unlock(&param->lock);
> > +        mutex_unlock(&queue->lock);
> > +
> > +        if (ret)
> > +                kfree(iopf_param);
> > +
> > +        return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(iopf_queue_add_device);
> > +
> > +/**
> > + * iopf_queue_remove_device - Remove producer from fault queue
> > + * @queue: IOPF queue
> > + * @dev: device to remove
> > + *
> > + * Caller makes sure that no more faults are reported for this device.
> > + *
> > + * Return: 0 on success and <0 on error.
> > + */
> > +int iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev)
> > +{
> > +        int ret = 0;
> > +        struct iopf_fault *iopf, *next;
> > +        struct iopf_device_param *iopf_param;
> > +        struct dev_iommu *param = dev->iommu;
> > +
> > +        if (!param || !queue)
> > +                return -EINVAL;
> > +
> > +        mutex_lock(&queue->lock);
> > +        mutex_lock(&param->lock);
> > +        iopf_param = param->iopf_param;
> > +        if (iopf_param && iopf_param->queue == queue) {
> > +                list_del(&iopf_param->queue_list);
> > +                param->iopf_param = NULL;
> > +        } else {
> > +                ret = -EINVAL;
> > +        }
> > +        mutex_unlock(&param->lock);
> > +        mutex_unlock(&queue->lock);
> > +        if (ret)
> > +                return ret;
> > +
> > +        /* Just in case some faults are still stuck */
> > +        list_for_each_entry_safe(iopf, next, &iopf_param->partial, head)
> > +                kfree(iopf);
>
> The same here.

Here it's fine, we free the iopf_param below.

Thanks,
Jean

>
> > +
> > +        kfree(iopf_param);
> > +
> > +        return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(iopf_queue_remove_device);
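For completeness, here is roughly how I expect an IOMMU driver to use these
entry points, including the drain-then-flush ordering discussed above (a
driver-side sketch: "my_iommu" and the my_iommu_* helpers are made up for
illustration, and iommu->iopf_queue is assumed to have been allocated
elsewhere; only the iopf_queue_*() calls come from this patch).

struct my_iommu {
        struct iopf_queue *iopf_queue;          /* shared IOPF workqueue */
};

/* Hardware-specific: push page requests still sitting in the PRI queue to
 * the IOPF workqueue, so nothing new arrives after the flush below. */
static void my_iommu_drain_prq(struct my_iommu *iommu, struct device *dev)
{
}

/* Register the device as a fault producer */
static int my_iommu_enable_iopf(struct my_iommu *iommu, struct device *dev)
{
        return iopf_queue_add_device(iommu->iopf_queue, dev);
}

/* Before releasing a PASID: stop new page requests, drain the low-level
 * queue, then wait for the workqueue so no stale fault can hit the next
 * address space bound to this PASID. */
static void my_iommu_stop_pasid(struct my_iommu *iommu, struct device *dev,
                                int pasid)
{
        my_iommu_drain_prq(iommu, dev);
        iopf_queue_flush_dev(dev, pasid);
}

/* Unregister the producer when the device goes away */
static void my_iommu_disable_iopf(struct my_iommu *iommu, struct device *dev)
{
        iopf_queue_remove_device(iommu->iopf_queue, dev);
}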