Date: Tue, 22 Oct 2019 09:57:35 +0200
From: Daniel Vetter
To: Jason Gunthorpe
Cc: "Koenig, Christian", Andrea Arcangeli, "Yang, Philip", Ralph Campbell,
 linux-rdma@vger.kernel.org, John Hubbard, "Kuehling, Felix",
 amd-gfx@lists.freedesktop.org, linux-mm@kvack.org, Jerome Glisse,
 dri-devel@lists.freedesktop.org, Ben Skeggs
Subject: Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
Message-ID: <20191022075735.GV11828@phenom.ffwll.local>
References: <20191016160444.GB3430@mellanox.com>
 <2df298e2-ee91-ef40-5da9-2bc1af3a17be@gmail.com>
 <2046e0b4-ba05-0683-5804-e9bbf903658d@amd.com>
 <20191018203608.GA5670@mellanox.com>
 <20191021135744.GA25164@mellanox.com>
 <20191021151221.GC25164@mellanox.com>
In-Reply-To: <20191021151221.GC25164@mellanox.com>

On Mon, Oct 21, 2019 at 03:12:26PM +0000, Jason Gunthorpe wrote:
> On Mon, Oct 21, 2019 at 02:28:46PM +0000, Koenig, Christian wrote:
> > On 21.10.19 at 15:57, Jason Gunthorpe wrote:
> > > On Sun, Oct 20, 2019 at 02:21:42PM +0000, Koenig, Christian wrote:
> > >> On 18.10.19 at 22:36, Jason Gunthorpe wrote:
> > >>> On Thu, Oct 17, 2019
> > >>> at 04:47:20PM +0000, Koenig, Christian wrote:
> > >>> [SNIP]
> > >>>
> > >>>> So again how are they serialized?
> > >>> The 'driver lock' thing does it, read the hmm documentation, the hmm
> > >>> approach is basically the only approach that was correct of all the
> > >>> drivers..
> > >> Well that's what I did, but what HMM does still doesn't look correct
> > >> to me.
> > > It has a bug, but the basic flow seems to work.
> > >
> > > https://patchwork.kernel.org/patch/11191
> >
> > Maybe wrong link? That link looks like an unrelated discussion on kernel
> > image relocation.
>
> Sorry, it got corrupted:
>
> https://patchwork.kernel.org/patch/11191397/
>
> > >>> So long as the 'driver lock' is held the range cannot become
> > >>> invalidated as the 'driver lock' prevents progress of invalidation.
> > >> Correct, but the problem is it doesn't wait for ongoing operations to
> > >> complete.
> > >>
> > >> See I'm talking about the following case:
> > >>
> > >>   Thread A                      Thread B
> > >>   invalidate_range_start()
> > >>                                 mmu_range_read_begin()
> > >>                                 get_user_pages()/hmm_range_fault()
> > >>                                 grab_driver_lock()
> > >>   Updating the ptes
> > >>   invalidate_range_end()
> > >>
> > >> As far as I can see in invalidate_range_start() the driver lock is taken
> > >> to make sure that we can't start any invalidation while the driver is
> > >> using the pages for a command submission.
> > > Again, this uses the seqlock like scheme *and* the driver lock.
> > >
> > > In this case after grab_driver_lock() mmu_range_read_retry() will
> > > return false if Thread A has progressed to 'updating the ptes'.
> > >
> > > For instance here is how the concurrency resolves for retry:
> > >
> > >   CPU1                          CPU2
> > >                                 seq = mmu_range_read_begin()
> > >   invalidate_range_start()
> > >     invalidate_seq++
> >
> > How that was ordered was what was confusing me.
> > But I've read up on the code
> > in mmu_range_read_begin() and found the lines I was looking for:
> >
> > +	if (is_invalidating)
> > +		wait_event(mmn_mm->wq,
> > +			   READ_ONCE(mmn_mm->invalidate_seq) != seq);
> >
> > [SNIP]
>
> Right, the basic design is that the 'seq' returned by
> mmu_range_read_begin() is never currently under invalidation.
>
> Thus if the starting seq is not invalidating, then if it doesn't
> change we are guaranteed the ptes haven't changed either.
>
> > > For the above I've simplified the mechanics of the invalidate_seq, you
> > > need to look through the patch to see how it actually works.
> >
> > Yea, that you also allow multiple write sides is pretty neat.
>
> Complicated, but necessary to make the non-blocking OOM stuff able to
> read the interval tree under all conditions :\
>
> > > One of the motivations for this work is to squash that bug by adding a
> > > seqlock like pattern. But the basic hmm flow and collision-retry
> > > approach seems sound.
> > >
> > > Do you see a problem with this patch?
> >
> > No, not any more.
>
> Okay, great, thanks
>
> > Essentially you are doing the same thing I've tried to do with the
> > original amdgpu implementation. The difference is that you don't try to
> > use a per range sequence (which is a good idea, we never got that fully
> > working) and you allow multiple writers at the same time.
>
> Yes, ODP had the per-range sequence and it never worked right
> either. Keeping track of things during the invalidate_end was too complex.
>
> > Feel free to stitch an Acked-by: Christian König
> > on patch #2,
>
> Thanks! Can you also take some review and test for the AMD related
> patches? These were quite hard to make, it is very likely I've made an
> error..
>
> > but you're still doing a bunch of things in there which are way beyond
> > my understanding (e.g. where are all the SMP barriers?).
>
> The only non-locked data is 'struct mmu_range_notifier->invalidate_seq'.
>
> On the write side it uses a WRITE_ONCE. The 'seq' it writes is
> generated under the mmn_mm->lock spinlock (held before and after the
> WRITE_ONCE) such that all concurrent WRITE_ONCE's are storing the same
> value.
>
> Essentially the spinlock is providing the barrier to order the writes:
>
> invalidate_range_start():
>	spin_lock(&mmn_mm->lock);
>	mmn_mm->active_invalidate_ranges++;
>	mmn_mm->invalidate_seq |= 1;
>	*seq = mmn_mm->invalidate_seq;
>	spin_unlock(&mmn_mm->lock);
>
>	WRITE_ONCE(mrn->invalidate_seq, cur_seq);
>
> invalidate_range_end():
>	spin_lock(&mmn_mm->lock);
>	if (!--mmn_mm->active_invalidate_ranges)
>		mmn_mm->invalidate_seq++;
>	spin_unlock(&mmn_mm->lock);
>
> ie when we do invalidate_seq++, due to the active_invalidate_ranges
> counter and the spinlock, we know all other WRITE_ONCE's have passed
> their spin_unlock and no concurrent ones are starting. The spinlock is
> providing the barrier here.
>
> On the read side it is a similar argument, except with the
> driver_lock:
>
> mmu_range_read_begin():
>	seq = READ_ONCE(mrn->invalidate_seq);
>
> Here 'seq' may be the current value, or it may be an older
> value. Ordering is eventually provided by the driver_lock:
>
> mn_tree_invalidate_start()
>	[..]
>	WRITE_ONCE(mrn->invalidate_seq, cur_seq);
>	driver_lock()
>	driver_unlock()
>
> vs
>
> driver_lock()
>	mmu_range_read_begin()
>		return seq != READ_ONCE(mrn->invalidate_seq);
> driver_unlock()
>
> Here if mn_tree_invalidate_start() has passed driver_unlock() then,
> because we do driver_lock() before mmu_range_read_begin(), we must
> observe the WRITE_ONCE, ie the driver_unlock()/driver_lock() provide
> the paired barrier.
>
> If mn_tree_invalidate_start() has not passed driver_lock() then the
> mmu_range_read_begin() side prevents it from passing driver_lock()
> while it holds the lock.
> In this case it is OK if we don't observe the
> WRITE_ONCE() that was done just before, as the invalidate_start()
> thread can't proceed to invalidation.
>
> It is very unusual locking, it would be great if others could help
> look at it!
>
> The unusual bit in all of this is using a lock's critical region to
> 'protect' data for read, but updating that same data before the lock's
> critical section, ie relying on the unlock barrier to 'release' program
> ordered stores done before the lock's own critical region, and the
> lock side barrier to 'acquire' those stores.

I think this unusual use of locks as barriers for other unlocked accesses
deserves comments even more than just normal barriers do. Can you please
add them? I think the design seems sound ...

Also, the comment on the driver's lock hopefully prevents driver
maintainers from moving the driver_lock around in a way that would very
subtly break the scheme, so I think having the acquire barrier commented
in each place would be really good.

Cheers, Daniel

> This approach is borrowed from the hmm mirror implementation..
>
> If for some reason the scheme doesn't work, then the fallback is to
> expand the mmn_mm->lock spinlock to protect the mrn->invalidate_seq at
> some cost in performance.
>
> Jason

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch