From: Alexander Duyck
Date: Tue, 24 Sep 2019 08:20:22 -0700
Subject: Re: [PATCH v10 0/6] mm / virtio: Provide support for unused page reporting
To: Michal Hocko
Cc: virtio-dev@lists.oasis-open.org, kvm list, "Michael S. Tsirkin", David Hildenbrand, Dave Hansen, LKML, Matthew Wilcox, linux-mm, Vlastimil Babka, Andrew Morton, Mel Gorman, linux-arm-kernel@lists.infradead.org, Oscar Salvador, Yang Zhang, Pankaj Gupta, Konrad Rzeszutek Wilk, Nitesh Narayan Lal, Rik van Riel, lcapitulino@redhat.com, "Wang, Wei W", Andrea Arcangeli, Paolo Bonzini, Dan Williams, Alexander Duyck
In-Reply-To: <20190924142342.GX23050@dhcp22.suse.cz>
References: <20190918175109.23474.67039.stgit@localhost.localdomain> <20190924142342.GX23050@dhcp22.suse.cz>

On Tue, Sep 24, 2019 at 7:23 AM Michal Hocko wrote:
>
> On Wed 18-09-19 10:52:25, Alexander Duyck wrote:
> [...]
> > In order to try and keep the time needed to find a non-reported page to
> > a minimum we maintain a "reported_boundary" pointer. This pointer is used
> > by the get_unreported_pages iterator to determine at what point it should
> > resume searching for non-reported pages. In order to guarantee pages do
> > not get past the scan I have modified add_to_free_list_tail so that it
> > will not insert pages behind the reported_boundary.
> >
> > If another process needs to perform a massive manipulation of the free
> > list, such as compaction, it can either reset a given individual boundary
> > which will push the boundary back to the list_head, or it can clear the
> > bit indicating the zone is actively processing which will result in the
> > reporting process resetting all of the boundaries for a given zone.
>
> Is this any different from the previous version? The last review
> feedback (both from me and Mel) was that we are not happy to have
> externally imposed constraints on how the page allocator is supposed to
> maintain its free lists.

The main change for v10 versus v9 is that I now allow the page reporting
boundary to be overridden. Specifically, there are two approaches that can
be taken.

The first is to simply reset the iterator for whatever list is being
updated. This pushes the iterator back to the list_head, after which you
can do whatever you want with that specific list.

The other option is to clear the ZONE_PAGE_REPORTING_ACTIVE bit. That
essentially notifies the page reporting code that any hints recorded so
far have been discarded and that it needs to start over.

All I am trying to do with this approach is reduce the work. Without it
the code has to walk the entire free page list for the higher orders on
every iteration, and that will not be cheap. Admittedly it is a bit more
invasive than the cut/splice logic used in compaction, which takes the
pages it has already processed and moves them to the other end of the
list. However, I have reduced things so that we are really only limiting
where add_to_free_list_tail can place pages, and we have to check/push
back the boundary if a reported page is removed from a free_list.
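
To make that a bit more concrete, here is a minimal, self-contained sketch
of the boundary idea. This is a userspace toy model, not the patch code:
only reported_boundary, add_to_free_list_tail and ZONE_PAGE_REPORTING_ACTIVE
correspond to names used above; the struct layout and the helper names are
made up purely for illustration.

#include <stdbool.h>
#include <stddef.h>

struct list_head {
	struct list_head *prev, *next;
};

/* Toy stand-in for a single free_list plus its reporting state. */
struct free_list_model {
	struct list_head head;		/* the free_list itself */
	struct list_head *boundary;	/* stand-in for reported_boundary */
	bool reporting_active;		/* stand-in for ZONE_PAGE_REPORTING_ACTIVE */
};

static void free_list_init(struct free_list_model *fl)
{
	fl->head.prev = fl->head.next = &fl->head;
	fl->boundary = &fl->head;
	fl->reporting_active = false;
}

static void __list_add(struct list_head *new, struct list_head *prev,
		       struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

/*
 * Tail add that never inserts behind the boundary: while reporting is
 * active the boundary acts as the effective tail, so a new (unreported)
 * page lands in front of the already-reported region at the tail.
 */
static void add_to_free_list_tail_model(struct free_list_model *fl,
					struct list_head *page)
{
	struct list_head *tail = fl->reporting_active ? fl->boundary : &fl->head;

	__list_add(page, tail->prev, tail);
}

/*
 * Override option 1: reset one list's boundary back to the list_head so
 * that e.g. compaction can rearrange that particular list freely.
 */
static void boundary_reset(struct free_list_model *fl)
{
	fl->boundary = &fl->head;
}

/*
 * Override option 2: clear the "active" bit; the reporting thread would
 * notice this and start over, resetting every boundary for the zone.
 */
static void reporting_cancel(struct free_list_model *fl)
{
	fl->reporting_active = false;
}

In the real series the boundaries are of course tracked per order and
migratetype under the zone lock; the model collapses that to a single list
just to show the two override paths.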
> If this is really the only way to go forward then I would like to hear
> very convincing arguments about other approaches not being feasible.
> There are none in this cover letter unfortunately. This will be really a
> hard sell without them.

So I had considered several different approaches. What I started out with
was logic that performed the hinting as part of the architecture-specific
arch_free_page call. It worked, but it had performance issues since we
were generating a hint per freed page, which carries fairly high overhead.

The approach Nitesh has been using is to maintain a separate bitmap of
"dirty" pages that have recently been freed. I saw a few problems with
that approach. First, it is lossy: pages can be reallocated while we are
waiting for the iterator to come through and process them. That results in
a greater amount of work, because we have to hunt and peck for the pages,
and the zone lock has to be dropped and reacquired often, which slows the
approach down further. Secondly, there is the management of the bitmap
itself with sparse memory, which would likely necessitate doing something
similar to pageblock_flags in order to support possible gaps in the zones.

I had also considered maintaining a separate list entirely and placing the
free pages there. However, that was more invasive than this solution. In
addition, modifying the free_list/free_area in any way is problematic, as
it can result in the zone lock falling into the same cacheline as the
highest-order free_area.

Ultimately what I settled on is the approach we have now: adding a page to
the head of the free_list is unchanged, adding a page to the tail requires
a check to see whether the iterator is currently walking the list, and
removing a page requires pushing back the iterator if the page is at the
top of the reported list. I was trying to keep the amount of code touched
in the non-reported case to a minimum. With this we only have to test a
bit in the zone flags when adding to the tail, and test a bit in the page
on a move/del from the free_list, so for the most common free/alloc cases
the only impact is that one additional page flag check.
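
For reference, the removal side looks roughly like the following in the
same toy model as above. Again this is illustrative, not the actual patch:
the page_is_reported argument stands in for the per-page "reported" bit,
and which neighbour counts as "back" for the boundary is an assumption of
the model rather than something taken from the series.

/*
 * Remove path, reusing struct free_list_model from the earlier sketch.
 * The common (non-reported) case costs a single flag test; only when the
 * page being removed is both reported and the entry the boundary points
 * at do we touch the boundary, moving it to a neighbouring entry so the
 * walker can resume from a valid position.
 */
static void del_page_from_free_list_model(struct free_list_model *fl,
					  struct list_head *page,
					  bool page_is_reported)
{
	if (page_is_reported && fl->boundary == page)
		fl->boundary = page->next;	/* model choice: step toward the tail */

	page->prev->next = page->next;
	page->next->prev = page->prev;
	page->prev = page->next = NULL;
}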