From: Suren Baghdasaryan
Date: Thu, 4 Feb 2021 17:44:42 -0800
Subject: Re: [PATCH] mm: cma: support sysfs
To: John Hubbard
Cc: Minchan Kim, Andrew Morton, Greg Kroah-Hartman, John Dias, LKML, linux-mm
In-Reply-To: <96bc11de-fe47-c7d3-6e61-5a5a5b6d2f4c@nvidia.com>
References: <20210203155001.4121868-1-minchan@kernel.org> <7e7c01a7-27fe-00a3-f67f-8bcf9ef3eae9@nvidia.com> <9900858e-4d9b-5111-e695-fd2bb7463af9@nvidia.com> <96bc11de-fe47-c7d3-6e61-5a5a5b6d2f4c@nvidia.com>

On Thu, Feb 4, 2021 at 4:34 PM John Hubbard wrote:
>
> On 2/4/21 4:25 PM, John Hubbard wrote:
> > On 2/4/21 3:45 PM, Suren Baghdasaryan wrote:
> > ...
> >>>>>> 2) The overall CMA allocation attempts/failures (first two items above) seem
> >>>>>> an odd pair of things to track. Maybe that is what was easy to track, but I'd
> >>>>>> vote for just omitting them.
> >>>>>
> >>>>> Then, how would we know how often the CMA API failed?
> >>>>
> >>>> Why would you even need to know that, *in addition* to knowing the specific
> >>>> page allocation numbers that failed? Again, there is no real-world motivation
> >>>> cited yet, just "this is good data". Need more stories and support here.
> >>>
> >>> IMHO it would be very useful to see whether there are multiple
> >>> small-order allocation failures or a few large-order ones, especially
> >>> for CMA, where large allocations are not unusual. For that I believe
> >>> both alloc_pages_attempt and alloc_pages_fail would be required.
> >>
> >> Sorry, I meant to say "both cma_alloc_fail and alloc_pages_fail would
> >> be required".
> >
> > So if you want to know that, the existing items are still a little too indirect
> > to really get it right. You can only infer the average allocation size, by
> > dividing. Instead, we should provide the allocation size for each count.
> >
> > The limited interface makes this a little awkward, but using zones/ranges could
> > work: "for this range of allocation sizes, there were the following stats". Or
> > some other technique that I haven't thought of (maybe two items per file?) would
> > be better.
> >
> > On the other hand, there's an argument for keeping this minimal and simple. That
> > would probably lead us to putting a couple of items into /proc/vmstat, as I
> > just mentioned in my other response, and calling it good.

True. I was thinking along these lines, but per-order counters felt like
they might be overkill. I'm all for keeping it simple.

>
> ...and remember: if we keep it nice and minimal and clean, we can put it into
> /proc/vmstat and monitor it.

No objections from me.

> And then, if a problem shows up, the more complex and advanced debugging data can
> go into debugfs's CMA area. And you're all set.
>
> If Android made up some policy not to use debugfs, then:
>
> a) that probably won't prevent engineers from using it anyway for advanced debugging,
> and
>
> b) if (a) somehow falls short, then we need to talk about what Android's plans are to
> fill the need.
And "fill up sysfs with debugfs items, possibly duplicating some of them, > and generally making an unecessary mess, to compensate for not using debugfs" is not > my first choice. :) > > > thanks, > -- > John Hubbard > NVIDIA