Date: Mon, 28 Jun 2021 15:24:55 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Peter Collingbourne <pcc@google.com>
Cc: John Hubbard, Matthew Wilcox, Andrew Morton, Catalin Marinas, Evgenii Stepanov, Jann Horn, Linux ARM, linux-mm@kvack.org, kernel test robot, Linux API, linux-doc@vger.kernel.org
Subject: Re: [PATCH v4] mm: introduce reference pages
Message-ID: <20210628122455.sqo77q4jfxtiwt5b@box.shutemov.name>
In-Reply-To: <20210619092002.1791322-1-pcc@google.com>
References: <20210619092002.1791322-1-pcc@google.com>

On Sat, Jun 19, 2021 at 02:20:02AM -0700, Peter Collingbourne wrote:
> #include <stdbool.h>
> #include <stdlib.h>
> #include <string.h>
> #include <sys/mman.h>
> #include <unistd.h>
>
> constexpr unsigned char pattern_byte = 0xaa;
>
> #define PAGE_SIZE 4096
>
> _Alignas(PAGE_SIZE) static unsigned char pattern[PAGE_SIZE];
>
> int main(int argc, char **argv) {
>   if (argc < 3)
>     return 1;
>   bool use_refpage = argc > 3;
>   size_t mmap_size = atoi(argv[1]);
>   size_t touch_size = atoi(argv[2]);
>
>   int refpage_fd;
>   if (use_refpage) {
>     memset(pattern, pattern_byte, PAGE_SIZE);
>     refpage_fd = syscall(448, pattern, 0);
>   }
>   for (unsigned i = 0; i != 1000; ++i) {
>     char *p;
>     if (use_refpage) {
>       p = (char *)mmap(0, mmap_size, PROT_READ | PROT_WRITE, MAP_PRIVATE,
>                        refpage_fd, 0);
>     } else {
>       p = (char *)mmap(0, mmap_size, PROT_READ | PROT_WRITE,
>                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>       memset(p, pattern_byte,
> mmap_size);
>     }
>     for (unsigned j = 0; j < touch_size; j += PAGE_SIZE)
>       p[j] = 0;
>     munmap(p, mmap_size);
>   }
> }

I don't like the interface. It is tied to PAGE_SIZE, which doesn't seem
very future-proof. How would it work with THPs?

Maybe we should consider passing a fill pattern down to the kernel and
letting the kernel allocate a page of the appropriate size on read page
fault? The pattern would have to be a power of 2 and limited in length.

-- 
 Kirill A. Shutemov