From: Yu Zhao <yuzhao@google.com>
Date: Thu, 1 Sep 2022 17:44:03 -0600
Subject: Re: [PATCH v5 04/44] x86: asm: instrument usercopy in get_user() and put_user()
To: Andrew Morton, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Alexander Potapenko, Marco Elver, Alexander Viro, Alexei Starovoitov,
 Andrey Konovalov, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
 Christoph Hellwig, Christoph Lameter, David Rientjes, Dmitry Vyukov,
 Eric Dumazet, Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich,
 Jens Axboe, Joonsoo Kim, Kees Cook, Mark Rutland, Matthew Wilcox,
 "Michael S. Tsirkin", Pekka Enberg, Petr Mladek, Steven Rostedt,
 Thomas Gleixner, Vasily Gorbik, Vegard Nossum, Vlastimil Babka,
 kasan-dev, Linux Memory Management List, Linux-Arch, LKML

On Tue, Aug
30, 2022 at 4:05 PM Andrew Morton wrote:

...

> Yu, that inclusion is regrettable. I don't think mm_types.h is an
> appropriate site for implementing lru_gen_use_mm() anyway. Adding a
> new header is always the right fix for these things. I'd suggest
> adding a new mglru.h (or whatever) and putting most/all of the mglru
> material in there.
>
> Also, the addition to kernel/sched/core.c wasn't clearly changelogged,
> is uncommented and I doubt if the sched developers know about it, let
> alone reviewed it. Please give them a heads-up.

Adding Ingo, Peter, Juri and Vincent.

I added lru_gen_use_mm() (one store operation) to context_switch() in
kernel/sched/core.c, and I would appreciate it if you could take a look
and let me know if you have any concerns:
https://lore.kernel.org/r/20220815071332.627393-9-yuzhao@google.com/

I'll resend the series in a week or so, and cc you when that happens.

> The addition looks fairly benign, but core context_switch() is the
> sort of thing which people get rather defensive about and putting
> mm-specific stuff in there might be challenged. Some quantitative
> justification of this optimization would be appropriate.

The commit message (from the above link) touches on the theory only:

  This patch uses the following optimizations when walking page tables:
  1. It tracks the usage of mm_struct's between context switches so that
     page table walkers can skip processes that have been sleeping since
     the last iteration.

Let me expand on this.

TLDR: lru_gen_use_mm() introduces one extra store whenever the kernel
switches to a new mm_struct; the store sets a flag that page reclaim
later clears.

For systems that are NOT under memory pressure:
1. This is a new overhead.
2. I don't think it's measurable, so it can't be the last straw.
3. Even assuming it can be measured, the belief is that underutilized
   systems should be sacrificed (to some degree) for the greater good.

For systems that are under memory pressure:
1.
When this flag is set on an mm_struct, page reclaim knows the
   mm_struct has been used since the last time the flag was cleared, so
   this mm_struct is worth checking out (to clear the accessed bit).
2. A similar idea has been used on Android and ChromeOS: when an app or
   a tab goes to the background, these systems (conditionally) call
   MADV_COLD. The majority of GUI applications don't implement this idea
   themselves, so MGLRU opts to do it on their behalf. How much it
   benefits server applications is unknown (and uninteresting).
3. This optimization benefits arm64 v8.2+ more than x86, since x86
   supports the accessed bit in non-leaf entries and can therefore
   shrink the search space based on that. On a 4GB ARM system with 40
   Chrome tabs open and 5 tabs in active use, this optimization improves
   page table walk performance by about 5%. The overall benefit is small
   but measurable under heavy memory pressure.
4. The idea can be reused by other MM components, e.g., khugepaged.

Thanks.
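P.S. To make the flag mechanism above concrete, here is a minimal
userspace sketch. This is NOT the actual kernel code and all names in it
are made up: switching to an mm_struct costs one plain store, and the
page table walker later test-and-clears the flag, skipping mm_structs
that stayed idle since its last pass.

```c
#include <stdbool.h>

/* Stand-in for the per-mm "used since last walk" state. */
struct mm_sketch {
	bool used_since_last_walk;
};

/* Context-switch path: one store, nothing else. */
void lru_gen_use_mm_sketch(struct mm_sketch *mm)
{
	mm->used_since_last_walk = true;
}

/* Reclaim path: returns true if the mm is worth scanning this round. */
bool walker_should_scan(struct mm_sketch *mm)
{
	bool used = mm->used_since_last_walk;

	mm->used_since_last_walk = false;	/* rearm for the next interval */
	return used;
}
```

A process that sleeps through an entire reclaim interval never executes
the store, so the walker skips it for free.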
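P.P.S. Point 3 can be illustrated with a toy model (sizes and names are
made up; real x86-64 tables have 512 entries per level): with a non-leaf
accessed bit, a clear PMD lets the walker prune its whole subtree,
whereas without one every leaf entry must be visited.

```c
#include <stdbool.h>
#include <stddef.h>

#define TOY_PMDS		8
#define TOY_PTES_PER_PMD	8

struct toy_pmd {
	bool accessed;	/* stands in for the non-leaf accessed bit (x86) */
};

/* Count the leaf entries a walker must visit. */
int walk_leaves(const struct toy_pmd *pmds, size_t n, bool have_nonleaf_bit)
{
	int visited = 0;

	for (size_t i = 0; i < n; i++) {
		if (have_nonleaf_bit && !pmds[i].accessed)
			continue;	/* prune the whole subtree */
		visited += TOY_PTES_PER_PMD;
	}
	return visited;
}
```

With only one of eight PMDs accessed, the pruning walker touches 8 leaf
entries instead of 64; arm64, lacking the non-leaf bit, relies more on
the per-mm flag above to cut its search space.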