From: Andreas Gruenbacher
Date: Wed, 9 Mar 2022 22:08:32 +0100
Subject: Re: Buffered I/O broken on s390x with page faults disabled (gfs2)
To: Linus Torvalds, Filipe Manana
Cc: Catalin Marinas, David Hildenbrand, Alexander Viro, linux-s390, Linux-MM, linux-fsdevel, linux-btrfs
On Wed, Mar 9, 2022 at 8:08 PM Linus Torvalds wrote:
> On Wed, Mar 9, 2022 at 10:42 AM Andreas Gruenbacher wrote:
> > With a large enough buffer, a simple malloc() will return unmapped
> > pages, and reading into such a buffer will result in fault-in. So page
> > faults during read() are actually pretty normal, and it's not the
> > user's fault.
>
> Agreed. But that wasn't the case here:
>
> > In my test case, the buffer was pre-initialized with memset() to avoid
> > those kinds of page faults, which meant that the page faults in
> > gfs2_file_read_iter() only started to happen when we were out of
> > memory. But that's not the common case.
>
> Exactly. I do not think this is a case that we should - or need to -
> optimize for.
>
> And doing too much pre-faulting is actually counter-productive.
>
> > * Get rid of max_size: it really makes no sense to second-guess what
> >   the caller needs.
>
> It's not about "what caller needs". It's literally about latency
> issues. If you can force a busy loop in kernel space by having one
> unmapped page and then do a 2GB read(), that's a *PROBLEM*.
>
> Now, we can try this thing, because I think we end up having other
> size limitations in the IO subsystem that means that the filesystem
> won't actually do that, but the moment I hear somebody talk about
> latencies, that max_size goes back.
Thanks, this puts fault_in_safe_writeable() in line with
fault_in_readable() and fault_in_writeable().

There are currently two users of
fault_in_safe_writeable()/fault_in_iov_iter_writeable(): gfs2 and
btrfs. In gfs2, we cap the size at BIO_MAX_VECS pages (256). I don't
see an explicit cap in btrfs; adding Filipe.

Andreas