From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 28 Apr 2023 11:12:03 +0800
Subject: Re: [PATCH v3 51/55] smc: Drop smc_sendpage() in favour of smc_sendmsg() + MSG_SPLICE_PAGES
From: "D. Wythe" <alibuda@linux.alibaba.com>
To: David Howells, Matthew Wilcox, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Al Viro, Christoph Hellwig, Jens Axboe, Jeff Layton, Christian Brauner, Chuck Lever III, Linus Torvalds, netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Karsten Graul, Wenjia Zhang, Jan Karcher, linux-s390@vger.kernel.org
References: <20230331160914.1608208-1-dhowells@redhat.com> <20230331160914.1608208-52-dhowells@redhat.com> <4253f27c-2c5e-3033-14b3-6e31ee344e8b@linux.alibaba.com>
In-Reply-To: <4253f27c-2c5e-3033-14b3-6e31ee344e8b@linux.alibaba.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

On 4/26/23 9:07 PM, D. Wythe wrote:
>
> Hi David,
>
> Fallback is one of the most important features of SMC: it automatically
> downgrades to TCP when SMC discovers that the peer does not support SMC.
> After fallback, SMC expects the sock's capabilities to be consistent with
> those of a TCP sock. If you delete smc_sendpage(), then when a fallback
> occurs, the fallen-back sock loses the sendpage capability
> (tcp_sendpage).
>
> Thanks
> D. Wythe

Sorry, I missed the key email context. The problem mentioned here does not
exist ...

> On 4/1/23 12:09 AM, David Howells wrote:
>> Drop the smc_sendpage() code as smc_sendmsg() just passes the call down
>> to the underlying TCP socket and smc_tx_sendpage() is just a wrapper
>> around its sendmsg implementation.
>>
>> Signed-off-by: David Howells
>> cc: Karsten Graul
>> cc: Wenjia Zhang
>> cc: Jan Karcher
>> cc: "David S. Miller"
>> cc: Eric Dumazet
>> cc: Jakub Kicinski
>> cc: Paolo Abeni
>> cc: Jens Axboe
>> cc: Matthew Wilcox
>> cc: linux-s390@vger.kernel.org
>> cc: netdev@vger.kernel.org
>> ---
>>   net/smc/af_smc.c    | 29 -----------------------------
>>   net/smc/smc_stats.c |  2 +-
>>   net/smc/smc_stats.h |  1 -
>>   net/smc/smc_tx.c    | 16 ----------------
>>   net/smc/smc_tx.h    |  2 --
>>   5 files changed, 1 insertion(+), 49 deletions(-)
>>
>> diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
>> index a4cccdfdc00a..d4113c8a7cda 100644
>> --- a/net/smc/af_smc.c
>> +++ b/net/smc/af_smc.c
>> @@ -3125,34 +3125,6 @@ static int smc_ioctl(struct socket *sock, unsigned int cmd,
>>       return put_user(answ, (int __user *)arg);
>>   }
>>
>> -static ssize_t smc_sendpage(struct socket *sock, struct page *page,
>> -                int offset, size_t size, int flags)
>> -{
>> -    struct sock *sk = sock->sk;
>> -    struct smc_sock *smc;
>> -    int rc = -EPIPE;
>> -
>> -    smc = smc_sk(sk);
>> -    lock_sock(sk);
>> -    if (sk->sk_state != SMC_ACTIVE) {
>> -        release_sock(sk);
>> -        goto out;
>> -    }
>> -    release_sock(sk);
>> -    if (smc->use_fallback) {
>> -        rc = kernel_sendpage(smc->clcsock, page, offset,
>> -                     size, flags);
>> -    } else {
>> -        lock_sock(sk);
>> -        rc = smc_tx_sendpage(smc, page, offset, size, flags);
>> -        release_sock(sk);
>> -        SMC_STAT_INC(smc, sendpage_cnt);
>> -    }
>> -
>> -out:
>> -    return rc;
>> -}
>> -
>>   /* Map the affected portions of the rmbe into an spd, note the number of bytes
>>    * to splice in conn->splice_pending, and press 'go'. Delays consumer cursor
>>    * updates till whenever a respective page has been fully processed.
>> @@ -3224,7 +3196,6 @@ static const struct proto_ops smc_sock_ops = {
>>       .sendmsg    = smc_sendmsg,
>>       .recvmsg    = smc_recvmsg,
>>       .mmap        = sock_no_mmap,
>> -    .sendpage    = smc_sendpage,
>>       .splice_read    = smc_splice_read,
>>   };
>>
>> diff --git a/net/smc/smc_stats.c b/net/smc/smc_stats.c
>> index e80e34f7ac15..ca14c0f3a07d 100644
>> --- a/net/smc/smc_stats.c
>> +++ b/net/smc/smc_stats.c
>> @@ -227,7 +227,7 @@ static int smc_nl_fill_stats_tech_data(struct sk_buff *skb,
>>                     SMC_NLA_STATS_PAD))
>>           goto errattr;
>>       if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_SENDPAGE_CNT,
>> -                  smc_tech->sendpage_cnt,
>> +                  0,
>>                     SMC_NLA_STATS_PAD))
>>           goto errattr;
>>       if (nla_put_u64_64bit(skb, SMC_NLA_STATS_T_CORK_CNT,
>> diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h
>> index 84b7ecd8c05c..b60fe1eb37ab 100644
>> --- a/net/smc/smc_stats.h
>> +++ b/net/smc/smc_stats.h
>> @@ -71,7 +71,6 @@ struct smc_stats_tech {
>>       u64            clnt_v2_succ_cnt;
>>       u64            srv_v1_succ_cnt;
>>       u64            srv_v2_succ_cnt;
>> -    u64            sendpage_cnt;
>>       u64            urg_data_cnt;
>>       u64            splice_cnt;
>>       u64            cork_cnt;
>> diff --git a/net/smc/smc_tx.c b/net/smc/smc_tx.c
>> index f4b6a71ac488..d31ce8209fa2 100644
>> --- a/net/smc/smc_tx.c
>> +++ b/net/smc/smc_tx.c
>> @@ -298,22 +298,6 @@ int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len)
>>       return rc;
>>   }
>>
>> -int smc_tx_sendpage(struct smc_sock *smc, struct page *page, int offset,
>> -            size_t size, int flags)
>> -{
>> -    struct msghdr msg = {.msg_flags = flags};
>> -    char *kaddr = kmap(page);
>> -    struct kvec iov;
>> -    int rc;
>> -
>> -    iov.iov_base = kaddr + offset;
>> -    iov.iov_len = size;
>> -    iov_iter_kvec(&msg.msg_iter, ITER_SOURCE, &iov, 1, size);
>> -    rc = smc_tx_sendmsg(smc, &msg, size);
>> -    kunmap(page);
>> -    return rc;
>> -}
>> -
>>   /***************************** sndbuf consumer *******************************/
>>
>>   /* sndbuf consumer: actual data transfer of one target chunk with ISM write */
>> diff --git a/net/smc/smc_tx.h b/net/smc/smc_tx.h
>> index 34b578498b1f..a59f370b8b43 100644
>> --- a/net/smc/smc_tx.h
>> +++ b/net/smc/smc_tx.h
>> @@ -31,8 +31,6 @@ void smc_tx_pending(struct smc_connection *conn);
>>   void smc_tx_work(struct work_struct *work);
>>   void smc_tx_init(struct smc_sock *smc);
>>   int smc_tx_sendmsg(struct smc_sock *smc, struct msghdr *msg, size_t len);
>> -int smc_tx_sendpage(struct smc_sock *smc, struct page *page, int offset,
>> -            size_t size, int flags);
>>   int smc_tx_sndbuf_nonempty(struct smc_connection *conn);
>>   void smc_tx_sndbuf_nonfull(struct smc_sock *smc);
>>   void smc_tx_consumer_update(struct smc_connection *conn, bool force);
>
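For readers following the wider series: the replacement pattern for former
kernel_sendpage() call sites is to build a bio_vec-backed iterator and pass
MSG_SPLICE_PAGES to sendmsg(), so the transport may splice the page into its
queue rather than copy it. A minimal kernel-side sketch of that pattern
(illustrative only; this exact helper is not part of the patch above):

```c
/* Sketch: what a kernel_sendpage(sock, page, offset, size, flags) call
 * site looks like after conversion to sendmsg() + MSG_SPLICE_PAGES.
 * Kernel-internal code; not compilable standalone.
 */
static ssize_t send_one_page(struct socket *sock, struct page *page,
			     int offset, size_t size, int flags)
{
	struct bio_vec bvec;
	struct msghdr msg = {
		.msg_flags = flags | MSG_SPLICE_PAGES,
	};

	/* Describe the page fragment directly; no kmap() and no copy,
	 * unlike the removed smc_tx_sendpage() above.
	 */
	bvec_set_page(&bvec, page, size, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);

	return sock_sendmsg(sock, &msg);
}
```

For SMC no such conversion is needed at all, which is the point of this
patch: smc_sendmsg() already forwards to the underlying TCP socket's
sendmsg() on fallback, so dropping .sendpage loses nothing.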