From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 22 Mar 2021 14:36:18 +0100
From: Michal Hocko
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
	Hillf Danton, Matthew Wilcox, Oleksiy Avramchenko, Steven Rostedt,
	Minchan Kim, huang ying
Subject: Re: [PATCH RFC 3/3] mm/vmalloc: remove vwrite()
Message-ID:
References: <20210319143452.25948-1-david@redhat.com>
 <20210319143452.25948-4-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210319143452.25948-4-david@redhat.com>

On Fri 19-03-21 15:34:52, David Hildenbrand wrote:
> The last user (/dev/kmem) is gone. Let's drop it.
> 
> Cc: Andrew Morton
> Cc: Hillf Danton
> Cc: Michal Hocko
> Cc: Matthew Wilcox
> Cc: Oleksiy Avramchenko
> Cc: Steven Rostedt
> Cc: Minchan Kim
> Cc: huang ying
> Signed-off-by: David Hildenbrand

Acked-by: Michal Hocko

> ---
>  include/linux/vmalloc.h |   1 -
>  mm/vmalloc.c            | 111 ----------------------------------------
>  2 files changed, 112 deletions(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 390af680e916..9c1b17c7dd95 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -200,7 +200,6 @@ static inline void set_vm_flush_reset_perms(void *addr)
>  
>  /* for /proc/kcore */
>  extern long vread(char *buf, char *addr, unsigned long count);
> -extern long vwrite(char *buf, char *addr, unsigned long count);
>  
>  /*
>   * Internals. Dont't use..
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ccb405b82581..07a39881f9d6 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2820,43 +2820,6 @@ static int aligned_vread(char *buf, char *addr, unsigned long count)
>  	return copied;
>  }
>  
> -static int aligned_vwrite(char *buf, char *addr, unsigned long count)
> -{
> -	struct page *p;
> -	int copied = 0;
> -
> -	while (count) {
> -		unsigned long offset, length;
> -
> -		offset = offset_in_page(addr);
> -		length = PAGE_SIZE - offset;
> -		if (length > count)
> -			length = count;
> -		p = vmalloc_to_page(addr);
> -		/*
> -		 * To do safe access to this _mapped_ area, we need
> -		 * lock. But adding lock here means that we need to add
> -		 * overhead of vmalloc()/vfree() calles for this _debug_
> -		 * interface, rarely used. Instead of that, we'll use
> -		 * kmap() and get small overhead in this access function.
> -		 */
> -		if (p) {
> -			/*
> -			 * we can expect USER0 is not used (see vread/vwrite's
> -			 * function description)
> -			 */
> -			void *map = kmap_atomic(p);
> -			memcpy(map + offset, buf, length);
> -			kunmap_atomic(map);
> -		}
> -		addr += length;
> -		buf += length;
> -		copied += length;
> -		count -= length;
> -	}
> -	return copied;
> -}
> -
>  /**
>   * vread() - read vmalloc area in a safe way.
>   * @buf:	buffer for reading data
> @@ -2936,80 +2899,6 @@ long vread(char *buf, char *addr, unsigned long count)
>  	return buflen;
>  }
>  
> -/**
> - * vwrite() - write vmalloc area in a safe way.
> - * @buf:	buffer for source data
> - * @addr:	vm address.
> - * @count:	number of bytes to be read.
> - *
> - * This function checks that addr is a valid vmalloc'ed area, and
> - * copy data from a buffer to the given addr. If specified range of
> - * [addr...addr+count) includes some valid address, data is copied from
> - * proper area of @buf. If there are memory holes, no copy to hole.
> - * IOREMAP area is treated as memory hole and no copy is done.
> - *
> - * If [addr...addr+count) doesn't includes any intersects with alive
> - * vm_struct area, returns 0. @buf should be kernel's buffer.
> - *
> - * Note: In usual ops, vwrite() is never necessary because the caller
> - * should know vmalloc() area is valid and can use memcpy().
> - * This is for routines which have to access vmalloc area without
> - * any information, as /dev/kmem.
> - *
> - * Return: number of bytes for which addr and buf should be
> - * increased (same number as @count) or %0 if [addr...addr+count)
> - * doesn't include any intersection with valid vmalloc area
> - */
> -long vwrite(char *buf, char *addr, unsigned long count)
> -{
> -	struct vmap_area *va;
> -	struct vm_struct *vm;
> -	char *vaddr;
> -	unsigned long n, buflen;
> -	int copied = 0;
> -
> -	/* Don't allow overflow */
> -	if ((unsigned long) addr + count < count)
> -		count = -(unsigned long) addr;
> -	buflen = count;
> -
> -	spin_lock(&vmap_area_lock);
> -	list_for_each_entry(va, &vmap_area_list, list) {
> -		if (!count)
> -			break;
> -
> -		if (!va->vm)
> -			continue;
> -
> -		vm = va->vm;
> -		vaddr = (char *) vm->addr;
> -		if (addr >= vaddr + get_vm_area_size(vm))
> -			continue;
> -		while (addr < vaddr) {
> -			if (count == 0)
> -				goto finished;
> -			buf++;
> -			addr++;
> -			count--;
> -		}
> -		n = vaddr + get_vm_area_size(vm) - addr;
> -		if (n > count)
> -			n = count;
> -		if (!(vm->flags & VM_IOREMAP)) {
> -			aligned_vwrite(buf, addr, n);
> -			copied++;
> -		}
> -		buf += n;
> -		addr += n;
> -		count -= n;
> -	}
> -finished:
> -	spin_unlock(&vmap_area_lock);
> -	if (!copied)
> -		return 0;
> -	return buflen;
> -}
> -
>  /**
>   * remap_vmalloc_range_partial - map vmalloc pages to userspace
>   * @vma:		vma to cover
> -- 
> 2.29.2

-- 
Michal Hocko
SUSE Labs