From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org, linux-mm@kvack.org, David Hildenbrand, Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Andrew Morton, Gerald Schaefer
Subject: [PATCH RFC] s390x/vmem: get rid of memory segment list
Date: Thu, 25 Jun 2020 17:00:29 +0200
Message-Id: <20200625150029.45019-1-david@redhat.com>
I can't come up with a satisfying reason why we still need the memory
segment list. We used this list to represent:
- boot memory
- standby memory added via add_memory()
- loaded dcss segments

When loading/unloading dcss segments, we already track them in a
separate list and check for overlaps
(arch/s390/mm/extmem.c:segment_overlaps_others()) when loading segments.

The overlap check was introduced for some segments in
commit b2300b9efe1b ("[S390] dcssblk: add >2G DCSSs support and stacked
contiguous DCSSs support.") and was extended to cover all dcss segments
in commit ca57114609d1 ("s390/extmem: remove code for 31 bit addressing
mode").

Although I doubt that overlaps with boot memory and standby memory are
relevant, let's reshuffle the checks in __segment_load() to request the
resource first. This will bail out in case we have overlaps with other
resources (esp. boot memory and standby memory). The cleanup order is
now different compared to segment_unload(), but that should not matter.

This smells like a leftover from ancient times; let's get rid of it.
We can now convert vmem_remove_mapping() into a void function;
everybody ignored the return value already.
Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Andrew Morton
Cc: Gerald Schaefer
Signed-off-by: David Hildenbrand
---
 arch/s390/include/asm/pgtable.h |   2 +-
 arch/s390/mm/extmem.c           |  25 +++----
 arch/s390/mm/vmem.c             | 115 ++------------------------------
 3 files changed, 21 insertions(+), 121 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 19d603bd1f36e..7eb01a5459cdf 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1669,7 +1669,7 @@ static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset)
 #define kern_addr_valid(addr)   (1)
 
 extern int vmem_add_mapping(unsigned long start, unsigned long size);
-extern int vmem_remove_mapping(unsigned long start, unsigned long size);
+extern void vmem_remove_mapping(unsigned long start, unsigned long size);
 extern int s390_enable_sie(void);
 extern int s390_enable_skey(void);
 extern void s390_reset_cmma(struct mm_struct *mm);
diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
index 9e0aa7aa03ba4..105c09282f8c5 100644
--- a/arch/s390/mm/extmem.c
+++ b/arch/s390/mm/extmem.c
@@ -313,15 +313,10 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 		goto out_free;
 	}
 
-	rc = vmem_add_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
-
-	if (rc)
-		goto out_free;
-
 	seg->res = kzalloc(sizeof(struct resource), GFP_KERNEL);
 	if (seg->res == NULL) {
 		rc = -ENOMEM;
-		goto out_shared;
+		goto out_free;
 	}
 	seg->res->flags = IORESOURCE_BUSY | IORESOURCE_MEM;
 	seg->res->start = seg->start_addr;
@@ -335,12 +330,17 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 	if (rc == SEG_TYPE_SC ||
 	    ((rc == SEG_TYPE_SR || rc == SEG_TYPE_ER) && !do_nonshared))
 		seg->res->flags |= IORESOURCE_READONLY;
+
+	/* Check for overlapping resources before adding the mapping. */
 	if (request_resource(&iomem_resource, seg->res)) {
 		rc = -EBUSY;
-		kfree(seg->res);
-		goto out_shared;
+		goto out_free_resource;
 	}
 
+	rc = vmem_add_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
+	if (rc)
+		goto out_resource;
+
 	if (do_nonshared)
 		diag_cc = dcss_diag(&loadnsr_scode, seg->dcss_name,
 				&start_addr, &end_addr);
@@ -351,14 +351,14 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 		dcss_diag(&purgeseg_scode, seg->dcss_name,
 				&dummy, &dummy);
 		rc = diag_cc;
-		goto out_resource;
+		goto out_mapping;
 	}
 	if (diag_cc > 1) {
 		pr_warn("Loading DCSS %s failed with rc=%ld\n",
 			name, end_addr);
 		rc = dcss_diag_translate_rc(end_addr);
 		dcss_diag(&purgeseg_scode, seg->dcss_name, &dummy, &dummy);
-		goto out_resource;
+		goto out_mapping;
 	}
 	seg->start_addr = start_addr;
 	seg->end = end_addr;
@@ -377,11 +377,12 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
 			(void*) seg->end, segtype_string[seg->vm_segtype]);
 	}
 	goto out;
+ out_mapping:
+	vmem_remove_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
 out_resource:
 	release_resource(seg->res);
+ out_free_resource:
 	kfree(seg->res);
- out_shared:
-	vmem_remove_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
 out_free:
 	kfree(seg);
 out:
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 8b6282cf7d139..3b9e71654c37b 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -20,14 +20,6 @@
 
 static DEFINE_MUTEX(vmem_mutex);
 
-struct memory_segment {
-	struct list_head list;
-	unsigned long start;
-	unsigned long size;
-};
-
-static LIST_HEAD(mem_segs);
-
 static void __ref *vmem_alloc_pages(unsigned int order)
 {
 	unsigned long size = PAGE_SIZE << order;
@@ -300,94 +292,25 @@ void vmemmap_free(unsigned long start, unsigned long end,
 {
 }
 
-/*
- * Add memory segment to the segment list if it doesn't overlap with
- * an already present segment.
- */
-static int insert_memory_segment(struct memory_segment *seg)
-{
-	struct memory_segment *tmp;
-
-	if (seg->start + seg->size > VMEM_MAX_PHYS ||
-	    seg->start + seg->size < seg->start)
-		return -ERANGE;
-
-	list_for_each_entry(tmp, &mem_segs, list) {
-		if (seg->start >= tmp->start + tmp->size)
-			continue;
-		if (seg->start + seg->size <= tmp->start)
-			continue;
-		return -ENOSPC;
-	}
-	list_add(&seg->list, &mem_segs);
-	return 0;
-}
-
-/*
- * Remove memory segment from the segment list.
- */
-static void remove_memory_segment(struct memory_segment *seg)
-{
-	list_del(&seg->list);
-}
-
-static void __remove_shared_memory(struct memory_segment *seg)
+void vmem_remove_mapping(unsigned long start, unsigned long size)
 {
-	remove_memory_segment(seg);
-	vmem_remove_range(seg->start, seg->size);
-}
-
-int vmem_remove_mapping(unsigned long start, unsigned long size)
-{
-	struct memory_segment *seg;
-	int ret;
-
 	mutex_lock(&vmem_mutex);
-
-	ret = -ENOENT;
-	list_for_each_entry(seg, &mem_segs, list) {
-		if (seg->start == start && seg->size == size)
-			break;
-	}
-
-	if (seg->start != start || seg->size != size)
-		goto out;
-
-	ret = 0;
-	__remove_shared_memory(seg);
-	kfree(seg);
-out:
+	vmem_remove_range(start, size);
 	mutex_unlock(&vmem_mutex);
-	return ret;
 }
 
 int vmem_add_mapping(unsigned long start, unsigned long size)
 {
-	struct memory_segment *seg;
 	int ret;
 
-	mutex_lock(&vmem_mutex);
-	ret = -ENOMEM;
-	seg = kzalloc(sizeof(*seg), GFP_KERNEL);
-	if (!seg)
-		goto out;
-	seg->start = start;
-	seg->size = size;
-
-	ret = insert_memory_segment(seg);
-	if (ret)
-		goto out_free;
+	if (start + size > VMEM_MAX_PHYS ||
+	    start + size < start)
+		return -ERANGE;
 
+	mutex_lock(&vmem_mutex);
 	ret = vmem_add_mem(start, size);
 	if (ret)
-		goto out_remove;
-	goto out;
-
-out_remove:
-	__remove_shared_memory(seg);
-out_free:
-	kfree(seg);
-out:
+		vmem_remove_range(start, size);
 	mutex_unlock(&vmem_mutex);
 	return ret;
 }
@@ -421,27 +344,3 @@ void __init vmem_map_init(void)
 	pr_info("Write protected kernel read-only data: %luk\n",
 		(unsigned long)(__end_rodata - _stext) >> 10);
 }
-
-/*
- * Convert memblock.memory to a memory segment list so there is a single
- * list that contains all memory segments.
- */
-static int __init vmem_convert_memory_chunk(void)
-{
-	struct memblock_region *reg;
-	struct memory_segment *seg;
-
-	mutex_lock(&vmem_mutex);
-	for_each_memblock(memory, reg) {
-		seg = kzalloc(sizeof(*seg), GFP_KERNEL);
-		if (!seg)
-			panic("Out of memory...\n");
-		seg->start = reg->base;
-		seg->size = reg->size;
-		insert_memory_segment(seg);
-	}
-	mutex_unlock(&vmem_mutex);
-	return 0;
-}
-
-core_initcall(vmem_convert_memory_chunk);
-- 
2.26.2