
Decentralized Systems and Distributed Computing

Wiley-Scrivener, published 01.07.2024
This book provides a comprehensive exploration of next-generation internet, distributed systems, and distributed computing, offering valuable insights into their impact on society and the future of technology.
The use of distributed systems is a major step forward in IT and computer science. As the number of interdependent tasks grows, a single machine can no longer handle all of them. Distributed computing improves on traditional computing environments in several ways. Distributed systems reduce the risk of a single point of failure, making them more reliable and fault tolerant. Most modern distributed systems are designed to be scalable, meaning that processing power can be added on the fly to improve performance. The internet of the future is meant to give us freedom and choice, encourage diversity and decentralization, and make it easier for people to be creative and conduct research. By making the internet more three-dimensional and immersive, the metaverse could introduce new ways to use it. Some observers have criticized the metaverse, and there is much uncertainty regarding its future. Analysts in the field have pondered whether the metaverse will differ much from our current digital experiences and, if so, whether people will be willing to spend hours per day exploring virtual space while wearing a headset. This book examines the different aspects of the next-generation internet, distributed systems, and distributed computing, and their effects on society as a whole.


Sandhya Avasthi, PhD, is an assistant professor in the Computer Science and Engineering Department at ABES Engineering College, Dr. Abdul Kalam Technical University, Ghaziabad, India. She has more than 17 years of teaching experience and is an active researcher in the fields of machine learning and data mining. She has published numerous research articles in international journals, conference proceedings, and book chapters. She is a senior member of the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery and is continuously involved in professional activities alongside her academic work.
Suman Lata Tripathi, PhD, is a professor at Lovely Professional University with more than 17 years of experience in academics. She has published more than 89 research papers, as well as 13 Indian patents and 4 copyrights. She has organized several workshops, summer internships, and expert lectures for students and has worked as a session chair, conference steering committee member, editorial board member, and peer reviewer in national and international journals and conferences. She has edited and authored more than 14 books and one book series in different areas of electronics and electrical engineering.
Namrata Dhanda, PhD, is a professor in the Department of Computer Science and Engineering at Amity University, Uttar Pradesh, Lucknow. She has over 21 years of experience teaching graduate and post-graduate students. Currently, she is guiding a large number of PhD scholars working in the domains of machine learning, data science, and big data analytics. Additionally, she has more than 50 papers published in reputed journals, national and international conferences, and book chapters.
Satya Bhushan Verma, PhD, is an associate professor and head of the Department of Computer Science and Engineering, Institute of Technology, Shri Ramswaroop Memorial University, Lucknow-Deva Road, India. He has five years of teaching experience at the undergraduate and postgraduate levels, as well as research experience. He has been actively involved in a number of international committees as an abstract reviewer, workshop reviewer, and panel reviewer, and has published several research papers in journals, conferences, and symposia of national and international recognition.
Available formats
Book, hardcover
EUR 188.50
E-Book, PDF (no protection)
EUR 168.99

Details
Additional ISBN/GTIN: 9781394205103
Product type: E-Book
Binding: E-Book
Format: EPUB
Year of publication: 2024
Publication date: 01.07.2024
Pages: 400
Language: English
File size: 12508
Item no.: 17258219
Categories
Genre: 9201

Contents/Reviews

Reading Sample

1
Introduction to Next-Generation Internet and Distributed Systems

Swapnil Gupta1, Rajat Verma1* and Namrata Dhanda2

1Department of Computer Science and Engineering, Pranveer Singh Institute of Technology, Kanpur, Uttar Pradesh, India

2Department of Computer Science and Engineering, Amity University Uttar Pradesh, Lucknow, India
Abstract

As digital vulnerabilities and concerns over data breaches and surveillance continue to rise, the need for an internet that better protects users' sensitive information and communications becomes imperative. This chapter serves as a comprehensive introduction to the Next-Generation Internet (NGI) and distributed systems, highlighting their pivotal role in addressing these growing concerns. The NGI represents a paradigm shift that unlocks new frontiers for applications and services currently unattainable with the existing internet infrastructure, including virtual and augmented reality, distributed systems, advanced artificial intelligence, and machine learning systems. The development of the NGI is expected to have a profound impact on future data management practices, necessitating a significant departure from the platform-centric approach of the present. Instead, the NGI emphasizes a human-centric model, placing users at the center of the data ecosystem and empowering them with greater control over their data. By adopting this approach, the NGI offers potential solutions to mitigate the digital divide, enhance inclusivity, and improve accessibility to the internet, particularly for individuals lacking reliable internet infrastructure. The chapter also explores the intricate challenges associated with studying distributed systems, characterized by their complex nature and the difficulty of analyzing their various components. It provides an overview of the current network infrastructure, including protocols, proposed architectures, and deployment challenges. Additionally, the chapter introduces distributed systems, examining their general design, their goals, and the disruptive technology of blockchain. Key findings suggest that the NGI's human-centric model and distributed systems can revolutionize data management practices, fostering security, inclusivity, and user empowerment. The NGI has the potential to reshape the internet landscape and bridge the digital divide by offering innovative solutions for secure and accessible data sharing. However, realizing these advancements requires addressing technical challenges, such as scalability, performance, and security, through ongoing research and development. An extensive overview of the NGI and distributed systems is provided, offering insights into their potential implications for data management. By embracing a human-centric approach and leveraging distributed systems, the NGI represents a promising path toward a more secure, inclusive, and user-centric internet. The chapter emphasizes the need for continued exploration and innovation to address challenges and unlock the full potential of these transformative technologies.

Keywords: CDN, CCN, consensus, DCS, NGI, P2P, scale, SDN
1.1 Introduction

Owing to the extraordinary spread of the ubiquitous internet and the proliferation of IoT devices, the Internet is growing at an exponential rate. This growth is taking place mostly at the Internet's edge, among the devices themselves, rather than in its core infrastructure. Next-Generation Internet (NGI) is an umbrella term that encompasses a range of technological advancements and innovations. NGI incorporates artificial intelligence and machine learning technologies to enable more intelligent, personalized, and efficient internet services. One of the key technical components of NGI is the development of advanced networking technologies, such as software-defined networking (SDN) [1] and network function virtualization (NFV) [1]. These technologies are designed to improve network flexibility, efficiency, and scalability, making it easier to manage and optimize complex network infrastructures. Although the current Internet has been phenomenally successful, its foundational network is weakened by the limitations of the IPv4 protocol, such as address space exhaustion, lack of quality of service, and security vulnerabilities. To address these issues, a transition to the Next-Generation Internet is necessary; the Internet Engineering Task Force (IETF) [2] has been proposing IPv6 as a solution since the early 1990s. IPv6 was first standardized in 1995, and in 1996 the IETF established a free test bed called 6Bone [3] to evaluate IPv6 before the test bed was phased out in 2006. IPv6 provides numerous technical benefits, such as a larger address space, enhanced efficiency, improved security, and better quality of service [4].
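To make the address-space comparison concrete, the short Python sketch below (illustrative only, not part of the chapter) uses the standard ipaddress module to contrast the 32-bit IPv4 space with the 128-bit IPv6 space; the example addresses 192.0.2.1 and 2001:db8::1 are reserved documentation addresses chosen purely for illustration.

    # Illustrative sketch: comparing the sizes of the IPv4 and IPv6 address spaces.
    import ipaddress

    ipv4_total = 2 ** 32          # roughly 4.3 billion addresses
    ipv6_total = 2 ** 128         # roughly 3.4 * 10**38 addresses

    print(f"IPv4 addresses: {ipv4_total:,}")
    print(f"IPv6 addresses: {ipv6_total:,}")
    print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")

    # Parsing shows the structural difference: a 32-bit dotted quad versus
    # 128-bit colon-separated hexadecimal groups.
    v4 = ipaddress.ip_address("192.0.2.1")
    v6 = ipaddress.ip_address("2001:db8::1")
    print(v4.version, v4.max_prefixlen)   # 4 32
    print(v6.version, v6.max_prefixlen)   # 6 128

Running the sketch shows the IPv6 space exceeding IPv4 by a factor of 2^96, which is the quantitative basis for the "larger address space" benefit cited above.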

The conventional network framework also entails significant cost implications. The expense of procuring network equipment, including routers, switches, and other networking devices, is often exorbitant, which makes it difficult for organizations to justify upgrading or replacing their existing infrastructure. Moreover, managing the complexity of a large network can be expensive in terms of personnel and training costs. Furthermore, traditional networks typically require dedicated hardware and software for specific tasks such as load balancing or security, which further escalates costs. In contrast, the emerging software-defined networking (SDN) and network function virtualization (NFV) technologies provide more flexible and cost-effective alternatives to traditional network architectures. In a traditional network, the control and data planes are both embedded within a single device. One of the major problems with the traditional architecture is that it is a closed network architecture [5]. Manufacturers of network equipment have their own hardware, operating systems, implementations of standards, and extended ranges of functions. This means that timely testing and verification of new techniques is not possible, because the system is accessible only to the manufacturer. In a conventional network, switches host both the controller software and the forwarding element, and software instructions simply impose the rules that dictate how the forwarding component behaves. Such networks are intricate and challenging to control [6]. When the data plane (the forwarding devices) and the control plane (the part that regulates network traffic) are combined into a single network device, the device becomes more complex. This architecture decreases flexibility and eliminates the opportunity for innovation, preventing the advancement of network infrastructure.
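As a purely illustrative aid, the following Python toy sketches the separation that SDN introduces: a central controller holds the network-wide view and installs match-action rules, while the switches' data plane does nothing but look up those rules. The class and method names are invented for this sketch and do not correspond to a real SDN API such as OpenFlow.

    # Toy model of control/data plane separation (hypothetical names, not a real SDN API).

    class Switch:
        """Data plane only: forwards packets by consulting an installed flow table."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}          # destination address -> output port

        def install_rule(self, dst, out_port):
            self.flow_table[dst] = out_port

        def forward(self, packet):
            port = self.flow_table.get(packet["dst"])
            return f"{self.name}: sent to port {port}" if port is not None else f"{self.name}: dropped"

    class Controller:
        """Control plane: holds the global view and pushes rules to every switch."""
        def __init__(self, switches):
            self.switches = switches

        def program_route(self, dst, ports_by_switch):
            for sw in self.switches:
                sw.install_rule(dst, ports_by_switch[sw.name])

    # Usage: the controller, not the individual switches, decides how 10.0.0.5 is reached.
    s1, s2 = Switch("s1"), Switch("s2")
    ctrl = Controller([s1, s2])
    ctrl.program_route("10.0.0.5", {"s1": 2, "s2": 1})
    print(s1.forward({"dst": "10.0.0.5"}))   # s1: sent to port 2

In the traditional architecture described above, the logic in Controller would live inside each vendor's closed switch; separating it is what allows new control-plane techniques to be tested without replacing the forwarding hardware.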

A distributed system is a collection of independent computing elements that appears to its users as a single coherent system. Distributed systems are characterized by geographically dispersed computers interconnected by wired, wireless, or hybrid networks [7]. The size of a distributed system can range from a few devices to a vast network of millions of computers. These systems are highly dynamic, with computers frequently joining and leaving, causing continuous changes to the network topology and performance [8]. Building and maintaining distributed systems can be challenging, as there are several limitations and trade-offs to consider. Users may be located in various parts of the world, which can introduce issues with network latency and communication. It is difficult to completely prevent failures of networks and nodes, and distinguishing between slow and failed computers can be tricky. It is also hard to be certain that a server performed an operation before a crash, and achieving full transparency in a system can come at the cost of performance. Keeping web caches up to date can be difficult, as can immediately flushing write operations to disk for fault tolerance. Despite these challenges, it is important to consider all of these factors when designing and building distributed systems to ensure that they are dependable and efficient. The current internet is a patchwork of different technologies, standards, and protocols, which can make it difficult to integrate new applications and services. The next generation of the internet will require greater interoperability between different systems and devices, as well as more open and standardized protocols for data exchange and communication. Achieving it will take a collaborative effort between organizations and individuals around the world, along with significant investment in research, development, and infrastructure. To effectively construct the next iteration of the Internet, a comprehensive outlook is required, encompassing advancements in the physical and transmission layers of both wired and wireless media, as well as new switching and routing paradigms at the device and sub-system layers. Additionally, it is necessary to consider alternative approaches to TCP and UDP for data transport in emerging environments, as well as novel models and theoretical frameworks for understanding network complexity [9].
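The difficulty of telling a slow node from a failed one can be illustrated with a minimal Python sketch of a timeout-based heartbeat monitor; the 2-second threshold and the node names are arbitrary choices for this example. The monitor can only suspect a node, because a crashed node and a merely slow one look identical from the outside.

    # Minimal heartbeat monitor: a timeout can only *suspect* failure, never prove it.
    import time

    HEARTBEAT_TIMEOUT = 2.0   # seconds; an assumed threshold, tuned per deployment

    class HeartbeatMonitor:
        def __init__(self):
            self.last_seen = {}                    # node id -> time of last heartbeat

        def record_heartbeat(self, node_id):
            self.last_seen[node_id] = time.monotonic()

        def suspected_failed(self):
            now = time.monotonic()
            return [n for n, t in self.last_seen.items()
                    if now - t > HEARTBEAT_TIMEOUT]

    # Usage: node "b" stops sending heartbeats -- or is simply slow; the monitor cannot tell.
    monitor = HeartbeatMonitor()
    monitor.record_heartbeat("a")
    monitor.record_heartbeat("b")
    time.sleep(0.1)
    monitor.record_heartbeat("a")              # "a" keeps reporting in
    print(monitor.suspected_failed())          # [] now; ["b"] once 2 s pass with no heartbeat from "b"

This is why practical systems pair such detectors with retries, quorum decisions, or consensus protocols rather than acting on a single missed heartbeat.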
1.2 Traditional Network

To better understand the next-generation internet, it is useful to first outline the current-generation internet, that is, the traditional network and its architecture. In a traditional network architecture, a single device, such as a switch or router, is responsible for integrating both the control plane and data plane functions [10]. The...