Computer Networks and Distributed Systems
Fatemeh Davami; Sahar Adabi; Ali Rezaee; Amir Masoud Rahmani
Volume 7, Issue 2, May 2021, Pages 126-136
Abstract
Over the last ten years, Cloud data centers have emerged as the crucial computing architectures for enabling extreme-scale data workflows. Due to the complexity and diversity of computational resources such as Fog nodes and Cloud servers, workflow scheduling has become the main challenge in Cloud and Fog computing environments. To resolve this issue, the present study offers a scheduling algorithm based on critical path extraction, referred to as the Critical Path Extraction Algorithm (CPEA). It is a new multi-criteria decision-making algorithm for extracting the critical paths of multiple workflows, since finding the critical path is of high importance in creating and controlling the schedule. Moreover, an extensive software simulation study has been performed to compare the new algorithm with a recent algorithm, GRP-HEFT, on real workloads. The experimental results confirm the proposed algorithm's superiority in make-span and waiting time, and that workflow scheduling based on CPEA further improves both.
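The abstract does not give CPEA's multi-criteria details, but the core operation it builds on — extracting the critical (longest) path of a workflow DAG — can be sketched with standard dynamic programming over a topological order. Task names and durations below are invented for the demonstration.

```python
# Sketch: critical-path extraction from a workflow DAG.
# The critical path is the longest chain of dependent tasks by total duration.
from collections import defaultdict

def critical_path(durations, edges):
    """Return (length, path) of the longest path in a task DAG."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    order, queue = [], [t for t in durations if indeg[t] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # longest finish time ending at each task, with predecessor links
    finish = {t: durations[t] for t in durations}
    prev = {t: None for t in durations}
    for u in order:
        for v in succ[u]:
            if finish[u] + durations[v] > finish[v]:
                finish[v] = finish[u] + durations[v]
                prev[v] = u
    end = max(finish, key=finish.get)
    path = []
    while end is not None:
        path.append(end)
        end = prev[end]
    return finish[path[0]], path[::-1]

durations = {"A": 3, "B": 2, "C": 4, "D": 1}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
length, path = critical_path(durations, edges)
print(length, path)  # 8 ['A', 'C', 'D']
```

Scheduling then prioritizes tasks on this path, since any delay along it delays the whole workflow's make-span.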
Alireza Enami; Javad Akbari Torkestani
Volume 7, Issue 1, February 2021, Pages 19-34
Abstract
Fog computing is seen as a bridge between smart IoT devices and large-scale cloud computing, making it possible to extend cloud computing services to network edge devices. As one of the most important services of the system, resource allocation should always be available to achieve the goals of Fog computing. Resource allocation is the process of distributing limited available resources among applications based on predefined rules. Because the problems raised in resource management are NP-hard, and due to the complexity of resource allocation, heuristic algorithms are promising methods for solving the resource allocation problem. In this paper, an algorithm based on learning automata is proposed to solve this problem, which uses two learning automata: one associated with applications (LAAPP) and the other with Fog nodes (LAN). In this method, an application is selected from the action set of LAAPP and then a Fog node is selected from the action set of LAN. If the deadline, response time and resource requirements are met, the resource is allocated to the application. The efficiency of the proposed algorithm is evaluated through several simulation experiments under different Fog configurations. The obtained results are compared with several existing methods in terms of makespan, average response time, load balancing and throughput.
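The two-automaton selection loop can be sketched with the classic linear reward-inaction (L_RI) update rule, a common choice for learning automata. The environment feedback (the deadline/response-time/resource check) is stubbed out here with an illustrative `fits` function; the paper's actual reward scheme is not reproduced.

```python
# Minimal sketch of two cooperating learning automata (L_RI update rule).
import random

class LearningAutomaton:
    def __init__(self, n_actions, a=0.1):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.a = a                              # reward step size

    def choose(self):
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def reward(self, i):
        # L_RI: shift probability mass toward the rewarded action;
        # the probability vector stays normalized.
        for j in range(len(self.p)):
            if j == i:
                self.p[j] += self.a * (1.0 - self.p[j])
            else:
                self.p[j] *= (1.0 - self.a)

random.seed(0)
la_app = LearningAutomaton(n_actions=3)   # one action per application
la_node = LearningAutomaton(n_actions=2)  # one action per Fog node

def fits(app, node):
    # stand-in for the deadline / response-time / resource checks
    return (app + node) % 2 == 0

for _ in range(200):
    app, node = la_app.choose(), la_node.choose()
    if fits(app, node):          # allocation succeeds -> reward both automata
        la_app.reward(app)
        la_node.reward(node)

print(max(la_app.p), sum(la_app.p))
```

Over many rounds the probability mass concentrates on application/node pairs that repeatedly satisfy the constraints.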
Yaser Ramzanpoor; Mirsaeid Hosseini Shirvani; Mehdi GolSorkhTabar
Volume 7, Issue 1, February 2021, Pages 67-80
Abstract
Fog computing is a new computing technology that covers cloud computing's shortcomings in terms of delay. It has the potential to run IoT applications composed of multiple services, taking advantage of the closeness of fog nodes to the devices where the data are sensed. This article formulates the service placement issue as an optimization problem aiming to minimize total power consumption. It considers resource utilization and traffic transmission between different services as the two prominent factors of power consumption once services are placed on different fog nodes. On the other hand, placing all of the services on a single fog node to reduce power lowers system reliability because of the single-point-of-failure phenomenon. In the proposed optimization model, reliability limitations are considered as constraints of the stated problem. To solve this combinatorial problem, an energy-aware reliable service placement algorithm based on the whale optimization algorithm (ER-SPA-WOA) is proposed. The suggested algorithm was validated in different circumstances, and the simulation results demonstrate its dominance in comparison with state-of-the-art counterparts.
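The two power factors and the reliability constraint named above can be combined into a single fitness function of the kind a metaheuristic such as WOA would minimize. The coefficients, the per-node service cap, and the penalty form below are assumptions for illustration, not the paper's exact model.

```python
# Illustrative placement fitness: utilization power + inter-node traffic
# power, with a penalty when too many services share one fog node
# (the single-point-of-failure constraint).
def placement_cost(placement, cpu_demand, traffic, p_util=1.0, p_link=0.5,
                   max_per_node=2, penalty=1e6):
    # power from CPU utilization on each used node
    load = {}
    for svc, node in placement.items():
        load[node] = load.get(node, 0.0) + cpu_demand[svc]
    cost = p_util * sum(load.values())
    # power from traffic between services placed on different nodes
    for (s1, s2), vol in traffic.items():
        if placement[s1] != placement[s2]:
            cost += p_link * vol
    # reliability constraint: cap the number of services per node
    counts = {}
    for node in placement.values():
        counts[node] = counts.get(node, 0) + 1
    if any(c > max_per_node for c in counts.values()):
        cost += penalty
    return cost

placement = {"s1": 0, "s2": 0, "s3": 1}
cpu = {"s1": 10, "s2": 20, "s3": 15}
traffic = {("s1", "s2"): 4, ("s2", "s3"): 6}
print(placement_cost(placement, cpu, traffic))  # 45 + 0.5*6 = 48.0
```

A WOA-style search would then mutate candidate `placement` maps and keep the lowest-cost feasible one.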
Derdus Kenga; Vincent Oteke Omwenga; Patrick Job Ogao
Volume 6, Issue 3, August 2020, Pages 145-154
Abstract
The ability to measure the energy consumed by cloud infrastructure is a crucial step towards the development of energy efficiency policies for it. There are hardware-based and software-based methods of measuring energy usage in cloud infrastructure. However, most hardware-based methods measure the energy consumed system-wide, including the energy lost in transit. In an environment such as the cloud, where energy consumption can result from different components, it is important to isolate the energy consumed by executing application workloads. This information can be crucial in making decisions such as workload consolidation. In this paper, we propose an experimental approach for measuring power consumption resulting from executing application workloads in an IaaS cloud, based on Intel's Running Average Power Limit (RAPL) interface. Application workloads are obtained from the Phoronix Test Suite (PTS)'s 7zip and aio-stress benchmarks. To demonstrate the feasibility of this approach, we describe how it can be used to study the effect of workload consolidation on CPU and I/O power performance by varying the number of Virtual Machines (VMs). Power is measured in watts, CPU performance in Million Instructions per Second (MIPS), and I/O performance (for data-intensive processing) in MB/s. Our results on the effect of workload consolidation were compared with previous research and found to be consistent, which shows that the proposed method of measuring power consumption is accurate.
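The RAPL interface exposes a monotonically increasing energy counter (in microjoules); average power over an interval is the counter delta divided by the interval. A minimal sketch follows — the sysfs path is the standard Linux powercap location, but reading it requires a Linux host with RAPL support, so the computation is factored out and demonstrated on sample values. The wraparound default is an assumed parameter (the real bound is reported by `max_energy_range_uj`).

```python
# Sketch: RAPL-style average power from two energy-counter samples.
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj(path=RAPL_PATH):
    # reads the cumulative package energy counter in microjoules
    with open(path) as f:
        return int(f.read())

def average_power_watts(e_start_uj, e_end_uj, interval_s, wraparound_uj=2**32):
    delta = e_end_uj - e_start_uj
    if delta < 0:                      # counter wrapped during the interval
        delta += wraparound_uj
    return delta / 1e6 / interval_s    # microjoules -> joules -> watts

# e.g. 15 J consumed over a 1.5 s window -> 10 W
print(average_power_watts(1_000_000, 16_000_000, 1.5))  # 10.0
```

On a live system one would call `read_energy_uj()` before and after the workload interval and pass both samples to `average_power_watts`.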
Scholastica Nwanneka Mallo; Francisca Nonyelum Ogwueleka
Volume 5, Issue 3, August 2019, Pages 169-180
Abstract
Cloud computing technology is providing businesses, be they micro, small, medium, or large-scale enterprises, with the same level playing ground. Small and Medium Enterprises (SMEs) that have adopted the cloud are taking their businesses to greater heights with the competitive edge that cloud computing offers. The limitations SMEs face in procuring and maintaining IT infrastructure are handled on the cloud platform for the SMEs that adopt it. In this research, the impact and challenges of cloud computing on SMEs that have adopted it in Nigeria have been investigated. The impacts identified range from provisioning IT infrastructure to reshaping and extending business value and outreach to giving a competitive edge to subscribed businesses. Though cloud computing has many benefits, it is not without pitfalls, including data vulnerability, vendor lock-in, and subscribers' limited control over the infrastructure. To investigate the level of impact and the challenges faced by SMEs in Nigeria on the cloud platform, questionnaires were administered to managers and employees of about fifty SMEs that have deployed the cloud. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS), from which appropriate recommendations were made.
Keywords: Cloud Computing, Impacts, Challenges, SME.
Marzieh Bozorgi Elize; Ahmad KhademZadeh
Volume 3, Issue 4, November 2017, Pages 203-212
Abstract
Cloud computing is a result of continuing progress in hardware, Internet-related technologies, distributed computing and automated management. Increasing demand has led to an increase in services and the establishment of large-scale computing and data centers, with high operating costs and huge amounts of electrical power consumption. Insufficient and inefficient cooling systems cause resources to overheat, shorten machine lifetimes, and produce excessive carbon dioxide. In this paper, we aim to improve the performance of cloud computing systems by decreasing migration among virtual machines (VMs) and reducing energy consumption, so that resources are managed for optimal energy efficiency. To this end, various techniques are used, such as genetic algorithms (GAs), virtual machine migration, dynamic voltage and frequency scaling (DVFS), and resizing of virtual machines, to reduce energy consumption and provide fault tolerance. The main purpose of this article is the allocation of resources with the aim of reducing energy consumption in cloud computing. The results show reduced energy consumption, a lower rate of service-level violations by virtual machines, and reduced migration.
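Of the techniques listed, DVFS has a simple analytical core worth illustrating: dynamic CPU power scales roughly as P = C·V²·f, and since the feasible voltage falls with frequency, running slower saves energy superlinearly. The constants below are illustrative, not from any real processor.

```python
# Sketch of the DVFS power model: dynamic power = C_eff * V^2 * f.
def dynamic_power(c_eff, voltage, freq_hz):
    return c_eff * voltage ** 2 * freq_hz

full = dynamic_power(1e-9, 1.2, 2.0e9)   # full speed, higher voltage
slow = dynamic_power(1e-9, 0.9, 1.0e9)   # half frequency, lower voltage
print(full, slow, slow / full)
```

Halving frequency here cuts power to well under half, which is why DVFS is a standard lever for energy-aware resource management.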
Ghazaal Emadi; Amir Masoud Rahmani; Hamed Shahhoseini
Volume 3, Issue 3, August 2017, Pages 135-144
Abstract
Cloud computing is considered a computational model which provides users' requests with resources upon demand. The need to plan the scheduling of users' jobs has emerged as an important challenge in the field of cloud computing, mainly due to several reasons: the ever-increasing advancement of information technology, the increase in applications and in user demand for high-quality applications, and the popularity of cloud computing among users and its rapid growth in recent years. This research applies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an evolutionary optimization algorithm, to task scheduling in the cloud computing environment. The findings indicate that the presented algorithm reduces the execution time of all tasks compared to the SPT, LPT, and RLPT algorithms.
Keywords: Cloud Computing, Task Scheduling, Virtual Machines (VMs), Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
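Since CMA-ES searches over real-valued vectors, applying it to a discrete scheduling problem requires a decode step: each candidate vector has one gene per task, each gene is mapped to a VM index, and fitness is the resulting makespan. The mapping, task lengths, and VM speeds below are invented for the demonstration; the paper's exact encoding is not given in the abstract.

```python
# Sketch: decode a real-valued CMA-ES candidate into a task->VM assignment
# and evaluate its makespan (finish time of the busiest VM).
def makespan(candidate, task_lengths, vm_speeds):
    n_vms = len(vm_speeds)
    busy = [0.0] * n_vms
    for gene, length in zip(candidate, task_lengths):
        vm = int(abs(gene)) % n_vms      # map a real gene to a VM index
        busy[vm] += length / vm_speeds[vm]
    return max(busy)

tasks = [100, 200, 300, 400]   # instructions per task
speeds = [10, 20]              # MIPS per VM
print(makespan([0.3, 1.7, 0.9, 1.1], tasks, speeds))  # 40.0
```

CMA-ES would then adapt its sampling distribution to generate candidate vectors with ever lower makespan.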
Hadi Moei Emamqeysi; Nasim Soltani; Masomeh Robati; Mohamad Davarpanah
Volume 3, Issue 3, August 2017, Pages 173-180
Abstract
The management and allocation of resources in cloud computing environments is a complicated issue, given the breadth of scale and modern technology implementations. It involves issues such as the heterogeneity of resources, dependencies among resources, the dynamics of the environment, virtualization, and workload diversity, as well as the wide range of management objectives of cloud service providers offering services in this environment. In this paper, the cloud computing environment and related issues are first described. Based on the studies performed, challenges are addressed such as the absence of comprehensive resource management in the cloud environment, methods for predicting the resource allocation process, optimal resource allocation methods that reduce energy consumption and the time to access resources, and the implementation of dynamic resource allocation methods in mobile cloud environments. Finally, with regard to these challenges, recommendations to improve the process of resource allocation in a cloud computing environment are proposed.
Bahareh Rahmati; Amir Masoud Rahmani; Ali Rezaei
Volume 3, Issue 2, May 2017, Pages 75-80
Abstract
High-performance computing and vast storage are two key factors required for executing data-intensive applications. In comparison with traditional distributed systems like data grids, cloud computing provides these factors in a more affordable, scalable and elastic platform. Furthermore, accessing data files is critical for performing such applications; sometimes data access becomes a bottleneck for the whole cloud workflow system and dramatically decreases its performance. Job scheduling and data replication are two important techniques which can enhance the performance of data-intensive applications, and it is wise to integrate them into one framework to achieve a single objective. In this paper, we integrate data replication and job scheduling with the aim of reducing response time through a reduction of data access time in the cloud computing environment; this is called data replication-based scheduling (DRBS). Simulation results show the effectiveness of our algorithm in comparison with well-known algorithms such as random and round-robin.
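The integration idea can be sketched as a toy scheduler: each job is dispatched to the node that already holds the most of its input files (minimizing remote access), and any missing files are replicated to that node so later jobs find them locally. The data structures and tie-breaking are invented for illustration; DRBS's actual cost model is not reproduced here.

```python
# Toy sketch of data replication-based scheduling (DRBS-style):
# co-locate jobs with their data, replicating missing files on the fly.
def drbs_schedule(jobs, replicas):
    """jobs: {job: set(files)}; replicas: {node: set(files)} (mutated)."""
    assignment = {}
    for job, files in jobs.items():
        # pick the node with the most input files already local
        node = max(replicas, key=lambda n: len(files & replicas[n]))
        assignment[job] = node
        replicas[node] |= files        # replicate whatever was missing
    return assignment

replicas = {"n1": {"f1", "f2"}, "n2": {"f3"}}
jobs = {"j1": {"f1", "f2"}, "j2": {"f3"}, "j3": {"f3", "f4"}}
print(drbs_schedule(jobs, replicas))  # {'j1': 'n1', 'j2': 'n2', 'j3': 'n2'}
```

Note how j3 lands on n2 because f3 is already there, and f4 gets replicated to n2 as a side effect — scheduling and replication reinforcing each other.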
Seyed Hossein Ahmadpanah; Rozita Jamili Oskouei; Abdullah Jafari Chashmi
Volume 3, Issue 2, May 2017, Pages 89-106
Abstract
Peer-to-peer (P2P) applications are no longer limited to home users and are starting to be accepted in academic and corporate environments. While file sharing and instant messaging applications are the most traditional examples, they are no longer the only ones benefiting from the potential advantages of P2P networks; network file storage, data transmission, distributed computing, and collaboration systems have also taken advantage of such networks. Among the reasons this model of computing is attractive is that P2P networks are scalable, i.e., they deal well (efficiently) both with small groups and with large groups of participants. In this paper, we present a summary of the main safety aspects to be considered in P2P networks, highlighting their importance for the development of P2P applications and systems on the Internet and for the deployment of enterprise applications with more critical security needs.
Minoo Soltanshahi
Volume 2, Issue 3, August 2016, Pages 9-14
Abstract
Cloud computing is the latest technology that involves distributed computation over the Internet. It meets users' needs through resource sharing and virtualization technology. Workflow user applications refer to sets of tasks to be processed within the cloud environment. Scheduling algorithms have a lot to do with the efficiency of cloud computing environments through the selection of suitable resources and the assignment of workflows to them. Given the factors affecting their efficiency, these algorithms try to use resources optimally and increase the efficiency of this environment. The palbimm algorithm provides a scheduling method that meets the majority of the requirements of this environment and its users. In this article, we improve the efficiency of the algorithm by adding fault tolerance capability to it. Since this capability runs in parallel with task scheduling, it has no negative impact on the makespan, as supported by simulation results in the CloudSim environment.
Kobra Bagheri; Mehran Mohsenzadeh
Volume 2, Issue 3, August 2016, Pages 27-34
Abstract
Data grids are an important branch of grid computing which provide mechanisms for the management of large volumes of distributed data. Energy efficiency has recently emerged as a hot topic in large distributed systems. The development of computing systems has traditionally focused on performance improvements driven by the demand of client applications in scientific and business domains. High energy consumption in computer systems limits their deployment because of increased carbon dioxide emissions and electricity bills; thus, the goal of computer systems design has shifted to power and energy efficiency. Data grids can support large-scale applications that require large amounts of data. Data replication is a common solution to improve availability and file access time in such environments; it replicates data files at many different sites. In this paper, a new data replication method is proposed that is not only data-aware, but also energy-efficient. Simulation results with CloudSim show that the proposed method gives better energy consumption, average response time, and network usage than other algorithms and prevents the unnecessary creation of replicas, which leads to efficient storage usage.
Ali Abbasi; Amir Masoud Rahmani; Esmaeil Zeinali Khasraghi
Volume 1, Issue 4, November 2015, Pages 1-14
Abstract
One of the important problems in grid environments is data replication across grid sites, and the reliability and availability of data replicas are in some cases low. Clustering can be used to separate sites with high reliability and availability from sites with low reliability and availability. In this study, the data grid dynamically evaluates and predicts the condition of the sites: the reliability and availability of each site are calculated and used to make data replication decisions. With these calculations, we have information on the grid sites whose reliability, availability, or cost is commensurate with the value of the work users do, and data can be delivered to users from sites with suitable reliability and availability. Simulation results show that adding the two parameters, reliability and availability, to the assessment criteria improves performance for certain access patterns.
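The site-scoring step described above can be sketched simply: availability from uptime statistics, reliability from the fraction of successful transfers, and a threshold split into "high" and "low" clusters. The thresholds and the two-cluster rule are illustrative assumptions, not the study's exact formulas.

```python
# Sketch: score grid sites on availability and reliability, then cluster.
def site_scores(uptime_h, downtime_h, ok_transfers, total_transfers):
    availability = uptime_h / (uptime_h + downtime_h)
    reliability = ok_transfers / total_transfers if total_transfers else 0.0
    return availability, reliability

def cluster_sites(sites, min_avail=0.9, min_rel=0.9):
    high, low = [], []
    for name, stats in sites.items():
        a, r = site_scores(*stats)
        (high if a >= min_avail and r >= min_rel else low).append(name)
    return high, low

sites = {"s1": (990, 10, 98, 100),   # 0.99 availability, 0.98 reliability
         "s2": (700, 300, 95, 100)}  # 0.70 availability -> low cluster
print(cluster_sites(sites))  # (['s1'], ['s2'])
```

Replicas of valuable data would then be preferentially placed on the high cluster.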
Somayeh Taherian Dehkordi; Vahid Khatibi Bardsiri
Volume 1, Issue 4, November 2015, Pages 25-32
Abstract
Cloud computing refers to services that run in a distributed network and are accessible through common Internet protocols. It merges many physical resources and offers them to users as services according to a service level agreement. Therefore, resource management together with task scheduling has a direct influence on cloud networks' performance and efficiency, and a proper scheduling method can improve resource efficiency by decreasing response time and costs. This paper studies the existing approaches to task scheduling and resource allocation in cloud infrastructures and assesses their advantages and disadvantages. Afterwards, a compound algorithm is presented to allocate tasks to resources properly and decrease runtime. The proposed algorithm combines the Min-min and Sufferage algorithms: task allocation between machines takes place alternately, continuously switching between the two scheduling algorithms. The main idea of the proposed algorithm is to concentrate on the number of tasks instead of the existing resources. The simulation results reveal that the proposed algorithm achieves higher performance in decreasing response time.
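The compound idea can be sketched compactly: schedule tasks in rounds, alternating between the Min-min rule (pick the task with the smallest minimum completion time) and the Sufferage rule (pick the task that would suffer most if denied its best machine). The expected-time-to-compute (ETC) values below are invented; the paper's switching condition may differ from simple alternation.

```python
# Sketch: alternating Min-min / Sufferage scheduling over an ETC matrix.
def completion(etc, ready, task, m):
    return ready[m] + etc[task][m]

def pick_minmin(etc, ready, tasks):
    return min(tasks, key=lambda t: min(completion(etc, ready, t, m)
                                        for m in range(len(ready))))

def pick_sufferage(etc, ready, tasks):
    def suff(t):
        cs = sorted(completion(etc, ready, t, m) for m in range(len(ready)))
        return cs[1] - cs[0] if len(cs) > 1 else cs[0]
    return max(tasks, key=suff)

def compound_schedule(etc, n_machines):
    ready = [0.0] * n_machines
    tasks = set(range(len(etc)))
    order = []
    use_minmin = True
    while tasks:
        t = (pick_minmin if use_minmin else pick_sufferage)(etc, ready, tasks)
        m = min(range(n_machines), key=lambda m: completion(etc, ready, t, m))
        ready[m] += etc[t][m]          # assign t to its best machine
        order.append((t, m))
        tasks.remove(t)
        use_minmin = not use_minmin    # alternate the heuristic each round
    return order, max(ready)

etc = [[4, 6], [3, 8], [5, 5]]   # ETC[task][machine]
order, ms = compound_schedule(etc, 2)
print(order, ms)  # [(1, 0), (2, 1), (0, 0)] 7.0
```

Alternation lets Sufferage rescue tasks that Min-min would starve, while Min-min keeps the greedy completion-time focus.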
Somayeh Taherian Dehkordi; Vahid Khatibi Bardsiri
Volume 1, Issue 3, August 2015, Pages 17-22
Abstract
Since software systems play a more important role in applications than ever, security has become one of the most important indicators of software quality. Cloud computing refers to services that run in a distributed network and are accessible through common Internet protocols. Presenting a proper scheduling method can improve resource efficiency by decreasing response time and costs. This research studies the existing approaches to task scheduling and resource allocation in cloud infrastructures and assesses their advantages and disadvantages. Afterwards, a compound algorithm is presented to allocate tasks to resources properly and decrease runtime. In this paper we propose a new method for task scheduling based on learning automata (LA). This method, named RAOLA, is trained on historical information about task execution in the cloud; it then divides tasks into classes and evaluates them. Next, it manages virtual machines to capture physical resources in each period based on the rate of each task class, thereby improving the efficiency of the cloud network.
Seyedeh Roudabeh Hosseini; Sepideh Adabi; Reza Tavoli
Volume 1, Issue 3, August 2015, Pages 23-32
Abstract
Migration of Virtual Machines (VMs) is a critical challenge in cloud computing. The process of moving VMs or applications from one Physical Machine (PM) to another is known as VM migration. Several issues should be considered in VM migration, a major one being the selection of an appropriate PM as the destination for a migrating VM. To face this issue, several approaches have been proposed that rank potential destination PMs by addressing migration objectives. In this paper we propose a new hierarchical fuzzy logic system for ranking potential destination PMs for a migrating VM by considering the following parameters: performance efficiency, communication cost between VMs, power consumption, workload, temperature efficiency and availability. By using hierarchical fuzzy logic systems that jointly consider these six parameters, which play a great role in ranking potential destination PMs, the accuracy of the PM ranking approach is increased; furthermore, the number of fuzzy rules in the system is reduced, thereby reducing the computational time (which is critical in a cloud environment). In our experiments, we compare our proposed approach, named HFLSRPM (Hierarchical Fuzzy Logic Structure for Ranking potential destination PMs for a migrating VM), with the AppAware algorithm in terms of communication cost and performance efficiency. The results demonstrate that by considering more effective parameters in the proposed PM ranking approach, HFLSRPM outperforms the AppAware algorithm.
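The rule-reduction benefit of the hierarchy can be made concrete with a small counting sketch: a flat fuzzy system over all six inputs needs one rule per combination of membership functions, while splitting the inputs into subsystems whose outputs feed a top-level system multiplies far smaller rule bases. The grouping and the number of membership functions per variable below are assumed example values, not the paper's configuration.

```python
# Sketch: fuzzy rule counts for flat vs. hierarchical structures.
def flat_rules(n_inputs, m):
    # one rule per combination of membership functions across all inputs
    return m ** n_inputs

def hierarchical_rules(groups, m):
    # one subsystem per input group, plus a top-level system that
    # combines the subsystem outputs
    return sum(m ** g for g in groups) + m ** len(groups)

m = 3  # membership functions per variable (assumed)
print(flat_rules(6, m), hierarchical_rules([3, 3], m))  # 729 63
```

Going from 729 rules to 63 for the same six inputs is the computational-time saving the abstract refers to.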
Mahdi Sattarivand
Volume 1, Issue 2, May 2015, Pages 9-14
Abstract
Peer-to-Peer (P2P) systems have been the center of attention in recent years due to their advantages. Since each node in such networks can act both as a service provider and as a client, they are subject to different attacks. Therefore it is vital to manage trust in these vulnerable environments in order to eliminate unsafe peers. This paper investigates the use of genetic programming for establishing the trustworthiness of a peer without central monitoring. A trust management model is proposed in which every peer ranks other peers according to locally calculated trust values based on recommendations and previous interactions. The results show that this model identifies malicious nodes without the use of a central supervisor or a global trust value, and thus keeps the system functioning.
Index Terms: peer-to-peer systems, trust, genetic programming, malicious nodes.
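Local trust computation of the kind described above can be sketched as a weighted combination of a peer's own interaction history and credibility-weighted recommendations from neighbours. The weighting `alpha` and the 0/1 satisfaction encoding are assumptions for illustration; the paper evolves the combination with genetic programming, which is not reproduced here.

```python
# Sketch: local trust = direct experience blended with weighted recommendations.
def local_trust(interactions, recommendations, alpha=0.7):
    """interactions: list of 1 (satisfactory) / 0 (unsatisfactory) outcomes;
    recommendations: {recommender: (its_trust_in_target, our_trust_in_it)}."""
    direct = sum(interactions) / len(interactions) if interactions else 0.0
    if recommendations:
        weighted = sum(score * credibility
                       for score, credibility in recommendations.values())
        total_cred = sum(c for _, c in recommendations.values())
        indirect = weighted / total_cred if total_cred else 0.0
    else:
        indirect = 0.0
    return alpha * direct + (1 - alpha) * indirect

# 4 good / 1 bad interaction, plus two recommenders of differing credibility
recs = {"peerA": (0.9, 0.8), "peerB": (0.2, 0.4)}
print(round(local_trust([1, 1, 1, 1, 0], recs), 3))  # 0.76
```

Because each peer maintains only its own scores, no central supervisor or global trust value is needed — peers simply refuse to interact with neighbours whose local trust falls below a threshold.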