Computer Networks and Distributed Systems
Sara Mohammadi; Parvaneh Asghari; Amir Masoud Rahmani
Articles in Press, Accepted Manuscript, Available Online from 02 December 2022
Abstract
As an emerging technology, cloud computing plays a key role in making systems more efficient and in improving the Internet of Things. One of the significant challenges in fog computing is trust management, given the processing, storage, and network constraints of fog devices. This study proposes a multi-objective imperialist competitive optimization algorithm to increase trust and decrease response time in fog environments. After trust, delay, and accuracy are formulated, the multi-objective imperialist competitive optimization algorithm is developed and evaluated for fog server selection. Evaluations show that the proposed method is more efficient than other algorithms and performs well in terms of accuracy, delay, and trust.
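The sketch below is a minimal, illustrative take on the selection step, not the authors' implementation: it scalarizes trust, delay, and accuracy into a single cost (an assumption; the paper treats them as separate objectives) and runs a heavily simplified imperialist competitive search over candidate fog server indices. The server data, weights, and parameters are invented for demonstration.

```python
import random

# Illustrative candidate fog servers: (trust in [0,1], delay in ms, accuracy in [0,1]).
SERVERS = [(random.random(), random.uniform(5, 50), random.random()) for _ in range(20)]

def cost(idx, w_trust=0.4, w_delay=0.3, w_acc=0.3):
    """Scalarized cost: lower is better. Trust and accuracy are rewarded,
    delay is penalized (normalized with an assumed 50 ms upper bound)."""
    trust, delay, acc = SERVERS[idx]
    return w_trust * (1 - trust) + w_delay * (delay / 50.0) + w_acc * (1 - acc)

def ica_select(n_countries=12, n_imperialists=3, iterations=30, beta=0.5):
    """Simplified imperialist competitive search over server indices."""
    countries = [random.randrange(len(SERVERS)) for _ in range(n_countries)]
    for _ in range(iterations):
        countries.sort(key=cost)                       # lowest cost = strongest countries
        imperialists, colonies = countries[:n_imperialists], countries[n_imperialists:]
        new_colonies = []
        for col in colonies:
            imp = random.choice(imperialists)
            # Assimilation: move the colony toward its imperialist's server index.
            moved = int(round(col + beta * random.random() * (imp - col)))
            moved = max(0, min(len(SERVERS) - 1, moved))
            # Keep the move only if it actually improved the cost.
            new_colonies.append(moved if cost(moved) < cost(col) else col)
        countries = imperialists + new_colonies
    return min(countries, key=cost)

best = ica_select()
print("selected fog server:", best, "trust/delay/accuracy:", SERVERS[best])
```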
Computer Networks and Distributed Systems
Touraj BaniRostam; Hamid BaniRostam; Mir Mohsen Pedram; Amir Masoud Rahmani
Volume 7, Issue 3, August 2021, Pages 157-166
Abstract
Several studies have been presented to address the challenges of electronic card (e-card) fraud; their two main purposes are to identify types of e-card fraud and to investigate the methods used in bank fraud detection. One of the most common ways to detect fraud is to investigate suspicious changes in user behavior. In fraud detection systems, supervised learning techniques help find anomalies by analyzing a user's behavioral history based on past transaction patterns. A challenging issue in detecting fraud is accounting for changes in customer behavior and the ability of fraudsters to devise new fraud patterns, which makes unsupervised learning techniques popular for detecting unknown and new frauds. In this paper, the concepts of fraud, the types of banking fraud along with their challenges, the different forms of fraud, and the data analysis tools banks use for early identification are examined, followed by a review of research on fraud detection. The paper aims to introduce fraud detection techniques and methods that have produced appropriate results in big data environments. Finally, the fraud detection algorithms and methods proposed in the related works presented in this paper are compared on a common dataset in terms of parameters such as speed of fraud detection, accuracy, and cost (hardware and network resources). Ensemble meta-learning can also be used on its own to build a stronger classifier; such techniques have been relatively successful in detecting fraud and reducing costs.
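As a concrete illustration of the ensemble meta-learning idea mentioned above, the following sketch stacks two base classifiers under a logistic-regression meta-learner using scikit-learn. The synthetic, imbalanced dataset and all model parameters are assumptions standing in for real transaction data; this is not the compared works' exact setup.

```python
# Minimal sketch of ensemble meta-learning (stacking) for fraud detection.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic transactions with roughly 3% fraudulent samples (illustrative only).
X, y = make_classification(n_samples=5000, n_features=15,
                           weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Base learners model past transaction behavior; a meta-learner combines them.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
print(classification_report(y_te, stack.predict(X_te), target_names=["legit", "fraud"]))
```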
Computer Networks and Distributed Systems
Elham Shamsinejad; Mir Mohsen Pedram; Amir Masoud Rahmani; Touraj BaniRostam
Volume 7, Issue 3, August 2021, Pages 187-196
Abstract
With increasing access to large amounts of data through internet-based technologies such as social networks, mobile phones, and electronic devices, many companies face the problem of handling large, varied, and fast-arriving data while maintaining data confidentiality. Confidentiality concerns and protection against the disclosure of specific data are therefore among the most challenging topics. In this paper, a variety of data anonymization methods, anonymization operators, and the attacks that can endanger data anonymity and lead to the disclosure of sensitive data in big data are investigated. Different aspects of big data, such as data sources, content format, data preparation, data processing, and common data repositories, are also discussed. Privacy attacks and countermeasure techniques such as k-anonymity, l-diversity, and t-closeness are investigated, and two main challenges of applying k-anonymity to big data are identified. The first is that confidential attributes can also act as quasi-identifier attributes, which increases the number of quasi-identifier elements and may lead to a great loss of information in achieving k-anonymity. The second is that, in big data, the unlimited number of data controllers is likely to lead to the disclosure of sensitive data through the independent publication of k-anonymous releases. Different anonymization algorithms are then presented, and finally the time complexity and space consumption of big data anonymization algorithms are compared.
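To make the k-anonymity notion concrete, the toy sketch below applies a generalization operator to two quasi-identifiers (age and zip code) and checks whether every quasi-identifier group contains at least k records. The records, the operator, and the choice of k are illustrative only, not drawn from the paper.

```python
from collections import Counter

# Toy records: (age, zip code, diagnosis); age and zip code are quasi-identifiers.
records = [(34, "10115", "flu"), (36, "10117", "flu"),
           (35, "10119", "cold"), (52, "20095", "asthma"),
           (55, "20097", "cold"), (51, "20099", "flu")]

def generalize(rec):
    """Generalization operator: coarsen age to a decade and truncate the zip code."""
    age, zipc, diag = rec
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zipc[:3] + "**", diag)

def is_k_anonymous(rows, k, qi=lambda r: r[:2]):
    """Every combination of quasi-identifier values must occur at least k times."""
    groups = Counter(qi(r) for r in rows)
    return all(count >= k for count in groups.values())

anonymized = [generalize(r) for r in records]
print(is_k_anonymous(records, k=3), is_k_anonymous(anonymized, k=3))   # False True
```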
Computer Networks and Distributed Systems
Fatemeh Davami; Sahar Adabi; Ali Rezaee; Amir Masoud Rahmani
Volume 7, Issue 2, May 2021, Pages 126-136
Abstract
Over the last ten years, Cloud data centers have emerged as the crucial computing architectures for enabling extreme-scale data workflows. Due to the complexity and diversity of computational resources such as Fog nodes and Cloud servers, workflow scheduling is a main challenge in Cloud and Fog computing environments. To address this issue, the present study offers a scheduling algorithm based on critical path extraction, referred to as the Critical Path Extraction Algorithm (CPEA). It is a new multi-criteria decision-making algorithm for extracting the critical paths of multiple workflows, since finding the critical path is of high importance for creating and controlling the schedule. Moreover, an extensive software simulation study has been performed to compare the new algorithm with a recent algorithm, GRP-HEFT, on real workloads. The experimental results confirm the proposed algorithm's superiority in terms of makespan and waiting time, and show that workflow scheduling based on CPEA further improves workflow makespan and waiting time.
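For intuition about critical path extraction (independent of CPEA's multi-criteria details, which the abstract does not spell out), the sketch below finds the longest-runtime path through a small workflow DAG. Task names and runtimes are invented for the example.

```python
# Toy workflow DAG: task -> (runtime, successors). Values are illustrative.
tasks = {"A": (3, ["B", "C"]), "B": (2, ["D"]), "C": (4, ["D"]), "D": (1, [])}

def critical_path(dag):
    """Longest path through the DAG by accumulated runtime (memoized DFS)."""
    memo = {}
    def longest(task):
        if task not in memo:
            runtime, succs = dag[task]
            sub_len, sub_path = max((longest(s) for s in succs), default=(0, []))
            memo[task] = (runtime + sub_len, [task] + sub_path)
        return memo[task]
    return max((longest(t) for t in dag), key=lambda r: r[0])

length, path = critical_path(tasks)
print("critical path:", path, "length:", length)   # ['A', 'C', 'D'] length 8
```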
Computer Networks and Distributed Systems
Mona Alimardani; Amir Masoud Rahmani; Houman Zarrabi
Volume 3, Issue 4, November 2017, Pages 181-188
Abstract
Advances in medical science, together with other fields of science and technology, have brought about profound changes in different branches of science and in the methods of providing medical services that affect people's lives. The Wireless Body Area Network (WBAN) represents such a leap, opening new branches in the world of telemedicine. Small, precise wireless sensors are installed in or on the body to create a WBAN that samples, processes, and transmits various vital signs or environmental parameters over radio. These nodes allow independent monitoring of a person in typical environments and over long periods, and provide the user and medical staff with real-time feedback on the patient's health status. In this article, after introducing WBANs and reviewing the issues and applications of medical sensor networks, a protocol is proposed in which a threshold for data transmission reduces the power consumption of sensor nodes, increases the lifetime of the network, and adds a motion phase to increase the dynamics of the network. The proposed protocol is compared with the SIMPLE and ATTEMPT protocols. Results indicate that the reduced energy consumption of the sensors significantly reduces the energy consumption of the entire network.
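A minimal sketch of the threshold idea follows: a node senses continuously but switches on its radio only when the new reading differs from the last transmitted value by more than a threshold. The energy costs, threshold, and temperature readings are illustrative assumptions, not the proposed protocol's actual parameters.

```python
import random

class ThresholdNode:
    """Body sensor that transmits only when the reading drifts past a threshold,
    saving radio energy. Energy costs and the threshold are illustrative."""
    TX_COST, SENSE_COST = 0.5, 0.01          # arbitrary energy units

    def __init__(self, threshold=2.0, energy=100.0):
        self.threshold, self.energy = threshold, energy
        self.last_sent = None

    def sample_and_report(self, reading):
        self.energy -= self.SENSE_COST
        if self.last_sent is None or abs(reading - self.last_sent) >= self.threshold:
            self.energy -= self.TX_COST      # transmit only on a significant change
            self.last_sent = reading
            return reading                   # value forwarded to the sink
        return None                          # suppressed, radio stays idle

node = ThresholdNode()
readings = [36.6 + random.uniform(-1.5, 1.5) for _ in range(50)]   # e.g. temperature
sent = [r for r in readings if node.sample_and_report(r) is not None]
print(f"sent {len(sent)}/{len(readings)} packets, residual energy {node.energy:.2f}")
```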
Computer Networks and Distributed Systems
Bahareh Rahmati; Amir Masoud Rahmani; Ali Rezaei
Volume 3, Issue 2, May 2017, Pages 75-80
Abstract
High-performance computing and vast storage are two key factors required for executing data-intensive applications. Compared with traditional distributed systems such as data grids, cloud computing provides these factors on a more affordable, scalable, and elastic platform. Furthermore, access to data files is critical for running such applications; sometimes data access becomes a bottleneck for the whole cloud workflow system and dramatically decreases its performance. Job scheduling and data replication are two important techniques that can enhance the performance of data-intensive applications, and it is wise to integrate them into one framework with a single objective. In this paper, we integrate data replication and job scheduling with the aim of reducing response time by reducing data access time in a cloud computing environment; we call this data replication-based scheduling (DRBS). Simulation results show the effectiveness of our algorithm in comparison with well-known algorithms such as random and round-robin.
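The sketch below illustrates the general idea of coupling replication with scheduling, not the DRBS algorithm itself: a job is placed on a node that already holds its input file when possible; otherwise the least-loaded node is chosen and the file is replicated there before the job runs. Node names, the load model, and the transfer penalty are assumptions.

```python
replicas = {"f1": {"node1"}, "f2": {"node2"}}          # file -> nodes holding a copy
load = {"node1": 0, "node2": 0, "node3": 0}            # queued jobs per node
REMOTE_ACCESS_COST = 5                                  # relative data-transfer penalty

def schedule(job_file):
    """Prefer a node that already stores the file; otherwise replicate it first."""
    holders = replicas.get(job_file, set())
    candidates = holders if holders else set(load)
    target = min(candidates, key=lambda n: load[n])     # least-loaded eligible node
    cost = 1 if target in holders else 1 + REMOTE_ACCESS_COST
    replicas.setdefault(job_file, set()).add(target)    # keep the new copy as a replica
    load[target] += 1
    return target, cost

for f in ["f1", "f1", "f2", "f3", "f1"]:
    print(f, "->", schedule(f))
```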
Hamid Hassan Kashi; Amir Masoud Rahmani; Mehdi Hoseinzadeh; Vahid Sadatpour
Volume 1, Issue 1, February 2015, Pages 1-8
Abstract
In wireless sensor networks, optimal energy consumption and maximum network lifetime are important factors. In this article, an attempt has been made to send data packets with a specified reliability from the outset, based on the AODV protocol. Two new fields are added to the routing packets, and during routing and the discovery of new routes, the lowest remaining energy of the nodes on a route and the route traffic, measured by the number of discarded packets, are stored in these fields as two variables. These two variables are taken into account when choosing a suitable route for sending data to the sink in response to a route request. The efficiency of this protocol rests on the fact that, at route request time, it finds routes with high energy and low traffic over which the data are sent, so data packets reach the destination with higher probability and energy consumption is balanced across the network. From the energy point of view, avoiding routes through weak nodes means no nodes are dead at the end of the process, which improves the balance of energy consumption and reduces the variance of the remaining energy compared with standard AODV. Avoiding high-traffic routes reduces collisions and the number of signaling packets sent, so more data packets reach the destination with lower delay. Under high congestion, meeting the desired reliability, which is among the main goals, may require more signaling packets, delay, and collisions, but the result is more delivered packets and a guaranteed reliability.
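The following sketch captures the route-selection idea only: each candidate route carries the minimum residual energy seen along its path and a dropped-packet count as a traffic indicator, and the route with the best energy/traffic trade-off is chosen. The routes, weights, and scoring function are illustrative assumptions rather than the protocol's exact fields or formula.

```python
# Candidate routes discovered by a route request, each tagged with the two new
# fields described above (values are made up for the example).
routes = [
    {"path": ["S", "A", "B", "D"], "min_energy": 0.8, "dropped": 2},
    {"path": ["S", "C", "D"],      "min_energy": 0.3, "dropped": 0},
    {"path": ["S", "E", "F", "D"], "min_energy": 0.7, "dropped": 9},
]

def score(route, w_energy=0.6, w_traffic=0.4, max_dropped=10):
    """Higher is better: reward residual energy, penalize congested routes."""
    return (w_energy * route["min_energy"]
            - w_traffic * route["dropped"] / max_dropped)

best = max(routes, key=score)
print("selected route:", " -> ".join(best["path"]),
      f"(min energy {best['min_energy']}, dropped {best['dropped']})")
```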