Pattern Analysis and Intelligent Systems
Somayeh Lotfi; Mohammad Ghasemzadeh; Mehran Mohsenzadeh; Mitra Mirzarezaee
Volume 7, Issue 1 , February 2021, , Pages 55-66
Abstract
The decision tree is one of the popular methods for learning and reasoning through recursive partitioning of the data space. To choose the best attribute in the case of numerical features, partitioning criteria must either be calculated for individual values, or the value range of each attribute must be divided into two or more intervals using a set of cut points. When partitioning the range of an attribute, fuzzy partitioning can be used to reduce sensitivity to noise in the data and to increase the stability of the decision tree. Since tree-building algorithms need to keep the whole training dataset in main memory, they face memory restrictions. In this paper, we present an algorithm that builds a fuzzy decision tree on large datasets. In order to avoid storing the entire training dataset in main memory and to overcome the memory limitation, the algorithm builds the decision tree incrementally. In the discretization stage, a fuzzy partition is generated on each continuous attribute based on fuzzy entropy. Then, in order to select the best feature for branching, two criteria, fuzzy information gain and the occurrence matrix, are used. Real datasets are used to evaluate the behavior of the algorithm in terms of classification accuracy, decision tree complexity, and execution time. The results show that the proposed algorithm overcomes the memory limitation without needing to store the entire dataset in memory, reduces the complexity of the tree, and strikes a balance between accuracy and complexity.
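As an illustration of the splitting criterion mentioned above, the sketch below computes fuzzy entropy and fuzzy information gain over a triangular fuzzy partition of a numeric attribute. It is a minimal example under assumed partition parameters; the paper's exact discretization and incremental tree construction are not reproduced.

```python
# Minimal sketch of fuzzy entropy over a triangular fuzzy partition (illustrative only).
import numpy as np

def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_entropy(memberships, labels):
    """Fuzzy entropy of one fuzzy set: class frequencies are weighted by membership."""
    total = memberships.sum() + 1e-12
    ent = 0.0
    for cls in np.unique(labels):
        p = memberships[labels == cls].sum() / total
        if p > 0:
            ent -= p * np.log2(p)
    return ent

def fuzzy_information_gain(x, labels, cut_points):
    """Reduction in entropy when attribute x is split by three triangular fuzzy sets."""
    a, b, c = min(x), float(np.median(cut_points)), max(x)   # hypothetical partition parameters
    parts = [triangular(x, a - 1, a, b), triangular(x, a, b, c), triangular(x, b, c, c + 1)]
    base = fuzzy_entropy(np.ones_like(x, dtype=float), labels)
    total_m = sum(m.sum() for m in parts) + 1e-12
    weighted = sum(m.sum() / total_m * fuzzy_entropy(m, labels) for m in parts)
    return base - weighted

x = np.array([1.0, 2.5, 3.1, 4.8, 6.0, 7.2])
y = np.array([0, 0, 1, 1, 0, 1])
print(fuzzy_information_gain(x, y, cut_points=[3.0, 5.0]))
```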
Computer Networks and Distributed Systems
Midia Reshadi; Ali Ramezanzad; Akram Reza
Volume 4, Issue 2 , May 2018, , Pages 79-86
Abstract
Effective and congestion-aware routing is vital to the performance of a network-on-chip. The efficiency of a routing algorithm undoubtedly relies on the selection strategy it uses. If the routing function returns more than one permissible output port, a selection function is exploited to choose the best output port so as to reduce packet latency. In this paper, we introduce a new selection strategy that can be used with any adaptive routing algorithm. The intended selection function, named Modified-Neighbor-on-Path, is designed to resolve the hesitation that arises when the routing function provides a set of acceptable output ports. In fact, the number of inquiries that each router has sent to its neighbors over a given number of past cycles is a new parameter that is combined with the number of free slots of adjacent nodes used in the existing selection function named Neighbor-on-Path. Performance analysis is carried out using accurate simulation tools under different traffic scenarios. The results show that the proposed selection function, applied to the West-first and North-last routing algorithms, improves average delay by up to 20 percent and yields an acceptable improvement in total energy consumption.
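A minimal sketch of such a selection function is given below: among the admissible output ports, it scores each neighbor by its reported free buffer slots minus its recent inquiry count. The field names and the weighted-sum combination are assumptions for illustration; the paper's exact formula is not reproduced.

```python
# Illustrative congestion-aware selection function in the spirit of Modified-Neighbor-on-Path.
from dataclasses import dataclass

@dataclass
class NeighborStatus:
    port: str              # e.g. "north", "east"
    free_slots: int        # free buffer slots reported by the adjacent router
    recent_inquiries: int  # inquiries this router sent to that neighbor in recent cycles

def select_output_port(admissible: list[NeighborStatus], w_slots: float = 1.0,
                       w_inquiries: float = 0.5) -> str:
    """Among the ports returned by the routing function, prefer the neighbor with
    more free slots and fewer recent inquiries (a proxy for lower contention)."""
    best = max(admissible,
               key=lambda n: w_slots * n.free_slots - w_inquiries * n.recent_inquiries)
    return best.port

ports = [NeighborStatus("north", free_slots=3, recent_inquiries=5),
         NeighborStatus("east", free_slots=2, recent_inquiries=0)]
print(select_output_port(ports))  # "east" wins despite having fewer free slots
```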
Computer Networks and Distributed Systems
Seyed Hossein Ahmadpanah; Rozita Jamili Oskouei; Abdullah Jafari Chashmi
Volume 3, Issue 2 , May 2017, , Pages 89-106
Abstract
Peer-to-peer (P2P) applications are no longer limited to home users and are starting to be accepted in academic and corporate environments. While file sharing and instant messaging applications are the most traditional examples, they are no longer the only ones benefiting from the potential advantages of P2P networks. For example, network file storage, data transmission, distributed computing, and collaboration systems have also taken advantage of such networks. One of the main reasons why this model of computing is attractive is that P2P networks are scalable, i.e., they deal efficiently with both small and large groups of participants. In this paper, we present a summary of the main security aspects to be considered in P2P networks, highlighting their importance for the development of P2P applications and systems on the Internet and for the deployment of enterprise applications with more critical security needs.
Pattern Analysis and Intelligent Systems
Jensi R
Volume 5, Issue 2 , May 2019, , Pages 93-106
Abstract
Data clustering is the process of partitioning a set of data objects into meaningful clusters or groups. Due to the vast usage of clustering algorithms in many fields, much research is still going on to find the most efficient clustering algorithm. K-means is simple and easy to implement, but it is sensitive to the initialization of cluster centers and can therefore get trapped in local optima. In this paper, a new hybrid data clustering approach which combines the modified krill herd and K-means algorithms, named K-MKH, is proposed. The K-MKH algorithm utilizes the quick convergence behaviour of K-means, the efficient global exploration of Krill Herd, and the randomness of the Levy flight method. The Krill Herd algorithm is modified by incorporating Levy flight into it to improve global exploration. The proposed algorithm is tested on artificial and real-life datasets. The simulation results are compared with other methods such as K-means, Particle Swarm Optimization (PSO), the original Krill Herd (KH), and hybrid K-means and KH. The proposed algorithm is also compared with other evolutionary approaches such as hybrid modified cohort intelligence and K-means (K-MCI), Simulated Annealing (SA), Ant Colony Optimization (ACO), Genetic Algorithm (GA), Tabu Search (TS), Honey Bee Mating Optimization (HBMO) and K-means++. The comparison shows that the proposed algorithm improves the clustering results and has a high convergence speed.
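The Levy-flight perturbation mentioned above can be sketched with Mantegna's algorithm, a standard way to draw Levy-distributed steps; the step scale and the way it is blended into the krill motion are illustrative assumptions, not the paper's exact update rule.

```python
# Levy-flight step via Mantegna's algorithm, used here as a random perturbation
# toward the best-known position (illustrative krill move).
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw one Levy-distributed step of length `dim`."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma_u, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def move_krill(position, best_position, step_scale=0.01):
    """Move a krill toward the best-known position, perturbed by a Levy flight."""
    return position + step_scale * levy_step(len(position)) * (best_position - position)

pos = np.array([0.2, 0.8])
best = np.array([0.5, 0.5])
print(move_krill(pos, best))
```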
Computer Networks and Distributed Systems
Mohammadreza Pourkiani; Sepideh Adabi; Sam Jabbehdari; Ahmad Khademzadeh
Volume 3, Issue 3 , August 2017, , Pages 153-166
Abstract
Systems in which information and communication technologies and systems engineering concepts are utilized to develop and improve transportation systems of all kinds are called Intelligent Transportation Systems (ITS). ITS integrates information, communications, computers and other technologies and uses them in the field of transportation to build an integrated system of people, roads and vehicles by utilizing advanced data communication technologies. Vehicular Ad-hoc Networks (VANETs), a subset of Mobile Ad-hoc Networks, provide Vehicle-to-Vehicle (V2V), Vehicle-to-Roadside (V2R) and Vehicle-to-Infrastructure (V2I) communications and play an important role in Intelligent Transportation Systems. Due to the special characteristics of VANETs, QoS (Quality of Service) provisioning in these networks is a challenging task. QoS is the capability of a network to provide superior service to selected network traffic over various heterogeneous technologies. In this paper we present an overview of vehicular networks, QoS concepts, QoS challenges in VANETs, and approaches that aim to enhance the Quality of Service in vehicular networks.
Pattern Analysis and Intelligent Systems
Saman Khalandi; Farhad Soleimanian Gharehchopogh
Volume 4, Issue 3 , August 2018, , Pages 167-184
Abstract
With the rapid increase in the number of documents, using Text Document Classification (TDC) methods has become a crucial matter. This paper presents a hybrid model of Invasive Weed Optimization (IWO) and a Naive Bayes (NB) classifier (IWO-NB) for Feature Selection (FS), in order to reduce the large size of the feature space in TDC. TDC includes different steps such as text processing, feature extraction, forming feature vectors, and final classification. In the presented model, a feature vector is formed for each document by weighting features using IWO. The NB classifier is then trained on the documents, and similar test documents are classified together. FS increases accuracy and decreases calculation time. IWO-NB was evaluated on the Reuters-21578, WebKb, and Cade 12 datasets. In order to demonstrate the superiority of the proposed model in FS, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) have been used as comparison models. Results show that in FS the proposed model has higher accuracy than NB and the other models. In addition, comparing the proposed model with and without FS shows that the error rate decreases when FS is applied.
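A hypothetical fitness function for this kind of wrapper feature selection is sketched below: a candidate solution is a binary mask over the term features, scored by the accuracy of an NB classifier trained on the selected features only. The IWO search loop (seed dispersal and population truncation) and the paper's weighting scheme are omitted.

```python
# Wrapper-style feature-selection fitness with Naive Bayes (illustrative sketch).
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fitness(mask, X, y):
    """Accuracy of NB trained only on the features selected by `mask`."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    Xtr, Xte, ytr, yte = train_test_split(Xs, y, test_size=0.3, random_state=0)
    clf = MultinomialNB().fit(Xtr, ytr)
    return accuracy_score(yte, clf.predict(Xte))

# toy term-count matrix (documents x terms) and labels
X = np.random.randint(0, 5, size=(40, 20))
y = np.random.randint(0, 2, size=40)
mask = np.random.randint(0, 2, size=20)   # one candidate solution of the optimizer
print(fitness(mask, X, y))
```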
Computer Networks and Distributed Systems
Scholastica Nwanneka Mallo; Francisca Nonyelum Ogwueleka
Volume 5, Issue 3 , August 2019, , Pages 169-180
Abstract
Cloud computing technology is providing businesses, be they micro, small, medium, or large-scale enterprises, with the same level playing field. Small and Medium Enterprises (SMEs) that have adopted the cloud are taking their businesses to greater heights with the competitive edge that cloud computing offers. The limitations faced by SMEs in procuring and maintaining IT infrastructure have been addressed by the cloud platform for the SMEs that adopt it. In this research, the impact and challenges of cloud computing on SMEs that have adopted it in Nigeria have been investigated. The impacts identified range from provisioning IT infrastructure and reshaping and extending business value and outreach to giving a competitive edge to businesses subscribed to it. Though cloud computing has many benefits, it is not without pitfalls. These pitfalls include data vulnerability, vendor lock-in, and subscribers' limited control over the infrastructure. To investigate the level of impact and the challenges faced by SMEs in Nigeria on the cloud platform, questionnaires were administered to managers and employees of about fifty SMEs that have deployed the cloud. The data collected were analyzed using the Statistical Package for the Social Sciences (SPSS), from which appropriate recommendations were made. Keywords: Cloud Computing, Impacts, Challenges, SME.
Computer Networks and Distributed Systems
Marzieh Bozorgi Elize; Ahmad KhademZadeh
Volume 3, Issue 4 , November 2017, , Pages 203-212
Abstract
Cloud computing is a result of the continuing progress made in the areas of hardware, Internet-related technologies, distributed computing and automated management. Increasing demand has led to an increase in services, resulting in the establishment of large-scale computing and data centers, in addition to high operating costs and huge amounts of electrical power consumption. Insufficient and inefficient cooling systems cause overheating, shorten machine lifetimes, and produce excessive carbon dioxide. In this paper, we aim to improve cloud computing system performance by decreasing migration among virtual machines (VMs) and reducing energy consumption, so that resources can be managed to achieve optimal energy efficiency. To this end, various techniques such as genetic algorithms (GAs), virtual machine migration, dynamic voltage and frequency scaling (DVFS), and virtual machine resizing are used to reduce energy consumption and improve fault tolerance. The main purpose of this article is the allocation of resources with the aim of reducing energy consumption in cloud computing. The results show reduced energy consumption, a lower rate of virtual machine SLA violations, and fewer migrations as well.
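A sketch of the kind of energy model such an allocator can minimize is given below: a commonly used linear host power model plus a placement-level energy fitness that a GA could evaluate. The model form and the wattage figures are assumptions for illustration; the paper's exact cost model is not specified in the abstract.

```python
# Linear host power model and a placement-energy fitness a GA could minimize (sketch).
def host_power(utilization, p_idle=100.0, p_max=250.0):
    """Watts drawn by a host at a given CPU utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilization

def placement_energy(placement, vm_load, host_capacity, hours=1.0):
    """Total energy (Wh) of a VM-to-host placement; hosts without VMs stay off."""
    energy = 0.0
    for host, vms in placement.items():
        if not vms:
            continue
        util = min(sum(vm_load[v] for v in vms) / host_capacity[host], 1.0)
        energy += host_power(util) * hours
    return energy

placement = {"h1": ["vm1", "vm2"], "h2": []}
vm_load = {"vm1": 0.3, "vm2": 0.4}
host_capacity = {"h1": 1.0, "h2": 1.0}
print(placement_energy(placement, vm_load, host_capacity))  # 100 + 150*0.7 = 205.0 Wh
```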
Pattern Analysis and Intelligent Systems
Sajjad Najafi; Farhad Soleimanian Gharehchopogh
Volume 5, Issue 4 , November 2019, , Pages 233-244
Abstract
There are many algorithms for optimizing search engine results; ranking takes place according to one or more parameters such as backward links, forward links, content, and click-through rate. The quality and performance of these algorithms depend on these parameters. Ranking is one of the most important components of a search engine: it represents the importance of a web page and examines the relevance of search results to the user's query. In this paper, we try to optimize search engine result ranking by hybridizing a structure-based algorithm (DistanceRank) and a user-feedback-based algorithm (TimeRank). The proposed method acts on multiple parameters, and with more parameters it tries to achieve better results while keeping the complexity and running time of the algorithms in check. Average distance and average attention time have been measured on web pages, and the obtained data have been used to evaluate the performance of the proposed method. We compare the proposed method with several well-known algorithms in this field, such as TimeRank, PageRank, R Rank, WPR, and sNorm(p), applying the Precision@N (P@N), Average Precision (AP), Mean Reciprocal Rank (MRR), Mean Average Precision (MAP), Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain (NDCG) criteria. The results indicate better performance in comparison with the existing algorithms.
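Two of the evaluation criteria listed above are sketched below (Precision@N and NDCG) on a toy ranked list with graded relevance; the weights of the hybrid DistanceRank/TimeRank combination itself are not given in the abstract and are not reproduced here.

```python
# Precision@N and NDCG on a toy ranked result list (illustrative sketch).
import numpy as np

def precision_at_n(relevances, n):
    """Fraction of the top-n results that are relevant (relevance > 0)."""
    top = np.asarray(relevances[:n])
    return float((top > 0).sum()) / n

def ndcg(relevances, n):
    """Normalized Discounted Cumulative Gain over the top-n results."""
    rel = np.asarray(relevances[:n], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:n]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

ranked_relevance = [3, 2, 0, 1, 0]   # graded relevance of a ranked result list
print(precision_at_n(ranked_relevance, 3), ndcg(ranked_relevance, 3))
```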
Pattern Analysis and Intelligent Systems
Md Golam Sarowar; Azim Khan; Maruf Hasan Shakil; Mohammad Arafat Ullah
Volume 4, Issue 4 , November 2018, , Pages 237-246
Abstract
This research explores the manipulation of biomedical big data and disease detection using automated computing mechanisms. Since efficient and cost-effective ways to discover diseases and drugs are important for society, computer-aided automated systems are a must. This paper aims to understand the importance of computer-aided automated systems among people. The analysis of the collected data shows that people have sufficient understanding and good knowledge of big data and computer-aided automated systems. Moreover, people's perspectives on, and trust in, recent advancements of computer-aided technologies in biomedical science are demonstrated in this research. Furthermore, the emergence of big data in the field of medical science and the manipulation of those data are a focus of this research. Finally, suggestions are developed for further research related to computer technology in the manipulation of big data, disease detection, and drug discovery.
Software Engineering and Information Systems
Ramin Saljoughinejad; Vahid Khatibi
Volume 4, Issue 1 , February 2018, , Pages 27-40
Abstract
The literature review shows that software development projects often neither meet their deadlines nor run within the allocated budgets. One common reason is an inaccurate cost estimation process, although several approaches have been proposed in this field. Recent research studies suggest that, in order to increase the accuracy of this process, estimation models have to be revised. The Constructive Cost Model (COCOMO) has often been referred to as an efficient model for software cost estimation. The popularity of COCOMO is due to its flexibility; it can be used in different environments and it covers a variety of factors. In this paper, we aim to improve the accuracy of the cost estimation process by enhancing the COCOMO model. To this end, we analyze the cost drivers using meta-heuristic algorithms. In this method, the improvement of COCOMO is achieved by effective selection of coefficients and reconstruction of COCOMO. Three meta-heuristic optimization algorithms are applied together to enhance the COCOMO model. Eventually, the results of the proposed method are compared to COCOMO itself and to other existing models. This comparison explicitly reveals the superiority of the proposed method.
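For reference, the sketch below shows the classic intermediate-COCOMO effort formula, effort = a * KLOC^b * EAF, whose coefficients and cost-driver multipliers are what such meta-heuristics typically tune. The coefficients shown are the textbook values, not the ones obtained by the authors.

```python
# Intermediate COCOMO effort estimate: effort = a * KLOC^b * EAF, where EAF is the
# product of the cost-driver multipliers. Textbook coefficients, shown for reference.
from math import prod

MODES = {"organic": (3.2, 1.05), "semi-detached": (3.0, 1.12), "embedded": (2.8, 1.20)}

def cocomo_effort(kloc, mode="organic", cost_drivers=()):
    """Estimated effort in person-months."""
    a, b = MODES[mode]
    eaf = prod(cost_drivers) if cost_drivers else 1.0
    return a * kloc ** b * eaf

# e.g. a 30 KLOC organic project with two cost drivers rated 1.15 and 0.9
print(cocomo_effort(30, "organic", (1.15, 0.9)))
```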
Pattern Analysis and Intelligent Systems
Vahid Seydi Ghomsheh; Mohamad Teshnehlab; Mehdi Aliyari Shoordeli
Volume 1, Issue 2 , May 2015, , Pages 29-38
Abstract
This study proposes a modified version of cultural algorithms (CAs) which benefits from a rule-based system for the influence function. This rule-based system selects and applies the suitable knowledge source according to the distribution of the solutions. It is important to apply an appropriate influence function to a specific individual, with regard to its role in the search process. The rule-based system is optimized using a Genetic Algorithm (GA). The proposed modified CA is compared with several other optimization algorithms, including GA, particle swarm optimization (PSO), and especially the standard version of the cultural algorithm. The obtained results demonstrate that the proposed modification enhances the performance of the CA in terms of global optimality. Optimization is an important issue in different scientific applications, and much research is dedicated to algorithms that can find an optimal solution for different applications. Intelligent optimization methods, generally classified as evolutionary computation techniques (such as genetic algorithms, evolutionary strategies, and evolutionary programming) and swarm intelligence algorithms (such as particle swarm optimization and ant colony optimization), are powerful tools for solving optimization problems.
Computer Networks and Distributed Systems
Hamid Haj Seyyed Javadi; Mohaddese Anzani
Volume 1, Issue 3 , August 2015, , Pages 33-38
Abstract
Key distribution is an important problem in wireless sensor networks, where sensor nodes are randomly scattered in adversarial environments. Due to the random deployment of sensors, a list of keys must be pre-distributed to each sensor node before deployment. To establish a secure communication, two nodes must share a common key from their key-rings. Otherwise, they must find a key-path that ensures that every two neighboring nodes on the path from source to destination have a key in common. Combinatorial designs are powerful mathematical tools with comprehensive and simple structures. Recently, many researchers have used combinatorial designs as key pre-distribution schemes in wireless sensor networks. In this paper we consider a hybrid key pre-distribution scheme based on a Balanced Incomplete Block Design. We consider a new approach for choosing key-rings in the hybrid symmetric design to improve connectivity and resilience. Performance and security properties of the proposed scheme are studied both analytically and computationally. The obtained results show that our scheme provides better resilience than the symmetric design.
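The connectivity property described above can be sketched as follows: two nodes communicate directly when their key-rings intersect, and otherwise a key-path is searched for over the graph induced by shared keys. The block-design construction of the key-rings is omitted, and the tiny key-rings used here are purely illustrative.

```python
# Direct shared-key check and key-path search over key-rings (illustrative sketch).
from collections import deque

def shared_key(ring_a, ring_b):
    """Return one common key of the two key-rings, or None."""
    common = set(ring_a) & set(ring_b)
    return next(iter(common), None)

def key_path(rings, src, dst):
    """BFS over the 'share a key' graph induced by the key-rings."""
    graph = {n: [m for m in rings if m != n and shared_key(rings[n], rings[m])]
             for n in rings}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

rings = {"A": {1, 2, 3}, "B": {3, 4, 5}, "C": {5, 6, 7}}
print(key_path(rings, "A", "C"))   # ['A', 'B', 'C'] via keys 3 and 5
```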
Hossein Barghi Jond; Adel Akbarimajd; Nurhan Gürsel Özmen; Sonia Gharibzadeh
Volume 2, Issue 3 , August 2016, , Pages 35-42
Abstract
This paper discusses the requirements of safe and smooth trajectory planning for transporter mobile robots performing a non-prehensile object manipulation task. In the non-prehensile approach, the robot and the object must maintain their grasp-less contact during the manipulation task. To this end, the dynamic grasp concept is employed for a box manipulation task, and the corresponding conditions are obtained and represented as a bound on the robot's acceleration. A trajectory optimization problem is defined for general motion, where the dynamic grasp conditions are regarded as constraints on acceleration. Optimal trajectory planning for linear, circular and curved motions is discussed. The optimization problems for linear and circular trajectories were solved analytically in previous studies, so here we focus on the curved trajectory, where a Genetic Algorithm is employed as the solver. Motion simulations showed that the resulting trajectories satisfy the acceleration constraint as well as the velocity boundary conditions needed to accomplish the non-prehensile box manipulation task.
Nazal Modhej; Mohammad Teshnehlab; Mashallah Abbasi Dezfouli
Volume 1, Issue 1 , February 2015, , Pages 37-42
Abstract
The Cerebellar Model Articulation Controller (CMAC) neural network is a computational model of the cerebellum which acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions, due to its local generalization of weight updating, simple structure and easy processing. In the training phase, the disadvantage of some CMAC models is instability or slow convergence, caused by a learning rate that is fixed too large or too small, respectively. The present research offers two solutions to this problem. The original idea of the present research is to use a variable learning rate at each stage of the training phase of the CMAC model. The first algorithm derives a new learning rate based on the variation of the learning rate. The second algorithm adjusts the learning rate according to the number of training iterations and the learning performance, based on the fact that the error decreases with the inverse of the training time. Simulation results show that these algorithms have faster convergence and better performance than the conventional CMAC model in all training cycles.
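The variable-learning-rate idea can be illustrated with the toy sketch below: a one-dimensional CMAC table whose update step alpha decays with the training iteration. The table size, decay law, and target function are illustrative assumptions, not the paper's exact algorithms.

```python
# Toy 1-D CMAC with an iteration-dependent learning rate (illustrative sketch).
import numpy as np

class TinyCMAC:
    def __init__(self, n_cells=50, n_active=4, alpha0=1.0):
        self.w = np.zeros(n_cells)
        self.n_cells, self.n_active, self.alpha0 = n_cells, n_active, alpha0

    def _active_cells(self, x):
        start = int(x * (self.n_cells - self.n_active))
        return range(start, start + self.n_active)

    def predict(self, x):
        return self.w[list(self._active_cells(x))].sum()

    def train(self, x, target, iteration):
        alpha = self.alpha0 / (1 + iteration)          # variable learning rate
        err = target - self.predict(x)
        for c in self._active_cells(x):                # local weight update
            self.w[c] += alpha * err / self.n_active

cmac = TinyCMAC()
for it in range(200):
    x = np.random.rand()
    cmac.train(x, np.sin(2 * np.pi * x), it)
print(round(cmac.predict(0.25), 3))   # rough approximation of sin(pi/2) = 1
```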
Computer Architecture and Digital Systems
Ehsan Abbasi
Volume 2, Issue 1 , February 2016, , Pages 37-44
Abstract
Studies of aerial vehicle modeling and control have increased rapidly in recent years. In this paper, a coordination of two types of heterogeneous robots, namely an unmanned aerial vehicle (UAV) and unmanned ground vehicles (UGVs), is considered, with the UAV playing the role of a virtual leader for the UGVs. The system consists of a vision-based target detection algorithm that uses the color and image moments of a given target. The modeling of the vertical take-off and landing vehicle is described using Euler-Newton equations. All flight controller commands are generated directly from the offset of the target within the image frame. The image processing and intelligent control algorithms, such as a Kalman filter, have been implemented on a modern computer. Matlab Simulink software has been used to test, analyze and compare the performance of the controllers in simulations.
Pattern Analysis and Intelligent Systems
Rani Deepika Balavendran Joseph; Alok Pal; Jeanne Tunks; Gayatri Mehta
Volume 5, Issue 1 , February 2019, , Pages 37-48
Abstract
In this paper, we study intrinsic vs. extrinsic motivation in players of an electrical engineering gaming environment. We used UNTANGLED, a highly interactive game, to conduct this study. This game was developed to solve complex mapping problems from electrical engineering using human intuition. Our goal is to find whether there are differences in the ways anonymous players solved electrical engineering puzzles in an electronic gaming environment when motivated to play competitively, as compared to self-regulated play. For our experiments, we used puzzles from four games in UNTANGLED. A one-way analysis of variance (ANOVA) was calculated on participants' scores, types of plays, number of plays, and time spent playing, for both self-regulated and competitive players. We also examined the differences between the types of moves used by competitive and self-regulated players. Our results support the theory of motivation as being internally embedded in learners. The results also demonstrate that a self-regulated learner does not require extrinsic motivation to improve performance.
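The kind of one-way ANOVA described here can be sketched as below, comparing puzzle scores of competitive and self-regulated players; the numbers are made up for illustration and are not the study's data.

```python
# One-way ANOVA on two groups of puzzle scores (illustrative data, not the study's).
from scipy import stats

competitive = [820, 790, 905, 860, 775]
self_regulated = [810, 840, 890, 870, 800]

f_stat, p_value = stats.f_oneway(competitive, self_regulated)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")   # p > 0.05 would mean no significant difference
```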
Pattern Analysis and Intelligent Systems
Leila Yahyaie; Sohrab Khanmohammadi
Volume 2, Issue 4 , November 2016, , Pages 39-48
Abstract
In this paper, a new extended method of multi-criteria decision making based on fuzzy TOPSIS theory is introduced. A fuzzy MCDM algorithm for determining the best choice among all possible choices when the data are fuzzy is also presented. Using a new index leads to a procedure for choosing the fuzzy ideal and negative-ideal solutions directly from the fuzzy data of the observed alternatives; in this algorithm, triangular fuzzy numbers are used. It is often not possible to gather precise data, so decision making based on such data loses its efficiency; fuzzy theory has been used to overcome this drawback. In multi-criteria decision making, criteria can be correlated with each other, which is mostly ignored in classic MCDM. In this paper, the correlation coefficient of fuzzy criteria has been studied to capture the interrelation between criteria, and a new algorithm is proposed for decision making. Finally, the efficiency of the suggested method is demonstrated with an example.
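For orientation, the sketch below shows classic (crisp) TOPSIS on a small decision matrix; the paper's extension runs analogous steps with triangular fuzzy numbers and a correlation-aware index, which is not reproduced here.

```python
# Classic (crisp) TOPSIS ranking of alternatives against weighted criteria (sketch).
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).
    `benefit[j]` is True if larger values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))           # vector normalization
    v = norm * np.asarray(weights)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    negative = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - negative) ** 2).sum(axis=1))
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness

matrix = [[7, 9, 9], [8, 7, 8], [9, 6, 8]]             # 3 alternatives, 3 criteria
order, scores = topsis(matrix, weights=[0.4, 0.3, 0.3], benefit=[True, True, False])
print(order, scores.round(3))
```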
Computer Architecture and Digital Systems
Ehsan Abbasi; Nader Naghavi
Volume 3, Issue 1 , February 2017, , Pages 41-44
Abstract
Proportional + Integral + Derivative (PID) controllers are so widely used in engineering applications that more than half of industrial controllers are PID controllers. There are many methods for tuning the PID parameters in the literature. In this paper, an intelligent technique based on the eXtended Classifier System (XCS) is presented to tune the PID controller parameters. A PID controller with the gains obtained by the proposed method can robustly control nonlinear multiple-input-multiple-output (MIMO) plants of any form, such as robot dynamics. The performance of this method is evaluated against the Integral Squared Error (ISE) criterion, which is one of the most popular methods for optimizing PID controller parameters. Both methods are used to control the ball position in a magnetic levitation (MagLev) system, and the performance of the controllers is compared. Matlab Simulink has been used to test, analyze and compare the performance of the two optimization methods in simulations.
Pattern Analysis and Intelligent Systems
Fatemeh Abdi; Aliasghar Safaei
Volume 1, Issue 4 , November 2015, , Pages 43-52
Abstract
Today, in many modern applications, we search for frequent and repeating patterns in the analyzed data sets. In this search, we look for patterns that frequently appear in the data set and mark them as frequent patterns, to enable users to make decisions based on these discoveries. Most algorithms presented in the context of data stream mining and frequent pattern detection either work on uncertain data or use the sliding window model to assess data streams. The sliding window model uses a fixed-size window to maintain only the most recently inserted data and ignores all previous data (those that fall outside its window). Many real-world applications, however, require maintaining all inserted or obtained data. Therefore, the question arises whether other window models can be used to find frequent patterns in dynamic streams of uncertain data. In this paper, we use the landmark window model and the time-fading model to answer that question. The proposed algorithm, which uses the idea of the landmark window model to find frequent patterns in relational and uncertain data streams, shows better performance in finding functional dependencies than other methods in this field. Another advantage of this method compared with other methods is that it exposes tuples that do not follow a given dependency. This feature can be used to detect inconsistent data in a data set.
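The time-fading idea mentioned above can be sketched as follows: every stored count is decayed by a constant factor per arriving transaction, so old data gradually loses weight without being discarded outright, in contrast to a sliding window. The decay factor, support threshold, and itemset sizes are illustrative assumptions.

```python
# Time-fading frequent-itemset counting over a transaction stream (illustrative sketch).
from itertools import combinations
from collections import defaultdict

def fading_frequent_itemsets(stream, decay=0.98, min_support=1.5, max_size=2):
    counts = defaultdict(float)
    for transaction in stream:
        for itemset in list(counts):
            counts[itemset] *= decay                   # fade all existing counts
        for k in range(1, max_size + 1):
            for itemset in combinations(sorted(transaction), k):
                counts[itemset] += 1.0                 # reinforce itemsets seen now
    return {i: c for i, c in counts.items() if c >= min_support}

stream = [{"a", "b"}, {"a", "c"}, {"a", "b"}, {"b", "c"}, {"a", "b"}]
print(fading_frequent_itemsets(stream))
```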
Computer Networks and Distributed Systems
Maryam Bagheri; Bita Amirshahi; Mehdi Khalili
Volume 2, Issue 2 , May 2016, , Pages 43-48
Abstract
Congestion control in mobile ad hoc networks is a significant problem. The standard congestion control mechanism (TCP) is not able to handle the special properties of a wireless network, in particular the large changes in network topology and the shared nature of the wireless medium. These properties create significant challenges in mobile ad hoc networks (MANETs), where congestion is one of the most important limitations that can disrupt the operation of the entire network. Since multi-path routing can balance load better than single-path routing in ad hoc networks, dividing traffic over multiple routes reduces congestion. This study presents a multi-path load-balancing and congestion-control scheme based on a rate-control mechanism that avoids congestion on the network's communication flows. In this rate-control method, the sending rate is estimated at the intermediate nodes and reflected back to the sender in acknowledgment packets sent by the destination node, so the appropriate rate can be estimated quickly. Simulation results demonstrate that the proposed method achieves better packet delivery and higher capacity, and controls congestion more effectively than the traditional approach.
Computer Networks and Distributed Systems
Yaser Ramzanpoor; Mirsaeid Hosseini Shirvani; Mehdi GolSorkhTabar
Volume 7, Issue 1 , February 2021, , Pages 67-80
Abstract
Fog computing is known as a new computing technology that covers cloud computing's shortcomings in terms of delay. It offers a potential for running IoT applications composed of multiple services, taking benefit of the closeness of fog nodes to the devices where the data are sensed. This article formulates the service placement issue as an optimization problem aimed at minimizing total power consumption. It considers resource utilization and traffic transmission between different services as two prominent factors of power consumption once the services are placed on different fog nodes. On the other hand, placing all of the services on a single fog node for the sake of power reduction lowers system reliability because of the single-point-of-failure phenomenon. In the proposed optimization model, reliability limitations are considered as constraints of the stated problem. To solve this combinatorial problem, an energy-aware reliable service placement algorithm based on the whale optimization algorithm (ER-SPA-WOA) is proposed. The suggested algorithm was validated in different circumstances. The results reported from simulations prove the dominance of the proposed algorithm in comparison with counterpart state-of-the-art methods.
Computer Networks and Distributed Systems
Swathi B H; Megha V; Gururaj H L; Hamsaveni M; Janhavi V
Volume 4, Issue 2 , May 2018, , Pages 87-100
Abstract
Security is a major area of concern in communication channels, and it is especially crucial in wireless sensor networks deployed in remote environments. An adversary can disrupt the communication within a multi-hop sensor network by launching attacks. The common attacks that disrupt the communication of nodes are packet dropping, packet modification, fake packet routing, bad-mouthing attacks and Sybil attacks. In this paper we consider these attacks and present a solution to identify them. Many approaches have been proposed to diminish these attacks, but very few methods can detect them effectively. In this simple scheme, every node selects a parent node to forward the packet towards the base station or sink. Each node appends its unique identity and its trust in the parent as a path marker, and encrypts these bytes using a secret key generated and shared with the sink. The encrypted packet is then forwarded to the parent node. The base station can identify malicious nodes by using these unique identities and trust values.
Software Engineering and Information Systems
Vida Doranipour
Volume 3, Issue 2 , May 2017, , Pages 107-112
Abstract
Nowadays, effort estimation in software projects has become one of the key concerns of project managers. In fact, accurately estimating the effort essential to produce and improve a software product is a vital factor in the success or failure of software projects. The lack of satisfactory accuracy and the limited flexibility of existing estimation models have attracted researchers' attention to this area in the last few years. One of the existing effort estimation methods is COCOMO (Constructive Cost Model), which is widely regarded as an appropriate method for software projects. Although COCOMO was invented many years ago, it still has effort estimation capability for software projects. Many researchers have attempted to improve this model's effort estimation ability by improving how COCOMO operates; but despite many efforts, COCOMO's results are not yet satisfactory. In this research, a new compound method is presented to increase COCOMO's estimation accuracy. In the proposed method, much better coefficients are obtained by combining invasive weed optimization with the COCOMO estimation method, in contrast with basic COCOMO. With the best coefficients, the proposed model's optimality is maximized. In this method, a real data set is used for evaluation, and the method's performance is analyzed in contrast to other models. The improvement in operational parameters is confirmed by this model's estimation results.
Computer Networks and Distributed Systems
Olayemi Mikail Olaniyi; Ameh Innocent Ameh; Lukman Adewale Ajao; Omolara Ramota Lawal
Volume 5, Issue 2 , May 2019, , Pages 107-116
Abstract
Security is a vital issue in the usage of Automated Teller Machines (ATMs) for cash, cashless and many off-the-counter banking transactions. Weaknesses in the use of ATMs can lead not only to the loss of customers' data confidentiality and integrity but also to breaches in the verification of users' authentication. Several challenges are associated with the use of ATM smart cards, such as card cloning, card skimming, and the cost of issuance and maintenance. In this paper, we present a secure bio-cryptographic authentication system for cardless ATMs using an enhanced fingerprint biometric trait and an encrypted Personal Identification Number (PIN). Fingerprint biometrics is used to provide automatic identification/verification of a legitimate customer based on unique features possessed by the customer. A Log-Gabor filtering algorithm was used to mitigate low image quality and the effect of noise on features extracted from the customer's fingerprint minutiae. The truncated SHA-512/256 hash algorithm was used to secure the integrity and confidentiality of the PIN against sniffers and possible adversaries within the channel of remote ATM banking transactions. Performance evaluation was carried out using the False Acceptance Rate (FAR) and False Rejection Rate (FRR) metrics, and a collision attack was performed on the truncated SHA-512/256 hashed data (PIN). The results show a Genuine Acceptance Rate (1-FRR) of 97.5% to 100% and an Equal Error Rate of 0.0015%, and the collision attack carried out on the PIN message digest was unsuccessful. Therefore, the results of the performance evaluation show the applicability of the developed system for secure cardless ATM transactions.
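As a small sketch of the PIN-protection step, the code below hashes a salted PIN with SHA-512/256 and verifies it later; the salting scheme is an assumption rather than the paper's exact protocol, and the availability of `hashlib.new("sha512_256")` depends on the local OpenSSL build.

```python
# Salted SHA-512/256 hashing of a PIN (illustrative sketch; salting scheme assumed).
import hashlib
import os

def hash_pin(pin: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, digest) for a PIN; the verifier stores both, never the PIN."""
    salt = salt or os.urandom(16)
    # "sha512_256" is provided through OpenSSL; availability depends on the local build
    digest = hashlib.new("sha512_256", salt + pin.encode()).digest()
    return salt, digest

def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    return hashlib.new("sha512_256", salt + pin.encode()).digest() == digest

salt, digest = hash_pin("4821")
print(len(digest) * 8, verify_pin("4821", salt, digest))   # 256 True
```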